
Installation and Maintenance of Hitachi NAS Platform


TCI2035

Courseware Version 3.0


Notice: This document is for informational purposes only, and does not set forth any warranty, express or
implied, concerning any equipment or service offered or to be offered by Hitachi Data Systems. This
document describes some capabilities that are conditioned on a maintenance contract with Hitachi Data
Systems being in effect, and that may be configuration-dependent, and features that may not be currently
available. Contact your local Hitachi Data Systems sales office for information on feature and product
availability.
Hitachi Data Systems sells and licenses its products subject to certain terms and conditions, including limited
warranties. To see a copy of these terms and conditions prior to purchase or license, please call your local
sales representative to obtain a printed copy. If you purchase or license the product, you are deemed to have
accepted these terms and conditions.
THE INFORMATION CONTAINED IN THIS MANUAL IS DISTRIBUTED ON AN "AS IS" BASIS
WITHOUT WARRANTY OF ANY KIND, INCLUDING WITHOUT LIMITATION, ANY IMPLIED
WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR
NONINFRINGEMENT. IN NO EVENT WILL HDS BE LIABLE TO THE END USER OR ANY THIRD PARTY
FOR ANY LOSS OR DAMAGE, DIRECT OR INDIRECT, FROM THE USE OF THIS MANUAL, INCLUDING,
WITHOUT LIMITATION, LOST PROFITS, BUSINESS INTERRUPTION, GOODWILL OR LOST DATA,
EVEN IF HDS IS EXPRESSLY ADVISED OF SUCH LOSS OR DAMAGE.
Hitachi Data Systems is registered with the U.S. Patent and Trademark Office as a trademark and service
mark of Hitachi, Ltd. The Hitachi Data Systems logotype is a trademark and service mark of Hitachi, Ltd.
The following terms are trademarks or service marks of Hitachi Data Systems Corporation in the United
States and/or other countries:
Hitachi Data Systems Registered Trademarks
Hi-Track, ShadowImage, TrueCopy, Essential NAS Platform, Universal Storage Platform

Hitachi Data Systems Trademarks


HiCard, HiPass, Hi-PER Architecture, HiReturn, Hi-Star, iLAB, NanoCopy, Resource Manager, SplitSecond,
TrueNorth, Universal Star Network

All other trademarks, trade names, and service marks used herein are the rightful property of their respective
owners.
NOTICE:
Notational conventions: 1KB stands for 1,024 bytes, 1MB for 1,024 kilobytes, 1GB for 1,024 megabytes, and
1TB for 1,024 gigabytes, as is consistent with IEC (International Electrotechnical Commission) standards for
prefixes for binary and metric multiples.
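As a quick illustration of this convention (a minimal sketch; the arithmetic follows directly from the definitions above):

```python
# Binary multiples per the notice above: each prefix step is a factor
# of 1,024, not 1,000.
KB = 1024          # 1KB = 1,024 bytes
MB = 1024 * KB     # 1MB = 1,024 kilobytes
GB = 1024 * MB     # 1GB = 1,024 megabytes
TB = 1024 * GB     # 1TB = 1,024 gigabytes

print(f"1TB = {TB:,} bytes")  # 1TB = 1,099,511,627,776 bytes
```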
© Hitachi Data Systems Corporation 2013. All Rights Reserved
HDS Academy 1073

Contact Hitachi Data Systems at www.hds.com.

This training course is based on firmware version 11.1.3250.xx, also referred to as Angel-2.



Contents

INTRODUCTION ..............................................................................................................IX
Welcome and Introductions ........................................................................................................ix
Course Description ..................................................................................................................... x
Required Knowledge and Skills ..................................................................................................xi
Supplemental Courses............................................................................................................... xii
Course Objectives ..................................................................................................................... xiii
Course Topics ........................................................................................................................... xiv
Learning Paths ...........................................................................................................................xv
Collaborate and Share .............................................................................................................. xvi
HDS Academy Is on Twitter and LinkedIn ............................................................................... xvii

1. PLATFORM OVERVIEW ............................................................................................ 1-1


Module Objectives ................................................................................................................... 1-1
Hitachi NAS Platform ............................................................................................................... 1-2
Hitachi NAS Portfolio ............................................................................................................... 1-3
Hitachi Unified Storage ............................................................................................................ 1-4
Hitachi Unified Storage Options ............................................................................................... 1-5
Hitachi Unified Storage (HUS) ................................................................................................. 1-6
What Is Hitachi NAS Platform or NAS Gateway Technology? ................................................ 1-7
High-level Implementation ....................................................................................................... 1-8
Platform Performance Specifications ....................................................................................... 1-9
Differences Between Models 3080 and 3090 ........................................................................ 1-10
HNAS 3090 Performance Accelerator ................................................................................... 1-11
Differences Between Models 4060 and 4080 ........................................................................ 1-12
What Is What ......................................................................................................................... 1-13
High-performance NAS Platform 3200 Rear View ................................................................ 1-14
Hitachi NAS Platform Models 3080 and 3090 ....................................................................... 1-15
Cable Side HNAS 3080 or 3090 G1 and G2 ......................................................................... 1-16
Hitachi NAS Platform Models 4xx0 ........................................................................................ 1-17
Summary Hitachi NAS Platform 4100.................................................................................... 1-18
Module Summary ................................................................................................................... 1-19
Module Review ...................................................................................................................... 1-20

2. HARDWARE ARCHITECTURE .................................................................................... 2-1


Module Objectives ................................................................................................................... 2-1
Hitachi NAS 3080 and 3090 Simplified Block Diagram ........................................................... 2-2
Hitachi NAS 4060 and 4080 Simplified Block Diagram ........................................................... 2-4
Mercury (Main) FPGA Board (MFB) Model 30x0 .................................................................... 2-6
Memory and Cache per Single Node....................................................................................... 2-7
Mercury (Main) Motherboard (MMB) ....................................................................................... 2-8
Mercury (Main) FPGA Board (MFB) ........................................................................................ 2-9
Hitachi NAS Platform 30x0 Rear Panel ................................................................................. 2-10
Hitachi NAS Platform Port Layout 3080/3090 ....................................................................... 2-11
Hitachi NAS Platform 4xx0 Rear Panel ................................................................................. 2-12
Hitachi NAS Platform Port Layout 4060/4080/4100 .............................................................. 2-13
Hitachi NAS Platform Models 4xx0 Flavors ........................................................................... 2-14
MMB Module Flavors and Port Layout ................................................................................. 2-15
NVRAM or Battery Status LED .............................................................................................. 2-16
Fascia and Status LEDs 3100 and 3200 ................................................................. 2-17
Fascia and Status LEDs 3080 and 3090 ................................................................. 2-18
Fascia and Status LEDs 4060, 4080, and 4100 ...................................................... 2-19
Power/Server Status LED ...................................................................................................... 2-20
NAS Node Status LED (Alert) ................................................................................................ 2-21
Reset and Power Switch ........................................................................................................ 2-22
Redundant and Hot Swappable Power Supply Unit (PSU) ................................................... 2-23


SMU200 and SMU300 Replace SMU100 ........................................................... 2-24


SMU400 Early Information .................................................................................................... 2-25
Module Summary .................................................................................................................. 2-26
Module Review ...................................................................................................................... 2-27

3. SOFTWARE ARCHITECTURE ..................................................................................... 3-1


Module Objectives ................................................................................................................... 3-1
Software Components Hitachi NAS Platform Models ............................................................. 3-2
Node Boot Sequence .............................................................................................................. 3-3
BOS and Linux Incorporated (BALI) ........................................................................................ 3-4
Platform API (PAPI) ................................................................................................................. 3-5
NAS Platform Software Suite .................................................................................................. 3-6
Hitachi NAS Platform Software Licensing ............................................................................... 3-7
Hitachi NAS Software Bundles ................................................................................................ 3-8
Module Summary .................................................................................................................. 3-10
Module Review ...................................................................................................................... 3-11

4. INSTALLATION OF HITACHI NAS PLATFORM .............................................................. 4-1


Module Objectives ................................................................................................................... 4-1
Installation Outline of Hitachi NAS Platform ............................................................................ 4-2
Rack Mounting......................................................................................................................... 4-3
Login User Accounts Using Embedded SMU ......................................................................... 4-4
Login User Accounts Using External SMU .............................................................................. 4-5
Null Modem Cable Configuration ............................................................................................ 4-6
Three Important Success Criteria............................................................................................ 4-7
Single HNAS 30x0 or 4xx0 with Embedded SMU ................................................................... 4-8
Initial Setup Single Node Embedded SMU ............................................................................. 4-9
Default Interface Settings for 3080 and 3090 ........................................................................ 4-10
Single Node Initial Setup: Models 3080 and 3090 ................................................................ 4-11
Node Initial Setup: Models 4060, 4080, and 4100 ................................................................ 4-12
Node Initial Setup Model 4xx0 1 of 3 .................................................................................... 4-13
Node Initial Setup Model 4xx0 2 of 3 .................................................................................... 4-14
Node Initial Setup Model 4xx0 3 of 3 .................................................................................... 4-15
Single Node Initial Setup: License Keys ............................................................................... 4-16
Initial Node Setup: Hitachi NAS Platform GUI....................................................................... 4-17
Adding License Key ............................................................................................................... 4-18
Initial Setup: Hitachi NAS Platform Node GUI....................................................................... 4-19
Server Setup Wizard ............................................................................................................. 4-20
Single Node Initial Setup: File Service EVS .......................................................................... 4-21
Hitachi NAS Platform Management Console ........................................................................ 4-22
Clustering from A to Z ........................................................................................................... 4-23
Initial Setup: First Node in a Cluster ...................................................................................... 4-25
Cluster Initial Setup: Model 30x0 CLI First Node .................................................................. 4-26
Initial Setup: External SMU ................................................................................................... 4-27
Initial Setup: External SMU CLI ............................................................................................. 4-28
Initial Setup: SMU Wizard ..................................................................................................... 4-29
Initial Setup: SMU GUI .......................................................................................................... 4-30
Initial Setup: Managed Servers ............................................................................................. 4-31
Initial Setup: Hitachi NAS Platform Node GUI....................................................................... 4-32
Cluster Initial Setup: License Keys ........................................................................................ 4-33
Initial Setup: Hitachi NAS Platform Licenses ........................................................................ 4-34
Adding License Key ............................................................................................................... 4-35
Cluster Initial Setup: Enable Clustering ................................................................................. 4-36
Initial Setup: Promote Clustering ........................................................................................... 4-37
Promoted to a Single-Node Cluster....................................................................................... 4-38
HNAS Clustered with External SMU ..................................................................................... 4-39
Cluster Initial Setup: Second Node ....................................................................................... 4-40


Cluster Initial Setup: Models 30x0 CLI Second Node............................................................ 4-41


Initial Setup: Flow and IP Addressing .................................................................................... 4-42
Initial Setup: Hitachi NAS Platform Node GUI ....................................................................... 4-43
Initial Setup: License Key ...................................................................................................... 4-44
Adding License Key ............................................................................................................... 4-45
Initial Setup: Join the Second Node....................................................................................... 4-46
Initial Setup: Add Single Node 2 to Clustered Node 1 ........................................................... 4-47
Two Node Cluster Configured ............................................................................................... 4-48
Initial Setup: File Service EVS ............................................................................................... 4-49
Module Summary ................................................................................................................... 4-50
Module Review ...................................................................................................................... 4-51

5. ETHERNET AND FIBRE CHANNEL NETWORKS ............................................................ 5-1


Module Objectives ................................................................................................................... 5-1
GbE Cable Distances............................................................................................................... 5-2
HNAS 30x0 Cluster 10GbE Interface (XFI) ............................................................................. 5-3
Finisar Small Form Factor (SFP+) ........................................................................................... 5-4
HNAS Models 4xx0 Use SFP+ ................................................................................................ 5-5
Cable Distance and Optical Media Type ................................................................................. 5-6
HNAS 4xx0 SFP+ Copper TwinAx Cable Assembly ............................................................... 5-7
Cable Distance and Copper Media Type ................................................................................. 5-8
NAS Platform Models 3080 and 3090 Networks ..................................................................... 5-9
NAS Platform Models 4060, 4080, and 4100 Networks ........................................................ 5-10
Hitachi NAS 30x0 Network and Embedded SMU .................................................................. 5-11
Hitachi NAS 4xx0 Network and External SMU ...................................................................... 5-12
Hitachi NAS 4xx0 Network and Clustering ............................................................................ 5-13
Private and Public Management Network Embedded SMU 30x0 ......................................... 5-14
Private and Public Management Network External SMU 30x0 Cluster ................................. 5-15
Private and Public Management Network with SMU Managed Legacy Storage................... 5-16
EVS Connectivity in a Cluster ................................................................................................ 5-17
IP Addressing and EVS ......................................................................................................... 5-18
Aggregation Configuration Screen Models 30x0 ................................................................... 5-19
Aggregation Configuration Screen Models 4xx0 ................................................................... 5-20
LACP Protocol Usage ............................................................................................................ 5-21
NTP and Management Network ............................................................................................ 5-22
Fibre Channel Connectivity .................................................................................................... 5-23
Storage Considerations: Platform Differences ...................................................................... 5-24
AMS200, 500, 1000, 2000 and HUS ..................................................................................... 5-25
Enterprise Including VSP (Not HUS VM) ............................................................................... 5-26
Hitachi Unified Storage VM .................................................................................................... 5-27
Fibre Channel Minimum Configuration for 2-Node 2200 Cluster .......................................... 5-28
Fibre Channel Configuration for 2-Node 3100 Cluster and Enterprise Storage .................... 5-29
High-performance NAS Platform 3200 Connectivity ............................................................. 5-31
Fibre Channel Switchless Configuration for 2-Node 3100 or 3200 Cluster ........................... 5-33
Fibre Channel Switchless Configuration for Single 3100 or 3200 Node ............................... 5-34
Fibre Channel Fabric Configuration for 2-Node Cluster HNAS 4xx0 .................................... 5-35
Fibre Channel Best Practice Configuration for 2-Node Cluster Using Secure Storage Domains ..... 5-38
Fibre Channel Recommended Configuration for 2-Node Cluster Enterprise 4xx0 ............... 5-39
Fibre Channel Configuration for 2-Node Cluster Enterprise 4xx0 ......................................... 5-40
Fibre Channel Switch-less Configuration for 2-Node Cluster Modular 30x0 ......................... 5-41
Fibre Channel Switch-less Configuration for 2-Node Cluster Enterprise 4xx0 ...................... 5-43
Fibre Channel Switch-less 2-Node Cluster Configuration 30x0 and NetApp 2680 ............... 5-44
Fibre Channel Switch-less Configuration for 2-Node Cluster 4xx0 Enterprise ...................... 5-45
Most Important SCSI Command Node 1 ............................................................................... 5-46
Most Important SCSI Command Node 2 ............................................................................... 5-47
Problem Determination Example 1 ........................................................................................ 5-48
Problem Determination Example 2 ........................................................................................ 5-49


Storage Considerations ......................................................................................................... 5-50


Storage Enhancements for HNAS ........................................................................................ 5-51
HUS 100 Options and HNAS ................................................................................................ 5-52
Module Summary .................................................................................................................. 5-53
Module Review ...................................................................................................................... 5-54

6. FILE SYSTEM AND ACCESS PROTOCOLS ................................................................... 6-1


Module Objectives ................................................................................................................... 6-1
From Disk Drive to HNAS Virtualized Storage ........................................................................ 6-2
Hitachi Storage System Integration ......................................................................................... 6-3
BlueArc RAID Rack Discovery ................................................................................................ 6-4
Create System Drives ............................................................................................................. 6-5
System Drives – Create SD .................................................................................................... 6-6
CLI Displaying the System Drives ........................................................................................... 6-8
From Disk Drive to HNAS Virtualized Storage ........................................................................ 6-9
Hitachi Dynamic Provisioning (HDP) and HNAS ................................................................... 6-10
From Physical Disk to Storage Pool ...................................................................................... 6-11
Expanding a Storage Pool ..................................................................................................... 6-12
File System in a Storage Pool ............................................................................................... 6-13
File System Using Auto Expansion ....................................................................................... 6-14
System Drive Groups (SDG) ................................................................................................. 6-15
Hitachi Dynamic Provisioning (HDP) ..................................................................................... 6-16
Storage Pool Best Practices.................................................................................................. 6-17
Storage Pools Specifications................................................................................................. 6-19
Creating a Storage Pool ........................................................................................................ 6-20
File System Specifications .................................................................................................... 6-21
File System Definition ............................................................................................................ 6-22
Tiered File Systems (Tiered Storage Pools) ......................................................................... 6-23
Creating a Tiered Storage Pool ............................................................................................. 6-24
Creating a Tiered Store Pool ................................................................................................. 6-25
Displaying a Tiered Storage Pool .......................................................................................... 6-26
From Disk Drive to Drive Letter and UNIX Mount Point ........................................................ 6-27
What Are the Similarities? ..................................................................................................... 6-28
What Is Different? .................................................................................................................. 6-29
UNIX Permissions ................................................................................................................. 6-30
Windows Permissions ........................................................................................................... 6-31
Common Internet File System (CIFS) Authentication/Active Directory Service (ADS) ......... 6-32
ADS and Network Basic Input/Output System (NetBIOS) .................................................... 6-33
ADS and Domain Name System (DNS) ................................................................................ 6-34
ADS Computers..................................................................................................................... 6-35
ADS Computer Properties ..................................................................................................... 6-36
CIFS Shares .......................................................................................................................... 6-37
Network File System (NFS) and Exports .............................................................................. 6-38
Multi-protocol Access ............................................................................................................ 6-39
Module Summary .................................................................................................................. 6-40
Module Review ...................................................................................................................... 6-41

7. N-WAY CLUSTERING AND ENTERPRISE VIRTUAL SERVER (EVS) ................................ 7-1


Module Objectives ................................................................................................................... 7-1
Enterprise Virtual Servers (EVS) Attributes ............................................................................ 7-2
EVS Configuration Summary .................................................................................................. 7-3
Virtual Server Configuration .................................................................................................... 7-4
Automatic EVS Migration (Clustering) Network Problem ........................................................ 7-5
Automatic EVS Migration (Clustering) Node HW Problem ..................................................... 7-6
2-node Clustering .................................................................................................................... 7-7
Clustering Basics ..................................................................................................................... 7-8
NVRAM Usage in a 2-way Clustered Configuration................................................................ 7-9


N-way Clustering .................................................................................................................... 7-10


NVRAM Usage in a 4-way Clustered Configuration .............................................................. 7-11
Cluster Configuration ............................................................................................................. 7-12
EVS Failover Functionality and Process Summary ............................................................... 7-13
IP Address before Failover .................................................................................................... 7-14
On Failing Over ...................................................................................................................... 7-15
After Failover .......................................................................................................................... 7-16
Cluster Failover Reporting ..................................................................................................... 7-17
Let’s Have a Look at a Single Node ...................................................................................... 7-18
A Cluster Improves Things .................................................................................................... 7-19
Hitachi Synchronous Disaster Recovery (Sync DR) Cluster Service .................................... 7-20
Sync DR Components and Connectivity................................................................................ 7-21
This Is NOT a Sync DR Cluster ............................................................................................. 7-22
Module Summary ................................................................................................................... 7-23
Module Review ...................................................................................................................... 7-24

8. MAINTENANCE........................................................................................................ 8-1
Module Objectives ................................................................................................................... 8-1
Node IP Addresses 1 of 2 ........................................................................................................ 8-2
Node IP Addresses 2 of 2 ........................................................................................................ 8-3
Management Facilities ............................................................................................................. 8-4
Securing Management Access ................................................................................................ 8-5
Useful Command Line Utilities ................................................................................................. 8-6
CLI Commands and Context ................................................................................................... 8-7
Maintenance Actions................................................................................................................ 8-8
Software Patching .................................................................................................................... 8-9
Software Version Numbers and Names ................................................................................ 8-10
Software Upgrades ................................................................................................................ 8-15
Upgrade Path in Release Notes ............................................................................................ 8-16
Software Version Example from Daily Summary Email......................................................... 8-17
Saving External SMU Configuration Before Upgrade............................................................ 8-18
Saving Embedded SMU and 30x0/4xx0 Server Registry ...................................................... 8-19
External SMU SW Upgrade and Downgrade ........................................................................ 8-20
1a. Selecting CentOS Installation Method Second ................................................................ 8-21
1b. Selecting CentOS Installation Method Clean .................................................................. 8-22
2. External SMU Application Upgrade Procedures ................................................................ 8-23
Embedded SMU Upgrade and Downgrade 30x0/4xx0.......................................................... 8-24
Upgrade of Embedded SMU SW from the GUI ..................................................................... 8-25
Model 30x0 and 4xx0 Server Upgrade Procedures ............................................................... 8-26
Hitachi Command Suite (HCS) and Device Manager ............................................................ 8-27
Hitachi Command Suite (HCS) 7.3.0 ..................................................................................... 8-28
Hitachi Command Suite (HCS) Version 7.4 and up ............................................................... 8-29
SNMP Manager Connectivity (First SNMP Hi-Track) ............................................................ 8-30

9. TROUBLESHOOTING AND REPLACEMENT .................................................................. 9-1


Module Objectives ................................................................................................................... 9-1
Other Hitachi NAS Platform Management Interfaces .............................................................. 9-2
Storage Array Setup ................................................................................................................ 9-3
Alert SMTP Connectivity .......................................................................................................... 9-4
Configuring SMTP Servers ...................................................................................................... 9-5
Configuring SMU Email Alerts Forwarding .............................................................................. 9-6
Set up Email Forwarding on the SMU ..................................................................................... 9-7
Set Up Email Profile ................................................................................................................. 9-8
Daily Health Check Email ........................................................................................................ 9-9
Alerts Summary Email ........................................................................................................... 9-10
Diagnostic Download ............................................................................................................. 9-11
Diagnostic Report: Email for the Nodes................................................................................. 9-12


Diagnostic Report: Email for SMU and More ........................................................................ 9-13


Performance Information Report (PIR) ................................................................................. 9-14
Performance Graph ............................................................................................................... 9-15
Using the trouble Command.................................................................................................. 9-16
trouble Reporter Examples .................................................................................................... 9-17
trouble Performance Reporter Examples .............................................................................. 9-18
Server-Based Packet Capturing ............................................................................................ 9-19
Fascia (Bezel) Removal ........................................................................................................ 9-20
Model 30x0 G1 Fan Replacement Procedure ....................................................................... 9-21
Model 30x0 G1 Removing Fan Unit ...................................................................................... 9-22
Model 30x0 G2/4xx0 Fan Replacement ............................................................................... 9-23
Model 30x0/4xx0 Battery Pack .............................................................................................. 9-24
General Battery Precautions ................................................................................................. 9-25
Model 30x0 G1 NVRAM Battery Replacement ..................................................................... 9-26
Model 30x0 G1 Battery Connector ........................................................................................ 9-27
Model 30x0 G2/4xx0 Battery Replacement........................................................................... 9-28
Battery Replacement in Caddy.............................................................................................. 9-29
Model 30x0 G1 Hard Disk Replacement Procedure ............................................................. 9-30
Model 30x0 G1 Hard Disk Cabling and Positioning .............................................................. 9-31
Model 30x0 G2/4xx0 G2 Hard Disk Replacement ................................................................ 9-32
Hardware Field System Testing ............................................................................................ 9-33
Manufacturing Test and Diagnostic Software (MTDS) .......................................................... 9-34
MTDS Console ...................................................................................................................... 9-35
MTDS Test Commands ......................................................................................................... 9-36
Executing: mtds field-test ...................................................................................................... 9-37
Ending: mtds field-test ........................................................................................................... 9-38
Mercury Motherboard Memory Test Memtest86+ ................................................................. 9-39
Unrecoverable Configuration or Logical Errors ..................................................................... 9-40
Factory Reset to Default Assessment ................................................................................... 9-41
Fixing Logical Errors .............................................................................................................. 9-42
Resetting Servers to Factory Defaults .................................................................................. 9-43
HNAS Server Node Replacement ......................................................................................... 9-44
Spare Part List Model 30x0 ................................................................................................... 9-45
Spare Part List SMU, Switches, and Optics .......................................................................... 9-46
General Precautions .............................................................................................................. 9-47
Module Summary .................................................................................................................. 9-48
Module Review ...................................................................................................................... 9-49

NEXT STEPS ............................................................................................................. N-1


GLOSSARY .............................................................................................................. G-1
EVALUATING THIS COURSE ....................................................................................... E-1



Introduction
Welcome and Introductions

 Student Introductions
• Name
• Position
• Experience
• Your expectations


Course Description


Required Knowledge and Skills

 Successfully completed:
• Hitachi Enterprise Storage Systems Installation, Configuration and
Support
or
• Hitachi Modular Storage Systems Installation, Configuration and Support

 For the best results from this training, it is important that you have
experience and skills in:
• NAS and SAN concepts
• TCP/IP networking concepts, such as routers and switches
• Network management and maintenance
• UNIX/Linux administration
• Microsoft® Windows® administration


Supplemental Courses

 Supplemental courses include:


• TCI2102 — Administration and Operation of Hitachi NAS Platform


Course Objectives


Course Topics

Modules                                        Lab Activities

Course Introduction
1. Platform Overview                           1. Component Identification
2. Hardware Architecture                       2. Hitachi 30x0 and 4xx0 Initial Setup
3. Software Architecture                       3. External SMU Initial Setup
4. Installation of Hitachi NAS Platform        4. Hitachi 30x0 or 4xx0 LUN Discovery
   Models 30x0 and 4xx0
5. Ethernet and Fibre Channel Networks         5. Networking
6. File System and Access Protocols            6. File System and Basic CIFS Administration
7. N-way Clustering and Enterprise             7. Switch-less Clustering
   Virtual Server (EVS)
8. Maintenance                                 8. Maintenance and Firmware Upgrade
9. Troubleshooting and Replacement             9. Troubleshooting and Replacement


Learning Paths

 Are a path to professional certification
 Enable career advancement
 Are for customers, partners and employees
• Available on HDS.com, Partner Xchange and HDSnet
 Are available from the instructor
• Details or copies

HDS.com: http://www.hds.com/services/education/
Partner Xchange Portal: https://portal.hds.com/
HDSnet: http://hdsnet.hds.com/hds_academy/
Please contact your local training administrator if you have any questions regarding
Learning Paths or visit your applicable website.


Collaborate and Share

 Learn what’s new in the Academy


 Ask the Academy a question
 Discover and share expertise
 Shorten your time to mastery
 Give your feedback
 Participate in forums

Academy in theLoop!

theLoop: http://loop.hds.com/community/hds_academy/course_announcements_
and_feedback_community ― HDS internal only


HDS Academy Is on Twitter and LinkedIn

Follow the HDS Academy on Twitter for regular training updates.

LinkedIn is an online community that enables students and instructors to actively
participate in online discussions related to Hitachi Data Systems products and
training courses.

These are the URLs for Twitter and LinkedIn:


 http://twitter.com/#!/HDSAcademy
 http://www.linkedin.com/groups?gid=3044480&trk=myg_ugrp_ovr



1. Platform Overview
Module Objectives

 Upon completion of this module, you should be able to:


• State the purpose and benefits of using Hitachi NAS Platform
• State the concept of the Hitachi NAS Platform architecture
• Identify the positioning of the Hitachi NAS Platform in the Hitachi Data
Systems NAS portfolio


Hitachi NAS Platform

BlueArc Corporation, now a part of Hitachi Data Systems:


 Private company founded in 1998
 Headquartered in San Jose, CA with an R&D center in the United Kingdom
 Highest performing NAS server in the industry
 File serving
 Email serving
 Second largest high-end NAS company (Gartner)
 Fastest growing NAS company three years in a row (Gartner)
 Many years of sales success
 Global sales, professional services, and support infrastructure
 BlueArc has been part of Hitachi Data Systems since September 2011


Hitachi NAS Portfolio

[Chart: Hitachi NAS Platform portfolio, positioned by price (vertical axis) against
features/capacity/performance (horizontal axis).]

Model     IOPS per Node   Max Capacity
F1140     10K             2PB
3080      41K             2PB
3090      73K             4PB
3090 PA   96K             4PB
4060      70K             8PB
4080      105K            16PB
4100      140K            32PB

Performance numbers are used for comparison purposes only. HNAS 3090 is shown
with and without Performance Accelerator; HNAS 3090 PA is with the Performance
Accelerator license installed. For exact, customer-facing numbers, consult the
appropriate, up-to-date performance documents.
F1140 = Hitachi NAS Platform F1140
3080 = Hitachi NAS Platform 3080
3090 = Hitachi NAS Platform 3090
3090 PA = Hitachi NAS Platform 3090 including Performance Accelerator license
4060 = Hitachi NAS Platform 4060
4080 = Hitachi NAS Platform 4080
4100 = Hitachi NAS Platform 4100


Hitachi Unified Storage

[Diagram: Hitachi Unified Storage serving file protocols (CIFS, NFS) and block
protocols (FC, iSCSI), managed through Hitachi Command Suite (HCS).]


Hitachi Unified Storage Options

[Diagram: Hitachi Unified Storage options — FC-SAN with FC (#10); IP-SAN/FCoE with
iSCSI (#11) and FCoE (#12); and file access (CIFS/NFS) through the File Module (HNAS).]


Hitachi Unified Storage (HUS)

[Diagram: the Hitachi Unified Storage (HUS) family, managed through Hitachi Command Suite.]

Tier    Block module         File modules
Entry   HUS 110 (Model XS)   F3080 or F4060
Mid     HUS 130 (Model S)    F3080 or F4060
Max     HUS 150 (Model MH)   F30x0 or F4xx0

 F3080 is File Module M1
 F3090 is File Module M2


What Is Hitachi NAS Platform or NAS Gateway Technology?

[Diagram: the NAS gateway sits between the LAN/WAN (file-level access) and Fibre
Channel storage (block-level access).]

Gateway technology acts as a converter between LAN/WAN file-level data access and
Fibre Channel block-level data access. A NAS gateway is designed primarily to
perform the data store and retrieve tasks, which are only a few of the many tasks a
general-purpose file server must handle. Because it is built primarily for storing
and retrieving data, it often outperforms file servers that span many file-server
functions; a conceptual sketch follows the benefits list below.
Benefits:
 Feature rich
 Asset protection
 NAS/SAN consolidation for improved Total Cost of Ownership (TCO)
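To make the converter idea concrete, here is a purely conceptual Python sketch (not HNAS code; the function names, block size, and file-to-block map are all invented for illustration) of how a gateway satisfies a file-level read with block-level I/O:

```python
# Conceptual sketch of NAS gateway behavior -- not actual HNAS code.
# A file-level request (path, offset, length) arrives over the LAN;
# the gateway resolves it to block addresses and issues block-level
# reads to the Fibre Channel storage.

BLOCK_SIZE = 4096  # illustrative block size

# Hypothetical file-to-block map maintained by the gateway's file system
file_map = {"/share/report.txt": [1207, 1208, 1209]}  # logical block list

def san_read_block(lba: int) -> bytes:
    """Stand-in for a block-level read from FC-attached storage."""
    return b"\x00" * BLOCK_SIZE

def nas_read(path: str, offset: int, length: int) -> bytes:
    """Serve a file-level read by translating it into block reads."""
    blocks = file_map[path]
    first, last = offset // BLOCK_SIZE, (offset + length - 1) // BLOCK_SIZE
    data = b"".join(san_read_block(blocks[i]) for i in range(first, last + 1))
    start = offset % BLOCK_SIZE
    return data[start:start + length]

print(len(nas_read("/share/report.txt", 100, 5000)))  # 5000
```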


High-level Implementation

[Diagram: servers access data over NFS, CIFS, FTP, and iSCSI through the IP data
network to a two-node Hitachi NAS Platform cluster. The nodes connect over a
private management network to the SMU (plus an optional standby SMU), and over
dual Fibre Channel switches/SANs to Hitachi Data Systems enterprise storage and
Hitachi Data Systems modular and unified storage. A public SAN/storage management
network links the management workstation.]

Consult HiFIRE for interoperability with FC switches and supported firmware levels.
Support for Enterprise Storage Systems:
 Hitachi Unified Storage VM (HUS VM)
 Hitachi Virtual Storage Platform (VSP)
 Hitachi Universal Storage Platform V (USPV)
 Hitachi Universal Storage Platform VM (USPVM)

Support for Modular Storage Systems:


 Hitachi Unified Storage 110 (HUS 110)
 Hitachi Unified Storage 120 (HUS 120)
 Hitachi Unified Storage 130 (HUS 130)
 Hitachi Adaptable Modular Storage 2100 (AMS2100)
 Hitachi Adaptable Modular Storage 2300 (AMS2300)
 Hitachi Adaptable Modular Storage 2500 (AMS2500)
 Hitachi Simple Modular Storage 100 (SMS100)
 Hitachi Workgroup Modular Storage 100 (WMS100)
 Hitachi Adaptable Modular Storage 200 (AMS200)
 Hitachi Adaptable Modular Storage 500 (AMS500)
 Hitachi Adaptable Modular Storage 1000 (AMS1000)


Platform Performance Specifications

                                      3080/M1        3090/M2        4060           4080               4100

Cluster Nodes max per HNAS cluster    2              2              2              4 initial/8 later  4 initial/8 later

Usable Capacity max per Cluster       2PB            4PB            8PB            16PB               16PB initial/32PB later

FS Size max per Cluster               64TB           128TB          256TB          256TB initial/     256TB initial/
                                                                                   512TB later        1PB later

Max # of FS per Cluster               125            125            125            125                125

Max # of System Drives per Cluster    512            512            512            512                512

Max concurrent Connections
per single Node/Server                30,000         45,000         60,000         60,000             60,000

Max concurrent Open Files
per single Node/Server                22,000         90,000         221,000        221,000            474,000

LAN / File Serving                    6 x 1Gb(1) +   6 x 1Gb(1) +   4 x 10Gb(3)    4 x 10Gb(3)        4 x 10Gb(3)
                                      2 x 10Gb(2)    2 x 10Gb(2)

Fibre Channel / Backend Storage       4 x 4Gb FC(4)  4 x 4Gb FC(4)  4 x 8Gb FC(5)  4 x 8Gb FC(5)      4 x 8Gb FC(5)

Cluster Interconnect                  2 x 10Gb(2)    2 x 10Gb(2)    2 x 10Gb(3)    2 x 10Gb(3)        2 x 10Gb(3)

(1) 1GbE copper
(2) XFP modules – multi & single mode optical
(3) SFP+ modules – passive copper, multi & single mode optical
(4) SFP modules – multi mode optical
(5) SFP+ modules – multi mode optical
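Because these limits come up constantly during sizing, it can help to see them as data. The sketch below is illustrative only (not an official tool; the values are transcribed from the table above, using initial values and ignoring the "later" increases):

```python
# A few per-model maximums from the table above, encoded for quick
# configuration checks (initial values only; "later" increases ignored).
SPECS = {
    "3080": {"max_nodes": 2, "max_capacity_pb": 2,  "max_fs_size_tb": 64},
    "3090": {"max_nodes": 2, "max_capacity_pb": 4,  "max_fs_size_tb": 128},
    "4060": {"max_nodes": 2, "max_capacity_pb": 8,  "max_fs_size_tb": 256},
    "4080": {"max_nodes": 4, "max_capacity_pb": 16, "max_fs_size_tb": 256},
    "4100": {"max_nodes": 4, "max_capacity_pb": 16, "max_fs_size_tb": 256},
}

def config_fits(model: str, nodes: int, capacity_pb: float) -> bool:
    """Check a proposed cluster against the published maximums."""
    s = SPECS[model]
    return nodes <= s["max_nodes"] and capacity_pb <= s["max_capacity_pb"]

print(config_fits("3090", 2, 3.5))  # True  (2 nodes, 3.5PB within 4PB)
print(config_fits("4060", 4, 6.0))  # False (4060 clusters at most 2 nodes)
```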


Differences Between Models 3080 and 3090

[Photos: Hitachi NAS Platform 3090 (top) and 3080 (bottom).]


HNAS 3090 Performance Accelerator

 The Performance Accelerator enables a throughput and I/O


performance enhancement within the Mercury server VLSI
• Throughput component
▪ Connection between the Storage Interface (SI) FPGA and the Tachyon
Fibre Channel controller changes from 4 lanes to 8 lanes
• IOPS component
▪ The number of cache controllers within the SI FPGA increases from 1 to 2
 As with all performance changes, the exact results depend on many
factors and will differ for each customer’s applications.
 If the bottleneck in a system is neither the PCIe connection to
Tachyon nor the SI cache controller, then installing Performance
Accelerator is unlikely to make any difference. (A rough arithmetic
sketch of the throughput component follows below.)
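As a rough feel for the throughput component, here is a back-of-the-envelope sketch. It assumes link bandwidth scales linearly with lane count; the per-lane rate is a placeholder assumption for illustration, not an HNAS specification:

```python
# Doubling the lanes between the SI FPGA and the Tachyon FC controller
# doubles the theoretical link bandwidth. The per-lane rate is an
# assumed placeholder, chosen only to show the scaling.
PER_LANE_MB_S = 500  # assumption, not a published figure

for lanes in (4, 8):  # without / with Performance Accelerator
    print(f"{lanes} lanes -> {lanes * PER_LANE_MB_S:,} MB/s theoretical")
# 4 lanes -> 2,000 MB/s theoretical
# 8 lanes -> 4,000 MB/s theoretical
```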

Licensing
 Performance Accelerator is a licensed feature and will only be enabled if the
Performance Accelerator license is present.
Performance Accelerator is supported on:
 NAS 3090 only
Performance Accelerator is installed by:
 Installing a Performance Accelerator license
 Performing a full system reboot
 If clustered, reboot one node at a time


Differences Between Models 4060 and 4080

[Photos: Hitachi NAS Platform 4060 (top) and 4080 (bottom).]


What Is What

Hitachi Data Systems                               BlueArc Mercury
HNAS 3080 (G1)                                     Mercury 50
HNAS 3080 (G2) and F3080 (File Module M1)          Mercury 55
HNAS 3090 (G1)                                     Mercury 100
HNAS 3090 (G2) and F3090 (File Module M2)          Mercury 110
HNAS 4060/4080 and F4060/F4080                     Mercury 220
HNAS 4100 and F4100                                Mercury 230


High-performance NAS Platform 3200 Rear View


[Rear view of the High-performance NAS Server: NIM, FSA/FSX, FSB, and SIM blade
modules; Power Supply Units (PSUs); batteries; System Management Unit (SMU).]


The basic High-performance NAS Server consists of a System Management Unit (SMU)
connected to a 4U form-factor chassis. The chassis includes redundant fans and
power supplies plus four blade modules: a Network Interface Module (NIM), a File
System X module (FSX), a File System B module (FSB), and a Storage Interface
Module (SIM).
System Management Unit
The SMU is a dedicated appliance that executes management functions for the
High-performance NAS system. The SMU can manage multiple nodes in a clustered
environment, and several clusters, and is not involved in any client data
movement. The SMU consists of an off-the-shelf PC running Linux and a BlueArc-
developed application that provides the SMU functionality.


Hitachi NAS Platform Models 3080 and 3090

[Photo: Hitachi NAS Platform models 3080 and 3090 — Mercury FPGA Board (MFB),
Mercury Motherboard (MMB), Power Supply Units (PSUs), and System Management
Unit (SMU).]


The Hitachi NAS Platform models 3080 and 3090 support the same file-services
features as the traditional High-performance NAS Platform, the BlueArc Titan. The
3080 and 3090 consist of a single 1U blade built into a 3U cabinet. This blade is
not an FRU in Generation 1 (G1), so the cabinet including the blade is one FRU.
The only FRUs in the cabinet are the two power supply units (PSUs), the three fan
assemblies, the two HDDs, the 10Gb pluggable modules (XFPs), and the Small
Form-factor Pluggables (SFPs).
The 3080 and 3090 can use a built-in (embedded) SMU in the same processor box,
with Linux as the OS and the SMU application running on top of it. The embedded
SMU application can be disabled and an external SMU used for management instead.
The external SMU uses the same hardware and Linux version as the SMU for the
Hitachi High-performance NAS Platform models 3100 and 3200 (BlueArc Titan 3), but
the SMU software might need to be upgraded to the same level as the firmware in
the 3080 and 3090.

HDS Confidential: For distribution only to authorized parties. Page 1-15


Platform Overview
Cable Side HNAS 3080 or 3090 G1 and G2

Cable Side HNAS 3080 or 3090 G1 and G2

[Figure: cable side of the G1 (top) and G2 (bottom) chassis]

Page 1-16 HDS Confidential: For distribution only to authorized parties.


Platform Overview
Hitachi NAS Platform Models 4xx0

Hitachi NAS Platform Models 4xx0

[Figure: cable side of the 4060/4080 (top) and the 4100 (bottom)]

HDS Confidential: For distribution only to authorized parties. Page 1-17


Platform Overview
Summary Hitachi NAS Platform 4100

Summary Hitachi NAS Platform 4100

HNAS 4100 Performance Targets


• NFS SPECsfs2008 IOPS: 140K per node
• Throughput: 2,000MB/s

Scalability Targets
• 125 file systems per cluster
• File system sizes up to 1PB
• Up to 32PB shared storage
• Disk capacity 32PB
• Directory capacity up to 16 million files
• Up to 1,024 snapshots

Key Features
• Unified NAS and IP SAN
• Hardware accelerated
• Virtual volumes and servers
• Multi-protocol support
• Multi-Tiered Storage (MTS)
• Policy-based management
• Data protection features

High Availability
• Hot swappable units
• Clustering up to 8 nodes
• NVRAM mirroring
• Parallel RAID striping
• Active-Active clustering

Page 1-18 HDS Confidential: For distribution only to authorized parties.


Platform Overview
Module Summary

Module Summary

 In this module, you have learned to:


• State the purpose and benefits of using Hitachi NAS Platform
• State the concept of the Hitachi NAS Platform architecture
• Identify the positioning of the Hitachi NAS Platform in the Hitachi Data
Systems NAS portfolio

HDS Confidential: For distribution only to authorized parties. Page 1-19


Platform Overview
Module Review

Module Review

1. Which Hitachi storage systems does the Hitachi NAS Platform


support?
2. Is a 10GbE customer data LAN supported on the Hitachi NAS
Platform 3080 model?
3. Does the 4060 model support 1GbE UTP on the customer data
LAN?
4. Is an external SMU required?
5. List the external connectivity differences between models 3080 and
3090.
6. How many nodes can be controlled by the embedded SMU?

Page 1-20 HDS Confidential: For distribution only to authorized parties.


2. Hardware
Architecture
Module Objectives

 Upon completion of this module, you should be able to:


• Identify the hardware components of the Hitachi NAS
Platform
• Interpret important indicators and status
• Explain the external connectivity specification

HDS Confidential: For distribution only to authorized parties. Page 2-1


Hardware Architecture
Hitachi NAS 3080 and 3090 Simplified Block Diagram

Hitachi NAS 3080 and 3090 Simplified Block Diagram

[Figure: HNAS 3080/3090 simplified block diagram. The Mercury FPGA Board (MFB1) carries the SiliconFS pipeline: the Network Interface (NI) with the 10GbE and GbE ports, Data Movement (TFL) with 2GB NVRAM, the file system block (WFS) with 10GB metadata cache, the Disk Interface (DI) with 4GB sector cache, and the FCI feeding a Tachyon QE4+ with four FC ports; the MBI bridges to the Mercury Motherboard (MMB), which runs BALI and the SMU on an Intel Core 2 Duo E8400 3.0GHz with 8GB memory and GbE ports eth0 and eth1.]

HNAS 3080 and 3090 documentation: Mercury FPGA Board (MFB)


HNAS 4060, 4080, and 4100 documentation: Main FPGA Board (MFB)
 Network Interface (NI)
 Ethernet TX and RX
 Ethernet TCP and framing
 Replaces RX, TX and TCP
 Data movement and NVRAM (TFL)
 Replaces TDP, FDP and WLOG
 Motherboard Interface (MBI)
 Bridges PCI bus to other FPGAs
 Deals with interrupts
 Replaces PCI block in WLOG
 Wise File System (WFS)
 WFS file system chip
 Deals with all file system functions

Page 2-2 HDS Confidential: For distribution only to authorized parties.


Hardware Architecture
Hitachi NAS 3080 and 3090 Simplified Block Diagram

 Replaces WFILE, WDIR, OBJ, FSA


 Disk Interface (DI)
 Moving data to and from disk, sector cache
 Equivalent of Storage Interface Module (SIM)
 Fibre Channel Interface (FCI)
 Interfaces DI and Tachyon QE4+
 Replaces PCI block in Storage Interface Module (SIM)

HDS Confidential: For distribution only to authorized parties. Page 2-3


Hardware Architecture
Hitachi NAS 4060 and 4080 Simplified Block Diagram

Hitachi NAS 4060 and 4080 Simplified Block Diagram

[Figure: HNAS 4060/4080 simplified block diagram. The Main FPGA Board (MFB2) carries the SiliconFS pipeline: the Network Interface (NI) with the 10GbE ports, Data Movement (TFL) with 4GB NVRAM, the file system block (WFS) with 10GB metadata cache, the Disk Interface (DI) with 4GB sector cache, and the FCI feeding a Tachyon QE8 with four FC ports; the MBI bridges to the Main Motherboard (MMB), which runs BALI and the SMU on an Intel Xeon Quad Core E31225 3.1GHz with 16GB memory and GbE ports eth0 and eth1.]

HNAS 3080 and 3090 documentation: Mercury FPGA Board (MFB)


HNAS 4060, 4080, and 4100 documentation: Main FPGA Board (MFB)

 Network Interface (NI)


 Ethernet TX and RX
 Ethernet TCP and framing
 Replaces RX, TX and TCP
 Data movement and NVRAM (TFL)
 Replaces TDP, FDP and WLOG
 Motherboard Interface (MBI)
 Bridges PCI bus to other FPGAs
 Deals with interrupts
 Replaces PCI block in WLOG
 Wise File System (WFS)
 WFS file system chip

Page 2-4 HDS Confidential: For distribution only to authorized parties.


Hardware Architecture
Hitachi NAS 4060 and 4080 Simplified Block Diagram

 Deals with all file system functions


 Replaces WFILE, WDIR, OBJ, FSA
 Disk Interface (DI)
 Moving data to and from disk, sector cache
 Equivalent of Storage Interface Module (SIM)
 Fibre Channel Interface (FCI)
 Interfaces DI and Tachyon QE4+
 Replaces PCI block in Storage Interface Module (SIM)

HDS Confidential: For distribution only to authorized parties. Page 2-5


Hardware Architecture
Mercury (Main) FPGA Board (MFB) Model 30x0

Mercury (Main) FPGA Board (MFB) Model 30x0

 MBI (Arria) Motherboard Interface


 TFL (Stratix III) Data movement and NVRAM
 WFS (Stratix III) Supports all file system functions
 NI (Stratix III) Network Interface
 DI (Stratix III) Disk Interface
 FCI (Stratix II) Fibre Channel Interface
 Product Marketing will often only count the 4 main Stratix III FPGAs in customer-facing brochures and material

HNAS 3080 and 3090 documentation: Mercury FPGA Board (MFB)


HNAS 4060, 4080, and 4100 documentation: Main FPGA Board (MFB)
Model 4xx0 uses a newer FPGA family: Stratix IV.

Page 2-6 HDS Confidential: For distribution only to authorized parties.


Hardware Architecture
Memory and Cache per Single Node

Memory and Cache per Single Node

                      3080/M1   3090/M2   4060   4080   4100
CPU Memory (GB)             8         8     16     16     32
NVRAM(1) (GB)               2         2      4      4      8
Metadata Cache (GB)        10        10     10     10     36
Sector Cache (GB)           4         4      4      4     16
Other (GB)                  8         8     12     12     16
Total (GB)                 32        32     46     46    108

(1) The NVRAM data retention period is 72 hours, and the NVRAM battery needs replacing every 2 years

HDS Confidential: For distribution only to authorized parties. Page 2-7


Hardware Architecture
Mercury (Main) Motherboard (MMB)

Mercury (Main) Motherboard (MMB)

MMB
 Off the shelf x86 motherboard
 Single processor
• 30x0 dual core and 4xx0 quad core
 On board 10/100/1000 Ethernet (3)
 Connected to 2 x 2.5” HDD (Linux SW RAID-1 configuration)
 Runs Debian Linux 5.0
 Inter-module communications over loopback Linux sockets and
shared memory
 64 bit architecture
 Model 30x0 8GB memory
 Model 4060/4080 16GB memory
 Model 4100 32GB memory

HNAS 3080 and 3090 documentation: Mercury MotherBoard (MMB)


HNAS 4060, 4080, and 4100 documentation: Main MotherBoard (MMB)
The MMB contains a multi core CPU and 8, 16, 32GB of system memory. All of the
software tasks run on the MMB. All the custom hardware functionality resides on
the MFB. The MFB contains all the FPGA functionality found in Hitachi NAS
models.

Page 2-8 HDS Confidential: For distribution only to authorized parties.


Hardware Architecture
Mercury (Main) FPGA Board (MFB)

Mercury (Main) FPGA Board (MFB)

MFB
 Single custom PCB is similar in size to a motherboard
 Connects to MMB using four PCIe lanes
 Six FPGAs (Replacing 13 High-performance NAS FPGAs)
 Model 30x0 24GB memory
 Model 4060/4080 50GB memory
 Model 4100 76GB memory

HNAS 3080 and 3090 documentation: Mercury FPGA Board (MFB)

HNAS 4060, 4080, and 4100 documentation: Main FPGA Board (MFB)

HDS Confidential: For distribution only to authorized parties. Page 2-9


Hardware Architecture
Hitachi NAS Platform 30x0 Rear Panel

Hitachi NAS Platform 30x0 Rear Panel

[Figure: 30x0 rear panel callouts: 2 x 10G Ethernet cluster ports (XFP); 2 x 10G Ethernet network ports (XFP); 6 x 1G Ethernet network ports (1000Base-T copper); private 10/100 Ethernet 5-port switch (100Base-T copper); 4 x 1/2/4G Fibre Channel ports (SFP); NVRAM status, power status, and alert LEDs; power and reset switches; 2 x redundant, hot-swappable PSUs; motherboard mouse/keyboard, 2 x USB, serial, and video ports; 2 x 10/100/1000 Ethernet RJ45 management ports plus one reserved port (future use). Motherboard port layout may vary; motherboard ports are identified by labelling.]

Five sets of Ethernet ports:


 3 x 10/100/1000 Motherboard ports (RJ45)
 2 active management ports, 1 inactive reserved for future use
 6 x 1G file serving ports (RJ45)
 2 x 10G file serving ports (XFP)
 2 x 10G cluster ports (XFP)
 Five port unmanaged switch (RJ45, no internal connections)
Can aggregate file serving ports:
 Up to 8 aggregations
 Cannot mix 1G and 10G ports in an aggregation
Also, USB ports, serial port, VGA, keyboard and mouse

Page 2-10 HDS Confidential: For distribution only to authorized parties.


Hardware Architecture
Hitachi NAS Platform Port Layout 3080/3090

Hitachi NAS Platform Port Layout 3080/3090

[Figure: 3080/3090 port layout: 2 x 10GbE cluster interconnect (optional); 2 x 10GbE and 6 x GbE file serving; 5 x 10/100 switch for private network management; 4 x FC for storage (2 optional); public and private management ports; serial port and KVM/USB ports for initial setup, keyboard, and external media.]

Hitachi NAS 3080 and 3090 have five sets of Ethernet ports.
From left to right there are 2 x 10GbE cluster interconnect ports (XFP) for use when clustering NAS 3080 and 3090 systems. For client file services, there are 2 x 10GbE file serving ports (XFP) and 6 x 1GbE file serving ports (RJ45) for connecting to the public network.
File serving ports can be aggregated: all of the "like" Ethernet ports can be combined into one or more aggregations, up to 8 aggregations. The only restriction is that the 10GbE ports cannot be combined with the 1GbE ports in the same aggregation. Direct traffic to specific ports by giving aggregations the appropriate IP address.
Next in line is a five-port unmanaged RJ45 switch (no internal connections).
Then, there are four 1/2/4Gbps Fibre Channel storage service ports. All four Fibre Channel ports can be used simultaneously and still maintain their maximum speed of 4Gbps.
On the MMB, there are the mouse and keyboard PS2 ports and a video port for connection to a KVM switch, two USB ports, and one serial interface. Two Ethernet ports are for connection to the public and private networks for management access. The third Ethernet port above the USB ports is not currently active but might be used in the future.

HDS Confidential: For distribution only to authorized parties. Page 2-11


Hardware Architecture
Hitachi NAS Platform 4xx0 Rear Panel

Hitachi NAS Platform 4xx0 Rear Panel

[Figure: 4xx0 rear panel callouts: 2 x 10G Ethernet cluster ports (SFP+); 4 x 10G Ethernet network ports (SFP+); 4 x 2/4/8G Fibre Channel ports (SFP+); NVRAM status, power status, and alert LEDs; power and reset switches; 2 x redundant, hot-swappable PSUs; motherboard mouse/keyboard, 2 x USB, RJ45 serial, and video ports; 2 x 10/100/1000 Ethernet management ports. Motherboard port layout may vary; motherboard ports are identified by labelling.]

Three sets of Ethernet ports:


 3 x 10/100/1000 Motherboard ports (RJ45)
 2 active management ports, 1 inactive reserved for future use
 2 x 10G file serving ports (SFP+)
 2 x 10G cluster ports (SFP+)
Can aggregate file serving ports:
 Up to 4 aggregations
Also, USB ports, serial port, VGA, keyboard and mouse

Page 2-12 HDS Confidential: For distribution only to authorized parties.


Hardware Architecture
Hitachi NAS Platform Port Layout 4060/4080/4100

Hitachi NAS Platform Port Layout 4060/4080/4100

[Figure: 4060/4080/4100 port layout: 2 x 10GbE cluster interconnect (optional); 4 x 10GbE file serving; 4 x FC for storage (2 optional); Intelligent Platform Management Interface (IPMI) port (future use); public and private management ports; mouse, keyboard, VGA, USB, and serial ports for initial setup and external media.]

Hitachi NAS 4060, 4080, and 4100 have three sets of Ethernet ports.
From left to right there are 2 x 10GbE cluster interconnect ports (SFP+) for use when clustering NAS 4060 and 4080 systems. For client file services, there are 4 x 10GbE file serving ports (SFP+).
File serving ports can be aggregated, from 1 up to 4 aggregations. Direct traffic to specific ports by giving aggregations the appropriate IP address.
Next in line, there are four 2/4/8Gbps Fibre Channel storage service ports. All four Fibre Channel ports can be used simultaneously and still maintain their maximum speed of 8Gbps.
On the MMB, there are the mouse and keyboard PS2 ports and a video port for connection to a KVM switch, two USB ports, and one serial interface. Two Ethernet ports are for connection to the public and private networks for management access. The third Ethernet port above the USB ports is not currently active but might be used in the future for the Intelligent Platform Management Interface (IPMI).

HDS Confidential: For distribution only to authorized parties. Page 2-13


Hardware Architecture
Hitachi NAS Platform Models 4xx0 Flavors

Hitachi NAS Platform Models 4xx0 Flavors

[Figure: Supermicro (top) and Tyan (bottom) flavors of the 4xx0 chassis]
Page 2-14 HDS Confidential: For distribution only to authorized parties.


Hardware Architecture
MMB Module Flavors and Port Layout

MMB Module Flavors and Port Layout

HNAS 3080/3090
TYAN Toledo

HNAS 4060/4080/4100
Supermicro

HNAS 4060/4080/4100
TYAN

HDS Confidential: For distribution only to authorized parties. Page 2-15


Hardware Architecture
NVRAM or Battery Status LED

NVRAM or Battery Status LED

NVRAM STATUS LED

3100/3200 FSB Module 3080/3090/4060/4080/4100 MFB


 NVRAM LED

– Off: disabled or battery exhausted


– Green solid: Operational
– Green flash: Contents protected by battery
– Hold reset button for five seconds to isolate battery for shipping
NVRAM enabled when functional software boots

Page 2-16 HDS Confidential: For distribution only to authorized parties.


Hardware Architecture
Facia and Status LEDs 3100 and 3200

Facia and Status LEDs 3100 and 3200

[Figure: fascia with Power LED and Status LED callouts]

HDS Confidential: For distribution only to authorized parties. Page 2-17


Hardware Architecture
Facia and Status LEDs 3080 and 3090

Facia and Status LEDs 3080 and 3090

[Figure: fascia with Power LED and Status LED callouts]

Page 2-18 HDS Confidential: For distribution only to authorized parties.


Hardware Architecture
Facia and Status LEDs 4060, 4080, and 4100

Facia and Status LEDs 4060, 4080, and 4100

[Figure: fascia with Power LED and Status LED callouts]

HDS Confidential: For distribution only to authorized parties. Page 2-19


Hardware Architecture
Power/Server Status LED

Power/Server Status LED

POWER STATUS LED


3100/3200 NIM Module 3080/3090/4060/4080/4100 MFB

 Power/Status LEDs (Mirror the fascia LEDs)
– Off — The Server is not powered up.
– Flash (5Hz) — The Server is booting.
– Flash (0.6Hz) — The Server is available to host file services but is not currently doing so.
– Green — Normal operation with a single Server or an active Server in a clustered operation.
Page 2-20 HDS Confidential: For distribution only to authorized parties.


Hardware Architecture
NAS Node Status LED (Alert)

NAS Node Status LED (Alert)

STATUS LED
3100/3200 NIM Module 3080/3090/4060/4080/4100 MFB

 Status LED (Amber)

– Off — Normal operation.


– Amber — Critical failure and the NAS Server is not operational
– Slow Flash — System shutdown has failed; flashes once every three
seconds
– Flash (0.8Hz) — The NAS Server needs attention and a non-critical
failure has been detected; for example, a fan or power
supply has failed

HDS Confidential: For distribution only to authorized parties. Page 2-21


Hardware Architecture
Reset and Power Switch

Reset and Power Switch

RESET Switch
3100/3200 NIM Module 3080/3090/4060/4080/4100 MFB

 RESET Switch
• With all Hitachi NAS Platforms, pressing the reset button is always preferable to pulling the power cables or using the main switch
• Generates diagnostic dumps
 Power Switch 3080/3090/4060/4080/4100
• Effectively a motherboard power switch
• Should not be required in normal use
Note: The reset and power switches are recessed and require the insertion of a pen or similar object to activate.

Page 2-22 HDS Confidential: For distribution only to authorized parties.


Hardware Architecture
Redundant and Hot Swappable Power Supply Unit (PSU)

Redundant and Hot Swappable Power Supply Unit (PSU)

[Figure: PSUs for the 3080/3090 (450W), the 4060/4080/4100 (500W, with cable retention feature), and the 3100/3200; callouts: DC GOOD LED, PSU STATUS LED, AC GOOD LED]

 90-264V, 47-63Hz AC
 The 450W PSU used in the 30x0 is not compatible with the 500W PSU used in the 4xx0
 The system has dual load-sharing PSUs and can function on one unit

HNAS 3100/3200 PSU Status:


Green = main power, DC, internal fans, battery OK
Amber = PSU fault, including internal fans, battery
Off = no main power or switched off
HNAS 30x0/4xx0:
 AC good LED
 On: AC input is powered, operating normally
 Off: Check AC input feed
 DC good LED
 On: DC output operating normally
 Off: Disconnect power, wait for 10 seconds, reconnect
 If this does not fix the problem, replace the PSU
 PSU status LED
 Off: OK
 On: Internal fault – e.g. exceeds acceptable temperature or fan failure
 If operating range has been exceeded, disconnect for 10 minutes, and then reconnect
 Replace PSU if LED remains on

HDS Confidential: For distribution only to authorized parties. Page 2-23


Hardware Architecture
SMU200 and SMU300 Replaces SMU100

SMU200 and SMU300 Replaces SMU100

 Newer SMU200 or SMU300 replaces previous SMU100


 Faster processor, more memory, larger HDD, DVD-ROM
drive, and no floppy
 After Tiger-1 v7.0 SMU100 HW is end of life (EOL)
 With Angel-1 v11.0 SMU200 HW is end of life (EOL)
 SMU400 HW will replace the SMU200 in CY 2013

                 SMU100              SMU200                      SMU300 (April 2011)
                 Pentium 4 2.8 GHz,  Pentium Dual-Core 1.8 GHz,  Intel Core 2 Duo E7500 2.93 GHz,
                 1 GB, 80GB (SATA)   1 GB, 500GB (SATA)          4 GB, 1TB (SATA)

As of today, Hitachi only sells SMU200s and SMU300s. If an SMU100 is earmarked for replacement due to a defect, only an SMU200 or SMU300 is delivered as a replacement unit. Hitachi does not stock SMU100s for spare parts. The SMU200 will run out of stock during 2011, and then only SMU300s will be delivered and stocked for spare parts.

Page 2-24 HDS Confidential: For distribution only to authorized parties.


Hardware Architecture
SMU400 Early Information

SMU400 Early Information

 Front-swappable disk, to facilitate RMAs of only the disk drives (instead of the entire SMU).
 Dual-redundant power supplies
 IPMI – to facilitate remote KVM (no more need for
physical access to an SMU).
(PROVIDED AS-IS, neither supported nor maintained by HNAS engineering or HDS
Support.)
Private Public

Intel Xeon E3-1220v2 3.1GHz CPU


1 TB SATA disk - front-swappable.
Only 1 of 4 slots is used.
8 GB RAM.

SMU400 will not be released and generally available (GA) at the same time as the HNAS 4xx0 and Angel-2 SW release.

HDS Confidential: For distribution only to authorized parties. Page 2-25


Hardware Architecture
Module Summary

Module Summary

 In this module, you have learned to:


• Identify the hardware components of the Hitachi NAS
Platform
• Interpret important indicators and status
• Explain the external connectivity specification

Page 2-26 HDS Confidential: For distribution only to authorized parties.


Hardware Architecture
Module Review

Module Review

1. Can the Customer Data LAN media be twisted pair or optical?


2. Which bit-rates are supported on the Cluster Interconnect interface?
3. Which media is used for the Cluster Interconnect interface?
4. How many PCBs are included in HNAS 3080 or 3090?
5. How many PCBs are included in HNAS 4060, 4080, and 4100?
6. Are the PSUs in the 30x0 and 4xx0 interchangeable?

HDS Confidential: For distribution only to authorized parties. Page 2-27


Hardware Architecture
Module Review

Page 2-28 HDS Confidential: For distribution only to authorized parties.


3. Software
Architecture
Module Objectives

 Upon completion of this module, you should be able to:


• Identify the BlueArc Operating System (BOS) in the Hitachi NAS Platform
nodes
• Identify the software components of the Hitachi NAS Platform nodes
• Follow the individual steps in the boot process
• Explain the structure of licensing for the HNAS system
• List the components in the Hitachi NAS Platform software suite

HDS Confidential: For distribution only to authorized parties. Page 3-1


Software Architecture
Software Components Hitachi NAS Platform Models

Software Components Hitachi NAS Platform Models

[Figure: software components of the Hitachi NAS Platform models. The MMB runs Linux, which hosts BALI, PAPI, and the SMU application; BALI drives the MFB (MBI) over PCIe and hosts EVS0; SOAP clients and servers connect the SMU, BALI, and PAPI; the MCP is reached through PAPI. The numbers 1 to 4 mark the boot stages described on the next page.]

The underlying operating system is Linux. Linux manages the hardware, including
the mirrored HDDs and the network protocol stack.
SOAP = Simple Object Access Protocol
MFB = Mercury FPGA Board
MMB = Mercury Motherboard
MCP = Mercury Charge/Power Board
SMU = System Management Unit
PAPI = Platform API
BALI = BOS And Linux Incorporated

Page 3-2 HDS Confidential: For distribution only to authorized parties.


Software Architecture
Node Boot Sequence

Node Boot Sequence

[Figure: node boot sequence, stages 1 to 4 on the software component diagram: (1) the MMB boots and loads the Linux kernel, (2) BALI, PAPI, and the embedded SMU start, (3) BALI resets and loads the MFB firmware, (4) EVS0 becomes accessible.]

 To make the node operational, the first requirement is to boot up the motherboard and load the Linux kernel.
 The next step is to bring up the three most important application modules: BALI, PAPI, and the embedded SMU (if the embedded SMU services are not disabled).
 Having BALI active enables the Mercury FPGA Board to reset, load the firmware for file services, and start the Enterprise Virtual Servers (EVSs).
 EVS0 is configured by default and will now be accessible for administrative purposes like configuration tasks and monitoring.

HDS Confidential: For distribution only to authorized parties. Page 3-3


Software Architecture
BOS and Linux Incorporated (BALI)

BOS and Linux Incorporated (BALI)

 “BOS And Linux Incorporated” (BALI)

 A software platform

 Fundamental Hitachi NAS Platform enabler

 Locked to a single core (core 1)

BALI starts after Linux is running and is the software that controls the NAS node
functionality.
BALI = BOS and Linux Incorporated.

Page 3-4 HDS Confidential: For distribution only to authorized parties.


Software Architecture
Platform API (PAPI)

Platform API (PAPI)

 “Platform API” (PAPI) is a Linux application

 Provides platform independence for managing Linux


configuration
• Network, (DNS, NIS, IP), Date/Time, Package
management, Version and status
 BALI registry is the “master”
• So changes can be propagated around the cluster; the registry overwrites the Linux configuration if there is a mismatch
 PAPI client in both BALI and SMU
• Never accessed directly
 PAPI has a housekeeper
• Regularly scans for configuration mismatches and fixes
them

PAPI communicates the necessary information to the Linux platform for execution.
And, it scans periodically for Linux configuration changes and fixes any
discrepancies. The custom FPGA System Board is managed through a device driver
as any other device would be. The Linux network stack provides connectivity used
for management.
There is a SOAP client or server for each of the major BlueArc software components.
SOAP was implemented first in Stone-1 v6.0, which enables different firmware
versions to communicate. SOAP is an industry standard. If you are not familiar
with SOAP, it is a simple XML based protocol used to allow applications to
exchange information over HTTP. It makes the individual components fairly
independent of each other making development and modifications much simpler.
SOAP = Simple Object Access Protocol
PAPI = Platform API; if you try to change the Linux configuration directly, PAPI will overwrite the changes
API = Application Programming Interface
XML = Extensible Markup Language
The PAPI services can be restarted on request.
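Purely as an illustration of the idea: SOAP is an XML envelope posted over HTTP. The sketch below uses curl with a made-up endpoint and action (this is a generic SOAP 1.1 call, not an actual HNAS management message):

# Generic SOAP 1.1 call: an XML envelope POSTed over HTTP (illustrative only)
curl -s -X POST http://soap.example.com/service \
  -H 'Content-Type: text/xml; charset=utf-8' \
  -H 'SOAPAction: "urn:example#GetStatus"' \
  --data '<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body><GetStatus xmlns="urn:example"/></soap:Body>
</soap:Envelope>'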

HDS Confidential: For distribution only to authorized parties. Page 3-5


Software Architecture
NAS Platform Software Suite

NAS Platform Software Suite

 Virtualization
• Virtual file system and volumes
• Basic and Premium Deduplication
• Enterprise Virtual servers (EVS)
• Clustered Name Space (CNS)

 Storage Management
• Integrated tiered storage
• Tiered File Systems (TFS)
• Policy-based data migration and replication

 Data Protection
• Snapshots
• Asynchronous replication
• Anti-virus scanning
• Disk-to-disk and disk-to-tape backup
• TrueCopy Remote Replication and
ShadowImage Replication
• Synchronous Disaster Recovery

TrueCopy Remote Replication refers to Hitachi TrueCopy Remote Replication


bundle
ShadowImage Replication refers to Hitachi ShadowImage Replication
Synchronous Disaster Recovery refers to Synchronous Disaster Recovery for Hitachi
NAS Platform

Page 3-6 HDS Confidential: For distribution only to authorized parties.


Software Architecture
Hitachi NAS Platform Software Licensing

Hitachi NAS Platform Software Licensing

[Figure: license bundle concept: a BASE key bundle plus individually licensed features such as iSCSI and EVS Security Model, up to the Ultra bundle]

The above example is only to explain the concept of license bundles and individual
licenses. The bundles might be changed for reasons like adjusting the solution to the
market and competitive solutions.

HDS Confidential: For distribution only to authorized parties. Page 3-7


Software Architecture
Hitachi NAS Software Bundles

Hitachi NAS Software Bundles


Entry bundle
• CIFS and NFS
• 2x Enterprise Virtual Server
• Storage Pool, FS Audit
• File System Rollback
• Quick Snapshot Restore
• Base Deduplication

Value bundle: Entry bundle, plus
• 4x Enterprise Virtual Server
• iSCSI
• File System Recover from Snapshot
• Data Migrator
• Read Caching

Ultra bundle: Value bundle, plus
• 64x Enterprise Virtual Server
• Replication, incl. Object, IDR, IBR, ADC
• XVL (External Cross Volume Links)
• Cluster Name Space
• HA cluster
• Synchronous Image Backup
• Virtual Server Security
• Virtual Server Migration

Optional items, purchased separately (for any bundle)
• Data Migrator to Cloud(1)
• File Clone
• Premium Deduplication
• PerfAccelerator(2)
• Terabyte License
• WORM

Notes
• Same software package on HUS and HUS VM
• Licenses are perpetual licenses, per node
• Enterprise Virtual Server license upgrades are available in Insight
• 60-day Trial License
• Enterprise License Agreement provides volume discounts for 10+ nodes; available as term or perpetual licenses
• NAS Virtual Infrastructure Integrator (V2I) is an optional application
• (1) Data Migrator to Cloud will be released April 2013 in HNAS OS v11.1
• (2) PerfAccelerator only for File Module M2 or 3090

Valid and active license key options:


CIFS - license for CIFS
NFS - license for NFS
ISCSI - license for ISCSI
WORM - license for WORM filesystem
SFM - license for EVS migration within a server farm
DM - license for DataMigrator
CNS - license for CNS
QSR - license for snapRestore
FSR - license for FS roll back
ReadCache - license for ReadCache
EvsSecurity - license for EVS security
MetroCluster - license for MetroCluster
JetMirror - license for JetMirror
XVL - license for XVL

Page 3-8 HDS Confidential: For distribution only to authorized parties.


Software Architecture
Hitachi NAS Software Bundles

FSRS - license for FS Recover from Snapshot


HDS - license for HDS storage
JetClone - license for JetClone
JetImage - license for JetImage
JetCenterStandard - license for JetCenterStandard
JetCenterFoundation - license for JetCenterFoundation
PerfAccelerator - license for PerfAccelerator
BaseDeduplication - license for base Deduplication
PremiumDeduplication - license for premium Deduplication
DMCloud - license for Data Migrator Cloud Option
CLUSTER:<n> - license for cluster with <n> nodes
EVS:<n> - license for <n> EVS in cluster ('max' for unlimited EVS)
ModelType:<type> - license for upgrading HNAS model (valid values: '4080')
TB:<n> - license for <n> TB
EXP:mm/dd/yyyy - Expiry date for the license
(The list might not be complete and is subject to change in newer firmware versions; refer to the release notes.)
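Keys can also be inspected from the BOS CLI. A minimal sketch, assuming the license-key command in your firmware release (verify the exact syntax in the Command Line Reference before use):

license-key                   # list currently installed license keys
license-key add <key-string>  # add a key from the CLI (check your release's syntax)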

HDS Confidential: For distribution only to authorized parties. Page 3-9


Software Architecture
Module Summary

Module Summary

 In this module, you have learned to:


• Identify the BlueArc Operating System (BOS) in the Hitachi NAS Platform
nodes
• Identify the software components of the Hitachi NAS Platform nodes
• Follow the individual steps in the boot process
• Explain the structure of licensing for the HNAS system
• List the components in the Hitachi NAS Platform software suite

Page 3-10 HDS Confidential: For distribution only to authorized parties.


Software Architecture
Module Review

Module Review

1. From where is the FW loaded and updated in a HNAS 4100 model?


2. From where is the FW loaded and updated in a HNAS 3080 or 3090
models?
3. What is a Cluster Name Space?
4. List some of the Data Protection features.

HDS Confidential: For distribution only to authorized parties. Page 3-11


Software Architecture
Module Review

Page 3-12 HDS Confidential: For distribution only to authorized parties.


4. Installation of Hitachi NAS
Platform

Module Objectives

 Upon completion of this module, you should be able to:


• Identify the physical environmental and specifications
• Understand the login accounts used on the different networks
• Recognize the individual steps in the hardware and software installation
flow for the Hitachi NAS Platform as a single node
• List the individual steps in the hardware installation flow for the external
SMU installation and set up procedures
• Perform the procedure to join an additional Hitachi NAS Platform node
into the N-way cluster

HDS Confidential: For distribution only to authorized parties. Page 4-1


Installation of Hitachi NAS Platform
Installation Outline of Hitachi NAS Platform

Installation Outline of Hitachi NAS Platform

1. Rack mounting
2. Pre-cabling
a) To avoid IP-address conflicts do not connect any customer
facing network to the nodes until initial setup is completed
3. Fibre Channel (FC) switch configuration
4. Storage subsystem configuration
5. Initial setup of the first node
If a single node, install SMU application and process stops here;
otherwise continue:
6. Initial setup of SMU
a) SMU Initial Configuration (CLI — Command Line Interface)
b) SMU Wizard (GUI)
7. Initial setup of the second node in the cluster
8. Join the second node to the cluster

The sequence above is a suggestion to get the basic configuration completed.


Reference:
MK-90BA021-xx
Hitachi NAS Platform System Installation Guide
MK-92HNAS015-xx
Hitachi NAS Platform model 4000 System Installation Guide Release 11.1.3250

Page 4-2 HDS Confidential: For distribution only to authorized parties.


Installation of Hitachi NAS Platform
Rack Mounting

Rack Mounting

 Telescopic clip-in rails fit 610mm to 740mm deep racks


 Minimal fasteners and simple installation
 Secure with screws
 Do not use any other rail kit

HDS Confidential: For distribution only to authorized parties. Page 4-3


Installation of Hitachi NAS Platform
Login User Accounts Using Embedded SMU

Login User Accounts Using Embedded SMU


[Figure: login paths and accounts with the embedded SMU. The BOS console is reached with SSC (supervisor account) from the public data or private management network, or through the SMU web GUI (admin and manager accounts); the Linux shell is reached over the serial console or SSH on eth0 (public management) and eth1 (private management), using the root and manager accounts with default password nasadmin.]

MFB = Mercury (Main) FPGA Board


MMB = Mercury (Main) MotherBoard
SMU = System Management Unit
BALI = BOS And Linux Incorporated
PAPI = Platform API
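The SSC paths in the figure are typically exercised with the ssc utility; a minimal sketch, assuming the node Admin EVS address used in this course's examples (192.0.2.45) and the accounts shown above:

ssc 192.0.2.45    # from the SMU Linux shell: open the BOS console of a managed node
ssc localhost     # from a node's own Linux shell: open the local BOS console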

Page 4-4 HDS Confidential: For distribution only to authorized parties.


Installation of Hitachi NAS Platform
Login User Accounts Using External SMU

Login User Accounts Using External SMU


[Figure: login paths and accounts with an external SMU. The node-side paths and accounts are the same as with the embedded SMU (root, manager, supervisor, admin; default password nasadmin), with SSC, SSH, web GUI, and console access now going through the external SMU on its own eth0 (public) and eth1 (private) interfaces.]

HDS Confidential: For distribution only to authorized parties. Page 4-5


Installation of Hitachi NAS Platform
Null Modem Cable Configuration

Null Modem Cable Configuration

 A standard null modem cable from an electronics shop will work


 Linksys, DF700 and similar null modem cables do not work
 Ensure that the cable used is configured as shown in this cable
pin connection chart
Signal Abbreviation Pin Pin Abbreviation Signal

Data Carrier Detected DCD 1 1 DCD Data Carrier Detected

Receive Data RD 2 2 RD Receive Data

Transmit Data TD 3 3 TD Transmit Data

Data Terminal Ready DTR 4 4 DTR Data Terminal Ready

Signal Ground SG 5 5 SG Signal Ground

Data Set Ready DSR 6 6 DSR Data Set Ready

Request To Send RTS 7 7 RTS Request To Send

Clear To Send CTS 8 8 CTS Clear To Send

Ring Indicator RI 9 9 RI Ring Indicator

Interface configuration:
 115,200 bps
 8 data bits
 1 stop bit
 No parity
 No Flow control
 VT100 emulation
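For example, from a Linux laptop with a USB serial adapter (assuming the adapter appears as /dev/ttyUSB0; adjust the device name as needed), either of these standard terminal programs matches the settings above:

# 115,200bps, 8 data bits, no parity, 1 stop bit (8N1 is the default for both tools)
screen /dev/ttyUSB0 115200
# or:
minicom -D /dev/ttyUSB0 -b 115200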

Page 4-6 HDS Confidential: For distribution only to authorized parties.


Installation of Hitachi NAS Platform
Three Important Success Criteria

Three Important Success Criteria

Three important success criteria:
1. Planning
2. Planning
3. Follow the PLAN!!

Collect all customer-related information and fill in a form like this as a minimum (example names and IP addresses shown; fill in your own values alongside). Configure the nodes with the data even before connecting to the network.

Private LAN (eth1), subnet mask 255.255.255.0 (example values):
Component                Name          IP address
Admin EVS Node 1         Avn1          192.0.2.15
Admin EVS Node 2         Temp1         192.0.2.16
Cluster name             Mycluster
Node 1 (Clustername-1)   Mycluster-1   192.0.2.11
Node 2 (Clustername-2)   Mycluster-2   192.0.2.12
External SMU             Smu1          192.0.2.10

Public LAN SMU (eth0), subnet mask 255.255.0.0 (example values):
Component                Name          IP address
External SMU             Smu1          10.123.789.10

HDS Confidential: For distribution only to authorized parties. Page 4-7


Installation of Hitachi NAS Platform
Single HNAS 30x0 or 4xx0 with Embedded SMU

Single HNAS 30x0 or 4xx0 with Embedded SMU


[Figure: single HNAS 30x0 or 4xx0 with embedded SMU: the public network (SSH/GUI, NTP, SMTP, Hi-Track) reaches the AVN at 12.120.56.111 on eth0; the private network (storage, switches, NTP, Hi-Track) reaches the AVN at 192.0.2.15 on eth1 (optional).]

Two IP addresses are associated with the Admin Virtual Node (AVN):
• eth0 public address, equivalent to the SMU address on HNAS: 12.120.56.111
• eth1 private network address (optional): 192.0.2.15
The addresses are permanently assigned to the node, as there are no clustering considerations to worry about.

The IP address on the red network is optional. If components on the red network need to be managed by the internal SMU, or if the customer has services or Hi-Track on that network, an IP address for the Admin Virtual Node is required.

Page 4-8 HDS Confidential: For distribution only to authorized parties.


Installation of Hitachi NAS Platform
Initial Setup Single Node Embedded SMU

Initial Setup Single Node Embedded SMU

[Figure: initial setup of a single node with embedded SMU, step 1: CLI setup of the Admin EVS IP 12.120.56.111 and gateway on eth0 (public), with eth1 on the private network.]

HDS Confidential: For distribution only to authorized parties. Page 4-9


Installation of Hitachi NAS Platform
Default Interface Settings for 3080 and 3090

Default Interface Settings for 3080 and 3090

 Please check; the defaults could change without warning

Default settings
Setting                               Default 1         Default 2
Root password                         nasadmin          nasadmin
Manager password                      nasadmin          nasadmin
Admin password                        nasadmin          nasadmin
Admin EVS public IP address (eth0)    192.168.31.xxx    192.168.4.xxx
Subnet mask                           255.255.255.0     255.255.255.0
Admin EVS private IP address (eth1)   192.0.2.2         192.0.2.2
Node private IP address (eth1)        192.0.2.200       192.0.2.200
Subnet mask                           255.255.255.0     255.255.255.0
Gateway                               192.168.31.254    192.168.4.1
Host name                             myhost            testhost
Domain                                mydomain.com      testdomain.com

The 4060, 4080, and 4100 models are not preconfigured with any default
configuration. Therefore the process for initial setup is somewhat different from the
3080 and 3090 models.

Page 4-10 HDS Confidential: For distribution only to authorized parties.


Installation of Hitachi NAS Platform
Single Node Initial Setup: Models 3080 and 3090

Single Node Initial Setup: Models 3080 and 3090

1. Connect serial null-modem cable
   a. 115,200bps, 8 data bits, 1 stop bit, no parity, no flow-control
2. Log in as manager, password nasadmin, which will bring up the BOS console
3. Use the evsipaddr command to list and update the public IP address
   a. example list:
      evsipaddr -l
   b. example update:
      evsipaddr -e 0 -u -i 12.120.56.111 -m 255.255.240.0 -p eth0
      or
      evsipaddr -e 0 -a -i 12.120.56.111 -m 255.255.240.0 -p eth0
      evsipaddr -e 0 -r -i xxx.xxx.xxx.xxx
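As a sketch of the difference between the two update variants above (flags as used in this document): -u updates the address in place, while -a adds the new address and -r then removes the old one:

evsipaddr -l                                                    # list current EVS IP addresses
evsipaddr -e 0 -u -i 12.120.56.111 -m 255.255.240.0 -p eth0    # -u: update admin EVS 0 in place
evsipaddr -e 0 -a -i 12.120.56.111 -m 255.255.240.0 -p eth0    # -a: or add the new address first
evsipaddr -e 0 -r -i xxx.xxx.xxx.xxx                           # -r: then remove the old address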

HDS Confidential: For distribution only to authorized parties. Page 4-11


Installation of Hitachi NAS Platform
Node Initial Setup: Models 4060, 4080, and 4100

Node Initial Setup: Models 4060, 4080, and 4100

1. The 4xx0 nodes only load Linux and will not continue to load the BALI console.
2. The power status LED flashes.
3. This state is indicated by a missing /etc/opt/mfb.ini network setup file.
4. The advantage is being able to set up and control network settings using one easy script and customer-specific settings.
5. IP address conflicts will be avoided by just following the plan.
6. See the following pages for the initial setup flow.

Page 4-12 HDS Confidential: For distribution only to authorized parties.


Installation of Hitachi NAS Platform
Node Initial Setup Model 4xx0 1 of 3

Node Initial Setup Model 4xx0 1 of 3


mercury login: root
Password: ♦♦♦♦♦♦♦♦ (nasadmin)
Last login: Tue May 14 10:54:23 UTC 2013 on ttyS0
Linux mercury 2.6.32-5-amd64 #1 SMP Sun May 6 04:00:17 UTC 2012 x86_64

WARNING: root access should be used only under instruction from your support
team. Modifying system settings or installed packages could adversely affect
the server.
root@mercury(bash):~# nas-preconfig

This script configures the server's basic network settings (when such settings
have not been set before).
Please provide the server's:
- IP address
- netmask
- gateway
- domain name
- host name

After this phase of setup has completed, further configuration may be carried out via web
browser.

Please enter the Admin Service Private (eth1) IP address Continue on next page > >

HDS Confidential: For distribution only to authorized parties. Page 4-13


Installation of Hitachi NAS Platform
Node Initial Setup Model 4xx0 2 of 3

Node Initial Setup Model 4xx0 2 of 3


< <<< continued

Please enter the Admin Service Private (eth1) IP address


192.0.2.45
Please enter the Admin Service Private (eth1) Netmask
255.255.255.0
Please enter the Optional Admin Service Public (eth0) IP address
12.120.56.111
Please enter the Admin Service Public (eth0) Netmask
255.255.240.0
Please enter the Optional Physical Node (eth1) IP address
192.0.2.41
Please enter the Physical Node (eth1) Netmask
255.255.255.0
Please enter the Gateway
12.120.56.254
Please enter the Domain name (without the host name)
hds.com
Please enter the Hostname (without the domain name)
Nas1

Continue on next page > >

Page 4-14 HDS Confidential: For distribution only to authorized parties.


Installation of Hitachi NAS Platform
Node Initial Setup Model 4xx0 3 of 3

Node Initial Setup Model 4xx0 3 of 3


< <<< continued

Admin Public (eth0): IP = 12.120.56.111 ; Netmask = 255.255.240.0


Admin Private (eth1): IP = 192.0.2.45 ; Netmask = 255.255.255.0
Physical Node (eth1): IP = 192.0.2.41 ; Netmask = 255.255.255.0
Gateway: 12.120.56.254
Domain: hds.com
Unit Hostname: nas1

Are the above settings correct? [y/n] y

Configuration written to /etc/opt/mfb.ini.


root@mercury(bash):~# reboot

Broadcast message from root@mercury (ttyS0) (Tue May 14 09:10:37 2013):

The system is going down for reboot NOW!


root@mercury(bash):~#
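After the reboot, the saved settings can be checked on the node; the file path is the one reported by the script (its exact contents are not shown here):

cat /etc/opt/mfb.ini    # inspect the network settings written by nas-preconfig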

HDS Confidential: For distribution only to authorized parties. Page 4-15


Installation of Hitachi NAS Platform
Single Node Initial Setup: License Keys

Single Node Initial Setup: License Keys

[Figure: step 2: license keys (CIFS, NFS, …) are added via the GUI, following the step 1 CLI setup of the Admin EVS IP 12.120.56.111 and gateway on eth0.]

The license keys for the single node are added.

Page 4-16 HDS Confidential: For distribution only to authorized parties.


Installation of Hitachi NAS Platform
Initial Node Setup: Hitachi NAS Platform GUI

Initial Node Setup: Hitachi NAS Platform GUI

 Add license key as needed

HDS Confidential: For distribution only to authorized parties. Page 4-17


Installation of Hitachi NAS Platform
Adding License Key

Adding License Key

Page 4-18 HDS Confidential: For distribution only to authorized parties.


Installation of Hitachi NAS Platform
Initial Setup: Hitachi NAS Platform Node GUI

Initial Setup: Hitachi NAS Platform Node GUI

 Finish the server setup by clicking Server Setup Wizard


 The Server Setup Wizard is a sequence of tasks that can be done
individually

HDS Confidential: For distribution only to authorized parties. Page 4-19


Installation of Hitachi NAS Platform
Server Setup Wizard

Server Setup Wizard

 It is recommended to delete the Storage Pools, EVSs, and File Systems created during the implementation test

Page 4-20 HDS Confidential: For distribution only to authorized parties.


Installation of Hitachi NAS Platform
Single Node Initial Setup: File Service EVS

Single Node Initial Setup: File Service EVS

[Figure: after step 1 (CLI setup) and step 2 (GUI license keys), a data EVS (EVS1, 213.1.15.22) is created alongside the Admin EVS 12.120.56.111 on the public network.]

After the process of initializing the node configuration, the administration process can be initiated.
Data EVSs can be created to offer file services to the clients connected via the data network.
In the above example only one EVS (EVS1) is created.
HDS Confidential: For distribution only to authorized parties. Page 4-21


Installation of Hitachi NAS Platform
Hitachi NAS Platform Management Console

Hitachi NAS Platform Management Console

Pay attention to the lack of a scroll function: the embedded SMU GUI has no scroll function, and only one server can be managed.

Page 4-22 HDS Confidential: For distribution only to authorized parties.


Installation of Hitachi NAS Platform
Clustering from A to Z

Clustering from A to Z
[Figure: clustered HNAS with external SMU: the SMU at 12.120.56.222 (public) and 192.0.2.40 (private); the node AVN at 12.120.56.111 (public, optional) and 192.0.2.45 (private).]

The SMU addresses are permanent, as there are no clustering considerations to worry about. One IP address is associated with the AVN: the eth1 private network address 192.0.2.45. The AVN IP address on eth0 is optional.

The tasks list from the white paper “Clustering from A – Z”


1. Planning and System Assurance Document (SAD)
2. Initial setup external SMU
3. Initial setup Node 1
a) Initial setup 4060/4080/4100 Node 1
b) or Initial setup 3080/3090 Node 1
c) or Initial setup 3100/3200 Node 1
4. Initial setup Node 2
a) Initial setup 4060/4080/4100 Node 2
b) or Initial setup 3080/3090 Node 2
c) or Initial setup 3100/3200 Node 2
5. Cabling
a) Cabling private and public network
b) Cabling storage
c) Cabling cluster interconnect
d) Cabling customer data network

HDS Confidential: For distribution only to authorized parties. Page 4-23


Installation of Hitachi NAS Platform
Clustering from A to Z

6. Manage node 1
7. Add license bundles, TB and cluster key to node 1
8. Promote node 1 as single node cluster
9. Manage node 2
10. Add cluster license key to node 2
11. Add node 2 into cluster

Page 4-24 HDS Confidential: For distribution only to authorized parties.


Installation of Hitachi NAS Platform
Initial Setup: First Node in a Cluster

Initial Setup: First Node in a Cluster

[Figure: step 1: CLI setup of the first node's Admin EVS IP 192.0.2.45 on eth1, with eth0 and gateway on the public network.]

HDS Confidential: For distribution only to authorized parties. Page 4-25


Installation of Hitachi NAS Platform
Cluster Initial Setup: Model 30x0 CLI First Node

Cluster Initial Setup: Model 30x0 CLI First Node

1. Connect serial null-modem cable
   a. 115,200bps, 8 data bits, 1 stop bit, no parity, no flow-control
2. Log in as manager, password nasadmin, which will bring up the BOS console
3. Use the evsipaddr command to list and update the public IP address
   a. example list:
      evsipaddr -l
   b. example update:
      evsipaddr -e 0 -u -i 192.0.2.45 -m 255.255.255.0 -p eth1
      or
      evsipaddr -e 0 -a -i 192.0.2.45 -m 255.255.255.0 -p eth1
      evsipaddr -e 0 -r -i xxx.xxx.xxx.xxx

For HNAS 4060, 4080, 4100 models refer to the 4 pages starting with page 13.

Page 4-26 HDS Confidential: For distribution only to authorized parties.


Installation of Hitachi NAS Platform
Initial Setup: External SMU

Initial Setup: External SMU

[Figure: step 2: CLI setup of the external SMU public IP address 12.120.56.222, after step 1 set the node's Admin EVS IP 192.0.2.45 on eth1.]

This part of the initial setup defines, among other parameters, the public network access of the SMU. The customer will need to supply the address on the public customer network that is intended to be used as the management interface address.
HDS Confidential: For distribution only to authorized parties. Page 4-27


Installation of Hitachi NAS Platform
Initial Setup: External SMU CLI

Initial Setup: External SMU CLI

1. Connect serial null-modem cable


• 115,200bps, 8 bits/ byte, 1 stop bit, no parity, no flow-control
2. Log in as root
• Default root password is nasadmin
• Run smu-unconfig to revert to factory defaults
• SMU reboots after the smu-unconfig process is complete
3. Run smu-config
• Login as root password nasadmin and run smu-config
• Follow CLI-based setup wizard to supply the SMU network configuration
• SMU reboots after the process is complete
4. Next step:
• Finish SMU setup using SMU wizard GUI

The serial cable is only intended to be used for the initial installation process. It is
strongly recommended to remove the serial cable after installation to avoid any
performance impact on the management plane.
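A minimal console sketch of the sequence above (prompts and wizard questions omitted; the commands and the default root password are as listed in the steps):

login as: root            # default password nasadmin
smu-unconfig              # revert to factory defaults; the SMU reboots
login as: root            # log back in after the reboot
smu-config                # CLI-based setup wizard for the SMU network settings
                          # the SMU reboots again when the wizard completes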

Page 4-28 HDS Confidential: For distribution only to authorized parties.


Installation of Hitachi NAS Platform
Initial Setup: SMU Wizard

Initial Setup: SMU Wizard

[Figure: step 3: the GUI SMU Wizard sets the SMU private IP address 192.0.2.40, following step 1 (node Admin EVS 192.0.2.45 on eth1) and step 2 (SMU public IP 12.120.56.222).]

During the SMU Wizard process, the private LAN address is given. It is recommended to use the default address range on the "Rack Network": 192.0.2.x. Escalation and dump analysis are easier when the addressing follows the default address ranges for the private network. The passwords, DNS and domain, SMTP host, time zone, and public NTP host access are defined during this process as well.
HDS Confidential: For distribution only to authorized parties. Page 4-29


Installation of Hitachi NAS Platform
Initial Setup: SMU GUI

Initial Setup: SMU GUI

 Point your browser to the public IP of the SMU


 Click SMU Setup Wizard and complete the SMU configuration
• SMU application will restart upon completion

Page 4-30 HDS Confidential: For distribution only to authorized parties.


Installation of Hitachi NAS Platform
Initial Setup: Managed Servers

Initial Setup: Managed Servers

[Figure: step 4: the Managed Servers GUI on the SMU (192.0.2.40 private, 12.120.56.222 public) is pointed at the node's Admin EVS 192.0.2.45.]

In the Managed Servers GUI, specify the IP address of the Admin EVS and the
UserID/Password for the node. This specification is for the SMU to make a
connection to the Admin EVS.

HDS Confidential: For distribution only to authorized parties. Page 4-31


Installation of Hitachi NAS Platform
Initial Setup: Hitachi NAS Platform Node GUI

Initial Setup: Hitachi NAS Platform Node GUI

 Log in to the SMU and click Managed Servers


• Click Add and follow the prompts to specify a new managed
Hitachi NAS server (Admin EVS)
• Once specified, the managed server will appear in the
Server Status Console

Page 4-32 HDS Confidential: For distribution only to authorized parties.


Installation of Hitachi NAS Platform
Cluster Initial Setup: License Keys

Cluster Initial Setup: License Keys

[Figure: step 5: license keys (CIFS, NFS, …, Cluster:1) are added to node 1 via the GUI, following steps 1 to 4.]

In step 5, the license keys for the primary node are added.

HDS Confidential: For distribution only to authorized parties. Page 4-33


Installation of Hitachi NAS Platform
Initial Setup: Hitachi NAS Platform Licenses

Initial Setup: Hitachi NAS Platform Licenses

 Add license key as needed

Page 4-34 HDS Confidential: For distribution only to authorized parties.


Installation of Hitachi NAS Platform
Adding License Key

Adding License Key

HDS Confidential: For distribution only to authorized parties. Page 4-35


Installation of Hitachi NAS Platform
Cluster Initial Setup: Enable Clustering

Cluster Initial Setup: Enable Clustering

[Figure: step 6: the GUI Cluster Wizard assigns node 1 its physical private IP address 192.0.2.41 on eth1, alongside the Admin EVS 192.0.2.45.]

The Cluster Wizard defines the physical IP address of the primary node, which the cluster software uses for heartbeat and Cluster Interconnect addressing. It also promotes the node to an Active-Active cluster and provides a cluster name. This process ends with a restart of the primary node.

Page 4-36 HDS Confidential: For distribution only to authorized parties.


Installation of Hitachi NAS Platform
Initial Setup: Promote Clustering

Initial Setup: Promote Clustering

1. Go to SMU
2. Under Server Settings, click Cluster Wizard
3. Enter cluster name and node IP Address
a. Refer to Lab Configuration Sheet

The quorum device would normally be the SMU that manages the node, but it could actually be any SMU containing quorums and addressable on the private rack network. As an example, the MetroCluster solution recommends using a quorum in a different location than the Primary SMU or Standby SMU.
Having this flexibility, the quorum could be located in a third location, different from the Primary/Secondary site.

HDS Confidential: For distribution only to authorized parties. Page 4-37


Installation of Hitachi NAS Platform
Promoted to a Single-Node Cluster

Promoted to a Single-Node Cluster

 Ready to add more nodes to the cluster

Page 4-38 HDS Confidential: For distribution only to authorized parties.


Installation of Hitachi NAS Platform
HNAS Clustered with External SMU

HNAS Clustered with External SMU


[Figure: two clustered nodes with external SMU: SMU at 12.120.56.222 (public) and 192.0.2.40 (private); AVN at 12.120.56.111 (public, optional) and 192.0.2.45 (private); node private addresses 192.0.2.41 and 192.0.2.42.]

One IP address is associated with the AVN: the eth1 private network address 192.0.2.45; it is transitory, as the AVN can migrate. One eth1 private network address per node (192.0.2.41 and 192.0.2.42) is permanently configured, so a node can be contacted even if BALI does not come up. The AVN IP address on eth0 is optional.

The IP address on the blue network is optional. If the customer has services or Hi-Track on that network, an IP address for the Admin Virtual Node is required on the blue network as well.

HDS Confidential: For distribution only to authorized parties. Page 4-39


Installation of Hitachi NAS Platform
Cluster Initial Setup: Second Node

Cluster Initial Setup: Second Node

[Figure: step 7: CLI setup of node 2's Admin EVS IP 192.0.2.46 on eth1, after steps 1 to 6 on node 1 and the SMU.]

Page 4-40 HDS Confidential: For distribution only to authorized parties.


Installation of Hitachi NAS Platform
Cluster Initial Setup: Models 30x0 CLI Second Node

Cluster Initial Setup: Models 30x0 CLI Second Node

1. Connect serial null-modem cable
   a. 115,200bps, 8 data bits, 1 stop bit, no parity, no flow-control
2. Log in as manager, password nasadmin, which will bring up the BOS console
3. Use the evsipaddr command to list and update the public IP address
   a. example list:
      evsipaddr -l
   b. example update:
      evsipaddr -e 0 -u -i 192.0.2.46 -m 255.255.255.0 -p eth1
      or
      evsipaddr -e 0 -a -i 192.0.2.46 -m 255.255.255.0 -p eth1
      evsipaddr -e 0 -r -i xxx.xxx.xxx.xxx

For HNAS 4060, 4080, 4100 models refer to the 4 pages starting with page 13.

HDS Confidential: For distribution only to authorized parties. Page 4-41


Installation of Hitachi NAS Platform
Initial Setup: Flow and IP Addressing

Initial Setup: Flow and IP Addressing

[Figure: step 8: node 2's Admin EVS 192.0.2.46 is added to the SMU's Managed Servers, alongside node 1.]

When single node 2 is going to join an existing cluster, this node also needs to be managed by the SMU in order to install the license key for clustering.
Page 4-42 HDS Confidential: For distribution only to authorized parties.


Installation of Hitachi NAS Platform
Initial Setup: Hitachi NAS Platform Node GUI

Initial Setup: Hitachi NAS Platform Node GUI

 Log in to the SMU and click Managed Servers.


• Click Add and follow the prompts to specify the new managed
NAS server (Admin EVS).
• Once specified, the managed server will appear in the Server
Status Console.

Select the Managed Server for Node 1 and add Node 2 to the list of managed servers
by specifying the IP Address of the admin EVS of Node 2.

HDS Confidential: For distribution only to authorized parties. Page 4-43


Installation of Hitachi NAS Platform
Initial Setup: License Key

Initial Setup: License Key

[Figure: step 9: the cluster license key (Cluster:1) is added to node 2 via the GUI.]

Select the admin EVS of Node 2 and choose Server Settings to add the license key to
Node 2.

Page 4-44 HDS Confidential: For distribution only to authorized parties.


Installation of Hitachi NAS Platform
Adding License Key

Adding License Key

Only one license is added to Node 2 (The cluster license: “MAX 1 Nodes”), since the
“protocols” are already licensed on the primary Node 1.

HDS Confidential: For distribution only to authorized parties. Page 4-45


Installation of Hitachi NAS Platform
Initial Setup: Join the Second Node

Initial Setup: Join the Second Node

[Figure: step 10: the GUI Cluster Configuration joins node 2 (private IP 192.0.2.42) to the cluster; node 1's cluster license becomes Cluster:1+1=2.]

Page 4-46 HDS Confidential: For distribution only to authorized parties.


Installation of Hitachi NAS Platform
Initial Setup: Add Single Node 2 to Clustered Node 1

Initial Setup: Add Single Node 2 to Clustered Node 1

1. Go to the SMU
2. In the Server Status Console window, scroll down and select Node 1 (Admin EVS)
3. Under Server Settings, click Cluster Configuration and select Add Cluster Node
   a. Enter the IP address for Node 2

HDS Confidential: For distribution only to authorized parties. Page 4-47


Installation of Hitachi NAS Platform
Two Node Cluster Configured

Two Node Cluster Configured

 After automatic reboot of Node 2, the node will join the cluster
defined in Node 1

Page 4-48 HDS Confidential: For distribution only to authorized parties.


Installation of Hitachi NAS Platform
Initial Setup: File Service EVS

Initial Setup: File Service EVS

[Diagram: the completed setup flow. Node 2 now hosts a data EVS (EVS1, 213.1.15.22) in addition to its cluster configuration; both nodes are licensed as Cluster:1+1=2.]

After the two-node cluster configuration has been initialized and established, the administration process can begin.
Data EVSs can be created on either node to offer file services to clients connected via the data network.
In the example above, only one data EVS (EVS1) is created, on Node 2.

HDS Confidential: For distribution only to authorized parties. Page 4-49


Installation of Hitachi NAS Platform
Module Summary

Module Summary

 In this module, you have learned to:


• Identify the physical and environmental specifications
• Understand the login accounts used on the different networks
• Recognize the individual steps in the hardware and software installation flow for the Hitachi NAS Platform as a single node
• List the individual steps in the hardware installation flow and setup procedures for the external SMU
• Perform the procedure to join an additional Hitachi NAS Platform node into the N-way cluster

Page 4-50 HDS Confidential: For distribution only to authorized parties.


Installation of Hitachi NAS Platform
Module Review

Module Review

1. Which brand of rail kit is mandatory?


2. License keys are electronically stored in which component?
3. Which components are not fitted into a node you receive from the
distribution center?
4. List the 3 criteria for a successful installation.
5. Is the external SMU initial setup done by CLI or GUI?
6. Is the node initial setup done by CLI or GUI?
7. How is initial setup initiated on HNAS 4xx0 models?

HDS Confidential: For distribution only to authorized parties. Page 4-51


Installation of Hitachi NAS Platform
Module Review

Page 4-52 HDS Confidential: For distribution only to authorized parties.


5. Ethernet and Fibre
Channel Networks
Module Objectives

 Upon completion of this module, you should be able to:


• List the Gigabit Ethernet (1GbE and 10GbE) network maximum cable
length
• Explain the private and public network configuration scenarios for both
platforms
• Differentiate between private Rack LAN and public User Data LAN
• Examine “The good, the bad and the ugly” back-end SAN configurations

HDS Confidential: For distribution only to authorized parties. Page 5-1


Ethernet and Fibre Channel Networks
GbE Cable Distances

GbE Cable Distances

Standard      Media type                      Core          Max distance   Models
1000Base-SX   Multimode, 850nm                62.5 micron   250m           2000, 3100 and 3200 only
1000Base-SX   Multimode, 850nm                50 micron     550m           2000, 3100 and 3200 only
1000Base-LX   Singlemode, 1300nm              9 micron      5km            2000, 3100 and 3200 only
1000Base-CX   Twinax, 2 pair, DB9             -             25m
1000Base-TX   UTP, 4 pair, RJ45, CAT5/CAT5E   -             100m
1000Base-ZX   Singlemode, 1550nm              9 micron      70km           2000, 3100, 3200, 3080 and 3090
              (not IEEE standard)

GbE cable distances

Page 5-2 HDS Confidential: For distribution only to authorized parties.


Ethernet and Fibre Channel Networks
HNAS 30x0 Cluster 10GbE Interface (XFI)

HNAS 30x0 Cluster 10GbE Interface (XFI)

10Gb protocol XFP options:
  XFP 300m — LC Multimode OM3
  XFP 10km — LC Singlemode
  XFP 40km — LC Singlemode

Check current specification documents; Cluster Interconnect and Data ports may differ.

"X" = 10
XFI is the standard interface for connecting 10 Gigabit Ethernet MAC devices to a
XFP interface.
As of the mid-year 2006, most 10GbE products use XAUI interface that has four
lanes running at 3.125Gbit/sec using 8B/10B encoding. XFI provides a single lane
running at 10.3125Gbit/sec with a 64B/66B encoding scheme. The XFP (10Gigabit
Small Form Factor Pluggable) used in models 3080 and 3090 is a hot-swappable,
protocol independent optical transceiver. It typically operates at 850nm, 1310nm, or
1550nm for 10GB/sec SONET/SDH, Fibre Channel, Gigabit Ethernet and other
applications including DWDM links.

HDS Confidential: For distribution only to authorized parties. Page 5-3


Ethernet and Fibre Channel Networks
Finisar Small Form Factor (SFP+)

Finisar Small Form Factor (SFP+)

10Gb protocol:              SFP+ 300m — LC Multimode OM3; SFP+ 10km — LC Singlemode
8Gb Fibre Channel protocol: SFP+ 300m — LC Multimode OM3

Check current specification documents; Cluster Interconnect and Data ports may differ.

Page 5-4 HDS Confidential: For distribution only to authorized parties.


Ethernet and Fibre Channel Networks
HNAS Models 4xx0 Use SFP+

HNAS Models 4xx0 Use SFP+

 The 10GbE and 8Gbps SFP+ modules are not interchangeable.

 2 x SFP+ 10GbE Cluster Ports
 4 x SFP+ 10GbE Network Ports
• FTLX8571D3BCV
• X = 10 (10GbE)

 4 x SFP+ 8Gbps FC Storage Ports
• FTLF8528P3BNV
• F = FC (Fibre Channel)

HDS Confidential: For distribution only to authorized parties. Page 5-5


Ethernet and Fibre Channel Networks
Cable Distance and Optical Media Type

Cable Distance and Optical Media Type

                          OM1 (MMF, 62.5µ)   OM2 (MMF, 50µ)   OM3 (MMF, 50µ)   OM4 (MMF, 50µ)   OS1 (SMF, 9µ)
Application               850nm   1300nm     850nm   1300nm   850nm   1300nm   850nm   1300nm   1310nm   1550nm
ATM 622 Mbps              300m    500m       300m    500m     300m    500m     300m    500m     2000m    -
Fibre Channel 1062 Mbps   300m    -          500m    -        500m    -        500m    -        2000m    -
FDDI                      -       2000m      -       2000m    -       2000m    -       2000m    -        -
100Base-FX Ethernet       -       2000m      -       2000m    -       2000m    -       2000m    -        -
1000Base-SX Ethernet      275m    -          550m    -        550m    -        550m    -        -        -
1000Base-LX Ethernet      -       550m       -       >550m    -       >550m    -       >550m    5000m    -
10GBase-LX4 Ethernet      -       300m       -       300m     -       300m     -       -        10km     -
10GBase-SR/SW Ethernet    33m     -          82m     -        300m    -        400m    -        10km     40km

Page 5-6 HDS Confidential: For distribution only to authorized parties.


Ethernet and Fibre Channel Networks
HNAS 4xx0 SFP+ Copper TwinAx Cable Assembly

HNAS 4xx0 SFP+ Copper TwinAx Cable Assembly

 Price band $75 to $100


 Only available for the 4xx0 models

Brand   Length (m)   Part number
Cisco   1            SFP-H10GB-CU1M
Cisco   3            SFP-H10GB-CU3M
Cisco   5            SFP-H10GB-CU5M
Molex   7            747524701

HDS Confidential: For distribution only to authorized parties. Page 5-7


Ethernet and Fibre Channel Networks
Cable Distance and Copper Media Type

Cable Distance and Copper Media Type

Protocol       Connector (media)   Cable               Category            Power     Distance
1000Base-TX    RJ45                4-pair UTP copper   CAT5 or CAT5E       -         100m
10GBase-T      RJ45                10GBase-T copper    CAT6, CAT6A, CAT7   4–6W      100m
10GBase-CX1    SFP+ CU             copper Twinax       -                   1–1.5W    10m

Page 5-8 HDS Confidential: For distribution only to authorized parties.


Ethernet and Fibre Channel Networks
NAS Platform Models 3080 and 3090 Networks

NAS Platform Models 3080 and 3090 Networks

[Block diagram: HNAS 3080/3090 chassis. The Mercury FPGA Board (MFB) carries the Network Interface (NI, 2 x 3GB memory), the Data Movement stage (TFL), the SiliconFS™ file system stage (WFS) with 10GB metadata cache and 2 x 2GB NVRAM, and the Disk Interface (DI) with 4GB sector cache and FC interface (FCI); the stages are linked by Fastpath. External ports: 2 x 10GbE cluster interconnect, 2 x 10GbE and 6 x GbE data interfaces, and 4 x FC. The Mercury Motherboard (MMB: Intel Core 2 Duo E8400 3.0GHz, 8GB memory) runs BALI and the embedded SMU and provides the GbE management ports eth0 and eth1; the MBI connects the MFB to the MMB.]

HDS Confidential: For distribution only to authorized parties. Page 5-9


Ethernet and Fibre Channel Networks
NAS Platform Models 4060, 4080, and 4100 Networks

NAS Platform Models 4060, 4080, and 4100 Networks

[Block diagram: HNAS 4060/4080/4100 chassis. The Main FPGA Board (MFB2) carries the Network Interface (NI, 2 x 4GB memory), the Data Movement stage (TFL), the SiliconFS™ file system stage (WFS) with 10GB metadata cache, 4GB NVRAM and 8GB memory, and the Disk Interface (PDI) with 4GB sector cache and FC interface (FCI, QE8); the stages are linked by Fastpath. External ports: 2 x 10GbE cluster interconnect, 4 x 10GbE data interfaces, and 4 x FC. The Main Motherboard (MMB: Intel Xeon Quad Core, 16GB memory) runs BALI and the SMU and provides the GbE management ports eth0 and eth1; the MBI connects the MFB2 to the MMB.]

Page 5-10 HDS Confidential: For distribution only to authorized parties.


Ethernet and Fibre Channel Networks
Hitachi NAS 30x0 Network and Embedded SMU

Hitachi NAS 30x0 Network and Embedded SMU

[Diagram: HNAS 30x0 networks with embedded SMU — external IP data network connection; external IP network connection (customer facing); private management network on the internal switch.]

HDS Confidential: For distribution only to authorized parties. Page 5-11


Ethernet and Fibre Channel Networks
Hitachi NAS 4xx0 Network and External SMU

Hitachi NAS 4xx0 Network and External SMU

[Diagram: HNAS 4xx0 networks with external SMU — external IP data network connection; external IP network connection (customer facing); private management network (internal), with the SMU attached to the management networks.]

Page 5-12 HDS Confidential: For distribution only to authorized parties.


Ethernet and Fibre Channel Networks
Hitachi NAS 4xx0 Network and Clustering

Hitachi NAS 4xx0 Network and Clustering

[Diagram: HNAS 4xx0 two-node cluster — cluster interconnection links between the nodes; external IP data network connection; external IP network connection (customer facing); private management network (internal) with the external SMU.]

HDS Confidential: For distribution only to authorized parties. Page 5-13


Ethernet and Fibre Channel Networks
Private and Public Management Network Embedded SMU 30x0

Private and Public Management Network Embedded SMU 30x0

[Diagram: HNAS 30x0 with embedded SMU — data (public) network, private management network, and public management network.]

Page 5-14 HDS Confidential: For distribution only to authorized parties.


Ethernet and Fibre Channel Networks
Private and Public Management Network External SMU 30x0 Cluster

Private and Public Management Network External SMU 30x0


[Diagram: 30x0 cluster with external SMU — data (public) network, private management network (with the SMU), and public management network.]

HDS Confidential: For distribution only to authorized parties. Page 5-15


Ethernet and Fibre Channel Networks
Private and Public Management Network with SMU Managed Legacy Storage

Private and Public Management Network with SMU Managed


Legacy Storage

[Diagram: HNAS 30x0 with embedded SMU managing legacy storage — data (public) network, private management network, and public management network; the legacy storage is reachable over either management network.]
Legacy NetApp/LSI storage (also known as BlueArc storage) can be managed by either the embedded or the external SMU. The management interface can be either the private (red) or the customer-facing (blue) management network. In this scenario, using an embedded SMU and a single-node configuration, the 5-port switch on the 30x0 can be used as the private management switch.

Page 5-16 HDS Confidential: For distribution only to authorized parties.


Ethernet and Fibre Channel Networks
EVS Connectivity in a Cluster

EVS Connectivity in a Cluster

[Diagram: EVS connectivity in a two-node cluster — data EVSs EVS1-1 (192.168.3.81), EVS1-2 (10.16.16.42) and EVS1-3 (10.16.16.34) served over aggregations ag1 and ag2 on both nodes; admin EVS0 (192.0.2.15); Node 1 (192.0.2.11) and Node 2 (192.0.2.12) on the private management network (192.168.3.39); public management at 172.145.2.14.]

HDS Confidential: For distribution only to authorized parties. Page 5-17


Ethernet and Fibre Channel Networks
IP Addressing and EVS

IP Addressing and EVS

[Diagram: IP address assignment — private management network, admin EVS, and data EVSs on Node 1 and Node 2.]

The diagram displays the address assignment for the management and public (data) LANs. Pay attention to the address 192.0.2.25, which is an EVS used only for administration purposes. Like the other EVS addresses, it can reside on either physical Node 1 or Node 2. The addresses 192.0.2.21 and 192.0.2.22 are tightly coupled to the physical nodes and cannot move. The addresses 10.67.64.15 and 10.67.68.169 are the public addresses of the internal admin EVS.

Page 5-18 HDS Confidential: For distribution only to authorized parties.


Ethernet and Fibre Channel Networks
Aggregation Configuration Screen Models 30x0

Aggregation Configuration Screen Models 30x0


ge1 to ge6 = 1GbE,


tg1 to tg2 = 10GbE

Can aggregate file serving ports (Data network)


 Up to 8 aggregations
 But cannot mix 1G and 10G ports in an aggregate
 Direct traffic to specific ports by giving aggregations the appropriate IP address
An aggregation is constructed by ticking the physical ge or tg interface numbers that should belong to that aggregate. LACP is selected per aggregate; if it is not selected, the aggregate is configured as static instead.
The Hitachi NAS Platform round-robin algorithm is not recommended due to the high risk of out-of-order frame delivery.

HDS Confidential: For distribution only to authorized parties. Page 5-19


Ethernet and Fibre Channel Networks
Aggregation Configuration Screen Models 4xx0

Aggregation Configuration Screen Models 4xx0

tg1 to tg4 = 10GbE

Page 5-20 HDS Confidential: For distribution only to authorized parties.


Ethernet and Fibre Channel Networks
LACP Protocol Usage

LACP Protocol Usage

A-A

ag1 = tg2 + tg4

Switch port redundancy

A-A P-P

ag2 = tg1 + tg2 + ge3 + ge4

Port and switch redundancy

LACP is a negotiated protocol that uses "Actor" and "Partner" link entities. The Partner takes cues from the Actor when the Actor decides to bring a link up.
In a single-switch configuration, there is no functional difference between LACP and static aggregation. In static aggregation, both parties bring up the previously defined aggregation link members unconditionally.
Where LACP can be utilized to its fullest is in a link/switch failover situation. In this scenario, one would create a *single* aggregation on the HNAS server side and split it between two switches (for example, four links going to one switch and two links to the other). Since the Actor can only bring up a logical link (which can consist of a number of physical links) with one Partner, only one switch will be active at a time. In a 4+2 scenario, the switch with more links will be favored. In a symmetrical split (for example, 3+3), either switch can be chosen as the LACP Partner.
A static aggregation link cannot be split between switches.
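As a schematic sketch of the 4+2 failover scenario described above (the interface and switch names are illustrative only), a single six-port aggregation on a 30x0 could be split across two switches:

ag1 = ge1 + ge2 + ge3 + ge4 + ge5 + ge6   (one LACP aggregation on the HNAS side)
   ge1–ge4 → Switch A (four links; favored as the LACP Partner)
   ge5–ge6 → Switch B (two links; standby)

Only one switch at a time is active as the LACP Partner; if Switch A fails, the same aggregation comes up on Switch B.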

HDS Confidential: For distribution only to authorized parties. Page 5-21


Ethernet and Fibre Channel Networks
NTP and Management Network

NTP and Management Network

[Diagram: NTP and the management networks — an NTP server on the public network provides time synchronization to the SMU; the SMU in turn synchronizes time for the HNAS nodes over the private management network.]

Page 5-22 HDS Confidential: For distribution only to authorized parties.


Ethernet and Fibre Channel Networks
Fibre Channel Connectivity

Fibre Channel Connectivity

Fibre Channel
Connectivity

HDS Confidential: For distribution only to authorized parties. Page 5-23


Ethernet and Fibre Channel Networks
Storage Considerations: Platform Differences

Storage Considerations: Platform Differences

 For proper configuration of a Hitachi NAS 3100 or 3200 node cluster, the FC host port configurations need to be identical. In other words, port 1 on all cluster nodes needs to see the same logical units (LUNs) on the same FC port.
 For proper configuration of a Hitachi NAS 30x0 or 4xx0 node cluster, the FC host port configurations need to be identical. In other words, port 1 on all cluster nodes needs to see the same logical units (LUNs) on the same FC controller/cluster.
 See the notes and following slides for details.

The AMS family and HUS 100 have the concept of controllers: controller 0 and controller 1.
The Enterprise family up to VSP does not have controllers, so the left digit of the port ID is used as a virtual controller number. Channel port 1A will belong to virtual controller 1, 3A to controller 3, and channel port 8C will belong to virtual controller 8.
HUS VM does not have controllers, but the cluster ID is interpreted from the SCSI inquiry command output. Therefore, channel port 1A will belong to cluster 1, 3A to cluster 1, and channel port 8C will belong to cluster 2.
The examples on the following pages use the scsi-racks command to create the output.

Page 5-24 HDS Confidential: For distribution only to authorized parties.


Ethernet and Fibre Channel Networks
AMS200, 500, 1000, 2000 and HUS

AMS200, 500, 1000, 2000 and HUS

HDS Confidential: For distribution only to authorized parties. Page 5-25


Ethernet and Fibre Channel Networks
Enterprise Including VSP (Not HUS VM)

Enterprise Including VSP (Not HUS VM)

= 7A

= 8A

Page 5-26 HDS Confidential: For distribution only to authorized parties.


Ethernet and Fibre Channel Networks
Hitachi Unified Storage VM

Hitachi Unified Storage VM

HDS Confidential: For distribution only to authorized parties. Page 5-27


Ethernet and Fibre Channel Networks
Fibre Channel Minimum Configuration for 2-Node 2200 Cluster

Fibre Channel Minimum Configuration for 2-Node 2200 Cluster

[Diagram: minimum FC configuration for a 2-node 2200 cluster. Each node exposes FC ports 1–4 on two ASICs; Fabric 1 (switch ports 1, 2, 3) and Fabric 2 (switch ports 6, 8, 9) connect the nodes to an Adaptable Modular Storage system with ports 0A/0B (Ctl-0, WWNs 50060E800043050/051) and 1A/1B (Ctl-1, WWNs 50060E800043052/053); LUNs 0–3 alternate between owning controllers Ctl-0 and Ctl-1. Path priority to LUNs on the AMS is set using odd or even paths.]

This is the minimum configuration with a two-way clustered connection. Numbers in the
fabric indicate the port numbers on the FC Switch.
Preferred path configuration:
System Drive 0, 2, 4……. Node 1 port 1 to interface 0A LUN 0, 2, 4…….
System Drive 1, 3, 5……. Node 1 port 3 to interface 1A LUN 1, 3, 5…….

System Drive 0, 2, 4……. Node 2 port 1 to interface 0A LUN 0, 2, 4…….


System Drive 1, 3, 5……. Node 2 port 3 to interface 1A LUN 1, 3, 5…….

Zoning Fabric 1:
Zone 1: port 1 and 2
Zone 2: port 3 and 2

Zoning Fabric 2:
Zone 1: port 6 and 9
Zone 2: port 8 and 9

Page 5-28 HDS Confidential: For distribution only to authorized parties.


Ethernet and Fibre Channel Networks
Fibre Channel Configuration for 2-Node 3100 Cluster and Enterprise Storage

Fibre Channel Configuration for 2-Node 3100 Cluster and


Enterprise Storage

[Diagram: FC configuration for a 2-node 3100 cluster and enterprise storage. Node ports 1 and 3 connect via Fabric 1 (switch ports 1, 2, 3, 4) and ports 6 and 8 via Fabric 2 (switch ports 6, 7, 9) to storage ports 3A, 5B, 4A and 6B; LUNs 0–3 are mapped to all ports. Map all LUNs to all ports on the Universal Storage Platform; preferred path can still be used from the NAS node to fine-tune performance.]

A key difference in the chip-to-chip failover from the node-to-node failover is that
for a chip-to-chip failover, the EVS will stay up on the node. What this means is that
the interaction with the clients is different as follows:
Chip-to-Chip Behavior
One major benefit to the chip-to-chip failover is that unaffected file systems will
continue to serve data, and will not be interrupted.
During System Drive failover to the other chip, the EVS will continue to interact
with connected clients. Clients attempting access to a file system that is using a
System Drive that is moving to the other chip will receive I/O errors.
Once the failover is complete, the I/O errors will stop and normal service will
continue.
Node-to-Node Behavior
For a node-to-node failover, the EVS (and all file systems) will completely disappear
for some period of time. No responses (I/O errors) will be returned. The EVS will
reappear on the other node and normal service can continue.

HDS Confidential: For distribution only to authorized parties. Page 5-29


Ethernet and Fibre Channel Networks
Fibre Channel Configuration for 2-Node 3100 Cluster and Enterprise Storage

Thus, in a chip-to-chip failover the clients will maintain connectivity and will get
I/O errors, while in node-to-node failover the clients will lose connectivity and
might not receive errors. The clients should be prepared for both possibilities.
The best way to maintain optimum connectivity and availability while minimizing
potential system impact is to properly configure the system to avoid chip-to-chip
failover unless there is a specific combination of multiple failures (paths for LUNs
must have failed to a particular chip but remain available to the other chip).

Page 5-30 HDS Confidential: For distribution only to authorized parties.


Ethernet and Fibre Channel Networks
High-performance NAS Platform 3200 Connectivity

High-performance NAS Platform 3200 Connectivity

[Diagram: 3200 connectivity — each node exposes FC ports 1–8 on two ASICs, connected via Fabric 1 and Fabric 2 to storage ports 3A, 5B, 4A and 6B; LUNs 0–3 are mapped to all ports. Map all LUNs to all ports on the storage system; preferred path can still be used from the High-performance NAS to fine-tune performance.]

If connectivity to a System Drive is unavailable, the High-performance NAS


Platform node will re-establish connectivity in the following order:
1. Move connectivity to that System Drive to another port (any of the other three)
on the same Tachyon chip.
2. Move connectivity to that System Drive to the other Tachyon chip (if available).
3. If connectivity is lost for that System Drive via both Tachyon chips on that node,
then one of the following three options will happen:
a) Maintain EVS connectivity, but fail only the affected file systems
If the EVS contains several file systems, and at least one file system (that is, all
System Drives associated with that file system) is still able to be accessed
through the primary High-performance NAS Platform server, then the EVS
will stay on the primary system. Access will continue to the good file systems,
and only the file systems without connectivity will fail. This is done to
maintain uninterrupted access to the good file systems.

HDS Confidential: For distribution only to authorized parties. Page 5-31


Ethernet and Fibre Channel Networks
High-performance NAS Platform 3200 Connectivity

b) Fail over the EVS to the alternate node


If all file systems fail within the EVS (lost connectivity), the primary node will
then check with other nodes to determine if any node has connectivity. If
another node has connectivity, the EVS will then failover to the other node.
c) Fail the EVS
If all file systems fail within the EVS (lost connectivity), the primary node will
then check with the other nodes to determine if any node has connectivity. If
no nodes have connectivity, the EVS will stay on the original node, but the
file systems will fail.
Note that for a single-node system, failover to other nodes in “b)” above is not an option.

Page 5-32 HDS Confidential: For distribution only to authorized parties.


Ethernet and Fibre Channel Networks
Fibre Channel Switchless Configuration for 2-Node 3100 or 3200 Cluster

Fibre Channel Switchless Configuration for 2-Node 3100 or


3200 Cluster

[Diagram: a switchless (direct-attached) configuration from a 2-node 3100 or 3200 cluster to Hitachi Unified Storage (HUS) ports 0E, 0F, 1E and 1F — marked "Not supported". LUNs 0–3 alternate between owning controllers Ctl-0 and Ctl-1.]

Direct Attached Storage (DAS) is not supported for Hitachi High-performance NAS
models 2100, 2200, 3100, and 3200. It is only supported on Hitachi NAS Platform
models 30x0 and 4xx0.

HDS Confidential: For distribution only to authorized parties. Page 5-33


Ethernet and Fibre Channel Networks
Fibre Channel Switchless Configuration for Single 3100 or 3200 Node

Fibre Channel Switchless Configuration for Single 3100 or 3200


Node

[Diagram: a single 3100 or 3200 node direct-attached to Hitachi Unified Storage (HUS) ports 0A, 0C, 1A and 1C; LUNs 0–3 alternate between owning controllers Ctl-0 and Ctl-1.]

DAS is supported on Hitachi High-performance NAS nodes in a single node


configuration. In a single node configuration there is no issue with seeing the same
storage image on all nodes.

Page 5-34 HDS Confidential: For distribution only to authorized parties.


Ethernet and Fibre Channel Networks
Fibre Channel Fabric Configuration for 2-Node Cluster HNAS 4xx0

Fibre Channel Fabric Configuration for 2-Node Cluster HNAS


4xx0
[Diagram: recommended FC fabric configuration for a 2-node HNAS 4xx0 cluster. Node ports 1 and 3 connect via Fabric 1 (switch ports 1, 2, 3, 4) and Fabric 2 (switch ports 6, 7, 8, 9) to Hitachi Unified Storage (HUS) ports 0E and 0F (Ctl-0) and 1E and 1F (Ctl-1); LUNs 0–3 alternate between owning controllers.]

This is the recommended configuration with a two-way clustered connection.


Numbers in the fabric indicate the port numbers on the FC Switch.
Preferred path configuration
System Drive 0, 4, 8……. Node 1 port 1 to interface 0E LUN 0, 4, 8…….
System Drive 1, 5, 9……. Node 1 port 3 to interface 1E LUN 1, 5, 9…….
System Drive 2, 6, A……. Node 1 port 3 to interface 0F LUN 2, 6, A…….
System Drive 3, 7, B……. Node 1 port 1 to interface 1F LUN 3, 7, B…….
System Drive 0, 4, 8……. Node 2 port 1 to interface 0E LUN 0, 4, 8…….
System Drive 1, 5, 9……. Node 2 port 3 to interface 1E LUN 1, 5, 9…….
System Drive 2, 6, A……. Node 2 port 3 to interface 0F LUN 2, 6, A…….
System Drive 3, 7, B……. Node 2 port 1 to interface 1F LUN 3, 7, B

HDS Confidential: For distribution only to authorized parties. Page 5-35


Ethernet and Fibre Channel Networks
Fibre Channel Fabric Configuration for 2-Node Cluster HNAS 4xx0

Zoning Fabric 1:
Zone 1: port 1, 2 and 4
Zone 2: port 3, 2 and 4
Zoning Fabric 2:
Zone 1: port 6, 7 and 9
Zone 2: port 8, 7 and 9

Page 5-36 HDS Confidential: For distribution only to authorized parties.


Ethernet and Fibre Channel Networks
Fibre Channel Fabric Configuration for 2-Node Cluster HNAS 4xx0

[Diagram: the same recommended FC fabric configuration for a 2-node HNAS 4xx0 cluster as on the previous page.]

This is the recommended configuration with a two-way clustered connection. Numbers in


the fabric indicate the port numbers on the FC Switch.
Preferred path configuration
System Drive 0, 4, 8……. Node 1 port 1 to interface 0E LUN 0, 4, 8…….
System Drive 1, 5, 9……. Node 1 port 3 to interface 1E LUN 1, 5, 9…….
System Drive 2, 6, A……. Node 1 port 3 to interface 0F LUN 2, 6, A…….
System Drive 3, 7, B……. Node 1 port 1 to interface 1F LUN 3, 7, B…….
System Drive 0, 4, 8……. Node 2 port 1 to interface 0E LUN 0, 4, 8…….
System Drive 1, 5, 9……. Node 2 port 3 to interface 1E LUN 1, 5, 9…….
System Drive 2, 6, A……. Node 2 port 3 to interface 0F LUN 2, 6, A…….
System Drive 3, 7, B……. Node 2 port 1 to interface 1F LUN 3, 7, B
Zoning Fabric 1:
Zone 1: port 1, 2 and 4
Zone 2: port 3, 2 and 4
Zoning Fabric 2:
Zone 1: port 6, 7 and 9
Zone 2: port 8, 7 and 9

HDS Confidential: For distribution only to authorized parties. Page 5-37


Ethernet and Fibre Channel Networks
Fibre Channel Best Practice Configuration for 2-Node Cluster Using Secure Storage Domains

Fibre Channel Best Practice Configuration for 2-Node Cluster


Using Secure Storage Domains
[Diagram: best-practice FC configuration for a 2-node cluster using secure storage domains on HUS VM ports 3A, 5B, 4A and 6B, with LUNs 0–63 split across the domains. Preferred path and secure storage domains can be used to fine-tune performance. Although more than two paths per LUN are supported, engineering recommends only two paths per LUN.]

The HNAS development team recommends keeping the number of paths as low as
possible, which means two paths per LUN.
To satisfy this recommendation and keep the default setting, the secure storage
domains could be arranged as above.

Page 5-38 HDS Confidential: For distribution only to authorized parties.


Ethernet and Fibre Channel Networks
Fibre Channel Recommended Configuration for 2-Node Cluster Enterprise 4xx0

Fibre Channel Recommended Configuration for 2-Node Cluster


Enterprise 4xx0
[Diagram: recommended FC configuration for a 2-node 4xx0 cluster with HUS VM. Node ports 1 and 3 connect via Fabric 1 and Fabric 2 to storage ports 3A, 1B, 4A and 2C; LUNs 0–3 are mapped to all ports. Preferred path can still be used from the NAS node to fine-tune performance.]

For proper configuration of Hitachi NAS Platform models 30x0 and 4xx0 node
clusters, the FC port configurations need to be identical. In other words, port 1 on all
cluster nodes needs to see the same logical units (LUNs) on the same disk system
controller port or controller.
Try to follow one Hport at a time on each node:
Hport 1 on node 1 sees LUN 0 over 3A and 4A, which are virtual controllers 3 and 4
Hport 1 on node 2 sees LUN 0 over 3A and 4A, which are virtual controllers 3 and 4
Then try the next Hport on each node:
Hport 3 on node 1 sees LUN 0 over 1B and 2C, which are virtual controllers 1 and 2
Hport 3 on node 2 sees LUN 0 over 1B and 2C, which are virtual controllers 1 and 2

HDS Confidential: For distribution only to authorized parties. Page 5-39


Ethernet and Fibre Channel Networks
Fibre Channel Configuration for 2-Node Cluster Enterprise 4xx0

Fibre Channel Configuration for 2-Node Cluster Enterprise 4xx0


[Diagram: FC configuration for a 2-node 4xx0 cluster with HUS VM, marked "BAD!!" — the fabrics connect the nodes to storage ports 3A, 1B, 4A and 2C, but asymmetrically (see the notes below). All LUNs are mapped to all ports; preferred path can still be used from the NAS node to fine-tune performance.]

For proper configuration of Hitachi NAS Platform models 30x0 and 4xx0 node
clusters, the FC port configurations need to be identical. In other words, port 1 on all
cluster nodes needs to see the same logical units (LUNs) on the same disk system
controller port or controller.
Try to follow one Hport at a time on each node:
Hport 1 on node 1 sees LUN 0 over 3A, which is virtual controller 3
Hport 1 on node 2 sees LUN 0 over 4A, which is virtual controller 4
Then try the next Hport on each node:
Hport 3 on node 1 sees LUN 0 over 1B, which is virtual controller 1
Hport 3 on node 2 sees LUN 0 over 2C, which is virtual controller 2
The requirement of seeing the storage in the same way from both nodes is not fulfilled, and EVS migration will be affected.

Page 5-40 HDS Confidential: For distribution only to authorized parties.


Ethernet and Fibre Channel Networks
Fibre Channel Switch-less Configuration for 2-Node Cluster Modular 30x0

Fibre Channel Switch-less Configuration for 2-Node Cluster


Modular 30x0

[Diagram: switchless FC configuration for a 2-node modular 30x0 cluster — each node direct-attaches to Hitachi Unified Storage (HUS) ports 0A, 0C, 1A and 1C; LUNs 0–3 alternate between owning controllers Ctl-0 and Ctl-1.]

For proper configuration of Hitachi NAS Platform models 30x0 and 4xx0 node clusters, the FC port configurations need to be identical. In other words, port 1 on all cluster nodes needs to see the same logical units (LUNs) on the same disk system controller.
Try to follow one Hport at a time on each node:
Hport 1 on node 1 sees LUN 0 over 0A, which is controller 0
Hport 1 on node 2 sees LUN 0 over 0C, which is controller 0
Then try the next Hport on each node:
Hport 3 on node 1 sees LUN 0 over 1A, which is controller 1
Hport 3 on node 2 sees LUN 0 over 1C, which is controller 1

 Host ports must be in loop mode: fc-link-type -t nl (see the sketch after this list)


 Each RAID controller must be connected to both HNAS servers.
 Have at most two storage arrays in a switchless cluster.
 Preferred paths (if any) should be set using the host port only.
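As a minimal sketch (assuming that fc-link-type without arguments displays the current setting — treat that as an assumption and check the command reference), the loop-mode check and change might look like:
   fc-link-type          (display the current host port link type)
   fc-link-type -t nl    (set the host ports to NL, that is, loop mode)
Run the commands on each node so that both cluster nodes use the same link type.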

HDS Confidential: For distribution only to authorized parties. Page 5-41


Ethernet and Fibre Channel Networks
Fibre Channel Switch-less Configuration for 2-Node Cluster Modular 30x0

[Diagram: a switchless FC configuration for a 2-node 30x0 cluster using the same HUS ports 0A, 0C, 1A and 1C, but cross-cabled so that the nodes see the LUNs over different controllers (see the notes below).]

For proper configuration of Hitachi NAS Platform models 30x0 and 4xx0 node clusters, the FC port configurations need to be identical. In other words, port 1 on all cluster nodes needs to see the same logical units (LUNs) on the same disk system controller.
Try to follow one Hport at a time on each node:
Hport 1 on node 1 sees LUN 0 over 1A, which is controller 1
Hport 1 on node 2 sees LUN 0 over 0C, which is controller 0
Then try the next Hport on each node:
Hport 3 on node 1 sees LUN 0 over 0A, which is controller 0
Hport 3 on node 2 sees LUN 0 over 1C, which is controller 1
The requirement of seeing the storage in the same way from both nodes is not fulfilled, and EVS migration will be affected.

Page 5-42 HDS Confidential: For distribution only to authorized parties.


Ethernet and Fibre Channel Networks
Fibre Channel Switch-less Configuration for 2-Node Cluster Enterprise 4xx0

Fibre Channel Switch-less Configuration for 2-Node Cluster


Enterprise 4xx0
[Diagram: switchless FC configuration for a 2-node 4xx0 cluster with enterprise storage — the nodes direct-attach to Virtual Storage Platform ports 1A, 1B, 6A and 6B (port numbers are just examples); LUNs 0–3 are mapped to all ports, and preferred path can still be used from the NAS node to fine-tune performance.]

 High-performance NAS Platform models 3100 and 3200 do not support Direct Attached
Storage (DAS) in a two node cluster configuration, only as a single node.
 Hitachi NAS Platform models 30x0 and 4xx0 support single node and two node
clustered configurations using DAS connectivity.
 A maximum of two storage systems can be connected using DAS as backend
connectivity.
 You can connect storage using direct FC connections, or an FC switch; however, do not
use both connection types in the same system configuration.
 The Hitachi NAS Platform in a switchless configuration using the Hitachi Enterprise Storage systems (9900V, USP, and USP-V) introduces cabling restrictions when connected using direct FC connections.
 The Hitachi NAS Platform treats the first character (the digit) of the port number as a virtual controller, with a limit of two controllers maximum.
 Therefore, connections need to be grouped into only two controller groups, and each controller group must be visible from both nodes and switches.
(For a direct-connect example: connect node 1 to ports 1A and 6A, and node 2 to ports 1B and 6B.)

HDS Confidential: For distribution only to authorized parties. Page 5-43


Ethernet and Fibre Channel Networks
Fibre Channel Switch-less 2-Node Cluster Configuration 30x0 and NetApp 2680

Fibre Channel Switch-less 2-Node Cluster Configuration 30x0


and NetApp 2680
[Diagram: switchless FC configuration for a 2-node 30x0 cluster and NetApp RS12C/RS24C storage — each node direct-attaches to FC ports 3 and 4 on controllers Ctl A and Ctl B; LUNs 0–3 are presented behind both controllers.]

For proper configuration of Hitachi NAS Platform models 30x0 and 4xx0 node
clusters, the FC port configurations need to be identical. In other words, port 1 on all
cluster nodes needs to see the same logical units (LUNs) on the same disk system
controller.
Try to follow one Hport at a time on each node:
Hport 1 on node 1 sees LUN 0 over FC port 3 which is controller A
Hport 1 on node 2 sees LUN 0 over FC port 4 which is controller A
Then try the next Hport on each node:
Hport 3 on node 1 sees LUN 0 over FC port 3 which is controller B
Hport 3 on node 2 sees LUN 0 over FC port 4 which is controller B

Page 5-44 HDS Confidential: For distribution only to authorized parties.


Ethernet and Fibre Channel Networks
Fibre Channel Switch-less Configuration for 2-Node Cluster 4xx0 Enterprise

Fibre Channel Switch-less Configuration for 2-Node Cluster


4xx0 Enterprise
[Diagram: a switchless FC configuration for a 2-node 4xx0 cluster with enterprise storage, marked "BAD!!" — the nodes direct-attach to Virtual Storage Platform ports 5D, 7B, 6D and 8B (port numbers are just examples). All LUNs are mapped to all ports, and preferred path can still be used to fine-tune performance, but the cabling violates the virtual controller rule (see the notes below).]

For proper configuration of Hitachi NAS Platform models 30x0 and 4xx0 node clusters, the FC port configurations need to be identical. In other words, port 1 on all cluster nodes needs to see the same logical units (LUNs) on the same disk system controller.
Try to follow one Hport at a time on each node:
Hport 1 on node 1 sees LUN 0 over 5D, which is virtual controller 5
Hport 1 on node 2 sees LUN 0 over 7B, which is virtual controller 7
Then try the next Hport on each node:
Hport 3 on node 1 sees LUN 0 over 6D, which is virtual controller 6
Hport 3 on node 2 sees LUN 0 over 8B, which is virtual controller 8
The requirement of seeing the storage in the same way from both nodes is not fulfilled, and EVS migration will be affected.

HDS Confidential: For distribution only to authorized parties. Page 5-45


Ethernet and Fibre Channel Networks
Most Important SCSI Command Node 1

Most Important SCSI Command Node 1

 To get a good view from the node point of view use:


scsi-racks
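To compare the nodes (the exact output layout depends on the firmware version, so this is only a sketch):
1. On Node 1: scsi-racks
2. On Node 2: scsi-racks
3. For every LUN, verify that the same Hport number on both nodes reaches that LUN over the same controller or virtual controller; any mismatch points to the kind of cabling or zoning error shown in the “BAD!!” examples earlier in this module.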

Page 5-46 HDS Confidential: For distribution only to authorized parties.


Ethernet and Fibre Channel Networks
Most Important SCSI Command Node 2

Most Important SCSI Command Node 2

 To get a good view from the node point of view use:


scsi-racks

HDS Confidential: For distribution only to authorized parties. Page 5-47


Ethernet and Fibre Channel Networks
Problem Determination Example 1

Problem Determination Example 1

Page 5-48 HDS Confidential: For distribution only to authorized parties.


Ethernet and Fibre Channel Networks
Problem Determination Example 2

Problem Determination Example 2

HDS Confidential: For distribution only to authorized parties. Page 5-49


Ethernet and Fibre Channel Networks
Storage Considerations

Storage Considerations

 The Adaptable Modular Storage system has the concept of “path priority,” while the Hitachi NAS Platform node uses a concept of “preferred path.”
 For the AMS models before the 2000 series, it is critical to align “path priority” and “preferred path.”
 The HUS 100 family is compliant with the SCSI SPC-3 Asymmetric Logical Unit Access (ALUA) controller model.
 With HUS 100, HNAS nodes can use ALUA to balance the preferred access scheme across the System Drives (SDs) in use.
 The Hitachi NAS Platform node can support multiple paths to the same LUN, but does not perform dynamic load balancing across the paths.

Page 5-50 HDS Confidential: For distribution only to authorized parties.


Ethernet and Fibre Channel Networks
Storage Enhancements for HNAS

Storage Enhancements for HNAS

 AMS2000 code 08B7/D introduced a new feature that allows HNAS to identify the preferred path to an SD.
 From that version, the sdpath command should not be used for manual path balancing.
 HUS100 code 0935/A introduced a new Host Group option called “HNAS Option Mode.”
 This option enables HNAS to interrogate more detailed storage SCSI information.
 From HNAS version 11.2.33xx.xx (Angel-3), this information enables automatic expansion of the queue depth, triggered by the “Command Queue Expansion Mode” port option on the HUS100.

HDS Confidential: For distribution only to authorized parties. Page 5-51


Ethernet and Fibre Channel Networks
HUS 100 Options and HNAS

HUS 100 Options and HNAS

Host Group Options

Port Options

Page 5-52 HDS Confidential: For distribution only to authorized parties.


Ethernet and Fibre Channel Networks
Module Summary

Module Summary

 In this module, you have learned to:


• List the Gigabit Ethernet (1GbE and 10GbE) network maximum cable
length
• Explain the private and public network configuration scenarios for both
platforms
• Differentiate between private Rack LAN and public User Data LAN
• Examine “The good, the bad and the ugly” back-end SAN configurations

HDS Confidential: For distribution only to authorized parties. Page 5-53


Ethernet and Fibre Channel Networks
Module Review

Module Review

1. What media type is supported and for which interfaces?


2. Indicate the approximate maximum distance for:
10GbE multimode? ___ 1GbE UTP? ____ 10GbE UTP? ____
3. Can the SMU be an NTP client and an NTP server?
4. How many initiators can be enabled on a node to get access over
the SAN to the storage targets?
5. Can the customer data LAN and the private management LAN
physically be the same?
6. Which components can be managed through the Private
Management? And how is this accomplished?
7. How many paths are needed from the node to storage?
8. Is multipathing including load balancing supported?

Page 5-54 HDS Confidential: For distribution only to authorized parties.


6. File System and
Access Protocols
Module Objectives

 Upon completion of this module, you should be able to:


• Explain the storage pools of the Hitachi NAS Platform
• Describe file system structure
• List the storage pool and file system specifications
• Identify the benefits of Tiered File Systems (TFS)
• Describe the access protocol used by Microsoft® Windows® and UNIX
• Identify the implementation differences for the access protocols

HDS Confidential: For distribution only to authorized parties. Page 6-1


File System and Access Protocols
From Disk Drive to HNAS Virtualized Storage

From Disk Drive to HNAS Virtualized Storage

[Diagram: from disk drive to HNAS virtualized storage — a RAID group (RG) is carved into LDEVs (LDEV 0, 11–13, 16, 19–23), which are presented as LUNs and seen by HNAS as System Drives SD 0–SD 9.]

RG = RAID Group
LDEV = Logical Device
HNAS = Hitachi NAS Platform
SD = System Drive
SP = Storage Pool
FS = File System
EVS = Enterprise Virtual Server
SHR = Share

Page 6-2 HDS Confidential: For distribution only to authorized parties.


File System and Access Protocols
Hitachi Storage System Integration

Hitachi Storage System Integration

 Provision RAID groups and logical units (LDEVs/LUNs) using


storage vendor management application
• Connect your storage, Hitachi NAS Platform, and the Fibre Channel (FC)
switches to form back-end SAN or DAS FC
• Configure FC switch zoning

In the SMU, go to Storage Management > System Drives and verify that the new System Drives (in other words, the LUNs presented by the storage) are visible to the Hitachi NAS Platform node.
1. Verify the storage capacity license limit.
2. Once verified, allow the Hitachi NAS Platform node access to the specified System Drives.
A refresh can be executed via the CLI using the scsi-refresh command.
DAS stands for Direct Attached Storage.
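For example, after presenting new LUNs, a rescan can be triggered from the CLI (assuming, as suggested above, that the bare command is sufficient):
   scsi-refresh    (rescan so that the newly presented LUNs appear as System Drives)
Afterwards, re-check the Storage Management > System Drives page.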

HDS Confidential: For distribution only to authorized parties. Page 6-3


File System and Access Protocols
BlueArc RAID Rack Discovery

BlueArc RAID Rack Discovery

 You must discover the storage array before you can manage it

When you click on Discover Racks, the IP addresses of both RAID controllers are
discovered, and the array becomes manageable by the BlueArc Systems Server.
Once a rack is added, the following events occur:
 The selected RAID racks appear on the RAID Racks list page and on the System
Monitor (for the currently selected managed server).
 The SMU begins logging rack events, which can be viewed through the Event
Log link on the RAID Rack Details page.
 RAID rack severe events will be forwarded to each managed server that has
discovered the rack and included in its event log. This triggers the server's alert
mechanism, possibly resulting in alert emails and SNMP traps.
 The RAID rack’s time is synchronized daily with SMU time.
 If system drives are present on the RAID rack, the rack “cache block size” will be
set to 16KB.
Note that if there is a problem with either array controller, the rack will be discovered, but in a degraded (partially discovered) state, and will have reduced functionality. You must resolve the problem with the array, then remove and rediscover the array.

Page 6-4 HDS Confidential: For distribution only to authorized parties.


File System and Access Protocols
Create System Drives

Create System Drives

 From the System Drives management screen, click Create


 Select the storage array on which you are going to build your
System Drive

SMU only has the API scripts to build RAID arrays on BlueArc Storage Arrays (LSI).
On other vendors’ storage, you will use their native application.
Supported RAID types are RAID-1, 5, and 6 on BlueArc RC16 arrays.
1. Navigate to the System Drives page (Home > Storage Management > System
Drives).
2. In the System Drives page, click Create.
3. Select a rack. When the Select RAID Rack page is displayed, select a rack, then
click Next.
4. Indicate the RAID level.
5. Specify the drive parameters (size, name, stripe size).

HDS Confidential: For distribution only to authorized parties. Page 6-5


File System and Access Protocols
System Drives – Create SD

System Drives – Create SD

 Select the RAID level for the System Drive

Page 6-6 HDS Confidential: For distribution only to authorized parties.


File System and Access Protocols
System Drives – Create SD

 Create an SD in a 7+1 RAID-5 RAID group

Select the number of drives in your RAID group. This includes Parity drives.
You are able to create multiple SDs within a single RAID group.
 This is not recommended because it will cause the disk heads to seek between two physical areas on the disk.
Select the stripe depth for your RAID groups, keeping in mind Superflush.

HDS Confidential: For distribution only to authorized parties. Page 6-7


File System and Access Protocols
CLI Displaying the System Drives

CLI Displaying the System Drives

 From the CLI more details can be displayed.


• Pay attention to the Mirror column indicating the role in a TrueCopy pair.

Page 6-8 HDS Confidential: For distribution only to authorized parties.


File System and Access Protocols
From Disk Drive to HNAS Virtualized Storage

From Disk Drive to HNAS Virtualized Storage

[Diagram: from disk drive to HNAS virtualized storage — the System Drives SD 0–SD 9 (backed by the LDEVs of the RAID group) are grouped into storage pools SP01 and SP02, which host file systems FS01–FS04 and virtual volume /VV1, served by EVS1 (192.168.3.21), EVS2 (192.168.3.25) and EVS3 (192.168.3.31).]

HDS Confidential: For distribution only to authorized parties. Page 6-9


File System and Access Protocols
Hitachi Dynamic Provisioning (HDP) and HNAS

Hitachi Dynamic Provisioning (HDP) and HNAS

[Diagram: HDP and HNAS — a RAID group feeds an HDP pool, which presents DP-VOLs 11–15; these are seen by HNAS as System Drives SD 0–SD 4 in storage pool SP01, hosting FS01, FS02 and /VV1, served by EVS1 (192.168.3.21) and EVS2 (192.168.3.25).]

Some of the current restrictions:

 An HDP pool hosting HNAS System Drives (SDs) should never be over-provisioned.
 HNAS is not HDP thin-provisioned-volume aware.
 If an HDP pool runs out of disk space, the HNAS System Drive experiences SCSI and I/O errors, fails the entire span, and unmounts it automatically.
 Always monitor and ensure that the HDP pools for HNAS are never oversubscribed.
 HNAS does not have the ability to adapt to DP-VOL size changes.
 The size of the DP-VOLs must never change.
 All the DP-VOLs used in an HNAS storage pool should have the same performance capabilities.

Page 6-10 HDS Confidential: For distribution only to authorized parties.


File System and Access Protocols
From Physical Disk to Storage Pool

From Physical Disk to Storage Pool


[Diagram: from physical disk to storage pool — four SD/LDEVs in a RAID-5 3+1 layout form a stripeset; the stripeset begins with Configuration On Disk (COD) and carries stripes 1–4 plus a parity stripe, and file system FS1 is allocated across the stripeset.]

The storage pool consists of one or more System Drives (SDs). A single SD has the
same capacity (in bytes) as a Logical Unit Number (LUN) presented over the SAN.
This diagram illustrates the concept of storage pools and should not be interpreted
as a best practice. For best practices, consult the appropriate documentation for the
modular or enterprise disk subsystems.

HDS Confidential: For distribution only to authorized parties. Page 6-11


File System and Access Protocols
Expanding a Storage Pool

Expanding a Storage Pool

[Diagram: expanding a storage pool — a second stripeset of SD/LDEVs (RAID-5 3+1) is added alongside the first; each stripeset carries its COD, stripes 1–4 and a parity stripe.]

A storage pool can be expanded non-disruptively in capacity by adding one or more system drives (LUNs). The number of LUNs added per expansion also defines the size of the new stripeset used by the storage pool. Be aware, therefore, that performance may differ from that of the stripesets already in the storage pool before the expansion. The Dynamic Write Balancing mechanism compensates for this phenomenon and is enabled by default.

Page 6-12 HDS Confidential: For distribution only to authorized parties.


File System and Access Protocols
File System in a Storage Pool

File System in a Storage Pool


[Diagram: two file systems, FS1 and FS2, allocated side by side in the same storage pool, built from one stripeset of SD/LDEVs (RAID-5 3+1) with COD, stripes 1–4 and a parity stripe.]

With the storage pool concept, more than one file system can be allocated in the same storage pool. Storage pools require a storage pool license key.

HDS Confidential: For distribution only to authorized parties. Page 6-13


File System and Access Protocols
File System Using Auto Expansion

File System Using Auto Expansion


[Diagram: file systems FS1, FS2 and FS3 auto-expanding within the same storage pool — each file system is pre-allocated only a fraction of its maximum size and grows within the stripeset as data is added.]

Auto expansion can be used as a kind of thin provisioning on the file system level.
File systems are created with a maximum value, but only a user-defined fraction of
the maximum is pre-allocated. In this way the capacity of all file systems can be
greater than the available space in the storage pool. When more data is added to the
file system, the pre-allocated space expands as needed. Of course, the file systems
together cannot grow larger than the storage pool capacity allows, so growth of the
file systems should be taken into consideration when enlarging the storage pool.

Page 6-14 HDS Confidential: For distribution only to authorized parties.


File System and Access Protocols
System Drive Groups (SDG)

System Drive Groups (SDG)

 To enable advanced file system technologies, it is essential that the NAS nodes understand the relationship between the storage RAID group and the HW-LUN/LDEV
 The SDG represents the mapping of SDs/LUNs/LDEVs that share the same storage RAID group
 Autogroup assignment should only be selected when an LDEV spans a complete RAID group (1 SD in each SDG)

SD/LDEV SD/LDEV SD/LDEV SD/LDEV SDG SD/LDEV SD/LDEV SD/LDEV SD/LDEV

HDS Confidential: For distribution only to authorized parties. Page 6-15


File System and Access Protocols
Hitachi Dynamic Provisioning (HDP)

Hitachi Dynamic Provisioning (HDP)

 The HDP feature introduces a virtualization layer that hides the RAID group layout from the HNAS nodes
 The best practice is to assign one SD/LUN/DP-VOL mapping into one SDG

SD/DPV SD/DPV SD/DPV SD/DPV SDG

DP-VOL 4
DP-VOL 3
DP-VOL 2
DP-VOL 1

DPV = DP-VOL

Page 6-16 HDS Confidential: For distribution only to authorized parties.


File System and Access Protocols
Storage Pool Best Practices

Storage Pool Best Practices

 One LDEV/LUN spans the complete RAID group
 Minimum of four (4) SDs in a storage pool
 Even number of SDs in a pool
 Design with future expansion in mind
 Queue depth:
• All file systems in a storage pool should belong to EVSs on one node
• In other words, do not share the same storage pool across nodes
▪ SCSI queue depth is cluster wide
▪ The maximum SCSI queue depth is 500 per modular storage system target port
▪ The NAS node has a fixed SCSI queue depth of 32 per LDEV

scsi-queue-limits-show (excerpt):
HITACHI AMS500 / DF700M    Default: per controller: 0, per target port: 500, per system drive: 32
                           Current: per controller: 0, per target port: 500, per system drive: 32
HITACHI AMS1000 / DF700H   Default: per controller: 0, per target port: 500, per system drive: 32
                           Current: per controller: 0, per target port: 500, per system drive: 32
HITACHI SMS100 / SA800     Default: per controller: 0, per target port: 500, per system drive: 32
                           Current: per controller: 0, per target port: 500, per system drive: 32
HITACHI SMS110 / SA810     Default: per controller: 0, per target port: 500, per system drive: 32
                           Current: per controller: 0, per target port: 500, per system drive: 32
HITACHI AMS2100 / DF800S   Default: per controller: 0, per target port: 500, per system drive: 32
                           Current: per controller: 0, per target port: 500, per system drive: 32
HITACHI AMS2300 / DF800M   Default: per controller: 0, per target port: 500, per system drive: 32
                           Current: per controller: 0, per target port: 500, per system drive: 32
HITACHI AMS2500 / DF800H   Default: per controller: 0, per target port: 500, per system drive: 32
                           Current: per controller: 0, per target port: 500, per system drive: 32
HITACHI VSP / R700         Default: per controller: 0, per target port: 2000, per system drive: 32
                           Current: per controller: 0, per target port: 2000, per system drive: 32
HITACHI HUS110 / DF850XS   Default: per controller: 0, per target port: 500, per system drive: 32
HDS Confidential: For distribution only to authorized parties. Page 6-17


File System and Access Protocols
Storage Pool Best Practices

                           Current: per controller: 0, per target port: 500, per system drive: 32
HITACHI HUS130 / DF850S    Default: per controller: 0, per target port: 500, per system drive: 32
                           Current: per controller: 0, per target port: 500, per system drive: 32
HITACHI HUS150 / DF850MH   Default: per controller: 0, per target port: 500, per system drive: 32
                           Current: per controller: 0, per target port: 500, per system drive: 32
HITACHI HUS-VM / HM700     Default: per controller: 0, per target port: 2000, per system drive: 32
                           Current: per controller: 0, per target port: 2000, per system drive: 32
HITACHI Default HDS / UNKNOWN / OTHER
                           Default: per controller: 0, per target port: 256, per system drive: 32
                           Current: per controller: 0, per target port: 256, per system drive: 32

Page 6-18 HDS Confidential: For distribution only to authorized parties.


File System and Access Protocols
Storage Pools Specifications

Storage Pools Specifications

 Storage pool — an expandable container of file systems


• Initial creation with up to 32 System Drives
• A storage pool can be expanded 63 times
• Expandable to 256TB (1PB with firmware version 10.1 and above)
• Can contain up to 16,384 chunks

HDS Confidential: For distribution only to authorized parties. Page 6-19


File System and Access Protocols
Creating a Storage Pool

Creating a Storage Pool

When you only allow access to the SDs you need, storage pool assignment becomes much easier. In this scenario, access to four SDs is allowed on the System Drives screen; in the storage pool wizard, all you need to do is check them all and continue, without having to consider which SDs to use.

Page 6-20 HDS Confidential: For distribution only to authorized parties.


File System and Access Protocols
File System Specifications

File System Specifications

 One or more file systems may be created in a storage pool


 256TB maximum capacity limit; maximum of 1,023 chunks
 Up to 128 file systems in a storage pool
 Maximum 125 file systems in a cluster and 128 in a single node

The file system on the Hitachi NAS Platform was originally called “Silicon File
System”. The HNAS file system can be displayed as shown on the screen capture.
Newer file system versions are called Wise File System version 1 (WFS1) and Wise
File System version 2 (WFS2).

HDS Confidential: For distribution only to authorized parties. Page 6-21


File System and Access Protocols
File System Definition

File System Definition

Screen callouts:
• Set the size limit and enable Auto-Expansion for file systems that will grow as needed.
• Assign the file system to an EVS.
• Disable Auto-Expansion and specify the initial file system size to create file systems with maximum capacity.
• WORM is supported by Hitachi.
• Format for BlueArc JetMirror target.
• Block size: 32KB — best performance for big files; 4KB — optimal space utilization.
• Prepare for deduplication.

Choosing a file system block size is an important decision because it affects


performance, storage size, and the efficiency of storage utilization. A file system
with a 32KB block size provides higher throughput when transferring large files.
However, a file system with a 4KB block size performs better than a file system with
a 32KB block size when subjected to a large number of smaller I/O operations.
If the file system contains many relatively small files, a 4KB file system block size
provides more efficient space utilization.
When saving a 42KB file in a file system with a 32KB block size, the 42KB file takes
up two 32KB blocks, for a total of 64KB used (2 x 32KB = 64KB). In a file system with
a 4KB block size, the 42KB file takes up eleven 4KB blocks, for a total of 44KB used
(11 x 4KB = 44KB). In this case, the 32KB block size wastes 22KB of space while the
4KB block size wastes only 2KB of space. One advantage of configuring multiple file
systems within the same storage pool is that applications requiring a 4KB block size
can share storage with applications that require a 32KB block size.
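As a quick illustration of the arithmetic above, allocated space is simply the
file size rounded up to the next whole block. A minimal POSIX shell sketch of
that calculation (generic shell, not an HNAS command):

    #!/bin/sh
    # Allocated space = file size rounded up to the next whole block.
    alloc() {   # usage: alloc <file_KB> <block_KB>
        file=$1; block=$2
        blocks=$(( (file + block - 1) / block ))   # ceiling division
        used=$(( blocks * block ))
        echo "${file}KB file, ${block}KB blocks: ${blocks} blocks = ${used}KB (waste $(( used - file ))KB)"
    }
    alloc 42 32    # -> 2 blocks = 64KB (waste 22KB)
    alloc 42 4     # -> 11 blocks = 44KB (waste 2KB)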

Tiered File Systems (Tiered Storage Pools)

 Tiered File Systems (TFS) provides cost efficiency with equal performance
  by using fewer or lower-cost disks
  • Can be deployed with or without SSD drives
  • New installation with no degradation in I/O performance

[Diagram: In a traditional file system, Tier 1 holds both metadata and user
data on high-speed SAS disks. In a Tiered File System, Tier 0 holds metadata
(small reads and writes) on high-speed SSD or SAS disks, while Tier 1 holds
user data (larger reads and writes) on lower-cost NL-SAS or SATA disks.]

Creating a Tiered Storage Pool

Creating a Tiered Storage Pool

Step 1

Step 2

Displaying a Tiered Storage Pool


sd-list -i --show-tier

span-list

From Disk Drive to Drive Letter and UNIX Mount Point


[Diagram: From disk drive to drive letter and UNIX mount point. RAID groups
(RG) contain LDEVs (LDEV 11-23, LDEV 0) that HNAS sees as system drives
(SD 0-9). System drives are grouped into storage pools (SP 01, SP 02), which
hold the file systems FS01-FS04 and the virtual volume /VV1. File systems are
assigned to EVS1 (192.168.3.21), EVS2 (192.168.3.25), and EVS3 (192.168.3.31),
which serve them to clients: SHR1 on the Win1 server is mapped as drive X:,
SHR3 on Win3 as drive Y:, and NFSD serves the UNIX mount points MNT1 and MNT2
to clients (192.168.3.78, 192.168.3.41).]

RG = RAID Group
LDEV = Logical Device
HNAS = Hitachi NAS Platform
SD = System Drive
SP = Storage Pool
FS = File System
EVS = Enterprise Virtual Server
SHR = Share

What Are the Similarities?

 NFS and CIFS are protocols
 NFS and CIFS enable sharing the same “storage”
 NFS and CIFS access across a network
 NFS and CIFS have built-in:
• System login security
• Connection protocol
• File and directory security
• File and directory locking

The way the similarities are implemented makes CIFS and NFS very different!

What Is Different?

Issue                NFS v2/v3                          CIFS
System Login         Client logs into server;           Client logs into domain;
Security             server authenticates;              domain authenticates;
                     server exports directories         server shares directories
Stateful/Stateless   No historical relationship;        Client/server share history;
                     do not have to re-authenticate     have to re-authenticate the
                                                        connection
File and Directory   Check U-ID/G-ID at request time;   Use ACL for the share;
Security             U-ID/G-ID per file/directory       U-ID (SID) checked against ACL
File and Directory   Advisory locks;                    Mandatory locks;
Locking              works for good citizens only       access decides the lock

UNIX Permissions

Owner Group Everyone

Maybe stupid –
but “Simple”!

Windows Permissions

Maybe Advanced —
but “Complex”!

Common Internet File System (CIFS) Authentication/Active Directory Service (ADS)

 Add a CIFS Name to each EVS that will host Microsoft Windows clients
• ADS CIFS names will be automatically added to the specified
Active Directory using Dynamic DNS (DDNS)
• ADS accounts are placed in the “Computers” folder by default

DNS:
DNS is used to translate host names into IP addresses. With DNS, records must be
created manually for every host name and IP address.
Dynamic DNS:
On TCP/IP networks, the Domain Name System (DNS) is the most common method
to resolve a host name with an IP address, facilitating IP-based communication.
With DNS, records must be created manually for every host name and IP address.
Starting with Microsoft Windows 2000, Microsoft enabled support for Dynamic DNS,
with a DNS database that allows authenticated hosts to automatically add a record
of their host name and IP address, thus eliminating the need for manual creation of
records.

ADS and Network Basic Input/Output System (NetBIOS)

 Legacy Microsoft and some non-Microsoft Windows CIFS clients may require
  NetBIOS to be enabled
 On customer request, DDNS can be disabled

Using NetBIOS:
When enabled, NetBIOS allows NetBIOS and WINS on this server. If this server
communicates by name with computers that use older Microsoft Windows versions,
this setting is required. By default, the server is configured to use NetBIOS.
Disabling NetBIOS has some advantages:
 Simplifies the transport of SMB traffic
 Removes WINS and NetBIOS broadcast as a means of name resolution
 Standardizes name resolution on DNS for file and printer sharing

ADS and Domain Name System (DNS)

 The server registers each CIFS name and IP address with the directory’s
  Dynamic DNS server (DDNS)

Same EVS
represented 3 times

Domain
Controller

ADS Computers

 CIFS names will appear as unique computers in the Active Directory
  Computers folder

ADS Computer Properties

 Computer Properties for Hitachi NAS Platform Node EVS

CIFS Shares

 Shares can be created through the GUI (shown) or using the Computer
  Management MMC.
 New shares are created with an “Everyone Full” permission.
 File and directory access applies according to the ACL.
 CIFSv1 and CIFSv2 are supported on all platform families.

Access configuration: optional IP-based restrictions.

Example:
19.168.*.*(rw)
10.1.3.38(noaccess)
10.1.2.0/24(ro)

Ordering is important. Start specific, then make more general.

Notes:
 All clients on network ID 19.168.0.0/255.255.0.0 will have read and write
  access.
 The client WS with IP address 10.1.3.38 will have no access at all, and all
  other WS IP addresses in 10.1.2.0/24 will have read-only access.
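A quick client-side way to verify the effective share access is standard SMB
tooling; a generic Linux example (the host name, share, and user are
illustrative):

    # List shares exposed by the EVS:
    smbclient -L //evs1.example.com -U winuser

    # Connect to a share and attempt a write to confirm (rw) versus (ro):
    smbclient //evs1.example.com/SHR1 -U winuser -c 'put test.txt'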

Network File System (NFS) and Exports

 NFSv4 support added to NFSv3 and NFSv2 support


• Some performance and security improvements
 By default, exports grant read/write access to all clients and squash root
(user and group) to 65534

Access configuration: optional IP-based restrictions.

Example:
19.168.0.1(norootsquash)
19.168.*.*(rw)
*(ro)

Ordering is important. Start specific, then make more general.
Notes:
 The root account (UID/GID = 0/0) on the client WS with IP address 19.168.0.1
  will not be mapped to “anonymous” (uid/gid 65534).
 All clients on network ID 19.168.0.0/255.255.0.0 will have read and write
  access.
 All other WS IP addresses will have read-only access.
 root squash: maps requests from uid/gid 0 (root) to the anonymous uid/gid
  (65534). Note that this does not apply to any other UIDs that might be
  equally sensitive, such as super users.
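From a client, a granted export can be verified with standard NFS tooling; a
generic Linux example (the host name and export path are illustrative):

    # Show what the EVS exports:
    showmount -e evs1.example.com

    # Mount the export and, as root, check the effective UID mapping:
    mount -t nfs evs1.example.com:/export1 /mnt/export1
    touch /mnt/export1/probe && ls -ln /mnt/export1/probe
    # With root squash active, the new file is owned by uid/gid 65534.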

Multi-protocol Access

[Diagram: CIFS, NFS, iSCSI, and FTP clients connect through a LAG to an HNAS
3080. All four protocols are served from the same expandable file system: a
CIFS share, an NFS export, an iSCSI target LUN, and an FTP directory.]

To use Internet Small Computer Systems Interface (iSCSI) storage on the server, one
or more iSCSI logical units (LUs) must be defined. iSCSI logical units share blocks of
SCSI storage that are accessed through iSCSI targets. iSCSI targets can be found
through an iSNS database or through a target portal. Once an iSCSI target has been
found, an Initiator running on a Microsoft Windows server can access the logical
unit as a “local disk” through its target. Security mechanisms can be used to prevent
unauthorized access to iSCSI targets.
On the server, an iSCSI logical unit shares regular files residing on a file system. As a
result, iSCSI benefits from file system management functions provided by the server,
such as NVRAM logging, snapshots, and quotas.
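As an illustration of finding a target through a target portal, here is a
generic Linux open-iscsi example (the portal address and target IQN are
illustrative, not HNAS-specific values):

    # Discover iSCSI targets exposed by the EVS portal:
    iscsiadm -m discovery -t sendtargets -p 192.168.0.3:3260

    # Log in to a discovered target so its LU appears as a local disk:
    iscsiadm -m node -T iqn.2002-01.com.example:evs1-target0 -l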

Module Summary

 In this module, you have learned to:


• Explain the storage pools of the Hitachi NAS Platform
• Describe file system structure
• List the storage pool and file system specifications
• Identify the benefits of Tiered File Systems (TFS)
• Describe the access protocol used by Microsoft® Windows® and UNIX
• Identify the implementation differences for the access protocols

Module Review

1. List the acronyms of some popular file systems.


2. List some functions most file systems have in common.
3. Specify the maximum volume size in the Hitachi NAS Platform
3090.
4. Which benefits can the customer achieve with storage pools?
5. Which file access protocol is used on the Microsoft Windows
platform?
6. Which file access protocol is used on the UNIX platform?



7. N-way Clustering and
Enterprise Virtual Server
(EVS)

Module Objectives

 Upon completion of this module, you should be able to:


• Explain the concept of an Enterprise Virtual Server (EVS)
• Explain the purpose of clustering
• Define dataflow in NVRAM
• Define IP address assignment in case of an error
• List the failure detection areas
• Recognize the failover and recovery operation
• Describe a Synchronous Disaster Recovery Cluster

Enterprise Virtual Servers (EVS) Attributes

 Each EVS (EVS 1, EVS 2, EVS 3, ...) has the following attributes assigned
  (each EVS carries its own IP address and policy):
• One or more IP addresses
• One or more file systems
  ▪ All EVSs may see the same logical devices (LDEVs) but can only access the
    area belonging to the file systems assigned to the EVS
    (host [node]-based masking)
• Port assignment for performance management per EVS
• NFS/CIFS exported resources
• Command line interface (CLI) context

EVS allows administrators to create up to 64 logical servers within a single
physical system. Each virtual server can have a separate address and policy.

EVS Configuration Summary

 EVS is the virtual file server component of the Hitachi NAS Platform
solution
 Maximum of 64 EVSs per server node/cluster nodes

Anatomy of an EVS:
• One or more file serving IP
addresses
• Can host one or more file
systems
• Is the container for CIFS
shares, NFS exports, and
more
• Bound to one Link
Aggregation Group (LAG)
• In a cluster failover scenario,
EVSs migrate from the failed
node to an online node

Virtual Server Configuration

These screen shots explain the IP address assignments for EVS, as well as the EVS
types.

Automatic EVS Migration (Clustering) Network Problem

[Diagram: A 2-node cluster, each node with aggregations ag1 and ag2, hosting
EVS1-1 (192.168.3.11), EVS1-2 (10.33.123.12), and EVS1-3 (10.33.123.14). The
admin EVS0 (192.0.2.15) and the cluster node IPs (192.0.2.11, 192.0.2.12) sit
on the private management network (192.168.3.19); public management is at
172.145.2.14. When one node loses network connectivity on an aggregation, the
affected EVS automatically migrates to the other node.]

Automatic EVS Migration (Clustering) Node HW Problem

[Diagram: The same 2-node cluster; here a node hardware problem causes all
EVSs hosted by the failing node to migrate automatically to the surviving
node.]

2-node Clustering

[Diagram: Hitachi NAS Platform 2-node cluster. Both nodes serve data network
clients and exchange cluster heartbeats over the cluster interconnect. The SMU
on the management network holds the system configuration and acts as the
quorum device.]

The Hitachi NAS Platform supports 2-node Active-Active (A-A) clusters. In A-A
cluster configurations, each node can host several independent EVSs, which can
service network requests simultaneously. A maximum of 64 EVSs per 2-node cluster
are supported. Should either of the nodes in the cluster fail, the EVSs from the failed
node will automatically migrate to the remaining node. Network clients will not
typically be aware of the failure and will not experience any loss of service, although
the cluster may operate with reduced performance until the failed node is restored.
After the node is restored and is ready for normal operation, the EVS can be
migrated manually back to the original node.
Note: SMU stands for System Management Unit.

Clustering Basics

 The cluster is Active-Active when one or more EVSs are defined on both
  nodes
 Clusters of 2 to 8 nodes
  (3080/3090/4060: two nodes; 4080/4100: four nodes, later eight)
• Clusters greater than 2 nodes require dual interconnect 10GbE switches
 Quorum is maintained by having a majority of votes
• The Quorum Device votes only in even-node clusters
• The Quorum Device resides on a System Management Unit
• Node(s) not part of a quorum do not host services
 Failover means EVS migration
• Occurs when:
  ▪ All GbE ports in an aggregation used by an EVS fail
  ▪ All file systems associated with an EVS are offline
  ▪ The remote node goes offline; in other words, it is no longer sending
    heartbeats

 A kind of Active-Passive cluster configuration can be achieved in a 2-node
  cluster by having one node serve all EVSs and no EVSs being serviced by the
  second node.
 In case of SMU failure, the cluster failover functionality can be affected
  in a 2- or 4-node cluster.
 The interconnect switches for clusters greater than 2 nodes need to support
  10GbE.
 Support for up to 64 EVSs per single node or per 2-node to 4-node cluster.

NVRAM Usage in a 2-way Clustered Configuration

[Diagram: NVRAM usage in a 2-way cluster. (1) A network client issues a write
to one node. (2) The node mirrors the data to the partner node’s NVRAM and
receives an acknowledgement through the HSI. (3) The write is acknowledged as
complete to the client. (4) The data is later written to disk.]

When the Hitachi NAS Platform node is configured as a 2-node cluster, then, in
addition to buffering all the file system modifications, each cluster node mirrors the
NVRAM contents of the other cluster node. This mirroring of the cluster nodes’
NVRAM content ensures data integrity in the event of a cluster node failure. When a
cluster node takes over for the failed node, it uses the contents of the NVRAM
mirror to complete all file system modifications that were not yet committed to the
storage by the failed server.
HSI = High Speed Interconnect/Interface

N-way Clustering

[Diagram: Hitachi NAS Platform N-way cluster. Up to four nodes serve data
network clients and exchange cluster heartbeats over the cluster interconnect;
the SMU on the management network holds the system configuration and acts as
the quorum device.]

N-way clustering allows up to 4 Hitachi NAS Platform 3090 nodes to be
configured as a single Hitachi NAS Platform cluster. When formed into a
cluster, the Hitachi NAS Platform nodes are called cluster nodes. In a Hitachi
NAS Platform cluster, nodes are not passive. Each node is active and able to
host independent EVSs, which can serve network requests simultaneously. A
maximum of 64 EVSs per cluster are supported. If a cluster node fails, the
EVSs from the failing node automatically migrate to other cluster nodes. The
EVSs from the failed node are then hosted by the other nodes in the cluster.
Network clients will not typically be aware of the failure and will not
experience any loss of service, although the cluster may operate with reduced
performance until the failed node is restored. After the failed node is
restored and is ready for normal operation, an EVS can be migrated back to the
original node manually.

NVRAM Usage in a 4-way Clustered Configuration

[Diagram: NVRAM usage in a 4-way cluster. (1) A network client issues a write
to a node. (2) The node mirrors the data to the next cluster node’s NVRAM in
sequence and receives an acknowledgement through the HSI. (3) The write is
acknowledged as complete to the client. (4) The data is later written to
disk.]

When the Hitachi NAS Platform node is configured as a cluster, then, in addition to
buffering all the file system modifications, each cluster node mirrors the NVRAM
contents of the other cluster nodes in sequence. This mirroring of the cluster nodes’
NVRAM content ensures data integrity in the event of any one cluster node failure.
When a cluster node takes over for the failed node, it uses the contents of the
NVRAM mirror to complete all file system modifications that were not yet
committed to storage by the failed server.

Cluster Configuration

Cluster
Item          Description
Cluster Name  Name of the cluster.
Status        Overall cluster status (online or offline).
Health        Cluster health: Robust or Degraded.

Quorum Device
Item          Description
Name          Name of the server hosting the QD (in other words, the SMU on
              which the QD resides).
IP Address    IP address of the server hosting the QD (in other words, the
              SMU on which the QD resides).
Status        QD status:
              • Configured - attached to the cluster. The QD’s vote is not
                needed when the cluster contains an odd number of operational
                nodes.
              • Owned - the QD is attached to the cluster and owned by a
                specific node in the cluster.
              • Not up - the QD cannot be contacted.
              • Seized - the QD has been taken over by another cluster.

Quorum Device services are provided by the SMU. While servers and clusters in a
server farm are managed by a single SMU, an SMU can provide quorum services for
up to 8 clusters in a server farm. To do so, the SMU hosts a pool of 8 available
Quorum Devices (QDs). When a new cluster is formed, a QD must be assigned to
the cluster. Once assigned to the cluster, the QD is “owned” by that cluster and is no
longer available. Removing a QD from a cluster releases its ownership and returns
the QD service to the pool of available QDs.
If you need to add or remove the cluster’s QD, click the appropriate button
(Add Quorum or Remove Quorum).
If the QD is removed from the cluster, the port will be released back to the SMU’s
pool of QDs and ports.

EVS Failover Functionality and Process Summary

 Out of all configured EVSs, only the EVS affected by a problem will
failover (migrate) to another node
 In case of node hardware or software failure, all EVSs hosted by this
node will migrate
 Even an EVS that has migrated to another node, due to failure, can
be migrated to a third node
 Failback is performed manually and is an EVS migration to the
preferred node
 An EVS that is not running on the preferred node is indicated with
orange in the GUI
 Migrating an EVS enables the IP address(es) associated with the EVS
  on the other node, together with all services and shares configured
  on the EVS
 If the Admin EVS is running on a failing node, this admin EVS is
migrated as well
 Failback is also a manual operation for the admin EVS

IP Address before Failover

[Diagram: Hosts reach EVS 1 (192.168.0.3) on NAS node1 (NIC1, MAC address
xxx01) and EVS 2 (192.168.0.4) on NAS node2 (NIC2, MAC address xxx02). The
hosts’ ARP table maps 192.168.0.3 to xxx01 and 192.168.0.4 to xxx02. Both EVSs
provide file services (NFS, CIFS, FTP, ...).]

The way the ARP protocol maps the MAC address and IP address is displayed in
the above diagram under normal operation for two different EVSs on two different
physical nodes.

On Failing Over

 An IP alias of EVS 1’s IP address is created on node2
 Node2 broadcasts gratuitous ARP packets, which force an update of the ARP
  table of host clients

[Diagram: NAS node1 fails, and EVS 1 (192.168.0.3) fails over to NAS node2.
Node2 sends gratuitous ARP so that the hosts update their ARP table entry for
192.168.0.3 from MAC address xxx01 to xxx02.]
After Failover

 Client hosts can continue to access EVS 1 of NAS node1 with the same IP
  address, but through node2

[Diagram: After failover, the hosts’ new ARP table maps both 192.168.0.3 and
192.168.0.4 to MAC address xxx02. NAS node2 now hosts EVS 1 (192.168.0.3) and
EVS 2 (192.168.0.4), each with its file services (NFS, CIFS, FTP, ...).]

After the failover process is completed, the updated ARP table in the clients will
associate the IP for EVS 1 with the same MAC address as for EVS 2. This way, the
clients using the IP address or the associated name for EVS 1 on node1 will not
detect any difference before and after failover. Clients with a historical host
relationship (stateful connection) like CIFS will need to re-authenticate before a
transfer can be re-established.
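From a Linux client, the effect of the gratuitous ARP can be observed directly
(addresses taken from the diagrams above; a generic example, not an HNAS
command):

    # Before failover, 192.168.0.3 resolves to node1's MAC (xxx01):
    ip neigh show 192.168.0.3

    # After failover, the same IP resolves to node2's MAC (xxx02), so
    # stateless clients such as NFS keep working without remounting:
    ip neigh show 192.168.0.3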

Cluster Failover Reporting

[Diagram: Cluster failover reporting. Events from the Hitachi NAS Platform
event log are relayed over the private management network to syslogd and to
the SMU (with Hi-Track Monitor), and reach the customer management network via
SNMP and SMTP; the data network itself carries no reporting traffic.]

Depending on the customer network and server configuration, several or all
error-reporting methods can be used. On the SMU, the error-reporting relay
functions must be configured, and the “Network Management” and “Mail Servers”
settings adjusted by the customer to reflect the configuration of the SMU.
Alternatively, SMTP servers, as an example, can reside on the data network as
well. Hi-Track® Monitor uses “SNMP get” commands to do a health check, and an
EVS admin IP address on the public LAN can be used as well to interrogate any
status changes and issue the alerting process set up by the CE.

Let’s Have a Look at a Single Node

 No redundancy except network and FC links
 If the node fails, loses network connectivity, or cannot reach the storage,
  no file service can be provided

[Diagram: A single node hosting EVSs, file systems (FS), and spans on its
system drives.]

A Cluster Improves Things

 In a traditional HNAS cluster, high availability is achieved by adding
  additional nodes which share configuration and storage
 Automatic EVS migration to other cluster nodes will ensure that file
  service can be provided, even if one node is down or has connectivity
  problems
 If the storage fails, all nodes are affected until storage is back online

[Diagram: Two clustered nodes hosting EVSs, file systems, and spans on shared
system drives.]

Hitachi Synchronous Disaster Recovery (Sync DR) Cluster Service

 The idea of a Sync DR cluster is to add an additional copy of the data
  using TrueCopy
 Since it does not really make sense to keep 2 copies of the data at one
  location, the cluster is usually stretched over 2 locations
 Problems with one of the 2 nodes will still be handled by the “well known”
  cluster mechanism providing high availability

[Diagram: A cluster stretched between Groningen and Utrecht; the primary
volumes (P) at one site are replicated with TrueCopy to the secondary volumes
(S) at the other site.]

Sync DR Components and Connectivity

This Is NOT a Sync DR Cluster

[Diagram: Two independent clusters, one in Groningen and one in Utrecht, each
with its own EVSs, file systems, spans, and system drives. TrueCopy replicates
each site’s primary volumes (P) to secondary volumes (S) at the other site, in
both directions.]

There are other approaches to combining high availability (HA) and disaster
recovery (DR) with HNAS; however, these approaches are not called HNAS “Sync
DR Cluster” or “Metro Cluster” and are usually customer-specific services and
configurations delivered by GSS.

Module Summary

 In this module, you have learned to:


• Explain the concept of an Enterprise Virtual Server (EVS)
• Explain the purpose of clustering
• Define dataflow in NVRAM
• Define IP address assignment in case of an error
• List the failure detection areas
• Recognize the failover and recovery operation
• Describe a Synchronous Disaster Recovery Cluster

Module Review

1. What are the major benefits of clustering the nodes in the system?
2. Will a running application continue to run during and after the
failover?
3. How many nodes can be included in one cluster?
4. Which configuration parameters must be aligned across the nodes in
the cluster?
5. Which conditions will result in an automatic failover?
6. How is the event of a failover reported?
7. List the benefits of creating multiple EVSs in a cluster.



8. Maintenance
Module Objectives

 Upon completion of this module, you should be able to:


• Differentiate the IP addresses used to identify different components and
functions in the Hitachi NAS Platform
• List the different management facilities
• Recognize the naming and versioning convention for the software in
System Management Unit (SMU) and node
• Follow the upgrade procedures for hardware and software
• Install and configure Hi-Track Remote Monitoring system

Node IP Addresses 1 of 2

 Administrative IP Addresses
• Assigned to 10/100/1000 private management port
or if required to the 1GbE (30x0 only) and 10GbE aggregated ports
▪ Accessing the private management network through the External or
Embedded SMU
▪ Creating an admin services IP address to the 1GbE (30x0 only) or
10GbE aggregated ports
• On the 30x0 and 4xx0 cluster configuration the eth0 interface can be
used to administrate and monitor the nodes as well
• Server administration using SMU, SSC and SSH
• IP-based access restriction on a per-service basis

Node IP Addresses 2 of 2

 File Serving (EVS) IP Addresses


• Assigned to 1GbE (30x0 only) and 10GbE aggregated ports only
• Supports file service protocols CIFS, NFS, FTP, and the block-based
protocol iSCSI
• IP-based access restriction on share and exports
• Version 11.1.xxxx.xx supports Data Migrator to cloud, where private
management eth1 or public management eth0 can be used for
migration to a cloud.
 Cluster Node IP Addresses
• Assigned to 10/100/1000 private management network only
• Used for inter-cluster and Quorum Device communications
• Physical, non-migrating IP, stays with the cluster node

Management Facilities

 Management Services:
• HTTPS — GUI, Primary Management Interface
▪ https://<SMU_IP>/
• ssh – Access the Node CLI
▪ ssh manager@<SMU_IP>; enter the managed server
▪ ssh supervisor@192.0.2.2
• Telnet – Access the Node CLI
• ssc/pssc – Utility for Running Remote Commands
▪ ssc -u supervisor -p supervisor 192.0.2.2 <command>
• scp – Secure Copy to/from Server Flash
▪ scp <Local_File> supervisor@192.0.2.2:/<File_Name>
▪ scp supervisor@192.0.2.2:/event.log ./event.log

HTTP: HyperText Transfer Protocol


HTTPS: HyperText Transfer Protocol Secure
PSSC: Perl SiliconServer Control
SCP: Secure CoPy
SSC: SiliconServer Control
SSH: Secure SHell

Securing Management Access

 GUI Access:
• Home > SMU Administration > Security Options
• Home > Server Settings
 CLI Access:
• mscfg <server>
▪ HTTP — Atlas server
▪ HTTPS — Atlas server (Secure)
▪ Telnet — Telnet server
▪ ssc — SSC/PSSC CLI
▪ vss — VSS Hardware provider DLL connection
▪ SNMP — SNMP agent
• [enable | disable]
• [restrict on|off]
• [addhost <host>] [removehost <host>]

VSS Hardware Provider:


Through the integration between the Volume Shadow Copy Service (VSS), hardware
or software VSS providers, application-level writers and backup applications, VSS
enables integral backups that are point-in-time and application-level consistent
without the backup tool having knowledge about the internals of each application.

Useful Command Line Utilities

 “Tab Completion”
• As an example: > disk + <Tab> completes to > diskusage_applet

 help <command>
 man <command>
 apropos <what>
• All can be combined with: | more or | grep
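For example, the help utilities can be chained with pipes on the node CLI (the
command names below come from this course; output is elided):

    man sd-list | more
    apropos snapshot | grep tree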

CLI Commands and Context

 pn <no> — physical node


or
 cn <no> — cluster-node

 vn <no> — virtual node (EVS)

 evssel <no> — virtual node (EVS)

 for-each-evs — all EVSs

 for-each-cnode — all physical nodes in the cluster

Example: pn all fc-link-status
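Further context examples, using commands that appear in the troubleshooting
module of this course:

    vn 3 cifs-name list     # run a command in the context of EVS 3
    vn 3 cifs-dc list -v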

Maintenance Actions

 The most often requested maintenance


action is firmware upgrade
 SMU upgrade and Hitachi NAS Platform
Node firmware upgrades are often tightly
coupled, and the sequence is important

 No Release Notes (RN) and no upgrade procedure? No upgrade!
 Pay attention to Product Support Alerts
 Not strictly following the up- and down-grade procedures can
result in unrecoverable error situations and customer outage
 Make sure prerequisites are understood and are fulfilled

Software Patching

 Procedure is the same as with major software releases


 Upgrade could be only SMU or only Node
 Read Release Notes published with the software (SW) package version

10 . 2 . 3073 . 05

10   = Major release: new software features and support for new hardware
2    = Maintenance release
3073 = Build level
05   = Patches and fixes

We occasionally get questions from customers asking how often we release major
(for example, 10.0.x, 11.0.x) and minor (for example, 10.2.x, 11.1.x) HNAS OS
code. Moving forward, we are planning:
 1 major release every 15-18 months
 3 minor releases every 12-month period
Please keep in mind these are guidelines and we reserve the right to make
adjustments and changes. If you have any additional questions regarding this
topic, feel free to reach out to a member of the HNAS PM team.

Software Version Numbers and Names

Version 4.2 - Nov 06: Node 4.2.???.x, name Octopus; SMU 4.2.???.x, name Beech;
    SMU OS RH 7.2
Version 4.3 - Feb 07: Node 4.3.???.x, name Razor; SMU 4.3.???.x, name Copper;
    SMU OS RH 7.2
Version 5.0 - Nov 07: Node 5.0.???.x, name Parrot; SMU 5.0.???.x, name Davos;
    SMU OS CentOS 4.4
Version 6.0 - Oct 08: Node 6.0.???.x, name Stone 1; SMU 6.0.???.x, name
    Eaglecrest; SMU OS CentOS 4.4

Node software names = “Fish” names


SMU software names = Ski resorts names

Software Version Numbers and Names

Version 6.1 - Oct 09 (3100/3200 only): Node 6.1.???.x, name Stone 2; SMU
    6.1.???.x, name Eaglecrest; SMU OS CentOS 4.4
Version 6.5 - Sep 09 (3080/3090 only): Node 6.5.???.x, name Nemo; SMU
    6.5.???.x, name Northstar; SMU OS Debian 5.0 (external: CentOS 4.4)
Version 7.0 - Sep 10: Node 7.0.2048.x, name Tiger-1; SMU 7.0.2048.x, name
    Taos; SMU OS Debian 5.0 (external: CentOS 4.4)
Version 7.0 - Oct 10: Node 7.0.2050.x, name Tiger-1 (M1); SMU 7.0.2050.x, name
    Taos; SMU OS Debian 5.0 (external: CentOS 4.4)

Node software names = “Fish” names


SMU software names = Ski resorts names

Software Version Numbers and Names

Version 8.0 - Feb 11 (30x0 only): Node 8.0.2226.x, name Vampire-1; SMU
    8.0.2226.x, name Vail-1; SMU OS Debian 5.0 (external: CentOS 4.8)
Version 8.1 - May 11 (30x0 only): Node 8.1.2312.x, name Vampire-2; SMU
    8.1.2312.x, name Vail-2; SMU OS Debian 5.0 (external: CentOS 4.8)
Version 10.0 - Q1 12: Node 10.0.xxxx.x, name Unicorn-1; SMU 10.0.xxxx.x, name
    Uplands-1; SMU OS Debian 5.0 (external: CentOS 6.0)
Version 10.1 - Q2 12: Node 10.1.xxxx.x, name Unicorn-2; SMU 10.1.xxxx.x, name
    Uplands-2; SMU OS Debian 5.0 (external: CentOS 6.2)

Node software names = “Fish” names


SMU software names = Ski resorts names

Software Version Numbers and Names

Version 8.2 - 08/11 (30x0 only): Node 8.2.2374.06, name Vampire-3; SMU
    8.2.2374.01, name Vail-3; SMU OS Debian 5.0 (external: CentOS 4.8)
Version 10.2 - 08/12 (30x0 only): Node 10.2.3071.04, name Unicorn-3; SMU
    10.2.3071.03, name Uplands-3; SMU OS Debian 5.0 (external: CentOS 6.2)
Version 11.0 - 12/12 (30x0 only): Node 11.0.3123.xx, name Angel-1; SMU
    11.0.3123.xx, name Alpine-1; SMU OS Debian 5.0 (external: CentOS 6.2)
Version 11.1 - 4/13: Node 11.1.3225.xx, name Angel-2; SMU 11.1.3225.xx, name
    Alpine-2; SMU OS Debian 5.0 (external: CentOS 6.2)

Node software names = “Fish” names


SMU software names = Ski resorts names

Software Version Numbers and Names

Version 11.1 - 7/13 (4xx0 only): Node 11.1.3250.xx, name Angel-2; SMU
    11.1.3225.xx, name Alpine-2; SMU OS Debian 5.0 (external: CentOS 6.2)
Version 11.2 - 08/13 (30x0 and 4xx0): Node 11.2.33xx.xx, name Angel-3; SMU
    11.2.33xx.xx, name Alpine-3; SMU OS Debian 5.0 (external: CentOS 6.2)
Version 12.0 - Q1/14 (30x0 and 4xx0): Node 12.0.xxxx.xx, name Bat-1; SMU
    12.0.xxxx.x, name ??; SMU OS Debian 5.0 (external: CentOS 6.?)
Version 13.0 - ?/? (30x0 and 4xx0): Node 13.0.xxxx.xx, name Cornet-1; SMU
    13.x.xxxx.x, name ??; SMU OS Debian 5.0 (external: CentOS 6.?)

Node software names = “Fish” names


SMU software names = Ski resorts names

Software Upgrades

 Following are the general rules for software upgrades:

• Upgrades from 10.x to 11.x to 12.x are rolling upgrades
• Upgrade from version 5.x to 6.x is not a “rolling” upgrade
• Likewise, upgrades from 6.x to 7.x, 7.x to 8.x, and 8.x to 10.x require a
  total system outage and a maintenance window
• Upgrade from 5.0 to 5.1 is not a “rolling” upgrade; a system outage and a
  maintenance window are required
• From the Stone 1 release (version 6.0), “dot releases” are supported as
  rolling upgrades, for example from 6.0 to 6.1 or from 6.1 to 6.2, but NOT
  from 6.0 directly to 6.2
• Going from 5.0.1042.x to 5.0.1289.x (maintenance release) can often be done
  as a “rolling” upgrade, node by node. Read the RN to make sure a system
  outage and a maintenance window are not required
• A patch release upgrade, such as 5.0.1042.05 to 5.0.1042.09, can be done as
  a “rolling” upgrade, node by node

“Rolling” upgrades means doing the upgrade node by node while the customer still
has access to the file systems and shares on the other nodes.

Upgrade Path in Release Notes

 Consult the latest Upgrade Path in the Release Notes:


• Read the notes
• Do not compromise

In rolling upgrade (in green), cluster nodes may boot one-at-a-time into the new
firmware version. EVS migration works between revisions that support rolling
upgrades, and each revision can read NVRAM written by the other revision.
Cluster upgrades (in red) require all cluster nodes to shut down and boot into the
new firmware version simultaneously. EVS migration between revisions does not
work. Often NVRAM from one version is unreadable in the other version, which
requires file systems to be fully unmounted in one version before they can be
mounted in the other version.
[1] Due to defect 58192, upgrades from versions 8.0 through 8.2.2312.08 must go to
8.1.2312.09 or 8.1.2350.22 before going to a higher version.
[2] Due to defect 66378, upgrades from 8.1.2350.22 (or earlier 8.X builds) require
careful EVS migration between nodes during rolling upgrades to 8.1.2350.22, and
again from here to any higher version. See release notes for detailed instructions.
[3] Due to defect 66551, file systems will not mount without intervention in the event
of a failover from a node running 10.2 to a node running 10.0, so a cluster should not
be left with a node on each level any longer than necessary for the upgrade process.

Software Version Example from Daily Summary Email

Saving External SMU Configuration Before Upgrade

Saving the SMU configuration manually:

1. From the Home page, click SMU Administration; then click SMU Backup.
2. Click Backup.
3. Choose a location (on your PC) to store/archive the configuration.
4. Click OK. A copy of that backup is also kept on the SMU.
The SMU automatic backup runs daily, and the last 14 backups are saved on the
SMU.
Important: An internal SMU’s backup can only be restored to an internal SMU,
and an external backup only to an external SMU.

Saving Embedded SMU and 30x0/4xx0 Server Registry

External SMU SW Upgrade and Downgrade

 Since the 5.0.xxx.xx release (including CentOS 4.4), the external SMU is
  partitioned for a dual-boot concept
 This concept enables easy fallback to an earlier version in case of a
  problem
 To ensure fallback to an older version, always use:
  second-kvm — second SMU OS install using KVM, or
  second-serial — second SMU OS install using the serial console

[Diagram: The external SMU holds two OS partitions (for example, version 8.0
and version 7.0); the GRUB loader, driven by the smu-boot-alt-partition
command, selects which one boots.]

Note: Code upgrades from 8.x to 10.x and 10.0 to 10.2 require a clean-kvm or clean-
serial upgrade. Fallback means downgrade to the earlier SMU version, and
will again be a "clean" process. Configuration backups are essential for both
up- and downgrade.

1a. Selecting CentOS Installation Method Second

 Keep the current version and the configuration


 Easy fall back to previous version

This process only installs the Linux Operating system and makes the other partition
ready to host the SMU Application in the next step.

1b. Selecting CentOS Installation Method Clean

 Format the complete HDD and delete the configuration


 No fall back to previous version

This process formats the complete HDD and installs the Linux operating system and
makes one partition ready to host the SMU Application in the next step.

2. External SMU Application Upgrade Procedures

 Connect a serial null-modem cable
 115200 baud, 8 data bits, no parity, 1 stop bit, no flow control
 Install the update by running the update script
 Log in as root
 Insert the upgrade CD, close the CD/DVD drive, and wait 5 seconds
 From the prompt, mount the CD/DVD by typing:
  mount /media/cdrecorder or mount /media/cdrom
 Then type:
  /media/cdrecorder/autorun or /media/cdrom/autorun
  to start the upgrade process
 Upon completion, the system will reboot automatically

This process installs the HNAS SMU application, using the CD/DVD player built
into the external SMU.

Embedded SMU Upgrade and Downgrade 30x0/4xx0

 Embedded SMU:
• To transfer the updated files select your preferred method:
▪ Connect DVD/CD player with media to the node
▪ SCP ISO image to the node and mount the ISO file
▪ Connect a USB memory stick and mount the ISO file
▪ Transfer the packages over HTTP using the GUI (Avoid Wi-Fi!)
• Install and upgrade the same as the external SMU
• Uninstall process is available for the Embedded SMU only
• Downgrade of the embedded SMU is performed by uninstalling and then
  reinstalling

 Linux upgrade/downgrade:
• Will be automated
• Linux patching has until now fixed known issues
 Embedded SMU program will be uninstalled on nodes assembled in
2013 and later

Upgrade of Embedded SMU SW from the GUI

Browse to the ISO image file on your client computer.

Start the upgrade process.

This upgrade procedure using HTTP to upload an ISO image should only be used
for embedded SMU upgrade. The external SMU should NOT be upgraded using this
method.

Model 30x0 and 4xx0 Server Upgrade Procedures

1. Under Server Settings, click Upgrade Firmware
2. Select the managed server
3. Specify the location of the firmware files and click Apply

You have an option to pre-stage the firmware without rebooting.

The firmware file for Hitachi NAS 3080 and 3090 must be in tar format.

Hitachi Command Suite (HCS) and Device Manager

Hitachi Command Suite (HCS) 7.3.0

HCS version 7.3.x supports link and launch, calling the appropriate page in
the SMU web GUI.

Hitachi Command Suite (HCS) Version 7.4 and up

Over time with newer releases more and more functions will be executed as CLI
commands in the background, making it transparent to the users how the task is
executed.

SNMP Manager Connectivity (First SNMP Hi-Track)

[Diagram: Three possible paths for an SNMP manager to reach the admin EVS of
each HNAS 4xx0: (1) over the private management network, (2) over the
customer-facing management network, or (3) over the customer data network.]

The first implementation of Hi-Track used Hi-Track Monitor as an
HDS-programmed SNMP manager server. This method has been superseded by
Hi-Track using the SMU CLI, logging into the SMU.
Questions You/Your Customer Need to Answer
Scenario 1, using the private (red eth1) network:
 Will you allow SNMP Manager Monitor on this network?
Scenario 2, using the customer facing management (blue eth0) network:
 Do you have connectivity for both eth0 connectors?
 Have you configured an AVN IP address on both eth1 and eth0?
Scenario 3, using the customer facing data (green ag1-8) network:
 Does the customer allow monitoring/SNMP traffic on his file services data
network?
 Have you configured an AVN IP address on both eth1 and the AG?

SNMP Agent Configuration on the Hitachi NAS Node

Adding a community called “public” as RO (Read Only) is all that is required
to configure the NAS node, so the SNMP manager can get information from the
SNMP agent. Most often, customers define the community to be used.

Hi-Track Monitor SNMP Configuration

 Configuring the SNMP Hi-Track Monitor is done in the same way as for FC
switches and NetApp NAS Gateways.
 Type in the serial number correctly since this is not interrogated from the
management information base (MIB) file in the SNMP agent.
 The IP address is represented by an administrative EVS IP address addressable
through the customer’s network and aggregates.
 SNMP Access ID reflects the public RO community defined before in the Hitachi
NAS node.
 This method is still supported, but the install base is rapidly migrating to the
new SMU CLI method with a lot more detailed information and capabilities.

Monitoring Devices

Hi-Track Monitor Version 5.7 and Up

 From Hi-Track Monitor version 5.7 and up, a new monitoring method
has been introduced
 Hi-Track monitor: log into SMU using SSH and manager account
 Will monitor all entities managed by SMU
 Remote user account can be customized
 Only the SMU IP address needs to be registered
 Hitachi NAS (HNAS) Server accounts will automatically be registered
 Issuing commands against Admin EVS such as:
diagshowall and eventlog-show

Connectivity of Hi-Track Monitor SSH to SMU

[Diagram: Hi-Track Monitor reaches the SMU via SSH either (1) over the private
management network or (2) over the customer-facing management network; the SMU
in turn manages the admin EVS of each HNAS 4xx0.]

Questions You/Your Customer Need to Answer


Scenario 1, using the private (red eth1) network:
 Will you allow Hi-Track Monitor on this network?
 Does the Hi-Track server have a second NIC card to Hi-Track DB connectivity?
 Where do you monitor, as an example, the Modular Storage product family?

Scenario 2, using the customer-facing management (blue eth0) network:


 Do you have connectivity for both eth0 connectors?
 Have you configured an AVN IP address on both eth1 and eth0?
 Are the DF products monitored in this network?
 Do you need a second NIC to the Hi-Track DB connection?

Hi-Track Monitor using SMU

Monitoring Devices

Detail Status of the SMU

Detail Status of the Cluster

Hi-Track Graphical Configuration Output

Logical View

Best Practices Check

Module Summary

 In this module, you have learned to:


• Differentiate the IP addresses used to identify different components and
functions in the Hitachi NAS Platform
• List the different management facilities
• Recognize the naming and versioning convention for the software in
System Management Unit (SMU) and node
• Follow the upgrade procedures for hardware and software
• Install and configure Hi-Track Remote Monitoring system

Module Review

1. List the configuration parameters needed on the nodes to enable Hi-Track
   monitoring.
2. List some help functions discussed in the module.
3. Which software upgrades can be performed as “rolling upgrades”?
4. What is a mandatory requirement before starting the software
upgrade procedure?



9. Troubleshooting and
Replacement
Module Objectives

 Upon completion of this module, you should be able to:


• Set up the monitoring and reporting tools
• Recognize error messages created by reporting tools
• Gather necessary information for escalation
• Identify the required standard documentation to implement replacement
processes
• Recognize the importance of electrostatic discharge (ESD) precautions

Other Hitachi NAS Platform Management Interfaces

 Call-home Mechanism:
• SMTP-based mechanism for alerts and monitoring
• Selective notification profiles
• Daily performance data included
 SNMP v1/v2c
 Syslog
 Telnet/SSH/SSC access to NAS Platform Nodes (admin EVS),
command line interface (CLI)
 Hi-Track Monitor from version 3.8 and up
 Hitachi Device Manager software
 Hitachi Command Suite (HCS)

Storage Array Setup

 Storage is managed using Hitachi Data Systems native utilities


• Hitachi Storage Navigator program
• Service Processor (SVP)
• Maintenance PC
• Management PC
• Hitachi Storage Navigator Modular (HSNM and HSNM2)
• Web browser
• Device Manager software
• And others

Alert SMTP Connectivity

[Diagram: Three possible SMTP alert paths from the admin EVS of each HNAS
4xx0: (1) to the SMU, which forwards the alerts to the customer’s SMTP server;
(2) over the customer-facing management network; (3) directly to an SMTP
server reachable over the data network.]

Questions You/Your Customer Need to Answer:

Scenario 1, using the private (red eth1) network:
 Have you configured the AVN to alert the SMU IP?
 As the SMU can only use DNS names, is DNS working?
 Have you configured the SMU to relay the SMTP alerts?
 Do you have connectivity to the customer’s SMTP server over the blue
  network?
Scenario 2, using the customer-facing management (blue eth0) network:
 Do you have connectivity for both eth0 connectors?
 Have you configured an AVN IP on both eth1 and eth0?
 Do you have connectivity to the customer’s SMTP server over the blue
  network?
Scenario 3, using the customer-facing data (green ag1-8) network:
 Does the customer allow monitoring/SMTP traffic on its file services data
  network?
 Have you configured an AVN IP address on both eth1 and the AG?
 Have you configured the AVN to alert to the customer’s SMTP server (name or
  IP address)?

Configuring SMTP Servers

 Configure a primary and secondary SMTP server


• Use the SMU’s private network IP (like 192.0.2.60) as a mail server

Configuring SMU Email Alerts Forwarding

 Select SMU email forwarding

Set up Email Forwarding on the SMU

 Insert the name or IP address of the customer’s SMTP server
 DNS functionality is essential for SMU email forwarding using names

Set Up Email Profile

1. From the Home page, click Status & Monitoring.


2. Click Email Alerts Setup.
3. Click Add.
4. Give the profile a name.
5. Modify the defaults as desired (the screen above shows the defaults).
6. Create an email text.
7. Add one or more recipients.
8. Click OK.


Daily Health Check Email

This screen shows an example of the Daily Health Check email.


Alerts Summary Email

This screen shows an alerts summary received by email, as requested.


Diagnostic Download

 Download complete system diagnostics log through the GUI


 The first diagnostic should be executed before troubleshooting
 Diagnostic logs may be emailed

 Server diagnostics can be sent from the server’s CLI by issuing the
following command: diagemail <email_address>
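
For example, to email the current server diagnostics to a support mailbox (the
address shown is a placeholder):

diagemail support@example.com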


Diagnostic Report: Email for the Nodes

The screen above shows the diagnostic report for both nodes in a cluster.


Diagnostic Report: Email for SMU and More

The screen above shows the diagnostic reports for the SMU as well as for the FC
switch. At the moment, the storage diagnostics do not cover Hitachi Data Systems
storage.


Performance Information Report (PIR)

 The PIR provides explicit and granular details on dozens of
performance-relevant server statistics
 The SMU GUI can provide some graphical overview

 Custom PIRs may be sent from the server’s CLI:

pir [<duration in minutes>] [-r <;-separated recipients>]
    [-s <subject>] [--volume <volume>] [--cancel]
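
For example, a 10-minute PIR mailed to one recipient with a custom subject could
be requested as follows (duration, recipient, and subject are illustrative):

pir 10 -r support@example.com -s "Peak hour PIR"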


Performance Graph


Using the trouble Command

lab1-1:$ trouble
...truncated...
fs-protocols:cifs (on FSA; base priority 200)
Domain Controller 192.168.1.63 on EVS 3:
Priority 201: Pnode 1 FSA:
Unable to contact Domain Controller.
Problem with DC on local address: 172.20.20.31
Problem with DC on local address: 172.30.30.31
Fix problems with CIFS names first, if necessary.
Check EVS 3's machine account(s) on the Domain Controller(s).
A machine account should be configured for each of the EVS's
CIFS names.
Use the 'vn 3 cifs-dc prod' command to initiate a Domain
Controller reconnect.
To see: vn 3 cifs-name list
To see: vn 3 cifs-dc list -v
To see: vn 3 cifs-dc-errors
[trouble took 1.30 min.]
lab1-1:$


trouble Reporter Examples


trouble Performance Reporter Examples


Server-Based Packet Capturing

 Built-in network capture utility, accessible through CLI


 Captures on any interface, including multi-port aggregations!
 Protocol- and host-based filtering
 Captures can be sent from the server by email

 WARNING: Not for use on a NAS node in production!

 Usage example:
packet-capture --start --filter "host 10.2.1.1" ag1
packet-capture --stop ag1
nail -n -a tmp -s "My Capture" <email_address>
ssc <IP_Address> ssget tmp ~/capture.cap
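
The same workflow again, with explanatory comments added; the host address,
email address, and node IP are illustrative, and the filter string is assumed to
follow pcap-style syntax:

packet-capture --start --filter "host 10.2.1.1" ag1   # start capturing on aggregation ag1
packet-capture --stop ag1                             # stop the capture (file tmp in this example)
nail -n -a tmp -s "My Capture" noc@example.com        # email the capture file as an attachment
ssc 10.0.0.1 ssget tmp ~/capture.cap                  # or copy it off the node for offline analysis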


Fascia (Bezel) Removal


Model 30x0 G1 Fan Replacement Procedure

Please refer to the Hitachi NAS Platform Hardware Reference, MK-99BA013-.

1. Remove the fascia.
2. Identify the fan to be replaced.
3. Fans are labeled on the chassis, numbered 1 to 3.
4. Disconnect the fan lead from its adjacent connector.
5. Remove the upper fan retention bracket.
6. Remove the lower fan retention bracket of the fan that is being
replaced.
7. The fan can now be replaced.
8. The new fan must be fitted in the same way, with the arrow
indicating the direction of airflow into the server.
9. Secure the fan by fitting the brackets in the reverse order and
reconnecting the fan.


Model 30x0 G1 Removing Fan Unit

[Figure: fan unit, showing the upper fan retention bracket, the lower fan
retention bracket, and the fan power connector lead]


Model 30x0 G2/4xx0 Fan Replacement

30x0 G2 and 4xx0:

Please refer to the Hitachi NAS Platform Hardware Reference, MK-90BA030- or
MK-92HNAS030-.


Model 30x0/4xx0 Battery Pack

 Secured in place when the fascia is fitted


• NiMH; provides 72 hours of NVRAM backup
 Conditioning
• Regular conditioning cycle
• Can run a full conditioning cycle to properly determine the current
capacity
 Replacement
• By removing the fascia
• Only replace with a pack bearing the same part number
• Procedure will be supplied
 Lifetime
• Minimum two years of life
• Alert generated when replacement is required
• Shelf life of spares is six months
• Store packs between 10°C and 25°C for optimal life


General Battery Precautions

 Batteries left in a system that has been improperly powered down
will drain beyond usefulness sometime after 72 hours
 If the battery is left connected in an improperly shut down system,
the battery must be recharged within 30 days
 If the system is to be powered down for an extended period, from the
server console, run the CLI command: shutdown --ship
 Wait 10-15 seconds, then check that the NVRAM status LED is off
 When the NVRAM status LED is off, the batteries will no longer
power the NVRAM, and the nodes are shut down correctly for
storage and/or shipment
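
As a minimal console sketch of that sequence (the LED check is a visual step,
noted here as a comment):

shutdown --ship
# wait 10-15 seconds, then confirm the NVRAM status LED is off before
# the nodes are powered down for storage or shipment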


Model 30x0 G1 NVRAM Battery Replacement

Please refer to the Hitachi NAS Platform Hardware Reference, MK-99BA013-.

 Battery Replacement Procedure
• Remove the fascia
• Disconnect the battery lead from its adjacent connector
• The battery can now be replaced. The new battery must be fitted in the
same orientation, with the lead exiting the back face of the pack at the
bottom
• Reconnect the new battery
• Battery replacement should be done as quickly as possible and only
when the new pack is at hand. NVRAM is not battery backed while the
battery is disconnected
(High-performance NAS Platform has two batteries, one in each PSU)

When the server is powered down following a clean shutdown and NVRAM is not
in battery backup mode, the battery will still self-discharge at approximately 1% per
day. If the NVRAM is still in battery backup mode, as indicated by the flashing
NVRAM LED, the battery can be manually isolated using the reset button (see the
reset button description).
Battery packs have a shelf life of up to 6 months** before conditioning is
recommended. Conditioning tests the pack and maintains optimal capacity. When
fully charged, the battery can be left fitted in a server in storage for a maximum of 6
months**.
Battery conditioning equipment will be made available at key service sites and does
not require Hitachi NAS Platform hardware.
** Life testing is ongoing to determine if these limits can be increased.


Model 30x0 G1 Battery Connector

 Remember to disconnect the battery lead, releasing the latch on the
left side of the connector


Model 30x0 G2/4xx0 Battery Replacement

30x0 G2 and 4xx0:

Please refer to the Hitachi NAS Platform Hardware Reference, MK-90BA030- or
MK-92HNAS030-.


Battery Replacement in Caddy

The spare battery for the G1 version is stocked as the G2 version. This means that
for the G1, the battery needs to be removed from the caddy. The battery in the
caddy is compatible with both the G1 and G2 generations.


Model 30x0 G1 Hard Disk Replacement Procedure

1. Shut down the server and disconnect power to both PSUs


2. Remove the fascia and one or both fans. Disconnection of drives is
easier with both fans 1 and 2 removed
3. Identify the drive to be replaced. Drives are labeled on the chassis
and named A and B as shown. Replace one drive only
4. Disconnect the power connector and SATA cable from the drive. Do
NOT disconnect the SATA cable from the motherboard
5. Undo the thumbscrew on the drive carrier and slide the carrier out
6. If the replacement drive is not already fitted to a carrier, then
remove the four screws fixing the faulty drive to the carrier and fit
the new drive into the carrier in the same orientation. Re-fit the
carrier by locating it in the lugs and tightening the thumbscrew
7. Reconnect the drive power and SATA cable
8. Replace the fans and fascia
9. The system will configure the new drive on reboot; however, user
interaction is required to run the appropriate script

The hard drive is mounted in the carrier using four “Torx” fixing screws.
Use a Torx T10 screwdriver.


Model 30x0 G1 Hard Disk Cabling and Positioning

Do not borrow an HDD from another node as a spare part. The HDD needs to be
new and blank, from the spares warehouse. Otherwise the procedures will not work,
and there is a severe risk of booting an incorrect image.


Model 30x0 G2/4xx0 G2 Hard Disk Replacement

30x0 G2 and 4xx0:

Please refer to the Hitachi NAS Platform Hardware Reference, MK-90BA030- or
MK-92HNAS030-.


Hardware Field System Testing

 Manufacturing Test and Diagnostic Software (MTDS)


Manufacturing Test and Diagnostic Software (MTDS)

 MTDS is primarily used for testing many different parts of the
Mercury server hardware during production
 The MTDS field test runs a test list designed for testing the hardware
in the field and assessing whether the hardware is OK
 The MTDS field test runs approximately 100 different hardware tests
aimed at testing the Mercury FPGA board, but it also performs tests on
the HDDs, chassis fans, PSUs, and so on
Note: The MTDS field test does not perform a memory test on the MMB
memory. To test the MMB memory in the field, you need to run
memtest86+


MTDS Console

 Connect a KVM or a console RS-232 connection


• Null modem cable
• Terminal server connected to the console
 The Mercury server must be stopped before running the mtds command


MTDS Test Commands

 Available commands are:


battery-test      bring-up-test     cpu-cmos-test     cpu-dmi-test
cpu-mem-test      cpu-sensors-test  data-sizes        debug-test
dimm-qual-test    dvt               emc-test          ess-test
eth-switch-test   fan-fru-test      fan-test          fc-port-test
field-test        fpga-prog-test    fpga-ram-test     fpga-sdram-conf
ge-port-test      glue-logic-test   hdd-test          i2c-test
inter-fpga-test   led-test          manufacture-test  mbi-flash-test
mcp-test          mfb-test          monitors          nvram-test
pcie-test         post              pre-test          psu-fru-test
psu-test          rom-test          seeprom-test      stress-test
system-config     tg-port-test      thermal-test      versions-check
voltage-test
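
Judging from the field-test invocation shown on the next pages, an individual test
from this list is presumably run by passing its name to the mtds binary; a sketch,
not a verified procedure:

/opt/mercury-mtds/bin/mtds hdd-test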


Executing: mtds field-test

/opt/mercury-mtds/bin/mtds field-test
adm46:/home/manager# /opt/mercury-mtds/bin/mtds field-test
Version : 8.1.2351.09
Directory : /home/builder/vampire/2351.09/main/bin/x86_64_linux-bart_libc-
2.7_release
Build date : Aug 5 2011, 10:05:52
Log file path: /var/opt/mtds/log/B1038029_log/
chassis-monitor process detected; PID = 3487
Are you sure you want to stop the Mercury server? (y/n) y
Successfully requested mfb.elf reset, waiting for exit.
If it takes longer than 300 seconds to exit it will be killed....mfb exited
successfully after 7 seconds.
Waiting for stop script to complete
Wait 10000mS......Stopped
Test list settings:
Continue-on-fail
Stress phase time: 5 minutes
2012-03-22 17:32:14
Executing list of 108 tests for 1 cycles
(Some additional tests may be run if all tests pass)
+ Test 2002: pcie-test mbi-pci-check ...............................Passed
+ Test 2001: pcie-test mbi-scratch .................................Passed
+ Test 2201: glue-logic-test glue-register-test ....................Passed
+ Test 2210: glue-logic-test check-failover-interface ..............Passed
...truncated...


Ending: mtds field-test


...truncated...
Testing complete : 2012-03-22 17:43:49
Overall result : PASSED
Tests run : 114, Passed: 114
Test success rate : 100.00%
----------------------------------------------------------------
Re-starting monitors
[====================]
Setting kernel variables (/etc/sysctl.conf)...done.
Setting kernel variables (/etc/sysctl.d/mercury-platform.conf)...done.
Reloading internet superserver configuration: xinetd.
Motherboard is Tyan S5211
Chassis monitor daemon started
Chassis monitor is now running as expected, PID = 11168
Checking the chassis drive configuration is valid and fault tolerant
Starting checks
Checks complete
No changes were made to the configuration.
RAID monitor daemon started: Version: 8.1.2351.09
MMB monitor shared memory initialized.
CPU frequency states: 2
MMB monitor daemon initialized. Poll interval=60
Monitors re-started
Elapsed: 11m34.331s
adm46:/home/manager# RAID monitor daemon initialized. Poll interval=60
File=/proc/mdstat
adm46:/home/manager#


Mercury Motherboard Memory Test Memtest86+

 The BlueArc-customized version of memtest86+ is available on all
HNAS 30x0 servers installed with SU 7.0 or later from the factory
 Connect a KVM to the server, reboot the server, ‘break into’ the
GRUB menu (by hitting a key during the GRUB loader) and select
the “MEMTEST” option
 This version of memtest allows a repeat count to be specified on the
kernel line (in the GRUB menu). If a repeat count hasn't been
specified, it defaults to -1


Unrecoverable Configuration or Logical Errors

 HNAS configuration
 Linux configuration


Factory Reset to Default Assessment

 Should NOT be the first tool to use when facing configuration issues


 Can be used as the final recovery tool for:
• Uncorrectable configuration issues
• Linux corruption
• Node boot problems
• Bringing nodes back to factory defaults
• Rebuilding the production partition for reinstallation
 The tool will not fix GRUB boot loader issues
 Recovery partitions need to be intact
 Should NOT be used when one of the HDDs is in error


Fixing Logical Errors

 Read and understand the:


• FE-90BA022-xx “Resetting Servers to Factory Defaults” before execution
 Always open a case with GSC and ask for supervision
 Cases are essential for tracking, statistics, and quality improvement
 Let GSC advise which version is appropriate in your case
 There will only be one image for every major release
 After recovery, a firmware upgrade might be required
 Remember the hwdb parameter
(if you do not want to end up with another MAC ID!)
 Unmount the memory stick before reboot
 Have a good connection with the man with the long beard above the
clouds


Resetting Servers to Factory Defaults

 Request the USB recovery files for the build version on your system
 The /var, /opt, and / (NOT /root only!) partitions will be overwritten
 Locate a USB memory stick (4GB minimum)
 Create a USB memory stick with the recovery files (be careful when using WinZip!)
 Boot into the “Mercury Recovery” partition using the GRUB menu
 Check that both /dev/sda and /dev/sdb are accessible
 Mount the USB stick (/dev/sdc1) to /mnt
 Run the following, as sketched below:
• /mnt/mercury-reinstall-main-partitions --preserve-hwdb
• reboot
• nas-preconfig
• reboot
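
A minimal console sketch of the same sequence, assuming the memory stick
enumerates as /dev/sdc1:

mount /dev/sdc1 /mnt                                     # mount the recovery stick
/mnt/mercury-reinstall-main-partitions --preserve-hwdb   # rewrite /, /var and /opt, keeping the hardware database (MAC ID)
umount /mnt                                              # unmount the memory stick before rebooting
reboot
nas-preconfig                                            # after the first reboot, re-run preconfiguration
reboot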


HNAS Server Node Replacement

 Only the 3090, 4100, and 4060 models will be stocked as spare parts
 If a 3080 replacement is required, a USB conversion tool is required
 Conversion tools are tracked and are required to be returned after use!
 Replacing a 4080 with a 4060 in a cluster will turn the 4060 into a 4080
 The G1 model and G2 model have different spare part numbers
 If the node to be replaced is in a single node configuration, a complete
set of license keys is required for the new node
 FPGA package replacement in a G2 model will not change the MAC ID
 Replacing a node in a cluster does not require new license keys
 The “Hitachi NAS Platform Server Replacement Procedures” document
lists all the steps needed for a successful node replacement

Things to consider:
 G1 or G2 model
 G2 chassis do not include the MFB package
 The MFB package does not fit into the G1 model
 A new MAC ID in a single node configuration means new licenses
 New network MAC ID
 Change of ownership of storage pools for a single node
 WWN zoning
 LUN security
 Conversion tool (3080)
 Planning


Spare Part List Model 30x0

 Spare part lists for model 4xx0 will be available after GA


• Link to HNAS logistics: http://logistics.hds.com/Spares/main_BLU.htm

Stay updated with the latest spare part list under Logistics Global:
http://logistics.hds.com/Spares/main_BLU.htm


Spare Part List SMU, Switches, and Optics


General Precautions

 Proper ESD precautions should be used any time you work on the
node system

 Proper ventilation and cooling of all components relies on the system
being “intact”
 Lifting:
• All of the node components, especially drive enclosures, are extremely
heavy and require two people to lift them


Module Summary

 In this module, you have learned to:


• Set up the monitoring and reporting tools
• Recognize error messages created by reporting tools
• Gather necessary information for escalation
• Identify the required standard documentation to implement replacement
processes
• Recognize the importance of electrostatic discharge (ESD) precautions


Module Review

1. List some error reporting tools supported by the Hitachi NAS Platform.
2. How is Hitachi storage monitored and managed?
3. Which email accounts can receive email notifications?
4. Which requirements do we have to specify for the customer to get
email alerting to work?
5. Can network traffic be monitored on a specific aggregate without
externally connected network analyzers?



Your Next Steps

Certification:
http://www.hds.com/services/education/certification
Learning Center:
http://learningcenter.hds.com
White Papers:
http://www.hds.com/corporate/resources/


Learning Paths:
APAC:
http://www.hds.com/services/education/apac/?_p=v#GlobalTabNavi

Americas:
http://www.hds.com/services/education/north-america/?tab=LocationContent1#GlobalTabNavi

EMEA:
http://www.hds.com/services/education/emea/#GlobalTabNavi

HDS Community:
http://community.hds.com - Open to all customers, partners, prospects, and
internals
theLoop:
http://loop.hds.com/message/18879#18879 ― HDS internal only
LinkedIn:
http://www.linkedin.com/groups?home=&gid=3044480&trk=anet_ug_hm&goback=%2Emyg%2Eanb_3044480_*2
Twitter:
http://twitter.com/#!/HDSAcademy



Communicating in a Virtual Classroom — Tools and Features
Virtual Classroom Basics

Overview of Communicating in a Virtual Classroom

 Chat
 Q&A
 Feedback Options
• Raise Hand
• Yes/No
• Emoticons
 Markup Tools
• Drawing Tools
• Text Tool


Reminders: Intercall Call-Back Teleconferencing


Feedback Features — Try Them

[Feedback buttons: Raise Hand, Yes, No, Emoticons]


Markup Tools (Drawing and Text) — Try Them

[Toolbar labels: Pointer, Text Tool, Writing Tools, Drawing, Highlighter,
Annotation Colors, Eraser]


Transferring Your Audio to Virtual Breakout Rooms

 Automatic
• With Intercall / WebEx Teleconference Call-Back Feature
 Otherwise
• To transfer your audio from Main Room to virtual Breakout Room
1. Enter *9
2. You will hear a recording – follow instructions
3. Enter Your Assigned Breakout Room number #
 For example, *9 1# (Breakout Room #1)
• To return your audio to Main Room
 Enter *9


Intercall (WebEx) Technical Support

 800.374.1852


WebEx Hands-On Labs

WebEx Hands-On Lab Operations

 From the session, the Instructor starts the Hands-On remote lab


 Instructor assigns lab teams (each lab team is assigned to a computer)
 Learners are prompted to connect to their lab computer
• Click Yes

 After connecting to the lab computer, learners see a message asking
them to disconnect and connect to the new teleconference
• Click Yes
You do not need to hang up and dial a new number;
Intercall auto-connects you to the lab conference.

WebEx Hands-On Lab Operations

 Instructor can join each lab team’s conference.


 Members of a lab group can communicate:
• With each other, using CHAT (lower right-hand corner of the
computer screen) and telephone

• With Instructor using Raise Hand feature


 Only one learner is in control of the lab desktop at any one time.
• To pass control, select the learner’s name and click the Presenter Ball



Training Course Glossary
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z

—A—

AaaS — Archive as a Service. A cloud computing business model.
AAMux — Active-Active Multiplexer.
ACC — Action Code. A SIM (System Information Message).
ACE — Access Control Entry. Stores access rights for a single user or group within the Windows security model.
ACL — Access Control List. Stores a set of ACEs, so that it describes the complete set of access rights for a file system object within the Microsoft Windows security model.
ACP ― Array Control Processor. Microprocessor mounted on the disk adapter circuit board (DKA) that controls the drives in a specific disk array. Considered part of the back end; it controls data transfer between cache and the hard drives.
ACP Domain ― Also Array Domain. All of the array-groups controlled by the same pair of DKA boards, or the HDDs managed by 1 ACP PAIR (also called BED).
ACP PAIR ― Physical disk access control logic. Each ACP consists of 2 DKA PCBs to provide 8 loop paths to the real HDDs.
Actuator (arm) — Read/write heads are attached to a single head actuator, or actuator arm, that moves the heads around the platters.
AD — Active Directory.
ADC — Accelerated Data Copy.
Address — A location of data, usually in main memory or on a disk. A name or token that identifies a network component. In local area networks (LANs), for example, every node has a unique address.
ADP — Adapter.
ADS — Active Directory Service.
AIX — IBM UNIX.
AL — Arbitrated Loop. A network in which nodes contend to send data, and only 1 node at a time is able to send data.
AL-PA — Arbitrated Loop Physical Address.
AMS — Adaptable Modular Storage.
APAR — Authorized Program Analysis Reports.
APF — Authorized Program Facility. In IBM z/OS and OS/390 environments, a facility that permits the identification of programs that are authorized to use restricted functions.
API — Application Programming Interface.
APID — Application Identification. An ID to identify a command device.
Application Management — The processes that manage the capacity and performance of applications.
ARB — Arbitration or request.
ARM — Automated Restart Manager.
Array Domain — Also ACP Domain. All functions, paths, and disk drives controlled by a single ACP pair. An array domain can contain a variety of LVI or LU configurations.
Array Group — Also called a parity group. A group of hard disk drives (HDDs) that form the basic unit of storage in a subsystem. All HDDs in a parity group must have the same physical capacity.
Array Unit — A group of hard disk drives in 1 RAID structure. Same as parity group.
ASIC — Application specific integrated circuit.
ASSY — Assembly.
Asymmetric virtualization — See Out-of-band virtualization.
Asynchronous — An I/O operation whose initiator does not await its completion before
proceeding with other work. Asynchronous I/O operations enable an initiator to have multiple concurrent I/O operations in progress. Also called Out-of-band virtualization.
ATA — Advanced Technology Attachment. A disk drive implementation that integrates the controller on the disk drive itself. Also known as IDE (Integrated Drive Electronics) Advanced Technology Attachment.
ATR — Autonomic Technology Refresh.
Authentication — The process of identifying an individual, usually based on a username and password.
AUX — Auxiliary Storage Manager.
Availability — Consistent direct access to information over time.

—B—

B4 — A group of 4 HDU boxes that are used to contain 128 HDDs.
BA — Business analyst.
Back end — In client/server applications, the client part of the program is often called the front end and the server part is called the back end.
Backup image — Data saved during an archive operation. It includes all the associated files, directories, and catalog information of the backup operation.
BADM — Basic Direct Access Method.
BASM — Basic Sequential Access Method.
BATCTR — Battery Control PCB.
BC — (1) Business Class (in contrast with EC, Enterprise Class). (2) Business coordinator.
BCP — Base Control Program.
BCPii — Base Control Program internal interface.
BDW — Block Descriptor Word.
BED — Back end director. Controls the paths to the HDDs.
Big Data — Refers to data that becomes so large in size or quantity that a dataset becomes awkward to work with using traditional database management systems. Big data entails data capacity or measurement that requires terms such as Terabyte (TB), Petabyte (PB), Exabyte (EB), Zettabyte (ZB) or Yottabyte (YB). Note that variations of this term are subject to proprietary trademark disputes in multiple countries at the present time.
BIOS — Basic Input/Output System. A chip located on all computer motherboards that governs how a system boots and operates.
BLKSIZE — Block size.
BLOB — Binary Large OBject.
BP — Business processing.
BPaaS — Business Process as a Service. A cloud computing business model.
BPAM — Basic Partitioned Access Method.
BPM — Business Process Management.
BPO — Business Process Outsourcing. Dynamic BPO services refer to the management of partly standardized business processes, including human resources delivered in a pay-per-use billing relationship or a self-service consumption model.
BST — Binary Search Tree.
BSTP — Blade Server Test Program.
BTU — British Thermal Unit.
Business Continuity Plan — Describes how an organization will resume partially or completely interrupted critical functions within a predetermined time after a disruption or a disaster. Sometimes also called a Disaster Recovery Plan.

—C—

CA — (1) Continuous Access software (see HORC), (2) Continuous Availability or (3) Computer Associates.
Cache — Cache Memory. Intermediate buffer between the channels and drives. It is generally available and controlled as two areas of cache (cache A and cache B). It may be battery-backed.
Cache hit rate — When data is found in the cache, it is called a cache hit, and the effectiveness of a cache is judged by its hit rate.
Cache partitioning — Storage management software that allows the virtual partitioning of cache and allocation of it to different applications.
CAD — Computer-Aided Design.
CAGR — Compound Annual Growth Rate.
Capacity — Capacity is the amount of data that a storage system or drive can store after configuration and/or formatting. Most data storage companies, including HDS, calculate capacity based on the premise that 1KB = 1,024 bytes, 1MB = 1,024 kilobytes, 1GB = 1,024 megabytes, and 1TB = 1,024 gigabytes. See also Terabyte (TB), Petabyte (PB), Exabyte (EB), Zettabyte (ZB) and Yottabyte (YB).
CAPEX — Capital expenditure — the cost of developing or providing non-consumable parts for the product or system. For example, the purchase of a photocopier is the CAPEX, and the annual paper and toner cost is the OPEX. (See OPEX).
CAS — (1) Column Address Strobe. A signal sent to a dynamic random access memory (DRAM) that tells it that an associated address is a column address. CAS-column address strobe sent by the processor to a DRAM circuit to activate a column address. (2) Content-addressable Storage.
CBI — Cloud-based Integration. Provisioning of a standardized middleware platform in the cloud that can be used for various cloud integration scenarios. An example would be the integration of legacy applications into the cloud or integration of different cloud-based applications into one application.
CBU — Capacity Backup.
CBX — Controller chassis (box).
CCHH — Common designation for Cylinder and Head.
CCI — Command Control Interface.
CCIF — Cloud Computing Interoperability Forum. A standards organization active in cloud computing.
CDP — Continuous Data Protection.
CDR — Clinical Data Repository.
CDWP — Cumulative disk write throughput.
CE — Customer Engineer.
CEC — Central Electronics Complex.
CentOS — Community Enterprise Operating System.
Centralized management — Storage data management, capacity management, access security management, and path management functions accomplished by software.
CF — Coupling Facility.
CFCC — Coupling Facility Control Code.
CFW — Cache Fast Write.
CH — Channel.
CH S — Channel SCSI.
CHA — Channel Adapter. Provides the channel interface control functions and internal cache data transfer functions. It is used to convert the data format between CKD and FBA. The CHA contains an internal processor and 128 bytes of edit buffer memory. Replaced by CHB in some cases.
CHA/DKA — Channel Adapter/Disk Adapter.
CHAP — Challenge-Handshake Authentication Protocol.
CHB — Channel Board. Updated DKA for Hitachi Unified Storage VM and additional enterprise components.
Chargeback — A cloud computing term that refers to the ability to report on capacity and utilization by application or dataset, charging business users or departments based on how much they use.
CHF — Channel Fibre.
CHIP — Client-Host Interface Processor. Microprocessors on the CHA boards that process the channel commands from the hosts and manage host access to cache.
CHK — Check.
CHN — Channel adapter NAS.
CHP — Channel Processor or Channel Path.
CHPID — Channel Path Identifier.
CHSN or C-HSN — Cache Memory Hierarchical Star Network.
CHT — Channel tachyon. A Fibre Channel protocol controller.
CICS — Customer Information Control System.
CIFS protocol — Common internet file system is a platform-independent file sharing system. A network file system access protocol primarily used by Windows clients to communicate file access requests to Windows servers.
CIM — Common Information Model.
CIS — Clinical Information System.
CKD ― Count-key Data. A format for encoding data on hard disk drives; typically used in the mainframe environment.
CKPT — Check Point.
CL — See Cluster.
CLI — Command Line Interface.
CLPR — Cache Logical Partition. Cache can be divided into multiple virtual cache memories to lessen I/O contention.
Cloud Computing — “Cloud computing refers to applications and services that run on a distributed network using virtualized resources and accessed by common Internet protocols and networking standards. It is distinguished by the notion that resources are virtual and limitless, and that details of the physical systems on which software runs are abstracted from the user.” — Source: Cloud Computing Bible, Barrie Sosinsky (2011). Cloud computing often entails an “as a service” business model that may entail one or more of the following:
• Archive as a Service (AaaS)
• Business Process as a Service (BPaaS)
• Failure as a Service (FaaS)
• Infrastructure as a Service (IaaS)
• IT as a Service (ITaaS)
• Platform as a Service (PaaS)
• Private File Tiering as a Service (PFTaaS)
• Software as a Service (SaaS)
• SharePoint as a Service (SPaaS)
• SPI refers to the Software, Platform and Infrastructure as a Service business model.
Cloud network types include the following:
• Community cloud (or community network cloud)
• Hybrid cloud (or hybrid network cloud)
• Private cloud (or private network cloud)
• Public cloud (or public network cloud)
• Virtual private cloud (or virtual private network cloud)
Cloud Enabler — A concept, product or solution that enables the deployment of cloud computing. Key cloud enablers include:
• Data discoverability
• Data mobility
• Data protection
• Dynamic provisioning
• Location independence
• Multitenancy to ensure secure privacy
• Virtualization
Cloud Fundamental — A core requirement to the deployment of cloud computing. Cloud fundamentals include:
• Self service
• Pay per use
• Dynamic scale up and scale down
Cloud Security Alliance — A standards organization active in cloud computing.
CLPR — Cache Logical Partition.
Cluster — A collection of computers that are interconnected (typically at high-speeds) for the purpose of improving reliability, availability, serviceability or performance (via load balancing). Often, clustered computers have access to a common pool of storage and run special software to coordinate the component computers' activities.
CM ― Cache Memory, Cache Memory Module. Intermediate buffer between the channels and drives. It has a maximum of 64GB (32GB x 2 areas) of capacity. It is available and controlled as 2 areas of cache (cache A and cache B). It is fully battery-backed (48 hours).
CM DIR — Cache Memory Directory.
CME — Communications Media and Entertainment.
CM-HSN — Control Memory Hierarchical Star Network.
CM PATH ― Cache Memory Access Path. Access Path from the processors of CHA, DKA PCB to Cache Memory.
CM PK — Cache Memory Package.
CM/SM — Cache Memory/Shared Memory.
CMA — Cache Memory Adapter.
CMD — Command.
CMG — Cache Memory Group.
CNAME — Canonical NAME.
CNS — Cluster Name Space or Clustered Name Space.
CNT — Cumulative network throughput.
CoD — Capacity on Demand.
Community Network Cloud — Infrastructure shared between several organizations or groups with common concerns.
Concatenation — A logical joining of 2 series of data, usually represented by the symbol “|”. In data communications, 2 or more data are often concatenated to provide a unique name or reference (e.g., S_ID | X_ID). Volume managers concatenate disk address spaces to present a single larger address space.
Connectivity technology — A program or device's ability to link with other programs and devices. Connectivity technology allows programs on a given computer to run routines or access objects on another remote computer.
Controller — A device that controls the transfer of data from a computer to a peripheral device (including a storage system) and vice versa.
Controller-based virtualization — Driven by the physical controller at the hardware microcode level versus at the application software layer and integrates into the infrastructure to allow virtualization across heterogeneous storage and third party products.
Corporate governance — Organizational compliance with government-mandated regulations.
CP — Central Processor (also called Processing Unit or PU).
CPC — Central Processor Complex.
CPM — Cache Partition Manager. Allows for partitioning of the cache and assigns a partition to a LU; this enables tuning of the system’s performance.
CPOE — Computerized Physician Order Entry (Provider Ordered Entry).
CPS — Cache Port Slave.
CPU — Central Processing Unit.
CRM — Customer Relationship Management.
CSS — Channel Subsystem.
CS&S — Customer Service and Support.
CSTOR — Central Storage or Processor Main Memory.
C-Suite — The C-suite is considered the most important and influential group of individuals at a company. Referred to as “the C-Suite within a Healthcare provider.”
CSV — Comma Separated Value or Cluster Shared Volume.
CSVP — Customer-specific Value Proposition.
CSW ― Cache Switch PCB. The cache switch (CSW) connects the channel adapter or disk adapter to the cache. Each of them is connected to the cache by the Cache Memory Hierarchical Star Net (C-HSN) method. Each cluster is provided with the 2 CSWs, and each CSW can connect 4 caches. The CSW switches any of the cache paths to which the channel adapter or disk adapter is to be connected through arbitration.
CTG — Consistency Group.
CTL — Controller module.
CTN — Coordinated Timing Network.
CU — Control Unit (refers to a storage subsystem. The hexadecimal number to which 256 LDEVs may be assigned).
CUDG — Control Unit Diagnostics. Internal system tests.
CUoD — Capacity Upgrade on Demand.
CV — Custom Volume.
CVS ― Customizable Volume Size. Software used to create custom volume sizes. Marketed under the name Virtual LVI (VLVI) and Virtual LUN (VLUN).
CWDM — Coarse Wavelength Division Multiplexing.
CXRC — Coupled z/OS Global Mirror.

—D—

DA — Device Adapter.
DACL — Discretionary access control list (ACL). The part of a security descriptor that stores access rights for users and groups.
DAD — Device Address Domain. Indicates a site of the same device number automation support function. If several hosts on the same site have the same device number system, they have the same name.
DAP — Data Access Path. Also known as Zero Copy Failover (ZCF).
DAS — Direct Attached Storage.
DASD — Direct Access Storage Device.
Data block — A fixed-size unit of data that is transferred together. For example, the X-modem protocol transfers blocks of 128 bytes. In general, the larger the block size, the faster the data transfer rate.
Data Duplication — Software duplicates data, as in remote copy or PiT snapshots. Maintains 2 copies of data.
Data Integrity — Assurance that information will be protected from modification and corruption.
Data Lifecycle Management — An approach to information and storage management. The policies, processes, practices, services and tools used to align the business value of data with the most appropriate and cost-effective storage infrastructure from the time data is created through its final disposition. Data is aligned with business requirements through management policies and service levels associated with performance, availability, recoverability, cost, and whatever parameters the organization defines as critical to its operations.
Data Migration — The process of moving data from 1 storage device to another. In this context, data migration is the same as Hierarchical Storage Management (HSM).
Data Pipe or Data Stream — The connection set up between the MediaAgent, source or destination server is called a Data Pipe or more commonly a Data Stream.
Data Pool — A volume containing differential data only.
Data Protection Directive — A major compliance and privacy protection initiative within the European Union (EU) that applies to cloud computing. Includes the Safe Harbor Agreement.
Data Stream — CommVault’s patented high performance data mover used to move data back and forth between a data source and a MediaAgent or between 2 MediaAgents.
Data Striping — Disk array data mapping technique in which fixed-length sequences of virtual disk data addresses are mapped to sequences of member disk addresses in a regular rotating pattern.
Data Transfer Rate (DTR) — The speed at which data can be transferred. Measured in kilobytes per second for a CD-ROM drive, in bits per second for a modem, and in megabytes per second for a hard drive. Also, often called data rate.
DBL — Drive box.
DBMS — Data Base Management System.
DBX — Drive box.
DCA ― Data Cache Adapter.
DCTL — Direct coupled transistor logic.
DDL — Database Definition Language.
DDM — Disk Drive Module.
DDNS — Dynamic DNS.
DDR3 — Double data rate 3.
DE — Data Exchange Software.
Device Management — Processes that configure and manage storage systems.
DFS — Microsoft Distributed File System.
DFSMS — Data Facility Storage Management Subsystem.
DFSM SDM — Data Facility Storage Management Subsystem System Data Mover.
DFSMSdfp — Data Facility Storage Management Subsystem Data Facility Product.
DFSMSdss — Data Facility Storage Management Subsystem Data Set Services.
DFSMShsm — Data Facility Storage Management Subsystem Hierarchical Storage Manager.
DFSMSrmm — Data Facility Storage Management Subsystem Removable Media Manager.
DFSMStvs — Data Facility Storage Management Subsystem Transactional VSAM Services.
DFW — DASD Fast Write.
DICOM — Digital Imaging and Communications in Medicine.
DIMM — Dual In-line Memory Module.
Direct Access Storage Device (DASD) — A type of storage device, in which bits of data are stored at precise locations, enabling the computer to retrieve information directly without having to scan a series of records.
Direct Attached Storage (DAS) — Storage that is directly attached to the application or file server. No other device on the network can access the stored data.
Director class switches — Larger switches often used as the core of large switched fabrics.
Disaster Recovery Plan (DRP) — A plan that describes how an organization will deal with potential disasters. It may include the precautions taken to either maintain or quickly resume mission-critical functions. Sometimes also referred to as a Business Continuity Plan.
Disk Administrator — An administrative tool that displays the actual LU storage configuration.
Disk Array — A linked group of 1 or more physical independent hard disk drives generally used to replace larger, single disk drive systems. The most common disk arrays are in daisy chain configuration or implement RAID (Redundant Array of Independent Disks) technology. A disk array may contain several disk drive trays, and is structured to improve speed and increase protection against loss of data. Disk arrays organize their data storage into Logical Units (LUs), which appear as linear block spaces to their clients. A small disk array, with a few disks, might support up to 8 LUs; a large one, with hundreds of disk drives, can support thousands.
DKA ― Disk Adapter. Also called an array control processor (ACP). It provides the control functions for data transfer between drives and cache. The DKA contains DRR (Data Recover and Reconstruct), a parity generator circuit. Replaced by DKB in some cases.
DKB — Disk Board. Updated DKA for Hitachi Unified Storage VM and additional enterprise components.
DKC ― Disk Controller Unit. In a multi-frame configuration, the frame that contains the front end (control and memory components).
DKCMN ― Disk Controller Monitor. Monitors temperature and power status throughout the machine.
DKF ― Fibre disk adapter. Another term for a DKA.
DKU — Disk Array Frame or Disk Unit. In a multi-frame configuration, a frame that contains hard disk units (HDUs).
DKUPS — Disk Unit Power Supply.
DLIBs — Distribution Libraries.
DKUP — Disk Unit Power Supply.
DLM — Data Lifecycle Management.
DMA — Direct Memory Access.
DM-LU — Differential Management Logical Unit. DM-LU is used for saving management information of the copy functions in the cache.
DMP — Disk Master Program.
DMT — Dynamic Mapping Table.
DMTF — Distributed Management Task Force. A standards organization active in cloud computing.
DNS — Domain Name System.
DOC — Deal Operations Center.
Domain — A number of related storage array groups.
DOO — Degraded Operations Objective.
DP — Dynamic Provisioning (pool).
DP-VOL — Dynamic Provisioning Virtual Volume.
DPL — (1) (Dynamic) Data Protection Level or (2) Denied Persons List.
DR — Disaster Recovery.
DRAC — Dell Remote Access Controller.
DRAM — Dynamic random access memory.
DRP — Disaster Recovery Plan.
DRR — Data Recover and Reconstruct. Data Parity Generator chip on DKA.
DRV — Dynamic Reallocation Volume.
DSB — Dynamic Super Block.
DSF — Device Support Facility.
DSF INIT — Device Support Facility Initialization (for DASD).
DSP — Disk Slave Program.
DT — Disaster tolerance.
DTA — Data adapter and path to cache-switches.
DTR — Data Transfer Rate.
DVE — Dynamic Volume Expansion.
DW — Duplex Write.
DWDM — Dense Wavelength Division Multiplexing.
DWL — Duplex Write Line or Dynamic Workspace Linking.

—E—

EAL — Evaluation Assurance Level (EAL1 through EAL7). The EAL of an IT product or system is a numerical security grade assigned following the completion of a Common Criteria security evaluation, an international standard in effect since 1999.
EAV — Extended Address Volume.
EB — Exabyte.
EC — Enterprise Class (in contrast with BC, Business Class).
ECC — Error Checking and Correction.
ECC.DDR SDRAM — Error Correction Code Double Data Rate Synchronous Dynamic RAM Memory.
ECM — Extended Control Memory.
ECN — Engineering Change Notice.
E-COPY — Serverless or LAN free backup.
EFI — Extensible Firmware Interface. EFI is a specification that defines a software interface between an operating system and platform firmware. EFI runs on top of BIOS when a LPAR is activated.
EHR — Electronic Health Record.
EIG — Enterprise Information Governance.
EMIF — ESCON Multiple Image Facility.
EMPI — Electronic Master Patient Identifier. Also known as MPI.
Emulation — In the context of Hitachi Data Systems enterprise storage, emulation is the logical partitioning of an Array Group into logical devices.
EMR — Electronic Medical Record.
ENC — Enclosure or Enclosure Controller. The units that connect the controllers with the Fibre Channel disks. They also allow for online extending a system by adding RKAs.
EOF — End of Field.
EOL — End of Life.
EPO — Emergency Power Off.
EREP — Error REPorting and Printing.
ERP — Enterprise Resource Planning.
ESA — Enterprise Systems Architecture.
ESB — Enterprise Service Bus.
ESC — Error Source Code.
ESD — Enterprise Systems Division (of Hitachi).
ESCD — ESCON Director.
ESCON ― Enterprise Systems Connection. An input/output (I/O) interface for mainframe computer connections to storage devices developed by IBM.
ESD — Enterprise Systems Division.
ESDS — Entry Sequence Data Set.
ESS — Enterprise Storage Server.
ESW — Express Switch or E Switch. Also referred to as the Grid Switch (GSW).
Ethernet — A local area network (LAN) architecture that supports clients and servers and uses twisted pair cables for connectivity.
ETR — External Time Reference (device).
EVS — Enterprise Virtual Server.
Exabyte (EB) — A measurement of data or data storage. 1EB = 1,024PB.
EXCP — Execute Channel Program.
ExSA — Extended Serial Adapter.

—F—

FaaS — Failure as a Service. A proposed business model for cloud computing in which large-scale, online failure drills are provided as a service in order to test real cloud deployments. Concept developed by the College of Engineering at the University of California, Berkeley in 2011.
Fabric — The hardware that connects workstations and servers to storage devices in a SAN is referred to as a “fabric.” The SAN fabric enables any-server-to-any-storage device connectivity through the use of Fibre Channel switching technology.
Failback — The restoration of a failed system share of a load to a replacement component. For example, when a failed controller in a redundant configuration is replaced, the devices that were originally controlled by the failed controller are usually failed back to the replacement controller to restore the I/O balance, and to restore failure tolerance.
Similarly, when a defective fan or power supply is replaced, its load, previously borne by a redundant component, can be failed back to the replacement part.
Failed over — A mode of operation for failure-tolerant systems in which a component has failed and its function has been assumed by a redundant component. A system that protects against single failures operating in failed over mode is not failure tolerant, as failure of the redundant component may render the system unable to function. Some systems (e.g., clusters) are able to tolerate more than 1 failure; these remain failure tolerant until no redundant component is available to protect against further failures.
Failover — A backup operation that automatically switches to a standby database server or network if the primary system fails, or is temporarily shut down for servicing. Failover is an important fault tolerance function of mission-critical systems that rely on constant accessibility. Also called path failover.
Failure tolerance — The ability of a system to continue to perform its function or at a reduced performance level, when 1 or more of its components has failed. Failure tolerance in disk subsystems is often achieved by including redundant instances of components whose failure would make the system inoperable, coupled with facilities that allow the redundant components to assume the function of failed ones.
FAIS — Fabric Application Interface Standard.
FAL — File Access Library.
FAT — File Allocation Table.
Fault Tolerant — Describes a computer system or component designed so that, in the event of a component failure, a backup component or procedure can immediately take its place with no loss of service. Fault tolerance can be provided with software, embedded in hardware or provided by hybrid combination.
FBA — Fixed-block Architecture. Physical disk sector mapping.
FBA/CKD Conversion — The process of converting open-system data in FBA format to mainframe data in CKD format.
FBUS — Fast I/O Bus.
FC ― Fibre Channel or Field-Change (microcode update). A technology for transmitting data between computer devices; a set of standards for a serial I/O bus capable of transferring data between 2 ports.
FC RKAJ — Fibre Channel Rack Additional. Module system acronym refers to an additional rack unit that houses additional hard drives exceeding the capacity of the core RK unit.
FC-0 ― Lowest layer on fibre channel transport. This layer represents the physical media.
FC-1 ― This layer contains the 8b/10b encoding scheme.
FC-2 ― This layer handles framing and protocol, frame format, sequence/exchange management and ordered set usage.
FC-3 ― This layer contains common services used by multiple N_Ports in a node.
FC-4 ― This layer handles standards and profiles for mapping upper level protocols like SCSI and IP onto the Fibre Channel Protocol.
FCA ― Fibre Adapter. Fibre interface card. Controls transmission of fibre packets.
FC-AL — Fibre Channel Arbitrated Loop. A serial data transfer architecture developed by a consortium of computer and mass storage device manufacturers, and is now being standardized by ANSI. FC-AL was designed for new mass storage devices and other peripheral devices that require very high bandwidth. Using optical fiber to connect devices, FC-AL supports full-duplex data transfer rates of 100MBps. FC-AL is compatible with SCSI for high-performance storage systems.
FCC — Federal Communications Commission.
FCIP — Fibre Channel over IP, a network storage technology that combines the features of Fibre Channel and the Internet Protocol (IP) to connect distributed SANs over large distances. FCIP is considered a tunneling protocol, as it makes a transparent point-to-point connection between geographically separated SANs over IP networks. FCIP relies on TCP/IP services to establish connectivity between remote SANs over LANs, MANs, or WANs. An advantage of FCIP is that it can use TCP/IP as the transport while keeping Fibre Channel fabric services intact.
FCoE — Fibre Channel over Ethernet. An encapsulation of Fibre Channel frames over Ethernet networks.
FCP — Fibre Channel Protocol.
FC-P2P — Fibre Channel Point-to-Point.
FCSE — Flashcopy Space Efficiency.
FC-SW — Fibre Channel Switched.
FCU — File Conversion Utility.
FD — Floppy Disk or Floppy Drive.
FDDI — Fiber Distributed Data Interface.
FDR — Fast Dump/Restore.
FE — Field Engineer.
FED — (Channel) Front End Director.
Fibre Channel — A serial data transfer architecture developed by a consortium of computer and mass storage device manufacturers and now being standardized by ANSI. The most prominent Fibre Channel standard is Fibre Channel Arbitrated Loop (FC-AL).
FICON — Fiber Connectivity. A high-speed input/output (I/O) interface for mainframe computer connections to storage devices. As part of IBM's S/390 server, FICON channels increase I/O capacity through the combination of a new architecture and faster physical link rates to make them up to 8 times as efficient as ESCON (Enterprise System Connection), IBM's previous fiber optic channel standard.
FIPP — Fair Information Practice Principles. Guidelines for the collection and use of personal information created by the United States Federal Trade Commission (FTC).
FISMA — Federal Information Security Management Act of 2002. A major compliance and privacy protection law that applies to information systems and cloud computing. Enacted in the United States of America in 2002.
FLGFAN ― Front Logic Box Fan Assembly.
FLOGIC Box ― Front Logic Box.
FM — Flash Memory. Each microprocessor has FM. FM is non-volatile memory that contains microcode.
FOP — Fibre Optic Processor or fibre open.
FQDN — Fully Qualified Domain Name.
FPC — Failure Parts Code or Fibre Channel Protocol Chip.
FPGA — Field Programmable Gate Array.
Frames — An ordered vector of words that is the basic unit of data transmission in a Fibre Channel network.
Front end — In client/server applications, the client part of the program is often called the front end and the server part is called the back end.
FRU — Field Replaceable Unit.
FS — File System.
FSA — File System Module-A.
FSB — File System Module-B.
FSI — Financial Services Industries.
FSM — File System Module.
FSW ― Fibre Channel Interface Switch PCB. A board that provides the physical interface (cable connectors) between the ACP ports and the disks housed in a given disk drive.
FTP ― File Transfer Protocol. A client-server protocol that allows a user on 1 computer to transfer files to and from another computer over a TCP/IP network.
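For illustration, a minimal FTP download sketch using Python's standard ftplib module; the host name, credentials and file name below are hypothetical placeholders, not values from this course:

    from ftplib import FTP

    # Connect, authenticate and download one file (all names are examples).
    with FTP("ftp.example.com") as ftp:
        ftp.login("user", "password")
        with open("report.txt", "wb") as f:
            ftp.retrbinary("RETR report.txt", f.write)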
FWD — Fast Write Differential.
—G—
GA — General availability.
GARD — General Available Restricted Distribution.
Gb — Gigabit.
GB — Gigabyte.
Gb/sec — Gigabit per second.
GB/sec — Gigabyte per second.
GbE — Gigabit Ethernet.
Gbps — Gigabit per second.
GBps — Gigabyte per second.
GBIC — Gigabit Interface Converter.
GCMI — Global Competitive and Marketing Intelligence (Hitachi).
GDG — Generation Data Group.
GDPS — Geographically Dispersed Parallel Sysplex.
GID — Group Identifier within the UNIX security model.
gigE — Gigabit Ethernet.
GLM — Gigabyte Link Module.
Global Cache — Cache memory is used on demand by multiple applications. Use changes dynamically, as required for READ performance between hosts/applications/LUs.
GPFS — General Parallel File System.
GSC — Global Support Center.
GSI — Global Systems Integrator.
GSS — Global Solution Services.
GSSD — Global Solutions Strategy and Development.
GSW — Grid Switch Adapter. Also known as E Switch (Express Switch).
GUI — Graphical User Interface.
GUID — Globally Unique Identifier.
—H—
H1F — Essentially the floor-mounted disk rack (also called desk side) equivalent of the RK. (See also: RK, RKA, and H2F).
H2F — Essentially the floor-mounted disk rack (also called desk side) add-on equivalent similar to the RKA. There is a limitation of only 1 H2F that can be added to the core RK Floor Mounted unit. (See also: RK, RKA, and H1F).
HA — High Availability.
HANA — High Performance Analytic Appliance, a database appliance technology proprietary to SAP.
HBA — Host Bus Adapter. An I/O adapter that sits between the host computer's bus and the Fibre Channel loop and manages the transfer of information between the 2 channels. In order to minimize the impact on host processor performance, the host bus adapter performs many low-level interface functions automatically or with minimal processor involvement.
HCA — Host Channel Adapter.
HCD — Hardware Configuration Definition.
HD — Hard Disk.
HDA — Head Disk Assembly.
HDD ― Hard Disk Drive. A spindle of hard disk platters that make up a hard drive, which is a unit of physical storage within a subsystem.
HDDPWR — Hard Disk Drive Power.
HDU ― Hard Disk Unit. A number of hard drives (HDDs) grouped together within a subsystem.
Head — See read/write head.
Heterogeneous — The characteristic of containing dissimilar elements. A common use of this word in information technology is to describe a product as able to contain or be part of a "heterogeneous network," consisting of different manufacturers' products that can interoperate. Heterogeneous networks are made possible by standards-conforming hardware and software interfaces used in common by different products, thus allowing them to communicate with each other. The Internet itself is an example of a heterogeneous network.
HiCAM — Hitachi Computer Products America.
HIPAA — Health Insurance Portability and Accountability Act.
HIS — (1) High Speed Interconnect. (2) Hospital Information System (clinical and financial).
HiStar — Multiple point-to-point data paths to cache.
HL7 — Health Level 7.
HLQ — High-level Qualifier.
HLS — Healthcare and Life Sciences.
HLU — Host Logical Unit.
H-LUN — Host Logical Unit Number. See LUN.
HMC — Hardware Management Console.
Homogeneous — Of the same or similar kind.
Host — Also called a server. Basically a central computer that processes end-user applications or requests.
Host LU — Host Logical Unit. See also HLU.
Host Storage Domains — Allows host pooling at the LUN level; the priority access feature lets administrators set service levels for applications.
HP — (1) Hewlett-Packard Company or (2) High Performance.
HPC — High Performance Computing.
HSA — Hardware System Area.
HSG — Host Security Group.
HSM — Hierarchical Storage Management (see Data Migrator).
HSN — Hierarchical Star Network.
HSSDC — High Speed Serial Data Connector.
HTTP — Hyper Text Transfer Protocol.
HTTPS — Hyper Text Transfer Protocol Secure.
Hub — A common connection point for devices in a network. Hubs are commonly used to connect segments of a LAN. A hub contains multiple ports. When a packet arrives at 1 port, it is copied to the other ports so that all segments of the LAN can see all packets. A switching hub actually reads the destination address of each packet and then forwards the packet to the correct port. Device to which nodes on a multi-point bus or loop are physically connected.
Hybrid Cloud — "Hybrid cloud computing refers to the combination of external public cloud computing services and internal resources (either a private cloud or traditional infrastructure, operations and applications) in a coordinated fashion to assemble a particular solution." — Source: Gartner Research.
Hybrid Network Cloud — A composition of 2 or more clouds (private, community or public). Each cloud remains a unique entity but they are bound together. A hybrid network cloud includes an interconnection.
Hypervisor — Also called a virtual machine manager, a hypervisor is a hardware virtualization technique that enables multiple operating systems to run concurrently on the same computer. Hypervisors are often installed on server hardware and then run the guest operating systems that act as servers. Hypervisor can also refer to the interface that is provided by Infrastructure as a Service (IaaS) in cloud computing. Leading hypervisors include VMware vSphere Hypervisor™ (ESXi), Microsoft® Hyper-V and the Xen® hypervisor.
—I—
I/F — Interface.
I/O — Input/Output. Term used to describe any program, operation, or device that transfers data to or from a computer and to or from a peripheral device.
IaaS — Infrastructure as a Service. A cloud computing business model — delivering computer infrastructure, typically a platform virtualization environment, as a service, along with raw (block) storage and networking. Rather than purchasing servers, software, data center space or network equipment, clients buy those resources as a fully outsourced service. Providers typically bill such services on a utility computing basis; the amount of resources consumed (and therefore the cost) will typically reflect the level of activity.
IDE — Integrated Drive Electronics Advanced Technology. A standard designed to connect hard and removable disk drives.
IDN — Integrated Delivery Network.
Index Cache — Provides quick access to indexed data on the media during a browse/restore operation.
IBR — Incremental Block-level Replication or Intelligent Block Replication.
ICB — Integrated Cluster Bus.
ICF — Integrated Coupling Facility.
ID — Identifier.
IDR — Incremental Data Replication.
iFCP — Internet Fibre Channel Protocol. Allows an organization to extend Fibre Channel storage networks over the Internet by using TCP/IP. TCP is responsible for managing congestion control as well as error detection and recovery services. iFCP allows an organization to create an IP SAN fabric that minimizes the Fibre Channel fabric component and maximizes use of the company's TCP/IP infrastructure.
IFL — Integrated Facility for LINUX.
IHE — Integrating the Healthcare Enterprise.
IID — Initiator ID.
IIS — Internet Information Server.
ILM — Information Life Cycle Management.
ILO — (Hewlett-Packard) Integrated Lights-Out.
IML — Initial Microprogram Load.
IMS — Information Management System.
In-band virtualization — Refers to the location of the storage network path, between the application host servers and the storage systems. Provides both control and data along the same connection path. Also called symmetric virtualization.
INI — Initiator.
Interface — The physical and logical arrangement supporting the attachment of any device to a connector or to another device.
Internal bus — Another name for an internal data bus. Also, an expansion bus is often referred to as an internal bus.
Internal data bus — A bus that operates only within the internal circuitry of the CPU, communicating among the internal caches of memory that are part of the CPU chip's design. This bus is typically rather quick and is independent of the rest of the computer's operations.
IOC — I/O controller.
IOCDS — I/O Control Data Set.
IODF — I/O Definition file.
IOPH — I/O per hour.
IOS — I/O Supervisor.
IOSQ — Input/Output Subsystem Queue.
IP — Internet Protocol. The communications protocol that routes traffic across the Internet.
IPv6 — Internet Protocol Version 6. The latest revision of the Internet Protocol (IP).
IPL — Initial Program Load.
IPSEC — IP security.
IRR — Internal Rate of Return.
ISC — Initial shipping condition or Inter-System Communication.
iSCSI — Internet SCSI. Pronounced eye skuzzy. An IP-based standard for linking data storage devices over a network and transferring data by carrying SCSI commands over IP networks.
ISE — Integrated Scripting Environment.
iSER — iSCSI Extensions for RDMA.
ISL — Inter-Switch Link.
iSNS — Internet Storage Name Service.
ISOE — iSCSI Offload Engine.
ISP — Internet service provider.
ISPF — Interactive System Productivity Facility.
ISPF/PDF — Interactive System Productivity Facility/Program Development Facility.
ISV — Independent Software Vendor.
ITaaS — IT as a Service. A cloud computing business model. This general model is an umbrella model that entails the SPI business model (SaaS, PaaS and IaaS — Software, Platform and Infrastructure as a Service).
ITSC — Information and Telecommunications Systems Companies.
—J—
Java — A widely accepted, open systems programming language. Hitachi's enterprise software products are all accessed using Java applications. This enables storage administrators to access the Hitachi enterprise software products from any PC or workstation that runs a supported thin-client internet browser application and that has TCP/IP network access to the computer on which the software product runs.
Java VM — Java Virtual Machine.
JBOD — Just a Bunch of Disks.
JCL — Job Control Language.
JMP — Jumper. Option setting method.
JMS — Java Message Service.
JNL — Journal.
JNLG — Journal Group.
JRE — Java Runtime Environment.
JVM — Java Virtual Machine.
J-VOL — Journal Volume.
—K—
KSDS — Key Sequence Data Set.
kVA — Kilovolt Ampere.
KVM — Kernel-based Virtual Machine or Keyboard-Video Display-Mouse.
kW — Kilowatt.
—L—
LACP — Link Aggregation Control Protocol.
LAG — Link Aggregation Groups.
LAN — Local Area Network. A communications network that serves clients within a geographical area, such as a building.
LBA — Logical block address. A 28-bit value that maps to a specific cylinder-head-sector address on the disk.
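The cylinder-head-sector mapping behind an LBA can be illustrated with the standard CHS-to-LBA arithmetic. A minimal Python sketch, assuming an invented geometry of 16 heads and 63 sectors per track (not values taken from this courseware):

    HEADS_PER_CYLINDER = 16   # assumed example geometry
    SECTORS_PER_TRACK = 63

    def chs_to_lba(cylinder, head, sector):
        # Sectors are conventionally numbered from 1; cylinders and heads from 0.
        return (cylinder * HEADS_PER_CYLINDER + head) * SECTORS_PER_TRACK + (sector - 1)

    print(chs_to_lba(0, 0, 1))   # 0 -> the first block on the disk
    print(chs_to_lba(2, 3, 4))   # (2*16 + 3)*63 + 3 = 2208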
LC — Lucent connector. Fibre Channel connector that is smaller than a simplex connector (SC).
LCDG — Link Processor Control Diagnostics.
LCM — Link Control Module.
LCP — Link Control Processor. Controls the optical links. LCP is located in the LCM.
LCSS — Logical Channel Subsystems.
LCU — Logical Control Unit.
LD — Logical Device.
LDAP — Lightweight Directory Access Protocol.
LDEV ― Logical Device or Logical Device (number). A set of physical disk partitions (all or portions of 1 or more disks) that are combined so that the subsystem sees and treats them as a single area of data storage. Also called a volume. An LDEV has a specific and unique address within a subsystem. LDEVs become LUNs to an open-systems host.
LDKC — Logical Disk Controller or Logical Disk Controller Manual.
LDM — Logical Disk Manager.
LDS — Linear Data Set.
LED — Light Emitting Diode.
LFF — Large Form Factor.
LIC — Licensed Internal Code.
LIS — Laboratory Information Systems.
LLQ — Lowest Level Qualifier.
LM — Local Memory.
LMODs — Load Modules.
LNKLST — Link List.
Load balancing — The process of distributing processing and communications activity evenly across a computer network so that no single device is overwhelmed. Load balancing is especially important for networks where it is difficult to predict the number of requests that will be issued to a server. If 1 server starts to be swamped, requests are forwarded to another server with more capacity. Load balancing can also refer to the communications channels themselves.
LOC — "Locations" section of the Maintenance Manual.
Logical DKC (LDKC) — Logical Disk Controller. An internal architecture extension to the Control Unit addressing scheme that allows more LDEVs to be identified within 1 Hitachi enterprise storage system.
Longitudinal record — Patient information from birth to death.
LPAR — Logical Partition (mode).
LR — Local Router.
LRECL — Logical Record Length.
LRP — Local Router Processor.
LRU — Least Recently Used.
LSS — Logical Storage Subsystem (equivalent to LCU).
LU — Logical Unit. Mapping number of an LDEV.
LUN ― Logical Unit Number. 1 or more LDEVs. Used only for open systems.
LUSE ― Logical Unit Size Expansion. Feature used to create virtual LUs that are up to 36 times larger than the standard OPEN-x LUs.
LVDS — Low Voltage Differential Signal.
LVI — Logical Volume Image. Identifies a similar concept (as LUN) in the mainframe environment.
LVM — Logical Volume Manager.
—M—
MAC — Media Access Control. A MAC address is a unique identifier attached to most forms of networking equipment.
MAID — Massive array of disks.
MAN — Metropolitan Area Network. A communications network that generally covers a city or suburb. MAN is very similar to a LAN except it spans across a geographical region such as a state. Instead of the workstations in a LAN, the
workstations in a MAN could depict different cities in a state. For example, the state of Texas could have: Dallas, Austin, San Antonio. The city could be a separate LAN and all the cities connected together via a switch. This topology would indicate a MAN.
MAPI — Management Application Programming Interface.
Mapping — Conversion between 2 data addressing spaces. For example, mapping refers to the conversion between physical disk block addresses and the block addresses of the virtual disks presented to operating environments by control software.
Mb — Megabit.
MB — Megabyte.
MBA — Memory Bus Adaptor.
MBUS — Multi-CPU Bus.
MC — Multi Cabinet.
MCU — Main Control Unit, Master Control Unit, Main Disk Control Unit or Master Disk Control Unit. The local CU of a remote copy pair.
MDPL — Metadata Data Protection Level.
MediaAgent — The workhorse for all data movement. MediaAgent facilitates the transfer of data between the data source, the client computer, and the destination storage media.
Metadata — In database management systems, data files are the files that store the database information; whereas other files, such as index files and data dictionaries, store administrative information, known as metadata.
MFC — Main Failure Code.
MG — (1) Module Group. 2 (DIMM) cache memory modules that work together. (2) Migration Group. A group of volumes to be migrated together.
MGC — (3-Site) Metro/Global Mirror.
MIB — Management Information Base. A database of objects that can be monitored by a network management system. Both SNMP and RMON use standardized MIB formats that allow any SNMP and RMON tools to monitor any device defined by a MIB.
Microcode — The lowest-level instructions that directly control a microprocessor. A single machine-language instruction typically translates into several microcode instructions.
[Figure: the layers from high-level languages (Fortran, Pascal, C) down through assembly language and machine language to hardware.]
Microprogram — See Microcode.
MIF — Multiple Image Facility.
Mirror Cache OFF — Increases cache efficiency over cache data redundancy.
M-JNL — Primary journal volumes.
MM — Maintenance Manual.
MMC — Microsoft Management Console.
Mode — The state or setting of a program or device. The term mode implies a choice, which is that you can change the setting and put the system in a different mode.
MP — Microprocessor.
MPA — Microprocessor adapter.
MPB — Microprocessor board.
MPI — (Electronic) Master Patient Identifier. Also known as EMPI.
MPIO — Multipath I/O.
MP PK — MP Package.
MPU — Microprocessor Unit.
MQE — Metadata Query Engine (Hitachi).
MS/SG — Microsoft Service Guard.
MSCS — Microsoft Cluster Server.
MSS — (1) Multiple Subchannel Set. (2) Managed Security Services.
MTBF — Mean Time Between Failure.
MTS — Multitiered Storage.
Multitenancy — In cloud computing, multitenancy is a secure way to partition the infrastructure (application, storage pool and network) so multiple customers share a single resource pool. Multitenancy is one of the key ways cloud can achieve massive economy of scale.
M-VOL — Main Volume.
MVS — Multiple Virtual Storage.
—N—
NAS ― Network Attached Storage. A disk array connected to a controller that gives access to a LAN Transport. It handles data at the file level.
NAT — Network Address Translation.
NDMP — Network Data Management Protocol. A protocol meant to transport data between NAS devices.
NetBIOS — Network Basic Input/Output System.
Network — A computer system that allows sharing of resources, such as files and peripheral hardware devices.
Network Cloud — A communications network. The word "cloud" by itself may refer to any local area network (LAN) or wide area network (WAN). The terms "computing" and "cloud computing" refer to services offered on the public Internet or to a private network that uses the same protocols as a standard network. See also cloud computing.
NFS protocol — Network File System is a protocol that allows a computer to access files over a network as easily as if they were on its local disks.
NIM — Network Interface Module.
NIS — Network Information Service (originally called the Yellow Pages or YP).
NIST — National Institute of Standards and Technology. A standards organization active in cloud computing.
NLS — Native Language Support.
Node ― An addressable entity connected to an I/O bus or network, used primarily to refer to computers, storage devices, and storage subsystems. The component of a node that connects to the bus or network is a port.
Node name ― A Name_Identifier associated with a node.
NPV — Net Present Value.
NRO — Network Recovery Objective.
NTP — Network Time Protocol.
NVS — Non Volatile Storage.
—O—
OCC — Open Cloud Consortium. A standards organization active in cloud computing.
OEM — Original Equipment Manufacturer.
OFC — Open Fibre Control.
OGF — Open Grid Forum. A standards organization active in cloud computing.
OID — Object identifier.
OLA — Operating Level Agreements.
OLTP — On-Line Transaction Processing.
OLTT — Open-loop throughput throttling.
OMG — Object Management Group. A standards organization active in cloud computing.
On/Off CoD — On/Off Capacity on Demand.
ONODE — Object node.
OPEX — Operational Expenditure. This is an operating expense, operating expenditure, operational expense, or operational expenditure, which is an ongoing cost for running a product, business, or system. Its counterpart is a capital expenditure (CAPEX).
ORM — Online Read Margin.
OS — Operating System.
Out-of-band virtualization — Refers to systems where the controller is located outside of the SAN data path. Separates control and data on different connection paths. Also called asymmetric virtualization.
—P—
P-2-P — Point to Point. Also P-P.
PaaS — Platform as a Service. A cloud computing business model — delivering a computing platform and solution stack as a service. PaaS offerings facilitate deployment of applications without the cost and complexity of buying and managing the underlying hardware, software and provisioning hosting capabilities. PaaS provides all of the facilities required to support the complete life cycle of building and delivering web applications and services entirely from the Internet.
PACS — Picture Archiving and Communication System.
PAN — Personal Area Network. A communications network that transmits data wirelessly over a short distance. Bluetooth and Wi-Fi Direct are examples of personal area networks.
PAP — Password Authentication Protocol.
Parity — A technique of checking whether data has been lost or written over when it is moved from 1 place in storage to another or when it is transmitted between computers.
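The check is easiest to see as an XOR across blocks, as used in RAID-style parity: recomputing the XOR and comparing it with the stored parity reveals a corrupted or lost block. A minimal sketch with arbitrary example values:

    from functools import reduce

    data_blocks = [0b10110010, 0b01101001, 0b11100110]   # example "disks"
    parity = reduce(lambda a, b: a ^ b, data_blocks)     # XOR of all blocks

    # Any single lost block equals the XOR of the survivors and the parity:
    rebuilt = data_blocks[0] ^ data_blocks[2] ^ parity
    assert rebuilt == data_blocks[1]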
Parity Group — Also called an array group. This is a group of hard disk drives (HDDs) that form the basic unit of storage in a subsystem. All HDDs in a parity group must have the same physical capacity.
Partitioned cache memory — Separate workloads in a "storage consolidated" system by dividing cache into individually managed multiple partitions. Then customize the partition to match the I/O characteristics of assigned LUs.
PAT — Port Address Translation.
PATA — Parallel ATA.
Path — Also referred to as a transmission channel, the path between 2 nodes of a network that a data communication follows. The term can refer to the physical cabling that connects the nodes on a network, the signal that is communicated over the pathway or a sub-channel in a carrier frequency.
Path failover — See Failover.
PAV — Parallel Access Volumes.
PAWS — Protect Against Wrapped Sequences.
PB — Petabyte.
PBC — Port By-pass Circuit.
PCB — Printed Circuit Board.
PCHIDS — Physical Channel Path Identifiers.
PCI — Power Control Interface.
PCI CON — Power Control Interface Connector Board.
PCI DSS — Payment Card Industry Data Security Standard.
PCIe — Peripheral Component Interconnect Express.
PD — Product Detail.
PDEV — Physical Device.
PDM — Policy based Data Migration or Primary Data Migrator.
PDS — Partitioned Data Set.
PDSE — Partitioned Data Set Extended.
Performance — Speed of access or the delivery of information.
Petabyte (PB) — A measurement of capacity — the amount of data that a drive or storage system can store after formatting. 1PB = 1,024TB.
PFA — Predictive Failure Analysis.
PFTaaS — Private File Tiering as a Service. A cloud computing business model.
PGP — Pretty Good Privacy (encryption).
PGR — Persistent Group Reserve.
PI — Product Interval.
PIR — Performance Information Report.
PiT — Point-in-Time.
PK — Package (see PCB).
PL — Platter. The circular disk on which the magnetic data is stored. Also called motherboard or backplane.
PM — Package Memory.
POC — Proof of concept.
Port — In TCP/IP and UDP networks, an endpoint to a logical connection. The port number identifies what type of port it is. For example, port 80 is used for HTTP traffic.
POSIX — Portable Operating System Interface for UNIX. A set of standards that defines an application programming interface (API) for software designed to run under heterogeneous operating systems.
PP — Program product.
P-P — Point-to-point; also P2P.
PPRC — Peer-to-Peer Remote Copy.
Private Cloud — A type of cloud computing defined by shared capabilities within a single company; modest economies of scale and less automation. Infrastructure and data reside inside the company's data center behind a firewall. Comprised of licensed software tools rather than on-going services. Example: An organization implements its own virtual, scalable cloud and business units are charged on a per use basis.
Private Network Cloud — A type of cloud network with 3 characteristics: (1) Operated solely for a single organization, (2) Managed internally or by a third-party, (3) Hosted internally or externally.
PR/SM — Processor Resource/System Manager.
Protocol — A convention or standard that enables the communication between 2 computing endpoints. In its simplest form, a protocol can be defined as the rules governing the syntax, semantics, and synchronization of communication. Protocols may be implemented by hardware, software, or a combination of the 2. At the lowest level, a protocol defines the behavior of a hardware connection.
Provisioning — The process of allocating storage resources and assigning storage capacity for an application, usually in the form of server disk drive space, in order to optimize the performance of a storage area network (SAN). Traditionally, this has been done by the SAN administrator, and it can be a tedious process. In recent years, automated storage provisioning (also called auto-provisioning) programs have become available. These programs can reduce the time required for the storage provisioning process, and can free the administrator from the often distasteful task of performing this chore manually.
PS — Power Supply.
PSA — Partition Storage Administrator.
PSSC — Perl Silicon Server Control.
PSU — Power Supply Unit.
PTAM — Pickup Truck Access Method.
PTF — Program Temporary Fixes.
PTR — Pointer.
PU — Processing Unit.
Public Cloud — Resources, such as applications and storage, available to the general public over the Internet.
P-VOL — Primary Volume.
—Q—
QD — Quorum Device.
QDepth — The number of I/O operations that can run in parallel on a SAN device; also WWN QDepth.
QoS — Quality of Service. In the field of computer networking, the traffic engineering term quality of service (QoS) refers to resource reservation control mechanisms rather than the achieved service quality. Quality of service is the ability to provide different priority to different applications, users, or data flows, or to guarantee a certain level of performance to a data flow.
QSAM — Queued Sequential Access Method.
—R—
RACF — Resource Access Control Facility.
RAID ― Redundant Array of Independent Disks, or Redundant Array of Inexpensive Disks. A group of disks that look like a single volume to the server. RAID improves performance by pulling a single stripe of data from multiple disks, and improves fault tolerance either through mirroring or parity checking; it is a component of a customer's SLA.
RAID-0 — Striped array with no parity.
RAID-1 — Mirrored array and duplexing.
RAID-3 — Striped array with typically non-rotating parity, optimized for long, single-threaded transfers.
RAID-4 — Striped array with typically non-rotating parity, optimized for short, multi-threaded transfers.
RAID-5 — Striped array with typically rotating parity, optimized for short, multithreaded transfers.
RAID-6 — Similar to RAID-5, but with dual rotating parity physical disks, tolerating 2 physical disk failures.
RAIN — Redundant (or Reliable) Array of Independent Nodes (architecture).
RAM — Random Access Memory.
RAM DISK — A LUN held entirely in the cache area.
RAS — Reliability, Availability, and Serviceability or Row Address Strobe.
RBAC — Role-Based Access Control.
RC — (1) Reference Code or (2) Remote Control.
RCHA — RAID Channel Adapter.
RCP — Remote Control Processor.
RCU — Remote Control Unit or Remote Disk Control Unit.
RCUT — RCU Target.
RD/WR — Read/Write.
RDM — Raw Disk Mapped.
RDMA — Remote Direct Memory Access.
RDP — Remote Desktop Protocol.
RDW — Record Descriptor Word.
Read/Write Head — Reads and writes data to the platters; typically there is 1 head per platter side, and each head is attached to a single actuator shaft.
RECFM — Record Format.
Redundant — Describes computer or network system components, such as fans, hard disk drives, servers, operating systems, switches, and telecommunication links that are installed to back up primary resources in case they fail. A well-known example of a redundant system is the redundant array of independent disks (RAID). Redundancy contributes to the fault tolerance of a system.
Redundancy — Backing up a component to help ensure high availability.
Reliability — (1) Level of assurance that data will not be lost or degraded over time. (2) An attribute of any computer component (software, hardware, or a network) that consistently performs according to its specifications.
REST — Representational State Transfer.
REXX — Restructured extended executor.
RID — Relative Identifier that uniquely identifies a user or group within a Microsoft Windows domain.
RIS — Radiology Information System.
RISC — Reduced Instruction Set Computer.
RIU — Radiology Imaging Unit.
R-JNL — Secondary journal volumes.
RK — Rack additional.
RKAJAT — Rack Additional SATA disk tray.
RKAK — Expansion unit.
RLGFAN — Rear Logic Box Fan Assembly.
RLOGIC BOX — Rear Logic Box.
RMF — Resource Measurement Facility.
RMI — Remote Method Invocation. A way that a programmer, using the Java programming language and development environment, can write object-oriented programming in which objects on different computers can interact in a distributed network. RMI is the Java version of what is generally known as an RPC (remote procedure call), but with the ability to pass 1 or more objects along with the request.
RndRD — Random read.
ROA — Return on Asset.
RoHS — Restriction of Hazardous Substances (in Electrical and Electronic Equipment).
ROI — Return on Investment.
ROM — Read Only Memory.
Round robin mode — A load balancing technique which distributes data packets equally among the available paths. Round robin DNS is usually used for balancing the load of geographically distributed Web servers. It works on a rotating basis in that one server IP address is handed out, then moves to the back of the list; the next server IP address is handed out, and then it moves to the end of the list; and so on, depending on the number of servers being used. This works in a looping fashion.
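The rotating hand-out is simple to sketch; the addresses below are hypothetical placeholders:

    from itertools import cycle

    servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]   # hypothetical pool
    next_server = cycle(servers)                     # loops back to the start

    for _ in range(5):
        print(next(next_server))   # .1, .2, .3, then .1 again, and so on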
Router — A computer networking device that forwards data packets toward their destinations, through a process known as routing.
RPC — Remote procedure call.
RPO — Recovery Point Objective. The point in time that recovered data should match.
RPSFAN — Rear Power Supply Fan Assembly.
RRDS — Relative Record Data Set.
RS CON — RS232C/RS422 Interface Connector.
RSD — RAID Storage Division (of Hitachi).
R-SIM — Remote Service Information Message.
RSM — Real Storage Manager.
RTM — Recovery Termination Manager.
RTO — Recovery Time Objective. The length of time that can be tolerated between a disaster and recovery of data.
R-VOL — Remote Volume.
R/W — Read/Write.
—S—
SA — Storage Administrator.
SA z/OS — System Automation for z/OS.
SAA — Share Access Authentication. The process of restricting a user's rights to a file system object by combining the security descriptors from both the file system object itself and the share to which the user is connected.
SaaS — Software as a Service. A cloud computing business model. SaaS is a software delivery model in which software and its associated data are hosted centrally in a cloud and are typically accessed by users using a thin client, such as a web browser via the Internet. SaaS has become a common delivery model for most business applications, including accounting (CRM and ERP), invoicing (HRM), content management (CM) and service desk management, just to name the most common software that runs in the cloud. This is the fastest growing service in the cloud market today. SaaS performs best for relatively simple tasks in IT-constrained organizations.
SACK — Sequential Acknowledge.
SACL — System ACL. The part of a security descriptor that stores system auditing information.
SAIN — SAN-attached Array of Independent Nodes (architecture).
SAN ― Storage Area Network. A network linking computing devices to disk or tape arrays and other devices over Fibre Channel. It handles data at the block level.
SAP — (1) System Assist Processor (for I/O processing), or (2) a German software company.
SAP HANA — High Performance Analytic Appliance, a database appliance technology proprietary to SAP.
SARD — System Assurance Registration Document.
SAS — Serial Attached SCSI.
SATA — Serial ATA. Serial Advanced Technology Attachment is a new standard for connecting hard drives into computer systems. SATA is based on serial signaling technology, unlike current IDE (Integrated Drive Electronics) hard drives that use parallel signaling.
SBM — Solutions Business Manager.
SBOD — Switched Bunch of Disks.
SBSC — Smart Business Storage Cloud.
SBX — Small Box (Small Form Factor).
SC — (1) Simplex connector. Fibre Channel connector that is larger than a Lucent connector (LC). (2) Single Cabinet.
SCM — Supply Chain Management.
SCP — Secure Copy.
SCSI — Small Computer Systems Interface. A parallel bus architecture and a protocol for transmitting large data blocks up to a distance of 15 to 25 meters.
SD — Software Division (of Hitachi).
SDH — Synchronous Digital Hierarchy.
SDM — System Data Mover.
SDSF — Spool Display and Search Facility.
Sector — A sub-division of a track of a magnetic disk that stores a fixed amount of data.
SEL — System Event Log.
Selectable segment size — Can be set per partition.
Selectable Stripe Size — Increases performance by customizing the disk access size.
SENC — The SATA (Serial ATA) version of the ENC. ENCs and SENCs are complete microprocessor systems on their own and they occasionally require a firmware upgrade.
SeqRD — Sequential read.
Serial Transmission — The transmission of data bits in sequential order over a single line.
Server — A central computer that processes end-user applications or requests, also called a host.
Server Virtualization — The masking of server resources, including the number and identity of individual physical servers, processors, and operating systems, from server users. The implementation of multiple isolated virtual environments in one physical server.
Service-level Agreement — SLA. A contract between a network service provider and a customer that specifies, usually in measurable terms, what services the network service provider will furnish. Many Internet service providers (ISP) provide their customers with a SLA. More recently, IT departments in major enterprises have
adopted the idea of writing a service level agreement so that services for their customers (users in other departments within the enterprise) can be measured, justified, and perhaps compared with those of outsourcing network providers. Some metrics that SLAs may specify include:
• The percentage of the time services will be available
• The number of users that can be served simultaneously
• Specific performance benchmarks to which actual performance will be periodically compared
• The schedule for notification in advance of network changes that may affect users
• Help desk response time for various classes of problems
• Dial-in access availability
• Usage statistics that will be provided
Service-Level Objective — SLO. Individual performance metrics built into an SLA. Each SLO corresponds to a single performance characteristic relevant to the delivery of an overall service. Some examples of SLOs include: system availability, help desk incident resolution time, and application response time.
SES — SCSI Enclosure Services.
SFF — Small Form Factor.
SFI — Storage Facility Image.
SFM — Sysplex Failure Management.
SFP — Small Form-Factor Pluggable module host connector. A specification for a new generation of optical modular transceivers. The devices are designed for use with small form factor (SFF) connectors, offer high speed and physical compactness, and are hot-swappable.
SHSN — Shared memory Hierarchical Star Network.
SID — Security Identifier. A user or group identifier within the Microsoft Windows security model.
SIGP — Signal Processor.
SIM — (1) Service Information Message. A message reporting an error that contains fix guidance information. (2) Storage Interface Module. (3) Subscriber Identity Module.
SIM RC — Service (or system) Information Message Reference Code.
SIMM — Single In-line Memory Module.
SLA — Service Level Agreement.
SLO — Service Level Objective.
SLRP — Storage Logical Partition.
SM ― Shared Memory or Shared Memory Module. Stores the shared information about the subsystem and the cache control information (director names). This type of information is used for the exclusive control of the subsystem. Like CACHE, shared memory is controlled as 2 areas of memory and fully non-volatile (sustained for approximately 7 days).
SM PATH — Shared Memory Access Path. The Access Path from the processors of CHA, DKA PCB to Shared Memory.
SMB/CIFS — Server Message Block Protocol/Common Internet File System.
SMC — Shared Memory Control.
SME — Small and Medium Enterprise.
SMF — System Management Facility.
SMI-S — Storage Management Initiative Specification.
SMP — Symmetric Multiprocessing.
SMP/E — System Modification Program/Extended. An IBM-licensed program used to install software and software changes on z/OS systems.
SMS — System Managed Storage.
SMTP — Simple Mail Transfer Protocol.
SMU — System Management Unit.
Snapshot Image — A logical duplicated volume (V-VOL) of the primary volume. It is an internal volume intended for restoration.
SNIA — Storage Networking Industry Association. An association of producers and consumers of storage networking products, whose goal is to further storage networking technology and applications. Active in cloud computing.
SNMP — Simple Network Management Protocol. A TCP/IP protocol that was designed for management of networks over TCP/IP, using agents and stations.
SOA — Service Oriented Architecture.
SOAP — Simple Object Access Protocol. A way for a program running in one kind of operating system (such as Windows 2000) to communicate with a program in the same or another kind of an operating system (such as Linux) by using the World Wide Web's Hypertext Transfer Protocol (HTTP) and its Extensible Markup Language (XML) as the mechanisms for information exchange.
Socket — In UNIX and some other operating systems, a socket is a software object that connects an application to a network protocol. In UNIX, for example, a program can send and receive TCP/IP messages by opening a socket and reading and writing data to and from the socket. This simplifies program development because the programmer need only worry about manipulating the socket and can rely on the operating system to actually transport messages across the network correctly. Note that a socket in this sense is completely soft; it is a software object, not a physical component.
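A minimal sketch of that idea in Python (open a TCP socket, write to it, read from it); the address is a documentation placeholder, not a real service:

    import socket

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect(("192.0.2.10", 7))   # placeholder host and port
        s.sendall(b"hello")            # writing to the socket sends data
        reply = s.recv(1024)           # reading from it receives data
    print(reply)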
SOM — System Option Mode.
SONET — Synchronous Optical Network.
SOSS — Service Oriented Storage Solutions.
SPaaS — SharePoint as a Service. A cloud computing business model.
SPAN — A section between 2 intermediate supports. See Storage pooling.
Spare — An object reserved for the purpose of substitution for a like object in case of that object's failure.
SPC — SCSI Protocol Controller.
SpecSFS — Standard Performance Evaluation Corporation Shared File system.
SPECsfs97 — Standard Performance Evaluation Corporation (SPEC) System File Server (sfs) benchmark developed in 1997 (97).
SPI model — Software, Platform and Infrastructure as a service. A common term to describe the cloud computing "as a service" business model.
SRA — Storage Replicator Adapter.
SRDF/A — (EMC) Symmetrix Remote Data Facility Asynchronous.
SRDF/S — (EMC) Symmetrix Remote Data Facility Synchronous.
SRM — Site Recovery Manager.
SSB — Sense Byte.
SSC — SiliconServer Control.
SSCH — Start Subchannel.
SSD — Solid-state Drive or Solid-State Disk.
SSH — Secure Shell.
SSID — Storage Subsystem ID or Subsystem Identifier.
SSL — Secure Sockets Layer.
SSPC — System Storage Productivity Center.
SSUE — Split SUSpended Error.
SSUS — Split SUSpend.
SSVP — Sub Service Processor. Interfaces the SVP to the DKC.
SSW — SAS Switch.
Sticky Bit — Extended UNIX mode bit that prevents objects from being deleted from a directory by anyone other than the object's owner, the directory's owner or the root user.
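Setting the bit from Python looks like the following sketch, equivalent to running "chmod +t" on the same directory; the path is hypothetical:

    import os
    import stat

    path = "/tmp/shared"                  # hypothetical shared directory
    mode = os.stat(path).st_mode
    os.chmod(path, mode | stat.S_ISVTX)   # add the sticky bit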
Storage pooling — The ability to consolidate and manage storage resources across storage system enclosures where the consolidation of many appears as a single view.
STP — Server Time Protocol.
STR — Storage and Retrieval Systems.
Striping — A RAID technique for writing a file to multiple disks on a block-by-block basis, with or without parity.
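The block-by-block placement reduces to modular arithmetic. A sketch for a parity-less (RAID-0 style) stripe across an assumed 4 disks:

    N_DISKS = 4   # assumed disk count

    def locate(block):
        # Consecutive blocks rotate across disks; the row is the depth on each disk.
        return block % N_DISKS, block // N_DISKS

    for b in range(8):
        print(b, locate(b))   # blocks 0-3 land on disks 0-3, block 4 wraps to disk 0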
Subsystem — Hardware or software that performs a specific function within a larger system.
SVC — Supervisor Call Interruption.
SVC Interrupts — Supervisor calls.
S-VOL — (1) (ShadowImage) Source Volume for In-System Replication, or (2) (Universal Replicator) Secondary Volume.
SVP — Service Processor ― A laptop computer mounted on the control frame (DKC) and used for monitoring, maintenance and administration of the subsystem.
Switch — A fabric device providing full bandwidth per port and high-speed routing of data via link-level addressing.
SWPX — Switching power supply.
SXP — SAS Expander.
Symmetric virtualization — See In-band virtualization.
Page G-22 HDS Confidential: For distribution only to authorized parties.


Synchronous — Operations that have a fixed time relationship to each other. Most commonly used to denote I/O operations that occur in time sequence, i.e., a successor operation does not occur until its predecessor is complete.
—T—
Target — The system component that receives a SCSI I/O command; an open device that operates at the request of the initiator.
TB — Terabyte. 1TB = 1,024GB.
TCDO — Total Cost of Data Ownership.
TCO — Total Cost of Ownership.
TCP/IP — Transmission Control Protocol over Internet Protocol.
TDCONV — Trace Dump CONVerter. A software program that is used to convert traces taken on the system into readable text. This information is loaded into a special spreadsheet that allows for further investigation of the data and more in-depth failure analysis.
TDMF — Transparent Data Migration Facility.
Telco or TELCO — Telecommunications Company.
TEP — Tivoli Enterprise Portal.
Terabyte (TB) — A measurement of capacity, data or data storage. 1TB = 1,024GB.
TFS — Temporary File System.
TGTLIBs — Target Libraries.
THF — Front Thermostat.
Thin Provisioning — Thin provisioning allows storage space to be easily allocated to servers on a just-enough and just-in-time basis.
THR — Rear Thermostat.
Throughput — The amount of data transferred from 1 place to another or processed in a specified amount of time. Data transfer rates for disk drives and networks are measured in terms of throughput. Typically, throughputs are measured in kbps, Mbps and Gb/sec.
TID — Target ID.
Tiered storage — A storage strategy that matches data classification to storage metrics. Tiered storage is the assignment of different categories of data to different types of storage media in order to reduce total storage cost. Categories may be based on levels of protection needed, performance requirements, frequency of use, and other considerations. Since assigning data to particular media may be an ongoing and complex activity, some vendors provide software for automatically managing the process based on a company-defined policy.
Tiered Storage Promotion — Moving data between tiers of storage as their availability requirements change.
TLS — Tape Library System.
TLS — Transport Layer Security.
TMP — Temporary or Test Management Program.
TOD (or ToD) — Time Of Day.
TOE — TCP Offload Engine.
Topology — The shape of a network or how it is laid out. Topologies are either physical or logical.
TPC-R — Tivoli Productivity Center for Replication.
TPF — Transaction Processing Facility.
TPOF — Tolerable Points of Failure.
Track — Circular segment of a hard disk or other storage media.
Transfer Rate — See Data Transfer Rate.
Trap — A program interrupt, usually an interrupt caused by some exceptional situation in the user program. In most cases, the Operating System performs some action, and then returns control to the program.
TSC — Tested Storage Configuration.
TSO — Time Sharing Option.
TSO/E — Time Sharing Option/Extended.
T-VOL — (ShadowImage) Target Volume for In-System Replication.
—U—
UA — Unified Agent.
UBX — Large Box (Large Form Factor).
UCB — Unit Control Block.
UDP — User Datagram Protocol. 1 of the core protocols of the Internet protocol suite. Using UDP, programs on networked computers can send short messages known as datagrams to one another.
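Sending a datagram takes a single call once a UDP socket exists; the address below is a documentation placeholder, not a real service:

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(b"short message", ("192.0.2.20", 5005))   # no connection, no delivery guarantee
    sock.close()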
UFA — UNIX File Attributes.
UID — User Identifier within the UNIX security model.
UPS — Uninterruptible Power Supply — A power supply that includes a battery to maintain power in the event of a power outage.
UR — Universal Replicator.
UUID — Universally Unique Identifier.
—V—
vContinuum — Using the vContinuum wizard, users can push agents to primary and secondary servers, set up protection and perform failovers and failbacks.
VCS — Veritas Cluster System.
VDEV — Virtual Device.
VDI — Virtual Desktop Infrastructure.
VHD — Virtual Hard Disk.
VHDL — VHSIC (Very-High-Speed Integrated Circuit) Hardware Description Language.
VHSIC — Very-High-Speed Integrated Circuit.
VI — Virtual Interface. A research prototype that is undergoing active development, and the details of the implementation may change considerably. It is an application interface that gives user-level processes direct but protected access to network interface cards. This allows applications to bypass IP processing overheads (for example, copying data, computing checksums) and system call overheads while still preventing 1 process from accidentally or maliciously tampering with or reading data being used by another.
Virtualization — Referring to storage virtualization, virtualization is the amalgamation of multiple network storage devices into what appears to be a single storage unit. Storage virtualization is often used in a SAN, and makes tasks such as archiving, backup and recovery easier and faster. Storage virtualization is usually implemented via software applications. There are many additional types of virtualization.
Virtual Private Cloud (VPC) — Private cloud existing within a shared or public cloud (for example, the Intercloud). Also known as a virtual private network cloud.
VLL — Virtual Logical Volume Image/Logical Unit Number.
VLUN — Virtual LUN. Customized volume. Size chosen by user.
VLVI — Virtual Logic Volume Image. Marketing name for CVS (custom volume size).
VM — Virtual Machine.
VMDK — Virtual Machine Disk file format.
VNA — Vendor Neutral Archive.
VOJP — (Cache) Volatile Jumper.
VOLID — Volume ID.
VOLSER — Volume Serial Numbers.
Volume — A fixed amount of storage on a disk or tape. The term volume is often used as a synonym for the storage medium itself, but it is possible for a single disk to contain more than 1 volume or for a volume to span more than 1 disk.
VPC — Virtual Private Cloud.
VSAM — Virtual Storage Access Method.
VSD — Virtual Storage Director.
VTL — Virtual Tape Library.
VSP — Virtual Storage Platform.
VSS — (Microsoft) Volume Shadow Copy Service.
VTOC — Volume Table of Contents.
VTOCIX — Volume Table of Contents Index.
VVDS — Virtual Volume Data Set.
V-VOL — Virtual Volume.
—W—
WAN — Wide Area Network. A computing internetwork that covers a broad area or region. Contrast with PAN, LAN and MAN.
WDIR — Directory Name Object.
WDIR — Working Directory.
WDS — Working Data Set.
WebDAV — Web-based Distributed Authoring and Versioning (HTTP extensions).
WFILE — File Object or Working File.
WFS — Working File Set.
WINS — Windows Internet Naming Service.
WL — Wide Link.
WLM — Work Load Manager.
WORM — Write Once, Read Many.
WSDL — Web Services Description Language.
WSRM — Write Seldom, Read Many.
WTREE — Directory Tree Object or Working Tree.
WWN ― World Wide Name. A unique identifier for an open-system host. It consists of a 64-bit physical address (the IEEE 48-bit format with a 12-bit extension and a 4-bit prefix).
WWNN — World Wide Node Name. A globally unique 64-bit identifier assigned to each Fibre Channel node process.
WWPN ― World Wide Port Name. A globally unique 64-bit identifier assigned to each Fibre Channel port. A Fibre Channel port's WWPN is permitted to use any of several naming authorities. Fibre Channel specifies a Network Address Authority (NAA) to distinguish between the various name registration authorities that may be used to identify the WWPN.
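The NAA value occupies the high nibble of the first byte of the 64-bit name. A sketch of extracting it in Python; the WWPN shown is invented for the example, not a real port name:

    wwpn = "50:06:0e:80:12:34:56:78"        # made-up example, 8 bytes = 64 bits
    raw = bytes.fromhex(wwpn.replace(":", ""))
    naa = raw[0] >> 4                       # high nibble of the first byte
    print(hex(naa))                         # 0x5 = IEEE Registered format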
—X—
XAUI — "X"=10, AUI = Attachment Unit Interface.
XCF — Cross System Communications Facility.
XDS — Cross Enterprise Document Sharing.
XDSi — Cross Enterprise Document Sharing for Imaging.
XFI — Standard interface for connecting 10Gb Ethernet MAC device to XFP interface.
XFP — "X"=10Gb Small Form Factor Pluggable.
XML — eXtensible Markup Language.
XRC — Extended Remote Copy.
—Y—
YB — Yottabyte.
Yottabyte — A highest-end measurement of data at the present time. 1YB = 1,024ZB, or roughly 1 quadrillion GB. A recent estimate (2011) is that all the computer hard drives in the world do not contain 1YB of data.
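The "quadrillion GB" figure follows from the binary prefixes used throughout this glossary (1KB = 1,024 bytes); a quick check:

    YB = 1024 ** 8    # bytes in 1YB
    GB = 1024 ** 3    # bytes in 1GB
    print(YB // GB)   # 1,125,899,906,842,624 -> roughly 1.1 quadrillion GB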
XFP — "X"=10Gb Small Form Factor Pluggable.
-back to top-
XML — eXtensible Markup Language.
XRC — Extended Remote Copy.
-back to top-

Evaluating this Course
Please use the online evaluation system to help improve our
courses.

Learning Center Sign-in location:


https://learningcenter.hds.com/Saba/Web/Main
