All other trademarks, trade names, and service marks used herein are the rightful property of their respective
owners.
NOTICE:
Notational conventions: 1KB stands for 1,024 bytes, 1MB for 1,024 kilobytes, 1GB for 1,024 megabytes, and
1TB for 1,024 gigabytes, as is consistent with IEC (International Electrotechnical Commission) standards for
prefixes for binary and metric multiples.
© Hitachi Data Systems Corporation 2013. All Rights Reserved
HDS Academy 1073
This training course is based on firmware version 11.1.3250.xx also referenced as Angel-2.
INTRODUCTION ..............................................................................................................IX
Welcome and Introductions ........................................................................................................ix
Course Description ..................................................................................................................... x
Required Knowledge and Skills ..................................................................................................xi
Supplemental Courses............................................................................................................... xii
Course Objectives ..................................................................................................................... xiii
Course Topics ........................................................................................................................... xiv
Learning Paths ...........................................................................................................................xv
Collaborate and Share .............................................................................................................. xvi
HDS Academy Is on Twitter and LinkedIn ............................................................................... xvii
8. MAINTENANCE........................................................................................................ 8-1
Module Objectives ................................................................................................................... 8-1
Node IP Addresses 1 of 2 ........................................................................................................ 8-2
Node IP Addresses 2 of 2 ........................................................................................................ 8-3
Management Facilities ............................................................................................................. 8-4
Securing Management Access ................................................................................................ 8-5
Useful Command Line Utilities ................................................................................................. 8-6
CLI Commands and Context ................................................................................................... 8-7
Maintenance Actions................................................................................................................ 8-8
Software Patching .................................................................................................................... 8-9
Software Version Numbers and Names ................................................................................ 8-10
Software Upgrades ................................................................................................................ 8-15
Upgrade Path in Release Notes ............................................................................................ 8-16
Software Version Example from Daily Summary Email......................................................... 8-17
Saving External SMU Configuration Before Upgrade............................................................ 8-18
Saving Embedded SMU and 30x0/4xx0 Server Registry ...................................................... 8-19
External SMU SW Upgrade and Downgrade ........................................................................ 8-20
1a. Selecting CentOS Installation Method Second ................................................................ 8-21
1b. Selecting CentOS Installation Method Clean .................................................................. 8-22
2. External SMU Application Upgrade Procedures ................................................................ 8-23
Embedded SMU Upgrade and Downgrade 30x0/4xx0.......................................................... 8-24
Upgrade of Embedded SMU SW from the GUI ..................................................................... 8-25
Model 30x0 and 4xx0 Server Upgrade Procedures ............................................................... 8-26
Hitachi Command Suite (HCS) and Device Manager ............................................................ 8-27
Hitachi Command Suite (HCS) 7.3.0 ..................................................................................... 8-28
Hitachi Command Suite (HCS) Version 7.4 and up ............................................................... 8-29
SNMP Manager Connectivity (First SNMP Hi-Track) ............................................................ 8-30
Student Introductions
• Name
• Position
• Experience
• Your expectations
Course Description
Successfully completed:
• Hitachi Enterprise Storage Systems Installation, Configuration and
Support
or
• Hitachi Modular Storage Systems Installation, Configuration and Support
For the best results from this training, it is important that you have
experience and skills in:
• NAS and SAN concepts
• TCP/IP networking concepts such as routers and switches
• Network management and maintenance
• UNIX/Linux administration
• Microsoft® Windows® administration
Supplemental Courses
Course Objectives
Course Topics
Learning Paths
HDS.com: http://www.hds.com/services/education/
Partner Xchange Portal: https://portal.hds.com/
HDSnet: http://hdsnet.hds.com/hds_academy/
Please contact your local training administrator if you have any questions regarding
Learning Paths or visit your applicable website.
Academy in theLoop!
theLoop: http://loop.hds.com/community/hds_academy/course_announcements_
and_feedback_community ― HDS internal only
(Figure: Features/Capacity/Performance positioning of the Hitachi NAS Platform family - F1140, 3080, 3090, 3090 PA, 4060, 4080, and 4100 - scaling up to 140K IOPS per node and 32PB maximum capacity on the 4100.)
Performance numbers are used for comparison purposes only. HNAS 3090 is shown with and without Performance Accelerator; HNAS 3090 PA is with the Performance Accelerator installed. For exact, customer-facing numbers, consult the appropriate and updated performance documents.
F1140 = Hitachi NAS Platform F1140
3080 = Hitachi NAS Platform 3080
3090 = Hitachi NAS Platform 3090
3090 PA = Hitachi NAS Platform 3090 including Performance Accelerator license
4060 = Hitachi NAS Platform 4060
4080 = Hitachi NAS Platform 4080
4100 = Hitachi NAS Platform 4100
(Figure: Hitachi Unified Storage - the file module (HNAS: F3080 or F4060, F30x0 or F4xx0) serves CIFS/NFS over the LAN/WAN; the block modules (Model XS, Model S, Model MH) serve FC, iSCSI, and FCoE; combining a file module with a block module over Fibre Channel yields the FC-SAN, IP-SAN, and FCoE offerings.)
The Gateway technology works like a converter between LAN/WAN file-level data access and Fibre Channel block-level data access. A NAS Gateway is designed primarily to perform the data store and retrieve tasks, which are only a subset of the many tasks a general-purpose file server normally takes care of. Because the server is dedicated to storing and retrieving data, it often outperforms file servers that have to span multiple file server functions.
Benefits:
Feature rich
Asset protection
NAS/SAN consolidation for improved Total Cost of Ownership (TCO)
High-level Implementation
(Diagram: high-level implementation - servers access the nodes over the data network using NFS, CIFS, FTP, and iSCSI; the SMU and standby SMU sit on a private management network; dual Fibre Channel switches/SANs connect the nodes to the storage.)
Consult HiFIRE for interoperability with FC switches, and supported firmware levels.
Support for Enterprise Storage Systems:
Hitachi Unified Storage VM (HUS VM)
Hitachi Virtual Storage Platform (VSP)
Hitachi Universal Storage Platform V (USPV)
Hitachi Universal Storage Platform VM (USPVM)
                                       3080        3090        4060        4080        4100
Usable Capacity max per Cluster        2PB         4PB         8PB         16PB        16PB initial / 32PB later
Max # of System Drives per Cluster     512         512         512         512         512
Max concurrent Connections             30,000      45,000      60,000      60,000      60,000
Max concurrent Open Files
  (per single node/server)             22,000      90,000      221,000     221,000     474,000
LAN / File Serving                     6 x 1Gb +   6 x 1Gb +   4 x 10Gb    4 x 10Gb    4 x 10Gb
                                       2 x 10Gb    2 x 10Gb
Licensing
Performance Accelerator is a licensed feature and will only be enabled if the
Performance Accelerator license is present.
Performance Accelerator is supported on:
NAS 3090 only
Performance Accelerator is installed by:
Installing a Performance Accelerator license
Performing a full system reboot
If clustered, reboot one node at a time
What Is What
(Figure: hardware generation comparison, G1 and G2, across the 4060, 4080, and 4100 models.)
Scalability Targets
• 125 file systems per cluster
• File system sizes up to 1PB
• Up to 32PB shared storage
• Disk capacity 32PB
• Directory capacity up to 16 million files
• Up to 1,024 snapshots
Features
• Unified NAS and IP SAN
• Hardware accelerated
• Virtual volumes and servers
• Multi-protocol support
• Multi-Tiered Storage (MTS)
• Policy-based management
• Data protection features
High Availability
• Hot swappable units
• Clustering up to 8 nodes
• NVRAM mirroring
• Parallel RAID striping
• Active-Active clustering
Module Summary
Module Review
(Block diagram: HNAS 3080/3090 chassis - Mercury FPGA Board (MFB1) with GbE and 10GbE ports, Network Interface (NI), Data Movement (TFL), SiliconFS file system (WFS), Disk Interface (DI), MMB interface (MBI), Fastpath links, and Fibre Channel ports; 3GB + 3GB memory, 10GB metadata cache, 2GB + 2GB NVRAM memory, 4GB sector cache.)
(Block diagram: HNAS 4xx0 chassis - Main FPGA Board (MFB2) with 10GbE ports, Network Interface (NI), Data Movement (TFL), SiliconFS file system (WFS), Disk Interface (DI), MMB interface (MBI), Fastpath links, and Fibre Channel ports; 4GB + 4GB memory, 10GB metadata cache, 4GB/8GB NVRAM memory, 4GB sector cache.)
                         3080    3090    4060    4080    4100
CPU Memory in GBs        8       8       16      16      32
NVRAM(1) in GBs          2       2       4       4       8
Metadata Cache in GBs    10      10      10      10      36
Sector Cache in GBs      4       4       4       4       16
Other in GBs             8       8       12      12      16
Total in GBs             32      32      46      46      108
(1) The NVRAM Data Retention period will be 72 hours and the NVRAM battery needs replacing every 2 years
MMB
Off-the-shelf x86 motherboard
Single processor
• 30x0 dual core and 4xx0 quad core
On board 10/100/1000 Ethernet (3)
Connected to 2 x 2.5” HDD (Linux SW RAID-1 configuration)
Runs Debian Linux 5.0
Inter-module communications over loopback Linux sockets and
shared memory
64 bit architecture
Model 30x0 8GB memory
Model 40X0 16GB memory
Model 4100 32GB memory
MFB
A single custom PCB, similar in size to a motherboard
Connects to MMB using four PCIe lanes
Six FPGAs (Replacing 13 High-performance NAS FPGAs)
Model 30x0 24GB memory
Model 40X0 50GB memory
Model 4100 76GB memory
(Rear-panel callouts, 3080/3090 and 4060/4080/4100 chassis: ALERT LED, recessed power and reset switches, 2 x redundant hot-swappable PSUs, motherboard mouse/keyboard, video, 2 x USB, serial port, 2 x 10/100/1000 Ethernet management ports, plus reserved RJ45 ports for future use. Motherboard port layout may vary; key motherboard ports are identified by labelling. USB ports are for keyboard and/or external media; the serial port is for initial setup.)
Hitachi NAS 4060, 4080, and 4100 have 3 sets of Ethernet ports.
From left to right we have 2 10GbE cluster interconnect ports for use when
clustering NAS 4060 and 4080 systems.
Then four 10GbE cluster interconnects (SFP+). For client file services, there are 4 x
10GbE file serving ports (SFP+).
Can aggregate file serving ports. From 1 and up to 4 aggregations. Direct traffic to
specific ports by giving aggregations the appropriate IP address.
Next in line, there are four 2/4/8Gbps Fibre Channel storage service ports.
All four Fiber Channel ports can be used simultaneously and still maintain their
maximum speed of 8Gbps.
On the MMB, we have the mouse and keyboard PS2 ports and a video port for
connection to a KVM switch, two USB ports, one serial interface.
Two Ethernet ports for connection to the public and private networks for
management access. The third Ethernet port above the USB ports is not currently
active but might be used in the future for Intelligent Platform Management (IPMI).
(Photos: MMB motherboards - TYAN Toledo in HNAS 3080/3090; Supermicro and TYAN boards in HNAS 4060/4080/4100.)
(Photos: STATUS LED and RESET switch locations on the 3100/3200 NIM module and the 3080/3090/4060/4080/4100 MFB.)
• With all Hitachi NAS Platforms, pressing the reset button is always preferable to pulling the power cables or using the main switch
• Generates diagnostic dumps
Power Switch 3080/3090/4060/4080/4100
• Effectively a motherboard power switch
• Should not be required in normal use
Note: The reset switch and power switches are recessed and require the insertion of a pen or similar object to activate.
(Photo: 3080/3090 power supply (450W) with DC GOOD and AC GOOD LEDs.)
(Table: SMU hardware generations - Pentium 4 2.8 GHz / 1GB RAM / 80GB SATA; Pentium Dual-Core 1.8 GHz / 1GB RAM / 500GB SATA; Intel Core 2 Duo E7500 2.93 GHz / 4GB RAM / 1TB SATA; April 2011.)
The SMU400 will not be released and generally available (GA) at the same time as the HNAS 4xx0 and Angel-2 software release.
Module Summary
Module Review
(Diagram: HNAS software architecture - SOAP clients and servers connect the SMU, Atlas, MCP, and BALI components; PAPI links BALI to the underlying Linux platform; the MFB and MBI connect to EVS0.)
The underlying operating system is Linux. Linux manages the hardware, including
the mirrored HDDs and the network protocol stack.
SOAP = Simple Object Access Protocol
MFB = Mercury FPGA Board
MMB = Mercury Motherboard
MCP = Mercury Charge/Power Board
SMU = System Management Unit
PAPI = Platform API
BALI = BOS And Linux Incorporated
(Diagram: HNAS software architecture, repeated from the previous page.)
A software platform
BALI starts after Linux is running and is the software that controls the NAS node
functionality.
BALI = BOS and Linux Incorporated.
PAPI communicates the necessary information to the Linux platform for execution, and it periodically scans for Linux configuration changes and fixes any discrepancies. The custom FPGA system board is managed through a device driver, as any other device would be. The Linux network stack provides the connectivity used for management.
There is a SOAP client or server for each of the major BlueArc software components.
SOAP was first implemented in Stone-1 v6.0, which enables different firmware versions to communicate. SOAP is an industry standard; if you are not familiar with it, SOAP is a simple XML-based protocol that allows applications to exchange information over HTTP. It makes the individual components fairly independent of each other, which makes development and modification much simpler.
SOAP = Simple Object Access Protocol
PAPI = Platform API; if you try to change the Linux configuration directly, PAPI will overwrite the changes
API = Application Programming Interface
XML = Extensible Markup Language
The PAPI services can be restarted on request.
Virtualization
• Virtual file system and volumes
• Basic and Premium Deduplication
• Enterprise Virtual servers (EVS)
• Clustered Name Space (CNS)
Storage Management
• Integrated tiered storage
• Tiered File Systems (TFS)
• Policy-based data migration and replication
Data Protection
• Snapshots
• Asynchronous replication
• Anti-virus scanning
• Disk-to-disk and disk-to-tape backup
• TrueCopy Remote Replication and
ShadowImage Replication
• Synchronous Disaster Recovery
(Figure: example license model - BASE and Ultra bundles plus individual license keys such as iSCSI and EVS Security.)
The above example is only meant to explain the concept of license bundles and individual licenses. The bundles might be changed, for example to adjust the offering to the market and to competitive solutions.
Module Summary
Module Review
Module Objectives
1. Rack mounting
2. Pre-cabling
a) To avoid IP-address conflicts, do not connect any customer-facing network to the nodes until the initial setup is completed
3. Fibre Channel (FC) switch configuration
4. Storage subsystem configuration
5. Initial setup of the first node
If this is a single node, install the SMU application and the process stops here;
otherwise continue:
6. Initial setup of SMU
a) SMU Initial Configuration (CLI — Command Line Interface)
b) SMU Wizard (GUI)
7. Initial setup of the second node in the cluster
8. Join the second node to the cluster
Rack Mounting
(Diagrams: node and SMU management access paths - the web GUI and SSC reach the node's SOAP server and PAPI; SSH and the serial console reach the Linux shell (root@mercury(bash):~#); management traffic uses eth0 (public) and eth1 (private); default accounts include root, manager, supervisor, admin, and nasadmin. The second diagram shows the same paths when an SMU manages the node.)
Interface configuration:
115,200 bps
8 data bits
1 stop bit
No parity
No Flow control
VT100 emulation
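As an illustration only (the device name is an assumption and depends on the USB-to-serial adapter in use), the console can be opened from a Linux laptop with a terminal program such as screen:
screen /dev/ttyUSB0 115200
Then confirm 8 data bits, 1 stop bit, no parity, no flow control, and VT100 emulation in the terminal program.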
Collect all customer-related information and fill in this form as a minimum:
Node 1 (Clustername-1)    Mycluster-1    192.0.2.11
Node 2 (Clustername-2)    Mycluster-2    192.0.2.12
External SMU              Smu1           192.0.2.10
The IP address in the red network is optional. If components on the red network
need to be managed by the internal SMU or the customer has services or Hi-Track
on that network, an IP address for the Admin Virtual Node is required.
(Diagram: step 1 - CLI setup of node 1; the Admin EVS IP 12.120.56.111 sits on eth0 toward the public network and gateway, while eth1 connects to the private network.)
Default settings
Setting Default 1 Default 2
Root password nasadmin nasadmin
Manager password nasadmin nasadmin
Admin password nasadmin nasadmin
Admin EVS public IP address (eth0) 192.168.31.xxx 192.168.4.xxx
Subnet mask 255.255.255.0 255.255.255.0
Admin EVS private IP address (eth1) 192.0.2.2 192.0.2.2
Node private IP address (eth1) 192.0.2.200 192.0.2.200
Subnet mask 255.255.255.0 255.255.255.0
Gateway 192.168.31.254 192.168.4.1
Host name myhost testhost
Domain mydomain.com testdomain.com
The 4060, 4080, and 4100 models are not preconfigured with any default
configuration. Therefore the process for initial setup is somewhat different from the
3080 and 3090 models.
WARNING: root access should be used only under instruction from your support
team. Modifying system settings or installed packages could adversely affect
the server.
root@mercury(bash):~# nas-preconfig
This script configures the server's basic network settings (when such settings
have not been set before).
Please provide the server's:
- IP address
- netmask
- gateway
- domain name
- host name
After this phase of setup has completed, further configuration may be carried out via web
browser.
Please enter the Admin Service Private (eth1) IP address: (the CLI dialog continues on the next page)
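As an illustration only, the prompts that follow can be answered with the default values from the table above; a real installation uses the customer's own addresses, and the exact prompt order may vary:
Admin Service Private (eth1) IP address : 192.0.2.2
Netmask                                 : 255.255.255.0
Gateway                                 : 192.168.31.254
Domain name                             : mydomain.com
Host name                               : myhost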
(Diagrams: after step 1 the node holds its Admin EVS IP 12.120.56.111 on eth0; once administration begins, a data EVS (EVS1, 213.1.15.22) can be created on the public data network.)
After the node configuration has been initialized, the administration process can begin.
A data EVS can be created to offer file services to the clients connected via the data network. In the above example only one EVS (EVS1) is created.
Note that the embedded SMU GUI has no scroll function, and only one server can be managed from it.
Clustering from A to Z
(Diagram: single-node management connections - SSH/GUI, NTP, storage, switches, SMTP, and Hi-Track; one IP address is associated with the Admin Virtual Node (AVN), the eth1 private network address is 192.0.2.45, and the SMU uses 12.120.56.222 on the public network and 192.0.2.40 on the private network. These addresses are permanent because there are no clustering considerations to worry about.)
6. Manage node 1
7. Add license bundles, TB and cluster key to node 1
8. Promote node 1 as single node cluster
9. Manage node 2
10. Add cluster license key to node 2
11. Add node 2 into cluster
(Diagram: step 1 - CLI setup of node 1; Admin EVS IP 192.0.2.45, eth0 toward the public network and gateway, eth1 on the private network.)
For HNAS 4060, 4080, 4100 models refer to the 4 pages starting with page 13.
(Diagram: step 2 - CLI setup of the SMU; node 1 retains its Admin EVS IP 192.0.2.45.)
This part of the initial setup defines, among other parameters, the public network access of the SMU. The customer needs to supply the address on the public customer network that is intended to be used as the management interface address.
The serial cable is only intended to be used for the initial installation process. It is strongly recommended to remove the serial cable after installation to avoid any performance impact on the management plane.
(Diagram: SMU Wizard step of the initial setup; node 1 with Admin EVS IP 192.0.2.45.)
During the SMU Wizard process, the private LAN address is given. It is
recommended you use the default address range on the “Rack Network”: 192.0.2.X.
Escalation and dump analysis is easier when the addressing follows the default
address and ranges for the private network. The passwords, DNS and Domain,
SMTP host, time zone, and public NTP host access are defined during this process as
well.
(Diagram: Managed Servers step of the initial setup.)
In the Managed Servers GUI, specify the IP address of the Admin EVS and the
UserID/Password for the node. This specification is for the SMU to make a
connection to the Admin EVS.
(Diagram: step 5 - adding license keys to node 1.)
In step 5, the license keys for the primary node are added.
(Diagram: Cluster Wizard step - node 1 with private IP address 192.0.2.41 on eth1 and Admin EVS IP 192.0.2.45.)
The Cluster Wizard defines the physical IP address of the primary node, which is used by the cluster software for the heartbeat and cluster interconnect addresses. It also promotes the node to a (single-node) Active-Active cluster and assigns a cluster name. This process ends with a restart of the primary node.
1. Go to SMU
2. Under Server Settings, click Cluster Wizard
3. Enter cluster name and node IP Address
a. Refer to Lab Configuration Sheet
The quorum device would normally be on the SMU that manages the node, but it could actually be any SMU that contains quorums and is addressable on the private rack network. As an example, the MetroCluster solution recommends placing the quorum in a different location from the Primary SMU or Standby SMU.
With this flexibility, the quorum can be located at a third location, different from the Primary/Secondary site.
The IP address in the blue network is optional. If the customer has services or Hi-
Track on that network, an IP address for the Admin Virtual Node is required as well
on the blue network.
(Diagram: node 1 (private IP 192.0.2.41, Admin EVS IP 192.0.2.45) and node 2 (Admin EVS IP 192.0.2.46) on the private and public networks.)
For HNAS 4060, 4080, 4100 models refer to the 4 pages starting with page 13.
(Diagram: step 8 - node 2 (Admin EVS IP 192.0.2.46) is added to the SMU's Managed Servers; node 1 already holds its license keys (CIFS, NFS, ..., Cluster:1) from step 5.)
If the single node 2 is going to join an existing cluster, this node also needs to be managed by the SMU in order to install the license key for clustering.
Select the Managed Servers page for node 1 and add node 2 to the list of managed servers by specifying the IP address of the Admin EVS of node 2.
(Diagram: step 9 - the cluster license key (Cluster:1) is added to node 2 via the GUI.)
Select the admin EVS of Node 2 and choose Server Settings to add the license key to
Node 2.
Only one license is added to Node 2 (The cluster license: “MAX 1 Nodes”), since the
“protocols” are already licensed on the primary Node 1.
(Diagram: after the cluster license is installed, node 1's cluster license shows Cluster:1+1=2.)
1. Go to SMU
2. In the Server status console window scroll down and select Node 1
(Admin EVS)
3. Under Server Settings, click Cluster Configuration and select Add
Cluster Node
a. Enter the IP Address for Node 2
After automatic reboot of Node 2, the node will join the cluster
defined in Node 1
(Diagram: the completed 2-node cluster - both nodes licensed (Cluster:1+1=2) and managed by the SMU.)
After the 2-node cluster configuration has been initialized and established, the administration process can begin.
A data EVS can be created on either node to offer file services to clients connected via the data network. In the above example only one EVS (EVS1) is created, on node 2.
Module Summary
Module Review
10Gb Protocol
"X" = 10
XFI is the standard interface for connecting 10 Gigabit Ethernet MAC devices to an XFP interface.
As of mid-2006, most 10GbE products used the XAUI interface, which has four lanes running at 3.125Gbit/sec using 8B/10B encoding. XFI provides a single lane running at 10.3125Gbit/sec with a 64B/66B encoding scheme. The XFP (10 Gigabit Small Form Factor Pluggable) used in models 3080 and 3090 is a hot-swappable, protocol-independent optical transceiver. It typically operates at 850nm, 1310nm, or 1550nm for 10Gb/sec SONET/SDH, Fibre Channel, Gigabit Ethernet, and other applications including DWDM links.
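As a quick check of the numbers above, both interfaces carry the same 10 Gbit/s of payload once the encoding overhead is removed:
XAUI: 4 lanes x 3.125 Gbit/s x 8/10 (8B/10B)  = 10 Gbit/s
XFI:  1 lane  x 10.3125 Gbit/s x 64/66 (64B/66B) = 10 Gbit/s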
2 x SFP+ 10GbE
Cluster Ports
4 x SFP+ 10GbE
Network Ports
• FTLX8571D3BCV
• X = 10 (10GbE)
4 x SFP+ 8Gbps
FC Storage Ports
• FTLF8528P3BNV
• F = FC (Fibre Channel)
(Table: maximum supported cable distances; values vary by fiber grade, left to right.)
Fibre Channel 1062 Mbps:    300m / 500m / 500m / 500m / 2000m
100Base-FX Ethernet:        2000m / 2000m / 2000m / 2000m
1000Base-SX Ethernet:       275m / 550m / 550m / 550m
1000Base-LX Ethernet:       550m / >550m / >550m / >550m / 5000m
10GBase-LX4 Ethernet:       300m / 300m / 300m / 10Km
10GBase-SR/SW Ethernet:     33m / 82m / 300m / 400m / 10Km / 40Km
(Block diagram: HNAS 3080/3090 chassis, repeated with the data paths labelled - cluster interconnect and data interfaces on the Network Interface (NI), Data Movement (TFL), SiliconFS file system (WFS) with metadata cache, Disk Interface (DI) with sector cache, Fastpath links, GbE/10GbE ports, MMB interface (MBI), and Fibre Channel ports.)
(Block diagram: HNAS 4xx0 chassis with the Main FPGA Board (MFB2).)
(Diagrams: SMU network connections - external IP data network connections and the private management network (internal switch); public data network and private management network topologies for single-node and clustered Hitachi NAS Platform installations.)
(Diagram: EVS address assignment example - EVS1-1 with 192.168.3.81 on the public network and 172.145.2.14 on the management network; admin EVSs and data EVSs hosted on node 1 and node 2.)
The diagram displays the address assignment for the management and public (data) LAN. Pay attention to the address 192.0.2.25, which is an EVS, but only for administration purposes. That address can reside on either physical node 1 or node 2, like the other EVS addresses. The addresses 192.0.2.21 and 192.0.2.22 are tightly coupled to the physical nodes and cannot move. The addresses 10.67.64.15 and 10.67.68.169 are the public addresses of the internal admin EVS.
LACP is a negotiated protocol which uses "Actor" and "Partner" link entities. Partner
takes cues from Actor when Actor decides to bring a link up.
In a single-switch configuration, there is no functional difference between the LACP
and static aggregation. In static aggregation both parties bring up the previously
defined aggregation link member unconditionally.
Where LACP can be utilized to its fullest is in a link/switch failover situation. In this
scenario, one would create a *single* aggregation on the HNAS server side and split
it between two switches (for example, 4 links are going to one switch and 2 links to
the other). Since the Actor can only bring up a logical link (which can be a number of
physical links) with a Partner, only one switch will be active at a time. In a 4+2
scenario, the switch with more links will be favored. In a symmetrical split (for
example, 3+3), any one of the switches can be chosen as an LACP Partner.
Static aggregation link cannot be split between the switches.
(Diagram: NTP time synchronization - an NTP server on the public data network provides time sync for the Hitachi NAS Platform nodes and the SMU on the private management network.)
Fibre Channel Connectivity
The AMS family and HUS 100 have the concept of controllers: controller 0 and controller 1.
The Enterprise family up to VSP does not have controllers, so the leading digit of the port ID is used as a virtual controller number. Channel port 1A will belong to virtual controller 1, 3A to controller 3, and channel port 8C will belong to virtual controller 8.
HUS VM does not have controllers, but the cluster ID will be interpreted from the SCSI inquiry command output. Therefore, channel port 1A will belong to cluster 1, 3A to cluster 1, and channel port 8C will belong to cluster 2.
The examples on the following pages use the scsi-racks command to create the output.
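As a minimal illustration of the enterprise-family rule above (a hypothetical shell helper, not an HNAS command), the virtual controller number is simply the leading character of the port ID:
# Illustrative only: derive the virtual controller from a port ID such as 1A, 3A, or 8C.
# (Enterprise-family rule only; AMS/HUS use real controllers and HUS VM uses the cluster ID instead.)
port_to_vctl() {
    local port="$1"
    echo "${port:0:1}"    # first character of the port label = virtual controller number
}
port_to_vctl 8C           # prints 8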
(Diagram: minimum two-way clustered FC connection - node 1 and node 2 each connect through Fabric 1 and Fabric 2 to the Adaptable Modular Storage controllers Ctl-0 (ports 0A/0B) and Ctl-1 (ports 1A/1B); the callout notes that path priority to LUNs on AMS can be set using odd or even paths. Numbered circles mark the FC switch ports referenced below.)
This is the minimum configuration with a two-way clustered connection. Numbers in the
fabric indicate the port numbers on the FC Switch.
Preferred path configuration:
System Drive 0, 2, 4……. Node 1 port 1 to interface 0A LUN 0, 2, 4…….
System Drive 1, 3, 5……. Node 1 port 3 to interface 1A LUN 1, 3, 5…….
Zoning Fabric 1:
Zone 1: port 1 and 2
Zone 2: port 3 and 2
Zoning Fabric 2:
Zone 1: port 6 and 9
Zone 2: port 8 and 9
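The zoning above could be implemented, for example, with Brocade-style commands on the Fabric 1 switch (illustrative only; the zone names are made up, the domain ID of 1 is an assumption, and other switch vendors use different syntax):
zonecreate "HNAS_N1P1_STG", "1,1; 1,2"     # Zone 1: switch ports 1 and 2
zonecreate "HNAS_N1P3_STG", "1,3; 1,2"     # Zone 2: switch ports 3 and 2
cfgcreate  "HNAS_CFG", "HNAS_N1P1_STG; HNAS_N1P3_STG"
cfgenable  "HNAS_CFG"
Fabric 2 would be zoned the same way using its ports 6, 8, and 9.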
(Diagram: node 1 and node 2, each with an ASIC driving FC ports 1-4, connected through Fabric 1 and Fabric 2.)
A key difference in the chip-to-chip failover from the node-to-node failover is that
for a chip-to-chip failover, the EVS will stay up on the node. What this means is that
the interaction with the clients is different as follows:
Chip-to-Chip Behavior
One major benefit to the chip-to-chip failover is that unaffected file systems will
continue to serve data, and will not be interrupted.
During System Drive failover to the other chip, the EVS will continue to interact
with connected clients. Clients attempting access to a file system that is using a
System Drive that is moving to the other chip will receive I/O errors.
Once the failover is complete, the I/O errors will stop and normal service will
continue.
Node-to-Node Behavior
For a node-to-node failover, the EVS (and all file systems) will completely disappear
for some period of time. No responses (I/O errors) will be returned. The EVS will
reappear on the other node and normal service can continue.
Thus, in a chip-to-chip failover the clients will maintain connectivity and will get
I/O errors, while in node-to-node failover the clients will lose connectivity and
might not receive errors. The clients should be prepared for both possibilities.
The best way to maintain optimum connectivity and availability while minimizing
potential system impact is to properly configure the system to avoid chip-to-chip
failover unless there is a specific combination of multiple failures (paths for LUNs
must have failed to a particular chip but remain available to the other chip).
(Diagrams: unsupported configurations - node 1 and node 2 FC ports cabled to Hitachi Unified Storage (HUS) ports 0E, 0F, 1E, and 1F in a way that is not supported.)
Direct Attached Storage (DAS) is not supported for Hitachi High-performance NAS
models 2100, 2200, 3100, and 3200. It is only supported on Hitachi NAS Platform
models 30x0 and 4xx0.
(Diagram: a single node 1 with its ASIC and FC ports 1-4 direct-attached to Hitachi Unified Storage (HUS) ports 0A, 0C, 1A, and 1C.)
(Diagram: node 1 and node 2 connected through Fabric 1 and Fabric 2 to Hitachi Unified Storage (HUS) ports 0E, 0F, 1E, and 1F; numbered circles mark the FC switch ports referenced in the zoning below.)
Zoning Fabric 1:
Zone 1: port 1, 2 and 4
Zone 2: port 3, 2 and 4
Zoning Fabric 2:
Zone 1: port 6, 7 and 9
Zone 2: port 8, 7 and 9
(Diagram: the same switched HUS configuration, repeated.)
(Diagram: node 1 and node 2 connected through Fabric 1 and Fabric 2 to Hitachi Unified Storage VM (HUS VM) ports 3A, 5B, 4A, and 6B, with LUNs 0-31 and 32-63 split across secure storage domains.)
Preferred path and secure storage domains can be used to fine-tune performance. Although more than 2 paths per LUN are supported, engineering recommends only 2 paths per LUN.
The HNAS development team recommends keeping the number of paths as low as
possible, which means two paths per LUN.
To satisfy this recommendation and keep the default setting, the secure storage
domains could be arranged as above.
(Diagram: node 1 and node 2 connected through Fabric 1 and Fabric 2 to HUS VM; numbered circles mark the FC switch ports.)
For proper configuration of Hitachi NAS Platform models 30x0 and 4xx0 node
clusters, the FC port configurations need to be identical. In other words, port 1 on all
cluster nodes needs to see the same logical units (LUNs) on the same disk system
controller port or controller.
Try to follow one Hport at a time on each node:
Hport 1 on node 1 sees LUN 0 over 3A and 4A, which are virtual controllers 3 and 4
Hport 1 on node 2 sees LUN 0 over 3A and 4A, which are virtual controllers 3 and 4
Then try the next Hport on each node:
Hport 3 on node 1 sees LUN 0 over 1B and 2C, which are virtual controllers 1 and 2
Hport 3 on node 2 sees LUN 0 over 1B and 2C, which are virtual controllers 1 and 2
(Diagram: a BAD example - node 1 and node 2 connected through Fabric 1 and Fabric 2 to HUS VM ports 3A, 1B, 4A, and 2C so that the two nodes see the LUNs over different virtual controllers.)
Map all LUNs to all ports on the Hitachi Unified Storage VM; preferred path can still be used from the NAS node to fine-tune performance.
For proper configuration of Hitachi NAS Platform models 30x0 and 4xx0 node
clusters, the FC port configurations need to be identical. In other words, port 1 on all
cluster nodes needs to see the same logical units (LUNs) on the same disk system
controller port or controller.
Try to follow one Hport at a time on each node:
Hport 1 on node 1 sees LUN 0 over 3A, which is virtual controller 3
Hport 1 on node 2 sees LUN 0 over 4A, which is virtual controller 4
Then try the next Hport on each node:
Hport 3 on node 1 sees LUN 0 over 1B, which is virtual controller 1
Hport 3 on node 2 sees LUN 0 over 2C, which is virtual controller 2
The requirement of seeing the storage in the same way from both nodes is not
fulfilled, and EVS migration will be affected.
(Diagram: node 1 and node 2 direct-attached to Hitachi Unified Storage (HUS) ports 0A, 0C, 1A, and 1C.)
For proper configuration of Hitachi NAS Platform models 30x0 and 4xx0 node
clusters, the FC port configurations need to be identical. In other words, port 1 on all
cluster nodes needs to see the same logical units (LUNs) on the same disk system
controller.
Try to follow one Hport at a time on each node:
Hport 1 on node 1 sees LUN 0 over 0A, which is controller 0
Hport 1 on node 2 sees LUN 0 over 0C, which is controller 0
Then try the next Hport on each node:
Hport 3 on node 1 sees LUN 0 over 1A, which is controller 1
Hport 3 on node 2 sees LUN 0 over 1C, which is controller 1
(Diagram: a BAD example - node 1 and node 2 direct-attached to HUS ports 0A, 0C, 1A, and 1C so that the nodes see the LUNs over different controllers.)
For proper configuration of Hitachi NAS Platform models 30x0 and 4xx0 node
clusters, the FC port configurations need to be identical. In other words, port 1 on all
cluster nodes needs to see the same logical units (LUNs) on the same disk system
controller.
Try to follow one Hport at a time on each node:
Hport 1 on node 1 sees LUN 0 over 1A, which is controller 1
Hport 1 on node 2 sees LUN 0 over 0C, which is controller 0
Then try the next Hport on each node:
Hport 3 on node 1 sees LUN 0 over 0A, which is controller 0
Hport 3 on node 2 sees LUN 0 over 1C, which is controller 1
The requirement of seeing the storage in the same way from both nodes is not
fulfilled, and EVS migration will be affected.
High-performance NAS Platform models 3100 and 3200 do not support Direct Attached
Storage (DAS) in a two node cluster configuration, only as a single node.
Hitachi NAS Platform models 30x0 and 4xx0 support single node and two node
clustered configurations using DAS connectivity.
A maximum of two storage systems can be connected using DAS as backend
connectivity.
You can connect storage using direct FC connections, or an FC switch; however, do not
use both connection types in the same system configuration.
The Hitachi NAS Platform in a switch-less configuration using the Hitachi Enterprise
Storage Systems (9900V, USP, and USP-V) introduces cabling restrictions when
connected using direct FC connections.
The Hitachi NAS platform treats the first letter of the port number as a virtual controller
with a limit of two controllers maximum.
Therefore, connections need to be grouped into only two controller groups, and each
controller group must be visible from both nodes and switches.
(For a direct-connect example: connect node 1 to ports 1A and 6A, and node 2 to ports
1B and 6B).
(Diagram: node 1 and node 2 direct-attached to an RS12C/RS24C storage system - ports 3 and 4 on controller A and ports 3 and 4 on controller B.)
For proper configuration of Hitachi NAS Platform models 30x0 and 4xx0 node
clusters, the FC port configurations need to be identical. In other words, port 1 on all
cluster nodes needs to see the same logical units (LUNs) on the same disk system
controller.
Try to follow one Hport at a time on each node:
Hport 1 on node 1 sees LUN 0 over FC port 3 which is controller A
Hport 1 on node 2 sees LUN 0 over FC port 4 which is controller A
Then try the next Hport on each node:
Hport 3 on node 1 sees LUN 0 over FC port 3 which is controller B
Hport 3 on node 2 sees LUN 0 over FC port 4 which is controller B
For proper configuration of Hitachi NAS Platform models 30x0 and 4xx0 node
clusters, the FC port configurations need to be identical. In other words, port 1 on all
cluster nodes needs to see the same logical units (LUNs) on the same disk system
controller.
Try to follow one Hport at a time on each node:
Hport 1 on node 1 sees LUN 0 over 5D which is virtual controller 5
Hport 1 on node 2 sees LUN 0 over 7B which is virtual controller 7
Then try the next Hport on each node:
Hport 3 on node 1 sees LUN 0 over 6D which is virtual controller 6
Hport 3 on node 2 sees LUN 0 over 8B, which is virtual controller 8
The requirement of seeing the storage in the same way from both nodes is not
fulfilled, and EVS migration will be affected.
Storage Considerations
Port Options
Module Summary
Module Review
(Diagram: the storage stack - RAID Groups (RG) in the storage system are carved into LDEVs (LDEV 11, 12, 13, 16, 19-23, 0), presented as LUNs, and seen by HNAS as System Drives SD 0 through SD 9.)
RG = RAID Group
LDEV = Logical Device
HNAS = Hitachi NAS Platform
SD = System Drive
SP = Storage Pool
FS = File System
EVS = Enterprise Virtual Server
SHR = Share
In the SMU, go to Storage Management > System Drives and verify that new
System Drives (in other words, LUNs presented by storage) are visible to the
Hitachi NAS Platform node.
1. Verify storage capacity license limit.
2. Once verified, allow Hitachi NAS Platform Node access to the specified System
Drives.
A refresh can also be executed from the CLI using the scsi-refresh command.
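For example, using the ssc utility described in the Maintenance module, the rescan can be run remotely against a node (the address and credentials below are the documented defaults and are shown for illustration only):
ssc -u supervisor -p supervisor 192.0.2.2 scsi-refresh    # rescan the FC paths for newly presented LUNs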
DAS stands for Direct Attached Storage
You must discover the storage array before you can manage it
When you click on Discover Racks, the IP addresses of both RAID controllers are
discovered, and the array becomes manageable by the BlueArc Systems Server.
Once a rack is added, the following events occur:
The selected RAID racks appear on the RAID Racks list page and on the System
Monitor (for the currently selected managed server).
The SMU begins logging rack events, which can be viewed through the Event
Log link on the RAID Rack Details page.
RAID rack severe events will be forwarded to each managed server that has
discovered the rack and included in its event log. This triggers the server's alert
mechanism, possibly resulting in alert emails and SNMP traps.
The RAID rack’s time is synchronized daily with SMU time.
If system drives are present on the RAID rack, the rack “cache block size” will be
set to 16KB.
Note that if there is a problem with either array controller, the rack will be discovered, but in a degraded (partially discovered) state, and will have reduced functionality. You must resolve the problem with the array, then remove and rediscover the array.
SMU only has the API scripts to build RAID arrays on BlueArc Storage Arrays (LSI).
On other vendors’ storage, you will use their native application.
Supported RAID types are RAID-1, 5, and 6 on BlueArc RC16 arrays.
1. Navigate to the System Drives page (Home > Storage Management > System
Drives).
2. In the System Drives page, click Create.
3. Select a rack. When the Select RAID Rack page is displayed, select a rack, then
click Next.
4. Indicate the RAID level.
5. Specify the drive parameters (size, name, stripe size).
Select the number of drives in your RAID group. This includes Parity drives.
You are able to create multiple SDs within a single RAID group.
This is not recommended because it will cause the heads on the disk to seek
between 2 physical areas on disk.
Select the stripe depth for your RAID groups, keeping in mind Superflush.
(Diagram: two storage pools - SP 01 holds file systems FS01, FS02 and virtual volume /VV1, SP 02 holds FS03 and FS04; the pools are built from System Drives SD 0-SD 9, which map to LDEVs carved from RAID Groups in the storage system. EVS1 (192.168.3.21) and EVS2 (192.168.3.25) host the file systems.)
(Diagram: the same stack with Hitachi Dynamic Provisioning - storage pool SP 01 is built from SD 0-SD 4, which map to DP-VOLs 11-15 carved from an HDP pool on top of the RAID Groups.)
An HDP Pool hosting HNAS System Drives (SDs) should never be over-provisioned.
HNAS is not aware of HDP thin-provisioned volumes.
If an HDP Pool runs out of disk space, the HNAS System Drive experiences SCSI and I/O errors, and HNAS fails the entire span and unmounts it automatically.
Always monitor and ensure that the HDP Pools used for HNAS are never oversubscribed.
HNAS does not have the ability to adapt to DP-VOL size changes; the size of the DP-VOLs must never change.
All the DP-VOLs used in an HNAS storage pool should have the same performance capabilities.
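The rule is simple arithmetic: the combined size of all DP-VOLs presented to HNAS must not exceed the physical capacity of the HDP Pool. A minimal shell sketch of that check (all values are hypothetical):
POOL_CAPACITY_TB=40
DPVOL_SIZES_TB="8 8 8 8"          # sizes of the DP-VOLs carved from the pool
TOTAL=0
for s in $DPVOL_SIZES_TB; do TOTAL=$((TOTAL + s)); done
if [ "$TOTAL" -gt "$POOL_CAPACITY_TB" ]; then
    echo "WARNING: HDP pool is oversubscribed (${TOTAL}TB of DP-VOLs on ${POOL_CAPACITY_TB}TB of pool capacity)"
else
    echo "OK: ${TOTAL}TB of DP-VOLs on ${POOL_CAPACITY_TB}TB of pool capacity"
fi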
(Diagram: file system FS1 laid out in stripes 1-4 plus a parity stripe across System Drives/LDEVs built from RAID-5 3+1 groups; access is allowed only to the SDs in use, the others show no access.)
The storage pool consists of one or more System Drives (SDs). A single SD has the
same capacity (in bytes) as a Logical Unit Number (LUN) presented over the SAN.
This diagram illustrates the concept of storage pools and should not be interpreted
as a best practice. For best practices, consult the appropriate documentation for the
modular or enterprise disk subsystems.
(Diagram: the same layout viewed as a stripeset - FS1's stripes span the set of System Drives/LDEVs (RAID-5 3+1) that make up the stripeset.)
(Diagram: a storage pool built on the stripeset - the pool spans the System Drives/LDEVs and holds the file system stripes.)
With the storage pool concept, more than one file system can be allocated in the
same storage pool. The concept of storage pool requires a license key for storage
pools.
(Diagram: two file systems, FS1 and FS2, allocated from the same storage pool across the stripes of the underlying System Drives.)
Auto expansion can be used as a kind of thin provisioning on the file system level.
File systems are created with a maximum value, but only a user-defined fraction of
the maximum is pre-allocated. In this way the capacity of all file systems can be
greater than the available space in the storage pool. When more data is added to the
file system, the pre-allocated space expands as needed. Of course, the file systems
together cannot grow larger than the storage pool capacity allows, so growth of the
file systems should be taken into consideration when enlarging the storage pool.
The HDP feature introduces a virtualization layer that hides the RAID group layout from the HNAS nodes.
The best practice is to assign one SD/LUN/DP-VOL mapping to one SDG (System Drive Group).
(Diagram: DP-VOLs 1-4, each mapped into its own SD group; DPV = DP-VOL.)
scsi-queue-limits-show (the Default and Current values are identical for each model):
HITACHI AMS500 / DF700M                  per controller: 0, per target port: 500,  per system drive: 32
HITACHI AMS1000 / DF700H                 per controller: 0, per target port: 500,  per system drive: 32
HITACHI SMS100 / SA800                   per controller: 0, per target port: 500,  per system drive: 32
HITACHI SMS110 / SA810                   per controller: 0, per target port: 500,  per system drive: 32
HITACHI AMS2100 / DF800S                 per controller: 0, per target port: 500,  per system drive: 32
HITACHI AMS2300 / DF800M                 per controller: 0, per target port: 500,  per system drive: 32
HITACHI AMS2500 / DF800H                 per controller: 0, per target port: 500,  per system drive: 32
HITACHI VSP / R700                       per controller: 0, per target port: 2000, per system drive: 32
HITACHI HUS110 / DF850XS                 per controller: 0, per target port: 500,  per system drive: 32
HITACHI HUS130 / DF850S                  per controller: 0, per target port: 500,  per system drive: 32
HITACHI HUS150 / DF850MH                 per controller: 0, per target port: 500,  per system drive: 32
HITACHI HUS-VM / HM700                   per controller: 0, per target port: 2000, per system drive: 32
HITACHI Default HDS / UNKNOWN / OTHER    per controller: 0, per target port: 256,  per system drive: 32
Allowing access only to the SDs you need makes the storage pool assignment much easier. In this scenario, access to 4 SDs is granted on the System Drives screen, so in the storage pool wizard all you need to do is check all and continue, without having to consider which SDs to use.
The file system on the Hitachi NAS Platform was originally called “Silicon File
System”. The HNAS file system can be displayed as shown on the screen capture.
Newer file system versions are called Wise File System version 1 (WFS1) and Wise
File System version 2 (WFS2).
(Screenshot callouts, file system creation:)
• Assign the file system to an EVS.
• Disable Auto-Expansion and specify the initial file system size to create file systems with maximum capacity.
• WORM is supported by Hitachi.
• Format for BlueArc JetMirror target.
• Block Size: 32KB = best performance for big files; 4KB = optimal space utilization.
• Prepare for Deduplication.
(Diagram: Traditional file system vs Tiered File Systems - in a traditional file system, metadata and user data share the same high-speed SAS disks; in a Tiered File System, Tier 0 (high-speed SSD or SAS disks) handles metadata and small reads and writes, while Tier 1 (larger, lower-cost NL-SAS or SATA disks) handles user-data reads and writes.)
(Screenshots: steps 1 and 2, including span-list output.)
(Diagram: storage pools SP 01 and SP 02 with file systems FS01-FS04 and virtual volume /VV1 on System Drives SD 0-SD 9, mapped to LDEVs and RAID Groups, repeated from earlier.)
RG = RAID Group
LDEV = Logical Device
HNAS = Hitachi NAS Platform
SD = System Drive
SP = Storage Pool
FS = File System
EVS = Enterprise Virtual Server
SHR = Share
What Is Different?
                        NFS (UNIX)                              CIFS (Windows)
Stateful/Stateless      No historical relationship;             Client/server share history;
                        do not have to re-authenticate          have to re-authenticate the connection
File and Directory      Check U-ID/G-ID at request time;        Use ACL for the share;
Security                U-ID/G-ID per file/directory            U-ID (SID) checked against ACL
File and Directory      Advisory locks;                         Mandatory locks;
Locking                 works for good citizens only            access decides the lock
UNIX Permissions
Maybe stupid –
but “Simple”!
Windows Permissions
Maybe Advanced —
but “Complex”!
DNS:
DNS is used to translate host names into IP addresses. With DNS, records must be
created manually for every host name and IP address.
Dynamic DNS:
On TCP/IP networks, the Domain Name System (DNS) is the most common method
to resolve a host name with an IP address, facilitating IP-based communication.
With DNS, records must be created manually for every host name and IP address.
Starting with Microsoft Windows 2000, Microsoft enabled support for Dynamic DNS,
with a DNS database that allows authenticated hosts to automatically add a record
of their host name and IP address, thus eliminating the need for manual creation of
records.
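As a simple illustration (the host name is hypothetical), a record created by such a dynamic update can be checked from any client:
nslookup evs1.mydomain.com        # should return the file-serving IP address the server registered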
Using NetBIOS:
When enabled, NetBIOS allows NetBIOS and WINS on this server. If this server
communicates by name with computers that use older Microsoft Windows versions,
this setting is required. By default, the server is configured to use NetBIOS.
Disabling NetBIOS has some advantages:
Simplifies the transport of SMB traffic
Removes WINS and NetBIOS broadcast as a means of name resolution
Standardizes name resolution on DNS for file and printer sharing
The server registers each CIFS name and IP address with the
directory’s Dynamic DNS server (DDNS)
Same EVS
represented 3 times
Domain
Controller
ADS Computers
CIFS Shares
Multi-protocol Access
LAG
HNAS 3080
Target FTP
Share Export
LUN Dir
To use Internet Small Computer Systems Interface (iSCSI) storage on the server, one
or more iSCSI logical units (LUs) must be defined. iSCSI logical units share blocks of
SCSI storage that are accessed through iSCSI targets. iSCSI targets can be found
through an iSNS database or through a target portal. Once an iSCSI target has been
found, an Initiator running on a Microsoft Windows server can access the logical
unit as a “local disk” through its target. Security mechanisms can be used to prevent
unauthorized access to iSCSI targets.
On the server, an iSCSI logical unit shares regular files residing on a file system. As a
result, iSCSI benefits from file system management functions provided by the server,
such as NVRAM logging, snapshots, and quotas.
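As an illustration of the client side (a Linux open-iscsi initiator is shown here; the portal address and target IQN are hypothetical, and a Microsoft Windows initiator achieves the same result through the iSCSI Initiator control panel):
iscsiadm -m discovery -t sendtargets -p 192.168.3.21
iscsiadm -m node -T iqn.2002-10.com.example:evs1-target0 -p 192.168.3.21 --login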
Module Summary
Module Review
Module Objectives
EVS is the virtual file server component of the Hitachi NAS Platform
solution
Maximum of 64 EVSs per server node/cluster nodes
Anatomy of an EVS:
• One or more file serving IP
addresses
• Can host one or more file
systems
• Is the container for CIFS
shares, NFS exports, and
more
• Bound to one Link
Aggregation Group (LAG)
• In a cluster failover scenario,
EVSs migrate from the failed
node to an online node
These screen shots explain the IP address assignments for EVS, as well as the EVS
types.
2-node Clustering
(Diagram: 2-node cluster - the data (public) network connects network clients to both Hitachi NAS Platform nodes; cluster heartbeat and the cluster interconnect run between the nodes; the SMU on the management network acts as the quorum device and holds the system configuration.)
The Hitachi NAS Platform supports 2-node Active-Active (A-A) clusters. In A-A
cluster configurations, each node can host several independent EVSs, which can
service network requests simultaneously. A maximum of 64 EVSs per 2-node cluster
are supported. Should either of the nodes in the cluster fail, the EVSs from the failed
node will automatically migrate to the remaining node. Network clients will not
typically be aware of the failure and will not experience any loss of service, although
the cluster may operate with reduced performance until the failed node is restored.
After the node is restored and is ready for normal operation, the EVS can be
migrated manually back to the original node.
Note: SMU stands for System Management Unit.
Clustering Basics
(Diagram: a 2-way Hitachi NAS Platform cluster serving network clients, with NVRAM mirrored between the nodes.)
When the Hitachi NAS Platform node is configured as a 2-node cluster, then, in
addition to buffering all the file system modifications, each cluster node mirrors the
NVRAM contents of the other cluster node. This mirroring of the cluster nodes’
NVRAM content ensures data integrity in the event of a cluster node failure. When a
cluster node takes over for the failed node, it uses the contents of the NVRAM
mirror to complete all file system modifications that were not yet committed to the
storage by the failed server.
HSI = High Speed Interconnect/Interface
N-way Clustering
(Diagram: N-way cluster - the data (public) network connects network clients to all Hitachi NAS Platform nodes; cluster heartbeat runs between the nodes over the cluster interconnect; the SMU on the management network acts as the quorum device and holds the system configuration; each node's NVRAM cache is mirrored to the next node in sequence.)
When the Hitachi NAS Platform node is configured as a cluster, then, in addition to
buffering all the file system modifications, each cluster node mirrors the NVRAM
contents of the other cluster nodes in sequence. This mirroring of the cluster nodes’
NVRAM content ensures data integrity in the event of any one cluster node failure.
When a cluster node takes over for the failed node, it uses the contents of the
NVRAM mirror to complete all file system modifications that were not yet
committed to storage by the failed server.
Cluster Configuration
Item           Description
Cluster Name   Name of cluster.
Status         Overall cluster status (online or offline).
Health         Cluster health: Robust or Degraded.
Quorum Device
Item           Description
Name           Name of server hosting the QD (in other words, the SMU on which the QD resides).
IP Address     IP address of server hosting the QD (in other words, the SMU on which the QD resides).
Status         QD status:
               • Configured - attached to the cluster. The QD's vote is not needed when the cluster contains an odd number of operational nodes.
               • Owned - the QD is attached to the cluster and owned by a specific node in the cluster.
               • Not up - the QD cannot be contacted.
               • Seized - the QD has been taken over by another cluster.
Quorum Device services are provided by the SMU. While servers and clusters in a
server farm are managed by a single SMU, an SMU can provide quorum services for
up to 8 clusters in a server farm. To do so, the SMU hosts a pool of 8 available
Quorum Devices (QDs). When a new cluster is formed, a QD must be assigned to
the cluster. Once assigned to the cluster, the QD is “owned” by that cluster and is no
longer available. Removing a QD from a cluster releases its ownership and returns
the QD service to the pool of available QDs.
If you need to add or remove the cluster’s QD, click the appropriate button
(Add Quorum or Remove Quorum).
If the QD is removed from the cluster, the port will be released back to the SMU’s
pool of QDs and ports.
Out of all configured EVSs, only the EVS affected by a problem will
failover (migrate) to another node
In case of node hardware or software failure, all EVSs hosted by this
node will migrate
Even an EVS that has migrated to another node, due to failure, can
be migrated to a third node
Failback is performed manually and is an EVS migration to the
preferred node
An EVS that is not running on the preferred node is indicated with
orange in the GUI
Migrating an EVS enables the IP address(es) associated with the EVS
on the other node, together with all services and shares configured
on the EVS
If the Admin EVS is running on a failing node, this admin EVS is
migrated as well
Failback is also a manual operation for the admin EVS
(Diagram: normal operation - a host's ARP table maps 192.168.0.3 to MAC address xxx01 (node 1, EVS 1) and 192.168.0.4 to xxx02 (node 2, EVS 2); each EVS offers NFS, CIFS, FTP, and other file services.)
The way the ARP protocol maps the MAC address and IP address is displayed in
the above diagram under normal operation for two different EVSs on two different
physical nodes.
On Failing Over
(Diagram: node 1 fails and EVS 1 fails over to node 2; node 2 broadcasts gratuitous ARP packets for 192.168.0.3, which force the clients to update their ARP tables.)
After Failover
(Diagram: node 2 now hosts both EVS 1 (192.168.0.3) and EVS 2 (192.168.0.4), serving NFS, CIFS, FTP, and other file services.)
After the failover process is completed, the updated ARP table in the clients will
associate the IP address for EVS 1 with the same MAC address as for EVS 2. This way,
the clients using the IP address, or the associated name, for EVS 1 on node1 will not
detect any difference before and after failover. Clients with a stateful connection
(a historical host relationship), such as CIFS clients, will need to re-authenticate
before a transfer can be re-established.
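To observe this from a client, the following standard Linux commands can be used; this is only an illustrative sketch, not HNAS functionality, and the interface name eth0 and the addresses from the example above are placeholders:
ip neigh show 192.168.0.3                       # the MAC address shown changes from node1's to node2's after failover
tcpdump -n -i eth0 arp and host 192.168.0.3     # captures the gratuitous ARP broadcasts sent by node2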
[Diagram: Error reporting paths. A failure on a node can be reported through syslogd, SNMP, SMTP, and Hi-Track Monitor; the SMU is connected to both the private management network and the customer data network.]
Depending on the customer network and server configuration, several or all of these
error reporting methods can be used. On the SMU, the error-reporting relay functions
must be configured, and the “Network Management” and “Mail Servers” settings
adjusted by the customer to reflect the configuration of the SMU.
Alternatively, SMTP servers, for example, can also reside on the data network.
Hi-Track® Monitor uses “SNMP get” commands to perform a health check; an admin
EVS IP address on the public LAN can also be used to interrogate status changes and
drive the alerting process set up by the CE.
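As an illustrative sketch only (the exact OIDs polled by Hi-Track are not listed here; "public" stands for the read-only community configured on the node, and the admin EVS address is a placeholder), an equivalent manual SNMP health check using the standard net-snmp tools would be:
snmpget -v2c -c public <Admin_EVS_IP> 1.3.6.1.2.1.1.1.0    # MIB-II sysDescr, confirms the agent responds
snmpwalk -v2c -c public <Admin_EVS_IP> system              # walks the standard system group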
No redundancy except network and FC links
If the node fails, loses network connectivity, or cannot reach the storage, no file service can be provided
[Diagram: A single HNAS node with EVSs, file systems, spans, and system drives at each of two sites, Groningen and Utrecht. TrueCopy replicates the primary (P) system drives at each site to secondary (S) system drives at the other site, in both directions.]
There are other approaches to combining high availability (HA) and disaster
recovery (DR) with HNAS; however, these approaches are not called HNAS “Sync
DR Cluster” or “Metro Cluster” and are usually customer-specific services and
configurations delivered by GSS.
Module Summary
Module Review
1. What are the major benefits of clustering the nodes in the system?
2. Will a running application continue to run during and after the
failover?
3. How many nodes can be included in one cluster?
4. Which configuration parameters must be aligned across the nodes in
the cluster?
5. Which conditions will result in an automatic failover?
6. How is the event of a failover reported?
7. List the benefits of creating multiple EVSs in a cluster.
Node IP Addresses 1 of 2
Administrative IP Addresses
• Assigned to 10/100/1000 private management port
or if required to the 1GbE (30x0 only) and 10GbE aggregated ports
▪ Accessing the private management network through the External or
Embedded SMU
▪ Creating an admin services IP address to the 1GbE (30x0 only) or
10GbE aggregated ports
• In a 30x0 or 4xx0 cluster configuration, the eth0 interface can also be
used to administer and monitor the nodes
• Server administration using SMU, SSC and SSH
• IP-based access restriction on a per-service basis
Node IP Addresses 2 of 2
Management Facilities
Management Services:
• HTTPS — GUI, Primary Management Interface
▪ https://<SMU_IP>/
• ssh – Access the Node CLI
▪ ssh manager@<SMU_IP>; enter the managed server
▪ ssh supervisor@192.0.2.2
• Telnet – Access the Node CLI
• ssc/pssc – Utility for Running Remote Commands
▪ ssc –u supervisor –p supervisor 192.0.2.2 <command>
• scp – Secure Copy to/from Server Flash
▪ scp <Local_File> supervisor@192.0.2.2:/<File_Name>
▪ scp supervisor@192.0.2.2:/event.log ./event.log
GUI Access:
• Home > SMU Administration > Security Options
• Home > Server Settings
CLI Access:
• mscfg <server>
▪ HTTP — Atlas server
▪ HTTPS — Atlas server (Secure)
▪ Telnet — Telnet server
▪ ssc — SSC/PSSC CLI
▪ vss — VSS Hardware provider DLL connection
▪ SNMP — SNMP agent
• [enable | disable]
• [restrict on|off]
• [addhost <host>] [removehost <host>]
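Combining the options listed above, a possible invocation pattern is sketched below; the service keywords and host address are illustrative, and the exact syntax should be verified with the on-node help for your release:
mscfg telnet disable              # turn off the Telnet server
mscfg ssc restrict on             # restrict SSC access to listed hosts only
mscfg ssc addhost 192.0.2.10      # allow this management host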
“Tab Completion”
• As an example: > disk + <Tab> → > diskusage_applet
help <command>
man <command>
apropos <what>
• all with: |more |grep
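For example, the help utilities above can be combined with the pagers and filters shown; the command names used here are only illustrative:
man ssc | more
help packet-capture | more
apropos disk | grep usage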
Maintenance Actions
Software Patching
10.2.3073.05
We occasionally get questions from customers asking how often we release major
(for example, 10.0.x, 11.0.x) and minor (for example, 10.2.x, 11.1.x) HNAS OS code.
Moving forward, we are planning:
1 major release every 15-18 months
3 minor releases every 12-month period
Please keep in mind these are guidelines and we reserve the right to make
adjustments and changes. If you have any additional questions regarding this topic,
feel free to reach out to a member of the HNAS PM team.
Version 4.2 - Nov 06: NODE Version 4.2.???.x, SMU Version 4.2.???.x, SMU OS RH 7.2
Version 4.3 - Feb 07: NODE Version 4.3.???.x, SMU Version 4.3.???.x, SMU OS RH 7.2
Version 5.0 - Nov 07: NODE Version 5.0.???.x, SMU Version 5.0.???.x, SMU OS CentOS 4.4
Version 6.0 - Oct 08: NODE Version 6.0.???.x, SMU Version 6.0.???.x, SMU OS CentOS 4.4
SMU Version 6.1 .???.x Version 6.5.???.x Version 7.0.2048.x Version 7.0.2050.x
SMU OS CentOS 4.4 SMU OS Debian 5.0 SMU OS Debian 5.0 SMU OS Debian 5.0
External: External: External:
CentOS 4.4 CentOS 4.4 CentOS 4.4
SMU OS Debian 5.0 SMU OS Debian 5.0 SMU OS Debian 5.0 SMU OS Debian 5.0
External: External: External: External:
CentOS 4.8 CentOS 4.8 CentOS 6.0 CentOS 6.2
SMU OS Debian 5.0 SMU OS Debian 5.0 SMU OS Debian 5.0 SMU OS Debian 5.0
External: External: External: External:
CentOS 4.8 CentOS 6.2 CentOS 6.2 CentOS 6.2
4xx0 only 30x0 and 4xx0 30x0 and 4xx0 30x0 and 4xx0
Version 11.1 - 7/13: NODE Version 11.1.3250.xx, SMU OS Debian 5.0 (External: CentOS 6.2)
Version 11.2 - 08/13: NODE Version 11.2.33xx.xx, SMU OS Debian 5.0 (External: CentOS 6.2)
Version 12.0 - Q1/14: NODE Version 12.0.xxxx.xx, SMU OS Debian 5.0 (External: CentOS 6.?)
Version 13.0 - ?/?: NODE Version 13.0.xxxx.xx, SMU OS Debian 5.0 (External: CentOS 6.?)
Software Upgrades
“Rolling” upgrades means doing the upgrade node by node while the customer still
has access to the file systems and shares on the other nodes.
In a rolling upgrade (in green), cluster nodes may boot one at a time into the new
firmware version. EVS migration works between revisions that support rolling
upgrades, and each revision can read NVRAM written by the other revision.
Cluster upgrades (in red) require all cluster nodes to shut down and boot into the
new firmware version simultaneously. EVS migration between revisions does not
work. Often NVRAM from one version is unreadable in the other version, which
requires file systems to be fully unmounted in one version before they can be
mounted in the other version.
[1] Due to defect 58192, upgrades from versions 8.0 through 8.2.2312.08 must go to
8.1.2312.09 or 8.1.2350.22 before going to a higher version.
[2] Due to defect 66378, upgrades from 8.1.2350.22 (or earlier 8.X builds) require
careful EVS migration between nodes during rolling upgrades to 8.1.2350.22, and
again from here to any higher version. See release notes for detailed instructions.
[3] Due to defect 66551, file systems will not mount without intervention in the event
of a failover from a node running 10.2 to a node running 10.0, so a cluster should not
be left with a node on each level any longer than necessary for the upgrade process.
Since the 5.0.xxx.xx release (which includes CentOS 4.4), the external SMU
has been prepared to be partitioned for a dual-boot concept
This concept enables easy fallback to an earlier version in case of a
problem
To ensure fallback to an older version, always use:
second-kvm — Second SMU OS install using KVM. or
second-serial — Second SMU OS install using serial console.
[Diagram: External SMU dual-boot layout, with the smu-boot-alt-partition command and the GRUB loader.]
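As a hedged sketch only (command names are those shown above; the exact invocation, required privileges, and prompts depend on the SMU release), falling back to the previously installed external SMU version could look like:
ssh manager@<SMU_IP>        # log in to the external SMU CLI
smu-boot-alt-partition      # select the alternate SMU OS partition via the GRUB loader (assumed behavior)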
Note: Code upgrades from 8.x to 10.x and 10.0 to 10.2 require a clean-kvm or clean-
serial upgrade. Fallback means downgrade to the earlier SMU version, and
will again be a "clean" process. Configuration backups are essential for both
up- and downgrade.
This process only installs the Linux Operating system and makes the other partition
ready to host the SMU Application in the next step.
This process formats the complete HDD and installs the Linux operating system and
makes one partition ready to host the SMU Application in the next step.
This process installs the HNAS SMU application, using the CD/DVD player built
into the external SMU.
Embedded SMU:
• To transfer the updated files select your preferred method:
▪ Connect DVD/CD player with media to the node
▪ SCP ISO image to the node and mount the ISO file
▪ Connect a USB memory stick and mount the ISO file
▪ Transfer the packages over HTTP using the GUI (Avoid Wi-Fi!)
• Install and upgrade the same as the external SMU
• Uninstall process is available for the Embedded SMU only
• A downgrade of the embedded SMU is performed as an uninstall
followed by a reinstall
Linux upgrade/downgrade:
• Will be automated
• Linux patching has until now fixed known issues
Embedded SMU program will be uninstalled on nodes assembled in
2013 and later
This upgrade procedure using HTTP to upload an ISO image should only be used
for embedded SMU upgrade. The external SMU should NOT be upgraded using this
method.
The file format for Hitachi NAS 3080 and 3090 must be in tar format.
HCS version 7.3.x supports link and launch, calling the appropriate page in the SMU
web GUI.
Over time, with newer releases, more and more functions will be executed as CLI
commands in the background, making how the task is executed transparent to the
users.
[Diagram: Hi-Track Monitor acting as an SNMP manager, polling HNAS 4xx0 nodes through their admin EVS addresses (cases 1 and 2).]
Configuring the SNMP Hi-Track Monitor is done in the same way as for FC
switches and NetApp NAS Gateways.
Type in the serial number correctly since this is not interrogated from the
management information base (MIB) file in the SNMP agent.
The IP address is represented by an administrative EVS IP address addressable
through the customer’s network and aggregates.
The SNMP Access ID reflects the public read-only (RO) community defined earlier
on the Hitachi NAS node.
This method is still supported, but the install base is rapidly migrating to the
new SMU CLI method with a lot more detailed information and capabilities.
Monitoring Devices
From Hi-Track Monitor version 5.7 and up, a new monitoring method
has been introduced
Hi-Track monitor: log into SMU using SSH and manager account
Will monitor all entities managed by SMU
Remote user account can be customized
Only the SMU IP address needs to be registered
Hitachi NAS (HNAS) Server accounts will automatically be registered
Issuing commands against Admin EVS such as:
diagshowall and eventlog-show
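As an illustration only (command names as listed above, ssc usage following the earlier example, and placeholder addresses), the checks Hi-Track performs could be reproduced manually with:
ssh manager@<SMU_IP>                  # the account Hi-Track uses to log in to the SMU
ssc <Admin_EVS_IP> diagshowall        # diagnostics from the admin EVS
ssc <Admin_EVS_IP> eventlog-show      # recent events from the admin EVS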
[Diagram: Hi-Track Monitor connecting to HNAS 4xx0 nodes through their admin EVS addresses (cases 1 and 2).]
Monitoring Devices
Logical View
Module Summary
Module Review
Call-home Mechanism:
• SMTP-based mechanism for alerts and monitoring
• Selective notification profiles
• Daily performance data included
SNMP v1/v2c
Syslog
Telnet/SSH/SSC access to NAS Platform Nodes (admin EVS),
command line interface (CLI)
Hi-Track Monitor from version 3.8 and up
Hitachi Device Manager software
Hitachi Command Suite (HCS)
[Diagram: SMTP alerting options for HNAS 4xx0 nodes through their admin EVS addresses: direct delivery to an SMTP server (1, 2) or forwarding through the SMU (3).]
This screen shows the alerts summary received as requested through email.
Diagnostic Download
Server diagnostics can be sent from the server’s CLI by issuing the
following command: diagemail <email_address>
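For example, with a placeholder recipient address:
diagemail hnas-support@example.com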
The screen above shows the diagnostic report for both nodes in a cluster.
The screen above shows the diagnostic reports for the SMU as well as for the FC switch.
The diagnostics for storage do not cover Hitachi Data Systems storage at the
moment.
Performance Graph
lab1-1:$ trouble
……………..truncated…………..
fs-protocols:cifs (on FSA; base priority 200)
Domain Controller 192.168.1.63 on EVS 3:
Priority 201: Pnode 1 FSA:
Unable to contact Domain Controller.
Problem with DC on local address: 172.20.20.31
Problem with DC on local address: 172.30.30.31
Fix problems with CIFS names first, if necessary.
Check EVS 3's machine account(s) on the Domain Controller(s).
A machine account should be configured for each of the EVS's
CIFS names.
Use the 'vn 3 cifs-dc prod' command to initiate a Domain
Controller reconnect.
To see: vn 3 cifs-name list
To see: vn 3 cifs-dc list -v
To see: vn 3 cifs-dc-errors
[trouble took 1.30 min.]
lab1-1:$
Usage example:
packet-capture --start --filter "host 10.2.1.1" ag1
packet-capture --stop ag1
nail -n -a tmp -s "My Capture" <email_address>
ssc <IP_Address> ssget tmp ~/capture.cap
30x0 G2
4xx0
When the server is powered down following a clean shutdown and NVRAM is not
in battery backup mode, the battery will still self-discharge at approx 1% per day. If
the NVRAM is still in battery backup as indicated by the flashing NVRAM LED then
the battery can be manually isolated using the reset button (see reset button
description).
Battery packs have a shelf life of up to 6 months** before conditioning is
recommended. Conditioning tests the pack and maintains optimal capacity. When
fully charged, the battery can be left fitted in a server in storage for a maximum of 6
months**.
Battery conditioning equipment will be made available at key service sites and does
not require Hitachi NAS Platform hardware.
** Life testing is ongoing to determine if these limits can be increased.
30x0 G2
The spare battery for the G1 version is stocked as the G2 version. This means for G1,
the battery needs to be removed from the caddy. The battery in the caddy is
compatible with both G1 and G2 generations.
The hard drive is mounted in the carrier using four “Torx” fixing screws.
Use Torx screwdriver T10.
Do not borrow any HDD as a spare part from another node. The HDD needs to be
new and blank from the spare warehouse. Procedures will not work and there is a
severe risk of booting an incorrect image.
Please refer to the Hitachi NAS Platform Hardware Reference: MK-90BA030- (30x0 G2) or MK-92HNAS030- (4xx0).
MTDS Console
IMS
/opt/mercury-mtds/bin/mtds field-test
adm46:/home/manager# /opt/mercury-mtds/bin/mtds field-test
Version : 8.1.2351.09
Directory : /home/builder/vampire/2351.09/main/bin/x86_64_linux-bart_libc-
2.7_release
Build date : Aug 5 2011, 10:05:52
Log file path: /var/opt/mtds/log/B1038029_log/
chassis-monitor process detected; PID = 3487
Are you sure you want to stop the Mercury server? (y/n) y
Successfully requested mfb.elf reset, waiting for exit.
If it takes longer than 300 seconds to exit it will be killed....mfb exited
successfully after 7 seconds.
Waiting for stop script to complete
Wait 10000mS......Stopped
Test list settings:
Continue-on-fail
Stress phase time: 5 minutes
2012-03-22 17:32:14
Executing list of 108 tests for 1 cycles
(Some additional tests may be run if all tests pass)
+ Test 2002: pcie-test mbi-pci-check ...............................Passed
+ Test 2001: pcie-test mbi-scratch .................................Passed
+ Test 2201: glue-logic-test glue-register-test ....................Passed
+ Test 2210: glue-logic-test check-failover-interface ..............Passed
truncated.....................................
HNAS configuration
Linux configuration
Request USB recovery files for the build version on your system
The /var, /opt and the / (NOT /root only!) partition will be overwritten
Locate a USB memory stick (4GB minimum)
Create a USB memory stick with recovery files (Careful using WinZip!)
Boot into the “Mercury Recovery” partition using the “grub” menu
Check for both /dev/sda and /dev/sdb being accessible
Mount USB (/dev/sdc1) to /mnt
Run:
• /mnt/mercury-reinstall-main-partitions --preserve-hwdb
• reboot
• nas-preconfig
• reboot
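A consolidated sketch of the recovery steps above (device names and the script path are exactly as listed; verify them on your system before running, because the /, /var, and /opt partitions are overwritten):
mount /dev/sdc1 /mnt                                      # mount the USB recovery stick
/mnt/mercury-reinstall-main-partitions --preserve-hwdb    # reinstall the main partitions, preserving the hardware database (per the option above)
reboot
nas-preconfig                                             # node preconfiguration (as listed above)
reboot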
Only the 3090, 4100, and 4060 models will be stocked as spare parts
If a 3080 replacement is required, a USB conversion tool is needed
Conversion tools are tracked and are required to be returned after use!
Replacing a 4080 with a 4060 in a cluster will turn the 4060 into a 4080
The G1 model and G2 model have different spare part numbers
If the node to be replaced is in a single node configuration, a complete
set of license keys is required for the new node
FPGA package replacement in a G2 model will not change the MAC ID
Replacing a node in a cluster does not require new license keys
The “Hitachi NAS Platform Server
Replacement Procedures” will list all the
steps needed for a successful node
replacement
Things to consider:
G1 or G2 model
The G2 chassis does not include the MFB package.
The MFB package does not fit into the G1 model
New MAC ID in single node configuration means new licenses
New network MAC ID
Change ownership of storage pools for single node
WWN zoning
LUN security
Conversion tool (3080)
Planning
Stay updated with the latest spare part list under Logistic Global.
http://logistics.hds.com/Spares/main_BLU.htm
General Precautions
Proper ESD precautions should be used any time you work on the
node system
Module Summary
Module Review
1. List some error reporting tools supported by the Hitachi NAS Platform.
2. How is Hitachi storage monitored and managed?
3. Which email accounts can receive email notifications?
4. Which requirements do we have to specify for the customers to get
email alerting to work?
5. Can network traffic be monitored on a specific aggregate without
externally connected network analyzers?
Certification:
http://www.hds.com/services/education/certification
Learning Center:
http://learningcenter.hds.com
White Papers:
http://www.hds.com/corporate/resources/
Learning Paths:
APAC:
http://www.hds.com/services/education/apac/?_p=v#GlobalTabNavi
Americas:
http://www.hds.com/services/education/north-
america/?tab=LocationContent1#GlobalTabNavi
EMEA:
http://www.hds.com/services/education/emea/#GlobalTabNavi
HDS Community:
http://community.hds.com - Open to all customers, partners, prospects, and
internals
theLoop:
http://loop.hds.com/message/18879#18879 ― HDS internal only
LinkedIn:
http://www.linkedin.com/groups?home=&gid=3044480&trk=anet_ug_hm&
goback=%2Emyg%2Eanb_3044480_*2
Twitter:
http://twitter.com/#!/HDSAcademy
Chat
Q&A
Feedback Options
• Raise Hand
• Yes/No
• Emoticons
Markup Tools
• Drawing Tools
• Text Tool
Automatic
• With Intercall / WebEx Teleconference Call-Back Feature
Otherwise
• To transfer your audio from Main Room to virtual Breakout Room
1. Enter *9
2. You will hear a recording – follow instructions
3. Enter Your Assigned Breakout Room number #
For example, *9 1# (Breakout Room #1)
• To return your audio to Main Room
Enter *9
800.374.1852
ACC — Action Code. A SIM (System Information Message).
ACE — Access Control Entry. Stores access rights for a single user or group within the Windows security model.
ACL — Access Control List. Stores a set of ACEs, so that it describes the complete set of access rights for a file system object within the Microsoft Windows security model.
ACP — Array Control Processor. Microprocessor mounted on the disk adapter circuit board (DKA) that controls the drives in a specific disk array. Considered part of the back end; it controls data transfer between cache and the hard drives.
ACP Domain — Also Array Domain. All of the array-groups controlled by the same pair of DKA boards, or the HDDs managed by 1 ACP PAIR (also called BED).
ACP PAIR — Physical disk access control logic. Each ACP consists of 2 DKA PCBs to provide 8 loop paths to the real HDDs.
Actuator (arm) — Read/write heads are attached to a single head actuator, or actuator arm, that moves the heads around the platters.
AD — Active Directory.
ADC — Accelerated Data Copy.
Address — A location of data, usually in main memory or on a disk. A name or token that identifies a network component. In local area networks (LANs), for example, every node has a unique address.
ADP — Adapter.
ADS — Active Directory Service.
AL-PA — Arbitrated Loop Physical Address.
AMS — Adaptable Modular Storage.
APAR — Authorized Program Analysis Reports.
APF — Authorized Program Facility. In IBM z/OS and OS/390 environments, a facility that permits the identification of programs that are authorized to use restricted functions.
API — Application Programming Interface.
APID — Application Identification. An ID to identify a command device.
Application Management — The processes that manage the capacity and performance of applications.
ARB — Arbitration or request.
ARM — Automated Restart Manager.
Array Domain — Also ACP Domain. All functions, paths, and disk drives controlled by a single ACP pair. An array domain can contain a variety of LVI or LU configurations.
Array Group — Also called a parity group. A group of hard disk drives (HDDs) that form the basic unit of storage in a subsystem. All HDDs in a parity group must have the same physical capacity.
Array Unit — A group of hard disk drives in 1 RAID structure. Same as parity group.
ASIC — Application specific integrated circuit.
ASSY — Assembly.
Asymmetric virtualization — See Out-of-band virtualization.
Asynchronous — An I/O operation whose initiator does not await its completion before
• Virtual private cloud (or virtual private network cloud)
Cloud Enabler — a concept, product or solution that enables the deployment of cloud computing. Key cloud enablers include:
CMA — Cache Memory Adapter.
CMD — Command.
CMG — Cache Memory Group.
CNAME — Canonical NAME.
DASD — Direct Access Storage Device.
Data block — A fixed-size unit of data that is transferred together. For example, the X-modem protocol transfers blocks of 128 bytes. In general, the larger the block size, the faster the data transfer rate.
Data Duplication — Software duplicates data, as in remote copy or PiT snapshots. Maintains 2 copies of data.
Data Integrity — Assurance that information will be protected from modification and corruption.
Data Lifecycle Management — An approach to information and storage management. The policies, processes, practices, services and tools used to align the business value of data with the most appropriate and cost-effective storage infrastructure from the time data is created through its final disposition. Data is aligned with business requirements through management policies and service levels associated with performance, availability, recoverability, cost, and whatever parameters the organization defines as critical to its operations.
Data Migration — The process of moving data from 1 storage device to another. In this context, data migration is the same as Hierarchical Storage Management (HSM).
Data Pipe or Data Stream — The connection set up between the MediaAgent, source or destination server is called a Data Pipe or more commonly a Data Stream.
Data Pool — A volume containing differential data only.
Data Protection Directive — A major compliance and privacy protection initiative within the European Union (EU) that applies to cloud computing. Includes the Safe Harbor Agreement.
Data Stream — CommVault's patented high performance data mover used to move data back and forth between a data source and a MediaAgent or between 2 MediaAgents.
Data Striping — Disk array data mapping technique in which fixed-length sequences of
Data Transfer Rate (DTR) — The speed at which data can be transferred. Measured in kilobytes per second for a CD-ROM drive, in bits per second for a modem, and in megabytes per second for a hard drive. Also, often called data rate.
DBL — Drive box.
DBMS — Data Base Management System.
DBX — Drive box.
DCA — Data Cache Adapter.
DCTL — Direct coupled transistor logic.
DDL — Database Definition Language.
DDM — Disk Drive Module.
DDNS — Dynamic DNS.
DDR3 — Double data rate 3.
DE — Data Exchange Software.
Device Management — Processes that configure and manage storage systems.
DFS — Microsoft Distributed File System.
DFSMS — Data Facility Storage Management Subsystem.
DFSM SDM — Data Facility Storage Management Subsystem System Data Mover.
DFSMSdfp — Data Facility Storage Management Subsystem Data Facility Product.
DFSMSdss — Data Facility Storage Management Subsystem Data Set Services.
DFSMShsm — Data Facility Storage Management Subsystem Hierarchical Storage Manager.
DFSMSrmm — Data Facility Storage Management Subsystem Removable Media Manager.
DFSMStvs — Data Facility Storage Management Subsystem Transactional VSAM Services.
DFW — DASD Fast Write.
DICOM — Digital Imaging and Communications in Medicine.
DIMM — Dual In-line Memory Module.
Direct Access Storage Device (DASD) — A type of storage device, in which bits of data are stored at precise locations, enabling the computer to retrieve information directly without having to scan a series of records.
H1F — Essentially the floor-mounted disk rack (also called desk side) equivalent of the RK. (See also: RK, RKA, and H2F).
H2F — Essentially the floor-mounted disk rack (also called desk side) add-on equivalent similar to the RKA. There is a limitation of only 1 H2F that can be added to the core RK Floor Mounted unit. See also: RK, RKA, and H1F.
HANA — High Performance Analytic Appliance, a database appliance technology proprietary to SAP.
HBA — Host Bus Adapter — An I/O adapter that sits between the host computer's bus and the Fibre Channel loop and manages the transfer of information between the 2 channels. In order to minimize the impact on host processor performance, the host bus adapter performs many low-level interface functions automatically or with minimal processor involvement.
HCA — Host Channel Adapter.
HCD — Hardware Configuration Definition.
HD — Hard Disk.
HDA — Head Disk Assembly.
HDD — Hard Disk Drive. A spindle of hard disk platters that make up a hard drive, which is a unit of physical storage within a subsystem.
HiCAM — Hitachi Computer Products America.
HIPAA — Health Insurance Portability and Accountability Act.
HIS — (1) High Speed Interconnect. (2) Hospital Information System (clinical and financial).
HiStar — Multiple point-to-point data paths to cache.
HL7 — Health Level 7.
HLS — Healthcare and Life Sciences.
HLU — Host Logical Unit.
H-LUN — Host Logical Unit Number. See LUN.
HMC — Hardware Management Console.
Homogeneous — Of the same or similar kind.
Host — Also called a server. Basically a central computer that processes end-user applications or requests.
Host LU — Host Logical Unit. See also HLU.
Host Storage Domains — Allows host pooling at the LUN level and the priority access feature lets administrator set service levels for applications.
HP — (1) Hewlett-Packard Company or (2) High Performance.
HPC — High Performance Computing.
HSA — Hardware System Area.
HSG — Host Security Group.
PAT — Port Address Translation.
PATA — Parallel ATA.
Path — Also referred to as a transmission channel, the path between 2 nodes of a network that a data communication follows. The term can refer to the physical cabling that connects the nodes on a network, the signal that is communicated over the pathway or a sub-channel in a carrier frequency.
Path failover — See Failover.
PAV — Parallel Access Volumes.
PAWS — Protect Against Wrapped Sequences.
PB — Petabyte.
PBC — Port By-pass Circuit.
PCB — Printed Circuit Board.
PCHIDS — Physical Channel Path Identifiers.
PCI — Power Control Interface.
PCI CON — Power Control Interface Connector Board.
PCI DSS — Payment Card Industry Data Security Standard.
PCIe — Peripheral Component Interconnect Express.
PD — Product Detail.
PDEV — Physical Device.
PL — Platter. The circular disk on which the magnetic data is stored. Also called motherboard or backplane.
PM — Package Memory.
POC — Proof of concept.
Port — In TCP/IP and UDP networks, an endpoint to a logical connection. The port number identifies what type of port it is. For example, port 80 is used for HTTP traffic.
POSIX — Portable Operating System Interface for UNIX. A set of standards that defines an application programming interface (API) for software designed to run under heterogeneous operating systems.
PP — Program product.
P-P — Point-to-point; also P2P.
PPRC — Peer-to-Peer Remote Copy.
Private Cloud — A type of cloud computing defined by shared capabilities within a single company; modest economies of scale and less automation. Infrastructure and data reside inside the company's data center behind a firewall. Comprised of licensed software tools rather than on-going services. Example: An organization implements its own virtual, scalable cloud and business units are charged on a per use basis.
SA z/OS — System Automation for z/OS.
SAA — Share Access Authentication. The process of restricting a user's rights to a file system object by combining the security descriptors from both the file system object itself and the share to which the user is connected.
SaaS — Software as a Service. A cloud computing business model. SaaS is a software delivery model in which software and its associated data are hosted centrally in a cloud and are typically accessed by users using a thin client, such as a web browser via the Internet. SaaS has become a common delivery model for most business applications, including accounting (CRM and ERP), invoicing (HRM), content management (CM) and service desk management, just to name the most common software that runs in the cloud. This is the fastest growing service in the cloud market today. SaaS performs best for relatively simple tasks in IT-constrained organizations.
SACK — Sequential Acknowledge.
SACL — System ACL. The part of a security descriptor that stores system auditing information.
SAIN — SAN-attached Array of Independent Nodes (architecture).
SAN — Storage Area Network. A network linking computing devices to disk or tape arrays and other devices over Fibre Channel. It handles data at the block level.
SAP — (1) System Assist Processor (for I/O processing), or (2) a German software company.
SAP HANA — High Performance Analytic Appliance, a database appliance technology proprietary to SAP.
SARD — System Assurance Registration Document.
SAS — Serial Attached SCSI.
SATA — Serial ATA. Serial Advanced Technology Attachment is a new standard for connecting hard drives into computer systems. SATA is based on serial signaling technology, unlike current IDE (Integrated Drive Electronics) hard drives that use parallel signaling.
SBSC — Smart Business Storage Cloud.
SBX — Small Box (Small Form Factor).
SC — (1) Simplex connector. Fibre Channel connector that is larger than a Lucent connector (LC). (2) Single Cabinet.
SCM — Supply Chain Management.
SCP — Secure Copy.
SCSI — Small Computer Systems Interface. A parallel bus architecture and a protocol for transmitting large data blocks up to a distance of 15 to 25 meters.
SD — Software Division (of Hitachi).
SDH — Synchronous Digital Hierarchy.
SDM — System Data Mover.
SDSF — Spool Display and Search Facility.
Sector — A sub-division of a track of a magnetic disk that stores a fixed amount of data.
SEL — System Event Log.
Selectable segment size — Can be set per partition.
Selectable Stripe Size — Increases performance by customizing the disk access size.
SENC — Is the SATA (Serial ATA) version of the ENC. ENCs and SENCs are complete microprocessor systems on their own and they occasionally require a firmware upgrade.
SeqRD — Sequential read.
Serial Transmission — The transmission of data bits in sequential order over a single line.
Server — A central computer that processes end-user applications or requests, also called a host.
Server Virtualization — The masking of server resources, including the number and identity of individual physical servers, processors, and operating systems, from server users. The implementation of multiple isolated virtual environments in one physical server.
Service-level Agreement — SLA. A contract between a network service provider and a customer that specifies, usually in measurable terms, what services the network service provider will furnish. Many Internet service providers (ISP) provide their customers with a SLA. More recently, IT departments in major enterprises have