HP 3PAR StoreServ 7000 Storage Service Guide
Service Edition
Abstract
This guide provides information about maintaining and upgrading HP 3PAR StoreServ 7000 Storage system
hardware components. It is intended for use by authorized technicians.
http://www.hp.com/go/storagewarranty
Printed in the US
Contents
1 Understanding LED Indicator Status...............................................................7
Enclosure LEDs.........................................................................................................................7
Bezels LEDs.........................................................................................................................7
Disk Drive LEDs....................................................................................................................7
Storage System Component LEDs................................................................................................8
Power Cooling Module LEDs..................................................................................................8
Drive PCM LEDs............................................................................................................10
I/O Modules LEDs.............................................................................................................10
Controller Node and Internal Component LEDs...........................................................................11
Ethernet LEDs....................................................................................................................13
Node FC and CNA Port LEDs..............................................................................................13
Fibre Channel (FC) Adapter LEDs.........................................................................................14
Converged Network Adapter (CNA) LEDs.............................................................................14
Node FC and CNA Port Numbering....................................................................................15
SAS Port LEDs....................................................................................................................16
Interconnect Port LEDs.........................................................................................................16
Verifying Service Processor LEDs...............................................................................................17
2 Servicing the Storage System......................................................................20
Service Processor Onsite Customer Care ...................................................................................20
Accessing Guided Maintenance..........................................................................................21
Accessing SPMAINT ..........................................................................................................21
Accessing the HP 3PAR Management Console.......................................................................21
Identifying a Replaceable Part..................................................................................................21
Swappable Components.....................................................................................................21
Getting Recommended Actions.................................................................................................22
Powering Off/On the Storage System........................................................................................23
Powering Off.....................................................................................................................23
Powering On.....................................................................................................................24
Disengaging the PDU Pivot Brackets..........................................................................................24
Replacing an Interconnect Link Cable........................................................................................25
Repairing a Disk Drive.............................................................................................................25
Removing a 2.5 inch Disk ..................................................................................................28
Removing a 3.5 inch Disk...................................................................................................28
Installing a Disk Drive.........................................................................................................29
Verifying Disk Drives...........................................................................................................31
Controller Node Replacement Procedure....................................................................................31
Preparation.......................................................................................................................31
Node Identification and Shutdown.......................................................................................32
Node Identification and Preparation.....................................................................................32
Node Removal..................................................................................................................36
Node Installation...............................................................................................................36
Node Verification .............................................................................................................37
SFP Repair.............................................................................................................................38
Replacing an SFP...............................................................................................................42
Replacing a Drive Enclosure.....................................................................................................42
Replacing an I/O Module.......................................................................................................43
Removing an I/O Module...................................................................................................44
Installing an I/O Module....................................................................................................45
Replacing a Power Cooling Module..........................................................................................46
Removing a Power Cooling Module......................................................................................48
Replacing a Battery inside a Power Cooling Module...............................................................49
Installing a Power Cooling Module ......................................................................................51
Controller Node Internal Component Repair...............................................................................52
Node Cover Removal and Replacement................................................................................53
Controller Node (Node) Clock Battery Replacement Procedure................................................53
Preparation..................................................................................................................53
Node Identification and Shutdown..................................................................................53
Node Removal..............................................................................................................54
Node Clock Battery Replacement....................................................................................55
Node Replacement........................................................................................................55
Node and Clock Battery Verification................................................................................55
Controller Node (Node) DIMM Replacement Procedure..........................................................56
Preparation..................................................................................................................56
Node and DIMM Identification and Node Shutdown.........................................................56
Node Removal..............................................................................................................58
DIMM Replacement.......................................................................................................58
Node Replacement........................................................................................................58
Node and DIMM Verification.........................................................................................59
Controller Node (Node) PCIe Adapter Procedure...................................................................60
Controller Node (Node) Drive Assembly Replacement Procedure..............................................62
Preparation..................................................................................................................62
Node Identification and Shutdown..................................................................................62
Node Removal..............................................................................................................64
Node Drive Assembly Replacement.................................................................................64
Node Replacement........................................................................................................64
Node Verification .........................................................................................................65
CLI Procedures.......................................................................................................................66
Node Identification and Preparation ....................................................................................66
Node Verification .............................................................................................................66
The Startnoderescue Command............................................................................................67
Node and PCIe Adapter Identification and Preparation ..........................................................67
Node and PCIe Adapter Verification ...................................................................................68
Controller Node (Node) PCIe Adapter Riser Card Replacement Procedure.................................69
PCIe Adapter Identification and Node Shutdown...............................................................69
Node Removal..............................................................................................................70
PCIe Adapter Riser Card Replacement.............................................................................70
Node Replacement........................................................................................................71
Node PCM Identification....................................................................................................71
Drive PCM Identification ....................................................................................................71
PCM Location...............................................................................................................72
PCM and Battery Verification...............................................................................................73
SFP Identification...............................................................................................................74
SFP Verification.............................................................................................................74
Disk Drive Identification......................................................................................................75
Disk Drive (Magazine) Location...........................................................................................76
Disk Drive Verification.........................................................................................................76
3 Upgrading the Storage System...................................................................77
Installing Rails for Component Enclosures...................................................................................77
Controller Node Upgrade .......................................................................................................78
Upgrading a 7400 Storage System......................................................................................79
Installing the Enclosures.................................................................................................91
Drive Enclosures and Disk Drives Upgrade ................................................................................93
Adding an Expansion Drive Enclosure..................................................................................93
Upgrade Drive Enclosures...................................................................................................94
Check Initial Status........................................................................................................95
Install Drive Enclosures and Disk Drives............................................................................96
Power up enclosures and check status..............................................................................97
Chain Node 0 Loop DP-2 (B Drive Enclosures and the solid red lines)...................................97
Chain Node 0 Loop DP-1 (A Drive Enclosures and the dashed red lines)...............................98
Check Pathing..............................................................................................................99
Move Node 1 DP-1 and DP-2 to farthest drive enclosures..................................................100
Check Pathing............................................................................................................101
Chain Node 1 Loop DP-2 (B Drive Enclosures and the solid green lines).............................102
Chain Node 1 Loop DP-1 (A Drive Enclosures and the dashed green lines)..........................103
Check Pathing............................................................................................................105
Execute admithw.........................................................................................................106
Verify Pathing.............................................................................................................107
Verify Cabling............................................................................................................108
Upgrade Disk Drives.............................................................................................................108
Check Initial Status...........................................................................................................109
Inserting Disk Drives ........................................................................................................109
Check Status...................................................................................................................109
Check Progress................................................................................................................110
Upgrade Completion........................................................................................................110
Upgrading PCIe Adapters......................................................................................................111
Upgrading the HP 3PAR OS and Service Processor...................................................................111
4 Support and Other Resources...................................................................112
Contacting HP......................................................................................................................112
HP 3PAR documentation........................................................................................................112
Typographic conventions.......................................................................................................116
HP 3PAR branding information...............................................................................................116
5 Documentation feedback.........................................................................117
A Installing Storage Software Manually........................................................118
Connecting to the Laptop.......................................................................................................118
Connecting the Laptop to the Controller Node.....................................................................118
Connecting the Laptop to the HP 3PAR Service Processor......................................................118
Serial Cable Connections..................................................................................................118
Maintenance PC Connector Pin-outs .............................................................................118
Service Processor Connector Pin-outs .............................................................................119
Manually Initializing the Storage System Software.....................................................................119
Manually Setting up the Storage System..............................................................................119
Storage System Console – Out Of The Box.....................................................................122
Adding a Storage System to the Service Processor....................................................................127
Exporting Test LUNs..............................................................................................................128
Defining Hosts.................................................................................................................129
Creating and Exporting Test Volumes..................................................................................129
B Service Processor Moment Of Birth (MOB).................................................131
C Connecting to the Service Processor.........................................................143
Using a Serial Connection.....................................................................................................143
D Node Rescue.........................................................................................145
Automatic Node-to-Node Rescue............................................................................................145
Service Processor-to-Node Rescue...........................................................................................146
Virtual Service Processor-to-Node Rescue.................................................................................148
E Illustrated Parts Catalog...........................................................................152
Drive Enclosure Components..................................................................................................152
Storage System Components..................................................................................................155
Controller Node and Internal Components...............................................................................157
Service Processor..................................................................................................................160
Miscellaneous Cables and Parts.............................................................................................160
F Disk Drive Numbering.............................................................................163
Numbering Disk Drives..........................................................................................................163
G Uninstalling the Storage System...............................................................165
Storage System Inventory.......................................................................................................165
Removing Storage System Components from an Existing or Third Party Rack.................................165
1 Understanding LED Indicator Status
Storage system components have LEDs to indicate status of the hardware and whether it is
functioning properly. These indicators help diagnose basic hardware problems. You can quickly
identify hardware problems by examining the LEDs on all components using the tables and
illustrations in this chapter.
Enclosure LEDs
Bezels LEDs
The bezels are located at the front of the system on each side of the drive enclosure and include
three LEDs.
2 Module Fault Amber On – System hardware fault in the I/O modules or PCMs within the enclosure.
At the rear of the enclosure, identify whether the PCM or I/O module LED is also amber.
3 Disk Drive Status Amber On – Specific disk drive LED identifies the affected disk. This LED applies
to disk drives only.
NOTE: Prior to running the installation scripts, the numeric display located under the Disk Drive
Status LED on the bezels may not display the proper numeric order in relation to their physical
locations. The correct sequence will be displayed after the installation script completes.
Figure 2 Disk Drive LEDs
NOTE: Issue the locatenode command to flash the UID LED blue.
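For example, to flash the blue UID LED on node 0 from an interactive CLI session (a minimal usage
sketch; the node number is illustrative):

cli% locatenode 0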
Ethernet LEDs
The controller node has two built-in Ethernet ports and each includes two LEDs:
• MGMT — Eth0 port provides connection to the public network
• RC-1 — designated port for Remote Copy functionality
Link status (Green): On – Normal/Connected (link up); Flashing – Link down or not connected
Figure 13 FC Ports
Table 11 FC Ports
Port Slot:Port
FC-1 1:1
FC-2 1:2
1 2:1
2 2:2
3 2:3
4 2:4
1 2:1
2 2:2
Green Off – No activity on port. This LED does not indicate a Ready state with a solid On, as the
I/O Module External Port Activity LEDs do.
CAUTION: Before servicing any component in the storage system, prepare an electrostatic
discharge (ESD) safe work surface by placing an antistatic mat on the floor or table near the storage
system. Attach the ground lead of the mat to an unpainted surface of the rack. Always use a
wrist-grounding strap provided with the storage system. Attach the grounding strap clip directly to
an unpainted surface of the rack.
For more information on part numbers for storage system components listed in this chapter, see
the “Illustrated Parts Catalog” (page 152).
Accessing SPMAINT
Use SPMAINT if you are servicing a storage system component or when you need to run a CLI
command.
To access SPMAINT:
1. On the left side of the SPOCC homepage, click Support.
2. On the Service Processor - Support page, under Service Processor, click SPMAINT on the Web
in the Action column.
3. Select option 7 Interactive CLI for a StoreServ and then select the desired system.
Swappable Components
Colored touch points on a storage system component (such as a lever or latch) identify whether
the system should be powered on or off during a part replacement:
• Hot-swappable – Parts are identified by red-colored touch points. The system can remain
powered on and active during replacement.
NOTE: Disk drives are hot-swappable, even though they are yellow and do not have red
touch points.
• Warm-swappable – Parts are identified by gray touch points. The system does not fail if the
part is removed, but data loss may occur if the replacement procedure is not followed correctly.
• Cold-swappable – Parts are identified by blue touch points. The system must be powered off
or otherwise suspended before replacing the part.
CAUTION:
• Do not replace cold-swappable components while power is applied to the product. Power off
the device and then disconnect all AC power cords.
• Power off the equipment and disconnect power to all AC power cords before removing any
access covers for cold-swappable areas.
• When replacing hot-swappable components, allow approximately 30 seconds between
removing the failed component and installing the replacement. This time is needed to ensure
that configuration data about the removed component is cleared from the system registry. To
prevent overheating due to an empty enclosure or bay, use a blank or leave the failed component
slightly disengaged in the enclosure until the replacement can be made.
Drives must be replaced within 10 minutes, nodes within 30 minutes, and all other parts within
6 minutes.
• Before replacing a hot-swappable component, ensure that steps have been taken to prevent
loss of data.
WARNING! Do not power off the system unless a service procedure requires the system to be
powered off. Before you power off the system to perform maintenance procedures, first verify with
a system administrator. Powering off the system will result in loss of access to the data from all
attached hosts.
Powering Off
Before you begin, use either SPMAINT or SPOCC to shut down and power off the system. For
information about SPOCC, see “Service Processor Onsite Customer Care ” (page 20).
NOTE: PDUs in any expansion cabinets connected to the storage system may need to be shut
off. Use the locatesys command to identify all connected cabinets before shutting down the
system. The command blinks all node and drive enclosure LEDs.
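A minimal sketch of issuing the command from an interactive CLI session (SPMAINT option 7,
Interactive CLI for a StoreServ); options that limit the flash duration are omitted here:

cli% locatesys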
The system can be shut down before powering off by any of the following three methods:
Using SPOCC
1. Select StoreServ Product Maintenance.
2. Select Halt a StoreServ cluster/node.
3. Follow the prompts to shut down a cluster. Do not shut down individual nodes.
4. Turn off power to the node PCMs.
5. Turn off power to the drive enclosure PCMs.
6. Turn off all PDUs in the rack.
Using SPMAINT
1. Select option 4 (StoreServ Product Maintenance).
2. Select Halt a StoreServ cluster/node.
3. Follow the prompts to shut down a cluster. Do not shut down individual nodes.
4. Turn off power to the node PCMs.
5. Turn off power to the drive enclosure PCMs.
6. Turn off all PDUs in the rack.
CAUTION: Failure to wait until all controller nodes are in a halted state can cause the system
to view the shutdown as uncontrolled. The system will undergo a check-state when powered
on if the nodes are not fully halted before power is removed, which can seriously impact host
access to data.
2. Allow 2-3 minutes for the node to halt, then verify that the node Status LED is flashing green
and the node hotplug LED is blue, indicating that the node has been halted. For information
about LEDs status, see “Understanding LED Indicator Status” (page 7).
3. Turn off power to the node PCMs.
4. Turn off power to the drive enclosure PCMs.
5. Turn off all PDUs in the rack.
Powering On
1. Set the circuit breakers on the PDUs to the ON position.
2. Set the switches on the power strips to the ON position.
3. Power on the drive enclosure PCMs.
NOTE: To avoid any cabling errors, all drive enclosures must have at least one hard drive
installed before powering on the enclosure.
NOTE: If necessary, loosen the two bottom screws to easily pull down the PDU.
WARNING! If the StoreServ is enabled with the HP 3PAR Data Encryption feature, use only
self-encrypting drives (SEDs). Using a non-self-encrypting drive may cause errors during the repair
process.
CAUTION:
• If you require more than 10 minutes to replace a disk drive, install a disk drive blank cover
to prevent overheating while you are working.
• To avoid damage to hardware and the loss of data, never remove a disk drive without
confirming that the disk fault LED is lit.
NOTE: SSDs have a limited number of writes that can occur before reaching the SSD's write
endurance limit. This limit is generally high enough that wear-out will not occur during the expected
service life of an HP 3PAR StoreServ under the great majority of configurations, I/O patterns, and
workloads. HP 3PAR StoreServ tracks all writes to SSDs and can report the percentage of the total
write endurance limit that has been used. This allows any SSD approaching the write endurance
limit to be proactively replaced before it is automatically spared out. An SSD has reached the
maximum usage limit once it exceeds its write endurance limit. Following the product warranty
period, SSDs that have exceeded the maximum usage limit will not be repaired or replaced under
HP support contracts.
WARNING! The Physical Disks may indicate Degraded, which indicates that the disk drive
is not yet ready for replacement. It may take several hours for the data to be vacated; do not
proceed until the status is Failed. Removing the failed drive before all the data is vacated
will cause loss of data.
2. On the Summary tab, select the Failed link in the Physical Disk row next to the red X icon.
CAUTION: If more than one disk drive is failed or degraded, contact your authorized service
provider to determine if the repair can be done in a safe manner, preventing down time or
data loss.
A filtered table displays, showing only failed or degraded disk drives (see Figure 26 (page
26)).
The Alert tab displays a filtered Alert table showing only the critical alerts associated with disk
drives, where the alert details are displayed (see Figure 27 (page 27)).
NOTE: The lower pane lists the alerts in a tabular fashion (you can see the highlighted alert
in Figure 27 (page 27)). Highlighted alerts display their details in the pane above the list.
3. Select the Locate icon in the top toolbar of the Management Console.
NOTE: If necessary, use the Stop Locate icon to halt LED flashing.
An icon with a flashing LED will be shown next to the cage, which flashes all drives in this
cage except the failed drive.
Figure 31 7200 and 7400 Two Node System (HP M6710 Drive Enclosure)
CAUTION: To avoid potential damage to equipment and loss of data, handle disk drives carefully.
NOTE: All drives in a vertical column of an LFF drive enclosure must be the same speed and
type.
4. Observe the newly installed disk drive and verify that the amber LED turns off and remains
off for 60 seconds.
NOTE: Until data has been restored, the original disk drive will display as Failed and the
replacement disk drive will display as Degraded.
3. The new drive displays in the same position as the failed drive and the State is listed as
Normal.
NOTE: The drive that was replaced continues to display in the table as Failed until the
disk rebuild is complete, which may take several hours. When the process is complete, the
failed drive is dismissed and dropped from the display.
4. Open a CLI session. Issue the checkhealth command to verify the system is working properly.
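A minimal sketch of this check from an interactive CLI session follows; the list of components
examined and the summary wording vary by HP 3PAR OS version, so the output shown here is only
illustrative:

cli% checkhealth
Checking alert
Checking cage
Checking pd
...

Any exceptions reported by the command should be investigated before considering the repair complete.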
NOTE: Be sure to wear your electrostatic discharge wrist strap to avoid damaging any circuitry.
Preparation
1. Unpack the replacement node and place it on an ESD safe mat.
2. Remove the node cover:
a. Loosen the two thumbscrews that secure the node cover to the node.
b. Lift the node cover and remove it.
3. If a PCIe adapter exists in the failed node:
a. Remove the PCIe adapter riser card from the replacement node by grasping the blue
touch point on the riser card and pulling it up and away from the node.
b. Insert the existing PCIe adapter onto the riser card.
c. Install the PCIe adapter assembly by aligning the recesses on the adapter plate with the
pins on the node chassis. This should align the riser card with the slot on the node. Snap
the PCIe adapter assembly into the node.
4. Install the node cover:
a. While aligning the node rod with the cutout in the front and the guide pins with the cutouts
in the side, lower the node cover into place.
b. Tighten the two thumbscrews to secure the node cover to the node.
5. Pull the gray node rod to the extracted position, out of the component.
NOTE: If the failed node is already halted, it is not necessary to shut down (halt) the node because
it is not part of the cluster.
The following figure illustrates the 7200 controller node.
NOTE: If the failed node is already halted, it is not necessary to shut down the node because it
is not part of the cluster.
In this case, there is only one controller node present, which indicates that the other node is
not part of the cluster. If the node UID LED is blue proceed to step 4 to locate the system. If
the node UID LED is not blue, escalate to the next level of support.
2. The Alert panel displays a filtered Alert table showing only the critical alerts associated with
the node, where the alert details are displayed. On the storage system, identify the node and
verify that the status LED is lit amber.
b. Enter an appropriate time to allow service personnel to view the LED status of the System.
NOTE: If necessary use the Stop Locate icon to halt LED flashing.
This flashes the LEDs on all of the drives and all nodes in this System except the failed
node, which has a solid blue LED.
CAUTION: The system does not fail if the node is properly halted before removal, but data
loss may occur if the replacement procedure is not followed correctly.
NOTE: The Node Fault LED may be amber, depending on the nature of the node failure.
NOTE: Nodes 1 and 3 are rotated in relation to nodes 0 and 2. See Figure 36 (page 32).
2. Ensure that all cables on the failed node are marked to facilitate reconnecting later.
3. Remove cables from the failed node.
4. Pull the node rod to remove the node from the enclosure.
5. When the node is halfway out of the enclosure, use both hands to slide the node out completely.
6. Set the node on the ESD safe mat next to the replacement node for servicing.
7. Push in the failed node’s rod to ready it for packaging and provide differentiation from the
replacement node.
Node Installation
1. Move both SFPs from the onboard FC ports on the failed node to the onboard FC ports on
the replacement node:
a. Lift the retaining clip and carefully slide the SFP out of the slot.
b. Carefully slide the SFP into the FC port on the replacement node until it is fully seated;
close the wire handle to secure it in place.
2. If a PCIe adapter is installed in the failed node, move the SFPs from the PCIe adapter on the
failed node to the PCIe adapter on the replacement node:
a. Lift the retaining clip and carefully slide the SFP out of the slot.
b. Carefully slide the replacement SFP into the adapter on the replacement node until it is
fully seated; close the wire handle to secure it in place.
3. On the replacement node, ensure the gray node rod is in the extracted position, pulled out
of the component.
CAUTION: Ensure the node is correctly oriented; alternate nodes are rotated 180°.
5. Keep sliding the node in until it halts against the insertion mechanism.
CAUTION: Do not proceed until the replacement node has an Ethernet cable connected to
the MGMT port. Without an Ethernet cable, node rescue cannot complete and the replacement
node is not able to rejoin the cluster.
CAUTION: If the blue LED is flashing, which indicates that the node is not properly seated,
pull out the grey node rod and push it back in to ensure that the node is fully seated.
NOTE: Once inserted, the node should power up and go through the node rescue procedure
before joining the cluster. This may take up to 10 minutes.
NOTE: On a 7400 (4-node system), there may be only two customer Ethernet cables. When
replacing nodes without any attached Ethernet cables, enter the shownet command to identify
one of the active nodes, then remove one of the existing Ethernet cables and attach it to the
node being rescued.
8. Verify the node LED is blinking green in synchronization with other nodes, indicating that the
node has joined the cluster.
9. Follow the return instructions provided with the new component.
NOTE: If a PCIe adapter is installed in the failed node, leave it installed. Do not remove
and return it in the packaging for the replacement PCIe adapter.
Node Verification
For the CLI procedure, see “CLI Procedures” (page 66).
1. Verify the node is installed successfully by refreshing the Management Console.
NOTE: The Management Console refreshes periodically and may already reflect the new
status.
2. The Status LED for the new node may indicate Green and take up to 3 minutes to change to
Green Blinking.
NOTE: The storage system status is good and the alerts associated with the failure have
been auto-resolved by the system and removed.
SFP Repair
The SFP is located in the port on the controller node HBA/CNA and there are two to six SFPs per
node.
Before you begin, use either SPMAINT or the HP 3PAR Management Console to identify the failed
SFP.
SFP Identification
1. Under the Systems tree in the left panel, select the storage system to be serviced.
2. On the Summary tab, click the Port link to open the port's tab.
State should now be listed as Ready, the Mode as Target and the Connected Device Type
as Host.
For the CLI procedure, see “SFP Identification” (page 74).
To perform maintenance using CLI, access SPMAINT:
1. In the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ.
2. Issue the following commands:
• showport to view the port State:
s750 cli% showport
N:S:P Mode      State     Node_WWN         Port_WWN/HW_Addr Type  Protocol Label Partner FailoverState
0:0:1 initiator ready     50002ACFF70185A6 50002AC0010185A6 disk  SAS      -     -       -
0:0:2 initiator ready     50002ACFF70185A6 50002AC0020185A6 disk  SAS      -     -       -
0:1:1 target    ready     2FF70002AC0185A6 20110002AC0185A6 host  FC       -     -       -
0:1:2 target    ready     2FF70002AC0185A6 20120002AC0185A6 host  FC       -     -       -
0:2:1 target    loss_sync -                2C27D75301F6     iscsi iSCSI    -     -       -
0:2:2 target    loss_sync -                2C27D75301F2     iscsi iSCSI    -     -       -
0:3:1 peer      offline   -                0002AC8004DB     rcip  IP       RCIP0 -       -
1:0:1 initiator ready     50002ACFF70185A6 50002AC1010185A6 disk  SAS      -     -       -
1:0:2 initiator ready     50002ACFF70185A6 50002AC1020185A6 disk  SAS      -     -       -
1:1:1 target    ready     2FF70002AC0185A6 21110002AC0185A6 host  FC       -     -       -
1:1:2 target    loss_sync 2FF70002AC0185A6 21120002AC0185A6 free  FC       -     -       -
1:2:1 initiator loss_sync 2FF70002AC0185A6 21210002AC0185A6 free  FC       -     -       -
1:2:2 initiator loss_sync 2FF70002AC0185A6 21220002AC0185A6 free  FC       -     -       -
1:2:3 initiator loss_sync 2FF70002AC0185A6 21230002AC0185A6 free  FC       -     -       -
1:2:4 initiator loss_sync 2FF70002AC0185A6 21240002AC0185A6 free  FC       -     -       -
1:3:1 peer      offline   -                0002AC8004BD     rcip  IP       RCIP1 -       -
cli% showport -sfp
N:S:P -State- -Manufacturer- MaxSpeed(Gbps) TXDisable TXFault RXLoss DDM
0:1:1 OK HP-F 8.5 No No No Yes
0:1:2 OK HP-F 8.5 No No No Yes
0:2:1 OK AVAGO 10.3 No No Yes Yes
0:2:2 OK AVAGO 10.3 No No Yes Yes
1:1:1 OK HP-F 8.5 No No No Yes
1:1:2 - - - - - - -
1:2:1 OK HP-F 8.5 No No Yes Yes
1:2:2 OK HP-F 8.5 No No Yes Yes
1:2:3 OK HP-F 8.5 No No Yes Yes
1:2:4 OK HP-F 8.5 No No Yes Yes
cli% showport
N:S:P Mode      State     Node_WWN         Port_WWN/HW_Addr Type  Protocol Label Partner FailoverState
0:0:1 initiator ready     50002ACFF70185A6 50002AC0010185A6 disk  SAS      -     -       -
0:0:2 initiator ready     50002ACFF70185A6 50002AC0020185A6 disk  SAS      -     -       -
0:1:1 target    ready     2FF70002AC0185A6 20110002AC0185A6 host  FC       -     -       -
0:1:2 target    ready     2FF70002AC0185A6 20120002AC0185A6 host  FC       -     -       -
0:2:1 target    loss_sync -                2C27D75301F6     iscsi iSCSI    -     -       -
0:2:2 target    loss_sync -                2C27D75301F2     iscsi iSCSI    -     -       -
0:3:1 peer      offline   -                0002AC8004DB     rcip  IP       RCIP0 -       -
1:0:1 initiator ready     50002ACFF70185A6 50002AC1010185A6 disk  SAS      -     -       -
• showport -sfp to verify that the replaced SFP is connected and the State is listed as
OK.
4. Replace the SFP. See “Replacing an SFP” (page 42).
5. In the HP 3PAR Management Console, verify that the SFP is successfully replaced. The replaced
port State is listed as Ready, the Mode is listed as Target, and the Connected Device Type
is listed as Host.
Replacing an SFP
1. After identifying the SFP that requires replacement, disconnect the cable and lift the retaining
clip to carefully slide the SFP out of the slot.
2. Remove the replacement SFP module from its protective packaging.
3. Carefully slide the replacement SFP into the adapter until fully seated, close the retaining clip
to secure it in place, and reconnect the cable.
4. Place the failed SFP into the packaging for return to HP.
5. Reconnect the cable to the SFP module and verify that the link status LED is solid green.
CAUTION: Before removing a drive enclosure from the rack, remove each disk drive, label it
with its slot number, and place each on a clean, ESD-safe surface. After completing the enclosure
installation, reinstall the disk drives in their original slots.
CAUTION: Two people are required to remove the enclosure from the rack to prevent injury.
To replace an enclosure:
1. Power down the enclosure and disconnect all power cables.
2. Remove the drives from the enclosure, noting each drive's location in the enclosure.
3. Remove the bezels at the sides of the enclosure to access the screws.
4. Unscrew the M5 screws that mount the enclosure to the rack.
5. Using both hands, pull the enclosure from the rail shelves. Use the bottom lip as a guide and
the top to catch the enclosure.
6. Reinstall the enclosure. See “Installing the Enclosures” (page 91).
Before you begin, verify the location of the I/O module in an enclosure:
1. Display the failed I/O Module by executing the showcage command:
cli% showcage
Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model Side
0 cage0 1:0:1 0 0:0:1 0 6 25-28 320c 320c DCN1 n/a
1 cage1 1:0:1 2 0:0:1 2 6 25-29 320c 320c DCS1 n/a
2 cage2 1:0:1 1 0:0:1 1 6 33-28 320c 320c DCS2 n/a
3 cage3 1:0:1 0 ----- 0 6 33-27 320c 320c DCS2 n/a
Typically, the dashes (-----) indicate that one of the interfaces failed.
2. If required, execute the locatecage command to identify the drive enclosure:
a. In the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ.
b. Execute the locatecage command.
To perform maintenance using CLI, access SPMAINT:
1. In the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ.
2. Issue the following commands:
• showcage. A ----- indicates the location of the module in the enclosure. See the Name
field in the output.
• locatecage cagex. Where x is the number of the cage in the Name field.
cli% showcage
Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model Side
0 cage0 1:0:1 0 0:0:1 0 7 25-34 3202 3202 DCN1 n/a
1 cage1 1:0:1 0 0:0:1 1 0 0-0 3202 3202 DCS1 n/a
2 cage2 1:0:1 3 0:0:1 2 2 33-34 3202 3202 DCS2 n/a
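For example, to flash the LEDs of the enclosure reported above as cage2 (a minimal usage sketch):

cli% locatecage cage2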
3. The drive and I/O module fault LEDs flash amber with a one-second interval. Identify the
enclosure location where the I/O module resides by verifying the LED number on the front of
the enclosure.
4. Label and remove the SAS cables attached to the I/O module.
5. Replace the I/O module. See “Removing an I/O Module” (page 44) and “Installing an I/O
Module” (page 45).
6. Reattach the SAS cables to the I/O module.
7. In the CLI, issue the showcage command to verify that the I/O module has been successfully
replaced and the ----- is replaced with output:
cli% showcage
Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model Side
0 cage0 1:0:1 0 0:0:1 0 7 25-34 3202 3202 DCN1 n/a
1 cage1 1:0:1 0 0:0:1 1 0 0-0 3202 3202 DCS1 n/a
2 cage2 1:0:1 3 0:0:1 2 2 33-33 3202 3202 DCS2 n/a
3 cage3 1:0:1 2 0:0:1 3 2 32-32 3202 3202 DCS2 n/a
4 cage4 1:0:1 1 0:0:1 3 2 34-34 3202 3202 DCS2 n/a
6 cage6 1:0:2 2 0:0:2 1 6 33-35 3202 3202 DCS1 n/a
7 cage7 1:0:2 1 0:0:2 2 6 34-34 3202 3202 DCS1 n/a
8 cage8 1:0:2 0 0:0:2 0 6 35-36 3202 3202 DCS1 n/a
9 cage9 1:0:2 3 0:0:2 0 8 34-48 220c 220c DCS1 n/a
6. Verify that the I/O module is successfully replaced by executing the showcage command:
cli% showcage
Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model Side
0 cage0 1:0:1 0 0:0:1 0 6 25-28 320c 320c DCN1 n/a
1 cage1 1:0:1 2 0:0:1 2 6 25-29 320c 320c DCS1 n/a
2 cage2 1:0:1 1 0:0:1 1 6 25-28 320c 320c DCS2 n/a
3 cage3 1:0:1 0 0:0:1 0 6 25-27 320c 320c DCS2 n/a
CAUTION: To prevent overheating, the node PCM bay in the enclosure should not be left open
for more than 6 minutes.
NOTE: Be sure to put on your electrostatic discharge wrist strap to avoid damaging any circuitry.
1. Remove the replacement PCM from its packaging and place it on an ESD safe mat with the
empty battery compartment facing up.
PCM Removal
CAUTION: Verify the PCM power switch is turned to the OFF position to disconnect power.
NOTE: Because they use a common power bus, some PCM LEDs may remain illuminated after
the PCM is powered off.
1. Loosen the cord clamp, release the cable tie tab, and slide the cord clamp off the cable tie.
2. Disconnect the power cable, keeping the cord clamp on the power cable.
3. Secure the power cable and cable clamp so that they will not be in the way when the PCM is
removed.
4. Note the PCM orientation.
5. With thumb and forefinger, grasp and squeeze the latch to release the handle.
6. Rotate the PCM release handle and slide the PCM out of the enclosure.
7. Place the faulty PCM on the ESD safe mat next to the replacement PCM with the battery
compartment facing up.
NOTE: Check that the battery and handle are level with the surface of the PCM.
NOTE: Ensure that no cables get caught in the PCM insertion mechanism, especially the thin
Fibre Channel cables.
4. Rotate the handle to fully seat the PCM into the enclosure; you will hear a click as the latch
engages.
5. Once inserted, pull back lightly on the PCM to ensure that it is properly engaged.
6. Reconnect the power cable and slide the cable clamp onto the cable tie.
7. Tighten the cord clamp.
8. Turn the PCM on and check that power LED is green (see Table 3 (page 9)).
9. Slide the cord clamp from the replacement PCM onto the cable tie of the failed PCM.
10. Follow the return instructions provided with the new component.
11. Verify that the PCM has been successfully replaced (see “PCM and Battery Verification” (page
73)).
NOTE: For a failed battery in a PCM, see “Replacing a Battery inside a Power Cooling Module”
(page 49).
To perform maintenance using CLI, access SPMAINT:
1. In the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ.
2. Issue the shownode -ps command:
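The sketch below only illustrates the kind of listing to expect; the serial numbers are hypothetical
and the exact column set may differ on a live system (a failed PCM typically reports a Failed
power-supply state):

cli% shownode -ps
Node PS -Serial- -PSState- FanState ACState DCState
0,1   0 ABCD123  OK        OK       OK      OK
0,1   1 ABCD124  Failed    OK       OK      Failed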
3. Replace the PCM. See “Removing a Power Cooling Module” (page 48) and “Installing a
Power Cooling Module ” (page 51).
4. In the CLI, issue the shownode -ps command to verify that the PCM has been successfully
replaced.
WARNING! If both batteries in the same node enclosure have failed, do not attempt to replace
both at the same time.
Before you begin, verify that at least one PCM battery in each node enclosure is functional and
identify which battery needs to be replaced.
To perform maintenance using CLI, access SPMAINT:
1. In the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ and issue
the following commands:
• showbattery to verify that the battery has failed:
cli% showbattery
Node PS Bat Serial State ChrgLvl(%) ExpDate Expired Testing
0,1 0 0 BCC0974242G00C7 Failed 106 n/a No No
0,1 1 0 BCC0974242G006J OK 104 n/a No No
NOTE: Because each battery is a backup for both nodes, node 0 and 1 both report a
problem with a single battery. The Qty appears as 2 in output because two nodes are
reporting the problem. Battery 0 for node 0 is in the left PCM, and battery 0 for node 1
is in the right side PCM (when looking at the node enclosure from the rear).
2. Remove the PCM, see “Removing a Power Cooling Module” (page 48).
a. At the back of the PCM, lift the battery handle to eject the battery pack.
3. To reinstall the PCM, see “Installing a Power Cooling Module ” (page 51).
4. In the CLI, issue the following commands:
• showbattery to confirm the battery is functional and the serial ID has changed:
cli% showbattery
Node PS Bat Assy_Serial State ChrgLvl(%) ExpDate Expired Testing
0,1 0 0 BCC0974242G00CH OK 104 n/a No No
0,1 1 0 BCC0974242G006J OK 106 n/a No No
NOTE: After servicing the controller nodes and cages, use the upgradecage cage<n> command
to ensure all the cages, along with the associated firmware, are operating with the correct version
of the software.
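For example, to bring a single enclosure and its firmware to the correct level (a minimal usage
sketch; cage1 is an illustrative cage name, and the command can be repeated for each cage reported
by showcage):

cli% upgradecage cage1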
The following node internal component procedures are very complicated and may result in loss of
data. Before performing these procedures, remove the node cover, if appropriate.
NOTE: Items 1 and 2 in the list above are regarded as one component, called the Node drive assembly.
NOTE: Before beginning any internal node component procedure, the node must be removed
from the storage system and the node cover removed.
NOTE: The clock inside the node uses a 3-V lithium coin battery. The lithium coin battery may
explode if it is incorrectly installed in the node. Replace the clock battery only with a battery
supplied by HP. Do not use non-HP supplied batteries. Dispose of used batteries according to the
manufacturer’s instructions.
CAUTION: To prevent overheating, the node bay in the enclosure should not be left open for
more than 30 minutes.
NOTE: Be sure to use an electrostatic discharge wrist strap to avoid damaging any circuitry.
Preparation
Unpack the replacement clock battery and place on an ESD safe mat.
NOTE: If the failed node is already halted, it is not necessary to shut down (halt) the node because
it is not part of the cluster. The failed DIMM should be identified from the failure notification.
1. In the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ.
2. Issue the shownode command to see if the node is listed as Degraded or is missing from the
output.
NOTE: If the node's state is Degraded, it must be shut down to be serviced. If the node is
missing from the output, it may already be shut down and ready to be serviced; in this case,
proceed to Step 6.
In the following example of a 7200 both nodes are present:
cli% shownode
Control Data Cache
Node --Name--- -State- Master InCluster -Service_LED ---LED--- Mem(MB) Mem(MB) Available(%)
0 1699808-0 OK Yes Yes Off GreenBlnk 8192 4096 100
1 1699808-1 Degraded No Yes Off GreenBlnk 8192 4096 100
NOTE: If more than one node is down at the same time, escalate to the next level of support.
NOTE: All nodes in this System flash, except the failed node, which displays a solid
blue LED.
Node Removal
1. Allow 2-3 minutes for the node to halt, then verify that the Node Status LED is flashing green
and the Node UID LED is blue, indicating that the node has been halted.
CAUTION: The system will not fail if the node is properly halted before removal, but data
loss may occur if the replacement procedure is not followed correctly.
NOTE: The Node Fault LED may be amber, depending on the nature of the node failure.
NOTE: Do not touch internal node components when removing or inserting the battery.
3. Insert the replacement 3-V lithium coin battery into the Clock Battery slot with the positive-side
facing the retaining clip.
4. Replace the node cover.
Node Replacement
1. Ensure that the gray node rod is in the extracted position, pulled out of the component.
2. Grasp each side of the node and gently slide it into the enclosure. Ensure the node is aligned
with the grooves in the slot.
CAUTION: Ensure the node is correctly oriented, alternate nodes are rotated by 180°.
3. Keep sliding the node in until it halts against the insertion mechanism.
4. Reconnect cables to the node.
5. Push the extended gray node rod into the node to ensure the node is correctly installed.
CAUTION: A flashing blue LED indicates that the node is not properly seated. Pull out the
gray node rod and push back in to ensure that the node is fully seated.
NOTE: Once inserted, the node should power up and rejoin the cluster; this may take up to
5 minutes.
6. Verify that the node LED is blinking green in synchronization with other nodes, indicating that
the node has joined the cluster.
7. Follow the return or disposal instructions provided with the new component.
4. Issue the showdate command to confirm that the clock setting is correct:
cli% showdate
Node Date
0 2012-11-21 08:36:35 PDT (America/Los_Angeles)
1 2012-11-21 08:36:35 PDT (America/Los_Angeles)
CAUTION: To prevent overheating, the node bay in the enclosure should not be left open for
more than 30 minutes.
NOTE: Use an electrostatic discharge wrist strap to avoid damaging any circuitry.
Preparation
Unpack the replacement DIMM and place on an ESD safe mat.
NOTE: If the failed node is already halted, it is not necessary to shut down (halt) the node because
it is not part of the cluster. The failed DIMM should be identified from the failure notification.
Steps 1 through 4 assist in the identification of the part to be ordered, if this information has
not already been obtained from the notification.
NOTE: Even when a DIMM is reported as failed, it still displays configuration information.
1. In the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ.
2. Issue the shownode command to see if the node is listed as Degraded or is missing from the
output.
NOTE: If the node's state is Degraded, it must be shut down to be serviced. If the node is
missing from the output, it may already be shut down and ready to be serviced; in this case,
proceed to Step 6.
In the following example of a 7200, both nodes are present:
cli% shownode
Control Data Cache
Node --Name--- -State- Master InCluster -Service_LED ---LED--- Mem(MB) Mem(MB) Available(%)
0 1699808-0 OK Yes Yes Off GreenBlnk 8192 4096 100
1 1699808-1 Degraded No Yes Off GreenBlnk 8192 4096 100
NOTE: If more than one node is down at the same time, escalate to the next level of support.
3. Issue the shownode -mem command to display the usage (control or data cache) and
manufacturer (sometimes this cannot be displayed).
cli% shownode -mem
Node Riser Slot SlotID -Name- -Usage- ---Type--- --Manufacturer--- -Serial- -Latency-- Size(MB)
0 n/a 0 J0155 DIMM0 Control DDR3_SDRAM -- B1F55894 CL5.0/10.0 8192
0 n/a 0 J0300 DIMM0 Data DDR2_SDRAM Micron Technology DD9CCF19 CL4.0/6.0 2048
0 n/a 1 J0301 DIMM1 Data DDR2_SDRAM Micron Technology DD9CCF1A CL4.0/6.0 2048
1 n/a 0 J0155 DIMM0 Control DDR3_SDRAM -- B1F55897 CL5.0/10.0 8192
1 n/a 0 J0300 DIMM0 Data DDR2_SDRAM Micron Technology DD9CCF1C CL4.0/6.0 2048
1 n/a 1 J0301 DIMM1 Data DDR2_SDRAM Micron Technology DD9CCF1B CL4.0/6.0 2048
NOTE: All nodes in this system flash, except the failed node, which displays a solid
blue LED.
CAUTION: The system does not fail if the node is properly halted before removal, but data
loss may occur if the replacement procedure is not followed correctly.
NOTE: The Node Fault LED may be amber, depending on the nature of the node failure.
DIMM Replacement
1. Lift the Node Drive Assembly, move it to the side, and place it on the ESD safe mat.
2. Physically identify the failed DIMM in the node.
The Control Cache (CC) and Data Cache (DC) DIMMs can be identified by locating the
appropriate silk-screening on the board.
3. With your thumb or finger, press outward on the two tabs on the sides of the DIMM to remove
the failed DIMM and place on the ESD safe mat.
4. Align the key and insert the DIMM by pushing downward on the edge of the DIMM until the
tabs on both sides snap into place.
5. Replace the Node Drive Assembly.
6. Replace the node cover.
Node Replacement
1. Ensure that the gray node rod is in the extracted position, pulled out of the component.
CAUTION: Ensure that the node is correctly oriented, alternate nodes are rotated by 180°.
3. Keep sliding the node in until it halts against the insertion mechanism.
4. Push the extended gray node rod into the node to ensure the node is correctly installed.
5. Reconnect cables to the node.
CAUTION: A flashing blue LED indicates that the node is not properly seated. Pull out the
gray node rod and push it back in to ensure that the node is fully seated.
NOTE: Once inserted, the node should power up and rejoin the cluster, which may take up
to 5 minutes.
6. Verify that the node LED is blinking green in synchronization with other nodes, indicating that
the node has joined the cluster.
7. Follow the return instructions provided with the new component.
NOTE: Depending on the serviced component, the node may go through Node Rescue,
which can take up to 10 minutes.
NOTE: The LED status for the replaced node may indicate green and could take up to 3
minutes to change to blinking green.
cli% shownode
Control Data Cache
Node --Name--- -State- Master InCluster -Service_LED ---LED--- Mem(MB) Mem(MB) Available(%)
0 1699808-0 OK Yes Yes Off GreenBlnk 8192 4096 100
1 1699808-1 OK No Yes Off GreenBlnk 8192 4096 100
NOTE: The shownode -i command displays node inventory information; scroll down to
view physical memory information.
cli% shownode -i
------------------------Nodes------------------------
.
----------------------PCI Cards----------------------
.
-------------------------CPUs------------------------
.
-------------------Internal Drives-------------------
.
-------------------------------------------Physical Memory-------------------------------------------
Node Riser Slot SlotID Name Type --Manufacturer--- ----PartNumber---- -Serial- -Rev- Size(MB)
0 n/a 0 J0155 DIMM0 DDR3_SDRAM -- 36KDYS1G72PZ-1G4M1 B1F55894 4D31 8192
0 n/a 0 J0300 DIMM0 DDR2_SDRAM Micron Technology 18HVF25672PZ-80EH1 DD9CCF19 0100 2048
0 n/a 1 J0301 DIMM1 DDR2_SDRAM Micron Technology 18HVF25672PZ-80EH1 DD9CCF1A 0100 2048
1 n/a 0 J0155 DIMM0 DDR3_SDRAM -- 36KDYS1G72PZ-1G4M1 B1F55897 4D31 8192
1 n/a 0 J0300 DIMM0 DDR2_SDRAM Micron Technology 18HVF25672PZ-80EH1 DD9CCF1C 0100 2048
1 n/a 1 J0301 DIMM1 DDR2_SDRAM Micron Technology 18HVF25672PZ-80EH1 DD9CCF1B 0100 2048
--------------------Power Supplies-------------------
CAUTION: To prevent overheating, the node bay in the enclosure should not be left open for
more than 30 minutes.
NOTE: Be sure to put on your electrostatic discharge wrist strap to avoid damaging any circuitry.
Unpack the replacement PCIe Adapter and place it on an ESD safe mat.
:
If the failed node is already halted, it is not necessary to shut down (halt) the node because it is not
part of the cluster. The failed PCIe adapter is identified by the failure notification.
CAUTION: The system does not fail if the node is properly halted before removal, but data
loss may occur if the replacement procedure is not followed correctly.
NOTE: The Node Fault LED may be amber, depending on the nature of the node failure.
NOTE: The CNA Adapter is half-height and will not be held in place by the blue touch point
tab.
a. Press down on the blue touch point tab to release the assembly from the node.
b. Grasp the blue touch point on the riser card and pull the assembly up and away from
the node for removal.
c. Pull the riser card to the side to remove the riser card from the assembly.
3. Insert the replacement PCIe Adapter into the riser card.
4. To replace the Adapter, align the recesses on the Adapter plate with the pins on the node
chassis. This should align the riser card with the slot on the node. Snap the PCIe Adapter
assembly into the node.
5. Replace the node cover.
CAUTION: Alloy gray-colored latches on components such as the node mean the component is
warm-swappable. HP recommends shutting down the node (with the enclosure power remaining
on) before removing this component.
CAUTION: To prevent overheating, the node bay in the enclosure should not be left open for
more than 30 minutes.
1. Ensure that the gray node rod is in the extracted position, pulled out of the component.
2. Grasp each side of the node and gently slide it into the enclosure. Ensure the node is aligned
with the grooves in the slot.
CAUTION: Ensure that the node is correctly oriented; alternate nodes are rotated by 180°.
3. Keep sliding the node in until the node halts against the insertion mechanism.
4. Reconnect cables to the node.
5. Push the extended gray node rod into the node to ensure the node is correctly installed.
CAUTION: If the blue LED is flashing, it indicates that the node is not properly seated. Pull
out the gray node rod and push back in to ensure that the node is fully seated.
NOTE: Once inserted, the node should power up and rejoin the cluster; this may take up to
5 minutes.
6. Verify that the node LED is blinking green in synchronization with other nodes indicating that
the node has joined the cluster.
7. Follow the return or disposal instructions provided with the new component.
8. Verify that the node has been successfully replaced and the replacement PCIe Adapter is
recognized.
CAUTION: Alloy gray-colored latches on components such as the node mean the component is
warm-swappable. HP recommends shutting down the node (with the enclosure power remaining
on) before removing this component.
CAUTION: To prevent overheating, the node bay in the enclosure should not be left open for
more than 30 minutes.
NOTE: Be sure to put on your electrostatic discharge wrist strap to avoid damaging any circuitry.
Preparation
Remove the replacement Node Drive Assembly from its protective packaging and place on an ESD
safe mat.
NOTE: If the failed node is already halted, it is not necessary to shut down (halt) the node because
it is not part of the cluster. The failed DIMM should be identified from the failure notification.
NOTE: If the node's state is Degraded, it must be shut down to be serviced. If the
node is missing from the output, it may already be shut down and ready to be serviced; in
this case, proceed to step 6.
• In this example of a 7200, both nodes are present.
cli% shownode
Control Data Cache
Node --Name--- -State- Master InCluster -Service_LED ---LED--- Mem(MB) Mem(MB) Available(%)
0 1699808-0 OK Yes Yes Off GreenBlnk 8192 4096 100
1 1699808-1 Degraded No Yes Off GreenBlnk 8192 4096 100
NOTE: If more than one node is down at the same time, escalate to the next level of
support.
NOTE: This flashes all nodes in this system except the failed node, which will have a
solid blue LED.
CAUTION: The system does not fail if the node is properly halted before removal, but data
loss may occur if the replacement procedure is not followed correctly.
NOTE: The Node Fault LED may be amber, depending on the nature of the node failure.
2. Ensure that all cables on the failed node are marked to facilitate reconnecting later.
3. At the rear of the rack, remove cables from the failed node.
4. Pull the node rod to remove the node from the enclosure.
5. When the node is halfway out of the enclosure, use both hands to slide the node out completely.
6. Set the node on the ESD safe mat for servicing.
NOTE: There are four plastic guide pins that hold the node disk in place. To correctly seat
the node disk, push the node disk down on the guide pins. Failure to locate the guide pins
correctly may result in the inability to replace the node cover.
Node Replacement
1. Ensure that the gray node rod is in the extracted position, pulled out of the component.
2. Grasp each side of the node and gently slide it into the enclosure. Ensure the node is aligned
with the grooves in the slot.
CAUTION: Ensure that the node is correctly oriented; alternate nodes are rotated by 180°.
3. Keep sliding the node in until it halts against the insertion mechanism.
4. Reconnect cables to the node.
CAUTION: Do not proceed until the node being replaced has an Ethernet cable connected
to the MGMT port. Without an Ethernet cable, node rescue cannot complete and the
replacement node will not be able to rejoin the cluster.
5. Push the extended gray node rod into the node to ensure the node is correctly installed.
CAUTION: If the blue LED is flashing, the node is not properly seated. Pull out the gray node
rod and push back in to ensure that the node is fully seated.
NOTE: Once inserted, the node should power up and go through the Node Rescue procedure
before joining the Cluster; this may take up to 10 minutes.
6. Verify that the node LED is blinking green in synchronization with other nodes, indicating that
the node has joined the cluster.
7. Follow the return and disposal instructions provided with the new component.
Node Verification
Verify that the node is operational and the Node Drive Assembly has been successfully replaced:
1. In the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ.
2. Issue the checkhealth command to verify that the state of the system is OK:
cli% checkhealth
Checking alert
Checking cabling
Checking cage
Checking date
Checking host
Checking ld
Checking license
Checking network
Checking node
Checking pd
Checking port
Checking rc
Checking snmp
Checking task
Checking vlun
Checking vv
System is healthy
3. Issue the shownode command to verify that the state of all nodes is OK.
NOTE: Depending on the serviced component, the node may go through Node Rescue,
which can take up to 10 minutes.
NOTE: The LED status for the replaced node may indicate green and can take up to 3 minutes
to change to green blinking.
cli% shownode
Control Data Cache
Node --Name--- -State- Master InCluster -Service_LED ---LED--- Mem(MB) Mem(MB) Available(%)
0 1699808-0 OK Yes Yes Off GreenBlnk 8192 4096 100
1 1699808-1 OK No Yes Off GreenBlnk 8192 4096 100
NOTE: If the failed node is already halted, it is not necessary to shut down the node because it
is not part of the cluster.
1. In the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ.
2. Issue the shownode command to see if the node is listed as Degraded or is missing from the
output.
NOTE: If the node's state is Degraded, it will need to be shut down to be serviced. If the
node is missing from the output, it may already be shut down and ready to be serviced; in
this case, proceed to step 6.
In the following example of a 7200, both nodes are present.
Node --Name--- -State- Master InCluster -Service_LED ---LED--- Mem(MB) Mem(MB) Available(%)
0 1699808-0 OK Yes Yes Off GreenBlnk 8192 4096 100
1 1699808-1 Degraded No Yes Off GreenBlnk 8192 4096 100
NOTE: If more than one node is down at the same time, contact your authorized service
provider.
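The halt step itself is omitted in this excerpt. As a minimal sketch (assuming the HP 3PAR CLI shutdownnode command, which is not shown in this excerpt, is issued from the same Interactive CLI session), halting degraded node 1 before servicing it might look like:
cli% shutdownnode halt 1
After the node halts (Node Status LED flashing green and Node UID LED blue, as described in the Node Removal steps), the node can be removed for service.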
Node Verification
Verify that the node has successfully been replaced:
1. Select the button to return to the 3PAR Service Processor Menu.
2. In the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ.
3. Issue the checkhealth command to verify that the state of all nodes is OK.
NOTE: Depending on the serviced component, the node may go through Node Rescue,
which can take up to 10 minutes.
cli% checkhealth
Checking alert
Checking cabling
Checking cage
Checking dar
Checking date
Checking host
Checking ld
Checking license
Checking network
NOTE: Depending on the serviced component, the node may go through Node Rescue,
which can take up to 10 minutes.
NOTE: The LED status for the replaced node may indicate Green and could take up to 3
minutes to change to Green Blinking.
cli% shownode
Control Data Cache
Node --Name--- -State- Master InCluster -Service_LED ---LED--- Mem(MB) Mem(MB) Available(%)
0 1699808-0 OK Yes Yes Off GreenBlnk 8192 4096 100
1 1699808-1 OK No Yes Off GreenBlnk 8192 4096 100
SYNTAX
startnoderescue -node <node>
DESCRIPTION
Initiates a node rescue, which initializes the internal node disk of the
specified node to match the contents of the other node disks. The copy is
done over the network, so the node to be rescued must have an ethernet
connection. It will automatically select a valid unused link local
address. Progress is reported as a task.
AUTHORITY
Super, Service
OPTIONS
None.
SPECIFIERS
<node>
Specifies the node to be rescued. This node must be physically present
in the system and powered on, but not part of the cluster.
NOTES
On systems other than T and F class, node rescue will automatically be
started when a blank node disk is inserted into a node. The
startnoderescue command only needs to be manually issued if the node rescue
must be redone on a disk that is not blank. For T and F class systems,
startnoderescue must always be issued to perform a node rescue.
EXAMPLES
The following example shows starting a node rescue of node 2.
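The invocation itself is not shown above; per the SYNTAX section, starting the rescue of node 2 would be:
cli% startnoderescue -node 2
Progress is reported as a task, which can then be monitored with showtask: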
cli% showtask
Id Type Name Status Phase Step -------StartTime------- -FinishTime- -Priority- ---User----
96 node_rescue node_2_rescue active 1/1 0/1 2012-06-15 18:19:38 PDT - n/a sys:3parsys
:
If the failed node is already halted, it is not necessary to shut down the node because it is not part
of the cluster.
1. In the 3PAR Service Processor Menu, select option 7 Interactive CLI for a StoreServ.
2. Issue the shownode -pci command to display adapter information:
cli% shownode -pci
Node Slot Type -Manufacturer- -Model-- --Serial-- -Rev- Firmware
0 0 SAS LSI 9205-8e Onboard 01 11.00.00.00
0 1 FC EMULEX LPe12002 Onboard 03 2.01.X.14
0 2 FC EMULEX LPe12004 5CF223004R 03 2.01.X.14
0 3 Eth Intel e1000e Onboard n/a 1.3.10-k2
1 0 SAS LSI 9205-8e Onboard 01 11.00.00.00
1 1 FC EMULEX LPe12002 Onboard 03 2.01.X.14
1 2 FC EMULEX LPe12004 5CF2230036 03 2.01.X.14
1 3 Eth Intel e1000e Onboard n/a 1.3.10-k2
Using this output, verify that the manufacturer and model of the replacement card are the same as
those of the card currently installed in the slot.
3. Issue the shownode command to see if the node is listed as Degraded or is missing from the
output.
NOTE: If the node's state is Degraded, it must be shut down to be serviced. If the node is
missing from the output, it may already be shut down and ready to be serviced; in that case,
proceed to step 6.
NOTE: If more than one node is down at the same time, escalate to the next level of support.
NOTE: This flashes all nodes in this system except the failed node, which has a solid blue
LED.
cli% checkhealth
Checking alert
NOTE: Depending on the serviced component, the node may go through Node Rescue,
which can take up to 10 minutes.
NOTE: The LED status for the replaced node may indicate green and could take up to 3
minutes to change to green blinking.
cli% shownode
Control Data Cache
Node --Name--- -State- Master InCluster -Service_LED ---LED--- Mem(MB) Mem(MB) Available(%)
0 1699808-0 OK Yes Yes Off GreenBlnk 8192 4096 100
1 1699808-1 OK No Yes Off GreenBlnk 8192 4096 100
4. Issue the shownode -pci command to verify that all PCIe Adapters are operational.
cli% shownode -pci
Node Slot Type -Manufacturer- -Model-- --Serial-- -Rev- Firmware
0 0 SAS LSI 9205-8e Onboard 01 11.00.00.00
0 1 FC EMULEX LPe12002 Onboard 03 2.01.X.14
0 2 FC EMULEX LPe12004 5CF223004R 03 2.01.X.14
0 3 Eth Intel e1000e Onboard n/a 1.3.10-k2
1 0 SAS LSI 9205-8e Onboard 01 11.00.00.00
1 1 FC EMULEX LPe12002 Onboard 03 2.01.X.14
1 2 FC EMULEX LPe12004 5CF2230036 03 2.01.X.14
1 3 Eth Intel e1000e Onboard n/a 1.3.10-k2
CAUTION: To prevent overheating, the node bay in the enclosure should not be left open for
more than 30 minutes.
NOTE: Be sure to put on your electrostatic discharge wrist strap to avoid damaging any circuitry.
Unpack the replacement PCIe Adapter Riser Card and place on an ESD safe mat.
NOTE: The PCIe Adapter Riser Card has no active components, so it is not displayed in any
output; its failure appears as a failed PCIe Adapter.
NOTE: If the failed node is already halted, it is not necessary to shut down (halt) the node because
it is not part of the cluster.
Node Removal
1. Allow 2-3 minutes for the node to halt, and then verify that the Node Status LED is flashing
green and the Node UID LED is blue, indicating that the node has been halted.
CAUTION: The system does not fail if the node is properly halted before removal, but data
loss may occur if the replacement procedure is not followed correctly.
NOTE: The Node Fault LED may be amber, depending on the nature of the node failure.
2. Ensure that all cables on the failed node are marked to facilitate reconnecting later.
3. At the rear of the rack, remove cables from the failed node.
4. Pull the node rod to remove the node from the enclosure.
5. When the node is halfway out of the enclosure, use both hands to slide the node out completely.
6. Set the node on the ESD safe mat for servicing.
NOTE: The PCIe CNA Adapter is half-height; it is not secured by this tab.
a. Press down on the blue touch point tab to release the assembly from the node.
b. Grasp the blue touch point on the riser card and pull the assembly up and away from
the node for removal.
c. Pull the riser card to the side to remove it from the assembly.
3. Insert the PCIe Adapter into the replacement riser card.
Node Replacement
1. Ensure that the gray node rod is in the extracted position, pulled out of the component.
2. Grasp each side of the node and gently slide it into the enclosure. Ensure the node is aligned
with the grooves in the slot.
CAUTION: Ensure that the node is correctly oriented; alternate nodes are rotated by 180°.
3. Keep sliding the node in until it halts against the insertion mechanism.
4. Reconnect cables to the node.
CAUTION: If the blue LED is flashing, the node is not properly seated. Pull out the gray node
rod and push back in to ensure that the node is fully seated.
5. Push the extended gray node rod into the node to ensure the node is correctly installed.
NOTE: Once inserted, the node should power up and rejoin the cluster; it may take up to
5 minutes.
6. Verify that the node LED is blinking green in synchronization with other nodes, indicating that
the node has joined the cluster.
7. Follow the return or disposal instructions provided with the new component.
8. Verify that the node has been successfully replaced and the PCIe Adapter is recognized (see
“Node and PCIe Adapter Verification ” (page 68)).
1. If the cage has been called out in a notification, issue the showcage -d cageX command,
where cageX is the name of the cage indicated in the notification.
cli% showcage -d cage0
Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model Side
0 cage0 1:0:1 0 0:0:1 0 6 26-27 320c 320c DCN1 n/a
Position: ---
PCM Location
If an invalid status is not displayed, you can flash LEDs in a drive enclosure using the command
locatecage -t XX cageY, where XX is the number of seconds to flash LEDs and cageY is the
name of the cage from the commands in “Drive PCM Identification ” (page 71).
LEDs can be stopped by issuing the locatecage -t 1 cageY command.
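For example, a brief sketch assuming cage0 from the showcage output above and a 90-second flash window:
cli% locatecage -t 90 cage0
cli% locatecage -t 1 cage0
The first command flashes the enclosure LEDs for 90 seconds; the second effectively stops them by reducing the flash time to one second.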
Position: ---
SFP Identification
1. Issue the showport command to view the port state:
cli% showport
N:S:P Mode State Node_WWN Port_WWN/HW_Addr Type Protocol Label Partner FailoverState
0:0:1 initiator ready 50002ACFF70185A6 50002AC0010185A6 disk SAS - - -
0:0:2 initiator ready 50002ACFF70185A6 50002AC0020185A6 disk SAS - - -
0:1:1 target ready 2FF70002AC0185A6 20110002AC0185A6 host FC - - -
0:1:2 target ready 2FF70002AC0185A6 20120002AC0185A6 host FC - - -
0:2:1 target loss_sync - 2C27D75301F6 iscsi iSCSI - - -
0:2:2 target loss_sync - 2C27D75301F2 iscsi iSCSI - - -
0:3:1 peer offline - 0002AC8004DB rcip IP RCIP0 - - -
1:0:1 initiator ready 50002ACFF70185A6 50002AC1010185A6 disk SAS - - -
1:0:2 initiator ready 50002ACFF70185A6 50002AC1020185A6 disk SAS - - -
1:1:1 target ready 2FF70002AC0185A6 21110002AC0185A6 host FC - - -
1:1:2 initiator loss_sync 2FF70002AC0185A6 21120002AC0185A6 free FC - - -
1:2:1 initiator loss_sync 2FF70002AC0185A6 21210002AC0185A6 free FC - - -
1:2:2 initiator loss_sync 2FF70002AC0185A6 21220002AC0185A6 free FC - - -
1:2:3 initiator loss_sync 2FF70002AC0185A6 21230002AC0185A6 free FC - - -
1:2:4 initiator loss_sync 2FF70002AC0185A6 21240002AC0185A6 free FC - - -
Typically, the State is listed as loss sync, the Mode as initiator, and the Connected
Device Type as free.
2. Issue the showport -sfp command to verify which SFP requires replacement:
cli% showport -sfp
SFP Verification
1. Replace the SFP (see “Replacing an SFP” (page 42)).
2. Issue the showport command to verify that the ports are in good condition and the State is
listed as ready:
cli% showport
N:S:P Mode State Node_WWN Port_WWN/HW_Addr Type Protocol Label Partner FailoverState
0:0:1 initiator ready 50002ACFF70185A6 50002AC0010185A6 disk SAS - - -
0:0:2 initiator ready 50002ACFF70185A6 50002AC0020185A6 disk SAS - - -
0:1:1 target ready 2FF70002AC0185A6 20110002AC0185A6 host FC - - -
0:1:2 target ready 2FF70002AC0185A6 20120002AC0185A6 host FC - - -
0:2:1 target loss_sync - 2C27D75301F6 iscsi iSCSI - - -
0:2:2 target loss_sync - 2C27D75301F2 iscsi iSCSI - - -
0:3:1 peer offline - 0002AC8004DB rcip IP RCIP0 - - -
1:0:1 initiator ready 50002ACFF70185A6 50002AC1010185A6 disk SAS - - -
1:0:2 initiator ready 50002ACFF70185A6 50002AC1020185A6 disk SAS - - -
1:1:1 target ready 2FF70002AC0185A6 21110002AC0185A6 host FC - - -
1:1:2 target ready 2FF70002AC0185A6 21120002AC0185A6 host FC - - -
1:2:1 initiator loss_sync 2FF70002AC0185A6 21210002AC0185A6 free FC - - -
1:2:2 initiator loss_sync 2FF70002AC0185A6 21220002AC0185A6 free FC - - -
1:2:3 initiator loss_sync 2FF70002AC0185A6 21230002AC0185A6 free FC - - -
1:2:4 initiator loss_sync 2FF70002AC0185A6 21240002AC0185A6 free FC - - -
The State should now be listed as ready, the Mode as target and the Connected Device
Type as host.
NOTE: When an SSD is identified as degraded, you must manually initiate the replacement
process. Execute servicemag start -pdid pd_id to move the chunklets. When the SSD is
replaced, the system automatically initiates servicemag resume.
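As a brief sketch, assuming the degraded SSD was reported as PD ID 7 (the same ID used in the examples below), the chunklet move would be started with:
cli% servicemag start -pdid 7
Progress is then checked with servicemag status, as shown next.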
There are four possible responses. Response 1 is expected when the drive is ready to be replaced:
1. servicemag has successfully completed:
cli% servicemag status
Cage 0, magazine 1:
The magazine was successfully brought offline by a servicemag start command.
The command completed Thu Oct 4 15:29:05 2012.
servicemag start -pdid 7 - Succeeded
When Succeeded displays as the last line in the output, it is safe to replace the disk.
2. servicemag has not started.
Data is being reconstructed on spares; servicemag does not start until this process is
complete. Retry the command at a later time.
cli% servicemag status
No servicemag operations logged.
3. servicemag has failed. Call your authorized service provider for assistance.
cli% servicemag status
Cage 0, magazine 1:
A servicemag start command failed on this magazine.
.....
4. servicemag is in progress. The output will inform the user of progress.
cli% servicemag status
Cage 0, magazine 1:
The magazine is being brought offline due to a servicemag start.
The last status update was at Thu Oct 4 15:27:54 2012.
Chunklet relocations have completed 35 in 0 seconds
servicemag start -pdid 1 -- is in Progress
NOTE: This process may take up to 10 minutes; repeat the command to refresh the status.
Disk Drive (Magazine) Location
1. Execute the showpd -failed command:
cli% showpd -failed
----Size(MB)----- ----Ports----
Id CagePos Type RPM State Total Free A B Cap(GB)
7 1:5:0 FC 10 failed 278528 0 1:0:1 0:0:1 450
2. Execute the locatecage -t XX cageY command.
Where:
• XX is the appropriate number of seconds to allow service personnel to view the LED status
of the drive enclosure
• Y is the cage number shown as the first number of CagePos in the output of the
showpd -failed command; in this case, 1 (1:5:0).
For example, locatecage -t 300 cage1 flashes LEDs on cage1 for 300 seconds
(5 minutes).
This flashes all drives in this cage except the failed drive.
NOTE: If the command is executed again, the estimated time for relocation completion
may vary.
NOTE: There must be additional 2U rack space in the rack immediately above an existing node
enclosure to perform an online node pair upgrade. If rack space is not available, your system must
be shut down and enclosures and components must be removed and then reinstalled to make room
for the additional enclosures for an offline upgrade. See Offline Upgrade.
The following describes the requirements for upgrading hardware components to an existing
storage system.
NOTE: The cage nut is positioned 2 holes above the top of the rail.
3. Press down hard with your hand on the top of each rail to ensure they are mounted firmly.
4. Repeat on the other side of the rack.
CAUTION: When performing any upgrade while concurrently using the system, use extra care,
because an incorrect action during the upgrade process may cause the system to fail. Upgrading
nodes requires performing node rescue. See “Node Rescue” (page 145).
IMPORTANT: You cannot upgrade a 7200 storage system to a 7400. Only a two-node 7400
storage system can be upgraded to a four-node system; see “Upgrading a 7400 Storage System”
(page 79).
Information on node upgrades:
• There must be 2U of space in the rack directly above the existing controller node enclosure
(nodes 0 and 1) for the expansion controller node enclosure to be installed (nodes 2 and 3).
If there is no rack space available, your system must be shut down and enclosures and
components must be relocated to make room for the additional enclosures for an offline
upgrade.
• 7200 nodes do not work in a 7400 storage system.
• A four-node system (7400) requires interconnect cabling between the node enclosures.
• Nodes must be cabled correctly for the cluster to form; incorrect cabling displays as alerts or
events in the OS.
• Only nodes configured as FRUs can be used to replace existing nodes or for upgrades in a
7400. Nodes cannot be moved from one system and installed in another.
• Nodes in a node pair must have identical PCIe adapter configurations.
CAUTION: All CLI commands must be performed from the SPMAINT using the spvar ID to ensure
correct permissions to execute all the necessary commands.
Before beginning a controller node upgrade:
• At the front of the storage system, before installing the enclosures, remove the filler plates that
cover the empty rack space reserved for the additional enclosures.
• Verify with the system administrator whether a complete backup of all data on the storage
system has been performed. Controller nodes must be installed into an active system.
• Verify Initial LED status:
◦ Node LEDs on nodes 0 and 1 should indicate a good status.
◦ Because no node interconnect cables have been installed, all port LEDs should be off.
• Validate Initial System Status:
1. Issue the showsys command to verify that your system is listed as an HP_3PAR 7400
model and the number of nodes is listed as 2.
cli% showsys
----------------(MB)----------------
ID --Name--- ---Model---- -Serial- Nodes Master TotalCap AllocCap FreeCap FailedCap
99806 3par_7400 HP_3PAR 7400 1699806 2 0 16103424 4178944 11924480 0
2. Issue the showhost command to verify that all hosts are attached to at least two nodes.
cli% showhost
3. Issue the checkhealth command to verify that the system is healthy.
cli% checkhealth
Checking alert

Checking cabling
Checking cage
Checking date
Checking host
Checking ld
Checking license
Checking network
Checking node
Checking pd
Checking port
Checking rc
Checking snmp
Checking task
Checking vlun
Checking vv
System is healthy
Hardware Installation
NOTE: See Cabling Guide instructions for your particular node and drive enclosure configuration
for best practice positioning of enclosures in the rack. These best practices also facilitate cabling.
The cabling guides are located at http://h20000.www2.hp.com/bizsupport/TechSupport/
DocumentIndex.jsp?contentType=SupportManual&lang=en&cc=us&docIndexId=64179&
taskId=101&prodTypeId=12169&prodSeriesId=5335712#1.
Install rail kits for additional node and drive enclosures before loading any enclosures in the rack.
1. Install a rail kit for drive and node enclosures.
NOTE: Controller nodes should ship with PCIe Adapters already installed. If that is not the
case, remove the controller nodes, install PCIe Adapters and SFPs, and re-install the controller
nodes.
2. Install the controller node enclosure. It may ship with the nodes and PCMs already installed.
3. Install all drive enclosures following the Cabling Guide's configuration best practices where
possible. Adding new drive enclosures directly above a new node enclosure may also be
applicable. The cabling guides are located at http://h20000.www2.hp.com/bizsupport/
TechSupport/DocumentIndex.jsp?contentType=SupportManual&lang=en&cc=us&
docIndexId=64179&taskId=101&prodTypeId=12169&prodSeriesId=5335712#1.
4. Install disk drives in the node and drive enclosures (see the “Allocation and Loading Order”
sections in the HP 3PAR StoreServ 7000 Storage Installation Guide).
5. Install the power cables to the controller node and drive enclosure PCMs.
6. After you have completed the physical installation of the drive enclosures and disk drives,
cable the drive enclosures to the controller nodes and each other (see the appropriate HP
3PAR Cabling Guide). The cabling guides are located at http://h20000.www2.hp.com/
bizsupport/TechSupport/DocumentIndex.jsp?contentType=SupportManual&lang=en&cc=us&
docIndexId=64179&taskId=101&prodTypeId=12169&prodSeriesId=5335712#1.
7. Install node interconnect cables between nodes 0, 1 and 2 (see Table 19 (page 81) and
Figure 59 (page 81)).
8. Connect Ethernet cables to the MGMT port for each new node.
CAUTION: Ethernet cables are required as the OS for new nodes is transferred across the
network. If additional Ethernet cables for node 2 and node 3 are unavailable, use one of the
existing cables in node 0 and node 1. Use the shownet command to locate the active node
before moving the non-active node Ethernet connection to node 2.
9. Without removing any cables, pull the gray node rod to unseat node 3 from the enclosure.
Powering On
1. Turn power switches to ON for all drive enclosure PCMs.
2. Verify that each disk drive powers up and the disk drive status LED is green.
3. Turn power switches to ON for the new controller node enclosure PCMs.
4. Node rescue for node 2 auto-starts and the HP 3PAR OS is copied across the local area
network (LAN).
When the HP 3PAR OS is installed, node 2 should reboot and join the cluster.
NOTE: If the node status LED is solid green, the node has booted but is unable to join
the cluster.
• Intr 0 and Intr 1 interconnect port status LEDs on all four nodes should be green, indicating
that links have been established.
• If any node interconnect port fault LEDs are amber or flashing amber, one or both of the
following errors has occurred:
◦ Amber: failed to establish link connection.
WARNING! Never remove a node interconnect cable when all port LEDs at both ends of the
cable are green.
CAUTION: Node interconnect cables are directional. Ends marked A should connect only to
node 0 or node 1. Ends marked C should connect only to node 2 or node 3 (see Figure 60 (page
82)).
NOTE: If all cables are correct, escalate the problem to the next level of HP support.
NOTE: If you are currently adding node 2, only the node 2 cables should be connected.
Install the node interconnect cables as shown in Table 20 (page 82) and Figure 61 (page 83).
Table 20 Node Interconnect Cabling for Nodes 0, 1, 2, and 3
A (cable end connecting to node 0 or node 1)    C (cable end connecting to node 2 or node 3)
NOTE: This is an example of a node rescue task for node 2. If there are no active node
rescue tasks, go to Step 4 (shownode).
IMPORTANT: If any step does not have expected results, escalate to the next level of HP support.
2. Issue the showtask -d <taskID> command against the active node rescue task to view
detailed node rescue status.
The File sync has begun step in the following procedure, where the node rescue file is
being copied to the new node, takes several minutes.
cli% showtask -d 1296
Detailed status:
2012-10-19 13:27:29 PDT Created task.
2012-10-19 13:27:29 PDT Updated Running node rescue for node 2 as 0:15823
2012-10-19 13:27:36 PDT Updated Using IP 169.254.190.232
2012-10-19 13:27:36 PDT Updated Informing system manager to not autoreset node 2.
2012-10-19 13:27:36 PDT Updated Attempting to contact node 2 via NEMOE.
2012-10-19 13:27:37 PDT Updated Setting boot parameters.
2012-10-19 13:27:59 PDT Updated Waiting for node 2 to boot the node rescue kernel.
2012-10-19 13:28:02 PDT Updated Kernel on node 2 has started. Waiting for node to retrieve install
details.
2012-10-19 13:28:21 PDT Updated Node 2 has retrieved the install details. Waiting for file sync to
begin.
2012-10-19 13:28:54 PDT Updated File sync has begun. Estimated time to complete this step is 5 minutes
3. Repeat the command showtask -d <taskID> against the active node rescue task to view
detailed node rescue status.
Node 2 has completed the node rescue task and is in the process of joining the cluster.
cli% showtask -d 1296
Detailed status:
2012-10-19 13:27:29 PDT Created task.
2012-10-19 13:27:29 PDT Updated Running node rescue for node 2 as 0:15823
2012-10-19 13:27:36 PDT Updated Using IP 169.254.190.232
2012-10-19 13:27:36 PDT Updated Informing system manager to not autoreset node 2.
2012-10-19 13:27:36 PDT Updated Attempting to contact node 2 via NEMOE.
2012-10-19 13:27:37 PDT Updated Setting boot parameters.
2012-10-19 13:27:59 PDT Updated Waiting for node 2 to boot the node rescue kernel.
2012-10-19 13:28:02 PDT Updated Kernel on node 2 has started. Waiting for node to retrieve install
details.
2012-10-19 13:28:21 PDT Updated Node 2 has retrieved the install details. Waiting for file sync to
begin.
2012-10-19 13:28:54 PDT Updated File sync has begun. Estimated time to complete this step is 5 minutes
on a lightly loaded sys.
2012-10-19 13:32:34 PDT Updated Remote node has completed file sync, and will reboot.
2012-10-19 13:32:34 PDT Updated Waiting for node to rejoin cluster.
NOTE: Repeat if necessary. The node may reboot and take an additional three minutes
between the node rescue task completing and the node joining the cluster.
cli% shownode
Control Data Cache
Node --Name--- -State- Master InCluster -Service_LED ---LED--- Mem(MB) Mem(MB) Available(%)
0 1699806-0 OK Yes Yes Off GreenBlnk 8192 8192 100
1 1699806-1 OK No Yes Off GreenBlnk 8192 8192 100
2 1699806-2 OK No Yes Off GreenBlnk 8192 8192 100
NOTE: If the node status LED is solid green, the node has booted but is unable to join
the cluster.
• Intr 0 and Intr 1 interconnect port status LEDs on all four nodes should be green, indicating
that links have been established.
• If any node interconnect port fault LEDs are amber or flashing amber, one or both of the
following errors has occurred:
◦ Amber: failed to establish link connection.
8. Issue the showtask -d <taskID> command against the active node rescue task to view
detailed node rescue status.
The File sync has begun step in the following procedure, where the node rescue file is
being copied to the new node, takes several minutes.
cli% showtask -d 1299
Detailed status:
2012-10-19 13:39:25 PDT Created task.
2012-10-19 13:39:25 PDT Updated Running node rescue for node 3 as 0:15823
2012-10-19 13:40:36 PDT Updated Using IP 169.254.190.232
2012-10-19 13:40:36 PDT Updated Informing system manager to not autoreset node 3.
2012-10-19 13:40:36 PDT Updated Attempting to contact node 3 via NEMOE.
2012-10-19 13:40:37 PDT Updated Setting boot parameters.
2012-10-19 13:40:59 PDT Updated Waiting for node 3 to boot the node rescue kernel.
2012-10-19 13:41:02 PDT Updated Kernel on node 3 has started. Waiting for node to retrieve install
details.
2012-10-19 13:41:21 PDT Updated Node 3 has retrieved the install details. Waiting for file sync to
begin.
2012-10-19 13:41:54 PDT Updated File sync has begun. Estimated time to complete this step is 5 minutes
9. Reissue the showtask -d <taskID> command against the active node rescue task to view
detailed node rescue status. Node 3 has completed the node rescue task and is in the process
of joining the cluster:
cli% showtask -d 1299
Detailed status:
2012-10-19 13:39:25 PDT Created task.
2012-10-19 13:39:25 PDT Updated Running node rescue for node 3 as 0:15823
2012-10-19 13:40:36 PDT Updated Using IP 169.254.190.232
2012-10-19 13:40:36 PDT Updated Informing system manager to not autoreset node 3.
2012-10-19 13:40:36 PDT Updated Attempting to contact node 3 via NEMOE.
2012-10-19 13:40:37 PDT Updated Setting boot parameters.
2012-10-19 13:40:59 PDT Updated Waiting for node 3 to boot the node rescue kernel.
2012-10-19 13:41:02 PDT Updated Kernel on node 3 has started. Waiting for node to retrieve install
details.
2012-10-19 13:41:21 PDT Updated Node 3 has retrieved the install details. Waiting for file sync to
begin.
2012-10-19 13:41:54 PDT Updated File sync has begun. Estimated time to complete this step is 5 minutes
on a lightly loaded sys.
2012-10-19 13:44:34 PDT Updated Remote node has completed file sync, and will reboot.
2012-10-19 13:44:34 PDT Updated Waiting for node to rejoin cluster.
10. Issue the showtask command to view the node rescue tasks.
When complete, the node_rescue task should have a status of done.
cli% showtask
NOTE: Repeat if necessary. The node may reboot and take an additional three minutes
between the node rescue task completing and the node joining the cluster.
cli% shownode
Control Data Cache
Node --Name--- -State- Master InCluster -Service_LED ---LED--- Mem(MB) Mem(MB) Available(%)
0 1699806-0 OK Yes Yes Off GreenBlnk 8192 8192 100
1 1699806-1 OK No Yes Off GreenBlnk 8192 8192 100
2 1699806-2 OK No Yes Off GreenBlnk 8192 8192 100
3 1699806-3 OK No Yes Off GreenBlnk 8192 8192 100
Initiate admithw
When node and drive enclosures display in CLI, they are identified as follows:
• DCN1 for a node enclosure
• DCS2 for a 2U24 (M6710) drive enclosure
• DCS1 for 4U24 (M6720) drive enclosure
Issue the admithw command to start the process to admit new hardware.
cli% admithw
Checking nodes...
Checking volumes...
Checking ports...
Checking cabling...
IMPORTANT: If you are prompted for permission to upgrade drive enclosure (cage) or physical
disk (disk drive) firmware, always agree to the upgrade.
There may be a delay in the script while Logging LDs are created for nodes 2 and 3:
Creating logging LD for node 2.
Creating logging LD for node 3.
Initialization of upgraded storage is required for these to be created.
2. Issue the shownode -pci command and verify that all installed PCIe Adapters are displayed.
cli% shownode -pci
3. Issue the showcage command and verify that the node and drive enclosures are displayed.
cli% showcage
Id Name LoopA Pos.A LoopB Pos.B Drives Temp RevA RevB Model Side
0 cage0 1:0:1 0 0:0:1 0 6 31-31 320b 320b DCN1 n/a
1 cage1 1:0:1 0 0:0:1 1 6 30-32 320b 320b DCS1 n/a
NOTE: New disk drives must be initialized before they are ready for use. Initialization occurs
in the background and can take several hours, depending on disk drive capacities.
5. Issue the showhost command to verify that all hosts are still attached to the original two
nodes.
cli% showhost
NOTE: Hosts should be connected to new nodes after the upgrade is completed.
Checking cabling
Checking node
Checking cage
Checking pd
The following components are healthy: cabling, node, cage, pd
◦ checkhealth -svc cabling to verify existing cabling is correct and output displays
as: The following components are healthy: cabling.
NOTE: Before you begin, remove the additional enclosures from the packaging.
1. Install rail kits for the enclosures, if applicable. See “Installing Rails for Component Enclosures”
(page 77).
2. Install the controller node enclosure (that was shipped with the nodes already installed). See
“Installing the Enclosures” (page 91).
3. Install the 764W PCMs into the node enclosure. See “Installing a Power Cooling Module ”
(page 51).
4. Cable node enclosures to each other and verify that the power switch is OFF. Do not power
ON until the Node Rescue steps have been executed.
a. Insert the cable connector A end into node 0, intr 0 port. Connect the C end to node 2,
intr 1 port.
b. Insert the cable connector A end into node 0, intr 1 port. Connect the C end to node 3,
intr 0 port.
c. Insert the cable connector A end into node 1, intr 1 port. Connect the C end to node 2,
intr 0 port.
5. Install the additional drive enclosures and disk drives according to best practice rules, balancing
the drives between the node pairs. See “Installing a Disk Drive” (page 29).
6. After you have completed the physical installation of the enclosures and disk drives, cable the
drive enclosures to the new controller nodes.
For more information, see “Cabling Controller Nodes” in the HP 3PAR StoreServ 7000 Storage
Installation Guide.
7. Install the power cables to the PCMs and press the power switch to ON. Turn power on to
the drive enclosures first, and then to the node enclosures.
8. Node rescue auto-starts and adds the nodes to the cluster by copying the OS to the new nodes.
9. Verify the upgrade is successful.
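The verification commands are not shown at this step; a minimal sketch, reusing commands covered earlier in this guide (output omitted):
cli% checkhealth
cli% shownode
checkhealth should report that the system is healthy, and shownode should list all nodes with a State of OK and InCluster set to Yes.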
NOTE: When installing a two-node 7400 enclosure, 2U of space must be reserved above the
enclosure for an upgrade to a four-node system. There are two 1U filler panels available to reserve
this space.
WARNING! The enclosure is heavy. Lifting, moving, or installing the enclosure requires two
people.
To install an enclosure on the rack:
2. At the front of the enclosure, remove the yellow bezels on each side of the enclosure to provide
access to the mounting holes.
3. Using both hands, slide the enclosure onto the lips of rail channels. Use the bottom lip as a
guide and the top to catch the enclosure. Check all sides of the rack at the front and the back
to ensure the enclosure is fitted to the channel lips before using any screws.
4. If required, add hold-down screws at the rear of the enclosure for earthquake protection.
Part number 5697-1835 is included with each enclosure: 2 x SCR, M5 -0.8, 6mm H, Pan
HEAD- T25/SLOT.
CAUTION: Do not power on without completing the remainder of the physical installation or
upgrade.
NOTE: For proper thermal control, blank filler panels must be installed in any slots without drives.
NOTE: Before beginning this procedure, review how to load the drives based on drive type,
speed, and capacity. For more information, see the HP 3PAR StoreServ 7000 Storage Installation
Guide.
Information on drive enclosure upgrades:
• The number of drive enclosures attached to a specific node-pair should be determined by the
desired RAID set size, and HA Cage protection requirements; drive enclosures should be
added and configured to achieve HA cage for a specific node-pair, taking into account the
customer RAID set requirement.
• The distribution of drive enclosures between DP-1 and DP-2 of the node should be done to
achieve maximum balance across the ports.
• When adding both 2U and 4U drive enclosures, they should be mixed on SAS chains (DP1
and DP2), added in pairs across node pairs on a four-node system, and balanced across SAS
ports on each controller pair.
Drive enclosure expansion limits:
NOTE: Disk drives in the node enclosure are connected internally through DP1.
• The 7200 node enclosure can support up to five drive enclosures, two connected through
DP-1 and three connected through DP-2 on the nodes.
• The 7400 node enclosure can support up to nine drive enclosures, four connected through
DP-1 and five connected through DP-2 on the nodes. A four-node 7400 configuration doubles
the amount of drive enclosures supported to 18.
Information on disk drive upgrades:
You can install additional disk drives to upgrade partially populated drive enclosures:
• The first expansion drive enclosure added to a system must be populated with the same number
of disk drives as the node enclosure.
• Disks must be identical pairs.
• The same number of disk drives should be added to all of the drive enclosures of that type in the
system.
• The minimum upgrade to a two-node system without expansion drive enclosures is two identical
disk drives.
• The minimum upgrade to a four-node system without expansion drive enclosures is four
identical disk drives.
NOTE: For the drive enclosures, verify that the activity LEDs are functional (all four LEDs are lit
solid green) and that the LED at the front of the enclosure displays a number. This number
may change later in the installation process.
2. If they have not been installed at the factory, install the 580 W PCMs into the drive enclosure
(see “Installing a Power Cooling Module ” (page 51)).
3. After you have completed the physical installation of the enclosures and disk drives, cable the
drive enclosure to the controller nodes.
4. Connect the power cables to the PCMs and press the power switch to ON.
5. Verify the upgrade is successful.
cli% checkhealth
Checking alert
Checking cage
Checking dar
Checking date
Chain Node 0 Loop DP-2 (B Drive Enclosures and the solid red lines)
1. Install a cable from the first B drive enclosure I/O module 0 out port (DP-2) to the
in port (DP-1) of I/O module 0 on the second B drive enclosure.
2. Install a cable from the second B drive enclosure I/O module 0 out port (DP-2) to the
in port (DP-1) of I/O module 0 on the third B drive enclosure.
Chain Node 0 Loop DP-1 (A Drive Enclosures and the dashed red lines)
Install a cable from the second A drive enclosure I/O module 0 out port (DP-2) to the
in port (DP-1) of I/O module 0 on the third A drive enclosure.
Check Pathing
Execute the showpd command.
• The additional three drive enclosures have been allocated cage numbers 3 through 5; for
example, 3:0:0.
• LED indicators on the drive enclosure left-hand bezels should indicate 03, 04 and 05.
• 18 disk drives have been recognized and are initially connected via Port B to Node 0; for
example, 0:0:2.
• The new disk drives indicate degraded because they currently only have one path.
cli> showpd
----Size(MB)----- ----Ports----
Id CagePos Type RPM State Total Free A B Cap(GB)
--- 3:0:0 FC 10 degraded 417792 0 ----- 0:0:2* 0
--- 3:1:0 FC 10 degraded 417792 0 ----- 0:0:2* 0
1. Remove the cable from the A drive enclosure farthest from the node enclosure (in this example
the third enclosure in the original configuration) I/O module 1 in port (DP-1) and install
into the in port (DP-1) of I/O module 1 of the added A drive enclosure farthest from the
node enclosure (dashed green line).
2. Remove the cable from the B drive enclosure farthest from the node enclosure (in this example
the second enclosure in the original configuration) I/O module 1 in port (DP-1) and
install into the in port (DP-1) of I/O module 1 on the added B drive enclosure farthest
from the node enclosure (solid green line).
Check Pathing
Execute the showpd command.
• A path has been removed from the original drive enclosures (cages) 1 and 2, PD IDs 6 through
17. Disk drives in these cages are in a degraded state until the path is restored.
• New cages 4 and 5 now have 2 paths, but cage 3 still has only one path. The state of all
installed disk drives with 2 paths is new until they are admitted into the System.
cli> showpd
----Size(MB)----- ----Ports----
Id CagePos Type RPM State Total Free A "B" Cap(GB)
--- 3:0:0 FC 10 degraded 417792 0 ----- 0:0:2 0
--- 3:1:0 FC 10 degraded 417792 0 ----- 0:0:2* 0
--- 3:2:0 FC 10 degraded 417792 0 ----- 0:0:2 0
--- 3:3:0 FC 10 degraded 417792 0 ----- 0:0:2* 0
Chain Node 1 Loop DP-2 (B Drive Enclosures and the solid green lines)
1. Install a cable from the last B drive enclosure I/O module 1 out port (DP-2) to the
in port (DP-1) of I/O module 1 on the second from last B drive enclosure.
2. Install a cable from the second from last B drive enclosure I/O module 1 out port (DP-2)
to the in port (DP-1) of I/O module 1 on the third from last B drive enclosure.
Chain Node 1 Loop DP-1 (A Drive Enclosures and the dashed green lines)
Install a cable from the last A drive enclosure I/O module 1 out port (DP-2) to the in
port (DP-1) of I/O module 1 on the second from last A drive enclosure (see Figure 72 (page 104)).
Check Pathing
Execute the showpd command.
All drives should have two paths. All the original drives should have returned to a normal state.
New drives are now ready to be admitted into the System.
cli> showpd
----Size(MB)----- ----Ports----
Id CagePos Type RPM State Total Free A "B" Cap(GB)
--- 3:0:0 FC 10 new 417792 0 1:0:2* 0:0:2 0
--- 3:1:0 FC 10 new 417792 0 1:0:2 0:0:2* 0
--- 3:2:0 FC 10 new 417792 0 1:0:2* 0:0:2 0
--- 3:3:0 FC 10 new 417792 0 1:0:2 0:0:2* 0
--- 3:4:0 FC 10 new 417792 0 1:0:2* 0:0:2 0
--- 3:5:0 FC 10 new 417792 0 1:0:2 0:0:2* 0
--- 4:0:0 FC 10 new 417792 0 1:0:1* 0:0:1 0
Execute admithw
Issue the admithw command to start the process to admit new hardware.
cli> admithw
Checking nodes...
Checking volumes...
Checking ports...
Checking cabling...
IMPORTANT: If you are prompted for permission to upgrade drive enclosure (cage) or physical
disk (disk drive) firmware, always agree to the upgrade.
Verify Pathing
Execute the showpd command; all drives should have two paths and a state of normal.
cli> showpd
----Size(MB)----- ----Ports----
Id CagePos Type RPM State Total Free A B Cap(GB)
0 0:0:0 FC 10 normal 417792 313344 1:0:1* 0:0:1 450
1 0:1:0 FC 10 normal 417792 313344 1:0:1 0:0:1* 450
2 0:2:0 FC 10 normal 417792 313344 1:0:1* 0:0:1 450
3 0:3:0 FC 10 normal 417792 313344 1:0:1 0:0:1* 450
4 0:4:0 FC 10 normal 417792 313344 1:0:1* 0:0:1 450
5 0:5:0 FC 10 normal 417792 313344 1:0:1 0:0:1* 450
6 1:0:0 FC 10 normal 417792 313344 1:0:1* 0:0:1 450
7 1:1:0 FC 10 normal 417792 313344 1:0:1 0:0:1* 450
8 1:2:0 FC 10 normal 417792 313344 1:0:1* 0:0:1 450
9 1:3:0 FC 10 normal 417792 313344 1:0:1 0:0:1* 450
10 1:4:0 FC 10 normal 417792 313344 1:0:1* 0:0:1 450
11 1:5:0 FC 10 normal 417792 313344 1:0:1 0:0:1* 450
12 2:0:0 FC 10 normal 417792 313344 1:0:2* 0:0:2 450
13 2:1:0 FC 10 normal 417792 313344 1:0:2 0:0:2* 450
14 2:2:0 FC 10 normal 417792 313344 1:0:2* 0:0:2 450
15 2:3:0 FC 10 normal 417792 313344 1:0:2 0:0:2* 450
16 2:4:0 FC 10 normal 417792 313344 1:0:2* 0:0:2 450
17 2:5:0 FC 10 normal 417792 313344 1:0:2 0:0:2* 450
18 3:0:0 FC 10 normal 417792 313344 1:0:2* 0:0:2 450
19 3:1:0 FC 10 normal 417792 313344 1:0:2 0:0:2* 450
20 3:2:0 FC 10 normal 417792 313344 1:0:2* 0:0:2 450
21 3:3:0 FC 10 normal 417792 313344 1:0:2 0:0:2* 450
22 3:4:0 FC 10 normal 417792 313344 1:0:2* 0:0:2 450
23 3:5:0 FC 10 normal 417792 313344 1:0:2 0:0:2* 450
24 4:0:0 FC 10 normal 417792 313344 1:0:1* 0:0:1 450
25 4:1:0 FC 10 normal 417792 313344 1:0:1 0:0:1* 450
26 4:2:0 FC 10 normal 417792 313344 1:0:1* 0:0:1 450
27 4:3:0 FC 10 normal 417792 313344 1:0:1 0:0:1* 450
28 4:4:0 FC 10 normal 417792 313344 1:0:1* 0:0:1 450
29 4:5:0 FC 10 normal 417792 313344 1:0:1 0:0:1* 450
30 5:0:0 FC 10 normal 417792 313344 1:0:2* 0:0:1 450
31 5:1:0 FC 10 normal 417792 313344 1:0:2 0:0:1* 450
32 5:2:0 FC 10 normal 417792 313344 1:0:2* 0:0:1 450
Verify Cabling
Execute the checkhealth -svc cabling command to verify installed cabling.
cli% checkhealth -svc cabling
Checking cabling
The following components are healthy: cabling
SFF Drives
For HP M6710 Drive Enclosures, drives must be added in identical pairs, starting from slot 0 on
the left and filling to the right, leaving no empty slots between drives. The best practice for installing
or upgrading a system is to add the same number of identical drives to every drive enclosure in
the system, with a minimum of three disk drive pairs in each drive enclosure. This ensures a balanced
workload for the system.
LFF Drives
For HP M6720 Drive Enclosures, drives must be added by pairs of the same drive type (NL, SAS
or SSD). Start adding drives in the left column, bottom to top, then continue filling columns from
left to right beginning at the bottom of the column. The best practice when installing or upgrading
a system is to add the same number of identical drives to every drive enclosure in the system, with
a minimum of two drives added to each enclosure. This ensures a balanced workload for the
system.
When upgrading a storage system with mixed SFF and LFF enclosures, you must follow these
guidelines to maintain a balanced workload.
• Each drive enclosure must contain a minimum of three pairs of drives.
• Upgrades can be just SFF, LFF, or a mixture of SFF and LFF drives.
• SFF-only upgrades must split the drives evenly across all SFF enclosures.
• LFF-only upgrades must split the drives evenly across all LFF enclosures.
• Mixed SFF and LFF upgrades must split the SFF drives across all SFF enclosures and LFF drives
across all LFF enclosures.
Check Status
Issue the showpd command. Each of the inserted disk drives has a state of new and is ready to be
admitted into the system.
cli> showpd
----Size(MB)----- ----Ports----
Id CagePos Type RPM State Total Free A B Cap(GB)
Check Progress
Issue the showpd -c command to check chunklet initialization status:
cli> showpd -c
-------- Normal Chunklets -------- ---- Spare Chunklets ----
- Used - -------- Unused --------- - Used - ---- Unused ----
Id CagePos Type State Total OK Fail Free Uninit Unavail Fail OK Fail Free Uninit Fail
0 0:0:0 FC normal 408 34 0 323 0 0 0 0 0 51 0 0
1 0:1:0 FC normal 408 35 0 323 0 0 0 0 0 51 0 0
2 0:2:0 FC normal 408 33 0 323 0 0 0 0 0 51 0 0
3 0:3:0 FC normal 408 34 0 323 0 0 0 0 0 51 0 0
4 0:4:0 FC normal 408 34 0 323 0 0 0 0 0 51 0 0
5 0:5:0 FC normal 408 34 0 323 0 0 0 0 0 51 0 0
6 1:0:0 NL normal 1805 0 0 1339 0 0 0 0 0 466 0 0
7 1:4:0 NL normal 1805 0 0 1339 0 0 0 0 0 466 0 0
8 1:8:0 NL normal 1805 0 0 1339 0 0 0 0 0 466 0 0
9 1:12:0 NL normal 1805 0 0 1339 0 0 0 0 0 466 0 0
10 1:16:0 NL normal 1805 0 0 1339 0 0 0 0 0 466 0 0
11 1:20:0 NL normal 1805 0 0 1339 0 0 0 0 0 466 0 0
12 2:0:0 FC normal 408 34 0 323 0 0 0 0 0 51 0 0
13 2:1:0 FC normal 408 34 0 323 0 0 0 0 0 51 0 0
14 2:2:0 FC normal 408 33 0 323 0 0 0 0 0 51 0 0
15 2:3:0 FC normal 408 34 0 323 0 0 0 0 0 51 0 0
16 2:4:0 FC normal 408 34 0 323 0 0 0 0 0 51 0 0
17 2:5:0 FC normal 408 34 0 323 0 0 0 0 0 51 0 0
18 0:6:0 FC normal 408 0 0 53 304 0 0 0 0 0 51 0
19 0:7:0 FC normal 408 0 0 53 304 0 0 0 0 0 51 0
20 1:1:0 NL normal 1805 0 0 559 780 0 0 0 0 0 466 0
21 1:5:0 NL normal 1805 0 0 559 780 0 0 0 0 0 466 0
22 2:6:0 FC normal 408 0 0 53 304 0 0 0 0 0 51 0
23 2:7:0 FC normal 408 0 0 53 304 0 0 0 0 0 51 0
-----------------------------------------------------------------------------------------
28 total 20968 383 0 13746 2776 0 0 0 0 3408 1136 0
Upgrade Completion
When chunklet initialization is complete, issue the showpd -c command to display the available
capacity:
cli> showpd -c
-------- Normal Chunklets -------- ---- Spare Chunklets ----
- Used - -------- Unused --------- - Used - ---- Unused ----
Id CagePos Type State Total OK Fail Free Uninit Unavail Fail OK Fail Free Uninit Fail
0 0:0:0 FC normal 408 34 0 323 0 0 0 0 0 51 0 0
1 0:1:0 FC normal 408 35 0 322 0 0 0 0 0 51 0 0
WARNING! Fibre Channel HBA and iSCSI CNA upgrade on the HP 3PAR StoreServ 7400
Storage system must be done by authorized service personnel and cannot be done by a customer.
Contact your local service provider for assistance. Upgrade in HP 3PAR StoreServ 7200 Storage
systems may be performed by the customer.
CAUTION: To avoid possible data loss, only one node at a time should be removed from the
storage system. To prevent overheating, node replacement requires a maximum service time of 30
minutes.
NOTE: If two FC HBAs and two CNAs are added in a system, the HBAs should be installed
in nodes 0 and 1, and the CNAs should be installed in nodes 2 and 3. The first two HBAs or
CNAs added in a system should be added to nodes 0 and 1 for the initially installed system and
for field HBA upgrades only.
1. Identify and shut down the node. For information about identifying and shutting down the
node, see “Node Identification and Shutdown” (page 32).
2. Remove the node and the node cover.
3. If a PCIe Adapter Assembly is already installed:
a. Remove the PCIe Adapter Assembly and disconnect the PCIe Adapter from the riser card.
b. Install the new PCIe Adapter onto the riser card and insert the assembly into the node.
For information about installing a PCIe adapter, see “PCIe Adapter Installation”.
4. If a PCIe Adapter is not installed:
a. Remove the PCIe Adapter riser card.
b. Install the new PCIe Adapter onto the riser card and insert the assembly into the node.
For information about installing a PCIe adapter, see “PCIe Adapter Installation”.
5. Replace the node cover and the node.
HP 3PAR documentation
For information about each of the following topics, see the document(s) listed with that topic:
• Supported hardware and software platforms: The Single Point of Connectivity Knowledge for HP Storage Products (SPOCK) website (http://www.hp.com/storage/spock)
• Using the HP 3PAR Management Console (GUI) to configure and administer HP 3PAR storage systems: HP 3PAR Management Console User's Guide
• Using the HP 3PAR CLI to configure and administer storage systems: HP 3PAR Command Line Interface Administrator's Manual
• Installing and maintaining the Host Explorer agent in order to manage host configuration and connectivity information: HP 3PAR Host Explorer User's Guide
• Creating applications compliant with the Common Information Model (CIM) to manage HP 3PAR storage systems: HP 3PAR CIM API Programming Reference
• Migrating data from one HP 3PAR storage system to another: HP 3PAR-to-3PAR Storage Peer Motion Guide
• Configuring the Secure Service Custodian server in order to monitor and control HP 3PAR storage systems: HP 3PAR Secure Service Custodian Configuration Utility Reference
• Using the CLI to configure and manage HP 3PAR Remote Copy: HP 3PAR Remote Copy Software User's Guide
• Identifying storage system components, troubleshooting information, and detailed alert information: HP 3PAR F-Class, T-Class, and StoreServ 10000 Storage Troubleshooting Guide
• Installing, configuring, and maintaining the HP 3PAR Policy Server: HP 3PAR Policy Server Installation and Setup Guide; HP 3PAR Policy Server Administration Guide
• HP 3PAR 7200, 7400, and 7450 storage systems: HP 3PAR StoreServ 7000 Storage Site Planning Manual; HP 3PAR StoreServ 7450 Storage Site Planning Manual
• HP 3PAR 10000 storage systems: HP 3PAR StoreServ 10000 Storage Physical Planning Manual; HP 3PAR StoreServ 10000 Storage Third-Party Rack Physical Planning Manual
Installing and maintaining HP 3PAR 7200, 7400, and 7450 storage systems
• Installing 7200, 7400, and 7450 storage systems and initializing the Service Processor: HP 3PAR StoreServ 7000 Storage Installation Guide; HP 3PAR StoreServ 7450 Storage Installation Guide; HP 3PAR StoreServ 7000 Storage SmartStart Software User's Guide
• Maintaining, servicing, and upgrading 7200, 7400, and 7450 storage systems: HP 3PAR StoreServ 7000 Storage Service Guide; HP 3PAR StoreServ 7450 Storage Service Guide
• Troubleshooting 7200, 7400, and 7450 storage systems: HP 3PAR StoreServ 7000 Storage Troubleshooting Guide; HP 3PAR StoreServ 7450 Storage Troubleshooting Guide
• Maintaining the Service Processor: HP 3PAR Service Processor Software User Guide; HP 3PAR Service Processor Onsite Customer Care (SPOCC) User's Guide
• Backing up Oracle databases and using backups for disaster recovery: HP 3PAR Recovery Manager Software for Oracle User's Guide
• Backing up Exchange databases and using backups for disaster recovery: HP 3PAR Recovery Manager Software for Microsoft Exchange 2007 and 2010 User's Guide
• Backing up SQL databases and using backups for disaster recovery: HP 3PAR Recovery Manager Software for Microsoft SQL Server User's Guide
• Backing up VMware databases and using backups for disaster recovery: HP 3PAR Management Plug-in and Recovery Manager Software for VMware vSphere User's Guide
• Installing and using the HP 3PAR VSS (Volume Shadow Copy Service) Provider software for Microsoft Windows: HP 3PAR VSS Provider Software for Microsoft Windows User's Guide
• Best practices for setting up the Storage Replication Adapter for VMware vCenter: HP 3PAR Storage Replication Adapter for VMware vCenter Site Recovery Manager Implementation Guide
• Troubleshooting the Storage Replication Adapter for VMware vCenter Site Recovery Manager: HP 3PAR Storage Replication Adapter for VMware vCenter Site Recovery Manager Troubleshooting Guide
• Installing and using vSphere Storage APIs for Array Integration (VAAI) plug-in software for VMware vSphere: HP 3PAR VAAI Plug-in Software for VMware vSphere User's Guide
• Remotely servicing HP 3PAR storage systems: HP 3PAR Secure Service Collector Remote Operations Guide
• Maintaining, servicing, and upgrading 7200 and 7400 storage systems: HP 3PAR StoreServ 7000 Storage Service Guide: Service Edition
• Troubleshooting 7200 and 7400 storage systems: HP 3PAR StoreServ 7000 Storage Troubleshooting Guide: Service Edition
• Using the Installation Checklist: HP 3PAR StoreServ 10000 Storage Installation Checklist (for HP 3PAR Cabinets): Service Edition
• Upgrading 10000 storage systems: HP 3PAR StoreServ 10000 Storage Upgrade Guide: Service Edition
• Installing and uninstalling 10000 storage systems: HP 3PAR StoreServ 10000 Storage Installation and Deinstallation Guide: Service Edition
• Using the Installation Checklist: HP 3PAR T-Class Storage System Installation Checklist (for HP 3PAR Cabinets): Service Edition
• Upgrading T-Class storage systems: HP 3PAR T-Class Storage System Upgrade Guide: Service Edition
• Maintaining T-Class storage systems: HP 3PAR T-Class Storage System Maintenance Manual: Service Edition
• Installing and uninstalling the T-Class storage system: HP 3PAR T-Class Installation and Deinstallation Guide: Service Edition
• Using the Installation Checklist: HP 3PAR F-Class Storage System Installation Checklist (for HP 3PAR Cabinets): Service Edition
• Upgrading F-Class storage systems: HP 3PAR F-Class Storage System Upgrades Guide: Service Edition
• Maintaining F-Class storage systems: HP 3PAR F-Class Storage System Maintenance Manual: Service Edition
• Installing and uninstalling the F-Class storage system: HP 3PAR F-Class Storage System Installation and Deinstallation Guide: Service Edition
Bold monospace text indicates:
• Commands you enter into a command line interface
• System output emphasized for scannability
WARNING! Indicates that failure to follow directions could result in bodily harm or death, or in
irreversible damage to data or to the operating system.
CAUTION: Indicates that failure to follow directions could result in damage to equipment or data.
Required
Indicates that a procedure must be followed as directed in order to achieve a functional and
supported implementation based on testing at HP.
A Installing Storage Software Manually
WARNING! Use this procedure only if the HP SmartStart CD or the Storage System and
Service Processor Setup wizards are not available.
This appendix describes how to manually set up and configure the storage system software and
SP. You must execute these scripted procedures from a laptop after powering on the storage system.
WARNING! Proceeding with the system setup script causes complete and irrecoverable loss
of data. Do not perform this procedure on a system that has already undergone the system
setup. If you quit this setup script at any point, you must repeat the entire process.
Is this correct? Enter < C > to continue or < Q > to quit ==> c
3. Verify the number of controller nodes in the system, then type c and press Enter. If the system
is not ready for the system setup script, an error message appears. After following any
instructions and correcting any problems, return to step 2 and attempt to run the setup script
again.
4. Set up the time zone for the operating site:
a. Select a location from the list, type the corresponding number <N>, and press Enter.
b. Select a country, type the corresponding number <N>, and press Enter.
c. Select a time zone region, type the corresponding number <N>, and press Enter.
d. Verify that the time zone settings are correct, type 1, and press Enter.
NOTE: The system automatically makes the time zone change permanent. Disregard
the instructions on the screen for appending the command to make the time zone change
permanent.
5. Press Enter to accept the default time and date, or type the date and time in the format
<MMDDhhmmYYYY>, where MM, DD, hh, mm, and YYYY are the current month, day, hour,
minute, and year, respectively, and then press Enter.
Enter dates in MMDDhhmmYYYY format. For example, 031822572008 would be March 18,
2008 10:57 PM.
Enter the correct date and time, or just press enter to accept the date shown
above.=> <enter>
(...)
Is this the desired date? (y/n) y
Patches: None
Component Name Version
CLI system 3.1.2.xxx
CLI Client 3.1.2.xxx
System Manager 3.1.2.xxx
Kernel 3.1.2.xxx
TPD Kernel Code 3.1.2.xxx
Enter < C > to continue or < Q > to quit ==> c
9. Verify the number of drives in the storage system. Type c and press Enter to continue.
10. If there are any missing or nonstandard connections, an error message is displayed. Verify that
all nonstandard connections are correct or complete any missing connections, then type r
and press Enter to recheck the connections. If it is necessary to quit the setup procedure to
resolve an issue, type q and press Enter. When all connections are correct, type c and press
Enter to continue.
11. The system prompts you to begin the system stress test script. Type y and press Enter. The
system stress test continues to run in the background as you complete the system setup.
At this point, it is recommended that the OOTB stress test be started. This will
run heavy I/O on the PDs for 1 hour following 1 hour of chunklet initialization.
The results of the stress test can be checked in approximately 2 hours and 15
minutes. Chunklet initialization will continue even after the stress test
completes. Select the "Run ootb-stress-analyzer" option from the console menu
to check the results. Do you want to start the test (y/n)? ==> y
CAUTION: HP recommends that at least four physical disks worth of chunklets be designated
as spares to support the servicemag command. The default sparing options create an
appropriate number of spare chunklets for the number of disks installed.
Enter "Ma" for maximal, "D" for default, "Mi" for minimal, or "C" for custom: D
14. Verify the correct license is displayed and press Enter. If the license information is not correct,
type c and press Enter to continue with the system setup. After completing the system setup,
contact your local service provider for technical support to obtain the proper license keys.
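Once the proper license keys have been applied, the installed license and enabled features can be reviewed from the CLI. As an illustrative example only:
cli% showlicense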
15. Complete the network configuration:
a. When prompted, type the number of IP addresses used by the system (usually 1) and
press Enter.
b. Type the IP address and press Enter.
c. Type the netmask and press Enter. When prompted, press Enter again to accept the
previously entered netmask.
d. Type the gateway IP address and press Enter.
Please specify speed (10, 100 or 1000) and duplex (half or full), or auto to
use autonegotiation: auto
*****************************************************************************
*****************************************************************************
* *
* CAUTION!! CONTINUING WILL CAUSE COMPLETE AND IRRECOVERABLE DATA LOSS *
* *
*****************************************************************************
*****************************************************************************
Node 0
Node 1
Please identify a location so that time zone rules can be set correctly.
Please select a continent or ocean.
1) Africa
2) Americas
3) Antarctica
4) Arctic Ocean
5) Asia
6) Atlantic Ocean
7) Australia
8) Europe
9) Indian Ocean
10) Pacific Ocean
11) none - I want to specify the time zone using the Posix TZ format.
#? 2
Please select a country.
1) Anguilla 28) Haiti
2) Antigua & Barbuda 29) Honduras
3) Argentina 30) Jamaica
4) Aruba 31) Martinique
5) Bahamas 32) Mexico
6) Barbados 33) Montserrat
7) Belize 34) Nicaragua
8) Bolivia 35) Panama
9) Bonaire Sint Eustatius & Saba 36) Paraguay
10) Brazil 37) Peru
11) Canada 38) Puerto Rico
12) Cayman Islands 39) Sint Maarten
13) Chile 40) St Barthelemy
14) Colombia 41) St Kitts & Nevis
15) Costa Rica 42) St Lucia
16) Cuba 43) St Martin (French part)
17) Curacao 44) St Pierre & Miquelon
18) Dominica 45) St Vincent
19) Dominican Republic 46) Suriname
20) Ecuador 47) Trinidad & Tobago
21) El Salvador 48) Turks & Caicos Is
22) French Guiana 49) United States
23) Greenland 50) Uruguay
24) Grenada 51) Venezuela
25) Guadeloupe 52) Virgin Islands (UK)
26) Guatemala 53) Virgin Islands (US)
27) Guyana
#? 49
Please select one of the following time zone regions.
1) Eastern Time
2) Eastern Time - Michigan - most locations
3) Eastern Time - Kentucky - Louisville area
4) Eastern Time - Kentucky - Wayne County
5) Eastern Time - Indiana - most locations
6) Eastern Time - Indiana - Daviess, Dubois, Knox & Martin Counties
7) Eastern Time - Indiana - Pulaski County
8) Eastern Time - Indiana - Crawford County
9) Eastern Time - Indiana - Pike County
10) Eastern Time - Indiana - Switzerland County
11) Central Time
12) Central Time - Indiana - Perry County
13) Central Time - Indiana - Starke County
14) Central Time - Michigan - Dickinson, Gogebic, Iron & Menominee Counties
15) Central Time - North Dakota - Oliver County
16) Central Time - North Dakota - Morton County (except Mandan area)
17) Central Time - North Dakota - Mercer County
18) Mountain Time
19) Mountain Time - south Idaho & east Oregon
20) Mountain Time - Navajo
21) Mountain Standard Time - Arizona
22) Pacific Time
23) Alaska Time
24) Alaska Time - Alaska panhandle
25) Alaska Time - southeast Alaska panhandle
26) Alaska Time - Alaska panhandle neck
27) Alaska Time - west Alaska
28) Aleutian Islands
29) Metlakatla Time - Annette Island
30) Hawaii
#? 22
United States
Pacific Time
You can make this change permanent for yourself by appending the line
TZ='America/Los_Angeles'; export TZ
to the file '.profile' in your home directory; then log out and log in again.
Here is that TZ value again, this time on standard output so that you can use the /usr/bin/tzselect command in
shell scripts:
America/Los_Angeles
Current date according to the system: Wed Dec 5 11:19:30 PST 2012
Enter dates in MMDDhhmmYYYY format. For example, 031822572002 would be March 18, 2002 10:57 PM.
Enter the correct date and time, or just press enter to accept the date shown above. ==>
Cluster is being initialized with the name < 3par_7200 > ...Please Wait...
Ensuring all ports are properly connected before continuing... Please Wait...
At this point, it is recommended that the OOTB stress test be started. This will run heavy I/O on the PDs for
1 hour following 1 hour of chunklet initialization. The stress test will stop in approximately 2 hours and
15 minutes. Chunklet initialization may continue even after the stress test completes. Failures will show up
as slow disk events.
Failed --
... will retry in roughly 30 seconds.
... re-issuing the request
Creating .srdata volume.
Failed --
... will retry in roughly 30 seconds.
... re-issuing the request
Failed --
... will retry in roughly 100 seconds.
... re-issuing the request
Failed --
... will retry in roughly 37 seconds.
... re-issuing the request
Failed --
1 chunklet out of 120 is not clean yet
... will retry in roughly 5 seconds
... re-issuing the request
Failed --
1 chunklet out of 120 is not clean yet
... will retry in roughly 5 seconds
... re-issuing the request
InServ Network Configuration
Please specify speed (10, 100 or 1000) and duplex (half or full), or auto to use autonegotiation: auto
Disabling non-encrypted ports will disable SP event handling, Recovery Manager for VMware, SRA, and CLI connections
with default parameters. It should only be done if there is a strict requirement for all connections to be
encrypted.
Disable non-encrypted ports? n
No NTP server.
No DNS server.
Enter "Ma" for maximal, "D" for default, "Mi" for minimal, or "C" for custom: d
If the enabled features are not correct, take note of this and correct the issue after the out of the box script
finishes.
Support for the CIM-based management API is disabled by default. It can be enabled at this point.
These alerts may indicate issues with the system; please see the Messages and Operator's Guide for details on
the meaning of individual alerts.
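As an aside (not part of the scripted output above), alerts raised during setup can also be reviewed later from the HP 3PAR CLI; for example, to list new alerts:
cli% showalert -n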
SPXXXXX
1 SP Main
3PAR Service Processor Menu
1 ==> SP Control/Status
2 ==> Network Configuration
3 ==> InServ Configuration Management
4 ==> InServ Product Maintenance
5 ==> Local Notification Configuration
6 ==> Site Authentication Key Manipulation
7 ==> Interactive CLI for a StoreServ
X Exit
3
5. Enter valid user credentials (CLI super-user name and password) to add the HP 3PAR InServ
and press Enter.
Please enter valid Customer Credentials (CLI super-user name and password) to add
the HP 3PAR InServ.
Username:<Valid Username>
Password:<Valid Password>
NOTE: If adding a storage system fails, exit from the process and check the SP software
version for compatibility. Update the SP with the proper InForm OS version before adding
additional systems.
6. After successfully adding the system, press Enter to return to the SP menu.
...
validating communication with <static.ip.address>...
site key ok
interrogating <static.ip.address> for version number...
Version 3.1.x.GA-x reported on <static.ip.address>
retrieving system data for <static.ip.address> ...
Defining Hosts
In order to define hosts and set port personas, you must access the CLI. For more information about
the commands used in this section, see the HP 3PAR OS Command Line Interface Reference; an illustrative command sketch follows this procedure.
To set the personas for ports connecting to host computers:
1. In the CLI, verify connection to a host before defining a host:
where <hostpersonaval> is the host persona ID number, <hostname> is the name of the
test host, and <WWN> is the WWN of an HBA in the host machine. This HBA must be physically
connected to the storage system.
3. After you have defined a system host for each physically connected WWN, verify host
configuration information for the storage system as follows:
where <connmode> is disk, host, or rcfc. The optional -ct option sets the
connection type: use loop for disk; loop or point for host; and
point for rcfc. The <node:slot:port> value specifies the controller node, PCI slot, and PCI
adapter port to be controlled.
5. When finished setting each connected target port, verify that all ports are set correctly.
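The command lines for these steps are given in the HP 3PAR OS Command Line Interface Reference and are not reproduced above. Purely as a hedged sketch of what the sequence might look like on an HP 3PAR OS 3.1.x system, with the persona ID (1), host name (testhost), WWN, and port position (0:1:1) used as placeholder examples only:
cli% showhost
cli% createhost -persona 1 testhost 1122334455667788
cli% controlport config host -ct point 0:1:1
cli% showport -par
Here, showhost verifies which WWNs are visible before and after the host is defined, controlport config sets the connection mode and type for a target port, and showport -par confirms that all ports are set correctly.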
Are you ready to configure the SP at this time? (yes or no) [yes]:
yes
13:27:32 Reply='yes'
A Secure Site is a site where the customer will NEVER allow an HP 3PAR
SP to access the public internet. Thus the SP public interface will be
used only to access and monitor the HP 3PAR InServ attached to this SP.
13:27:35 Reply=''
Type of install
1 ==> Continue with spmob ( new site install )
2 ==> Restore from a backup file ( SP rebuild/replacement )
3 ==> Setup SP with original SP ID ( SP rebuild/replacement no backup files)
Type of install
Please enter the Serial Number of the InServ that will be configured with this Service Processor:
-OR-
type quit to exit
1400383
12:29:03 Reply='1400383'
Confirmation
SP Network Parameters
Valid length is upto 32 characters and Valid characters are [a-z] [A-Z] [0-9] dash(-) underscore(_)
Please enter the host name or press ENTER to accept the default of [SP0001400383]:
13:33:18 Reply=''
SP Network Parameters
13:33:33 Reply=''
Please enter the IP address of a default gateway, or NONE: [192.192.10.1]
13:33:35 Reply=''
Please enter the network speed
(10HD,10FD,100HD,100FD,1000HD,1000FD,AUTO)[AUTO]
13:33:40 Reply=''
SP Network Parameters
Please enter the IPv4 address (or blank separated list of addresses) of the Domain Name Server(s)
or 'none' if there will not be any DNS support: [none]:
13:33:44 Reply=''
Will remote access to this Service Processor be allowed (yes or no)? [yes]:
13:34:22 Reply=''
13:34:29 Reply=''
To which HP 3PAR Secure Service Collector Server should this SSAgent connect?
1 ==> Production
OTHER ==> HP 3PAR Internal testing (not for customer sites!)
Will a proxy server be required to connect to the HP 3PAR Secure Service Collector Server? (yes or no) [no]:
13:34:45 Reply=''
13:34:48 Reply=''
SP Permissive Firewall
SP Network Parameters - Confirmation
13:35:22 Reply=''
Physical location
They will be presented a screen at a time (using the 'more' command), in the format
xxx) country_name yyy) country_name
When you find the country you want, remember the number to its left (xxx or yyy).
If you have found the country you want, type 'q' to terminate the display.
Otherwise, press the SPACE bar to present the next screen.
13:35:30 Reply=''
1) Andorra 2) United Arab Emirates
3) Afghanistan 4) Antigua and Barbuda
5) Anguilla 6) Albania
7) Armenia 8) Netherlands Antilles
9) Angola 10) Antarctica
11) Argentina 12) American Samoa
13) Austria 14) Australia
15) Aruba 16) Azerbaijan
17) Bosnia and Herzegovina 18) Barbados
19) Bangladesh 20) Belgium
21) Burkina Faso 22) Bulgaria
23) Bahrain 24) Burundi
25) Benin 26) Bermuda
27) Brunei Darussalam 28) Bolivia
29) Brazil 30) Bahamas
31) Bhutan 32) Botswana
33) Belarus 34) Belize
35) Canada 36) Cocos (Keeling) Islands
37) Congo - The Democratic Republic of 38) Central African Republic
39) Congo 40) Switzerland
41) Cote d'Ivoire 42) Cook Islands
43) Chile 44) Cameroon
45) China 46) Colombia
47) Costa Rica 48) Cuba
49) Cape Verde 50) Christmas Island
51) Cyprus 52) Czech Republic
53) Germany 54) Djibouti
55) Denmark 56) Dominica
57) Dominican Republic 58) Algeria
59) Ecuador 60) Estonia
61) Egypt 62) Eritrea
63) Spain 64) Ethiopia
65) Finland 66) Fiji
67) Falkland Islands (Malvinas) 68) Micronesia - Federated States of
69) Faroe Islands 70) France
6) Atlantic Ocean
7) Australia
8) Europe
9) Indian Ocean
10) Pacific Ocean
11) none - I want to specify the time zone using the Posix TZ format.
#? 2
Please select a country.
1) Anguilla 27) Honduras
2) Antigua & Barbuda 28) Jamaica
3) Argentina 29) Martinique
4) Aruba 30) Mexico
5) Bahamas 31) Montserrat
6) Barbados 32) Netherlands Antilles
7) Belize 33) Nicaragua
8) Bolivia 34) Panama
9) Brazil 35) Paraguay
10) Canada 36) Peru
11) Cayman Islands 37) Puerto Rico
12) Chile 38) St Barthelemy
13) Colombia 39) St Kitts & Nevis
14) Costa Rica 40) St Lucia
15) Cuba 41) St Martin (French part)
16) Dominica 42) St Pierre & Miquelon
17) Dominican Republic 43) St Vincent
18) Ecuador 44) Suriname
19) El Salvador 45) Trinidad & Tobago
20) French Guiana 46) Turks & Caicos Is
21) Greenland 47) United States
22) Grenada 48) Uruguay
23) Guadeloupe 49) Venezuela
24) Guatemala 50) Virgin Islands (UK)
25) Guyana 51) Virgin Islands (US)
26) Haiti
#? 47
Please select one of the following time zone regions.
1) Eastern Time
2) Eastern Time - Michigan - most locations
3) Eastern Time - Kentucky - Louisville area
4) Eastern Time - Kentucky - Wayne County
5) Eastern Time - Indiana - most locations
6) Eastern Time - Indiana - Daviess, Dubois, Knox & Martin Counties
7) Eastern Time - Indiana - Pulaski County
8) Eastern Time - Indiana - Crawford County
9) Eastern Time - Indiana - Pike County
10) Eastern Time - Indiana - Switzerland County
11) Central Time
12) Central Time - Indiana - Perry County
13) Central Time - Indiana - Starke County
14) Central Time - Michigan - Dickinson, Gogebic, Iron & Menominee Counties
15) Central Time - North Dakota - Oliver County
16) Central Time - North Dakota - Morton County (except Mandan area)
17) Central Time - North Dakota - Mercer County
18) Mountain Time
19) Mountain Time - south Idaho & east Oregon
20) Mountain Time - Navajo
21) Mountain Standard Time - Arizona
22) Pacific Time
23) Alaska Time
24) Alaska Time - Alaska panhandle
25) Alaska Time - southeast Alaska panhandle
26) Alaska Time - Alaska panhandle neck
27) Alaska Time - west Alaska
28) Aleutian Islands
29) Metlakatla Time - Annette Island
30) Hawaii
#? 22
United States
Pacific Time
You can make this change permanent for yourself by appending the line
TZ='America/Los_Angeles'; export TZ
to the file '.profile' in your home directory; then log out and log in again.
Here is that TZ value again, this time on standard output so that you
can use the /usr/bin/tzselect command in shell scripts:
America/Los_Angeles
13:36:09 Reply=''
13:36:11 Reply=''
13:36:14 Reply=''
Date set
Generating communication keys for connex...
Confirmation
13:36:35 Reply=''
Welcome to the HP 3PAR Service Processor Moment of Birth
Country Code (type '?' to see a list of valid ISO 3166-1 country codes): ?
Country Name - ISO 3166-1-alpha-2 code
AFGHANISTAN - AF
ALAND ISLANDS - AX
ALBANIA - AL
ALGERIA - DZ
AMERICAN SAMOA - AS
ANDORRA - AD
ANGOLA - AO
ANGUILLA - AI
ANTARCTICA - AQ
ANTIGUA AND BARBUDA - AG
ARGENTINA - AR
ARMENIA - AM
ARUBA - AW
AUSTRALIA - AU
AUSTRIA - AT
AZERBAIJAN - AZ
BAHAMAS - BS
BAHRAIN - BH
BANGLADESH - BD
BARBADOS - BB
BELARUS - BY
BELGIUM - BE
BELIZE - BZ
BENIN - BJ
BERMUDA - BM
BHUTAN - BT
BOLIVIA, PLURINATIONAL STATE OF - BO
BONAIRE, SINT EUSTATIUS AND SABA - BQ
BOSNIA AND HERZEGOVINA - BA
BOTSWANA - BW
BOUVET ISLAND - BV
BRAZIL - BR
BRITISH INDIAN OCEAN TERRITORY - IO
BRUNEI DARUSSALAM - BN
BULGARIA - BG
BURKINA FASO - BF
BURUNDI - BI
CAMBODIA - KH
CAMEROON - CM
CANADA - CA
CAPE VERDE - CV
CAYMAN ISLANDS - KY
CENTRAL AFRICAN REPUBLIC - CF
CHAD - TD
MALTA - MT
MARSHALL ISLANDS - MH
MARTINIQUE - MQ
MAURITANIA - MR
MAURITIUS - MU
MAYOTTE - YT
MEXICO - MX
MICRONESIA, FEDERATED STATES OF - FM
MOLDOVA, REPUBLIC OF - MD
MONACO - MC
MONGOLIA - MN
MONTENEGRO - ME
MONTSERRAT - MS
MOROCCO - MA
MOZAMBIQUE - MZ
MYANMAR - MM
NAMIBIA - NA
NAURU - NR
NEPAL - NP
NETHERLANDS - NL
NEW CALEDONIA - NC
NEW ZEALAND - NZ
NICARAGUA - NI
NIGER - NE
NIGERIA - NG
NIUE - NU
NORFOLK ISLAND - NF
NORTHERN MARIANA ISLANDS - MP
NORWAY - NO
OMAN - OM
PAKISTAN - PK
PALAU - PW
PALESTINE, STATE OF - PS
PANAMA - PA
PAPUA NEW GUINEA - PG
PARAGUAY - PY
PERU - PE
PHILIPPINES - PH
PITCAIRN - PN
POLAND - PL
PORTUGAL - PT
PUERTO RICO - PR
QATAR - QA
REUNION - RE
ROMANIA - RO
RUSSIAN FEDERATION - RU
RWANDA - RW
SAINT BARTHELEMY - BL
SAINT HELENA, ASCENSION AND TRISTAN DA CUNHA - SH
SAINT KITTS AND NEVIS - KN
SAINT LUCIA - LC
SAINT MARTIN (FRENCH PART) - MF
SAINT PIERRE AND MIQUELON - PM
SAINT VINCENT AND THE GRENADINES - VC
SAMOA - WS
SAN MARINO - SM
SAO TOME AND PRINCIPE - ST
SAUDI ARABIA - SA
SENEGAL - SN
SERBIA - RS
SEYCHELLES - SC
SIERRA LEONE - SL
SINGAPORE - SG
SINT MAARTEN (DUTCH PART) - SX
SLOVAKIA - SK
SLOVENIA - SI
SOLOMON ISLANDS - SB
SOMALIA - SO
SOUTH AFRICA - ZA
SOUTH GEORGIA AND THE SOUTH SANDWICH ISLANDS - GS
SOUTH SUDAN - SS
SPAIN - ES
SRI LANKA - LK
SUDAN - SD
SURINAME - SR
SVALBARD AND JAN MAYEN - SJ
SWAZILAND - SZ
SWEDEN - SE
SWITZERLAND - CH
SYRIAN ARAB REPUBLIC - SY
TAIWAN, PROVINCE OF CHINA - TW
TAJIKISTAN - TJ
TANZANIA, UNITED REPUBLIC OF - TZ
THAILAND - TH
TIMOR-LESTE - TL
TOGO - TG
TOKELAU - TK
TONGA - TO
TRINIDAD AND TOBAGO - TT
TUNISIA - TN
TURKEY - TR
TURKMENISTAN - TM
TURKS AND CAICOS ISLANDS - TC
* Company: HP
* HW Installation Site Address
Street and number: 4209 Technology Drive
City: Fremont
State/Province: CA
ZIP/Postal Code: 94538
Country Code: US
* Technical Contact
First Name: Joe
Last Name: Thornton
Phone: 555-555-0055
E-Mail: joethornton19@hp.com
FAX:
* Direct Support from HP: Y
Rebooting....
.
.
.
.
.
.
login:
Password:
SP0001400383
1 SP Main
HP 3PAR Service Processor Menu
Enter Control-C at any time to abort this process
1 ==> SP Control/Status
2 ==> Network Configuration
3 ==> InServ Configuration Management
4 ==> InServ Product Maintenance
5 ==> Local Notification Configuration
6 ==> Site Authentication Key Manipulation
7 ==> Interactive CLI for a StoreServ
X Exit
Setting Value
Parity None
Word Length 8
Stop Bits 1
Transmit Xon/Xoff
Receive Xon/Xoff
NOTE: When performing automatic node-to-node rescue, there may be instances where a node
is to be rescued by another node that has been inserted but has not been detected. If this happens,
issue the CLI command startnoderescue -node <nodenum> (an illustrative invocation is shown below). Before you do, you must have
the rescue IP address. This is the IP address that is allocated to the node being rescued and must
be on the same subnet as the SP.
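As an illustrative example only, assuming node 0 is the node being rescued:
cli% startnoderescue -node 0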
Use the showtask -d command to view detailed status regarding the node rescue:
root@1400461-0461# showtask -d
Id Type Name Status Phase Step ----StartTime---
----FinishTime---- Priority User
4 node_rescue node_0_rescue done --- --- 2012-04-10 13:42:37 PDT 2012-04-10
13:47:22 PDT n/a sys:3parsys
Detailed status:
2012-04-10 13:42:37 PDT Created task.
2012-04-10 13:42:37 PDT Updated Running node rescue for node 0 as 1:8915
2012-04-10 13:42:44 PDT Updated Using IP 169.254.136.255
2012-04-10 13:42:44 PDT Updated Informing system manager to not autoreset node 0.
2012-04-10 13:42:44 PDT Updated Resetting node 0.
2012-04-10 13:42:53 PDT Updated Attempting to contact node 0 via NEMOE.
2012-04-10 13:42:53 PDT Updated Setting boot parameters.
2012-04-10 13:44:08 PDT Updated Waiting for node 0 to boot the node rescue kernel.
2012-04-10 13:44:54 PDT Updated Kernel on node 0 has started. Waiting for node
to retrieve install details.
2012-04-10 13:45:14 PDT Updated Node 32768 has retrieved the install details.
Waiting for file sync to begin.
2012-04-10 13:45:36 PDT Updated File sync has begun. Estimated time to complete
this step is 5 minutes on a lightly loaded system.
2012-04-10 13:47:22 PDT Updated Remote node has completed file sync, and will
reboot.
2012-04-10 13:47:22 PDT Updated Notified NEMOE of node 0 that node-rescue is done.
2012-04-10 13:47:22 PDT Updated Node 0 rescue complete.
2012-04-10 13:47:22 PDT Completed scheduled task.
NOTE: This node rescue procedure should only be used if all nodes in the HP 3PAR system are
down and need to be rebuilt from the HP 3PAR OS image on the service processor. The SP-to-node
rescue procedure is supported with HP 3PAR OS version 3.1.2 or higher and HP 3PAR Service
Processor 4.2 or higher.
To perform SP-to-node rescue:
1. At the rear of the storage system, uncoil the red crossover Ethernet cable connected to the SP
(ETH) private network connection and connect this crossover cable to the E0 port of the node
that is being rescued (shown).
NOTE: Connect the crossover cable to the following ETH port of a specific SP brand:
• HP 3PAR Service Processor DL320e: ETH port 2
• Supermicro II: ETH port 1
2. Connect the maintenance PC to the SP using the serial connection and start an spmaint
session.
3. Go to 3 StoreServ Configuration Management > 1 Display StoreServ information to perform
the pre-rescue task of obtaining the following information:
• HP 3PAR OS Level on the StoreServ system
• StoreServ system network parameters including netmask and gateway information
Return to the main menu.
NOTE: Copy the network information onto a separate document for reference to complete
the subsequent steps of configuring the system network.
6. Establish a serial connection to the node being rescued. If necessary, disconnect the serial
cable from SP.
7. Connect a serial cable from the laptop to the serial port on the node being rescued (S0).
NOTE: Connect the crossover cable to the following ETH port of a specific SP brand:
• HP DL320e or DL360e: ETH port 2
NOTE: If necessary, check the baud rate settings before establishing a connection.
This is the procedure for manually rescuing a 3PAR StoreServ node (i.e.,
rebuilding the software on the node's internal disk). The system will install
the base OS, BIOS, and InForm OS for the node before it joins the cluster.
You must first connect a Category 5 crossover Ethernet cable between the SP's
private/internal network (Eth-1) and the "E0" Ethernet port of the node to be
rescued. Note that the diagram below does not represent the physical port
numbers or configuration of all node types.
New Node
Service Processor +------------+
+-----------------+ ||||||| |
| | ||||||| |
|Eth-0 Eth-1(Int) | ||||||| E0 C0|
+-----------------+ +------------+
^ ^ ^
|____Crossover Eth____| |__Maintenance PC (serial connection)
This operation will completely erase and reinstall the node's local disk.
Are you sure? (Y/N) No
9. Verify that the node status LED is blinking green slowly and that a login prompt is displayed.
NOTE: Access STATs to obtain the network information or request it from the system
administrator.
cli% shownode
Control Data Cache
Node --Name--- -State- Master InCluster ---LED--- Mem(MB) Mem(MB) Available(%)
0 1000163-0 OK No Yes GreenBlnk 4096 6144 100
1 1000163-1 OK Yes Yes GreenBlnk 4096 6144 100
18. Execute the shutdownsys reboot command and enter yes to reboot the system.
When the system reboot is complete, reestablish an SPMAINT session to perform additional
CLI commands.
19. Reconnect the host and host cables if they were previously removed or shut down.
20. Issue the checkhealth -svc -detail command to verify the system is healthy.
21. In the SP window, issue the exit command and select X to exit from the 3PAR Service
Processor Menu and to log out of the session.
22. Disconnect the serial cable from the maintenance PC and the red crossover Ethernet cable
from the node, then coil the crossover cable and replace it behind the SP. If applicable, reconnect the
customer's network cable and any other cables that may have been disconnected.
23. Close and lock the rear door.
PLEASE NOTE THAT THIS PROCEDURE IS FOR USE WITH A VIRTUAL SERVICE
PROCESSOR (VSP) WHEN ALL NODES ARE DOWN. Verify that 10.0.121.245
(the last known IP address of the StoreServ) is not in use. All nodes
in this StoreServ must be offline and the nodes can only be rescued one
at a time.
The following network configuration assumes that the VSP and the
StoreServ are on the same subnet. If the VSP and the StoreServ are
not on the same subnet, the netmask (255.255.248.0) and the gateway
(10.0.120.1) need to be changed in the commands below to the
netmask and gateway values used by the StoreServ.
The system will install the base OS, HP 3PAR OS, and reboot. Repeat this
procedure for all nodes and then wait for all nodes to join the cluster
before proceeding.
5. Disconnect the serial cable from the serial adapter on the SP.
NOTE: The VSP is connected to the target node being rescued via the customer network.
7. Reset the node by pressing Ctrl+w to establish a Whack> prompt. When the prompt displays,
type reset.
NOTE: Make sure to monitor the reset and do not complete a full reset. After 30 seconds,
press Ctrl+w to interrupt the reset.
8. At the Whack> prompt, refer to the output in step 4 and copy and paste the commands for the
following setting prompts (a filled-in sketch appears after the note below):
a. Whack> nemoe cmd unset node_rescue_needed
b. Whack> net server <VSP IP Address>
c. Whack> net netmask <netmask IP Address>
d. Whack> net gateway <Gateway IP address>
e. Whack> net addr <StoreServ IP address>
f. Whack> boot net install ipaddr=<StoreServ IP address> nm=<netmask IP Address>
gw=<Gateway IP Address> rp=<VSP IP address>::rescueide
The following table is only an example.
NOTE: If you get a message about a failing ARP response, type reset and wait about 30
seconds before pressing Ctrl+w to halt the reboot. When the Whack> prompt displays, repeat
step 8.
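As a hedged, filled-in sketch only: using the netmask (255.255.248.0), gateway (10.0.120.1), and StoreServ address (10.0.121.245) quoted earlier in this procedure, and assuming a purely hypothetical VSP address of 10.0.120.50, the sequence entered at the Whack> prompt would look similar to the following:
Whack> nemoe cmd unset node_rescue_needed
Whack> net server 10.0.120.50
Whack> net netmask 255.255.248.0
Whack> net gateway 10.0.120.1
Whack> net addr 10.0.121.245
Whack> boot net install ipaddr=10.0.121.245 nm=255.255.248.0 gw=10.0.120.1 rp=10.0.120.50::rescueide
Substitute the actual values reported for your system in step 4.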
12. Wait for all of the nodes to join the cluster. The node status LEDs should be blinking green.
13. Establish an SPMAINT session. Use console as the login name.
14. Select option 2 Network Configuration to enter the network configuration. Return to the main
menu when complete.
NOTE: The cluster must be active and the admin volume must be mounted before changing
the network configuration.
15. Disconnect the serial cable from the node and reconnect it to the adapter on the SP. Press Enter.
16. Before deconfiguring the node rescue, disconnect the crossover cables and reconnect the
public network cable.
17. Return to the SP Main menu and choose 4 StoreServ Product Maintenance > 11 Node Rescue.
Enter y to confirm rescue is completed and press Enter to continue.
a. Choose 1 ==> Deconfigure Node Rescue to deconfigure the node rescue.
b. Choose X ==> Return to previous menu to return to the main menu.
c. Choose 7 ==> Interactive CLI for a StoreServ, then select the desired system.
18. Issue the shownode command to verify that all nodes have joined the cluster.
cli% shownode
Control Data Cache
Node --Name--- -State- Master InCluster ---LED--- Mem(MB) Mem(MB) Available(%)
0 1000163-0 OK No Yes GreenBlnk 4096 6144 100
1 1000163-1 OK Yes Yes GreenBlnk 4096 6144 100
19. Execute the shutdownsys reboot command and enter yes to reboot the system.
When the system reboot is complete, reestablish an SPMAINT session to perform additional
CLI commands.
20. Reconnect the host and host cables if they were previously removed or shut down.
21. Execute the checkhealth -svc -detail command to verify the system is healthy.
22. In the SP window, issue the exit command and select X to exit from the 3PAR Service
Processor Menu and to log out of the session.
23. Disconnect the serial cable from the maintenance PC. If applicable, reconnect the customer's
network cable and any other cables that may have been disconnected.
24. Close and lock the rear door.
Service Processor
Figure 95 Service Processor DL320e