
CX3 Model 10 Systems

Hardware and Operational Overview


January 23, 2007

This document describes the hardware, powerup and powerdown sequences, and status indicators for CX3 model 10 systems, which are members of the CX3 UltraScale series of storage systems. Major topics are:
• Storage-system major components
• Storage processor enclosure (SPE3)
• Disk-array enclosures (DAE3Ps)
• Standby power supplies (SPSs)
• Powerup and powerdown sequence
• Status lights (LEDs) and indicators

Storage-system major components


The storage system consists of:
• A storage processor enclosure (SPE3) and a standby power supply (SPS)
• One Fibre Channel disk-array enclosure (DAE) with a minimum of five disk drives
• Optional DAEs
• Optional second SPS

Figure 1    Storage system (DAE3P, SPS, and SPE3)

The high-availability features for the storage system include:
• Redundant storage processors (SPs)
• Standby power supplies (SPSs)
• Redundant power/cooling modules

The SPE3 is a highly available storage enclosure with redundant power and cooling. It is 1U high (a U is a NEMA unit; each unit is 1.75 inches) and includes two storage processors (SPs). Table 1 gives the number of Fibre Channel and iSCSI I/O front-end ports and Fibre Channel back-end disk ports supported by each CX3 model 10 system.


Table 1    Front-end and back-end ports

Storage system   Fibre Channel front-end I/O ports   iSCSI front-end I/O ports   Fibre Channel back-end disk ports
CX3-10c          2                                   2                           1

The storage system supports 4 Gb/s Fibre Channel operation from its front-end host I/O ports through its back-end disk ports. The host I/O front-end ports can operate at up to 4 Gb/s, and the back-end ports can operate at 2 or 4 Gb/s. The storage system senses the speed of the incoming host I/O and sets the speed of the front-end ports to the lowest speed it senses. The speed of each back-end port is determined by the speed of the DAEs connected to it.

The storage system requires at least five disks and works in conjunction with one or more disk-array enclosures (DAEs) to provide terabytes of highly available disk storage. A DAE is a basic disk enclosure without an SP. SPE3 systems include a 4 Gb/s point-to-point DAE3P, which supports up to 15 Fibre Channel disks. Each DAE3P connects to the SPE3 or to another DAE with simple FC-AL serial cabling. The storage system supports a total of 60 disks on its single back-end bus.

You can place the disk enclosures in the same cabinet as the SPE, or in one or more separate cabinets. High-availability features are standard.
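These speed rules reduce to two simple decisions. The following is a minimal illustrative sketch in Python (not EMC software; the helper names and inputs are hypothetical): front-end ports settle on the lowest speed sensed from attached hosts, and a back-end port runs at 4 Gb/s only when every DAE on it supports 4 Gb/s.

    # Illustrative sketch only (not EMC firmware); hypothetical helpers.
    FRONT_END_MAX_GBPS = 4

    def front_end_port_speed(sensed_host_speeds_gbps):
        """Front-end ports settle on the lowest speed sensed, capped at 4 Gb/s."""
        if not sensed_host_speeds_gbps:
            return None  # no incoming host I/O, no link
        return min(min(sensed_host_speeds_gbps), FRONT_END_MAX_GBPS)

    def back_end_port_speed(dae_speeds_gbps):
        """A back-end port runs at 4 Gb/s only if every attached DAE supports 4 Gb/s."""
        if not dae_speeds_gbps:
            return None
        return 4 if all(s >= 4 for s in dae_speeds_gbps) else 2

    print(front_end_port_speed([4, 2, 4]))  # -> 2 (lowest sensed speed wins)
    print(back_end_port_speed([4, 4, 2]))   # -> 2 (one 2 Gb/s DAE holds the bus at 2 Gb/s)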


Storage processor enclosure (SPE3)


The SPE3 components include:
• A sheet-metal enclosure with a midplane and front bezel
• Two storage processors (SPs)
• Four power supply/system cooling modules (referred to as power/cooling modules)

Figure 2 shows the SPE3 components. Details on each component follow the figure. If the enclosure provides slots for two identical components, the component in slot A is called component-name A, and the second component is called component-name B. For increased clarity, the following figures depict the SPE3 outside of the rack cabinet. Your SPE3 may be installed in a rackmount cabinet.
Figure 2    SPE3 outside the cabinet, front and rear views (SPS A and SPS B, storage processors A and B, power/cooling modules)

Midplane
The midplane distributes power and signals to all the enclosure components. The power/cooling modules and storage processors (SPs) plug directly into midplane connectors.

Front bezel
The front bezel has a key lock and two latch release buttons. Pressing the latch release buttons releases the bezel from the enclosure.

Storage processors (SPs)


The SP is the SPE3's intelligent component and acts as the control center. Each SP includes:

• A single-processor CPU module comprising:
  - 1 GB of DDR DIMM (double data rate, dual in-line memory module) memory
  - Two small form-factor pluggable (SFP) shielded Fibre Channel connectors (optical SFP) for server I/O (connection to an FC switch or server HBA)
  - One SFP shielded Fibre Channel connector (copper SFP) for disk connection (BE 0)
  - One serial port (micro DB9 connector) for connection to a standby power supply (SPS)
  - One 10/100 Ethernet LAN port (RJ45 connector) for management
  - One serial port (micro DB9 connector) for RS-232 connection to a service console
  - One 10/100 Ethernet LAN port (RJ45 connector) for service
• For a CX3-10c, one I/O module with two 10/100/gigabit Ethernet ports (RJ45 connectors) for iSCSI I/O to a network switch or server NIC or HBA

Figure 3 shows the locations of the connectors on the rear of the SPs.
Figure 3    Connectors on the rear of a CX3-10c SP (service and management LAN ports, iSCSI ports 0 and 1, back-end Fibre Channel port BE 0, front-end Fibre Channel ports 2 and 3, SPS port, AC cord, power and fault LEDs)

Power/cooling modules
Each of the four power/cooling modules integrates one independent power supply and one blower into a single module. The power supply in each module is an auto-ranging, power-factor-corrected, multi-output, offline converter.

The four power/cooling modules (A0, A1, B0, and B1) are located in front of the SPs. A0 and A1 share load currents and provide power and cooling for SP A, and B0 and B1 share load currents and provide power and cooling for SP B. A0 and B0 share a line cord, and A1 and B1 share a line cord. An SP or power/cooling module with power-related faults does not adversely affect the operation of any other component. If one power/cooling module fails, the others take over. If both power/cooling modules for an SP fail, write caching is disabled.
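As a minimal sketch of the redundancy rule only (illustrative Python with the module names A0/A1/B0/B1 from the text, not EMC firmware):

    # Illustrative sketch only (not EMC firmware).
    def sp_write_cache_allowed(module_ok, sp):
        """Write caching for an SP is disabled only when both of its power/cooling modules fail."""
        modules = {"A": ("A0", "A1"), "B": ("B0", "B1")}[sp]
        return any(module_ok[m] for m in modules)

    status = {"A0": True, "A1": False, "B0": True, "B1": True}
    print(sp_write_cache_allowed(status, "A"))                   # True: A0 takes over for failed A1
    print(sp_write_cache_allowed({**status, "A0": False}, "A"))  # False: both modules failed, caching disabled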

SPE3 field-replaceable units (FRUs)


The following are field-replaceable units (FRUs) that you can replace while the SPE3 is powered up:
• CPU modules
• Memory modules (DIMMs)
• I/O modules
• Small form-factor pluggable (SFP) modules, which plug into the Fibre Channel front-end port slots
• Power/cooling modules

You or your service provider can replace a failed power/cooling module or SFP module. A service provider must replace the other FRUs if they fail.


Disk-array enclosures (DAE3Ps)


DAE3P UltraPoint (sometimes called point-to-point) disk-array enclosures are highly available, high-performance, high-capacity storage-system components that use a Fibre Channel Arbitrated Loop (FC-AL) as the interconnect interface. A disk enclosure connects to another DAE3P or an SPE3 and is managed by storage-system software in RAID (redundant array of independent disks) configurations. The enclosure is only 3U (5.25 inches) high, but can include 15 hard disk drive/carrier modules. Its modular, scalable design allows for additional disk storage as your needs increase.

A DAE3P includes either high-performance Fibre Channel disk modules or economical SATA (Serial Advanced Technology Attachment, SATA II) disk modules. You can integrate and connect FC and SATA enclosures within a storage system, but you cannot mix SATA and Fibre Channel components within a DAE3P. The enclosure operates at either 2 or 4 Gb/s bus speed (2 Gb/s components, including disks, cannot operate on a 4 Gb/s bus).

Simple serial cabling provides easy scalability. You can interconnect disk enclosures to form a large disk storage system; the number and size of buses depend on the capabilities of your storage processor. Highly available configurations require at least one pair of physically independent loops (for example, the A and B sides of bus 0, sharing the same dual-port disks). Other configurations use two, three, four, or more buses. You can place the disk enclosures in the same cabinet, or in one or more separate cabinets. High-availability features are standard.

The DAE3P includes the following components:
• A sheet-metal enclosure with a midplane and front bezel
• Two FC-AL link control cards (LCCs) to manage disk modules
• As many as 15 disk modules
• Two power supply/system cooling modules (referred to as power/cooling modules)

Any unoccupied disk module slot has a filler module to maintain air flow. The power supply and system cooling components of the power/cooling modules function independently of each other, but the assemblies are packaged together into a single field-replaceable unit (FRU). The LCCs, disk modules, power supply/system cooling modules, and filler modules are field-replaceable units (FRUs), which can be added or replaced without hardware tools while the storage system is powered up.

Figure 4 shows the disk enclosure components. Where the enclosure provides slots for two identical components, the components are called component-name A or component-name B, as shown in the illustrations.
For increased clarity, the following figures depict the disk enclosure outside of the rack or cabinet. Your disk enclosure may be installed in a rackmount cabinet.

Figure 4    DAE3P outside the cabinet, front and rear views (power/cooling modules A and B, link control cards A and B, fault LED (amber), power LED (green or blue))

As shown in Figure 5, an enclosure address (EA) indicator is located on each LCC. (The EA is sometimes referred to as an enclosure ID.) Each link control card (LCC) includes a bus (loop) identification indicator. The storage processor initializes bus ID when the operating system loads.


Figure 5    Disk enclosure bus (loop) and address indicators (disk activity LED (green), fault LED (amber), bus ID and enclosure address displays 0-7, EA selection button: press to change the EA)

The enclosure address is set at installation. Disk module IDs are numbered left to right (looking at the front of the unit) and are contiguous throughout a storage system: enclosure 0 contains modules 0-14; enclosure 1 contains modules 15-29; enclosure 2 includes 30-44, and so on.
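The numbering rule is simple arithmetic: 15 slots per enclosure, so a module's global ID is the enclosure address times 15 plus its slot position. A small illustrative Python sketch (hypothetical helper, not EMC software):

    # Minimal sketch (not EMC software) of the contiguous numbering rule described above.
    SLOTS_PER_ENCLOSURE = 15

    def disk_module_id(enclosure_address, slot):
        """Global disk module ID for a slot (0-14) in a given enclosure address."""
        if not 0 <= slot < SLOTS_PER_ENCLOSURE:
            raise ValueError("slot must be 0-14")
        return enclosure_address * SLOTS_PER_ENCLOSURE + slot

    print(disk_module_id(0, 0))    # 0   (enclosure 0 holds modules 0-14)
    print(disk_module_id(1, 0))    # 15  (enclosure 1 holds modules 15-29)
    print(disk_module_id(2, 14))   # 44  (enclosure 2 holds modules 30-44)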

Midplane
A midplane between the disk modules and the LCC and power/cooling modules distributes power and signals to all components in the enclosure. LCCs, power/cooling modules, and disk drives (the enclosure's field-replaceable units, or FRUs) plug directly into the midplane.

Front bezel
The front bezel has a locking latch and an electromagnetic interference (EMI) shield. You must remove the bezel to remove and install drive modules. EMI compliance requires a properly installed front bezel.

Link control cards (LCCs)


An LCC supports and controls one Fibre Channel bus and monitors the DAE3P.


Figure 6    LCC connectors and status LEDs (expansion link active LED, primary link active LED, fault LED (amber), power LED (green))

A blue link active LED indicates a DAE3P enclosure operating at 4 Gb/s. The link active LEDs are green in DAE3Ps operating at 2 Gb/s.

The LCCs in a DAE3P connect to other Fibre Channel devices (processor enclosures, other DAEs) with twin-axial copper cables. The cables connect LCCs in a storage system together in a daisy-chain (loop) topology. Internally, each DAE3P LCC uses FC-AL protocols to emulate a loop; it connects to the drives in its enclosure in a point-to-point fashion through a switch. The LCC independently receives and electrically terminates incoming FC-AL signals.

For traffic from the system's storage processors, the LCC switch passes the input signal from the primary port (PRI) to the drive being accessed; the switch then forwards the drive's output signal to the expansion port (EXP), where cables connect it to the next DAE in the loop. (If the target drive is not in the LCC's enclosure, the switch passes the input signal directly to the EXP port.) At the unconnected expansion port (EXP) of the last LCC, the output signal (from the storage processor) is looped back to the input signal (to the storage processor). For traffic directed to the system's storage processors, the switch passes input signals from the expansion port directly to the output signal of the primary port.

Each LCC independently monitors the environmental status of the entire enclosure, using a microcomputer-controlled FRU (field-replaceable unit) monitor program. The monitor communicates status to the server, which polls disk enclosure status. LCC firmware also controls the LCC port bypass circuits and the disk-module status LEDs. LCCs do not communicate with or control each other.

Captive screws on the LCC lock it into place to ensure proper connection to the midplane. You can add or replace an LCC while the disk enclosure is powered up.
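The downstream routing behavior described above can be sketched as a simple decision, shown here as illustrative Python (not LCC firmware; the names and inputs are hypothetical):

    # Illustrative sketch only: where downstream traffic arriving on the primary (PRI) port goes.
    def route_downstream(target_drive, drives_in_enclosure, expansion_connected):
        """Routing decision for a frame from the storage processor arriving on PRI."""
        if target_drive in drives_in_enclosure:
            return "drive, then EXP"      # deliver to the drive, forward its output to the expansion port
        if not expansion_connected:
            return "loop back toward SP"  # unconnected EXP on the last LCC loops the signal back
        return "EXP"                      # pass through to the next DAE on the loop

    print(route_downstream("disk_7", {"disk_0", "disk_7"}, expansion_connected=True))    # drive, then EXP
    print(route_downstream("disk_20", {"disk_0", "disk_7"}, expansion_connected=False))  # loop back toward SP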

Disk modules
Each disk module consists of one disk drive in a carrier. You can visually distinguish between module types by their different latch and handle mechanisms and by the type, capacity, and speed labels on each module. An enclosure can include Fibre Channel or SATA disk modules, but not both types. You can add or remove a disk module while the DAE3P is powered up, but you should exercise special care when removing modules while they are in use. Drive modules are extremely sensitive electronic components.

Disk drives
The DAE3P supports Fibre Channel disk drives that conform to FC-AL specifications and 2 or 4 Gb/s Fibre Channel interface standards, and supports dual-port FC-AL interconnects through the two LCCs. A DAE3P supports 2 Gb/s drives only if the entire back-end bus that contains the drives is operating at 2 Gb/s. SATA disk drives conform to the Serial ATA II Electrical Specification 1.0 and include dual-port SATA interconnects; a paddle card on each drive converts the assembly to Fibre Channel operation. The disk module slots in the enclosure accommodate 2.54 cm (1-in) by 8.75 cm (3.5-in) disk drives.

Drive carrier
The disk drive carriers are metal and plastic assemblies that provide smooth, reliable contact with the enclosure slot guides and midplane connectors. Each carrier has a handle with a latch and spring clips. The latch holds the disk module in place to ensure proper connection with the midplane. Disk drive activity/fault LEDs are integrated into the carrier.

Power/cooling modules
The power/cooling modules are located above and below the LCCs. The units integrate independent power supply and dual-blower cooling assemblies into a single module. Each power supply is an auto-ranging, power-factor-corrected, multi-output, offline converter with its own line cord. Each supply supports a fully configured DAE3P and shares load currents with the other supply.

The drives and LCCs have individual soft-start switches that protect the disk drives and LCCs if they are installed while the disk enclosure is powered up. A FRU (disk, LCC, or power/cooling module) with power-related faults does not adversely affect the operation of any other FRU.

The enclosure cooling system includes two dual-blower modules. If one blower fails, the others will speed up to compensate. If two blowers in a system (both in one power/cooling module, or one in each module) fail, the DAE3P goes offline within two minutes.
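As a minimal sketch of the blower rule only (illustrative Python, not EMC firmware):

    # One failed blower is tolerated; two failed blowers take the enclosure offline.
    def dae_stays_online(failed_blowers):
        """A DAE3P has two dual-blower modules (four blowers total)."""
        return failed_blowers <= 1

    print(dae_stays_online(1))  # True: the remaining blowers speed up to compensate
    print(dae_stays_online(2))  # False: enclosure goes offline within two minutes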


Standby power supplies (SPSs)


A 1U, 1000-watt DC SPS provides backup power for storage processor A and for LCC A on the first DAE (enclosure 0, bus 0) adjacent to it. An optional second SPS provides the same service for SP B and LCC B. The SPSs allow write caching, which prevents data loss during a power failure, to continue. A faulted or not fully charged SPS disables the write cache.

Each SPS rear panel has one AC inlet power connector with power switch, AC outlets for the SPE3 and the first DAE (EA 0, bus 0) respectively, and one phone-jack type connector for connection to an SP. Figure 7 shows the SPS connectors.

Figure 7    1000 W SPS connectors (SPE and SP interface connectors, AC power connector, power switch, active LED (green), on battery LED (amber), replace battery LED (amber), fault LED (amber))

A service provider can replace an SPS while the storage system is powered up.
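The write-cache rule stated above (a faulted or not fully charged SPS disables write caching) can be sketched as follows; this is an illustrative Python fragment with assumed state names, not EMC software:

    # Illustrative sketch only; SPS state names are assumptions.
    def write_cache_enabled(sps_states):
        """Write caching continues only while at least one online SPS is ready (charged, no fault)."""
        return any(state == "ready" for state in sps_states)

    print(write_cache_enabled(["ready"]))             # True: a single healthy SPS is enough
    print(write_cache_enabled(["charging"]))          # False: a not fully charged SPS disables the cache
    print(write_cache_enabled(["faulted", "ready"]))  # True: the optional second SPS is still ready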


Powerup and powerdown sequence


The SPE3 and DAE3P do not have power switches.

Powering up the storage system


1. Verify the following:
   • Master switch/circuit breakers for each cabinet/rack power strip are off.
   • The power cord for SP A is plugged into the SPS and the power cord retention bails are in place.
   • The power cord for SP B is plugged into the nearest power distribution unit on a different circuit feed than the SPS. (In systems with two SPSs, plug SP B into its corresponding SPS.)
   • The serial connection between SP A and the SPS is in place. (In systems with two SPSs, each SP has a serial connection to its corresponding SPS.)
   • The power cord for LCC A on the first DAE3P (EA 0, bus 0) is plugged into the SPS and the power cord retention bails are in place.
   • The power cord for LCC B is plugged into the nearest power distribution unit on a different circuit feed than the SPS. (In systems with two SPSs, each LCC plugs into its corresponding SPS.)
   • The power cords for the SPSs and any other DAE3Ps are plugged into the cabinet's power strips.
   • The power switches on the SPSs are in the on position.
   • Any other devices in the cabinet are correctly installed and ready for powerup.

2. Turn on the master switch/circuit breakers for each cabinet/rack power strip.
In standard EMC cabinets, master switches are on the power distribution panels (PDPs), as shown in Figure 8.


Figure 8    PDP master switches and power sources in the 40U cabinet (master switches, SPS switches, power sources A, B, C, and D)

The storage system can take 10 to 15 minutes to complete a typical powerup. If the storage system was installed in a cabinet at your site (a field-installed system), the first powerup requires several reboots and can take 30 to 45 minutes. Amber warning LEDs flash during the power-on self-test (POST) and then go off. The front fault LED and the SPS recharge LEDs commonly stay on for several minutes while the SPSs are charging.

If amber LEDs on the front or back of the storage system remain on for more than 15 minutes (45 minutes for the first powerup of a field-installed system), make sure the storage system is correctly cabled, and then refer to the troubleshooting flowcharts on the CLARiiON Tools page on the EMC Powerlink website (http://Powerlink.EMC.com). If you cannot determine any reasons for the fault, contact your authorized service provider.
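The timing guidance above reduces to two thresholds. A small illustrative Python sketch (hypothetical helper name, not part of any EMC procedure):

    # When to start troubleshooting amber LEDs that stay on after powerup.
    def amber_led_timeout_minutes(first_powerup_of_field_installed_system):
        """Minutes to wait before treating a lit amber LED as a fault."""
        return 45 if first_powerup_of_field_installed_system else 15

    print(amber_led_timeout_minutes(False))  # 15: typical powerup completes in 10 to 15 minutes
    print(amber_led_timeout_minutes(True))   # 45: first powerup of a field-installed system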

Powering down the storage system


1. Stop all I/O activity to the SPE. If the server connected to the SPE is running the Linux or UNIX operating system, back up critical data and then unmount the file systems.

   Stopping I/O allows the SP to destage cache data, and may take some time. The length of time depends on criteria such as the size of the cache, the amount of data in the cache, the type of data in the cache, and the target location on the disks, but it is typically less than one minute. We recommend that you wait five minutes before proceeding.

2. After five minutes, use the power switch on each SPS to turn off power. Storage processors and DAE LCCs connected to the SPS power down within two minutes.

CAUTION Never unplug the power supplies to shut down an SPE. Bypassing the SPS in that manner prevents the storage system from saving write cache data to the vault drives, and results in data loss. You will lose access to data, and the storage processor log displays an error message similar to the following:
Enclosure 0 Disk 5 0x90a (Can't Assign - Cache Dirty) 0 0xafb40 0x14362c

Contact your service provider if this situation occurs.


3. For CX3 model 10 systems with a single SPS, wait two minutes and then unplug the power cables for SP B on the SPE3 and LCC B on DAE 0, bus 0. This turns off power to the SPE and the first DAE (EA 0, bus 0). You do not need to turn off power to the other connected DAEs.


Status lights (LEDs) and indicators


Status lights (light-emitting diodes, or LEDs) on the SPE3 and its FRUs, the SPSs, and the DAE3Ps and their FRUs indicate each component's current status.

Storage processor enclosure (SPE3) LEDs


This section describes status LEDs visible from the front and the rear of the SPE3.

SPE3 front status LEDs
Figure 9 and Figure 10 show the location of the SPE3 status LEDs that are visible from the front of the enclosure. Table 2 describes these LEDs.

Figure 9    SPE3 front status LEDs, bezel in place (fault LED, power LED)

Figure 10    SPE3 front status LEDs, bezel removed (fault LED, power LED, power/cooling LEDs)


Table 2    Meaning of the SPE3 front status LEDs

LED                              Quantity       State            Meaning
Power                            1              Off              SPE3 is powered down.
                                                Solid green      SPE3 is powered up.
Fault                            1              Off              SPE3 is operating normally.
                                                Solid amber      A fault condition exists in the SPE3. If the fault is not obvious from another fault LED on the front, look at the rear of the enclosure.
Power/cooling fault (see note)   1 per module   Off              Power/cooling module is not powered up.
                                                Solid green      Power/cooling module is powered and operating normally.
                                                Solid amber      Power/cooling module is faulted.
                                                Blinking amber   Fault condition exists external to the power/cooling module.

Note: Light is visible only with the bezel removed.

SPE3 rear status LEDs
Figure 11 shows the status LEDs that are visible from the rear of the SPE3. Table 3 describes these LEDs.
Figure 11    SPE3 rear status LEDs (SP A and SP B power and fault LEDs, I/O module fault LEDs, Fibre Channel link LEDs)

Table 3    Meaning of the SPE3 rear status LEDs

LED                Quantity                              State                      Meaning
SP fault           1 per SP                              Off                        SP is powered up and operating normally.
                                                         Solid amber                SP is faulted.
                                                         Blinking amber             SP is in the process of powering up.
I/O module fault   1 per I/O module                      Off                        I/O module is powered up and operating normally.
                                                         Solid amber                I/O module is faulted.
BE port link       1 per back-end Fibre Channel port     Off                        No link because of one of the following conditions: the cable is disconnected, or the cable is faulted or not a supported type.
                                                         Solid green                1 Gb/s or 2 Gb/s link speed.
                                                         Solid blue                 4 Gb/s link speed.
                                                         Blinking green then blue   Cable fault.
FE port link       1 per front-end Fibre Channel port    Off                        No link because of one of the following conditions: the host is down, the cable is disconnected, an SFP is not in the port slot, or the SFP is faulted or not a supported type.
                                                         Solid green                1 Gb/s or 2 Gb/s link speed.
                                                         Solid blue                 4 Gb/s link speed.
                                                         Blinking green then blue   SFP or cable fault.


DAE3P status LEDs


This section describes the following status LEDs and indicators:
• Front DAE3P and disk module status LEDs
• Enclosure address and bus ID indicators
• LCC and power/cooling module status LEDs

Front DAE3P and disk module status LEDs
Figure 12 and Figure 13 show the location of the DAE3P and disk module status LEDs that are visible from the front of the enclosure. Table 4 describes these LEDs.

Figure 12    Front DAE3P and disk module status LEDs, bezel in place (fault LED, power LED)

Figure 13    Front DAE3P and disk module status LEDs, bezel removed (fault LED (amber), power LED (green or blue), disk activity LED (green), disk fault LED (amber))


Table 4    Meaning of the front DAE3P and disk module status LEDs

LED             Quantity            State                               Meaning
DAE power       1                   Off                                 DAE3P is not powered up.
                                    Solid green                         DAE3P is powered up and the back-end bus is running at 2 Gb/s.
                                    Solid blue                          DAE3P is powered up and the back-end bus is running at 4 Gb/s.
DAE fault       1                   Solid amber                         On when any fault condition exists; if the fault is not obvious from a disk module LED, look at the back of the enclosure.
Disk activity   1 per disk module   Off                                 Slot is empty or contains a filler module, or the disk is powered down by command, for example, as the result of a temperature fault.
                                    Solid green                         Drive has power but is not handling any I/O activity (the ready state).
                                    Blinking green, mostly on           Drive is spinning and handling I/O activity.
                                    Blinking green at a constant rate   Drive is spinning up or spinning down normally.
                                    Blinking green, mostly off          Drive is powered up but not spinning; this is a normal part of the spin-up sequence, occurring during the spin-up delay of a slot.
Disk fault      1 per disk module   Solid amber                         On when the disk module is faulty, or as an indication to remove the drive.

Enclosure address and bus ID indicators
Figure 14 shows the location of the enclosure address and bus ID indicators that are visible from the rear of the enclosure. In this example, the DAE3P is enclosure 2 on bus (loop) 1; note that the indicators for LCC A and LCC B always match. Table 5 describes these indicators.


Figure 14    Location of enclosure address and bus ID indicators (bus ID and enclosure address displays 0-7, EA selection buttons)

Table 5    Meaning of enclosure address and bus ID indicators

LED                 Quantity   State   Meaning
Enclosure address   8          Green   Displayed number indicates the enclosure address.
Bus ID              8          Blue    Displayed number indicates the bus ID. A blinking bus ID indicates invalid cabling: LCC A and LCC B are not connected to the same bus, or the maximum number of DAEs allowed on the bus is exceeded.

Power/cooling module status LEDs
Figure 15 shows the location of the status LEDs for the power supply/system cooling modules (referred to as power/cooling modules). Table 6 describes these LEDs.


Figure 15    Power/cooling module status LEDs (power LED (green), power fault LED (amber), blower fault LED (amber))

Table 6    Meaning of power/cooling module status LEDs

LED                             Quantity               State   Meaning
Power supply active             1 per supply           Green   On when the power supply is operating.
Power supply fault (see note)   1 per supply           Amber   On when the power supply is faulty or is not receiving AC line voltage. Flashing when either a multiple-blower or an ambient over-temperature condition has shut off power to the system.
Blower fault (see note)         1 per cooling module   Amber   On when a single blower in the power supply is faulty.

Note: The DAE3P continues running with a single power supply and three of its four blowers. Removing a power/cooling module constitutes a multiple blower fault condition, and will power down the enclosure unless you replace a blower within two minutes.

LCC status LEDs
Figure 16 shows the location of the status LEDs for a link control card (LCC). Table 7 describes these LEDs.


Figure 16    LCC status LEDs (power LED (green), fault LED (amber), primary link active LED, expansion link active LED; link active LEDs are green at 2 Gb/s and blue at 4 Gb/s)

Table 7    Meaning of LCC status LEDs

Light                   Quantity    State   Meaning
LCC power               1 per LCC   Green   On when the LCC is powered up.
LCC fault               1 per LCC   Amber   On when either the LCC or a Fibre Channel connection is faulty. Also on during the power-on self-test (POST).
Primary link active     1 per LCC   Green   On when a 2 Gb/s primary connection is active.
                                    Blue    On when a 4 Gb/s primary connection is active.
Expansion link active   1 per LCC   Green   On when a 2 Gb/s expansion connection is active.
                                    Blue    On when a 4 Gb/s expansion connection is active.

SPS status LEDs


Figure 17 shows the location of the SPS status LEDs that are visible from the rear. Table 8 describes these LEDs.


Figure 17    1000 W SPS status LEDs (active LED (green), on battery LED (amber), replace battery LED (amber), fault LED (amber))

Table 8    Meaning of 1000 W SPS status LEDs

LED               Quantity    State   Meaning
Active            1 per SPS   Green   When this LED is steady, the SPS is ready and operating normally. When this LED flashes, the batteries are being recharged. In either case, the output from the SPS is supplied by AC line input.
On battery        1 per SPS   Amber   The AC line power is no longer available and the SPS is supplying output power from its battery. When battery power comes on, and no other online SPS is connected to the SPE, the file server writes all cached data to disk, and the event log records the event. Also on briefly during the battery test.
Replace battery   1 per SPS   Amber   The SPS battery is not fully charged and may not be able to serve its cache-flushing function. With the battery in this state, and no other online SPS connected to the SPE, the storage system disables write caching, writing any modified pages to disk first. Replace the SPS as soon as possible.
Fault             1 per SPS   Amber   The SPS has an internal fault. The SPS may still be able to run online, but write caching cannot occur. Replace the SPS as soon as possible.


Copyright 2006-2007 EMC Corporation. All Rights Reserved. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. THE INFORMATION IN THIS PUBLICATION IS PROVIDED AS IS. EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. All other trademarks used herein are the property of their respective owners.
