
EMC VNX

VNX Operating Environment for Block 05.33.009.5.184
VNX Operating Environment for File 8.1.9.184
EMC Unisphere 1.3.9.1.184

Release Notes
P/N 302-000-403
REV 26
September 2016

The software described in this document is intended for the VNX5200, VNX5400, VNX5600, VNX5800, VNX7600, VNX8000, VNX-F5000, and VNX-F7000, but the following software packages are intended for general use with VNX and CLARiiON products:

- Unisphere Host software, CLI, and Utilities
- Unisphere Service Manager
- ESRS IP Client

The software versions are the same for all platforms. Topics include:
- Revision history
- Software media, organization, and files
- New features and enhancements
- Fixed problems
  - VNX Operating Environment for Block 05.33.009.5.184, VNX Operating Environment for File 8.1.9.184, EMC Unisphere 1.3.9.1.184
  - Fixed in previous releases
- Known problems and limitations
- Documentation
  - Configuring VNX Naming Services
  - Configuring Virtual Data Movers on VNX
  - Parameters Guide for VNX for File
  - Security Configuration Guide for VNX
  - Using FTP, TFTP, and SFTP on VNX
  - Using VNX Replicator
  - VNX 5400 Parts Location Guide
- Where to get help

Revision history

Rev  Date            Description
26   September 2016  Updated for 05.33.009.5.184 (Block OE), 8.1.9.184 (File OE), and 1.3.9.1.184 (Unisphere).
25   March 2016      Updated SSD FAST Cache content.
24   March 2016      Updated for 05.33.009.5.155 (Block OE), 8.1.9.155 (File OE), and 1.3.9.1.155 (Unisphere).
23   November 2015   Updated for VNX for File OE 8.1.8.132 and VNX for Block OE 05.33.008.5.132 for the VDM MetroSync Manager.
22   October 2015    Updated for VNX for File OE 8.1.8.121 with editorial updates.
21   October 2015    Updated for VNX for File OE 8.1.8.121.
20   August 2015     Updated for 05.33.008.5.119 (Block OE), 8.1.8.119 (File OE), and 1.3.8.1.0119 (Unisphere).
19   April 2015      Updated for 05.33.006.5.102 (Block OE) and 8.1.6.101 (File OE).
18   April 2015      Updated Block OE fixes from previous document version.
17   March 2015      Updated for 05.33.006.5.096 (Block OE), 8.1.6.96 (File OE), and 1.3.6.1.0096 (Unisphere).
16   December 2014   Updated for the release of VNX for Block 05.33.000.5.081.
15   December 2014   Updated the Data at Rest Encryption (D@RE) feature description.
14   December 2014   Editorial update.
13   November 2014   Updated fixes and known issues for release: 05.33.000.5.079 (Block), 8.1.3.79 (File).
12   October 2014    Editorial update.
11   October 2014    Updated for a security advisory.
10   September 2014  The known issues list has been updated.
09   September 2014  Updated software revision numbers: 05.33.000.5.074 (Block).
08   July 2014       Updated for release: 05.33.000.5.072 (Block), 8.1.3.72 (File), 1.3.3.1.0072-1 (Unisphere).
06   March 2014      Updated fixes from previous document version.
05   February 2014   Updated for release: 05.33.000.5.051 (Block), 8.1.2.51 (File), 1.3.2.1.0051 (Unisphere).
04   January 2014    Changed the symptom description for AR607962 in the Fixed problems section under VNX Operating Environment for Block 05.33.000.5.038.
03   January 2014    Updated for release 05.33.000.5.038 (Block).
02   November 2013   Updated for release 05.33.000.5.035 (Block).
01   October 2013    Initial release of 05.33.000.5.034 (Block), 8.1.1.33 (File), 1.3.1.1.0033 (Unisphere), 1.3.1.1.0033 (ESRS IP Client), 8.1.1.33 (VIA), and 1.3.1.1.0033 (USM).

These release notes contain supplemental information about:

- EMC VNX Operating Environment (OE) for Block
- EMC VNX Operating Environment (OE) for File
  - System Management
  - Unisphere UI
  - Platform
  - Security
  - Replication
  - CIFS
  - ESRS (Control Station)
  - RecoverPoint FS
  - Migration
- Unisphere
  - Unisphere Analyzer
  - Unisphere Host software, CLI, and Utilities
  - Unisphere Quality of Service (QoS) Manager
  - Virtual Provisioning
  - FAST Cache and FAST VP
- EMC SnapView for VNX OE for Block
  - Admsnap
  - Admhost
- EMC SAN Copy for VNX OE for Block
- EMC MirrorView/Asynchronous and MirrorView/Synchronous for VNX OE for Block
- EMC Serviceability for VNX OE for Block, VNX OE for File, and Unisphere
  - Unisphere Service Manager (USM)
  - VNX Installation Assistant (VIA)
  - EMC Secure Remote Support (ESRS) IP Client
- EMC Snapshots for VNX OE for Block
  - SnapCLI
- Virtualization for EMC VNX for:
  - VNX OE for Block
  - VNX OE for File

Software media, organization, and files

The VNX OE for Block version 05.33.009.5.184 and the VNX OE for File version 8.1.9.184 are available in their respective upgrade bundles.

To upgrade the VNX OE for Block or the VNX OE for File, use the Unisphere Service Manager (USM) System Software wizards. For the latest version of USM, go to EMC Online Support at support.EMC.com and choose Support by Product > VNX2 Series > Downloads.

Note: You must perform VNX File OE upgrades before performing upgrades for any attached VNX Block systems.

You can also obtain updated versions of the following software at support.EMC.com:

- Unisphere version 1.3.9.1.184
- Unisphere Host Agent version 1.3.9.1.0184-1
- VNX for Block CLI version 7.33.9.1.184
- Unisphere Storage System Initialization Utility version 1.3.9.1.0184-1
- Unisphere Server Utility version 1.3.9.1.0184-1
- Unisphere Service Manager version 1.3.9.1.0184
- VNX Installation Assistant version 8.1.9.184
- EMC Secure Remote Support IP Client version 1.3.9.1.0184-1

Java support
The following 32-bit Java platforms are verified by EMC and compatible for use with Unisphere, Unisphere Service Manager (USM), and VNX Installation Assistant (VIA):
- Oracle Standard Edition 1.7 up to Update 75
- Oracle Standard Edition 1.8 up to Update 25
The 32-bit JRE is required, even on 64-bit systems. JRE Standard Edition 1.6 is not recommended because Oracle has stopped support for that edition.
IMPORTANT: Some new features or changes may not take effect in the Unisphere GUI after an upgrade. To avoid this, EMC recommends clearing your Java cache after an upgrade.
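
To confirm which JRE a client system will use, a quick check from a command prompt (a minimal sketch; it assumes java is on the PATH, and the exact output varies by platform and build):

    java -version

A 64-bit JRE identifies itself with "64-Bit" in the last line of this output; if that string is present, install a 32-bit JRE alongside it for use with the management tools.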
Firmware
The following firmware variants are included with this release:
- If a lower revision is installed, the firmware is automatically upgraded to the revision contained in this version.
- If a higher revision is running, the firmware is not downgraded to the revision contained in this version.

Enclosure type             Current firmware version
15 Drive 3U DAE (DAE6S)    1.55
25 Drive 2U DAE (DAE5S)    1.55
60 Drive 4U DAE (DAE7S)    8.08
120 Drive 3U DAE (DAE8S)   15.12

Platform   BIOS    BMC FW   POST
MT         33.60   25.50    69.30
JF         33.51   25.50    61.00

Online access to VNX installation documents


VNX installation manuals are available exclusively online. EMC recommends downloading the latest
version of the documentation from the VNX product page at support.EMC.com.
Security information
For information on an individual technical or security advisory, go to the EMC Online Support
website and search by using the ESA number or EMC Security Advisories as the keyword. For a list
of EMC security advisories in the current year, refer to EMC Security Advisories All EMC Products
Current Year. For a list of older ESAs, refer to EMC Security Advisories All EMC Products Archive.
Set up the My Advisory Alerts option to receive alerts for EMC Technical Advisories (ETAs) and EMC
Security Advisories (ESAs) to stay informed of critical issues and prevent potential impact to your
environment. Go to Account Settings and Preferences, type the name of an individual product, click
to select it from the list, and then click Add Alert. For the individual product or All EMC Products,
select ETAs and/or ESAs.
VDM synchronous replication operations
When you need to perform VDM synchronous replication operations, use VDM MetroSync,
which uses MirrorView replication.

New features and enhancements


Unisphere support for NFS exports
A storage administrator can now view and manage all VDM-level exports in Unisphere, for both CIFS
and NFS. Earlier functionality allowed only CIFS to be viewed and managed.
VPLEX/VNX2 interoperability
VNX2 family arrays will clear REPORT LUNS DATA HAS CHANGED unit attentions as required by
the SCSI Architecture Model SAM-5r21, Section 5.14. This enhancement fixes an interoperability
issue with VPLEX appliances where adding one or more LUNs to, or removing one or more LUNs
from, the VPLEX storage group may have resulted in VPLEX volumes going offline.


Fixed problems

VNX Operating Environment for Block 05.33.009.5.184, VNX Operating Environment for File 8.1.9.184, EMC Unisphere 1.3.9.1.184

Category: VNX Block OE. Platform: VNX for Block. Severity: High. Tracking: 835959.
Symptom: Brief (less than 45 seconds) failures of multiple drives may cause LUNs to go offline.
Fix: The code was updated to allow the system to suppress drive failures for up to 45 seconds before taking LUNs offline.
Fixed in: 05.33.009.5.184. Exists in versions: All prior 05.33 versions.

Category: VNX Block OE. Platform: VNX for Block. Severity: High. Tracking: 79612508/828617.
Symptom: After an NDU, during the LCC firmware upgrade, drives may fail, enclosures may go offline, or the LCC firmware upgrade may fail.
Fix: The code was updated to avoid any LCC upgrade while one is in progress on the peer SP.
Fixed in: 05.33.009.5.184. Exists in versions: All prior 05.33 versions.

Category: VNX Block OE. Platform: VNX for Block. Severity: High. Tracking: 79147712/828510.
Symptom: During an NDU, a single storage processor bugcheck (0000001E) occurred when the peer SP was being upgraded.
Fix: Corrected memory allocation during I/O abort handling.
Fixed in: 05.33.009.5.184. Exists in versions: All prior 05.33 versions.

Category: VNX Block OE. Platform: VNX for Block. Severity: High. Tracking: 78491320/821326.
Symptom: Storage processor bugchecks (0x05900000) occurred due to internal resource starvation.
Fix: Corrected resource allocation code.
Fixed in: 05.33.009.5.184. Exists in versions: 05.33.009.5.155.

Category: VNX Block OE. Platform: VNX for Block. Severity: High. Tracking: 78799806/820737.
Symptom: During a period of backend instability, multiple internal drive health checks were initiated on the same RAID group, causing the RAID group to be broken temporarily.
Fix: The code was updated to allow only one internal drive health check to be performed at a time for each SP.
Fixed in: 05.33.009.5.184. Exists in versions: All prior 05.33 versions.

Category: VNX Block OE. Platform: VNX for Block. Severity: High. Tracking: 78220474/817161.
Symptom: When one SP bugchecked during configuration changes, if the peer SP had low memory or was out of memory, pools and pool LUNs may be erased from persistent storage.
Fix: The code was updated to prevent the SP bugcheck.
Fixed in: 05.33.009.5.184. Exists in versions: All prior 05.33 versions.

Category: VNX Block OE. Platform: VNX for Block. Severity: High. Tracking: 77523988/806755.
Symptom: A RAID group rebuild stopped while in process because of media errors. This caused a portion of the RAID group to be degraded, which impacted performance.
Fix: The code was updated to add a check for media errors to the RAID-6 code, allowing the rebuild to continue.
Fixed in: 05.33.009.5.184. Exists in versions: All prior 05.33 versions.


Category: VNX Block OE. Platform: VNX for Block. Severity: High. Tracking: 77001192/801467.
Symptom: A drive that was being proactively spared failed at a time when other drives in the RAID group were already faulted (two other drives in RAID-6, and one in RAID-5). However, the RAID group was not marked as broken. This could lead to unexpected results, including, but not limited to, an SP bugcheck.
Fix: The code was updated to mark the RAID group as broken in this situation.
Fixed in: 05.33.009.5.184. Exists in versions: All prior 05.33 versions.

Category: VNX Block OE. Platform: VNX for Block. Severity: High. Tracking: 798952.
Symptom: LUNs went offline after power was resumed from a failure, and one of the drives came online later than the others.
Fix: Improved drive state change handling for broken RAID groups to cover this situation.
Fixed in: 05.33.009.5.184. Exists in versions: All prior 05.33 versions.

Category: VNX Block OE. Platform: VNX for Block. Severity: High. Tracking: 798649.
Symptom: After the system was upgraded to version R33.155 and the LCC firmware was being upgraded, multiple drives reported as faulted.
Fix: The code was updated to treat consecutive link errors as a single event, allowing the drives to not be faulted too quickly.
Fixed in: 05.33.009.5.184. Exists in versions: 05.33.009.5.155.

Category: VNX Block OE. Platform: VNX for Block. Severity: High. Tracking: 795742.
Symptom: A RAID group got stuck in the degraded state when a drive in a redundant RAID group experienced timeouts (which initiated an internal drive health check and rebuild logging).
Fix: Fixed a timing window that prevented the timeout error from being cleared correctly.
Fixed in: 05.33.009.5.184. Exists in versions: All prior 05.33 versions.

Category: VNX Block OE. Platform: VNX for Block. Severity: High. Tracking: 791238.
Symptom: While an SP was rebooting, the peer SP was manually rebooted, causing the first SP to crash and reboot.
Fix: The SP initialization code was updated.
Fixed in: 05.33.009.5.184. Exists in versions: All prior 05.33 versions.

Category: VNX Block OE. Platform: VNX for Block. Severity: Medium. Tracking: 79119078/828579.
Symptom: Enclosures may go offline or become faulted as a result of a race condition between Drive and Enclosure handling during an LCC upgrade.
Fix: The Enclosure code was updated to process Enclosure objects before Drive objects.
Fixed in: 05.33.009.5.184. Exists in versions: All prior 05.33 versions.

Category: VNX Block OE. Platform: VNX for Block. Severity: Medium. Tracking: 78953658/822547.
Symptom: A single storage processor bugcheck occurred when the SAS controller firmware crashed while it was processing SAS topology changes.
Fix: The code was updated to resolve this issue.
Fixed in: 05.33.009.5.184. Exists in versions: All prior 05.33 versions.

Category: VNX Block OE. Platform: VNX for Block. Severity: Medium. Tracking: 78989068/821891.
Symptom: A direct LUN (DLU) might become inaccessible because a snapshot was created on the DLU during a relocation, or a snapshot was created immediately after the last snapshot on the DLU was destroyed.
Fix: The code was updated to resolve this issue.
Fixed in: 05.33.009.5.184. Exists in versions: All prior 05.33 versions.


Category: VNX Block OE. Platform: VNX for Block. Severity: Medium. Tracking: 819633.
Symptom: A 15-Drive 3U Disk Array Enclosure (DAE6S) may shut down after a power supply fuse blows.
Fix: This version includes updated Disk Array Enclosure firmware for the DAE6S to address the power supply issue. KnowledgeBase ID: 476599.
Fixed in: 05.33.009.5.184. Exists in versions: All prior 05.33 versions.

Category: VNX Block OE. Platform: VNX for Block. Severity: Medium. Tracking: 817366.
Symptom: A LUN became disabled on both SPs after the LUN recovered.
Fix: The code was updated to resolve this issue.
Fixed in: 05.33.009.5.184. Exists in versions: All prior 05.33 versions.

Category: VNX Block OE. Platform: VNX for Block. Severity: Medium. Tracking: 72382254/813574.
Symptom: Various issues occurred due to incorrect internal memory allocation handling during cancellation of I/O requests.
Fix: The code was updated to resolve this issue.
Fixed in: 05.33.009.5.184. Exists in versions: All prior 05.33 versions.

Category: VNX Block OE. Platform: VNX for Block. Severity: Medium. Tracking: 813153.
Symptom: The BBU fault LED did not clear after the BBU was replaced.
Fix: The code was updated to properly clear this fault condition.
Fixed in: 05.33.009.5.184. Exists in versions: All prior 05.33 versions.

Category: VNX Block OE. Platform: VNX for Block. Severity: Medium. Tracking: 77635160/812007.
Symptom: After converting from a VNX5600 or VNX5800, a VNX7600 had incorrect LUN limits.
Fix: The code was updated to resolve this issue.
Fixed in: 05.33.009.5.184. Exists in versions: All prior 05.33 versions.

Category: VNX Block OE. Platform: VNX for Block. Severity: Medium. Tracking: 77870966/810992.
Symptom: Could not perform clone operations (such as AddClone, Sync, and Reverse-Sync) on deduplication-enabled LUNs due to insufficient pool space.
Fix: The code was updated to resolve this issue.
Fixed in: 05.33.009.5.184. Exists in versions: All prior 05.33 versions.

Category: VNX Block OE. Platform: VNX for Block. Severity: Medium. Tracking: 801555.
Symptom: Both SPs rebooted from bugcheck code E117B264.
Fix: The code was updated to remove a race condition.
Fixed in: 05.33.009.5.184. Exists in versions: All prior 05.33 versions.

Category: VNX Block OE. Platform: VNX for Block. Severity: Medium. Tracking: 76248726/798707.
Symptom: Access to LUNs on systems with encryption enabled failed after performing a conversion that removed but did not restore KEK-KEKs.
Fix: The conversion code was updated to preserve KEK-KEKs.
Fixed in: 05.33.009.5.184. Exists in versions: All prior 05.33 versions.


Category: VNX Block OE. Platform: VNX for Block. Severity: Medium. Tracking: 75857678/795850.
Symptom: Users could not attach to snapshots they created in order to allow I/O to and from the snapshots.
Fix: The code was updated to resolve this issue.
Fixed in: 05.33.009.5.184. Exists in versions: All prior 05.33 versions.

Category: VNX Block OE. Platform: VNX for Block. Severity: Medium. Tracking: 794838.
Symptom: A race condition in auto tiering may occur during slice relocation on a DLU and result in data unavailability or an SP bugcheck.
Fix: The code was updated to properly track the I/O status of a pool LUN.
Fixed in: 05.33.009.5.184. Exists in versions: All prior 05.33 versions.

Category: VNX Block OE. Platform: VNX for Block. Severity: Medium. Tracking: 69090538/778872.
Symptom: During a period of heavy I/O, a storage processor bugcheck (0xe117b264) occurred when hosts cancelled I/O during data movement between LUNs (such as that initiated by FAST Cache, migration, and so on).
Fix: The code was updated to fix a race condition.
Fixed in: 05.33.009.5.184. Exists in versions: All prior 05.33 versions.

Category: VNX Block OE. Platform: VNX for Block. Severity: Medium. Tracking: 73944754/774563.
Symptom: SP reboots may occur if some hardware components fail to respond for 180 seconds.
Fix: The code was updated to lower a hardware timer from 180 seconds to 30 seconds. This allows at least three retries before the SP is rebooted.
Fixed in: 05.33.009.5.184. Exists in versions: All prior 05.33 versions.

Category: VNX Block OE. Platform: VNX for Block. Severity: Medium. Tracking: 74260452/774197.
Symptom: When FAST Cache was enabled, an NDU failed after a storage processor bugchecked multiple times.
Fix: Updated the FAST Cache code to remove a timing issue during its initialization. This prevents the issue with synchronization between storage processors.
Fixed in: 05.33.009.5.184. Exists in versions: All prior 05.33 versions.

Category: VNX Block OE. Platform: VNX for Block. Severity: Medium. Tracking: 73271614/766108.
Symptom: A single SP bugchecked and rebooted. Several different bugcheck types were possible, depending on the sequence of events.
Fix: The SP cache code was updated to fix a race condition.
Fixed in: 05.33.009.5.184. Exists in versions: All prior 05.33 versions.

Category: VNX Block OE. Platform: VNX for Block. Severity: Medium. Tracking: 71437626/734107.
Symptom: After a Block OE upgrade, if the RecoveryImage or UtilityPartition package was installed within one hour, the peer SP might not upgrade the LCC firmware. The firmware version on the SPs would be different.
Fix: Updated the NDU code to skip the abort of the firmware upgrade on the peer when installing the RecoveryImage and UtilityPartition packages.
Fixed in: 05.33.009.5.184. Exists in versions: All prior 05.33 versions.

Category: VNX Block OE. Platform: VNX for Block. Severity: Low. Tracking: 819434.
Symptom: A bugcheck (0x000000D1) occurred in some rare instances where the SAS backend experienced a hardware issue.
Fix: The SAS Driver code was updated to recover resources, reset the controller, and return the controller to an operational state. This prevents the SP from bugchecking.
Fixed in: 05.33.009.5.184. Exists in versions: All prior 05.33 versions.

Category: VNX Block OE. Platform: VNX for Block. Severity: Low. Tracking: 76975134/804021.
Symptom: In rare cases, an SP bugcheck may occur because a thread deadlock could occur in a background job responsible for free space reclamation from LUNs in a pool.
Fix: The code was updated to remove the thread deadlock potential.
Fixed in: 05.33.009.5.184. Exists in versions: All prior 05.33 versions.

Category: VNX Block OE. Platform: VNX for Block. Severity: Low. Tracking: 798651.
Symptom: A Samsung SAS Flash 3 drive failed to come back online after a very brief (less than 3 seconds) power down event to an enclosure.
Fix: Fixed in drive firmware.
Fixed in: Samsung SAS Flash 3 drive firmware revision EQP6 or later. Exists in versions: All Samsung SAS Flash 3 drive firmware revisions earlier than EQP6.

Category: VNX Block OE, CBFS. Platform: VNX for Block. Severity: High. Tracking: 825289.
Symptom: Failover can be delayed during planned storage processor reboot or shutdown (including during NDU), possibly causing temporary loss of access to data.
Fix: The code was updated to resolve this issue.
Fixed in: 05.33.009.5.184. Exists in versions: 05.33.009.5.155.

Category: VNX Block OE, MirrorView. Platform: VNX for Block. Severity: Medium. Tracking: 77870966/814187.
Symptom: Add or sync mirror operations might fail on deduplication-enabled LUNs that have insufficient pool space.
Fix: The code was updated to correctly determine the LUN size and execute add and sync mirror operations on deduplication-enabled LUNs.
Fixed in: 05.33.009.5.184. Exists in versions: All prior 05.33 versions.

Category: VNX File OE. Platform: VNX for File. Severity: High. Tracking: 78640888/819977.
Symptom: The Data Mover experienced an Invalid Opcode exception panic. A CIFS client was performing a large read.
Fix: The code was updated to resolve this issue.
Fixed in: 8.1.9.184. Exists in versions: 8.1.9.155.

Category: VNX File OE. Platform: VNX for File. Severity: High. Tracking: 78374138/817807.
Symptom: An abort operation on an NDMP two-way backup hung, causing the backup threads to stay in a hung state. This eventually led to a situation where backups were no longer possible on the Data Mover.
Fix: The code was updated to resolve this issue.
Fixed in: 8.1.9.184. Exists in versions: 8.1.9.155.

Category: VNX File OE. Platform: VNX for File. Severity: High. Tracking: 78331038/816632.
Symptom: The Data Mover was using NFSv3 file locks (NLM). It experienced a Page Fault Interrupt panic with "Virt ADDRESS: 0x0000839713 Err code: 0 Target addr: 0x00000000" in the routine lockd_buildGrantedRequest().
Fix: This issue was resolved by solving the cause of the panic, allowing NLM to grant a callback when the node is detached.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.


Category: VNX File OE. Platform: VNX for File. Severity: High. Tracking: 78422692/816217.
Symptom: A file was offline due to being deduplicated. Writes to the file were buffered while it was offline. When the file came back online, the thread deadlocked itself, causing other blocked SMB and NFS threads. Messages similar to "Service:CIFS Pool:SMB2 BLOCKED for 405 seconds: Server operations may be impacted." were seen in the sys_log.
Fix: The code was updated to fix the deadlock which occurs when cached writes are flushed to an offline file.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.

Category: VNX File OE. Platform: VNX for File. Severity: High. Tracking: 73685670/814454.
Symptom: The Data Mover was not replying to CIFS or NFS clients, and SMB threads on the Data Mover were blocked. A DHSM connection was being modified.
Fix: The code was updated to prevent the issue.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.

Category: VNX File OE. Platform: VNX for File. Severity: High. Tracking: 73005388/812664.
Symptom: Because of periodic Data Mover blocked threads, the Data Mover did not respond to any command.
Fix: The code was updated to resolve this issue.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.

Category: VNX File OE. Platform: VNX for File. Severity: High. Tracking: 77774706/810965.
Symptom: A variety of symptoms were seen when mounting a file system with FLR enabled, including Data Mover panics and file system hangs.
Fix: The code was updated to fix a logic error.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.

Category: VNX File OE. Platform: VNX for File. Severity: High. Tracking: 77205588/804314.
Symptom: Some SMB threads were blocked in File_LocksData::checkOplock() or File_LocksData::checkConflictWithRangeLock().
Fix: The code was updated to fix a race condition.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.

Category: VNX File OE. Platform: VNX for File. Severity: High. Tracking: 73356688/799234.
Symptom: One or more NFSv3 clients reached the maximum number of file locks allowed. The clients could not write to the NFS file system. There were messages similar to the following in the sys_log file: "LOCK: 3: Client x.x.x.x(NLM) can't be granted new range locks, it already owns 30000."
Fix: This issue was resolved by detecting duplicate blocked requests and removing granted locks if the callback is denied.
Fixed in: 8.1.9.184. Exists in versions: 8.1.9.155, 8.1.8.132, 8.1.8.121, 8.1.8.119, 8.1.6.101, 8.1.6.96, 8.1.3.79, 8.1.3.72.


Category: VNX File OE. Platform: VNX for File. Severity: High. Tracking: 76620428/798681.
Symptom: The Control Station did not send a CallHome when multiple Data Movers panicked at the same time due to an issue with the SPs.
Fix: The Control Station code was updated to send a CallHome for all Data Mover panics.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.

Category: VNX File OE. Platform: VNX for File. Severity: High. Tracking: 76293352/796606.
Symptom: The hidden parameter rpcusesplitmsg was enabled on the Data Mover. Later, the Data Mover stopped responding to NFS writes.
Fix: The code was updated to fix the RPC issue.
Fixed in: 8.1.9.184. Exists in versions: 8.1.9.155, 8.1.8.132, 8.1.8.121, 8.1.8.119, 8.1.6.101, 8.1.6.96.

Category: VNX File OE. Platform: VNX for File. Severity: High. Tracking: 76198338/794533.
Symptom: The Data Mover experienced a Page Fault Interrupt panic in the routine DP_RepSecondary::sendPostEventToVer().
Fix: The code was updated to resolve this issue.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.

Category: VNX File OE. Platform: VNX for File. Severity: High. Tracking: 74094606/767230.
Symptom: SMB threads were blocked on the Data Mover. This caused users on the client systems to experience timeouts when attempting to access any of the CIFS shares on the Data Mover. Messages similar to "SMB2 BLOCKED for 412 seconds: Server operations may be impacted." were seen in the sys_log.
Fix: The code was updated to resolve this issue.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.

Category: VNX File OE. Platform: VNX for File. Severity: High. Tracking: 72421770/745772.
Symptom: The Data Mover experienced an "isAValidIndBlkBuf: bad bn" panic while deduplication was in process on a file system.
Fix: The code was updated to fix an issue related to offline inodes.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.

Category: VNX File OE. Platform: VNX for File. Severity: High. Tracking: 71812052/736183.
Symptom: Under some circumstances, file system metadata was not being flushed often enough on the Data Mover. This led to a Data Mover "NOT reducing dirty list_" panic.
Fix: The issue was resolved by flushing file system metadata in a timely manner.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.


Category: VNX File OE. Platform: VNX for File. Severity: High. Tracking: 51266376/645057.
Symptom: A file system with deduplication enabled was nearly full. The Data Mover experienced an "alloc failed: counters out of sync" panic.
Fix: This issue was resolved by flushing the pending I/O list before bringing an offline file online, so that the counters used for calculating space usage are synchronized.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.

Category: VNX File OE. Platform: VNX for File. Severity: Medium. Tracking: 75112000/809286.
Symptom: A RecoverPoint system was incorrectly configured with improper HLU to ALU mapping (for example, a data LUN had an HLU that was less than 16). The DisasterRecovery (DR) initialization operation did not detect the misconfiguration. The DR failover detected the misconfiguration and failed to fail over.
Fix: This issue was resolved by displaying a warning message during initialization if a LUN is configured with an HLU that is not allowed for a data LUN. If this warning is received for a data LUN, the HLU for that LUN should be changed to a number greater than 15. If this warning is received for a LUN that is not a data LUN, it can be safely ignored.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.
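
For context, the HLU is assigned when a LUN is added to a storage group. A minimal sketch of adding a data LUN with an HLU greater than 15 using Navisphere Secure CLI (the SP address, storage group name, and LUN numbers below are hypothetical placeholders):

    naviseccli -h 10.1.1.10 storagegroup -addhlu -gname RP_Gateway_SG -hlu 16 -alu 42

Moving an existing data LUN to an allowed HLU requires removing it from the storage group (-removehlu) and re-adding it with the new HLU, which is disruptive to hosts using that LUN.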

Category: VNX File OE. Platform: VNX for File. Severity: Medium. Tracking: 63911682/807186.
Symptom: With deduplication enabled, an FLR file system could not be unmounted because the operation hung.
Fix: The code was updated to fix a logic error.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.

Category: VNX File OE. Platform: VNX for File. Severity: Medium. Tracking: 76730694/801351.
Symptom: In IPv6 environments, an error occurred when the nas_migrate command was run with the -dr option.
Fix: The nas_migrate code was updated to allow IPv6 addresses, IPv6 lists, and IPv4/IPv6 mixed lists to be parsed correctly.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.

Category: VNX File OE. Platform: VNX for File. Severity: Medium. Tracking: 76620428/799563.
Symptom: The CallHome operation failed from UDoctor.
Fix: The UDoctor code was updated so that event 0x76008106 is allowed to initiate a CallHome.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.

Category: VNX File OE. Platform: VNX for File. Severity: Medium. Tracking: 76595382/798940.
Symptom: Messages similar to "mcd_helper: failed to set low-space threshold on <name>" were seen in the /var/log/messages file even though setting a low-space threshold is not valid for the VNX hardware configuration.
Fix: The script was updated to resolve this issue.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.

Category: VNX File OE. Platform: VNX for File. Severity: Medium. Tracking: 76197150/794195.
Symptom: There were many VDMs defined on the VNX. The server_stat command caused the Data Mover to panic with "Page Fault Interrupt" in the routine VDM_StatSessionListList::findSessionList().
Fix: The code was updated to resolve this issue.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.


Category: VNX File OE. Platform: VNX for File. Severity: Medium. Tracking: 75811178/792101.
Symptom: There was a high load on the Control Station with no obvious cause. This caused the Control Station to respond to commands very slowly. A server_stats session was in process and there were VDMs defined on the system.
Fix: This issue was resolved by changing the server_stats code.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.

Category: VNX File OE. Platform: VNX for File. Severity: Medium. Tracking: 58342356/791958.
Symptom: The Data Mover experienced a SYSTEM WATCHDOG panic in Mem_RuntimeAlloc::allocVMextFindPages. This occurred with a high level of NFS over TCP on a network interface with a large MTU set (for example, mtu=9000). The fix was included in a previous release, but was disabled by default by hidden parameters.
Fix: The code was updated to enable the hidden parameters, which enables the fix by default.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.

Category: VNX File OE. Platform: VNX for File. Severity: Medium. Tracking: 73063620/781762.
Symptom: On Windows clients with some third-party applications installed, a file cut and paste operation from one location to another does not prompt the user for a file overwrite if the file already exists. In this situation, the operation seems to be successful, but nothing is actually moved or renamed.
Fix: The cut and paste operation now prompts the user to overwrite files. The operation performed is as requested by the user's response to the overwrite message.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.

Category: VNX File OE, CIFS. Platform: VNX for File. Severity: High. Tracking: 816581.
Symptom: The Data Mover experienced a panic while the Viruschecker was enabled. The routine AppLibNT_updateCEPP appeared in the panic backtrace.
Fix: The code was updated to resolve this issue.
Fixed in: 8.1.9.184. Exists in versions: 8.1.9.155.

Category: VNX File OE, CIFS. Platform: VNX for File. Severity: High. Tracking: 76988178/815045.
Symptom: Intermittently, UNIX users were denied temporary access to their NFS exports when the Active Directory service did not provide mapping.
Fix: The Data Mover code was updated to resolve the Active Directory issue.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.

Category: VNX File OE, CIFS. Platform: VNX for File. Severity: High. Tracking: 74740808/796825.
Symptom: A Page Fault Interrupt panic occurred in the sddlParser::resolveNames() routine when running the ACLGPOS command for a standalone server.
Fix: The code was updated to refuse ACLGPOS commands for standalone servers.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.

Category: VNX File OE, CIFS. Platform: VNX for File. Severity: Medium. Tracking: 809207.
Symptom: An application on the Windows client was using the MSDN API GetCompressedFileSize function to request the size of a compressed file on the VNX. For a compressed file that did not have the sparse attribute set, the VNX returned the size of the original uncompressed file, rather than the size of the compressed file. This caused a variety of symptoms on the client, depending on how the application was using the size information.
Fix: The code was updated to resolve this issue.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.

Category: VNX File OE, CIFS. Platform: VNX for File. Severity: Medium. Tracking: 72902326/799135.
Symptom: Confusing messages similar to "SPN mismatch for the server 'aaa.bbb.com is possible'" were seen in the server log. They are a proactive warning that there might be Service Principal Names (SPNs) in use that should be configured in the Active Directory, but the messages do not display the SPN that was not found in the AD. They also do not specify that the SPN could be missing rather than a mismatch.
Fix: The message was changed to: "Possible missing SPN detected. Client connected to CIFS server 'aaa.bbb.com' using name 'spnname.ddd.com'. See the server_cifs -o audit command output and the server_cifs -setspn command for more details."
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.
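
The audit referenced by the new message can be run from the Control Station; a minimal sketch, assuming a CIFS server hosted on a Data Mover named server_2 (option spellings can vary by release, so check the server_cifs man page):

    server_cifs server_2 -o audit

Compare the names clients actually use against the SPNs registered in Active Directory, then register any missing name using the server_cifs -setspn options for your release.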

Category: VNX File OE, Install, Configure. Platform: VNX for File. Severity: Medium. Tracking: 68225782/700859.
Symptom: If the system had HTTPS configured for the nas_connecthome command and an upgrade was performed, the system either missed the HTTPS setting or missed the entire nas_connecthome configuration.
Fix: The code was updated to restore the entire nas_connecthome configuration during an upgrade.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.

Category: VNX File OE, NFS. Platform: VNX for File. Severity: High. Tracking: 76888636/817548.
Symptom: An NFSv4 client read the root directory of a VDM. This caused a deadlock, and eventually all NFS activity was blocked on the Data Mover.
Fix: The code was updated to resolve the deadlock condition.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.

Category: VNX File OE, NFS. Platform: VNX for File. Severity: High. Tracking: 794704.
Symptom: With VAAI, if a VMDK file was converted to a VERSION file, it could not be read with NFSv4.
Fix: The code was updated to read VMDK files with NFSv4, even if they are converted to a VERSION file.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.


Category: VNX File OE, NFS. Platform: VNX for File. Severity: Medium. Tracking: 74262086/815877.
Symptom: The sqlite3 application reported "disk I/O error". A network trace identified an NFSv4 BAD_STATEID error that was related to a lock state being revoked in error.
Fix: The code was updated to resolve this issue.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.

Category: VNX File OE, NFS. Platform: VNX for File. Severity: Medium. Tracking: 75213994/815871.
Symptom: An NFS 4.1 client was not able to mount file systems located in different VDMs simultaneously.
Fix: The code was updated to allow a client to mount multiple file systems on different VDMs.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.
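
After the fix, a Linux client can mount exports from two different VDMs at the same time; a sketch (hostnames, export paths, and mount points here are hypothetical):

    mount -t nfs -o vers=4.1 vdm1.example.com:/export/fs1 /mnt/fs1
    mount -t nfs -o vers=4.1 vdm2.example.com:/export/fs2 /mnt/fs2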

Category: VNX File OE, NFS. Platform: VNX for File. Severity: Medium. Tracking: 789776.
Symptom: When using NFS V4.1, VAAI fast cloned files could not be read on the ESX host.
Fix: Updated the NFS V4.1 protocol to report fast clones as regular files.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.

Category: VNX File OE, ReplicationV2. Platform: VNX for File. Severity: High. Tracking: 72505350/817183.
Symptom: A ReplicationV2 destination Data Mover did not detect a message corruption, and the Data Mover panicked.
Fix: The code was updated to resolve this issue.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.

Category: VNX File OE, SnapSure. Platform: VNX for File. Severity: High. Tracking: 836139.
Symptom: Data Mover panics with "GP exception. Virt ADDRESS: 0x000015666e. Err code: 0" error.
Fix: The code was updated to resolve this issue.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.

Category: VNX File OE, SnapSure. Platform: VNX for File. Severity: High. Tracking: 79742550/830238.
Symptom: Data Mover panics with messages similar to "free() called with invalid header - not pointing to valid free list."
Fix: The code was updated to resolve this issue.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.

Category: VNX File OE, SnapSure. Platform: VNX for File. Severity: High. Tracking: 73775698/814685.
Symptom: A SnapSure checkpoint was not deleted correctly and caused a Data Mover SYSTEM WATCHDOG panic.
Fix: This code was updated to resolve the issue.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.

Category: VNX File OE, SnapSure. Platform: VNX for File. Severity: High. Tracking: 814683.
Symptom: The system had SnapSure enabled. I/Os were slower than expected.
Fix: The code was updated to resolve this issue.
Fixed in: 8.1.9.184. Exists in versions: 8.1.9.155.


Category: VNX File OE, SnapSure. Platform: VNX for File. Severity: Medium. Tracking: 234567891/770367.
Symptom: Deleting the last SavVol checkpoint created a large amount of unnecessary log messages, filling the log files and causing earlier important information to be overwritten.
Fix: The code was updated to resolve the issue.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.

Category: VNX File OE, System Management. Platform: VNX for File. Severity: High. Tracking: 78926906/821332.
Symptom: The upgrade process disabled fixed-block deduplication of a mapped pool. This caused the clients of the mapped pool to use an excessive amount of storage.
Fix: The code was updated to resolve this issue.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.

Category: VNX File OE, System Management. Platform: VNX for File. Severity: High. Tracking: 70607876/732806.
Symptom: Automatic Data Mover failover failed with the error "failed to complete command."
Fix: The issue was resolved by improving the failover algorithm.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.

Category: VNX File OE, System Management. Platform: VNX for File. Severity: Medium. Tracking: 78915918/821672.
Symptom: The name of a remote storage system was changed, for example with the nas_storage rename command. Following this, all nas_cel syncrep commands for that remote storage system failed.
Fix: The code was updated to properly handle a storage rename.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.

Category: VNX File OE, System Management. Platform: VNX for File. Severity: Medium. Tracking: 77612720/810561.
Symptom: A "Restore checkpoint schedules on the destination failed" error occurred because a nas_migrate operation failed to restore checkpoint schedules on the destination VDM.
Fix: The code was updated to separate checkpoint schedules from reclaim schedules.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.

Category: VNX File OE, System Management. Platform: VNX for File. Severity: Medium. Tracking: 801924.
Symptom: A VDM synchronous replication relationship existed between two VNX systems. The active system also had a VDM replication (Replicator V2/RepV2) relationship with a third VNX system. The active system went down due to an unrelated reason. The attempt to use nas_syncrep failover to fail over to the standby system failed with the error: "Error 5005: Device or resource busy."
Fix: The code was updated to resolve this issue.
Fixed in: 8.1.9.184. Exists in versions: All prior 05.33 versions.

Category: VNX File OE, System Management. Platform: VNX for File. Severity: Medium. Tracking: 76599686/797510.
Symptom: The nas_volume -info -size -all command took over 30 minutes to complete.
Fix: The code was updated to achieve better performance without losing any functionality.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.


Category: VNX File OE, System Management. Platform: VNX for File. Severity: Medium. Tracking: 74765252/781997.
Symptom: The dbchk command failed with multiple errors similar to the following: "Error: Non Zero exit Status while running .Server_config for server_5."
Fix: The code was updated to refine user access permissions.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.

Category: VNX File OE, System Management. Platform: VNX for File. Severity: Medium. Tracking: 1243568790/773650.
Symptom: A Control Station's LDAP users and groups did not get disabled after reconfiguring the LDAP service connection to a different domain.
Fix: The code was updated to disable LDAP users and groups on the Control Station when connecting to a different domain.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.

Category: VNX File OE, System Management. Platform: VNX for File. Severity: Medium. Tracking: 72983448/766687.
Symptom: An LDAP user failed to log in to Unisphere with a User Principal Name.
Fix: The code was updated by correcting LDAP user-related logic in the login script.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.

Category: VNX File OE, System Management. Platform: VNX for File. Severity: Low. Tracking: 75568228/786945.
Symptom: The disableUser option was no longer a valid option and was removed from the admin_management.pl command. However, it was not removed from the usage error message, causing confusion. For example, the command /nas/http/webui/bin/admin_management.pl -disableUser <uid> properly displayed the usage error message, but the usage error message incorrectly showed disableUser as a valid option.
Fix: The code was updated to remove the obsolete option.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.

Category: VNX File OE, UFS. Platform: VNX for File. Severity: High. Tracking: 77205462/807579.
Symptom: A client system was attempting to create a file on an NFS file system on the VNX system. The VNX system took longer than the client expected to respond to the NFSv3 create request from the client system. This caused the client application to report an RPC timeout, or there was some other indication from the client system that it failed to receive a response to the NFSv3 create request in the expected timeframe.
Fix: The code was updated to resolve this issue.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.


Category: VNX File OE, UFS. Platform: VNX for File. Severity: Medium. Tracking: 75246402/785019.
Symptom: The server log was flooded with multiple "UFS: 6: hashalloc directly alloc failed" messages.
Fix: The code was updated to resolve this issue.
Fixed in: 8.1.9.184. Exists in versions: 8.1.9.155, 8.1.8.132, 8.1.8.121, 8.1.8.119, 8.1.6.101, 8.1.6.96.

Category: VNX File OE, VDM MetroSync Manager. Platform: VNX for File. Severity: Medium. Tracking: 817722.
Symptom: When both source side SPs reboot, if the nas_syncrep -Clean command was run, the VDM with the synchronous replication session, and the file systems on the VDM, could be deleted. Because of the SP reboot, the consistency group information could not be retrieved. An error should occur, but the synchronous replication session and the file systems on the VDM were deleted by mistake.
Fix: The code was updated to allow the nas_syncrep -Clean command to fail if the consistency group information cannot be retrieved.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.

Category: VNX File OE, VDM MetroSync Manager. Platform: VNX for File. Severity: High. Tracking: 817600.
Symptom: A synchronous replication reverse or failover operation failed after an "importing sync replica of NAS database" error occurred. After running a nas_syncrep -Clean -all command, the next synchronous replication reverse or failover operation failed.
Fix: The code was updated to resolve this issue.
Fixed in: 8.1.9.184. Exists in versions: All prior 8.1 versions.

Category: Unisphere. Platform: VNX for Block. Severity: Medium. Tracking: 841501.
Symptom: The ESRS IP Client / Management Utility's Add System always failed with "Error Processing connection request."
Fix: Fixed the ESRS IP Client / Management Utility's Add System so that it now works as expected.
Fixed in: 1.3.9.1.184. Exists in versions: All prior 1.3 versions.

Category: Unisphere. Platform: VNX for Block. Severity: Medium. Tracking: 77592216/807380.
Symptom: A user could not power off a VNX1 array by using VNX2 Unisphere.
Fix: The code was updated to resolve this issue.
Fixed in: 1.3.9.1.184. Exists in versions: All prior 1.3 versions.

Category: Unisphere. Platform: VNX for Block. Severity: Medium. Tracking: 76620428/805738.
Symptom: The CallHome operation failed to report a reboot of both SPs with a bugcheck.
Fix: The code was updated to resolve this issue.
Fixed in: 05.33.009.5.184. Exists in versions: All prior 1.3 versions.


Category: Unisphere. Platform: VNX for Block. Severity: Low. Tracking: 74995212/791420.
Symptom: A LUN Pool size was greater than the maximum parameter type value of 4,294,967,295 blocks. The LUN size was truncated to a small number, making the LUN Pool size smaller than the used LUN Pool size. The percentage usage of the Reserved LUN Pool was an invalid value.
Fix: The code was updated to increase the maximum value of the LUN pool size parameter type.
Fixed in: 05.33.009.5.184. Exists in versions: All prior 1.3 versions.

Category: Unisphere. Platform: VNX for Block. Severity: Low. Tracking: 71876700/741908.
Symptom: The MirrorView/S Attention state alert was not displayed in Unisphere's Dashboard.
Fix: The code was updated to add an alert for MirrorView/S in Unisphere's Dashboard.
Fixed in: 05.33.009.5.184. Exists in versions: All prior 1.3 versions.

Category: Unisphere, Analyzer. Platform: VNX for Block. Severity: High. Tracking: 74422332/775267.
Symptom: Analyzer could not be started.
Fix: The code was updated to resolve this issue.
Fixed in: 1.3.9.1.184. Exists in versions: All prior 1.3 versions.

Category: Unisphere, Analyzer, Block CLI. Platform: VNX for Block. Severity: Medium. Tracking: 76441094/807102.
Symptom: Could not run the naviseccli analyzer -archivedump command against more than one nar file in Linux.
Fix: The code was updated to resolve this issue.
Fixed in: 1.3.9.1.184. Exists in versions: All prior 1.3 versions.
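
For reference, a sketch of dumping more than one archive in a single invocation on a Linux management host (archive and output file names here are hypothetical, and the available switches vary by CLI release, so check the Unisphere CLI reference for naviseccli analyzer -archivedump):

    naviseccli analyzer -archivedump -data sp_a_1.nar sp_a_2.nar -out perfdata.csv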

Category: Unisphere, Block CLI. Platform: VNX for Block. Severity: High. Tracking: 791829.
Symptom: "The destination LUN is not available for migration" error occurred when trying to perform a migration operation by using naviseccli.
Fix: The code was updated to resolve this issue.
Fixed in: 1.3.9.1.184. Exists in versions: All prior 05.33 versions.
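
For context, LUN migration from the Secure CLI takes a source LUN, a destination LUN, and a rate; a minimal sketch (the SP address and LUN numbers are placeholders):

    naviseccli -h 10.1.1.10 migrate -start -source 12 -dest 73 -rate medium
    naviseccli -h 10.1.1.10 migrate -list

The second command reports migration progress. The destination LUN generally cannot be presented to hosts and must be at least as large as the source, which is the usual cause of the "not available for migration" message when it is not a product defect.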

Category: Unisphere, Block CLI. Platform: VNX for Block. Severity: Medium. Tracking: 78233834/76526568/816959.
Symptom: LUN migrations were leaving residual internal private LUNs.
Fix: The code was updated to resolve this issue.
Fixed in: 1.3.9.1.184. Exists in versions: All prior 1.3 versions.

Category: Unisphere, Block CLI. Platform: VNX for Block. Severity: Medium. Tracking: 64755754/686500/781866.
Symptom: The ManagementServer restarted intermittently.
Fix: The code was updated to resolve this issue.
Fixed in: 1.3.9.1.184. Exists in versions: All prior 1.3 versions.


Category: Unisphere, Block CLI. Platform: VNX for Block. Severity: Low. Tracking: 77133490/803817.
Symptom: A user could not specify a Customer Contact email address field that had a leading underscore or a leading dash.
Fix: The code was updated to resolve this issue.
Fixed in: 1.3.9.1.184. Exists in versions: All prior 1.3 versions.

Category: Unisphere, NQM, Block CLI. Platform: VNX for Block. Severity: Medium. Tracking: 72595066/807560.
Symptom: If a drive was faulted and a hot spare was activated, the NQM policy stopped and restarted on every poll cycle.
Fix: The code was updated to resolve this issue.
Fixed in: 1.3.9.1.184. Exists in versions: All prior 1.3 versions.

Category: Unisphere, NQM. Platform: VNX for Block. Severity: Medium. Tracking: 78713486/823234.
Symptom: After an upgrade, enabling QoS caused a high LUN response time.
Fix: The code was updated to resolve this issue.
Fixed in: 1.3.9.1.184. Exists in versions: All prior 1.3 versions.

Category: Unisphere, Unisphere Central. Platform: VNX for Block. Severity: Medium. Tracking: 72368842/789286, 75860206/798966.
Symptom: Unisphere Central incorrectly reported performance impact every day at 10:30 and 17:00 PST.
Fix: The code was updated to convert the Performance Timestamp to the correct time format.
Fixed in: 1.3.9.1.184. Exists in versions: All prior 1.3 versions.

Category: Unisphere, VDM MetroSync Manager. Platform: All. Severity: Medium. Tracking: 797255.
Symptom: If a Data Mover in a primary-site VNX has a standby Data Mover configured, and the standby Data Mover failed, VDM MetroSync Manager did not trigger a failover for network failures or file system failures.
Fix: The code was updated to improve the timeout processing.
Fixed in: 1.3.9.1.184. Exists in versions: All prior 1.3 versions.

Category: Unisphere, VDM MetroSync Manager. Platform: All. Severity: Medium. Tracking: 797250.
Symptom: A Data Mover in a primary-site VNX has a standby Data Mover configured. During a local Data Mover failover, if the file system service failed but the network interface was available, VDM MetroSync Manager may trigger a failover without waiting for a local Data Mover timeout.
Fix: The code was updated to resolve this issue.
Fixed in: 8.1.9.184. Exists in versions: All prior 1.3 versions; all prior 8.1 versions.

Category: USM. Platform: All. Severity: Medium. Tracking: 77044638/804774.
Symptom: "View System Config" in USM failed with the following error: "Error: Generating Config Report Could not capture the storage system configuration XML file from selected system."
Fix: The code was updated to resolve this issue.
Fixed in: 1.3.9.1.184. Exists in versions: All prior 1.3 versions.

Category: USM. Platform: All. Severity: Low. Tracking: 75939780/795398.
Symptom: The Check System Upgrade Readiness test reported an incorrect power supply of 110_220_op.
Fix: The code was updated to resolve this issue.
Fixed in: 1.3.9.1.184. Exists in versions: All prior 1.3 versions.

Fixed in previous releases

Category: VNX Block OE. Tracking: 71961938 / 745926. Fixed in: 05.33.009.5.155.
Description: On a 64-bit system, a user may find that a new repository location specified while installing USM does not take effect. The default location (C:\EMC\repository) is used instead.

Category: VNX Block OE. Tracking: 69309944 / 745344. Fixed in: 05.33.009.5.155.
Description: Coherency errors were reported with a configured FAST Cache after powering down the system.

Category: VNX Block OE. Tracking: 71746328 / 739480. Fixed in: 05.33.009.5.155.
Description: The read and write cache hit ratio showed a value larger than 100% in both the Unisphere GUI and the naviseccli.

Category: VNX Block OE. Tracking: 63726358 / 682360; 66010424 / 682364; 67624288 / 694624 / 682220. Fixed in: 05.33.009.5.155.
Description: Concurrent reads which are not aligned to 64K boundaries and reading from overlapped 64K areas experienced degraded performance due to read serialization. This issue was most noticeable when using iSCSI with TCP delayed ACK enabled.

Category: VNX Block OE. Tracking: 75056638 / 780445 / 742695. Fixed in: 05.33.009.5.155.
Description: When a peer SP was booting and requested FAST Cache clean/dirty status for the LUNs in the system, the thread that processed the peer-to-peer messaging completed the request before the thread that sent the message exited.

Category: VNX Block OE. Tracking: 68342174 / 742549. Fixed in: 05.33.009.5.155.
Description: When performing an NDU from USM, the user couldn't control which SP was primary or secondary.

Category: VNX Block OE. Tracking: 71829328 / 758565; 73794720 / 765110 / 750158; 75852870 / 790439; 76172048 / 793963; 76204854 / 794982; 76330100 / 795458; 77279582 / 804586; 77088816 / 806101; 76558198 / 806103; 77419972 / 807071. Fixed in: 05.33.009.5.155.
Description: A user reported that their Data Mover bugchecked. When a user configured FAST VP on DLUs, their I/O operations failed, reporting a device not ready error. If the I/O operations came from the Data Mover, a bugcheck occurred.

Category: VNX Block OE. Fixed in: 05.33.009.5.155.
Description: The storage processor returned the following error message: "EV_Agent::Process -- Outstream data xfer error. Err: EMULSocket::send()"

Category: VNX Block OE. Tracking: 72744294 / 756098. Fixed in: 05.33.009.5.155.
Description: A user was unable to delete a call home notification template when the Graphical User Interface (GUI) language was not set to English.

Category: VNX Block OE. Tracking: 69934054 / 761868. Fixed in: 05.33.009.5.155.
Description: A VNX snap restore/destroy operation failed and the user was presented with the following error: "The operation failed because a snapshot restore operation is in progress."

Category: VNX Block OE. Tracking: 72417248 / 749613. Fixed in: 05.33.009.5.155.
Description: A hardware exception was generated.

Category: VNX Block OE. Tracking: 73632390 / 766201. Fixed in: 05.33.009.5.155.
Description: The USM System Verification Wizard timed out on some block-only arrays whose Capture Configuration takes longer than 3 or 4 minutes.

Fixed in previous releases

Category

Description

Tracking

Fixed in version

VNX Block OE

If a system exceeds the limit for the maximum number of slots, the enclosures exceeding this limit will fail (as well as the drives within these enclosures). The drive faults are persisted, so they are not allowed back into the system without manual intervention. This usually happens when a new enclosure is added, which causes the slot count to increase above the maximum.
After a Storage Processor reboot, the MCR driver was slow to report that one or more of the LUNs comprising the FAST Cache were online. FAST Cache restarted its load of cache pages but put them into a state that was not in sync with the peer SP. This out-of-sync condition could cause a single SP bugcheck.

60659572 / 619114

05.33.009.5.155

61132026 / 627657
68372910 / 698048
72119400 / 743317
73495202 / 760229
73947128 / 767034
74283170 / 773365
74703166 / 781571
65651032 / 670754

05.33.009.5.155

VNX Block OE

VNX Block OE

The Management Server restarted because more than one naviseccli getall command was running at the same time.
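For reference, a minimal sketch of the overlapping polling that triggered the restart. The SP address is a placeholder, and the -lun scoping flag is an assumption; check the naviseccli reference for your release.

# Two getall queries issued against the same SP from separate sessions at the same time
naviseccli -h 10.10.10.10 getall
naviseccli -h 10.10.10.10 getall -lun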

VNX Block OE

After a pool expansion, only part of the expanded capacity is available.
A disk drive metadata error caused incorrect location and serial numbers. The disk could not be accessed correctly.
A storage processor bugchecked (0x0000007E or similar).

70506790 / 728910

05.33.009.5.155

65700452 / 670381

05.33.009.5.155

69468616 / 710577

05.33.009.5.155

A naviseccli request to expand a storage pool may still be


performed, even if the rules check fails.
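A hedged sketch of the request in question. The pool ID, disk IDs (bus_enclosure_disk), and flag spellings are assumptions; consult the CLI reference for your release.

# Request a pool expansion; before the fix this could proceed even when the rules check failed
naviseccli -h 10.10.10.10 storagepool -expand -id 0 -disks 1_0_4 1_0_5
# Verify the resulting pool state afterwards
naviseccli -h 10.10.10.10 storagepool -list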
When a RAID group is degraded, I/Os are suspended and cannot complete.

69856422 / 714631

05.33.009.5.155

70736120 / 724320
70736120 / 726015
7041380 / 735840

05.33.009.5.155

70598948 / 737559
72703416 / 747627
73967238 / 765254
72013458 / 739426

05.33.009.5.155

73924290 / 775848

05.33.009.5.155

667307

05.33.009.5.155

73009628 / 751451
73009628 / 753515

05.33.009.5.155
05.33.009.5.155

760129

05.33.009.5.155

74599468 / 780452

05.33.009.5.155

69259750 / 716725
70692738 / 732432
71956862 / 740943
69230038 / 746586
74752816 / 778599

05.33.009.5.155
05.33.009.5.155
05.33.009.5.155
05.33.009.5.155
05.33.009.5.155

VNX Block OE
VNX Block OE
VNX Block OE
VNX Block OE
VNX Block OE
VNX Block OE

VNX Block OE
VNX Block OE
VNX Block OE
VNX Block OE
VNX Block OE
VNX Block OE

VNX Block OE

VNX Block OE
VNX Block OE
VNX Block OE
VNX Block OE
VNX Block OE

An LDAP user was able to log into the GUI after the account was disabled on the LDAP server.
Some offline pool LUNs were marked for recovery due to an underlying metadata corruption.
During a non-disruptive upgrade, a Storage Processor bugchecked with code 7E.
A single storage processor bugchecked when a power failure occurred.
An ESRS username/password longer than 30 characters was not supported in the GUI.
A naviseccli process caused high CPU utilization.
A temporary file was stuck in C:\temp and subsequent Config/Capture schedules were affected.
A clone source LUN and clone LUN might become inconsistent even though they appear in a synchronized/normal state.
When running VAAI I/O for protected volumes while the volume was in split mode and the backlog was full, the system triggered an SP bugcheck (0xE117B264).
An NTP time synchronization failed.
High CPU usage was seen when enabling an NQM Policy.
A drive was faulted after returning an unexpected error.
A host lost access during an upgrade.
The naviseccli port -diagnose host command didn't work.

05.33.009.5.155

05.33.009.5.155

05.33.009.5.155


VNX Block OE

The LUN Provisioning Wizard displayed the "Write caching will be enabled for the LUN, but not for the storage system" pop-up warning, even though Write Cache was enabled.
A storage processor bugchecked and rebooted with C000021E.

68538732 / 715855

05.33.009.5.155

69880048 / 723766

05.33.009.5.155

When I/O was canceled, the storage processor responded with multiple bugchecks.
A RAID group double faulted after an LCC firmware upgrade.
When an end-of-life or drive fault is cleared, there is no message in the event log.
A storage processor rebooted with a bugcheck code of 0x0000001e.
A hard reset on one or both storage processors occurred following a drive failure.

71359452 / 738710

05.33.009.5.155

71380624 / 749264
72640808 / 749499

05.33.009.5.155
05.33.009.5.155

73078514 / 752546

05.33.009.5.155

73689542 / 761654
74579368 / 774552
74621678 / 775580
74585484 / 775907
74923400 / 778812
75266646 / 785037
75587520 / 787732
75342362 / 788163
72391200 / 789881
76568192 / 797369
73717128 / 763900

05.33.009.5.155

70024168 / 715256
76163286 / 799793
72875350 / 755719
72628862 / 755259
72634096 / 749179
71880024 / 737140
70396450 / 732631
72924344 / 750541
75295458 / 784584
75757686 / 789090
76346712 / 795886
76826284 / 800160
72481812 / 745528

05.33.009.5.155

VNX Block OE
VNX Block OE
VNX Block OE
VNX Block OE
VNX Block OE
VNX Block OE

VNX Block OE

A storage processor bugchecked with E1158018.

VNX Block OE

A storage processor bugchecked with code 0x01901005 during an upgrade.

VNX Block OE

A storage processor rebooted with bug check C000021E.

VNX Block OE

During the process of converting unencrypted RAID groups to encrypted RAID groups, there is a window of time in which encryption could hang and lead to either a single or dual SP bugcheck.
SPA responded with a bugcheck code C000021E.

VNX Block OE
VNX Block OE

When an NQM provider sets a small delay value (less than 1 ms) for a driver, the NQM driver can keep scanning the I/O list, consuming excessive CPU resources. This can cause the CLI/GUI to become unresponsive.

05.33.009.5.155

05.33.009.5.155

05.33.009.5.155

72730540 / 750297

05.33.009.5.155

71670692 / 756388
75296136 / 790048

05.33.009.5.155


VNX Block OE

A user received an alert that Unisphere could no longer manage the SPs.
Navi CLI commands responded very slowly. The admin_tlddump.txt showed the TLD response time to be around 4 seconds.
USM displayed an incorrect message after a drive replacement was complete.
When trying to connect a host to a storage group, a message was returned, saying:
Results from call to add host(s) to the storage group:
The overall operation failed.
Error details:
Success

72477754 / 746981
69568684 / 716485 /
74911744 / 782113 /
74250132 / 787804 /
74697790 / 787983
742030 / 729304

05.33.009.5.155

744343

05.33.009.5.155

When the CLI command server_reclaim was executed, error message 2237 was displayed.
A speed change from AUTO to 16G-only will not log in without a link bounce. A speed change from 16G-only to AUTO will not log in without some link bounce.
When a power failure for the entire array (DPE and DAEs) happened, a bugcheck (0x0340406a) occurred.
Attempting to delete a LUN while specific operations are active results in a bugcheck (0x05900000) on both VNX Storage Processors (SPs). These operations include:
Clone - Sync
CPM - Copy/Resume
SanCopy - Rollback/SnapLU activate
MLU - Attach/Rollback snap
The ODFU wizard did not progress and was stuck on the Prepare for Disk Firmware Installation page.
When a user clicked on the Disk Firmware Upgrade notification to initiate an online firmware upgrade and also clicked on Software > Disk Firmware > Install Disk Firmware Online, a second window was opened.
Users received notifications for information events while only "Error and Critical Error" events were selected in the template.
After creating a VNX storage pool, the following combination of actions created a system bugcheck (0x76008301):
1. Swapping out a disk in the storage pool
2. Deleting the storage pool
3. Creating a new storage pool
When targeting R32 arrays, USM's LCC Status window can time out if the event log has excessive events.
When using QoS to limit I/O throughput to a certain value, I/O jumped significantly at regular intervals.

747548

05.33.009.5.155

742243

05.33.009.5.155

720833

05.33.009.5.155

68041418 / 662213

05.33.009.5.155

745970 / 745613

05.33.009.5.155

746384 / 746702

05.33.009.5.155

74078752 / 771646

05.33.009.5.155

694090

05.33.009.5.155

73119584 / 753846

05.33.009.5.155

742112

05.33.009.5.155

VNX Block OE
VNX Block OE

VNX Block OE
VNX Block OE

VNX Block OE
VNX Block OE

VNX Block OE
VNX Block OE

VNX Block OE
VNX Block OE

VNX Block OE
VNX Block OE

05.33.009.5.155


VNX Block OE

When a read error occurred during a disk copy operation


(such as Proactive Copy or rebuild to hot spare), and the data
was directly promoted into FAST Cache before the read error
could be corrected by a background verify, the FAST Cache
reported an error on a host read.

05.33.009.5.155

VNX Block OE

A single storage processor bugcheck (0x000000D1) occurred


when a clone image was synchronizing and there was an IO
cancellation on the clone source LUN.
During a period of heavy I/O, a storage processor
encountered a bugcheck (0x05900000).

768685
71024192 / 737634
73554468 / 761400
74300706 / 774479
74465916 / 775374
75225728 / 782594
75658014 / 788635
75825358 / 790383
75831040 / 793192
76112472 / 793799
76366310 / 798205
71599430 / 733934

67095570 / 687329
66236706 / 676748
70235702 / 720402
72772694 / 748981
69537274 / 709736
73524350 / 772981
786680
75965998 / 791585

05.33.009.5.155

VNX Block OE

VNX Block OE

A service processor bugcheck (0x05900000) occurred


processing aborted host I/O.

VNX Block OE

A storage processor bugcheck (0xe117b164) occurred when


hosts cancelled I/O during data movement between LUNs
(such as that initiated by FAST Cache, migration, and so on).
A set of Deduplication-enabled LUNs in a pool went offline
after an NDU from previous versions of the OE. Recovery was
run without benefit of UFSLog Replay. Persistent user data in
the dedup container was lost.
RAID Groups were broken due to a failed drive in an already
degraded RAID Group.
Refer to the New features and enhancements section for details about this feature.

VNX Block OE

VNX Block OE

VNX Block OE

There was an I/O timeout due to a CBFS internal deadlock.

VNX Block OE

The network port did not properly release. When this issue occurred, the navi cimom process failed to start. The cimomlog.txt reported that network port 443 was occupied by another process.


05.33.009.5.155

05.33.009.5.155
05.33.009.5.155

789059

05.33.009.5.155

740193
70041390 / 719629
70239252 / 720587
75873498 / 790773
757363
74317178 / 773445 /
771591
75087372 / 757469
75087372 / 782710
75346340 / 783692
75882842 / 794733

05.33.009.5.155


05.33.009.5.155

05.33.009.5.155


VNX Block OE

Single, and in isolated cases dual, storage processor


bugchecks (various including 0x00000000, 0x0000001E and
0x06000000) occurred.

745992/68583616/711908/
70124318/718260/70160028/
719042/70579306/722143/
70410880/722435/70460808/
727069/71031670/730140/
70720506/729441/71433446/
732583/71032226/732636/
71122914/734498/71653810/
735589/71707848/737354/
71710906/737416/71934826/
741293/72210706/741597/
72218658/745901/72603090/
746812/72449556/746830/
72537124/747406/72716654/
748918/72786516/749463/
72786516/749463/71723488/
750105/72361548/750419/
72460654/750722/72576654/
751103/72562244/751137/
72867486/751836/73034174/
753425/73008808/753836/
73198930/756123/73242084/
757322/73221472/757836/
73344740/758947/73549316/
759556/73522806/761511/
73584960/762244/73788884/
763710/73782220/764006/
73942168/767280/74097100/
767551/73833426/772940/
74354674/773506/74603914/
775232/74685002/775921/
74529834/775942/74633358/
775949/74690526/776281/
74709630/777106/74557752/
777152/74760414/778038/
74914482/780093/74841632/
780320/75000926/780344/
75006366/780590/74933114/
781763/75385234/784760/
75034794/785168/75501106/
785249/75567772/787746/
75536858/788021/75660370/
788621/75532270/789271/
75903016/791076/75987138/
793707/76444362/795984/
76444362/795985/76289730/
796396/76386254/797748/
76579342/802414

05.33.009.5.155

VNX Block OE

LUNs went offline when an NDU was performed while FAST VP


was relocating slices to another storage tier.
When enabling support options in Unisphere, single sign-on failed and a popup message, "an error occurred", was displayed.
Both SPs rebooted due to a bugcheck (7E) during recovery.

751240

05.33.009.5.155

73132226/ 750931

05.33.009.5.155

74485638 / 776467 /
733790
72327522/ 745720

05.33.008.5.119

VNX Block
OE,
Unisphere
VNX Block OE
VNX Block OE

During an SP reboot, an error message is displayed within the


RecoverPoint user interface that says, Splitter XX is down.

05.33.008.5.119


VNX Block OE

A single storage processor bugcheck (various including


0xc000021e) occurred following an error on the Fibre Channel
hardware component.
LUNs with little or no I/O were implicitly trespassed.
A single SP bugcheck (0x5900000) occurred due to a race condition between a host I/O cancel and a data movement completion operation.
A single SP rebooted with bugcheck 7E [SYSTEM_THREAD_EXCEPTION_NOT_HANDLED].
Relocation failed when a scheduled window ended. This may cause LUNs to go offline.
A single SP bugchecked due to a race condition between the aggregate I/O cancellation path and the normal I/O path.
A single storage processor bugcheck (0x03302004) occurred.
The FEDisk code path was executed multiple times, so that memory was eventually exhausted.
A single storage processor bugcheck occurred while replacing a system drive.
Errors occurred when a drive went end of life and reported drive faults at the same time.
Customers saw a badly formed log entry.
The host can experience timeouts due to a bad drive that is streaming hardware errors.
A log entry was recorded in the Windows event log when the BBU state was temporarily unknown.
An ESX Storage vMotion failed at 40% when migrating Virtual Machines between LUNs in the same Storage Pool.
When a single storage processor bugcheck occurred, a LUN was inaccessible on the other storage processor.
A storage processor rebooted due to a bugcheck code 0x03006001.
A single storage processor bugcheck (0xE111805F) occurred when creating/destroying many LUNs and performing LUN migration at the same time.
An NDU attempt from a previous version of VNX Block OE failed a rule check due to a default SAS address being detected.
Storage processor B encountered a bugcheck code 0x0000001E.
Storage processor A encountered a bugcheck code 0x05900000.
LUNs with little to no I/O were implicitly trespassed.
A single storage processor bugcheck occurred due to the handling of a rare SAS controller/firmware issue.
When configuring IPv4 on an iSCSI or a management port, the IP configuration will fail if the 4th octet is 255 and the subnet mask is less than 24 bits.
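A short worked example of the address math behind this fix (addresses are illustrative):

# 10.1.2.255 with subnet mask 255.255.254.0 (/23) is a legal host address:
#   Network:    10.1.2.0/23
#   Host range: 10.1.2.1 - 10.1.3.254   (includes 10.1.2.255)
#   Broadcast:  10.1.3.255
# With a /24 mask, 10.1.2.255 would instead be the subnet broadcast address.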

66800310/ 688607

05.33.008.5.119

62840900/ 643410
64975010/ 661855

05.33.008.5.119
05.33.008.5.119

65263512/ 666561

05.33.008.5.119

673991

05.33.008.5.119

67016532/ 684704

05.33.008.5.119

62922652/ 686495
67384810/ 688757

05.33.008.5.119
05.33.008.5.119

68250170/ 700491

05.33.008.5.119

68458206/ 701988

05.33.008.5.119

69505944/ 709249
69490164/ 711777

05.33.008.5.119
05.33.008.5.119

69682770/ 712701

05.33.008.5.119

70271682/ 721881

05.33.008.5.119

70634516/ 722759

05.33.008.5.119

64252492/ 663772

05.33.008.5.119

68492010/ 700300

05.33.008.5.119

69194110/ 715753

05.33.008.5.119

70890884/ 745687

05.33.008.5.119

61018100/ 636333
67626278/ 699803

05.33.008.5.119
05.33.008.5.119

72704554/ 749117

05.33.008.5.119

VNX Block OE
VNX Block OE

VNX Block OE
VNX Block OE
VNX Block OE
VNX Block OE
VNX Block OE
VNX Block OE
VNX Block OE
VNX Block OE
VNX Block OE
VNX Block OE
VNX Block OE
VNX Block OE
VNX Block OE
VNX Block OE

VNX Block OE

VNX Block OE

VNX Block OE
VNX Block OE
VNX Block OE


VNX Block OE

VNX cannot be used in a Read-Only Domain Controller


environment (RODC) for W2K8_R2/W2K12 domains.
An error was returned when several naviseccli connection -pingnode or naviseccli connection -traceroute commands were executed at the same time.
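For reference, a hedged sketch of the concurrent invocations that surfaced the error. The SP address, target address, and flag spellings are assumptions; see the naviseccli reference for the exact syntax.

# A ping and a traceroute issued through the SP's network stack at the same time
naviseccli -h 10.10.10.10 connection -pingnode -address 192.168.1.50
naviseccli -h 10.10.10.10 connection -traceroute -address 192.168.1.50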
Unisphere Analyzer disk statistics are not always updated
(may show 0).
The LUN prepare process was frozen.

67585474/ 734750

05.33.008.5.119

693686

05.33.008.5.119

667981

05.33.008.5.119

68016422 / 700220

05.33.008.5.119

A service processor bugcheck (0x0000007E) occurred when a


LUN recovery was running.
A single storage processor bugcheck occurred because the
log space could not be released normally.
Pool LUNs were offline after creating a large number of LUNs
at the same time.
An added system disappeared after relaunching ESRS.

69776356/ 713410

05.33.008.5.119

69319126 / 719632

05.33.008.5.119

64781490 / 659363

05.33.008.5.119

71611654/724452/619114

05.33.008.5.119

A single storage processor bugcheck (0x01901004,


0x0190101A) occurred when a Fibre Channel logout
command was received while processing a link down event.
A race condition between the removal of a CAS command and
its completion resulted in an SP bugcheck (0x05900000).
Storage processor bugchecks occurred (various, including
0x05900000 and 0x0000001E) on an iSCSI connected
storage system due to a packet storm received by the iSCSI
data ports.
A single processor bugcheck (0xE1198006) occurred.

61787212/687024/687026

05.33.008.5.119

64825240/659963

05.33.008.5.119

65654434/711927/671492

5.33.008.5.119

68061026/ 695711

05.33.008.5.119

If a storage processor was shut down, the peer storage


processor rebooted unexpectedly due to incorrect handling of
an error condition within MirrorView/S.
An error message with 0x715281e8 was reported in the navi
log.

69898952/ 715224

05.33.008.5.119

64921968/ 662696

05.33.008.5.119

Incomplete I/O requests were seen by an ESX server under a


workload with overlapping write requests.

60859568/ 669717

05.33.008.5.119

A Navi Cimom dump was seen due to memory leaks in


MirrorView/S admin.

55507470/ 702768

05.33.008.5.119

Attempting to destroy a mirror resulted in one or both storage processors rebooting unexpectedly.

69335508/ 707403

05.33.008.5.119

During a power failure, the storage pool was incorrectly


marked as recovery needed.
Single SP bugcheck.

64781490/ 706840

05.33.008.5.119

69045616/ 704235

05.33.008.5.119

SnapView sessions were stopped with an error message:


a100402d.

62970312/ 724669

05.33.008.5.119

VNX Block CLI

VNX Block
OE, Analyzer
VNX Block
OE, CBFS
VNX Block
OE, CBFS
VNX Block
OE, CBFS
VNX Block
OE, CBFS
VNX Block
OE, ESRS
VNX Block
OE, Fibre
Channel
VNX Block
OE, Host
VNX Block
OE, iSCSI
Driver
VNX Block
OE,
MirrorView
VNX Block
OE,
MirrorView
VNX Block
OE,
MirrorView
VNX Block
OE,
MirrorView
VNX Block
OE,
MirrorView
VNX Block
OE,
MirrorView
VNX Block
OE, NFS
VNX Block
OE, Platforms
VNX Block
OE, SnapView


VNX Block
OE, Snap
Clones

A single SP bugcheck occurred (0x000000d1) and host I/O


was removed for the clone source LUN.

67742876/ 691992

05.33.008.5.119

VNX Block
OE, System
Management
VNX Block
OE, System
Management
VNX Block
OE, System
Management,
UDoctor,
Serviceability
VNX Block
OE,
Unisphere
VNX Block
OE,
Unisphere
VNX Block
OE,
Unisphere
VNX Block
OE,
Unisphere
VNX Block
OE, USM
VNX Block
OE, USM,
Unisphere
VNX Block
OE, Virtual
Provisioning
VNX Block
OE, Virtual
Provisioning
VNX Block
OE, Virtual
Provisioning
VNX Block OE
VNX Block
OE, Virtual
Provisioning
VNX Block OE

The management servers re-started frequently on both SPs.

60933588/ 662081

05.33.008.5.119

Storage Processor B was very slow to respond to any


commands and had to be re-started.

65228246/ 666980

05.33.008.5.119

UDoctor received .xml files, but the .xml files were not processed and were stuck in the UDoctor folder.

69353288/ 709946

05.33.008.5.119

The system was unable to create FAST Cache during a non-disruptive upgrade.

65169048/ 664516

05.33.008.5.119

A pool expansion failed.

64586292/ 686517

05.33.008.5.119

Unable to create a clone if the clone destination LUN is a


metaLUN and the metaLUN has a configured capacity.

68384336/ 698899/ 703769

05.33.008.5.119

LDAP login failed if there was no leaf certificate in an LDAP


configuration.

68540890/ 701325/680720

05.33.008.5.119

The USM DAE install wizard's cabling suggestion was


incorrect.
The USM report wizard creates folders with multiple version
strings when it creates and saves the reports.

66354048/ 677685

05.33.008.5.119

65466552/ 668662

05.33.008.5.119

When a storage pool encounters an error and goes offline, it's possible that the command naviseccli storagepool -list still shows the status as OK.
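For reference, a minimal check of the reported pool state; the SP address is a placeholder.

# List pool properties; before the fix, an offline pool could still report a status of OK here
naviseccli -h 10.10.10.10 storagepool -list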
A storage processor rebooted due to bugcheck code 0xE111805F.

67699060/ 692639

05.33.008.5.119

68222912/ 697328

05.33.008.5.119

In a rare instance, a single SP bugcheck (0xC0000021E)


occurred.

653263

05.33.008.5.119

SP A rebooted due to a bugcheck.


Single SP bugcheck happened when the array was under high
pressure from external I/Os and internal background
operations, while at the same time the other SP rebooted.
When deduplication was not enabled on a LUN in a pool, the
query command on the pool returned an error (2237).
SP A bugchecked when SPB had 250 virtual desktops that
had been deduped and completed the I/O run.

702129
692912

05.33.006.5.102
05.33.006.5.102

69046416 / 710022 /
653311
68197756 /698062 /
690469

05.33.006.5.096

VNX Block OE


05.33.006.5.096


VNX Block OE

A Link Control Card (LCC) or an Inter Connect Module (ICM) in


a VNX DAE reported a power supply unit (PSU) fault or fan
fault because it did not detect the replaced module.
Under certain conditions, reverse read was reported as several times slower than read.
In certain scenarios, the key manager could enter a deadlock that prevented the peer SP from coming up.
FAST VP relocation (moving data within one storage pool) would occasionally fail with error code 0xe12d8417 when its scheduled relocation window ended, causing the LUN to go offline.
On rare occasions, VNX disks reported faults because the VNX system did not handle IO fault states correctly.
Initiating a non-disruptive upgrade (NDU) of the VNX OE bundle while also installing a large number of enablers and creating a Synchronous Mirror led to failures when creating legacy SnapView sessions.
Non-disruptive upgrade (NDU) failed due to the Windows command bcdedit failing to run. The error message "The data area passed to a system call is too small" exists in c:\temp\ndu-bcdedit-log.out.
One SP would not go online after it was restarted. The SP became pingable, but the getagent command failed with a timeout.
If a user sent IO to a RAID Group before expanding a LUN, only a small number of slices were moved to the new RAID group.
Event ID 10 from source WMI was logged in the Application log after every reboot.
If a user modified a property of a 10Gb iSCSI port, the modified port entered a degraded state, resulting in suboptimal performance.
A user migration session changed to a system migration (compression/deduplication) session unexpectedly, then became stuck in the SYNCHRONIZED state.
Creation of a snapshot or snap session was supported only on LUNs less than 256TB.
Failure mode and effects analysis was improperly handling internal fault reporting for the battery backup unit.
Any failure of the read for a RAID Group capacity resulted in an insufficient capacity error message, even if capacity was not the cause of the error.
Seagate drive write performance was not optimal.

599765 / 614380

05.33.006.5.096

65649146/ 695091

05.33.006.5.096

66912942/ 683415

05.33.006.5.096

665509, 670416, 676377 /


64183616

05.33.006.5.096

687676 / 67152848

05.33.006.5.096

66051036/673828

05.33.006.5.096

613202

05.33.006.5.096

645883

05.33.006.5.096

633096

05.33.006.5.096

657111

05.33.006.5.096

648479/ 659444

05.33.006.5.096

656149

05.33.006.5.096

640680

05.33.006.5.096

546173/ 548762

05.33.006.5.096

565361/ 567110

05.33.006.5.096

53114380/ 564310
584313
47877908/ 607594

05.33.006.5.096
05.33.006.5.096

61370926/ 628608

05.33.006.5.096

VNX Block OE
VNX Block OE
VNX Block OE

VNX Block OE
VNX Block OE

VNX Block OE

VNX Block OE

VNX Block OE
VNX Block OE
VNX Block OE

VNX Block OE

VNX Block OE
VNX Block OE
VNX Block OE

VNX Block OE
VNX Block OE
VNX Block OE

In rare occurrences, when a LUN was reset, the trespass operation did not complete, causing bugcheck 0xE111805F.
Unable to change the network port for the VNX management interface to auto.


VNX Block OE

The VNX Management Server was not accessible because a


connection limit was reached.

05.33.006.5.096

VNX Block OE

If both the VNX A1 and B1 power supplies were missing or faulted, the system incorrectly reported fans 1, 2, 5, and 6 as "Ok" even though they were faulted and no longer running. If both the VNX A0 and B0 power supplies were missing or faulted, the system incorrectly reported fans 3, 4, 8, and 9 as faulted.
Unisphere returned a battery "Not Ready" alert during the weekly battery tests even when the battery was healthy.
If the Storage Processor (SP) did not have a serviceable power supply, the SP would shut down during local SPS testing.
If a system battery backup unit (BBU) was not ready, and the other system BBU was in test mode, the cache status was reported as failed.
If connections to one of the VNX Storage Processor (SP) link control cards (LCCs) became unreliable, the IO was not properly redirected to the other system SP. This could result in system unavailability, with event logs showing the LCC going up and down.
The maximum transmission unit (MTU) size for VNX Storage Processor (SP) iSCSI ports operated at 14 bytes less than the value set for them.
During a period of heavy I/O, storage processor bugchecks (various, including 0xDAAAEEEE, 0x0000007E) occurred when hosts cancelled I/O during data movement between LUNs (such as that initiated by FAST Cache, migration, and so on).
When attempting a non-disruptive upgrade (NDU) for a VNX system, connectivity to the system was temporarily interrupted while the system detected the default expander SAS address.
Requests from Storage Resource Management (SRM) software issued using the EMC XML API failed with an HTTP 503 [Service unavailable] error.
The Storage Processor (SP) Setport command did not reboot the SP when the command completed running.
When upgrading the VNX OE, if a drive failed (and was rebuilding to a hot spare) while one system Storage Processor (SP) was upgraded and the other was not, the hot spare operation failed and the RAID group became degraded.
A single Storage Processor (SP) bugcheck (various, including 0x05900000 and 0x0000001e) occurred in the FAST Cache driver when it was unable to acquire an internal resource.

61640548/ 627975
62520788/ 638971
63182168/ 643064
63607866/ 647050
64209578/ 667181
60322976/ 613498
67604626/690953
637601

58482710 /642288

05.33.006.5.096

62898824/ 640321

05.33.006.5.096

63502640/ 647811

05.33.006.5.096

63823874/ 649179

05.33.006.5.096

63910430/ 650471

05.33.006.5.096

60487186/ 650678

05.33.006.5.096

63260516/ 651506

05.33.006.5.096

654611

05.33.006.5.096

64483380 / 656693

05.33.006.5.096

667695

05.33.006.5.096

65771190/ 671378/717629

05.33.006.5.096

VNX Block OE
VNX Block OE
VNX Block OE

VNX Block OE

VNX Block OE

VNX Block OE

VNX Block OE

VNX Block OE

VNX Block OE
VNX Block OE

VNX Block OE


05.33.006.5.096


VNX Block OE

Attempts to expand a virtual provisioning pool for a direct


LUN (DLU) failed with error code 712d8e0e because an
internal create-mapping request and its I/O write operation
required the same memory.
An attempt to expand a LUN failed with the 712d8e0e error
code.
Could not re-configure a VNX system IO module.
Service interruption sometimes occurred during a non-disruptive array upgrade if a default expander SAS address existed.
After a non-disruptive upgrade to the VNX Block OE package 05.33.000.5.072 or 05.33.000.5.074, with enablers installed, when a MirrorView/S session was created, the SnapView session failed.
Rebuild rates were slow, sometimes taking days or weeks to complete.
A RAID group double faulted.
If the kernel decided to move the deduplicated domain from one SP to another, the kernel would try to stop this effort. As a result, the second SP experienced a sharing violation.
In VNX systems where FAST VP was enabled, intermittent service interruption could occur because of a single SP bugcheck (code C000021E).
VNX systems occasionally experienced precise one-second gaps in CMI Peer to Peer traffic flow (IO) in latency scenarios, where Storage Processor A (SPA) would reboot repeatedly until SPB was rebooted.
If a RecoverPoint Splitter failed to read from the storage while the volume was in Virtual Access, a Storage Processor bugcheck could potentially occur.
When Data Copy Avoidance (DCA) was enabled, a deadlock could occur when IO to the RecoverPoint Appliance (RPA) failed while the VNX Storage Processor had insufficient resources to accommodate the IO demands. After 180 seconds, the IO interruption caused a Storage Processor bugcheck.
After enabling NQM, an ESX server seemed to lose connectivity due to performance issues. When this issue occurred, multiple NQM control engine threads continually set poor performance parameters every 20 seconds.
LUNs sometimes failed to come online after a power failure or after resolving a hardware failure that led to the offline condition.
Unexpected results occurred when a storage processor rebooted during data-in-place encryption.

64499178/ 657304

05.33.006.5.096

64472012/ 657569

05.33.006.5.096

64483380/ 661513
63260516/ 665700

05.33.006.5.096
05.33.006.5.096

66051036/ 673856

05.33.006.5.096

65259884 / 664411/680711

05.33.006.5.096

661327
649682

05.33.006.5.096
05.33.006.5.096

649085 / 63732104

05.33.006.5.096

646038 / 60964096

05.33.006.5.096

683826 / 64500382

05.33.006.5.096

683830, 654290 / 65434254

05.33.006.5.096

691525 / 66260842

05.33.006.5.096

579483, 545677

05.33.006.5.096

67209396/688738
67136530/686425
67173272/686779
66040194/674144

05.33.000.5.081

VNX Block OE
VNX Block OE
VNX Block OE

VNX Block OE

VNX Block OE
VNX Block OE
VNX Block
OE, Block
Deduplication
VNX Block
OE, CBFS
VNX Block
OE, Platform

VNX Block
OE,
RecoverPoint
VNX Block
OE,
RecoverPoint

VNX Block
OE,
Unisphere
NQM
VNX Block OE
Virtual
Provision
VNX Block OE

VNX Block OE

A storage processor became unresponsive when updating


SAS controller firmware during an NDU from an OE version
earlier than 05.33.000.5.072 to an OE version
05.33.000.5.072 (or later).

05.33.000.5.079


VNX Block OE

A storage processor bugcheck (0x01901004) occurred when


the fibre channel speed setting was manually changed via
Unisphere GUI or CLI.
Coherency errors were reported during an NDU to
05.33.000.5.074 from an earlier version of code (prior to
05.33.000.5.072).
Repeated storage processor bugchecks (including 0x0000001e) occurred following an NDU failure.
Coherency errors were reported during an NDU when one SP was running 05.33.000.5.074 and the other SP was running an earlier version of code (prior to 05.33.000.5.072).
Unisphere allowed users to set the speed on the port of a 1Gb iSCSI/TOE IO Module to either 10Mb, 100Mb, or 1Gb.
A subset of LUNs in a storage pool remained offline following a double-faulted disk removal.
When a VNX5200 or VNX5400 system was installed with the maximum number of I/O modules, but at least one I/O module was uninitialized, occasionally an alert message occurred indicating that an uninitialized I/O module had exceeded the system limit.
An SP bugcheck occurred during software upgrade or installation operations.
Running the naviseccli ndu -list command in engineering mode sometimes showed a default version.
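For reference, the non-engineering form of the listing (the SP address is a placeholder; the engineering-mode switch is intentionally not shown):

# List installed software packages and their revisions
naviseccli -h 10.10.10.10 ndu -list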
Incorrect current output for the DC power supply was
reported.
A single storage processor bugcheck (various, including
0x05900000 and 0x0000007e) occurred.

65185426/663684
65365958/665952

05.33.000.5.079

65298752/ 664729

05.33.000.5.079

65270166/672994

05.33.000.5.079

684145

05.33.000.5.079

649466

05.33.000.5.072

542529, 545677

05.33.000.5.072

618635

05.33.000.5.072

556166, 562545

05.33.000.5.072

567026

05.33.000.5.072

591576, 593267

05.33.000.5.072

65789046/671288
66357332/677368
664045

05.33.000.5.072
05.33.000.5.072

637319

05.33.000.5.072

627961/635306

05.33.000.5.072

621573
621540

05.33.000.5.072
05.33.000.5.072

622564

05.33.000.5.072

62859966/639466

05.33.000.5.072

580959

05.33.000.5.072

613776, 629616

05.33.000.5.072

VNX Block OE

VNX Block OE
VNX Block OE

VNX Block OE
VNX Block OE
VNX Block OE

VNX Block OE
VNX Block OE
VNX Block OE
VNX Block OE
VNX Block OE
VNX Block OE
VNX Block OE
VNX Block OE
VNX Block OE

VNX Block OE
VNX Block OE
VNX Block OE

VNX Block OE


After upgrading to VNX for Block OE version 05.33.000.5.072,


coherency errors are reported in the system logs.
Hardware modules are being indicted in logs/traces by ESP
due to underlying issues in our environment path.
Users got a data corrupted error, even though those sectors had been recovered in the disk array.
A LUN's Address Offset value was incorrect.
The RAID group entered a cycle of full rebuild, followed by a short period where the copy started and the drive position went offline.
Under heavy I/O load utilizing FAST Cache, an internal timer could trigger a single processor bugcheck.
Thin LUN 131 of Pool 1 went offline with error code 0xE12D8D0D.
Drives were faulted if their firmware did not support Enhanced Queuing. The drives were marked as Faulted with a reason indicating an enhanced queuing check.
During power-up of an enclosure or during a power glitch, one or more drives did not progress to the ready state.


VNX Block OE

During an LCC, SLIC, or Base Module replacement, multiple drives within a RAID group went offline, causing rebuild logging to initiate.
SPA and SPB encountered error code 0x05900000 in the Data Mover Library (DML).
The user experienced intermittent 1-second latency spikes.
Users saw two of the same enclosure when they only had one.
Scheduled Battery Backup Unit (BBU) checks showed a BBU was faulted, even though the component was operational.
An SP bugchecked during shutdown if there were many TLU/DLUs in failover mode.
Requests for overlapping cache pages between the FAST Cache Idle Cleaner and the host request resulted in a deadlock that timed out after 10 seconds.
The amber LED was illuminated when a cable was connected to iSCSI ports.
Unisphere and FBECLI Peer Info reported IO module limits incorrectly.
Under heavy front-end FCoE I/O, a single storage processor bugcheck (0x01901008) occurred.
A bugcheck occurred with error code 0x0340201a.

622581, 626540

05.33.000.5.072

62778786/639100

05.33.000.5.072

60964096/648029
60659572/616891

05.33.000.5.072
05.33.000.5.072

612219

05.33.000.5.072

627774, 616729, 627776

05.33.000.5.072

VNX Block OE
VNX Block OE
VNX Block OE
VNX Block OE
VNX Block OE
VNX Block OE

VNX Block OE
VNX Block OE
VNX Block OE
VNX Block OE
VNX Block
OE, CBFS
VNX Block
OE, CBFS

VNX Block
OE, CBFS
VNX Block
OE, CBFS

VNX Block
OE, Platform
VNX Block
OE, Platform
VNX Block
OE, Platform
VNX Block
OE, Platform
VNX Block
OE, Platform
VNX Block
OE, Platform

05.33.000.5.072

612801

05.33.000.5.072

618635

05.33.000.5.072

593740

05.33.000.5.072

62294340/637286

05.33.000.5.072

I/O to the affected LUN will hang if the error path is hit. The system may bugcheck due to an IO timeout.
SCSI GET_LBA_STATUS commands map to VMFR commands from MLU to CBFS. When many of these commands are sent to the system, the CPU becomes busy, leading to a bugcheck. This happens when running:
- Win2K12, TRIM enabled
- RecoverPoint, Thin Extender enabled
Watchdog panic due to two cores holding a spinlock for a long time.
Some mapping cannot proceed and the following error is returned:
CBFSA: UFS: 4: IndUsableList::Load failed to get free entry. System may be <NL>
CBFSA: stressed! CurrentNumAllocatedEntries (368260) MaxEntries (368255)
A false message is presented to the user saying that the management switch type has changed, when no change has been made.
IPv6 would not work on SPA.

646502

05.33.000.5.072

628196

05.33.000.5.072

619482

05.33.000.5.072

626128

05.33.000.5.072

59070064/602429

05.33.000.5.072

59876582/610818

05.33.000.5.072

IPv6 configuration failed when the network prefix started with


fc00::
Data was unavailable when one of the SPs was rebooting. Error code 0x25 was displayed.
When more than 64 FC front-end ports were configured on an array, not all of the ports were visible to the hosts and/or switch. Several were assigned duplicate WWNs.
SP queue statistics were disabled and displayed as zero in the GUI and CLI.

59876582/614231, 633470

05.33.000.5.072

61296778/622987

05.33.000.5.072

63411674/645909

05.33.000.5.072

565385, 588543

05.33.000.5.072


VNX Block
OE, SANCopy
for Block
VNX Block
OE, Security
VNX Block
OE,
MirrorView/A,
MirrorView/S
for Block
VNX Block
OE, Virtual
Provisioning
VNX Block
OE, Virtual
Provisioning
VNX Block OE

An incremental SAN Copy session or a MirrorView/A session sometimes failed to be removed.

560498, 561780

05.33.000.5.072

LDAP host queries do not report IPv6 addresses.

602169

05.33.000.5.072

False-positive logs and user traces indicated MVA thread


starvation.

545496

05.33.000.5.072

After a LUN encountered an internal error during a trespass, it


could not be brought back online.

534327, 539374

05.33.000.5.072

A single SP bugcheck occurred because of a CDCA (cache dirty/can't assign) condition after dual SP reboots.

557018, 632291

05.33.000.5.072

The serial attached SCSI (SAS) status LED for Port 2 on the Base Module was not ON.
A VNX single Storage Processor (SP) reboots when a host performs a 1MB write operation to the system that is attached to a RecoverPoint Appliance (RPA) system for replication.
MirrorView Asynchronous connections are fractured on a VNX system. The following symptoms can occur:
Deduplication LUNs are temporarily unable to provide I/O and multiple snapshots remain in a destroying state in the deduplication domain.
A single storage processor (SP) reboot occurred after a LUN expansion.
When a drive is removed, no event is logged to indicate the reason for the drive fault.
The Taken offline event (ID 7167802f) is dropped from both the Windows event log and the VNX event log because the string length exceeded the maximum character limit.
After a driver reset, an unexpected SP reboot occurs, producing an alert/log message such as the following: CPD PANIC -FCOEDMQL 1 (FE5)

566542

05.33.000.5.051

594685

05.33.000.5.051

595166

05.33.000.5.051

593320

05.33.000.5.051

593659

05.33.000.5.051

An inaccurate VNX call-home message is generated which indicates that a backup battery unit (BBU) is missing when it is actually present. This can potentially cause BBU status indications to show the component status as degraded.
Storage processor bugchecks (0x00000041) occurred due to
excessive traffic on the management ports.
The VNX generates warning messages (BMS found 184
entries) because the number of Background Media Scan
(BMS) log entries recorded for vault drives quickly reaches
defined thresholds.
Coherency errors for RAID-1 and RAID-6 occasionally do not
generate the proper XOR sector trace messages in debug
operations (performed by EMC Support).

595715

05.33.000.5.051

596197

05.33.000.5.051

597131

05.33.000.5.051

599355

05.33.000.5.051

VNX Block OE

VNX Block OE

VNX Block OE

VNX Block OE

VNX Block OE

VNX Block OE
VNX Block OE

VNX Block OE


VNX Block OE

A dial home event is generated even though no fault has


occurred.
A single SP bugcheck can occur when multiple rollback processes are simultaneously active.
A single SP reboot can occur when multiple rollback processes are simultaneously active.
mgmtd used almost 100% CPU on the Disaster Recovery-side Control Station.
SPA and SPB bugcheck within minutes of each other, and their associated LUNs and DMs go offline. This problem occurs every 90-99 days in the following systems: VNX5200, VNX5400, VNX5600, VNX5800, VNX7600. This problem occurs in a VNX8000 system every 80 days.
When the weekly BBU test time arrives, SPA and SPB can start their BBU test at the same time. Later, when the BBU test completes, the test is not marked as completed and the BBU test will start again repeatedly.
After a reboot, pool LUNs can go offline. The mount may fail due to an unexpected slice object state initialized from the slice cache.
In a system configured for large deduplication, a mismatched deduplication counter size/cast (64-bit/32-bit) can cause data loss.
When copper twinaxial cables that supported iSCSI connections were in use, iSCSI hosts experienced data unavailable (DU) after a non-disruptive upgrade (NDU) to VNX OE for Block version 05.33.000.5.034.
Some disks remained in the Transitioning state for more than 1 day after being re-enabled.
Could not upgrade the drive firmware for Micron Buckhorn SAS drives which had a drive firmware size of over 2 MB.
A panic occurred while shutting the system down.
SP B reported a panic and created a dump file on SP B.
The SP Fault LED blink rates were half the expected rate during normal and degraded boot scenarios.
Physical power switches on the SPs were automatically turned OFF during a power down. The array did not power up after a power glitch (or short-term power loss).
There was not enough free space in a pool to recover LUNs.

599721

05.33.000.5.051

583999

05.33.000.5.051

583999

05.33.000.5.051

645614 / SR63207028

05.33.000.5.038

607962

05.33.000.5.038

601545

05.33.000.5.038

606436
608736

05.33.000.5.038

605245

05.33.000.5.038

58805510
/599567

05.33.000.5.035

577106

05.33.000.5.034

573654

05.33.000.5.034

574220
574235
577670

05.33.000.5.034
05.33.000.5.034
05.33.000.5.034

578752

05.33.000.5.034

580536 / 569009

05.33.000.5.034

Battery Backup Unit (BBU) A: Not Ready || Battery Backup Unit


B: Not Ready was reported in Unisphere.
The BBU self test was in a cycle where the tests repeated
continuously.
An on-array debug utility/process was inadvertently left
enabled in the shipping version of the VNX OE for Block code.
When attempting to delete a deduplication LUN while there
were some active copy requests in-progress, the
deduplication LUN could get stuck in destroying state.

593666

05.33.000.5.034

584042

05.33.000.5.034

504517

05.33.000.5.034

574723

05.33.000.5.034

VNX Block
OE, Platforms
VNX Block
OE, SnapView
VNX Block OE
VNX Block OE

VNX Block OE

VNX Block OE

VNX Block
OE,
Deduplication
VNX Block OE

VNX Block OE
VNX Block OE
VNX Block OE
VNX Block OE
VNX Block OE
VNX Block OE

VNX Block
OE, Virtual
Provisioning
VNX Block OE
VNX Block OE
VNX Block OE
VNX Block
OE,
Deduplication


VNX Block
OE,
Deduplication

When a shrink was performed while deduplication was running and in the process of collecting data to be deduplicated, the deduplication job got stuck in a loop and did not make any progress.
Snap creation failed with the error message: The operation
cannot be performed because the LUN is Preparing. Wait for
the LUNs Current Operation to complete Preparing and retry
the operation.
If an attempt was made to activate a snap session on a
snapshot which already had an activated session, the
operation failed but no error was returned.
After attempting to destroy a storage pool, the destroy
operation would fail. The storagepool list command
displayed:
Current Operation: Destroying
Current Operation State: Failed
Current Operation Status: Subroutine failure (0x4000803c)
A LUN shrink on a deduplication-enabled LUN failed to
complete only when all the LUNs in the deduplication domain
were empty (had no data). In this case lun -list showed the
"Current Operation" as Shrink indefinitely. In this state, the
LUN could not be shrunk, expanded, or snapped; it could only
be destroyed. I/O to the LUN was unaffected.
A single SP bugcheck occurred with BugCheck code 7E.

577233

05.33.000.5.034

577314

05.33.000.5.034

548616/539292

05.33.000.5.034

578172

05.33.000.5.034

578421

05.33.000.5.034

578490

05.33.000.5.034

LUNs failed to come online following an SP reboot.

579050

05.33.000.5.034

A create LUN operation could result in an internal error if the


operation happened when the peer SP was rebooting and the
peer SP was also the preferred path to service IO for the LUN.
LUN expand failed with Unrecognized Thin Provisioning
Driver error code: 36365. This could happen if an SP
bugchecked when hitting a particular boundary condition
during LUN shrink; the symptom was observed during a
subsequent LUN expand.
A pool and all its LUNs went offline following an SP reboot.

573637

05.33.000.5.034

575603

05.33.000.5.034

576389

05.33.000.5.034

Host I/Os failed to a Pool LUN which was reporting ready after
the system recovered from a dual SP reboot/bugcheck.

576467

05.33.000.5.034

A LUN destroy command completed successfully, but the LUN still remained in the system.

567511, 585191

05.33.000.5.034

VNX Block
OE,
Snapshots
VNX Block
OE,
Snapshots
VNX Block
OE, Virtual
Provisioning

VNX Block
OE, Virtual
Provisioning

VNX Block
OE, Virtual
Provisioning
VNX Block
OE, Virtual
Provisioning
VNX Block
OE, Virtual
Provisioning
VNX Block
OE, Virtual
Provisioning

VNX Block
OE, Virtual
Provisioning
VNX Block
OE, Virtual
Provisioning
VNX Block
OE, Virtual
Provisioning


VNX Block
OE, Virtual
Provisioning,
VNX
Snapshots
VNX File OE

Snap creation failed and the snapshot was left in an errored


state when the source LUN was in the middle of a trespass.

571738, 574415

05.33.000.5.034

A Data Mover on the user's system had blocked SMB2 threads.
With VAAI, when a VMDK file was converted to a VERSION file, it could not be read with NFSv4.
UDP access to the KDC was blocked. The Data Mover waited for UDP to time out before trying again with TCP. This resulted in long delays when remote clients were accessing files.
Automatic checkpoint extensions did not stop after the %full parameter was increased from 75% to 90%.
Domain administrators who belonged to multiple groups could not join new CIFS servers to the Active Directory.
Backups of VM snapshots were failing in a Hyper-V environment using the Remote Volume Shadow Copy Service (RVSS) feature. This was caused by interoperability issues with third-party backup software vendors.
Attempts to extend the destination pool associated with a VDM sync replication session failed and resulted in an error condition, Error 3027: d74 : inconsistent disktype.
Syncrepservice enable failure with the following:
Error 13431996489: NBS configuration operation failed on server server_2. Failed to add volumes. Error 13431996606: Add NBS.
An LDAP user was unable to log into Unisphere.
While fetching all stats of a single volume with the server_stats command, the WriteKiB, WriteRequests, and WriteOps values were incorrect.
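A hedged example of the kind of per-volume statistics fetch that was affected. The Data Mover name and the statgroup name are assumptions; available statgroups vary by release.

# Sample disk-volume statistics every 10 seconds, three times
server_stats server_2 -monitor diskVolumes-std -interval 10 -count 3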
A VNX File OE upgrade failed.
A CIFS server did not accept an SMB NULL session request when the cifs.nullSession parameter was set to 0.
A file was deleted in the window between FLR scanning it and locking it.
The Get_backend_status cron job failed with error: Error running command:
/nas/opt/Navisphere/bin/navicli -h -t 60 getall -array

75775996 / 793529

8.1.9.155

794704

8.1.9.155

715070

8.1.9.155

731941 / 12345679 /
729241
71024074 / 788471

8.1.9.155

73360626 / 774424

8.1.9.155

781945

8.1.9.155

777414

8.1.9.155

70928040 / 728569
63530180 / 748541

8.1.9.155
8.1.9.155

66693454 / 755202
71014830 / 773990

8.1.9.155
8.1.9.155

72292110 / 746170

8.1.9.155

68342174 / 736577

8.1.9.155

72380056 / 748944

8.1.9.155

60657782 / 749618

8.1.9.155

62981026 / 750965

8.1.9.155

62844700 / 750967

8.1.9.155

VNX File OE
VNX File OE

VNX File OE
VNX File OE
VNX File OE

VNX File OE

VNX File OE

VNX File OE
VNX File OE

VNX File OE
VNX File OE
VNX File OE
VNX File OE

VNX File OE

VNX File OE

VNX File OE

VNX File OE

When FSCK was executed on an existing file system that had a size larger than 16TB-64MB, it removed blocks within the last CG of 16TB. This caused corruption of some data.
The Data Mover responded with "couldn't get a free page",
"Out of memory", or another message indicating an inability
to allocate memory.
An error occurred while establishing a TCP connection to an
(external) KDC and Kerberos closed the stream. Later, it tried
to close the stream again, resulting in a Data Mover panic.
When the Data Mover requested a Kerberos ticket using UDP,
the response was too large for a UDP packet and an error was
returned. The request was sent again using TCP, but the
previous reply was not deleted, which caused a memory leak.

8.1.9.155


VNX File OE

Customers were not able to access the file system; all NFS/CIFS requests were blocked.
When a replication session stopped while data was being transferred, a second version restore happened when the session was restarted. This made the session restart last longer.
The statsd crashed when an invalid response was received from the nas_server -query command.

68864834 / 751007

8.1.9.155

65001990 / 751316

8.1.9.155

69126648 / 763056

8.1.9.155

VNX File OE

VNX File OE
VNX File OE

When an LDAP user upgraded the File OE for a Unified system,


USM failed with the message: Insufficient
permission to run.

67819406 / 724815

8.1.9.155

VNX File OE

There is a risk of a Data Mover bugcheck when trying to reboot the system while there is an ongoing backup.
Dbchk was unable to detect the problem that caused a
filesystem extension failure: Error 10264: There is a
volume definition mismatch between the
Control Station and server_4 for v8774.
Component volume(s) v28277 on the Control
Station do not exist on server_4.

70157824 / 751017

8.1.9.155

71785850 / 763620

8.1.9.155

74363306 / 773551

8.1.9.155

74078324 / 791878

8.1.9.155

75581540 / 792684

8.1.9.155

75186726 / 790431

8.1.9.155

73591176 / 768798
74726358 / 784972

8.1.9.155
8.1.9.155

68388050 / 750826
68209328 /702964
60262164 / 750838

8.1.9.155

57606950 / 777372

8.1.9.155

60525818 / 615062

8.1.9.155

68012370 / 694787

8.1.9.155

67458600 / 717261

8.1.9.155

VNX File OE

VNX File OE

VNX File OE
VNX File OE

VNX File OE

VNX File OE
VNX File OE
VNX File OE
VNX File OE

VNX File OE
VNX File OE
VNX File OE
VNX File OE


The USM health check failed on "File System Usage on the Data Mover" on systems whose "avail" space is more than 2,147,483,647 (the 32-bit signed integer maximum).
Under rare circumstances when restarting the SMB service, the Windows event log auto-archive may stop working.
Under rare circumstances, when a very large number of event logs is generated and the retention is not set to infinite, there is a memory leak.
A temporary SMB server performance decrease occurred when an application used the File change notify feature on an EMC server share.
Customers were unable to mount NFS exports on an ESXi host.
The VNX for File OE crashed due to the use of a non-referenced class instance.
The system failed to dial home when the configuration (configuration files and log files) had invalid UTF-8 characters.
NFSv4 Access Control Entries (ACEs) were displayed in the format of OWNER@ or GROUP@ if the ACE belonged to a user or group that matched the current owner user or owner group of the file or directory. The OWNER@ and GROUP@ formats were misleading to the user.
When a Data Mover unexpectedly rebooted, no call home event was sent back to EMC.
Unisphere was inaccessible after a Control Station failover occurred.
There was a race window in which the FLR log was written before a mount completed.
The command nas_fs -info -all failed with a backtrace.
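For reference, the failing query and a narrower single-file-system variant (the file system name fs01 is a placeholder):

# Query every file system; this is the invocation that returned a backtrace
nas_fs -info -all
# Query a single file system by name
nas_fs -info fs01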


8.1.9.155


VNX File OE

The SMB2 client was unable to change ownership or permissions on symlinks in mixed Windows/Unix file systems. The operation hung.
Applications on SMB2 clients can be confused by deduped files because of the "sparse" file attribute.
In situations where CEPP over MSRPC was used, the user experienced a Data Mover bugcheck.

68538084 / 717435

8.1.9.155

72122142 / 760436

8.1.9.155

72894094 / 764319
72894094 / 764890
67618804 / 717238

8.1.9.155

729965
67433430 / 690491

8.1.9.155

68439794 / 732568
71928238 / 751248

8.1.9.155

71194884 / 733687

8.1.9.155

70805746 / 734409

8.1.9.155

71428114 / 743850

8.1.9.155

73808368 / 769160

8.1.9.155

72368390 / 744622

8.1.9.155

71522612 / 747102

8.1.9.155

73375412 / 757841

8.1.9.155

73651308 / 761153

8.1.9.155

With all the Data Movers in standby mode, the arrayconfig script hung when querying the replication information.
A large percentage of CPU was consumed when handling a large number of SMB2 durable handles.
A Data Mover bugchecked when running a Codenomicon test suite.
When a checkpoint schedule was created using the GUI, the command nas_syncrep -reverse failed.
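As context for the last item, nas_syncrep reversals are run from the Control Station; the argument form and session name below are assumptions for illustration only:

    # Reverse the direction of a synchronous replication session (session name is hypothetical)
    nas_syncrep -reverse pfs_sync_session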

763116 / 763116

8.1.9.155

74030184 / 773991

8.1.9.155

72386430 / 745392

8.1.9.155

71880786 / 745936

8.1.9.155

A Create Replication task failed with the following error: Error 13422034977: Operation task id=3285 on DEFTHW991X3STO at first internal checkpoint create state failed with error: 13421840573: Execution failed: Segmentation fault: Operating system signal. [C_STRING.c_strlen].

72728264 / 751235
72150002 / 751826
73212910 / 754586

8.1.9.155

VNX File OE
VNX File OE
VNX File OE
VNX File OE

VNX File OE

VNX File OE
VNX File OE
VNX File OE

VNX File OE
VNX File OE

VNX File OE

VNX File OE
VNX File OE

VNX File OE
VNX File OE
VNX File OE
VNX File OE
VNX File OE

Regular I/O pauses were observed with snapshots enabled, causing a 20-25% performance reduction.
NFS/SMB write request latency exceeded 20 seconds on file systems that were involved in a Data Mover failover or restart.
The nas_migrate command failed with the message, failed to validate or generate a migration plan.
The File System Checkpoints page in the Unisphere GUI had an empty Schedule Name column.
An AIX host failed when executing commands within a checkpoint directory.
A panic occurred during the upload of a large file (above 1GB) to the VNX for File OE through SFTP, due to a bug in the management of data buffers.
When using NFSv4, the Data Mover bugchecked under a heavy load that led to network packet fragmentation.
After the creation of a new user role, many management commands run by the nasadmin user failed with the error The user is not authorized to perform the specified operation.
Informational log messages produced during the server_devconfig probe operation added unnecessary noise to the log, which could distract from real issues and be interpreted as failures.
The VNX for File OE bugchecked (0x20b8b28) due to a very busy IPv6 network with a large destination cache.
On a nearly full file system, a customer was truncating a dense file while trying to write another file. The modification sequence in the truncate process led to an out-of-sync counter.

8.1.9.155



VNX File OE

A Data Mover bugchecked with a SYSTEM WATCHDOG string.

8.1.9.155

VNX File OE

The command nas_storage -check all failed with a backtrace.

VNX File OE

A Data Mover could reboot or fail over very slowly, and orphan files could be created that can only be removed by running fsck.
New Linux clients were unable to mount the file system using the NFSv4.1 protocol.
A machine connected via the NFS protocol hung, and the following message was displayed in the server_log:
2015-08-06 10:16:28: KERNEL: 3: 3: ThreadsServicesSupervisor: Service:NFSD Pool:NFSD_Exec BLOCKED for 409 seconds: Server operations may be impacted.

72770820 / 752714
74713150 / 776397
72728264 / 753113
74513944 / 774532
75319920 / 789151
70653478 / 755323

VNX File OE
VNX File OE

VNX File OE
VNX File OE

VNX File OE

VNX File OE

VNX File OE

VNX File OE
VNX File OE


A user was unable to mount an NFS export from Windows 2008 R2 running the Windows NFS client over IPv6.
A Data Mover bugcheck occurred when using the server_statmon command to monitor NFS export access. It was more likely to happen when using snap file systems.
After upgrading to 8.1.8.119, users were unable to access certain CIFS shares. An extra "root_vdm_x" directory was added to the export path, which is invalid. Attempts to re-export without that path did not resolve the problem; the invalid directory would be re-added. This occurred if the share was exported via both NFS and CIFS, if the share was exported from a VDM for CIFS but a physical Data Mover for NFS, and if the share was exported on NFS using an NFS alias that matched its mountpoint name on the VDM.
An auto-fsck was initiated on a system. The reason given for triggering the auto-fsck was bad dir read. However, the auto-fsck did not detect any on-disk directory inconsistencies on the file system.
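When diagnosing the export-path problem above, the exported paths can be inspected from the Control Station; a minimal sketch (the Data Mover name server_2 is a placeholder):

    # List the exports on a Data Mover and check each path for a stray root_vdm_x component
    server_export server_2 -list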

When a file system was frozen and the structure of the file system was not initialized correctly, space reclaim could access an invalid NULL pointer.
When a user tried to create a file system larger than 16TB-8GB, a file system of only 2TB was created.
CIFS threads blocked and the user received unexpected output.

8.1.9.155

8.1.9.155

75213994 / 791579

8.1.9.155

73206702 / 754306

8.1.9.155

72878838 / 756120

8.1.9.155

73249794 / 756898
73931218 / 766962

8.1.9.155

71623722 / 737659
71652346 / 757815
73633222 / 769189
75334706 / 785301
76828706 / 801007

8.1.9.155

74066422 / 767110
73903652 / 769772
74087822 / 769774
74144132 / 769775
74317178 / 771591
74265562 / 771837
74329672 / 771956
74118550 / 772045
74362446 / 772442
74317178 / 771485

8.1.9.155

99999999 / 772881

8.1.9.155

75098050 / 781069
73533420 / 767732

8.1.9.155


8.1.9.155


VNX File OE

The SSL-enabled LDAP service failed to connect to the LDAP server with 91 / Connect error. The server log reported the following error:
LDAP/SSL protocol error: The LDAP server certificate verification failed, the signature is not valid.

74774892 / 784540

8.1.9.155

VNX File OE

Creation of a file system replication to a destination system failed due to the following error: Query storage pools All. Remote command failed.

75066808 / 789103
74144666 / 793619

8.1.9.155

VNX File OE

A customer experienced an out-of-memory bugcheck when using the CEPP feature.
When using NFSv4 with GSS Kerberos integrity from SUSE clients, CIFS and NFS might be unable to connect to the server.
When using DHSM over SMB while the system was under heavy load or in a misconfigured environment, the server bugchecked.
The Data Mover bugchecked. After failover to the standby Data Mover, messages could not be written to the FLR log file, and messages similar to the following were seen in the server log: "Error opening the activity log file, status = 17".
When using NFSv4.1, VAAI fast-cloned files were not readable on the ESX host.
If the SavVol reached full capacity and could not be extended automatically, the oldest checkpoints were deleted.
Files with partial corruption could be unreadable by FSCK on a deduplication-enabled file system.
GetAttr could return the same ChangeID but different file sizes.
Customers were unable to join a compname to a domain's Active Directory when using an administrative account with a non-ASCII password.
If the first disk in a mixed thin_storage file system was not a thin disk, the file system might not be handled properly.
Modules such as CAVA and RDE FSCK failed unexpectedly without notifying the user.
The Data Mover bugchecked with the following message: Page Fault Interrupt. Virt ADDRESS: 0x0000e595ec Err code: 0 Target addr: 0x0000000064.

68140920 / 740878

8.1.9.155

70524574 / 748032

8.1.9.155

71648738 / 751190

8.1.9.155

68810662 / 735334
73453320 / 759296

8.1.9.155

791145

8.1.9.155

751010

8.1.9.155

41472302 / 429397

8.1.9.155

751009

8.1.9.155

776257

8.1.9.155

779839

8.1.9.155

67479716 / 778506

8.1.9.155

75739878 / 791144 / 785696

8.1.9.155

72878838 / 769713

8.1.8.132

70805746 / 756461

8.1.8.132

71777016 / 756450

8.1.8.132

VNX File OE
VNX File OE

VNX File OE

VNX File OE
VNX File OE
VNX File OE
VNX File OE
VNX File OE

VNX File OE
VNX File OE
VNX File OE

VNX File OE
VNX File OE

VNX File OE

Unable to mount an NFS export from Windows 2008 R2 when running a Windows NFS client over IPv6.
An AIX host failed when executing du, cp, tar, and similar commands within a checkpoint directory. The following error was generated when accessing the checkpoint directory from the PFS share folder exported by the VNX NFS server:
root@GERDC1AIX03:/pfsmnt/.ckpt/2015_05_31_07.51.01_GMT/data/dir1# du
du: 0653-175 Cannot find the current directory.
When two Data Movers (DMs) in the same IPv6 broadcast domain were brought up serially, if the first DM lost connectivity, the IPv6 network neighbor client's cache was not updated.



VNX File OE

Modules like CAVA aborted unexpectedly without notification, and generated log messages such as bad dir read in the server_log. Although the logged messages implied that the directory was corrupt, the file system did not always contain any inconsistencies.
When running FSCK on a file system that is larger than 16TB-64MB, FSCK could remove blocks within the last cylinder group (CG), potentially corrupting some data within the file system.
During VNX2 migration operations, if the source virtual data mover (VDM) was attached to more than 200 interfaces, migration operations could hang.
An AIX host failed when executing du, cp, tar, and similar commands within a checkpoint directory. The error occurred when accessing the checkpoint directory from a PFS share folder that had been exported by a VNX NFS server. For example:
root@GERDC1AIX03:/pfsmnt/.ckpt/2015_05_31_07.51.01_GMT/data/dir1# du
du: 0653-175 Cannot find the current directory.
I/O pauses occurred during snapshot operations, causing a 20-25% performance reduction.
Tests that sent corrupted NFS requests to the server caused Data Mover bugchecks. This did not occur with standard NFSv4 clients.
NFS/SMB write request latency sometimes exceeded 20 seconds on file systems that were involved in a Data Mover failover or restart.
NFS/SMB write request latency sometimes exceeded 20 seconds on file systems that were involved in a Data Mover failover or restart.
Tests that sent corrupted NFS requests to the server caused Data Mover bugchecks. This did not occur with standard NFSv4 clients.
I/O pauses occurred during snapshot operations, causing a 20-25% performance reduction.
An AIX host failed when executing du, cp, tar, and similar commands within a checkpoint directory. The error occurred when accessing the checkpoint directory from a PFS share folder that had been exported by a VNX NFS server. For example:
root@GERDC1AIX03:/pfsmnt/.ckpt/2015_05_31_07.51.01_GMT/data/dir1# du
du: 0653-175 Cannot find the current directory.
During VNX2 migration operations, if the source virtual data mover (VDM) was attached to more than 200 interfaces, migration operations could hang.
When running FSCK on a file system that is larger than 16TB-64MB, FSCK could remove blocks within the last cylinder group (CG), potentially corrupting some data within the file system.

74066422 / 780387

8.1.8.132

72380056 / 769719

8.1.8.132

72284998 / 763440

8.1.8.132

70805746 / 756461

8.1.8.132

756456

8.1.8.132

755234

8.1.8.132

756459/ 729965

8.1.8.132

756459

8.1.8.132

755234

8.1.8.132

756456

8.1.8.132

70805746 / 756461

8.1.8.132

72284998 / 763440

8.1.8.132

72380056 / 769719

8.1.8.132

VNX File OE

VNX File OE

VNX File OE

VNX File OE
VNX File OE

VNX File OE

VNX File OE

VNX File OE

VNX File OE
VNX File OE

VNX File OE

VNX File OE



VNX File OE

Modules like CAVA aborted unexpectedly without notification, and generated log messages such as bad dir read in the server_log. Although the logged messages implied that the directory was corrupt, the file system did not always contain any inconsistencies.
When two Data Movers (DMs) in the same IPv6 broadcast domain were brought up serially, if the first DM lost connectivity, the IPv6 network neighbor client's cache was not updated.
An AIX host failed when executing du, cp, tar, and similar commands within a checkpoint directory. The following error was generated when accessing the checkpoint directory from the PFS share folder exported by the VNX NFS server:
root@GERDC1AIX03:/pfsmnt/.ckpt/2015_05_31_07.51.01_GMT/data/dir1# du
du: 0653-175 Cannot find the current directory.
Unable to mount an NFS export from Windows 2008 R2 when running a Windows NFS client over IPv6.
A warm reboot of a Data Mover could not be executed, and a cold reboot occurred instead. If this occurred, client access to the data could be temporarily disrupted.
An automatic file system check was initiated on a file or unified system. The reason given for triggering the auto file system check was bad dir read. However, the auto file system check did not detect any on-disk inconsistencies on the file system.
Linux clients were misconfigured. The file system service hung, and the server log showed messages similar to:
NFSD Pool:NFSD_v4Daemons BLOCKED
NFSD Pool:NFSD_Exec BLOCKED
A memory corruption occurred while trying to rename a file.
In a failover situation under heavy load, some NFS clients could get StaleHandle errors.
A Data Mover rebooted with the following error code: 0x00027e4c19.
A system reboot occurred with a hardlink pointer to a file.
When a checkpoint file system corruption was detected and the corruption came from the production file system, only the checkpoint file system was marked corrupted. The Data Mover would bugcheck again, since the production file system was not marked corrupted.
With NFSv4, UNIX permissions (mode bits) were generated from the ACL; the group setting was then propagated to the owner.
NFS deadlocked when a file written with pNFS was truncated. Thread blocked messages were seen in the server log.
When using NIS as the name resolver, if there was an entry without a name in the group map, the Data Mover bugchecked.
Permissions set on DFS folders through MMC or by using CLI commands were not accepted.
NFSv4 performance slowed, with some operations lasting about 20 seconds.
A slow response time was seen on clients using specific applications.

74066422 / 780387

8.1.8.132

71777016 / 756450

8.1.8.132

70805746 / 756461

8.1.8.132

72878838 / 769713

8.1.8.132

758179

8.1.8.121

74066422/ 767110

8.1.8.121

67020256 / 689865

8.1.8.119

67999806/ 699293
67433430/ 691027

8.1.8.119
8.1.8.119

64607060/ 678801

8.1.8.119

68506210/ 716653
63195614/ 715376

8.1.8.119
8.1.8.119

60262164/ 715367

8.1.8.119

68729530/ 710839

8.1.8.119

68579608/ 701865

8.1.8.119

64120168/ 699142

8.1.8.119

65738438/ 693366

8.1.8.119

66739608/ 689050

8.1.8.119

VNX File OE

VNX File OE

VNX File OE
VNX File OE

VNX File OE

VNX File OE

VNX File OE
VNX File OE
VNX File OE
VNX File OE
VNX File OE

VNX File OE
VNX File OE
VNX File OE

VNX File OE
VNX File OE
VNX File OE



VNX File OE

The NTXMAP feature did not support UNIX user names defined in NIS.
The server_date command did not reflect the sync_delay option set by the customer.
When using nas_checkup, a warning was returned for the parameter canRunRT, whose current and default values were not the same.
When a file was overwritten and the file system was full, the new file creation was allowed even if the new file size was larger than the old file size. Writing this much data generated a QUOTA_EXCEEDED error. The end result was that the old file was lost and the new file couldn't be created.
Execution of the nas_replicate switchover command failed.

68878404/ 718434

8.1.8.119

56244034/ 600639

8.1.8.119

56242026/613387

8.1.8.119

54995532/ 613405

8.1.8.119

47184462 / 613479/ 485477
54593730/ 613485

8.1.8.119

53513236/ 649331

8.1.8.119

649841
64309006/ 693813

8.1.8.119
8.1.8.119

66537256/ 693832

8.1.8.119

706261
69662042/ 713354

8.1.8.119
8.1.8.119

63811662/ 655693

8.1.8.119

722333

8.1.8.119

60586898/ 617060
60893710/ 623870/ 724173
71419220/ 737617/ 745754

8.1.8.119

65340286/ 669498

8.1.8.119

64121286/ 680458

8.1.8.119

66737082/ 682013

8.1.8.119

VNX File OE
VNX File OE

VNX File OE

VNX File OE
VNX File OE
VNX File OE
VNX File OE
VNX File OE

VNX File OE
VNX File OE
VNX File OE

VNX File OE

VNX File OE

VNX File OE
VNX File OE
VNX File OE

VNX File OE
VNX File OE


Dbchk failed to detect an invalid entry that was inserted manually into the /nas/server/slot_2/ufs file.
The Control Station rebooted unexpectedly after 485 days of uptime.
The command nas_cs -set -dns_domain backup failed.
When a file was detected by an antivirus scan, there was no information about the antivirus engine that detected the virus.
An error was encountered: ../ufslog.cxx at line: 11125 : sync: no progress reducing dirty list.
An inconsistency was seen in the NFS statistics counters.
Customers got STATUS_INTERNAL_ERROR for an FSCTL_QUERY_ALLOCATED_RANGES request on a compressed file.
When executing cpio -m on a Solaris client to migrate files to an NFS share folder, the modification times of the files were not preserved as expected.
When a system running an earlier version of code was used as a destination to create a Replication V2 session, the destination file system had a mismatched log type between the Control Station and the VNX for File OE.
An LDAP user failed to log in via SSH to the Control Station following a Control Station failover.
Storage processor B rebooted due to bugcheck code 0x0000007E.
A customer received an out-of-memory issue with bugcheck text that stated couldn't get a free page, due to a bug in the data cache flush algorithm.
A backup report showed 0MB processed after a successful backup.
Under certain network configurations, ndp x route commands generated a route that was missing a next hop.
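As background for the cpio -m item above, a pass-through copy on the Solaris client looks like the following minimal sketch (the destination path is a placeholder); the -m flag asks cpio to retain each file's modification time:

    # Copy the current tree to an NFS-mounted share, preserving modification times
    find . -print | cpio -pdm /mnt/nfs_share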


8.1.8.119

8.1.8.119


VNX File OE

A single SP bugcheck occurred when the VNX2 array was under pressure that included external I/Os and internal background operations while, at the same time, the other SP was rebooting.
The VNX for File OE bugchecked (0x000068df41, 0x0000000008) during NFSv4 access with a message saying, Page Fault Interrupt.
Users experienced a slow host write response time or a timeout.
A performance issue was experienced due to high CPU utilization when using CIFS file filtering.
Share paths were not canonicalized before being stored in the share databases.
When Vista or Windows 2008 server clients accessed a share and compressed files, an attempt to rename a directory resulted in blocked SMB2 threads on the Data Mover and a loss of access to the CIFS share.
Audit security settings were not correctly restored after a reboot on Data Movers with many VDMs and large file system configurations.
A server might receive invalid Kerberos tickets during SMB authentication, leading to an unexpected error.
The LDAP configuration file (ldap.conf) was present in the /.etc directory, but was incomplete.
A permissions error occurred for CIFS users when changing a directory.
On a compressed file, the following message was received: STATUS_INTERNAL_ERROR for FSCTL_QUERY_ALLOCATED_RANGES.

643402

8.1.8.119

68781348/ 704740

8.1.8.119

384143

8.1.8.119

67303398/ 690463

8.1.8.119

61227688/ 688267

8.1.8.119

67948752/ 706949

8.1.8.119

708253

8.1.8.119

66615986/ 713081

8.1.8.119

69680002/ 718807

8.1.8.119

69929998/ 721410

8.1.8.119

69662042/ 722593

8.1.8.119

A bugcheck occurred (0x000072e499) when unmounting a file system and stopping the virus checker at the same time.
CIFS users could not access SFTP with homedir and an FQDN domain.

58454314/ 724284

8.1.8.119

724359/ 70440966

8.1.8.119

CEPA reported the actual path instead of the symlink path when deleting a file.
When the file system was paused, the nas_fs -info <fs_name> -o mpd command could not retrieve the Multi-Protocol Directory information.
An external OpenLDAP server configured to deny anonymous connections on the RootDSE showed error messages in the log file.
An incremental NDMP backup of a large file system with many files, running through Avamar, hung. The backup hung after Avamar attempted to stop and restart the backup. The backup threads started to terminate immediately after the backup restarted, and the transfer rate showed as 0 kbs.
A backup report showed 0MB processed after a successful backup.
An IPv6 network tool could cause an error condition on the Data Mover.
A user was unable to mount replication destination checkpoints via NFSv4 if the access policy of the file system was not NATIVE.
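For reference, the Multi-Protocol Directory query above follows the syntax quoted in the item; a minimal sketch with a placeholder file system name:

    # Show Multi-Protocol Directory information for file system myfs
    nas_fs -info myfs -o mpd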

65618098/ 736528

8.1.8.119

54065574/ 652192

8.1.8.119

58961190/ 720929

8.1.8.119

65043266/ 715375

8.1.8.119

64121286/ 680458

8.1.8.119

69685320/ 713567

8.1.8.119

54333266/ 610491

8.1.8.119

VNX File OE

VNX File OE, CBFS
VNX File OE, CIFS
VNX File OE, CIFS
VNX File OE, CIFS
VNX File OE, CIFS
VNX File OE, CIFS
VNX File OE, CIFS
VNX File OE, CIFS
VNX File OE, CIFS
VNX File OE, CIFS
VNX File OE, CIFS
VNX File OE, CIFS
VNX File OE, Install, Configure
VNX File OE, LDAP, VNX
VNX File OE, NDMP
VNX File OE, NDMP
VNX File OE, Network
VNX File OE, NFS




VNX File OE, NFS
VNX File OE, NFS
VNX File OE, NFS
VNX File OE, NFS

After a client reboot, a server panicked when NFSv4 locks were held by the client.
Under NFSv4 load, the server could crash in some special circumstances.
NFS writes were slow after a pNFS client switched I/O over to NFS after a failure.
pNFS: The Data Mover returned NFS4_ERR_TOOSMALL to the Linux client when the layout description (list of extents) did not fit in the client buffer size. The client fell back to NFS.
When the dbms_backup script was run as nasadmin, a failure occurred because nasadmin didn't see the server when the ACL was set.
As the CPU load reached 12%, mail was rejected with error messages such as the following (from grep "rejecting connections" /var/log/maillog | tail -20):
Apr 25 02:15:46 WKPSVNN-PTCC-2165 sendmail[2543]: rejecting connections on daemon MTA: load average: 14
Apr 25 02:16:01 WKPSVNN-PTCC-2165 sendmail[2543]: rejecting connections on daemon MTA: load average: 14
The installation process failed when the same IP was assigned to SPA and SPB.
A bugcheck was initiated on the Data Mover when a PMDV was deleted while prefetch I/O still existed.
The destination file system replication or migration was corrupt, but the source file system was clean.
When the SavVol was larger than 2GB, auto-extension could not stop until all the space in a pool was consumed.
A Data Mover bugchecked with error code 0x00009d5fd7.

68927792/ 704534

8.1.8.119

70410652/ 720785

8.1.8.119

68729530/ 725727

8.1.8.119

68729530/ 738744

8.1.8.119

55783542/ 651834

8.1.8.119

61526254/ 715294

8.1.8.119

67650918/ 693125

8.1.8.119

59553716/ 651820

8.1.8.119

63316290/ 669268

8.1.8.119

69343968/ 711230/ 712801

8.1.8.119

61627400/ 652203

8.1.8.119

A server exceeded the timeout value during a boot operation.

60532776/ 652204

8.1.8.119

The creation of a 2TB file system from ViPR failed.

64298388/ 654578/ 655198

8.1.8.119

Thousands of defunct sysadmin accounts were created on the Control Station.

64507704/ 661838/ 665714

8.1.8.119

An NFS export created on a VDM appeared in the Unisphere GUI, but then disappeared later.

64979880/ 666806

8.1.8.119

Five weekly historical statistics files were found in the /nas/jserver/sdb/control_station/data_movers/fs_capacity/ directory, instead of the expected files.
A Data Mover failover command failed with an unclear error message: Execution failed: valid_entry: Routine failure. [EXTRACTOR.set_entry].

62954784/ 669470

8.1.8.119

64647466/ 692625

8.1.8.119

VNX File OE, Platform Services
VNX File OE, Platform Services
VNX File OE, Platforms
VNX File OE, SnapSure
VNX File OE, SnapSure
VNX File OE, SnapSure
VNX File OE, Storage
VNX File OE, Storage
VNX File OE, System Management
VNX File OE, System Management
VNX File OE, System Management
VNX File OE, System Management
VNX File OE, System Management



VNX File OE, System Management
VNX File OE, System Management
VNX File OE, System Management
VNX File OE, System Management
VNX File OE, System Management
VNX File OE, System Management

After a VNX for File upgrade, the CIFS server was no longer
available.

68106062/ 698162
68191680/ 700842/ 700096

8.1.8.119

The command nas_fs -info -all failed with a backtrace.

67458600/ 699981/ 701067

8.1.8.119

A MoverStats statSet="NFS-All" XML API query did not return the "v3aai" counter.

706255

8.1.8.119

ViPR SRM Client was unable to retrieve NFS v4.1 statistics via
XML API.

717380

8.1.8.119

SRM queries caused frequent Control Station reboots due to an out-of-memory condition.

65172254/ 699436/ 699435

8.1.8.119

A user encountered an error that stated the following when upgrading from a previous version of code:
Error: File system has mount record present, but is not in use in filesys table.
Error: File system is found not in use, yet there is rw/ro server in its entry in $NAS_DB/volume/filesys file.
The /nas/log/cmd_log showed that the nas_storage -sync command was executed every 5 minutes.
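The repeated invocations described in the last item can be confirmed from the Control Station command log; a minimal sketch:

    # Count occurrences of the nas_storage -sync command in the command log
    grep -c 'nas_storage -sync' /nas/log/cmd_log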

703739

8.1.8.119

60933588/ 667347

8.1.8.119

Symmetrix poller shut down and never restarted.

63091868/ 696681

8.1.8.119

UDoctor could not take the correct recommended action.

64430016/ 715296

8.1.8.119

When executing cpio -m on a Solaris client to migrate files to an NFS share folder, the modification times of the files were not preserved as expected.
A customer encountered a bugcheck with a divide exception.

63811662/ 655693

8.1.8.119

65637828/ 676161

8.1.8.119

A customer encountered a bad bn error on the destination side when performing an offload copy operation.
When creating a fast clone (for a vmdk image), the Data Mover hung or returned with the message, sync: no progress.
In the time zone of UTC+14 (or UTC+13 during daylight saving time), when upgrading the Unisphere File software, the post-upgrade check failed with "Error 3501: Storage API code=4562: SYMAPI_C_CLAR_CIL_FAILURE Failure during Clariion Interface Layer's attempt to obtain data from Clariion array". The Control Station nas_storage -c -a command also failed with the same error. The naviseccli command autotiering -info -schedule (from the Control Station or to the array) failed with: -GMTOFF:Value 840 is greater than maximum. The maximum value is 780.

60825130/ 680460

8.1.8.119

66254656/ 687722

8.1.8.119

68621960/ 700810

8.1.8.119

VNX File OE, System Management
VNX File OE, System Management
VNX File OE, UDoctor
VNX File OE, UFS
VNX File OE, UFS
VNX File OE, UFS
VNX File OE, UFS
VNX File OE, Unisphere



VNX File OE

VNX OE for File 8.1.6.96 did not include the Cabinet-level disaster recovery feature. VNX OE for File Cabinet-level disaster recovery utilizes RecoverPoint, MirrorView, or SRDF technologies to replicate the underlying storage of File resources to a secondary array. It provides NAS file system and VDM remote replication to a secondary site by implementing Data Mover-level replication and Cabinet-level failover. This allows for recovery and continued business operation in the event of a disaster at the primary array.
A storage pool reached an out-of-space condition, resulting in a Data Mover panic when a file system whose name contained a space tried to use the pool. This file system was not automatically unmounted after the panic, which resulted in recurring panics on the Data Mover.
Migration failed because a Replication V2 session could not be created with a mismatch of file attributes.
A customer encountered a page fault error when creating a 16TB file system using MVM.
Clients were not able to log onto the server using NTLMSSP authentication after changing the NTLM Authentication Level Security policies to:
0 LAN Manager Auth Level: Send LM & NTLM responses
2 LAN Manager Auth Level: Send NTLM response only
An access right issue occurred when the facility to use more than 16 groups in an NFS credential was used and the inherited parent ACL included an ACE with Creator_Owner/Creator_Group.
Users could not configure FT files larger than the current maximum size of 100MB.
The DNS records for a CIFS server interface were not updated in the DNS server secured zone. DNS updates for the CIFS interface failed.
An SP reboot occurred when converting a UNIX mount path into Unicode.
When a PFS was created with a special size that was not a multiple of 1MB, the nas_migrate creation for that PFS failed on the Create Replication step.
A file rename occasionally resulted in an incorrect audit log entry.
When incorrect host names were present in the export options, the resolution was executed every time the export options were checked.
The Data Mover hung when a file system was unmounted.
The SMB2 client received a message: STATUS_FILE_CLOSE on file operations.
The Data Mover experienced multiple "Invalid Opcode exception. Virt ADDRESS: 0x0001be2471" bugchecks on routine calls from "UFS_FileSystem::findExistingNode()" after experiencing file system corruption.

779900, 708754, 708759, 708756

8.1.6.101

717000

8.1.6.101

65997274/ 69876844

8.1.6.96

61554224/ 678167

8.1.6.96

659697

8.1.6.96

63892708/ 661114

8.1.6.96

60261046/ 658493

8.1.6.96

62112774/ 654895

8.1.6.96

63917158/ 654476

8.1.6.96

63685744/ 651226

8.1.6.96

63603546/677639

8.1.6.96

64529252/661903

8.1.6.96

64873316/ 672774
60876796/ 654183

8.1.6.96
8.1.6.96

651824 / 58854686

8.1.6.96

VNX File OE, System Management

VNX File OE
VNX File OE
VNX File OE

VNX File OE

VNX File OE
VNX File OE

VNX File OE
VNX File OE

VNX File OE
VNX File OE

VNX File OE
VNX File OE
VNX File OE



VNX File OE

A SavVol used for either SnapSure or Replication was automatically extended due to reaching the HWM. However, even if the SavVol had sufficient free space after the auto-extension, if the SavVol size was larger than 2TB, the auto-extension kept repeating automatically until the size reached 16TB, or until all of the free space in the pool was used if there was not enough free space to reach 16TB. There were many informational messages in the sys_log similar to:
CS_PLATFORM:SVFS:INFO:20:::::Slot2:1424053636: ckpt_3Xday_for30days_054 autoextension succeeded.
A file system was created directly on top of meta volumes that did not belong to any pools. A command to extend this file system by specifying another volume failed with the message:
Error 2237: Execution failed: Segmentation fault: Operating system signal. [FS_EXEC.pre_exec].
The server_mount command did not validate command options. The command attempted to run, even with invalid (misspelled) options, and did not work.
When adding two volumes with the same name, the Data Mover (DM) performed a GP exception bugcheck.
The trunk.LoadBalance parameter accepted invalid values, preventing the load balancing feature from working as expected.
A large number of messages similar to the following accumulated in the system server_log:
NFS3:NfsGssRequest::doStreamProcessing: RPC protocol violation, verf size <n>
In addition, the Data Mover (DM) performed an out of msgb bugcheck.
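As context for the trunk.LoadBalance item, Data Mover parameters are normally inspected with server_param; the facility and parameter names follow the item, while the Data Mover name is a placeholder:

    # Display current and default values of the LoadBalance parameter in the trunk facility
    server_param server_2 -facility trunk -info LoadBalance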

711230/ 69343968

8.1.6.96

704369

8.1.6.96

506271

8.1.6.96

54676948/ 613383

8.1.6.96

57768114/ 613397

8.1.6.96

58237744/ 613403

8.1.6.96

During a file system failover, when a file system with DFS shares was replicated from version 5.6.49 (or earlier) to a later version, the destination side running the higher version failed to unmount, leaving it in a frozen state.
When the VNX Control Station accumulated multiple weeks of historical data, the Jserver ran out of internal memory and restarted. This prevented accurate file system statistics from appearing in Unisphere and prevented the system from generating appropriate file system alerts.
File storage rescan failed to diskmark LUNs with identifiers equal to or greater than 10000, preventing users from using the LUNs as File-side disk volumes.
When automatic file system extension was enabled for a file system, the Data Mover (DM) on which the file system was mounted was not eligible for remote DR.
VBB restore did not restore CIFS-related attributes if a file was created on an NFS share.
Setting the NDMP module log level to LOG_DBG2 caused a File OE bugcheck with the following alert:
NDMP: 10: Thread rest004 DAR: waiting for finish of previous file, thrdId=4, write_owner=3, bufpt=0x1c58a9ffb, bufend=0x1c58aa000, bufpt string

59133330/ 613406

8.1.6.96

58504016/ 618756

8.1.6.96

61572438/ 625553

8.1.6.96

680653

8.1.6.96

61065884/ 631900

8.1.6.96

62110602/ 632115

8.1.6.96

VNX File OE

VNX File OE

VNX File OE
VNX File OE

VNX File OE

VNX File OE

VNX File OE

VNX File OE

VNX File OE

VNX File OE
VNX File OE



VNX File OE

When generating user quotas, Unisphere took an extended period of time to populate the user name information.
When using Unisphere to view a VNX system running a 05.33 version of the File OE targeting a VNX system running a 05.32 version of the File OE, the Search LUN page did not return any results.
CIFS clients were unable to delete symbolic links with targets that pointed to an absolute path (that is, paths beginning with /).
Using the File CLI server_stats cifs.user command to monitor a user name with the following format: DOMAIN\\\\username caused a File OE "Page Fault Interrupt" bugcheck or caused memory corruption.
When configuring the LDAP service to use Kerberos authentication, the VBB restore process caused a system bugcheck.
Log files generated by the server_stats/nas_stats process and the statsd daemon were not properly cleaned up in the directory /nbsnas/stat_groups. Eventually the number of files grew to an excessive size.
The checkpoint auto-extension was occasionally skipped, producing log entries such as:
2013:96104480787::Slot4:1384521607: ckpt_h_f00385_001 autoextension skipped: used 89, HWM 90%.
This led to a condition where the save volume was full and the checkpoint remained inactive.
In reverse replication operations, systems encountered an error that forced them to retry the operation. This produced messages such as: "priIp:10.10.16.78, secIp:127.0.0.1 not found"; "CmdReplicatev2ReverseSec::startingRep fsId: 0 not found"; and/or "VersionSetContext::removeReplicaContext() failed". If the reverse replication issue was not resolved in a timely manner, the Data Mover (DM) performed a "couldn't get a free page" bugcheck or an out-of-memory bugcheck.
While performing a diagnostic check, USM displayed incorrect information about pool space.
A Data Mover (DM) bugcheck occurred with the following message:
Memcpy size 0x7fff8 is too large
A corrupted server message block (SMB) request contained a corrupted length value that was larger than the available data. This caused a large memory allocation that failed and resulted in a Data Mover (DM) bugcheck.
An incorrectly formatted Service Principal Name (SPN) identifier, lacking either the service component or the hostname value, caused a Data Mover (DM) to bugcheck.
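For reference, per-user CIFS statistics such as cifs.user in the server_stats item above are sampled from the Control Station; the -monitor/-interval/-count form below is an assumed invocation for illustration, with placeholder values:

    # Sample the cifs.user statistic every 5 seconds, 10 times (option form assumed)
    server_stats server_2 -monitor cifs.user -interval 5 -count 10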

61862190/ 631383/634746

8.1.6.96

61727774 /636371

8.1.6.96

61689146/ 639434

8.1.6.96

62007318/ 643364

8.1.6.96

60419804/ 646763

8.1.6.96

60520098/ 647406

8.1.6.96

59015588/ 651818

8.1.6.96

57418482/ 651819

8.1.6.96

63613130/ 651218

8.1.6.96

652194/ 652194

8.1.6.96

59848028/ 652199

8.1.6.96

60584052/ 652201

8.1.6.96

VNX File OE

VNX File OE

VNX File OE

VNX File OE

VNX File OE

VNX File OE

VNX File OE

VNX File OE
VNX File OE

VNX File OE

VNX File OE



VNX File OE

When the system experienced a hardware resume checksum error, a call home message was not generated.
SRM queries caused frequent Control Station reboots due to an out-of-memory condition.
The VNX File CLI command nas_fs -info -all failed with a backtrace.
The VNX File CLI command nas_pool -info id=42 failed with the following error:
Error 2237: Execution failed: Segmentation fault: Operating system signal. [LINKED_LIST.first]
When working with a deduplicated file system, the VNX OE incorrectly reported that the file system was extended to its maximum size and generated log messages such as the following:
Couldn't reserve -1 blocks for inode 3976589. Error encountered - NoSpace
The condition most commonly occurred on file systems where the auto-extension feature was enabled.
When attempting to mount a file system named with an invalid character, everything after the invalid character was ignored. This sometimes led to unexpected behavior for the file system. For example, MIXED file systems could be treated as native file systems because the accesspolicy=MIXED setting was ignored.
After a standby operation in an IPv6 environment, when a VNX Data Mover (DM) took over as the primary system DM, its IP address could not be accessed.
In an IPv6 environment on a VNX system with static routes configured, the VNX Data Mover (DM) experienced a "Page Fault Interrupt" bugcheck.
In a statically routed environment with supernetted subnets, IPv6 networking did not work on a VNX Data Mover (DM) because the IPv6 routing logic did not select the appropriate source IP address for a given remote destination.
IPv6 static routes to supernetted networks were created using the "server_ip -route -create" command. Under some circumstances the Data Mover (DM) did not recognize these static routes until it was rebooted.
IPv6 traffic from a VNX Data Mover (DM) failed in particular configurations.
Unisphere returned an alert that the Block OE software version was not compatible with the File OE software version.
Occasionally during synchronous replication failover operations, the source Data Mover (DM) experienced a bugcheck because the source LUN became read-only.

57606950/ 652210

8.1.6.96

65172254/ 679760

8.1.6.96

67458600/ 699981/700863

8.1.6.96

56276112/ 579242
55669828 / 587686 657156

8.1.6.96

58909740/ 662444

8.1.6.96

64938512/ 663866

8.1.6.96

64936446/ 668537

8.1.6.96

64936446/ 668538

8.1.6.96

64635042/ 668539

8.1.6.96

64635042/ 674559

8.1.6.96

64635042/ 674560

8.1.6.96

66124824 / 676340

8.1.6.96

700784

8.1.6.96

566413, 566887

8.1.6.96

VNX File OE
VNX File OE
VNX File OE

VNX File OE

VNX File OE

VNX File OE

VNX File OE

VNX File OE

VNX File OE

VNX File OE
VNX File OE
VNX File OE

VNX File OE, Replication

Limitations for nas_halt.



VNX File OE, Security

Some SFTP client tools, such as "Hiteksoftware JaSFTP" or "Avaya Aura", open 2 SSH channels per TCP connection to the SFTP server. These 2 SSH channels are used to transfer files via the SFTP protocol. SSH multichannels are not supported by VNX. As a consequence, the DART may have some blocked SSHD threads, visible by running 'server_thread -list -pool SSHD'.
When installing the ESRS IP Client on a Windows Server 2012 or later system (which are not supported), the ESRS environment check incorrectly reported the OS as Windows 2008 (which is supported).
The command /nas/bin/nas_cs -info reported the IPv4 Gateway value incorrectly.
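To observe the blocked-thread symptom described above, the SSHD thread pool can be listed per Data Mover using the command quoted in the item (the Data Mover name is a placeholder):

    # List the SSHD thread pool and look for BLOCKED entries
    server_thread server_2 -list -pool SSHD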

63667372/ 652531

8.1.6.96

64018344/ 656572

8.1.6.96

655932

8.1.6.96

652523

8.1.6.96

675307

8.1.3.79

610831

8.1.3.72

60238902/634648

8.1.3.72

621305, 614391

8.1.3.72

572543

8.1.3.72

583674
584290

8.1.3.72
8.1.3.72

603082, 602262, 611854,


621903, 626956, 644194
632514

8.1.3.72

VNX File OE, Serviceability, ESRS
VNX File OE, System Management
VNX File OE, System Management
VNX File OE

VNX File OE
VNX File OE
VNX File OE

VNX File OE

VNX File OE
VNX File OE

VNX File OE
VNX File OE


Startup and termination errors appeared when the deduplication enabler was installed on the system but was not in use.
Applying the latest security-related hot fix to any current VNX2 release (8.1.0, 8.1.1, 8.1.2, and 8.1.3) caused upgrades using USM (Unisphere Service Manager) to a later release to fail. Upgrades using the CLI would remove the hot fix, which had to be re-applied after completing the upgrade.
When the pool name had a space, the fs_extend_handler put error messages in apl_tm.log.
Oracle Siebel clusters crashed the application when trying to access an NFS export.
A file system went offline with corruption detected, and recovery (FSCK) reported corruption on the file system SuperBlock. The file system marked a SuperBlock-related transaction complete even though the modification to the SuperBlock had not been made persistent. If a panic happened before the system had a chance to write the SuperBlock again (periodically, every 30 seconds), it resulted in a lost-write condition on the SuperBlock, and thus the corruption.
Unable to configure LDAPS if the LDAP server certificate's 'Subject Name' property did not contain the LDAP server URL but the certificate's 'Subject Alternative Name' property did.
ProcessMFCExtent error handling.
When all the initiators were visible in a storage group except one initiator (which was in the ~management storage group), the Connect button on the Initiators tab of the Host List page showed the error "Found no user created Storage groups to reconnect on subsystem 184" and did not put the initiator in the requested storage group.
All dedup-enabled LUNs on pool 0 went offline. The following error was displayed:
DEDUP OFFLINE: Pool Gold, all luns are offline and need recovery


8.1.3.72


VNX File OE

Subfolder contents were not restored when the Restore option on the Previous Versions tab in Windows Explorer was used with a C$ path.
A SYSTEM WATCHDOG panic, a GP exception panic, an isAValidIndBlkBuf: bad bn panic, a Corrupted indirect block panic, a mangled directory entry panic, a Page Fault Interrupt panic, and/or a number of other symptoms occurred.
LDAP Service Connection configuration failed on an attempt to add a server certificate in PKCS7 format.
An install issue occurred when using a non-default account for the nasadmin role.
The GUI showed a faulted DM, but nas_server -l did not.
A loss of CIFS access occurred and the Data Mover did not respond to any commands.
Users could not initiate ROBV on a RAID group with a number larger than 256.
When performing an EI recover, an incorrect message that the Data Mover needed to be rebooted was received.
When CS0 failed over to CS1 and the Pre-Upgrade Health Check (PUHC) had been run on CS1, the PUHC reported a failure due to not being able to find some commands under /nas_standby.
Reading a DHSM offline file could cause a Data Mover panic if there were connection issues between the primary storage and secondary storage, or another failure in reading remote data.
The server lost connectivity to Domain Controllers on the reboot after adding a specific Service Principal Name to a server using the server_cifs server_N -setspn -add command.
CIFS clients failed to access a file due to a lock conflict. File system unmounts or a freeze operation hung.
When enumerating open sessions or open files on a CIFS server that belonged to a Data Mover with many servers, the reports may have included information that belonged to other CIFS servers.
A user saw a panic when using a file system that was mounted with the ceppnfs option.
A user encountered a 'bad bn' error code when performing an offload copy operation.
YP requests hung indefinitely when the domain was removed.
Could not access a CIFS server with a NULL session.

47151218/613372

8.1.3.72

61243802/623117

8.1.3.72

52658438/613483

8.1.3.72

54342902/613486

8.1.3.72

60433682/ 618249
642440

8.1.3.72
8.1.3.72

62946704/ 649286
53678158/ 613386
57195216/ 613396

8.1.3.72

61225828/ 629252

8.1.3.72

60226850/617052

8.1.3.72

59688542/619315

8.1.3.72

6037558/620402

8.1.3.72

639006

8.1.3.72

632014

8.1.3.72

61583520/638168

8.1.3.72

618728

8.1.3.72

Under certain circumstances, the internal file system layer returned an invalid uninitialized status on an NFS write. This resulted in memory corruption.
In NFSv4.1, the system could hang when it reconnected.

641492

8.1.3.72

629752

8.1.3.72

When using the parameter nfs.manageGids, the system could panic if the client accessed an NFS export as the root user.

63288760/646572

8.1.3.72

VNX File OE

VNX File OE
VNX File OE
VNX File OE
VNX File OE
VNX File OE
VNX File OE
VNX File OE

VNX File OE

VNX File OE, CIFS
VNX File OE, CIFS
VNX File OE, CIFS
VNX File OE, CIFS
VNX File OE, CIFS
VNX File OE, CIFS
VNX File OE, CIFS
VNX File OE, NFS
VNX File OE, NFS
VNX File OE, NFS

8.1.3.72
8.1.3.72



VNX File OE, NFS

When the user mapping service was not started or not configured on a Linux client, the client had access issues through NFSv4.
Inability to access the Data Mover via NFSv4 from a client using UDP for NFSv4.
A System Watchdog panic occurred with the error:
file: ../sched.cxx at line: 1909 : SYSTEM WATCHDOG
When there was invalid syntax in the host list (access, ro, rw, or root), the export was accepted without errors. When the export options were processed, the host list was analyzed and the error was only printed in the log. It was not clear to the user that the host list was wrong.
The Data Mover hung when a file system was mounted with the option ntcredential and replication was used.
A file system was taken offline because of a reAcquire failure:
cbfs:CBFSA: UFS: 3: Unmounting fs 5: Reason: VBM processToBeModified reAcquire failure
A LUN was taken offline, and during LUN recovery FSCK did not report the corresponding corruption in the CBFS SliceMap metadata.

59549160/621157

8.1.3.72

58917286/621161

8.1.3.72

61114502/622178

8.1.3.72

638559

8.1.3.72

54489848/641100

8.1.3.72

627477, 620510, 651332

8.1.3.72

619255, 596690, 608904,


610756, 618451, 618971,
637794, 639663, 644557,
646120
601275, 615600

8.1.3.72

568365

8.1.2.51

591553

8.1.2.51

591565

8.1.2.51

592737

8.1.2.51

593570

8.1.2.51

598131

8.1.2.51

VNX File OE, NFS
VNX File OE, NFS
VNX File OE, NFS
VNX File OE, NFS
VNX File OE, System Management
VNX File OE, System Management
VNX File OE, System Management
VNX File OE

VNX File OE

VNX File OE
VNX File OE

VNX File OE

VNX File OE


An SP panicked due to an IOD watchdog timeout.
After changing the admin password for a CIFS client connection, the original credential was retained in the Kerberos cache for the file system. This could cause password mismatches and make the CIFS client unable to access CIFS data.
Windows XP (and earlier) clients could not navigate directories created with newer clients (for example, Windows 8 or Windows 2012 server OS) that support the SMB3 protocol.
The Windows Event Viewer could not open a VNX security audit log file that was previously opened in UNIX (NFS).
Unable to access CIFS shares from Windows 7/Windows 2008, although Windows XP/2003 clients could access the shares without problem. Unable to unmount checkpoint file systems that were mounted in a VNX virtual data mover (VDM) environment. The unmount process was interrupted when a CIFS client accessed the checkpoint through the .ckpt directory.
In Windows, previous versions of deleted files could not be retrieved with the Restore Previous Version option if the widelinks feature was enabled for a CIFS server. Deleted files could only be retrieved through the .ckpt directory.
When the VNX default server names (for example, server_2) were renamed (for example, to server_hr), the migrate_system_conf command generated the following error:
Error! Invalid source <movername> is provided


8.1.3.72


VNX File OE

A VNX File OE reboot occurred after a failover with a message similar to the following:
GP exception. Virt ADDRESS: <addr>. Err code: 0
When the Windows audit auto-archive feature was enabled for a VNX virtual data mover (VDM) but disabled on a VNX data mover (DM), auto-archive did not work correctly after restarting the DM or unloading/loading the VDM.
A VNX Data Mover (DM) experienced a "couldn't get a free page" bugcheck.
The Data Mover experienced a "GP exception" bugcheck.
When permanently unmounting a split-log file system and all its checkpoints, a VNX Data Mover (DM) failover failed with the following error:
replace_ufslog: failed to complete command
When one or more Data Movers (DMs) were powered off or pulled out from the system, the nas_rdf -restore function finished without returning an error code.
Windows 8 or 8.1 and W2012/R2 servers were occasionally not allowed to access a CIFS share on a VNX Data Mover (DM) when browsing the server UNC path \\<server or IP>. A pop-up window appeared with a \\server\share is not accessible message.
The Data Mover experienced a rolling panic when the nas_fs -translate command was run on the Control Station to convert the access policy of a file system from UNIX or SECURE to MIXED.
When running the nas_fs -translate Control Station command to convert a file system access policy from UNIX or SECURE to MIXED, the VNX File OE went into a rolling panic. A workaround involved removing the following option from the boot.cfg file:
accesspolicy=MIXED
Avamar backups failed, and the server log filled with alert messages.
NFS mounts did not complete successfully. NFS mount threads were hung. If all of the NFS mount threads were hung, the Data Mover could panic.
When a VNX Data Mover (DM) was associated with a large number of file systems, and the file systems contained a large number of directories that remained open or were frequently opened, this could exhaust VNX system memory and cause a Data Mover (DM) bugcheck.
When a SavVol was unavailable at boot time, any relevant Production File Systems (PFS) were mounted, invalidating any associated checkpoints.
On a Data Mover (DM) with several concurrent backup sessions running for a long time, it was possible for the DM to reboot.
The VNX System Management Daemon crashed with an OutOfMemoryException message recorded in the /nas/log/nas_log.al.mgmtd.crash file.

603272

8.1.2.51

607285

8.1.2.51

602155

8.1.2.51

595437
52621636/ 597422

8.1.2.51
8.1.2.51

600826

8.1.2.51

602082

8.1.2.51

602139

8.1.2.51

602168

8.1.2.51

604117

8.1.2.51

605740

8.1.2.51

607082

8.1.2.51

605163

8.1.2.51

601412

8.1.2.51

VNX File OE

VNX File OE
VNX File OE
VNX File OE

VNX File OE

VNX File OE

VNX File OE

VNX File OE
VNX File OE

VNX File OE

VNX File OE

VNX File OE

VNX File OE, System Management



VNX File OE, System Management
VNX File OE

An FC port did not function after P2P to tape (DDR) was attempted.

587501, 591030

8.1.2.51

When using FileMover (DHSM), files with a modification time earlier than 1970/01/01 could not be recalled.

8.1.1.33

VNX File OE, CIFS

The Windows Quotas view of a CIFS share was not displayed from one VNX when the drive was mapped. The user quota information was missing (using the Windows MMC Quota management) when several CIFS users were mapped to the same UNIX ID.
A RecoverPoint Failover Consistency Group failed in both directions.

53114380 /563271
52223586 /554095

VNX File OE, RecoverPoint FS
VNX File OE, Security
VNX File OE, System Management
VNX File OE, System Management
VNX File OE, System Management
Unisphere
Unisphere
Unisphere
Unisphere Analyzer
Unisphere Analyzer
Unisphere Analyzer, Unisphere Central
Unisphere, CLI
Unisphere, CLI


DART panicked while downloading a large file (more than a few GB) using SCP (secure copy over SSH).
SECURITY: Oracle JRE/JDK Multiple Vulnerabilities (CPU-FEB-2013) - UNIX/Linux JRE.

8.1.1.33

563551, 571761

8.1.1.33

55032486
/578783
541017, 543319

8.1.1.33
8.1.1.33

The Data Mover completely hung and was unresponsive.
FileMover (DHSM) users using the partial recall policy could experience a hang if they were reading a file that was larger than 4GB.
Temporary checkpoints flooded the Control Station. Users were unable to create their own checkpoints or file systems.

56871206
/580075

8.1.1.33

570984

8.1.1.33

Some new features/changes did not take effect in the Unisphere GUI after the array was updated.
An error message was returned when generating a certificate signing request via the Setup page.
If snapshots were taken on dedup-enabled LUNs in a pool, the GUI incorrectly showed values in the Snapshot Allocations and Snapshot Subscriptions fields under LUN properties.
Analyzer CLI commands would not dump the pool's performance statistics data if the pool only had RLP (Reserved LUN Pool) LUNs.
Analyzer did not show LUN utilization when it reached 100%.
When Unisphere Central was in the systems domain, the GUI's Statistics/Retrieve Archive page did not display the NAR files for retrieval. Users were required to use the CLI as a workaround to list and retrieve NAR files.
The naviseccli storagegroup XML output did not report the host name within the XML elements.
Failed to create or expand a pool if the Management Server was restarted.
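For reference, the storage group output in the naviseccli item above comes from a command like the following minimal sketch (the SP address is a placeholder):

    # List storage groups, including connected-host information
    naviseccli -h 10.0.0.1 storagegroup -list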

746018

05.33.009.5.155

64280272/ 684304

05.33.008.5.119

706194

05.33.008.5.119

66484864/ 682179

05.33.008.5.119

69878898/ 720666

05.33.008.5.119

64983040/ 686352/ 673140

05.33.008.5.119

65503296/ 678064

05.33.008.5.119

69873438/ 714173

05.33.008.5.119



Unisphere, Host Software, CLI, Utilities
Unisphere, Host Software, CLI, Utilities
Unisphere QoS Manager
Unisphere

Unisphere Host Agent logged the message: EV_RAIDGroupState::RAIDGroupOpPriorityConvert Enum out of range, -1.

65814450/ 678938

05.33.008.5.119

The Host Agent service could not be started successfully on Red Hat Enterprise Linux 7.

67949012/ 696546

05.33.008.5.119

An unknown exception error message was encountered when stopping NQM.
The User Names column of the User Quotas list in Unisphere populated with names very slowly.
The Tree Quotas drop-down list on the User Quotas GUI page was missing existing tree quotas.
If Unisphere with the Analyzer enabled and USM were running on the same host, Analyzer Real-Time statistics sometimes failed to update.
An ESX server lost its path to the storage system.

66928156/ 713632

05.33.008.5.119

58851466/ 696532

8.1.8.119

71623624/ 739543

8.1.8.119

560704, 649788

05.33.006.5.096

63964894/651722

05.33.006.5.096

Using Unisphere to create Mirrorview secondary LUN images


failed for LUNs with mixed RAID types if no disks were
available.
An incorrect error message - I/O module - was reported when
a mezzanine card was in a faulted state.

59188084/ 654433

1.3.6.1.0096-1

61304630/ 654455

1.3.6.1.0096-1

Unisphere CLI

Error 0x60000500 was reported when restoring an offline


Storage Pool that used RAID 3 or RAID 6.

62707090/ 654457

1.3.6.1.0096-1

Unisphere CLI

SNMP MIB read requests reported incorrect information about


VNX2 arrays.

63916164/ 655157

1.3.6.1.0096-1

Unisphere CLI

Sending test SNMP traps produced core dumps.

63316528/ 654719

1.3.6.1.0096-1

Unisphere
host software

LDAPS did not work if the LDAPS server's certificate


contained a certificate chain.

65187126/ 666544

1.3.6.1.0096-1

Unisphere,
host software

Unisphere, the Unisphere Service Manager (USM), or the VNX


Installation Assistant (VIA) did not work correctly because the
Java Runtime Environment (JRE) on the host was not
supported.
In the Unisphere "Storage Group Advanced Properties" panel,
in cases where multiple iSCSI initiators were in different
storage groups but were connected to the same host, when
users sorted the host's initiators before attempting to remove
a particular initiator from a specific storage group, all of the
initiators associated with the storage group were removed.
Could not create a FS on Windows 2012 with Chrome.
CLI command response time was slow in some conditions.
When running the getlun Block CLI command to retrieve disk
statistics, the returned Prct Idle and Prct Busy values were
inaccurate.
In some circumstances, the Unisphere Search LUN page
returned no results for searches, even when the Search
criteria matched existing LUNs.

697449/ 697215/699525

1.3.6.1.0096-1

642279 / 63020558

1.3.6.1.0096

642579
60933588/ 619647
58886038/ 624200

1.3.6.1.0096
1.3.6.1.0096
1.3.6.1.0096

61727774/ 628463

1.3.6.1.0096

Unisphere
Unisphere
Unisphere,
NQM, VNX
Block OE
Unisphere
Unisphere CLI

Unisphere

Unisphere
Unisphere
Unisphere
Unisphere

EV_RAIDGroupState::RAIDGroupOpPriorityConvert Enum out of range, -1.


Unisphere | The Tree Quota usage did not display properly in Unisphere when the soft and hard limits were set to zero. | 61551390/631797 | 1.3.6.1.0096
Unisphere | Unisphere did not allow users to create hidden CIFS shares. | 61918032/631865 | 1.3.6.1.0096
Unisphere | During the user login/authentication process, user accounts were locked out after three incorrect attempts. | 58357456/652208 | 1.3.6.1.0096
Unisphere | When a CIFS share was created with the VNX File CLI and then modified with Unisphere, some configuration options were not available. | 498866/654499 | 1.3.6.1.0096
Unisphere | In Unisphere, the "Copy to Hotspare" option was grayed out for disks associated with mixed RAID type storage groups. | 62067244/59395272/606329/654723 | 1.3.6.1.0096
Unisphere | The Unisphere Analyzer application stopped without apparent reason. | 64649720/660841 | 1.3.6.1.0096
Unisphere, USM | Unisphere and USM did not operate correctly in Chrome. | 635310 | 1.3.6.1.0096
Unisphere | Unable to create a network device using the Unisphere wizards or the Unisphere management screens. | 612485, 614365 | 1.3.3.1.0072-1
Unisphere | Unisphere and the CLI did not accurately display the 60-drive Disk Array Enclosure (DAE) information after replacing a 25-drive DAE with a 60-drive DAE. | 563290, 564080 | 1.3.3.1.0072-1
Unisphere | Creating FAST Cache could fail if it was created at the same time as the storage pool. | 561400, 597279 | 1.3.3.1.0072-1
Unisphere | Unisphere would sometimes hang after connecting a newly-created LUN to a storage group when the creation of more than 100 LUNs was in process. | 563392, 564522 | 1.3.3.1.0072-1
Unisphere | Unisphere hung after attaching LUNs to a storage group on a VNX5800 with a large configuration. | 574400, 574986 | 1.3.3.1.0072-1
Unisphere | User was unable to log in with LDAP accounts when using certification chains. | 65187126/665689 | 1.3.3.1.0072-1
Unisphere | From within the GUI, a user can create a VNX Snapshot using a single quote mark in the name (for example, te'st), but is then unable to delete the snapshot using that name. The user receives an error message: te'st - Cannot destroy snapshot. The specified snapshot does not exist. | 637800, 618792 | 1.3.3.1.0072-1
Unisphere | The Unisphere GUI Get Diagnostic Files option truncated the retrieved size of files larger than 4 GB. | 555002, 537437, 628171 | 1.3.3.1.0072-1
Unisphere | Unauthenticated HTTP requests were sent to Tomcat. Error pages displayed HTTP 404 for non-existent addresses. | 578520 | 1.3.3.1.0072-1
Unisphere | Failed to change the SP name. | 8215634/59420, 594205 | 1.3.3.1.0072-1
Unisphere | FAST Cache creation failed at 14%. | 58483984/597279 | 1.3.3.1.0072-1
Unisphere | Issues with ECOM could result in the following symptoms: (1) CQ 601785: a hardware exception dump may be generated when two GUI sessions are open on the same laptop; (2) CQ 602079: ECOM returns the inaccurate message "No provider found to handle this request"; (3) CQ 606857: a memory leak occurred due to BSAFE/SSL. | 597295, 62770, 637243 | 1.3.3.1.0072-1
Unisphere | Unable to log in to a Block array using an LDAP account. | 53681376/600034 | 1.3.3.1.0072-1


Unisphere | Peer Boot State is showing UNKNOWN in Unisphere. | 59060556/602760 | 1.3.3.1.0072-1
Unisphere | LDAP timeout. | 58765738/604355 | 1.3.3.1.0072-1
Unisphere | Unable to view the Tiering info (LUN Properties/Tiering > Tier Details). | 58106360/604472 | 1.3.3.1.0072-1
Unisphere | The Data Mover experienced an updateDouble failure in a relocateIndexSlot panic. | 61897752/632237 | 1.3.3.1.0072-1
Unisphere | LDAP is not working on multiple Unisphere storage domains. | 6128260/622499 | 1.3.3.1.0072-1
Unisphere | Not able to list the Symmetrix LUN View for the file systems in Unisphere. | 615316 | 1.3.3.1.0072-1
Unisphere | The Edit and Use Configuration buttons are disabled in the Available Configurations page of Unisphere. | 615749 | 1.3.3.1.0072-1
Unisphere | When creating a RAID Group with automatic disk selection mode, the creation process failed with an error message indicating that the RAID Group could not be created with the selected drive types. | 616360 | 1.3.3.1.0072-1
Unisphere | Using Java 1.7 versions on the PC/host to access Unisphere prevents a new language setting from taking effect. | 61766992/627950 | 1.3.3.1.0072-1
Unisphere, ESRS IP Client | Unisphere Host Agent, ConnectEMC, and VNX for Block CLI are always installed on the system drive, even if the installation path is changed to another drive. | 623176 | 1.3.3.1.0072-1
Unisphere Host Software, CLI, and Utilities | CQECC00606857-CIMOM_PerfCounter_Exceeded.dmp.zip generated on DAE testing secondary array. | 576807, 59729 | 1.3.3.1.0072-1
Unisphere Host Software, CLI, and Utilities | If the host timezone was 'UTC + x', the command 'naviseccli analyzer -status' failed with an error. | 616366, 631684 | 1.3.3.1.0072-1
Unisphere Host Software, CLI, and Utilities | Unable to configure LDAPS on an Off Array Management server. | 58009364/593000, 591485 | 1.3.3.1.0072-1
Unisphere Host Software, CLI, and Utilities | Fixed the output of the 'Get Drives for a Tier' audit log message. | 58005674/613959 | 1.3.3.1.0072-1


Unisphere Host Software, CLI, and Utilities | The Storage Processor (SP) was unmanaged when the Management Server restarted due to improper memory manipulation. | SR numbers: 51019358, 53107146, 56043768, 56047852, 56774788, 58603098, 59247626, 59892166, 60874054, 60951330, 61083608; AR numbers: 528255, 543184, 577835, 78959, 586192, 93053, 593077, 96368, 603192, 05252, 610054, 616203, 618190, 619026, 622270, 622015 | 1.3.3.1.0072-1
Unisphere Host Software, CLI, and Utilities | Using VASA with Unisphere sometimes resulted in a memory leak due to the misuse of an internal string, where the management process eventually produced a dump file and restarted. | 59002892/616454 | 1.3.3.1.0072-1
Unisphere Host Software, CLI, and Utilities | Improper internal library management resulted in a restart of the management process, during which the SP was temporarily unmanaged. | 58312648/622224 | 1.3.3.1.0072-1
Unisphere Host Software, CLI, and Utilities | Navisphere CLI (naviseccli) segfaults and dumps. | 60874280/623692 | 1.3.3.1.0072-1
Unisphere Host Software, CLI, and Utilities | Pool creation failed on Unisphere with the Japanese language pack. | 59815336/616387 | 1.3.3.1.0072-1
Unisphere Host Software, CLI, and Utilities | Storage pool creation failed when a removed drive was selected. | 618360 | 1.3.3.1.0072-1
Unisphere Host Software, CLI, and Utilities | The Fail Safe Network feature for bxgv2 devices could not be tested by bringing down the ports using software commands. | 647510 | 1.3.3.1.0072-1
Unisphere Host Software, CLI, and Utilities | A blue screen error occurs in a Windows Server 2012 64-bit environment with MPIO enabled when running Unisphere Host Agent. | 632609 | 1.3.3.1.0072-1
Unisphere Host Software, CLI, and Utilities | When the power source was restored from the failure state, LUNs may have stayed offline after the backend returned to the normal state. | 539374, 646381 | 1.3.3.1.0072-1


Unisphere Host Software, CLI, and Utilities | The Virtual Provisioning feature is in a degraded state. | 616361 | 1.3.3.1.0072-1
Unisphere Host Software, CLI, and Utilities | Unexpected SP reboot. | 619259 | 1.3.3.1.0072-1
Unisphere, Virtual Provisioning | When the power source was restored from the failure state, LUNs may have stayed offline after the backend returned to the normal state. | 539374, 646381 | 1.3.3.1.0072-1
Unisphere, Virtual Provisioning | Unexpected SP reboot. | 619259 | 1.3.3.1.0072-1
Unisphere, Virtual Provisioning, Unisphere Host Software, CLI, and Utilities | Pool statistics are not updated after destroying LUNs. | 612574 | 1.3.3.1.0072-1
Unisphere | After completing installation with the VNX Installation Assistant (VIA), the system sometimes remained in an unfused state. | 566078 | 1.3.2.1.0051
Unisphere, ESRS IP Client | Unable to install ESRS IP Client version 1.3 when selecting the proxy server connection option. | 600457 | 1.3.2.1.0051
Unisphere, FAST Cache, FAST VP | In FAST Cache configurations, when a drive is removed from the enclosure and then reinserted after a hot spare has been activated, both the new hot spare and the removed disk appear when you run the cache -fast -info CLI command or when you view system information through Unisphere. | 592310 | 1.3.2.1.0051
Unisphere | The Unisphere SP Properties dialog box shows the speed of the management port as 10 Mbps, even if it is actually set to Auto. Any modification made in the SP Properties dialog changes the requested speed from Auto to 10 Mbps. | 593974 | 1.3.2.1.0051
Unisphere | When using Unisphere version 1.3.0 to log in to a domain with a legacy Celerra system, the client reports that the Celerra system is Not Logged In. | 598046 | 1.3.2.1.0051
Unisphere | Unisphere off-array applications will not launch in Java 7 Update 45. This is caused by a known issue in Java 7 Update 45. | 595857 | 1.3.2.1.0051
Unisphere | When running VNX Unisphere, a Java warning appears saying that future Java versions will not work with the application. | 596623 | 1.3.2.1.0051
Unisphere Analyzer | When opening a Unisphere Analyzer (NAR) file obtained from a VNX system with no defined pools or LUNs, the following error message is generated: "An unknown error has occurred." | 593280 | 1.3.2.1.0051
Unisphere Analyzer | When VNX LUNs are connected to multiple hosts, searching Unisphere Analyzer (NAR) files based on a Host criterion does not show all connections. | 579127 | 1.3.2.1.0051


Unisphere, Serviceability | The USM registration wizard sometimes fails with the error: "The version of management software installed on the storage system does not support the storage system registration." | 595816 | 1.3.2.1.0051
Unisphere, Serviceability | The USM application stops at the VNX OE for File Installation Status stage of the Install Software wizard. It displays the following message: "Retrieving data. Please wait ..." | 594553 | 1.3.2.1.0051
Unisphere | The user received a security message when closing Unisphere if using JRE 7u21 or later. | 568652, 570148 | 1.3.1.1.0033
Unisphere | When running a compression command, the following error code was sometimes returned: "Output: Unrecognized Error Code: (0xe12d8110)" | 592568, 589781 | 1.3.1.1.0033
Unisphere | IPv6 addresses appeared in the domain list when IPv6 was configured on the CS and received any operation that involved setup_slot. | 575343 | 1.3.1.1.0033
Unisphere, Serviceability, ESRS IP Client | When uninstalling the ESRS IP Client, the ESRS IP Client would sometimes report that ConnectEMC, one of the ESRS IP Client components, could not be uninstalled. | 577404 | 1.3.1.1.0033
Unisphere, PowerPath | Two hosts with the same name were displayed in the host list in Unisphere. | 548376, 566546 | PowerPath 5.7 SP2
USM | The pop-up notifications for array software upgrades and disk drive upgrades can display at the same time. The user may need more time to read both notifications. | 749831 | 1.3.9.1.0152-1
USM | An unexpected error occurred during a USM Online Disk Firmware Upgrade (ODFU). | 644788 | 1.3.6.1.0096
USM | Unisphere Service Manager (USM) showed incorrect information about available pool space while performing a diagnostic check. | 63613130/651218 | 1.3.6.1.0096
USM | USM was unable to view certain information in the report page. | 61047728/621079 | 1.3.3.1.0072-1
USM | A Data Mover error occurred during installation. | 51004590/613374 | 1.3.3.1.0072-1
USM | Customers are unable to complete an NDU via USM because of a timeout. | 643460 | 1.3.3.1.0072-1
USM | Information in the Unisphere Service Manager (USM) Available Storage tab does not show all bus/loop information associated with the VNX configuration. | 603183 | 1.3.2.1.0051
USM | When running the Disk Replacement wizard in USM, the state of a replaced disk remains unchanged. | 576837, 577469 | 1.3.1.1.0033
USM, Serviceability | An unexpected error was received while running the Install DAE wizard. The error was seen during the Connect to SPB/SPA Bus/Loop step. USM stopped the wizard with a message saying a fault had been detected on the storage system. | 574069 | 1.3.1.1.0033


USM, Serviceability | The Install Software wizard kept reporting a Control Station reboot timeout, and waiting the suggested 3 minutes did not solve the problem. | 575966 | 1.3.1.1.0033
USM, Serviceability | The VNX for Block OE packages were not automatically committed after running the USM software wizards to perform an upgrade. | 577921 | 1.3.1.1.0033

Known problems and limitations

Each entry below lists Category, Details (platform, severity, frequency, and tracking number), Description, Symptom, and Workaround/Version.

Category: VNX OE
Details: Platform: Unified; Severity: Medium; Tracking: 748971
Description: The Unisphere GUI may display stale information after an array software upgrade.
Symptom: Some new features/changes may not be correctly displayed by the Unisphere GUI after the array upgrade.
Workaround: Clear the Java cache after the array upgrade.
Exists in versions: 8.1.9.184, 8.1.9.155, 8.1.8.119

Category: Install, Upgrade, VNX Block OE, VNX File OE
Details: Platforms: All; Severity: Medium; Tracking: 652467
Description: A Block-to-Unified upgrade failed due to misconnected FC cables from one Data Mover to SPA and SPB.
Symptom: The Block-to-Unified upgrade failed.
Workaround: Follow the instructions in VIA to reconnect the FC cable and then continue with the upgrade. If the BTU upgrade failed later due to the ndf partition being offline, retry the upgrade.
Exists in versions: 05.33.009.5.184, 05.33.009.5.155, 05.33.008.5.119, 8.1.9.184, 8.1.9.155, 8.1.8.121, 8.1.8.119
Category: VNX Block OE
Details: Platform: VNX for Block; Severity: High; Tracking: 796010
Description: A LUN compression state is listed as faulted. Compression is not automatically restarted following resolution of a failure of the LUN being compressed.
Symptom: The user received the following message in the GUI: "Compression encountered an I/O failure. Please resolve any issues with the LUN for compression to continue. (0x71658221)"
Workaround: Trespassing the LUN will restart the compression.
Exists in versions: 05.33.009.5.184, 05.33.009.5.155


Category: VNX Block OE
Details: Platform: VNX; Severity: High; Tracking: 789919
Description: A LUN may show Faulted status while deduplication is being enabled.
Symptom: Enabling deduplication failed.
Workaround: If the deduplication migration progress increases, wait for the migration to complete. If the problem persists, contact customer support.
Exists in versions: 05.33.009.5.184, 05.33.009.5.155

Category: VNX Block OE
Details: Platform: VNX for Block; Severity: High; Tracking: 775029
Description: The reserved space of a DLU is more than the pool can provide.
Symptom: The user sees an SP panic.
Workaround: 1. Find the storage pool whose consumed space is larger than its usable space. 2. Expand the storage pool or destroy some DLUs in the pool.
Exists in versions: 05.33.009.5.184, 05.33.009.5.155

Category: VNX Block OE
Details: Platform: VNX for Block; Severity: Medium; Tracking: 795742
Description: A RAID group with types R1, R3, R5, and R6 had drives whose rebuilding percentage displayed as zero. This happened because the downstream TIMEOUT ERRORS attribute was not cleared correctly.
Symptom: Drives showed the rebuilding percentage as zero.
Workaround: Reboot the storage processor on which the RAID group has the TIMEOUT ERROR attributes.
Exists in versions: 05.33.009.5.184, 05.33.009.5.155

Category: VNX Block OE
Details: Platform: VNX for Block; Severity: Medium; Tracking: 757945
Description: A deduplicated LUN migration did not progress or was faulted. The user had trouble deleting the LUN.
Symptom: Deduplication failed to enable correctly on a LUN.
Workaround: Delete the private migration LUN.
Exists in versions: 05.33.009.5.184, 05.33.009.5.155

Category: VNX Block OE
Details: Platforms: VNX; Severity: Medium; Tracking: 701406
Description: After removing a VNX enclosure, Unisphere may still show the enclosure as present.
Workaround: Try to restart the management server via the Unisphere Setup page. If the removed enclosure still appears, reboot the Storage Processor (SP).
Exists in versions: 05.33.009.5.184, 05.33.009.5.155, 05.33.008.5.119, 05.33.006.5.102, 05.33.006.5.096


Category: VNX Block OE
Details: Platforms: All; Severity: Medium; Frequency: Rarely, under a rare set of circumstances; Tracking: 686838
Description: A single VNX Storage Processor (SP) bugcheck occurs when a migration process attempts to start a new session on a storage pool LUN.
Symptom: A VNX Storage Processor (SP) bugcheck occurs during a migration process.
Exists in versions: 05.33.009.5.184, 05.33.009.5.155, 05.33.008.5.119, 05.33.006.5.102, 05.33.006.5.096

Category: VNX Block OE
Details: Platforms: All; Severity: Medium; Tracking: 649384
Description: The encryption percentage does not change.
Workaround: Look for the following activities in the system: a faulted disk, disk zeroing in progress, disk rebuild in progress, or disk verify in progress.
Exists in versions: 05.33.009.5.184, 05.33.009.5.155, 05.33.008.5.119, 05.33.006.5.102, 05.33.006.5.096, 05.33.000.5.081, 05.33.000.5.079, 05.33.000.5.074, 05.33.000.5.072

Category: VNX Block OE
Details: Platforms: All; Severity: Medium; Frequency: Occasionally; Tracking: 611558
Description: When migrating LUNs within the same deduplication domain, deduplication savings are lower than expected when the migration is complete. For more information, refer to KnowledgeBase article 176653.
Workaround: Avoid migrating LUNs within a deduplication domain. If you need to move the current LUN to a larger one, use the LUN expansion functionality instead.
Exists in versions: 05.33.009.5.184, 05.33.009.5.155, 05.33.008.5.119, 05.33.006.5.102, 05.33.006.5.096, 05.33.000.5.081, 05.33.000.5.079, 05.33.000.5.074, 05.33.000.5.072, 05.33.000.5.051


Category: VNX Block OE
Details: Platforms: All; Severity: Medium; Frequency: Rarely; Tracking: 616702
Description: When downloading firmware to drives using the Online Disk Firmware Upgrade (ODFU) process, if a drive in a RAID group is faulted during the download process, it can take an extended time (up to several days) for the download process to fully complete. ODFU shows that it is in the ACTIVATE state for the remaining drives in the degraded RAID group.
Symptom: In this case, the ODFU process is not hung but is paused, waiting for the RAID group rebuild to complete.
Workaround: Wait for the RAID group rebuild to complete; the ODFU process will automatically resume.
Exists in versions: 05.33.009.5.184, 05.33.009.5.155, 05.33.008.5.119, 05.33.006.5.102, 05.33.006.5.096, 05.33.000.5.081, 05.33.000.5.079, 05.33.000.5.074, 05.33.000.5.072, 05.33.000.5.051

Category: VNX Block OE
Details: Platforms: All; Severity: Minor; Frequency: Always; Tracking: 617430
Description: The VNX Security Configuration Guide lists steps required to change the Unisphere session timeout period. These steps only apply to VNX Unified and VNX File-only systems. VNX Block-only systems do not provide a way to adjust the Unisphere session timeout period.
Symptom: The timeout value for Unisphere on VNX Block-only systems cannot be changed with VNX Unisphere or the VNX for Block CLI.
Exists in versions: 05.33.009.5.184, 05.33.009.5.155, 05.33.008.5.119, 05.33.006.5.102, 05.33.006.5.096, 05.33.000.5.081, 05.33.000.5.079, 05.33.000.5.074, 05.33.000.5.072, 05.33.000.5.051


Category: VNX Block OE
Details: Platforms: All; Severity: Low; Frequency: Seldom; Tracking: 613348
Description: After a non-disruptive upgrade (NDU), the management port for one of the VNX system's Storage Processors (SPs) may transition to disconnected status.
Symptom: In rare cases, a Broadcom driver can cause issues when bringing up VNX system SP management ports after an NDU.
Workaround: Log in through the VNX system service port, then disable and re-enable the disconnected VNX SP management port. If disabling and re-enabling the SP port does not work, try rebooting the SP.
Exists in versions: 05.33.009.5.184, 05.33.009.5.155, 05.33.008.5.119, 05.33.006.5.102, 05.33.006.5.096, 05.33.000.5.081, 05.33.000.5.079, 05.33.000.5.074, 05.33.000.5.072, 05.33.000.5.051

Category: VNX Block OE
Details: Platform: VNX for Block; Severity: Low; Tracking: 794267
Description: When an automatic failover is initiated in VMSM, the failover text output is not displayed in real time.
Symptom: The failover command reports an elapsed time longer than the timestamp in the Main Log window.
Workaround: Check the output of the commands via ssh only after the commands have finished.
Exists in versions: VDM MetroSync Manager 2.0.6

Category: VNX Block OE
Details: Platforms: All; Severity: Medium; Frequency: Rarely, under specific circumstances; Tracking: 569147
Description: The Service Processor Enclosure (SPE) fault LED does not turn on after SPE battery removal.
Symptom: This issue occurs when both the xPE SPA battery and the DAE 0_0 SPB battery are removed. In this case, the fault LED on DAE 0_0 will assert, but the fault LED on the xPE will not. In this specific case, the fault LED should assert on the xPE because a second SPS on both the xPE and DAE allows cache to remain enabled.
Workaround: No workaround.
Exists in versions: All 5.33 versions.

Category: VNX Block OE
Details: Platforms: All; Severity: Medium; Frequency: Rarely, under specific circumstances; Tracking: 568692
Description: Online Drive Firmware Upgrade (ODFU) will wait for a degraded RAID group to become healthy before it completes the firmware upgrade of all drives in that degraded RAID group.
Symptom: This problem occurs when using USM to install firmware on drives that are in a degraded RAID group. Some drives in the degraded RAID group may be shown as Non-qualified, while other drives won't be listed by USM and won't be upgraded. The ODFU screen may show 100% in the progress bar, although the installation status will still show In progress.
Workaround: Repair the degraded RAID group so that it is healthy again and the firmware upgrade can continue and complete, or cancel the outstanding ODFU from USM.
Exists in versions: All 5.33 versions.


Category: VNX Block OE
Details: Platforms: All; Severity: Medium; Frequency: Rarely, under specific circumstances; Tracking: 566151
Description: An unexpected error is displayed during the online disk firmware upgrade (ODFU).
Symptom: When using the ODFU Wizard, if you cancel the operation after it has already started, the non-disruptive upgrade (NDU) installation that is part of the underlying implementation will continue. The Wizard confirms the cancellation even though the NDU operation is ongoing and cannot be interrupted. If you retry the ODFU Wizard and it attempts to perform the underlying NDU operation again, it may result in an error saying that a package is already installed. The error is an accurate reflection of what is happening on the array.
Workaround: Wait until the NDU operation that is part of the ODFU Wizard's activities has actually completed before retrying the Wizard. This can be determined by running the naviseccli ndu -status command (see the sketch below) and confirming the status is "Status: Operation completed successfully." If so, retry the ODFU Wizard.
Exists in versions: All 5.33 versions.
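For reference, one way to confirm from a management host that the underlying NDU operation has finished is to query its status with the Block CLI. This is a minimal sketch; the SP address is a placeholder, and any credential or scope switches your security configuration requires must be added:

    naviseccli -h <SP_IP_address> ndu -status

Retry the ODFU Wizard only after the output reports "Operation completed successfully".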

Category: VNX Block OE
Details: Platforms: All; Severity: Medium; Frequency: Always, under specific circumstances; Tracking: 558151
Description: When a module is removed from an enclosure, approximately 3 seconds elapse before the removal detection occurs.
Symptom: It takes about 3 seconds for the drive or the enclosure to be reported as removed after a drive pull or an enclosure pull. In other words, there is a 3-second delay between the time the drive is rendered out of service and the time the drive is detected as removed.
Workaround: No workaround.
Exists in versions: All 5.33 versions.

Category: VNX Block OE
Details: Platforms: VNX5200, VNX5400, VNX5600, VNX5800, VNX7600; Severity: Medium; Frequency: Always, under specific circumstances; Tracking: 554516
Description: On the battery backup unit (BBU), the marker LED does not get set for certain fault conditions.
Symptom: On the backup battery unit, in certain battery failure cases, the fault LED (located on the battery backup unit) will not assert. In most cases, the firmware will assert the LED. But if the BBU fails due to specific self-test faults, or the battery backup unit fails to charge fully after 16 hours, the fault LED will not be set.
Workaround: No workaround.
Exists in versions: All 5.33 versions.


Category: VNX Block OE
Details: Platforms: VNX5200, VNX5400, VNX5600, VNX5800, VNX7600, VNX8000; Severity: Low; Frequency: Infrequently, under specific circumstances; Tracking: 564282
Description: The "Dirty Cache Pages (MB):" value displayed by the Unisphere CLI after issuing the command cache -sp -info may be inexact.
Symptom: The Dirty Cache Pages (MB) message displays an estimate that may be higher than the actual value for short time periods, especially while the SP Cache is in the Disabling state.
Workaround: No workaround.
Exists in versions: All 5.33 versions.

Category: VNX Block OE
Details: Platforms: All; Severity: Low; Tracking: 561589
Description: A Windows 2012 server may unexpectedly reboot.
Symptom: A Windows 2012 server reboots unexpectedly when the specified threshold of 70% is crossed on the storage pool. This event only happens with native Windows MPIO and when no multi-pathing is enabled on the server.
Workaround: No workaround.
Exists in versions: All 5.33 versions.

Category: VNX Block OE
Details: Platforms: VNX5200, VNX5400, VNX5600, VNX5800, VNX7600, VNX8000; Severity: Low; Frequency: Frequently, under specific circumstances; Tracking: 555603
Description: A Background Verify operation may not complete while a RAID type 10 LUN is initializing after a bind operation.
Symptom: A Background Verify operation may not complete as expected; it is not normal to issue a Background Verify operation immediately after a bind operation.
Workaround: Reissue the Background Verify operation after the initialization is complete.
Exists in versions: All 5.33 versions.

Category: VNX Block OE, Compression
Details: Platforms: All; Severity: Low; Frequency: Always, under specific circumstances; Tracking: 549338
Description: A decompressing LUN cannot be destroyed.
Symptom: Compression uses LUN Migrator technology for decompression, and destroying a LUN under this condition is not allowed.
Workaround: Enable compression on the LUN before destroying it, or wait for decompression to complete.
Exists in versions: All 5.33 versions.

Category: VNX Block OE, Compression
Details: Platforms: All; Severity: Low; Frequency: Infrequently, under specific circumstances; Tracking: 543919
Description: A LUN shrink operation on a LUN with compression enabled takes a long time.
Symptom: Compression has internal logic that iterates many times for the shrink operation to properly clear the data.
Workaround: No workaround.
Exists in versions: All 5.33 versions.

Category: VNX Block OE, Virtual Provisioning
Details: Platforms: All; Severity: Medium; Frequency: Always, during a specific event; Tracking: 556191
Description: In the case of single-fault disk failures, the pool LUN state is reported by the non-owning SP as ready when the LUNs are actually faulted.
Symptom: On an idle array (where no I/Os are running), in the case of single-fault disk failures (RAID protection is still intact), the pool LUNs on the non-owning SP show as ready when they should show as faulted, like the LUNs on the owning SP.
Workaround: If I/Os are running on the array, or even a single write is done to the LUN right after the disk fault occurred, the situation will correct itself after two minutes and both SPs will report the pool LUNs as faulted.
Exists in versions: All 5.33 versions.


Category: VNX Block OE, Virtual Provisioning
Details: Platforms: All; Severity: Medium; Frequency: Always, during a specific event; Tracking: 539153
Description: The Windows Server 2012 storage pool low space warning shows incorrect used and available capacity for the pool.
Symptom: When the storage pool low space warning threshold is crossed, the event logged on a Windows 2012 server does not show the correct used and available capacity of the pool.
Workaround: This is a reporting issue with the Windows Server event log. The correct used and available capacity are displayed in the Unisphere UI.
Exists in versions: All 5.33 versions.

Category: VNX Block OE, VNX Snapshots
Details: Platforms: All; Severity: Medium; Frequency: Rarely, under a rare set of circumstances; Tracking: 481887
Description: A LUN destroy operation fails with a "snapshots exist" error if a LUN is destroyed immediately after destroying the snapshot(s) associated with the LUN.
Symptom: Snapshot destroy is an asynchronous operation. If a LUN is destroyed immediately after destroying its snapshots, there is a possibility of a snapshot being in the destroying state, which prevents the LUN from being destroyed.
Workaround: Wait for the snapshots to be destroyed, or include the destroySnapshots switch while destroying the LUN (see the sketch below).
Exists in versions: All 5.33 versions.
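As an illustration, the destroySnapshots switch is passed on the LUN destroy command line. This is a minimal sketch; the SP address and LUN number are placeholders, and the exact switch spelling should be confirmed against the CLI reference for your release:

    naviseccli -h <SP_IP_address> lun -destroy -l <lun_number> -destroySnapshots -o

The -o switch suppresses the confirmation prompt; omit it to be prompted before the destroy proceeds.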

Category: VNX File OE
Details: Platform: VNX for File; Severity: High; Tracking: 772096
Description: A preserved RepV2 restore failed with the error "13422034976: Internal communication error".
Symptom: The following error is displayed: "13422034976: Internal communication error..."
Workaround: After the apl_task_mgr has automatically restarted, retry the operation.
Exists in versions: 8.1.9.184, 8.1.9.155, 8.1.8.132

Category: VNX File OE
Details: Platforms: All; Severity: High; Tracking: 661800
Description: After disabling deduplication on a LUN, it can take longer than expected to enable deduplication on the same LUN again.
Symptom: When attempting to enable or disable deduplication on a LUN while it is disengaging from its deduplication destination, the system returns the following error: "Could not set properties:(Deduplication: SP A: Cannot enable/disable Deduplication on the LUN %x which is not ready. If problems persist please gather SP Collects and contact your service provider. (0x716a841b))."
Workaround: Wait for the enable or disable deduplication operation to complete, and then try again.
Exists in versions: 8.1.9.184, 8.1.9.155, 8.1.8.121, 8.1.8.119, 8.1.6.101, 8.1.6.96


Category: VNX File OE
Details: Platforms: All; Severity: High; Tracking: 763988
Description: During a failover, if the failed source system is not stable during the operation, IP address conflicts can occur.
Symptom: IP address conflicts occur after failover.
Workaround: Disconnect the source interfaces from the network.

Category: VNX File OE
Details: Platforms: All; Severity: High; Tracking: 758202
Description: When several simultaneous active RepV2 sessions are running, an apl_task_mgr bugcheck can occur.
Symptom: An error such as the following is generated: "repv2_tsys_fs_16_1,Failed, Error 13432061960: Repv2 session repv2_tsys_fs_16_1 on remote system ID=0 cannot be created with the response: Operation task id=196490 on Secondary-VNX8000 at first internal checkpoint mount state"
Workaround: Retry the failed command after the apl_task_mgr process automatically starts.

Category: VNX File OE
Details: Platform: VNX for File; Severity: Medium; Frequency: Rarely; Tracking: 838196
Description: The home directory for the latest VNX administration account is not always created on the standby Control Station. When the standby Control Station becomes the primary Control Station, this VNX administration account does not have its home directory, preventing some administrative functions.
Symptom: Some USM operations (for example: System Verification Wizard, Registration wizard, View System Config Report) fail, with the USM log file showing the error: "Could not chdir to home directory."
Workaround: Create the home directory (/home/sysadmin<n>) for the administration account that is missing its home directory with the command:
    # mkdir /home/sysadmin<n> && chmod 700 /home/sysadmin<n>; chown sysadmin<n>:nasadmin /home/sysadmin<n>
This command must be run on the Control Station that is missing the home directory, and that Control Station must be configured as the primary Control Station.
Exists in versions: 8.1.9.184

Category: VNX File OE, NDMP
Details: Platform: VNX for File; Severity: Medium; Frequency: Always; Tracking: 836772
Description: An NDMP restore operation does not overwrite a symlink if a symlink with the same name already exists on the target file system.
Symptom: During an NDMP restore, an error similar to the following occurs: "42619:nsrndmp_recover: NDMP Service Warning: Cannot restore soft link {link_name} File exists."
Workaround: Delete the symlink on the target file system before performing the NDMP restore.
Exists in versions: All 8.1 versions


Category: VNX File OE, ReplicationV2
Details: Platform: VNX for File; Severity: Medium; Tracking: 53766452/751328/831929
Description: While a replication internal checkpoint is being refreshed, the server_mount -a command sometimes causes a page fault panic.
Symptom: server_mount -a sometimes causes a page fault panic while a replication internal checkpoint is being refreshed.
Workaround: In the fixed version, when mounting a file system, all sessions are checked for internal checkpoints. If an internal checkpoint is found, the mount is denied and the message "Device or resource busy" is displayed instead of a Data Mover panic.
Fixed in version: 8.1.9.155
Exists in versions: 8.1.9.184, 8.1.8.132, 8.1.8.121, 8.1.8.119, 8.1.6.101, 8.1.6.96, 8.1.3.79, 8.1.3.72, 8.1.2.51, 8.1.1.33

Category: VNX File OE, System Management
Details: Platform: VNX for File; Severity: Medium; Frequency: Rarely, under a specific set of circumstances; Tracking: 843948
Description: CLI commands that require the System Management daemon fail.
Symptom: CLI commands that require the System Management daemon fail with an error similar to the following: "Error 13958709251: For task Query control stations, an invalid list was returned."
Workaround: Restart the System Management daemon.
Exists in versions: All 8.1 versions

Category: VNX File OE, System Management
Details: Platform: VNX for File; Severity: Medium; Frequency: Always, under a specific set of circumstances; Tracking: 841438
Description: An out of space indication or event occurred while restoring SnapSure checkpoints.
Symptom: After a server_mount server_2 restore all operation, some of the related NFS pathnames or CIFS shares were no longer available to client systems. Upon investigation using the server_export ALL command, it was found that the unavailable NFS pathnames and CIFS shares were no longer defined on the VNX as exported file system mount points.
Workaround: Use the appropriate server_export command to export each missing NFS pathname and each missing CIFS share. For example:
    # server_export vdm1 -Protocol nfs -option anon=0 /fs1_ckpt1_writeable1
Exists in versions: 8.1.9.184


Category: VNX File OE
Details: Platform: VNX for File; Severity: Medium; Tracking: 802648
Description: After a RepV2 operation such as failover/reverse, an interface attached to the source Virtual Data Mover (VDM) and replicated to the target VDM was not created on the target Data Mover.
Symptom: The VDM replication failed due to a source attached interface that had been synchronized to the destination.
Workaround: Create the missing interface manually so that all interfaces attached to the VDM exist on both the source and the destination Data Mover.
Exists in versions: 8.1.9.184, 8.1.9.155

Category: VNX File OE
Details: Platforms: All; Severity: Medium; Tracking: 701819
Description: After a cache lost event, running Storage Pool recovery failed, and the Storage Pool was inaccessible.
Symptom: Storage pool resources are temporarily unavailable.
Workaround: Manually fix the corruption and run Pool Recovery again.
Exists in versions: 8.1.9.184, 8.1.9.155, 8.1.8.121, 8.1.8.119, 8.1.6.101, 8.1.6.96

Category: VNX File OE
Details: Platforms: All; Severity: Medium; Tracking: 661888
Description: When a VNX Storage Processor (SP) reboot is initiated, the SP can stop booting because of a POST error. The SP will appear to be faulted by its peer storage processor.
Symptom: Connecting to the faulted SP with a console server or terminal server displays a message such as "ErrorCode: 0x00000310".
Workaround: Manually reset the faulted Storage Processor.
Exists in versions: 8.1.9.184, 8.1.9.155, 8.1.8.121, 8.1.8.119, 8.1.6.101, 8.1.6.96

Category: VNX File OE
Details: Platforms: All; Severity: Medium; Tracking: 564012
Description: The following error occurred when the user renamed a mapped pool on the backend: "com.emc.cs.symmpoller.AgentException: The specified object does not exist."
Symptom: The pool name on the Control Station (CS) is different from the pool name on the backend. This error results when a mapped pool is renamed on the backend but the nas_diskmark command is not run on the Control Station to synchronize the pool names.
Workaround: Run the nas_diskmark -m -a command on the CS after performing any of the following operations related to the ~filestorage group:
- Add/remove a LUN.
- Turn on, turn off, pause, resume, or modify Compression on a LUN.
- Modify the Tiering Policy or Initial Tier of a LUN.
- Create/destroy a mirror (both sync and async) on a LUN.
Exists in versions: All 8.1 releases.
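For example, the pool names can be resynchronized from a Control Station shell after a backend pool rename. This is a minimal sketch; -m -a is the abbreviated form of the -mark -all switches:

    $ nas_diskmark -mark -all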


Category: VNX File OE, CIFS
Details: Platforms: All; Severity: Medium; Tracking: 566436
Description: A VNX CIFS server cannot be managed by Windows 2012 Server Manager.
Symptom: When a VNX CIFS server is added to Windows 2012 Server Manager, the following error is displayed: "cannot manage the operating system of the target computer".
Workaround: Microsoft Windows 2012 Server Manager requires the Microsoft agent on the server; EMC does not support third-party installation on its server. To access information on a server managed by a VNX storage system, you must use Computer Management instead of Server Manager.
Exists in versions: All 8.1 releases.

Category: VNX File OE, ESRS (Control Station)
Details: Platforms: All; Severity: Critical; Tracking: 557982
Description: Unisphere failed to access the Control Station.
Symptom: When attempting to connect to the array by using Unisphere and the ESRS Service Link, a Certificate Error may occur.
Workaround: Close or terminate the existing session using the terminate button. Alternately, go to Service Link > View All > End (for all existing sessions) and start a new session.
Exists in versions: All 8.1 releases.

Category: VNX File OE, MetroSync Manager, Synchronous Replication
Details: Platforms: All; Severity: Medium; Tracking: 769649
Description: When the VDM MetroSync Manager service is stopped, the replication session status shown in VDM MetroSync Manager does not update in a timely manner.
Workaround: Perform a check status operation to obtain the current sync replication task status.

Category: VNX File OE, Migration
Details: Platforms: All; Severity: Critical; Tracking: 565968
Description: The usermapper service was enabled on the destination while it was disabled on the source.
Symptom: On a global configuration migration, if usermapper is disabled on the source cabinet, a warning is reported, but the user is asked to manually disable the target usermapper service.
Workaround: Manually disable the usermapper service on the target.
Exists in versions: All 8.1 releases.

Category: VNX File OE, Migration
Details: Platforms: All; Severity: Low; Tracking: 565886
Description: Migration information does not accurately reflect the task information and percent complete on the source system.
Symptom: The UI provides inaccurate progress information on migration tasks.
Workaround: The CLI and log files provide more accurate information.
Exists in versions: All 8.1 releases.

Category: VNX File OE, Migration
Details: Platforms: All; Severity: Low; Tracking: 565259
Description: nas_migrate -i <migration name> printed other migrations' warning messages.
Symptom: Warning messages relating to other migration sessions running on the same box could appear when information is requested on a particular session.
Workaround: These messages are only warnings and will not impact the execution of the current session.
Exists in versions: All 8.1 releases.


Category: VNX File OE, RecoverPoint FS
Details: Platforms: All; Severity: Critical; Tracking: 570980
Description: The RecoverPoint init command failed while discovering storage on a source site.
Symptom: Running /nas/sbin/nas_rp -cabinetdr -init <cel_name> failed with the following error: "Error 5008: Error discovering storage on src site."
Workaround: This issue rarely occurs. Rerun /nas/sbin/nas_rp -cabinetdr -init <cel_name>.
Exists in versions: All 8.1 releases.

Category: VNX File OE, RecoverPoint FS
Details: Platforms: All; Severity: Medium; Tracking: 644251
Description: RecoverPoint failback fails.
Symptom: When running RecoverPoint failback, the following error appears, then the operation fails: "13421849338: File system FS1 is made up of disk volumes with inconsistent disk types: d7 (dense UNSET),d8 (dense UNSET),d9 (dense UNSET),d10 (dense UNSET),d11 (dense Mirrored_performance),d12 (dense UNSET),d13 (dense Mirrored_performance),d14 (dense UNSET),d15 (dense UNSET),d16 (dense Mirrored_performance)"
Workaround: No workaround.
Exists in versions: 8.1.9.184, 8.1.9.155, 8.1.8.121, 8.1.8.119, 8.1.6.101, 8.1.6.96, 8.1.3.79, 8.1.3.72

Category: VNX File OE, Replication
Details: Platforms: All; Severity: Critical; Tracking: 586433
Description: RPA reboot regulation and detach from cluster.
Symptom: The replication replica cluster will become unstable and the RPA will reboot until the affected CG is expelled or the Journal Volume LUN configuration is fixed.
Workaround: Disabling the group will stabilize the RP system.
Exists in versions: All 8.1 releases.

Category: VNX File OE, Replication
Details: Platforms: All; Severity: Critical; Tracking: 568516
Description: Replication switchover stops after the source Data Mover fails over.
Symptom: The replication session fails; no transfer will continue although the session status shows fine. A message such as the following appears in /nas/log/sys_log.txt: "CS_PLATFORM:SVFS:ERROR:21:::::Slot4:1372057453: /nas/sbin/rootnas_fs -x root_rep_ckpt_978_56827_1 QOSsize=20000M Error 3024: There are not enough free disks available to satisfy the request.."
Workaround: Add disk space to the SavVol pool.
Exists in versions: All 8.1 releases.

Category: VNX File OE, Replication
Details: Platforms: All; Severity: Critical; Tracking: 558919
Description: RecoverPoint File: nas_rp -failback - "Error 13421904947: Unable to communicate with Control Station on source site."
Symptom: The primary system has not completely rebooted, resulting in the error.
Workaround: Wait for the primary Control Station to complete its reboot operation before running nas_rp -failback.
Exists in versions: All 8.1 releases.


Category: VNX File OE, Synchronous Replication
Details: Platforms: File and Unified; Severity: High; Tracking: 768954
Description: When a user creates a nas_syncrep session for an interface attached to a virtual Data Mover (VDM), the network mapping is not displayed when running the nas_syncrep -info command.
Symptom: Network interface information is not displayed in the nas_syncrep -info details.
Workaround: Check the device mapping by running the following File CLI commands (the example assumes a VDM ID of 155124):
1. On the active system, run: nas_syncrep -i <sync_id>
2. Check the interface on the VDM by running: nas_server -i -v vdm_155124
3. Then find the interface on the destination system by running: server_ifconfig server_n -all

Category: VNX File OE, Platform, Synchronous Replication
Details: Platforms: File and Unified; Severity: High; Tracking: 737780
Description: If you delete a synchronous replication session locally on the standby side by using the nas_syncrep -delete command and the nas_syncrep -delete -local command, the session fails.
Symptom: The following errors are returned: "Error 13431997103: The mirror <mirror_name> is half removed." "Error 13431997108: Error occur when remove consistency group."
Workaround: Power down the remote site and use the nas_syncrep -delete -local command to delete the sync replication session locally at the active site.

Category: VNX File OE, Platform, Synchronous Replication
Details: Platforms: File and Unified; Severity: Medium; Tracking: 730159
Description: When creating a synchronous replication session, the process fails because storage is locked on the remote end.
Symptom: The following error is returned: "Error 13431997070: Failed to synchronize local storage on remote site when creating sync replication session."
Workaround: Wait and then retry the operation.

Category: VNX File OE, Platform, Synchronous Replication
Details: Platforms: File and Unified; Severity: Low; Tracking: 751082
Description: When reversing or failing over a sync replication session, if any active or inactive interface at the remote end uses the same IP address as the source VDM, the reverse/failover fails with the following error: "Error 13431996668: IP address conflict of network interface <interface_name> on Data Mover <server_name>."
Symptom: If the interface at the remote side is active, the error is expected. However, if the interface at the remote side is down, there is no impact on reverse/failover.
Workaround: At the remote side, delete the interfaces that use the same IP address as the source VDM.


Category: VNX File OE, System Management
Details: Platforms: All; Severity: Critical; Tracking: 563493
Description: Failure to create a file system from a NAS pool. The following error message displays: "File system cannot be built because it would contain space from multiple mapped pools or a mix of both mapped pool and RAID group based pool."
Symptom: Some LUNs were bound from a RAID group. The user ran diskmark and then created file systems on them. Some of these LUNs were moved to another thin pool via LUN compression or migration. The new file system could not be created on these LUNs and diskmark failed.
Workaround: Avoid operations that lead to LUN migration when file systems are built on the LUNs. Verify that the disk types of all disk volumes under the file system are from the same mapped pool or compatible RAID group based pool(s). For file system extension, choose the pool matching the current storage of the file system.
Exists in versions: All 8.1 releases.

Category: VNX File OE, System Management
Details: Platforms: All; Severity: Critical; Tracking: 558325
Description: During an upgrade, if there are heavy writes to a thin file system, operations can fail with an out of space error.
Symptom: The out of space error may occur during an upgrade when the system encounters write operations on file systems. The file system auto-extension cannot complete because the NAS service is down during the upgrade.
Workaround: Limit or suspend write operations during upgrades.
Exists in versions: All 8.1 releases.

Category: VNX File OE, System Management
Details: Platforms: All; Severity: Medium; Tracking: 648929
Description: The CS cannot be added to the master domain. File functions in Unisphere were disabled as a result.
Symptom: If the hostname was not mapped to a public Operating System IP, the CS could not be added to the master domain. As a result, the File function in the Unisphere GUI was disabled if the user logged in with the domain user account.
Workaround: Ensure there is a public CS IP entry in /etc/hosts. If there is not, add one.
Exists in versions: 8.1.9.184, 8.1.9.155, 8.1.8.121, 8.1.8.119, 8.1.6.101, 8.1.6.96, 8.1.3.79, 8.1.3.72

Category: VNX File OE, System Management
Details: Platforms: All; Severity: Medium; Tracking: 645204
Description: SSL Certificate Weak Public Key Strength.
Symptom: SSL Certificate Weak Public Key Strength. Found value: bits [=] 1024.
Workaround: If the current certificate uses a weak key, a new certificate can be generated by executing "nas_config -ssl" on the Control Station in order to generate a new key with a 2048-bit length.
Exists in versions: 8.1.9.184, 8.1.9.155, 8.1.8.121, 8.1.8.119, 8.1.6.101, 8.1.6.96, 8.1.3.79, 8.1.3.72
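As an illustration, the replacement certificate can be generated from a Control Station shell. This is a minimal sketch; the /nas/sbin path is the usual Control Station location for nas_config, and running it as root is an assumption based on typical Control Station administration:

    # /nas/sbin/nas_config -ssl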


Category: VNX File OE, System Management
Details: Platforms: All; Severity: Medium; Tracking: 624401
Description: A deduplication LUN is left in an inconsistent state when the Storage Processor (SP) reboots while an enabling/disabling operation is on its way to completion.
Symptom: Deduplication on a private LUN hangs at enabling for weeks.
Workaround: Contact your service provider to recover the LUNs to a normal state.
Exists in versions: 8.1.9.184, 8.1.9.155, 8.1.8.121, 8.1.8.119, 8.1.6.101, 8.1.6.96, 8.1.3.79, 8.1.3.72

Category: VNX File OE, System Management
Details: Platforms: All; Severity: Medium; Tracking: 577681
Description: An error posts during rescan stating that root_disk(s) are unavailable when the limit is set on the SG that contains the control volumes.
Symptom: Error during rescan after setting Host I/O Limits (previously called Front-End I/O Quota) on a VMAX SG that contains the control volumes.
Workaround: No workaround.
Exists in versions: 8.1.9.184, 8.1.9.155, 8.1.8.121, 8.1.8.119, 8.1.6.101, 8.1.6.96, 8.1.3.79, 8.1.3.72

Category: VNX File OE, System Management
Details: Platforms: All; Severity: Medium; Tracking: 546410
Description: Running nas_quotas occasionally produces an Error 5007.
Symptom: The nas_quotas command fails with output such as: "/nas/bin/nas_quotas -user -report -fs pbrfs_0005 501 Error 5007: server_5: /nas/sbin/repquota -u /nas/server/server_4/quotas_fs_file /nas/server/server_4/quotas_uid_file .etc/rpt_file .etc/rpt_file.sids : : exec failed"
Workaround: If the error occurs, retry the operation.
Exists in versions: All 8.1 releases.

Category: VNX File OE, System Management
Details: Platforms: All; Severity: Medium; Tracking: 540001
Description: The battery properties in the inventory page do not display input power.
Symptom: Users are unable to obtain detailed power information when using the GUI or CLI.
Workaround: No workaround.
Exists in versions: All 8.1 releases.


Category: VNX File OE, System Management
Details: Platforms: All; Severity: Low; Tracking: 646174
Description: Unisphere incorrectly displays alerts that the VNX Block OE and VNX File OE versions are not compatible.
Symptom: After fixing a compatibility problem, Unisphere displayed alerts that the VNX Block OE and VNX File OE versions are not compatible.
Workaround: Manually delete the alert from the GUI.
Exists in versions: 8.1.9.184, 8.1.9.155, 8.1.8.121, 8.1.8.119, 8.1.6.101, 8.1.6.96, 8.1.3.79, 8.1.3.72

Category: VNX File OE, System Management
Details: Platforms: All; Severity: Low; Tracking: 569073
Description: SavVol auto-extension does not use up all the space left in the pool if the pool is smaller than 20 GB.
Symptom: SnapSure SavVol auto-extend asks for a certain amount of storage (for example, 20 GB), which is calculated from storage pool properties (such as the HWM). If the storage pool has less storage than that amount (such as 19 GB), the SavVol auto-extend will fail and some existing checkpoints may become inactive. The following alert displays in Unisphere: "/nas/sbin/rootnas_fs -x <ckpt_name> QOSsize=20G Error 12010: The requested space 20000 is not available from the pool <pool_name>."
Workaround: Add more storage to the storage pool used by the SnapSure SavVol (make sure at least 20 GB is free) and the SavVol auto-extend will succeed on the next attempt. Note that there will always be a small amount of storage (for example, 19 GB) that cannot be fully consumed by SavVol auto-extend, but which can be consumed by other operations (such as Production File System auto-extend).
Exists in versions: All 8.1 releases.

Category: VNX File OE, System Management
Details: Platforms: All; Severity: Low; Tracking: 573536
Description: Storage Pools for File is empty in the GUI even though multiple pools exist.
Symptom: When running nas_diskmark on a system with massive numbers of LUNs, there is a very low possibility that diskmark succeeds but the GUI does not receive data about its Storage Pools for File; the Storage Pools for File page shows no change.
Workaround: Press the "Rescan Storage System" button or manually run nas_diskmark again.
Exists in versions: All 8.1 releases.


Category: VNX File OE, System Management
Platforms: All
Severity: Low
Tracking: 532380
Description: The following error can occur when using naviseccli.bin after a fresh installation:
./naviseccli.bin: error while loading shared libraries: libNonFipsUtilities.so: cannot open shared object file: No such file or directory
Symptom: The file /usr/lib/libNonFipsUtilities.so is missing from the Control Station after an Express Install installation.
Workaround: Do not use naviseccli.bin. Use naviseccli instead.
Exists in versions: All 8.1 releases.

Category: VNX File OE, Unisphere
Platforms: All
Severity: Low
Tracking: 573444
Description: After File installation and VIA execution, the initial Unisphere login to a Unified system fails with a Connection Error. Subsequent login attempts succeed.
Symptom: When connecting to a VNX via a CS IP where the CS's certificate cannot be verified, the web browser generates a popup, "Do you wish to continue?", with a checkbox, "Always trust content from this publisher". If the checkbox is left unchecked, the login screen does not display and there are communication errors with the CS. The Java popup warning does not appear until you click Show Options.
Workaround: Check the checkbox "Always trust content from this publisher".
Exists in versions: All 8.1 releases.

Category: VNX File OE, VNX Installation Assistant (VIA)
Platforms: All
Severity: Medium
Frequency: Rarely under specific circumstances
Tracking: 593647
Description: The VIA window closes after clicking Next from the Welcome screen.
Symptom: After launching VIA, the Welcome screen appears. If Next is clicked, the VIA application closes.
Workaround: Run the Windows ipconfig command to determine whether the system has more than six connections. If so, disable the unused connections to bring the total number to no more than six, and retry the VIA operation.
Exists in versions: All 8.1 releases.

Category: VNX File OE, VNX Installation Assistant (VIA)
Platforms: All
Severity: Medium
Tracking: 566667
Description: VIA fails to change the sysadmin password on the Apply page.
Symptom: An error message displays continuously when trying to change the password on the storage system, and the retry fails.
Workaround: Repair any network connectivity issues on the storage system. Wait 10 minutes, then click Retry.
Exists in versions: All 8.1 releases.


Category: Unisphere
Platforms: All
Severity: Critical
Frequency: Occasionally
Tracking: 616169
Description: When creating a storage pool of "MAX" size while I/O is running to other LUNs in that pool, Unisphere can return an error stating that the requested LUN size is too large.
Symptom: Because I/O is running, the amount of available space can change between the time the request to create the pool is initiated and the time it is processed by the system.
Workaround: Try to create the storage pool again, or select a specific size for the storage pool.
Exists in versions: 1.3.9.1.184, 1.3.9.1.155, 1.3.8.1.0119, 1.3.3.1.0072-1, 1.3.2.1.0051

Category: Unisphere
Platforms: All
Severity: Critical
Frequency: Occasionally
Tracking: 612365
Description: When managing background tasks in Unisphere, the Abort and Delete buttons appear disabled in the Background Tasks tab.
Symptom: Users must have admin privileges to abort or delete background tasks.
Workaround: Ensure that anyone who wants to delete or abort a background task has admin privileges in Unisphere.
Exists in versions: 1.3.9.1.184, 1.3.9.1.155, 1.3.8.1.0119, 1.3.6.1.0096, 1.3.3.1.0072-1, 1.3.2.1.0051

Category: Unisphere
Platforms: All
Severity: Critical
Frequency: Always under a specific set of circumstances
Tracking: 611766
Description: When a VNX system is assigned a hostname composed strictly of numeric characters, the system will not start and users cannot access the system through Unisphere.
Workaround: Change the VNX system hostname so that it contains one or more non-numeric characters.
Exists in versions: 1.3.9.1.184, 1.3.9.1.155, 1.3.8.1.0119, 1.3.6.1.0096, 1.3.3.1.0072-1, 1.3.2.1.0051, 1.3.1.1.0033

Category: Unisphere
Platforms: All
Severity: Critical
Frequency: Occasionally
Tracking: 606043
Description: When changing the Recommended Ratio of Keep Unused setting from 30 to 60, insufficient disk space may be reserved for hot spares, which may increase the risk of failure.
Workaround: Use the VNX for Block CLI hotsparepolicy -set command to overwrite the recommended value with the desired value (for example, overwriting 60 with 30). For example:
hotsparepolicy -set <Policy ID> -keep1unusedper 30 -o
Exists in versions: 1.3.9.1.184, 1.3.9.1.155, 1.3.8.1.0119, 1.3.6.1.0096, 1.3.3.1.0072-1, 1.3.2.1.0051, 1.3.1.1.0033

Category: Unisphere
Platforms: All
Severity: Medium
Tracking: 741477
Description: The progress of a pool or LUN operation is frozen. On the other SP, the display is as expected.
Symptom: The progress of a pool's deduplication operation hangs.
Workaround: Restart Unisphere by using net stop k10governor and net start k10governor.
Exists in versions: 1.3.9.1.184, 1.3.9.1.155, 1.3.8.1.0119
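For illustration, a sketch of that restart sequence, assuming the commands are issued from a command prompt on the affected SP and that k10governor is the management-server service the workaround names:

net stop k10governor
net start k10governor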


Category: Unisphere
Platforms: All
Severity: Medium
Tracking: 745895
Description: Some thin LUNs in a pool don't show the Consumed Capacity in the LUN properties field.
Symptom: It is normal that the consumed capacity changes to N/A after deduplication is enabled.
Workaround: Functions as designed.
Exists in versions: 05.33.009.5.184, 05.33.009.5.155, 05.33.008.5.119

Category: Unisphere
Platforms: All
Severity: Medium
Tracking: 635311
Description: When running VNX Unisphere in off-array mode, the Unisphere online help will not load if Chrome is set as the default browser.
Symptom: When Unisphere off-array online help is invoked in Chrome, the browser displays a blank page.
Workarounds: Use Internet Explorer or Firefox to open Unisphere. Or, prior to accessing Unisphere, open Chrome with the following command:
<Chrome-path> --allow-file-access-from-files
Exists in versions: 1.3.9.1.184, 1.3.9.1.155, 1.3.8.1.0119, 1.3.6.1.0096
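For illustration, on a typical Windows installation the second workaround might look as follows (the Chrome path shown is an assumption; substitute the actual location of chrome.exe):

"C:\Program Files\Google\Chrome\Application\chrome.exe" --allow-file-access-from-files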

Category: Unisphere
Platforms: All
Severity: Medium
Tracking: 652886
Description: The Storage Capacity displayed in the Dashboard section shows File free capacity as smaller than the Free capacity displayed in the Storage Pool table.
Symptom: There is more free space than what is displayed.
Workaround: The displayed free capacity of a storage pool accounts for the File side only. The File free capacity displayed on the Dashboard is from the point of view of the whole system, including File and Block.
Exists in versions: 1.3.9.1.184, 1.3.9.1.155, 1.3.8.1.0119, 1.3.6.1.0096, 1.3.3.1.0072-1

Category: Unisphere
Platforms: All
Severity: Medium
Tracking: 651009
Description: After BBU B is removed from the enclosure, naviseccli faults -list fails to report the expected message "Bus 0 Enclosure 0 BBU B: Removed".
Symptom: Only the first line of the error is reported, "Bus 0 Enclosure 0: Faulted". The complete message is not reported.
Workaround: Use the naviseccli account to log in to <SP IP>/debug, and click Force A Full Poll.
Exists in versions: 1.3.9.1.184, 1.3.9.1.155, 1.3.8.1.0119, 1.3.6.1.0096, 1.3.3.1.0072-1


Category: Unisphere
Platforms: All
Severity: Medium
Tracking: 641746
Description: With PowerPath installed, the user wants to disconnect the storage group from the host, but the host cannot be found.
Symptom: The host cannot be found when the user wants to disconnect the storage group from the host.
Workaround: There are two workarounds for this issue:
1) Set the DWORD registry value AutoHostRegistration, located under the HKLM\System\CurrentControlSet\Control\EmcPowerPath registry key, to 0. Then, reboot the host.
2) Uninstall PowerPath. Then, install PowerPath using the <Setup.exe> /v"AUTO_HOST_REGISTRATION=0" option.
Exists in versions: 1.3.9.1.184, 1.3.9.1.155, 1.3.8.1.0119, 1.3.6.1.0096, 1.3.3.1.0072-1
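As a sketch, the first workaround can be applied with the built-in reg utility from an elevated command prompt (the key and value names come from the workaround above; reboot the host afterward):

reg add "HKLM\System\CurrentControlSet\Control\EmcPowerPath" /v AutoHostRegistration /t REG_DWORD /d 0 /f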

Category: Unisphere
Platforms: All
Severity: Medium
Tracking: 567075
Description: In the Unisphere UI, an asynchronous mirror with a fractured secondary image and no primary image cannot be deleted.
Symptom: When an asynchronous mirror has only a fractured secondary image but no primary image, the secondary image cannot be deleted from the Unisphere UI. After selecting the secondary image and clicking the Delete button, nothing happens and there is no failure message.
Workaround: There are two workarounds for this issue:
1. Select the asynchronous mirror and click Properties to open the Remote Mirror Properties dialog. Go to the Secondary Image tab and click Force Delete to force-destroy this secondary image.
2. Use the CLI to destroy this secondary image by using the -force switch.
Exists in versions: All 1.3 versions.

Category: Unisphere
Platforms: All
Severity: Medium
Tracking: 564905
Description: A faulted LUN does not display in the output of the faults -list command unless it is associated with a system feature.
Symptom: A faulted LUN will not be shown in the faults -list output unless it is associated with a system feature such as a storage group, mirror, snap, and so on.
Workaround: No workaround.
Exists in versions: All 1.3 versions.

Category: Unisphere
Platforms: All
Severity: Medium
Tracking: 563384
Description: After a bus cable is moved from one port to another, faults may be seen on both the original bus and the new bus.
Symptom: If a bus cable is moved from one port to another, there will be faults from both the original bus and the new bus. These remain even after moving the bus cable back to the first port.
Workaround: This is a display issue only. Restart the management server from the setup page or reboot the SP.
Exists in versions: All 1.3 versions.


Category: Unisphere
Platforms: All
Severity: Medium
Tracking: 559676
Description: A host appears registered with unexpected information and may be unavailable.
Symptom: After attaching a cloned host to the array, the host might be registered with unexpected information, such as host IP, host ID, or host name. This may be inherited from the image that was used to create the host. Without the right host information, the new host might be unavailable on the array.
Workaround: To create a host from a cloned image, do not include a Host Agent in the image. Alternatively, before attaching the new host to the array, uninstall the Host Agent with all its configuration information removed, and install a new Host Agent from scratch. Refer to Knowledgebase articles emc66921 and emc63749 for details about how to clean up a Host Agent inherited from the image.
Exists in versions: All 1.3 versions.

Category: Unisphere
Platforms: All
Severity: Medium
Tracking: 538390
Description: Unisphere sometimes loads slowly or does not load completely.
Symptom: Some Unisphere content, for example, wizard panels and the support section, may not appear. The Language Package does not take effect and the UI appears only in English.
Workaround: Enable the Java Cache to ensure that future load times are faster.
Exists in versions: All 1.3 versions.

Category: Unisphere
Platforms: Windows Server 2008
Severity: Medium
Tracking: 526482
Description: Two Windows 2008 iSCSI hosts connected at the same time do not display under Hosts > Host List.
Symptom: When running Unisphere, after two Windows 2008 iSCSI hosts are connected to an array and rebooted, although they are shown as logged in and registered under Hosts > Initiators, the following warning is shown in the status column: "The initiator is not fully connected to the Storage System." As a result, they do not show under Hosts > Host List.
Workaround: Log out of Unisphere and log in again.
Exists in versions: All 1.3 versions.
Category: Unisphere
Platforms: All
Severity: Medium
Tracking: 522202
Description: An error occurs when pinging or tracing the route to the SP management IP from an iSCSI data port in the Unisphere UI.
Symptom: The following error message displays when pinging/tracing the route to the SP management IP from an iSCSI data port in the Unisphere UI:
Ping/traceroute to peer SP or management workstation from iSCSI port disrupts management communication and is not permitted.
Workaround: Do not ping or trace the route to the SP management IP from an iSCSI data port; it is not recommended.
Exists in versions: All 1.3 versions.


Category: Unisphere
Platforms: All
Severity: Medium
Tracking: 472986
Description: There are missing columns in a UI table if the language setting has been recently changed.
Symptom: There may be missing columns in a UI table after recently switching the language setting.
Workaround: Delete the existing persistence file persistence.per from <userHome>\emc\Unisphere for Unisphere, or from <userHome>\emc\USM for Unisphere Service Manager (USM).
Exists in versions: All 1.3 versions.

Category: Unisphere
Platforms: All
Severity: Low
Tracking: 574138
Description: More LUNs cannot be created in the Storage > LUNs table by clicking Create when either the pool LUNs or RAID group LUNs have reached the limit.
Symptom: In the Storage > LUNs table, more LUNs cannot be created by clicking Create when either the pool LUNs or RAID group LUNs reach the limit. A popup warning message appears and then the Create LUN dialog closes.
Workaround: In the case where pool LUNs reach the limit, but RAID group LUNs do not, create RAID group LUNs in Storage > Storage Pools > RAID Groups. In the case where RAID group LUNs reach the limit, but pool LUNs do not, create pool LUNs in Storage > Storage Pools > Pools.
Exists in versions: All 1.3 versions.

Category: Unisphere
Platforms: All
Severity: Low
Tracking: 551081
Description: It can take multiple attempts to access the Control Station.
Symptom: When a user launches Unisphere, the applet may hang. The user has to try multiple times before launching Unisphere successfully.
Workaround: Increase the amount of RAM in the computer running Unisphere and disable on-access scanning by anti-virus applications when launching Unisphere for the first time.
Exists in versions: All 1.3 versions.

Category: Unisphere
Platforms: All
Severity: Low
Tracking: 532907
Description: A timeout error occurs when deleting LUNs at the same time as binding a large number of LUNs to a RAID group.
Symptom: In rare cases, binding a large number of LUNs to a RAID group using Naviseccli at nearly the same time as deleting LUNs in the UI results in a timeout error, although the LUNs are deleted successfully.
Workaround: No workaround. LUNs are deleted successfully, despite this timeout error.
Exists in versions: All 1.3 versions.

Category: Unisphere
Platforms: All
Severity: Low
Tracking: 490729
Description: LUN compression fails if the system is configured with the maximum number of LUNs.
Symptom: If the system is configured with the maximum number of LUNs, enabling LUN compression will fail.
Workaround: Delete another LUN, then compress the target LUN.
Exists in versions: All 1.3 versions.


Category: Unisphere
Platforms: All
Severity: Low
Tracking: 488940
Description: After issuing the getlog command where there are many event log entries, CIMOM may restart and thus stop service temporarily.
Symptom: If there are many log entries in the event log, the getlog command may output a lot of information, which can take a long time. If the getlog command takes over half an hour, a dump is generated, and CIMOM stops service for about two minutes while it restarts.
Workaround: There are two steps to this workaround:
1. Ensure the network bandwidth between the host that issued the getlog command and the target array is sufficient.
2. Redirect the command output to a file instead of outputting to the screen. For example: naviseccli -h xx.xx.xx.x getlog > naviloginfo.txt
Exists in versions: All 1.3 versions.
Category: Unisphere Analyzer
Platforms: All
Severity: Critical
Tracking: 572930
Description: Unisphere may crash when using Analyzer on a system with a large configuration (hundreds of LUNs).
Symptom: For systems with large configurations, the Unisphere UI may crash if there are hundreds of LUNs.
Workaround: Increase Java memory:
1. Go to the Start menu in Windows.
2. Select Settings > Control Panel.
3. Double-click the Java Plugin icon.
4. Select the Advanced tab.
5. In the Java Runtime Parameters, type -Xmx1024m.
Exists in versions: All 1.3 versions.

Category: Unisphere Analyzer
Platforms: All
Severity: Medium
Tracking: 562742
Description: The deduplication feature-level state of the array is inaccurate in both the UI and the CLI.
Symptom: The deduplication feature state of the array is inaccurate when opening the archive dump file in the UI or CLI. In the UI, the deduplication tab of the specific mapped LUN should display the deduplication feature-level state, but it actually displays the state of the pool in which the mapped LUN is contained. In the CLI, the deduplication feature state is incorrect in the dump for the LUNs configuration: it shows the states of the LUN instead of the feature-level state.
Workaround: No workaround.
Exists in versions: All 1.3 versions.


Category: Unisphere Analyzer
Platforms: All
Severity: Low
Tracking: 559104
Description: Analyzer does not work when there are two open sessions for the same system.
Symptom: Unisphere Real-Time Analyzer does not work when there are two sessions for the same host system.
Workaround: Close both sessions and start only one Unisphere Analyzer UI session.
Exists in versions: All 1.3 versions.

Category: Unisphere, CLI
Platforms: File and Unified
Severity: Medium
Tracking: 772431
Description: Before performing a non-disruptive upgrade (NDU), when running the ndu -runrules command to check the array status, the command fails and reports that the stats logging process is turned on.
Workaround: Run the setstats -off command to turn off stats logging, and then run the ndu -runrules command again for the NDU.
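A minimal sketch of that sequence, assuming both commands are issued through naviseccli as in other examples in these notes (the SP address is a placeholder):

naviseccli -h <SP IP> setstats -off
naviseccli -h <SP IP> ndu -runrules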

Category: Unisphere, MetroSync Manager
Platforms: All
Severity: High
Tracking: 773994
Description: When SMTP Send Test Email fails in a MetroSync Manager operation, there is little information to troubleshoot the cause of the failure.
Workaround: Use tools such as ping, tracert, and telnet to diagnose the network environment and server behavior.

Category: Unisphere Host software, CLI, and Utilities
Platforms: All Linux and UNIX platforms
Severity: Critical
Tracking: 572417
Description: The Server Utility update command failed with the error message: NAVLInetHostNotFound: No such host...
Symptom: The Unisphere Server Utility serverutilcli update command fails with the error message: No such host.
Workaround: Add the hostname-to-IP-address mapping to /etc/hosts.
Exists in versions: All 1.3 versions.

Category: Unisphere Host software, CLI, and Utilities
Platforms: Linux and UNIX platforms
Severity: Critical
Tracking: 547526
Description: The Unisphere Host Agent cannot be started on an AsianUx host.
Symptom: The Unisphere Host Agent cannot be started on an AsianUx host. An error message such as the following appears in /var/log/agent.log:
mm/dd/yyyy 16:43:56 Agent Main -- Net or File event. Err: Local hostname/address mapping unknown; see installation notes.
Workaround: Add the hostname-to-IP-address mapping to /etc/hosts. The Host Agent requires the hostname-to-IP-address mapping for a network connection.
Exists in versions: All 1.3 versions.

Category: Unisphere Host software, CLI, and Utilities
Platforms: Windows
Severity: Critical
Tracking: 537021
Description: When configuring an iSCSI connection, the Server Utility may fail to connect if the wrong network adapter is selected.
Symptom: The Server Utility may fail to connect if the server has several network adapters, not all of them are reachable from each other, and the wrong network adapter is selected when configuring an iSCSI connection. The failure message may also take a while to appear.
Workaround: If multiple network adapters are configured on the server, EMC recommends that the default adapter be selected while using the Server Utility to create an iSCSI connection. Alternatively, ensure that the right network adapter is selected. Otherwise, the selected subnet may not be able to access the target. Make sure that the selected network adapter can access the target configured on the array.
Exists in versions: All 1.3 versions.
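For illustration, an /etc/hosts line of the kind the two preceding workarounds call for (the address and names are hypothetical):

192.168.1.50   myhost.example.com   myhost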


Category: Unisphere Host software, CLI, and Utilities
Platforms: Windows
Severity: Medium
Tracking: 573772
Description: Removal of an iSNS target is reported by the Server Utility as successful when the operation was actually unsuccessful.
Symptom: Although the Server Utility cannot remove an iSNS target, the iSNS target can still be selected for removal. The remove operation is then reported as completing successfully, even though it was unsuccessful and the target is still present.
Workaround: No workaround. The Server Utility cannot be used to remove an iSNS target.
Exists in versions: All 1.3 versions.

Category: Unisphere Host software, CLI, and Utilities
Platforms: All
Severity: Low
Frequency: Always under specific circumstances
Tracking: 62974296/641084, 642898
Description: Naviseccli commands on the Control Station that have a "$" in the username or password fail authentication.
Symptom: Naviseccli commands fail authentication even with a password that is confirmed to work with Unisphere.
Workaround: A naviseccli string that contains a "$" should be enclosed in single quotes (such as 'test$me'), so that the entire string is passed to naviseccli, passing authentication.
Exists in versions: 1.3.9.1.184, 1.3.9.1.155, 1.3.8.1.0119, 1.3.6.1.0096, 1.3.3.1.0072-1
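A sketch of the quoting from a Control Station shell (the address and account are placeholders, and getagent is used here only as a convenient read-only command). Without the single quotes, the shell would expand $me before naviseccli ever sees the password:

naviseccli -h <SP IP> -user sysadmin -password 'test$me' -scope 0 getagent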

Category: Unisphere Host software, CLI, and Utilities
Platforms: VNX5400
Severity: Low
Tracking: 573776
Description: Pool creation failed on a VNX5400.
Symptom: When attempting to create a storage pool, the following pool status message may be received if the system LUN limits have been reached:
Current Operation: Creating
Current Operation State: Failed
Current Operation Status: Illegal unit number
Current Operation Percent Completed: 24
This message indicates that the system configuration has reached its maximum configuration for FLUs and the pool cannot be created.
Workaround: No workaround. This is expected behavior when the system limits are exceeded. The storage pool can be destroyed.
Exists in versions: All 1.3 versions.


Category: Unisphere Host software, CLI, and Utilities
Platforms: Windows Server 2012
Severity: Low
Tracking: 521220
Description: The virtual Fibre Channel port configured for a Hyper-V virtual machine (VM) cannot be recognized by the VM.
Symptom: This is an environmental issue caused by the management operating system and the virtual machine running different versions of integration services. The management operating system (which runs the Hyper-V role) and the virtual machines should run the same version of integration services. Otherwise, some new features of the management operating system might not be supported on the VM.
Workaround: Ensure that the management operating system and the virtual machine are running the same version of integration services. For more information, refer to:
Version compatibility of Integration Services: http://technet.microsoft.com/en-us/library/ee207413%28v=WS.10%29.aspx
How to upgrade the Integration Services: http://technet.microsoft.com/en-us/library/ee941103%28v=WS.10%29.aspx
Exists in versions: All 1.3 versions.

Category: Unisphere Host software, CLI, and Utilities
Platforms: Windows Server 2012
Severity: Low
Tracking: 519934
Description: On a Windows 2012 server, Windows STOP errors can occur when running Server Utility 1.3 or Host Agent 1.3.
Symptom: On Windows Server 2012 without the February 2013 cumulative updates installed, Windows STOP errors (blue or black screen errors) can occur when running the Server Utility or Host Agent 1.3.
Workaround: Install the Microsoft Windows 8 and Windows Server 2012 February 2013 cumulative updates. Refer to Microsoft Knowledgebase article 2795944 for more information: http://support.microsoft.com/kb/2795944/EN-US
Exists in versions: All 1.3 versions.
Category: Unisphere QoS Manager
Platforms: All
Severity: Critical
Tracking: 562832
Description: In rare cases, Unisphere Quality of Service Manager may lose connection to the array for two or three minutes.
Symptom: In rare conditions, if the fallback policy is enabled, the Unisphere QoS Manager GUI/CLI loses connection to the array for two or three minutes.
Workaround: Disable the fallback policy and reconnect to the array after three minutes.
Exists in versions: All 1.3 versions.

Category: USM
Platforms: All
Severity: Medium
Tracking: 645113
Description: The USM Online Disk Firmware Upgrade (ODFU) failed to report information about failed disks.
Symptom: The information about failed disks presented to the user is not accurate.
Exists in versions: 1.3.9.1.184, 1.3.9.1.155, 1.3.8.1.0119, 1.3.6.1.0096, 1.3.3.1.0072-1


Category: USM
Platforms: All
Severity: Medium
Tracking: 572236
Description: The USM Online Disk Firmware Upgrade (ODFU) wizard inaccurately reports the disk firmware upgrade status and progress.
Symptom: While watching the status in the USM ODFU wizard, some reports are inaccurate: the Completed disks number increases while the Remaining disks and Upgrading disks numbers remain at 0; the progress bar remains at 100% while the upgrade is still in progress; and once the overall status is complete, it is possible that not all disks have been reported.
Workaround: No workaround.
Exists in versions: All 1.3 versions.

Category: USM
Platforms: All
Severity: Medium
Tracking: 558207
Description: Cannot log into USM when one SP is down.
Symptom: When one SP is down, USM takes several minutes to log into the system.
Workaround: None.
Exists in versions: All 1.3 versions.

Category: USM
Platforms: All
Severity: Medium
Tracking: 559742
Description: The software upgrade process hangs after a file transfer.
Symptom: USM does not support upgrades of the VNX OE in a proxy configuration.
Workaround: No workaround.
Exists in versions: All 1.3 versions.

Category: USM
Platforms: CLARiiON
Severity: Medium
Tracking: 550578
Description: An error is presented when running the Storage System Verification wizard on a CLARiiON system.
Symptom: The wizard returns an error when run against a CLARiiON system.
Workaround: 64-bit Java does not support loading 32-bit DLLs. Install 32-bit Java.

Category: USM
Platforms: VNX
Severity: Medium
Tracking: 561544/551522
Description: An internal error appears during a language pack installation.
Symptom: An internal error is shown after the File-side language pack is installed, and before the Block-side language pack installation starts.
Workaround: Verify network connectivity. Restart USM and log into the system. Start the Install Software wizard and choose the third option to install the Block language pack.
Exists in versions: All 1.3 versions.

Category: USM
Platforms: VNX
Severity: Medium
Tracking: 578647
Description: The Install Software wizard fails to upgrade the VNX Block OE.
Symptom: The Install Software wizard fails to upgrade the VNX for Block OE. An error states that USM is unable to obtain the software maintenance status.
Workaround: Under Tools, click Software Maintenance Status to view the upgrade status. When the upgrade finishes, commit the package and re-enable statistics, if needed.
Exists in versions: All 1.3 versions.


Category: USM, Unisphere Off Array Tools
Platforms: All
Severity: Medium
Tracking: 748953
Description: A USM upgrade failed and the user had insufficient permissions to run a CLI command. The message ID was 15569322006.
Symptom: A Control Station upgrade operation failed in USM and returned the following error message: stderr: Could not chdir to home directory /home/GlobalAdmin1: No such file or directory.
Workaround: To restore the missing home directories, fail over to the previously activated Control Station.
Exists in versions: 05.33.009.5.184, 05.33.009.5.155, 05.33.008.5.119, 05.33.008.5.117

Category: USM, Unisphere Off Array Tools
Platforms: All
Severity: Medium
Tracking: 736612/740247
Description: While using VIA to set a host name with an FQDN, communication with the VNX repeatedly failed.
Symptom: Communication with the system failed.
Workaround: Don't use an FQDN host name. To specify a host name, the maximum number of characters is 64, excluding white spaces and dot characters.
Exists in versions: 05.33.009.5.184, 05.33.009.5.155, 05.33.008.5.119

Category: Data at Rest Encryption
Platforms: VNX
Severity: Medium
Tracking: 700786
Description: Removing hardware improperly can cause data to become inaccessible.
Symptom: If a drive or SP replacement is required on a D@RE-enabled array, the standard VNX2 replacement procedure can be used. If a 6Gb SAS UltraFlex I/O Module needs to be added or replaced, the standard VNX2 procedure can be used.
Workaround: If the VNX chassis and both Storage Processors (SPs) need to be replaced, do not replace both SPs simultaneously. Instead, retain one SP until the array is back online before replacing the second SP. Alternatively, if the hardware was already replaced, you can restore the keystore from a backup with the assistance of EMC Support.
Exists in versions: 05.33.009.5.184, 05.33.009.5.155, 05.33.008.5.119, 05.33.006.5.096

Category: MirrorView Asynch
Platforms: All
Severity: Medium
Frequency: Likely under specific circumstances
Tracking: 329924
Description: MirrorView/A: An attempt to promote a large consistency group when the storage systems are connected by iSCSI can fail.
Symptom: An uncleared deadlock prevention flag (error 0x7152807) may prevent further promote or administrator operations. This issue can occur during heavy I/O traffic and/or other mirror updates.
Workaround: When you promote a large group, reduce other loads on the storage system where possible. If this issue is found, destroy and recreate the affected group. An SP restart of the primary and/or secondary storage system clears the deadlock prevention flag.
Exists in versions: All 5.33 versions.


Category: MirrorView Asynch
Platforms: All
Severity: Low
Frequency: Rarely under specific circumstances
Tracking: 334575
Description: MirrorView/A: Secure CLI may time out during a mirror creation operation.
Symptom: When Secure CLI processes a large number of system actions, it may return the following error:
Request failed. The force polling failed, because of timeout - the system may be busy, please try again. (334575)
Workaround: Retry the mirror create operation.
Exists in versions: All 5.33 versions.

Category: MirrorView Asynch
Platforms: All
Severity: Low
Frequency: Rarely under specific circumstances
Tracking: 333132
Description: MirrorView/A: Mirrors may administratively fracture unexpectedly.
Symptom: Internal data structures can become inconsistent, which results in the mirrors being administratively fractured.
Workaround: Issue a synchronize command for the mirror. If this fails the first time, retry the command.
Exists in versions: All 5.33 versions.

Category: MirrorView Synch
Platforms: All
Severity: Medium
Frequency: Rarely under specific circumstances
Tracking: 237638
Description: MirrorView/S: A storage processor can reboot unexpectedly when a mirror is destroyed.
Symptom: If a mirror is destroyed while a trespass occurs, it is possible for internal data structures to become inconsistent and result in a storage processor reboot.
Workaround: Avoid destroying a mirror when a trespass is likely, such as just after a storage processor boots, when hosts with failover software may attempt to rebalance the load.
Exists in versions: All 5.33 versions.
Category: MirrorView Synch
Platforms: All
Severity: Low
Frequency: Rarely under specific circumstances
Tracking: 389452
Description: MirrorView/S: A mirror may fracture unexpectedly.
Symptom: If one storage processor shuts down at the same time the other storage processor sends a synchronize command, the mirror may fracture.
Workaround: Reissue the synchronize command.
Exists in versions: All 5.33 versions.

Category: SANCopy for Block
Platforms: All
Severity: Medium
Frequency: Rarely, under specific circumstances
Tracking: 311047
Description: Remote devices in a SAN Copy session may fail if iSCSI ports are used as a combination of initiator ports and target ports.
Symptom: If SAN Copy sessions start between two storage systems connected over iSCSI, and both storage systems act as SAN Copy systems for some SAN Copy sessions and remote systems for other SAN Copy sessions, remote devices in the SAN Copy session may fail with the error:
Unable to locate the device. Check that the device with this WWN exists.
Workaround: Schedule the SAN Copy sessions on both storage systems so that they do not run at the same time. Or, configure the iSCSI connections between the storage systems so that the iSCSI ports used for SAN Copy in the I/O module are all used either as initiator ports or as target ports, but not in combination.
Exists in versions: All 5.33 versions.


Category: SANCopy for Block
Platforms: All
Severity: Low
Frequency: Always
Tracking: 215061
Description: A SAN Copy modify command fails if the only destination of a SAN Copy session fails.
Symptom: If the only destination of a SAN Copy session fails and the session is modified to replace it with a new destination, the modify fails. This happens because the modify command checks for the new destination before checking that the failed destination is removed from the modified session. The modify fails because it thinks a failed destination exists.
Workaround: Modify the copy session to a full session and then back to an incremental session. Once the modify is successful, add your new destination to the session. Once the addition of the destination is successful, remove the previously failed destination.
Exists in versions: All 5.33 versions.

Category: SANCopy for Block
Platforms: All
Severity: Low
Frequency: Under specific circumstances
Tracking: 475390
Description: If you change the max concurrent setting, and there are more active sessions than the new setting allows, the new setting does not take effect immediately.
Symptom: The max concurrent SAN Copy setting does not take effect immediately when there are already more active sessions than the new setting allows.
Workaround: The new setting takes effect after existing active sessions stop.
Exists in versions: All 5.33 versions.

Category: SANCopy for Block
Platforms: All
Severity: Low
Frequency: Always
Tracking: 184420
Description: SAN Copy does not verify the selected LUN size when the size of the source LUN changes.
Symptom: When you use SAN Copy with Navisphere CLI, you are able to change the source LUN to be incorrectly larger than the destination LUNs. The SAN Copy session eventually fails because the destination LUNs are too small. SAN Copy does not verify that the selected LUN size is valid.
Workaround: When you modify a copy descriptor with Navisphere CLI, ensure any destination LUNs are always the same size as or larger than the source LUN.
Exists in versions: All 5.33 versions.

Category: Snapview clones
Platforms: All
Severity: Low
Frequency: Rarely under specific circumstances
Tracking: 294717
Description: A Unisphere command will fail during a single SP reboot.
Symptom: Navisphere CLI and Unisphere Manager return an error message when starting a protected restore. If a clones protected restore operation is initiated while one of the storage processors is booting, the SP that owns the clone source may not be able to communicate with the peer SP for a short period of time.
Workaround: Reissue the protected restore operation after both SPs have completed booting.
Exists in versions: All 5.33 versions.


Category: Snapview clones
Platforms: All
Severity: Low
Frequency: Always under a rare set of circumstances
Tracking: 247637, 248149, 251731
Description: Synchronizing and reverse-synchronizing will be started in automatic mode when running with a single SP.
Symptom: SnapView clones will restart synchronizing/reverse-synchronizing an unfractured clone after an SP failure even if the recovery policy is set to manual. This behavior occurs only if the system is running with a single SP, or when the clone source is also a MirrorView secondary.
Workaround: Fracture the clone after the SP recovers from the failure.
Exists in versions: All 5.33 versions.

Category: SnapCLI
Platforms: Windows
Severity: Low
Frequency: Always
Tracking: 286529
Description: SnapCLI fails when the operation is targeted at a dynamic disk.
Symptom: VNX Snapshots containing a dynamic disk, which is rolled back to the production host, cannot be imported on the production host using SnapCLI.
Workaround: Dynamic disks must be imported using the Disk Administrator or the Microsoft Diskpart utility.
Exists in versions: 3.32.0.0.5 and previous.

Category: Admhost
Platforms: Windows
Severity: Medium
Frequency: Always
Tracking: 203400
Description: Admhost does not support Windows Dynamic Drives.
Symptom: Admhost operations fail if you attempt them on a Dynamic Drive.
Workaround: Import Dynamic Drives using the Disk Administrator or the Microsoft Diskpart utility.
Exists in versions: 1.32.0.0.5 and previous.
Category: Admsnap
Platforms: All
Severity: Low
Frequency: Rarely under specific circumstances
Tracking: 286532
Description: A snapcli command may report a failure, but the command succeeds, if the target LUN is trespassed during the operation.
Symptom: A snapcli command may report a failure, but the command succeeds, if the target LUN was trespassed during the operation.
Workaround: The actual state of the operation can be verified using Unisphere software.
Exists in versions: 3.32.0.0.5 and previous.

Category: Admsnap
Platforms: All
Severity: Low
Frequency: Rarely under specific circumstances
Tracking: 286532
Description: An admsnap command may incorrectly report a failure, but the command succeeds, if the target LUN is trespassed during the operation.
Symptom: The admsnap command may report a failure but the operation succeeds if the target LUN of the operation is trespassed during the admsnap operation.
Workaround: The actual state of the operation can be verified by using Navisphere/Unisphere software.
Exists in versions: 2.32.0.0.5 and previous.


Category: Admsnap
Platforms: AIX
Severity: Low
Frequency: Likely under specific circumstances
Tracking: 215474
Description: The snapcli attach command executed in an AIX PowerPath environment can generate SC_DISK_ERR2 messages in the system log file.
Symptom: The error messages are generated due to the presence of a detached VNX Snapshots mount point in the storage group. Whenever VNX Snapshots mount points are in the storage group, running cfgmgr causes ASC 2051 problems in the errpt file. The messages are harmless and can be ignored.
Workaround: Activate the VNX Snapshots mount point using Navisphere Secure CLI. Then execute SnapCLI on the host to prevent these messages from being generated.
Exists in versions: 3.32.0.0.5 and previous.

Category: Admsnap
Platforms: AIX
Severity: Low
Frequency: Likely under specific circumstances
Tracking: 215474
Description: Admsnap activate commands executed in an AIX PowerPath environment may generate SC_DISK_ERR2 messages in the system log file.
Symptom: The messages are generated because a deactivated snapshot is present in the storage group. Whenever snapshots are in the storage group, running cfgmgr causes ASC 2051 problems in the errpt file. The messages are harmless and can be ignored.
Workaround: To prevent these messages from being generated, activate the snapshot using Secure CLI, which will allow execution of admsnap on the host.
Exists in versions: 2.32.0.0.5 and previous.

Category: Admsnap
Platforms: Linux
Severity: Medium
Frequency: Always under specific circumstances
Tracking: 286533
Description: VNX Snapshots are not attached on a Linux system if more than eight device paths are required to complete the operation.
Symptom: SnapCLI cannot access more than eight device paths on the backup host.
Workaround: The Linux kernel creates only eight SG devices (SCSI generic devices) by default. Additional SG devices must be created and then linked to the SD devices. (The internal disk uses one of the SG devices.) Use the Linux utility /dev/MAKEDEV to create additional SCSI generic devices.
Exists in versions: 3.32.0.0.6 and previous.

Category: Admsnap
Platforms: Linux
Severity: Medium
Frequency: Always under specific circumstances
Tracking: 286533
Description: Admsnap sessions are not activated on a Linux system if more than eight device paths are required to complete the operation.
Symptom: Admsnap cannot access more than eight device paths on the backup host.
Workaround: The Linux kernel creates only eight SG devices (SCSI generic devices) by default. Additional SG devices must be created and then linked to the SD devices. (The internal disk uses one of the SG devices.) Use the Linux utility /dev/MAKEDEV to create additional SCSI generic devices.
Exists in versions: 2.32.0.0.6 and previous.
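A sketch of creating additional SCSI generic nodes with MAKEDEV, run as root (the sg argument is an assumption; check the MAKEDEV man page on the affected distribution for the exact device-class name):

cd /dev
./MAKEDEV sg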


Category: Admsnap
Platforms: Solaris
Severity: Low
Frequency: Always
Tracking: 207655
Description: The snapcli attach command causes LUNs to trespass on the host in a DMP environment.
Symptom: The snapcli attach command (when issued without the -o <device> parameters) scans the SCSI bus using the primary and secondary paths for the devices, causing the LUNs to trespass.
Workaround: Use the -o <device> option with the snapcli attach command.
Exists in versions: 3.32.0.0.5 and previous.

Category: Admsnap
Platforms: Solaris
Severity: Low
Frequency: Always
Tracking: 207655
Description: The admsnap activate command causes LUNs to trespass on the host in a DMP environment.
Symptom: The admsnap activate command (when issued without the -o <device> parameters) scans the SCSI bus using the primary and secondary paths for the devices, causing the LUNs to trespass.
Workaround: Use the -o <device> option with the admsnap activate command.
Exists in versions: 2.32.0.0.5 and previous.

Category: Admsnap
Platforms: SuSE
Severity: Low
Frequency: Always under specific circumstances
Tracking: 230902
Description: If rpm is used on SuSE systems during the installation or removal process, warning messages are generated or there are problems installing or uninstalling SnapCLI.
Symptom: The package fails to complete the operation (installation or removal), or warning messages are generated.
Workaround: Use the yast2 command on SuSE Linux systems to install or remove the SnapCLI package.
Exists in versions: 3.32.0.0.6 and previous.

Category: Admsnap
Platforms: SuSE
Severity: Low
Frequency: Always under specific circumstances
Tracking: 230902
Description: Warning messages are generated during the installation or removal process if admsnap is installed or uninstalled using rpm on SuSE systems.
Symptom: The package fails to complete the operation (installation or removal), or warning messages are generated.
Workaround: Use the yast2 command on SuSE Linux systems to install or remove the admsnap package.
Exists in versions: 2.32.0.0.6 and previous.

Category: Admsnap
Platforms: Windows
Severity: Low
Frequency: Always
Tracking: 286529
Description: Admsnap fails when the operation is targeted at a dynamic disk.
Symptom: An admsnap session containing a dynamic disk that is rolled back to the production host cannot be imported on the production host using admsnap.
Workaround: Dynamic disks must be imported using the Disk Administrator or the Microsoft Diskpart utility.
Exists in versions: 2.32.0.0.5 and previous.


Category: Admsnap
Platforms: Windows
Severity: Low
Frequency: Always under specific circumstances
Tracking: 226043
Description: SnapCLI fails to create or destroy a VNX Snapshot in a DMP environment.
Symptom: SnapCLI fails to create or destroy a VNX Snapshot in a DMP environment.
Workaround: The failure is caused by trespassing of the LUN that is the target of the SnapCLI operation. The DMP/Volume Manager keeps track of the current storage processor owner independently from the operating system. To resolve this issue, trespass the volume to the peer storage processor and use the snapcli command.
Exists in versions: 3.32.0.0.5 and previous.

Category: Admsnap
Platforms: Windows
Severity: Low
Frequency: Always under specific circumstances
Tracking: 226043
Description: The admsnap command fails to start or stop a session in a DMP environment.
Symptom: The failure is due to trespassing of the LUN that is the target of the admsnap operation. The DMP/Volume Manager keeps track of the current storage processor owner independently from the operating system.
Workaround: To resolve this issue, trespass the volume to the peer storage processor and reissue the admsnap command.
Exists in versions: 2.32.0.0.5 and previous.

Category: Admsnap
Platforms: Windows
Severity: Low
Frequency: Always under specific circumstances
Tracking: 225937
Description: The snapcli attach command might return a warning if the operating system maintains a drive letter mapping for the volume being brought online.
Symptom: The snapcli attach command generates a warning that one or more devices are not assigned drive letters, when the other volumes are assigned drive letters.
Workaround: This usually occurs when the registry contains stale device mapping information. Update the registry by using the scrubber utility from Microsoft to remove stale entries. The stale device mapping information generates a condition that prevents SnapCLI from determining whether all attached volumes were assigned drive letters, so SnapCLI generates a warning.
Exists in versions: 3.32.0.0.5 and previous.


Category: Admsnap
Platforms: Windows
Severity: Low
Frequency: Always under specific circumstances
Tracking: 225937
Description: The admsnap activate command may inaccurately return a warning if the operating system maintains a drive letter mapping for the volume being brought online.
Symptom: The admsnap activate command generates a warning that one or more devices were not assigned a drive letter, when all volumes were assigned drive letters.
Workaround: This usually occurs when the registry contains stale device mapping information. Update the registry by using the scrubber utility from Microsoft to remove stale entries. The stale device mapping information generates a condition that prevents admsnap from determining whether all activated volumes were assigned drive letters, so admsnap generates a warning.
Exists in versions: 2.32.0.0.5 and previous.

Category: Admsnap
Platforms: Windows
Severity: Low
Frequency: Always under specific circumstances
Tracking: 225832
Description: The snapcli list command might list devices multiple times when executed in a DMP environment.
Symptom: The snapcli list command might list devices multiple times when executed in a DMP environment.
Workaround: Use the -d option on the command line to suppress duplicate entries for a particular device or volume.
Exists in versions: 3.32.0.0.5 and previous.

Category: Admsnap
Platforms: Windows
Severity: Low
Frequency: Always under specific circumstances
Tracking: 225832
Description: The admsnap list command may list devices multiple times when executed in a DMP environment.
Symptom: The admsnap list command may list devices multiple times when executed in a DMP environment.
Workaround: Use the -d option on the command line to suppress duplicate entries for a particular device or volume.
Exists in versions: 2.32.0.0.5 and previous.

Category: Admsnap
Platforms: Windows
Severity: Low
Frequency: Always
Tracking: 206345
Description: The snapcli create command fails if the volume contains a Veritas volume and the LUNs for that volume are trespassed to the secondary path.
Symptom: The snapcli create command fails with the following error message if the volume contains a Veritas volume and the LUNs for that volume are trespassed to the secondary path:
Error: 0x3E050011 (One or more devices need manual attention by the operator)
Workaround: Trespass the volume to the primary path or use Navisphere Secure CLI to create the VNX Snapshot.
Exists in versions: 3.32.0.0.5 and previous.

Category: Admsnap
Platforms: Windows
Severity: Low
Frequency: Always
Tracking: 206345
Description: The admsnap start command fails if the volume contains a Veritas volume and the LUNs for that volume are trespassed to the secondary path.
Symptom: The admsnap start command fails with Error: 0x3E050011 (One or more devices need manual attention by the operator) if the volume contains a Veritas volume and the LUNs for that volume are trespassed to the secondary path.
Workaround: Trespass the volume to the primary path or use Navisphere Secure CLI to start the SnapView session.
Exists in versions: 2.32.0.0.5 and previous.


Category: Virtualization
Platforms: All
Severity: Low
Tracking: 565093
Description: The VASA Provider goes offline and the certificate is missing. Information does not update in Virtual Center.
Symptom: Adding a VNX Block storage system twice to the list of Vendor Providers (once for each Storage Processor) is not a valid configuration. This results in duplicate data being returned to vCenter Server. Removing one of these duplicate connections also de-authorizes the vCenter Server on the other (remaining) connection, and results in that connection becoming offline.
Workaround: Remove both VASA connections to the storage system, then re-add a single VASA connection to either of the SPs.

Category: Virtualization
Platforms: All
Severity: Low
Frequency: Always under specific circumstances
Tracking: 496273
Description: VASA: LDAP users are not properly mapped to the correct VASA privileges on VNX for File.
Symptom: Unable to add a Vendor Provider in vSphere using the credentials of an LDAP user on VNX for File.
Workaround: To allow an LDAP user to log in through VASA, it is not sufficient for the LDAP group to be mapped to an appropriate group.
1. Create an LDAP user account, if it does not exist, by having the user log in through Unisphere (through the Control Station IP). This will auto-create the account.
2. When the appropriate mapped account exists on the Control Station, the administrator can add the user to any of the three groups: 'VM Administrator', 'Security Administrator', or 'Administrator'.
Note: The account should be mapped to the least privileged account that allows all the necessary operations to take place.
Exists in versions: All versions.

Category: Virtualization
Platforms: All
Severity: Low
Tracking: 465905
Description: VMware: A sync is not performed after a session is re-established.
Symptom: The VASA information displayed in vSphere may be out of sync with the current state of the storage system due to errors in the Storage Provider or vCenter Server, or communication errors between the two components.
Workaround: If you suspect that this information may be out of date, perform a manual Sync via the Storage Providers page and refresh/update the appropriate pages in the vSphere Client. If this does not fix the issue, remove and re-add the Storage Provider(s) in question.


Category: Virtualization
Platforms: All
Severity: Low
Frequency: Always under specific circumstances
Description: VAAI: Copy offload between SPs causes implicit trespasses.
Symptom: The copy offload operation generates requests on the array that trigger load balancing. If the source and destination devices are not owned by the same SP, the array moves ownership of the destination LUN to match the source.
Workaround: No action necessary. After the copy offload operation completes, normal host I/O to the destination LUN triggers load balancing back to the correct user-assigned SP.
Exists in versions: All 1.3 versions.

Documentation
EMC provides the ability to create step-by-step planning, installation, and maintenance
instructions tailored to your environment. To create VNX customized documentation, go to:
https://mydocuments.emc.com/VNX.
For the most up-to-date documentation and help go to EMC Online Support at
https://Support.EMC.com.

Configuring VNX Naming Services

This manual refers to iPlanet support rather than support for Oracle Directory Server
Enterprise Edition (ODSEE) in VNX. ODSEE is the new version formerly known as iPlanet or
Sun Java System Directory Server and Sun ONE Directory Server. Version 11.x is supported
by VNX. This manual will be updated in a subsequent release.

Configuring Virtual Data Movers on VNX


Please note the following for the manual, Configuring Virtual Data Movers on VNX (P/N 300-014-560 Rev 01):

Attaching interfaces to a VDM


The Attach one or more interfaces to a VDM section needs to be updated to include details
for the nas_server attach command:
| -vdm <vdm_name> -attach <interface>[,<interface2>]
Detaching interfaces from a VDM
A new topic called Detach a network interface from a VDM needs to be added, which will
include details for the nas_server detach command:
| -vdm <vdm_name> -detach <interface>[,<interface2>]
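For illustration, hypothetical attach and detach invocations that follow the syntax above (the VDM and interface names are made up):

nas_server -vdm vdm_1 -attach cge0_if1,cge0_if2
nas_server -vdm vdm_1 -detach cge0_if2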


Also, the Assign an interface to a VDM section needs to be updated to include a cross-reference to the new Detach a network interface from a VDM topic.
Querying the interfaces attached to a VDM
The output in the Query the NFS export list on a VDM section needs to be updated to
include network attached interface configuration information.
Editing VDM domain configurations
A new topic called Clear the domain configuration for a VDM needs to be added, which will
include details for the server_nsdomains unset command:
| -unset resolver <resolver> [-resolver <resolver>...]

Parameters Guide for VNX for File

Please note the following for the manual, Parameters Guide for VNX for File (P/N 300-151-171 Rev 01):

The following parameter has been added:


Facility : ftpd
Parameter : showSysInfo
Value : 0 or 1
Default Value : 1
Comments/description : Displays the system information in the FTP server banner.
0 = Disable the display of the system version info in the FTP server banner.
1 = Enable the display of the system version info in the FTP server banner.
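As a sketch, a Data Mover parameter such as this one is typically inspected and changed with the standard server_param command (server_2 is a placeholder Data Mover name; confirm the facility and parameter names against the Parameters Guide before use):
$ server_param server_2 -facility ftpd -info showSysInfo
$ server_param server_2 -facility ftpd -modify showSysInfo -value 0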

Security Configuration Guide for VNX


Please note the following for the manual, Security Configuration Guide for VNX (P/N 300-015-128 Rev 05):

Port 161 lists UDP as the protocol for SNMP. TCP (Transmission Control Protocol) can
also be used on port 161 for SNMP.

Information about port 135 was omitted from Table 5, VNX for file Data Mover
network ports, as follows:

Port: 135
Protocol: TCP
Default State: Open
Service: DCE Remote Procedure Call (DCERPC)
Comments: Multiple purposes for Microsoft client.

Using FTP, TFTP, and SFTP on VNX


Please note the following for the manual, Using FTP, TFTP, and SFTP on VNX (P/N 300-015-134 Rev 01):

FTP does not support client certificate authentication. On page 21, the corrected information
for the last bullet in the "Authentication methods for FTP" section is as follows:


If you do not specify the username option, but enable SSL and configure the
sslpersona, anonymous authentication without SSL is used.

Using VNX Replicator


Please note the following for the manual, Using VNX Replicator (P/N 300-014-567 Rev 01):
The syntax for the SnapSure configuration has changed to: ckpt:10:200:20
When changing the SnapSure configuration, the updated process is as follows:
1. Locate this SnapSure configuration line in the file:
ckpt:10:200:20, where:
10 = Control Station polling interval rate, in seconds
200 = maximum rate at which a file system is written, in MB/second
20 = percentage of the entire system's volume allotted to the creation and extension
of all the SavVols used by the VNX system.
Note: If this line does not exist, the SavVol-space-allotment parameter is currently set
to its default value of 20, which means 20 percent of the system space can be used for
SavVols. To change this setting, you must first add the line: ckpt:10:200:20.
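For example, to reduce the SavVol allotment to 10 percent while keeping the default polling interval and write rate, the edited line would read as follows (a sketch based on the ckpt:<interval>:<rate>:<percent> format described above):
ckpt:10:200:10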

VNX 5400 Parts Location Guide


Please note the following for the manual, VNX5400 Parts Location Guide (P/N 300-015-013 Rev 04):

Table 5, SP CPU module part number, on page 15 is incorrect, and should be changed to the
following (part number label locations are shown in Figure 8 on page 14):

Part number: 110-201-003B-01
Description: SP 4-core 1.8-GHz CPU module with 16 GB of memory
FRU or CRU: FRU

Part number: 110-201-006B-01
Description: SP 4-core 1.8-GHz CPU module without memory
FRU or CRU: CRU

Where to get help


EMC support, product, and licensing information can be obtained as follows:
Product information
For documentation, release notes, software updates, or for information about EMC products,
licensing, and service, go to EMC Online Support (registration required) at:
http://Support.EMC.com.
Troubleshooting
Go to EMC Online Support. After logging in, locate the appropriate Support by Product page.
Technical support
For technical support and service requests, go to EMC Online Support. After logging in, locate
the appropriate Support by Product page and choose either Join Live Chat or Create Service
Request. To open a service request through EMC Online Support, you must have a valid
support agreement. Contact your EMC Sales Representative for details about obtaining a
valid support agreement or with any questions about your account.

Copyright © 2016 EMC Corporation. All Rights Reserved. Published in the USA.
Published September 2016
EMC believes the information in this publication is accurate as of its publication date. The information is subject to
change without notice.
THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS OR
WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS
IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Use, copying, and distribution of any EMC software described in this publication requires an applicable software
license.
EMC², EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and
other countries. All other trademarks used herein are the property of their respective owners.
For the most up-to-date regulatory document for your product line, go to the technical documentation and advisories
section on the EMC online support website.
