EMC VNXe 3200 Configuration Steps Via UEMCLI (Part 1)
October 22, 2015 VNXe 3200, EMC VNXE, mcx, Uemcli
There are some minor CLI changes with VNXe MCx which I will document as part of this series; for VNXe Gen1 please refer to this earlier post:
EMC VNXe Gen1 Configuration Using Unisphere CLI
The initial configuration steps outlined in Part 1:
Accept End User License Agreement
Change the Admin Password
Apply License File
Commit the IO Modules
Perform a Health Check
Code Upgrade
Create A New User
Change the Service Password
Enable SSH
Accept End User License Agreement
uemcli -d 192.168.1.50 -u Local/admin -p Password123# /sys/eula set -agree yes
Change the Admin Password
uemcli -d 192.168.1.50 -u Local/admin -p Password123# /user/account show
uemcli -d 192.168.1.50 -u Local/admin -p Password123# /user/account -id user_admin set -passwd NewPassword -oldpasswd Password123#
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword /user/account show
Reference the built-in help for any assistance:
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword / -help
Apply License File
First, gather the serial number of the VNXe:
uemcli -d 192.168.1.50 -u Local/admin -p Password123# /sys/general show -detail
Then browse to the EMC registration site, entering the VNXe S/N to retrieve the associated .lic file:
https://support.emc.com/servicecenter/registerProduct/
Upload the acquired license file:
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword -upload -f C:\Users\david\Downloads\FL100xxx00005_29-July-2015_exp.lic license
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword /sys/lic show
Commit the IO Modules
The following commits all uncommitted IO modules:
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword /env/iomodule commit
Display a list of system IO modules:
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword /env/iomodule show
Perform a Health Check
It is good practice to perform a Health Check in advance of a code upgrade:
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword /sys/general healthcheck
Code Upgrade
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword /sys/soft/ver show
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword -upload -f "C:\Users\david\Downloads\VNXe 2.4.3.21980\VNXe-MR4SP3.1-upgrade-2.4.3.21980RETAIL.tgz.bin.gpg" upgrade
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword /sys/soft/ver show
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword /sys/soft/upgrade create -candId CAND_1
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword /sys/soft/upgrade show
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword /sys/general healthcheck
Note: Please see a more detailed overview of the upgrade process in a previous post:
http://davidring.ie/2015/03/02/emc-vnxe-code-upgrade/
Create A New User
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword /user/account create -name david -type local -passwd DavidPassword -role administrator
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword /user/account show
The role for the new account can be one of:
administrator - Administrator
storageadmin - Storage Administrator
operator - Operator (view only)
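For example, a sketch of creating an additional view-only account using the same create syntax shown above (the account name and password here are just placeholders):
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword /user/account create -name monitor1 -type local -passwd MonitorPassword -role operator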
Change the Service Password
The Service password is used for performing service actions on the VNXe.
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /service/user set -passwd newPassword -oldpasswd Password123#
Enable SSH
uemcli -d 192.168.1.50 -u service -p NewPassword /service/ssh set -enabled yes


EMC ViPR Cisco IVR Cross-Connect Zoning (VPLEX)
October 9, 2015 Cisco, Vblock CISCO IVR, CROSS-CONNECT, EMC VPLEX, METRO, VIPR, Zoning
Known ViPR & VPLEX Storage Provisioning Issue:
The following error may be encountered while provisioning a shared VPLEX distributed volume to an ESXi Cluster using ViPR v2.x (2.3):

This issue occurs during a ViPR storage provisioning task with VPLEX because ViPR incorrectly attempts to apply two simultaneous updates to the Cisco MDS IVR database. Correctly, the MDS database is locked by the first task, so the second task times out, resulting in a failed ViPR provisioning process. The tasks should be executed sequentially, allowing each task to complete and commit its changes to the IVR database, thus removing the lock it held once the commit is successful. Only once the database lock is removed may the subsequent task execute against the database.
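If a provisioning order does fail in this way, a quick sanity check on the MDS (a minimal sketch; the lock owner shown will depend on the stalled task) is to look for an outstanding CFS lock and any uncommitted IVR changes:
show cfs lock
show ivr pending-diff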
Workaround:
Executing an exclusive storage provisioning order from the ViPR catalog for a single ESXi host works perfectly, including automatically creating the required cross-connect zoning; this is because the single workflow performs the MDS IVR database updates sequentially. During the single-host exclusive storage provisioning task, ViPR creates the necessary initiators, storage views and IVR zones (both local and cross-connect) for that host. However, performing a shared storage provisioning task to an ESXi cluster fails in a single catalog order, and it will also fail if two exclusive storage provisioning orders are executed at the same time. In summary, the workaround is to execute an exclusive storage provisioning order for each host in the cluster individually, one at a time. Once this is complete, each host has a volume presented, and VPLEX has the correct initiators and storage views created by ViPR, you may then create a new distributed LUN for the whole ESXi cluster. ViPR simply adds the new distributed volumes to the existing storage views in VPLEX (no zoning takes place during the distributed-device creation, thus no locking). Once you have a working distributed volume for all of the hosts, you may remove the exclusive volumes and everything should function accordingly. Ensure to verify that all the required zoning (including IVR zones) is configured correctly on all switches and that the ESXi hosts can see all associated paths.
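As a verification sketch once each host has been provisioned, the standard checks listed later in this post can be run on each fabric, for example:
show zoneset active vsan 10
show ivr zoneset active
show fcns database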
NOTE: ViPR engineering plans to enhance the zoning workflow with an additional step to obtain/monitor any IVR database locks before proceeding with the IVR zoning operations. This will be targeted for the next ViPR release. I will provide updates to this post in due course.
Solution Example:
The below diagram depicts the connectivity requirements in order to implement a ViPR storage
provisioning solution with a VPLEX Metro configuration using Cross-Connect Zoning:

From the above diagram you can see that an ISL is in place for site-to-site connectivity; in this example configuration the ISL carries VPLEX-FC-WAN-Replication traffic over VSAN30 (Fabric-A) and VSAN31 (Fabric-B) (VPLEX FC WAN COM). VSAN30 is stretched between the Fabric-A switches on both sites and VSAN31 is stretched between both switches on Fabric-B for Sites 1 & 2. VSANs 30 & 31 can be used as transit VSANs for this example IVR configuration.

In order for ViPR v2.x to successfully execute the task of automatically creating the required
cross-connect zoning the following configuration needs to be in place (as per example diagram
above):
Site1:
Fabric-A, VSAN10: associated interfaces|PC (even ESX hba of site1, VPLEX FE&BE and
PC30) added as members to vsan10.
Fabric-B, VSAN11: associated interfaces|PC (odd ESX hba of site1, VPLEX FE&BE and PC30)
added as members to vsan11.
Site 2:
Fabric-A, VSAN20: associated interfaces|PC (even ESX hba of site2, VPLEX FE&BE and
PC31) added as members to vsan20.
Fabric-B, VSAN21: associated interfaces|PC (odd ESX hba of site2, VPLEX FE&BE and
PC31) added as members to vsan21.
Site1 to Site2:
Fabric-A: VSAN30 used as a transit vsan over Port-channel 30.
Fabric-B: VSAN31 used as a transit vsan over Port-channel 31.
A prerequisite is required in order for ViPR to successfully create the cross-connect zoning automatically as part of the provisioning workflow: manually create an IVR zone on Fabric A connecting VSAN 10 and VSAN 20, and an IVR zone on Fabric B connecting VSAN 11 and VSAN 21 (example IVR zones provided below).
In the case of ViPR v2.2 an additional prerequisite task is required, which is to stretch the VSANs between sites. As per this example, VSAN20 gets added to switch-A on Site 1 and, vice versa, VSAN10 is added to switch-A on Site 2; repeat the same for the Fabric-B switches. No local interfaces are assigned to these dummy VSANs, so essentially a VSAN20 is created without any member on Switch-A Site 1, and so on. This is done for all respective VSANs, as can be seen in the example configuration provided below. As part of the VSAN stretch, ensure to add the allowed VSANs to the respective port-channels:
Port-Channel 30: Allowed VSANs 10, 20, 30
Port-Channel 31: Allowed VSANs 11, 21, 31
Once the VSANs are stretched across the sites as per the ViPR v2.2 prerequisite, ViPR will then automatically create the required IVR zones as part of the provisioning workflow.
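The VSAN stretch itself only takes a couple of commands per switch. As a sketch for the Site 1 Fabric-A switch, mirroring the configuration shown later in this post, the remote VSAN is defined with no local members and then added to the allowed list on the cross-site port-channel:
vsan database
vsan 20 name VSAN20
interface port-channel 30
switchport trunk allowed vsan add 20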
Note: The vArray should be set for Automatic Zoning for all this to occur.
Example MDS Configuration
These are example configuration steps to be completed on both sites' MDS switches in order to enable Cisco Inter-VSAN Routing (IVR is the standard for cross-connect zoning with VPLEX Metro) and to enable automatic cross-connect zoning with ViPR:
FABRIC A Switches

feature ivr
ivr nat
ivr distribute
ivr commit
system default zone distribute full
system default zone mode enhanced
ivr vsan-topology auto
zone mode enhanced vsan 10
zone mode enhanced vsan 20
zone mode enhanced vsan 30
vsan database
vsan 10 name VSAN10
vsan 20 name VSAN20
vsan 30 name vplex1_wan_repl_vsan30
interface port-channel 30
channel mode active
switchport mode E
switchport trunk allowed vsan 10
switchport trunk allowed vsan add 20
switchport trunk allowed vsan add 30
switchport description CROSS-SITE-LINK
switchport speed 8000
switchport rate-mode dedicated
Configuring FABRIC A switches' fcdomain priorities:
Site1:
fcdomain priority 2 vsan 10
fcdomain domain 10 static vsan 10
fcdomain priority 100 vsan 20
fcdomain domain 22 static vsan 20
fcdomain priority 2 vsan 30
fcdomain domain 30 static vsan 30
Site2:
fcdomain priority 100 vsan 10
fcdomain domain 12 static vsan 10
fcdomain priority 2 vsan 20
fcdomain domain 20 static vsan 20
fcdomain priority 100 vsan 30
fcdomain domain 32 static vsan 30
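Once the priorities are set, the resulting principal and local domain IDs can be checked per VSAN (a small sketch using the same verification command covered later in this post):
show fcdomain domain-list vsan 10
show fcdomain domain-list vsan 20
show fcdomain domain-list vsan 30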

Example: configuring Inter-VSAN routing (IVR) Zones connecting an ESXi host HBA0
over VSANs 10 and 20 from site1->site2 and vice versa site2->site1 utilising the transit
VSAN30:
device-alias database
device-alias name VPLEXSITE1-E1_A0_FC02 pwwn 50:00:14:42:A0:xx:xx:02
device-alias name VPLEXSITE1-E1_B0_FC02 pwwn 50:00:14:42:B0:xx:xx:02
device-alias name VPLEXSITE2-E1_A0_FC02 pwwn 50:00:14:42:A0:xx:xx:02
device-alias name VPLEXSITE2-E1_B0_FC02 pwwn 50:00:14:42:B0:xx:xx:02
device-alias name ESXi1SITE1-VHBA0 pwwn xx:xx:xx:xx:xx:xx:xx:xx
device-alias name ESXi1SITE2-VHBA0 pwwn xx:xx:xx:xx:xx:xx:xx:xx
device-alias commit
device-alias distribute
ivr zone name ESXi1SITE1-VHBA0_VPLEXSITE2-E1_A0_FC02
member device-alias ESXi1SITE1-VHBA0 vsan 10
member device-alias VPLEXSITE2-E1_A0_FC02 vsan 20
ivr zone name ESXi1SITE1-VHBA0_VPLEXSITE2-E1_B0_FC02
member device-alias ESXi1SITE1-VHBA0 vsan 10
member device-alias VPLEXSITE2-E1_B0_FC02 vsan 20
ivr zone name ESXi1SITE2-VHBA0_VPLEXSITE1-E1_A0_FC02
member device-alias ESXi1SITE2-VHBA0 vsan 20
member device-alias VPLEXSITE1-E1_A0_FC02 vsan 10
ivr zone name ESXi1SITE2-VHBA0_VPLEXSITE1-E1_B0_FC02
member device-alias ESXi1SITE2-VHBA0 vsan 20
member device-alias VPLEXSITE1-E1_B0_FC02 vsan 10
ivr zoneset name IVR_vplex_hosts_XC_A
member ESXi1SITE1-VHBA0_VPLEXSITE2-E1_A0_FC02
member ESXi1SITE1-VHBA0_VPLEXSITE2-E1_B0_FC02
member ESXi1SITE2-VHBA0_VPLEXSITE1-E1_A0_FC02
member ESXi1SITE2-VHBA0_VPLEXSITE1-E1_B0_FC02
ivr zoneset activate name IVR_vplex_hosts_XC_A
ivr commit
FABRIC B Switches
feature ivr
ivr nat
ivr distribute
ivr commit
system default zone distribute full
system default zone mode enhanced
ivr vsan-topology auto
zone mode enhanced vsan 11
zone mode enhanced vsan 21
zone mode enhanced vsan 31
vsan database
vsan 11 name VSAN11
vsan 21 name VSAN21
vsan 31 name vplex1_wan_repl_vsan31
interface port-channel 31
channel mode active
switchport mode E
switchport trunk allowed vsan 11
switchport trunk allowed vsan add 21
switchport trunk allowed vsan add 31
switchport description CROSS-SITE-LINK
switchport speed 8000
switchport rate-mode dedicated
Configuring FABRIC B switches' fcdomain priorities:
Site1:
fcdomain priority 2 vsan 11
fcdomain domain 11 static vsan 11
fcdomain priority 100 vsan 21
fcdomain domain 23 static vsan 21
fcdomain priority 2 vsan 31
fcdomain domain 31 static vsan 31
Site2:
fcdomain priority 100 vsan 11
fcdomain domain 13 static vsan 11
fcdomain priority 2 vsan 21
fcdomain domain 21 static vsan 21
fcdomain priority 100 vsan 31
fcdomain domain 33 static vsan 31
Example configuring Inter-VSAN routing (IVR) zones connecting an ESXi host HBA1 over
VSANs 11 and 21 from site1->site2 and vice versa site2->site1 utilising the transit VSAN31:
device-alias database
device-alias name VPLEXSITE1-E1_A0_FC03 pwwn 50:00:14:42:A0:xx:xx:03
device-alias name VPLEXSITE1-E1_B0_FC03 pwwn 50:00:14:42:B0:xx:xx:03
device-alias name VPLEXSITE2-E1_A0_FC03 pwwn 50:00:14:42:A0:xx:xx:03
device-alias name VPLEXSITE2-E1_B0_FC03 pwwn 50:00:14:42:B0:xx:xx:03
device-alias name ESXi1SITE1-VHBA1 pwwn xx:xx:xx:xx:xx:xx:xx:xx
device-alias name ESXi1SITE2-VHBA1 pwwn xx:xx:xx:xx:xx:xx:xx:xx
device-alias commit
device-alias distribute
ivr zone name ESXi1SITE1-VHBA1_VPLEXSITE2-E1_A0_FC03
member device-alias ESXi1SITE1-VHBA1 vsan 11
member device-alias VPLEXSITE2-E1_A0_FC03 vsan 21
ivr zone name ESXi1SITE1-VHBA1_VPLEXSITE2-E1_B0_FC03
member device-alias ESXi1SITE1-VHBA1 vsan 11
member device-alias VPLEXSITE2-E1_B0_FC03 vsan 21
ivr zone name ESXi1SITE2-VHBA1_VPLEXSITE1-E1_A0_FC03
member device-alias ESXi1SITE2-VHBA1 vsan 21
member device-alias VPLEXSITE1-E1_A0_FC03 vsan 11
ivr zone name ESXi1SITE2-VHBA1_VPLEXSITE1-E1_B0_FC03
member device-alias ESXi1SITE2-VHBA1 vsan 21
member device-alias VPLEXSITE1-E1_B0_FC03 vsan 11
ivr zoneset name IVR_vplex_hosts_XC_B
member ESXi1SITE1-VHBA1_VPLEXSITE2-E1_A0_FC03
member ESXi1SITE1-VHBA1_VPLEXSITE2-E1_B0_FC03
member ESXi1SITE2-VHBA1_VPLEXSITE1-E1_A0_FC03
member ESXi1SITE2-VHBA1_VPLEXSITE1-E1_B0_FC03
ivr zoneset activate name IVR_vplex_hosts_XC_B
ivr commit
Verification commands to check status of configuration:
show fcdomain domain-list
Verifies unique domain ID assignment. If a domain overlap exists, edit and verify the allowed-domains list or manually configure static, non-overlapping domains for each participating switch and VSAN.
show interface brief
Verifies if the ports are operational, VSAN membership, and other configuration settings covered
previously.
show fcns database
Verifies the name server registration for all devices participating in the IVR.
show zoneset active
Displays zones in the active zone set. This should include configured IVR zones.
show ivr fcdomain
Displays the IVR persistent fcdomain database.
show ivr internal
Shows the IVR internal troubleshooting information.
show ivr pending-diff
Shows the difference between the IVR pending and configured databases.
show ivr service-group
Shows the IVR service group configuration.
show ivr tech-support
Shows information that is used by your customer support representative to troubleshoot IVR issues.
show ivr virtual-domains
Shows IVR virtual domains for all local VSANs.
show ivr virtual-fcdomain-add-status
Shows IVR virtual fcdomain status.
show ivr vsan-topology
Verifies the configured IVR topology.
show ivr zoneset
Verifies the IVR zone set configuration.
show ivr zone
Verifies the IVR zone configuration.
clear ivr zone database
Clears all configured IVR zone information.
Note: Clearing a zone set erases only the configured zone database, not the active zone database.
Useful CISCO Docs:
Cisco IVR Troubleshooting
IVR Zones and Zonesets
Inter-VSAN Routing (IVR) definition: An IVR zone is a set of end devices that are allowed to communicate across VSANs within their interconnected SAN fabric. An IVR path is a set of switches and Inter-Switch Links (ISLs) through which a frame from an end device in one VSAN can reach another end device in some other VSAN. Multiple paths can exist between two such end devices. A transit VSAN is a VSAN that exists along an IVR path from the source edge VSAN of that path to the destination edge VSAN of that path; in the example solution diagram above you will see that VSAN 30 and VSAN 31 are transit VSANs.
Distributing the IVR Configuration Using CFS: The IVR feature uses the Cisco Fabric Services (CFS) infrastructure to enable efficient configuration management and to provide a single point of configuration for the entire fabric in the VSAN.
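As a quick sketch, the CFS distribution state that IVR relies on can be checked with:
show cfs peers
show cfs merge status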
Thanks to @HeagaSteve, Joni, Hans, @dclauvel & Sarav for providing valuable input.

EMC UIM/P Editing The Database
October 4, 2015 Vblock Edit Database, EMC UIM, EMC UIM/P, ESXi
Thank you @CliffCahill for providing this trick!
Ensure to back up the UIM/P (Unified Infrastructure Manager for Provisioning) DB before you begin. The following provides detailed steps on how to modify IP settings for ESXi host service offerings stored in the UIM DB.
Log in to the UIM CLI via PuTTY, then connect to the UIM voyencedb database:
su pgdba
psql voyencedb uim
To pull back all ESXi O/S settings:
select * from ossettings;
To update the gateway for all service offerings / ESXi hosts:
update ossettings set gateway = '10.10.1.254';
To update the IP address of an individual ESXi host (the id is listed when you run the select command above):
update ossettings set ip_address = '10.10.1.10' where id = 2338;
update ossettings set ip_address = '10.10.1.11' where id = 2302;
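A minimal sketch for verifying the changes before exiting psql, using the same table and columns referenced above:
select id, ip_address, gateway from ossettings where id in (2338, 2302);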

CISCO MDS Useful Show Commands
September 30, 2015 Cisco CISCO MDS, COMMANDS, show, Useful
CONFIG:
show startup-config
show running-config
show running-config diff
show run | include host|gateway|ntp|fcdomain|http|telnet|zoneset|vsan
show start | include host|gateway|ntp|fcdomain|http|telnet|zoneset|vsan
show install all status
show switchname
show wwn switch
show switch summary
show version
show cdp neighbors
show boot
show system internal flash
show snmp host
show ntp peers
show ssh server
show telnet server
Switch Serial Number:
The switch serial number can be retrieved by using either of the two following commands:
show license host-id
show sprom backplane 1 | grep Serial
LICENSING:
show feature
show license brief
show license usage
show port-license
show port-license | grep -i acquire
HARDWARE:
show hardware
show hardware internal errors all
show environment
show environment temperature
show environment temperature module 1
show environment power
show inventory
show inventory module
show inventory chassis
show inventory xbar
show clock
INTERFACES:
show interface mgmt0
show ip interface
show ip interface mgmt 0
show interface brief
show interface fc1/1 brief
show run interface fc1/1
show interface description
show interface description | grep VMAX
show interface fc1/1 | include description
show interface transceiver
show interface transceiver details
show interface fc1/1 transceiver
show interface fc1/1 transceiver details
show int | include CRC
show int | include errors
show int fc1/1 | include errors
show int fc1/1-48 | include errors
show int fc1/1-32 | include errors
show int fc2/1-32 | include errors
show interface fc1/1-32,fc2/1-32 | include errors
show interface fc1/1-32,fc2/1-32,fc3/1-32,fc4/1-32 | include errors
show interface fc1/15,fc2/15,fc3/21,fc4/22 | include errors
show int fc1/1,fc1/12 |include fc|vsan|description|CRC|errors
show port-channel summary
show port-channel usage
show port-channel database
show port-channel consistency
show interface port-channel X
show run interface port-channel X
show port internal info
show port internal info interface fc1/1
show port internal info interface port-channel X
show port-resources module 1
show port-resources module 2
VSANs (Virtual SANs):
show vsan
show vsan membership
show vsan membership usage
show vsan X membership
show vsan membership interface fc1/1
show vsan membership interface fc1/1-32
show vsan membership interface port-channel X
show topology vsan X
FLOGI (FABRIC LOGIN):
*Displays devices connected locally over physical ports.
show flogi database
show flogi database details
show flogi database interface fc1/1
show flogi database interface port-channel X
show flogi database vsan X
show flogi database fcid 0x000xxx
show flogi internal info flogi-table vsan X
show flogi internal event-history debugs
show flogi internal vsan X
show flogi internal errors
show flogi internal vsan-state X
FCDOMAIN (Fibre Channel domain):
*Displays global information about fcdomain configurations.
show fcdomain
show fcdomain vsan X
show fcdomain domain-list
show fcdomain domain-list vsan X
show fcdomain address-allocation vsan X
FCNS (Fibre Channel Name Server):
*Displays wwn&fcid including devices connected over IVR/PC/ISL.
show fcns database
show fcns database detail vsan X
show fcns database fcid 0x010440 detail vsan X
show rscn statistics vsan X
show fspf database vsan X
FC & DEVICE ALIAS:
show fcalias
show fcalias vsan X
show fcalias pending vsan X
show device-alias database
ZONING:
show zone (Displays all VSANs)
show zone status
show zone status vsan X (Display zoning mode status enhanced/basic)
show zone active
show zone active vsan X
show zone vsan X
show zone statistics
show zone statistics vsan X
show zone active vsan X
show zone active vsan X | grep ZoneHostName
show zone analysis vsan X
show zone analysis active vsan X
show zone policy vsan X
show zone pending-diff vsan X
show zone member pwwn 21:00:00:XX:XX:XX:XX:XX
show zone name ZoneX
show zoneset
show zoneset vsan X-Y
show zoneset active (An asterisk (*) next to a device indicates it is logged into the name server)
show zoneset active vsan X
show zoneset brief
show zoneset brief vsan X
show zone-attribute-group vsan X
show zoneset pending active vsan X
show zoneset pending vsan X
IVR (Inter VSAN Routing):
show ivr
show ivr vsan-topology
show ivr zone
show ivr zone | grep HostZoneName
show ivr zoneset status
show ivr zoneset active
show ivr internal fcns database
CFS (Cisco Fabric Services):
show cfs merge status
show cfs peers
show cfs status
SUPPORT:
show system uptime
show tech-support brief
show tech-support detail
show tech-support flogi
show tech-support fcdomain
show tech-support vsan x
show tech-support zone
show tech-support zone vsan X
show tech-support ivr
show logging server
show logging logfile
show logging last <lines>
show ntp peer-status
show system health
show accounting log
show processes cpu
show processes cpu history
show processes cpu module 1
show processes cpu module 2
show processes log
show processes log details
show processes memory
show flash

EMC VNX2 Drive Layout (Guidelines & Considerations)
September 7, 2015 VNX DRIVE, EMC, Layout, mcx, VNX
Applies only to VNX2 Systems.
CHOICES made in relation to the physical placement of Drives within a VNX can have an
impact on how the VNX performs. The intention here is to shed some light on how to best
optimize the VNX by placing Drives in their best physical locations within the VNX Array. The
guidelines here deal with optimising the back-end system resources. While these considerations and examples may help with choices around the physical location of Drives, you should always work with an EMC certified resource in completing such an exercise.

Maximum Available Drive Slots
You cannot exceed the maximum slot count; doing so will result in drives becoming unavailable. Drive form factor and DAE type may be a consideration here to ensure you are not exceeding the stated maximum. Thus the maximum slot count dictates the maximum number of drives, and the overall capacity, a system can support.

BALANCE
BALANCE is the key when designing the VNX drive layout:
Where possible, the best practice is to EVENLY BALANCE each drive type across all available back-end system BUSES. This will result in the best utilization of system resources and help to avoid potential system bottlenecks. VNX2 has no restrictions around using or spanning drives across Bus 0 Enclosure 0.

DRIVE PERFORMANCE
These are rule-of-thumb figures which can be used as a guideline for each drive type used in a VNX2 system.
Throughput (IOPS) figures are based on small block random I/O workloads:

Bandwidth (MB/s) figures are based on large block sequential I/O workloads:

Recommended Order of Drive Population:
1. FAST Cache
2. FLASH VP
3. SAS 15K
4. SAS 10K
5. NL-SAS
Physical placement should always begin at Bus0 Enclosure0 (0_0) and the first drives to get
placed are always the fastest drives as per the above order. Start at the first available slot on each
BUS and evenly balance the available Flash drives across the first slots of the first enclosure of
each bus, beginning with the FAST Cache drives. This ensures that the Flash drives experience the lowest latency possible on the system and the greatest RoI is achieved.
FAST CACHE
FAST Cache drives are configured as RAID-1 mirrors and, again, it is good practice to balance the drives across all available back-end buses. The number of FAST Cache drives per back-end bus differs for each system, but ideally aim for no more than 8 drives per bus (including the spare); this is because FAST Cache drives are extremely I/O intensive and placing more than the recommended maximum per bus may cause I/O saturation on the bus.

Note: Do not mix different drive capacity sizes for FAST Cache, either use all 100GB or all
200GB drive types.
Also for VNX2 systems there are two types of SSD available:
FAST Cache SSDs are single-level cell (SLC) Flash drives that are targeted for use with FAST
Cache. These drives are available in 100GB and 200GB capacities and can be used both as FAST
Cache and as TIER-1 drives in a storage pool.
FAST VP SSDs are enterprise multi-level cell (eMLC) drives that are targeted for use as Tier-1 drives in a storage pool (not supported as FAST Cache drives). They are available in three capacities: 100GB, 200GB and 400GB.

More detailed post on FAST Cache: EMC VNX FAST Cache


DRIVE FORM FACTOR
Drive form factor (2.5 | 3.5) is an important consideration. For example if you have a 6 BUS
System with 6 DAEs (one DAE per BUS) consisting of 2 x 2.5 Derringer DAEs and 4 x 3.5
Viper DAEs as follows:

MCx HOT SPARING CONSIDERATIONS
Best practice is to ensure 1 spare is available per 30 of each drive type. When there are drives of the same type in a VNX but with different speeds, form factors or capacities, these should ideally be placed on different buses.
Note: For the vault drives (0_0_0 to 0_0_3), if they are 300GB in size then no spare is required; but if drives larger than 300GB are used and user LUNs are present on the vault, then a spare is required.
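As an illustrative calculation against the 1-per-30 guideline (the drive count here is my own example, not taken from the layout examples below): a system with 68 NL-SAS drives would call for 3 NL-SAS hot spares, since 2 spares would only cover 60 drives.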
While all un-configured drives in the VNX2 Array will be available to be used as a Hot Spare, a
specific set of rules are used to determine the most suitable drive to use as a replacement for a
failed drive:
1. Drive Type: All suitable drive types are gathered.
2. Bus: Which of the suitable drives are contained within the same bus as the failing drive.
3. Size: Following on from the bus query, MCx will then select a drive of the same size or, if none is available, a larger drive will be chosen.
4. Enclosure: This is another new feature where MCx will analyse the results of the previous steps to check if the enclosure that contains the actual failing drive has a suitable replacement within the DAE itself.
See previous post for more info: EMC VNX MCx Hot Sparing
Drive Layout EXAMPLE 1:
VNX 5600 (2 BUS)

FAST Cache:
1 x Spare, 8 x FAST Cache Avail.
8 / 2 BUSES = 4 FAST Cache Drives Per BUS
1 x 2.5 SPARE Placed on 0_0_24
FAST VP:
1 x Spare, 20 x FAST VP Avail.
20 / 2 BUSES = 10 Per BUS
10 x 3.5 Placed on BUS 0 Encl 1
10 x 2.5 Placed on BUS 1 Encl 0
1 x 2.5 SPARE Placed on 1_0_24

Drive Layout EXAMPLE 2:
VNX 5800 (6 BUS)
Drive Layout EXAMPLE 3:
VNX 8000 (16 BUS)

Useful Reference:
EMC VNX2 Unified Best Practices for Performance

Cisco Nexus 3064 Configuring Jumbo Frames
September 2, 2015 Cisco 3064, Cisco, jumbo, nexus, qos


The default MTU size on a Nexus 3064 switch is 1500 bytes; the following details how to reconfigure the switch for a system-wide MTU value of 9216 (jumbo).

Note: The Cisco Nexus 3000 Series switch does not fragment frames. As a result, the switch
cannot have two ports in the same Layer 2 domain with different maximum transmission units
(MTUs). A per-physical Ethernet interface MTU is not supported. Instead, the MTU is set
according to the QoS classes. You modify the MTU by setting Class and Policy maps.
As per Cisco Documentation

Configuring Jumbo Frames:
Begin by creating a policy-map of type network-qos, aptly named JumboFrames in this example, and assign it as the system QoS service policy. (As stated above, you cannot configure this at a per-interface level.)
n3k-sw# config t
n3k-sw(config)# policy-map type network-qos ?
WORD Policy-map name (Max Size 40)
n3k-sw(config)# policy-map type network-qos JumboFrames
n3k-sw(config-pmap-nq)# class type network-qos class-default
n3k-sw(config-pmap-nq-c)# mtu 9216
n3k-sw(config-pmap-nq-c)# system qos
n3k-sw(config-sys-qos)# service-policy type network-qos ?
WORD Policy-map name (Max Size 40)
n3k-sw(config-sys-qos)# service-policy type network-qos JumboFrames
n3k-sw(config-sys-qos)# show policy-map type network-qos
Type network-qos policy-maps
===============================
policy-map type network-qos JumboFrames
class type network-qos class-default
mtu 9216

Verification of the change can be validated by running a show queuing interface command on
one of the 3k interfaces (interface Ethernet 1/2 in this example):
n3k-sw# show queuing interface ethernet 1/2

Ethernet1/2 queuing information:


qos-group sched-type oper-bandwidth
0 WRR 100
qos-group 0
HW MTU: 9216 (9216 configured)
drop-type: drop, xon: 0, xoff: 0
If you run the show interface command, a value of 1500 will still be displayed despite the system-wide change that was made:
n3k-sw# show interface ethernet 1/2
Ethernet1/2 is up
Dedicated Interface
Hardware: 100/1000/10000 Ethernet
Description: 6296 2A Mgmt0
MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec

EMC VMAX Fully Pre-allocate TDEV
August 19, 2015 VMAX EMC, PREALLOCATE, TDEV, VMAX
By fully pre-allocating a TDEV, all the tracks associated with the device are reserved; this may be useful for mission-critical apps or for avoiding any write-miss penalties.
Example SYMCLI:
Single TDEV example:
symconfigure -sid xxx -cmd "start allocate on tdev 0c66 end_cyl=last_cyl allocate_type=persistent;" commit

Range of TDEVs:
symconfigure -sid xxx -cmd "start allocate on tdev 0c6e:1116 end_cyl=last_cyl allocate_type=persistent;" commit
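To confirm the allocation afterwards, the device's allocated tracks can be checked with Solutions Enabler; a sketch (exact output fields vary by Solutions Enabler version):
symdev -sid xxx show 0c66
symcfg -sid xxx list -tdev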

Example UNISPHERE:
From the Unisphere GUI navigate to Storage > Volumes, right-click the device you wish to modify and select Start Allocate.
