EMC VNXe 3200 Configuration Steps Via UEMCLI (Part 1)
October 22, 2015
There are some minor CLI changes with VNXe MCX, which I will document as part of this series.
For VNXe Gen1, please refer to this earlier post:
EMC VNXe Gen1 Configuration Using Unisphere CLI
The initial configuration steps outlined in Part 1:
Accept End User License Agreement
Change the Admin Password
Apply License File
Commit the IO Modules
Perform a Health Check
Code Upgrade
Create A New User
Change the Service Password
Enable SSH
Accept End User License Agreement
uemcli -d 192.168.1.50 -u Local/admin -p Password123# /sys/eula set -agree yes
Change the Admin Password
uemcli -d 192.168.1.50 -u Local/admin -p Password123# /user/account show
uemcli -d 192.168.1.50 -u Local/admin -p Password123# /user/account -id user_admin set
-passwd NewPassword -oldpasswd Password123#
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword /user/account show
Reference Help for any assistance:
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword / -help
Apply License File
First, gather the serial number of the VNXe:
uemcli -d 192.168.1.50 -u Local/admin -p Password123# /sys/general show -detail
Then browse to the EMC product registration site, entering the VNXe serial number to retrieve
the associated license file:
https://support.emc.com/servicecenter/registerProduct/
Upload the acquired license file:
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword -upload -f
C:\Users\david\Downloads\FL100xxx00005_29-July-2015_exp.lic license
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword /sys/lic show
Commit the IO Modules
The following commits all uncommitted IO modules:
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword /env/iomodule commit
Display a list of system IO modules:
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword /env/iomodule show
Perform a Health Check
It is good practice to perform a Health Check in advance of a code upgrade:
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword /sys/general healthcheck
Code Upgrade
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword /sys/soft/ver show
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword -upload -f
"C:\Users\david\Downloads\VNXe 2.4.3.21980\VNXe-MR4SP3.1-upgrade-2.4.3.21980RETAIL.tgz.bin.gpg" upgrade
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword /sys/soft/ver show
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword /sys/soft/upgrade create -candId
CAND_1
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword /sys/soft/upgrade show
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword /sys/general healthcheck
Note: Please see a more detailed overview of the upgrade process in a previous post:
http://davidring.ie/2015/03/02/emc-vnxe-code-upgrade/
Create A New User
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword /user/account create -name david -type
local -passwd DavidPassword -role administrator
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword /user/account show
The role for the new account can be one of:
administrator - Administrator
storageadmin - Storage Administrator
operator - Operator (view only)
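For instance, a view-only account could be created with the same syntax as above (a sketch; the username reporter and its password are illustrative, not from the original post):

```
uemcli -d 192.168.1.50 -u Local/admin -p NewPassword /user/account create -name reporter -type local -passwd ReporterPassword -role operator
```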
Change the Service Password
The Service password is used for performing service actions on the VNXe.
uemcli -d 10.0.0.1 -u Local/admin -p Password123# /service/user set -passwd newPassword
-oldpasswd Password123#
Enable SSH
uemcli -d 192.168.1.50 -u service -p NewPassword /service/ssh set -enabled yes
This issue occurs during a ViPR storage provisioning task with VPLEX because ViPR incorrectly
attempts to apply two simultaneous updates to the Cisco MDS IVR database. The MDS database
is (correctly) locked by the first task, so the second task times out, resulting in a failed ViPR
provisioning process. The tasks should be executed sequentially, allowing each task to complete
and then commit its changes to the IVR database, which removes the lock once the commit is
successful. Only once the database lock is removed may the subsequent task execute against the
database.
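Before retrying a failed order, the lock state can be inspected manually on the MDS. A hedged sketch, assuming a standard NX-OS CLI session (exact output varies by release):

```
show cfs lock            ! lists applications (e.g. ivr) currently holding a CFS lock
show ivr zoneset status  ! shows the state of the last IVR zoneset activation
```

If an ivr lock is still held, wait for the owning task to commit (or abort it) before launching the next provisioning order.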
Workaround:
Executing an exclusive storage provisioning order from the ViPR catalog for a single ESXi host
works perfectly, including automatically creating the required Cross-Connect Zoning, because the
single workflow performs its MDS IVR database updates sequentially. During the single-host
exclusive storage provisioning task, ViPR creates the necessary initiators, storage views, and
IVR zones (both local and cross-connect) for that host. However, a shared storage provisioning
task to an ESXi cluster fails in a single catalog order, and two exclusive storage provisioning
orders executed at the same time will also fail.
In summary, the workaround is to execute an exclusive storage provisioning order for each host
in the cluster individually, one at a time. Once this is complete, each host has a volume
presented, and VPLEX has the correct initiators and storage views created by ViPR, you may then
create a new distributed LUN for the whole ESXi cluster. ViPR simply adds the new distributed
volumes to the existing storage views in VPLEX (there is no zoning going on when you run the
ddev creation, thus no locking). Once you have a working distributed volume for all of the hosts,
you may then remove the exclusive volumes and everything should function accordingly. Ensure
that all the required zoning (including IVR zones) is configured correctly on all switches and
that the ESXi hosts can see all associated paths.
NOTE: ViPR engineering plan to enhance the Zoning workflow with an additional step to
obtain/monitor any IVR database locks before proceeding with the IVR zoning operations.
This will be targeted for the next ViPR release. I will provide updates to this post in due
course.
Solution Example:
The below diagram depicts the connectivity requirements in order to implement a ViPR storage
provisioning solution with a VPLEX Metro configuration using Cross-Connect Zoning:
From the above diagram you can see that an ISL is in place for site-to-site connectivity. In this
example configuration the ISL carries VPLEX-FC-WAN-Replication traffic over VSAN30
(Fabric-A) and VSAN31 (Fabric-B) (VPLEX FC WAN COM). VSAN30 is stretched between the
Fabric-A switches on both sites, and VSAN31 is stretched between the Fabric-B switches for
Sites 1 & 2. VSAN30 and VSAN31 can be used as transit VSANs for this example IVR
configuration.
In order for ViPR v2.x to successfully execute the task of automatically creating the required
cross-connect zoning the following configuration needs to be in place (as per example diagram
above):
Site 1:
Fabric-A, VSAN10: associated interfaces/port-channels (even ESXi HBAs of Site 1, VPLEX
FE&BE, and PC30) added as members to VSAN10.
Fabric-B, VSAN11: associated interfaces/port-channels (odd ESXi HBAs of Site 1, VPLEX
FE&BE, and PC31) added as members to VSAN11.
Site 2:
Fabric-A, VSAN20: associated interfaces/port-channels (even ESXi HBAs of Site 2, VPLEX
FE&BE, and PC30) added as members to VSAN20.
Fabric-B, VSAN21: associated interfaces/port-channels (odd ESXi HBAs of Site 2, VPLEX
FE&BE, and PC31) added as members to VSAN21.
Site 1 to Site 2:
Fabric-A: VSAN30 used as a transit VSAN over Port-channel 30.
Fabric-B: VSAN31 used as a transit VSAN over Port-channel 31.
A prerequisite is required in order for ViPR to successfully create the cross-connect zoning
automatically as part of the provisioning workflow: manually create an IVR zone on Fabric A
connecting VSAN10 and VSAN20, and an IVR zone on Fabric B connecting VSAN11 and
VSAN21 (example IVR zones provided below).
In the case of ViPR v2.2 an additional prerequisite task is required: stretching the VSANs
between sites. As per this example, VSAN20 is added to Switch-A on Site 1 and, vice versa,
VSAN10 is added to Switch-A on Site 2; the same is repeated for the Fabric-B switches. No
local interfaces are assigned to these dummy VSANs; essentially VSAN20 is created without any
member on Switch-A Site 1, and so on for all respective VSANs, as can be seen in the example
configuration provided below. As part of the VSAN stretch, ensure the allowed VSANs are
added to the respective port-channels:
Port-Channel 30 Allowed VSAN 10,20,30
Port-Channel 31 Allowed VSAN 11,21,31
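On the Site 1 Fabric-A switch, the dummy-VSAN prerequisite might look like the following sketch (VSAN and port-channel numbers are taken from this example; note that no local interfaces are assigned to the stretched VSAN):

```
vsan database
  vsan 20 name VSAN20
interface port-channel 30
  switchport trunk allowed vsan add 20
```

The same pattern is repeated on the other three switches for their respective remote VSANs.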
Once the VSAN is stretched across the sites as per the prereq for ViPR v2.2, ViPR will then
automatically create the required IVR zones as part of the provisioning workflow.
Note: The vArray should be set for Automatic Zoning for all this to occur.
Example MDS Configuration
These are example configuration steps to be completed on both sites MDS switches in order to
enable Cisco Inter-VSAN Routing (IVR is the standard for cross-connect zoning with VPLEX
Metro) and to enable automatic cross-connect zoning with ViPR:
FABRIC A Switches
feature ivr
ivr nat
ivr distribute
ivr commit
system default zone distribute full
system default zone mode enhanced
ivr vsan-topology auto
zone mode enhanced vsan 10
zone mode enhanced vsan 20
zone mode enhanced vsan 30
vsan database
vsan 10 name VSAN10
vsan 20 name VSAN20
vsan 30 name vplex1_wan_repl_vsan30
interface port-channel 30
channel mode active
switchport mode E
switchport trunk allowed vsan 10
switchport trunk allowed vsan add 20
switchport trunk allowed vsan add 30
switchport description CROSS-SITE-LINK
switchport speed 8000
switchport rate-mode dedicated
Configuring FABRIC A switches fcdomain priorities:
Site1:
fcdomain priority 2 vsan 10
fcdomain domain 10 static vsan 10
fcdomain priority 100 vsan 20
fcdomain domain 22 static vsan 20
fcdomain priority 2 vsan 30
fcdomain domain 30 static vsan 30
Site2:
fcdomain priority 100 vsan 10
fcdomain domain 12 static vsan 10
fcdomain priority 2 vsan 20
fcdomain domain 20 static vsan 20
fcdomain priority 100 vsan 30
fcdomain domain 32 static vsan 30
Example: configuring Inter-VSAN routing (IVR) Zones connecting an ESXi host HBA0
over VSANs 10 and 20 from site1->site2 and vice versa site2->site1 utilising the transit
VSAN30:
device-alias database
device-alias name VPLEXSITE1-E1_A0_FC02 pwwn 50:00:14:42:A0:xx:xx:02
device-alias name VPLEXSITE1-E1_B0_FC02 pwwn 50:00:14:42:B0:xx:xx:02
device-alias name VPLEXSITE2-E1_A0_FC02 pwwn 50:00:14:42:A0:xx:xx:02
device-alias name VPLEXSITE2-E1_B0_FC02 pwwn 50:00:14:42:B0:xx:xx:02
device-alias name ESXi1SITE1-VHBA0 pwwn xx:xx:xx:xx:xx:xx:xx:xx
device-alias name ESXi1SITE2-VHBA0 pwwn xx:xx:xx:xx:xx:xx:xx:xx
device-alias commit
device-alias distribute
ivr zone name ESXi1SITE1-VHBA0_VPLEXSITE2-E1_A0_FC02
member device-alias ESXi1SITE1-VHBA0 vsan 10
member device-alias VPLEXSITE2-E1_A0_FC02 vsan 20
ivr zone name ESXi1SITE1-VHBA0_VPLEXSITE2-E1_B0_FC02
member device-alias ESXi1SITE1-VHBA0 vsan 10
member device-alias VPLEXSITE2-E1_B0_FC02 vsan 20
ivr zone name ESXi1SITE2-VHBA0_VPLEXSITE1-E1_A0_FC02
member device-alias ESXi1SITE2-VHBA0 vsan 20
member device-alias VPLEXSITE1-E1_A0_FC02 vsan 10
ivr zone name ESXi1SITE2-VHBA0_VPLEXSITE1-E1_B0_FC02
member device-alias ESXi1SITE2-VHBA0 vsan 20
member device-alias VPLEXSITE1-E1_B0_FC02 vsan 10
ivr zoneset name IVR_vplex_hosts_XC_A
member ESXi1SITE1-VHBA0_VPLEXSITE2-E1_A0_FC02
member ESXi1SITE1-VHBA0_VPLEXSITE2-E1_B0_FC02
member ESXi1SITE2-VHBA0_VPLEXSITE1-E1_A0_FC02
member ESXi1SITE2-VHBA0_VPLEXSITE1-E1_B0_FC02
ivr zoneset activate name IVR_vplex_hosts_XC_A
ivr commit
FABRIC B Switches
feature ivr
ivr nat
ivr distribute
ivr commit
Using CFS: The IVR feature uses the Cisco Fabric Services (CFS) infrastructure to enable
efficient configuration management and to provide a single point of configuration for the entire
fabric in the VSAN.
Thanks to @HeagaSteve, Joni, Hans, @dclauvel & Sarav for providing valuable input.
BALANCE
BALANCE is the key when designing the VNX drive layout:
Where possible the best practice is to EVENLY BALANCE each drive type across all available
back-end system BUSES. This will result in the best utilization of system resources and help to
avoid potential system bottlenecks. VNX2 has no restrictions around using or spanning drives
across Bus 0 Enclosure 0.
DRIVE PERFORMANCE
These are rule of thumb figures which can be used as a guideline for each type of drive used in a
VNX2 system.
Throughput (IOPS) figures are based on small block random I/O workloads; bandwidth (MB/s)
figures are based on large block sequential I/O workloads.
Note: Do not mix different drive capacity sizes for FAST Cache, either use all 100GB or all
200GB drive types.
Also for VNX2 systems there are two types of SSD available:
FAST Cache SSDs are single-level cell (SLC) Flash drives that are targeted for use with FAST
Cache. These drives are available in 100GB and 200GB capacities and can be used both as FAST
Cache and as TIER-1 drives in a storage pool.
FAST VP SSDs are enterprise multi-level cell (eMLC) drives that are targeted for use as TIER-1
drives in a storage pool (not supported as FAST Cache drives). They are available in three
capacities: 100GB, 200GB, and 400GB.
FAST Cache:
1 x Spare, 8 x FAST Cache Avail.
8 / 2 BUSES = 4 FAST Cache Drives Per BUS
1 x 2.5" SPARE Placed on 0_0_24
FAST VP:
1 x Spare, 20 x FAST VP Avail.
20 / 2 BUSES = 10 Per BUS
10 x 3.5" Placed on BUS 0 Encl 1
10 x 2.5" Placed on BUS 1 Encl 0
1 x 2.5" SPARE Placed on 1_0_24
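The per-bus counts above follow from simple division of the available (non-spare) drives across the back-end buses; a minimal sketch, using the FAST VP figures from this example (21 drives purchased, 1 held back as a spare, 2 buses):

```shell
#!/bin/sh
# Hypothetical helper: balance FAST VP drives evenly across back-end buses.
# Values are from the example layout above.
total=21
spares=1
buses=2
avail=$((total - spares))
echo "FAST VP drives per bus: $((avail / buses))"
```

Running this prints 10 drives per bus, matching the layout above.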
Useful Reference:
EMC VNX2 Unified Best Practices for Performance
Note: The Cisco Nexus 3000 Series switch does not fragment frames. As a result, the switch
cannot have two ports in the same Layer 2 domain with different maximum transmission units
(MTUs). A per-physical Ethernet interface MTU is not supported. Instead, the MTU is set
according to the QoS classes. You modify the MTU by setting Class and Policy maps.
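As a hedged sketch of what that looks like in practice (the policy-map name jumbo-pmap and the 9216-byte MTU are illustrative values, not from the original post):

```
policy-map type network-qos jumbo-pmap
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo-pmap
```

The MTU is applied to the QoS class, and therefore to every port matching that class, rather than per physical interface.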
As per Cisco Documentation
Verification of the change can be validated by running a show queuing interface command on
one of the 3k interfaces (interface Ethernet 1/2 in this example):
n3k-sw# show queuing interface ethernet 1/2
Ethernet1/2 is up
Dedicated Interface
Hardware: 100/1000/10000 Ethernet
Description: 6296 2A Mgmt0
MTU 1500 bytes, BW 1000000 Kbit, DLY 10 usec
Range of TDEVs:
symconfigure -sid xxx -cmd "start allocate on tdev 0c6e:1116 end_cyl=last_cyl
allocate_type=persistent;" commit
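Assuming Solutions Enabler is in use, one way to confirm the result afterwards is to list the thin devices and review their allocated capacity (a sketch; the -sid value is a placeholder):

```
symcfg list -tdev -sid xxx
```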
Example Unisphere:
From the Unisphere GUI, navigate to Storage > Volumes, right-click the device you wish to
modify, and select Start Allocate.