%% It is factory pre-configured
%% 100% virtually provisioned
%% Can be configured as all-Flash
%% Rapidly provisions storage
%% Set Service Level Objectives
----------
VMAX 100K
----------
## 1 to 2 Engines
## 1440 Drives
----------
VMAX 200K
----------
## 1 to 4 Engines
## 2880 Drives
----------
VMAX 400K
----------
## 1 to 8 Engines
## 5760 Drives
Features of VMAX3
** Low latency
** High IOPS
** Can be configured all-Flash
** Storage can be rapidly provisioned with desired Service Level Objectives
** Management simplicity
** Massive scalability
** EMC Solutions Enabler version 8.0 (Unisphere 8.0) to control VMAX3 arrays
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
VMAX 100K, 200K, 400K
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Feature                    Description
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Maximum Drives Per Engine  @ 720 (2.5")
                           @ 360 (3.5")
-------------------------------------------------------------------------------
Drive Options              @ Hybrid (mixed drive types)
                           @ All-Flash
-------------------------------------------------------------------------------
DAE Mixing                 @ 60-drive and 120-drive DAEs behind an engine
                           @ Single increments
-------------------------------------------------------------------------------
Power Options              @ Three-phase Delta (50 amp) or WYE (32 amp)
                           @ Single-phase (32 amp)
-------------------------------------------------------------------------------
Dispersion                 @ System Bays 2-8, up to 25 meters from System Bay 1
-------------------------------------------------------------------------------
Vault                      @ Vault to Flash in engine
-------------------------------------------------------------------------------
Racking Options            @ Single Engine
                           @ Dual Engine
                           @ Third-party racking
-------------------------------------------------------------------------------
Service Access             @ Integrated service processor in System Bay 1
                             # Management Module Control Station (MMCS)
                           @ Service tray on additional system bays
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                     VMAX 100K              VMAX 200K              VMAX 400K
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# Engines            1-2                    1-4                    1-8
-----------------------------------------------------------------------------------------
Cache/Engine         512 GB or 1 TB         512 GB, 1 TB, or 2 TB  512 GB, 1 TB, or 2 TB
-----------------------------------------------------------------------------------------
Engine Type          2.1 GHz, 24 cores      2.6 GHz, 32 cores      2.7 GHz, 48 cores
-----------------------------------------------------------------------------------------
Max 2.5" Drives      1440                   2880                   5760
-----------------------------------------------------------------------------------------
Max 3.5" Drives      720                    1440                   2880
-----------------------------------------------------------------------------------------
Max Usable Capacity  0.5 PBu                2.1 PBu                4.0 PBu
-----------------------------------------------------------------------------------------
Max FE Ports         64                     128                    256
-----------------------------------------------------------------------------------------
InfiniBand Fabric    Dual 12-port switches  Dual 12-port switches  Dual 18-port switches
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
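The per-model maxima scale linearly with engine count: each VMAX3 engine brings 720 x 2.5" drive slots (360 x 3.5") and 32 front-end ports. As a quick sanity check of the numbers, a sketch (per-engine figures derived from the table):

```python
# Per-engine building blocks implied by the comparison table: each VMAX3
# engine contributes 720 x 2.5" drive slots (360 x 3.5") and 32 FE ports.
DRIVES_25_PER_ENGINE = 720
DRIVES_35_PER_ENGINE = 360
FE_PORTS_PER_ENGINE = 32

# Maximum engine counts per model, from the table.
MODELS = {"VMAX 100K": 2, "VMAX 200K": 4, "VMAX 400K": 8}

for model, engines in MODELS.items():
    print(model,
          engines * DRIVES_25_PER_ENGINE,  # max 2.5" drives
          engines * DRIVES_35_PER_ENGINE,  # max 3.5" drives
          engines * FE_PORTS_PER_ENGINE)   # max FE ports
```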
-----------------------------------------------------------------------------------
The load can be dynamically distributed over time. Currently this feature is
not exposed to the end user through the software; we need to contact an EMC
service engineer to do this.
-----------------------------------------------------------------------------------
@@ Initial Configuration
   -- Configuration is done at the factory.
   -- SymmWin and Simplified SymmWin
      ~ Runs on the Management Module Control Station (MMCS), which takes the
        role of the service processor on earlier VMAX arrays.
      ~ Access restricted to authorized EMC personnel only.
-----------------------------------
              SYMAPI
-----------------------------------
                 ^
                 |
                 |  MMCS (Management Module Control Station)
                 |
                 |  SYMMWIN
                 v
-----------------------------------
            HYPERMAX OS
-----------------------------------
## VMAX3 Arrays
   -- Service Level based management.
## Performance Analyzer
   -- Installed by default
   -- PostgreSQL
## APIs for automation and provisioning.

Unisphere functionality
## Manage eLicenses, users, and roles
## Storage configuration management
   -- SLO-based provisioning
   -- FAST
## Configure and monitor alerts
## Performance monitoring
   -- Real-time, root-cause, and historical (real-time analysis and historical
      trending of VMAX performance data). We can also see high-frequency
      metrics in real time, plus VMAX3 system heat maps and graphs detailing
      system performance. We can drill down through the data to investigate
      issues, monitor performance over time, execute scheduled and ongoing
      reports, and export data to a file.
   -- Dashboards
      Users can customize their own dashboards alongside the predefined system
      dashboards.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
=========================
Factory Pre-Configuration
=========================
## Disk Groups
   @ A collection of physical drives.
## Data Pools
   @ Collection of Data Devices (TDATs)
     - Preconfigured in each Disk Group
     - All the disks in a disk group have the same RAID protection
       (so all the TDATs in a given group have the same RAID protection and
       a fixed size)
-----------------------------------------------------------------------------------
IMP: Disk Groups, Data Pools, Storage Resource Pools, and Service Level
Objectives cannot be configured or modified with Solutions Enabler or
Unisphere. These are created during the configuration process at the factory.
-----------------------------------------------------------------------------------
Each data pool belongs to exactly one disk group; there is a one-to-one
relationship between disk groups and data pools.
A disk group must contain disks of the same disk technology, capacity,
rotational speed, and RAID type.
The performance capability of each data pool is known, and is based on
drive type, speed, capacity, quantity of drives, and RAID protection.
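As a rough illustration of how a pool's known capability follows from drive quantity and type, here is a sketch; the per-drive IOPS figures are generic rules of thumb, not EMC-published numbers:

```python
# Illustrative per-drive IOPS rules of thumb (assumptions, not EMC figures).
PER_DRIVE_IOPS = {
    "flash": 5000,  # enterprise flash
    "15k":   180,   # 15K RPM SAS
    "10k":   140,   # 10K RPM SAS
    "7.2k":  80,    # 7.2K RPM NL-SAS
}

def pool_capability(drive_type: str, drive_count: int) -> int:
    """Aggregate raw read IOPS the drives in a pool can service."""
    return PER_DRIVE_IOPS[drive_type] * drive_count

# e.g. a 64-drive 15K pool vs a 16-drive flash pool
print(pool_capability("15k", 64), pool_capability("flash", 16))
```

Write-heavy workloads additionally pay the RAID write penalty of the pool's protection scheme, covered in the next table.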
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Option  Characteristics                              Protection  Performance            Cost
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
RAID 1  @ Writes to two separate physical drives     Higher      Fastest                Low
        @ Reads from a single drive
-------------------------------------------------------------------------------------------
RAID 5  @ Parity-based protection                    High        Fast read, good write  Lower
        @ Striped data and parity
        @ 3+1 and 7+1
-------------------------------------------------------------------------------------------
RAID 6  @ Two parity drives: 6+2 and 14+2            Highest     Fast read, fair write  Lower
        @ Data availability is the primary
          consideration
        @ Performance is a secondary consideration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Write penalties
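The write penalties behind the table can be sketched numerically. The penalty factors (2 for RAID 1, 4 for RAID 5, 6 for RAID 6) are the standard per-write back-end I/O costs; the workload mix below is a made-up example:

```python
# Standard RAID write-penalty factors: back-end I/Os generated per host write.
WRITE_PENALTY = {"RAID1": 2, "RAID5": 4, "RAID6": 6}

def backend_iops(host_iops: int, write_pct: float, raid: str) -> int:
    """Back-end IOPS = reads pass through, writes are multiplied by the penalty."""
    reads = host_iops * (1 - write_pct)
    writes = host_iops * write_pct
    return int(reads + writes * WRITE_PENALTY[raid])

# Example (made-up workload): 10,000 host IOPS at 30% writes.
for raid in WRITE_PENALTY:
    print(raid, backend_iops(10_000, 0.3, raid))
```

This is why RAID 6 trades write performance for the highest availability, as the table notes.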
++++++++++++++++++++++++++
SRP - Storage Resource Pool
++++++++++++++++++++++++++
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|  ~~~~~~~~~~~~~~~~~~~~~~~  ~~~~~~~~~~~~~~~~~~~~  ~~~~~~~~~~~~~~~~~~~~~~~~~~~  |
|  | Flash - RAID5 (3+1) |  | SAS 15K - RAID 1 |  | SAS 7.2K - RAID6 (14+2) |  |
|  | Data Pool 0         |  | Data Pool 1      |  | Data Pool 2             |  |
|  ~~~~~~~~~~~~~~~~~~~~~~~  ~~~~~~~~~~~~~~~~~~~~  ~~~~~~~~~~~~~~~~~~~~~~~~~~~  |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
++++++++++++++++++++++++++++++++++++++++++++++++
SLO - Service Level Objective Based Provisioning
++++++++++++++++++++++++++++++++++++++++++++++++
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Service Level Objective
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@ Defines the ideal performance operating range of an application
@ Can be combined with a workload type
@ Preconfigured

--------------
Storage Groups
--------------
@ Can be explicitly associated with an SRP
@ Can be explicitly associated with an SLO and workload type
  -- Further refines the performance objective
  -- Defines the SG as FAST managed
@ SGs are implicitly associated with the default SRP and the Optimized SLO
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
SLO                  Drive Mix                  Expected Avg Response Time
$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
Diamond              Flash                      0.8 ms
----------------------------------------------------------------------------
Platinum             Between Flash & 15K RPM    3.0 ms
----------------------------------------------------------------------------
Gold                 15K RPM                    5.0 ms
----------------------------------------------------------------------------
Silver               10K RPM                    8.0 ms
----------------------------------------------------------------------------
Bronze               7.2K RPM                   14.0 ms
----------------------------------------------------------------------------
Optimized (default)  System optimized           N/A
$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
-----------------------------------------------------------------------------------
IMP @@ These SLOs are fixed and cannot be modified.
    @@ The end user can associate the desired SLO with a storage group.
    @@ Note that certain SLOs may not be available on an array if certain
       drive types are not configured.
-----------------------------------------------------------------------------------
ex:
    The Diamond SLO is not available if no Flash drives are present in the array.
    The Bronze SLO is not available if no 7.2K RPM drives are present in the array.
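The availability rule can be sketched as a small lookup. The values come from the table above; treating Platinum as requiring Flash is a simplification of "between Flash & 15K RPM":

```python
# SLO table from the notes: expected average response time and the drive
# type each SLO depends on (None = no specific dependency).
SLO_TABLE = {
    "Diamond":   {"resp_ms": 0.8,  "requires": "flash"},
    "Platinum":  {"resp_ms": 3.0,  "requires": "flash"},  # simplification
    "Gold":      {"resp_ms": 5.0,  "requires": "15k"},
    "Silver":    {"resp_ms": 8.0,  "requires": "10k"},
    "Bronze":    {"resp_ms": 14.0, "requires": "7.2k"},
    "Optimized": {"resp_ms": None, "requires": None},     # always available
}

def available_slos(installed_drives):
    """SLOs an array can offer, given the drive types configured in it."""
    return [slo for slo, info in SLO_TABLE.items()
            if info["requires"] is None or info["requires"] in installed_drives]

# A hybrid array with no 7.2K RPM drives does not offer Bronze.
print(available_slos({"flash", "15k", "10k"}))
```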
+++++++++++++++++++++++++
AVAILABLE WORKLOAD TYPES
+++++++++++++++++++++++++
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Workload Type          Description
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
OLTP                   Small-block I/O workload
-----------------------------------------------------------------------------------
OLTP with Replication  Small-block I/O workload with local or remote replication
-----------------------------------------------------------------------------------
DSS                    Large-block I/O workload
-----------------------------------------------------------------------------------
DSS with Replication   Large-block I/O workload with local or remote replication
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
++++++++++++++++++++++++
AUTO PROVISIONING GROUPS
++++++++++++++++++++++++
+++++++++++++++++++
STORAGE ALLOCATION
+++++++++++++++++++
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Initiator Group   @ Fibre Channel initiators / HBA WWNs
                    - An initiator group can have up to a maximum of 64
                      initiators (WWNs) or 64 child initiator group names.
                    - An initiator group cannot contain a mixture of host
                      initiators (WWNs) and child IG names.
                    - An individual initiator belongs to only one initiator
                      group.
                      -- However, once the initiator is in a group, that group
                         can be a member of another initiator group.
                      -- This feature is called a cascaded initiator group
                         (cascading is allowed to a depth of 1, so only one
                         level deep).
                  @ Port flags are set on an initiator group basis, with one
                    set of port flags applying to all the initiators in the
                    group.
                    - FCID Lockdown is set on a per-initiator basis.
                      (FCID lockdown stops the threat of WWN spoofing.)
                      (WWN spoofing: an attacker gains access to a storage
                      system in order to access/modify/deny data or metadata.)
-----------------------------------------------------------------------------------
Port Group        @ Front-end ports
                    -- Can contain a maximum of 32 FA ports.
                  @ A port can belong to multiple port groups.
                  @ Ports must have the ACLX flag enabled (before a port is
                    added to a port group, ACLX should already be enabled).
                    ** ACLX - Access Control Logix
                       symmaskdb -sid <sid#> list database -v
                    ** What controls the visibility of the VMAX3 ACLX device
                       to a host? The "Show ACLX device" flag set to Enabled.
-----------------------------------------------------------------------------------
Storage Group     @ VMAX3 thin devices
                  @ A device can belong to more than one storage group.
                  @ Can be associated with an SRP, SLO, and workload type.
                    -- A storage group can contain only devices or other
                       storage groups; no mixing is permitted.
                    -- A storage group with devices may contain up to 4K
                       VMAX3 logical volumes/LUNs.
                    -- A logical volume may belong to more than one storage
                       group.
                    -- There is a limit of 16K storage groups per VMAX3
                       storage array.
                    -- A parent storage group can have up to 32 child storage
                       groups.
                    -- One of each type of group is associated to form a
                       masking view.
                    ** A storage group is a logical grouping of up to 4096
                       Symm devices.
                    ** LUN addressing is done automatically via the dynamic
                       LUN feature.
-----------------------------------------------------------------------------------
Masking View      @ One of each type of group is associated together to form
                    a masking view.
                    ** It defines an association between one initiator group,
                       one port group, and one storage group.
                    ** When a masking view is created, the devices in the
                       storage group are mapped to the ports in the port
                       group and masked to the initiators in the initiator
                       group.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
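A toy model of the group limits quoted above (the names, WWN, and port labels are hypothetical; this is a sketch, not a SYMAPI binding):

```python
# Limits quoted in the notes above.
MAX_INITIATORS = 64    # per initiator group
MAX_FA_PORTS   = 32    # per port group
MAX_DEVICES    = 4096  # per storage group with devices

def make_masking_view(name, initiators, ports, devices):
    """Associate one IG, one PG, and one SG; enforce the documented limits."""
    if len(initiators) > MAX_INITIATORS:
        raise ValueError("initiator group exceeds 64 initiators")
    if len(ports) > MAX_FA_PORTS:
        raise ValueError("port group exceeds 32 FA ports")
    if len(devices) > MAX_DEVICES:
        raise ValueError("storage group exceeds 4096 devices")
    # Creating the view maps SG devices to PG ports and masks them to IG WWNs.
    return {"view": name, "ig": list(initiators),
            "pg": list(ports), "sg": list(devices)}

mv = make_masking_view("myapp_mv",
                       ["10000000c9abcdef"],   # hypothetical WWN
                       ["FA-1D:4", "FA-2D:4"], # hypothetical FA ports
                       ["0012A", "0012B"])     # hypothetical device IDs
print(mv["view"])
```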
+++++++++++++++++++++++++++++++++
MANAGING STORAGE AND PROVISIONING
+++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++
Configuration Architecture
++++++++++++++++++++++++++
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    HOST                       LOCAL VMAX3               REMOTE VMAX3
  ~~~~~~~~~~~~~~~~~~~~~       ~~~~~~~~~~~~~~~           ~~~~~~~~~~~~~~~
  | SYMCLI | UNIVMAX  |       |     FA      |           |             |
  ~~~~~~~~~~~~~~~~~~~~~       |      |      |           |             |
  |      SYMAPI       |-----> |      v      |           |             |
  ~~~~~~~~~~~~~~~~~~~~~       |     RA -----|---------->|---- RA      |
  |       SIL         |       ~~~~~~~~~~~~~~~           ~~~~~~~~~~~~~~~
  ~~~~~~~~~~~~~~~~~~~~~              ^                         ^
                            Ethernet |                Ethernet |
                                     v                         v
                              ~~~~~~~~~~~~~~~~~         ~~~~~~~~~~~~~~~~~
                              | SymmWin      |          | SymmWin      |
                              | scripts      |          | scripts      |
                              |              |          |              |
                              | SYMMWIN      |          | SYMMWIN      |
                              | on MMCS      |          | on MMCS      |
                              ~~~~~~~~~~~~~~~~~         ~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
++++++++++++++++++++++++
VMAX3 Gatekeeper Devices
++++++++++++++++++++++++
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| %% 3-cylinder thin devices (~6 MB)                                       |
| %% Receive low-level SCSI I/O from SYMCLI/GUI                            |
| %% Used as the target of SYMCLI/SYMAPI commands                          |
|    *** Commands are passed through gatekeepers to the VMAX3 for action   |
|    *** Locked during the passing of commands                             |
|    *** Lots of commands flowing to the VMAX3 from many applications on   |
|        the same host can cause a gatekeeper shortage                     |
| %% Must be accessible from the host executing the commands               |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
++++++++++++++++++++++++++++++++++++++++++++
CONCURRENT CONFIGURATION SESSIONS ON VMAX3
++++++++++++++++++++++++++++++++++++++++++++
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| @@ Concurrent provisioning                                                  |
|    - Up to four concurrent, non-conflicting configuration change sessions.  |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
++++++++++++++++++++++++++++++++++++++++++++++
CONFIGURATION CHANGES USING UNISPHERE FOR VMAX
++++++++++++++++++++++++++++++++++++++++++++++
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| @@ Multiple ways to invoke configuration changes via Unisphere |
|    -- Depends on the type of configuration change              |
|    -- Unisphere has many wizards.                              |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
++++++++++++++++++++++++
Unisphere - SRP Headroom
++++++++++++++++++++++++
@@ The Storage Groups dashboard in Unisphere for VMAX shows all the
   configured storage resource pools and the available headroom for each SLO.
@@ Prior to allocating new storage to a host, it is a good idea to check the
   available headroom.
----------------------
Unisphere - SRP details
----------------------
----------------------
Unisphere - Job List
----------------------
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| @@ List of jobs                                                             |
|    - Yet to be run - can be run on demand or scheduled for later execution  |
|    - Jobs that are running, successfully completed, or failed.              |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
For example: we can create a job for creating volumes/LUNs, then click the
Run button to create the LUNs now, or click the Schedule button to create
them later.
------------------------------
Configuration changes - SYMCLI
------------------------------
## Query
   - symconfigure query -sid <sid#>
## Abort
   - A configuration change session can be terminated prematurely using the
     abort command.
   - Premature termination is only possible before the point of no return.
   - symconfigure -sid <sid#> abort -session_id <session-id>
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-------------
FAST with SLO
-------------
One of the major changes with VMAX3 is the way we provision storage. FAST has
been enhanced to work at a more granular level (the 128 KB track level), and a
lot of the internals have been abstracted away so that the end user need not
be concerned with the mechanics of the array: they can simply provision
capacity and set a performance expectation that the array will work to
achieve.
We are no longer required to create meta devices to support larger devices,
and the SLO model makes provisioning intuitive and easy.
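Since FAST now operates at the 128 KB track level, the number of FAST-managed extents in a device is simple arithmetic; a quick sketch:

```python
# FAST on VMAX3 manages data at the 128 KB track level.
TRACK_KB = 128

def tracks_in_device(size_gb: int) -> int:
    """Number of 128 KB tracks in a device of the given size (GB = 2**30 bytes)."""
    return size_gb * 1024 * 1024 // TRACK_KB

# e.g. a 2048 GB TDEV
print(tracks_in_device(2048))
```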
-------------------------
Creation of Storage Group
-------------------------
@@ We can create the storage group just as we created it earlier for VMAX2.
@@ Assigning an SLO and workload is optional.
@@ If no SLO or workload is specified, FAST will still manage everything, but
   your SLO will be Optimized.
@@ VMAX3 supports 64K (64,000) storage groups (we can create one storage
   group per application).
@@ At present we can create devices up to 16 TB, soon to be increased
   further.
-----------------------------------------------------------------------------
symconfigure -sid 007 -cmd "create dev count=5, config=tdev,
emulation=fba, size=2048 GB, sg=myapp_sg;" preview
-----------------------------------------------------------------------------
----------------------------------------------------------
symsg -sid <sid#> create myapp_sg -slo gold -workload oltp
----------------------------------------------------------
@@ Present to the host via a masking view; no change from VMAX here.
++ Here I will highlight a few of the key commands to gather information
   about the configuration and interaction with the SRP and SLO.
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
IMP NOTE: Monitoring and alerting of FAST SLO is built into Unisphere for VMAX.
SLO compliance is reported at every level when looking at storage group
components in Unisphere.
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
------------------------------------
Viewing SRPs configured on the array
------------------------------------
----------------------------
symcfg -sid <sid#> list -srp
----------------------------
----------------------------------------------
symcfg -sid <sid#> show -srp <srp-name> -tb/gb
----------------------------------------------
----------------------------------
Viewing available SLOs on the VMAX
----------------------------------
----------------------------
symcfg -sid <sid#> list -slo
----------------------------
++ To get more detail on the SLOs and the workloads that can be associated
   with storage groups, we can run this command:
------------------------------------------------------
symcfg -sid <sid#> list -slo -detail -by_resptime -all
------------------------------------------------------
--------------------------------
Viewing SRP capacity consumption
--------------------------------
@@ To get an idea of how your storage is being consumed from the command
   line, we can run this command:
---------------------------------------------
symsg -sid <sid#> list -srp -demand -type slo
---------------------------------------------
++ The above command shows how the SRP is being consumed by each SLO.
++ It will also list how much space is consumed by DSE and snapshots.
   (Note that this capacity all comes from the SRP, so it's worth keeping
   an eye on.)
------------------------------------------
Listing SLO associations by Storage Group
------------------------------------------
@@ The previous command (symsg -sid <sid#> list -srp -demand -type slo)
   gives a good idea at a high level.
@@ But if we want to see, at the storage-group level, which storage groups
   are associated with each SLO, we can use the command below:
--------------------------------------
symsg -sid <sid#> list -by_SLO -detail
--------------------------------------
++ The above command shows, for each storage group, whether or not it is
   associated with an SLO.
++ We can also get some detail about the number of devices in a storage
   group (not the capacity):
----------------------------------------------
symcfg list -tdev -bound -detail -sg <sg-name>
----------------------------------------------
## We can see the full breakdown of the SRP, including drive pools and which
   SLOs are available, as well as TDAT information. With this command we get
   information such as which thin devices (TDEVs) are bound to the SRP and
   how much space each is consuming:
----------------------------------------------------------
symcfg -sid <sid#> show -srp <srp-name> -gb -detail | more
----------------------------------------------------------
-----------------------------------------------------------------------------
Changing the SLO on existing storage groups (storage groups that are already
associated with a masking view and are in production)
-----------------------------------------------------------------------------
---------------------------------------------------------
symsg -sid <sid#> -sg test set -slo platinum -wl OLTP_REP
---------------------------------------------------------
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Solutions Enabler 8.x also allows for moving devices between storage groups
non-disruptively.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++