This is an Instructor Led course with all relevant student materials being provided in printed
student guides.
The objectives for this course are shown here. Please take a moment to read them.
The course assumes the student has base knowledge and has met the prerequisites
detailed on this slide. A list of specific prerequisite courses can be found in EMC Education
Services Learning Management System.
You will be asked by the Instructor to provide some pertinent information in response to
the questions shown on the slides to gauge your level of understanding of the subject
matter.
Mornings start with a lecture at 9:00 A.M. There is an hour-long lunch break, usually taken at noon. There are two 15-minute breaks, one in the morning and one in the
afternoon. The instructor will set times at the beginning of class.
Please adhere to the classroom etiquette guidelines as listed here on the slide and be
courteous to all other course participants.
This is a 3-day lab-oriented course that walks participants through a Vblock Platform
Deployment. The slide shows what is expected for day 1.
This is a 3-day lab-oriented course that walks participants through a Vblock Platform
Deployment. The slide shows what is expected for days 2-3.
This module will present an overview of the building blocks that make up the Vblock
Infrastructure.
The objectives for this module are shown here. Please take a moment to read them.
The objectives for this lesson are shown here. Please take a moment to read them.
A Vblock System is an integrated solution that combines compute, network, storage and
management components into a single package. This package is a self-contained unit that
can be utilized to deploy a single service, multiple services, or can be aggregated with
additional Vblock Systems to support larger initiatives.
VCE offers Vblock infrastructure packages for all environments: starting with the Vblock
Series 300 EX and FX models for small organizations and consolidation projects, extending to the
scale-out capabilities of the Vblock Series 300 GX and HX models, up to the large enterprise-class
Vblock Series 700 Model MX with enterprise-class Symmetrix VMAX storage and its
advanced replication and disaster recovery capability.
Scalability for a Range of Business Applications
Address a wide range of virtual machines, users, and applications. Scale up or out for
private or public cloud environments.
Leverage the simplicity and efficiency of the EMC VNX family to improve TCO.
Implement policy-based, automated provisioning for the entire infrastructure.
Extensible to meet the most demanding IT needs
Vblock Series 300 EX and FX Models provide an entry-level configuration, furnishing the
benefits of infrastructure management consolidation
Vblock Series 300 GX and HX Models extend further to organizations of all sizes,
highlighting the benefits of shared services, such as virtual desktops, email, etc.
Vblock Series 700 MX Models are designed for high intensity application environments, and
are thus ideal for the enterprise and Service Providers, or private clouds hosting business
critical ERP and CRM systems. 700 Series Vblocks are scalable to thousands of VMs and
petabytes of storage.
The Vblock Series 300 is a new line of four Vblock Infrastructure Platforms that are based on the EMC VNX
series of unified storage arrays. The Vblock Series 300 Infrastructure Platforms have the following features:
Optimized, fast delivery configurations based on the most commonly purchased components
Common power solution across all Vblock Series 300 cabinets with three North American and two non-North American options
Smaller base configurations with fewer drives, fewer blades, and more granular flexibility on the
configuration
Granular, but optimized compute and storage growth by adding predefined kits and packs
New array software for replication and reporting
VMware vStorage API for Array Integration (VAAI) enablement
New Advanced Management Pod (AMP) models for both value and high availability requirements
The Vblock System 700 Models are designed for deployments involving large numbers of virtual machines
and users. The Vblock Series 700 is available in two models:
Vblock Series 700 model MX (700MX)
Vblock Series 700 model LX (700LX)
The 700MX utilizes a SAN storage medium or a NAS (File) storage medium. UCS local boot disks are optional.
The 700LX delivers a multi-controller, scale-out storage architecture with consolidation and efficiency for the
enterprise. It allows scaling of storage resources through common and fully redundant building blocks called
VMAXe series engines. The 700LX is designed for deployments of large numbers of virtual machines and
users. It meets the higher performance and availability requirements of an enterprise's business-critical
applications.
A VG-8 gateway system is required for file level storage on the Vblock System 700 Models.
The AMP is the recommended management option for the Vblock; however, it is not a
mandatory component. If it is installed, it greatly reduces the implementation time for
the Vblock infrastructure:
Self-contained management infrastructure for the Vblock
Remote access capability, private NATing, security
Management software (UIM, vCenter, etc.) for the Vblock, running as virtual machines
on two C200 ESXi hosts
Used in an Operate model for remote access and operational tasks
Can be used for a customer who wants a dedicated management infrastructure for their
Vblock
Shown here is the High Availability model of the AMP.
Note: In the 300EX, the AMP is not installed in the base cabinet. The AMP must be installed
within an external SE cabinet, aggregation cabinet, or customer-provided cabinet. The mini-AMP occupies three rack units (RU). The HA AMP occupies 6 RU.
This diagram depicts the various element managers that are involved in managing the
Vblock infrastructure, as well as the associated Virtual Machines they would run on.
Cisco Data Center Network Manager (DCNM) solutions provide proactive, highly secure
management of data center Ethernet and SANs.
The Cisco Nexus 1000V Series Switches are virtual machine access switches: an intelligent
software switch implementation, running the Cisco NX-OS operating system, for VMware
vSphere environments. The Cisco Nexus 1000V Series operates inside the
VMware ESX hypervisor and supports Cisco VN-Link server virtualization technology,
providing the benefits described below.
When server virtualization is deployed in the data center, virtual servers are not typically
managed in the same manner as physical servers. Server virtualization is treated as a
special deployment, leading to longer deployment time and a greater degree of
coordination among server, network, storage, and security administrators. The Cisco Nexus
1000V Series provides a consistent networking feature set and provisioning process all the
way from the VM access layer to the core of the data center network infrastructure. Virtual
servers can now leverage the same network configuration, security policy, diagnostic tools,
and operational models as the physical servers that are attached to dedicated physical
network ports. Virtualization administrators can access predefined network policy that
follows mobile virtual machines to ensure proper connectivity, saving valuable time to
focus on virtual machine administration.
The Cisco Nexus 1000V was developed in close collaboration with VMware and is certified
by VMware to be compatible with VMware vSphere, vCenter, ESX, and ESXi, and with many
other VMware vSphere features.
The objectives for this lesson are shown here. Please take a moment to read them.
VCE Build Services redefines data center deployment. Vblock Infrastructure Platforms are
fully integrated and tested in a controlled factory environment by VCE technicians. Then
VCE and partner teams install, configure, and tune the Vblock System in the organization's
data center, typically within five days or less, so the platform is ready for application
migration in order to speed time to value.
Sizing, buying, receiving, assembling, configuring, testing, and validating vs. pre-configured,
pre-tested, ready to grow and ready to go!
The Deployment Lifecycle starts, like that of any infrastructure platform, with planning and data
gathering. From there the system is built, configured to customer specifications, delivered,
installed, and validated. The service is designed to deploy a solution from concept to production in
roughly the same amount of time it takes to install a single component.
Depending on your job responsibilities and which organization you work for, you will access
a different section of documentation.
Partners: VCE Partner Resource Center > Resource Library > Category: Technical
www.vcepartnerportal.com/resourcelib-vce.asp?loc=331
Requires a valid user name and password.
Cisco, EMC, VCE, or VMware employees: VCE Portal > Vblock Infrastructure Platforms
Series 100: www.vceportal.com/solutions/Series100
Series 300: www.vceportal.com/solutions/Series300
Series 700: www.vceportal.com/solutions/Series700
Vblock 0, Vblock 1, Vblock 1U: www.vceportal.com/solutions/2010Models
Release Certification Matrix: www.vceportal.com/solutions/releasematrix
Note: The Logical Build is THE ultimate build reference for all Vblock deployments. This
lesson will highlight a subset of the steps presented in the Logical Build Guide when
configuring the AMP. Read the notes section of each slide for more information, and always
reference the Logical Build Guide itself DIRECTLY when on a customer site, since build
contents are updated frequently. For the same reason, the HTML version may be simpler to
work with than PDF (since copying and pasting, etc. is not allowed in the secure PDF).
Currently, this is available only on the VCE Portal and not the partner portal.
When walking through the initial onsite validation of a customer Vblock, you will need to
frequently refer to the Logical Configuration Survey, since the majority of the customer
configuration (ideally all of it) will have been designed and implemented in the Vblock
during the manufacturing and testing process.
The Vblock Physical Build Guides are designed to be used online at VCE manufacturing
facilities. They describe all the activities required to assemble and cable a given Vblock
Series. After completing the tasks in this guide, the Vblock System is configured by VCE
employees to meet the specific needs of the customer. Once configured, it is shipped to
the customer site where the build process is completed by connecting the racks and
integrating the Vblock System into the customer's environment.
Whether a Vblock System is deployed with or without UIM does not matter to the deployment
engineer; it simply determines which procedure to follow.
We do not encourage customers to put space between the cabinets. Doing this would
require an extended lead time on the order as the cross-cabinet connections would not
reach and custom cables would need to be ordered.
The final steps of a deployment include validating the appliance to the customer. It is
usual to navigate each component using the individual element managers and then walk
through the customer-defined (or agreed-upon) test and acceptance plan. Once the
customer is satisfied that the system is fully operational, change the user names and passwords to
customer-supplied values to complete the install.
These are the key points covered in this module. Please take a moment to review them.
The objectives for this module are shown here. Please take a moment to read them.
The objectives for this lesson are illustrated here. Please take a moment to read them.
Most environments have a reference architecture today. Only those running a Vblock System
have a Converged Infrastructure! Buying the individual components and putting them
together may produce the physical appearance of a Vblock System, but how long did it take, and how
many numbers are there to call for support? Did the unit arrive onsite preconfigured, ready to go and
ready to grow?
The MDS family represents an extensive selection of networked storage connectivity products.
MDS integrates high-speed Fibre Channel connectivity (1 to 10 Gb/s), highly resilient
switching technology, and options for intelligent IP storage networking. This wide range of
connectivity options allows you to configure MDS directors, switches, and routers to meet
any business requirement. MDS products provide more than just network connectivity.
They offer:
Simple, centralized, automated SAN management
Proven interoperability across your networked storage solution
The highest availability to meet escalating business continuity and service level
requirements
Scalability with built-in investment protection
Cisco MDS switches for intelligent SANs are an integral part of an enterprise data center architecture
and provide a better way to access, manage, and protect growing information resources
across a consolidated Fibre Channel, Fibre Channel over IP (FCIP), Small Computer System
Interface over IP (iSCSI), Gigabit Ethernet, and Optical network.
It's important to ascertain which VNX series platform meets your business requirements.
EMC makes it easy by offering the broadest range of unified storage platforms in the
industry; rate your requirements and choose your solution.
The Symmetrix VMAX is available in two different basic configurations. A single cabinet
configuration (1A) includes a single processing module enclosure with two directors and
between 40 and 120 disk modules. This system can be expanded by adding a second
storage bay and up to 240 additional drive modules. The multi-enclosure systems include
separate system and storage bays. The system bay may include up to eight processor
module enclosures with 2 to 16 physical director boards. A minimum of one storage
bay is required, and a maximum of 10 storage bays enables configurations of up to
2400 disk drives.
The objectives for this lesson are illustrated here. Please take a moment to read them.
Data Center Rack Integration Services are available for customers who have specific data center
racking requirements.
Base, Expansion, and Storage Rack types are available for the Vblock System Series 300.
Base, Expansion, and Storage Rack types are available for the Vblock System Series 700.
Shown above is a vertical view of the Vblock System physical cabling. It attempts to show
what components are cabled where.
Open the Wiring Tool for examples of Vblock System physical cabling.
When initially powering on a Vblock System, the prescribed power-on sequence must be
followed.
The objectives for this lesson are illustrated here. Please take a moment to read them.
When you download a new pair of kickstart and system images, you also get a new BIOS
image because it is included in the system image. You can use the install all command to
upgrade the kickstart, system, and upgradeable BIOS images.
Validate the version:
switch# show version
Cisco Nexus Operating System (NX-OS) Software
TAC support: http://www.cisco.com/tac
Software
  BIOS:      version 2.1.0
  kickstart: version 5.0(0)N1(2)
  system:    version 5.0(0)N1(2)
At this point, the name of your switch is entered along with the IP address and subnet mask of the OOB
Ethernet management port interface. Without this information, management access to the switch through
the OOB Ethernet port would not be possible.
When there are options to select with each dialog, you can either press Return, which accepts the choice
indicated between the square brackets (for example, [n]), or you can select the alternative. In the example, n,
for no, was entered at Enable IP routing?, Configure static route?, and Configure the default network?
because [y] was the current selection and these items were not desired in the configuration. However,
Configure the default gateway? was desired, so pressing Return enabled the user to enter an IP address on
the next dialog line. No other options in the example dialog script were changed.
A Network Time Protocol (NTP) server provides a precise time source (radio clock or atomic clock) to
synchronize the system clocks of network devices. NTP is transported over User Datagram Protocol (UDP)/IP.
All NTP communications use Coordinated Universal Time (UTC). An NTP server receives its time from a
reference time source, such as a radio clock or atomic clock, attached to the time server. NTP distributes this time
across the network. Using NTP is optional but recommended.
Telnet services are enabled to remotely log on to the switch. The DNS client on the switch communicates
with the DNS server to perform the IP address-to-name mapping. Setting up the Domain Name Server (DNS)
is optional but recommended.
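For illustration, NTP and DNS can also be set from the CLI after the initial setup dialog; the following is a minimal sketch, assuming placeholder addresses and domain name rather than values from the Logical Configuration Survey:
switch# configure terminal
switch(config)# ntp server 192.0.2.10
switch(config)# ip domain-name example.com
switch(config)# ip name-server 192.0.2.53
switch(config)# end
switch# copy running-config startup-config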
The system prints a summary of the configuration for your review. The configuration printed will be exactly
what you entered. Compare it once more with the information you obtained in the initial setup requirements
to verify there are no typing errors. If everything was entered correctly, there is no need to edit.
The system asks if you would like to edit the configuration that just printed out. Any configuration changes
made to a switch are immediately enforced but are not saved. If no edits are needed, then you are asked if
you want to use this configuration and save it as well. Since [y] (yes) is the default selection, pressing
Return activates this function, and the configuration becomes part of the running-config and is copied to the
startup-config.
This also ensures that the kickstart and system boot images are automatically configured. Therefore, you do
not have to run a copy command after this process. A power loss restarts the switch using the startup-config,
which has everything saved that has been configured to nondefault values. If you do not save the
configuration at this point, none of your changes will be present the next time the switch is rebooted.
It is recommended that the one-step install all command be used to upgrade your system
software. This command upgrades all modules in any MDS-Series switch. Only one install
all command can be running on a switch at any time, and no other command can be issued
while running that command. The install all command cannot be performed on the
standby supervisor module. It can only be issued on the active supervisor module.
The general steps to upgrade your system are:
Log into the switch through the console, Telnet, or SSH port of the active supervisor.
Create a backup of your existing configuration file, if required.
Perform the upgrade by issuing the install all command. The example above
demonstrates upgrading to SAN-OS 3.0.1 using the install all command.
When upgrading, images can be retrieved in one of two ways:
Local, where images are locally available on the switch. The install all command
uses the specified local images.
Remote, where images are in a remote location and the user specifies the
destination using the remote server parameters and the file name to be used
locally.
To upgrade the switch to a new image, you must specify the variables that direct the switch
to the images. To select the kickstart image, use the kickstart variable, or to select the
system image, use the system variable. The images and variables are important factors in
any install procedure. You must specify the variable and the image to upgrade your switch.
Both images are not always required for each installation.
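For illustration, a local upgrade invocation might look like the following; the image file names here are placeholders, so substitute the kickstart and system images for the release you are actually installing:
switch# dir bootflash:     (verify that both images are present on the switch)
switch# install all kickstart bootflash:kickstart-image.bin system bootflash:system-image.bin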
With the pre-installation tasks (system unpacking/racking and cabling) complete, the next
installation steps are focused on powering up the system and performing the system
initialization, system health checks, and product registration with VNX Installation Assistant
for File/Unified. Once the system is initialized, ConnectHome must be configured.
The general steps for creating a new bin file are shown here. The IMPL.bin file is initially created using
SymmWin and loaded into each director in the Symmetrix. The IMPL.bin defines the logical
and physical configuration of a Symmetrix system.
Network access requires a known IP address. Use IP addresses provided by the customer in
the Logical Configuration Survey (LCS) or a reserved DHCP. The factory-configured IP
addresses for POUs in the Vblock Platform are 192.168.123.123. These must be changed to
valid addresses.
An example of how to configure the Power Outlet Unit:
Connect the POU to a system using a crossover cable
Reconfigure the system's network properties to be on the default address network subnet
Use a web browser to access the POU at the permanent IP address
The objectives for this module are shown here. Please take a moment to read them.
The objectives for this lesson are shown here. Please take a moment to read them.
Element Managers are used for component initialization, UIM preparation and installations
where UIMP is not deployed.
This diagram depicts the various element managers that are involved in managing the
Vblock infrastructure, as well as the associated protocols and/or APIs.
The key part of the Unified Computing System is UCS Manager, which manages the entire system.
By a system, we mean the chassis and the servers within the chassis, as well as the
Fabric Extenders, and all of the chassis that are part of a single pair of what we
call the Fabric Interconnects. A number of servers are part of each chassis.
The graphical user interface consists primarily of the right and left panes for most activities.
The left, or Navigation, pane consists of a fault summary bar across the top and a series
of 5 tabs that offer differing views of the various managed components in the California (UCS)
system. The fault summary has four conditions: critical, major, minor, and
warning. The fault summary contains the cumulative totals for the entire California
system.
An expandable branch or tree function allows the operator to traverse the various
components located in the five tabs.
The right, or Content, pane consists of a top toolbar with a back button, new object creation
pull-down, options and questions buttons, an information button, and a debug pull-down menu.
The second toolbar in the content pane offers the operator a breadcrumb trail of object
hierarchies already traversed, with the ability to rapidly return to a previous location along
the trail. At the rightmost portion of this bar is the current location.
The largest part of the content pane offers granular details associated with the various
objects that have been highlighted in the navigation pane. At the very bottom of the
content pane are the function buttons for committing or saving, as well as
discarding, the changes made here.
The CLI is organized into a hierarchy of command modes, with the EXEC mode being the
highest-level mode of the hierarchy. Higher-level modes branch into lower-level modes.
You use create, enter, and scope commands to move from higher-level modes to modes in
the next lower level, and the exit command to move up one level in the mode hierarchy.
Most command modes are associated with managed objects, so you must create an object
before you can access the mode associated with that object. You use create and enter
commands to create managed objects for the modes being accessed. The scope commands
do not create managed objects, and can only access modes for which managed objects
already exist.
Each mode contains a set of commands that can be entered in that mode. Most of the
commands available in each mode pertain to the associated managed object. Depending
on your assigned role and locale, you may have access to only a subset of the commands
available in a mode; commands to which you do not have access are hidden.
The CLI prompt for each mode shows the full path down the mode hierarchy to the current
mode. This helps you to determine where you are in the command mode hierarchy, and
can be an invaluable tool when you need to navigate through the hierarchy.
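As a brief sketch of this hierarchy (the VLAN name and ID below are placeholders), creating a named VLAN from the UCS Manager CLI uses the scope and create commands and commits the change with commit-buffer; note how the prompt shows the full path down the mode hierarchy:
UCS-A# scope eth-uplink
UCS-A /eth-uplink # create vlan vblock_esx_mgmt 101
UCS-A /eth-uplink/vlan* # commit-buffer
UCS-A /eth-uplink/vlan # exit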
Fabric Manager is the management tool for the Cisco MDS switches. Fabric Manager comes in two
configurations: standalone and server. Fabric Manager Standalone (FM) is a free product that is installed
on any host that will be performing management tasks. The installation bundles a Postgres
database with it, or it can be pointed to an external database. The standalone version of Fabric Manager can only
manage (open) one fabric at a time, and does not offer performance monitoring capabilities, as well as
some other administrative features. The Fabric Manager Server (FMS) requires a license be installed on
every switch in the fabric, and can simultaneously manage multiple fabrics as well as collect
performance statistics, etc. Unlike FM, FMS is deployed in a client/server model. So, the server portion
is installed on a single host and the client portion is installed on any number of management stations.
The client stations connect to the server and retrieve information from the centralized database.
Both versions of Fabric Manager can manage all aspects of the fabric, including ports, enabled features,
zoning and security. Fabric Manager provides the ability to manage all elements in the fabric from a
single interface. For switch specific tasks, such as manipulating ports or viewing element statuses, some
prefer to use Device Manager (DM). DM is a switch-centric tool that is installed separately from FM, but
can be launched from within FM.
The MDS switches have a very robust CLI integrated into the NX-OS operating system. The CLI is accessed by
establishing a SSH session with the management port of the switch. The CLI provides auto-complete for
commands by using the Tab key and context-sensitive help by using the ? as part of a command.
Fabric Manager can also be used to perform limited management of the Nexus 5000 series switches. For full
management of these switches, as well as the Nexus 1000v, the CLI should be used.
The slide shows the WWN of blade one, 20:00:00:25:b5:10:2a:01, zoned to VMAX Director
FA-8F port 1 on Fabric A. It also shows the WWN of blade one, 20:00:00:25:b5:10:2b:01,
zoned to VMAX Director FA-7F port 1 on Fabric B.
In general practice, the ports would be zoned to different VMAX engines to provide
connectivity redundancy. The VMAX is an active-active array, and PowerPath would also
be loaded onto the compute blade to manage multipathing to the storage volumes.
Unisphere is web-based software that allows you to configure, administer, and monitor the
VNX series. It replaces the previous interfaces used to manage Celerra (Celerra Manager)
and CLARiiON (Navisphere).
By consolidating the management of multiple devices into one GUI, Unisphere gives you an
overall view of what is happening in your environment plus an intuitive and easier way to
manage EMC unified storage.
VNX management can be performed using the Navisphere Secure CLI. It is a client
application that allows simple operations on the EMC VNX Series platform, and some other
legacy storage systems. It uses the Navisphere 6.X security model, which includes role-based management, auditing of all user change requests, management data protected with
SSL, and centralized user account management.
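A minimal sketch of invoking Navisphere Secure CLI against a VNX storage processor; the SP address and credentials are placeholders, so use the values from the Logical Configuration Survey:
naviseccli -h 192.0.2.50 -User sysadmin -Password <password> -Scope 0 getagent    (report basic array and agent information)
naviseccli -h 192.0.2.50 -User sysadmin -Password <password> -Scope 0 getlun      (list LUN properties)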
Symmetrix Management Console provides device management for both the Symmetrix VMax and Symmetrix DMX products. Several key features simplify storage management in
virtual data centers and cluster environments. As data centers continue to embrace
virtualization, management tools are required to tier, consolidate, and scale physical
resources.
Symmetrix Management Console manages the following features:
Auto-provisioning Groups: Map and mask initiator groups, storage ports, and
storage groups
Virtual Provisioning: Also known as thin provisioning
Enhanced Virtual LUN Technology: Data mobility within the array and movement
between tiers
Symmetrix Management Console also offers several ease-of-use functions such as wizards
that help streamline the process for Auto-provisioning, SRDF replication configuration, and
Enhanced Virtual LUN Technology. Additionally, there is the ability to create storage
templates for reuse in provisioning storage.
Symmetrix Management Console is loaded on the Service Processor, eliminating the
need for another server host.
Symmetrix Management Console complements both the ControlCenter and
SYMCLI; it is a lightweight software package with a web-based GUI.
SYMCLI can be used to perform ad-hoc operations or incorporated into user-developed
scripts to integrate Symmetrix management and control with the application and host
environment.
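A few common SYMCLI queries are sketched below; the Symmetrix ID 1234 is a placeholder:
symcfg list                     (list the Symmetrix arrays visible to this host)
symdev -sid 1234 list           (list the devices configured on array 1234)
symaccess -sid 1234 list view   (list the Auto-provisioning masking views)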
These are the key points covered in this module. Please take a moment to review them.
The objectives for this module are shown here. Please take a moment to read them.
The objectives for this lesson are illustrated here. Please take a moment to read them.
The AMP is the recommended management option for the Vblock; however, it is not a
mandatory component. If it is installed, it greatly reduces the implementation time for
the Vblock infrastructure. The AMP contains:
An out-of-band management infrastructure for the Vblock
Remote access capability, private NATing, security
Management software (VMware vCenter, vCenter Database, vCenter Update Manager
for Vblock platform, Active Directory, DNS, DHCP, EMC Unified Infrastructure
Manager/Provisioning 3.0 (UIM/P), Cisco Nexus 1000V VSM, Unisphere Service
Manager, EMC VNX Initialization Utility, PowerPath/VE and Fabric Manager) running as
(or accessible through) virtual machines on two C200 ESXi hosts
Used in an Operate model for remote access and operational tasks
Can be used for a customer who wants a dedicated management infrastructure for their
Vblock
The AMP is a required component in the Vblock Infrastructure if Remote Services are
required by the customer to be performed by VCE.
The following table lists the Advanced Management Pod (AMP) components for the
following Vblock Series 300 models: 300EX, 300FX, 300GX, 300HX, and 700MX.
This list is only valid for release 2.5.3 of the Certification Matrix. For the most recent
information, please reference
http://vblockproductdocs.ent.vce.com/release_certification_matrices.htm#Series_300
Deployment of the Advanced Management Pod is broken down into eight high-level steps,
which will be discussed in more detail in the following slides. Note that there are slight
variations in procedure depending on whether the mini-AMP or the HA AMP is being
deployed. As always, reference the latest Logical Build Guide for more detail on each step.
1) Connect to the console port of the management switch with a terminal emulator using the following settings:
9600 baud
8 data bits
no parity
1 stop bit
no flow control
2) Power on the management switch.
3) When asked if you want to enter the initial
configuration dialog, type no.
The Cisco C200 comes with two 300 GB SAS drives that are mirrored together at the controller
level for high availability. Unlike other Vblock system blades, which boot from the SAN, the
AMP uses internal drives for its operating system. Before you can install the OS to the
internal drives, the C200 firmware must be upgraded to the VCE-required level. Once
upgraded, the RAID controllers must be configured and the IME volume created across the
internal disks. Once the RAID controllers have been properly configured, ESXi can be
installed and configured on the server.
1) Log into the NAT-ed address of the C200-A server as admin. Refer to Vblock Platforms usernames and passwords for
the password.
2) Navigate to Server-->Remote Presence-->Virtual Media.
3) On the Virtual Media tab, check Enabled and Save Changes.
4) Navigate to the Admin tab-->Network Settings tab.
5) Input the DNS servers and change the Hostname to VxxxxxRMCM01.
6) Launch the KVM console:
7) At the KVM console, go to Tools-->Launch Virtual Media.
8) Click Add Image.
9) Select the VMware ISO file that reflects the version that you are loading and click Open.
10) When the Virtual Media Session window displays, select the Mapped check box for the ISO file.
11) Allow the server to boot.
12) When prompted, press F6 to enter the boot menu.
13) On the Please Select Boot Device screen, select Cisco Virtual CD/DVD.
14) When the VMware screen appears, select ESXi installer. The installer will begin loading files. This could take several
minutes to complete.
15) After the files have finished loading and the "Welcome" screen appears, press Enter.
16) Press F11 to accept the license agreement.
17) On the Select a Disk screen, choose the RAID volume and press Enter.
18) Press F11 to confirm the install and continue.
19) After the installation is complete, press Enter to reboot the server.
20) When the server is back up, press F2 to customize the system.
21) Select Configure Password and press Enter.
22) Type the password twice and press Enter. Refer to Vblock Platforms usernames and passwords for the password.
23) Scroll down to Configure Management Network and press Enter.
24) Scroll down to IP Configuration and press Enter.
25) Scroll down to VLAN (optional) and set VLAN to 101. Note: If the customer's VLAN is different, use the customer's
VLAN number.
26) Highlight Set static IP address and press the space bar to select.
27) Configure the IP settings and press Enter. For the customer-specific IP settings, refer to the AMP ESXi host section of
the customer's logical configuration survey. Confirm that the ESXi host address is accessible.
28) If you are configuring an HA AMP, repeat the above steps on the C200-B server using the following values:
Set the IP of C200-B CIMC to the value requested in the customer's logical configuration survey.
Set the ESXi management IP to the value requested in the customer's logical configuration survey. Confirm that the
address is accessible.
1) Log in to vCenter.
2) Click the ESXi host.
3) Click the Configuration tab.
4) Click Networking.
5) Click Properties on the vSwitch.
6) Click Add.
7) Ensure that the virtual machine is selected.
8) In the Network Label field, type the VLAN name. For example, vblock_esx_mgmt.
Note: VLAN names are case sensitive. Make sure that any names that you use from the
logical configuration survey exactly match the names as they are specified in the survey.
9) Enter the VLAN ID.
10) Click Next.
11) Click Finish.
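As an alternative sketch, the same port group can be created from the ESXi command line; the port group name and VLAN ID match the example above, while the vSwitch name vSwitch0 is an assumption:
esxcli network vswitch standard portgroup add --portgroup-name=vblock_esx_mgmt --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name=vblock_esx_mgmt --vlan-id=101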
Steps for each of these deployment processes are not covered in exhaustive detail. For
specific information on the exact settings and requirements for each of these components,
see the Configuring management VMs and ESXi hosts (vSphere 5 or 4) section in the Logical Build
Guide.
For more information on overall security best practices and specific settings to configure on
each VM, see Lesson 2 of Module 8.
The objectives for this module are shown here. Please take a moment to read them.
vMotion moves VMs across physical ports; the network policy must follow
From a network perspective, one would like to have a security policy that is attached to the virtual machine
as it moves. Unfortunately, today's tools only allow for network policy to be attached to the physical
server. In fact, VMware has a tool called DRS, or Distributed Resource Scheduler, that automatically migrates
the VM depending on CPU and memory loads. Regardless of the time of day, network administrators need to
know what the VMs are doing. What they really need is a mobile security policy attached to the VM.
Impossible to view or apply network policy to locally switched traffic
The second issue with server virtualization is the virtual switch inside the hypervisor that switches packets
between virtual machines. It is actually fairly difficult to see which VM is actually talking to other VMs inside
the server. Customers are demanding troubleshooting and debugging capabilities inside the server.
Need collaboration between network and server admin
There is muddled ownership of the virtual switch. Nowadays, server admins manage the virtual switch, and
they need constant communication with their network administrator to configure the virtual switch. On one
hand, Server admins want their network team to configure the virtual network. On the other hand, network
admins are demanding network tools to configure the virtual switch and they want visibility down to the
virtual machine.
Nexus 1000V overcomes these three server virtualization issues, and accelerates datacenter virtualization.
The Cisco Nexus 1000V is a virtual access software switch that works with VMware vSphere
and has the following components:
Virtual Supervisor Module (VSM): The control plane of the switch and a virtual
machine that runs Cisco NX-OS.
Virtual Ethernet Module (VEM): A virtual line card embedded in each VMware vSphere
(ESX) host. The VEM is partly inside the kernel of the hypervisor and partly in a user-world process, called the VEM Agent.
The objectives for this module are shown here. Please take a moment to read them.
The objectives for this lesson are shown here. Please take a moment to read them.
EMC Ionix Unified Infrastructure Manager is a single point of management for Vblocks. It
simplifies the configuration lifecycle of Vblock resources while at the same time ensuring
resources are allocated according to service requirements.
Ionix Unified Infrastructure Manager (UIM) is the only tool that manages multiple Vblocks
across compute, network, & storage resources
Before a Vblock is put into use, UIM can ensure that it is complying with configuration best
practices, and it can enforce those guidelines over time. By providing an automated
approach to implementing changes, UIM also helps enforce change management
discipline. UIM can also track and report on changes to the Vblock thereby supporting a
disciplined change management process.
UIM simplifies and accelerates the configuration and provisioning of Vblock network,
storage and compute resources.
Eliminates need for multiple server, network & storage configuration tools
No need for additional 3rd party tools to manage UCS compute
This diagram depicts the various element managers that are involved in managing the
Vblock infrastructure, as well as the associated Virtual Machines they would run on.
Cisco Data Center Network Manager (DCNM) solutions provide proactive, highly secure
management of data center Ethernet and SANs.
The objectives for this lesson are shown here. Please take a moment to read them.
The first set of requirements for UIM to successfully discover a Vblock is related to the
SAN. Note that the VSAN and Zoneset names are case-sensitive. When creating the
zonesets, you will not be able to activate them with no zones/members. Simply create the
zonesets and leave them inactive.
The first set of requirements for UIMP to successfully discover a Vblock is related to the
SAN. Note that the VSAN and zoneset names are case-sensitive. When creating the
zonesets, you will not be able to activate them with no zones/members. Simply create the
zonesets and leave them inactive.
Each fabric interconnect has a set of ports in the fixed port module that are configured as
either server ports or uplink ports. Ports are not reserved, but as part of installation you
must designate uplink and server ports. Expansion modules increase the number of uplink ports
on the fabric interconnect and provide Fibre Channel ports to the fabric interconnect.
A Service Profile isolates the attributes of a server from the physical hardware.
The steps for Vblock Series 300 storage environment configuration are listed on the slide.
Maximizes the size and number of the hypers per disk in order to best use the disks (less
wasted space), with minimum splits per drive. Hyper volumes are combined to provide a
protection scheme.
If a client has a different default datastore size, the auto_meta_member_size can be
adjusted so that either 4 or 8 member metas can be built to accommodate.
TDEVs do not have any real storage behind them until they are bound to a virtual pool. Preallocate 5% to 10% of the overall size during the bind process.
TDEV = Thin Device, a cache-only device that has no physical storage behind it. Once
bound to a storage pool, the device can be presented to a host like any other device.
TDAT = Data Device, used for forming the Thin Pool used for Virtual Provisioning. Multiple
pools are supported, and each drive architecture must reside in its own pool. As data is
written to the TDEV (Thin Device), the actual data is stored (and striped) across the pool
and all of the TDAT devices that make up the pool.
The objectives for this lesson are shown here. Please take a moment to read them.
UIM/P 3.0 is a SuSE Linux-based virtual appliance. Before you deploy it, you must first
obtain the UIM/P 3.0 .ovf files and .vmdk files. These files are available on EMC PowerLink.
Deploying the .ovf template requires a supported version of ESXi (currently ESX/ESXi 4.1 or
ESXi 5.0), a valid UIM/P 3.0 license key, and appropriate CPU, memory, and disk resources (2
CPUs, 16 GB of memory, and 140 GB of free disk space).
First, use the standard VMware OVF deployment process, using the UIMP_OVF10.ovf file.
Once complete, start the appliance.
Note: It takes 5-10 minutes for the appliance to be configured when starting the first time.
Communication between UCS and UIM/P is secured using HTTPS. Therefore, you must
enable HTTPS by first exporting the certificate from UCSM, then installing it on the UIM/P
server.
Next, configure VLAN settings on ESX(i) by editing the UcsNetworking.xml file. Change all
of the VLAN settings to the correct details for the Vblock, as specified in the Logical
Configuration Survey.
Note: Only edit the vlanName and vlanNumber. DO NOT edit the FunctionalVlan name.
Once this has been completed, you will be ready to import the VMware ESX/ESXi media for
deployment. Copy the required VMware ESX/ESXi media to the /tmp directory on the
UIM/P server and run the uim_loadesx.sh script (located in the /opt/ionix-uim/tools/
directory).
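A brief sketch of that media import step; the ISO file name is a placeholder and the exact script arguments may vary, so check the Logical Build Guide:
# copy the ESX/ESXi installation media to /tmp on the UIM/P appliance
scp VMware-VMvisor-Installer-5.0.0.iso root@uimp-appliance:/tmp/
# then run the import script from the tools directory
cd /opt/ionix-uim/tools/
./uim_loadesx.sh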
In order for UIM to discover NAS volumes within a VNX array, you must enable SMI-S within
the array. UIM cannot discover NAS volumes in a VNX storage array unless SMI-S is enabled
within the array.
The discovery process performs an inventory of the Vblock, so that UIM can learn what
resources are available to be used. You will need to perform this process any time you add
or remove devices from the Vblock.
When UIM discovers a Vblock, it captures information about the environment, including:
UCS blade servers (chassis, slot, model, RAM, Adapter, CPU, status)
UCS VLANs (name and VLAN ID)
SAN - VSANs
Storage Array Type, Storage Type, Total Capacity, Free Capacity, Subscribed capacity, RAID
Level, Disk Type
The UIMP provides setup validation status that includes the compliance status as well as
the severity level for validating the configuration of the Vblock System.
The Dashboard has several different categories of views, and is customizable to meet your
preferences.
The Vblock summary option displays a high-level tabular view of a Vblock. Multiples can be
added to the dashboard to show all available Vblock environments. All other displays are
per Vblock as well.
The Capacity by Quality shows the amount used and available for each grade in stacked
tabular format. Storage and blades are shown on different charts.
The available capacity shows the usage and availability of both blades and storage by usage
type (Used, Available, non-graded, Externally Used, etc.) in a pie chart format.
Vblock Capacity shows the storage and blade resources in a single stacked bar chart for
each location specified when grading the resources. Resources are grouped by grades.
These are the key points covered in this module. Please take a moment to review them.
The objectives for this module are shown here. Please take a moment to read them.
The objectives for this lesson are illustrated here. Please take a moment to read them.
Pools provide the ability to allocate server attributes in a UCS domain while enabling the
centralized management of shared system resources.
Policies determine how UCS components will act in a specific circumstance. You can create
multiple instances of most policies; for example, you may need different boot policies, so
that some servers boot from local storage, whereas others boot from SAN.
A policy-based management approach allows Cisco UCS Manager to use the metadata of
servers to abstract the state of the hardware. For example, the administrative state of
blades is managed with service profiles. A service profile contains values for a server's
property settings, including vNICs, MAC addresses, boot policies, firmware policies, and
other elements. By abstracting these settings from the physical server to a service profile,
you can deploy a service profile to any physical computing hardware in Cisco UCS.
Furthermore, the service profile can, at any time, be migrated from one physical server to
another.
A service profile is therefore the description of a logical server, and there is a one-to-one
relationship between a service profile and a physical server. A service profile template is
the blueprint for creating new service profiles. Using policies and pools that are defined by
functional administrators, server managers can create service profiles. For example, a
network administrator can define a pool of MAC addresses and policies such as quality of
service (QoS) for a VLAN. A server administrator can then use a MAC address from the pool
to create a service profile.
WWNN pools are used in the UCS environment to assign a block of virtualized WWNs that
can be assigned to a server when a service profile is created.
Worldwide Port Name Pools (WWPN): When a profile is being built, the number of virtual
host bus adapters (vHBAs) can be specified. Each vHBA needs to have a unique virtual
WWPN assigned to it. In most cases your WWPN pool should equal the number of blades
multiplied by two, because each blade has two virtual HBAs present. Multiple WWPN pools
can be created on a per-application basis to minimize SAN zoning requirements.
Policies are used to simplify management of configuration aspects such as where to boot
from or which server to select (for example, based on the number of CPUs). After you have
defined your pools and created VLANs and VSANs, you next need to define your policies. In
the UCS environment, many policies have already been defined using default values;
however, there are a few policies that need to be defined by the user.
VLANs: A named VLAN creates a connection to a specific external LAN. The VLAN isolates traffic to that
external LAN, including broadcast traffic.
The name that you assign to a VLAN ID adds a layer of abstraction that allows you to globally update all
servers associated with service profiles that use the named VLAN. You do not need to reconfigure the servers
individually to maintain communication with the external LAN.
You can create more than one named VLAN with the same VLAN ID. For example, if servers that host business
services for HR and Finance need to access the same external LAN, you can create VLANs named HR and
Finance with the same VLAN ID. Then, if the network is reconfigured and Finance is assigned to a different
LAN, you only have to change the VLAN ID for the named VLAN for Finance.
In a cluster configuration, you can configure a named VLAN to be accessible only to one fabric interconnect or
to both fabric interconnects. Be aware that you cannot create VLANs with IDs from 3968 to 4048. This range
of VLAN IDs is reserved.
VSANs: A named VSAN creates a connection to a specific external SAN. The VSAN isolates traffic to that
external SAN, including broadcast traffic. The traffic on one named VSAN knows that the traffic on another
named VSAN exists, but cannot read or access that traffic.
Like a named VLAN, the name that you assign to a VSAN ID adds a layer of abstraction that allows you to
globally update all servers associated with service profiles that use the named VSAN. You do not need to
reconfigure the servers individually to maintain communication with the external SAN. You can create more
than one named VSAN with the same VSAN ID.
Note: Do not configure a VSAN as 4079. This VSAN is reserved and cannot be used in either FC switch mode or
FC end-host mode. If you plan to use FC end-host mode in a Cisco UCS instance, do not configure VSANs with
an ID in the range from 3840 to 4079.
VSANs in that range are not operational if the following conditions exist in a Cisco UCS instance:
UUID
WWNN
Boot Policy
Server assignment
Service Profiles created from an initial template inherit all properties of that template.
After the profile is created, it is no longer connected to that template, therefore each
profile must be individually changed.
Similarly, Service Profiles created from an updating template inherit all the properties of
that template. However, unlike an initial template, any changes to the template
automatically update the service profiles created from the updating template.
The diagram shown above shows the most significant configuration points for Cisco
Unified Computing System service profiles.
With a Service Profile Template, you can quickly create several service profiles with the
same basic parameters, such as the number of vNICs and vHBAs, and with the identity
information drawn from the same pools.
For example, if you need several service profiles with similar values to configure servers to
host database software, you can create a Service Profile Template, either manually or from
an existing Service Profile. You then use the template to create the additional Service
Profiles.
Note: If you only need one Service Profile with similar values to an existing Service Profile,
you can clone the existing profile in UCS Manager.
If you need to disassociate a Service Profile from a server, UCS will attempt to shut down
the operating system on the server. If the OS does not shut down within a reasonable length
of time, UCS will initiate a forced shutdown.
An FSM is a workflow model, similar to a flow chart, that is composed of the following:
A finite number of stages (states)
Transitions between those stages
Operations
The current stage in an FSM is determined by past stages and the operations performed to
transition between the stages. A transition from one stage to another is dependent on the
success or failure of an operation.
Cisco UCS Manager GUI displays FSM information for an end point on the FSM tab for that
end point. You can use the FSM tab to monitor the progress and status of the current FSM
task and view a list of the pending FSM tasks.
The information about a current FSM task in the Cisco UCS Manager GUI is dynamic and
changes as the task progresses. You can view the following information about the current
FSM task:
Which FSM task is being executed
The current state of that task
The time and status of the previously completed task
Any remote invocation error codes returned while processing the task
The progress of the current task
If you want to view the FSM task for an end point that supports FSM, navigate to the end
point in the Navigation pane and click on the FSM tab in the Work pane.
The objectives for this lesson are illustrated here. Please take a moment to read them.
This slide shows the steps required for a Blade SAN boot.
In Fabric Manager, select the Create VSAN icon from the toolbar. The Create VSAN dialog
box allows you to:
Select one or more switches where the VSAN will be created.
Specify the VSAN ID (valid range: 2 to 4093).
Select the load balancing scheme.
Select the interop mode.
Specify the administrative state (active/suspended).
Choose whether to specify static domain IDs for this VSAN (optional).
Choose whether this VSAN will be exclusively used for Fibre Connection (FICON)
Protocol.
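For reference, the equivalent VSAN creation from the MDS CLI is sketched below; the VSAN ID, name, and interface are placeholders:
switch-A# configure terminal
switch-A(config)# vsan database
switch-A(config-vsan-db)# vsan 10 name VSAN_FabricA
switch-A(config-vsan-db)# vsan 10 interface fc1/1    (optionally assign an interface to the VSAN)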
Fabric Manager can be used to create Zonesets and activate them for a given VSAN. Select
the Zoneset folder and right click. Select insert and define a Zoneset name. From the
bottom window, select the desired zones and drag and drop them into the Zoneset. Once
the Zoneset contains the correct Zones, select activate. This will cause a menu to be
displayed which will allow the comparison of the new Zoneset to the already present active
Zoneset.
Using Fabric Manager provides an easy tool for all zone configuration tasks. To create and
edit zonesets, right-click the VSAN folder in the Logical Domains pane. The pop-up menu
displays several options, including:
Edit Full Zone Database: Choose this to create and edit fcaliases, zones, and
zonesets.
Deactivate Zoneset: Choose this option to deactivate the currently active zoneset.
Copy Full Zone Database: Choose this option to propagate the configured zoneset in
the VSAN to any switch.
Edit Full Zone Database dialog allows complete fcalias, zone, and zoneset
configuration:
Left pane: Displays fcalias names, zone and zoneset folders.
Bottom-right pane: Displays all Name Server entries for the VSAN.
Top-right pane: Displays the configuration of the fcalias, zone or zoneset you select
in the left pane.
Add zones: To add a new zone or zoneset, select the folder and click the blue arrow.
Delete zones: To delete any zone or zoneset selected in the left pane or selected
item(s) in the top-right pane, click the red arrow.
Bottom menu: Provides options to activate, deactivate, and distribute zonesets.
The slide shows the WWN of blade one, 20:00:00:25:b5:10:2a:01, zoned to VMAX Director
FA-8F port 1 on Fabric A. It also shows the WWN of blade one, 20:00:00:25:b5:10:2b:01,
zoned to VMAX Director FA-7F port 1 on Fabric B.
In general practice, the ports would be zoned to different VMAX engines to provide
connectivity redundancy. The VMAX is an active-active array, and PowerPath would also
be loaded onto the compute blade to manage multipathing to the storage volumes.
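A minimal CLI sketch of the Fabric A zoning described above; the VSAN number, zone and zoneset names, and the VMAX target pWWN are placeholders, so take the real values from the Logical Configuration Survey:
switch-A# configure terminal
switch-A(config)# zone name blade1_vmax_fa8f vsan 10
switch-A(config-zone)# member pwwn 20:00:00:25:b5:10:2a:01
switch-A(config-zone)# member pwwn 50:00:09:72:00:00:00:01
switch-A(config-zone)# exit
switch-A(config)# zoneset name fabric_a_zs vsan 10
switch-A(config-zoneset)# member blade1_vmax_fa8f
switch-A(config-zoneset)# exit
switch-A(config)# zoneset activate name fabric_a_zs vsan 10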
The objectives for this lesson are illustrated here. Please take a moment to read them.
The steps for Vblock Series 300 storage environment configuration are listed on the slide.
When a VNX Data Mover is configured as an NFS server, file systems are mounted on a Data
Mover and a path to that file system is exported. Exported file systems are then available
across the network and can be mounted by remote users. In the case of the Vblock Series
300 the ESX configured blade will mount the NFS export as a Datastore.
An NFS-configured Data Mover does the following:
Provides access to the exported file system through an IP network.
Authenticates the user if using a secure NFS by comparing the access rights of the NFS
client requesting information with the access rights defined for the exported file system,
then performing user access control on the file system object.
The NFS exports can provide shared datastores for the Virtual Machines, ISO repositories, or
Guest OS shared directories, etc.
The Vblock storage is dedicated to the Vblock to prevent performance degradation and the
possibility of SLA non-compliance.
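For illustration, an NFS export can be mounted as a datastore from the ESXi command line as follows; the Data Mover interface IP, export path, and datastore name are placeholders:
esxcli storage nfs add --host=192.0.2.60 --share=/vblock_nfs_ds01 --volume-name=nfs_datastore01
esxcli storage nfs list    (verify the datastore is mounted)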
The objectives for this lesson are illustrated here. Please take a moment to read them.
Maximizes the size and number of the hypers per disk in order to best use the disks (less
wasted space), with minimum splits per drive. Hyper volumes are combined to provide a
protection scheme.
If a client has a different default datastore size, the auto_meta_member_size can be
adjusted so that either 4 or 8 member metas can be built to accommodate.
TDEVs do not have any real storage behind them until they are bound to a virtual pool. Preallocate 5% to 10% of the overall size during the bind process.
TDEV = Thin Device, a cache-only device that has no physical storage behind it. Once
bound to a storage pool, the device can be presented to a host like any other device.
TDAT = Data Device, used for forming the Thin Pool used for Virtual Provisioning. Multiple
pools are supported, and each drive architecture must reside in its own pool. As data is
written to the TDEV (Thin Device), the actual data is stored (and striped) across the pool
and all of the TDAT devices that make up the pool.
All necessary operations to make them part of the configuration are handled automatically
by Symmetrix Enginuity once the objects are added to the applicable group. This reduces
the number of commands needed for mapping and masking devices and allows for easier
storage allocation and de-allocation.
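A small sketch of inspecting thin pools and binding a TDEV with SYMCLI; the Symmetrix ID, device number, and pool name are placeholders, and the exact syntax may vary by Solutions Enabler release:
symcfg list -sid 1234 -pool -thin                                        (list thin pools and their utilization)
symconfigure -sid 1234 -cmd "bind tdev 0ABC to pool VP_Pool1;" commit    (bind a thin device to a pool)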
150
The lessons for this module are shown here. Please take a moment to read them.
151
PowerPath provides more than simple channel failover. In addition to automatic fail-back,
PowerPath/VE brings PowerPath's established load-balancing policies to virtual
environments. Rather than designating some channels as active and others as standby,
PowerPath leverages all channels for I/O and can dynamically distribute traffic over them.
This gives PowerPath/VE superior and predictable performance compared with NMP. The
CLAROpt policy, which is adaptive, is well suited to choosing paths based on path load and
logical device priority.
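PowerPath/VE for vSphere is typically managed with the remote rpowermt CLI from a
management host. As a rough illustration (the host name is a placeholder, and host
credentials are assumed to be held in the rpowermt lockbox):

    # Show devices and their current load-balancing policy on an ESXi host
    rpowermt display dev=all host=esxi-blade01

    # Set the CLAROpt policy for VNX/CLARiiON devices (policy=so, SymmOpt, would be used for VMAX devices)
    rpowermt set policy=co dev=all host=esxi-blade01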
152
The points covered in this module are listed here. Please take a moment to review them.
153
154
The objectives for this module are shown here. Please take a moment to read them.
155
If this is how the Vblock System ships, then this is how the Vblock System is received!
You need to know whether the systems will fit in the data center!
Each Vblock system is shipped as a combination of cabinets, pallets, and totes. Depending
on the model of Vblock system and specific components ordered, the exact number of
shipping cabinets, pallets, and totes will vary. A sufficiently sized receiving area will be
required to receive the Vblock system. The following is an example of the plastic totes and
crated cabinets.
The Site Survey document is designed to help understand the physical facility where the
Vblock system will be positioned for long-term operation. The Vblock system should be
deployed in a contiguous fashion for the various integrated components requiring physical
interconnectivity. If necessary, it is feasible to segment the equipment to meet data center
logistics; additional cabling and fees may be applicable.
VCE does not encourage customers to put space between the cabinets. Doing this would
require an extended lead time on the order as the cross-cabinet connections would not
reach and custom cables would need to be ordered.
160
The Vblock System allows environments to scale by adding compute (chassis and blades) or
storage resources (physical disks or engines). To simplify expansion, some customers opt to
buy the pre-cabled cabinets.
161
For environments that may want to migrate data using SAN ports or introduce a DR
appliance, work with VCE to determine how to configure and use the available ports.
All cabinets for the Vblock 300 have the same power requirements. A complete list of power
specifications can be found in the Vblock System 300 Physical Planning Guide.
The Vblock System Series 700 requires separate power connectors for compute/aggregation
and VMAX storage. A complete list of power specifications can be found in the Vblock System
700 Physical Planning Guide.
This procedure describes how to power on a Vblock Series 300 and Vblock Series 700.
With the exception of the Vblock Series 300 model EX, the majority of the Vblock models
incorporate an IP aggregation layer into the Vblock itself, consisting of 2 x Cisco Nexus 5548
or 7010 switches.
This aggregation layer needs to integrate with the customer's core network which,
depending upon core-edge and scale-out requirements, may consist of director-class or
data-center-class L2/L3 switches. Data flow at the aggregation layer is largely determined by
the type of physical connectivity and Layer 2 configuration implemented between the
fabric interconnects and the upstream switches. Cross-connectivity between the fabric
interconnects and the aggregation layer switches ensures that each interconnect has
connectivity to either fabric (A or B) in the event of a switch failure; it also provides for
better upstream distribution of traffic to the aggregation layer.
It is a best practice to cluster the switches at the aggregation layer (i.e. the 5548s or
7010s) in a vPC or VSS cluster, in order to present a single Layer 2 domain to upstream and
downstream switches (the fabric interconnects). This provides for redundancy at the
aggregation layer and results in better distribution of traffic at the aggregation layer. The
configuration of the aggregation layer and the core network are key areas addressed in the
VCE site survey.
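As a minimal sketch of this clustering on a pair of Nexus 5548s (the domain ID, IP
addresses, and interface numbers are placeholders), the vPC configuration on each
aggregation switch would look roughly like this, with a vPC port-channel facing each
fabric interconnect:

    feature vpc
    feature lacp
    vpc domain 10
      peer-keepalive destination 10.1.1.2 source 10.1.1.1
    ! vPC peer link between the two aggregation switches
    interface port-channel 1
      switchport mode trunk
      vpc peer-link
    ! vPC port-channel facing fabric interconnect A
    interface port-channel 101
      switchport mode trunk
      vpc 101
    interface Ethernet1/1
      switchport mode trunk
      channel-group 101 mode active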
168
169
170
The objectives for this module are shown here. Please take a moment to read them.
171
The objectives for this lesson are shown here. Please take a moment to read them.
Security hardening for a Vblock can be broken into two general sections: Securing the
components of the AMP, and securing components of the UCS. This lesson will consider
best practices for accomplishing both.
Note: All of the settings described in this lesson should have already been configured by
VCE prior to the Vblock arriving onsite. In order for a successful initialization and
deployment of a Vblock, each of these security settings should first be validated, then
modified as necessary to adhere to customer requirements.
172
The following usernames and passwords will have been applied consistently to the Vblock
System at build time. After the on-site delivery is completed, the delivery team will
work with the customer to establish site-specific usernames and passwords. It is important
to note that these default passwords MUST be changed to prevent unauthorized access to
the various devices and elements.
Note: A strong password requirement is enabled by default on Vblocks. Ensure that all
changes meet this requirement. Never reduce password complexity for the sake of
convenience.
173
Privileges can be org-related or non-org-related. Non-org-related privileges apply across
the entire UCS environment.
174
The default roles within the Vblock/UCS infrastructure are shown here.
175
Adding users is very similar to other types of user management in use today. Seen here is
the Create User interface of the UCS.
176
Tech Support Mode provides a command-line interface that can be used to diagnose and repair VMware ESXi
hosts. VCE recommends that Tech Support Mode be disabled because:
The interface is not audited, so commands issued at this interface are not logged.
Commands issued from this interface can result in an unusable system.
The interface only supports logging in as root. No other user account or role can use this management
interface.
Tech Support Mode should therefore only be used as a last resort when troubleshooting an ESXi host.
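If Tech Support Mode needs to be toggled from the host console, commands of the following
form are documented for ESXi 4.1; verify them against the release actually deployed on the
Vblock:

    # Disable local (console) and remote (SSH) Tech Support Mode
    vim-cmd hostsvc/disable_local_tsm
    vim-cmd hostsvc/disable_remote_tsm

    # Re-enable temporarily only if troubleshooting requires it
    vim-cmd hostsvc/enable_local_tsm
    vim-cmd hostsvc/enable_remote_tsm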
The CIM system provides an interface that enables hardware-level management from remote applications via
a set of standard APIs. To ensure that the CIM interface is secure, provide only the minimum access
necessary to these applications. Do not provision them with the root account or any other full administrator
account; instead, create a service account specific to these applications that has only limited privileges.
Read-only access to CIM information can be granted to any local account defined on the ESX/ESXi system, as
well as any role defined in vCenter Server. If the application requires write access to the CIM interface, only
two privileges are required. It is recommended that you create a role to apply to the service account with
only these privileges:
Host.Config.SystemManagement
Host.CIM.CIMInteraction
This role can be either local to the host or centrally defined on vCenter Server, depending on how the
particular monitoring applications work. To validate that the setting has been applied, log in to the host with
the service account (e.g., using the vSphere Client); you should be provided only read-only access, or only the
two privileges indicated above.
177
178
Numerous changes must be made to secure the individual management VMs in the AMP.
These settings are designed to provide proper isolation of the management VMs, thereby
preventing both accidental and malicious access that could otherwise compromise the
secure management of the Vblock as a whole.
Changes can be made to each management Virtual Machine by first powering it off, then
choosing Edit Settings of the VM. Click on the Options tab, then click on General
under Advanced options. Click on the Configuration Parameters button to gain access
to manually add advanced settings. Using the Vblock Infrastructure Platforms Security
Hardening document, Version 1.0 (November 2011) as your guide, make the changes
noted therein. (As of the release of this document, there are a total of 19 recommended
security changes that should be made).
If there is a need to apply settings on an ESXi host-wide basis, edit the
/etc/vmware/config file on the ESXi host in question.
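As an illustration of the kind of advanced settings involved (the authoritative list is the
Security Hardening document itself), these are representative VMware guest-isolation and
logging parameters with example values:

    isolation.tools.copy.disable = "TRUE"
    isolation.tools.paste.disable = "TRUE"
    isolation.tools.diskShrink.disable = "TRUE"
    isolation.tools.diskWiper.disable = "TRUE"
    isolation.device.connectable.disable = "TRUE"
    isolation.device.edit.disable = "TRUE"
    log.rotateSize = "100000"
    log.keepOld = "10"
    tools.setInfo.sizeLimit = "1048576"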
179
The objectives for this lesson are shown here. Please take a moment to read them.
180
This policy defines how the mgmt0 Ethernet interface on the fabric interconnect should be
monitored. If Cisco UCS detects a management interface failure, a failure report is
generated. If the configured number of failure reports is reached, the system assumes that
the management interface is unavailable and generates a fault. By default, the
management interfaces monitoring policy is disabled.
If the affected management interface belongs to a fabric interconnect which is the
managing instance, Cisco UCS confirms that the subordinate fabric interconnect's status is
up and that there are no current failure reports logged against it, and then modifies the
managing instance for the end-points.
If the affected fabric interconnect is currently the primary inside of a high availability setup,
a failover of the management plane is triggered. The data plane is not affected by this
failover.
You can set the following properties related to monitoring the management interface:
Type of mechanism used to monitor the management interface.
Interval at which the management interface's status is monitored.
Maximum number of monitoring attempts that can fail before the system assumes that
the management interface is unavailable and generates a fault message.
181
Enable syslog for all Vblock platforms. There are three syslog facility options within UCS: Local Destinations,
Remote Destinations, and Local Sources.
Complete this procedure whether or not UIM/P will be used for provisioning.
1. In the Admin tab within Cisco UCS manager, click Faults, Events and Audit Log > Syslog.
2. In the Console section:
a. For Admin State, click enabled.
b. In the Level menu, click critical.
3. In the Monitor section:
a. For Admin State, click enabled.
b. In the Level menu, click critical.
4. In the File section:
a. For Admin State, click enabled.
b. In the Level menu, click debugging.
c. If the customer provided one or more syslog server IP addresses:
o In the Server 1 section, for Admin State, click enabled.
o In the Level menu, click critical.
o In the Hostname field, enter the customer-provided primary syslog server IP address or hostname.
If the customer also provided a secondary syslog server IP address:
o In the Server 2 section, for Admin State, click enabled.
o In the Level menu, click critical.
o In the Hostname field, enter the customer-provided secondary syslog server IP address or hostname.
5. Click Save Changes.
182
If the customer intends to monitor ESXi hosts directly with SNMP, community strings, traps,
and polling configuration parameters must all be defined in advance. Monitoring via SNMP
will be discussed more in a later lesson in this module.
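For reference, SNMP on an ESXi host of this vintage is typically configured with the vSphere
CLI vicfg-snmp command; the host name, community string, and trap target below are
customer-provided placeholders, and the flags should be verified against the installed vCLI
version:

    # Set the community string, add a trap target, and enable the SNMP agent
    vicfg-snmp --server esxi-blade01 -c public -t 10.1.1.60@162/public -E

    # Confirm the configuration
    vicfg-snmp --server esxi-blade01 --show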
183
184
The objectives for this lesson are shown here. Please take a moment to read them.
185
186
Testing objectives to account for module and link failure scenarios should include the
following; a sketch of how a link can be failed administratively for these tests follows the
list:
1) Validating the HA and load-balancing capabilities of the L2 port-channel uplinks on the Fabric Interconnect. We will disable or
disconnect one of the links in the 4-port port-channel to the switch. Traffic should fail over to the redundant links in the
channel with minimal disruption. During the latter part of the test, we will bring up the link. Traffic should resume the
baseline characteristics.
2) Validating the HA and load-balancing capabilities of the dual Fabric Interconnect design. We will disable or disconnect the
port-channel uplink towards the switch on <Fabric Interconnect-A>. Traffic from the VMs should now be sent
towards <Fabric Interconnect-B> and then towards the Catalyst switch. During the latter part of the test, we will bring up
the port-channel on <Fabric Interconnect-A>. Traffic should resume the baseline characteristics.
3) Validating the HA and load-balancing capabilities of the dual Fabric Interconnect design. We will disable or disconnect all
server links to <Fabric Interconnect-A>. Traffic from the VMs should now be sent towards <Fabric Interconnect-B>.
During the latter part of the test, we will bring up the server links to <Fabric Interconnect-A>. Traffic should resume the
baseline characteristics.
4) Validating the HA and load-balancing capabilities of the dual Fabric Interconnect design. We will disable or disconnect one
of the server links to <Fabric Interconnect-A>. Traffic from the VMs received on <Fabric Interconnect-A> should be
sent through the blade's connection to <Fabric Interconnect-B>. During the latter part of the test, we will bring up the server
link to <Fabric Interconnect-A>. Traffic should resume the baseline characteristics.
5) Validating the redundancy capabilities of the FC links between the Fabric Interconnect and the MDS. In this test case we will have
8 links connected to the MDS SAN, and will disable or disconnect one of the FC links that connects to the SAN and observe its
effect on FC traffic. During the latter part of the test, we will bring up the link. Traffic should resume the baseline
characteristics.
6) Validating the redundancy capabilities of the network during a system failure. We will bring down one of the Fabric
Interconnect systems via reload and observe its impact on end-to-end traffic.
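For the link-failure cases above, an individual link is typically failed administratively on the
upstream Nexus or MDS switch rather than physically pulled; a rough sketch (the interface
number is a placeholder):

    interface ethernet 1/1
      shutdown
    ! observe fail-over behavior and traffic distribution, then restore the link
    interface ethernet 1/1
      no shutdown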
187
A Configuration Review Guide should be completed and delivered to the customer upon
completion of the deployment. It should contain all relevant configuration details of the
Vblock, including all IP addresses used, usernames and passwords, and all standard
compute, LAN, SAN, storage, and virtualization component details.
188
189
Completion of a successful test and acceptance plan means that the customer has signed
off on the completion results of all executed tests. Further, it means that no outstanding
issues remain that would impede the immediate deployment of the Vblock into
production.
Also, for a customer to obtain the most benefit from a newly deployed Vblock, an adequate
knowledge transfer process must take place. This includes, but is not limited to, providing
diagrams of the network infrastructure, documenting the VNX or VMAX disk layout and the
location and number of ESXi hosts, providing any LUN/storage creation scripts, defining
what the ACS is and how to use it, providing an overview of UCS Manager, accessing and
deploying Profiles and Templates, and defining what the AMP is and how its components
are used.
For more information on each of these topics and how long a knowledge transfer should
take, please see document 1624-testplan_custknowledgetransfer.pdf, found on
http://www.vcepartnerportal.com/resourcelib-vce.asp?sid=15.
190
These are the key points covered in this module. Please take a moment to review them.
191
The summary for this course is shown here. Please take a moment to read the key points.
192
193