
Cisco UCS and UCS Director
Mike Griffin
Day 1 – UCS architecture and overview
 Data Center trends
 UCS overview
 UCS hardware architecture
 UCS B series server overview
 C series server overview
 UCS firmware management
 UCS HA
Day 2 – Service profile overview and lab
 Pools, Policies and templates
 UCS Lab

2
Day 3 – UCS Director
 UCS director overview
 UCS director hands on lab
Day 4 – Advanced UCS topics
 UCS Networking and connectivity
 Implementing QoS within UCS
 UCS Central

3
Module 1
Scale Up (the 90's)
 Monolithic servers
 Large numbers of CPUs
 Proprietary platform
 Proprietary OS
 High cost / proprietary
 Large failure domain

Scale Out (early 2000's)
 Bladed and rack servers
 Commoditized servers – 1 app / 1 physical server
 X86 platform
 Commoditized OS
 Servers under-utilized
 Power & cooling

Scale In (now)
 Multi-socket / multi-core CPUs
 x64 platforms (Intel / AMD)
 Commoditized OS
 Many apps per server / virtual machine density
 Management complexity
 Cloud computing / dynamic resourcing
Console, power, networking, and storage connectivity to each blade
vs.
Console, power, networking, and storage connectivity shared in the chassis
[Diagram: a single socket holding a single-core CPU vs. a single socket holding one CPU with 4 processing cores]
Terminology
 Socket – slot on the system board for a processing chip
 CPU – the processing chip
 Core – an individual processing unit inside the CPU

Server Impact
 More cores in a CPU = more processing
 Critical for applications that become processor-bound
 Core densities are increasing: 2/4/6/8/12/16
 CPUs are x64 based
DIMM slots
 DIMM – Dual Inline Memory Module: a series of dynamic random-access memory integrated circuits mounted on a printed circuit board
 Ranking – memory modules with 2 or more sets of DRAM chips connected to the same address and data buses; each such set is called a rank. Single, dual, and quad ranks exist
 Speed – measured in MHz; most server memory is DDR3, and PC3-10600 = 1333 MHz
 As server memory capacity increases, clock speed will sometimes drop in order to support such large memory amounts
PCIe Bus
In virtually all server compute platforms, the PCIe bus serves as the primary motherboard-level interconnect to hardware.

 Interconnect: a connection or link between 2 PCIe ports; can consist of 1 or more lanes

 Lanes: a lane is composed of a transmit and receive pair of differential lines. PCIe slots can have 1 to 32 lanes, and data is transmitted bi-directionally over the lane pairs. In "PCIe x16", x16 represents the number of lanes the card can use

 Form Factor: a PCIe card fits into a slot of its physical size or larger (maximum x16), but may not fit into a smaller PCIe slot (e.g. a x16 card in a x8 slot)
Platform Virtualization:
 Physical servers host multiple Virtual Servers
 Better physical server utilization (Using more of
existing resources)
 Virtual Servers are managed like physical
 Access to physical resources on the server is shared
 Access to resources is controlled by the hypervisor on the
physical host

Key Technology for :


 VDI / VXI
 Server Consolidation
 Cloud Services
 DR

Challenges:
 Pushing Complexity into virtualization
 Who manages what when everything is virtual
 Integrated and virtualization aware products
11
Server Orchestrators / Manager of Managers
[Diagram: today each vendor (Vendor A, B, C) brings its own chassis manager, network manager, and server manager, stitched together by an orchestrator / manager-of-managers layer]
[Timeline: Mainframe (1960s), Minicomputer (1970s), Client/Server (1980s), Web (1990s), Virtualization (2000s), Cloud (2010s)]


Service Catalog: Web Store, VDI, CRM

Orchestration and Management: orchestration / management / monitoring
- Tidal, newScale, Altiris, Cloupia
- UCSM ECO partner integration (MS, IBM, EMC, HP)
- UCSM XML API

Infrastructure
- Compute: UCS B-Series, UCS C-Series
- Network: Nexus 7K, 5K, 4K, 3K, 2K; FCoE
- Storage: NetApp FAS
Over the past 20 years
 An evolution of size, not thinking
 More servers & switches than ever
 Management applied, not integrated
 Virtualization has amplified the problem

Result
 More points of management
 More difficult to maintain policy coherence
 More difficult to secure
 More difficult to scale

15
 Embed management
 Unify fabrics
 Optimize virtualization
 Remove unnecessary
o switches,
o adapters,
o management modules
 Less than 1/3rd the support infrastructure for a given workload
Single Point of Management

Unified Fabric

Blade Chassis / Rack Servers

18
[Diagram: UCS system connecting to the SAN, LAN, and management networks]
 Fabric Interconnect

 Chassis
o Up to 8 half-width blades or 4 full-width blades

 Fabric Extender
o Host-to-uplink traffic engineering

 Adapter
o Adapters for single-OS and hypervisor systems

 Compute Blade
o Half width or full width
UCS Fabric Interconnect
 20-port or 40-port 10Gb FCoE

UCS Fabric Extender
 Remote line card

UCS Blade Server Chassis
 Flexible bay configurations

UCS Blade Server
 Industry-standard architecture

UCS Virtual Adapters
 Choice of multiple adapters
[Chassis and Fabric Interconnect hardware callouts]
 Blade chassis (front/rear): 4 x power supplies, 4 x power entry, 8 x fan modules, 2 x IOMs each with 4 x 10GE SFP+ fabric ports (FCoE), and bays for 4 x single-slot or 2 x double-slot blades
 Fabric Interconnect (front/rear): 2 x power supplies, 2 x fan modules, 20/40/48/96 fabric/border ports (depending on model), expansion module bay 1 or 2, 2 x cluster ports, 1 x management port, console port, 2 x power entry
UCS 6200 (UP) Series: 6248 (1U) and 6296 (2U)
UCS 6100 Series: 6120 (1U) and 6140 (2U)
Product Features and Specs          UCS 6120XP       UCS 6140XP       UCS 6248UP       UCS 6296UP
Switch Fabric Throughput            520 Gbps         1.04 Tbps        960 Gbps         1920 Gbps
Switch Footprint                    1RU              2RU              1RU              2RU
1 Gigabit Ethernet Port Density     8                16               48               96
10 Gigabit Ethernet Port Density    26               52               48               96
1/2/4/8G Native FC Port Density     6                12               48               96
Port-to-Port Latency                3.2us            3.2us            1.8us            1.8us
# of VLANs                          1024             1024             4096             4096
Layer 3 Ready (future)              –                –                ✔                ✔
40 Gigabit Ethernet Ready (future)  –                –                ✔                ✔
Virtual Interface Support           15 per Downlink  15 per Downlink  63 per Downlink  63 per Downlink
Unified Ports (Ethernet or FC)      –                –                ✔                ✔
UCS IOM (Fabric Extender) components
 Switching ASIC – aggregates traffic between the host-facing 10G Ethernet ports and the network-facing 10G Ethernet ports; up to 8 fabric ports to the Fabric Interconnect
 CPU (also referred to as the CMC, Chassis Management Controller) – controls the ASIC and performs other chassis-management functions (with its own flash, DRAM, and EEPROM)
 L2 switch – aggregates traffic from the CIMCs on the server blades
 Interfaces – HIF (backplane/host ports, up to 32 backplane ports to blades), NIF (fabric ports), BIF, CIF
 No local switching – all traffic from HIFs goes upstream to the Fabric Interconnect for switching

IOM models: IOM-2104, IOM-2204, IOM-2208
2104/220X Generational Contrasts

                     2104           2208            2204
ASIC                 Redwood        Woodside        Woodside
Host Ports           8              32              16
Network Ports        4              8               4
CoSes                4 (3 enabled)  8               8
1588 Support         No             Yes             Yes
Latency              ~800nS         ~500nS          ~500nS
Adapter Redundancy   mLOM only      mLOM and Mezz   mLOM and Mezz
[Chassis midplane callouts: I/O modules, blade connectors, PSU connectors – redundant data and management paths]
B22 M3
2-Socket Intel E5-2400, 2 SFF Disk / SSD, 12 DIMM

B200 M3
2-Socket Intel E5-2600, 2 SFF Disk / SSD, 24 DIMM
Blade Servers

B250 M2
2-Socket Intel 5600, 2 SFF Disk / SSD, 48 DIMM

B230 M2
2-Socket Intel E7-2800 and E7-8800, 2 SSD, 32 DIMM

B420 M3
4-Socket Intel E5-4600, 4 SFF Disk / SSD, 48 DIMM

B440 M2
4-Socket Intel E7-4800 and E7-8800, 4 SFF Disk / SSD, 32 DIMM

33
 Expands UCS into the rack-mount market
 Multiple offerings for different workloads
o C200 – 1RU base rack-mount server
o C210 – 2RU, large internal storage, moderate RAM
o C250 – 2RU, memory-extending (384 GB)
o C260 – 2RU, large internal storage and large RAM capacity (1 TB)
o C460 – 4RU, 4-socket, large internal storage, large RAM (1 TB)
o C220 M3 – dense enterprise-class 1RU server, 2-socket, 256 GB, optimized for virtualization
o C240 M3 – 2RU, storage-optimized, enterprise-class, 384 GB, up to 24 disks
 Offers a path to Unified Computing
[Front and rear view photos of the UCS C200, C210, C250, C260, C460, C220, and C240: the front panels show the KVM dongle connector (2 x USB, VGA, console), DVD drive, and internal disk bays; the rear panels show console/management ports, expansion card slots, LOM ports, USB/VGA, and power.]
C22 M3
2-Socket Intel E5-2400, 8 Disks / SSD, 12 DIMM, 2 PCIe, 1U

C24 M3
2-Socket Intel E5-2400, 24 Disks / SSD, 12 DIMM, 5 PCIe, 2U

C220 M3
2-Socket Intel E5-2600, 4/8 Disks / SSD, 16 DIMM, 2 PCIe, 1U
Rack Servers

C240 M3
2-Socket Intel E5-2600, 16/24 Disks / SSD, 24 DIMM, 5 PCIe, 2U

C260 M2
2-Socket Intel E7-2800 / E7-8800, 16 Disks / SSD, 64 DIMM, 6 PCIe, 2U

C420 M3
4-Socket Intel E5-4600, 16 Disks / SSD, 48 DIMM, 7 PCIe, 2U

C460 M2
4-Socket Intel E7-4800 / E7-8800, 12 Disks / SSD, 64 DIMM, 10 PCIe 4U

42
Adapter portfolio: Virtualization / Compatibility / Ethernet-only
 Virtualization (M81KR / VIC 1200 series): VM I/O virtualization and consolidation; 10GbE/FCoE; up to 128 vNICs (Eth/FC) on a PCIe x16 bus
 Compatibility (CNAs): existing driver stacks; 10GbE/FCoE with 2 x Ethernet and 2 x FC interfaces
 Ethernet only: cost-effective 10GbE LAN access for hosts that only need LAN access
M81KR VIC
 Dual 10 Gbps connectivity into the fabric
 PCIe x16 Gen1 host interface
 Capable of 128 PCIe devices (OS dependent)
 Fabric Failover capability
 SR-IOV “capable” device
[Diagram: 10GBASE-KR sub-ports, UIF 0 / UIF 1]
1280 VIC
 Next-generation VIC
 Dual 4x10 Gbps connectivity into the fabric
 PCIe x16 Gen2 host interface
 Capable of 256 PCIe devices (OS dependent)
 Same host-side drivers as the VIC (M81KR)
 Retains VIC features with enhancements
 Fabric Failover capability
 SR-IOV “capable” device
[Diagram: 10GBASE-KR sub-ports, UIF 0 / UIF 1]
1240 VIC
 mLOM on M3 blades
 Dual 2x10 Gbps connectivity into the fabric
 PCIe x16 Gen2 host interface
 Capable of 256 PCIe devices (OS dependent)
 Same host-side drivers as the VIC (M81KR)
 Retains VIC features with enhancements
 Fabric Failover capability
 SR-IOV “capable” device
[Diagram: 10GBASE-KR sub-ports, UIF 0 / UIF 1]
Rack-server adapter options
 Virtualization (UCS P81E and VIC 1225): 10GbE/FCoE; up to 256 vNICs (Eth/FC) on a PCIe x16 bus; NIC teaming done in hardware
 CNA (Emulex and QLogic): 10GbE/FCoE; 2 x Fibre Channel and 2 x Ethernet interfaces; NIC teaming through a bonding driver
 Ethernet or HBA only: dedicated 10GbE NICs or FC HBAs
48
RAID Controllers
 1 built-in controller (ICH10R)
 Optional LSI 1064E-based mezzanine controller
 Optional LSI 1078-based MegaRAID controller (RAID 0, 1, 5, 6 and 10 support)

Disks
 3.5-inch and 2.5-inch form factors
 15K SAS (high performance)
 10K SAS (performance)
 7200 SAS (high capacity / performance)
 7200 SATA (cost and capacity)
 73 GB, 146 GB, 300 GB, and 500 GB
 The FI runs 3 separate “planes” for the various functionality

o Local-mgmt
• Log file management, license management, reboot, etc. are done through local-mgmt

o NXOS
• The data forwarding plane of the FI
• Functionally equivalent to NXOS found on Nexus switches, but read-only

o UCSM
• XML based and the only way to configure the system
• Configures NXOS for data forwarding

 The “connect” CLI command is used to connect to local-mgmt or NXOS on FI A or B
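
A minimal illustrative session (prompts and output abbreviated; switch name is a placeholder):

UCS-A# connect local-mgmt a
UCS-A(local-mgmt)# show cluster state
UCS-A(local-mgmt)# exit
UCS-A# connect nxos a
UCS-A(nxos)# show interface brief
UCS-A(nxos)# exit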

51
[Diagram: UCSM is a redundant management service running on the Fabric Interconnects, reached through multiple management interfaces/protocols and managing the switch, chassis, and server endpoint elements over a redundant management plane]
[Diagram: the GUI, CLI, XML API, and 3rd-party tools all drive Cisco UCS Manager, which maintains the system's configuration state and operational state]
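As an illustration of the XML API, a login, a query, and a logout can be issued with any HTTP client. The /nuova endpoint and the aaaLogin / configResolveDn / aaaLogout methods are the documented UCSM XML API; the address, credentials, and DN below are placeholders:

curl -k https://ucsm-vip/nuova -d '<aaaLogin inName="admin" inPassword="password"/>'
(the response returns an outCookie used on subsequent calls)
curl -k https://ucsm-vip/nuova -d '<configResolveDn cookie="COOKIE" dn="sys/chassis-1" inHierarchical="false"/>'
curl -k https://ucsm-vip/nuova -d '<aaaLogout inCookie="COOKIE"/>'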
 Fabric Interconnects synchronize database and state information
through dedicated, redundant Ethernet links (L1 and L2)

 The “floating” Virtual IP is owned by the Primary Fabric Interconnect

 Management plane is active / standby with changes done on the


Primary and synchronized with the Secondary FI

 Data plane is active / active

54
L1 to L1

L2 to L2

55
Example of session log file on client

Enable logging in Java to capture issues

Client logs for debugging UCSM access & client KVM access are found at this location on the client system:
C:\Documents and Settings\userid\Application Data\Sun\Java\Deployment\log\.ucsm
• Embedded device manager for the family of UCS components
• Enables stateless computing via Service Profiles
• Efficient scale: same effort for 1 or N blades

Management interfaces: UCS GUI and CLI (the CLI is equivalent to the GUI), the UCS XML API, SNMP, SMASH CLP, Call Home, IPMI, CIM XML, remote KVM, and Serial over LAN

 TCP 22

 TCP 23 if telnet is enabled (off by default)

 TCP 80

 UDP 161/162 if SNMP is enabled (off by default)

 TCP 443 if https is enabled (off by default)

 UDP 514 if syslog is enabled

 TCP 2068 (KVM)

64
 The C-Series rack-mount servers can also be managed by UCSM.
 This requires a pair of 2232PP FEXes. This FEX supports the needed features for PCIe virtualization and FCoE.
 A total of 2 cables must be connected from the server to each FEX.
 One pair of cables is connected to the LOM (LAN on Motherboard) ports. This provides control-plane connectivity for UCSM to manage the server.
 The other pair of cables is connected to the adapter (P81E or VIC 1225). This provides data-plane connectivity.
 VIC 1225 adapters support single-wire management in UCSM 2.1

66
• 16 servers per UCS “virtual chassis”
(pair of 2232PPs) UCS
Manager
• 1 Gig LOM’s used for management

• Scale to 160 Servers (10 sets of 2232)


UCS 6100 or 6200 UCS 6100 or 6200

• Generation 2 IO adapters Nexus 2232 Nexus 2232

10 Gb CNA
1 Gb LOM
GLC-T connector

Mgmt Connection

Data Connection

67
• Management and data for C-Series
rack servers carried over single wire, UCS
Manager
rather than separate wires
• Requires VIC 1225 adapter UCS 6100 or 6200 UCS 6100 or 6200

• Continues to offer scale of up to 160 Nexus 2232 Nexus 2232

servers across blade and rack in a


single domain

VIC 1225

Mgmt and
Data Connection

68
 The Cisco VIC provides converged network connectivity for Cisco UCS C-Series servers.
 Integrated into UCSM, it operates in NIV (VN-Tag) mode with up to 118 PCIe devices (vNIC/vHBA).
 Provides an NC-SI connection used for single-wire management.
 The VIC 1225 requires UCSM 2.1 for either dual-wire or single-wire mode.
[Diagram: dual-wire rack integration – the server's 1G LOM ports carry management traffic to the 2232 FEXes (reaching the BMC/CIMC over its 10/100 management path and NC-SI), while the adapter's 10G ports carry data; each Nexus 2232 FEX uplinks to FI-A or FI-B]
 Existing out of band management topologies will continue to work

 No Direct FI support – Adapter connected to Nexus 2232

 Default CIMC mode – Shared-LOM-EXT

 Specific VLAN for CIMC traffic (VLAN 4044)

 NC-SI interface restricted to 100 Mbps

71
Server Model   VIC 1225s Supported   PCIe Slots Supporting VIC 1225   Primary NC-SI Slot (Standby Power) for UCSM Integration
UCS C22 M3     1                     1                                1
UCS C24 M3     1                     1                                1
UCS C220 M3    1                     1                                1
UCS C240 M3    2                     2 and 5                          2
UCS C260 M2    2                     1 and 7                          7
UCS C420 M3    3                     1, 4, and 7                      4
UCS C460 M2    2                     1 and 2                          1
72
[Diagram: UCS domain with C-Series integration – FI-A and FI-B form a cluster (L1/L2 clustering, MGMT0 out-of-band management, console), with uplink ports to SAN A/B and the Ethernet LAN, and server ports down to Chassis 1 (IOMs) and to a pair of Nexus 2232 fabric extenders; B200 half/full-width blades and rack-mount servers attach through virtualized CNA adapters, with a separate VIC management path for the rack servers]
 Setup runs on a new system
<snip>
Enter the configuration method. (console/gui) ? console
Enter the setup mode; setup newly or restore from backup. (setup/restore) ? setup
You have chosen to setup a new Fabric interconnect. Continue? (y/n): y
Is this Fabric interconnect part of a cluster(select 'no' for standalone)? (yes/no) [n]: yes
Enter the switch fabric (A/B) []: A
Enter the system name: MySystem
Physical Switch Mgmt0 IPv4 address : 10.10.10.2
Physical Switch Mgmt0 IPv4 netmask : 255.255.255.0
IPv4 address of the default gateway : 10.10.10.254
Cluster IPv4 address : 10.10.10.1
<snip>

 Login prompt
MySystem-A login:

76
 Setup runs on a new system
o Enter the configuration method. (console/gui) ? console
o Installer has detected the presence of a peer Fabric interconnect.
o This Fabric interconnect will be added to the cluster. Continue (y/n) ? y
o Enter the admin password of the peer Fabric interconnect: <password>
o Retrieving config from peer Fabric interconnect... done
o Peer Fabric interconnect Mgmt0 IP Address: 10.10.10.2
o Cluster IP address : 10.10.10.1
o Physical Switch Mgmt0 IPv4 address : 10.10.10.3
o Apply and save the configuration (select 'no' if you want to re-enter)? (yes/no): yes
o Applying configuration. Please wait. Configuration file - Ok

 Login prompt
 MySystem-B login:
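
Once both FIs are up, cluster health can be verified from local-mgmt (illustrative output; exact wording varies by release):

UCS-A# connect local-mgmt
UCS-A(local-mgmt)# show cluster extended-state
A: UP, PRIMARY
B: UP, SUBORDINATE
HA READY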

77
 Three downloadable bundles for blade and rack-mount integration
o Infrastructure Bundle – UCSM, Fabric Interconnect (NX-OS), Fabric Extender (IOM) firmware, Chassis Management Controller
o B-Series Server Bundle – CIMC, BIOS, RAID controller firmware, adapter firmware, catalog file, UCSM management extensions
o C-Series Server Bundle – CIMC, BIOS, RAID controller firmware, adapter firmware, catalog file, UCSM management extensions

 ISO file for OS drivers
 Manual
o Upgrade guides published with every UCSM release
o Very important to follow the upgrade order listed in the guide
http://www.cisco.com/en/US/products/ps10281/prod_installation_guides_list.html

 Firmware Auto-Install
o New feature in UCSM 2.1
o Wizard-like interface to specify which version of firmware to upgrade
infrastructure / servers to
o Sequencing of firmware updates is handled automatically to ensure the
least downtime
o Intermediate user acknowledgement during fabric upgrade allows users
to verify that elements such as storage are in an appropriate state
before continuing the upgrade

88
 Firmware Auto-Install implements package version based upgrades
for both UCS Infrastructure components and Server components

 It is a two step process –


o Infrastructure Firmware
o Install Server Firmware

 Recommended to run “Install Infrastructure Firmware” first and then


“Install Server Firmware”

89
Sequence followed by “Install Infrastructure Firmware”

1) Upgrade UCSM
Non disruptive but UCSM connection is lost for 60-80 seconds.
2) Update backup image of all IOMs
Non disruptive.
3) Activate all IOMs with “set startup” option
Non disruptive.
4) Activate secondary Fabric Interconnect
Non disruptive but degraded due to one FI reboot.
5) Wait for User Acknowledgement
6) Activate primary Fabric Interconnect
Non disruptive but degraded due to one FI reboot and UCSM
connection is lost for 60-80 seconds.
90
 Blade management services presented to external clients
o CIMC IPs are on the external management network and are NAT'd by the FI
o Services to external clients: KVM / virtual media, IPMI, Serial over LAN (SoL)
o To the FI itself: SSH, HTTP/S
[Diagram: FI mgmt0 (e.g. 192.168.1.2) with eth0:n sub-interfaces mapping external IPs such as 192.168.1.4/.5/.6 to Server 1/1, 1/2, 1/3, …]
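For example, once an IPMI access profile (user/password) is applied through the service profile, a standard ipmitool client can reach a blade's CIMC through its NAT'd external management IP (address and credentials below are placeholders):

ipmitool -I lanplus -H 192.168.1.4 -U ipmiuser -P ipmipass chassis power status
ipmitool -I lanplus -H 192.168.1.4 -U ipmiuser -P ipmipass sol activate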
 Size the management IP pool so the number of host addresses in the subnet covers the number of blades

 Must be same VLAN (native) as UCSM mgmt interface

 Physical path (FI-A or FI-B) done at blade discovery

 CIMC IP associated with


o physical blade (UCSM 1.3)

o Service profile (UCSM 1.4)
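
A sketch of defining the ext-mgmt pool from the UCSM CLI is shown below; the scope / create / commit-buffer model is standard UCSM CLI, but the exact argument order for the block (first IP, last IP, gateway, netmask) should be confirmed against the CLI configuration guide for your release:

UCS-A# scope org /
UCS-A /org # scope ip-pool ext-mgmt
UCS-A /org/ip-pool # create block 192.168.1.10 192.168.1.30 192.168.1.254 255.255.255.0
UCS-A /org/ip-pool/block* # commit-buffer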

96
Unified Ports: each port can run native Fibre Channel or lossless Ethernet (1/10GbE, FCoE, iSCSI, NAS)

Benefits
 Simplify switch purchase – remove port-ratio guesswork
 Increase design flexibility
 Remove protocol-specific bandwidth bottlenecks

Use-cases
 Flexible LAN & storage convergence based on business needs
 Service can be adjusted based on the demand for specific traffic
 Ports on the base card or the Unified Port GEM Module can either
be Ethernet or FC
 Only a contiguous set of ports can be configured as Ethernet or FC
 Ethernet Ports have to be the 1st set of ports
 Port type changes take effect after next reboot of switch for Base
board ports or power-off/on of the GEM for GEM unified ports.

Base card – 32 Unified Ports GEM – 16 Unified Ports

Eth FC Eth FC

105
106
 Slider based configuration
 Only even number of ports can be configured as FC
 Ethernet
o Server Port
o Uplink Port
o FCoE Uplink
o FCoE Storage
o Appliance Port

 Fibre Channel
o FC Uplink Port
o FC Storage Port

108
 Server Port
o Connects to Chassis

 Uplink Port
o Connects to upstream LAN.

o Can be 1 Gig or 10 Gig

 FCoE Uplink Port


o Connects to an upstream SAN via FCoE

o Introduced in UCSM 2.1

 FCoE Storage Port


o Connects to a directly attached FCoE Target

 Appliance Port
o Connects to an IP appliance (NAS)

109
 FC Uplink Port
o Connects to upstream SAN via FC
o Can be 2 / 4 or 8 Gig

 FC Storage Port
o Connects to a directly attached FC Target

110
 The FIs do not participate in VTP

 VLAN configuration is done in the LAN tab in UCSM

 The default VLAN (VLAN 1) is automatically created and cannot be


deleted

 As of UCSM 2.1, only 982 VLANs are supported

 VLAN range is 1-3967 and 4049-4093

 Support for Isolated PVLANs within UCS
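
For illustration, a named VLAN can be created from the UCSM CLI roughly as follows (the VLAN name and ID are placeholders; verify the syntax against the CLI guide for your release):

UCS-A# scope eth-uplink
UCS-A /eth-uplink # create vlan Web 100
UCS-A /eth-uplink/vlan* # commit-buffer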

111
 VSAN configuration is done in the SAN Tab in UCSM

 The default VSAN (VSAN 1) is automatically created by the system


and cannot be deleted

 FCoE VLAN ID associated with every VSAN

 FCoE VLAN ID cannot overlap with the Ethernet VLAN ID (created


in the LAN tab)

 The maximum number of VSANs supported is 32

114
 Port channels provide better performance and resiliency

 As of UCSM 2.1, maximum 8 members per port channel

 LACP mode is active

 Can connect to upstream vPC / VSS

 Load balancing is source/destination MAC or IP based
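
On the upstream side, a typical NX-OS sketch for one FI's uplinks looks like the following (interface and channel numbers are placeholders, and the vPC domain/peer link are assumed to already exist); LACP active on the switch matches the FI's active LACP mode:

feature lacp
interface Ethernet1/1-2
  description Uplinks to UCS FI-A
  switchport mode trunk
  channel-group 10 mode active
interface port-channel10
  vpc 10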

117
 FC uplinks from the FI can be members of a port channel with an upstream Nexus or MDS FCF

 As of UCSM 2.1, maximum 16 members per port channel

 Load balancing is OX_ID based
121
Module 2
Service Profiles: state abstracted from hardware
 LAN connectivity, SAN connectivity, and the OS & application identity are abstracted from the physical server
 Abstracted state includes: UUID, MAC addresses, WWN addresses, boot order, NIC firmware and settings, HBA firmware and settings, BIOS firmware and settings, drive controller firmware, drive firmware, BMC firmware (e.g. UUID: 56 4dcd3f 59 5b…, MAC: 08:00:69:02:01:FC, WWN: 5080020000075740, Boot Order: SAN, LAN)
 The same profile can be applied to Chassis-1/Blade-2 today and Chassis-8/Blade-5 tomorrow

 Separate firmware, addresses, and parameter settings from server hardware
 Physical servers become interchangeable hardware components
 Easy to move OS & applications across server hardware
 Contain server state information
o Server name, UUID, MAC, WWN, boot info, firmware, LAN config, SAN config

 User-defined
o Each profile can be individually created
o Profiles can be generated from a template

 Applied to physical blades at run time (run-time association)
o Without profiles, blades are just anonymous hardware components

 Consistent and simplified server deployment – “pay-as-you-grow” deployment


o Configure once, purchase & deploy on an “as-needed” basis

 Simplified server upgrades – minimize risk


o Simply disassociate server profile from existing chassis/blade and associate to new chassis/blade

 Enhanced server availability – purchase fewer servers for HA


o Use same pool of standby servers for multiple server types – simply apply appropriate profile during failover
[Example: a blade running service profile “MyDBServer” (identity, LAN/SAN config) fails at Time A; at Time B the profile is re-associated to a standby blade and the workload resumes]
 Feature for multi tenancy which defines a management hierarchy for the
UCS system

 Has no effect on actual operation of blade and the OS

 Usually created on the basis of


o Application type – ESXCluster, Oracle
o Administrative scope – HR, IT

 Root is the top of the hierarchy and cannot be deleted

 Organizations can have multiple levels depending on requirement

129
Root Org

Eng HR

QA HW

130
 Pools, Policies, Service Profiles, Templates
 Blades are not part of an organization and are global resources

131
 Root has access to Pools and Policies in Group-C

 HR has access to Pools and Policies defined in Group-C

 Eng has access to Group-B + Group-C

 QA has access to Group-B + Group-C

 HW has access to Group-A + Group-B + Group-C

[Hierarchy: Group-C defined at the Root Org; Group-B at Eng; Group-A at HW; HR sits under Root; QA and HW sit under Eng]
 Consumer of a Pool is a Service Profile.

 Pools can be customized to have uniformity in Service Profiles. For


example, the Oracle App servers can be set to derive MAC
addresses from a specific pool so that it is easy to determine app
type by looking at a MAC address on the network

 Value retrieved from pool as you create logical object, then specific
value from pool belongs to service profile (and still moves from
blade to blade at association time)

 Overlapping Pools are allowed. UCSM guarantees uniqueness


when a logical object is allocated to a Service Profile.
134
 Logical Resource Pool
o UUID Pool
o MAC Pool
o WWNN / WWPN Pool

 Physical Resource Pool


o Server Pool – Created manually or by qualification

135
 Point to pool from appropriate place in Service Profile

 For example:
o vNIC --- use MAC pool
o vHBA – use WW Port Name pool

 In GUI can see the value that is retrieved from the pool
o Note that it belongs to service profile, not physical blade

136
 Pools simplify creation of Service Profiles.

 Management of virtual identity namespaces within the same UCS domain

 Cloning

 Templates

137
 If you create a profile with pool associations
o (server pool, MAC pool, WWPN pool, etc)…..

• Then all pool associations are replicated to a cloned template.

• Specific new values for MAC, WWN will be immediately assigned to the
profile from the appropriate pool.

138
 16-byte (128-bit) number  ≈ 3.4x10^38 different values
 Stored in BIOS
 Consumed by some software vendors (e.g. MS, VMware)

139
 UUIDs (as used by ESX) need only be unique within ESX
“datacenter” (unlike MACs, WWNs, and IPs)

 It is impossible to assign the same UUID to 2 different UCS servers


via UCSM

 Can have overlapping pools

 Pool resolution from current org up to root – if no UUIDs found,


search default pool from current org up to root

140
 One MAC per vNIC

 MAC address assignment:


o Hardware-derived MAC

o Manually create and assign MAC

o Assign address from a MAC pool

141
 Can have overlapping pools

 UCSM performs consistency checking within UCS domain

 UCSM does not perform consistency checking with upstream LAN

 Should use 00:25:B5 as vendor OUI

 Pool resolution from current org up to root – if no MACs found,


search default pool from current org up to root
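
As a sketch, a MAC pool using the Cisco OUI can be created from the UCSM CLI roughly as follows (the pool name and block range are placeholders; verify the syntax against the CLI guide for your release):

UCS-A# scope org /
UCS-A /org # create mac-pool ESX-MAC-A
UCS-A /org/mac-pool # create block 00:25:B5:0A:00:00 00:25:B5:0A:00:3F
UCS-A /org/mac-pool/block* # commit-buffer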

142
 One WWNN per service
profile
 One WWPN per vHBA
 WWN assignment:
o Use hardware-derived
WWN
o Manually create and assign
WWN
o Assign WWNN pool to
profile/template
o Assign WWPN pool to vHBA

143
 Can have overlapping pools

 UCSM performs consistency checking within UCS pod

 UCSM does not perform consistency checking with upstream SAN

 20:00:00:25:B5:XX:XX:XX recommended

 Pool resolution from current org up to root – if no WWNs found,


search default pool from current org up to root
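
A WWPN pool can be sketched the same way; treat the node-wwn-assignment / port-wwn-assignment keyword (which distinguishes WWNN from WWPN pools in the CLI) and the exact syntax as assumptions to verify against the CLI guide:

UCS-A# scope org /
UCS-A /org # create wwn-pool SP-WWPN-A port-wwn-assignment
UCS-A /org/wwn-pool # create block 20:00:00:25:B5:0A:00:00 20:00:00:25:B5:0A:00:3F
UCS-A /org/wwn-pool/block* # commit-buffer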

144
 Manually populated or Auto-populated

 Blade can be in multiple pools at same time

 “Associate” service profile with pool


o Means select blade from pool (still just one profile per blade at a time)
o Will only select blade not yet associated with another Service Profile,
and not in process of being disassociated

145
 One server per service profile
 Assign server pool to service profile or template

149
 Can have overlapping pools

 2 servers in an HA cluster could be part of same chassis

 Can use hierarchical pool resolution to satisfy SLAs

 Pools resolved from current org up to root – if no servers found, then


search default pool from current org up to root

150
 Policies can be broadly categorized as
o Global Policies
• Chassis Discovery Policy
• SEL Policy
o Policies tied to a Service Profile
• Boot Policy
• BIOS Policy
• Ethernet Adapter Policy
• Maintenance Policy

 Policies when tied to a Service Profile greatly reduce the time taken
for provisioning

152
Template flavors:
 Initial template
o Updates to template are not propagated to profile clone

 Updating template
o Updates to template propagated to profile clone

Template types:

 vNIC

 vHBA

 Service Profile

163
 When creating a vNIC in a service profile, a vNIC template can be
referenced.

 This template will have all of the values and configuration to be used
for creating the vNIC.

 These values include QoS, VLANs, pin groups, etc.

 Can be referenced multiple times when creating a vNIC in your


service profile.

164
165
 Similar to a vNIC template. This is used when creating vHBAs in
your service profile.

 Used to assign values such as QoS, VSANs, etc.

 Can be referenced multiple times to create multiple vHBAs in your


service profile.

166
167
 Same flow as creating Service Profile

 Can choose server pool (but not individual blade)

 Can associate virtual adapters (vNIC, VHBA) with MAC and WWN
pools

 Can create template from existing service profile


o Works nicely for all elements of service profile that use pools

168
 You will first start by creating several pools, policies and templates that will
be used for the values assigned to each of your servers through a service
profile.
 You will then create a service profile template. From the wizard you will be
selecting the various pools, policies and templates.
 Once you’ve created your service profile template you will then create two
service profiles clones. The values and pools and policies assigned to the
template will be used to create two individual service profiles.
 For example you will create a MAC pool with 16 usable MAC addresses.
This will then be placed in the service profile template. When creating
clones from the template, the system will allocate MAC addresses from this
pool to be used by each vNIC in this service profile.
 The service profile will automatically be assigned to the server via the
server pool. You will then boot the server and install Linux.

182
183
Module 3
 Role Based Access Control
 Remote User Authentication
 Faults, Events and Audit Logs
 Backup and Restore
 Enabling SNMP
 Call Home
 Enabling Syslog
 Fault Suppression

185
 Organizations
o Defines a management hierarchy for the UCS system
o Absolutely no effect on actual operation of blade and its OS
 RBAC
o Delegated Management
o Allows certain users to have certain privileges in certain organizations
o Absolutely no effect on who can use and access the OS on the blades

187
 Orgs and RBAC could be used independently
 Orgs without RBAC
o Structural management hierarchy
o Could still use without delegated administration
• Use administrator that can still do everything
 RBAC without Orgs
o Everything in root org (as we have been doing so far)
o Still possible to delegate administration (separate border network/FC
admin from server admin, eg)

188
 Really no such thing as not having Orgs

 We have just been doing everything in root (/) org

 Just use happily, if you don’t care about hierarchical management


root (/)

SWDev QA

SWgrpA SWgrpB
IntTest
Policies
 Blades are independent of Org

 Same blade can be in many server pools in many orgs

 Blade can be associated with logical service profile in any org


 A user is assigned certain privileges (one or more roles) over certain locales (one or more)
[Example: user “jim” has locale “myloc” / role “myrole”, granting priv1 at root (/), priv2 at /SWDev, and priv3 at /Eng/HWEng]
 Role is a collection of privileges

 There are predefined roles (collections).


o You can create new roles as well

 Some special privileges:

o admin (associated with the predefined “admin” role)

o aaa (associated with the predefined “aaa” role)


 Radius
 TACACS+
 LDAP
 Local

198
 Provider – The remote authentication server

 Provider Group – A group of authentication servers or providers.


This must be defined as the group is what’s referenced when
configuring authentication.

 Group Map – Used to match certain attributes in the authentication


request to map the appropriate Roles and Locales to the user.

199
 For LDAP we define a DN and reference the roles and Locales it
maps to.
 If no group map is defined, a user could end up with the default
privileges such as read-only

200
 Faults – System and hardware failures such as power supply failure, loss of power, or configuration issues.
 Events – System events such as clustering, or RSA key generated, etc.
 Audit logs – Configuration events such as Service Profile and vNIC creation,
etc.
 Syslog – Syslog messages generated and sent to a external Syslog server
 TechSupport files – These are “show tech” files that have been created and
stored.

202
203
 A Fault Suppression policy is used to determine how long faults are
retained or cleared.
 The flapping interval is used to determine how long a fault retains its
severity and stays in an active state.
 Say for example the flapping interval is 10 seconds. If a critical fault
came in continuously within the 10 seconds, the fault would be
suppressed and remain in an active state.
 After the 10 seconds duration, if no further instances of the fault
have been reported the fault is then either retained or cleared based
on the suppression policy.

204
 Full state backup – This is a backup of the entire system for
disaster recovery. This file can not be imported and can only be
used when doing a system restore upon startup of the Fabric
Interconnects.
 All Configuration backup – This backs up the system and logical
configuration into an XML file. This file can not be used during the
system restore and can only be imported while the UCS is
functioning. This backup does not include passwords of locally
authenticated users.
 System Configuration – Only system configuration such as users,
roles and management configuration.
 Logical Configuration – This backup is logical configuration such
as policies, pools, VLANs, etc.

207
 Creating a Backup Operation allows you to perform the same
backup multiple times.
 Files can be stored locally or on a remote file system.

208
209
 You can also create scheduled backups
 This can only be done for Full State and All Configuration backups.
 The UCS can be pointed to an FTP server, storage array or any other type of remote file system.
 Backups can run daily, weekly or bi-weekly.

211
IP or Hostname of
remote server to store
backup

Protocol to use for


backup

Admin state of
scheduled backup

Backup Schedule

212
 Once a Backup operation is complete you can then import the
configuration as needed.
 You must create an Import Operation. This is where you will point to
the file you want to Import into the UCS.
 You can not import a Full system backup. This file can only be used
when doing a system restore when a Fabric Interconnect is booting.
 Options are to Merge with the running configuration or replace the
configuration.

213
 UCS supports SNMP versions 1, 2c and 3
 The following authentication protocols are supported for SNMPv3 users:
o HMAC-MD5-96 (MD5)

o HMAC-SHA-96 (SHA)

 The AES protocol can be enabled under a SNMPv3 user as well for
additional security.

216
217
 You have the option to enable traps or informs. Traps are less reliable because they do not require acknowledgements. Informs require acknowledgements but also have more overhead.
 If you enable SNMPv3, the following V3 privileges can be enabled:
o Auth—Authentication but no encryption

o Noauth—No authentication or encryption

o Priv—Authentication and encryption

218
 Choose the authentication encryption type.
 Once you enable AES, you must use a privacy password. This is
used when generating the AES 128 bit encryption key.

219
 Call Home is a feature that allows UCS to generate a message based on system alerts, faults and environmental errors.
 Messages can be e-mailed, sent to a pager or to an XML-based application.
 UCS can send these messages in the following formats:
o Short text format

o Full text

o XML format

221
 A destination profile is used to determine the recipients of the call
home alerts, the format it will be sent on and for what severity level.
 A call home policy dictates what error messages you would like to
enable or disable the system from sending.
 When using E mail as the method to send alerts, an SMTP server
must be configured.
 It’s recommended that both fabric interconnects have reachability to
the SMTP server.

222
Call Home logging
level for the system

Contact info listing the


source of the call
home alerts

Source e mail address


for Call Home

SMTP Server

223
Alert groups – What
elements you want to
receive errors on.

Logging Level

Alert Format

E mail recipients who


will receive the alerts

224
 Call home will send alerts for certain types of events and messages.
 A Call Home policy allows you to disable alerting for these specific
messages.

225
 Smart Call home will alert Cisco TAC of an issue with UCS.
 Based on certain alerts, A Cisco TAC case will be generated
automatically.
 A destination profile of “CiscoTAC-1” is already predefined. This is
configured to send Cisco TAC message with the XML format.

226
 Under the CiscoTAC-1 profile, enter callhome@cisco.com
 Under the “System Inventory” tab, click “Send Inventory Now”.
 The message will be sent to Cisco. You will then receive an automatic reply based on the contact info you specified in the Call Home setup.
 Simply click on the link in the e-mail and follow the instructions to register your UCS for the Smart Call Home feature.

227
 Syslog can be enabled under the admin tab in UCS.
 Local destination will allow you to configure UCS to store syslog
messages locally in a file.
 Remote destination will allow UCS to send to a remote syslog
server. Up to three servers can be specified.
 Local sources will allow you to decide what types of messages are
sent. The three sources are Alerts, Audits and Events.

229
Customer benefits

• Customers can align UCSM fault alerting


with their operational activities

Feature details
• Fault suppression offers the ability to
lower severity of designated faults for a
maintenance window, preventing Call
Home and SNMP traps during that period

• Predefined policies allow a user to easily


place a server into a maintenance mode
to suppress faults during maintenance
operations
1. Users can now “Start/Stop Fault Suppression” in order to suppress
transient faults and Call-home/SNMP notifications
2. Support on both physical (Chassis, Server, IOM, FEX) and logical
entities (Org, Service Profile)
3. Users can specify a time window during which fault suppression takes effect
4. A fault suppression status indicator to show different states (Active,
Expired, Pending)
5. Fault Suppression policies that contain a list of faults raised during
maintenance operations will be provided

233
Server Focused - Operating system level shutdown/reboot
- Local disk removal/ replacement
- Server power on/power off/reset
- BIOS, adapter firmware activation/upgrades
- Service profile association, re-association, dis-
association
IOM Focused - Update/Activate firmware
- Reset IOM
- Remove/Insert SFPs
- IOM removal/insert

Fan/PSU Focused - Local disk removal/ replacement


1. Suppress Policy is used to specify which faults we want to
suppress
2. Consists of cause/type pairs defined as Suppress Policy Items
3. System will provide pre-defined suppress policies that are not
modifiable
4. Additional suppress policies cannot be created and used by user

236
1. default-chassis-all-maint
Blade, IOM, PSU, Fan
2. default-chassis-phys-maint
PSU, Fan
3. default-fex-all-maint
IOM, PSU, Fan
4. default-fex-phys-maint
PSU, Fan
5. default-iom-maint
IOM
6. default-server-maint

237
238
Module 4
Single Point of Management

Unified Fabric

Stateless Servers with Virtualized Adapters

240
UCS Manager
Embedded– manages entire system

UCS Fabric Interconnect

UCS Fabric Extender


Remote line card

UCS Blade Server Chassis


Flexible bay configurations

UCS Blade or Rack Server


Industry-standard architecture

UCS Virtual Adapters


Choice of multiple adapters
241
UCS Fabric Interconnect
 UCS 6100: 20x 10GE ports (1 RU) or 40x 10GE ports (2 RU); Ethernet or FC expansion modules
 UCS 6200: 48x unified ports (Eth/FC) in 1 RU – 32x base and 16x expansion

UCS Fabric Extender
 UCS 2104: 8x 10GE downlinks to servers, 4x 10GE uplinks to FIs
 UCS 2208/2204: up to 32x 10GE downlinks to servers, up to 8x 10GE uplinks to FIs

Adapters
 M81KR VIC, M71KR, etc.: up to 2x 10GE ports; M81KR supports up to 128 virtual interfaces
 UCS VIC 1280: up to 8x 10GE ports; up to 256 virtual interfaces
242
[Diagrams: UCS domain physical architecture – FI-A and FI-B in a cluster with out-of-band management and uplink ports; server ports run down to up to 20 chassis (each with two IOM fabric extenders) and to Nexus 2232 FEXes for rack-mount servers; B200/B250 half- and full-width blades and rack servers connect through virtualized CNA adapters, with a separate VIC management path for the rack servers]
Terminology
 vNIC (LIF): a host-presented PCI device managed by UCSM
 VIF: the policy application point where a vNIC connects to the UCS fabric (vEth/vFC on the Fabric Interconnect)
 VN-Tag: an identifier added to the packet containing source and destination IDs, used for switching within the UCS fabric (a “virtual cable” from the adapter through the IOM to the FI)
What you see: the service profile’s vHBAs and vNICs appear as vFC/vEth interfaces on the Fabric Interconnect, connected over virtual cables (VN-Tag) through the IOM to the adapter on the blade or rack server
 Dynamic, rapid provisioning
 State abstraction
 Location independence
 Blade or rack
[Diagram: Fabric Interconnect hardware components – Carmel switching ASICs feeding a unified crossbar fabric, Intel (Jasper Forest) CPU with DDR3 memory, NVRAM and serial flash, a PEX 8525 PCIe switch, dual-Gig management/cross-connect ports, and console]
Unified Ports: each port can run native Fibre Channel or lossless Ethernet (1/10GbE, FCoE, iSCSI, NAS)

Benefits
 Simplify switch purchase – remove port-ratio guesswork
 Increase design flexibility
 Remove protocol-specific bandwidth bottlenecks

Use-cases
 Flexible LAN & storage convergence based on business needs
 Service can be adjusted based on the demand for specific traffic
 Ports on the base card or the Unified Port GEM Module can either
be Ethernet or FC
 Only a contiguous set of ports can be configured as Ethernet or FC
 Ethernet Ports have to be the 1st set of ports
 Port type changes take effect after next reboot of switch for Base
board ports or power-off/on of the GEM for GEM unified ports.

Base card – 32 Unified Ports GEM – 16 Unified Ports

Eth FC Eth FC

250
251
 Slider based configuration
 Only even number of ports can be configured as FC
 Configured on a per FI basis

252
61x0/62xx Generational Contrasts

Feature                  61x0                       62xx
Flash                    16GB eUSB                  32GB iSATA
DRAM                     4GB DDR3                   16GB DDR3
Processor                Single Core Celeron 1.66   Dual Core Jasper Forest 1.66
Unified Ports            No                         Yes
Number of ports / UPC    4                          8
Number of VIFs / UPC     128 / port fixed           4096 programmable
Buffering per port       480KB                      640KB
VLANs                    1k                         1k (4k future)
Active SPAN Sessions     2                          4 (w/ dedicated buffer)
Latency                  3.2uS                      2uS
MAC Table                16k                        16k (32k future)
L3 Switching             No                         Future
IGMP entries             1k                         4k (future)
Port Channels            16                         48 (96 in 6296)
FabricPath               No                         Future
IOM Components
 Switching ASIC (Woodside) – aggregates traffic between the host-facing 10G Ethernet ports and the network-facing 10G Ethernet ports; up to 8 fabric ports to the Fabric Interconnect
 CPU (also referred to as the CMC, Chassis Management Controller) – controls the switching ASIC and performs other chassis-management functions (with its own flash, DRAM, and EEPROM)
 L2 switch – aggregates traffic from the CIMCs on the server blades
 Interfaces – HIF (backplane/host ports, up to 32 backplane ports to blades), NIF (fabric ports), BIF, CIF
 No local switching – all traffic from HIFs goes upstream for switching
2104/220X Generational Contrasts

Feature              2104           2208            2204
ASIC                 Redwood        Woodside        Woodside
Host Ports           8              32              16
Network Ports        4              8               4
CoSes                4 (3 enabled)  8               8
1588 Support         No             Yes             Yes
Latency              ~800nS         ~500nS          ~500nS
Adapter Redundancy   mLOM only      mLOM and Mezz   mLOM and Mezz
1280 VIC
 Next-generation VIC
 Dual 4x10 Gbps connectivity into the fabric
 PCIe x16 Gen2 host interface
 Capable of 256 PCIe devices (OS dependent)
 Same host-side drivers as the VIC (M81KR)
 Retains VIC features with enhancements
 Fabric Failover capability
 SR-IOV “capable” device
[Diagram: 10GBASE-KR sub-ports, UIF 0 / UIF 1]
Key Generational Contrasts
Function/Capability   M81KR          1280-VIC
PCIe Interface        Gen1 x16       Gen2 x16
Embedded CPUs         3 @ 500 MHz    3 @ 675 MHz
Uplinks               2 x 10GE       2 x 10GE / 2 x 4 x 10GE
vNICs/vHBAs           128            256
WQ,RQ,CQ              1024           1024
Interrupts            1536           1536
VIFlist               1024           4096
Complete hardware inter-operability between Gen 1 and Gen 2

Fabric Interconnect   IOM     Adapter Supported   Min software version required
6100                  2104    UCS M81KR           UCSM 1.4(1) or earlier
6100                  2208    UCS M81KR           UCSM 2.0
6100                  2104    UCS 1280 VIC        UCSM 2.0
6100                  2208    UCS 1280 VIC        UCSM 2.0
6200                  2104    UCS M81KR           UCSM 2.0
6200                  2208    UCS M81KR           UCSM 2.0
6200                  2104    UCS 1280 VIC        UCSM 2.0
6200                  2208    UCS 1280 VIC        UCSM 2.0
Ethernet Switching Modes – End Host Mode (EHM)
 Server vNICs are pinned to an uplink port
 No Spanning Tree Protocol
o Reduces CPU load on upstream switches
o Reduces control-plane load on the FI
o Simplified upstream connectivity
 UCS connects to the LAN like a server, not like a switch
 Maintains a MAC table for servers only – eases MAC table sizing in the access layer
 Allows multiple active uplinks per VLAN – doubles effective bandwidth vs. STP
 Prevents loops by preventing uplink-to-uplink switching
 Completely transparent to the upstream LAN
 Traffic on the same VLAN is switched locally
End Host Mode forwarding rules
 Server-to-server traffic on the same VLAN is locally switched
 Uplink-port-to-uplink-port traffic is not switched
 Each server link is pinned to an uplink port / port channel
 Network-to-server unicast traffic is forwarded to the server only if it arrives on the pinned uplink port – the Reverse Path Forwarding (RPF) check
 A packet whose source MAC belongs to a server but which is received on an uplink port is dropped (deja-vu check)
End Host Mode broadcast and multicast handling
 Broadcast traffic for a VLAN is pinned to exactly one uplink port (or port channel) per VLAN, i.e. it is dropped when received on the other uplinks
 Server-to-server multicast traffic is locally switched
 The RPF and deja-vu checks also apply to multicast traffic
Switch Mode
 The Fabric Interconnect behaves like a normal Layer 2 switch
 Server vNIC traffic follows VLAN forwarding
 Rapid PVST+ spanning tree runs on the uplink ports per VLAN
 Configuration of STP parameters (bridge priority, hello timers, etc.) is not supported
 VTP is not supported currently
 MAC learning/aging happens on both the server and uplink ports, as in a typical Layer 2 switch
 Upstream links are blocked per VLAN via spanning-tree logic
Fabric Failover
 The fabric provides NIC failover capability, chosen when defining a service profile
 Traditionally done using a NIC bonding/teaming driver in the OS
 Provides failover for both unicast and multicast traffic
 Works for any OS
[Diagram: a vNIC on the Cisco VIC or Menlo (M71KR) adapter is backed by vEth interfaces on both FI-A and FI-B over virtual cables through the IOMs]
[Diagram: fabric failover with a bare-metal OS (Windows/Linux) – when FI-A's path fails, the blade's vNIC (MAC-A) stays up and is re-homed to FI-B; a gratuitous ARP on the FI-B uplinks updates the upstream switches]
[Diagram: fabric failover with a hypervisor host – the veth profiles (Web, NFS, VMK, COS) behind the failed fabric are re-homed to the other FI and gratuitous ARPs for the VM and vmkernel MACs (MAC-C, MAC-E) are sent upstream; the server-facing vNICs stay up]
Ethernet Switching Modes Recommendations
 Spanning Tree protocol is not run in EHM hence control plane is
unoccupied
 EHM is least disruptive to upstream network – BPDU Filter/Guard,
Portfast enabled upstream
 MAC learning does not happen in EHM on uplink ports. Current
MAC address limitation on the 6100 ~14.5K.

Recommendation: End Host Mode


269
 Dynamic pinning
Server vNICs are pinned to an uplink port / port channel automatically
 Static pinning
Specific pin groups are created and associated with adapters (e.g. a pin group “Oracle” defined on the FI and applied to the Oracle server’s vNIC)
 Static pinning allows traffic management if required for certain applications / servers

Recommendation: End Host Mode
 Fabric Failover is only applicable in End Host Mode
 NIC teaming software is required to provide failover in Switch Mode

Recommendation: End Host Mode
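
If Switch Mode is used, failover has to come from the OS; a minimal Linux active-backup bond over the two vNICs is sketched below (interface names and addressing are placeholders) – in End Host Mode this is unnecessary because Fabric Failover handles it in hardware:

ip link add bond0 type bond mode active-backup miimon 100
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up
ip addr add 10.1.1.10/24 dev bond0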


271
End Host Mode vs. Switch Mode uplinks
 End Host Mode: all border ports are active/active per VLAN toward the primary and secondary STP roots in the LAN
 Switch Mode: spanning tree blocks redundant border ports, leaving one forwarding path per VLAN

Recommendation: End Host Mode
 Certain application like MS-NLB (Unicast mode) have the need for
unknown unicast flooding which is not done in EHM

 Certain network topologies provide better network path out of the


Fabric Interconnect due to STP root placement and HSRP L3 hop.

 Switch Mode is “catch all” for different scenarios

Recommendation: Switch Mode


273
Adapter – IOM Connectivity
[Diagrams: B200 M3 adapter-to-IOM lane combinations with 2208 and 2204 IOMs – the VIC 1240 mLOM alone, VIC 1240 plus a VIC 1280 mezzanine, and VIC 1240 plus a port expander each enable a different number of 10G KR lanes toward IOM-A and IOM-B; the adapters attach to the CPUs over PCIe x16 Gen 2]
IOM – FI Connectivity
Server-to-Fabric Port Pinning Configurations
[Figures: a UCS 5108 chassis connected to the Fabric Interconnects in Discrete Mode and in Port Channel Mode – 6200 to 2208 and 6200 to 2204]
 Individual Links
Blades pinned to discrete NIFs
Valid number of NIFs for pinning – 1,2,4,8

 Port-channel
Only supported between a UCS 6200 and a 2208/2204 XP

284
Number of Active Fabric Links Blades pinned to fabric link

1-Link All the HIF ports pinned to the active link


2-Link 1,3,5,7 to link-1
2,4,6,8 to link-2
4-Link 1,5 to link-1
2,6 to link-2
3,7 to link-3
4,8 to link-4
8-Link (Applies only to 2208XP) 1 to link-1
2 to link-2
3 to link-3
4 to link-4
5 to link-5
6 to link-6
7 to link-7
8 to link-8

 HIFs are statically pinned by the system to individual fabric ports.


 Only 1,2,4 and 8 links are supported, 3,5,6,7 are not valid configuration.
 Static pinning is done by the system, dependent on the number of fabric ports
 1, 2, 4, or 8 links are valid for initial pinning
 Applicable to both the 6100/6200 and the 2104XP/2208XP
[Diagram: each blade's HIF is pinned to one of the IOM's fabric ports toward the Fabric Interconnect]

On a fabric-link failure:
 Pinned HIFs are brought down
 Other blades are unaffected
 Blades are re-pinned to a valid number of links – 1, 2, 4 or 8
 Connectivity of the pinned blades is affected; HIFs are brought down and up for re-pinning
 May result in unused links
 Addition of links requires re-acknowledgement of the chassis
Fabric port channel (IOM to FI)
 Only possible between a 6200 and a 2208XP
 HIFs are pinned to the port channel
 Port-channel hash:
o IP traffic – L2 DA, L2 SA, L3 DA, L3 SA, VLAN
o FCoE traffic – L2 SA, L2 DA, FC SID, FC DID
Fabric port channel – member failure
 Blades stay pinned to the port channel on a member-link failure
 HIFs are not brought down until all port-channel members fail
[Figure: chassis-to-FI connectivity shown in Discrete Mode and in Port Channel Mode]
Discrete Mode
 Servers can only use a single 10GE IOM uplink
 A blade is pinned to a discrete 10 Gb uplink
 Fabric failover if a single uplink goes down
 Per-blade traffic distribution (same as Balboa/2104)
 Suitable for traffic-engineering use cases
 Addition of links requires chassis re-acknowledgement

Port Channel Mode
 Servers can utilize all 8x 10GE IOM uplinks
 A blade is pinned to a logical interface of 80 Gbps
 Fabric failover only if all uplinks on the same side go down
 Per-flow traffic distribution within the port channel
 Suitable for most environments
 Recommended with the VIC 1280
Upstream Connectivity (Ethernet) – disjoint Layer 2 example
[Diagram: DMZ 1 (VLANs 20-30) and DMZ 2 (VLANs 40-50) upstream of FI-A and FI-B in End Host Mode; all links forwarding, with VLANs pruned per uplink so DMZ 1 servers and DMZ 2 servers each use their own uplinks. Assumption: no VLAN overlap between DMZ 1 & DMZ 2]
Dynamic re-pinning of failed uplinks
 Sub-second re-pinning of the vEth to a surviving uplink – the server vNIC stays up with no disruption
 All uplinks forwarding for all VLANs; no STP
 GARP-aided upstream convergence
Recommended: Port Channel Uplinks
 No disruption and no GARPs needed when a member link fails
 Sub-second convergence
 More bandwidth per uplink, with per-flow uplink diversity
 No server NIC disruption
 Fewer GARPs needed and faster bi-directional convergence
 Fewer moving parts
vPC uplinks hide uplink & switch failures from server vNICs
 No disruption and no GARPs needed
 More bandwidth per uplink, with per-flow uplink diversity
 No server NIC disruption
 Switch and link resiliency
 Faster bi-directional convergence
 Fewer moving parts
vPC port-channel uplinks are the RECOMMENDED design
Example – inter-fabric Layer 2 switching
 vNIC 0 is on Fabric A, vNIC 1 is on Fabric B
 VM1 (VLAN 10) is pinned to vNIC 0; VM4 (VLAN 10) is pinned to vNIC 1
 Traffic from VM1 to VM4: (1) leaves Fabric A, (2) is L2-switched upstream, (3) enters Fabric B
[Diagram: FI-A and FI-B in End Host Mode uplinked to Nexus 7K1 and 7K2]
1. Traffic destined for a vNIC pinned to the red uplink enters 7K1
2. The same applies vice versa for the green uplink
3. All inter-fabric traffic traverses the Nexus 7000 peer link
With vPC uplinks to the L3 aggregation switches (Nexus 7K vPC domain with peer link and peer keepalive), or with 4 x 10G (or more) uplinks per FI in port channels:
 All UCS uplinks forwarding
 No STP influence on the topology
 End Host Mode
Upstream Connectivity (Storage) – FC End Host (NPV) Mode
 The Fabric Interconnect operates in N_Port Proxy mode (not FC switch mode)
o Simplifies multi-vendor interoperation
o Simplifies management
 The SAN switch sees the Fabric Interconnect as an FC end host (FLOGI/FDISC proxied to the upstream F_Ports)
 Server vHBAs are pinned to an FC uplink in the same VSAN, selected round-robin
 Eliminates an FC domain on the UCS Fabric Interconnect
 One VSAN per F_Port (multi-vendor)
 Trunking and port channeling (OX_ID load balancing) with MDS and Nexus 5K
FC Switch Mode
 The UCS Fabric Interconnect behaves like an FC fabric switch
 Primary use case is directly attached FC or FCoE storage targets
 Light subset of FC switching features
o Select storage ports
o Set the VSAN on storage ports
 The Fabric Interconnect consumes an FC domain ID
 UCSM 2.1: in the absence of an upstream SAN, zoning for directly connected targets is done on the FIs
NPV uplinks to Nexus 7K/5K
 The FIs run in NPV mode; FLOGI/FDISC are proxied to VF ports on the upstream Nexus over VNP links
 Support for trunking and port channeling
 Zoning happens upstream of UCS
Converged uplinks to a Nexus 5K
 With the FIs in NPV mode and a Nexus 5K upstream, the uplink can be converged, i.e. Ethernet/IP and FCoE traffic on the same wire
 This goes against the best practice for upstream Ethernet connectivity
 Can be used in scenarios where port licenses and cabling are an issue
Appliance ports
 IP storage (NFS, iSCSI, CIFS) attached to an “Appliance Port” on each FI
 Controller interfaces are active/standby for a given volume when attached to separate FIs
 Controller interfaces are active/active when each handles its own volumes
 Sub-optimal forwarding is possible if not careful – ensure vNICs access volumes local to their fabric
Direct-attached unified storage
 Storage attached directly to the FI: NFS, iSCSI, CIFS, FCoE
 Supported with the NetApp Unified Target Adapter (UTA) on a unified appliance port
 Cable and port reduction