
vPC Best Practices with Nexus

SAVBU TME Team
[Figure: NX-OS release trains and synchronization across platforms]

Nexus 7000: 5.0(3), 5.1(2), 5.1(3), 5.2
Nexus 5000: 5.0(3)N1, 5.0(3)N2, 5.1(3)N1, 5.2N1 (E-Rocks)
Nexus 3000: 5.0(3)U1, 5.0(3)U2, 5.1(3)U1 (Andaman)

Complete sync and partial sync points align the Nexus 5000/3000 trains to the Nexus 7000 releases.
Complete sync done at major releases

Architectural changes

Major enhancements

Major new features

Partial sync done at minor releases

Critical flaws/bugs

Minor new features

Minor enhancements

© 2010 Cisco and/or its affiliates. All rights reserved.

vPC basic components

Hardware Specific Considerations

vPC enhancements

L3 and vPC

Adding FEX

Summary designs


vPC is a Port-channeling concept extending link aggregation to two separate physical switches

Allows the creation of resilient L2 topologies based on Link Aggregation.

Eliminates the need for STP in the access-distribution

Provides increased bandwidth

All links are actively forwarding

vPC maintains independent control planes

vPC switches are joined together to form a "domain"

[Figure: physical versus logical topology - the two switches in a vPC domain present a single Virtual Port Channel to the attached device; compared with a non-vPC design, all links actively forward, increasing bandwidth]
[Figure: vPC domain - two vPC peers (primary and secondary) connected by the vPC peer link and a keepalive link, with vPC member ports and orphan ports]

vPC peer – a vPC switch, one of a pair

vPC member port – one of a set of ports (port channels) that form a vPC

vPC – the combined port channel between the vPC peers and the downstream device

vPC peer link – link used to synchronize state between vPC peer devices; must be 10GbE. Also carries multicast/broadcast/flooded traffic, and data traffic in case of a vPC member port failure

vPC peer keepalive link – the peer keepalive link between vPC peer switches, used to carry heartbeat packets

CFS – Cisco Fabric Services protocol, used for state synchronization and configuration validation between vPC peer devices

Orphan port – a non-vPC member port
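The components above map directly onto configuration. A minimal sketch (domain ID, port-channel numbers, and keepalive addresses are illustrative, not taken from this deck):

N5K-1(config)# feature vpc
N5K-1(config)# vpc domain 10
N5K-1(config-vpc-domain)# peer-keepalive destination 172.26.161.201 source 172.26.161.200 vrf management
N5K-1(config)# interface port-channel 20
N5K-1(config-if)# vpc peer-link
N5K-1(config)# interface port-channel 201
N5K-1(config-if)# vpc 201

The peer link (Po20 here) carries CFS and synchronization traffic; Po201 becomes a vPC member port once the same vPC number is configured on both peers.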


Graceful consistency check:

On the N7k: NXOS 5.2 On the N5k: NXOS 5.0(2)N2(1)

Per VLAN consistency check:

On the N7k: NXOS 5.2 On the N5k: 5.0(2)N2(1)

Autorecovery:

On the N7k: NXOS 5.2 On the N5k: NXOS 5.0(2)N2(1)

Config-sync:

On the N7k: Freetown On the N5k: NXOS 5.0(2)N2(1)

vPC on FEX

On the N5k: NXOS 4.2(1)N1(1) On the N7k: NXOS 5.2

Orphan Ports shutdown:

On N7k: NXOS 5.2 On N5k: E-Rocks+


IGMP bulk sync:

On N7k: to be verified On N5k: starting from NXOS 5.0(3)N1(1a)

Multicast Optimization on Peer-link:

On N7k: hidden command as of NXOS 5.1(3) (but not supported) On N5k: starting from NXOS 5.0(3)N1(1a)

ARP synchronization:

On N7k: NX-OS 4.2(6) and 5.0(2) (Bogota), fixed in 5.1(1) (Cairo) On N5k: under investigation for Goldcoast

vPC peer-switch:

On N7k: 4.2(6), 5.x On N5k: under investigation for Goldcoast

FEX preprovisioning:

On N7k: Freetown On N5k: NXOS 5.0(2)N1(1)

Dual Layer vPC:

On N7k: TBD On N5k: Fairhaven


vPC allows a single device to use a port channel across two neighbor switches (vPC peers)

Eliminate STP blocked ports

Layer 2 port channel only

Provide fast convergence upon link/device failure

[Figure: the vPC peers present a single port channel to the attached device]


Peer Link carries both vPC data and control traffic between peer switches

Carries any flooded and/or orphan port traffic

Carries STP BPDUs, IGMP updates, etc.

Carries Cisco Fabric Services messages (vPC control traffic)

Carries "multicast" traffic (more details follow)

Minimum 2 x 10GbE ports

ALL VLANS used on vPC PORTS MUST BE PRESENT ON THE PEER-LINK


vPC Peer:

5020(config)# interface port-channel 10
5020(config-if)# switchport mode trunk
5020(config-if)# switchport trunk allowed vlan <BETTER TO ALLOW ALL VLANS>
5020(config-if)# vpc peer-link
5020(config-if)# spanning-tree port type network


Peer Keep-alive provides an out-of-band heartbeat between vPC peers

Purpose is to detect and resolve roles if a Split Brain (Dual Active) occurs

Messages sent on 1 second interval with 5 second timeout

3 second hold timeout on peer-link loss before triggering recovery

Should not be carried over the Peer-Link

Use the mgmt0 interface in the management VRF

Can optionally be a dedicated link, 1Gb is adequate (first 16 ports on 5020 are 1/10GE ports)

3rd option, use a routed inband connection over L3 infrastructure (using SVIs in the default VRF)


Peer Keepalive can be carried over the OOB management network

int mgmt 0

dc11-5020-1(config)# vpc domain 20
dc11-5020-1(config-vpc-domain)# peer-keepalive destination 172.26.161.201 source 172.26.161.200 vrf management

Note: --------:: Management VRF will be used as the default VRF ::--------


Peer keep-alive is a routable protocol (both N5K and N7K)

Primary design requirement is to have a physically different path than all other vPC traffic

In all cases do not carry the peer-keepalive communication over the vPC peer-link

On Nexus 7000 when possible use dedicated

VRF and front panel ports for peer-keepalive link

(1G is more than adequate).

2nd best is to use the management interfaces

3rd option is to use an upstream L3 network for peer-keepalive

If using mgmt 0 interfaces do 'not' connect the supervisor management interfaces back to back

In a dual supervisor configuration only one management port will be active at a given point in time!

Connect both mgmt 0 ports to the OOB network
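The first option (dedicated front panel ports in a dedicated VRF on the Nexus 7000) can be sketched as follows; the VRF name, interface, and addressing are illustrative:

N7K-1(config)# vrf context vpc-keepalive
N7K-1(config)# interface ethernet 1/48
N7K-1(config-if)# vrf member vpc-keepalive
N7K-1(config-if)# ip address 10.1.1.1/30
N7K-1(config-if)# no shutdown
N7K-1(config)# vpc domain 20
N7K-1(config-vpc-domain)# peer-keepalive destination 10.1.1.2 source 10.1.1.1 vrf vpc-keepalive

The dedicated VRF keeps keepalive traffic off every path shared with other vPC traffic.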

[Figure: dual supervisors - both mgmt 0 ports (active and standby management interfaces) connected to the OOB network]

vPC basic components

Hardware Specific Considerations

vPC forwarding rules

vPC enhancements

L3 and vPC

Adding FEX

Summary designs


Cisco Nexus 5000 Series

Peer keepalive:

1st option management port.

2nd option dedicated front panel port in dedicated VLAN.

3rd option upstream L3 network

Cisco Nexus 7000 Series

vPC works on all existing I/O modules

Peer keepalive:

1st option dedicated front panel port in dedicated VRF.

2nd option is management interface.

3rd option upstream L3 network

M1/F1 cards can be used for vPC

Peer-link requires 10 GigE cards

Peer-link should not span M1 and F1, peer-link should be made on either all F1 cards or all M1 cards


NEXUS 7000 I/O modules

Part number       vPC Member Port   vPC Peer-link (10 GE only)
N7K-M132XP-12     ✓                 ✓
N7K-M132XP-12L    ✓                 ✓
N7K-M148GT-11     ✓                 -
N7K-M148GT-11L    ✓                 -
N7K-M148GS-11     ✓                 -
N7K-M148GS-11L    ✓                 -
N7K-M108X2-12L    ✓                 ✓
N7K-F132XP-15     ✓                 ✓
[Figure: vPC peer-link placement options - M-Series mode (peer-link on M-Series modules), mixed chassis mode (peer-link on M-Series modules, or on F-Series modules (*)), F-Series mode (peer-link on F-Series modules)]

Recommendation : for mixed chassis mode (F1/M1) with vPC peer-link on F1 ports, use at least 2 M1 LC. This will provide resiliency for L3 features (FHRP, SVI).

(*) : the command "peer-gateway exclude-vlan <vlan list>" is needed for the backup routing path over the vPC peer-link
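A sketch of the knob referenced above; the domain ID and VLAN are illustrative, and the VLAN list should match the transit VLAN(s) used for the backup routing path:

N7K-1(config)# vpc domain 20
N7K-1(config-vpc-domain)# peer-gateway exclude-vlan 99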


NX-OS 5.1.3 introduces new behavior for handling vPC peer-gateway in mixed chassis mode (M1/F1) :

Topology with M1 peer-link : IP/ARP packets destined to the remote Active IP/MAC get routed locally

Topology with F1 peer-link : IP/ARP packets destined to the remote Active IP/MAC use the tunneling mechanism

Mode                            Peer-gateway behavior
M-Series                        Knob not required - classic behavior of peer-gateway
F-Series                        Peer-gateway not required
Mixed chassis (M1 peer-link)    Knob not required - classic behavior of peer-gateway
Mixed chassis (F1 peer-link)    Knob required for transit path/VLAN - IP/ARP tunneling over peer link
[Figure: mixed chassis topologies - vPC primary S1 and vPC secondary S2 with the vPC peer-link on F1 ports, on M1 ports, or on mixed M1/F1 ports]

vPC basic components

Hardware Specific Considerations

vPC forwarding rules

vPC enhancements

L3 and vPC

Adding FEX

Summary designs

With dual-active scenarios:

MAC address synchronization is interrupted

IGMP synchronization is interrupted

There is a 50% likelihood that unicast traffic is flooded and that multicast traffic is dropped

[Figure: 5k01/5k02 dual-active sequence - 1: MAC address and IGMP synchronization are interrupted; 2: host subscribes to G1; 3: IGMP report for G1 is dropped; 4: IGMP sync is lost]

There will be 2 primary switches sending independent BPDUs

vPC port-channels on upstream/downstream switches will be error-disabled by 'EtherChannel Misconfiguration Guard' after ~90 seconds

If a Nexus 7000/5000 is on the other end of the vPC, no action from STP, as the 7000/5000 do not support EtherChannel Guard
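On a Catalyst upstream switch running IOS, the resulting err-disable can be cleared automatically; a sketch, assuming standard IOS err-disable recovery syntax (the interval value is illustrative):

Switch(config)# errdisable recovery cause channel-misconfig
Switch(config)# errdisable recovery interval 300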


When the peer-link is disconnected:

vPC secondary detects the primary switch is alive through the peer keepalive link

The secondary vPC peer switch suspends all its vPC member ports in order to avoid traffic drops

KEEP PEER KEEPALIVE AND PEER-LINKS SEPARATE

[Figure: 5k01 (vPC primary) and 5k02 (vPC secondary) - on peer-link failure the secondary suspends its vPC member ports on Po10]

vPC supports standard 802.3ad port channels from upstream and/or downstream devices

Recommended to enable LACP: "channel-group 201 mode active"

dca-n7k2-vdc2# sh run interface port-channel 201
version 4.1(5)
interface port-channel201
  switchport mode trunk
  switchport trunk allowed vlan 100-105

dc11-5020-1# show running int port-channel 201
version 4.1(3)N1(1)
interface port-channel201
  switchport mode trunk
  switchport trunk native vlan 100
  switchport trunk allowed vlan 100-105
  vpc 201

dc11-5020-2# show running int port-channel 201
version 4.1(3)N1(1)
interface port-channel201
  switchport mode trunk
  switchport trunk native vlan 100
  switchport trunk allowed vlan 100-105
  vpc 201
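The physical member interfaces behind these port channels are bound with LACP active mode; a sketch with illustrative interface numbers:

dc11-5020-1(config)# interface ethernet 1/1-2
dc11-5020-1(config-if-range)# switchport mode trunk
dc11-5020-1(config-if-range)# channel-group 201 mode active

With "mode active" on at least one side, LACP negotiates the bundle instead of forcing it on unconditionally.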


vPC forwards only on locally connected members of the port channel if any exist (same principle as VSS)

Multiple topology choices


Square

Full Mesh

dca-n7k2-vdc2# sh run interface port-channel 201
version 4.1(5)
interface port-channel201
  switchport mode trunk
  switchport trunk native vlan 100
  switchport trunk allowed vlan 100-105
  vpc 201

dc11-5020-1# show running int port-channel 201
version 4.1(3)N1(1)
interface port-channel201
  switchport mode trunk
  switchport trunk native vlan 100
  switchport trunk allowed vlan 100-105
  vpc 201

dc11-5020-2# show running int port-channel 201
version 4.1(3)N1(1)
interface port-channel201
  switchport mode trunk
  switchport trunk native vlan 100
  switchport trunk allowed vlan 100-105
  vpc 201


vPC maintains layer 2 topology synchronization via CFS

Copies of flooded frames are sent across the vPC-Link in case any single homed devices are attached

Frames received on the vPC-Link are not forwarded out vPC ports

1. Host MAC_A sends packet to MAC_C

2. FEX runs hash algorithm to select one fabric uplink

3. N5K-1 learns MAC_A and flood packets to all ports (in that VLAN). A copy of the packet is sent across the peer link

4. N5K-2 floods the packet to any port in the VLAN except the vPC member ports to prevent duplicated packets

5. N7K-1 and N7K-2 repeat the same forwarding logic

6. N5K-1 updates the MAC address learned on the vPC port on N5K-2 via CFS

[Figure: flooding example - MAC_A behind a FEX on N5K-1, MAC_C upstream; numbered callouts 1-6 correspond to the steps above, with CFS synchronizing the MAC table between N5K-1 and N5K-2]

[Figure: known unicast forwarding - MAC_C sends to MAC_A via N5K-2; numbered callouts 1-3 correspond to the steps below]

Traffic is forwarded if destination address is known (both switches MAC address tables populated)

Always forward via a locally attached member of a vPC if it exists

1. Host MAC_C sends packet to MAC_A

2. N7K-2 forwards frame based on learned MAC address

3. N5K-2 forwards frame based on learned MAC address

N5K-1# sh mac-address-table vlan 101
VLAN     MAC Address       Type    Age       Port
---------+-----------------+-------+---------+-----
101      001b.0cdd.387f    dynamic 0         Po30
101      0023.ac64.dda5    dynamic 30        Po201
Total MAC Addresses: 4

N5K-2# sh mac-address-table vlan 101
VLAN     MAC Address       Type    Age       Port
---------+-----------------+-------+---------+-----
101      001b.0cdd.387f    dynamic 0         Po30
101      0023.ac64.dda5    dynamic 30        Po201
Total MAC Addresses: 4
[Figure: forwarding over the peer-link after local member failure - MAC_C sends to MAC_A; the local vPC member on N5K-2 is down, so traffic crosses the vPC peer-link]

On loss of all of the locally attached members of the vPC, the MAC address table is updated to forward frames for the vPC across the vPC Peer Link

Note: Po20 is the vpc peer-link

N5K-1# sh mac-address-table vlan 101
VLAN     MAC Address       Type    Age       Port
---------+-----------------+-------+---------+-----
101      001b.0cdd.387f    dynamic 0         Po30
101      0023.ac64.dda5    dynamic 30        Po201
Total MAC Addresses: 4

N5K-2# sh mac-address-table vlan 101
VLAN     MAC Address       Type    Age       Port
---------+-----------------+-------+---------+-----
101      001b.0cdd.387f    dynamic 0         Po20
101      0023.ac64.dda5    dynamic 30        Po201
Total MAC Addresses: 4

Both switches in the vPC Domain maintain distinct control planes

CFS provides for protocol state synchronization between both peers (MAC Address table, IGMP state, …)

System configuration must also be kept in sync

Currently there are 2 options to keep configuration consistent:

a manual process with an automated consistency check to ensure correct network behavior

config-sync

Two types of interface consistency checks

Type 1 – will put interfaces into suspend state to prevent invalid forwarding of packets

Type 2 – error messages to indicate potential for undesired forwarding behavior


Type 1 Consistency Checks are intended to prevent network failures

Incorrect forwarding of traffic

Physical network incompatibilities

vPC will be suspended


dc11-5020-1# sh run int po 201
interface port-channel201
  switchport mode trunk
  switchport trunk native vlan 100
  switchport trunk allowed vlan 100-105
  vpc 201

dc11-5020-2# sh run int po 201
interface port-channel201
  switchport mode trunk
  switchport trunk native vlan 100
  switchport trunk allowed vlan 100-105
  vpc 201
  spanning-tree guard root

dc11-5020-2# show vpc brief
Legend:
(*) - local vPC is down, forwarding via vPC peer-link
<snip>

vPC status
----------------------------------------------------------------------------
id     Port        Status Consistency Reason                     Active vlans
------ ----------- ------ ----------- -------------------------- -----------
201    Po201       down   failed      vPC type-1 configuration
                                      incompatible - STP
                                      interface port guard -
                                      Root or loop guard
                                      inconsistent



Type 2 Consistency Checks are intended to prevent undesired forwarding

vPC will be modified in certain cases (e.g. VLAN mismatch)

dc11-5020-1# sh run int po 201
version 4.1(3)N1(1)
interface port-channel201
  switchport mode trunk
  switchport trunk native vlan 100
  switchport trunk allowed vlan 100-105
  vpc 201

dc11-5020-2# sh run int po 201
version 4.1(3)N1(1)
interface port-channel201
  switchport mode trunk
  switchport trunk native vlan 105
  switchport trunk allowed vlan 100-104
  vpc 201

dc11-5020-1# show vpc brief

vPC status
----------------------------------------------------------------------------
id     Port        Status Consistency Reason                     Active vlans
------ ----------- ------ ----------- -------------------------- -----------
201    Po201       up     success     success                    100-104

2009 May 17 21:56:28 dc11-5020-1 %ETHPORT-5-IF_ERROR_VLANS_SUSPENDED: VLANs 105 on Interface port- channel201 are being suspended. (Reason: Vlan is not configured on remote vPC interface)

c-nexus5010-1# show vpc consistency-parameters global

    Legend:
        Type 1 : vPC will be suspended in case of mismatch

Name                        Type  Local Value            Peer Value
-------------               ----  ---------------------- -----------------------
QoS                         2     ([], [3], [], [], [],  ([], [3], [], [], [],
                                  [])                    [])
Network QoS (MTU)           2     (1538, 2240, 0, 0, 0,  (1538, 2240, 0, 0, 0,
                                  0)                     0)
Network Qos (Pause)         2     (F, T, F, F, F, F)     (F, T, F, F, F, F)
STP Mode                    1     Rapid-PVST             Rapid-PVST
STP Disabled                1     None                   None
STP MST Region Name         1     ""                     ""
STP MST Region Revision     1     0                      0
STP MST Region Instance to  1
 VLAN Mapping
STP Loopguard               1     Disabled               Disabled
STP Bridge Assurance        1     Enabled                Enabled
STP Port Type, Edge         1     Normal, Disabled,      Normal, Disabled,
Allowed VLANs               -     1,50                   1,50
Local suspended VLANs       -     -                      -

Global QoS Parameters need to be consistent

Global Spanning Tree Parameters need to be consistent


Don't forget to keep global configuration in sync

Any configuration that could cause an error in forwarding (e.g. loop) will disable all affected interfaces

As an example, if you make a change to an MST region you must make it on 'both' peers

Solution: define MST region mappings from the very beginning of the deployment, for ALL VLANs, the ones that exist as well as the ones that have not yet been created

Defining a region mapping is orthogonal to creating a VLAN
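A sketch of pre-provisioning the region mapping for all VLANs up front (the region name, revision, and instance layout are illustrative):

N5K-1(config)# spanning-tree mst configuration
N5K-1(config-mst)# name dc-region
N5K-1(config-mst)# revision 1
N5K-1(config-mst)# instance 1 vlan 1-4094
N5K-1(config-mst)# exit

Because every VLAN is already mapped, creating a VLAN later never changes the region digest, so the vPC peers stay consistent.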

[Figure: MST region mismatch - one vPC peer maps vlans 1-5, 12 and the other vlans 1-5, 10]

vPC basic components

Hardware Specific Considerations

vPC forwarding rules

vPC enhancements

L3 and vPC

Adding FEX

Summary designs

Inconsistency                          Type     Impact  Recommendation                              New Enhancements
VLAN to MST Region mapping             Global   1       Pre-provision and MAP all VLANs on the      Config Sync (5.0(2)N1(1) on N5K,
mismatch; STP global settings                           MST region; perform STP operations per      Freetown for N7K) & Graceful
(BA, Loop Guard, Root Guard)                            port; operate change during maintenance     Conflict Resolution (CSCtf84865,
                                                        window; leverage graceful conflict          N7K - 4.2(8) & 5.2,
                                                        resolution                                  N5K - 5.0(2)N2(1))
Spanning-tree per interface            Per-vPC  1       Operate change during maintenance
settings, switchport type (trunk                        window and/or leverage graceful
versus access), port-channel mode...                    conflict resolution
Quality of Service configuration       Global   2       Minimum disruption
VLANs configured on vPC                Per-vPC  2       Minimum disruption

tc-nexus5010-1# show vpc consistency-parameters global

Name                        Type  Local Value            Peer Value
-------------               ----  ---------------------- -----------------------
QoS                         2     ([], [3], [], [], [],  ([], [3], [], [], [],
                                  [])                    [])
Network QoS (MTU)           2     (1538, 2240, 0, 0, 0,  (1538, 2240, 0, 0, 0,
                                  0)                     0)
Network Qos (Pause)         2     (F, T, F, F, F, F)     (F, T, F, F, F, F)
Input Queuing (Bandwidth)   2     (50, 50, 0, 0, 0, 0)   (50, 50, 0, 0, 0, 0)
Input Queuing (Absolute     2     (F, F, F, F, F, F)     (F, F, F, F, F, F)
 Priority)
Output Queuing (Bandwidth)  2     (50, 50, 0, 0, 0, 0)   (50, 50, 0, 0, 0, 0)
Output Queuing (Absolute    2     (F, F, F, F, F, F)     (F, F, F, F, F, F)
 Priority)


With Graceful Resolution only ports on the vPC secondary are "suspended" if a Type-1 global inconsistency occurs

This limits the impact of configuration changes.

switch(config)# vpc domain 10
switch(config-vpc-domain)# [no] graceful consistency-check

Requires 5.0(2)N2(1) on the Nexus 5k

Requires 5.2 on the Nexus 7k

[Figure: graceful conflict resolution - on an MST region mismatch (vlans 1-5, 12 versus vlans 1-5, 10) only the vPC secondary suspends its ports; requires 5.2 on the N7K and 5.0(2)N2(1) on the N5K]
5.2 5.0(2)N2(1)   Check whether STP is enabled or disabled on per-VLAN basis. VLANs that
5.2 5.0(2)N2(1)   Check whether STP is enabled or disabled on per-VLAN basis. VLANs that
5.2 5.0(2)N2(1)   Check whether STP is enabled or disabled on per-VLAN basis. VLANs that
5.2 5.0(2)N2(1)   Check whether STP is enabled or disabled on per-VLAN basis. VLANs that
• STP enablement is checked on a per-VLAN basis.
• VLANs whose status is mismatched are suspended on both switches.
• The rest of the VLANs are not affected. For example, disabling STP on VLAN 5 suspends only VLAN 5; prior to this change, all VLANs were affected.
• Config-sync allows administrators to make configuration changes on one switch and have the system automatically synchronize them to the peer.
• This eliminates user errors and reduces the administrative overhead of having to configure both vPC members separately.
• Config-sync and Graceful Conflict Resolution are complementary features.
• Config-sync traffic is carried over the peer-keepalive link.
(Figure: adding "vlan 12" on one peer via config-sync updates the mst region from vlans 1-5 to vlans 1-5, 12 on both vPC peers.)
Which configurations are synchronized?
• Global configurations: VLANs, ACLs, STP configuration, QoS
• Interface-level configurations: Ethernet interfaces, port-channel interfaces, vPC interfaces

Which configurations are not synchronized?
• Enabling a "feature"
• The vPC domain configuration
• FCoE configuration
N5000-1#
feature vpc
vpc domain 10
  peer-keepalive destination 10.29.170.8

N5000-1# sh run switch-profile
switch-profile Apple
  sync-peers destination 10.29.170.8

N5000-2#
feature vpc
vpc domain 10
  peer-keepalive destination 10.29.170.7

N5000-2# sh run switch-profile
switch-profile Apple
  sync-peers destination 10.29.170.7
N5000-1# config sync
N5000-1(config-sync)# switch-profile Apple
N5000-1(config-sync-sp)# int ethernet 100/1/3
N5000-1(config-sync-sp-if)# switchport mode trunk
N5000-1(config-sync-sp-if)# verify
Verify Successful
NOTE: verify does not push the config to the peer; the user must issue "commit" for the sync to take place.

N5000-1# config sync
N5000-1(config-sync)# switch-profile Apple
N5000-1(config-sync-sp)# commit
Commit Successful
If the sync fails, the config remains in the BUFFER.

N5000-1# sh run switch-profile
interface ethernet 100/1/3
  switchport mode trunk

N5000-2# sh run switch-profile
interface ethernet 100/1/3
  switchport mode trunk
• Configuration is stored in a buffer until commit is applied.
• The user can add, delete and move buffered configuration.
• Once the config has been pushed via commit, it no longer shows up in the buffer (it shows up in "show running-config switch-profile X").
• If the commit fails due to a mutex check or other reasons, the failed configuration still shows in the buffer; you have to explicitly remove it to continue.

N5K-1(config-sync-sp-if)# sh switch-profile A buffer
-----------------------------------------------------
Seq-no  Command
-----------------------------------------------------
1       interface Ethernet100/1/9
1.1       switchport mode trunk
1.2       switchport trunk allowed vlan 5-10
2       interface Ethernet100/1/10
2.1       switchport mode access

N5K-1(config-sync-sp)# ?
buffer-delete  Delete buffered command(s)
buffer-move    Move buffered command(s)

N5K-1(config-sync-sp)# buffer-delete 1
N5K-1(config-sync-sp)# sh switch-profile A buffer
-----------------------------------------------------
Seq-no  Command
-----------------------------------------------------
2       interface Ethernet100/1/10
2.1       switchport mode access
config-t area (this portion is not synchronized):
interface Ethernet1/11
  fex associate 100
  switchport mode fex-fabric
  channel-group 100

switch-profile area (this portion is synchronized):
interface Ethernet1/11
  shut / no shut

• A port-channel may consist of port Ethernet 1/1 on n5k01 and Ethernet 1/2 on n5k02.
• FEX active/active has the same FEX configured on both N5ks, so pre-provisioning has to be configured identically.
• If one vPC peer needs to be disconnected completely from the vPC domain, you can still operate the remaining one.
• For this you need to leverage the commands "reload restore" and "auto-recovery".
• Reload restore deals with the split-brain scenario, allowing a vPC peer to bring up new vPC ports even after a reload.
• Auto-recovery deals with the sequential loss of the peer-link first and the peer-keepalive second, allowing the vPC secondary to bring up the vPC ports (which were down previously).
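A minimal sketch of how the auto-recovery knob above might be enabled (assumed NX-OS syntax; the domain number and 240-second delay are illustrative values, and in later releases auto-recovery supersedes "reload restore"):

```
! on both vPC peers (sketch, assumed syntax)
vpc domain 10
  ! allow this peer to assume the primary role and bring up its vPCs
  ! when the peer-link and then the peer-keepalive are lost, or when
  ! it is the only peer to come back after a full outage
  auto-recovery reload-delay 240
```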
• vPC needs to be able to talk to the peer (over the peer-link) before bringing up vPC port-channels:
• Negotiate the LACP/STP operating roles for the chassis.
• Wait for the per-port peer parameters and handshake to bring up the vPC ports.
• A peer-parameter consistency check is performed on each vPC bringup; only after it passes are the vPC port-channels brought up.
• What if, after a full DC outage (both Nexus down), only one switch comes up? vPC will not bring up vPCs if, after a datacenter outage, only one vPC peer comes back up.
(Figures: three-switch scenarios with Switch1, Switch2 and Switch3. When one vPC peer is isolated, existing vPCs are brought up on the remaining peer; when a new vPC member port is added, the port comes up.)
(Figure: S1 primary and S2 secondary joined by the vPC peer-link and keepalive, with po1 as vPC 1 to a downstream device.)
• Peer-link down and keepalive working: the secondary shuts its vPCs. If the primary then fails, po1 is completely shut.
• Peer-link down and keepalive down: after 3 consecutive keepalive timeouts, the secondary changes role to operational primary and brings up the vPCs.
• STP for vPCs is controlled by the vPC operationally primary switch, and only that device sends out BPDUs on STP designated ports.
• This happens irrespective of where the designated STP root is located.
• The vPC operationally secondary device proxies STP BPDU messages from access switches toward the primary vPC peer.
(Figure: primary and secondary vPC peers; only the primary sources BPDUs on the vPCs.)
(Figure: SW1 as vPC primary and primary STP root, SW2 as vPC secondary and secondary root, at the L3/L2 boundary with ECMP uplinks; SW3 and SW4 attached via vPC1 and vPC2; hosts MAC_A and MAC_B below.)
• The vPC peer-link is a regular STP port.
• The vPC primary switch sources and controls STP for the vPCs.
• The secondary vPC device does NOT source STP BPDUs on symmetrical vPCs.
• Assume the following topology with Bridge Assurance enabled on the vPC.
• If the primary fails over, the secondary needs to start sending BPDUs.
• If the primary was also the STP root, the secondary also has to take over the root role.
• If this process takes too long, the uplink port on 5k02 may go into BA_Inconsistent state.
• Better not to use Bridge Assurance with vPC.
• Bridge Assurance on the peer-link is fine (and is the default).
(Figure: 7k01 as primary/root and 7k02 as the secondary that becomes primary and root; 5k01 received the BPDUs prior to the failure, and 5k02's uplink goes BA Inconsistent.)
In Peer-Switch mode the bridge ID comes from the vPC system-mac, as opposed to the local MAC in normal mode. Primary (left) and secondary (right):

left# sh span vlan 101
VLAN0101
  Spanning tree enabled protocol rstp
  Root ID    Priority  8293
             Address   0023.04ee.be01
             This bridge is the root
  Bridge ID  Priority  8293  (priority 8192)
             Address   0023.04ee.be01

Interface        Role Sts Cost      Prio.Nbr Type
---------------- ---- --- --------- -------- ---------------
Po1              Desg FWD 1         128.4096 (vPC) P2p
Po100            Root FWD 2         128.4195 (vPC peer-link)

left# sh vpc role | i mac
vPC system-mac        : 00:23:04:ee:be:01
vPC local system-mac  : 00:1b:54:c2:42:43

right# sh span vlan 101
VLAN0101
  Spanning tree enabled protocol rstp
  Root ID    Priority  8293
             Address   0023.04ee.be01
             This bridge is the root
  Bridge ID  Priority  8293  (priority 8192)
             Address   0023.04ee.be01

Interface        Role Sts Cost      Prio.Nbr Type
---------------- ---- --- --------- -------- ---------------
Po1              Desg FWD 1         128.4096 (vPC) P2p
Po100            Desg FWD 2         128.4195 (vPC peer-link)
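Output like the above assumes Peer-Switch is enabled; a minimal sketch of that configuration (assumed NX-OS syntax, with an illustrative domain number and priority that must match on both peers):

```
! on both vPC peers (sketch, assumed syntax)
vpc domain 10
  ! present a single bridge ID (the vPC system-mac) to downstream switches
  peer-switch
! identical spanning-tree priority on both peers is required for peer-switch
spanning-tree vlan 101 priority 8192
```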
• BA is enabled by default on the peer-link (and recommended to remain enabled); it is not recommended on vPCs unless the Peer-Switch feature is used.
• Without Peer-Switch, BA should be kept only on the peer-link (no BA/Loop Guard on vPCs).
• Dispute is enabled by default (for both RSTP and MST on vPC).
• UDLD [normal mode] is recommended to take bad links out of channels.
• BA + UDLD + Dispute on all inter-switch links when using Peer-Switch, provided all switches support this (Nexus 7000/5000).
• By default on the Nexus 5x00 series, LACP sets a port to "I state" (individual) if it does not receive an LACP PDU from the peer. This behavior is different on the Nexus 7000 series, where the default is to suspend a port that doesn't receive LACP PDUs.
• For server-facing port-channels it is better to allow LACP ports to revert to I state if the server doesn't send LACP PDUs. An I-state port can operate like a regular Spanning-Tree port, and this allows immediate server connectivity at boot, before the full LACP negotiation has taken place.
• For network-facing ports, allowing ports to revert to I state creates additional Spanning-Tree state without any real benefit.
• This behavior can be configured on a per-port-channel basis with "[no] lacp suspend-individual" (the equivalent of the Catalyst IOS command "port-channel standalone-disable").
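As a sketch, the per-port-channel knob described above might be applied as follows (assumed NX-OS syntax; the port-channel numbers are illustrative):

```
! server-facing bundle: let members fall back to individual (I) state
! so a booting server has connectivity before LACP negotiates
interface port-channel 10
  no lacp suspend-individual

! network-facing bundle: keep the default and suspend non-negotiating members
interface port-channel 20
  lacp suspend-individual
```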
• IGMP snooping shares the snooped reports with the peer vPC switch to help with multicast forwarding.
• Forwarding of IGMP protocol packets is tweaked so that an IGMP report received on one vPC switch is also forwarded to the vPC peer. Thus the multicast forwarding state remains in sync on both vPC switches.
• Do NOT disable IGMP snooping!
• If you need to support firewalls / clusters: use static IGMP entries OR create an IGMP querier!
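A sketch of the querier option mentioned above (assumed NX-OS syntax; the VLAN number and source address are illustrative):

```
! enable an IGMP snooping querier on a VLAN that has no multicast
! router, so membership reports keep the snooping tables populated
vlan configuration 100
  ip igmp snooping querier 10.1.100.254
```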
• vPC maintains dual active control planes, and STP still runs on both switches.
• IGMP join/leave messages received on one peer are forwarded to the other peer via the peer-link.
• IP multicast packets are sent to the host through the local port.
• Non-IP multicast and broadcast packets are flooded.
(Figure: vPC primary and secondary exchanging IGMP join/leave over the peer-link.)
• So is the multicast traffic going to the peer-link? Yes, but duplicates are avoided by the vPC loop-prevention technique, which should rather be called "duplicate prevention".
• And how about orphan ports? Orphan ports receive traffic because the multicast traffic is always sent over the peer-link.
(Figure: N7k01/N7k02 vPC pair with links 1-4 down to an N5k01/N5k02 pair.)
• Assuming that there are no orphan ports, it is possible to keep multicast traffic from crossing the peer-link with the command:
• no ip igmp snooping mrouter vpc-peer-link (Nexus 5k)
• ip igmp snooping vpc peer-link-exclude (hidden command on the Nexus 7k, not supported)
• The vPC peer-link is considered an mrouter port; therefore all multicast traffic is flooded over the peer-link.
• A CLI was introduced in 5.0(3)N1(1) to avoid that: with it, multicast traffic is sent to the vPC peer-link only when necessary, for example when there is a singly connected host.
• This improves multicast convergence time on peer-link down/up and switch reload.
• The CLI is not supported for the FEX dual-homed topology in 5.0(3)N1(1); the limitation will be removed in the upcoming release 5.0(3)N2(1).
(Figure: N5k-1 and N5k-2 performing IGMP group sync over the peer-link.)
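As a sketch, the Nexus 5k command quoted above would be applied globally on both peers, and only when no orphan ports exist:

```
! stop treating the vPC peer-link as an mrouter port so multicast
! is no longer flooded across it (N5K, 5.0(3)N1(1) and later)
configure terminal
  no ip igmp snooping mrouter vpc-peer-link
```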
• If the peer-link is lost, the vPC secondary is going to shut down the vPC member ports.
• For singly attached hosts, please see CSCtc49559 and the orphan-port "suspend" feature.
(Figure: vPC on the N7k pair with links 1-4 down to vPC on the N5k pair.)
(Figure: S1 primary and S2 secondary with peer-link and keepalive; CE-1 is a single-attached active/standby device on an orphan port.)
• Intended for devices that do not support port-channels; other devices should be dually connected by vPCs (the orphan-port CLI is available only on physical ports, not on port-channels).
• Configure the port of a singly attached device (like a firewall or load balancer) as an orphan port:

S1(config)# int eth 1/1
S1(config-if)# vpc orphan-ports suspend

S2(config)# int eth 1/1
S2(config-if)# vpc orphan-ports suspend

• When the vPC peer-link goes down, the vPC secondary peer device shuts all its vPC member ports as well as its orphan ports.
• vPC basic components
• Hardware Specific Considerations
• vPC forwarding rules
• vPC enhancements
• L3 and vPC
• Adding FEX
• Summary designs
• The hardware is programmed to forward frames sent to the FHRP MAC address on BOTH switches.
• vPC maintains dual active control planes, and STP still runs on both switches.
• The HSRP active process communicates the active MAC to its neighbor.
• Only the HSRP active process responds to ARP requests.
• The HSRP active MAC is populated into the L3 hardware forwarding tables, creating a local forwarding capability on the HSRP standby device.
• Consistent behavior for HSRP, VRRP and GLBP.
• No need to configure aggressive FHRP hello timers, as both switches are active.
(Figure: HSRP Active and HSRP Standby vPC peers, both forwarding.)
• It is recommended NOT to use HSRP link tracking in a vPC configuration.
• Reason: vPC will not forward a packet back out a vPC once it has crossed the peer-link, except in the case of a remote member-port failure.
• Use an L3 point-to-point link between the vPC peers to establish an L3 backup path to the core in case of uplink failure.
• A single point-to-point VLAN/SVI will suffice to establish an L3 neighbor.
(Figure: SVIs for VLAN 300 on both peers; VLANs 100, 200 and 300 carried between them.)
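A sketch of such a backup SVI (assumed NX-OS syntax; the VLAN number comes from the figure, while the addressing and OSPF process are illustrative, and "feature ospf" / "feature interface-vlan" are assumed enabled):

```
! dedicated point-to-point VLAN/SVI between the vPC peers,
! giving the IGP a backup path if a peer loses its uplinks
vlan 300
interface Vlan300
  ip address 10.0.0.1/30
  ip router ospf 1 area 0.0.0.0
  no shutdown
```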
• Non-RFC-compliant end hosts: a device is required to send packets to the MAC address returned in the ARP response (the HSRP virtual MAC). Some non-compliant devices instead use the MAC address of the sending device (the switch physical MAC). NAS devices (e.g. NetApp Fast-Path or EMC IP-Reflect) have been found to do this.
• vPC Peer-Gateway (NX-OS 4.2(1)) allows a vPC peer to respond to both the HSRP virtual MAC and the real MAC address of both itself and its peer.
• The "peer-gateway" command tells the vPC peer to respond to the physical MAC address of its peer.
(Figure: L3/L2 boundary with VLAN 100 and VLAN 200.)
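A sketch of enabling the feature described above (assumed NX-OS syntax; the domain number is illustrative, and the command goes on both peers):

```
! route frames whose destination MAC is the peer's physical router
! MAC locally, instead of bridging them over the peer-link
vpc domain 10
  peer-gateway
```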
• Not enabled by default.
• After the peer-link comes up, an ARP bulk sync is performed over CFSoE to the peer switch.
• Improves convergence for Layer 3 flows.

S1(config-vpc-domain)# ip arp synchronize
S2(config-vpc-domain)# ip arp synchronize

Note: CSCti06907 has been fixed.
(Figure: ARP synchronization process; the primary and secondary vPC peers both hold IP1/MAC1 on VLAN 100 and IP2/MAC2 on VLAN 200 in their ARP tables.)
Feature                    Function                                                                    Availability
-------------------------  --------------------------------------------------------------------------  ------------
VPC interaction with FHRP  Both active and standby peers function as gateway (HSRP, VRRP)              Available
Peer-gateway               L3 forwarding when the DMAC is the peer's MAC                               Available
vPC delay restore          Delay bringing up vPC ports                                                 Available
vPC exclude VLAN           CLI to specify SVI interfaces that won't be suspended when peer-link fails  Available
ARP synchronization        Synchronize ARP between the two peer switches                               Available
PIM pre-built SPT          Both N5ks join the source tree as PIM last-hop router                       Roadmap
PIM dual DR                Both N5ks can be DR when acting as the first-hop router                     Roadmap
• vPC basic components
• Hardware Specific Considerations
• vPC forwarding rules
• vPC enhancements
• L3 and vPC
• Adding FEX
• Summary designs
• FEX 2148T is supported starting from 4.1(3)N1(1).
• FEX 2248, 2232 and 2224 are supported from 4.2(1)N1(1).

Fairhaven
(Table: FEX topology support matrix across N7K NX-OS 5.1(1), N7K NX-OS 5.2 and future releases; entries are Y/N and active/active, with one item still on the radar.)
Nexus 5000 Topologies (Nexus 2248TP & 2232PP)

Straight-through:
• FCoE adapters supported on 10G N2K interfaces
• vPC supported with up to 2 x 8 links
• Redundancy model: dual switch with redundant fabric
• Provides isolation for storage topologies (SAN 'A' and 'B')
• Port-channel and pinning supported for fabric links

Dual-homed:
• Local EtherChannel with up to 8 links
• Redundancy model: single switch with dual 'supervisor' for fabric, data, control and management planes
• No SAN 'A' and 'B' isolation (VSAN isolation sufficient in the future?)
Nexus 7000 Topologies (Nexus 2248TP & 2232PP)

NX-OS 5.2, Nexus 2248TP & 2232PP

 Local EtherChannel with up to 8 links

 NIC teaming: TLB/ALB

 Fabric links supported on N7K-M132XP-12 & N7K-M132XP-12L

 Port channel only supported for fabric links

 Local port channel support on 2248 & 2232

 No support for DCB and FCoE (parent switch fabric ports not DCB capable yet)
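Because the Nexus 7000 supports only port-channel fabric links (no pinning), the FEX attachment sketch differs slightly from the Nexus 5000 case. A hypothetical NX-OS 5.2 fragment, with assumed FEX number and module/interface ranges:

```
! Hypothetical Nexus 7000 config - FEX fabric links must be a port channel
install feature-set fex
feature-set fex

fex 101
  description FEX101

! Fabric links on an N7K-M132XP-12 module
interface ethernet2/1-4
  switchport mode fex-fabric
  fex associate 101
  channel-group 101

interface port-channel101
  switchport mode fex-fabric
  fex associate 101
```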
Nexus 7000 - vPC - NX-OS 5.2

 MCEC EtherChannel with up to 16 links

 Redundancy model: dual switch (each switch supports redundant supervisors)

Nexus 5000 Fairhaven

 MCEC EtherChannel with up to 16 links

 Redundancy model: single switch with dual ‘supervisor’, fabric, line card, data, control & management planes
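The vPC building blocks above (peer keepalive, peer link, member ports) can be sketched as a minimal, hypothetical configuration mirrored on both peers; domain number, addresses, and interface numbers are assumptions for illustration:

```
! Hypothetical vPC peer config - mirror on both vPC peers
feature vpc
feature lacp

vpc domain 10
  peer-keepalive destination 10.0.0.2 source 10.0.0.1 vrf management

! Peer link
interface ethernet1/17-18
  channel-group 1 mode active
interface port-channel1
  switchport mode trunk
  vpc peer-link

! vPC member port toward a downstream switch or server
interface ethernet1/20
  channel-group 20 mode active
interface port-channel20
  switchport mode trunk
  vpc 20
```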

Nexus 2000 Straight-Through Deployment (24 FEX)

[Diagram: n5k01 and n5k02, each attaching FEXes straight-through (FEX100, FEX101, FEX102 … FEX120, FEX121, FEX122); max 24 FEX per switch (24 x 2); max 4/8 “fabric links” per FEX; max 24 FEX with Nexus 5500 = 768 ports; server attachment Active/Standby]

Cisco Nexus 2000 Straight-Through vPC (FEX 2248)

[Diagram: vPC primary and secondary peers connected by peer link and peer keepalive; FEX100 and FEX120 each attached straight-through with up to 4 fabric links; host vPC member ports on HIFs with up to 8 ports per port channel; up to 24 port channels per FEX]

Cisco Nexus 2000 Active-Active (FEX 2248)

[Diagram: FEX100 and FEX120 dual-homed, with up to 4 fabric links to each vPC peer; host port channels vPC 1 and vPC 2 on HIFs with up to 8 ports per port channel; up to 24 port channels per FEX]
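In the Active-Active topology the fabric port channel itself is a vPC, configured identically on both Nexus 5000 peers. A hypothetical sketch (FEX, vPC, and interface numbers are assumptions):

```
! Hypothetical config - identical on both vPC peers (dual-homed FEX)
fex 100
  pinning max-links 1

interface ethernet1/1-2
  switchport mode fex-fabric
  fex associate 100
  channel-group 100

! The fabric port channel is itself a vPC
interface port-channel100
  switchport mode fex-fabric
  fex associate 100
  vpc 100
```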

Cisco Nexus 2000 Straight-Through vPC (FEX 2232)

[Diagram: vPC primary and secondary peers connected by peer link and peer keepalive; FEX100 and FEX120 each attached straight-through with up to 8 fabric links; host vPC member ports on HIFs with up to 8 ports per port channel; up to 16 port channels per FEX]

 Compatible with FCoE if the server uses 2 uplinks

Cisco Nexus 2000 Active-Active (FEX 2232)

[Diagram: FEX100 and FEX120 dual-homed, with up to 8 fabric links to each vPC peer; host port channels vPC 1 and vPC 2 on HIFs with up to 8 ports per port channel; up to 16 port channels per FEX]

 Doesn’t support FCoE today
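For the straight-through vPC topology, the host port channel spans HIFs on two different FEXes, one behind each peer, and the member port configuration is mirrored. A hypothetical sketch (VLAN, vPC, and interface numbers are assumptions):

```
! Hypothetical host vPC across straight-through FEXes
! On n5k01 (HIF on FEX100):
interface ethernet100/1/1
  switchport access vlan 10
  channel-group 10 mode active
interface port-channel10
  vpc 10

! On n5k02 (HIF on FEX120):
interface ethernet120/1/1
  switchport access vlan 10
  channel-group 10 mode active
interface port-channel10
  vpc 10
```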
• In a Dual Tier vPC configuration FCoE traffic will NOT be load shared across both sets of fabric links

• SAN ‘A’ and ‘B’ isolation is maintained

• This may result in uneven sharing of traffic across the multiple fabric links: FCoE + LAN on one set of fabric links, LAN only on the other set of fabric links

• Need to plan for the aggregate traffic capacity

[Diagram: SAN A and SAN B fabrics; one set of fabric links carries LAN & SAN traffic, the other carries LAN traffic only]
• vPC basic components

• Hardware Specific Considerations

• vPC forwarding rules

• vPC enhancements