53-1004313-04
January 2018
© 2018, Extreme Networks, Inc. All Rights Reserved.
Extreme Networks and the Extreme Networks logo are trademarks or registered trademarks of Extreme Networks, Inc. in the United States and/or other
countries. All other names are the property of their respective owners. For additional information on Extreme Networks Trademarks please see
www.extremenetworks.com/company/legal/trademarks. Specifications and product availability are subject to change without notice.
© 2017, Brocade Communications Systems, Inc. All Rights Reserved.
Brocade, the B-wing symbol, and MyBrocade are registered trademarks of Brocade Communications Systems, Inc., in the United States and in other
countries. Other brands, product names, or service names that are trademarks of Brocade Communications Systems, Inc. are listed at www.brocade.com/en/legal/
brocade-Legal-intellectual-property/brocade-legal-trademarks.html. Other marks may belong to third parties.
Notice: This document is for informational purposes only and does not set forth any warranty, expressed or implied, concerning any equipment,
equipment feature, or service offered or to be offered by Brocade. Brocade reserves the right to make changes to this document at any time, without
notice, and assumes no responsibility for its use. This informational document describes features that may not be currently available. Contact a Brocade
sales office for information on feature and product availability. Export of technical data contained in this document may require an export license from the
United States government.
The authors and Brocade Communications Systems, Inc. assume no liability or responsibility to any person or entity with respect to the accuracy of this
document or any loss, cost, liability, or damages arising from the information contained herein or the computer programs that accompany it.
The product described by this document may contain open source software covered by the GNU General Public License or other open source license
agreements. To find out which open source software is included in Brocade products, view the licensing terms applicable to the open source software, and
obtain a copy of the programming source code, please visit http://www.brocade.com/support/oscd.
Contents

Terminology
Introduction
Design Considerations
    Tunnel Scale
    Tunnels × VLANs
    BGP-EVPN-Based L2 and L3 Extension Validated Scale
    BGP-EVPN-Based L2 Extension Validated Scale
References
Note that not all aspects of the Extreme IP fabric solution, such as automation practices, zero-touch provisioning, and monitoring, are
covered in this document; future versions of this document are planned to include them. The design practices documented here follow the
best-practice recommendations, but variations to the design are supported as well.
Target Audience
This document is written for Extreme system engineers and network architects who design, implement, and support data center
networks, and it is intended for experienced data center architects and network administrators/engineers. The reader must have
a good understanding of data center switching and routing features, as well as Multiprotocol BGP/MPLS VPN, in order to understand
multitenancy in VXLAN EVPN networks.
Document History
Date            Part Number      Description
December 2016   53-1004313-03    EVPN DCI with BGP-EVPN-based L2 and L3 extension through Spines. Providing Internet route
                                 reachability for tenant VRFs at TORs through a public VRF at the border leaf. Providing Internet
                                 route reachability for tenant VRFs at the DCI tier through a public VRF. Design considerations.
January 2018    53-1004313-04    Updated document to reflect Extreme's acquisition of Brocade's data center networking business.
This document describes network designs for interconnecting data center sites leveraging BGP EVPN. The intention of this Extreme
validated design document is to provide reference configurations and document the best practices for interconnecting data centers using
VDX switches with BGP EVPN.
There are two EVPN-based DCI deployment models detailed in this document:
• BGP-EVPN-based L2 extension
• BGP-EVPN-based L2 and L3 extension
Both of these models leverage VXLAN for efficient tunneling of traffic across a core network between data centers; they are differentiated
by how each data center "hands off" traffic to the core network, i.e., either at Layer 2 or at Layer 3.
The BGP-EVPN-based L2 and L3 model is targeted at interconnecting EVPN-based IP fabric data centers; whereas the EVPN-based
L2 model provides a more generic DCI solution with L2 VLAN extension from any type of data center deployment, e.g., VCS or a BGP
EVPN IP fabric. There are multiple design considerations for each; a brief summary follows, and details are discussed further in the
upcoming sections.
Comparison (BGP-EVPN-based L2 and L3 extension model vs. BGP-EVPN-based L2 extension model):
• L2 VLAN extension: Yes in both models (L2 EVPN control-plane learning between DCs vs. data-plane learning between the border leaf and DCI tier).
• Inter-VLAN routing: Yes in both models (asymmetric or symmetric routing with an L3 VNI vs. asymmetric or symmetric routing at the DCI tier).
• VLAN re-use: Yes vs. Limited (VLAN re-use between tenants and leafs, with VLAN-to-VNI mapping at the DC leaf only, vs. VLANs converging at the DCI tier and DC edge, e.g., the border leaf of an EVPN-based IP fabric).
• Control-plane segmentation (demarcation between DCs and the DCI): Not segmented vs. Segmented (the control plane is extended via the WAN and shared between data centers vs. extended via the WAN between DCI tiers but not shared between data centers; segmentation can be avoided with control-plane extension from the DCI tier to the leaf node).
• VXLAN tunnel scale: Tunnels span between the leafs of EVPN-based IP fabric DCs (many-to-many tunnel scale) vs. DCI-tier-to-DCI-tier tunnels (scale depends on the number of remote sites).
Spine Layer
The role of the spine is to provide interconnectivity between the leafs. Network endpoints do not connect to the spines. Since most policy
is implemented at the leafs, the major role of the spine is to participate in the control-plane and data-plane operations for traffic
forwarding between leaf switches. Some differentiating characteristics of spine nodes include:
• Individual nodes have Layer 3 connectivity to each physical leaf switch.
• Spine nodes are not physically or logically connected to each other.
Leaf Layer
The role of the leaf switch is to provide connectivity to the endpoints in the network. These endpoints include compute servers and
storage devices, as well as other networking devices like routers and switches, load balancers, firewalls, and any other networking
endpoint—physical or virtual. For network efficiency, policy enforcement, including security, traffic path selection, Quality of Service (QoS)
marking, traffic policing, shaping, and traffic redirection, is implemented on leaf switches. Some differentiating characteristics at the leaf
layer are:
• Server VLANs terminate at the leaf switches (Layer 2 from devices to leaf).
• Leaf switches can be deployed individually as a top-of-rack device or as a pair providing switch-level redundancy with active-
active vLAG connections to servers.
• L3 connectivity exists between the spine and leaf switches using L3 physical ports.
• Routing underlay: BGP is used to propagate IPv4/IPv6 routes with BGP neighbors formed from each leaf switch to each spine.
• Load balancing is achieved with L3 ECMP.
• As a best practice, leaf-to-spine point-to-point L3 links are configured as IP unnumbered interfaces or with /31 subnets to
conserve IP addresses and optimize hardware resources (a minimal configuration sketch follows this list).
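The following minimal sketch illustrates these leaf practices in Network OS CLI. It is an illustration only: the rbridge-id, interface, and all addresses are hypothetical, the AS numbers follow the topology described later in this guide, and exact syntax varies by Network OS release.

! Hypothetical leaf (rbridge-id 1, AS 64630) with a /31 point-to-point L3 link
! to a spine in AS 64610 and a loopback used as the VTEP source
interface FortyGigabitEthernet 1/0/49
 ip address 10.10.10.1/31
 no shutdown
!
rbridge-id 1
 interface loopback 1
  ip address 10.1.1.1/32
  no shutdown
 router bgp
  local-as 64630
  neighbor 10.10.10.0 remote-as 64610
  address-family ipv4 unicast
   ! advertise the VTEP loopback and enable L3 ECMP across the spines
   network 10.1.1.1/32
   maximum-paths 8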
Border Leaf
The role of the border leaf switches is to provide external connectivity to the data center site and access to associated services
such as firewalls, load balancers, and edge VPN routers. The border leaf switches, together with the edge racks housing these
common services, form the edge services PoD. Since all North-South traffic will pass through the border leaf switches, it is important to
account for the bandwidth requirements for both:
• Internet traffic (external access to/from the data center)
• Data Center Interconnect (DCI) traffic (traffic passing between interconnected data centers, e.g., backup)
The ratio of the aggregate bandwidth of the uplinks connecting to the spines (two-tier case) or super-spines (three-tier case) to the
aggregate bandwidth of the uplink connecting to the WAN edge routers determines the over-subscription ratio for traffic exiting the data
center site.
The figure above shows the positioning and connectivity of a border leaf switch pair in a two-tier topology: that is, border leaf switches are
connected to all spines in the DC PoD (same as standard leaf switches) and also have external-facing connections to the WAN edge. In
the case of a three-tier fabric topology, border leaf switches would be connected to the super-spines (third tier), providing external
connectivity for N data center PoDs. The border-leaf to spine/super-spine connections are strictly Layer 3 with a BGP EVPN underlay;
whereas the border-leaf to WAN connections can be either Layer 2 or Layer 3 or a combination of both depending on the requirements
and the DCI deployment model. The upcoming sections will focus on the DCI deployment model details.
To extend the EVPN control plane between sites, the EVPN address family is enabled for the eBGP multihop peering between border
leaf nodes. Continuing the example above and enabling the EVPN address family, the border leaf nodes will send EVPN routes from
their respective data centers to the remote data center; e.g., the border leaf from DC1 sends EVPN routes from DC1 to DC2 and vice
versa. The border leaf nodes then propagate the routes into their local data center. Depicted in the figure below, both data centers now
dynamically share routing information (i.e., IPv4 for VTEP reachability and EVPN) by extending the BGP control plane between sites.
While the control plane is extended over a separate network (e.g. third-party service provider), the internal EVPN routes are not
exchanged with the network providing the extension. That is, by establishing the BGP peering directly between border leaf nodes, BGP
update messages are exchanged directly between border leaf nodes only and not with the WAN edge routers. The WAN edge routers will
route the BGP control traffic only across the transport network. The route information exchanged between the border leaf and the WAN
edge is limited to the following:
• Border leaf router ID: used to establish the eBGP multihop neighborship.
• Leaf switch VTEP IPs: forwarding across the IP core network is based on the destination VTEP IP.
A configuration sketch of this peering follows.
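A minimal sketch of this peering on a border leaf is shown below; the loopback addresses, hop count, and rbridge-id are hypothetical, while the AS numbers follow the validated topology described later (Border-Leaf1/2 in AS 64680, Site2 in AS 64620), and exact syntax varies by Network OS release.

! Border-Leaf1 (AS 64680) peering with remote Border-Leaf3 (AS 64620) between
! loopbacks; the WAN edge only routes this control traffic and does not peer in EVPN
rbridge-id 10
 router bgp
  local-as 64680
  neighbor 10.2.2.3 remote-as 64620
  neighbor 10.2.2.3 ebgp-multihop 5
  neighbor 10.2.2.3 update-source loopback 2
  address-family l2vpn evpn
   ! enable the EVPN address family and preserve next hops end to end
   neighbor 10.2.2.3 activate
   neighbor 10.2.2.3 next-hop-unchanged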
Behavior/Core Functions
Multiple data center sites sharing a common BGP-EVPN control plane behave as a single logical IP fabric data center, enabling L2
VLAN extension and inter-VLAN routing between leaf switches at different sites.
Layer 2 Extension
Through the exchange of EVPN routes that contain VXLAN tunnel endpoint (VTEP) IP addresses between sites, leaf switches discover
remote leaf switch VTEP IP addresses (automatic VTEP discovery via EVPN Type 3 IMR). Leaf switches that share common VNIs will
dynamically create VXLAN tunnels between them using the discovered VTEP IP addresses.
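A minimal sketch of the leaf-side constructs that enable this dynamic tunnel creation is shown below, using the overlay-gateway and EVPN instance configurations referenced throughout this guide; the instance and gateway names, rbridge-id, and VLAN/VNI values are illustrative assumptions.

! EVPN instance (under rbridge mode): RD, RT, and the VNIs to be extended
rbridge-id 1
 evpn-instance evi1
  rd auto
  route-target both auto
  vni add 203
!
! Overlay gateway (global context): VTEP source, member rbridge, VLAN-VNI map
overlay-gateway gw1
 type layer2-extension
 ip interface loopback 1
 attach rbridge-id add 1
 map vlan 203 vni 203
 activate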
The figure below shows an example of tunnel formation from a leaf switch in DC1 to a leaf switch in DC2, providing Layer 2 VLAN
extension. Layer 2 traffic is "tunneled" by encapsulating it into an IP User Datagram Protocol packet with an additional VXLAN header.
The outer IP source and destination for tunneled traffic are the source and destination VXLAN tunnel endpoint (VTEP) IP addresses;
in this case, these belong to the leaf switches in DC1 and DC2, respectively. All transit routers forward the encapsulated Layer 2
traffic based on the outer IP header, and only the router configured with the destination VTEP decapsulates the packet to expose the
inner Layer 2 frame. With the
Layer 3 handoff deployment model, the border leaf nodes provide both control-plane extension through the exchange of BGP EVPN
routes and data-plane forwarding for IP traffic (including tunneled VXLAN traffic) between sites.
The figure below shows an example of tunnel formation between leaf switches in DC1 and DC2 over an IP/MPLS network. After VXLAN
tunnel formation between leaf switches, Layer 2 traffic will be tunneled between sites. A 5-step example for L2 forwarding is shown:
1. A host in data center 1 forwards Ethernet traffic to its directly attached leaf switch (e.g. known unicast or BUM traffic).
2. The leaf switch in data center 1 receives the L2 traffic, learns or refreshes the source MAC address (data-plane learning), and
looks up the destination MAC address. It then encapsulates the received Ethernet frame into an IP User Datagram Protocol packet
whose IP source/destination equal the source/destination VTEP IP addresses, adds a VXLAN header using automatic (1:1) or
user-defined VNI mapping, and forwards the traffic to the spine layer.
NOTE
The source MAC address learned by the leaf switch is shared within the data center using BGP EVPN update
messages. The border leaf exchanges the BGP update messages with remote DC2 via its border leaf nodes (control-
plane learning). BGP updates are shared directly between border leaf nodes via eBGP multihop peering; i.e., updates
are not shared or leaked from the border leaf to the WAN edge.
3. The following nodes in this example all perform forwarding based on the destination VTEP IP address of the encapsulated
VXLAN packet (from Step 2):
• DC 1 spine
• DC 1 border leaf
• WAN edge and IP/MPLS core
• DC2 border leaf
• DC2 spine
4. The DC2 destination leaf switch receives traffic with a destination IP address matching the local VTEP address, performs
decapsulation revealing the inner Ethernet frame, and forwards traffic in the destination VLAN over the L2 interface toward the
target host.
5. The destination host in DC2 receives L2 traffic from its directly attached leaf switch.
Inter-VLAN Routing
The Layer 3 deployment model supports both asymmetric and symmetric routing for inter-VLAN traffic. Symmetric routing is the
recommended approach for the L3 DCI deployment model to simplify the configuration requirements and efficiently use the resources
at the leaf layer.
• Asymmetric routing—Both source and destination VLANs and associated gateways are configured on ingress and egress leaf
switches. Traffic is routed between the source and destination VLAN by the ingress leaf and is then tunneled to the remote leaf
using the VNI that is mapped to the destination VLAN. The inner L2 frame is then decapsulated at the remote egress leaf and
forwarded in the destination VLAN.
• Symmetric routing—The destination VLAN and gateway are not configured on the ingress leaf switch, and a common VNI is
used for extension between racks. Remote prefixes are advertised within the BGP EVPN address family as reachable with a
next hop equal to the remote leaf VTEP IP address and a shared VNI to be used for tunneling traffic between local and remote
racks. When the same VLAN extension is not configured between two leaf nodes, the leaf switches do not exchange inclusive
multicast routes (Type 3 routes); in the symmetric case, the leaf switches instead exchange L3 prefixes (Type 5 routes, used for
automatic VTEP discovery) and form a VXLAN tunnel between them using a common VNI. A simplified example is given in the
following figure to illustrate the high-level steps for symmetric routing.
– The ingress leaf in DC 1 receives traffic from the VLAN 204 subnet, performs an L3 lookup for the destination subnet
(VLAN 201), and resolves the next hop to a remote VTEP in DC 2 with VNI 2001 to be used for transport (associated with
the source and destination leaf switches).
– VXLAN-encapsulated traffic is routed between DC1 and DC2, and the destination IP address is the DC2 leaf VTEP.
– The egress leaf in DC2 de-capsulates the VXLAN traffic, performs L3 lookup for the destination subnet, and via the
destination VLAN GW, resolves the destination ARP and forwards traffic accordingly at L2 to the target host in VLAN 201.
The control-plane capability of the border leaf is unique within the IP fabric: it does not filter BGP-EVPN routes based on route
targets (i.e., it advertises all routes to its neighbors, similar to a spine node), and it can also initiate and terminate tunnels like a
standard leaf switch. The specific configuration requirements are detailed in the validated design sections that follow.
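As a sketch of that behavior, the retain route-target all command (used in the configuration sections that follow) keeps all route targets under the BGP EVPN address family so the border leaf can pass routes on unfiltered; the rbridge-id shown is illustrative.

rbridge-id 10
 router bgp
  address-family l2vpn evpn
   ! pass on all EVPN routes regardless of locally imported RTs
   retain route-target all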
One of the requirements for the BGP-EVPN-based L2 and L3 extension model is that the control plane is shared between sites. This
model is best suited for deployments where the operational/administrative control is centralized between sites to allow for effective
control and configuration, e.g., ensuring consistent VLAN-to-VNI mapping in local and remote data centers.
The DCI tier leverages the same underlying concepts described for the border leaf nodes in the Layer 3 handoff model; that is, DCI tier
nodes share a common extended control plane between sites. The differentiator is that ingress traffic to the border leaf is strictly Layer 2,
and the DCI tier nodes perform VTEP functions for inter-site traffic. The use of a shared EVPN control plane between DCI tiers enables
efficient forwarding across an IP interconnect network in addition to the following:
• Layer 2 extension and Layer 3 VRF host routing
• Dynamic VXLAN tunnel discovery and establishment (between DCI tier nodes)
• BUM reduction with MAC address reachability exchange and ARP/ND suppression
• Conversational ARP/ND
• VXLAN head-end replication and single-pass efficient VXLAN routing
• Open standards and interoperability
Layer 2 Extension
Through the exchange of EVPN routes between DCI tier nodes, automatic VTEP discovery occurs (updates contain VTEP IP addresses).
DCI tier nodes sharing common VNIs will dynamically create VXLAN tunnels between them using the discovered VTEP IP addresses.
The following figure shows an example of tunnel formation between DCI tier nodes over an IP/MPLS network. After VXLAN tunnel
formation between DCI tier nodes, Layer 2 traffic will be tunneled between sites. A 5-step example for L2 forwarding is shown:
1. Data center 1 forwards an Ethernet frame to its local DCI tier node (e.g. known unicast or BUM traffic).
2. The DCI tier at data center 1 receives the L2 traffic, learns or refreshes the source MAC address (data-plane learning), and
looks up the destination MAC address. It then encapsulates the received Ethernet frame into an IP UDP packet whose IP source/
destination equal the source/destination VTEP IP addresses, adds a VXLAN header using automatic (1:1) or user-defined VNI
mapping, and forwards the traffic to the WAN edge.
NOTE
The source MAC address learned by the DCI tier is shared using MP BGP-EVPN routes with remote DCI tier nodes
(control-plane learning), and BGP updates are shared directly between DCI tier nodes via eBGP multihop peering (i.e.,
updates are not shared or leaked to the WAN edge).
3. The WAN edge receives encapsulated traffic and performs forwarding based on the outer IP header (e.g., simple L3 forwarding
or MPLS depending on the core network).
4. Traffic received at the remote DCI tier with a destination IP address matching the local VTEP address is decapsulated, revealing
the inner Ethernet frame, and is forwarded in the destination VLAN over the L2 interface connected to data center 2.
5. Data center 2 receives the Ethernet traffic from the DCI tier as L2 traffic and adds or refreshes the source MAC address in its
table (data-plane learning).
In short, DCI tier nodes perform data-plane learning over their local L2 interfaces and control-plane learning over their L3 interfaces for
remote MAC addresses, ARP, etc. The result is efficient forwarding by DCI tier nodes because remote MAC addresses and ARPs are
shared with remote DCI tier nodes, reducing the amount of BUM traffic over the core network.
Inter-VLAN Routing
The BGP-EVPN-based L2 extension deployment model is targeted at extending Layer 2 VLANs across a shared core network. For
cases where routing between VLANs is required, there are two ways to achieve it: asymmetric and symmetric routing. In asymmetric
routing, the packet is first routed inside the DC and then switched to the destination. Symmetric routing performs routing at the
gateway level using a common L3 VNI extension. When the individual data center control planes are separated by an L2 boundary
(i.e., DC to DCI tier), inter-VLAN traffic is routed asymmetrically; the DCI tier nodes then receive and transport traffic in a single
VLAN to the remote site. When the data center control planes are extended across sites without such a boundary, symmetric routing
is efficient.
VLAN Scoping/Multitenancy
Traffic between sites is tunneled using VXLAN encapsulation as described in the example above, and the VLAN to VXLAN VNI mapping
is configured at the DCI tier nodes. For traffic between sites, the separation is based on the VNI. That is, inter-site forwarding with this
deployment model will only occur for cases where the VNI is common between local and remote DCI tier nodes. Therefore, different
tenants at different sites can use overlapping VLANs provided they use unique VNIs for transport across the core network.
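As a sketch of this scoping, the overlay-gateway mappings below let two tenants re-use VLAN 10 at different sites while staying isolated, because their transport VNIs differ; the gateway names and VNI values are hypothetical.

! Site A DCI tier: tenant X re-uses VLAN 10, transported as VNI 10010
overlay-gateway dci-a
 type layer2-extension
 map vlan 10 vni 10010
!
! Site B DCI tier: tenant Y also uses VLAN 10 locally, transported as VNI 20010;
! since 10010 and 20010 are not common, no inter-site forwarding occurs between them
overlay-gateway dci-b
 type layer2-extension
 map vlan 10 vni 20010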
FIGURE 9 Topology
Topology Description
• In Data Center Site1, all leaf nodes are connected to four spine nodes (with IPv4 addresses configured on interfaces in /31
subnets) using IPv4 eBGP adjacencies, with all four spine nodes in the same AS 64610. Leaf 1 and Leaf 2 are single nodes, while
Leaf3-Leaf4, Leaf5-Leaf6, and Border-Leaf1-Border-Leaf2 are vLAG pairs. Leaf 1 is in AS 64630, Leaf 2 is in AS 64650,
Leaf 3-Leaf 4 are in AS 64640, Leaf5-Leaf 6 are in AS 64670, and Border-Leaf1-Border-Leaf2 are in AS 64680. ECMP is
achieved using multipath eBGP.
• In Data Center Site2, all leaf nodes are connected to four spine nodes (with IPv4 addresses configured on interfaces in /31
subnets) using IPv4 iBGP adjacencies, with the spine nodes acting as route reflectors. All nodes are in AS 64620. A peer group is
configured to establish the BGP adjacency, and ECMP is achieved using the BGP add-path capability. Border-Leaf3-Border-Leaf4
are a vLAG pair; all other leaf nodes (Leaf 7, Leaf 8, Leaf 9, and Leaf 10) are single nodes.
• Leaf-spine adjacencies are activated under L2VPN EVPN address-family on all leaf and spine switches. Leaf-spine adjacencies
are configured with next-hop-unchanged to advertise routes from EVPN peers to other EVPN peers without changing the next
hop.
• On spine switches, retain route-target all is configured under the EVPN address family to prevent RTs from being stripped as
routes pass from hop to hop. Because leaf switches compare RTs against the import RTs under their local EVPN instance before
installing routes, the RT advertised by each leaf node must be preserved before the route is reflected to other leaf nodes (see the
spine configuration sketch after this list).
• VTEP addresses (loopbacks) are advertised using the network command. Next-hop recursion is used for next-hop reachability
on Data Center Site2 since it runs iBGP, and redistribute connected is used on all spine nodes to provide next-hop reachability.
• Border-Leaf1 and Border-Leaf2 are connected to WAN edge1 and WAN edge2, respectively, using four 10G ports with ECMP
and LAG, as are Border-Leaf3 and Border-Leaf4 to WAN edge3 and WAN edge4. The border-leaf node pairs are connected to
their respective WAN edge node pairs (with IPv4 addresses configured on LAG interfaces in /31 subnets) using IPv4 eBGP
adjacencies, with all WAN edge nodes in the same AS 30614.
• L3 MPLS VPN adjacency is established between Site1 and Site2 WAN edge nodes.
• An eBGP multihop session is established between the border-leaf pair on Data Center Site1 and the border-leaf pair on Data
Center Site2. The multihop BGP adjacencies between the border-leaf pairs on DCS1 and DCS2 are activated under the EVPN
address family.
• Leaf-to-host interfaces are configured as active-active vLAGs (an aggregation of multiple physical links across multiple switches
in a single fabric, forming a single logical interface). The interfaces can be in access or trunk VLANs, with IPv4/IPv6 anycast
addresses configured to allow VM mobility within or across data center sites.
• The overlay gateway is configured in the global context on all leaf nodes (it applies to both nodes of a two-node vLAG pair) with
the type of overlay to be used, the VLAN-to-VNI mapping, VTEP membership, switch membership, and VXLAN monitoring
such as VLAN statistics and sFlow.
• An EVPN instance is configured under rbridge mode for each leaf with the RD, RT, and VNIs to be extended.
• The retain route-target all command is configured on border-leaf nodes so that EVPN routes are advertised between data center
sites without stripping RTs, allowing tunnels to form between leaf nodes in Site 1 and Site 2. With this approach, overlay-gateway
and EVPN instance configurations can be omitted on the border-leaf nodes, and in the case of symmetric routing, no VRF
configuration is needed on them; as a result, the border-leaf nodes do not form tunnels to other leaf nodes.
• If services have to be added on border-leaf nodes, those nodes must have tunnels; the needed VLAN-to-VNI mapping should be
added under the overlay-gateway configuration, along with an EVPN instance, on the border-leaf nodes.
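The spine configuration sketch referenced above follows; the AS numbers come from the Site1 description, while the rbridge-id and neighbor address are hypothetical.

! Site1 spine (AS 64610) peering with Leaf 1 (AS 64630)
rbridge-id 101
 router bgp
  local-as 64610
  neighbor 10.10.10.1 remote-as 64630
  address-family ipv4 unicast
   ! provide next-hop reachability for the leaf VTEP loopbacks
   redistribute connected
  address-family l2vpn evpn
   ! do not strip RTs when passing routes between leaf nodes
   retain route-target all
   neighbor 10.10.10.1 activate
   neighbor 10.10.10.1 next-hop-unchanged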
Hardware/Software Matrix
Role of Node    Chassis Name (Possible Chassis Types)             Minimum Software Version Required
Leaf            BR-VDX6940-144S, BR-VDX6740T, BR-VDX6740          Network OS 7.0 and later
Border leaf     BR-VDX6940-36Q, BR-VDX6940-144S                   Network OS 7.0 and later
Spine           BR-VDX8770-4/8, BR-VDX6940-36Q, BR-VDX6940-144S   Network OS 7.0 and later
DCI tier        BR-VDX6940-36Q, BR-VDX6940-144S, BR-VDX6740       Network OS 7.0 and later
WAN edge        MLXe-4/8/16/32                                    NetIron 5.9.00
Configuration Steps
The BGP-EVPN-based L2 and L3 extension deployment model is characterized by the following:
• Use of Layer 3 interfaces between the border leaf nodes and the WAN edge routers
• Layer 3 reachability between border leaf nodes in different data centers via the WAN edge routers (IP transport)
• BGP neighborship between border leaf nodes in different data centers (eBGP multihop) with the EVPN address family enabled
A configuration sketch follows.
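The sketch below illustrates the first two characteristics on Border-Leaf1: an L3 LAG toward WAN Edge1 and an IPv4 eBGP session to the WAN edge (AS 30614) that advertises only the loopback used for the multihop EVPN peering shown earlier. The port-channel number and all addresses are hypothetical.

! L3 port-channel (four 10G members, configured separately) toward WAN Edge1
interface Port-channel 20
 ip address 10.30.1.0/31
 no shutdown
!
rbridge-id 10
 router bgp
  local-as 64680
  neighbor 10.30.1.1 remote-as 30614
  address-family ipv4 unicast
   ! advertise only the loopback used for the multihop EVPN peering
   network 10.2.2.1/32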
BGP Configuration on Border-Leaf1 to spines (similar configuration is on Border-Leaf2 to spines with respective IP addresses).
Interface Configuration on Border-Leaf3 to Spine I (similar configuration is needed on interfaces to other spines from Border-Leaf3 and
on interfaces from Border-Leaf4 to Spines).
BGP Configuration on Border-Leaf3 to Spines (a similar configuration is on Border-Leaf4 to Spines with respective IP addresses).
Interface Configuration on Border-Leaf1 to WAN Edge1 (a similar configuration is needed on other ECMP ports and on ECMP ports
used in Border-Leaf3).
Interface Configuration on Border Leaf 2 to WAN Edge2 (a similar configuration is needed on Border-Leaf4).
BGP configuration on Border-Leaf1 to WAN Edge1 (a similar configuration is needed on other border-leaf nodes too).
Verification of eBGP neighborship from Border Leaf 1 to WAN Edge 1 can be done using the show ip bgp summary command as in
Border-Leaf1 to Spine verification.
WAN Edge 1 to Border Leaf 1 Interface Configuration (a similar configuration is needed on other interfaces connected to border-leaf
nodes and on other WAN edges too).
WAN Edge 1 to MPLS Core Interface Configuration (a similar configuration is needed on other WAN edges too. This interface will be
added into MPLS configuration).
BGP configuration on WAN Edge 1 (a similar configuration is needed on other WAN edges).
Verify that eBGP neighborship is established from Border Leaf 1 to WAN edge 1.
Full-mesh eBGP multihop configuration from Border-Leaf1 (Site 1) to Border-Leaf3 (Site 2) and Border-Leaf4 (Site 2) (a similar
configuration is needed on other border-leaf nodes).
Verification can be done similar to Border-Leaf1 BGP and BGP EVPN verification.
Server Configurations
Server 1 Bond interface configuration for CentOS server attached to Leaf 5 and Leaf 6 of Data Center Site1.
Server 2 interface configuration for CentOS VM attached to Leaf 8 of Data Center Site2.
NOTE
VLAN-to-VNI mapping can be done manually or automatically. If automatic mapping is enabled, the VNI-to-VLAN mapping is
1:1, i.e. VLAN 201 maps to VNI 201.
BGP and EVPN verification on Leaf 5 can be done similarly to the Border-Leaf1-to-spine verification (Leaf 6 can be verified using the same command).
vLAG-pair verification on Leaf 5 (Leaf 6 can be verified using the same command).
Anycast gateway verification on Leaf 5 (the same command can be used to verify on other leaf nodes).
Inclusive multicast route verification on Leaf 5 for VNI associated with VLAN 203 (the same command can be used to verify on other
nodes).
Tunnel status verification on Leaf 5 (the same command can be used to verify on other nodes).
Individual tunnel verification on Leaf 5 (the same command can be used to verify on other nodes).
VLAN verification on Leaf 5 and Leaf 6 for 203 (the same command can be used to verify on other nodes).
ARP verification on Leaf 5 (locally learnt ARP entries can be verified using this command).
ARP suppression verification on Leaf 8 (remote ARP learnt via BGP EVPN can be verified using show ip arp suppression-cache).
ARP verification on Leaf 8 (locally learnt ARP entries can be verified using this command).
ARP suppression verification on Leaf 5 (remote ARP learnt via BGP EVPN can be verified using show ip arp suppression-cache).
Local and remote MAC verification on Leaf 5 for VLAN 203 (the same command can be used to verify on other leaf nodes).
Server Configurations
Server 1 Bond interface configuration for CentOS server attached to Leaf 5 and Leaf 6 of Data Center Site1.
Server 2 interface configuration for CentOS VM attached to Leaf 8 of Data Center Site2.
Server 5 interface configuration for Windows VM attached to Leaf 3 and Leaf 4 of Data Center Site1.
Server 6 interface configuration for Windows VM attached to Leaf 7 of Data Center Site2.
Inclusive multicast route verification on Leaf 3 for VNI associated with VLAN 203 (the same command can be used to verify on Leaf 4
and Leaf 7).
Inclusive multicast route verification on Leaf 5 for VNI associated with VLAN 203 (the same command can be used to verify on Leaf 6
and Leaf 8).
VLAN verification on Leaf 3 for 203 (the same command can be used to verify on other nodes).
Tunnel status verification on Leaf 3 (the same command can be used to verify on other nodes).
ARP verification on Leaf 3 (Locally learnt ARP entries can be verified using this command).
ARP suppression verification on Leaf 3 (remote ARP learnt via BGP EVPN can be verified using show ip arp suppression-cache).
ARP verification on Leaf 5 (locally learnt ARP entries can be verified using this command).
ARP suppression verification on Leaf 5 (remote ARP learnt via BGP EVPN can be verified using show ip arp suppression-cache).
Server 5 to Server 6 traceroute traffic (connected to Leaf 3 and 4 and Leaf 7 with extended VNI 30003).
Server 1 to Server 2 traceroute traffic (connected to Leaf 5 and 6 and Leaf 8 with extended VNI 20003).
• Refer to sections "Configuration: Border Leaf to Spine Layer 3", "Border Leaf to WAN edge Layer 3", and "Border Leaf eBGP
Multihop for Border-Leaf and DCI Configurations".
Server 1 Bond interface configuration for CentOS server attached to Leaf 5 and Leaf 6 of Data Center Site1
Server 2 interface configuration for CentOS VM attached to Leaf 8 of Data Center Site2
Server 3 Bond interface configuration for CentOS server attached to Leaf 5 and Leaf 6 of Data Center Site1
Server 4 interface configuration for Windows VM attached to Leaf 8 of Data Center Site2
Inclusive-multicast route verification on Leaf 5 for VNI associated with VLAN 7000 (same command can be used to verify on Leaf 6
and Leaf 8)
VLAN verification on Leaf 5 for 7000 (same command can be used to verify on other nodes and for VLAN 7001)
ARP verification on Leaf 5 in VRF vrf4 (Locally learnt ARP entries can be verified using this command)
ARP suppression verification on Leaf 5 for VLAN 7001 (Remote ARP learnt via BGP EVPN can be verified using show ip arp
suppression-cache)
ARP verification on Leaf 5 in VRF vrf3 (Locally learnt ARP entries can be verified using this command)
ARP suppression verification on Leaf 5 for VLAN 7000 (Remote ARP learnt via BGP EVPN can be verified using show ip arp
suppression-cache)
Server 1 to Server 2 traceroute traffic (Connected to Leaf 5 & 6 and Leaf 8 with extended VNI 7001)
Server 3 to Server 4 traceroute traffic (Connected to Leaf 5 & 6 and Leaf 8 with extended VNI 7000)
Server Configurations
Server 1 bond interface configuration for CentOS server attached to Leaf 5 and Leaf 6 of Data Center Site1.
Server 3 interface configuration for CentOS VM attached to Leaf 8 of Data Center Site2.
Inclusive multicast route verification on Leaf 5 for VNI associated with VLAN 204 (the same command can be used to verify on Leaf 6
and Leaf 8).
VLAN verification on Leaf 5 and Leaf 6 for 204 (the same command can be used to verify on Leaf 8).
ARP verification on Leaf 5 (locally learnt ARP entries can be verified using this command).
ARP suppression verification on Leaf 8 (remote ARP learnt via BGP EVPN can be verified using show ip arp suppression-cache).
ARP verification on Leaf 8 (locally learnt ARP entries can be verified using this command).
ARP suppression verification on Leaf 5 (remote ARP learnt via BGP EVPN can be verified using show ip arp suppression-cache).
Local and remote MAC verification on Leaf 5 for VLAN 204 (the same command can be used to verify on other leaf nodes).
Conversational ARP verification on Leaf 5 by sending continuous traffic between Server 1 and Server 3.
Symmetric Routing
• VLAN 203 is configured on Data Center Site1 (Leaf 5 and 6), and VLAN 204 is configured on Data Center Site2 (Leaf 8).
• VRF vpn1 is configured on Leaf 5, Leaf 6, and Leaf 8 with the respective import/export route targets and with the common L3
VNI 2005 (see the sketch after this list).
• VNI 2005 does not need to be added under the EVPN instance, but its VLAN-to-VNI mapping is needed under the
overlay-gateway configuration.
• The VE interfaces for VLANs 203 and 204, as well as the VE for the L3 VNI VLAN, are configured under VRF vpn1.
• VRF address-family must be enabled under BGP configuration to advertise EVPN type 5 routes.
• Traffic between Leaf 5 and 6 and Leaf 8 is verified using traceroute from servers attached to the leaf nodes (between VLAN
203 and 204).
• Configuration examples of servers, interfaces, VRF, overlay-gateway, and EVPN instance on leaf nodes are discussed in the
following section.
• Refer to Example 1 for tunnel, port-channel, and VLAN verifications.
• Refer to sections Configuration: "Border Leaf to Spine Layer3", "Border Leaf to WAN Edge Layer 3", and "Border Leaf eBGP
Multihop for Border-leaf and DCI Configurations".
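The sketch referenced in the list above shows the VRF and interface pieces on Leaf 5. VRF vpn1, L3 VNI 2005, and VLAN 203 follow the example; the RD, route-target values, and gateway address are hypothetical, and the exact Network OS syntax for associating the VNI with the VRF varies by release.

rbridge-id 5
 ! VRF with the common L3 VNI used for symmetric routing
 vrf vpn1
  vni 2005
  rd 10.1.1.5:1
  address-family ipv4 unicast
   route-target import 100:2005 evpn
   route-target export 100:2005 evpn
 ! tenant-facing VE in the VRF with an anycast gateway address
 interface ve 203
  vrf forwarding vpn1
  ip anycast-address 10.20.3.1/24
 ! enable the VRF address family so Type 5 routes are advertised
 router bgp
  address-family ipv4 unicast vrf vpn1
   redistribute connected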
Server Configurations
Server 1 Bond interface configuration for CentOS server attached to Leaf 5 and Leaf 6 of Data Center Site1.
Server 3 interface configuration for CentOS VM attached to Leaf 8 of Data Center Site2.
VRF VNI verification on Leaf 5 (the same command can be used to verify on other leaf nodes).
L3 prefixes (type 5 routes) verification on Leaf 5 (the same command can be used to verify on other leaf nodes).
VRF route verification on Leaf 5 (the same command can be used to verify on other leaf nodes).
ARP verification on Leaf 5 (locally learnt ARP entries can be verified using this command).
ARP verification on Leaf 8 (locally learnt ARP entries can be verified using this command).
Server Configurations
Server 1 bond interface configuration for CentOS server attached to Leaf 5 and Leaf 6 of Data Center Site1.
Server 2 interface configuration for CentOS VM attached to Leaf 8 of Data Center Site2.
Server 3 interface configuration for CentOS VM attached to Leaf 8 of Data Center Site2.
Server 5 interface configuration for Windows VM attached to Leaf 3 and Leaf 4 of Data Center Site1.
Server 6 interface configuration for Windows VM attached to Leaf 7 of Data Center Site2.
VLAN interface configuration on Border-Leaf1 and Border-Leaf2 (similar configuration is needed on Border-Leaf3 and Border-Leaf4).
VE interface configuration on Border-Leaf1 and Border-Leaf2 (similar configuration is needed on Border-Leaf3 and Border-Leaf4).
Unique loopback interface on Border-Leaf1 and Border-Leaf2 to establish eBGP multihop session with Border-Leaf3 and Border-
Leaf4.
Unique loopback interface on Border-Leaf3 and Border-Leaf4 to establish eBGP multihop session with Border-Leaf1 and Border-
Leaf2.
Overlay gateway configuration on Border-Leaf1 and Border-Leaf2 (under config mode, similar configuration is needed on Border-Leaf3
and Border-Leaf4).
EVPN instance configuration on Border-Leaf1 and Border-Leaf2 (under rbridge mode; a similar configuration is needed on Border-Leaf3 and Border-Leaf4).
Inclusive multicast route verification on Border-Leaf1 for VNI 20003 (the same command can be used to verify on other border-leaf
nodes).
Inclusive multicast route verification on Border-Leaf1 for VNI 30003 (the same command can be used to verify on other border-leaf
nodes).
Tunnel status verification on Border-Leaf1 (the same command can be used to verify on other border-leaf nodes).
ARP verification on Leaf 5 after issuing ARP ping from all servers (locally learnt ARP entries can be verified using this command).
ARP suppression verification on Leaf 5 (remote ARP learnt via BGP EVPN can be verified using show ip arp suppression-cache).
ARP verification on Leaf 3 (locally learnt ARP entries can be verified using this command).
ARP suppression verification on Leaf 3 (remote ARP learnt via BGP EVPN can be verified using show ip arp suppression-cache).
• This enables route exchange (Type 5 routes) between Leaf 5 and 6 and Leaf 8 in VLAN 204 and WAN-edge1 in VLANs 801
and 802.
Server Configurations
Server 4 bond interface configuration for CentOS server attached to Leaf 5 and Leaf 6 of Data Center Site1.
Server 3 interface configuration for CentOS VM attached to Leaf 8 of Data Center Site2.
VE interface configuration to WAN Edge1 on Border-Leaf1 (a similar configuration is needed on WAN Edge1).
VE interface configuration to WAN Edge1 on Border-Leaf2 (a similar configuration is needed on WAN Edge1).
BGP configuration on WAN Edge 1 (similar configuration is needed on other WAN edges).
Inclusive multicast route verification on Border-Leaf1 for VNI 20004 (since the VNI for VLAN 204 defined in VRF vpn1 is extended,
Leaf 5 and 6 and Leaf 8 all advertise the IMR route).
VRF route verification on Leaf 5 (the same command can be used to verify on Leaf 6 and Leaf 8).
• VRF tenant-vrf address-family must be enabled under BGP configuration to advertise Type 5 routes.
• The ISP is connected to BL1 and BL2 on DCS1 and to BL3 and BL4 on DCS2.
• To extend the tenant-vrf from Leaf 5 and 6 and Leaf 8 to the ISP, one of the connections between Border-Leaf1 and WAN Edge1
is configured with two different VLANs (800, 802): VE 800 in the default VRF (to support the DCI interconnect and the multihop
eBGP adjacency to the border-leaf nodes on DCS2) and VE 802 in VRF public-vrf. Similar changes are made on WAN Edge1.
• Similar configurations are needed between Border-Leaf2 and WAN Edge2 in VLANs 30 (default VRF) and 31 (public-vrf),
Border-Leaf3 and WAN Edge3 in VLANs 850 (default VRF) and 851 (public-vrf), and Border-Leaf4 and WAN Edge4 in
VLANs 40 (default VRF) and 41 (public-vrf).
• eBGP adjacency between the border-leaf nodes (BL1 and BL2, BL3 and BL4) and their respective WAN edges (WE1 and WE2,
WE3 and WE4) is established using the VE interfaces mentioned above in the respective VRFs.
• The WAN edge is configured to advertise only a default route to the respective border-leaf nodes in public-vrf.
• Route leaking is configured between the border leaf and individual leaf nodes in the respective data center sites with import and
export route targets under the VNIs (see the sketch after the note below).
• The VLAN-to-VNI mapping for the VNI added under the EVPN instance for the route leak must be added under the
overlay-gateway on Leaf 5 and 6 and BL1 and BL2; a similar configuration is needed on Leaf 8 and BL3 and BL4.
• The VE interface corresponding to the route-leak VNI must be enabled on Leaf 5 and 6 and on Border-Leaf1 and Border-Leaf2;
a similar configuration is needed on Leaf 8 and BL3 and BL4.
• This enables route exchange between the tenant-vrfs of Leaf 5 and 6 and the public-vrfs of BL1 and BL2, and between the
tenant-vrf of Leaf 8 and the public-vrfs of BL3 and BL4.
• Traffic to Internet routes is verified with ping from a server attached to Leaf 5 and 6.
NOTE
In this example, private IPv4 addresses are used from the ToR to the ISP. This can be changed to public IPv4 addresses with NAT
placed either at the WAN edge or at the ISP.
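The route-leak sketch referenced above follows; the route-target values are hypothetical. The tenant-vrf on Leaf 5 and 6 imports the RT exported by public-vrf on BL1 and BL2 (which carries the default route learned from the WAN edge), and public-vrf imports the tenant RT in return.

! On Leaf 5 and 6: tenant VRF imports the public VRF's RT (default route)
rbridge-id 5
 vrf tenant-vrf
  address-family ipv4 unicast
   route-target export 100:1 evpn
   route-target import 100:1 evpn
   route-target import 100:999 evpn
!
! On BL1 and BL2: public VRF exports its RT and imports the tenant RT
rbridge-id 10
 vrf public-vrf
  address-family ipv4 unicast
   route-target export 100:999 evpn
   route-target import 100:1 evpn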
Server 3 Bond interface configuration for CentOS server attached to Leaf 5 and Leaf 6 of Data Center Site1
EVPN instance configuration on Leaf 5 and Leaf 6 (under rbridge mode) (Similar configuration is needed on Leaf 8. Leaf 8 will import
default route to reach internet routes from BL3 & BL4.)
VLAN interface configuration on Leaf 5 & 6 for L3 VNI VLANs (a similar configuration is needed on Leaf 8)
VE interface configuration on Leaf 5 & 6 for L3 VNI VLANs (Similar configuration is needed on Leaf 8)
VRF configuration on Border-Leaf1 and Border-Leaf2 (similar configuration needed on BL3 & BL4)
VE interface configuration to WAN Edge1 on Border-Leaf1 (a similar configuration is needed on WAN Edge1)
VE interface configuration to WAN Edge2 on Border-Leaf2 (a similar configuration is needed on WAN Edge2)
BGP configuration on Border-Leaf1 (Similar configuration is needed on BL2, BL3 & BL4)
BGP configuration on WAN Edge1 (Similar configuration is needed on WE2, WE3 & WE4)
Topology Description
FIGURE 16 Three Data Center Sites Interconnected Using BGP EVPN
The following matrix shows the data center tier devices and their types used for validation of this deployment model. This matrix is an
extension of matrices shown in the "BGP-EVPN-Based L2 and L3 Extension" section. In the BGP-EVPN-based L2 extension case,
traffic flow is from server to server as follows:
server - leaf - spine - border leaf - DCI tier - WAN edge - DCI tier - border leaf - leaf - server
Whereas in L3 handoff, the border leaf hands off the traffic to the WAN edge directly.
When interconnecting an IP fabric data center and a flexible (for example, VCS) data center type, the DCI tier layer can be eliminated
at the IP fabric DC site, and EVPN control-plane extension can be configured between the leaf nodes in the IP fabric DC and the DCI
tier nodes in the flexible DC site. In such a case, the traffic flow is:
server - leaf - spine - border-leaf - WAN edge cloud - DCI tier - VCS leaf - server
Hardware/Software Matrix
Role of Node    Chassis Name (Possible Chassis Types)             Minimum Software Version Required
Leaf            BR-VDX6940-144S, BR-VDX6740T, BR-VDX6740          Network OS 7.0 and later
Border leaf     BR-VDX6940-36Q, BR-VDX6940-144S                   Network OS 7.0 and later
Spine           BR-VDX8770-4/8, BR-VDX6940-36Q, BR-VDX6940-144S   Network OS 7.0 and later
DCI tier        BR-VDX6940-36Q, BR-VDX6940-144S, BR-VDX6740       Network OS 7.0 and later
WAN edge        MLXe-4/8/16/32                                    NetIron 5.9.00
Control-plane extension between sites is enabled by establishing eBGP (multihop) peering from local DCI tier nodes to remote DCI tier
nodes with the EVPN address family enabled. The VLAN-to-VNI mapping for the VLANs to be extended is configured on the DCI tier
nodes. This model allows for VLAN extension between different DC types, e.g. VCS to BGP-EVPN-based IP fabric.
In the case of extending a BGP-EVPN-based (IP fabric) data center, the leaf nodes encapsulate the server traffic and send it over a
VXLAN tunnel to the border leaf, the border leaf switches the traffic at L2 to the DCI tier, and the DCI tier nodes encapsulate the L2
traffic toward the remote DCI tier nodes.
The following figure shows the high-level packet path.
A. The server in data center site 1 sends an Ethernet frame to the destination in site 2. The packet is forwarded to the leaf node, which is
configured as the default GW for the server.
B. The leaf node learns the source MAC address and shares it with its EVPN peers, i.e., the border leaf (control-plane learning), via a
BGP EVPN update. The leaf node encapsulates the received Ethernet frame into an IP User Datagram Protocol packet and sends it
over the VXLAN tunnel where the VLAN is extended (data plane).
C. The border leaf removes the encapsulation and floods the frame to all interfaces that are configured with the same VLAN, in this
example, the port channel toward the DCI tier. The DCI tier learns the MAC from the source MAC address and shares it with remote
peers in other sites via a BGP-EVPN update.
NOTE
VTEP IP addresses are carried as the BGP next-hop attribute in every EVPN route. This allows BGP to discover remote
VTEPs. The Inclusive Multicast Ethernet Tag Route allows the receiving BGP router to discover which VLANs are common
between the two routers and extend them over the VXLAN tunnel.
D. The encapsulated traffic packet is received at the remote data center site. The VXLAN header encapsulation is removed, and the
original frame is forwarded using L2 forwarding.
E. The destination receives the packet on the active-active vLAG from the DCI tier.
The upcoming sections walk through the configuration and verification steps for this deployment model.
Configuration Steps
The traffic handoff to the border leaf can be done in two ways:
1. Over L2—Having all leaf nodes part of active-active vLAG pair.
FIGURE 18 BGP-EVPN-Based (IP Fabric) Data Center with Leaf, Spine, and Border Leaf
1. Configure the port channel between the border leaf and DCI tier nodes.
2. Configure BGP and activate EVPN adjacency between DCI tier nodes.
3. Configure an overlay gateway instance and activate it on all DCI tier nodes. (A combined sketch of these steps follows.)
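A combined sketch of these three steps on one DCI tier node is shown below; the rbridge-id, AS numbers, peer address, and port-channel number are hypothetical, while VLAN 203 follows the examples in this guide.

! Step 1: L2 port-channel facing the border leaf (member ports configured separately)
interface Port-channel 30
 switchport
 switchport mode trunk
 switchport trunk allowed vlan add 203
 no shutdown
!
! Step 2: eBGP multihop EVPN adjacency to the remote DCI tier node
rbridge-id 11
 router bgp
  local-as 64690
  neighbor 10.4.4.1 remote-as 64695
  neighbor 10.4.4.1 ebgp-multihop 5
  neighbor 10.4.4.1 update-source loopback 2
  address-family l2vpn evpn
   neighbor 10.4.4.1 activate
!
! Step 3: overlay gateway making this DCI tier node the VTEP for inter-site traffic
overlay-gateway dci1
 type layer2-extension
 ip interface loopback 1
 attach rbridge-id add 11
 map vlan 203 vni 203
 activate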
Creating VLANs to be extended (on leaf, border leaf, and DCI tier):
Interface configuration for interfaces participating in the port channel (on border leaf and DCI tier):
NOTE
The above interface configurations are to be applied on all interfaces participating in the port
channel.
NOTE
The router ID is used for the BGP neighborship formation. If the router ID is not explicitly defined, the first loopback address is
automatically chosen as the router ID.
Verify tunnel association for the VLAN that is extended to remote sites:
Route verification:
Inclusive multicast route verification on DCI tier 11 for the VNI associated with VLAN 203:
Server Configurations
Server 1 bond interface configuration - Data Center Site 1
VLAN interface configuration on Leaf O & P in Data Center Site 1, Leaf M & N in Data Center Site 2
ARP and ND suppression are to be configured when Anycast Gateway configuration is present.
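A sketch of that pairing follows, assuming VLAN 1998 from the verification steps below and a hypothetical gateway address and rbridge-id; suppression is applied per VLAN, with the anycast address on the corresponding VE interface.

! Suppress ARP/ND flooding on the extended VLAN
interface Vlan 1998
 suppress-arp
 suppress-nd
!
! Static anycast gateway on the corresponding VE interface
rbridge-id 11
 interface ve 1998
  ip anycast-address 10.19.98.1/24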
Anycast gateway verification on DCI Tier 11 (the same command can be used to verify on other DCI tier nodes)
Inclusive-multicast route verification on DCI Tier 11 for VNI associated with VLAN 1998 (the same command can be used to verify on
other nodes)
Tunnel status verification on DCI Tier 11 (the same command can be used to verify on other nodes)
Individual tunnel verification on DCI Tier 11 (the same command can be used to verify on other nodes)
VLAN verification on DCI Tier 11 for 1998 (the same command can be used to verify on other nodes)
ARP verification on DCI Tier 11 (the locally learned ARP entries can be verified using this command)
ARP suppression verification on DCI Tier 11 (the remote ARP learned via BGP EVPN can be verified using show ip arp suppression-
cache)
ARP verification on DCI Tier 21 (the locally learned ARP entries can be verified using this command)
ARP suppression verification on DCI Tier 21 (the remote ARP learned via BGP EVPN can be verified using show ip arp suppression-
cache)
Local and remote MAC verification on DCI Tier 21 for VLAN 203 (the same command can be used to verify on other leaf nodes)
Server Configurations
Server 1 interface configuration for Windows VM attached to Leaf-I of Data Center Site1
Server 2 Bond interface configuration for CentOS server attached to Leaf O and Leaf P of Data Center Site2
BGP and EVPN verification on Leaf I can be done similarly to the Border-Leaf1-to-spine verification.
Inclusive-multicast route verification on Leaf I for VNI associated with VLAN 1999
ARP suppression verification on Leaf I (the remote ARP learned via BGP EVPN can be verified using show ip arp suppression-cache)
ARP verification on DCI Tier 11 and 12 (locally learned ARP entries can be verified using this command)
ARP suppression verification on Leaf I (the remote ARP learned via BGP EVPN can be verified using show ip arp suppression-cache)
FIGURE 21 Symmetric VLAN Routing Between Two Flexible Type Data Center Sites
Server Configurations
Server 1 bond interface configuration for server attached to Leaf O-P vLAG pair of Data Center
Site1
Server 2 interface configuration for CentOS VM attached to Leaf M-N vLAG pair of Data Center
Site2
BGP configuration on DCI Tier 11 (similar configuration is needed on DCI Tier 12)
VRF VNI verification on DCI Tier 11 (the same command can be used to verify on other DCI tier nodes)
L3 prefixes (Type 5 routes) verification on DCI Tier 21 (the same command can be used to verify on other tier nodes)
VRF route verification on DCI Tier 21 (the same command can be used to verify on other DCI tier nodes)
ARP verification on DCI Tier 11 (the locally learned ARP entries can be verified using this command)
ARP verification on DCI Tier 21 (the locally learned ARP entries can be verified using this command)
FIGURE 22 Symmetric VLAN Routing Between Flexible Type and IP Fabric Data Center Sites
Server Configurations
Server 1 bond interface configuration for server attached to Leaf O-P vLAG pair of Data Center
Site1
Server 2 interface configuration for CentOS VM attached to Leaf M-N vLAG pair of Data Center
Site2
BGP configuration on DCI Tier 11 (a similar configuration is needed on DCI Tier 12)
ARP verification on DCI Tier 11 (the locally learned ARP entries can be verified using this command)
ARP verification on DCI Tier 21 (the locally learned ARP entries can be verified using this command)
• VLAN to VNI mapping for the VNI added under EVPN instance for route leak must be added under overlay-gateway on DCI
Tier 11-12. Similar configuration is needed on Leaf 8 and BL3 and BL4.
• VE interface corresponding to the VNI for route-leak must be enabled on DCI Tier 11-12. Similar configuration is needed on
Leaf 8 and BL3 and BL4.
• This enables route exchange between the tenant-vrfs and the public-vrfs of DCI Tier 11-12, and between the tenant-vrf of Leaf 8
and the public-vrfs of BL3 and BL4.
• Traffic to Internet routes is verified with ping from a server attached to Leaf O-P.
NOTE
In this example, private IPv4 addresses are used from the DCI tier to the ISP. This can be changed to public IPv4 addresses with
NAT placed either at the WAN edge or at the ISP.
FIGURE 23 Extending a Tenant VRF at Flexible Type and IP Fabric Data Center with Common L3VNI
Server 1 bond interface configuration for CentOS server attached to Leaf O and Leaf P of Data Center
Site1
VE interface configuration to Server 1 on DCI Tier 11-12 (a similar configuration is needed on Leaf 8)
EVPN instance configuration on DCI Tier 11-12 (under rbridge mode) (a similar configuration is needed on Leaf 8. Leaf 8 will import
default route to reach internet routes from BL3 and BL4.)
VLAN interface configuration on DCI Tier 11-12 for L3 VNI VLANs (a similar configuration is needed on Leaf 8)
VE interface configuration on DCI Tier 11 for L3 VNI VLANs (a similar configuration is needed on Leaf 8)
BGP configuration on DCI Tier 11 (a similar configuration is needed on DCI Tier 12 and Leaf 8)
VRF configuration on DCI Tier 11 (a similar configuration is needed on BL3 and BL4)
VE interface configuration to WAN Edge1 on DCI Tier 11 (a similar configuration is needed on WAN edge1)
VE interface configuration to WAN Edge2 on DCI Tier 12 (a similar configuration is needed on WAN edge2)
BGP configuration on DCI Tier 11 (a similar configuration is needed on DCI Tier 12, BL3, and BL4)
BGP configuration on WAN Edge1 (a similar configuration is needed on WE2, WE3, and WE4)
Configuration aspects:
• vLAG connectivity to servers
• Establishing eBGP multihop neighborship between leaf and border-leaf
• Establishing eBGP multihop neighborship between data center edge nodes
• Configuring VLAN 203 on leaf, border-leaf and DCI tier nodes
• Verifying ping between servers 1 and 4
Server Configuration
Switch Configuration
DC1: Configuring VLAN interface on Leaf 5 & 6, border leaf (nodes that are part of IP Fabric data center)
DC1 & DC4: DCI tier 11 & 12 in data center site 1, leaf nodes, DCI tier 41 & 42 in data center site 4
DC1: Port-channel configuration on Leaf 5 & 6 pair in data center site 1 (IP Fabric data center)
DC4: Port-channel configuration between DCI tier 41-42 and leaf nodes in site 4
DC1: Overlay Gateway configurations on Leaf 5 & 6, border-leaf under global config mode
DC1 & DC4: Overlay Gateway configurations on DCI tier 11, 12, 41, & 42
DC1: EVPN instance configuration on Leaf 5 & 6, border-leaf (under rbridge mode)
DC1 & DC4: EVPN instance configuration on DCI tier 11, 12, 41, & 42
As discussed in the "Validated Design: EVPN DCI with L3 handoff" section, the data center site is a BGP EVPN based IP fabric data
center with leaf-spine topology.
• All leaf and border leaf nodes peer with Spine nodes.
• Leaf-Spine adjacencies are activated under L2VPN EVPN address-family. Adjacencies are configured with next-hop-
unchanged to advertise routes from EVPN peers to other EVPN peers without changing the next-hop.
• On spine switches, retain route-target all is configured under the EVPN address family to prevent RTs from being stripped as
routes pass from hop to hop. Because leaf switches compare RTs against the import RTs under their local EVPN instance before
installing routes, the RT advertised by each leaf node must be preserved before the route is reflected to other leaf nodes.
Port-channel Verification
Tunnel Verification
DC1 & DC4: Tunnel between DCI tier 11-12 & 41-42
DC1 & DC4: Tunnel between DCI tier 11-12 & 41-42 detail output
MAC Verification
DC1: Server
DC4: Server
In this example, traffic is routed between a host in DC1 (IP Fabric) that is a member of VLAN 203 and a host in VLAN 204 that resides
in DC4 (VCS). The IP Fabric data center site (DC1) is configured with a Static Anycast gateway for both source and destination VLANs.
As noted, DC4 is a VCS fabric (i.e., a non-EVPN domain) and is configured with Fabric Virtual Gateway (FVG) for routing between VLANs.
Example
VLAN 203 & 204 are extended between Data Center 1 (IP Fabric) and Data Center 4 (VDX VCS):
• DC 1 (IP Fabric): Leaf 5 & 6, Border-leaf, DCI Tier 11-12
– DC 1 VLAN 203 and 204 Static Anycast GW is configured on leaf and border leaf switches
• Data Center Site 4 (VDX VCS): Leaf and DCI tier 41-42
– DC4 VLAN 203 and 204 Fabric Virtual Gateway is configured for the VCS
The following example builds on the configuration steps from Example 1 above (VLAN, port-channel, overlay-gateway, EVPN
instance, and BGP).
Topology
Server Configuration
DC1: Configuration on server bond interface in data center site 1 (VLAN 203 subnet)
DC4: Configuration on server bond interface in data center site 4 (VLAN 204 subnet)
Switch Configuration
DC1: Configuring VLAN interface on Leaf 5 & 6 and border leaf nodes
DC1 & DC4: Configuring VLAN interface on DCI tier 11 & 12 in data center site 1, DCI tier 41 & 42 in data center site 4
DC1: In addition to the VE 203 configuration, a VE 204 interface is configured on Leaf 5 & 6 and the border leaf in data center site 1
DC4: On site 4 leaf nodes, Fabric Virtual Gateway is configured in global configuration mode
DC4: IP MTU on VE
DC1: Port-channel configuration on border leaf and DCI tier nodes in site 1
DC4: Port-channel configuration on DCI tier nodes and leaf nodes in site 4
DC1: Overlay Gateway configuration on Leaf 5 & 6, border leaf under configuration mode
DC1 & DC4: Overlay Gateway configuration on DCI tier 11, 12, 41, & 42
NOTE
Using map vlan auto generates VNIs with the same ID as the VLAN ID; for example, VLAN 203 is assigned VNI 203.
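In context, that mapping sits under the overlay gateway, roughly as in the sketch below; the gateway name and source IP interface are illustrative, and the exact form of the auto-mapping keyword can vary by release.

overlay-gateway dci-gateway
type layer2-extension
ip interface loopback 1
! VLAN-to-VNI mapping generated 1:1 from the VLAN ID
map vlan vni auto
activate

Automatic mapping avoids maintaining a per-VLAN map, at the cost of tying VNI numbering to VLAN numbering.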
DC1 & DC4: EVPN instance configuration on DCI tier 11, 12, 41, & 42
ARP Verification
DC1: Server
DC4: Server
Topology Description
• In Data Center Site 1, all leaf nodes are connected to two spine nodes (with IPv4 addresses configured on interfaces in /31 subnets) using IPv4 eBGP adjacencies; both spine nodes are in the same AS, 64610. Leaf C and Leaf D are single nodes, while Leaf E-Leaf F and Leaf G-Leaf H are vLAG pairs. Leaf C is in AS 64630, Leaf D is in AS 64650, Leaf E-Leaf F are in AS 64640, and Leaf G-Leaf H are in AS 64670. ECMP is achieved using multipath eBGP.
• In Data Center Site 2, all leaf nodes are connected to two spine nodes (with IPv4 addresses configured on interfaces in /31 subnets) using IPv4 eBGP adjacencies; both spine nodes are in the same AS, 64710. Leaf I, Leaf J, and Leaf L are single nodes. Leaf I is in AS 64720, Leaf J is in AS 64730, and Leaf L is in AS 64750. ECMP is achieved using multipath eBGP.
• In Data Center Site 3, all leaf nodes are connected to two spine nodes (with IPv4 addresses configured on interfaces in /31 subnets) using IPv4 eBGP adjacencies; both spine nodes are in the same AS, 64810. Leaf 1, Leaf 2, Leaf 3, and Leaf 4 are single nodes. Leaf 1 is in AS 64820, Leaf 2 is in AS 64830, Leaf 3 is in AS 64840, and Leaf 4 is in AS 64850. ECMP is achieved using multipath eBGP.
• Leaf-spine adjacencies are activated under the L2VPN EVPN address-family on all leaf and spine switches and are configured with next-hop-unchanged to advertise routes from EVPN peers to other EVPN peers without changing the next hop.
• On spine switches, retain route-target all is configured under the EVPN address-family. This prevents RTs from being stripped as routes pass from one hop to the next: because leaf switches compare RTs against the import RT of the local EVPN instance before installing routes, the RT advertised by each leaf node must be preserved before routes are reflected to other leaf nodes.
• Leaf-to-host interfaces are configured as an active-active vLAG (an aggregation of multiple physical links across multiple switches in a single fabric, forming one logical interface). The interfaces can be in access or trunk VLANs, with IPv4 and IPv6 anycast addresses configured to allow VM mobility within or across data center sites (see the port-channel sketch after this list).
• The overlay gateway is configured in the global context on all leaf nodes (it applies to both nodes in the case of a two-node vLAG pair) with the type of overlay to be used, the respective VLAN-to-VNI mappings, VTEP membership, switch membership, and VXLAN monitoring such as VLAN statistics and sFlow.
• An EVPN instance is configured under rbridge mode on each leaf with the RD, RT, and the VNIs to be extended.
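A host-facing vLAG port-channel of the kind described above might be sketched as follows; the port-channel number, member interface, and VLAN are illustrative.

interface TenGigabitEthernet 1/0/10
channel-group 10 mode active type standard
no shutdown
!
interface Port-channel 10
vlag ignore-split
switchport
switchport mode trunk
switchport trunk allowed vlan add 203
no shutdown

The same port-channel number is configured on both members of the vLAG pair, so the attached host sees a single LACP partner.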
NOTE
Connections between Leaf-to-Spine and Spine-to-Spine are all 40G connections.
Hardware/Software Matrix
Role of Node | Chassis Name (Possible Chassis Types) | Minimum Software Version Required
Configuration Steps
The BGP-EVPN-based L2 and L3 extension deployment model is characterized by the following:
• Layer 3 reachability between spine nodes in different data centers via eBGP neighborship.
• BGP neighborship between spine nodes in different data centers with the EVPN address-family enabled.
BGP configuration on Spine A to the other spines (similar configurations apply on all spines in the other DCs with their respective IP addresses)
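The spine-to-spine session follows the same multihop pattern sketched earlier, with the EVPN address-family enabled on top of plain IPv4 loopback reachability. In the sketch below the addresses and multihop count are illustrative; the AS numbers follow the site AS values given in the topology description.

rbridge-id 21
router bgp
local-as 64610
! multihop EVPN session to a spine loopback in another data center
neighbor 10.10.20.21 remote-as 64710
neighbor 10.10.20.21 ebgp-multihop 5
neighbor 10.10.20.21 update-source loopback 1
address-family ipv4 unicast
network 10.10.20.11/32
!
address-family l2vpn evpn
retain route-target all
graceful-restart
neighbor 10.10.20.21 activate
neighbor 10.10.20.21 next-hop-unchanged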
Server Configurations
Server 1 Bond interface configuration for CentOS server attached to Leaf G and Leaf H of Data Center Site1.
Server 2 interface configuration for Win-VM attached to Leaf J of Data Center Site2.
Server 3 interface configuration for CentOS VM attached to Leaf 1 of Data Center Site3.
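The two listings that follow show the BGP configuration for the Leaf G (rbridge-id 1) and Leaf H (rbridge-id 2) vLAG pair in AS 64670. Note that both rbridges advertise the same loopback, 10.10.10.4/32, presumably the shared VTEP address for the pair.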
rbridge-id 1
router bgp
local-as 64670
neighbor Leaf-Spine-EBGP-Peer peer-group
neighbor Leaf-Spine-EBGP-Peer remote-as 64610
neighbor 10.1.1.8 peer-group Leaf-Spine-EBGP-Peer
neighbor 10.1.2.8 peer-group Leaf-Spine-EBGP-Peer
address-family ipv4 unicast
network 10.10.10.4/32
maximum-paths 8
multipath ebgp
!
address-family l2vpn evpn
graceful-restart
neighbor Leaf-Spine-EBGP-Peer activate
neighbor Leaf-Spine-EBGP-Peer allowas-in 1
neighbor Leaf-Spine-EBGP-Peer next-hop-unchanged
rbridge-id 2
router bgp
local-as 64670
neighbor Leaf-Spine-EBGP-Peer peer-group
neighbor Leaf-Spine-EBGP-Peer remote-as 64610
neighbor 10.1.1.10 peer-group Leaf-Spine-EBGP-Peer
neighbor 10.1.2.10 peer-group Leaf-Spine-EBGP-Peer
address-family ipv4 unicast
network 10.10.10.4/32
maximum-paths 8
multipath ebgp
!
address-family l2vpn evpn
graceful-restart
neighbor Leaf-Spine-EBGP-Peer activate
neighbor Leaf-Spine-EBGP-Peer allowas-in 1
neighbor Leaf-Spine-EBGP-Peer next-hop-unchanged
NOTE
VLAN-to-VNI mapping can be done manually or automatically. If automatic mapping is enabled, the VLAN-to-VNI mapping is 1:1; for example, VLAN 201 maps to VNI 201.
BGP and EVPN verification on Leaf G can be performed with the same commands used for Border-Leaf1 to spine verification (Leaf H can be verified using the same commands).
vLAG-pair verification on Leaf G (Leaf H can be verified using the same command).
Anycast gateway verification on Leaf G (the same command can be used to verify on other leaf nodes).
rbridge-id 1
router bgp
local-as 64730
neighbor Leaf-Spine-EBGP-Peer peer-group
neighbor Leaf-Spine-EBGP-Peer remote-as 64710
address-family ipv4 unicast
network 10.10.10.6/32
maximum-paths 8
multipath ebgp
!
address-family ipv6 unicast
!
address-family l2vpn evpn
graceful-restart
neighbor Leaf-Spine-EBGP-Peer activate
neighbor Leaf-Spine-EBGP-Peer allowas-in 1
neighbor Leaf-Spine-EBGP-Peer next-hop-unchanged
rbridge-id 1
router bgp
local-as 64820
neighbor Leaf-Spine-EBGP-Peer peer-group
neighbor 10.1.9.0 remote-as 64810
neighbor 10.1.9.0 peer-group Leaf-Spine-EBGP-Peer
neighbor 10.1.10.0 remote-as 64810
neighbor 10.1.10.0 peer-group Leaf-Spine-EBGP-Peer
address-family ipv4 unicast
network 10.10.10.70/32
neighbor Leaf-Spine-EBGP-Peer capability additional-paths
maximum-paths 8
multipath ebgp
!
address-family ipv6 unicast
!
address-family l2vpn evpn
graceful-restart
neighbor Leaf-Spine-EBGP-Peer activate
neighbor Leaf-Spine-EBGP-Peer allowas-in 1
neighbor Leaf-Spine-EBGP-Peer next-hop-unchanged
Inclusive-multicast route verification on Leaf G for VNI associated with VLAN 203 (same command can be used to verify on other
nodes).
Tunnel status verification on Leaf G (same command can be used to verify on other nodes).
Individual Tunnel verification on Leaf G (same command can be used to verify on other nodes).
VLAN verification on Leaf G and Leaf H for 203 (same command can be used to verify on other nodes).
ARP verification on Leaf G (locally learnt ARP entries can be verified using this command).
ARP suppression verification on Leaf J (remote ARP entries learnt via BGP EVPN can be verified using show ip arp suppression-cache).
Local and remote MAC verification on Leaf G for VLAN 203 (the same command can be used to verify on other leaf nodes).
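As a reference for the steps above, two commonly used NOS commands are shown below; the VLAN ID is illustrative, and the suppression-cache command is the one named in the ARP suppression step.

show mac-address-table vlan 203
show ip arp suppression-cache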
Server Configurations
Server 1 bond interface configuration for server attached to Leaf G and Leaf H of Data Center Site1.
Server 2 interface configuration for Windows VM attached to Leaf J of Data Center Site2.
Inclusive-multicast route verification on Leaf G for VNI associated with VLAN 204 (same command can be used to verify on Leaf H and
Leaf J).
VLAN verification on Leaf G and Leaf H for 204 (the same command can be used to verify on Leaf J).
ARP verification on Leaf G (locally learnt ARP entries can be verified using this command).
ARP suppression verification on Leaf J (remote ARP entries learnt via BGP EVPN can be verified using show ip arp suppression-cache).
ARP verification on Leaf J from Server 2 (locally learnt ARP entries can be verified using this command).
ARP suppression verification on Leaf G (remote ARP entries learnt via BGP EVPN can be verified using show ip arp suppression-cache).
Local and remote MAC verification on Leaf G for VLAN 204 (the same command can be used to verify on other leaf nodes).
Conversational ARP verification on Leaf G by sending continuous traffic between Server 1 and Server 2.
Symmetric Routing
• VLAN 203 is configured on Data Center Site 1 (Leaf G & H), VLAN 204 on Data Center Site 2 (Leaf J), and VLAN 205 on Data Center Site 3 (Leaf 1).
• VRF vpn1 is configured on Leaf G, Leaf H, Leaf J, and Leaf 1 with the respective import and export route-targets and with the common L3 VNI 2005.
• VNI 2005 does not need to be added under the EVPN instance, but the VLAN-to-VNI mapping is required under the overlay-gateway configuration.
• The VE interfaces for VLANs 203, 204, and 205, as well as the VE of the L3 VNI VLAN, are configured under VRF vpn1.
• The VRF address-family must be enabled under the BGP configuration to advertise Type 5 routes.
• For demonstration purposes, traffic between Leaf G & H and Leaf J is verified using traceroute from the servers attached to the leaf nodes (between VLANs 203 and 204).
• Configuration examples for the servers, interfaces, VRF, overlay-gateway, and EVPN instance on the leaf nodes are discussed in the following section; a sketch of the VRF wiring follows this list.
• Refer to Example 1 for tunnel, port-channel, and VLAN verification.
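A sketch of that VRF wiring on one leaf follows. The RD and route-target values, VE numbers, and addresses are illustrative (the document itself uses RD 601:601 for vpn1 on Leaf 1), and exact syntax can vary by NOS release. Note that the L3 VNI is realized as a VLAN-to-VNI mapping under the overlay gateway, with the VE of that VLAN placed in the VRF.

rbridge-id 1
vrf vpn1
rd 500:500
address-family ipv4 unicast
! route-targets exchanged as EVPN Type 5 routes
route-target export 500:500 evpn
route-target import 500:500 evpn
!
interface Ve 203
vrf forwarding vpn1
ip anycast-address 10.20.3.1/24
no shutdown
!
! VE for the L3 VNI VLAN, also placed in the VRF
interface Ve 2005
vrf forwarding vpn1
no shutdown
!
router bgp
address-family ipv4 unicast vrf vpn1
!
overlay-gateway dci-gateway
map vlan 2005 vni 2005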
Server Configurations
Server 1 bond interface configuration for CentOS server attached to Leaf G and Leaf H of Data Center Site1.
Server 2 interface configuration for Win-VM attached to Leaf J of Data Center Site2.
Server 3 interface configuration for CentOS VM attached to Leaf 1 of Data Center Site3.
NOTE
Similar configurations are used for the section "Leaf Node 1 Configurations on DCS3," with Leaf 1 in local AS 64820, using VLAN 205, and VRF vpn1 with RD 601:601.
VRF VNI verification on Leaf G (Same command can be used to verify on other leaf nodes).
L3 prefixes (Type 5 routes) verification on Leaf G (Same command can be used to verify on other leaf nodes).
VRF route verification on Leaf G (Same command can be used to verify on other leaf nodes).
ARP verification on Leaf G (locally learnt ARP entries can be verified using this command).
ARP verification on Leaf J (locally learnt ARP entries can be verified using this command).
Tunnel Scale
The Extreme implementation is designed to minimize the number of VXLAN tunnels required at a given leaf switch by allowing multiple VLAN-to-VNI mappings on each tunnel. With an extended control plane between two data centers, a general rule of thumb for calculating the number of tunnels originating from a given leaf switch is to count the number of remote leaf switches that share a common VNI with it. An illustrative example follows:
Scenario: Multiple leaf nodes in different data centers with a varying number of VLAN/VNI mappings:
• Leaf node 1 in DC1 has VLANs 100–199 mapped to VNI 10100–10199.
• Leaf node 2 in DC1 has VLANs 100–199 mapped to VNI 10100–10199.
• Leaf node 3 in DC3 has VLAN 100 mapped to VNI 10100.
• Leaf node 4 in DC3 has VLAN 200 mapped to VNI 20000.
Leaf 1, Leaf 2, and Leaf 3 all share VNI 10100, so Leaf 1 and Leaf 2 each bring up two tunnels (to each other and to Leaf 3), and Leaf 3 brings up two tunnels (to Leaf 1 and Leaf 2). No tunnels are created on Leaf 4 because no other leaf switch shares a common VNI with it.
Tunnels * VLANs
The tunnels × VLANs scale is calculated as the sum of all VLANs extended across all VXLAN tunnels. For example: Leaf node 1 has 10 VXLAN tunnels (that is, 10 remote leaf nodes share a common VNI mapping); 5 of the tunnels extend 5 VLANs each and the other 5 extend 10 VLANs each, so tunnels × VLANs = (5 × 5) + (5 × 10) = 75.
The following tables provide a brief summary of the key scale parameters validated in the test topologies in this document. Note that these values are not a measure of the maximum scale that can be supported with Extreme switches for DCI.