
NSX Design: Networking


Data Center Profile

Experience, background, and history

Name: Paul A. Mancuso

VMware Instructor/VCI: 2006

Cisco Instructor/CCSI: 1996

Industry Experience: 24+ years

VCDX-NV, VCI, CCSI, CCNP Data Center, MCSE 2012 (Since 1994), MCT, CISSP

Contact/Email/Twitter

pmancuso@vmware.com

datacentertrainer@gmail.com

@pmancuso

954.551.6081

Background:


Current Emphasis: Data center technologies, network virtualization, server and desktop virtualization, SAN switching

Overview: Started in LAN networking and network management systems, and gradually moved into data center architecture and management

Publications: Author of MCITPro 70-647: Windows Server 2008 R2 Enterprise Administration and MCITPro 70-237: Designing Messaging Solutions with Microsoft® Exchange Server 2007

Certifications: Dozens of networking/data center/SAN switching/server administration technical certifications from Cisco, VMware, Microsoft, and Novell, plus industry-recognized security certifications from ISC2 (CISSP)

Education: Graduated with honors from Ohio State University with a Bachelor of Science degree in Zoology (Pre-Med) and minors in Finance and Economics

History:

02/2015 – Present: VMware NSBU: Technical Enablement Architect

01/2013 – 02/2015: Firefly Director of Cisco Integration for VMware and Microsoft Integration

11/2009 – 12/2012: Firefly Senior Instructor and PLD for Cisco Data Center Virtualization

10/2003 – Present: CEO NITTCI: Prof Services, Training, Content Development (Courseware and books)

02/1989 – 10/2003: CEO Dynacomp Network Systems: Consulting, Training, Courseware Development

Including LAN networking specializing in Novell NetWare, Directory Services, and LAN design

Agenda – Part 1

1 vSphere Distributed Switch (Whiteboard)

2 NSX for vSphere Overview

3 Physical Network Design Considerations

4 vSphere Design Considerations for NSX

5 NSX Design Considerations


Session Objectives

vSphere Distributed Switching

NSX Introduction

Overview of Physical Network designs for Network Virtualization

Cover vSphere design impacts on VMware NSX

Highlight key design considerations in NSX for vSphere deployments

Also refer to the NSX-v Design Guide:

https://www.vmware.com/files/pdf/products/nsx/vmw-nsx-network-virtualization-design-guide.pdf


Agenda – Part 1

1 vSphere Distributed Switch (Whiteboard)

2 NSX for vSphere Overview

3 Physical Network Design Considerations

4 vSphere Design Considerations for NSX

5 NSX Design Considerations


vSphere and vSphere Distributed Switch

Whiteboard Discussion


Agenda – Part 1

1 vSphere Distributed Switch (Whiteboard)

2 NSX for vSphere Overview

3 Physical Network Design Considerations

4 vSphere Design Considerations for NSX

5 NSX Design Considerations


NSX for vSphere Overview

Quick refresh on NSX components and basic architecture


NSX Customer and Business Momentum


1200+ NSX Customers

250+ Production Deployments (adding 25-50 per QTR)

100+ Organizations have spent over US$1M on NSX

Stats as of end of Q4 2015

Today’s situation

Internal and external forces

Better security
Faster time to market
Higher availability
Be more efficient
Run things cheaper

Our vision:

Deliver

• Inherently secure infrastructure
• IT at the speed of business
• Data center anywhere
• Be more efficient: improved data center operations
• Run things cheaper: CapEx (increase compute efficiency, ensure full life of network hardware, etc.)

Primary Use Cases with NSX

Security (Theme: Inherently Secure Infrastructure)
• Lead Project: Micro-segmentation
• Value: Secure infrastructure at 1/3 the cost
• Other Projects: DMZ Anywhere, Secure End User

Automation (Theme: IT at the Speed of Business)
• Lead Project: IT Automating IT
• Value: Reduce infrastructure provisioning time from weeks to minutes
• Other Projects: Developer Cloud, Multi-tenant Infrastructure

Application Continuity (Theme: Datacenter Anywhere)
• Lead Project: Disaster Recovery
• Value: Reduce RTO by 80%
• Other Projects: Metro/Geo Pooling, NSX in Public Cloud

What is NSX?

Provides a faithful reproduction of network and security services in software:

• Switching
• Routing
• Firewalling
• Load Balancing
• VPN
• Connectivity to Physical


NSX Components

Cloud Consumption
• Self-Service Portal
• vCloud Automation Center, OpenStack, Custom CMP

Management Plane: NSX Manager
• Single configuration portal
• REST API entry-point

Control Plane: NSX Controller
• Manages logical networks
• Control-plane protocol
• Separation of control and data plane

Data Plane: Distributed Services and NSX Edge
• Logical Switch, Distributed Firewall, Logical Router
• ESXi hypervisor kernel modules
• High-performance data plane
• Scale-out distributed forwarding model
• Connectivity to the physical network

NSX vSwitch and NSX Edge

NSX vSwitch (VDS) on ESXi – hypervisor kernel modules (vSphere VIBs)
• VXLAN
• Distributed Routing
• Distributed Firewall
• Switch Security
• Message Bus

NSX Logical Router Control VM (NSX Edge Logical Router)
• Control functions only
• Dynamic routing and updates to the controller
• Determines the active ESXi host for VXLAN to VLAN layer 2 bridging

NSX Edge Services Gateway
• ECMP, dynamic routing (BGP & OSPF)
• L3-L7 services: NAT, DHCP, Load Balancer, VPN, Firewall
• VM form factor
• High availability

Virtual Networks (VMware NSX)

Design decision: Should VMware NSX™ be included in the design?

A virtual network is a software container that delivers network services.

VMware NSX virtualizes logical switching (layer 2) over existing physical networks.

VMware NSX virtualizes logical routing (layer 3) over existing physical networks.

VMware NSX also provides the following features:

Logical Firewall: Distributed firewall, kernel integrated, high performance

Logical Load Balancer: Application load balancing in software

Logical Virtual Private Network (VPN): Site-to-site and remote access VPN in software

VMware® NSX API™: REST API for integration into any cloud management platform

For more information see the VMware NSX Network Virtualization Design Guide at http://www.vmware.com/files/pdf/products/nsx/vmw-nsx-network-virtualization-design-guide.pdf


Virtual Extensible LANs (VXLAN)

Design decision: Should VXLAN be included in the design?

Ethernet in IP overlay network:

Entire L2 frame encapsulated in User Datagram Protocol (UDP)

50+ bytes of overhead

VXLAN can cross layer 3 network boundaries.

Allows network boundary devices to extend virtual network boundaries over physical IP networks.

Expands the number of available logical Ethernet segments from 4094 to over 16 million logical segments.

Encapsulates the source Ethernet frame in a new UDP packet.

VXLAN is transparent to virtual machines.

VXLAN is an overlay between VMware ESXi hosts. Virtual machines do not see VXLAN ID.


VXLAN Terms

A VTEP is an entity that encapsulates an Ethernet frame in a VXLAN frame or de-encapsulates a VXLAN frame and forwards the inner Ethernet frame.

A VTEP proxy is a VTEP that forwards VXLAN traffic to its local segment from another VTEP in a remote segment.

A transport zone defines members or VTEPs of the VXLAN overlay:

Can include ESXi hosts from different VMware vSphere® clusters

A cluster can be part of multiple transport zones

A VXLAN Network Identifier (VNI) is a 24-bit number that is added to the VXLAN frame:

The VNI uniquely identifies the segment to which the inner Ethernet frame belongs

Multiple VNIs can exist in the same transport zone

VMware NSX for vSphere starts with VNI 5000


VXLAN Frame Format

Original L2 frame header and payload is encapsulated in a UDP/IP packet

50 bytes of VXLAN overhead

Original L2 header becomes payload, plus: VXLAN, UDP, and IP headers

VXLAN Packet (outer encapsulation):
• Outer MAC Header (14+ bytes): Destination Address (6), Source Address (6), VLAN Type 0x8100 (2), VLAN ID Tag (2), Ether Type 0x0800 (2)
• Outer IP Header (20 bytes): Misc Data (9), Protocol 0x11 (1), Header Checksum (2), Source IP (4), Destination IP (4)
• Outer UDP Header (8 bytes): Source Port (2), VXLAN Port (2), UDP Length (2), Checksum 0x0000 (2)
• VXLAN Header (8 bytes): VXLAN Flags (1), Reserved (3), VNI (3), Reserved (1)

Original Frame (now the payload):
• Inner L2 Header (14+ bytes): Destination Address (6), Source Address (6), VLAN Type 0x8100 (2), VLAN ID Tag (2), Ether Type 0x0800 (2)
• Payload (up to 1500 bytes), FCS

VXLAN overhead: 50 bytes (outer MAC + outer IP + outer UDP + VXLAN headers)
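To make the arithmetic concrete, the short Python sketch below (an illustration added here, not part of the original deck) adds up the outer headers listed above and derives the minimum underlay MTU needed to carry a 1500-byte guest frame without fragmentation; the 1600-byte figure used elsewhere in this deck simply adds headroom.

```python
# Illustrative only: recompute the VXLAN overhead shown in the frame-format slide.

OUTER_MAC = 14      # 18 if the outer frame carries an 802.1Q tag
OUTER_IP = 20
OUTER_UDP = 8
VXLAN_HDR = 8

overhead = OUTER_MAC + OUTER_IP + OUTER_UDP + VXLAN_HDR
print(f"VXLAN overhead: {overhead} bytes")           # 50 (54 with an outer 802.1Q tag)

inner_frame_payload = 1500                            # standard guest/VM MTU
min_underlay_mtu = inner_frame_payload + overhead
print(f"Minimum underlay MTU: {min_underlay_mtu}")    # 1550; NSX guidance rounds up to 1600

# A 24-bit VNI gives the segment count quoted on the VXLAN slide.
print(f"VXLAN segments available: {2**24:,}")         # 16,777,216
```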

NSX for vSphere VXLAN Replication Modes

NSX for vSphere provides three modes of traffic replication (one that is Data Plane based and two that are Controller based)

Multicast Mode

Requires IGMP for a Layer 2 topology and Multicast Routing for L3 topology

Unicast Mode

All replication occurs using unicast

Hybrid Mode

Local replication offloaded to physical network, while remote replication occurs via unicast

All modes require an MTU of 1600 bytes


VXLAN Replication: Control Plane

In unicast or hybrid mode, an ESXi host sending the communication will select one VTEP in every remote segment from its VTEP mapping table as a proxy. This selection is per VNI (balances load across proxy VTEPs).

In unicast mode, this proxy is called a Unicast Tunnel End Point (UTEP).

In hybrid mode, this proxy is called a Multicast Tunnel End Point (MTEP).

This list of UTEPs or MTEPs is NOT synced to each VTEP.

If a UTEP or MTEP leaves a VNI, the ESXi host sending the communication will select a new proxy in the segment

The NSX Controller hosts the VXLAN directory service, which maintains the MAC table, ARP table, and VTEP table.
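The per-VNI proxy selection described above can be pictured with the Python sketch below. It is purely illustrative: the hash-based choice, the data structures, and the function names are assumptions made for the example, not the actual NSX algorithm; the point is only that each sending host picks one proxy VTEP per remote segment per VNI and re-picks if that VTEP leaves the VNI.

```python
# Illustrative sketch of per-VNI proxy (UTEP/MTEP) selection; not the real NSX algorithm.

def pick_proxy(vni: int, remote_segment_vteps: list[str]) -> str:
    """Pick one proxy VTEP in a remote segment for a given VNI.

    Keying the choice on the VNI spreads different logical switches
    across different proxy VTEPs, the load-balancing effect the slide describes.
    """
    if not remote_segment_vteps:
        raise ValueError("no VTEPs reported for this segment/VNI")
    return remote_segment_vteps[vni % len(remote_segment_vteps)]

# Example VTEP mapping table for VNI 5001, keyed by remote transport subnet.
vtep_table = {
    "192.168.250.0/24": ["192.168.250.51", "192.168.250.52", "192.168.250.53"],
}

proxy = pick_proxy(5001, vtep_table["192.168.250.0/24"])
print(f"Proxy VTEP for VNI 5001: {proxy}")

# If the chosen proxy leaves the VNI, the sending host simply re-selects:
vtep_table["192.168.250.0/24"].remove(proxy)
print(f"New proxy: {pick_proxy(5001, vtep_table['192.168.250.0/24'])}")
```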

NSX – Logical View

Logical view: VM1-VM3 connect to the Web logical switch (172.16.10.0/24) and VM4-VM5 to the App logical switch (172.16.20.0/24). Both logical switches attach to a Distributed Logical Router, which connects through a Transit logical switch to an NSX Edge providing routing services (NAT, Firewall, LB). The Edge uplinks to the physical routers over a VLAN port group, and the Logical Firewall protects the workloads.

Enterprise Topology

A common enterprise-level topology.

External Network – Physical Router – (VLAN 20 uplink) – NSX Edge Services Gateway – (VXLAN 5020 uplink) – Logical Router Instance 1, with Web, App, and DB logical switches (Web1/App1/DB1 through Webn/Appn/DBn) and their VMs attached to the logical router.

Service Provider: Multiple Tenant Topology

Multiple tenants to the same NSX Edge gateway.

External Network – NSX Edge Services Gateway, with multiple tenants attached:
• Tenant 1: VXLAN 5020 uplink to Logical Router Instance 1, with Web, App, and DB logical switches and their VMs
• Tenant 2: VXLAN 5030 uplink to Logical Router Instance 2, with Web, App, and DB logical switches and their VMs

NSX Multiple Tenant Topology (IP Domain Separation)

External Network – NSX Edge Services Gateway, with tenant groups separated by IP domain:
• Tenants 1 through 9: VXLAN 5021 to VXLAN 5029 uplinks to their respective Logical Router instances (LR Instance 1 shown), each with Web, App, and DB logical switches and their VMs
• Tenants 10 through 19: VXLAN 5031 to VXLAN 5039 uplinks to their respective Logical Router instances (LR Instance 10 shown), each with Web, App, and DB logical switches and their VMs

NSX– Physical View

Physical view: the Management Cluster hosts the NSX Manager and Controller (with vCenter), the Compute Clusters host the workload VMs (VM1-VM5 on the Web and App logical switches), and the Edge Cluster hosts the NSX Edges. The VXLAN transport zone spans the compute and edge clusters over the physical network, using Transport Subnet A (192.168.150.0/24) and Transport Subnet B (192.168.250.0/24).

Management Plane Components

Cloud consumption (vRA/OpenStack/Custom CMP) uses the vSphere APIs and the NSX REST APIs

Management Plane
• NSX Manager is mapped 1:1 to vCenter Server
• The NSX Manager vSphere plugin provides a single pane of glass
• 3rd-party management consoles can also consume the NSX REST APIs
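Since NSX Manager is the REST API entry point, here is a minimal Python sketch of how a CMP or script might query it. The host name, credentials, and the endpoint path (/api/2.0/vdn/scopes, which lists transport zones in the NSX for vSphere API as I recall it) are assumptions for illustration; check the NSX for vSphere API guide for the authoritative paths.

```python
# Minimal sketch: query NSX Manager (the REST API entry point) for transport zones.
# Hostname, credentials, and endpoint path are illustrative assumptions.
import requests

NSX_MANAGER = "https://nsxmgr-01.corp.local"   # hypothetical NSX Manager FQDN
AUTH = ("admin", "VMware1!")                    # hypothetical credentials

resp = requests.get(
    f"{NSX_MANAGER}/api/2.0/vdn/scopes",        # transport zone (vdn scope) listing
    auth=AUTH,
    verify=False,                               # lab only: self-signed certificate
)
resp.raise_for_status()
print(resp.text)                                # XML payload describing the transport zones
```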

NSX Control Plane Components

NSX Controllers

Deployed in a vSphere cluster
• vSphere HA
• DRS with anti-affinity
• Each ESXi host runs a host agent and data-path kernel modules

Properties
• Virtual form factor (4 vCPU, 4GB RAM)
• Data plane programming
• Control plane isolation

Benefits
• Scale out
• High availability
• VXLAN with no multicast requirement
• ARP suppression

Deploying and Configuring VMware NSX

One time – deploy VMware NSX (virtual infrastructure)
• Component deployment: deploy NSX Manager, deploy NSX Controller cluster
• Preparation: host preparation, logical network preparation

Recurring – consumption (programmatic virtual network deployment)
• Logical networks and logical network/security services
• Deploy logical switches per tier
• Deploy a Distributed Logical Router or connect to an existing one
• Create bridged networks
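As an illustration of the "programmatic virtual network deployment" step, the hedged Python sketch below creates a logical switch through the NSX Manager REST API. The transport-zone (scope) ID, names, and the /api/2.0/vdn/scopes/{scopeId}/virtualwires path reflect the NSX for vSphere API as I recall it; treat them as assumptions and verify against the API guide before use.

```python
# Illustrative sketch: create a logical switch (virtual wire) via the NSX REST API.
# Manager address, credentials, scope ID, and payload details are assumptions.
import requests

NSX_MANAGER = "https://nsxmgr-01.corp.local"
AUTH = ("admin", "VMware1!")
SCOPE_ID = "vdnscope-1"   # hypothetical transport zone ID (from /api/2.0/vdn/scopes)

payload = """
<virtualWireCreateSpec>
  <name>web-tier-ls</name>
  <tenantId>tenant-1</tenantId>
</virtualWireCreateSpec>
"""

resp = requests.post(
    f"{NSX_MANAGER}/api/2.0/vdn/scopes/{SCOPE_ID}/virtualwires",
    data=payload,
    headers={"Content-Type": "application/xml"},
    auth=AUTH,
    verify=False,          # lab only
)
resp.raise_for_status()
print("New logical switch ID:", resp.text)   # NSX returns the new virtualwire ID
```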

Cross-VC NSX Logical Networks

Universal objects are configured (via the NSX UI & API) on the primary NSX Manager, and the Universal Synchronization Service (USS) replicates that configuration to the secondary NSX Managers. A Universal Controller Cluster serves all vCenter/NSX Manager pairs (A is primary; B through H are secondaries), each of which keeps its own local vCenter inventory. Universal Logical Switches, the Universal Distributed Logical Router, and the Universal DFW span all participating vCenters.

Cross-VC NSX Components & Terminology


Cross-VC NSX objects use the term Universal and include:

Universal Synchronization Service (USS)

Universal Controller Cluster (UCC)

Universal Transport Zone (UTZ)

Universal Logical Switch (ULS)

Universal Distributed Logical Router (UDLR)

Universal IP Set/MAC Set

Universal Security Group/Service/Service Group

NSX Managers have the following roles:

Standalone

Primary

Secondary

Transit


Universal Distributed Logical Routing adds:

Locale ID


Agenda – Part 1

1 vSphere Distributed Switch (Whiteboard)

2 NSX for vSphere Overview

3 Physical Network Design Considerations

4 vSphere Design Considerations for NSX

5 NSX Design Considerations


Classical Access/Agg/Core Network

VLANs carried throughout the Fabric

L2 application scope is limited to a single POD

Default gateway – HSRP/VRRP at the aggregation layer

Ideally multiple aggregation PODs, to limit the Layer 2 domain size, although not always the case

Inter POD traffic is L3 routed


Physical Network Trends

From 2- or 3-tier to spine/leaf fabrics

Density & bandwidth jump

ECMP for layer 3 (and layer 2)

Reduce network oversubscription

Wire & configure once

Uniform configurations


L3 Fabric Topologies & Design Considerations

L3 ToR designs have dynamic routing protocol between leaf and spine

BGP, OSPF or ISIS can be used

Rack advertises small set of prefixes (one per VLAN/subnet).

Equal-cost paths to the other racks' prefixes

802.1Q trunks with a small set of VLANs for VMkernel traffic

ToR provides default gateway service for each VLAN subnet

L2 Fabric designs are also available


Physical Fabric Options with NSX

Network Virtualization enables greater scale and flexibility regardless of physical network design

NSX works over any reliable IP network supporting a 1600-byte MTU; these are the only requirements

Most customers are still using hierarchical networks – which work great with NSX

NSX capabilities are independent of network topology

NSX enables both choice and protection of existing investments


Agenda – Part 1

1 vSphere Distributed Switch (Whiteboard)

2 NSX for vSphere Overview

3 Physical Network Design Considerations

4 vSphere Design Considerations for NSX

5 NSX Design Considerations

vSphere Design Considerations for NSX


vSphere Cluster Design – Collapsed Edge/Infra Racks

• vCenter 1 and vCenter 2, each scaled to its maximum supported number of VMs
• Infrastructure clusters (Edge, Storage, vCenter and Cloud Management System) are collapsed in the infra racks; compute clusters host the workloads
• Spine/leaf fabric (L3 at the spine, L2 at the leaf) with the edge leaf providing L3 to the DC fabric and L2 to external networks (WAN/Internet); L2 VLANs are available for bridging
• Cluster location is determined by connectivity requirements

vSphere Cluster Design – Separated Edge/Infra

• vCenter 1 and vCenter 2, each scaled to its maximum supported number of VMs
• Infrastructure clusters (Storage, vCenter and Cloud Management) are separated from the edge clusters (Logical Router Control VMs and NSX Edges); compute clusters host the workloads
• The edge leaf provides L3 to the DC fabric and L2 to external networks (WAN/Internet)
• Cluster location is determined by connectivity requirements

Management and Edge Cluster Requirements

Management Cluster
• L2 required for management workloads such as vCenter Server, NSX Controllers, NSX Manager, and IP storage, which use VLAN-backed networks

Edge Cluster
• L2 required for external 802.1Q VLANs and the Edge default gateway
• Needed because Edge HA uses GARP to announce the new MAC in the event of a failover
• VLANs for L2 and L3 NSX services, alongside the VMkernel VLANs on the routed DC fabric

L2 Fabric - Network Addressing and VLANs Definition Considerations

Compute Racks – IP Address Allocations and VLANs (for an L2 fabric, Y identifies the POD number):
• Management: VLAN 66, 10.66.Y.0/24
• vMotion: VLAN 77, 10.77.Y.0/24
• VXLAN: VLAN 88, 10.88.Y.0/24
• Storage: VLAN 99, 10.99.Y.0/24

The VMkernel VLAN/IP subnet scope is per POD (for example, Compute Cluster A and Compute Cluster B with 32 hosts each in POD A and POD B), while the VXLAN transport zone scope extends across ALL PODs/clusters.

L3 Fabric - Network Addressing and VLANs Definition Considerations

Compute Racks – IP Address Allocations and VLANs (for an L3 fabric, R identifies the rack number):
• Management: VLAN 66, 10.66.R.x/26
• vMotion: VLAN 77, 10.77.R.x/26
• VXLAN: VLAN 88, 10.88.R.x/26
• Storage: VLAN 99, 10.99.R.x/26

The VMkernel VLAN/IP subnet scope is per rack (for example, Compute Cluster A and Compute Cluster B with 32 hosts each), while the VXLAN transport zone scope extends across ALL racks/clusters.

VMkernel Networking – Span of VLANs

• L3 ToR switch: SVI 66: 10.66.1.1/26, SVI 77: 10.77.1.1/26, SVI 88: 10.88.1.1/26, SVI 99: 10.99.1.1/26, with routed uplinks (ECMP) toward the fabric
• VLAN trunk (802.1Q) carrying VLANs 66, 77, 88, and 99 down to the vSphere host (ESXi)
• Host VMkernel interfaces: Mgmt 10.66.1.25/26 (DGW 10.66.1.1), vMotion 10.77.1.25/26 (GW 10.77.1.1), VXLAN 10.88.1.25/26 (DGW 10.88.1.1), Storage 10.99.1.25/26 (GW 10.99.1.1)
• The span of these VLANs is limited to the rack

VMkernel Network Addressing

To keep static routes manageable as the fabric scales, larger address blocks can be allocated for the VMkernel functions (/16 as an example):

10.66.0.0/16 for Management

10.77.0.0/16 for vMotion

10.88.0.0/16 for VXLAN

10.99.0.0/16 for Storage


Dynamic routing protocols (OSPF, BGP) used to advertise to the rest of the fabric

Scalability and predictable network addressing, based on number of ESXi hosts per rack or cluster

Reduces VLAN usage by reusing VLANs within a rack (L3) or POD (L2)

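A quick way to see how the /16-per-function scheme lines up with the per-rack /26 allocations is the Python sketch below; the block sizes and rack numbering come from the tables above, while the helper itself is only an illustration.

```python
# Illustration: derive the per-rack /26 VMkernel subnets from the addressing tables above,
# where the third octet is the rack number (R) and the per-function /16 is the summary block.
import ipaddress

FUNCTIONS = {"Management": 66, "vMotion": 77, "VXLAN": 88, "Storage": 99}

def rack_subnet(second_octet: int, rack: int) -> ipaddress.IPv4Network:
    """10.<function>.R.0/26 as used in the L3 fabric table (R = rack number)."""
    return ipaddress.ip_network(f"10.{second_octet}.{rack}.0/26")

rack = 1
for function, octet in FUNCTIONS.items():
    subnet = rack_subnet(octet, rack)
    gateway = subnet.network_address + 1      # .1 is the ToR SVI / default gateway
    summary = ipaddress.ip_network(f"10.{octet}.0.0/16")
    assert subnet.subnet_of(summary)          # each /26 rolls up into the advertised /16
    print(f"{function:10s} rack {rack}: {subnet}  gateway {gateway}  summary {summary}")
```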

VMkernel Networking

Multi instance TCP/IP Stack

Introduced with vSphere 5.5 and leveraged by:

VXLAN – the NSX vSwitch transport network


Separate routing table, ARP table and default gateway per stack instance

Provides increased isolation and reservation of networking resources

Enables VXLAN VTEPs to use a gateway independent from the default TCP/IP stack

Management, vMotion, FT, NFS, iSCSI leverage the default TCP/IP stack in 5.5


VMkernel Networking

Static Routing

VMkernel VLANs do not extend beyond the rack in an L3 fabric design or beyond the cluster with an L2 fabric, therefore static routes are required for Management, Storage and vMotion Traffic

Host Profiles reduce the overhead of managing static routes and ensure persistence

Follow the RPQ (Request for Product Qualification) process for official support of routed vMotion. Routing of IP Storage traffic also has some caveats

A number of customers have been through the RPQ process and use routed vMotion with full support from VMware today

Future enhancements will simplify ESXi host routing and enable greater support for L3 network topologies

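Because those VMkernel VLANs stay within the rack, each host needs static routes toward the other racks' management, storage, and vMotion subnets (the VXLAN VTEPs use their own gateway on the dedicated TCP/IP stack). The Python sketch below just prints candidate esxcli commands for one host; the subnets, gateways, and exact esxcli syntax are assumptions to verify against your environment, and in practice Host Profiles would push the equivalent configuration.

```python
# Illustration: generate per-host static routes toward the /16 summary blocks,
# pointing at the local rack's ToR SVI for each function. Verify the esxcli
# syntax and addressing against your own environment before use.

RACK = 1
ROUTES = {
    # function: (summary block, local gateway = ToR SVI in this rack)
    "Management": ("10.66.0.0/16", f"10.66.{RACK}.1"),
    "vMotion":    ("10.77.0.0/16", f"10.77.{RACK}.1"),
    "Storage":    ("10.99.0.0/16", f"10.99.{RACK}.1"),
    # VXLAN (10.88.0.0/16) is omitted: VTEPs have their own gateway on the
    # dedicated VXLAN TCP/IP stack, so no static route is generated here.
}

for function, (network, gateway) in ROUTES.items():
    print(f"# {function}")
    print(f"esxcli network ip route ipv4 add --network {network} --gateway {gateway}")
```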

VMkernel Networking

VMkernel Teaming Recommendations

LACP (802.3ad) provides optimal use of available bandwidth and quick convergence, but does require physical network configuration

Load Based Teaming is also a good option for VMkernel traffic where there is a desire to simplify configuration and reduce dependencies on the physical network, while still effectively using multiple uplinks

Explicit Failover allows for predictable traffic flows and manual balancing of VMkernel traffic

Refer to VDS best practices White Paper for more details on common configurations:

http://www.vmware.com/files/pdf/techpaper/vsphere-distributed-switch-best-practices.pdf

2x 10GbE network adapters per server is most common

Network partitioning technologies tend to increase complexity

Overlay Networks are used for VMs

Use VLANs for VMkernel interfaces to avoid circular dependencies

NSX introduces support for multiple VTEPs per host with VXLAN


Recap: vCenter – Scale Boundaries

• vCenter Server: 10,000 powered-on VMs, 1,000 ESXi hosts, 128 VDS
• Datacenter object: max. 500 hosts
• Cluster: max. 32 hosts
• DRS-based vMotion occurs within a cluster; manual vMotion can move VMs across clusters and VDS boundaries (VDS 1, VDS 2)

NSX for vSphere – Scale Boundaries

• 1:1 mapping of vCenter Server to the NSX cluster (NSX Manager and its Controller Cluster)
• The cloud management system consumes the NSX API
• The logical network span extends across all prepared clusters and VDSs in the NSX domain, so both DRS-based and manual vMotion remain within the span of the logical networks

vSphere Cluster Design for NSX

There are two common models for cluster design with NSX for vSphere:

Option 1 with a single vCenter Server attached to Management, Edge and Compute Clusters

This allows NSX Controllers to be deployed into the Management Cluster

Reduces vCenter Server licensing requirements

More common in POCs or small environments

Management Cluster: vCenter Server A, NSX Manager, vCAC, NSX Controller Cluster. Edge Cluster: NSX Edges and DLR Control VMs. Compute Clusters A through Z: workload VMs.

vSphere Cluster Design for NSX

Option 2

A common VMware services best practice to have the Management Cluster managed by a dedicated vCenter Server

In this case NSX Manager would be attached to the vCenter Server managing the Edge and Compute Clusters

NSX Controllers must be deployed into the same vCenter Server NSX Manager is attached to, therefore the Controllers are also deployed into the Edge Cluster

Management Cluster (managed by vCenter Server B): vCenter Server A, vCAC, NSX Manager. Edge Cluster (managed by vCenter Server A): NSX Controller Cluster, NSX Edges and DLR Control VMs. Compute Clusters A through Z (managed by vCenter Server A): workload VMs.

Agenda – Part 1

1 vSphere Distributed Switch (Whiteboard)

2 NSX for vSphere Overview

3 Physical Network Design Considerations

4 vSphere Design Considerations for NSX

5 NSX Design Considerations (Multiple Sections)

NSX Manager and Controller Design Considerations


NSX Manager

NSX Manager is deployed as a virtual appliance

4 vCPU, 12 GB of RAM per node

Consider reserving memory for VC to ensure good Web Client performance

Resiliency of NSX Manager provided by vSphere HA

Catastrophic failure of NSX Manager is rare, however periodic backup is recommended to restore to the last known configuration

If NSX Manager is unavailable, existing data plane connectivity is not impacted


NSX Controllers

Controller nodes are also deployed as virtual appliances

4 vCPU, 4GB of RAM per controller node

CPU Reservation of 2048 MHz

No memory reservation required

Modifying settings is not supported

Can be deployed in the Mgmt or Edge clusters

Cluster size of 3 Controller nodes is the only supported configuration

Controller majority is required for having a functional controller cluster

Controller cluster node roles: API Provider, Persistence Server, Logical Manager, Switch Manager, Directory Server

The existing data plane is maintained even under complete controller cluster failure

By default the DRS and anti-affinity rules are not enforced for controller deployment

The recommendation is to manually enable DRS and anti-affinity rules

A minimum of 3 hosts is required to enforce an anti-affinity rule keeping the Controller VMs on separate hosts


NSX Controllers

NSX Controllers must be deployed into same vCenter Server that NSX Manager is attached to

Controller password is defined during deployment of the first node and is consistent across all nodes

Controllers require connectivity to NSX Manager and vmk0 (Management VMkernel interface) on all ESXi hosts participating in NSX Logical Networks

NSX Control Plane Protocol operates on TCP port 1234 – connections are initiated from ESXi hosts to Controllers

Internal API on TCP port 443 – NSX Manager is the only consumer

Controller interaction is via CLI, while configuration operations are also available through NSX for vSphere API

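As a sanity check of those control-plane paths, the small Python sketch below probes TCP reachability from a machine on the management network to the controller IPs on port 1234 (and to NSX Manager on 443). The addresses are placeholders; note the real control-plane sessions are initiated from the ESXi hosts, so this only verifies basic filtering and routing on the management network.

```python
# Illustration: verify the management network allows the NSX control-plane ports.
# IP addresses are placeholders; run from a machine on the management network.
import socket

TARGETS = [
    ("192.168.110.31", 1234),  # NSX Controller node - control plane protocol
    ("192.168.110.32", 1234),
    ("192.168.110.33", 1234),
    ("192.168.110.15", 443),   # NSX Manager - internal API / REST
]

for host, port in TARGETS:
    try:
        with socket.create_connection((host, port), timeout=3):
            print(f"{host}:{port} reachable")
    except OSError as err:
        print(f"{host}:{port} NOT reachable ({err})")
```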

NSX Control Plane Security

NSX Control Plane communication occurs over the management network

The Control Plane is protected by:

Certificate based authentication

SSL

NSX Manager generates self-signed certificates for each of the ESXi Hosts and Controllers

These certificates are pushed to the Controller and ESXi hosts over secure channels

Mutual authentication occurs by verifying these certificates


NSX Control Plane Security

Certificate generation happens on NSX Manager (backed by the NSX Manager database). The certificates are then distributed to the Controller Cluster as part of controller (OVF) deployment, and to the user world (UW) agents and VTEPs on the ESXi hosts in each vSphere cluster via the REST API and message bus. The resulting SSL sessions between hosts, controllers, and NSX Manager are mutually authenticated using these certificates.

NSX Management Plane Security

NSX Management Plane communication also occurs over the management network

The following secure protocols are used:

REST API (HTTPS)

VC APIs (HTTPS)

Message bus (AMQP)

Fallback to VIX (SSL)

Designing VXLAN Logical Switching and vDS


Design Considerations – VDS and Transport Zone

A single VXLAN transport zone spans three clusters: Compute Cluster 1 and Compute Cluster N share the Compute VDS (host VTEPs such as 192.168.230.100/101 and 192.168.240.100/101), while the Edge Cluster uses a separate Edge VDS (host VTEPs such as 192.168.220.100/101). The Management Cluster hosts vCenter Server, NSX Manager, and the Controller Cluster, and the NSX Edges run in the Edge Cluster.

VDS Uplink Connectivity Options in NSX

NSX supports multiple teaming policies for VXLAN traffic

NSX for vSphere also supports multiple VTEPs per ESXi host (to load balance VXLAN traffic across available uplinks)

Teaming and Failover Mode – NSX Support / Multi-VTEP Support
• Route based on Originating Port: supported / supported
• Route based on Source MAC Hash: supported / supported
• LACP: supported / not supported
• Route based on IP Hash (Static EtherChannel): supported / not supported
• Explicit Failover Order: supported / not supported
• Route based on Physical NIC Load (LBT): not supported / not supported

Uplink Connectivity Recommendation for VXLAN Traffic

Teaming and Failover mode recommendation for VXLAN traffic depends on:

VXLAN bandwidth requirements per ESXi host

NSX Administrator’s familiarity with Networking configuration

Recommended Teaming and Failover Modes and their benefits:

Explicit Failover
• Simplicity of configuration and troubleshooting
• A single uplink can handle the VXLAN traffic requirements (current-generation blade servers can do 20+ Gbps bi-directional traffic, e.g. UCS B200 M3)
• Separate all infrastructure traffic to one uplink and all VXLAN traffic to the other

LACP
• Standards based, multiple active uplinks for VXLAN traffic
• More advanced configuration compared to Explicit Failover
• Dependency on MLAG/vPC support on physical switches (for ToR redundancy)

Load Balance SRC ID
• Use where LACP isn't available or bandwidth requirements for VXLAN traffic exceed a single uplink
• Recommended for the Edge cluster to avoid the complexity/support implications of routing over LACP
• Route based on Src-ID with Multi-VTEP works well, but it is a more advanced configuration

Network Adapter Offloads

VXLAN TCP Segmentation Offload (VXLAN TSO)
• The operating system sends large TCP packets (VXLAN encapsulated) to the NIC
• The NIC segments the packets according to the physical MTU

Receive Side Scaling (RSS)
• The NIC distributes packets among queues
• A unique receive thread per queue drives multiple CPUs

Both are important features for NSX performance.

VMware internal slide do not share

NIC Drivers Support for VXLAN TSO and RSS

Intel (ixgbe) – 82599, X540, I350
• vSphere 5.5 inbox driver 3.7.13.7.14iov NAPI: VXLAN TSO Yes, RSS Yes
• Async driver 3.21.4: VXLAN TSO Yes, RSS Yes

Broadcom (bnx2x) – 57810, 57711
• vSphere 5.5 inbox driver 1.72.56.v55.2: VXLAN TSO No, RSS No
• Async driver 1.78.58.v55.3: VXLAN TSO Yes, RSS Yes

Mellanox (mlx4_en) – Connect X-2, Connect X3, Connect X3 Pro
• vSphere 5.5 inbox driver 1.9.7.0: VXLAN TSO No, RSS Yes
• Mellanox is planning an async release to support VXLAN TSO for Connect X3 Pro

Cisco VIC (enic) – VIC 12xx
• vSphere 5.5 inbox driver 2.1.2.50: VXLAN TSO No, RSS No
• Async driver 2.1.2.59: VXLAN TSO No, RSS No

Emulex (elxnet) – BE2, BE3, Skyhawk
• BE2/BE3: VXLAN TSO No, RSS No; Skyhawk: VXLAN TSO Yes, RSS No
• Emulex is planning an async release to support RSS

VXLAN Design Recommendations

Unicast Mode is appropriate for small deployments, or L3 Fabric networks where the number of hosts in a segment is limited

Hybrid Mode is generally recommended for Production deployments and particularly for L2 physical network topologies

Hybrid also helps when there is multicast traffic sourced from VMs

Validate connectivity and MTU on transport network before moving on to L3 and above

Not all network adapters are created equal for VXLAN

Don’t overlap Segment IDs across NSX Domains

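The last recommendation above (do not overlap Segment IDs across NSX domains) is easy to enforce with a simple check like the Python sketch below; the example pool ranges are made up, and the starting VNI of 5000 comes from the earlier VXLAN terminology slide.

```python
# Illustration: check that the VNI (segment ID) pools planned for separate
# NSX domains do not overlap. Example ranges are hypothetical.

segment_pools = {
    "nsx-domain-a": range(5000, 5999 + 1),
    "nsx-domain-b": range(6000, 6999 + 1),
    "nsx-domain-c": range(7000, 7999 + 1),
}

def overlapping_pools(pools: dict[str, range]) -> list[tuple[str, str]]:
    """Return pairs of domains whose segment ID pools intersect."""
    names = list(pools)
    clashes = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if set(pools[a]) & set(pools[b]):
                clashes.append((a, b))
    return clashes

conflicts = overlapping_pools(segment_pools)
print("Overlapping segment ID pools:", conflicts or "none")
```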
L2 Bridging - VXLAN to VLAN


Overlay to VLAN Gateway Functionality

The Overlay to VLAN gateway allows communication between virtual and physical world

The VM's L2 payload travels over the NSX virtual network in VXLAN tunnels; the VXLAN-to-VLAN gateway bridges it onto the VLAN-backed physical network where the physical workload resides.

Use Cases: Migration

L2 as well as L3

Virtual to virtual, physical to virtual

Temporary, bandwidth not critical


Use Cases: Integration of non-Virtualized Workloads

Typically necessary for integrating a non-virtualized appliance

A gateway takes care of the on ramp/off ramp


Software Layer 2 Gateway Form Factor

Native capability of NSX

High performance VXLAN to VLAN gateway in the hypervisor kernel

Scale-up

x86 performance curve

Flexibility & Operations

Rich set of stateful services

Multi-tier logical routing

Advanced monitoring

Encapsulation & encryption offloads

Scale-out as you grow

Single gateway can handle all P/V traffic

Then additional gateways can be introduced

[Figure: software L2 gateways scaling out across VLAN 10, VLAN 20, and VLAN 30]


Hardware Layer 2 Gateway Form Factor

Some partner switches integrate with NSX and provide a VXLAN to VLAN gateway in hardware

Main benefits of this form factor:

Bandwidth

Scale

Low latency

Also allows extending VXLAN to areas that cannot host a Software Gateway

[Figure: with a Software Gateway the bridged VLAN must be L2-extended between the virtualized compute racks and the database racks; with a Hardware Gateway the fabric stays L3 end-to-end with VXLAN]


L2 Connectivity of Physical Workloads

NSX Bridging instance

[Figure: NSX bridging instance connecting a VXLAN segment to a VLAN so that physical workloads in the same subnet can be reached]

Physical workloads in same subnet (L2)

Bridging function performed in the kernel of the ESXi host

10+ Gbps performance

1:1 mapping between VXLAN and VLAN

Primary use cases

Migrate workloads without changing IP addressing (P2V or V2V)

Extend Logical Networks to physical devices

Allow Logical Networks to leverage a physical gateway

Access existing physical network and security resources


Logical to Physical – NSX L2 Bridging

Migrate workloads (P2V or V2V)

Extend Logical Networks to Physical

Leverage Network/Security Services on VLAN-backed networks

[Figure: active and standby DLR Control VMs in the Edge Cluster; the bridging instance connects VXLAN 5001 in the Compute Cluster to VLAN 100, reaching a physical workload and a physical gateway]


NSX L2 Bridging Design Considerations

Usage of ESXi dvUplinks

[Figure: bridged traffic between VXLAN 5000 and VLAN 10 enters and leaves the host on the dvUplink used for VXLAN traffic, while other traffic types can use other dvUplinks]

Bridged traffic enters and leaves the host via the dvUplink that is used for VXLAN traffic

The VDS teaming/failover policy for the VLAN is not used for bridged traffic

Ensure the bridged VLAN (VLAN 10 here) is carried on the uplink used for VXLAN traffic, and that the physical switch port allows traffic to/from that VLAN (a simple check is sketched after this list)

More than 10 Gbps of bridged traffic can be achieved by bundling two 10G physical interfaces together
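A minimal sketch, assuming you have already gathered (from your own inventory tooling, not an NSX API) the dvUplink used for VXLAN and the allowed-VLAN list of its physical switch port; it simply flags bridged VLANs that are not trunked on that uplink.

```python
# Flag bridged VLANs that are not allowed on the uplink carrying VXLAN traffic (hypothetical data).
def missing_bridged_vlans(vxlan_uplink: str,
                          allowed_vlans_by_uplink: dict[str, set[int]],
                          bridged_vlans: set[int]) -> list[str]:
    allowed = allowed_vlans_by_uplink.get(vxlan_uplink, set())
    return [f"VLAN {v} is not allowed on {vxlan_uplink}" for v in sorted(bridged_vlans - allowed)]

# Hypothetical data: VXLAN traffic uses vmnic2; VLAN 10 is bridged to VXLAN 5000
problems = missing_bridged_vlans("vmnic2", {"vmnic2": {10, 20, 100}}, {10})
print(problems or "all bridged VLANs are carried on the VXLAN uplink")
```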


VXLAN to VLAN SW L2 Bridging – Considerations

Multiple Bridge Instances vs. separate Logical Routers

Bridge instances are limited to the throughput of a single ESXi host

Bridged traffic enters and leaves the host via the dvUplink used for VXLAN traffic; the VDS teaming/failover policy is not used

Interoperability (a configuration-check sketch follows this list)

The VLAN dvPortgroup and the VXLAN logical switch must be available on the same VDS

Distributed Logical Routing cannot be used on a logical switch that is bridged

Bridging a VLAN ID of 0 is not supported

Scalability

L2 bridging provides line-rate throughput

Latency and CPU usage are comparable to standard VXLAN

Loop prevention

Only one bridge is active for a given VXLAN-VLAN pair

Packets received via a different uplink are detected and filtered by matching the MAC address
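A minimal sketch, with illustrative data structures (not the NSX API), that checks a planned set of bridges against the constraints above: no VLAN 0, logical switch and VLAN portgroup on the same VDS, and at most one bridging instance per logical switch.

```python
from dataclasses import dataclass

@dataclass
class Bridge:
    segment_id: int   # VXLAN logical switch segment ID
    vlan_id: int      # VLAN ID of the dvPortgroup
    vds: str          # VDS hosting both the logical switch and the VLAN dvPortgroup

def validate(bridges: list[Bridge], logical_switch_vds: dict[int, str]) -> list[str]:
    """Return a list of constraint violations for a planned bridging configuration."""
    errors, bridged_segments = [], set()
    for b in bridges:
        if b.vlan_id == 0:
            errors.append(f"VXLAN {b.segment_id}: bridging VLAN 0 is not supported")
        if logical_switch_vds.get(b.segment_id) != b.vds:
            errors.append(f"VXLAN {b.segment_id}: logical switch and VLAN portgroup are on different VDS")
        if b.segment_id in bridged_segments:
            errors.append(f"VXLAN {b.segment_id}: only one bridging instance per logical switch")
        bridged_segments.add(b.segment_id)
    return errors

print(validate([Bridge(5000, 10, "Compute-VDS")], {5000: "Compute-VDS"}))   # []
```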


NSX L2 Bridging Design Considerations

Routing + Bridging Use case - Not Supported

[Figure: DLR Instance 1 routes between VXLAN 5001 (App VMs) and VXLAN 5002 (Web VMs); Bridge Instance 1 bridges VXLAN 5002 to VLAN 10, where the physical workload sits, so VXLAN 5002 and VLAN 10 form the same Layer 2 domain; combining distributed routing and bridging on the same logical switch in this way is not supported]

NSX L2 Bridging Gateway Design Considerations

Routing + Bridging Use case – Supported

[Figure: an NSX Edge routes between VXLAN 5001 (App VMs) and VXLAN 5002 (Web VMs); Bridge Instance 1 bridges VXLAN 5002 to VLAN 10, where the physical workload sits, so VXLAN 5002 and VLAN 10 form the same Layer 2 domain]

NSX Layer 2 Gateway Design Considerations

Single Instance per VXLAN/VLAN Pair

The current implementation only allows a single bridging instance per logical switch

Bandwidth is limited by the single bridging instance

The bridged VLAN must be extended between racks to reach physical devices spread across the racks

[Figure: a single software VTEP bridging instance requires the VLAN (VLAN 10) to be extended between racks to reach the physical servers]

A scale-out model uses multiple bridging instances, each active for a separate VXLAN/VLAN pair

This may allow VLANs to be confined to a single rack if the physical servers in a VLAN are contained in that rack

The rest of the network can then be L3 (VXLAN) only

[Figure: Bridging Instance 1 (VXLAN 5000 to VLAN 10) and Bridging Instance 2 (VXLAN 5001 to VLAN 20) serve physical servers in their respective racks over an L3 (VXLAN) only network]


VXLAN to VLAN L2 Bridging – Summary

NSX-v SW L2 Bridging Instance vs HW VTEPs

Always lead with NSX-v native software bridging; its performance is sufficient for nearly all use cases and it is hardware agnostic

Some customers believe they need a HW L2 VTEP when they do not, often due to network vendor positioning. Find out what their use cases are first and whether L2 bridging is actually a requirement

The following are potential use cases for HW L2 VTEP:

Low-latency traffic

Very large volumes of physical servers

High amounts of guest-initiated storage traffic

Data-plane-only, multicast-based options are available today

Validated on Nexus 9000 and Arista 7150, expected to work on all capable HW

OVSDB support with NSX-v planned for 2015


NSX-v and HW VTEPs Integration

Deployment Considerations Pre NSX 6.2.2

Mandates deploying multicast in the network infrastructure to handle the delivery of VXLAN-encapsulated multi-destination traffic

Broadcast, Unknown Unicast, Multicast (BUM) traffic

Multicast mode is only needed for the VXLAN segments that are bridged to VLANs

NSX-v has no direct control over the hardware VTEP devices

No control-plane communication with the controller, nor orchestration/automation capabilities (manual configuration is required for HW VTEPs)

Note: full control-plane/data-plane integration only available with NSX-MH

End-to-end loop exposure

No capabilities on HW VTEPs to detect an L2 loop caused by a physical L2 backdoor connection

Unsupported Coexistence of HW and SW VTEPs

Can only connect bare-metal servers (or VLAN attached VMs) to a pair of HW VTEP ToRs


Let’s Compare:

[Figure: side-by-side comparison of a hardware vendor virtualization solution (VLAN-backed hypervisors) and the NSX virtualization model running on vSphere]


VXLAN Hardware Encapsulation Benefits

NSX virtualization vs. the HW Vendor Model

[Figure: in the NSX model the vSwitch in the hypervisor encapsulates the L2 payload in VXLAN; in the HW vendor model the vSwitch sends a VLAN-tagged L2 payload and a HW gateway performs the VXLAN encapsulation; vSwitch performance is the same in both cases]

No performance* benefit for the HW Vendor Model:

vSwitch performance is independent of the output encapsulation

HW switches and HW gateways have similar performance, too

* Here “performance” is defined as packets per second and throughput


HW Gateways Make Sense for non-Virtualized Payloads

HW GW: performance

SW GW: feature rich, no HW requirement

[Figure: a hardware gateway and a software gateway both bridge the VXLAN-encapsulated L2 payload to the physical L2 payload; the HW gateway is positioned for raw performance, the SW gateway for features]

This is the use case we’re advocating for HW Gateways with NSX


NSX Bridging Instance vs. Hardware Gateway

A single bridging instance per Logical Switch

Bandwidth limited by single bridging instance

L2 network must be extended to reach all the physical devices

[Figure: with a single bridging instance, VLAN 10 must be extended between racks to reach all the physical devices]

Several Hardware Gateways can be deployed at several locations simultaneously

With Hardware Gateways, VLANs can be kept local to a rack and don’t need to be extended

[Figure: with hardware gateways the fabric is L3 (VXLAN) only between racks; the non-virtualized devices in the same L2 segment stay on VLANs (VLAN 10, VLAN 20) that remain local to their racks]


Logical Routing

Distributed and Centralized

NSX Logical Routing Components

Distributed Logical Router (DLR)

Hypervisor kernel modules (VIBs) on each ESXi host provide the data path: the DLR kernel module with its LIFs

The DLR Control VM provides the control plane

Distributed logical routing is optimized for east-west (E-W) traffic patterns

NSX Edge

Centralized routing optimized for north-south (N-S) routing


NSX Logical Routing : Components Interaction

1. A new Distributed Logical Router instance is created on NSX Manager with dynamic routing configured

2. The Controller pushes the new logical router configuration, including LIFs, to the ESXi hosts

3. OSPF/BGP peering is established between the NSX Edge and the logical router Control VM

4. Routes learnt from the NSX Edge are pushed to the Controller for distribution

5. The Controller sends the route updates to all ESXi hosts

6. Routing kernel modules on the hosts handle the data path traffic (a conceptual sketch of this flow follows the figure note)

[Figure: the NSX Edge (192.168.10.1, acting as next-hop router) connects the external network (VLAN) to the VXLAN transit segment and peers via OSPF/BGP with the DLR Control VM; the control-plane peering (step 3) uses 192.168.10.3 and the data path (step 6) uses 192.168.10.2; the Controller Cluster distributes routes for 172.16.10.0/24, 172.16.20.0/24, and 172.16.30.0/24 to the DLR kernel modules on the hosts]
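A conceptual sketch only: it models the ordered control-plane flow described above with plain Python objects. Class and method names are illustrative, not the NSX API, and the addresses are the example values from the figure.

```python
class ControllerCluster:
    def __init__(self):
        self.hosts = []

    def register_host(self, host):
        self.hosts.append(host)

    def push_config(self, lifs):              # step 2: LIF configuration to every host
        for h in self.hosts:
            h.lifs = list(lifs)

    def distribute_routes(self, routes):      # steps 4-5: route updates to every host
        for h in self.hosts:
            h.routes.update(routes)


class EsxiHost:
    def __init__(self, name):
        self.name, self.lifs, self.routes = name, [], {}

    def forward(self, prefix):                # step 6: DLR kernel module handles the data path
        return self.routes.get(prefix, "no route")


# Step 1: DLR instance defined with its LIFs; step 3: the Control VM peers with the Edge
# over OSPF/BGP and learns a default route pointing at the Edge (192.168.10.1).
controller = ControllerCluster()
hosts = [EsxiHost("esxi-01"), EsxiHost("esxi-02")]
for h in hosts:
    controller.register_host(h)

controller.push_config(["172.16.10.1/24", "172.16.20.1/24", "172.16.30.1/24"])
controller.distribute_routes({"0.0.0.0/0": "192.168.10.1"})
print(hosts[0].forward("0.0.0.0/0"))          # 192.168.10.1
```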


Logical Routing

Logical Topologies

DLR – Design Considerations (Multiple VDS)

[Figure: Compute A and Compute B clusters hosting web VMs; a Management and Edge Cluster with vCenter Server, NSX Manager, the Controller Cluster, and NSX Edges; a VXLAN transport zone spanning the three clusters; Compute VDS]