
VMware vSphere 6.0
Knowledge Transfer Kit
Architecture Overview

© 2015 VMware, Inc. All rights reserved.

Agenda
Architecture Overview
VMware ESXi
Virtual machines
VMware vCenter Server
New Platform Services Controller
Recommendations
VMware vSphere vMotion
Availability
VMware vSphere High Availability
VMware vSphere Fault Tolerance
VMware vSphere Distributed Resource Scheduler
Content Library
VMware Certificate Authority (CA)
Storage
iSCSI Storage Architecture
NFS Storage Architecture
Fibre Channel Architecture
Other Storage Architectural Concepts
Networking

Architecture Overview

High-Level VMware vSphere Architectural Overview


The diagram shows VMware vCenter Server managing VMware vSphere, which delivers two layers of services on top of shared physical resources:
Application services
Availability: VMware vSphere vMotion, vSphere Storage vMotion, vSphere High Availability, vSphere FT, VMware Data Recovery
Scalability: DRS and DPM, Hot Add, Overcommitment, Content Library
Infrastructure services, provided by clusters of ESXi hosts
Storage: vSphere VMFS, VMware Virtual Volumes, VMware Virtual SAN, Thin Provisioning, vSphere Storage I/O Control
Network: Standard vSwitch, Distributed vSwitch, VMware NSX, vSphere Network I/O Control

How Does This Fit With the Software-Defined Data Center (SDDC)?
vSphere is the foundation of the SDDC; the diagram maps VMware products onto its layers:
Application services: VMware vRealize Automation provides application blueprinting, deployment standardization, self-service app development, and cloud app publishing
Infrastructure services: vRealize Automation provides catalogs and templates, a self-service user portal, standardization and automation, low admin overhead, and cloud readiness
SDDC foundation:
Core virtualization with vSphere
Monitoring, performance, and capacity with vRealize Operations Manager, vRealize Log Insight, Infrastructure Navigator, and Hyperic
Orchestration with vRealize Orchestrator and its workflow library
Virtualization of physical assets: software-defined storage with VMware Virtual SAN and software-defined networking with VMware NSX
Compliance with vRealize Configuration Manager
BCDR with SRM, VR, and vDPA
Hybrid cloud with VMware vCloud Connector and financial management with VMware vRealize Business

VMware ESXi

ESXi 6.0
ESXi is the bare-metal VMware vSphere hypervisor
ESXi installs directly onto the physical server, enabling direct access to all server resources
ESXi is in control of all CPU, memory, network, and storage resources
This allows virtual machines to run at near-native performance, unlike hosted hypervisors
ESXi 6.0 allows:
Utilization of up to 480 physical CPUs per host
Utilization of up to 12 TB of RAM per host
Deployment of up to 2048 virtual machines per host

ESXi Architecture
The VMkernel is the core of the ESXi host, providing network and storage services to the virtual machines
Management is agentless: CLI commands for configuration and support, agentless systems management, and agentless hardware monitoring (through the Common Information Model, CIM) all communicate through the VMware management framework
A local support console (the ESXi Shell) is available on the host

Virtual Machines
Virtual Machine
The software computer and consumer of the resources that ESXi is in charge of
VMs are containers that can run almost any operating system and application
A segregated environment that does not cross boundaries unless via the network or otherwise permitted through SDK access
Each VM has access to its own virtual hardware: CPU, RAM, keyboard, mouse, disk, network and video cards, SCSI controller, and CD/DVD
VMs generally do not realize that they are virtualized

Virtual Machine Architecture
Virtual machines consist of files stored on a vSphere VMFS or NFS datastore:
Configuration file (.vmx)
Swap files (.vswp)
BIOS file (.nvram)
Log files (.log)
Template file (.vmtx)
Raw device map file (<VM_name>-rdm.vmdk)
Disk descriptor file (.vmdk)
Disk data file (<VM_name>-flat.vmdk)
Suspend state file (.vmss)
Snapshot data file (.vmsd)
Snapshot state file (.vmsn)
Snapshot disk file (<VM_name>-delta.vmdk)
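To make the file list concrete, here is a minimal sketch that walks a VM's file layout through the vSphere API with pyVmomi; the vCenter address, credentials, and VM name are placeholders, not values from this kit.

    # Sketch: list the files that back a virtual machine using pyVmomi (vSphere Python SDK).
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="password", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "web01")        # placeholder VM name
    for f in vm.layoutEx.file:                                   # .vmx, .vmdk, -flat.vmdk, .log, ...
        print(f.type, f.name, f.size)
    Disconnect(si)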

VMware vCenter Server

VMware vCenter 6.0
vCenter Server is the management platform for vSphere environments
Provides much of the feature set that comes with vSphere, such as vSphere High Availability
Also provides SDK access into the environment for solutions such as VMware vRealize Automation
vCenter Server is available in two flavors:
vCenter Server for Windows
vCenter Server Appliance
In vSphere 6.0, both versions offer feature parity
A single vCenter Server 6.0 can manage:
1,000 hosts
10,000 virtual machines
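As a small illustration of the SDK access mentioned above, the following pyVmomi sketch counts the hosts and virtual machines visible to one vCenter Server; the connection details are placeholders.

    # Sketch: count inventory objects under a single vCenter Server via the vSphere API.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="password", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()
    for vimtype, label in ((vim.HostSystem, "hosts"), (vim.VirtualMachine, "virtual machines")):
        view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
        print(len(view.view), label)      # compare against the 1,000 host / 10,000 VM maximums
        view.DestroyView()
    Disconnect(si)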

vCenter 6.0 Architecture
In vCenter 6.0, the architecture has changed dramatically
All services are provided from either a Platform Services Controller or a vCenter Server instance
Provided by the Platform Services Controller:
VMware vCenter Single Sign-On
License service
Lookup service
VMware Directory Services
VMware Certificate Authority
Provided by the vCenter Server service:
vCenter Server
VMware vSphere Web Client
Inventory Service
VMware vSphere Auto Deploy
VMware vSphere ESXi Dump Collector
vSphere Syslog Collector on Windows and vSphere Syslog Service for the VMware vCenter Server Appliance

vCenter 6.0 Architecture (cont.)
Two basic architectures are supported as a result of this change:
The Platform Services Controller is either embedded in or external to vCenter Server
Choosing a mode depends on the size and feature requirements of the environment

vCenter 6.0 Architecture (cont.)
These architectures are recommended:
Enhanced Linked Mode without high availability
Enhanced Linked Mode with high availability
Enhanced Linked Mode is a major feature that impacts the architecture
When using Enhanced Linked Mode, an external Platform Services Controller is recommended
For details about the architectures that VMware recommends and the implications of using them, see VMware KB article 2108548, List of recommended topologies for vSphere 6.0 (http://kb.vmware.com/kb/2108548)

vCenter 6.0 Architectures (cont.)
These architectures are not recommended:
Enhanced Linked Mode with embedded PSCs
Enhanced Linked Mode with an embedded PSC and an external vCenter Server
Enhanced Linked Mode with an embedded PSC linked to an external PSC

vCenter 6.0 Architecture (cont.)
Enhanced Linked Mode has the following maximums, and the architecture must also adhere to them to be supported:
Number of Platform Services Controllers per domain
Maximum Platform Services Controllers per vSphere site (behind a single load balancer)
Maximum objects in a vSphere domain (users, groups, solution users): 1,000,000
Maximum number of VMware solutions connected to a single Platform Services Controller
Maximum number of VMware products/solutions per vSphere domain: 10

vCenter Architecture: vCenter Server Components
vCenter Server is made up of the Platform Services Controller (including vCenter Single Sign-On), core and distributed services, user access control, and the vCenter Server database on a database server
Additional services include VMware vSphere Update Manager and vRealize Orchestrator
Administrators and third-party applications reach vCenter Server through the vSphere Web Client, the VMware vSphere Client, and the VMware vSphere API
vCenter Server integrates with a Microsoft Active Directory domain and manages ESXi hosts through ESXi management plug-ins

vCenter Architecture: ESXi and vCenter Server Communication
How vCenter Server components and ESXi hosts communicate:
Clients reach vCenter Server and the Platform Services Controller over TCP 443
The vpxd service on vCenter Server communicates with each ESXi host over TCP 443/9443 and TCP/UDP 902
On the host, the vpxa agent receives this traffic and relays requests to hostd, the host management service
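A quick way to sanity-check the paths shown above is to probe the TCP ports from a management workstation; this small Python sketch uses placeholder host names and covers TCP only (the UDP 902 heartbeat is not probed).

    # Sketch: verify TCP reachability of the vCenter/ESXi communication ports named above.
    import socket

    checks = [("vcenter.example.com", 443), ("esxi01.example.com", 443), ("esxi01.example.com", 902)]
    for host, port in checks:
        with socket.socket() as s:
            s.settimeout(2)
            state = "open" if s.connect_ex((host, port)) == 0 else "unreachable"
            print(f"{host}:{port} {state}")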

VMware vSphere vMotion

vSphere vMotion
vSphere vMotion allows live migration of virtual machines between compatible ESXi hosts
Compatibility is determined by CPU, network, and storage access
With vSphere 6.0, migrations can occur:
Between clusters
Between datastores
Between networks (NEW)
Between vCenter Servers (NEW)
Over long distances, as long as the RTT is under 100 ms (NEW)
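For reference, a vMotion can also be requested programmatically. The sketch below uses pyVmomi's MigrateVM_Task with placeholder names; it assumes the compatibility requirements above (CPU, network, storage access) are already met.

    # Sketch: trigger a live migration (vMotion) of a running VM to another host.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="password", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.VirtualMachine, vim.HostSystem], True)
    objs = {o.name: o for o in view.view}
    vm, dest = objs["web01"], objs["esxi02.example.com"]         # placeholder names
    # The pre-copy and checkpoint phases described on the next slides happen inside ESXi
    # over the vMotion network; the API call only requests and tracks the task.
    WaitForTask(vm.MigrateVM_Task(host=dest, priority=vim.VirtualMachine.MovePriority.defaultPriority))
    Disconnect(si)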

vSphere vMotion Architecture
vSphere vMotion involves transferring the entire execution state of the virtual machine from the source host to the destination
This happens primarily over a high-speed network
The execution state consists primarily of the following components:
The virtual device state, including the state of the CPU, network and disk adapters, SVGA, and so on
External connections with devices, including networking and SCSI devices
The virtual machine's physical memory
Generally a single ping is lost, and users do not even know that the VM has changed hosts

vSphere vMotion Architecture: Pre-Copy
When a vSphere vMotion migration is initiated, a second VM container is started on the destination host and a pre-copy of the memory begins
While the copy runs, changed pages are tracked in a memory bitmap on the source host
The memory pre-copy flows from VM A on ESXi Host 1 to ESXi Host 2 over the vMotion network, while the end user continues to reach the VM over the production network

vSphere vMotion Architecture: Memory Checkpoint
When enough data has been copied, the VM is quiesced
Checkpoint data is sent with the final changes over the vMotion network
An ARP is sent and the VM becomes active on the destination host
The source VM is stopped, and the end user now reaches VM A on ESXi Host 2 over the production network

VMware vSphere Storage vMotion Architecture
vSphere Storage vMotion works in very much the same way as vSphere vMotion, only the disks are migrated instead
It works as follows:
1. Initiate the storage migration
2. Use the VMkernel data mover or VMware vSphere Storage APIs - Array Integration (VAAI) to copy the data from the source datastore to the destination datastore
3. Start a new virtual machine process
4. Use the mirror driver to mirror read/write I/O calls to file blocks that have already been copied to the virtual disk on the destination datastore
5. Cut over to the destination VM process to begin accessing the virtual disk copy
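The equivalent API call is RelocateVM_Task with only a destination datastore set; this pyVmomi sketch uses placeholder names and leaves the host unchanged, so only the disks move.

    # Sketch: Storage vMotion -- relocate a VM's virtual disks to another datastore.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="password", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.VirtualMachine, vim.Datastore], True)
    objs = {o.name: o for o in view.view}

    spec = vim.vm.RelocateSpec()
    spec.datastore = objs["datastore2"]       # destination datastore (placeholder name)
    WaitForTask(objs["web01"].RelocateVM_Task(spec))
    Disconnect(si)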

vSphere Storage vMotion Architecture: Simultaneous Change
vSphere vMotion also allows both storage and host to be changed at the same time
New in vSphere 6.0, the VM can additionally be migrated between networks (Network A to Network B) and between vCenter Servers in the same operation
A single migration over the vSphere vMotion network can therefore change the ESXi host, datastore, network, and vCenter Server at once

Availability
VMware vSphere High Availability
VMware vSphere Fault Tolerance
VMware vSphere Distributed Resource Scheduler

vSphere vMotion Architecture: Long-Distance vSphere vMotion
Cross-continental: targets cross-continental USA distances, with up to 100 ms RTT
Performance: maintains standard vSphere vMotion guarantees

Availability
VMware vSphere High Availability

vSphere High Availability
vSphere High Availability is an availability solution that monitors hosts and restarts virtual machines in the case of a host failure
VM Component Protection adds monitoring for APD and PDL storage events (NEW)
OS- and application-independent, requiring no complex configuration changes
Agents on the ESXi hosts monitor for failures across the infrastructure, connectivity, and the application: host failures, host network isolation, a datastore incurring a PDL or APD event, guest OS hangs/crashes, VM crashes, and application hangs/crashes
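Enabling HA on an existing cluster is a single reconfigure call against the cluster object; the pyVmomi sketch below shows the minimal case with placeholder names, leaving admission control and VM monitoring options at their defaults.

    # Sketch: turn on vSphere HA for a cluster ("das" is HA's internal service name).
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="password", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == "Cluster01")   # placeholder cluster name

    spec = vim.cluster.ConfigSpecEx()
    spec.dasConfig = vim.cluster.DasConfigInfo(enabled=True)
    WaitForTask(cluster.ReconfigureComputeResource_Task(spec, modify=True))
    Disconnect(si)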

vSphere High Availability Architecture Overview
A cluster of up to 64 ESXi hosts is created
One of the hosts is elected as the master when HA is enabled
Availability heartbeats occur through both the network and storage
The HA agent communicates on the following networks by default:
The management network, or
The VMware Virtual SAN network (if Virtual SAN is enabled)
Network heartbeats flow between the master and the other hosts; storage heartbeats go through shared datastores

vSphere High Availability Architecture: Failure Scenarios
Host failures: the master declares a slave host dead and restarts its virtual machines; if the master itself fails, a new master is elected and resumes master duties
Network partition
Host isolation
VM monitoring
VM Component Protection

Availability
VMware vSphere Fault Tolerance

vSphere FT
vSphere FT is an availability solution that provides continuous availability for virtual machines:
Zero downtime
Zero data loss
No loss of TCP connections
Completely transparent to guest software:
No dependency on the guest OS or applications
No application-specific management or learning
Supports up to 4 vCPUs per VM with vSphere 6.0 (NEW)
Uses fast checkpointing rather than record/replay functionality

vSphere FT Architecture
When enabled with vSphere 6.0, vSphere FT creates two complete virtual machines
This includes a complete copy of:
The VMX configuration file
The VMDK files, including the ability to use separate datastores
The primary VM (.vmx file and VMDKs on Datastore 1) and the secondary VM (.vmx file and VMDKs on Datastore 2) both connect to the VM network

vSphere FT Architecture: Memory Checkpoint
vSphere FT in vSphere 6.0 uses fast checkpoint technology
This is similar to how vSphere vMotion works, but it is done continuously rather than once
The fast checkpoint is a snapshot of all data, not just memory (memory, disks, devices, and so on)
The vSphere FT logging network has a minimum requirement of a 10 Gbps NIC
Fast checkpoint data flows from the primary VM on ESXi Host 1 to the secondary VM on ESXi Host 2 over the vSphere FT logging network, while the end user reaches the VM over the production network

Availability
VMware vSphere Distributed Resource Scheduler

DRS
DRS is a technology that monitors load and resource usage and uses vSphere vMotion to balance virtual machines across the hosts in a cluster
DRS also includes VMware Distributed Power Management (DPM), which allows hosts to be evacuated and powered off during periods of low utilization
DRS uses vSphere vMotion functionality to migrate VMs
It can be used in three ways:
Fully automated, where DRS acts on recommendations automatically
Partially automated, where DRS acts only for initial VM power-on placement and an administrator has to approve migration recommendations
Manual, where administrator approval is required
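Choosing the automation level is part of the cluster's DRS configuration; this pyVmomi sketch enables DRS in fully automated mode on a placeholder cluster (the other valid defaultVmBehavior values are manual and partiallyAutomated).

    # Sketch: enable DRS on a cluster and set the automation level to fully automated.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="password", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == "Cluster01")   # placeholder cluster name

    spec = vim.cluster.ConfigSpecEx()
    spec.drsConfig = vim.cluster.DrsConfigInfo(enabled=True,
                                               defaultVmBehavior="fullyAutomated")
    WaitForTask(cluster.ReconfigureComputeResource_Task(spec, modify=True))
    Disconnect(si)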

DRS Architecture
DRS generates migration recommendations based on how aggressively it has been configured
For example:
The three hosts on the left side of the figure are unbalanced
Host 1 has six virtual machines; its resources might be overused, while ample resources are available on Host 2 and Host 3
DRS migrates (or recommends the migration of) virtual machines from Host 1 to Host 2 and Host 3
The right side of the figure shows the properly load-balanced configuration of the hosts that results

Distributed Power Management Architecture
DPM generates migration recommendations similar to DRS, but in terms of achieving power savings
It can be configured for how aggressively you want to save power
For example:
The three hosts on the left side of the figure have virtual machines running, but they are mostly idle
DPM determines that, given the load of the environment, shutting down Host 3 will not impact the level of performance for the VMs
DPM migrates (or recommends the migration of) virtual machines from Host 3 to Host 1 and Host 2, and puts Host 3 into standby mode
The right side of the figure shows the resulting power-managed configuration of the hosts, with Host 3 as a standby host

Content Library

Content Library
The Content Library is new in vSphere 6.0 and is a distributed template, media, and script library for vCenter Server
Similar to the VMware vCloud 5.5 Content Catalog and VMware vCloud Connector Content Sync
Tracks versions for generational content; it cannot be used to revert to older versions
A library on one vCenter Server is published, and libraries on other vCenter Servers subscribe to it and sync its items

Content Library Architecture: Publication and Subscription
Publication and subscription allow libraries to be shared between vCenter Servers
This provides a single source for content that can be configured to download and sync according to schedules or timeframes
The subscriber's Content Library Service subscribes to the publisher using a subscription URL (pointing to lib.json) and an optional password
The Transfer Services on each vCenter Server move templates and other items with HTTP GET
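As a rough illustration of the subscription mechanism, a subscriber starts by fetching the published lib.json over HTTP GET. The URL below is a placeholder and the response layout is not documented in this kit, so treat the sketch as an assumption rather than a protocol specification.

    # Sketch only: retrieve a published content library index the way a subscriber would.
    import requests

    # Placeholder subscription URL; a real one is copied from the publishing vCenter Server.
    url = "https://vcenter.example.com/cls/vcsp/lib/<library-id>/lib.json"
    resp = requests.get(url, verify=False)    # verify=False for lab use only; add auth if a password was set
    resp.raise_for_status()
    print(resp.json())                        # library metadata; item details live in items.json / item.json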

Content Library Architecture: Content Synchronization
Content synchronization occurs when content changes
Simple versioning is used to denote the modification, and the changed item is transferred
The subscriber's Content Library Service reads the publisher's lib.json, items.json, and item.json metadata over the VMware Content Subscription Protocol (vCSP), the Transfer Services copy the items with HTTP GET, and library metadata is stored in the vCenter Server database (VCDB)

VMware Certificate Authority

Certificates in vSphere 6.0
vCenter 5.x solutions had their TCP/IP connections secured with SSL
This required a unique certificate for each solution
In vSphere 6.0, the various listening ports have been replaced with a single endpoint: the reverse web proxy on port 443
The reverse HTTP proxy routes traffic to the appropriate service based on the type of request, including the vCenter Server service, Inventory Service, vCenter Single Sign-On, the vSphere Web Client, vSphere Update Manager, and the Storage Policy Service
This means only one endpoint certificate is needed

VMware Certificate Authority
In vSphere 6.0, vCenter Server ships with an internal Certificate Authority (CA), called the VMware Certificate Authority
An instance of the VMware CA is included with each Platform Services Controller node
It issues certificates for VMware components under its own authority in the vSphere ecosystem
It runs as part of the Infrastructure Identity Core Service Group:
Directory service
Certificate service
Authentication framework
The VMware CA issues certificates only to clients that present credentials from VMDirectory in its own identity domain
It also posts its root certificate to its own server node in VMware Directory Services

How Is the VMware Certificate Authority Used?
Machine SSL certificate:
Used by the reverse proxy on every vSphere node
Used by the VMware Directory Service on Platform Services Controller and Embedded nodes
Used by VPXD on Management and Embedded nodes
Solution user certificates
Single Sign-On signing certificates

VMware Endpoint Certificate Store
Certificate storage and trust are now handled by the VMware Endpoint Certificate Store
It serves as a local wallet for certificates, private keys, and secret keys, which can be stored in key stores
It runs as part of the Authentication Framework Service
It runs on every Embedded, Platform Services Controller, and Management node
Some key stores are special:
Trusted certificates key store
Machine SSL certificate key store

How Is the VMware Endpoint Certificate Store Used?
Machine SSL store:
Holds the machine SSL certificate
Trusted roots store:
Holds trusted root certificates from all VMware CA instances running on every infrastructure controller in the SSO identity domain
Holds third-party trusted root certificates that were uploaded to VMDir and downloaded to every VMware Endpoint Certificate Store instance
Solutions use the contents of this key store to verify certificates
Solution key stores, which hold private keys and solution user certificates:
Machine Account Key Store (Platform Services Controller, Management, Embedded nodes)
VPXD Key Store (Management, Embedded nodes)
VPXD Extension Key Store (Management, Embedded nodes)
VMware vSphere Client Key Store (Management, Embedded nodes)

Storage
iSCSI Storage Architecture
NFS Storage Architecture
Fibre Channel Architecture
Other Storage Architectural Concepts

Storage
Both local and shared storage are a core requirement for full utilization of ESXi features
Many kinds of storage can be used with vSphere:
Local disks
Fibre Channel (FC) SANs
FCoE
iSCSI SANs
NAS
Virtual SAN
Virtual Volumes (VVOLs)
ESXi hosts consume this storage as datastores, which are generally formatted with either:
The VMware vSphere VMFS file system, or
The file system of the NFS server

Storage Protocol Features
Each protocol has its own set of supported features
All major features are supported by all protocols
The feature matrix compares Fibre Channel, FCoE, iSCSI, NFS, direct attached storage, Virtual SAN, and VMware Virtual Volumes against support for boot from SAN, vSphere vMotion, vSphere High Availability, DRS, and raw device mapping

Storage
iSCSI Storage Architecture

Storage Architecture: iSCSI
iSCSI storage uses regular IP traffic over a standard network to transport iSCSI commands
The ESXi host connects through one of several types of iSCSI initiator

Storage Architecture: iSCSI Components
All iSCSI systems share a common set of components that are used to provide storage access

Storage Architecture: iSCSI Addressing
In addition to standard IP addresses, iSCSI targets and initiators are identified by names
Example iSCSI target: name iqn.1992-08.com.mycompany:stor1-47cf3c25 or eui.fedcba9876543210, alias stor1, IP address 192.168.36.101
Example iSCSI initiator: name iqn.1998-01.com.vmware:train1-64ad4c29 or eui.1234567890abcdef, alias train1, IP address 192.168.36.88
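The iqn. names above follow a fixed convention (iqn.<year>-<month>.<reversed domain>[:<unique string>]); the short Python sketch below checks a few names against that pattern, using the examples from this slide.

    # Sketch: classify iSCSI names as IQN, EUI, or invalid based on their format.
    import re

    IQN = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9.\-]+(?::\S+)?$")

    for name in ("iqn.1992-08.com.mycompany:stor1-47cf3c25",
                 "eui.fedcba9876543210",
                 "not-an-iscsi-name"):
        kind = "iqn" if IQN.match(name) else "eui" if name.startswith("eui.") else "invalid"
        print(name, "->", kind)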

Storage
NFS Storage Architecture

Storage Architecture: NFS Components
Much like iSCSI, NFS accesses storage over the network
The components involved are:
A NAS device or a server with storage
A directory shared with the ESXi host over the network
An ESXi host with a NIC mapped to a virtual switch
A VMkernel port defined on that virtual switch

Storage Architecture: Addressing and Access Control with NFS
ESXi accesses the NFS server by address or name through a VMkernel port (for example, a VMkernel port configured with 192.168.81.72 mounting an export from the server at 192.168.81.33)
NFS version 4.1 and NFS version 3 are both available with vSphere 6.0
Different features are supported with different versions of the protocol:
NFS 4.1 supports multipathing, unlike NFS 3
NFS 3 supports all features; NFS 4.1 does not support Storage DRS, VMware vSphere Storage I/O Control, VMware vCenter Site Recovery Manager, or Virtual Volumes
Dedicated switches are not required for NFS configurations

Storage
Fibre Channel Architecture

Storage Architecture: Fibre Channel
Unlike network storage such as NFS or iSCSI, Fibre Channel does not generally use an IP network for storage access
The exception is Fibre Channel over Ethernet (FCoE)

Storage Architecture: Fibre Channel Addressing and Access Control
Zoning and LUN masking are used to control access to storage LUNs

Storage Architecture: FCoE Adapters
FCoE adapters allow access to Fibre Channel storage over Ethernet connections
In many cases, this enables expansion to Fibre Channel SANs where no Fibre Channel infrastructure exists
Both hardware and software adapters are allowed:
Hardware FCoE adapters are often called converged network adapters (CNAs); in many cases both a NIC and an HBA are presented to the host from the single card
Software FCoE uses a NIC with FCoE support together with the ESXi network driver and software FC stack
In both cases the host connects over 10 Gigabit Ethernet to an FCoE switch, which forwards Ethernet IP frames to LAN devices and FC frames to FC storage arrays

Storage
Other Storage Architectural Concepts

Multipathing
Multipathing enables continued access to SAN LUNs if hardware fails
It can also provide load balancing, based on the path selection policy chosen

vSphere Storage I/O Control
vSphere Storage I/O Control allows traffic to be prioritized during periods of contention
It brings compute-style shares and limits to the storage infrastructure
It monitors device latency and acts when latency exceeds a threshold
This allows important virtual machines to have priority access to storage resources during high I/O from non-critical applications
Without Storage I/O Control, a data-mining workload can crowd out the print server, online store, and Microsoft Exchange; with Storage I/O Control, the critical workloads keep their share of I/O
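Per-disk priorities are expressed as shares on the virtual disk. The pyVmomi sketch below raises one VM's disk to high shares; the names are placeholders, and Storage I/O Control must already be enabled on the datastore for the shares to be enforced under contention.

    # Sketch: give a VM's first virtual disk high Storage I/O Control shares.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="password", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "exchange01")       # placeholder VM name

    disk = next(d for d in vm.config.hardware.device if isinstance(d, vim.vm.device.VirtualDisk))
    disk.storageIOAllocation = vim.StorageResourceManager.IOAllocationInfo(
        shares=vim.SharesInfo(level="high", shares=2000))
    change = vim.vm.device.VirtualDeviceSpec(
        device=disk, operation=vim.vm.device.VirtualDeviceSpec.Operation.edit)
    WaitForTask(vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[change])))
    Disconnect(si)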

Datastore Clusters
A collection of datastores with shared resources, similar to ESXi host clusters
Allows datastores to be managed through a single shared management interface
Storage DRS can be used to manage the resources and ensure they stay balanced
Can be managed by using the following constructs:
Space utilization
I/O latency load balancing
Affinity rules for virtual disks

Software-Defined Storage
Software-defined storage is a software construct that is used by:
Virtual Volumes
Virtual SAN
It uses storage policy-based management to assign policies to virtual machines for storage access
Policies are assigned on a per-disk basis rather than a per-datastore basis
It is a key tenet of the software-defined data center
Both Virtual Volumes and Virtual SAN are discussed in much greater detail in the Software-Defined Storage Knowledge Transfer Kit

Networking

Networking
Networking is also a core resource for vSphere
Two core types of virtual switches are provided:
Standard virtual switches, a virtual switch configuration for a single host
Distributed virtual switches, data center-level virtual switches that provide a consistent network configuration for virtual machines as they migrate across multiple hosts
Third-party switches are also allowed, such as the Cisco Nexus 1000V
There are two basic types of connectivity as well:
Virtual machine port groups
VMkernel ports, used for IP storage, vSphere vMotion migration, vSphere FT, Virtual SAN, provisioning, and so on, and for the ESXi management network

Networking Architecture
VM1, VM2, and VM3 connect to virtual machine port groups, and a VMkernel port carries the management network
The uplinks carry Test VLAN 101, Production VLAN 102, IP Storage VLAN 103, and Management VLAN 104

Network Architecture: Standard Compared to Distributed
A standard vSwitch is configured per host, while a distributed vSwitch is configured once at the data center level and spans hosts

Network Architecture: NIC Teaming and Load Balancing
NIC teaming enables multiple NICs to be connected to a single virtual switch for continued access to networks if hardware fails
It can also enable load balancing (where appropriate)
Load balancing policies:
Route based on originating virtual port
Route based on source MAC hash
Route based on IP hash
Route based on physical NIC load
Use explicit failover order
Most of the available policies can be configured on either type of switch
Route based on physical NIC load is available only on the VMware vSphere Distributed Switch
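On a standard vSwitch the teaming policy is part of the switch (or port group) configuration; this pyVmomi sketch switches vSwitch0 on one host to route based on IP hash. The host and switch names are placeholders, and the valid policy strings are loadbalance_srcid, loadbalance_srcmac, loadbalance_ip, and failover_explicit.

    # Sketch: change the NIC teaming load-balancing policy of a standard vSwitch.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="password", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in view.view if h.name == "esxi01.example.com")   # placeholder host

    ns = host.configManager.networkSystem
    vswitch = next(s for s in ns.networkInfo.vswitch if s.name == "vSwitch0")
    spec = vswitch.spec
    spec.policy.nicTeaming.policy = "loadbalance_ip"      # route based on IP hash
    ns.UpdateVirtualSwitch(vswitchName="vSwitch0", spec=spec)
    Disconnect(si)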

VMware vSphere Network I/O Control
vSphere Network I/O Control allows traffic to be prioritized during periods of contention
It brings compute-style shares and limits to the network infrastructure of the virtual switch
It allows important virtual machines or services to have priority access to network resources when shared uplinks (such as 10 GigE adapters) are congested

Software-Defined Networking
Software-defined networking is a software construct that allows your physical network to be treated as a pool of transport capacity, with network and security services attached to VMs through a policy-driven approach
It decouples the network configuration from the physical infrastructure
It allows for security and micro-segmentation of traffic
It is a key tenet of the software-defined data center (SDDC)

Questions


VMware vSphere 6.0


Knowledge Transfer Kit

VMware, Inc.
3401 Hillview Ave
Palo Alto, CA 94304
Tel: 1-877-486-9273 or 650-427-5000
Fax: 650-427-5001
