There are several different types of network traffic that you must consider and plan for when you deploy a highly available Hyper-V solution. You should design your network configuration with the following goals in mind:

To ensure network quality of service
To provide network redundancy
To isolate traffic to defined networks
Where applicable, take advantage of Server Message Block (SMB) Multichannel
This topic provides network configuration recommendations that are specific to a Hyper-V cluster that is running Windows Server 2012. It includes an overview of the different network traffic types, recommendations for how to isolate traffic, recommendations for features such as NIC Teaming, Quality of Service (QoS), and Virtual Machine Queue (VMQ), and a Windows PowerShell script that shows an example of converged networking, where the network traffic on a Hyper-V cluster is routed through one external virtual switch.
Windows Server 2012 supports the concept of converged networking, where different types of network traffic share the same Ethernet network infrastructure. In previous versions of Windows Server, the typical recommendation for a failover cluster was to dedicate separate physical network adapters to different traffic types. Improvements in Windows Server 2012, such as Hyper-V QoS and the ability to add virtual network adapters to the management operating system, enable you to consolidate the network traffic on fewer physical adapters. Combined with traffic isolation methods such as VLANs, you can isolate and control the network traffic.
Important
If you use System Center Virtual Machine Manager (VMM) to create or manage Hyper-V clusters, you must use VMM to configure the network settings that are described in this topic.
In this topic:
Source: Network recommendations for a Hyper-V cluster in Windows Server 2012, https://msdn.microsoft.com/fr-fr/library/dn550728(v=ws.11).aspx (retrieved 25/01/2017)
Example of converged networking: routing traffic through one Hyper-V virtual switch
Appendix: Encryption
Network traffic types and descriptions:

Management: Provides connectivity between the server that is running Hyper-V and basic infrastructure functionality. Used to manage the Hyper-V management operating system and virtual machines.

Cluster: Used for internode cluster communication such as the cluster heartbeat and Cluster Shared Volumes (CSV) redirection.

Live migration: Used for virtual machine live migration.

Storage: Used for SMB traffic or for iSCSI traffic.

Replica traffic: Used for virtual machine replication through the Hyper-V Replica feature.

Virtual machine access: Used for virtual machine connectivity. Typically requires external network connectivity to service client requests.
The following sections provide more detailed information about each network traffic type.
Management traffic
A management network provides connectivity between the operating system of the physical Hyper-V host (also known as the management operating system) and basic infrastructure functionality such as Active Directory Domain Services (AD DS), Domain Name System (DNS), and Windows Server Update Services (WSUS). It is also used for management of the server that is running Hyper-V and the virtual machines.
The management network must have connectivity between all required infrastructure, and to any location from which you want to manage the server.
Cluster traffic
A failover cluster monitors and communicates the cluster state between all members of the cluster. This communication is very important to maintain cluster health. If a cluster node does not communicate a regular health check (known as the cluster heartbeat), the cluster considers the node down and removes the node from cluster membership. The cluster then transfers the workload to another cluster node.
Internode cluster communication also includes traffic that is associated with CSV. For CSV, where all nodes of a cluster can access shared block-level storage simultaneously, the nodes in the cluster must communicate to orchestrate storage-related activities. Also, if a cluster node loses its direct connection to the underlying CSV storage, CSV has resiliency features that redirect the storage I/O over the network to another cluster node that can access the storage.
Live migration traffic
We recommend that you use a dedicated network or VLAN for live migration traffic to ensure quality of service and for traffic isolation and security. Live migration traffic can saturate network links, which can cause other traffic to experience increased latency. The time it takes to fully migrate one or more virtual machines depends on the throughput of the live migration network. Therefore, you must ensure that you configure the appropriate quality of service for this traffic. To provide the best performance, live migration traffic is not encrypted.
You can designate multiple networks as live migration networks in a prioritized list. For example, you may have one migration network for cluster nodes in the same cluster that is fast (10 GB), and a second migration network for cross-cluster migrations that is slower (1 GB).
All Hyper-V hosts that can initiate or receive a live migration must have connectivity to a network that is configured to allow live migrations. Because live migration can occur between nodes in the same cluster, between nodes in different clusters, and between a cluster and a stand-alone Hyper-V host, make sure that all these servers can access a live-migration-enabled network.
Storage traffic
For a virtual machine to be highly available, all members of the Hyper-V cluster must be able to access the virtual machine state. This includes the configuration state and the virtual hard disks. To meet this requirement, you must have shared storage.
In Windows Server 2012, there are two ways that you can provide shared storage:
Shared block storage. Shared block storage options include Fibre Channel, Fibre Channel over Ethernet (FCoE), iSCSI, and shared Serial Attached SCSI (SAS).
File-based storage over SMB 3.0. You can store virtual machine files on a file share that supports SMB 3.0.
SMB 3.0 includes new functionality known as SMB Multichannel. SMB Multichannel automatically detects and uses multiple network interfaces to deliver high performance and highly reliable storage connectivity.
By default, SMB Multichannel is enabled, and requires no additional configuration. You should use at least two network adapters of the same type and speed so that SMB Multichannel is in effect. Network adapters that support Remote Direct Memory Access (RDMA) are recommended but not required.
SMB 3.0 also automatically discovers and takes advantage of available hardware offloads, such as RDMA. A feature known as SMB Direct supports the use of network adapters that have RDMA capability. SMB Direct provides the best performance possible while also reducing file server and client overhead.
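To confirm that SMB Multichannel and RDMA are in effect on a cluster node, you can inspect the SMB client state. The following sketch uses the standard SMB and network adapter cmdlets that ship with Windows Server 2012; it is a diagnostic example, not part of the original topic:

```powershell
# Verify that SMB Multichannel is enabled on this node
Get-SmbClientConfiguration | Select-Object EnableMultiChannel

# List the interfaces that SMB Multichannel is currently using for active connections
Get-SmbMultichannelConnection

# Check whether the installed network adapters report RDMA capability
Get-NetAdapterRdma
```

If Get-SmbMultichannelConnection shows only one interface where you expect several, check that the adapters are of the same type and speed, as recommended above.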
Note
The NIC Teaming feature is incompatible with RDMA-capable network adapters. Therefore, if you intend to use the RDMA capabilities of the network adapter, do not team those adapters.
Both iSCSI and SMB use the network to connect the storage to cluster members. Because reliable storage connectivity and performance are very important for Hyper-V virtual machines, we recommend that you use multiple networks (physical or logical) to ensure that these requirements are achieved.
Note
For more information about SMB Direct and SMB Multichannel, see Improve Performance of a File Server with SMB Direct and The basics of SMB Multichannel, a feature of Windows Server 2012 and SMB 3.0.
Replica traffic
Hyper-V Replica provides asynchronous replication of Hyper-V virtual machines between two hosting servers or Hyper-V clusters. Replica traffic occurs between the primary and Replica sites.
Hyper-V Replica automatically discovers and uses available network interfaces to transmit replication traffic. To throttle and control the replica traffic bandwidth, you can define QoS policies with minimum bandwidth weight.
If you use certificate-based authentication, Hyper-V Replica encrypts the traffic. If you use Kerberos-based authentication, traffic is not encrypted.
Virtual machine access traffic
To separate virtual machine traffic from the management operating system, we recommend that you use VLANs that are not exposed to the management operating system.
Note
Realize that if you want to have a physical or logical network that is dedicated to a specific traffic type, you must assign each physical or virtual network adapter to a unique subnet. For each cluster node, Failover Clustering recognizes only one IP address per subnet.
A failover cluster can use any network that allows cluster network communication for cluster monitoring, state communication, and for CSV-related communication.
To configure a network to allow or not to allow cluster network communication, you can use Failover Cluster Manager or Windows PowerShell. To use Failover Cluster Manager, click Networks in the navigation tree. In the Networks pane, right-click a network, and then click Properties.
The following Windows PowerShell example configures a network named Management Network to allow cluster and
client connectivity.
(Get-ClusterNetwork -Name "Management Network").Role = 3
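The numeric Role values are not spelled out in this topic. As an assumption based on the Failover Clustering object model, 0 means the cluster does not use the network, 1 allows cluster communication only, and 3 allows both cluster and client communication. You can review the current role of each cluster network as follows:

```powershell
# Show each cluster network with its role and subnet
# Role: 0 = none, 1 = cluster only, 3 = cluster and client (assumed mapping)
Get-ClusterNetwork | Format-Table Name, Role, Address
```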
The following table shows the recommended settings for each type of network traffic. Realize that virtual machine
access traffic is not listed because these networks should be isolated from the management operating system by using
VLANs that are not exposed to the host. Therefore, virtual machine networks should not appear in Failover Cluster
Manager as cluster networks.
Note
Clear the Allow clients to connect through this network check box.
The following Windows PowerShell example enables live migration traffic only on a network that is named
Migration_Network.
Get-ClusterResourceType -Name "Virtual Machine" | Set-ClusterParameter -Name MigrationExcludeNetworks -Value ([String]::Join(";",(Get-ClusterNetwork | Where-Object {$_.Name -ne "Migration_Network"}).ID))
For example, the following Windows PowerShell command sets a constraint for SMB traffic from the file server FileServer1 to the network interfaces SMB1, SMB2, SMB3, and SMB4 on the Hyper-V host from which you run this command.
New-SmbMultichannelConstraint -ServerName "FileServer1" -InterfaceAlias "SMB1","SMB2","SMB3","SMB4"
Note
You must run this command on each node of the Hyper-V cluster.
To find the interface name, use the Get-NetAdapter cmdlet.
To isolate iSCSI traffic, configure the iSCSI target with interfaces on a dedicated network (logical or physical). Use the corresponding interfaces on the cluster nodes when you configure the iSCSI initiator.
If you want to isolate the replica traffic to a particular network adapter, you can define a persistent static route that redirects the network traffic to the defined network adapter. For example, consider adding a static route to the 10.1.17.0 network (an example network of the Replica site) that uses a subnet mask of 255.255.255.0 and a gateway of 10.0.17.1 (an example IP address of the primary site), where the interface number for the adapter that you want to dedicate to replica traffic is 8.
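As a sketch of such a persistent static route, using the example values above (the -p switch makes the route persist across restarts; verify the interface number with route print):

```powershell
# Persistent static route for the Replica subnet through the dedicated adapter (interface 8)
route add -p 10.1.17.0 mask 255.255.255.0 10.0.17.1 if 8
```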
NIC Teaming
The NIC Teaming feature, also known as load balancing and failover (LBFO), provides two basic sets of algorithms for teaming.
Switch-dependent modes. These modes require the switch to participate in the teaming process, and typically require all the network adapters in the team to be connected to the same switch.
Switch-independent modes. These modes do not require the switch to participate in the teaming process. Although not required, team network adapters can be connected to different switches.
Both modes provide for bandwidth aggregation and traffic failover if a network adapter failure or network disconnection occurs. However, in most cases only switch-independent teaming provides traffic failover for a switch failure.
NIC Teaming also provides a traffic distribution algorithm that is optimized for Hyper-V workloads. This algorithm is referred to as the Hyper-V port load balancing mode. This mode distributes the traffic based on the MAC address of the virtual network adapters. The algorithm uses round robin as the load-balancing mechanism. For example, on a server that has two teamed physical network adapters and four virtual network adapters, the first and third virtual network adapters will use the first physical adapter, and the second and fourth virtual network adapters will use the second physical adapter. Hyper-V port mode also enables the use of hardware offloads such as Virtual Machine Queue (VMQ), which reduces CPU overhead for networking operations.
Recommendations
For a clustered Hyper-V deployment, we recommend that you use the following settings when you configure the additional properties of a team.
Note
NIC Teaming will effectively disable the RDMA capability of the network adapters. If you want to use SMB Direct and the RDMA capability of the network adapters, you should not use NIC Teaming.
For more information about the NIC Teaming modes and how to configure NIC Teaming settings, see Windows Server 2012 NIC Teaming (LBFO) Deployment and Management and NIC Teaming Overview.
Quality of Service (QoS)
QoS does the following:
Measures network bandwidth, detects changing network conditions such as congestion or availability of bandwidth, and prioritizes or throttles network traffic.
Includes a minimum bandwidth feature, which guarantees a certain amount of bandwidth to a given type of traffic.
We recommend that you configure appropriate Hyper-V QoS on the virtual switch to ensure that network requirements are met for all appropriate types of network traffic on the Hyper-V cluster.
Note
You can use QoS to control outbound traffic, but not inbound traffic. For example, with Hyper-V Replica, you can use QoS to control outbound traffic from the primary server, but not the inbound traffic from the Replica server.
Recommendations
For a Hyper-V cluster, we recommend that you configure Hyper-V QoS that applies to the virtual switch. When you configure QoS, do the following:
Configure minimum bandwidth in weight mode instead of in bits per second. Minimum bandwidth specified by weight is more flexible, and it is compatible with other features, such as live migration and NIC Teaming. For more information, see the MinimumBandwidthMode parameter in New-VMSwitch.
Enable and configure QoS for all virtual network adapters. Assign a weight to all virtual adapters. For more information, see Set-VMNetworkAdapter. To make sure that all virtual adapters have a weight, configure the DefaultFlowMinimumBandwidthWeight parameter on the virtual switch to a reasonable value. For more information, see Set-VMSwitch.
The following table recommends some generic weight values. You can assign a value from 1 to 100. For guidelines to
consider when you assign weight values, see Guidelines for using Minimum Bandwidth.
Default weight: 0
Cluster: 10
Management: 10
Replica traffic: 10
Live migration: 40
Storage: 40
Virtual Machine Queue (VMQ)
Not all physical network adapters support VMQ. Those that do support VMQ will have a fixed number of queues available, and the number will vary. To determine whether a network adapter supports VMQ, and how many queues it supports, use the Get-NetAdapterVmq cmdlet.
You can assign virtual machine queues to any virtual network adapter. This includes virtual network adapters that are exposed to the management operating system. Queues are assigned according to a weight value, in a first-come, first-served manner. By default, all virtual adapters have a weight of 100.
Recommendations
We recommend that you increase the VMQ weight for interfaces with heavy inbound traffic, such as storage and live migration networks. To do this, use the Set-VMNetworkAdapter Windows PowerShell cmdlet.
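As a sketch (the adapter names are examples that match the converged networking script later in this topic), you can weight the storage and live migration adapters above the management adapter so that they win queue assignments:

```powershell
# Favor the storage and live migration adapters when VMQ queues are allocated
Set-VMNetworkAdapter -ManagementOS -Name "SMB1" -VmqWeight 100
Set-VMNetworkAdapter -ManagementOS -Name "Migration" -VmqWeight 90

# Give the management adapter, which has lighter inbound traffic, a lower weight
Set-VMNetworkAdapter -ManagementOS -Name "Management" -VmqWeight 80
```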
Example of converged networking: routing traffic through one Hyper-V virtual switch
The following Windows PowerShell example routes the network traffic on a Hyper-V cluster through one external virtual switch. The example also configures network isolation: it restricts cluster traffic from the management interface, restricts SMB traffic to the SMB interfaces, and restricts live migration traffic to the live migration interface.
# Create a network team by using switch-independent teaming and Hyper-V port mode
New-NetLbfoTeam "PhysicalTeam" -TeamMembers "10GBPort1","10GBPort2" -TeamNicName "PhysicalTeam" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort

# Create a Hyper-V virtual switch connected to the network team
# Enable QoS in Weight mode
New-VMSwitch "TeamSwitch" -NetAdapterName "PhysicalTeam" -MinimumBandwidthMode Weight -AllowManagementOS $false

# Configure the default bandwidth weight for the switch
# Ensures all virtual NICs have a weight
Set-VMSwitch -Name "TeamSwitch" -DefaultFlowMinimumBandwidthWeight 0

# Create virtual network adapters on the management operating system
# Connect the adapters to the virtual switch
# Set the VLAN associated with the adapter
# Configure the VMQ weight and minimum bandwidth weight
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "TeamSwitch"
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId 10
Set-VMNetworkAdapter -ManagementOS -Name "Management" -VmqWeight 80 -MinimumBandwidthWeight 10

Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName "TeamSwitch"
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Cluster" -Access -VlanId 11
Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -VmqWeight 80 -MinimumBandwidthWeight 10

Add-VMNetworkAdapter -ManagementOS -Name "Migration" -SwitchName "TeamSwitch"
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Migration" -Access -VlanId 12
Set-VMNetworkAdapter -ManagementOS -Name "Migration" -VmqWeight 90 -MinimumBandwidthWeight 40

Add-VMNetworkAdapter -ManagementOS -Name "SMB1" -SwitchName "TeamSwitch"
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "SMB1" -Access -VlanId 13
Set-VMNetworkAdapter -ManagementOS -Name "SMB1" -VmqWeight 100 -MinimumBandwidthWeight 40

Add-VMNetworkAdapter -ManagementOS -Name "SMB2" -SwitchName "TeamSwitch"
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "SMB2" -Access -VlanId 14
Set-VMNetworkAdapter -ManagementOS -Name "SMB2" -VmqWeight 100 -MinimumBandwidthWeight 40

Add-VMNetworkAdapter -ManagementOS -Name "SMB3" -SwitchName "TeamSwitch"
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "SMB3" -Access -VlanId 15
Set-VMNetworkAdapter -ManagementOS -Name "SMB3" -VmqWeight 100 -MinimumBandwidthWeight 40

Add-VMNetworkAdapter -ManagementOS -Name "SMB4" -SwitchName "TeamSwitch"
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "SMB4" -Access -VlanId 16
Set-VMNetworkAdapter -ManagementOS -Name "SMB4" -VmqWeight 100 -MinimumBandwidthWeight 40

# Rename the cluster networks if desired
(Get-ClusterNetwork | Where-Object {$_.Address -eq "10.0.10.0"}).Name = "Management_Network"
(Get-ClusterNetwork | Where-Object {$_.Address -eq "10.0.11.0"}).Name = "Cluster_Network"
(Get-ClusterNetwork | Where-Object {$_.Address -eq "10.0.12.0"}).Name = "Migration_Network"
(Get-ClusterNetwork | Where-Object {$_.Address -eq "10.0.13.0"}).Name = "SMB_Network1"
(Get-ClusterNetwork | Where-Object {$_.Address -eq "10.0.14.0"}).Name = "SMB_Network2"
(Get-ClusterNetwork | Where-Object {$_.Address -eq "10.0.15.0"}).Name = "SMB_Network3"
(Get-ClusterNetwork | Where-Object {$_.Address -eq "10.0.16.0"}).Name = "SMB_Network4"

# Configure the cluster network roles
(Get-ClusterNetwork -Name "Management_Network").Role = 3
(Get-ClusterNetwork -Name "Cluster_Network").Role = 1
(Get-ClusterNetwork -Name "Migration_Network").Role = 1
(Get-ClusterNetwork -Name "SMB_Network1").Role = 0
(Get-ClusterNetwork -Name "SMB_Network2").Role = 0
(Get-ClusterNetwork -Name "SMB_Network3").Role = 0
(Get-ClusterNetwork -Name "SMB_Network4").Role = 0

# Configure an SMB Multichannel constraint
# This ensures that SMB traffic from the named server only uses SMB interfaces
New-SmbMultichannelConstraint -ServerName "FileServer1" -InterfaceAlias "vEthernet (SMB1)","vEthernet (SMB2)","vEthernet (SMB3)","vEthernet (SMB4)"

# Configure the live migration network
Get-ClusterResourceType -Name "Virtual Machine" | Set-ClusterParameter -Name MigrationExcludeNetworks -Value ([String]::Join(";",(Get-ClusterNetwork | Where-Object {$_.Name -ne "Migration_Network"}).ID))
If you also use Hyper-V Replica in your environment, you can add another virtual network adapter to the management operating system for replica traffic. For example:
Add-VMNetworkAdapter -ManagementOS -Name "Replica" -SwitchName "TeamSwitch"
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Replica" -Access -VlanId 17
Set-VMNetworkAdapter -ManagementOS -Name "Replica" -VmqWeight 80 -MinimumBandwidthWeight 10

# If the host is clustered, configure the cluster network name and role
(Get-ClusterNetwork | Where-Object {$_.Address -eq "10.0.17.0"}).Name = "Replica"
(Get-ClusterNetwork -Name "Replica").Role = 3
Note
If you are instead using policy-based QoS, where you can throttle outgoing traffic regardless of the interface on which it is sent, you can use either of the following methods to throttle Hyper-V Replica traffic:
Create a QoS policy that is based on the destination subnet. In this example, the Replica site uses the
10.1.17.0/24 subnet.
New-NetQosPolicy "Replica traffic to 10.1.17.0" -DestinationAddress 10.1.17.0/24 -MinBandwidthWeightAction 40
Create a QoS policy that is based on the destination port. In the following example, the network listener on the
Replica server or cluster has been configured to use port 8080 to receive replication traffic.
New-NetQosPolicy "Replica traffic to 8080" -DestinationPort 8080 -ThrottleRateActionBitsPerSecond 100000
Appendix: Encryption
Cluster traffic
By default, cluster communication is not encrypted. You can enable encryption if you want. However, realize that there
is performance overhead that is associated with encryption. To enable encryption, you can use the following Windows
PowerShell command to set the security level for the cluster.
(Get-Cluster).SecurityLevel = 2
The SecurityLevel values are as follows:
0: Clear text
1: Signed (default)
2: Encrypted
Live migration traffic
Live migration traffic is not encrypted. You can enable IPsec or other network-layer encryption technologies if you want. However, realize that encryption technologies typically affect performance.
SMB traffic
By default, SMB traffic is not encrypted. Therefore, we recommend that you use a dedicated network (physical or logical) or use encryption. For SMB traffic, you can use SMB encryption, or layer 2 or layer 3 encryption. SMB encryption is the preferred method.
Replica traffic
If you use Kerberos-based authentication, Hyper-V Replica traffic is not encrypted. We strongly recommend that you encrypt replication traffic that transits public networks over the WAN or the Internet. We recommend Secure Sockets Layer (SSL) encryption as the encryption method. You can also use IPsec. However, realize that using IPsec may significantly affect performance.
See also
Failover Cluster Networking Essentials video
2017 Microsoft