Task
Migrate one physical adapter to each uplink (Ethernet) port profile on the N1Kv DVS.
Migrate all VM guests, including vCenter and both N1Kv VSMs, over to run on the N1Kv VEMs.
Create any needed port profiles if not previously created.
Before any VEMs join the N1Kv, ensure that VEM running on ESXi1 joins the N1Kv as module 3
and VEM running on ESXi2 joins the N1Kv as module 4.
Configuration
To start, let's check the UUIDs of the blades in UCSM, because they serve as the defining value
of the VEM inside the N1Kv.
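The UUID can also be cross-checked from the host itself rather than UCSM. A sketch from the ESXi shell (the output line is illustrative; formatting varies by ESXi version):

```
~ # esxcfg-info -u
8F862310-4C63-11E2-0000-00000000000F
```

This should match the UUID shown on the blade's service profile in UCSM.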
On ESXi1 Service Profile:
On N1Kv:
vem 3
  host vmware id 8f862310-4c63-11e2-0000-00000000000f
vem 4
  host vmware id 8f862310-4c63-11e2-0000-00000000001f
We will intentionally start with ESXi2 to avoid disrupting vCenter or VSM1 traffic.
Navigate to ESXi2, click the Configuration tab, click Networking, select vSphere
Distributed Switch, and click Manage Physical Adapters.
We have a redundant vmnic for VMKernel traffic, so this shouldn't be too disruptive. Click Yes.
We don't have a redundant vmnic for vMotion, but it isn't critical (in production it would be, but we
aren't in production). Click Yes.
We also have a redundant vmnic for VM-Guest traffic, so again this shouldn't be too disruptive.
Click Yes.
Click OK.
Note the new adapters under their respective uplink port profiles; they should all show green 8P8C modular-connector icons. Click Manage Virtual Adapters to begin moving VMkernel
interfaces over to the new DVS.
Click Add.
Select the Management Network adapter, double-click under Port Group to display the drop-down menu, and select the VMKernel veth port profile from the N1Kv DVS.
Do the same for the vMotion adapter, but select the vMotion veth port profile from the N1Kv
DVS. Click Next.
Click Finish.
Click Close.
Verification
In N1Kv, note that the VEM module comes online and inserts into the N1Kv DVS properly as
VEM 4.
Note:
By default, modules are inserted and dynamically assigned the first available slot
number; however, the host vmware id mappings we configured in the first few tasks
ensure that this host is inserted as module 4. It is very good practice to keep VEM
numbers synchronized with your ESXi hosts in some way, if possible. They can be
changed later, but it is much better to set them up properly from the start.
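With the slot pre-assignments in place, the UUID-to-slot mapping can be confirmed on the VSM with show module vem mapping. A sketch (the status values shown are illustrative for this point in the lab, where only VEM 4 has come online):

```
N1Kv-01# show module vem mapping
Mod  Status        UUID                                  License Status
---  ------------  ------------------------------------  --------------
3    absent        8f862310-4c63-11e2-0000-00000000000f  -
4    powered-up    8f862310-4c63-11e2-0000-00000000001f  licensed
```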
N1Kv-01# sh mod
Mod  Ports  Module-Type                       Model               Status
---  -----  --------------------------------  ------------------  ------------
1    0      Virtual Supervisor Module         Nexus1000V          active *
2    0      Virtual Supervisor Module         Nexus1000V          ha-standby
4    248    Virtual Ethernet Module           NA                  ok

Mod  Sw                  Hw
---  ------------------  ----
1    4.2(1)SV2(1.1)      0.0
2    4.2(1)SV2(1.1)      0.0
4    4.2(1)SV2(1.1)

Mod  MAC-Address(es)                          Serial-Num
---  ---------------------------------------  ----------
1    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8   NA
2    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8   NA
4    02-00-0c-00-03-00 to 02-00-0c-00-03-80   NA

Mod  Server-IP        Server-UUID                           Server-Name
---  ---------------  ------------------------------------  -----------
1    10.0.121.1       NA                                    NA
2    10.0.121.1       NA                                    NA
4    10.0.115.12      8f862310-4c63-11e2-0000-00000000001f  10.0.115.12
And also notice the new Vethernet (virtual VM) interfaces in the N1Kv.
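A compact way to see which adapter, owner, and module each new Veth maps to is show interface virtual. A sketch (output abbreviated and illustrative; the vmk adapter names are assumptions):

```
N1Kv-01# show interface virtual

-------------------------------------------------------------------------------
Port        Adapter        Owner                     Mod  Host
-------------------------------------------------------------------------------
Veth1       vmk0           VMware VMkernel           4    10.0.115.12
Veth2       vmk1           VMware VMkernel           4    10.0.115.12
```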
Configuration
Back in vCenter, right-click VSM2 in ESXi2 and click Edit Settings.
Change Network Adapter 1 to the new N1Kv-Control (N1Kv-01) port group in the DVS.
Note:
You can distinguish DVS port profiles from standard vSwitch groups by the fact
that they always indicate which DVS they are a part of in parentheses at the end
of the name. (Note that VMware can run multiple different DVS at one time.)
Change Network Adapter 2 to the new N1Kv-Management (N1Kv-01) port group in the DVS.
Change Network Adapter 3 to the new N1Kv-Packet (N1Kv-01) port group in the DVS. Click
OK.
Note in the vDS page for ESXi2 how they have been assigned and show green 8P8C connector
icons for connectivity.
Verification
We see that VSM2 has power-cycled but is still seen by the N1Kv.
N1Kv-01# sh mod
Mod  Ports  Module-Type                       Model               Status
---  -----  --------------------------------  ------------------  ------------
1    0      Virtual Supervisor Module         Nexus1000V          active *
2    0      Virtual Supervisor Module         Nexus1000V          powered-up
4    248    Virtual Ethernet Module           NA                  ok

Mod  Sw                  Hw
---  ------------------  ----
1    4.2(1)SV2(1.1)      0.0
4    4.2(1)SV2(1.1)

Mod  MAC-Address(es)                          Serial-Num
---  ---------------------------------------  ----------
1    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8   NA
4    02-00-0c-00-03-00 to 02-00-0c-00-03-80   NA

Mod  Server-IP        Server-UUID                           Server-Name
---  ---------------  ------------------------------------  -----------
1    10.0.121.1       NA                                    NA
4    10.0.115.12      8f862310-4c63-11e2-0000-00000000001f  10.0.115.12
Note:
It is important to understand something critical about the Nexus 1000v switch:
although Vethernet interface numbering doesn't typically change after it is
assigned (even when vMotioning a VM to another ESXi host), these are still virtual
interfaces and they can be destroyed (e.g., by deleting a VM altogether). The N1Kv
refers to these Vethernet interfaces in its forwarding tables with a value known
as the Local Target Logic (LTL). We will see these LTL values more and more as we
look at pinning in future labs. For now, remember one more critical point: a VSM
always has three interfaces, they are always ordered Control, Management, Packet,
and they are always assigned LTL values 10, 11, and 12, respectively. This can
greatly aid in troubleshooting. vemcmd show port-old will show you the values for
these and all other eth and veth interfaces, and we'll discuss the various
vemcmds later.
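For example, from the ESXi shell of the host running VSM2, the LTL-to-port mapping can be spot-checked. The sketch below is schematic only: the real column layout differs by VEM version, and the names in the last column are placeholders; the Veth3-5 mapping follows from VSM2's three adapters as seen later in this lab:

```
~ # vemcmd show port-old
  LTL  VSM Port   State  Name
   10  Veth3      FWD    N1Kv-01-VSM-2 (Control)
   11  Veth4      FWD    N1Kv-01-VSM-2 (Management)
   12  Veth5      FWD    N1Kv-01-VSM-2 (Packet)
```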
Configuration
We will continue on and migrate the remaining VM guest on ESXi2. Right-click Win2k8-www-1,
click Edit Settings, and move its network adapter to the corresponding veth port group on the
N1Kv DVS.
Verification
From any switch, we should be able to ping it, and we should also be able to ping the other two
VMs still on the standard vSwitch on ESXi1.
DC-3750#ping 10.0.110.111
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.110.111, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/9 ms
DC-3750#ping 10.0.110.112
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.110.112, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/4/9 ms
DC-3750#ping 10.0.110.113
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.0.110.113, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/8 ms
DC-3750#
Take note of the new Vethernet (virtual VM) interfaces in the N1Kv.
Configuration
Now we will move the physical adapters on ESXi1 over using the same method that we used on
ESXi2. Navigate to ESXi1, click the Configuration tab, click Networking, select vSphere
Distributed Switch, and click Manage Physical Adapters. Click Click to Add NIC for VMSys-Uplink, vMotion-Uplink, and VM-Guests, and choose adapters vmnic0, vmnic2, and vmnic3
for the three uplink port profiles, respectively. Click OK.
Note their appearance as connected in the vDS. Click Manage Virtual Adapters.
Click Add.
Select the Management Network adapter, double-click under Port Group to display the drop-down menu, and select the VMKernel veth port profile from the N1Kv DVS. Do the same for the
vMotion adapter, selecting the vMotion veth port profile. Click Next.
Click Finish.
Click Close.
Verification
The VEM comes online as module 3.
N1Kv-01(config-vem-slot)# sh mod
Mod  Ports  Module-Type                       Model               Status
---  -----  --------------------------------  ------------------  ------------
1    0      Virtual Supervisor Module         Nexus1000V          active *
2    0      Virtual Supervisor Module         Nexus1000V          ha-standby
3    248    Virtual Ethernet Module           NA                  ok
4    248    Virtual Ethernet Module           NA                  ok

Mod  Sw                  Hw
---  ------------------  ----
1    4.2(1)SV2(1.1)      0.0
2    4.2(1)SV2(1.1)      0.0
3    4.2(1)SV2(1.1)
4    4.2(1)SV2(1.1)

Mod  MAC-Address(es)                          Serial-Num
---  ---------------------------------------  ----------
1    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8   NA
2    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8   NA
3    02-00-0c-00-03-00 to 02-00-0c-00-03-80   NA
4    02-00-0c-00-04-00 to 02-00-0c-00-04-80   NA

Mod  Server-IP        Server-UUID                           Server-Name
---  ---------------  ------------------------------------  -----------
1    10.0.121.1       NA                                    NA
2    10.0.121.1       NA                                    NA
3    10.0.115.11      8f862310-4c63-11e2-0000-00000000000f  10.0.115.11
4    10.0.115.12      8f862310-4c63-11e2-0000-00000000001f  10.0.115.12
Note the new Ethernet interfaces, and notice that any interfaces belonging to module 3 on ESXi1
will be numbered as Ethernet 3/x and any interfaces belonging to module 4 on ESXi2 will be
numbered as Ethernet 4/x.
Also note the new Vethernet interfaces for vmk management and vmotion.
Configuration
We haven't added a port profile for vCenter (VLAN 1) in N1Kv before now, so now is a good time,
before we begin migrating the remainder of the VM guests over. Before we do this, we need to
modify our uplink VM-Guests profile to include VLAN 1, and because vCenter is rather critical,
it's a good idea to make it a system VLAN as well - that way, it can begin forwarding traffic on
the VEM even before the VEM is inserted into the N1Kv DVS.
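A minimal sketch of the profile changes just described, assuming the VM-Guests uplink profile name from earlier in this lab and "vCenter" as the new veth profile name (matching the port group referenced below); verify the names against your own configuration:

```
N1Kv-01(config)# port-profile type ethernet VM-Guests
N1Kv-01(config-port-prof)# switchport trunk allowed vlan add 1
N1Kv-01(config-port-prof)# system vlan 1
N1Kv-01(config-port-prof)# exit
N1Kv-01(config)# port-profile type vethernet vCenter
N1Kv-01(config-port-prof)# switchport mode access
N1Kv-01(config-port-prof)# switchport access vlan 1
N1Kv-01(config-port-prof)# system vlan 1
N1Kv-01(config-port-prof)# vmware port-group
N1Kv-01(config-port-prof)# no shutdown
N1Kv-01(config-port-prof)# state enabled
```

Marking VLAN 1 as a system VLAN on both the uplink and the veth profile is what lets the VEM forward vCenter traffic before it has been programmed by the VSM.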
Note that syslog should inform you that the DVPG, or Distributed Virtual Port Group, was created
in vCenter.
Migrate Network adapter 1 over to the vCenter (N1Kv-01) vDS port profile/group. Click OK.
Verification
We see a Veth interface created for vCenter, and the vCenter client should still be fully
responsive.
Configuration
Continue to migrate the rest of the VM guests' NICs on ESXi1. When finished, every guest on
both hosts should be fully migrated off of any local vSwitch and running solely on the Nexus
1000v DVS platform.
Verification
In vCenter on the vDS for ESXi1, we should see all VMs running on the N1Kv.
In the N1Kv CLI, we should see all of the Eth and Veth interfaces populated; note that the Veth
interfaces populate the description automatically with the name of the running VM occupying
that respective interface.
N1Kv-01# sh int status

--------------------------------------------------------------------------------
Port      Name                Status  Vlan      Duplex  Speed    Type
--------------------------------------------------------------------------------
mgmt0     --                  up      routed    full    1000     --
Eth3/1    --                  up      trunk     full    1000     --
Eth3/3    --                  up      trunk     full    unknown  --
Eth3/4    --                  up      trunk     full    unknown  --
Eth4/1    --                  up      trunk     full    1000     --
Eth4/3    --                  up      trunk     full    unknown  --
Eth4/4    --                  up      trunk     full    unknown  --
Veth1     VMware VMkernel, v  up      115       auto    auto     --
Veth2     VMware VMkernel, v  up      116       auto    auto     --
Veth3     N1Kv-01-VSM-2, Net  up      120       auto    auto     --
Veth4     N1Kv-01-VSM-2, Net  up      121       auto    auto     --
Veth5     N1Kv-01-VSM-2, Net  up      120       auto    auto     --
Veth6     Win2k8-www-1, Netw  up      110       auto    auto     --
Veth7     VMware VMkernel, v  up      115       auto    auto     --
Veth8     VMware VMkernel, v  up      116       auto    auto     --
Veth9     N1Kv-01-VSM-1, Net  up      120       auto    auto     --
Veth10    N1Kv-01-VSM-1, Net  up      121       auto    auto     --
Veth11    N1Kv-01-VSM-1, Net  up      120       auto    auto     --
Veth12    Win2k8-www-2, Netw  up      110       auto    auto     --
Veth13    Win2k8-www-3, Netw  up      110       auto    auto     --
Veth14    vCenter, Network A  up      1         auto    auto     --
control0  --                  up      routed    full    1000     --
N1Kv-01#
2013 INE