

CCIE DC Full-Scale Lab 2 Tasks


Last updated: June 14, 2013

Diagram

(/uploads/workbooks/images/diagrams/EOk1VbStsRSTXL3L0cMZ.png)

Introduction
1. Data Center Infrastructure
2. Data Center Storage Networking
3. Unified Computing
4. Data Center Virtualization

Introduction
General Lab Guidelines
- You may not use any links that may physically be present but are not specifically pictured and labeled in this topology.
- Name and number all VLANs, port channels, SAN port channels, service profiles, templates, and so on exactly as described in this lab. Failure to do so will result in missed points for that task.
- You may not change any passwords on any devices unless explicitly directed to do so.
- You may not change any management IP addresses or default routes on any devices or VDCs unless explicitly directed to do so (you may add them if they do not exist, but you may not change existing ones).
- You may not disable telnet on any device. Telnet must work properly on all devices and VDCs.
- You may not log on to the 3750G switch for this particular lab. It is fully functional and pre-configured for you.

1. Data Center Infrastructure


1.1 VLANs
- Do not create any unnecessary VLANs on any switch.
- Create VLANs 120, 125, 130, 135, 140, 200, 201, 710, and 711 on N7K1.
- Create VLANs 120, 125, 130, 135, 140, 200, 201, 720, and 721 on N7K2.
- Create VLANs 120, 125, 130, 135, 200, 201, and 140 on N7K3.
- Create VLANs 120, 125, 130, 135, 200, 201, and 140 on N7K4.
- Create VLANs 120, 125, 130, 135, 200, and 201 on N5K1 and N5K2.
- Name VLANs on every device they appear on according to Table 1 (a configuration sketch follows Table 1 below).

Table 1

VLAN    Name
120     VM-DATA1
125     VM-DATA2
130     VM-DATA3
135     VM-DATA4
140     OTV-SITE
150     BACKUP
200     DCI-ESXI
201     DCI-VMOTION
710     DC1-ISP-1
711     DC1-ISP-2
720     DC2-ISP-1
721     DC2-ISP-2
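As a reference, a minimal NX-OS sketch for part of the N7K1 side might look like the following; the remaining VLANs and the other switches follow the same pattern, with names taken from Table 1.

vlan 120
  name VM-DATA1
vlan 125
  name VM-DATA2
vlan 140
  name OTV-SITE
vlan 710
  name DC1-ISP-1
! ...repeat for the remaining VLANs required on each switch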

1.2 DCI L3 Routing


- Configure an L3 link over N7K1 e2/29 with the IP address and subnet mask 10.71.71.0 255.255.255.254. Use VLAN 710 to accomplish this. This L3 link must belong to VRF "DC1". Configure the link to form an OSPF adjacency in area 0.0.0.5, using a router ID of 10.71.71.71 for the OSPF process, which should be named "DC1". Ensure that e2/29 will only ever run at a rate of 1Gbps.
- Configure an L3 link over N7K1 e2/31 with the IP address and subnet mask 10.71.71.2 255.255.255.254. Use VLAN 711 to accomplish this. This L3 link must belong to VRF "DC1". Configure the link to form an OSPF adjacency in area 0.0.0.5, using a router ID of 10.71.71.71 for the OSPF process, which should be named "DC1". Ensure that e2/31 will only ever run at a rate of 1Gbps.
- Configure an L3 link over N7K2 e2/21 with the IP address and subnet mask 10.72.72.0 255.255.255.254. Use VLAN 720 to accomplish this. This L3 link must belong to VRF "DC2". Configure the link to form an OSPF adjacency in area 0.0.0.3, using a router ID of 10.72.72.72 for the OSPF process, which should be named "DC2". Ensure that e2/21 will only ever run at a rate of 1Gbps.
- Configure an L3 link over N7K2 e2/23 with the IP address and subnet mask 10.72.72.2 255.255.255.254. Use VLAN 721 to accomplish this. This L3 link must belong to VRF "DC2". Configure the link to form an OSPF adjacency in area 0.0.0.3, using a router ID of 10.72.72.72 for the OSPF process, which should be named "DC2". Ensure that e2/23 will only ever run at a rate of 1Gbps.
- These four ports should all immediately go into a forwarding state when brought up and should go into an errDisabled state if they receive any STP BPDUs.
- Do not modify any configuration on the 3750G switch for this or any other task in this lab.
- Ensure that OSPF converges by whatever means necessary. (A configuration sketch for the first link follows this list.)
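One possible NX-OS sketch for the first bullet (N7K1 e2/29, VLAN 710) is shown below; the other three links follow the same pattern with their own interfaces, VLANs, addresses, areas, and VRFs. This is a sketch only, so verify the features enabled and the OSPF behavior on the /31 against your own convergence testing.

feature ospf
feature interface-vlan
vrf context DC1
!
interface Ethernet2/29
  switchport
  switchport mode access
  switchport access vlan 710
  speed 1000
  spanning-tree port type edge
  spanning-tree bpduguard enable
  no shutdown
!
router ospf DC1
  vrf DC1
    router-id 10.71.71.71
!
interface Vlan710
  vrf member DC1
  ip address 10.71.71.0/31
  ip router ospf DC1 area 0.0.0.5
  no shutdown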

1.3 L2 Trunking and L3 Routed Interfaces


- Configure trunking between N7K1 e2/1 and N7K3 e2/9. Allow only previously created VLANs 120-140 and 200-201 over this link.
- Ensure that N7K1 is the root for all STP instances.
- Configure an L3 routed interface between N7K1 e1/1, using the IP address 10.13.13.0/31, and N7K3 e1/9, using the IP address 10.13.13.1/31. Ensure that this L3 link can participate in the OSPF process and route over the DCI.
- Configure trunking between N7K2 e2/11 and N7K4 e2/20. Allow only previously created VLANs 120-140 and 200-201 over this link.
- Configure an L3 routed interface between N7K2 e1/17, using the IP address 10.24.24.0/31, and N7K4 e1/25, using the IP address 10.24.24.1/31. Ensure that this L3 link can participate in the OSPF process and route over the DCI. (A sketch of the N7K1 side follows this list.)
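A sketch of the N7K1 side, assuming the routed link joins the same OSPF process and area used in Task 1.2; adjust the process, area, and VRF placement to match your design.

interface Ethernet2/1
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 120-140,200-201
  no shutdown
!
spanning-tree vlan 1-4094 root primary     ! make N7K1 root for all instances
!
interface Ethernet1/1
  no switchport
  ip address 10.13.13.0/31
  ip router ospf DC1 area 0.0.0.5
  no shutdown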

1.4 Port Channels


Assuming that more links will be added later, with the desire for minimal traffic disruption, configure the following:
- Configure trunking on port channel 215 from N7K1 to UCS FI-A, and ensure that the same port channel number is used later from the UCS side.
- Configure trunking on port channel 218 from N7K1 to UCS FI-B, and ensure that the same port channel number is used later from the UCS side.
- Ensure that both of these port channels transition immediately to a state of forwarding traffic.
- Ensure that N7K1 is the primary device in LACP negotiation.
- Ensure that the hashing algorithm takes L3 and L4 information for both source and destination into account.
- Trunk only previously created VLANs 120-135 and 200-201 southbound from N7K1 to both FIs. (A sketch for port channel 215 follows this list.)
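A sketch of the N7K1 side for port channel 215 (218 toward FI-B is analogous). The member interface is a placeholder, since the physical ports come from the diagram, and the exact load-balance keyword can vary by NX-OS release.

port-channel load-balance src-dst ip-l4port
lacp system-priority 100                   ! lower value wins, so N7K1 controls LACP decisions
!
interface port-channel 215
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 120-135,200-201
  spanning-tree port type edge trunk       ! forward immediately
!
interface Ethernet2/x                      ! placeholder - member links toward FI-A per the diagram
  switchport
  switchport mode trunk
  switchport trunk allowed vlan 120-135,200-201
  channel-group 215 mode active
  no shutdown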

1.5 HSRP
Using the information from Table 2, configure SVIs on N7K1 and N7K2 for all VLANs that are present on that switch. Assume that a second Nexus 7000 will be added to each Data Center, and with that in mind, go ahead and provision HSRP for all SVIs at both sites, as follows:
- Use the newest version of HSRP supported.
- Make HSRP group numbers correspond with their respective VLAN/SVI numbers.
- Use the virtual IP address of .254 for SVIs on both switches.
- Use the host IP address of .251 for each current SVI on N7K1 (.250 will be used in the future for the other HSRP member at DC1).
- Use the host IP address of .252 for each current SVI on N7K2 (.253 will be used in the future for the other HSRP member at DC2).
- These current SVIs will be the primary HSRP group members even after the other N7K is put into service at each DC; ensure that these SVIs have a higher preference for being the Active forwarder, assuming the others come online with defaults.
- Have the SVIs for VLAN 200 use the fastest possible hello and hold timers.
(A sketch for one SVI follows Table 2.)

Table 2

VLAN    IP Subnet / Mask                 VRF
120     192.168.120.0 255.255.255.0      default
125     192.168.125.0 255.255.255.0      default
130     192.168.130.0 255.255.255.0      default
135     192.168.135.0 255.255.255.0      default
200     192.168.200.0 255.255.255.0      default
201     192.168.201.0 255.255.255.0      default
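A sketch of one SVI on N7K1 (VLAN 120, addressing from Table 2). The remaining SVIs follow the same pattern, the priority value is only an assumption (anything above the default 100 used by the future peer works), and the VLAN 200 SVIs would additionally use the minimum HSRP timers (for example, "timers msec 250 msec 750").

feature hsrp
feature interface-vlan
!
interface Vlan120
  ip address 192.168.120.251/24
  hsrp version 2
  hsrp 120
    ip 192.168.120.254
    priority 110          ! assumption - higher than the default used by the future peer
    preempt
  no shutdown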

1.6 vPC
- Configure vPC between N5K1 and N5K2 with Domain ID 12.
- Configure the peer-link as an LACP trunk over ports e1/1-2, using Port Channel 512, between N5K1 and N5K2 according to the diagram.
- Ensure that any vPC numbers correspond with their designated port channel numbers, as listed in the tasks that follow.
- You are not permitted to create any additional links that are not explicitly pictured in the diagram.
- Ensure that N5K1 is the root for all STP instances; however, you may not configure any spanning-tree priority or root commands globally or at the interface level on N5K1.
- Ensure that N5K1 holds the primary role for the vPC domain.
- Ensure that N5K1 always decides which links are active in any port channel.
- Synchronize all ARP tables.
- Ensure that if our SAN were an EMC VPLEX or VMAX using IP technologies, vPC would not cause any problems with forwarding frames. (A basic sketch of the N5K1 side follows this list.)
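A basic sketch of the N5K1 side (N5K2 mirrors it with the opposite keepalive addressing). The mgmt0 addresses are placeholders, and the root-bridge and LACP-decision requirements are deliberately not shown here because of the restrictions specific to this task.

feature vpc
feature lacp
!
vpc domain 12
  role priority 100                               ! lower than N5K2 so N5K1 is vPC primary
  peer-keepalive destination <N5K2-mgmt0-ip> source <N5K1-mgmt0-ip>   ! placeholders
  ip arp synchronize
  peer-gateway                                    ! avoids issues with arrays that reply to the physical MAC
!
interface Ethernet1/1-2
  switchport mode trunk
  channel-group 512 mode active
!
interface port-channel 512
  switchport mode trunk
  vpc peer-link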

1.7 Port Channels, FEX, and vPC


- Configure trunking on Port Channel 100 from N7K2 to N5K1 and N5K2 according to the diagram, and ensure that the pair of N5Ks are the only ones initiating any port channel protocol negotiation.
- Configure FEX 113 using trunking on Port Channel 113 from N5K1 and N5K2 according to the diagram.
- Configure FEX 123 using trunking on Port Channel 123 from N5K1 and N5K2 according to the diagram. (A sketch of the N5K1 side follows this list.)
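A sketch of the N5K1 side, assuming dual-homed (active/active) FEX attachment as implied by configuring each FEX from both N5Ks. The member ports are placeholders taken from the diagram, and the N7K2 end of port channel 100 would use "mode passive" so that only the N5Ks initiate LACP.

feature fex
!
fex 113
  description FEX113
!
interface port-channel 113
  switchport mode fex-fabric
  fex associate 113
  vpc 113
interface Ethernet1/x                 ! placeholder - fabric links to FEX 113
  switchport mode fex-fabric
  fex associate 113
  channel-group 113
! ...repeat for FEX 123 with Port Channel 123
!
interface port-channel 100
  switchport mode trunk
  vpc 100
interface Ethernet1/y                 ! placeholder - links toward N7K2
  switchport mode trunk
  channel-group 100 mode active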

1.8 Mgmt VM Access


Configure a 1Gbps access link in VLAN 200 to the Management VM on N5K1 e1/11. Ensure that the port forwards traffic immediately and goes into an errDisabled state if it receives any STP BPDUs.

1.9 Access Trunking


Configure trunking on both ports individually coming from SVR1 up to N5K1 e113/1/1 and N5K2 e123/1/1 according to the diagram. For now, trunk only previously created VLANs 120-135 and 200-201 (there may be additional VLANs needed later).

1.10 OTV
- Extend only previously created VLANs 120-135 and 200-201 between Data Centers using OTV.
- Use the OTV site VLAN of 140 on both sides of the DCI. You may use whatever site identifiers you prefer.
- The ISP supports SSM and ASM, and for ASM it provides a PIM RP of 10.10.10.25; use this as your only RP.
- OTV should be authenticated using a hashed value of the word "DCIOTV".
- Any of the SVIs on N7K1 or N7K2 for the VLANs that are extended across the DCI should be able to ping each other.
- Prevent HSRP groups at DC1 from becoming active/standby members of the same HSRP group numbers at DC2, and vice versa.
- Prevent any device ARPing at either DC from getting the virtual MAC address of the HSRP group from the 7K at the opposite side of the DCI.
- When finished, both N7K1 and N7K2 should be able to ping the actual host IP address of the SVI at the opposite data center, traversing the overlay. N7K1 and N7K2 should each also be able to ping the virtual IP address of .254, which should keep traffic local to the site from which the ping originates. (A sketch of the OTV edge configuration follows this list.)
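An OTV sketch for whichever N7K acts as the OTV edge at DC1 (the DC2 side is analogous). The join interface, site identifier, and multicast groups below are placeholders or assumptions; only the site VLAN 140, the extended VLAN list, and the key string "DCIOTV" come from the task. The HSRP isolation (MAC/route filtering plus a VACL) is not shown here.

feature otv
!
otv site-vlan 140
otv site-identifier 0x1                       ! any unique value per site
!
key chain OTV-KEY
  key 1
    key-string DCIOTV
!
interface Overlay1
  otv join-interface Ethernet1/9              ! placeholder - DCI-facing uplink (needs an IP and IGMPv3)
  otv control-group 239.1.1.1                 ! placeholder ASM group; the ISP RP is 10.10.10.25
  otv data-group 232.1.1.0/28                 ! placeholder SSM range
  otv extend-vlan 120-135,200-201
  otv isis authentication-type md5
  otv isis authentication key-chain OTV-KEY
  no shutdown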

2. Data Center Storage Networking


2.1 VSANs and FCoE VLANs
Create VSAN 10 on MDS1, MDS2, N5K1, and N5K2. Create VSAN 20 only on MDS1, MDS2, and N5K2. Create VLAN 10 to carry FCoE traffic for VSAN 10 on N5K1 and N5K2. Create VLAN 20 to carry FCoE traffic for VSAN 20 only on N5K2. (A sketch for N5K2 follows.)
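A sketch of the N5K2 side; N5K1 would carry only VSAN 10 and VLAN 10, while MDS1 and MDS2 only need the VSANs themselves (no FCoE VLAN mapping).

feature fcoe
!
vsan database
  vsan 10
  vsan 20
!
vlan 10
  fcoe vsan 10
vlan 20
  fcoe vsan 20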

2.2 UCS SAN Connectivity


Configure the FC links on MDS1 to both UCS FIs as pictured in the diagram. Do not use any port channeling or trunking. Configure the links coming from FI-A to MDS1 to use VSAN 10. Configure the links coming from FI-B to MDS1 to use VSAN 20.

2.3 E Port Trunking


- N5K1 should be configured as an E trunk to N5K2 and should trunk only VSAN 10 over SAN Port Channel 256, using interfaces fc1/26 and fc1/27.
- Configure a trunk between N5K2 fc1/28 and MDS2 fc1/3 that trunks only VSANs 10 and 20.
- N5K2 fc1/32 should provide connectivity to the SAN array for VSAN 10.
- MDS2 fc1/7 should provide connectivity to the SAN array for VSAN 20. (A sketch of the SAN port channel follows this list.)
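A sketch of the N5K1 side of the SAN port channel; N5K2 mirrors it on its own fc1/26-27, and the N5K2-to-MDS2 trunk on fc1/28 follows the same pattern with VSANs 10 and 20 allowed.

interface san-port-channel 256
  switchport mode E
  switchport trunk mode on
  switchport trunk allowed vsan 10
!
interface fc1/26-27
  switchport mode E
  switchport trunk mode on
  channel-group 256
  no shutdown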

2.4 Cisco C200 P81E (VIC) CNA FLOGIs


Configure FCoE for Svr1 so that it logs in to VSAN 10 over FEX 113. Configure FCoE for Svr1 so that it logs in to VSAN 20 over FEX 123. Svr1 is set up to FLOGI to both fabrics.

2.5 FCIP
- Configure FCIP between MDS1 and MDS2 on interfaces G1/1 and G1/2 on each switch.
- Use the IP address of 12.12.12.1/30 on MDS1 G1/1 and 12.12.12.2/30 on MDS2 G1/1 over FCIP Profile 10 and interface FCIP 10 on both sides.
- Use the IP address of 12.12.12.5/30 on MDS1 G1/2 and 12.12.12.6/30 on MDS2 G1/2 over FCIP Profile 20 and interface FCIP 20 on both sides.
- The 3750G switch is already configured properly; do not connect to it at all.
- Configure SAN Port Channel 50 over both of these links and trunk only VSAN 10 and VSAN 20 over it.
- Optimize FCIP on MDS1 and MDS2 to account for optimum TCP window scaling based on the approximate actual RTT (within 20% variance is allowed).
- Allow FCIP to monitor the congestion window and increase the burst size to the maximum allowed.
- Ensure that there is no fragmentation of FCIP packets over the link. (A sketch of the MDS1 side follows this list.)
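A sketch of the MDS1 side for the first FCIP tunnel; the G1/2/Profile 20 tunnel and the MDS2 side mirror it. The bandwidth and RTT values are placeholders (measure the real RTT, for example with "ips measure-rtt", and size accordingly), and the exact congestion-window and PMTU keywords should be checked against your MDS release.

feature fcip
!
interface gigabitethernet 1/1
  ip address 12.12.12.1 255.255.255.252
  no shutdown
!
fcip profile 10
  ip address 12.12.12.1
  tcp max-bandwidth-mbps 1000 min-available-bandwidth-mbps 800 round-trip-time-ms 1   ! placeholders
  tcp cwm burstsize 100        ! congestion window monitoring with a larger burst size
  tcp pmtu-enable              ! path MTU discovery to avoid fragmentation
!
interface fcip 10
  use-profile 10
  peer-info ipaddr 12.12.12.2
  channel-group 50
  no shutdown
!
interface port-channel 50
  switchport trunk allowed vsan 10
  switchport trunk allowed vsan add 20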

2.6 Zoning
- Ensure that MDS1 appears to the fabric as domain 0x61 for VSANs 10 and 20.
- Ensure that MDS2 appears to the fabric as domain 0x62 for VSANs 10 and 20.
- Ensure that N5K2 appears to the fabric as domain 0x52 for VSANs 10 and 20.
- Ensure that N5K1 appears to the fabric as domain 0x51 for VSANs 10 and 20.
- Zone according to the information given in Table 3. You may only make zoning changes for both Fabric A and Fabric B from MDS1.
- Zone so that "ESXi1", "ESXi2", and "ESXi3" all have access to their FC-TARGET-SAN-x for the appropriate Fabrics (fc0's to Fabric A; fc1's to Fabric B).
- Fabric A uses VSAN 10. Fabric B uses VSAN 20.
- Zoning for Fabric A should use the zone name "ZONE-A". Zoning for Fabric B should use the zone name "ZONE-B".
- The zoneset for Fabric A should be named "ZoneSet_VSAN10". The zoneset for Fabric B should be named "ZoneSet_VSAN20".
- Aliases must be created according to Table 3 and must be used in the zoning configuration. (A sketch follows Table 3.)

Note:
Many pWWNs are the same below. They are sorted first by FC-4 Type and then by Fabric.

Table 3

Fabric  pWWN                     LUN  Description        Alias            FC-4 Type
A       20:aa:00:25:b5:01:01:01  N/A  ESXi1 vHBA "fc0"   ESXi1-A-fc0      Init
A       20:aa:00:25:b5:01:01:02  N/A  ESXi2 vHBA "fc0"   ESXi2-A-fc0      Init
A       20:00:d4:8c:b5:bd:46:0e  N/A  ESXi3 vHBA "fc0"   ESXi3-A-fc0      Init
B       20:bb:00:25:b5:01:01:01  N/A  ESXi1 vHBA "fc1"   ESXi1-B-fc1      Init
B       20:bb:00:25:b5:01:01:02  N/A  ESXi2 vHBA "fc1"   ESXi2-B-fc1      Init
B       20:00:d4:8c:b5:bd:46:0f  N/A  ESXi3 vHBA "fc1"   ESXi3-B-fc1      Init
A       21:03:00:1b:32:64:5e:dc  0    ESXi1 Boot Volume  FC-TARGET-SAN-A  Target
A       21:03:00:1b:32:64:5e:dc  0    ESXi2 Boot Volume  FC-TARGET-SAN-A  Target
A       21:03:00:1b:32:64:5e:dc  1    FC_Datastore 1     FC-TARGET-SAN-A  Target
A       21:03:00:1b:32:64:5e:dc  2    FC_Datastore 2     FC-TARGET-SAN-A  Target
B       21:01:00:1b:32:24:5e:dc  0    ESXi1 Boot Volume  FC-TARGET-SAN-B  Target
B       21:01:00:1b:32:24:5e:dc  0    ESXi2 Boot Volume  FC-TARGET-SAN-B  Target
B       21:01:00:1b:32:24:5e:dc  1    FC_Datastore 1     FC-TARGET-SAN-B  Target
B       21:01:00:1b:32:24:5e:dc  2    FC_Datastore 2     FC-TARGET-SAN-B  Target
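A partial sketch from MDS1 for Fabric A (VSAN 10); Fabric B repeats the pattern in VSAN 20 with the "-B" aliases, ZONE-B, and ZoneSet_VSAN20. The static domain IDs are entered in decimal (0x61 = 97, 0x62 = 98, 0x52 = 82, 0x51 = 81) and normally require a domain restart or VSAN flap to take effect; only the ESXi1 initiator is shown here, and each switch configures its own domain ID.

fcdomain domain 97 static vsan 10
fcdomain domain 97 static vsan 20
!
fcalias name ESXi1-A-fc0 vsan 10
  member pwwn 20:aa:00:25:b5:01:01:01
fcalias name FC-TARGET-SAN-A vsan 10
  member pwwn 21:03:00:1b:32:64:5e:dc
!
zone name ZONE-A vsan 10
  member fcalias ESXi1-A-fc0
  member fcalias FC-TARGET-SAN-A
!
zoneset name ZoneSet_VSAN10 vsan 10
  member ZONE-A
zoneset activate name ZoneSet_VSAN10 vsan 10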

3. Unified Computing
3.1 UCS Initialization
Initialize both UCS Fabric Interconnects (FIs). Fabric Interconnect A should use the IP address of 192.168.101.201/24. Fabric Interconnect B should use the IP address of 192.168.101.202/24. Both Fabric Interconnects should use a VIP of 192.168.101.200.

3.2 SAN Uplinks and VSANs


- Disable all confirmation messages for creation and deletion of objects.
- Configure individual FC uplinks as instructed earlier in the Storage Networking section and according to the diagram. Do not use any port channeling or trunking.
- Create VSAN 10, name it "VSAN10", and ensure that it uses VLAN 10 for FCoE.
- Create VSAN 20, name it "VSAN20", and ensure that it uses VLAN 20 for FCoE.
- Configure links coming from FI-A to MDS1 to use VSAN 10.
- Configure links coming from FI-B to MDS1 to use VSAN 20.
- Disable any unused FC ports according to the diagram.

3.3 LAN Uplinks and VLANs


- Configure port channels for all links from the FIs to the IOM/FEXs in the UCS chassis according to the diagram.
- Configure a port channel from each FI to N7K1 according to the diagram, and use the same port channel number as previously instructed from the N7K side.
- Create VLANs 120-135 and 200-201 and VLAN 150 from Table 1, with correct names, on both UCS FIs (only the ones in the table).
- Only allow the BACKUP VLAN to traverse the 1Gbps ports designated in the diagram toward the 3750G switch, and ensure that it is in an UP state.

3.4 Disk Policies


Create a hard disk policy named "MAXRAID" that specifies a method that both mirrors and then stripes local disks. Ensure that if any service profile ever uses this policy and tries to associate with a blade whose hard drives are not already provisioned with this RAID method, the association will fail. Do not associate this policy with any service profiles.

3.5 Pools
- Create a UUID pool called "Global-UUIDs" and allocate suffixes from the range 0001-000000000101 to 0001-00000000010f.
- Create a MAC address pool called "Global-MACs" ranging from 00:25:b5:0a:0a:01 to 00:25:b5:0a:0a:11.
- Create an nWWN pool called "Global-nWWNs" ranging from 20:ff:00:25:b5:01:01:01 to 20:ff:00:25:b5:01:01:11.
- Create a Management IP address pool ranging from 192.168.101.210 to 192.168.101.219 with the default gateway of 192.168.101.1.

3.6 Service Profiles


Configure a service profile named "ESXi1" with the following values. Anything changed in this service profile template should never affect any service profiles instantiated from it.
- UUIDs should be dynamically allocated from the Global-UUIDs pool.
- 2 vHBAs should be created with the following information:
  - Name them "fc0" and "fc1".
  - "fc0" must be assigned the initiator pWWN of 20:aa:00:25:b5:01:01:01.
  - "fc1" must be assigned the initiator pWWN of 20:bb:00:25:b5:01:01:01.
  - Both vHBAs must be able to dynamically obtain nWWNs from the Global-nWWNs pool.
  - Neither of these vHBAs should be allowed to re-attempt FLOGIs more than 3 times.
- Configure a specific boot policy to boot from SAN with the following information:
  - "fc0" should attempt first to boot from Fabric A using the pWWN for "ESXi1 Boot Volume" in Table 3.
  - "fc1" should attempt first to boot from Fabric B using the pWWN for "ESXi1 Boot Volume" in Table 3.
- 5 vNICs should be created with the following information:
  - Name them "eth0", "eth1", "eth2", "eth3", and "eth4".
  - "eth0" and "eth3" should only be allowed to ever use Fabric A.
  - "eth1" and "eth4" should only be allowed to ever use Fabric B.
  - "eth2" primarily uses Fabric A, but should automatically use Fabric B if all uplinks on FI-A are down.
  - MAC addresses must be allocated dynamically from the Global-MACs pool.
  - All VLANs should be allowed on all vNICs except for VLAN 1 and VLAN 150; these should not be allowed on any vNICs. All hosts will explicitly tag their VLAN IDs.
- Any changes to the service profile requiring a reboot should force the administrator to manually allow it.
- Any service profile created from this template should not automatically associate with any blades in the chassis.
- Only allow this service profile to ever associate with blades that have a Palo mezzanine adapter.
- Do not allow the blade to automatically boot after this service profile is associated.
- Ensure that when booting, the KVM console viewer can see the FC disk that attaches directly after the FC drivers load.
- Configure the management IP addresses to be dynamically assigned from the global pool.
- Manually associate this profile with blade 1 and boot the blade.

3.7 Cloning Service Profiles


- Create a clone of the previous service profile and call it "ESXi2".
- Change what is necessary for the vHBAs to be set up as follows: "fc0" must be assigned the initiator pWWN of 20:aa:00:25:b5:01:01:02; "fc1" must be assigned the initiator pWWN of 20:bb:00:25:b5:01:01:02.
- Ensure that this service profile always uses links fc1/30 on Fabric A and fc1/28 on Fabric B for its SAN traffic.
- Manually associate this profile with blade 2 and boot the blade.

3.8 Traffic Monitoring


Measure traffic in a policy called "Over_3Gbps" on vNIC "eth2" in Service Profile "ESXi1", and raise an informational alert if the traffic received by the vNIC rises above 3Gbps. Do not change the collection interval for any device in the system.

4. Data Center Virtualization


4.1 VSM and VEM Connectivity
- Ensure reachability to both VSMs that are running on both UCS blades.
- Ensure that the VEMs running on both UCS blades insert into the Nexus 1000v chassis properly.
- Ensure that service profile ESXi1 shows up as VEM 4 and service profile ESXi2 shows up as VEM 5.
- Do not worry about the UCS C200 VEM for this lab.

4.2 N1Kv QoS


Ensure that all traffic coming from vNIC "eth2" on both blades is marked with CoS 4 only by the use of Nexus1000v, and that the UCS trusts that marking. You are not permitted to attach any policy directly to that interface.
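A sketch of the Nexus 1000v side, showing only the QoS attachment. The port-profile name is hypothetical (use whichever uplink port-profile actually carries the traffic from vNIC "eth2"), and the UCS side would additionally need a QoS policy/system class that trusts host CoS markings rather than re-marking them.

policy-map type qos MARK-COS4
  class class-default
    set cos 4
!
port-profile type ethernet UPLINK-ETH2        ! hypothetical - the profile bound to vNIC eth2
  service-policy type qos output MARK-COS4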

