Troubleshooting
ACI Configuration Policies
Daniel Pita, ACI Solutions TAC
BRKACI-2101
Agenda
• Introduction
• Quick Review of the Object Model
• Flow of Configuration
• Verification at the Different Stages of
Configuration
• Case Studies and Troubleshooting Methodology
• Live Troubleshooting Activity
• Final Q & A
• Summary / Closing Remarks
Like all new tools, ACI has emerged from a need: a need to simplify data centers, abstract the network, and focus on the applications that run in a data center.
conf t
int e1/25
switchport mode trunk
switchport trunk allowed vlan 3,4
no shut
Endpoint Verification
(Diagram: Tenants ACME-CL and ACME-AP, each containing Application Profiles and EPGs such as EPG1)
Endpoint Verification - CLI
show vlan extended
• First (top) section shows the PI VLAN ID and which Tenant:AP:EPG it maps to
• Also shows which interfaces the VLAN is configured on
• Second (bottom) section shows the relationship/translation from the PI VLAN to the system VXLAN or access encapsulation (on-the-wire) VLAN
show system internal eltmc info vlan brief
• This command clearly shows the relationship between the BD_VLAN and the FD_VLAN and their respective attributes, such as the PI VLAN, the BCM HW VLAN, the access encap (on the wire), and the VXLAN VNID
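To make the translation chain concrete, here is a minimal illustrative model (my own sketch, not ACI code) of the kind of table `show system internal eltmc info vlan brief` reports. All numeric values are made-up examples; only the relationships between the columns are taken from the slide.

```python
# Illustrative model of eltmc VLAN entries: each BD_VLAN/FD_VLAN row ties
# a PI VLAN and BCM HW VLAN to an access encap and a VXLAN VNID.
from dataclasses import dataclass

@dataclass
class VlanEntry:
    vlan_type: str      # "BD_VLAN" or "FD_VLAN"
    pi_vlan: int        # platform-independent (internal) VLAN ID
    hw_vlan: int        # BCM hardware VLAN
    access_encap: int   # on-the-wire VLAN (FD_VLAN only); 0 if N/A
    vnid: int           # VXLAN network identifier

def wire_vlan_to_vnid(entries, wire_vlan):
    """Find which VNID a given on-the-wire VLAN is normalized into."""
    for e in entries:
        if e.vlan_type == "FD_VLAN" and e.access_encap == wire_vlan:
            return e.vnid
    return None

# Example table (values are hypothetical):
table = [
    VlanEntry("BD_VLAN", pi_vlan=29, hw_vlan=30, access_encap=0,   vnid=15826915),
    VlanEntry("FD_VLAN", pi_vlan=30, hw_vlan=31, access_encap=356, vnid=8392),
]

print(wire_vlan_to_vnid(table, 356))  # 8392
```

The lookup mirrors VLAN normalization: the on-the-wire encap is only locally significant, and the fabric forwards on the VNID.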
Path of the Packet
• EPG Classification
• VLAN Normalization
Scenario 1: Same EPG, Same Leaf
• Same EPG, same leaf is L2-switched in the front-panel (BCM) ASIC
• Same HW VLAN, no policy enforcement in the ALE
• Regular L2 switch behavior
(Diagram: EP1 [IP .4.100, MAC 5B89, VLAN 356] and EP2 [IP .4.101, MAC 5B90, VLAN 356], both behind the BCM ASIC of one leaf)
Scenario 2: Same EPG, Different Leaf
• Same EPG, different leaf needs to go to the ALE for transmission to the destination leaf
• Same BD VXLAN
• Different HW VLAN
• Different PI VLAN
• Why the ALE? The ALE is the ASIC that understands policy!
(Diagram: EP1 and EP2 on different leaves; the frame [Payload | L3i | L2i | iVXLAN | L3o | L2o] is carried through an ALE-to-ALE tunnel between the leaves, each with its own BCM front-panel ASIC)
In ACI however…
ACI Provider/Consumer
• Gotcha!
• Exactly the same, assuming a contract is in place between the web-client EPG and the web-server EPG!
• The difference is ACI is granular enough to enforce directionality of the flow!
(Diagram: an HTTP Contract contains an HTTP Subject with an HTTP Filter. The Web-Client EPG consumes port 80; the Web-Server EPG provides port 80. Traffic flows from source port X to destination port 80. EP1 [IP .4.100, MAC 5B89, VLAN 356] and EP2 [IP .13.100, MAC DCF3, VLAN 390] sit behind the same leaf's BCM ASIC)
Scenario 4: Different EPG, Different Leaf
• Different EPG, different leaf needs to go to the ALE for transmission to the destination leaf
• Different VXLAN
• Policy enforcement generally happens on the ingress leaf when the destination EP is known
(Diagram: EP1 and EP2 on different leaves, connected through an ALE-to-ALE tunnel; each leaf has its own BCM front-panel ASIC)
(Diagram: APIC [PM] to switch [PE / NX-OS])
• What endpoints are learned on this leaf? What EPG do they belong to?
• show endpoint [vrf <name> | ip | mac | int | vlan | detail]
• If you know the IP or MAC of a particular endpoint:
• show system internal epmc endpoint [ip|mac] [x.x.x.x|0.0.0]
• moquery -c fvCEp | grep -A 10 -B 8 "[IP]"
• Run this command on the APIC
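As a small aid, the moquery one-liner above can be parameterized. This is my own hypothetical helper, not a Cisco tool; the IP used below is just an example.

```python
# Build the moquery pipeline from the slide for finding an endpoint's
# fvCEp object on the APIC by IP address.

def endpoint_moquery(ip, before=8, after=10):
    """Return the shell command to locate an endpoint in the MIT by IP."""
    return f'moquery -c fvCEp | grep -A {after} -B {before} "{ip}"'

print(endpoint_moquery("10.0.4.100"))
# moquery -c fvCEp | grep -A 10 -B 8 "10.0.4.100"
```

The grep context lines (-A/-B) simply keep the surrounding fvCEp attributes, such as the EPG DN, visible around the matched IP.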
GUI Verification
• Fabric > Inventory
• Holds all the "show commands"
• In reality, Fabric > Inventory reads the objects (mo and summary) on the switches and populates an HTML5 page
• Pro-tip: the CLI holds all the same information found in the GUI
• /mit/ on the APIC or the switches holds the actual model and objects
• /aci/ on the APIC or switches follows the same structure as the GUI for easier navigation and naming!
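Since /mit mirrors the object model, a DN translates directly to a directory path. A minimal sketch (my own helper, not an ACI utility; the tenant DN below is an example):

```python
# The /mit filesystem on an APIC or leaf mirrors the MIT, so a DN maps
# straight to a directory; each object directory holds property files
# such as "summary" and "mo".

def dn_to_mit_path(dn):
    """Convert a distinguished name into its /mit directory path."""
    return "/mit/" + dn

def summary_file(dn):
    """Path of the human-readable summary for an object."""
    return dn_to_mit_path(dn) + "/summary"

print(summary_file("uni/tn-ACME-CL"))
# /mit/uni/tn-ACME-CL/summary
```

`cat`-ing the summary file on the CLI shows the same attributes the GUI renders for that object.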
GUI: Fabric > Inventory
vPC View From Under Fabric > Inventory
Visore and moquery
• Visore and moquery serve the same purpose, just a different front-end
• Visore is via HTTP/HTTPS through the browser
• https://<apic-address>/visore.html
• https://<switch-address>/visore.html
• Moquery is a CLI command that searches the model for a specific object
• Used on the APIC or switches
• Takes flags and arguments
Traditional CLI
• show port-channel summary
• show lacp neighbor
• cat the summary file under sys/aggr-[poX] in /mit
vPC with x members
• Two methods
• Wizard for side A and wizard for side B
• Wizard for side A and manual configuration for side B, reusing the switch selector
• Create a new interface selector and port block, plus a new vPC interface policy group, and associate it to the switch selector
• Pro-tip: in BCM output, xe ports are front-panel ports and are always offset by 1, because BCM starts counting at 0 whereas the front panel and GUI start at 1. In this case xe19 references port 1/20
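The off-by-one rule above can be captured in two tiny conversion helpers (my own sketch for illustration):

```python
# BCM "xe" ports count from 0; the front panel and GUI count from 1.

def xe_to_front_panel(xe_name):
    """Map a BCM port name like 'xe19' to its front-panel port number."""
    return int(xe_name[2:]) + 1

def front_panel_to_xe(port):
    """Map a front-panel port number (e.g. eth1/20) back to its BCM name."""
    return f"xe{port - 1}"

print(xe_to_front_panel("xe19"))   # 20
print(front_panel_to_xe(20))       # xe19
```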
What We Confirmed
sys/phys-[eth1/13]
topology/pod-1/protpaths-101-103/pathep-[ACME-pod3-ucsb-A-int-pol-gro]
topology/pod-1/protpaths-101-103/pathep-[ACME-pod3-ucsb-B-int-pol-gro]
Contracts
Contract Model
(Diagram: a Contract contains Subjects, which contain Filters; the consumers and providers can be EPGs, External L3 EPGs, or External L2 EPGs)
Verification
• Contracts go directly past ‘GO’
• After the logical object is created via the APIC API (the GUI in this case), a concrete object is created and the rules are programmed into hardware on the leaf switches
• The flow is NGINX -> APIC PM -> leaf PE -> hardware
• Object is vzBrCP
• Consumer EPG object is vzConsDef
• Provider EPG object is vzProvDef
• Found in /mit/uni/tn-<tenant-name>
• Switch object is in actrlRule
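The bullets above can be condensed into a lookup of which class to query, and where. This table is assembled from the slide; the helper itself is hypothetical.

```python
# Which contract-related class lives where, per the bullets above.
CONTRACT_OBJECTS = {
    "vzBrCP":    ("APIC", "the contract itself (logical object)"),
    "vzConsDef": ("APIC", "consumer EPG resolution under /mit/uni/tn-<tenant-name>"),
    "vzProvDef": ("APIC", "provider EPG resolution under /mit/uni/tn-<tenant-name>"),
    "actrlRule": ("leaf", "the rules programmed on the switch"),
}

def where_to_query(cls):
    """Suggest the moquery command and the node it should run on."""
    node, desc = CONTRACT_OBJECTS[cls]
    return f"moquery -c {cls}  # run on the {node}: {desc}"

print(where_to_query("actrlRule"))
```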
Contract Logical Object
EPGs
• Confirm EPG and context is deployed on a switch
• fvEpP is the concrete object on the switch that relates to the logical fvAEPg (EPG)
• The APIC validates whether an EPG is deployable onto switches
• BD associated, context configured on that BD
• Otherwise, faults are raised on the EPG/BD
• The leaf validates after the APIC and before deployment to hardware
• Path endpoint validation (port, PC, vPC)
• VLAN encapsulation validation
(Diagram: EPG, VMM Domain, AAEP, Port Group/VM Network, VM)
VMM Integration
• Allows ACI and the APICs insight into VMMs and allows dynamic configuration of Virtual
Machine networks
• “Easy-Button” for provisioning networks to virtual machines
• VMM Domain policy creates a DVS with the name of the VMM Domain policy
• Objects are as follows:
• VMM Domain = vmmDomP
• Controller = vmmCtrlrP
• EPG = infraRtDomAtt (with a target class of fvAEPg)
• AAEP = infraRtDomP (with a target class of infraAttEntityP)
• This is the AAEP under Fabric > Access Policies that is associated to an interface policy group, which in turn is associated to the interfaces where the hypervisors are connected
• VLAN Pool = infraRsVlanNs (with a target class of fvnsVlanInstP)
• Port Group = vmmEpPD. Important information available with this object.
In Reality…
(Diagram: object relationships among vmmDomP, vmmCtrlrP, infraRsVlanNs, compHpNic, compRsHv, fvEpP, fabricPathEp, compEpPConn, and fvAEPg, tying together the port group, the hypervisor, and the VM)
Object Verification
• hvsAdj is critical. It is a hypervisor adjacency established through a discovery
protocol such as CDP or LLDP
• Without this object, leaf interfaces will not be programmed dynamically.
• hvsAdj is tied to fabricPathEp, which is in turn connected to fvDyPathAtt.
• Dynamic Path Attachment is how VMM deployment works.
• hvsAdj is found on the APIC:
• /mit/comp/prov-Vmware/ctrlr-[<vmm-domain-name>]-
<controller-name>/hv-host-#/
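The /mit path above can be assembled programmatically; this is an illustrative helper of my own, and the angle-bracket placeholders remain placeholders to fill in with your VMM domain name, controller name, and host number.

```python
# Build the APIC /mit path where hvsAdj objects live for a given host,
# following the path quoted on the slide.

def hvs_adj_path(vmm_domain, controller, host_num):
    """Return the /mit directory containing hvsAdj for one hypervisor."""
    return (f"/mit/comp/prov-Vmware/ctrlr-[{vmm_domain}]-{controller}"
            f"/hv-host-{host_num}/")

print(hvs_adj_path("<vmm-domain-name>", "<controller-name>", 1))
```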
hvsAdj Child Objects
Case Study:
Adjacency Issues
Adjacency Discovery Issues
• Establishing adjacencies is very important; a failed adjacency can hinder deployment.
• Problems arise when the NIC does not support LLDP (on by default on ACI leaf ports). If the discovery information is not exchanged, adjacencies will fail and faults will trigger.
• UCS-B, with its FIs in the middle, adds more steps to adjacency discovery
• The FIs also do not support LLDP down to the IOMs
• CDP must be used from the blade up to the FI
• Resolved with two options
1. Disable LLDP and enable CDP on the ports where the FIs connect when using a UCS-B
2. Utilize the AAEP vSwitch Override policy
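The protocol choice above can be summarized as a small decision function. This is my own condensation of the slide's reasoning, not Cisco logic:

```python
# Pick the discovery protocol that lets a leaf learn a hypervisor
# adjacency (hvsAdj), per the constraints described above.

def discovery_protocol(behind_ucs_b_fi: bool, nic_supports_lldp: bool) -> str:
    if behind_ucs_b_fi:
        # FIs do not support LLDP down to the IOMs, so CDP must be
        # used from the blade up to the FI.
        return "CDP"
    if nic_supports_lldp:
        return "LLDP"   # on by default on ACI leaf ports
    return "CDP"        # fall back when the NIC cannot speak LLDP

print(discovery_protocol(behind_ucs_b_fi=True, nic_supports_lldp=True))   # CDP
print(discovery_protocol(behind_ucs_b_fi=False, nic_supports_lldp=True))  # LLDP
```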
Override Policy Use Case
• Added a new blade and new uplinks from the FIs to the fabric, again via a vPC. Decided this blade would have its own DVS, so created a new VMM domain policy pointing to the same vCenter/datacenter
• Interface policy group used all defaults except LACP
FAULTS!?
(Diagram: compCtrlr with children compEpPD, compHv, and compVm; compHpNic, compRsHv, fvEpP, fabricPathEp, compEpPConn, and fvAEPg relate the port group, the hypervisor, and the VM)
DVS discovery protocol
• Why is this happening now?
Problem
• Since the interface policy group was left at defaults for LLDP and CDP, this means:
• LLDP is on by default
• CDP is off by default
• The DVS was created using the active discovery protocol on the interfaces as its discovery protocol type
• Resolution:
• Change the interface discovery protocols on the two interface policy groups
• Use the override policy on the AAEP
• We will proceed with this method
Configuring the Override Policy
Fabric > Access Policies > Global Policies > AAEP -> Right-click or Actions menu -> Config vSwitch Policies -> Override dialog box -> DVS is updated
Most Importantly…
• Faults are gone!
Rack Server Adjacency
Review
• After configuration:
• Check for faults in related objects and recently created objects
• Use show commands to confirm deployment/instantiation
• If show command output is not what is expected, use the sequential flow of the model to help narrow down the issue
• Navigate the model on the APIC and on the Leafs
• Moquery or Visore for objects of importance
Demo
Summary / Closing Remarks
• ACI gives you the rope
• Need to learn how to use it and understand its
potential.
• Thank you
But wait! There’s more!
(Diagram: ACME-AP and ACME-PN, with BD1/EPG1 and BD2/EPG2)
Access Policies
• Switch Selector
• Block: 101-103
• Interface Selector A
• Port 1/20
• Interface Selector B
• Port 1/26
• Switch Selector
• Block: 104
• Interface Selector 1
• Port 1/13
• VLAN Pool 1
• Block: 356-406
• VLAN Pool 2
• Block: 3440-3450
• AAEP
VMM Domain
• Two VMM Domains
• ACME-VMM-1: one UCS-C rack server
• ACME-VMM-2: one blade of a UCS-B series chassis
Fabric Access Policies
Single Attached Hypervisor Host Configuration
Use cases
• The vSwitch does not support LACP, so use a port-channel policy of mode ON, not ACTIVE
• The static path under the EPG references the vSwitch vPC/PC
• A "paths" prefix in the DN represents a port channel when creating a static path
• A "protpaths" prefix in the DN represents a vPC when creating a static path
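The prefix rule above lends itself to a one-line classifier (an illustrative helper of my own; the DN tested below is taken from the "What We Confirmed" slide):

```python
# In a static path target DN, a "protpaths-..." node means a vPC while a
# "paths-..." node means a port or port channel.

def static_path_kind(dn: str) -> str:
    """Classify a static path target DN as vPC or port channel."""
    if "/protpaths-" in dn:
        return "vPC"
    if "/paths-" in dn:
        return "port/port-channel"
    return "unknown"

print(static_path_kind(
    "topology/pod-1/protpaths-101-103/pathep-[ACME-pod3-ucsb-A-int-pol-gro]"))
# vPC
```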
Static Path
EPG
What are Domains and why do I need them?
• Domains tie the access policy model to the tenant/EPG model.
• When a domain is associated, its VLANs and interfaces become available to the EPG
• Static paths and static VLAN pools work together with domains to properly program interfaces
• It is imperative to have domains associated to EPGs when mixing dynamic VMM domains and any other domains
UCS-B and port group hashing
• Known issue between UCS-B FIs and vCenter
• The problem also exists in ACI
• Solved with the vSwitch override for LACP
• Use MAC pinning so that port groups in vCenter are created as "Route based on originating virtual port"