© 2015 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information.
Page 1 of 49
Contents
Introduction .............................................................................................................................................................. 4
What You Will Learn ............................................................................................................................................. 4
Prerequisites ......................................................................................................................................................... 4
Audience ............................................................................................................................................................... 4
Disclaimer ............................................................................................................................................................. 4
Why Implement ACI ................................................................................................................................................. 4
What Problems Are We Solving? .......................................................................................................................... 4
ACI for the Commercial Data Center .................................................................................................................... 4
Converting Cisco Nexus 9000 NX-OS Mode to ACI Mode ................................................................................... 6
Data Center Design Evolution ................................................................................................................................ 6
Traditional Data Center Design ............................................................................................................................. 6
Commercial Collapsed Designs ....................................................................................................................... 7
Layer 2 Versus Layer 3 Implications ................................................................................................................ 8
The Cisco Layer 2 Design Evolution ..................................................................................................................... 9
Virtual Port Channels ....................................................................................................................................... 9
Server Network Interface Card (NIC) Teaming Design and Configuration ..................................................... 10
Virtual Overlays .............................................................................................................................................. 11
Spine-Leaf Data Center Design .......................................................................................................................... 12
Overlay Design ................................................................................................................................................... 13
ACI Fabric Overlay .............................................................................................................................................. 14
Sample Commercial Topologies .......................................................................................................................... 15
Cisco Nexus 9500 Product Line .......................................................................................................................... 15
Cisco Nexus 9300 Product Line .......................................................................................................................... 16
Design A: Two Spines and Two Leaves ............................................................................................................. 17
Design B: Two Spines and Four Access Leaves ................................................................................................ 19
Design C: Four Aggregation and Four Access Switches - Spine-Leaf ................................................................ 19
Integration into Existing Networks ....................................................................................................................... 20
Fabric Extender Support ..................................................................................................................................... 20
Storage Design ................................................................................................................................................... 22
Layer 4 - 7 Integration ......................................................................................................................................... 24
End-State Topology ............................................................................................................................................ 24
Example ACI Design and Configuration .............................................................................................................. 24
Validated ACI Physical Topology ........................................................................................................................ 25
Validated ACI Logical Topology .......................................................................................................................... 25
ACI Tenant Tab Object Review........................................................................................................................... 26
Tenants .......................................................................................................................................................... 26
Private Networks ............................................................................................................................................ 26
Bridge Domains .............................................................................................................................................. 27
Application Profiles ......................................................................................................................................... 27
Endpoint Groups (EPGs) ................................................................................................................................ 27
Contracts ........................................................................................................................................................ 27
Domains ......................................................................................................................................................... 27
Tenant Tab End-State Configuration Snapshot .................................................................................................. 28
SharePoint Application Profile Policy View ..................................................................................................... 28
EPG Domain Association ............................................................................................................................... 30
Verifying Discovered Endpoints in an EPG .................................................................................................... 31
Tenant Networking ......................................................................................................................................... 32
Policy Enforcement through Contracts ........................................................................................................... 34
External Routed Networks .............................................................................................................................. 35
End-State Tenant Tab Configuration .............................................................................................................. 39
ACI Fabric Tab - Fabric Policies ......................................................................................................................... 39
Enabling External Routing in the Fabric ......................................................................................................... 39
Introduction
What You Will Learn
Cisco Application Centric Infrastructure (ACI) can easily be deployed and managed in any size commercial data
center, even with an IT staff of one. You will learn the value of ACI for the commercial data center, and understand
how a multi-tier application is built using the ACI policy model. This white paper shows a real, Cisco-validated ACI topology and walks you through the components of a complete, multi-tier application deployment.
Several ACI features will also be highlighted.
Prerequisites
You should have a basic understanding of ACI and the policy model. However, brief concept reviews and links to
other resources are included in this document if you are unfamiliar with a concept.
Audience
This white paper is intended for sales engineers, field consultants, professional services, IT managers, partner
engineers, and customers who want to take advantage of an infrastructure built to deliver IT efficiency and support
rapid application deployment. It is intended to benefit commercial-sized and small data centers.
Disclaimer
Always refer to the Cisco ACI website for the most recent information on software versions, supported
configuration maximums, and device specifications, as information may evolve from the time of publication of this
paper.
Why Implement ACI
What Problems Are We Solving?
ACI provides accelerated, cohesive deployment of applications across network and Layer 4 - 7 infrastructure, and enables visibility and management at the application level. Advanced telemetry provides insight into network health, simplifies day-two operations, and extends troubleshooting to the application itself. ACI's diverse and open ecosystem is designed to plug into any upper-level management or orchestration system and attract a broad community of developers. Integration and automation of both Cisco and third-party Layer 4 - 7 virtual and physical service devices offers a single tool to manage the entire application environment.
With ACI mode, customers can deploy the network based on application requirements in the form of policies, removing the need to translate application needs into the constraints of traditional network constructs. In tandem, ACI helps ensure security and performance while maintaining complete visibility into application health on both virtual and physical resources.
Figure 1 highlights how network communication might be defined for a three-tier application from the ACI GUI. The network is defined in terms of the needs of the application by mapping out who is allowed to talk to whom, and what they are allowed to talk about. This is done by defining a set of policies, known as contracts, inside an application profile, instead of configuring lines and lines of command-line interface (CLI) code on multiple switches, routers, and appliances.
This policy model is configured centrally from a cluster of controllers called Cisco Application Policy Infrastructure
Controllers (APICs) and is pushed out to all Cisco Nexus 9000 Series Switches in the ACI fabric. All configuration
is performed through the APIC API (through the GUI, scripting, etc.). No switch is configured by the end user,
allowing rapid application deployment.
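As a sketch of this API-driven configuration, the following Python fragment builds the JSON bodies the APIC REST API expects for authentication (aaaLogin) and for creating a tenant (the fvTenant object); the controller address and credentials are placeholders, and the call against a live APIC is left commented out.

```python
import json
import urllib.request

APIC = "https://apic.example.com"  # placeholder; substitute your APIC address

def login_payload(user, pwd):
    # Body for POST /api/aaaLogin.json (APIC authentication)
    return {"aaaUser": {"attributes": {"name": user, "pwd": pwd}}}

def tenant_payload(name):
    # Body for POST /api/mo/uni.json; fvTenant is the ACI tenant object class
    return {"fvTenant": {"attributes": {"name": name}}}

if __name__ == "__main__":
    req = urllib.request.Request(
        APIC + "/api/aaaLogin.json",
        data=json.dumps(login_payload("admin", "password")).encode(),
        method="POST")
    # urllib.request.urlopen(req)  # run only against a live APIC
    print(json.dumps(tenant_payload("Commercial")))
```

The same payloads can be generated from scripts or orchestration tools, which is what makes the switch-free configuration model practical at scale.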
Figure 1.
Figure 4.
Figure 5.
Suboptimal Path Between Servers in Different Pods Due to Spanning Tree-Blocked Links
Addressing these issues could include upgrading hardware to support 40 or 100 Gb interfaces, bundling links into
port channels to appear as one logical link to Spanning Tree, or moving the Layer 2/Layer 3 boundary down to the
access layer to limit the reach of Spanning Tree. Using a dynamic routing protocol between the two layers allows
all links to be active (Figure 7), and allows for fast reconvergence and equal cost multipathing (ECMP).
Figure 7.
The tradeoff in moving Layer 3 routing to the access layer in a traditional Ethernet network is that it limits Layer 2
reachability (Figure 8). Applications like virtual machine workload mobility and some clustering software require
Layer 2 adjacency between source and destination servers. By routing at the access layer, only servers connected
to the same access switch with the same VLANs trunked down would be Layer 2-adjacent. However, the
alternative of spanning a VLAN across the entire data center for reachability is problematic due to Ethernet's broadcast nature and Spanning Tree reconvergence events.
Figure 8.
To the connected device, the connection appears as a normal port-channel interface, requiring no special
configuration. The industry-standard term is Multi-Chassis EtherChannel; the Cisco Nexus-specific implementation is called vPC.
Figure 9.
vPC deployed on a Spanning Tree Ethernet network is a powerful way to curb the number of blocked links, thereby increasing available bandwidth. vPC on the Cisco Nexus 9000 is a great solution for commercial customers who are satisfied with their current bandwidth, oversubscription, and Layer 2 reachability requirements.
Two sample small-to-midsize traditional commercial topologies using vPCs are depicted in Figures 10 and 11. These designs leave the Layer 2/Layer 3 boundary at the aggregation layer to permit broader Layer 2 reachability, yet all links are active because Spanning Tree does not see any loops to block. Details about the special vPC peering relationship between Cisco Nexus switches are discussed later in this document.
Server Network Interface Card (NIC) Teaming Design and Configuration
Modern applications and increasing virtual-machine density, driven by advances in CPU and memory, are pushing server bandwidth demand higher, with many servers requiring 10-Gbps connections. Ideally, every server is dual-homed
to two different physical switches. Ordinarily, one of these connections would actively forward traffic, while the
other connection would stand by. While this design provides redundancy in case of a switch failure, a standby or
blocked connection wastes potential bandwidth.
Cisco Virtual Port Channel (vPC) allows connections from a device in a port channel to terminate on a pair of two
different Cisco Nexus switches set up in a special peering relationship. vPC provides Layer 2 multipathing,
increasing bandwidth while maintaining redundancy. Any device that supports port channels can be set up in a
vPC connected to a pair of switches in a vPC domain, and is unaware it is configured as a special type of port
channel.
Cisco vPC provides the following benefits. It:
Allows a single device to use a port channel connected to two upstream Cisco Nexus switches
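A minimal NX-OS sketch of the vPC pieces described above (feature, domain, peer keepalive, peer link, and a member port channel) follows; the domain ID, addresses, and port-channel numbers are illustrative only, not taken from the validated topology:

```
feature vpc
vpc domain 10
  peer-keepalive destination 10.0.0.2 source 10.0.0.1
interface port-channel 10
  switchport mode trunk
  vpc peer-link
interface port-channel 30
  switchport mode trunk
  vpc 30
```

The same member port-channel configuration is applied on both switches in the vPC domain, which is what lets the attached device see a single logical port channel.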
In the sample designs outlined in Figures 10 and 11, the two leaf (access) switches are set up in a vPC domain. Some servers are dual-homed, and some servers are single-homed. As a best practice, all devices would be connected to both switches through vPC, but this is not a requirement.
Figure 10.
Figure 11.
Virtual Overlays
Customers with a two- or three-tier design that wish to route to the access layer, yet still maintain Layer 2
reachability between servers, can take the next step in the data center evolution by implementing a virtual overlay
fabric. In a Cisco Nexus 9000 fabric design, dynamic routing is configured between switches down to the access
layer so that all links are active. This eliminates the need for Spanning Tree on the fabric, and can enable equal
cost multipathing (ECMP) using the dynamic routing protocol.
A virtual overlay fabric called virtual extensible LAN (VXLAN) is used to provide Layer 2 adjacencies over the Layer
3 fabric for servers and other devices that require Layer 2 reachability in the Cisco Nexus 9000 design. Combining
VXLAN and a dynamic routing protocol offers the benefits of an intelligent Layer 3 routing protocol, yet can also
provide Layer 2 reachability across all access switches for applications like virtual-machine workload mobility and
clustering (Figure 12).
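As an illustration of this kind of standalone (non-ACI) VXLAN fabric, the following NX-OS-style fragment maps a VLAN to a VXLAN network identifier (VNI) and joins it to a multicast group on the overlay interface; the VLAN, VNI, and group values are examples only:

```
feature nv overlay
feature vn-segment-vlan-based
vlan 100
  vn-segment 10100
interface nve1
  no shutdown
  source-interface loopback0
  member vni 10100 mcast-group 239.1.1.1
```

Repeating this mapping on each access switch gives servers in VLAN 100 Layer 2 reachability across the routed fabric.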
Figure 12.
The limitations of Spanning Tree in three-tier designs and the needs of modern applications are driving a shift in network design toward a spine-leaf (aggregation-access) architecture. Two- and three-tier designs are still valid and prevalent architectures; spine-leaf simply provides another easily integrated option.
Spine switches connect to all leaf switches and are typically deployed at the end or middle of the row. Spines serve as backbone interconnects for the leaves; they do not connect to other spines or directly to servers. All devices connected to the fabric are an equal number of hops away from one another, delivering predictable latency and high bandwidth between servers. The diagram in Figure 13 depicts a sample spine-leaf design.
Figure 13.
Spine-Leaf Topology
Another way to think about the spine-leaf architecture is to view the spines as a central backbone with all leaves branching off like a star. Figure 14 depicts this logical representation, which uses identical components laid out in an alternate visual mapping.
Figure 14.
Cisco Nexus 9000 Series Switches allow small-to-midsize commercial customers to start with a few switches and
implement a pay-as-you-grow model. When more access ports are needed, more leaves can be added. When
more bandwidth is needed, more spines can be added.
Overlay Design
Virtual network overlays partition a physical network infrastructure into multiple, logically isolated networks that can be individually programmed and managed to meet specific network requirements.
Small-to-midsize commercial customers may require mobility between data centers, within different pods in a
single data center, and across Layer 3 network boundaries. Virtual network overlays make mobility and Layer 2
reachability possible.
Cisco has helped develop multiple overlay technologies, each solving a different problem. For example, Cisco Overlay Transport Virtualization (OTV) provides cross-data center mobility over Layer 3 data center interconnect (DCI) networks using MAC-in-IP encapsulation.
Figure 15 shows two overlays in use: OTV on a Cisco ASR 1000 for data center-to-data center connectivity, and
VXLAN within the data center. Both provide Layer 2 reachability and extension. OTV is also available on the Cisco
Nexus 7000 Series Switch.
Figure 15.
For more information on overlays, read the Data Center Overlay Technologies white paper. For more information
on OTV, visit the Cisco OTV website.
Figure 16.
VXLAN is one of several protocols used on the ACI fabric to route traffic between nodes and enforce policy. These protocols do not need to be configured, but they are visible from the APIC. All configuration and management is done through the APIC.
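For example, fabric nodes can be inspected programmatically. The sketch below builds a class-level query URL of the kind the APIC REST API accepts (here against the fabricNode class, optionally filtered by role); the controller address is a placeholder, and the live request is left commented out.

```python
import urllib.request

APIC = "https://apic.example.com"  # placeholder APIC address

def class_query_url(cls, role=None):
    # Class-level read: GET /api/node/class/<class>.json, with an
    # optional query-target-filter on the object's role attribute
    url = f"{APIC}/api/node/class/{cls}.json"
    if role is not None:
        url += f'?query-target-filter=eq({cls}.role,"{role}")'
    return url

if __name__ == "__main__":
    url = class_query_url("fabricNode", role="leaf")
    # urllib.request.urlopen(url)  # against a live APIC, after authenticating
    print(url)
```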
Figure 17.
Note: Please check the Cisco Nexus 9500 data sheets for the latest product information. Line card availability may have changed since the time of writing. Refer to the software release notes to determine current chassis and line card support for ACI mode.
Note: Please check the Cisco Nexus 9300 data sheets for the latest product information. Switch availability may have changed since the time of writing.
Figure 19.
QSA Module Shown Empty on the Left, with SFP/SFP+ Inserted on the Right
For more information on QSA modules, read the Cisco QSA Data Sheet.
For savings on 10 GE interfaces, Cisco SFP+ copper Twinax direct-attach cables are available for distances up to
10 meters (Figure 20). Twinax cables provide significant savings over traditional 10 GE fiber optic transceivers and
cabling.
Figure 20.
For more information on all 10 GE SFP+ cable options, read the Cisco 10GBase SFP Modules data sheet.
Note: Always check the Cisco Transceiver Modules Compatibility Information webpage to stay up to date on chassis support.
In the leaf or access layer the topology depicts a pair of fixed 1-rack-unit (RU) Cisco Nexus 9372PX switches. Each
9372PX provides 48 ports of 1/10 GE with SFP+ transceivers and six QSFP+ 40-Gbps ports. A 10GBase-T version of the switch is also available if twisted-pair RJ-45 connectors are preferred over SFP+ for 10 GE.
In the spine or aggregation layer the topology depicts two fixed Cisco Nexus 9336PQ Switches. Each 9336PQ
provides 36 ports of 40 GE with QSFP transceivers. If more ports are needed or to plan for future growth, the
chassis-based Cisco Nexus 9500 Series Switches could be used instead of the fixed 9336PQ.
The 9372 switches can operate in standalone Cisco NX-OS mode today, and will also be capable of operating in ACI mode, with software support anticipated during the first half of calendar year 2015. The 9336 switches operate in ACI mode only.
For the latest comparison between Cisco Nexus 9500 Series Switch line cards, check out the Cisco Nexus 9000
Series Switches Compare Models tool.
The QSA 40-to-10 GE modules could also be used in this design as you transition from 10 GE to 40 GE.
Alternatively, Cisco provides a low-cost 40-Gigabit transceiver called a bidirectional (BiDi) transceiver that eases the move from 10 Gbps to 40 Gbps. Existing short-reach (SR) 40-Gigabit transceivers use connectors that require 12 strands of fiber through a multiple-fiber push-on (MPO) connector (Figure 22). Unfortunately, existing 10-Gigabit fiber deployments and patch panels use LC-to-LC multimode fiber. Upgrading from 10-Gigabit to 40-Gigabit fiber can be an expensive endeavor if all transceivers, cabling, and patch panels have to be replaced.
Figure 22.
As an alternative, the Cisco 40-Gigabit QSFP BiDi transceiver (Figure 23) addresses the challenges of the fiber infrastructure by transmitting full-duplex 40 Gbps over a standard OM3 or OM4 multimode fiber with LC connectors (Figure 24). The BiDi transceiver can reuse existing 10-Gigabit fiber cabling instead of requiring a new fiber infrastructure. The BiDi optic provides an affordable, simple upgrade path to 40 GE at almost the same cost as 10-GE fiber today.
Figure 23.
Figure 24.
For more information on BiDi optics, read the Cisco QSFP BiDi Technology white paper.
This design will be showcased later in this white paper, and configuration examples will be provided.
In the leaf or access layer the topology depicts four fixed 1RU Cisco Nexus 9372PX Switches. Each 9372PX
provides 48 ports of 1/10 GE with SFP+ transceivers and six QSFP+ 40 GE ports. There is also a 10GBase-T
version of the switch available if twisted pair RJ-45 connectors are desired over SFP+ for 10 GE.
In the spine or aggregation layer, the topology depicts two fixed Cisco Nexus 9336PQ Switches. Each 9336PQ
provides 36 ports of 40 GE with QSFP transceivers. If more ports are needed or to plan for future growth, the
chassis-based Cisco Nexus 9500 Series Switches could be used instead of the fixed 9336PQ.
Figure 26.
For detailed information, refer to the Integrate Cisco Application Centric Infrastructure with Existing Networks white
paper.
Table 1.
FEX Models
Cisco Nexus 9000 parent switches: N9K-C9396PX, N9K-C9372PX, N9K-C9332PQ
Supported fabric extenders: N2K-C2248TP/TP-E, N2K-C2232PP, N2K-C2232TM-E, N2K-C2248PQ
Figure 28.
Refer to the Cisco Nexus 9000 Software Release Notes for the most up-to-date feature support.
Fabric Extender Transceivers (FETs) are also supported to provide a cost-effective connectivity solution (FET-10G) between Cisco Nexus 2000 Series Fabric Extenders and their parent Cisco Nexus 9300 switches.
For more information on FET-10 Gigabit transceivers, refer to the Nexus 2000 Series Fabric Extenders data sheet.
Supported Cisco Nexus 9000-to-Nexus 2000 Fabric Extender (FEX) topologies are pictured in Figure 29. As with
other Cisco Nexus platforms, think of the FEX like a logical remote line card of the parent Cisco Nexus 9000
switch. Each FEX connects to one parent switch. Servers should be dual-homed to two different FEXs.
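A rough NX-OS sketch of attaching a FEX to its parent switch follows; the interface range and FEX ID (101) are illustrative, so check the FEX configuration guide for the exact syntax on your platform and release:

```
install feature-set fex
feature-set fex
interface ethernet 1/47-48
  switchport mode fex-fabric
  fex associate 101
  channel-group 101
```

Once associated, the FEX host interfaces appear on the parent switch as ethernet 101/1/x ports and are configured there, consistent with the remote line card model described above.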
Figure 29.
For detailed information, check out the Cisco Nexus 2000 Series NX-OS Fabric Extender Configuration Guide for
Cisco Nexus 9000 Series Switches, Release 6.x.
Storage Design
Existing IP-based storage, such as network-attached storage (NAS) or Internet Small Computer System Interface (iSCSI), can be integrated into a Cisco Nexus 9000 fabric. Currently, Fibre Channel and Fibre Channel over Ethernet (FCoE) are not supported on the Cisco Nexus 9000, but this section shows how they could be designed alongside the Cisco Nexus 9000 and evolve as features are added. Refer to the Cisco website for updates on future support for FCoE N_Port Virtualization (NPV).
A converged storage fabric design is possible with Cisco Nexus 9000 switches when using IP-based storage like iSCSI or NAS, reducing the cabling, switch ports, number of switches, and number of adapters required, as well as yielding significant power savings. All servers connect to Cisco Nexus 9300 leaves (access switches), carrying both LAN and IP-based storage traffic. The storage devices could also connect to the leaves, or could be left in place, connected to existing infrastructure. Figures 30 and 31 highlight both options.
Figure 30 illustrates a design where both servers and IP-based storage are directly connected to the leaf access
switches, reducing the number of hops and devices.
Figure 30.
Converged Access for Servers and IP-Based Storage Device on Cisco Nexus 9300
Figure 31.
Converged Access for Servers Using IP-Based Storage on the Cisco Nexus 9300
Cisco Nexus 9372 leaf access switches split out the LAN traffic to send to the spine aggregation switches, and
send the IP-based storage traffic to switches dedicated to storage traffic. While many switches could be used,
Cisco Nexus 5672UP Switches are featured in Figure 32.
Cisco Nexus 5600 Series Switches are the third generation of the leading data center server-access Cisco Nexus 5000 Series. The Cisco Nexus 5600 is the successor to the industry's most widely adopted Cisco Nexus 5500 Series Switches, maintaining all existing Nexus 5500 features, including LAN and SAN convergence (unified ports, FCoE), fabric extenders (FEX), and FabricPath. In addition, the 5600 brings integrated line-rate Layer 2 and Layer 3 with true 40-GE support, Cisco's Dynamic Fabric Automation (DFA) innovation, NVGRE, VXLAN bridging and routing capability, network programmability and visibility, deep buffers, and significantly higher scale and performance for highly virtualized, automated, and cloud environments.
Figure 32.
For more information, read the Cisco Nexus 5600 Platform Switches data sheet.
Customers who prefer to have separate, dedicated storage connections from each server could cable the storage
connections to their physical storage network, and cable the production IP LAN connections to Cisco Nexus 9000
switches.
The dedicated, physical storage network could take advantage of IP-based storage like iSCSI or NAS, or could be
comprised of a Fibre Channel or FCoE network. Regardless of the protocol, this design would not require any
change in existing storage cabling.
Layer 4 - 7 Integration
Layer 4 - 7 services like firewalls and load balancers can be inserted and controlled through the ACI fabric using an
object called a service graph. The Layer 4 - 7 service appliances can be physical or virtual, and can be physically
located anywhere in the fabric.
ACI provides a single point of provisioning for services with the added ability to automate and script service
deployment. Reusable service templates can be created and replicated for new application rollouts.
For more information, refer to the Service Insertion with Cisco Application Centric Infrastructure guide.
End-State Topology
The simplified diagram in Figure 33 provides an example of an end-state design integrated into an existing data
center. The design features two Cisco Nexus 9372 Switches, two Nexus 9336 Switches, IP-based storage, an ASA
firewall appliance, connections to the campus Cisco Catalyst LAN environment, and connectivity to the WAN
router. Note that all devices connect to Cisco Nexus 9300 leaf access switches.
Figure 33.
As shown, some servers are single-homed, and some servers are dual-homed. Some servers are connected to the
leaves through 1 GE, while others are 10 GE. Ideally, all devices would be dual-homed to a pair of leaves.
SharePoint consists of three major tiers: web, application, and database. Most of these reside as virtual machines on VMware ESXi 5.5 servers, while some database servers are bare metal, not running in a virtualized environment. This demonstrates a mixed data center: most customers are not 100 percent virtualized, and many run a combination of bare metal and multiple hypervisors.
External users from the WAN and Internet are connected through a Layer 3 switch to one of the leaf switches. The
bare-metal database servers are also connected off of the Layer 3 switch.
This is referred to as the policy model of ACI. The policy is configured centrally through the APIC and pushed out
to hardware to be enforced on the Cisco Nexus 9000 Series Switch fabric.
ACI objects will be reviewed briefly in this section to lay out the design, first in the tenant space, and then in the
fabric space of the GUI. For detailed design and definition of all ACI objects, refer to the Cisco Application Centric
Infrastructure Design Guide.
Figure 35 illustrates the logical topology used in the design to represent the policy model for a sample SharePoint
application profile.
Figure 35.
Bridge Domains
Within a private network, one or more bridge domains are created. A bridge domain is essentially a container for
subnets that will be used by components of the applications. Adding an IP address and mask to a bridge domain
creates the distributed default gateway across the leaves so all endpoints always have a local default gateway,
even if the endpoint moves. Bridge domains also provide the ability to change forwarding behavior. By default, the fabric does not flood traffic such as Address Resolution Protocol (ARP) requests and unknown unicast. However, if an application requires this type of flooding, that application's subnets can be placed in a separate bridge domain where flooding is enabled just for that application.
The relationships and hierarchy between tenants, private networks, and bridge domains are depicted in Figure 36.
Figure 36.
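The bridge domain behavior described above can also be driven through the APIC REST API. The sketch below builds a hypothetical JSON payload for the fvBD class; the bridge domain name and the choice of attributes are illustrative assumptions, not values taken from this design.

```python
# Build an APIC REST payload (class fvBD) for a bridge domain,
# optionally overriding the fabric's default optimized forwarding
# so that ARP and unknown unicast are flooded for that application.
def bridge_domain_payload(name, flood=False):
    return {
        "fvBD": {
            "attributes": {
                "name": name,
                # "yes" floods ARP instead of using the fabric's ARP optimization
                "arpFlood": "yes" if flood else "no",
                # "flood" unknown unicast instead of the default spine "proxy" lookup
                "unkMacUcastAct": "flood" if flood else "proxy",
            }
        }
    }

# Hypothetical bridge domain dedicated to an application that needs flooding
flooding_bd = bridge_domain_payload("FloodingBD", flood=True)
```

A payload like this would typically be POSTed under the owning tenant's URI on the APIC; the GUI performs the equivalent operation behind the scenes.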
Application Profiles
An application profile defines the pieces or tiers of an application and the relationship between them. An application
profile for SharePoint is featured in this design. A different application would have a different application profile.
Endpoint Groups (EPGs)
Endpoint groups (EPGs) group servers or services with similar policy requirements. For example, SharePoint has
three tiers that require different behavior on the network: web, application, and database. All SharePoint database
servers belong to the same database EPG. Each device inside of an EPG is an individual endpoint. There are
several ways to group endpoints to EPGs, which include identifiers like VLAN, VXLAN, and NVGRE tags; physical
ports or leaves; and virtual ports using VMware integration. Each EPG is associated to one bridge domain, which
should contain the default gateways required by all endpoints in the group.
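The application profile and EPG hierarchy just described maps directly onto APIC REST objects. The following sketch (names are illustrative, mirroring the SharePoint example) builds an fvAp payload containing one fvAEPg per tier, each tied to a bridge domain through an fvRsBd relation:

```python
# Sketch of an application profile (fvAp) whose children are EPGs
# (fvAEPg), each associated to one bridge domain via fvRsBd.
def app_profile_payload(name, epg_names, bd_name):
    return {
        "fvAp": {
            "attributes": {"name": name},
            "children": [
                {
                    "fvAEPg": {
                        "attributes": {"name": epg},
                        "children": [
                            # Every EPG must reference exactly one bridge domain
                            {"fvRsBd": {"attributes": {"tnFvBDName": bd_name}}},
                        ],
                    }
                }
                for epg in epg_names
            ],
        }
    }

sharepoint = app_profile_payload("SharePoint", ["Web", "App", "Database"], "InternalBD")
```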
Contracts
ACI implements a whitelist model: no traffic is permitted on the fabric until policy is put in place. Policy is created
through contracts. Contracts are either consumed, provided, or both consumed and provided between EPGs. A
contract dictates who can talk to whom, and what they are allowed to talk about (that is, which ports and protocols). An
application profile contains a collection of EPGs and the contracts defining the policies between EPGs. A contract
contains one or more subjects, which define what communication is allowed between consumer and provider
EPGs.
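In REST terms, a contract is a vzBrCP object containing one or more vzSubj subjects, each of which references filters that define the permitted traffic. A hypothetical sketch (the contract, subject, and filter names are placeholders):

```python
# Sketch of a contract (vzBrCP) with a single subject (vzSubj)
# that attaches one filter (vzRsSubjFiltAtt) defining what the
# consumer and provider EPGs may talk about.
def contract_payload(name, subject, filter_name):
    return {
        "vzBrCP": {
            "attributes": {"name": name},
            "children": [
                {
                    "vzSubj": {
                        "attributes": {"name": subject},
                        "children": [
                            {"vzRsSubjFiltAtt": {"attributes": {"tnVzFilterName": filter_name}}},
                        ],
                    }
                }
            ],
        }
    }

# The common "default" filter permits all traffic
app_to_web = contract_payload("App_to_Web", "App_Web_Subject", "default")
```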
Domains
Domains define how endpoints are connected to the fabric. There are four types of domains: VMware Virtual
Machine Manager (VMM), physical, external Layer 2, and external Layer 3.
An EPG can be associated to multiple domains, depending on how its endpoints are connected. For example, in
the sample topology all of the web servers reside on VMware servers, and therefore the web EPG will only be tied
to a single VMware VMM domain. However, the database servers are found both as virtual machines and as bare-metal servers, so the database EPG will be associated to two domains: the VMware VMM domain and a physical
domain that binds a port and VLAN encapsulation to the EPG.
By tying multiple domains to a single EPG, we do not have to create different EPGs for servers that we want treated exactly the same: a single database EPG could have endpoints that reside on VMware, Hyper-V, Xen, and bare-metal servers. ACI is agnostic to how and where the endpoints are connected; you simply tell ACI how to group endpoints into EPGs.
Domains are generally configured in the Fabric tab, and will be covered in more detail in the Fabric tab section later
in this document. Domains are associated to EPGs in the Tenant tab, featured in this section.
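Associating an EPG with multiple domains can be sketched as fvRsDomAtt children of the EPG, each pointing at a domain distinguished name (DN). The domain names below are hypothetical; the DN prefixes follow APIC's conventions for VMware VMM domains (uni/vmmp-VMware/dom-...) and physical domains (uni/phys-...).

```python
# Sketch: tie one EPG to both a VMware VMM domain and a physical
# domain, mirroring the Database EPG with its mix of virtual and
# bare-metal endpoints. Returns fvRsDomAtt children for the EPG.
def epg_domain_bindings(vmm_domain, phys_domain):
    return [
        {"fvRsDomAtt": {"attributes": {"tDn": f"uni/vmmp-VMware/dom-{vmm_domain}"}}},
        {"fvRsDomAtt": {"attributes": {"tDn": f"uni/phys-{phys_domain}"}}},
    ]

# Hypothetical domain names for illustration
db_domains = epg_domain_bindings("vCenter_Domain", "Phys_Domain")
```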
Figure 38.
The SharePoint application profile includes the EPGs listed in Figure 39.
Figure 39.
Some EPGs may have endpoints on different types of hypervisors or servers. The database tier has both virtual machine endpoints and bare-metal server endpoints, and is therefore tied to two domains, as illustrated in Figure 41.
Figure 41.
The physical domain is for the bare-metal server connected to the switch through port Eth1/6 on Leaf 102. The
bare-metal database server resides on VLAN 200. In addition to mapping the physical domain to the database
EPG, a static binding must also be configured to tell ACI on which port and VLAN tag to look for the bare-metal
physical database server, depicted in Figure 42.
Figure 42.
This static binding tells ACI that any traffic entering Leaf 102 on port Eth1/6 tagged with VLAN 200 belongs in the
Database EPG bucket.
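The static binding in Figure 42 corresponds to an fvRsPathAtt object under the EPG, whose target DN names the leaf and port and whose encap names the VLAN tag. A sketch, using the leaf, port, and VLAN from this design (the pod number is assumed to be 1):

```python
# Sketch of a static binding (fvRsPathAtt): classify traffic that
# enters a given leaf port with a given VLAN tag into the EPG.
def static_binding(pod, leaf, port, vlan):
    return {
        "fvRsPathAtt": {
            "attributes": {
                # Path to the physical interface, e.g. Leaf 102 port eth1/6
                "tDn": f"topology/pod-{pod}/paths-{leaf}/pathep-[eth{port}]",
                # 802.1Q tag ACI watches for on that port
                "encap": f"vlan-{vlan}",
            }
        }
    }

db_binding = static_binding(1, 102, "1/6", 200)
```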
Note:
A vMotion EPG has also been created to permit live virtual machine migration across the ACI fabric.
vMotion is a feature specific to VMware. Notice that the vMotion EPG does not provide or consume any contracts, because by default all endpoints in the same EPG can communicate. Each host has a vMotion port in the vMotion subnet, VLAN, and EPG. vMotion ports need to talk only to other vMotion ports; they do not need to talk to any other endpoints.
vMotion could also be configured to move virtual machines in and out of the fabric. The ability to do this would
depend on the vSwitch design inside the VMware hypervisor, in addition to the policy design in ACI.
Verifying Discovered Endpoints in an EPG
To view discovered endpoints and verify they are being classified into the correct EPG, view the Operational >
Client End Points tab of an EPG. Note in Figure 43 that there are database endpoints on a VMware server
attached to Leaf 101 port 1/1, and on a bare-metal server talking through Leaf 102 port 1/6.
Figure 43. Database Endpoints
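The Client End Points view shown in Figure 43 is backed by a REST query for learned endpoints (class fvCEp) under the EPG. The sketch below only constructs the query URL; the APIC hostname, tenant, and object names are placeholders, and a real query would also require an authenticated session.

```python
# Sketch: build the REST URL that lists discovered client endpoints
# (fvCEp objects) in the subtree of a specific EPG.
def endpoint_query_url(apic, tenant, ap, epg):
    dn = f"uni/tn-{tenant}/ap-{ap}/epg-{epg}"
    return (
        f"https://{apic}/api/mo/{dn}.json"
        "?query-target=subtree&target-subtree-class=fvCEp"
    )

url = endpoint_query_url("apic1", "SampleTenant", "SharePoint", "Database")
```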
Each EPG must be tied to a bridge domain, which should contain the default gateways needed for endpoint
networking. An EPG can only be tied to one bridge domain at a time.
Tenant Networking
Each tenant also has one or more private networks. Each private network has one or more bridge domains
underneath. These objects are configured under the Networking folder, shown in Figure 44.
Figure 44.
There are two private networks: External_VRF and Internal_VRF. Each private network has one bridge domain.
The ExternalBD belongs to the External private network, and the InternalBD belongs to the Internal private
network.
The Web, App, Database, and vMotion EPGs are part of the Internal network, and the external EPG and the
external routed subnets (configuration shown later) are part of the External network.
The web servers belong to the 10.1.1.0/24 subnet, the app servers belong to the 10.2.2.0/24 subnet, the database
servers belong to the 10.3.3.0/24 subnet, and the vMotion network uses the 10.99.99.0/24 subnet, as depicted by
the internal bridge domain configuration in Figure 45. All networks use a .254 default gateway, which is pushed to and active on each leaf where the EPGs are present in the fabric.
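The InternalBD configuration described above can be sketched as one fvBD payload: an fvRsCtx relation ties the bridge domain to its private network (VRF), and one fvSubnet child per subnet creates the .254 distributed default gateways. Object names and the subnet list follow this design; the exact payload shape is an illustration, not an export from the APIC.

```python
# Sketch of InternalBD: tied to Internal_VRF, containing the four
# subnets with .254 gateway addresses used by the SharePoint tiers
# and vMotion.
def internal_bd_payload():
    gateways = ["10.1.1.254/24", "10.2.2.254/24",
                "10.3.3.254/24", "10.99.99.254/24"]
    return {
        "fvBD": {
            "attributes": {"name": "InternalBD"},
            "children": [
                # Bridge domain to private network (VRF) association
                {"fvRsCtx": {"attributes": {"tnFvCtxName": "Internal_VRF"}}},
            ] + [
                # Each subnet becomes a distributed default gateway
                {"fvSubnet": {"attributes": {"ip": gw}}} for gw in gateways
            ],
        }
    }

internal_bd = internal_bd_payload()
```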
Figure 45.
Note in the InternalBD configuration screenshot in Figure 45 that the bridge domain is tied to the Internal_VRF private network. Also note the flooding options, which can be modified on a per-bridge-domain basis. The options shown are the default forwarding behaviors.
Tying an EPG to a bridge domain is configured in the EPG under the application profile, shown in Figure 46.
Figure 46.
Note:
Just because subnets belong to the same bridge domain does not mean endpoints in those subnets can communicate; contracts between the EPGs are still required to permit traffic.
The screenshot in Figure 47 highlights the App_to_Web contract, which contains a single subject called
App_Web_Subject, which uses a single filter to permit all traffic.
After the contracts are created, they must be applied between EPGs. An EPG can consume, provide, or both
consume and provide a contract. Figure 48 depicts the app EPG providing the App_to_Web contract. The Web
EPG is configured to consume the App_to_Web contract.
Figure 48.
In Figure 48 you can also see that the App EPG consumes another contract, which provides access to the database resources of the Database EPG. Once all contracts have been configured under the Contracts folder of each EPG, traffic should flow as permitted by the contracts.
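Providing and consuming a contract correspond to fvRsProv and fvRsCons children of the EPG, each naming the contract. A minimal sketch of the App_to_Web relationship described above:

```python
# Sketch: an EPG provides a contract via fvRsProv and consumes one
# via fvRsCons; these objects are children of the fvAEPg.
def provided(contract):
    return {"fvRsProv": {"attributes": {"tnVzBrCPName": contract}}}

def consumed(contract):
    return {"fvRsCons": {"attributes": {"tnVzBrCPName": contract}}}

# The App EPG provides App_to_Web; the Web EPG consumes it
app_epg_child = provided("App_to_Web")
web_epg_child = consumed("App_to_Web")
```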
External Routed Networks
The last piece to configure in the Tenant tab is the external routed Layer 3 connection to the Cisco Nexus 6000
Series Switch, where external users reach the SharePoint application.
There are several pieces of an external routed domain to configure, each highlighted in the screenshot sequence
(Figures 49 - 54). In this design, Leaf 102 serves as the border leaf and runs Open Shortest Path First version 2
(OSPFv2) to the Cisco Nexus 6000, connected to port Eth1/6. OSPFv2 NSSA area 1 has already been configured
on the Nexus 6000 switch.
First, the external routed network is configured by specifying the basic routing protocol settings (Figure 49).
Figure 49.
Note that a single external routed network (External_Users) is tied to a single private network (External_VRF). It is also tied to a domain (External_User_Domain), which is configured and covered later in the Fabric tab section.
Next, the fabric needs to know which leaf and which port(s) connect to the routed device. This is achieved by
configuring the logical node and logical interface profiles. The logical node profile specifies the border leaf, and its
router ID (Figure 50).
Figure 50.
The logical interface profile specifies the interface to which the routed device is connected, and the type of interface on which the neighbor relationship should be established. Options include a physical interface, subinterface, or
switched virtual interface (SVI). This configuration uses SVI routing so that other routing relationships could be
established for different private networks (VRFs) or tenants on the same physical interface (Leaf 102, port 1/6).
Figure 51 shows this configuration.
Figure 51. Configuration of Relationships
Lastly, an OSPF interface protocol policy is set up to specify the network type and timers (the timers have not been modified), as illustrated in Figure 52.
Figure 52.
This OSPF interface protocol policy is then associated to the interface profile of Leaf 102 (Figure 53).
Figure 53.
Under the Networks folder of the external routed networks, individual subnets can be configured as EPGs. In this
design, there are four sample user sites: North, East, South, and West (Figure 54).
Figure 54.
After the subnets are configured as EPGs, contracts must be created between these EPGs and the application
profile EPGs to permit the external subnets to communicate with the Web EPG, for example.
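The external routed network pieces walked through in Figures 49 through 54 can be sketched as one l3extOut payload: a relation to the External_VRF, the OSPF settings, and one external EPG (l3extInstP) per user subnet. The node and interface profiles are omitted for brevity, and the external prefix shown is a placeholder since the real site subnets are not listed in this document.

```python
# Hypothetical skeleton of the External_Users routed connection:
# l3extRsEctx ties it to the VRF, ospfExtP sets NSSA area 1, and
# l3extInstP/l3extSubnet classify an external user subnet as an EPG.
def l3out_payload():
    return {
        "l3extOut": {
            "attributes": {"name": "External_Users"},
            "children": [
                {"l3extRsEctx": {"attributes": {"tnFvCtxName": "External_VRF"}}},
                {"ospfExtP": {"attributes": {"areaId": "0.0.0.1",
                                             "areaType": "nssa"}}},
                {"l3extInstP": {
                    "attributes": {"name": "North_Users"},
                    "children": [
                        # Placeholder prefix; the real site subnet is not given here
                        {"l3extSubnet": {"attributes": {"ip": "192.0.2.0/24"}}},
                    ],
                }},
            ],
        }
    }

external_users = l3out_payload()
```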
For more information, refer to Connecting Application Centric Infrastructure (ACI) to Outside Layer 2 and 3
Networks.
This concludes the end-state configuration example of the Tenant tab. New applications could be added as new
application profiles to the same tenant, or a new tenant could be created if more separation is required.
Figure 56.
Next, a pod policy group is configured, referencing the route reflector default policy configured in Figure 56 to set
routing protocol policies for the pod (ACI fabric). This is shown in Figure 57.
Figure 57.
Then, the policy group must be applied to the pod (Figure 58).
Figure 58.
This completes the configuration for enabling external routing on the fabric. For more detailed information on external bridged and external routed networks, refer to the Connecting Application Centric Infrastructure (ACI) to Outside Layer 2 and 3 Networks white paper.
Domains (review)
Domains define how endpoints are connected to the fabric. There are four types of domains: VMware
Virtual Machine Manager (VMM), physical, external Layer 2, and external Layer 3.
Domains are generally configured in the Fabric tab and are associated to EPGs in the Tenant tab, acting
as the glue between the Fabric and Tenant space. In essence, a domain specifies how devices connect
to the fabric.
Pools
Every domain is associated to a VLAN pool. The VLAN pool must include any VLANs used by servers in the domain. However, if using a VMM domain, the pool can be a range of any unused VLANs; these are assigned to the port groups on the distributed switch that the APIC pushes to the vCenter server. At the time of this writing, VLANs cannot overlap on a single leaf switch.
For example, the domain used by the bare-metal database server in the previous section must be tied to a VLAN pool that includes VLAN 200, which was used by the database server.
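A VLAN pool corresponds to an fvnsVlanInstP object containing one or more encap blocks (fvnsEncapBlk). The sketch below builds a hypothetical static pool covering VLAN 200, the encap used by the bare-metal database server's static binding earlier in this document; the pool name is a placeholder.

```python
# Sketch of a static VLAN pool (fvnsVlanInstP) with one encap block
# (fvnsEncapBlk) spanning the given VLAN range.
def vlan_pool_payload(name, start, end):
    return {
        "fvnsVlanInstP": {
            "attributes": {"name": name, "allocMode": "static"},
            "children": [
                {"fvnsEncapBlk": {"attributes": {
                    "from": f"vlan-{start}",
                    "to": f"vlan-{end}",
                }}},
            ],
        }
    }

# Pool for the physical domain of the bare-metal database server
phys_pool = vlan_pool_payload("Phys_Pool", 200, 200)
```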
Global Policies
Attachable entity profiles (AEPs) are what tie domains to ports in the fabric. Most likely, not all of your domains will exist on every
single port in the fabric. An AEP has one or more domains associated to it. An AEP is in turn tied to an
interface policy group, covered later. In essence, an AEP is where domains connect to the fabric.
A single AEP should group domains that require similar treatment on fabric interfaces. The sample
configuration will use two AEPs: one for VMware servers, and one for external devices.
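An AEP maps to an infraAttEntityP object whose infraRsDomP children reference the domains it groups. The sketch below builds a hypothetical AEP for the VMware servers; the AEP and domain names are placeholders, not taken from the screenshots.

```python
# Sketch of an AEP (infraAttEntityP): groups domains (via infraRsDomP
# references to each domain's DN) that need similar treatment on
# fabric interfaces.
def aep_payload(name, domain_dns):
    return {
        "infraAttEntityP": {
            "attributes": {"name": name},
            "children": [
                {"infraRsDomP": {"attributes": {"tDn": dn}}} for dn in domain_dns
            ],
        }
    }

# Hypothetical AEP for the VMware servers in the sample configuration
vmware_aep = aep_payload("VMware_AEP", ["uni/vmmp-VMware/dom-vCenter_Domain"])
```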
Interface Policies
Policies: Create policies for link behavior like speed, duplex, Link Aggregation Control Protocol (LACP),
Link Layer Discovery Protocol (LLDP), and Cisco Discovery Protocol.
Policy Groups: Group various interface policies (listed in the previous Policies bullet) together, associated
to an AEP.
Profiles: Profiles select specific interfaces and tie an interface policy group to the selected interface(s) to
dictate port behavior.
Switch Policies
Use these to create switch profiles, set up vPC pairs, and tie interface profiles to specific switches to dictate port behavior on specific nodes.
Policies: Create policies for switches like Multiple Spanning Tree (MST) region mappings and vPC domain
peer switches.
Policy Groups: Group various switch policies (listed in the previous Policies bullet) together.
Profiles: A best practice is to create a profile for each individual leaf switch, and a profile for each vPC pair
of leaves. Interface profiles are associated to switch profiles to dictate on which leaves the configured port
behavior should be applied.
Next, create switch profiles for each leaf switch, and for each pair of leaf switches that will be put into a vPC
domain. Sample configurations of switch profiles are shown for Leaf 101 (Figure 60) and for the Leaf 101 and Leaf
102 pair (Figure 61).
Figure 60.
Figure 61. Switch Profile Configuration for the Leaf 101 and Leaf 102 Pair
Later, interface profiles will be added to the switch profiles to dictate the behavior of ports and presence of VLANs.
This will be shown in the last step.
To place a pair of leaf switches into a vPC domain, add them under the switch policies folder (Figure 62).
Figure 62.
Note:
The same vPC rules apply as on other Cisco Nexus platforms. A vPC domain can only contain two
switches, and a single switch can only be a member of one vPC domain at a time.
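In the object model, pairing two leaves into a vPC domain is expressed as an explicit vPC protection group (fabricExplicitGEp, under the fabric protection policy) whose children name the two member nodes. A hypothetical sketch for the Leaf 101/102 pair; the group name and ID are placeholders:

```python
# Sketch of a vPC protection group (fabricExplicitGEp) pairing two
# leaf nodes (fabricNodePEp) into one vPC domain.
def vpc_pair_payload(name, group_id, node_a, node_b):
    return {
        "fabricExplicitGEp": {
            "attributes": {"name": name, "id": str(group_id)},
            "children": [
                {"fabricNodePEp": {"attributes": {"id": str(node_a)}}},
                {"fabricNodePEp": {"attributes": {"id": str(node_b)}}},
            ],
        }
    }

leaf_pair = vpc_pair_payload("Leaf101_102", 1, 101, 102)
```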
Domain and VLAN Pool Creation and Association
Next, VLAN pools were created for each of the three domains used in the configuration: one pool for VMware server integration, one pool for the external routed domain, and one pool for the physical database servers (Figure 63).
Figure 63.
Next, the VLAN pools must be associated to their respective domains (Figures 64 and 65). The domains have
already been created in this tab, with the exception of the VMware VMM domain, which is always created under
the VM Networking tab. The AEP association will be highlighted in the next section.
Figure 64.
Figure 65.
Figure 66.
Figure 67.
Interface Policies
Several interface policies have been created to accommodate the different types of servers connected to the fabric.
Servers have different connectivity requirements, for example 1 GE versus 10 GE, individual link versus port
channel versus vPC, CDP/LLDP/LACP on or off, and so on. These policies must be configured and tied to the ports to which the servers are connected.
First, reusable interface policies will be created for various link-level behaviors and protocols. As a best practice,
create on and off policies for each of the protocols so they may be reused across different ports. All policies have
been expanded in the following screenshot (Figure 68); however, the link-level policies are highlighted. Both 1 GE
and 10 GE policies have been created to accommodate the different server NIC speeds connected to the leaf
switches.
Figure 68.
Next, the interface policies can be placed into policy groups to lump policies together for devices with similar fabric
connectivity requirements. Each interface policy group also ties to an AEP. Policy groups should be created for
each AEP.
In the sample configuration, Server 1 is running VMware ESXi and is part of the VMM domain, which is tied to the
VMware AEP. Server 1 has a single connection to Leaf 101 at 10 GE. The following screenshot (Figure 69) shows
the 10 GE and other link-level policies that will be used for Server 1. Later, this policy group will be tied to an
interface using an interface profile.
Figure 69.
This policy group could be used across any VMware 10 GE-attached servers, but in the sample topology, this
policy group is only used by Server 1. Server 2 has a 1-GE NIC and will require a different policy group.
Next, an interface profile binds the policy group to an interface, or collection of interfaces. Notice this interface is
generic (for example, 1/1), and does not refer to a specific leaf node. The final step will tie the interface profile to a
switch (leaf) node. The following screenshot (Figure 70) shows the interface profile for Server 1.
Figure 70.
As mentioned earlier, the final step is tying the interface profile to an actual leaf node where the device is
connected. For example, Server 3 is connected to Leaf 101 on port 1/3. The following screenshot (Figure 71)
depicts the association.
Figure 71.
Now, every time a new server or device is attached to a leaf, all that needs to be done is to add a new interface
profile to the appropriate leaf switch.
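The interface profile and its binding to a policy group, described across Figures 70 and 71, can be sketched as an infraAccPortP object: a port selector (infraHPortS) with a port block (infraPortBlk) picks the generic interface, and infraRsAccBaseGrp ties the selection to the policy group. The profile, selector, and policy-group names below are hypothetical.

```python
# Sketch of an interface profile (infraAccPortP) that selects a single
# generic port (e.g. 1/1) via infraHPortS/infraPortBlk and attaches an
# interface policy group through infraRsAccBaseGrp. A switch profile
# would then apply this profile to a specific leaf node.
def interface_profile(name, port, policy_group_dn):
    card, port_num = port.split("/")
    return {
        "infraAccPortP": {
            "attributes": {"name": name},
            "children": [{
                "infraHPortS": {
                    "attributes": {"name": f"{name}_sel", "type": "range"},
                    "children": [
                        {"infraPortBlk": {"attributes": {
                            "name": "blk1",
                            "fromCard": card, "toCard": card,
                            "fromPort": port_num, "toPort": port_num,
                        }}},
                        {"infraRsAccBaseGrp": {"attributes": {"tDn": policy_group_dn}}},
                    ],
                }
            }],
        }
    }

# Hypothetical profile for Server 1 on generic port 1/1
server1_prof = interface_profile(
    "Server1_IntProf", "1/1",
    "uni/infra/funcprof/accportgrp-Server1_PolGrp")
```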
Conclusion
Cisco Application Centric Infrastructure (ACI) can be deployed easily by small to large businesses to roll out new applications rapidly and bring the language of the business to the network.
Printed in USA
C07-733638-00
01/15