CASE STUDIES
FROM DMVPN AND WAN EDGE TO SERVER CONNECTIVITY AND VIRTUAL APPLIANCES
DATA CENTER DESIGN CASE STUDIES
Ivan Pepelnjak, CCIE#1354 Emeritus
The information is provided on an "as is" basis. The authors and ipSpace.net shall have neither
liability nor responsibility to any person or entity with respect to any loss or damages arising from
the information contained in this book.
Figure 2-1: Existing MPLS VPN WAN network topology ........................................................... 2-3
Figure 2-9: Multiple OSPF processes with two-way redistribution ........................................... 2-13
Figure 2-11: Single AS number used on all remote sites ...................................................... 2-19
Figure 2-12: BGP enabled on every layer-3 device between two BGP routers .......................... 2-21
Figure 2-13: BGP routing information redistributed into OSPF ............................................... 2-22
Figure 2-14: Dedicated VLAN between BGP edge routers ..................................................... 2-23
Figure 2-15: Remote site logical network topology and routing ............................................. 2-27
Figure 4-1: Redundant data centers and their internet connectivity ......................................... 4-3
Figure 4-2: Simplified topology with non-redundant internal components ................................. 4-4
Figure 4-3: BGP sessions between Internet edge routers and the ISPs. .................................... 4-6
Figure 4-4: Outside WAN backbone in the redesigned network ................................................ 4-9
Figure 4-5: Point-to-point Ethernet links implemented with EoMPLS on DCI routers ................. 4-12
Figure 4-6: Single stretched VLAN implemented with VPLS across L3 DCI .............................. 4-13
Figure 4-7: Two non-redundant stretched VLANs provide sufficient end-to-end redundancy ...... 4-14
Figure 4-10: Full mesh of IBGP sessions between Internet edge routers ................................ 4-17
Figure 4-11: Virtual Device Contexts: dedicated management planes and physical interfaces ... 4-22
Figure 4-12: Virtual Routing and Forwarding tables: shared management, shared physical interfaces ... 4-23
Figure 5-1: Redundant data centers and their internet connectivity ......................................... 5-3
Figure 5-2: IP addressing and routing with external networks ................................................. 5-4
Figure 5-5: OSPF routing used in enterprise WAN network ................................................... 5-12
Figure 5-6: EBGP and IBGP sessions on data center edge routers .......................................... 5-15
Figure 5-7: BGP local preference in prefix origination and propagation ................................... 5-17
Figure 7-2: Layer-2 fabric with two spine nodes .................................................................... 7-4
Figure 7-3: Layer-2 leaf-and-spine fabric using layer-2 ECMP technology ................................. 7-4
Figure 7-6: VM-to-uplink pinning with two hypervisor hosts connected to the same pair of ToR switches ... 7-7
Figure 7-7: Suboptimal traffic flow with VM-to-uplink pinning ................................................. 7-8
Figure 7-11: Redundant server connectivity requires the same IP subnet on adjacent ToR switches ... 7-13
Figure 7-12: A single uplink is used without server-to-ToR LAG ............................................ 7-15
Figure 7-13: All uplinks are used by a Linux host using balance-tlb bonding mode .................. 7-16
Figure 7-14: All ToR switches advertise IP subnets with the same cost .................................. 7-17
Figure 7-17: Optimal flow of balance-tlb traffic across a layer-2 fabric ................................... 7-21
Figure 7-18: LAG between a server and adjacent ToR switches ............................................. 7-22
Figure 8-22: High-performance WAN edge packet filters combined with a proxy server ............ 8-15
Figure 9-1: Centralized network services implemented with physical appliances ........................ 9-3
Figure 9-2: Centralized network services implemented with physical appliances ........................ 9-4
Figure 10-6: Routing protocol adjacencies across traffic control appliances ........................... 10-14
Figure 11-4: Single orchestration system used to manage multiple racks ............................... 11-9
Figure 12-1: Some applications use application-level load balancing solutions ........................ 12-3
Figure 12-2: Typical workload architecture with network services embedded in the application stack ... 12-3
Figure 12-4: Application tiers are connected through central physical appliances .................... 12-5
Figure 12-5: Virtual appliance NIC connected to overlay virtual network ................................ 12-8
Figure 13-3: Total amount of data transferred between tiers in a typical web application ....... 13-14
Figure 13-4: Bandwidth constraints in a single data center deployment ............................... 13-16
Figure 13-5: Typical transfer times in a single data center application deployment ................ 13-17
Figure 13-6: Reduced bandwidth available between application tiers ................................... 13-19
Figure 13-7: Increased transfer time due to geographic distribution of application stack components ... 13-20
Figure 13-8: Estimated number of requests between application stack components ............... 13-22
Figure 13-9: Impact of increased RTT on total response time ............................................. 13-23
Several years ago I was lucky enough to meet Ivan in person through my affiliation with Gestalt IT's
Tech Field Day program. We also shared the mic via the Packet Pushers Podcast on several
occasions. Through these opportunities I discovered Ivan to be a remarkably thoughtful collaborator.
He has a knack for asking exactly the right question to direct your focus to the specific information
you need. Some of my favorite interactions with Ivan center on his answering my "could I do this?"
inquiries with a "yes, it is possible, but you don't want to do that, because…" response. For a great
example of this, take a look at the OSPF as the Internet VPN Routing Protocol section in chapter 2 of
this book.
I have found during my career as a network technology instructor that case studies are the best
method for teaching network design. Presenting an actual network challenge and explaining the
thought process (including rejected solutions) greatly assists students in building the skills required
to create their own scalable designs. This book uses this structure to explain diverse Enterprise
design challenges, from DMVPN to Data Centers to Internet routing. Over the next few hours of
reading, you will get to experience this approach firsthand.
Jeremy Filliben
Network Architect / Trainer
CCDE#20090003, CCIE# 3851
Most of the engagements touched at least one data center element, be it server virtualization, data
center network core, WAN edge, or connectivity between data centers and customer sites or the public
Internet. I also noticed the same challenges appearing over and over, and decided to document
them in a series of ExpertExpress case studies, which eventually resulted in this book.
The book has two major parts: data center WAN edge and WAN connectivity, and internal data
center infrastructure.
In the first part, I'll walk you through common data center WAN edge challenges:
Optimizing BGP routing on data center WAN edge routers to reduce the downtime and brownouts
following link or node failures (chapter 1);
Integrating an MPLS/VPN network provided by one or more service providers with a DMVPN-over-
Internet backup network (chapter 2);
Building a large-scale DMVPN network connecting one or more data centers with thousands of
remote sites (chapter 3);
Implementing redundant data center connectivity and routing between active/active data centers
and the outside world (chapter 4);
External routing combined with layer-2 data center interconnect (chapter 5).
The final part of the book covers scale-out architectures, multiple data centers and disaster
recovery:
Scale-out private cloud infrastructure using standardized building blocks (chapter 11);
Simplified workload migration and disaster recovery with virtual appliances (chapter 12);
Disaster recovery sites and active-active data centers (chapter 13).
I hope you'll find the selected case studies useful. Should you have any follow-up questions, please
feel free to send me an email (or use the contact form @ ipSpace.net/Contact); I'm also available
for short online consulting engagements.
Happy reading!
Ivan Pepelnjak
September 2014
IN THIS CHAPTER:
This document describes the steps the customer could take to improve the BGP convergence and
reduce the duration of Internet connectivity brownouts.
Edge routers (GW-A and GW-B) have EBGP sessions with ISPs and receive full Internet routing
(~450,000 BGP prefixes1). GW-A and GW-B exchange BGP routes over an IBGP session to ensure
consistent forwarding behavior. GW-A has a higher default local preference; GW-B thus always prefers
IBGP routes received from GW-A over EBGP routes.
Core routers (Core-1 and Core-2) don't run BGP; they run OSPF with GW-A and GW-B, and receive
a default route from both Internet edge routers (the details of default route origination are out of
scope).
1 BGP Routing Table Analysis Reports, http://bgp.potaroo.net/
Neighbor loss detection can be improved with Bidirectional Forwarding Detection (BFD)2, fast
neighbor failover3 or BGP next-hop tracking. BGP update propagation can be fine-tuned with BGP
update timers. The other elements of the BGP convergence process are harder to tune; they depend
primarily on the processing power of the router's CPU and the underlying packet forwarding hardware.
Some router vendors offer functionality that can be used to pre-install backup paths in BGP tables
(BGP best external paths) and forwarding tables (BGP Prefix Independent Convergence 4). These
features can be used to redirect the traffic to the backup Internet connection even before the BGP
convergence process is complete.
Alternatively, you can significantly reduce the CPU load of the Internet edge routers, and improve the
BGP convergence time, by reducing the number of BGP prefixes accepted from the upstream ISPs.
DETAILED SOLUTION
The following design or configuration changes can be made to improve the BGP convergence process:
Design and configuration changes described in this document might be disruptive and might
result in temporary or long-term outages. Always prepare a deployment and rollback plan,
and change your network configuration during a maintenance window. You can use the
ExpertExpress service for a design/deployment check, design review, or a second opinion.
ENABLE BFD
Bidirectional Forwarding Detection (BFD) has been available in major Cisco IOS and Junos software
releases for several years. Service providers prefer BFD over BGP hold time adjustments because
high-end routers process BFD on the linecard, whereas the BGP hold timer relies on the BGP process
(running on the main CPU) sending keepalive packets over the BGP TCP session.
To configure BFD with BGP, use the following configuration commands on Cisco IOS:
interface <uplink>
bfd interval <timer> min_rx <timer> multiplier <n>
!
router bgp 65000
neighbor <ip> remote-as <ISP-AS>
neighbor <ip> fall-over bfd
Although you can configure BFD timers in the millisecond range, don't set them too low. BFD should
detect a BGP neighbor loss in a few seconds; you wouldn't want a short-term link glitch to start a
CPU-intensive BGP convergence process.
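For illustration, a possible timer setting (the interface name, neighbor address, and AS numbers are
assumptions) that detects a neighbor loss in roughly a second:
! ~300 ms BFD packets, 3 missed packets = ~1 second detection time
interface GigabitEthernet0/0
 bfd interval 300 min_rx 300 multiplier 3
!
router bgp 65000
 neighbor 192.0.2.1 remote-as 64501
 neighbor 192.0.2.1 fall-over bfd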
Cisco IOS and Junos support BFD on EBGP sessions. BFD on IBGP sessions has been available in
Junos since release 8.3. Multihop BFD is available in Cisco IOS, but there's still no support for BFD
on IBGP sessions.
BGP next hop tracking deployed on GW-B could trigger the BGP best path selection even
before GW-B starts receiving BGP update messages with withdrawn routes from GW-A.
In environments using default routing, you should limit the valid prefixes that can be used for BGP
next hop tracking with the bgp nexthop route-map router configuration command.
If you want to use BGP next hop tracking in the primary/backup Internet access scenario described
in this document:
Do not change the BGP next hop on IBGP updates with the neighbor next-hop-self router
configuration command. Example: routes advertised from GW-A to GW-B must retain the original
next hop received from the ISP-A router.
Advertise IP subnets of ISP uplinks into IGP (example: OSPF) from GW-A and GW-B.
Use a route-map with BGP next hop tracking to prevent the default route advertised by GW-A
and GW-B from being used as a valid path toward external BGP next hop.
When the link between GW-A and ISP-A fails, GW-A revokes the directly-connected IP subnet from
its OSPF LSA, enabling GW-B to start the BGP best path selection process before it receives BGP updates
from GW-A.
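A possible configuration sketch for GW-B (uplink subnets, process numbers, and neighbor addresses
are assumptions): the local uplink subnet is advertised into OSPF, no next-hop-self is used on the
IBGP session, and a route map limits the prefixes usable for BGP next-hop tracking:
ip prefix-list ISP-UPLINKS seq 10 permit 192.0.2.0/30
ip prefix-list ISP-UPLINKS seq 20 permit 198.51.100.0/30
!
route-map NHT-FILTER permit 10
 match ip address prefix-list ISP-UPLINKS
!
router ospf 1
 network 198.51.100.0 0.0.0.3 area 0
!
router bgp 65000
 bgp nexthop route-map NHT-FILTER
 ! no next-hop-self toward the IBGP peer - EBGP next hops stay unchanged
 neighbor 10.0.0.2 remote-as 65000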
BGP next-hop tracking detects link failures that result in loss of IP subnet. It cannot detect
EBGP neighbor failure unless you combine it with BFD-based static routes.
BGP update timer adjustment should be one of the last steps in the convergence tuning process; in
most scenarios you'll gain more by reducing the number of BGP prefixes accepted by the Internet
edge routers.
This solution is ideal if one could guarantee that all upstream providers always have
visibility of all Internet destinations. In case of a peering dispute, that might not be true,
and your network might potentially lose connectivity to some far-away destinations.
It's impossible to document a generic BGP prefix filtering policy. You should always accept prefixes
originated by upstream ISPs, their customers, and their peering partners. In most cases, filters
based on AS-path lengths work well (example: accept prefixes that have no more than three distinct
AS numbers in the AS path).
When building an AS-path filter, consider the impact of AS path prepending on your AS-path
filter and use regular expressions that can match the same AS number multiple times 6.
Example: matching up to three AS numbers in the AS path might not be good enough, as
another AS might use AS-path prepending to enforce primary/backup path selection7.
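A possible sketch of such an inbound filter on Cisco IOS (neighbor address and AS numbers are
assumptions, and the patterns assume an IOS release that supports backreferences in AS-path
regular expressions); the patterns permit paths with up to three distinct AS numbers, each of which
may be prepended multiple times:
ip as-path access-list 10 permit ^([0-9]+)(_\1)*$
ip as-path access-list 10 permit ^([0-9]+)(_\1)*_([0-9]+)(_\3)*$
ip as-path access-list 10 permit ^([0-9]+)(_\1)*_([0-9]+)(_\3)*_([0-9]+)(_\5)*$
!
router bgp 65000
 neighbor 192.0.2.1 remote-as 64501
 neighbor 192.0.2.1 filter-list 10 in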
After deploying inbound BGP update filters, your autonomous system no longer belongs to the
default-free zone8; your Internet edge routers need default routes from the upstream ISPs to reach
destinations that are no longer present in their BGP tables.
BGP default routes could be advertised by upstream ISPs, requiring no further configuration on the
Internet edge routers.
The default routes on the Internet edge routers should use next hops that are far away to
ensure the next-hop reachability reflects the health of the upstream ISP's network. The
use of root DNS servers as next hops of static routes does not mean that the traffic will be
sent to the root DNS servers, just toward them.
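A minimal sketch of such a recursive static default route, using the address of one of the root DNS
servers as a far-away next hop (verify the actual address and use several such routes in practice):
! the default route stays usable only while the far-away next hop is
! reachable through a BGP-learned prefix
ip route 0.0.0.0 0.0.0.0 198.41.0.4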
BGP PIC is a recently-introduced feature that does not necessarily interoperate with all other BGP
features one might want to use. Its deployment and applicability are left for further study.
9 Responsible Generation of BGP Default Route, http://blog.ipspace.net/2011/09/responsible-generation-of-bgp-default.html
The backup Internet edge router can use BGP next-hop tracking to detect primary uplink loss and adjust
its forwarding tables before receiving BGP updates from the primary Internet edge router.
To reduce the CPU overload and slow convergence caused by massive changes in the BGP, routing
and forwarding tables following a link or EBGP session failure:
Reduce the number of BGP prefixes accepted by the Internet edge routers;
Upgrade the Internet edge routers to a more CPU-capable platform.
IN THIS CHAPTER:
IP ROUTING OVERVIEW
DESIGN REQUIREMENTS
SOLUTION OVERVIEW
OSPF AS THE INTERNET VPN ROUTING PROTOCOL
BENEFITS AND DRAWBACKS OF OSPF IN INTERNET VPN
BGP AS THE INTERNET VPN ROUTING PROTOCOL
IBGP OR EBGP?
AUTONOMOUS SYSTEM NUMBERS
INTEGRATION WITH LAYER-3 SWITCHES
SUMMARY OF DESIGN CHOICES
The traffic in the Customer's WAN network has been increasing steadily, prompting the Customer to
increase the MPLS/VPN bandwidth or to deploy an alternate VPN solution. The Customer decided to
trial IPsec VPN over the public Internet, initially as a backup, and potentially as the primary WAN
connectivity solution.
The customer will deploy new central site routers to support the IPsec VPN service. These routers
will terminate the IPsec VPN tunnels and provide whatever other services are needed (example:
QoS, routing protocols) to the IPsec VPNs.
OSPF routes are exchanged between the Customer's core routers and the SP's PE routers, and between
the Customer's layer-3 switches and the SP's CPE routers at remote sites. The Customer's central site is in OSPF
area 0; all remote sites belong to OSPF area 51.
The only external connectivity remote customer sites have is through the MPLS/VPN SP
backbone; the OSPF area number used at those sites is thus irrelevant, and the SP chose to
use the same OSPF area on all sites to simplify CPE router provisioning and
maintenance.
OSPF routes received from the customer equipment (central site routers and remote site layer-3
switches) are redistributed into the BGP used by the SP's MPLS/VPN service, as shown in Figure 2-4.
The CPE routers redistributing remote site OSPF routes into the SP's BGP are not PE routers. The OSPF
routes that get redistributed into BGP thus do not have OSPF-specific extended BGP communities,
lacking any indication that they came from an OSPF routing process. These routes are therefore
redistributed as external OSPF routes into the central site's OSPF routing process by the SP's PE
routers.
The OSPF routes advertised to the PE routers from the central site get the extended BGP
communities when they're redistributed into MP-BGP, but since the non-PE CPE routers at the remote
sites cannot use those communities when redistributing BGP back into OSPF, these routes also
appear as external OSPF routes at the remote sites.
Summary: All customer routes appear as external OSPF routes at all other customer sites (see
Figure 2-5 for details).
Dynamic routing: the solution must support dynamic routing over the new VPN infrastructure
to ensure fast failover on MPLS/VPN or Internet VPN failures;
Flexible primary/backup configuration: Internet VPN will be used as a backup path until it
has been thoroughly tested. It might become the primary connectivity option in the future;
Optimal traffic flow: Traffic to/from sites reachable only over the Internet VPN (due to local
MPLS/VPN failures) should not traverse the MPLS/VPN infrastructure. Traffic between an
MPLS/VPN-only site and an Internet VPN-only site should traverse the central site;
Hub-and-spoke or peer-to-peer topology: Internet VPN will be used in a hub-and-spoke
topology (hub = central site). The topology will be migrated to a peer-to-peer (any-to-any)
overlay network when the Internet VPN becomes the primary WAN connectivity solution.
Minimal configuration changes: Deployment of Internet VPN connectivity should not require
major configuration changes in the existing remote site equipment. Central site routers will
probably have to be reconfigured to take advantage of the new infrastructure.
Minimal disruption: The introduction of Internet VPN connectivity must not disrupt the
existing WAN network connectivity.
Minimal dependence on MPLS/VPN provider: After the Internet VPN infrastructure has been
established and integrated with the existing MPLS/VPN infrastructure (which might require
configuration changes on the SP-managed CPE routers), the changes in the traffic flow must not
require any intervention on the SP-managed CPE routers.
Please refer to the DMVPN: From Basics to Scalable Networks and DMVPN Designs webinars
for more DMVPN details. This case study focuses on the routing protocol design
considerations.
Using BGP as the Internet VPN routing protocol would introduce a new routing protocol into the
Customer's network. While the network designers and operations engineers would have to master a
new technology (on top of DMVPN) before production deployment of the Internet VPN, the reduced
complexity of a BGP-only WAN design more than offsets that investment.
1. Routes received through MPLS/VPN infrastructure are inserted as external OSPF routes into the
intra-site OSPF routing protocol. Routes received through Internet VPN infrastructure must be
worse than the MPLS/VPN-derived OSPF routes, requiring them to be external routes as well.
2. MPLS/VPN- and Internet VPN routers must use the same OSPF external route type to enable
easy migration of the Internet VPN from backup to primary connectivity solution. The only
difference between the two sets of routes should be their OSPF metric.
3. Multiple sites must not be in the same area. The OSPF routing process would prefer intra-area
routes (over Internet VPN infrastructure) to MPLS/VPN routes in a design with multiple sites in
the same area.
4. Even though each site must be at least an independent OSPF area, every site must use the same
OSPF area number to preserve the existing intra-site routing protocol configuration.
The requirement to advertise site routes as external OSPF routes further limits the design options.
While the requirements could be met by remote site and core site layer-3 switches advertising
directly connected subnets (server and client subnets) as external OSPF routes (as shown in Figure
2-8), such a design requires configuration changes on the subnet-originating switch whenever you want
to adjust the WAN traffic flow (which can only be triggered by changes in OSPF metrics).
The only OSPF design that would meet the OSPF constraints listed above and the design
requirements (particularly the minimal configuration changes and minimal disruption requirements)
is a design displayed in Figure 2-9 where:
The drawbacks are also exceedingly clear: the only design that meets all the requirements is
complex as it requires multiple OSPF routing processes and parallel two-way redistribution (site-to-
MPLS/VPN and site-to-Internet VPN) between multiple routing domains.
BGP local preference (within a single autonomous system) or Multi-Exit Discriminator (across
autonomous systems) would be used to select the optimum paths, and BGP communities would be
used to influence local preference between autonomous systems.
The BGP-only design seems exceedingly simple, but there are still a number of significant design
choices to make:
IBGP or EBGP sessions: Which routers would belong to the same autonomous system (AS)?
Would the network use one AS per site or would a single AS span multiple sites?
Autonomous system numbers: There are only 1024 private AS numbers. Would the design
reuse a single AS number on multiple sites or would each site have a unique AS number?
IBGP OR EBGP?
There are numerous differences between EBGP and IBGP, and their nuances sometimes make it hard
to decide whether to use EBGP or IBGP in a specific scenario. However, the following guidelines
usually result in simple and stable designs:
If you plan to use BGP as the sole routing protocol in (a part of) your network, use EBGP.
If youre using BGP in combination with another routing protocol that will advertise reachability
of BGP next hops, use IBGP. You can also use IBGP between routers residing in a single subnet.
It's easier to implement routing policies with EBGP. Large IBGP deployments need route
reflectors for scalability, and some BGP implementations don't apply BGP routing policies on
reflected routes.
All routers in the same AS should have the same view of the network and the same routing
policies.
EBGP should be used between routers in different administrative (or trust) domains.
Applying these guidelines to our WAN network gives the following results:
EBGP will be used across the DMVPN network. A second routing protocol running over DMVPN would
be needed to support IBGP across DMVPN, resulting in an overly complex network design.
Throughout the rest of this document we'll assume the Service Provider agreed to use IBGP
between CPE routers and Internet VPN routers on the same remote site.
The Customer has to get an extra private AS number (coordinated with the MPLS/VPN SP) for the
central site, or use a public AS number for that site.
In a scenario where the SP insists on using EBGP between CPE routers and Internet VPN routers, the
Customer has several options:
Reuse a single AS number for all remote sites even though each site has to be an individual AS;
Use a unique private AS number on every remote site;
Use unique 4-octet AS numbers on the remote sites.
Unless you're ready to deploy 4-octet AS numbers, the first option is the only viable option for
networks with more than a few hundred remote sites (because there are only 1024 private AS
numbers). The second option is feasible for smaller networks with a few hundred remote sites.
The last option is clearly the best one, but requires router software with 4-octet AS number support
(4-octet AS numbers are supported by all recent Cisco and Juniper routers).
Routers using 4-octet AS numbers (defined in RFC 4893) can interoperate with legacy
routers that don't support this BGP extension; the Service Provider's CPE routers thus don't
have to support 4-byte AS numbers (customer routers would appear to belong to AS
23456).
Default loop prevention filters built into BGP reject EBGP updates with the local AS number in the AS
path, making it impossible to pass routes between two remote sites when they use the same AS
number. If you have to reuse the same AS number on multiple remote sites, disable the BGP loop
prevention filters as shown in Figure 2-11 (using the neighbor allowas-in command on Cisco IOS).
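A minimal spoke-router sketch (hub address and AS numbers are assumptions); the numeric
argument keeps the number of accepted occurrences of the local AS number as low as possible:
router bgp 65001
 neighbor 10.255.0.1 remote-as 65000
 neighbor 10.255.0.1 allowas-in 1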
While you could use default routing from the central site to solve this problem, the default routing
solution cannot be used when you have to implement the any-to-any traffic flow requirement.
Some BGP implementations might filter outbound BGP updates, omitting BGP prefixes with the
AS number of the BGP neighbor in the AS path from the updates sent to that neighbor.
Cisco IOS does not implement outbound filters based on neighbor AS number; if you use
routers from other vendors, check the documentation.
There are several workarounds you can use when dealing with non-BGP devices in the forwarding
path:
Redistribute BGP routes into IGP (example: OSPF). Non-BGP devices in the forwarding path thus
receive BGP information through their regular IGP (see Figure 2-13).
Enable MPLS forwarding. Ingress network edge devices running BGP label IP datagrams with
MPLS labels assigned to BGP next hops to ensure the datagrams get delivered to the proper
egress device; intermediate nodes perform label lookup, not IP lookup, and thus dont need the
full IP forwarding information.
Create a dedicated layer-2 subnet (VLAN) between BGP edge routers and advertise a default route
to other layer-3 devices as shown in Figure 2-14. This design might result in suboptimal traffic flow.
We'll extend BGP to the core layer-3 switches on the central site (these switches will also act as BGP
route reflectors) and use a VLAN between the Service Provider's CPE router and the Internet VPN router on
remote sites.
REMOTE SITES
An Internet VPN router will be added to each remote site. It will be in the same subnet as the existing
CPE router.
The remote site layer-3 switch might have to be reconfigured if it used a layer-3 physical
interface on the port to which the CPE router was connected. The layer-3 switch should use a
VLAN (SVI) interface to connect to the new router subnet.
Internet VPN router will redistribute internal OSPF routes received from the layer-3 switch into BGP.
External OSPF routes will not be redistributed, preventing routing loops between BGP and OSPF.
The OSPF-to-BGP route redistribution does not impact existing routing, as the CPE router already
does it; it's configured on the Internet VPN router solely to protect the site against a CPE router
failure.
Internet VPN router will redistribute EBGP routes into OSPF (redistribution of IBGP routes is disabled
by default on most router platforms). OSPF external route metric will be used to influence the
forwarding decision of the adjacent layer-3 switch.
The OSPF metric of redistributed BGP routes could be hard-coded into the Internet VPN router
configuration or based on BGP communities attached to EBGP routes. The BGP community-
based approach is obviously more flexible and will be used in this design.
The following routing policies will be configured on the Internet VPN routers (a configuration sketch follows the list):
EBGP routes with BGP community 65000:1 (Backup route) will get local preference 50. These
routes will be redistributed into OSPF as external type 2 routes with metric 10000.
EBGP routes with BGP community 65000:2 (Primary route) will get local preference 150. These
routes will be redistributed into OSPF as external type 1 routes with metric 1.
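A possible sketch of these policies (AS numbers, neighbor address, and route-map names are
assumptions):
ip community-list standard BACKUP permit 65000:1
ip community-list standard PRIMARY permit 65000:2
!
route-map DMVPN-IN permit 10
 match community BACKUP
 set local-preference 50
route-map DMVPN-IN permit 20
 match community PRIMARY
 set local-preference 150
route-map DMVPN-IN permit 30
 ! accept everything else with default attributes
!
route-map BGP-INTO-OSPF permit 10
 match community BACKUP
 set metric 10000
 set metric-type type-2
route-map BGP-INTO-OSPF permit 20
 match community PRIMARY
 set metric 1
 set metric-type type-1
!
router bgp 65001
 neighbor 172.16.1.1 route-map DMVPN-IN in
!
router ospf 1
 redistribute bgp 65001 subnets route-map BGP-INTO-OSPF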
Furthermore, the remote site Internet VPN router has to prevent potential route leakage between the
MPLS/VPN and Internet VPN WAN networks; such leakage might turn one or more remote sites into
transit sites forwarding traffic between the two WAN networks. The following two policies (sketched
after the list) prevent it:
NO-EXPORT community will be set on updates sent over the IBGP session to the CPE router,
preventing the CPE router from advertising routes received from the Internet VPN router into the
MPLS/VPN WAN network.
NO-EXPORT community will be set on updates received over the IBGP session from the CPE
router, preventing leakage of these updates into the Internet VPN WAN network.
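A possible sketch of the IBGP session toward the CPE router implementing both policies (addresses
and AS number are assumptions):
route-map SET-NO-EXPORT permit 10
 set community no-export additive
!
router bgp 65001
 neighbor 10.0.1.1 remote-as 65001
 neighbor 10.0.1.1 send-community
 neighbor 10.0.1.1 route-map SET-NO-EXPORT out
 neighbor 10.0.1.1 route-map SET-NO-EXPORT in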
CENTRAL SITE
The following steps will be used to deploy BGP on the central site:
1. BGP will be configured on existing MPLS/VPN edge routers, on the new Internet VPN edge
routers, and on the core layer-3 switches.
After this step, the central site BGP infrastructure is ready for routing protocol migration.
5. Internal OSPF routes will be redistributed into BGP on both core layer-3 switches. No other
central site router will perform route redistribution.
At this point, the PE routers start receiving central site routes through PE-CE EBGP sessions and
prefer EBGP routes received from the MPLS/VPN edge routers over OSPF routes received from the same
routers.
6. Default route will be advertised from layer-3 switches into OSPF routing protocol.
Access-layer switches at the core site will have two sets of external OSPF routes: specific routes
originated by the PE routers and default route originated by core layer-3 switches. They will still
prefer the specific routes originated by the PE routers.
Likewise, the core site access-layer switches stop receiving specific remote site prefixes that were
redistributed into OSPF on PE routers and rely exclusively on default route advertised by the core
layer-3 switches.
Figure 2-16: Central site logical network topology and BGP+OSPF routing
VPN traffic flow through the central site: configure neighbor next-hop-self on DMVPN EBGP
sessions. Central site Internet VPN routers start advertising their IP addresses as EBGP next
hops for all EBGP prefixes, forcing the site-to-site traffic to flow through the central site.
Any-to-any VPN traffic flow: configure no neighbor next-hop-self on DMVPN EBGP sessions.
Default EBGP next hop processing will ensure that the EBGP routes advertised through the
central site routers retain the optimal BGP next hop IP address of the remote site if the two
remote sites connect to the same DMVPN subnet, or IP address of the central site router in any
other case.
Internet VPN as the backup connectivity: Set BGP community 65000:1 (Backup route) on all
EBGP updates sent from the central site routers. Remote site Internet VPN routers will lower the
local preference of routes received over DMVPN EBGP sessions and thus prefer IBGP routes
received from CPE router (which got the routes over MPLS/VPN WAN network).
CONCLUSIONS
A design with a single routing protocol running in one part of the network (example: WAN network
or within a site) is usually less complex than a design that involves multiple routing protocols and
route redistribution.
When you have to combine MPLS/VPN WAN connectivity with any other WAN connectivity, you're
forced to incorporate the BGP used within the MPLS/VPN network into your network design. Even though
MPLS/VPN technology supports multiple PE-CE routing protocols, the service providers rarely
implement IGP PE-CE routing protocols with all the features you might need for successful enterprise
WAN integration. Provider-operated CE routers are even worse, as they cannot propagate
MPLS/VPN-specific information (extended BGP communities) into the enterprise IGP in which they
participate.
A WAN network based on BGP is thus the only logical choice, resulting in a single protocol (BGP) being
used in the WAN network. Incidentally, BGP provides a rich set of routing policy features, making
your WAN network more flexible than it could have been had you used OSPF or EIGRP.
IN THIS CHAPTER:
The initial DMVPN access network should offer hub-and-spoke connectivity, with any-to-any traffic
implemented at a later stage.
Should they use Internal BGP (IBGP) or External BGP (EBGP) in the DMVPN access network?
What autonomous system (AS) numbers should they use on remote (spoke) sites if they decide
to use EBGP in the DMVPN access network?
IBGP sessions within the WAN backbone are established between loopback interfaces, and the
Customer is using OSPF to exchange reachability information within the WAN backbone (non-
backbone routes are transported in BGP).
IBGP is used to exchange routing information between all BGP routers within an autonomous
system. IBGP sessions are usually established between non-adjacent routers (commonly using
loopback interfaces); routers rely on an IGP routing protocol (example: OSPF) to exchange intra-AS
reachability information.
EBGP is used to exchange routing information between autonomous systems. EBGP sessions are
usually established between directly connected IP addresses of adjacent routers. EBGP was designed to
work without an IGP.
12 IBGP or EBGP in an Enterprise Network, http://blog.ioshints.info/2011/08/ibgp-or-ebgp-in-enterprise-network.html
Routes received from an EBGP peer are further advertised to all other EBGP and IBGP peers
(unless an inbound or outbound filter drops the route);
Routes received from an IBGP peer are advertised to EBGP peers but not to other IBGP peers.
BGP route reflectors (RR) use slightly modified IBGP route propagation rules:
Routes received from an RR client are advertised to all other IBGP and EBGP peers. RR-specific
BGP attributes are added to the routes advertised to IBGP peers to detect IBGP loops.
Routes received from other IBGP peers are advertised to RR clients and EBGP peers.
The route propagation rules influence the setup of BGP sessions in a BGP network:
A BGP router advertising a BGP route without a NEXT HOP attribute (locally originated BGP
route) sets the BGP next hop to the source IP address of the BGP session over which the BGP
route is advertised;
A BGP router advertising a BGP route to an IBGP peer does not change the value of the BGP
NEXT HOP attribute;
A BGP router advertising a BGP route to an EBGP peer sets the value of the BGP NEXT HOP
attribute to the source IP address of the EBGP session unless the existing BGP NEXT HOP value
belongs to the same IP subnet as the source IP address of the EBGP session.
You can modify the default BGP next hop processing rules with the following Cisco IOS configuration
options:
neighbor next-hop-self router configuration command sets the BGP NEXT HOP attribute to the
source IP address of the BGP session regardless of the default BGP next hop processing rules.
13 BGP Next Hop Processing, http://blog.ioshints.info/2011/08/bgp-next-hop-processing.html
Recent Cisco IOS releases support an extension to the neighbor next-hop-self command: the
neighbor address next-hop-self all configuration command causes a route reflector to
change BGP next hops on all IBGP and EBGP routes sent to the specified neighbor.
Inbound or outbound route maps can set the BGP NEXT HOP to any value with the set ip next-
hop command (outbound route maps are not applied to reflected routes). The most useful
variant of this command is set ip next-hop peer-address used in an inbound route map.
set ip next-hop peer-address sets BGP next hop to the IP address of BGP neighbor when
used in an inbound route map or to the source IP address of the BGP session when used in
an outbound route map.
14 BGP Route Reflectors, http://wiki.nil.com/BGP_route_reflectors
The number of spoke sites connected to a single hub site is large enough to cause scalability
issues in other routing protocols (example: OSPF);
The customer wants to run a single routing protocol across multiple access networks (MPLS/VPN
and DMVPN) to eliminate route redistribution and simplify overall routing design 15.
In both cases, routing in the DMVPN network relies exclusively on BGP. BGP sessions are established
between directly connected interfaces (across the DMVPN tunnel) and there's no IGP to resolve BGP
next hops, making EBGP a better fit (at least based on standard BGP use cases).
The customer has two choices when numbering the spoke DMVPN sites:
Each spoke DMVPN site could become an independent autonomous system with a unique AS
number;
All spoke DMVPN sites use the same autonomous system number.
15 See Integrating Internet VPN with MPLS/VPN WAN case study for more details.
The BGP configuration of the spoke routers would be even simpler: one or more BGP neighbors
(DMVPN hub routers) and the list of prefixes advertised by the DMVPN spoke site (see Printout 3-2).
For historic reasons, the network BGP router configuration command requires the mask
option unless the advertised prefix falls on a major network boundary (class A, B or C
network).
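A minimal sketch of such a spoke configuration (AS numbers, hub address, and the advertised
prefix are assumptions):
router bgp 65001
 neighbor 10.255.0.1 remote-as 65000
 network 192.168.1.0 mask 255.255.255.0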
All printouts were generated in a test network connecting a DMVPN hub router (Hub) to two
DMVPN spoke routers (RA and RB) with IP prefixes 192.168.1.0/24 and 192.168.2.0/24,
and a core router with IP prefix 192.168.10.0/24.
RA#show ip bgp
BGP table version is 6, local router ID is 192.168.1.1
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
r RIB-failure, S Stale, m multipath, b backup-path, f RT-Filter,
x best-external, a additional-path, c RIB-compressed,
Origin codes: i - IGP, e - EGP, ? - incomplete
RPKI validation codes: V valid, I invalid, N Not found
You can adjust the EBGP next hop processing to the routing needs of Phase 1 DMVPN networks with
the neighbor next-hop-self router configuration command configured on the hub router. After
applying this command to our sample network (Printout 3-4) the hub router becomes the BGP next
hop of all BGP prefixes received by DMVPN spoke sites (Printout 3-5).
Printout 3-6: DMVPN hub router advertising just the default route to DMVPN spoke routers
RA#show ip bgp
BGP table version is 10, local router ID is 192.168.1.1
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
r RIB-failure, S Stale, m multipath, b backup-path, f RT-Filter,
x best-external, a additional-path, c RIB-compressed,
Origin codes: i - IGP, e - EGP, ? - incomplete
RPKI validation codes: V valid, I invalid, N Not found
The default-route-only design works well for Phase 1 or Phase 3 DMVPN networks. Phase 2 DMVPN
networks require a slightly more complex approach: the hub router has to send all BGP prefixes
originated by other spoke sites (with unchanged BGP next hops) to the spoke routers.
DMVPN spoke sites might have to use IPsec frontdoor VRF if they rely on default routing
within the enterprise network and toward the global Internet 16.
You could use an outbound route map that matches on BGP next hop value on the BGP hub router to
achieve this goal (see Printout 3-8 for details).
16 See the DMVPN: From Basics to Scalable Networks webinar for more details, http://www.ipspace.net/DMVPN
Even though all spoke sites use the same AS number, there's no need for a full mesh of
IBGP sessions (or route reflectors) between spoke routers. All BGP updates are propagated
through the hub router.
Since all EBGP neighbors (spoke sites) belong to the same autonomous system, it's possible to use
dynamic BGP neighbors configured with the bgp listen BGP router configuration command,
significantly reducing the size of the BGP configuration on the hub router (see Printout 3-9 for details).
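A possible hub-router sketch using dynamic BGP neighbors (the DMVPN subnet, AS numbers, and
peer-group name are assumptions):
router bgp 65000
 neighbor SPOKES peer-group
 neighbor SPOKES remote-as 65001
 bgp listen limit 200
 bgp listen range 10.255.0.0/24 peer-group SPOKES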
BGP loop detection drops all EBGP updates that contain the local AS number in the AS path; spoke sites
thus discard all inbound updates originated by other spoke sites. A sample spoke router BGP table is
shown in Printout 3-10; note the lack of spoke-originated prefixes.
Printout 3-10: BGP table on DMVPN spoke router (prefixes originated by other spokes are missing)
The default BGP loop prevention behavior might be ideal in DMVPN Phase 1 or Phase 3 networks
(see Using EBGP with Phase 1 DMVPN Networks and Reducing the Size of the Spoke Routers' BGP
Table for more details), but is not appropriate for DMVPN Phase 2 networks.
In DMVPN Phase 2 networks we have to disable the BGP loop prevention on spoke site routers with
the neighbor allowas-in command17 (sample spoke router configuration is in Printout 3-11).
17 The use of this command in similarly-designed MPLS/VPN networks is described in detail in the MPLS and VPN Architectures book.
Printout 3-11: Disabling BGP loop prevention logic on a DMVPN spoke router
Disabling BGP loop prevention logic is dangerous: prefixes originated by a DMVPN spoke router are
sent back to (and accepted by) the same spoke router (example: prefix 192.168.1.0 in Printout 3-12),
and it's possible to get temporary forwarding loops or long-term instabilities in designs with multiple
BGP-speaking hub routers.
The maximum number of occurrences of the local AS number in the AS path specified in the neighbor
allowas-in command should thus be kept as low as possible (the ideal value is one).
Printout 3-12: Duplicate prefix on a DMVPN spoke router caused by a BGP update loop
Alternatively, one could adjust the AS path on updates sent by the DMVPN hub router with the
neighbor as-override router configuration command18 (see Printout 3-13), which replaces all
instances of the neighbor AS number with the local AS number. The resulting BGP table on a DMVPN spoke
router is shown in Printout 3-14.
18 The neighbor as-override command is extensively described in the MPLS and VPN Architectures book.
Printout 3-14: BGP table with modified AS paths on DMVPN spoke router
IBGP hub router configuration using dynamic BGP neighbors is extremely simple, as evidenced by
the sample configuration in Printout 3-15.
19 BGP Essentials: AS path prepending, http://blog.ipspace.net/2008/02/bgp-essentials-as-path-prepending.html
This approach works well in networks that use IBGP exclusively within the DMVPN network, as all
IBGP next hops belong to the DMVPN network (and are thus reachable by all spoke routers).
Designs satisfying this requirement include:
Networks that don't use BGP beyond the boundaries of the DMVPN access network (the core WAN
network might use an IGP like OSPF or EIGRP);
Networks that run DMVPN BGP routing in a dedicated autonomous system.
In all other cases, lack of BGP next hop processing across IBGP sessions (explained in the BGP Next
Hop Processing section) causes connectivity problems.
For example, in our sample network the spoke routers cannot reach destinations beyond the DMVPN
hub router: BGP refuses to use those prefixes because the DMVPN spoke router cannot reach the
BGP next hop (Printout 3-16).
Printout 3-16: DMVPN spoke routers cannot reach prefixes behind the DMVPN hub router
Use default routing in the DMVPN network (see the Reducing the Size of the Spoke Routers' BGP
Table section for more details) unless you're using Phase 2 DMVPN.
Advertise a default route from the DMVPN hub router with the neighbor default-originate
router configuration command. DMVPN spokes will use the default route to reach the IBGP next hop.
Some versions of Cisco IOS might not use an IBGP route to resolve a BGP next hop. Check
the behavior of your target Cisco IOS version before deciding to use this approach.
Change the IBGP next hop on all spoke routers with an inbound route map using the set ip next-
hop peer-address route-map configuration command. This approach increases the complexity
of the spoke site routers' configuration and is thus best avoided.
Change the IBGP next hop on DMVPN hub router with the neighbor next-hop-self all router
configuration command.
This feature was introduced recently and might not be available on the target DMVPN hub
router.
Default routing fits the current customer's requirements (hub-and-spoke traffic); potential future direct
spoke-to-spoke connectivity can be implemented with default routing and Phase 3 DMVPN.
Conclusion #1: The Customer will use default routing over BGP. The hub router will advertise the default
route (and no other BGP prefix) to the spokes.
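A possible hub-router sketch implementing this conclusion (AS number and spoke address are
assumptions); the outbound prefix list ensures nothing but the default route is advertised to the spoke:
ip prefix-list DEFAULT-ONLY seq 5 permit 0.0.0.0/0
!
router bgp 65000
 neighbor 10.255.0.2 remote-as 65000
 neighbor 10.255.0.2 default-originate
 neighbor 10.255.0.2 prefix-list DEFAULT-ONLY out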
Spoke routers could use static host routes to send IPsec traffic to the hub router in the initial
deployment. Implementing spoke-to-spoke connectivity with static routes is time-consuming and
error prone, particularly in environments with dynamic spoke transport addresses. The customer
would thus like to use default routing toward the Internet.
Conclusion #2: The Customer will use IPsec frontdoor VRF with default routing toward the Internet.
The customer does not plan to connect spoke DMVPN sites to any other access network (example:
MPLS/VPN), so they're free to choose any AS numbering scheme they wish.
Any EBGP or IBGP design described in this document would meet the customer's routing requirements;
IBGP is the easiest one to implement and modify should future needs change (assuming the
DMVPN hub router supports the neighbor next-hop-self all functionality).
IN THIS CHAPTER:
SIMPLIFIED TOPOLOGY
IP ADDRESSING AND ROUTING
DESIGN REQUIREMENTS
FAILURE SCENARIOS
SOLUTION OVERVIEW
LAYER-2 WAN BACKBONE
BENEFITS AND DRAWBACKS OF PROPOSED TECHNOLOGIES
IP ROUTING ACROSS LAYER-2 WAN BACKBONE
BGP ROUTING
OUTBOUND BGP PATH SELECTION
The customer would like to make the Internet connectivity totally redundant. For example: if both
Internet connections from DC1 fail, the public IP prefix of DC1 should remain accessible through
Internet connections of DC2 and the DCI link.
Redundant DCI switches could also be merged into a single logical device using technologies
like VSS (Cisco), IRF (HP) or Virtual Chassis (Juniper). The simplified topology thus
accurately represents many real-life deployment scenarios.
We'll further assume that the two sites do not have significant campus networks attached to them.
The outbound traffic traversing the Internet links is thus generated solely by the servers (example:
web hosting) and not by end-users surfing the Internet.
You can easily adapt the design to a mixed campus/data center design by modeling the campus
networks as separate sites attached to the same firewalls or Internet edge LAN.
Internet edge routers connect the public LAN segment to the Internet, run BGP with the ISPs' edge
routers, and provide a single virtual exit point to the firewalls through a first-hop redundancy protocol.
The Customer's DCI routers connect the internal data center networks and currently don't provide transit
services.
Static default routes pointing to the local firewall inside IP address are used on the data center core
switches.
Figure 4-3: BGP sessions between Internet edge routers and the ISPs.
Resilient inbound traffic flow: both sites must advertise IP prefixes assigned to DC1 and DC2
to the Internet;
No session loss: Failure of one or more Internet-facing links must not result in application
session loss;
Optimal inbound traffic flow: Traffic for IP addresses in one of the data centers should arrive
over uplinks connected to the same data center; DCI link should be used only when absolutely
necessary.
Optimal outbound traffic flow: Outbound traffic must take the shortest path to the Internet;
as above, DCI link should be used only when absolutely necessary.
No blackholing: A single path failure (one or both Internet links on a single site, or one or
more DCI links) should not cause traffic blackholing.
FAILURE SCENARIOS
This document describes a network that is designed to survive the following failures:
SOLUTION OVERVIEW
We can meet all the design requirements by redesigning the Internet Edge layer of the corporate
network to resemble a traditional Internet Service Provider design 20.
In our network, upstream links and site subnets connect to the same edge routers.
The missing component in the current Internet Edge layer is the WAN backbone. Assuming we have
to rely on the existing WAN connectivity between DC1 and DC2, the DCI routers (D11 through D22)
have to become part of the Internet Edge layer (outside) WAN backbone as shown in Figure 4-4.
20 For more details, watch the Redundant Data Center Internet Connectivity video, http://demo.ipspace.net/get/X1%20Redundant%20Data%20Center%20Internet%20Connectivity.mp4
The outside WAN backbone can be built with any one of these technologies:
Point-to-point Ethernet links or stretched VLANs between Internet edge routers. This solution
requires layer-2 connectivity between the sites and is thus the least desirable option;
GRE tunnels between Internet edge routers;
Virtual device contexts on DCI routers to split them into multiple independent devices (example:
Nexus 7000).
WAN backbone implemented in a virtual device context on Nexus 7000 would require
dedicated physical interfaces (additional inter-DC WAN links).
Regardless of the technology used to implement the WAN backbone, all the proposed solutions fall into
two major categories:
Layer-2 solutions, where the DCI routers provide layer-2 connectivity between Internet edge
routers, either in form of point-to-point links between Internet edge routers or site-to-site VLAN
extension.
GRE tunnels between Internet edge routers are just a special case of layer-2 solution that
does not involve DCI routers at all.
Layer-3 solutions, where the DCI routers participate in the WAN backbone IP forwarding.
All layer-2 tunneling technologies introduce additional encapsulation overhead and thus
require increased MTU on the path between Internet edge routers (GRE tunnels) or DCI
routers (all other technologies), as one cannot rely on proper operation of Path MTU
Discovery (PMTUD) across the public Internet21.
21 The never-ending story of IP fragmentation, http://stack.nil.com/ipcorner/IP_Fragmentation/
Consider the potential failure scenarios in the simple topology from Figure 4-5 where the fully
redundant DCI backbone implements EoMPLS point-to-point links between Internet edge routers.
Failure of DCI link #1 (or DCI routers D11 or D21) causes the E1-to-E3 virtual link to fail;
Subsequent failure of E2 or E4 results in a total failure of the WAN backbone, although there are
still alternate paths that could be used if the point-to-point links between the Internet edge routers
weren't so tightly coupled with the physical DCI components.
Site-to-site VLAN extensions are slightly better in that respect; well-designed fully redundant
stretched VLANs (Figure 4-6) can decouple DCI failures from Internet edge failures.
You could achieve the proper decoupling with a single WAN backbone VLAN that follows these rules:
The VLAN connecting Internet edge routers MUST be connected to all physical DCI devices
(preventing a single DCI device failure from impacting the inter-site VLAN connectivity);
Redundant independent DCI devices MUST use a rapidly converging protocol (example: rapid
spanning tree) to elect the primary forwarding port connected to the WAN backbone VLAN. You
could use multi-chassis link aggregation groups when DCI devices appear as a single logical
device (example: VSS, IRF, Virtual Chassis).
Every DCI router MUST be able to use all DCI links to forward the WAN backbone VLAN traffic, or
shut down the VLAN-facing port when its DCI WAN link fails.
Figure 4-7: Two non-redundant stretched VLANs provide sufficient end-to-end redundancy
The IP routing design of the WAN backbone should thus follow the well-known best practices used
by Internet Service Provider networks (a partial configuration sketch follows the list):
24 BGP Essentials: Configuring Internal BGP Sessions, http://blog.ioshints.info/2008/01/bgp-essentials-configuring-internal-bgp.html
Every Internet edge router should advertise directly connected public LAN prefix (on Cisco IOS,
use network statement, not route redistribution);
Do not configure static routes to null 0 on the Internet edge routers; they should announce the
public LAN prefix only when they can reach it28.
Use BGP communities to tag the locally advertised BGP prefixes as belonging to DC1 or DC2 (on
Cisco IOS, use network statement with route-map option29).
Use outbound AS-path filters on EBGP sessions with upstream ISPs to prevent transit route
leakage across your autonomous system30.
Use AS-path prepending31, Multi-Exit Discriminator (MED) or ISP-defined BGP communities for
optimal inbound traffic flow (traffic destined for IP addresses in public LAN of DC1 should arrive
through DC1s uplinks if at all possible). Example: E3 and E4 should advertise prefixes from DC1
with multiple copies of Customers public AS number in the AS path.
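A partial sketch of the prefix origination and AS-path prepending guidelines (prefixes, AS numbers,
community values, and neighbor addresses are assumptions):
! DC1 edge router: originate the DC1 public prefix and tag it with a community
route-map DC1-ORIGIN permit 10
 set community 64500:101
!
router bgp 64500
 network 192.0.2.0 mask 255.255.255.0 route-map DC1-ORIGIN
 ! send the community to IBGP peers so DC2 edge routers can match on it
 neighbor 10.1.1.3 send-community
!
! DC2 edge router: prepend DC1 prefixes when advertising them to its upstream ISP
ip community-list standard DC1-PREFIX permit 64500:101
route-map ISP-OUT permit 10
 match community DC1-PREFIX
 set as-path prepend 64500 64500
route-map ISP-OUT permit 20
!
router bgp 64500
 neighbor 203.0.113.1 route-map ISP-OUT out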
You can implement the outbound path selection using one of these designs:
Full Internet routing with preferred local exit. All Internet edge routers propagate EBGP routes
received from upstream ISPs to all IBGP peers (you might want to reduce the number of EBGP
routes to speed up the convergence32). Local preference is reduced on updates received from IBGP
peers residing in another data center (on Cisco IOS use neighbor route-map).
The same BGP path is received from EBGP peer and local IBGP peer: EBGP path is preferred over
IBGP path;
Different BGP paths to the same IP prefix are received from EBGP peer and local IBGP peer. Both
paths have the same BGP local preference; other BGP attributes (starting with AS path length)
are used to select the best path;
The same BGP path is received from local IBGP peer and an IBGP peer from another data center.
The path received from local IBGP peer has higher local preference and is thus always preferred;
In all cases, the outbound traffic uses a local uplink assuming at least one of the upstream ISPs
advertised a BGP path to the destination IP address.
Default routing between data centers. Redundant BGP paths received from EBGP and IBGP
peers increase the memory requirements of Internet edge routers and slow down the convergence
process. You might want to reduce the BGP table sizes on Internet edge routers by replacing full
IBGP routing exchange between data centers with default routing:
Internet edge routers in the same data center exchange all BGP prefixes received from EBGP
peers to ensure optimal outbound traffic flow based on information received from upstream ISPs;
Default route and locally advertised prefixes (BGP prefixes with an empty AS path) are exchanged
between IBGP peers residing in different data centers.
With this routing policy, Internet edge routers always use the local uplinks for outbound traffic and
fall back to default route received from the other data center only when there is no local path to the
destination IP address.
Two-way default routing between data centers might result in packet forwarding loops. If at
all possible, request default route origination from the upstream ISPs and propagate only
ISP-generated default routes to IBGP peers.
There are two well-known technologies that can be used to implement multiple independent layer-3
forwarding domains (and thus security zones) on a single device: virtual device contexts (VDC) and
virtual routing and forwarding tables (VRFs).
Virtual Device Contexts would be an ideal solution if they could share the same physical link.
Unfortunately, no current VDC implementation supports this requirement; device contexts are
associated with physical interfaces, not VLANs. VDCs would thus require additional DCI links (or
lambdas in a WDM-based DCI infrastructure) and are clearly an inappropriate solution in most
environments.
Figure 4-12: Virtual Routing and Forwarding tables: shared management, shared physical interfaces
The DCI routers in the WAN backbone should thus either participate in IBGP mesh (acting as BGP
route reflectors to reduce the IBGP mesh size, see Figure 4-13) or provide MPLS transport between
Internet edge routers as shown in Figure 4-14.
Both designs are easy to implement on dedicated high-end routers or within a separate VDC on a
Nexus 7000; the VRF-based implementations are way more complex:
Many MPLS- or VRF-enabled devices do not support IBGP sessions within a VRF; only EBGP
sessions are allowed between PE- and CE-routers (Junos supports IBGP in VRF in recent software
releases). In an MPLS/VPN deployment, the DCI routers would have to be in a private AS
inserted between two disjoint parts of the existing public AS. Multi-VRF or EVN deployment
would be even worse: each DCI router would have to be in its own autonomous system.
MPLS transport within a VRF requires support for Carrier's Carrier (CsC) architecture; at the very
minimum, the DCI routers should be able to run Label Distribution Protocol (LDP) within a VRF.
The IGP part of this design is trivial: an IGP protocol with VRF support (OSPF is preferred over EIGRP
due to its default routing features) is run between Internet edge routers and VRF instances of DCI
routers. Simple VLAN-based VRFs (not MPLS/VPN) or Easy Virtual Networking (EVN) is used between
DCI routers to implement end-to-end WAN backbone connectivity.
The BGP part of this design is almost identical to the BGP Routing design with a few minor
modifications:
IBGP sessions between data centers could also be replaced with local prefix origination: all
Internet edge routers in both data centers would advertise public LAN prefixes from all data
centers (using route-map or similar mechanism to set BGP communities), some of them
based on connected interfaces, others based on IGP information.
Outbound traffic forwarding in this design is based on default routes advertised by all Internet edge
routers. An Internet edge router should advertise a default route only when its Internet uplink (and
corresponding EBGP session) is operational to prevent suboptimal traffic flow or blackholing.
The following guidelines could be used to implement this design with OSPF on Cisco IOS:
Internet edge routers that receive a default route from upstream ISPs through an EBGP session
should be configured with default-information originate33 (the default route is originated only when
another non-OSPF default route is already present in the routing table);
Internet edge routers participating in the default-free zone (full Internet routing with no default
routes) should advertise default routes when they receive at least some well-known prefixes
(example: root DNS servers) from the upstream ISP34. Use the default-information originate always
route-map configuration command and use the route map to match well-known prefixes.
Use external type-1 default routes to ensure DCI routers prefer locally-originated default routes
(even when they have unequal costs to facilitate primary/backup exit points) over default routes
advertised from edge routers in other data centers.
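The guidelines above might translate into Cisco IOS configuration roughly as follows (metric values, the route-map name and the stand-in well-known prefix are hypothetical):

! Router receiving a default route from the upstream ISP over EBGP
router ospf 1
 default-information originate metric 100 metric-type 1
!
! Router in the default-free zone: originate the default route only while a
! well-known prefix (illustrative stand-in below) is present in the routing table
router ospf 1
 default-information originate always metric 100 metric-type 1 route-map HAVE-WELL-KNOWN
!
ip prefix-list WELL-KNOWN permit 198.41.0.0/24
route-map HAVE-WELL-KNOWN permit 10
 match ip address prefix-list WELL-KNOWN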
In most cases the external WAN backbone has to share WAN links and physical devices with internal
data center interconnect links, while still maintaining strict separation of security zones and
forwarding planes.
The external WAN backbone could be implemented as either layer-2 backbone (using layer-2
tunneling mechanisms on DCI routers) or layer-3 backbone (with DCI routers participating in WAN
backbone IP forwarding). Numerous technologies could be used to implement the external WAN
backbone with the following ones being the least complex from the standpoint of a typical enterprise
data center networking engineer:
IN THIS CHAPTER:
DESIGN REQUIREMENTS
FAILURE SCENARIOS
SOLUTION OVERVIEW
DETAILED SOLUTION OSPF
FAILURE ANALYSIS
DETAILED SOLUTION INTERNET ROUTING WITH BGP
PREFIX ORIGINATION
NEXT-HOP PROCESSING
Layer-2 DCI was used to avoid IP renumbering in VM mobility and disaster recovery scenarios.
Occasional live migration between data centers is used during maintenance and hardware upgrade
operations.
We'll assume none of the components or external links are redundant (see Figure 5-3), but it's
relatively simple to extend a layer-3 design with redundant components.
Redundant DCI switches could also be merged into a single logical device using technologies
like VSS (Cisco), IRF (HP) or Virtual Chassis (Juniper).
Furthermore, DCI link failure might result in a split-brain scenario where both sites advertise the
same IP subnet, resulting in misrouted (and thus black-holed) traffic37.
External routing between the two data centers and both Internet and enterprise WAN (MPLS/VPN)
network should thus ensure that:
Every data center subnet remains reachable after a single link or device failure;
DCI link failure does not result in a split-brain scenario with traffic for the same subnet being
sent to both data centers.
Backup data center (for a particular VLAN/subnet) advertises the subnet after the primary data
center failure.
Single device or link failure anywhere in the data center network edge;
Total external connectivity failure in one data center;
Total DCI link failure;
Total data center failure.
Even though the network design provides automatic failover mechanism on data center
failure, you might still need manual procedures to move active storage units or to migrate
VM workloads following a total data center failure.
Stateful devices (firewalls, load balancers) are not included in this design. Each stateful device
partitions the data center network in two (or more) independent components. You can apply the
mechanisms described in this document to the individual networks; migration of stateful devices
following a data center failure is out of scope.
SOLUTION OVERVIEW
External data center routing seems to be a simple primary/backup design scenario (more details in
Figure 5-4):
Primary data center advertises a subnet with low cost (when using BGP, cost might be AS-path
length or multi-exit discriminator attribute);
The primary/backup approach based on routing protocol costs works reasonably well in the enterprise
WAN network where ACME controls the routing policies, but fails in a generic Internet environment,
where ACME cannot control the routing policies implemented by upstream ISPs, and where every ISP
might use its own (sometimes even undocumented) routing policy.
For example, an upstream ISP might strictly prefer prefixes received from its customers over
prefixes received from other autonomous systems (peers or upstream ISPs); such an ISP would set
a higher local preference on customer prefixes, making it impossible to influence its path selection
with AS-path prepending or multi-exit discriminators.
The only reliable mechanism to implement primary/backup path selection that does not rely on ISP
routing policies is conditional route advertisement: BGP routers in the backup data center should not
advertise prefixes from the primary data center unless the primary data center fails or all its WAN
connections fail.
To further complicate the design, BGP routers in the backup data center (for a specific subnet) shall
not advertise the prefixes currently active in the primary data center when the DCI link fails.
Data center edge routers thus have to employ mechanisms similar to those used by data center
switches with a shared control plane (ex: Cisco's VSS or HP's IRF): they have to detect a split-brain
scenario by exchanging keepalive messages across the external network. When the backup router
(for a particular subnet) cannot reach the primary router through the DCI link but still reaches it
across the external network, it must enter isolation state (stop advertising the backup prefix).
You can implement the above requirements using neighbor advertise-map functionality available
in Cisco IOS in combination with IP SLA-generated routes (to test external reachability of the other
data center), with Embedded Event Manager (EEM) triggers, or with judicious use of parallel IBGP
sessions (described in the Detailed Solution Internet Routing With BGP section).
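A hedged sketch of the advertise-map approach on Cisco IOS might look like this (all prefixes, peer addresses and AS numbers are hypothetical; the tracking prefix is assumed to be advertised only while the primary data center is reachable):

router bgp 64500
 neighbor 192.0.2.1 remote-as 64496
 ! Advertise the backup prefix only when the tracking prefix is absent from the BGP table
 neighbor 192.0.2.1 advertise-map BACKUP-PREFIX non-exist-map PRIMARY-ALIVE
!
ip prefix-list BACKUP permit 203.0.113.0/24
ip prefix-list TRACK permit 198.51.100.0/24
!
route-map BACKUP-PREFIX permit 10
 match ip address prefix-list BACKUP
route-map PRIMARY-ALIVE permit 10
 match ip address prefix-list TRACK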
OSPF prefers routes in the following order:
Intra-area routes;
Inter-area routes;
External OSPF routes.
External Type-2 OSPF routes are the only type of OSPF routes where the internal cost (OSPF cost
toward the advertising router) does not affect the route selection process.
It's thus advisable to advertise data center subnets as E2 OSPF routes. The external route cost should be
set to a low value (ex: 100) on data center routers advertising the primary subnet and to a high value
(ex: 1000) on data center routers advertising the backup subnet (Figure 5-5):
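Assuming route redistribution is used to originate the data center subnets, the primary data center routers might use a Cisco IOS configuration similar to this sketch (the subnet, names and metric values are illustrative):

router ospf 1
 redistribute connected subnets route-map DC-SUBNETS
!
ip prefix-list STRETCHED-SUBNET permit 10.1.1.0/24
!
route-map DC-SUBNETS permit 10
 match ip address prefix-list STRETCHED-SUBNET
 ! E2 route with low external cost; the backup data center would use "set metric 1000"
 set metric 100
 set metric-type type-2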
FAILURE ANALYSIS
Consider the following failure scenarios (assuming DC-A is the primary data center and DC-B the
backup one):
DC-A WAN link failure: DC-B is still advertising the subnet into enterprise WAN (although with
higher cost). Traffic from DC-A flows across the DCI link, which might be suboptimal.
Performance problems might trigger evacuation of DC-A, but applications running in DC-A
remain reachable throughout the failure period.
MIGRATION SCENARIOS
Use the following procedures when performing a controlled migration from DC-A to DC-B:
DC evacuation (primary to backup). Migrate the VMs, decrease the default-metric on the
DC-B routers (making DC-B the primary data center for the shared subnet). The reduced cost of the
prefix advertised by DC-B will cause routers in the enterprise WAN network to prefer the path
through DC-B. Shut down DC-A.
DC restoration (backup to primary). Connect DC-A to the WAN networks (the cost of prefixes
redistributed into OSPF in DC-A is still higher than the OSPF cost advertised from DC-B). Migrate
the VMs, increase the default-metric on routers in DC-B. The prefix advertised by DC-A will take
over.
Regular IBGP sessions are established between data center edge routers (potentially in
combination with external WAN backbone described in the Redundant Data Center Internet
Connectivity document38). These IBGP sessions could be configured between loopback or
internal LAN interfaces39;
Additional IBGP sessions are established between external (ISP-assigned) IP addresses of data
center edge routers. The endpoints of these IBGP sessions shall not be advertised in internal
routing protocols to ensure the IBGP sessions always traverse the public Internet.
IBGP sessions established across the public Internet should be encrypted. If you cannot
configure an IPsec session between the BGP routers, use MD5 authentication to prevent
man-in-the-middle or denial-of-service attacks.
BGP prefixes advertised by routers in primary data center have default local preference (100);
BGP prefixes advertised by routers in backup data center have lower local preference (50). The
routers advertising backup prefixes (with a network or redistribute router configuration
command) shall also set the BGP weight to zero to make locally-originated prefixes comparable
to other IBGP prefixes.
Furthermore, prefixes with default local preference (100) shall get higher local preference (200)
when received over Internet-traversing IBGP session (see Figure 5-7):
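The prefix origination and local-preference policy could be sketched on Cisco IOS roughly as follows (addresses, the AS number, subnet and route-map names are hypothetical):

router bgp 64500
 ! Backup copy of the stretched subnet: local preference 50, weight 0
 network 10.1.1.0 mask 255.255.255.0 route-map BACKUP-ORIGIN
 ! DCI-traversing IBGP session (internal addresses)
 neighbor 10.0.2.2 remote-as 64500
 ! Internet-traversing IBGP session (ISP-assigned addresses)
 neighbor 192.0.2.6 remote-as 64500
 neighbor 192.0.2.6 route-map INET-IBGP-IN in
!
route-map BACKUP-ORIGIN permit 10
 set local-preference 50
 set weight 0
!
! Prefixes arriving with the default local preference (100) get local preference 200
route-map INET-IBGP-IN permit 10
 match local-preference 100
 set local-preference 200
route-map INET-IBGP-IN permit 20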
NEXT-HOP PROCESSING
IBGP sessions between ISP-assigned IP addresses shall not influence actual packet forwarding. The
BGP next hop advertised over these sessions must be identical to the BGP next hop advertised over
DCI-traversing IBGP sessions.
Default BGP next hop processing might set BGP next hop for locally-originated directly connected
prefixes to the local IP address of the IBGP session (BGP next hop for routes redistributed into BGP
from other routing protocols is usually set to the next-hop provided by the source routing protocol).
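On Cisco IOS, one way to keep the next hop consistent is to set it explicitly on updates sent over the Internet-traversing session; the loopback address below is a hypothetical stand-in for the next hop used on the DCI-traversing session:

router bgp 64500
 neighbor 192.0.2.6 remote-as 64500
 neighbor 192.0.2.6 route-map INET-IBGP-OUT out
!
route-map INET-IBGP-OUT permit 10
 ! Advertise the same (internal) next hop used on the DCI-traversing IBGP session
 set ip next-hop 10.0.1.1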
After BGP converges, the prefixes originated in the backup data center (for a specific subnet) should
no longer be visible in the BGP tables of routers in the primary data center; routers in backup data
center should revoke them due to their lower local preference.
BGP routers in the backup data center should have three copies of the primary subnet in their BGP
table:
Locally-originated prefix. The router is obviously the best source of routing information, either
because it's the primary router for the subnet or because the primary data center cannot be reached
through either the DCI link or the Internet.
IBGP prefix with local preference 100. A prefix with this local preference can only be received
from the primary data center (for the prefix) over the DCI-traversing IBGP session. Lack of a better
path (with local preference 200) indicates failure of the Internet-traversing IBGP session, probably
caused by an Internet link failure in the primary data center. The prefix should be advertised with a
prepended AS path40.
IBGP prefix with local preference 200. The prefix was received from the primary data center (for
the prefix) through the Internet-traversing IBGP session, indicating a primary data center with fully
operational Internet connectivity. The prefix must not be advertised to EBGP peers as it's already
advertised by the primary data center BGP routers.
BGP routers in the backup data center should thus prefer prefixes received over the Internet-traversing
IBGP session. As these prefixes have the same next hop as prefixes received over the DCI-traversing
IBGP session (internal LAN or loopback interface of data center edge routers), the actual packet
forwarding is not changed.
40
BGP Essentials: AS Path Prepending
http://blog.ioshints.info/2008/02/bgp-essentials-as-path-prepending.html
FAILURE ANALYSIS
Assume DC-A is the primary data center for a given prefix.
DC-A Internet link failure: Internet-traversing IBGP session fails. BGP routers in DC-B start
advertising the prefix from DC-A (its local preference has dropped to 100 due to IBGP session
failure).
DC-A BGP router failure: BGP routers in DC-B lose all prefixes from DC-A and start
advertising locally-originated prefixes for the shared subnet.
DCI failure: Internet-traversing IBGP session is still operational. BGP routers in DC-B do not
advertise prefixes from DC-A. No traffic is attracted to DC-B.
Total DC-A failure: All IBGP sessions between DC-A and DC-B are lost. BGP routers in DC-B
advertise local prefixes, attracting user traffic toward servers started in DC-B during the disaster
recovery procedures.
End-to-end Internet connectivity failure: Internet-traversing IBGP session fails. BGP routers
in DC-B start advertising prefixes received over DCI-traversing IBGP session with prepended
AS-path. Traffic for subnet currently belonging to DC-A might be received by DC-B but will still
be delivered to the destination host as long as the DCI link is operational.
41
BGP Essentials: BGP Communities
http://blog.ioshints.info/2008/02/bgp-essentials-bgp-communities.html
If your routers support Bidirectional Forwarding Detection (BFD) 42 over IBGP sessions, use it
to speed up the convergence process.
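On Cisco IOS, BFD-triggered BGP session teardown might be configured along these lines (interface, timers and the peer address are hypothetical; the sketch assumes a directly connected IBGP peer, while multihop sessions need additional BFD multihop support):

interface GigabitEthernet0/1
 bfd interval 300 min_rx 300 multiplier 3
!
router bgp 64500
 neighbor 10.0.2.2 remote-as 64500
 ! Tear down the IBGP session as soon as BFD declares the peer down
 neighbor 10.0.2.2 fall-over bfd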
MIGRATION SCENARIOS
Use the following procedures when performing a controlled migration from DC-A to DC-B:
DC evacuation (primary to backup). Migrate the VMs, decrease the default local preference
on DC-A routers to 40. Even though these prefixes will be received over Internet-traversing
IBGP session by BGP routers in DC-B, their local preference will not be increased. Prefixes
originated by DC-B will thus become the best prefixes and will be advertised by both data
centers. Complete the evacuation by shutting down EBGP sessions in DC-A. Shut down DC-A.
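The local-preference change on the DC-A routers could be as simple as the following hypothetical Cisco IOS snippet; already-received paths might need an inbound soft reset before the new value takes effect:

router bgp 64500
 ! Lower the default local preference during DC-A evacuation
 bgp default local-preference 40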
42
Bidirectional Forwarding Detection
http://wiki.nil.com/Bidirectional_Forwarding_Detection_(BFD)
CONCLUSIONS
Optimal external routing that avoids split-brain scenarios is relatively easy to implement in WAN
networks with consistent routing policy: advertise each subnet with low cost (or shorter AS-path or
lower value of multi-exit discriminator in BGP-based networks) from the primary data center (for
that subnet) and with higher cost from the backup data center.
In well-designed active-active data center deployments each data center acts as the
primary data center for the subset of prefixes used by applications running in that data
center.
Optimal external routing toward the Internet is harder to implement due to potentially inconsistent
routing policies used by individual ISPs. The only solution is tightly controlled conditional route
advertisement: routers in backup data center (for a specific prefix) should not advertise the prefix
as long as the primary data center retains its Internet connectivity. This requirement could be
implemented with numerous scripting mechanisms available in modern routers; this document
presented a cleaner solution that relies exclusively on standard BGP mechanisms available in most
modern BGP implementations.
IN THIS CHAPTER:
They approached numerous vendors trying to figure out what the new network should look like, and
got thoroughly confused by all the data center fabric offerings, from FabricPath (Cisco) and VCS
Fabric (Brocade) to Virtual Chassis Fabric (Juniper), QFabric (Juniper) and more traditional leaf-and-
spine architectures (Arista). Should they build a layer-2 fabric, a layer-3 fabric or a leaf-and-spine
fabric?
Long-term average and peak statistics of existing virtualized or physical workload behavior are
usually a good initial estimate of the target workload. The Customer has collected these statistics
using VMware vCenter Operations Manager:
Parameter   Value
RAM         10 TB
IOPS        50,000
Define the services offered by the cloud. Major decision points include IaaS versus PaaS and
simple hosting versus support for complex application stacks 43.
Select the orchestration system (OpenStack, CloudStack, vCloud Director) that will allow the
customers to deploy these services;
Select the hypervisor supported by the selected orchestration system that has the desired
features (example: high-availability);
Select optimal server hardware based on workload requirements;
Select the network services implementation (physical or virtual firewalls and load balancers);
Select the virtual networking implementation (VLANs or overlay virtual networks);
43
Does it make sense to build new clouds with overlay networks?
http://blog.ipspace.net/2013/12/does-it-make-sense-to-build-new-clouds.html
Every step in the above process requires a separate design; those designs are not covered
in this document, as we only need their results in the network infrastructure design phase.
DESIGN DECISIONS
The Customer's private cloud infrastructure will use vCloud Automation Center and vSphere
hypervisors.
The server team decided to use the Nutanix NX-3050 servers with the following specifications:
Parameter   Value
RAM         256 GB
IOPS        6,000
Switch vendors use marketing math: they count ingress and egress bandwidth on every
switch port. The Nutanix server farm would have 2 Tbps of total network bandwidth using
that approach.
The private cloud will use a combination of physical (external firewall) and virtual (per-application
firewalls and load balancers) network services44. The physical firewall services will be implemented
on two devices in active/backup configuration (two 10GE ports each); virtual services will be run on
a separate cluster45 of four hypervisor hosts, for a total of 54 servers.
The number of network segments in the private cloud will be relatively low. VLANs will be used to
implement the network segments; the network infrastructure thus has to provide layer-2
connectivity between any two endpoints.
This decision effectively turns the whole private cloud infrastructure into a single failure
domain. Overlay virtual networks would be a more stable alternative (from the network
perspective), but are not considered a mature enough technology by more conservative cloud
infrastructure designers.
Most modern data center switches offer wire-speed layer-3 switching. The fabric will thus
offer layer-2+3 switching even though the network design requirements don't include layer-
3 switching.
The required infrastructure can be implemented with just two 10GE ToR switches. Most data center
switching vendors (Arista, Brocade, Cisco, Dell Force10, HP, Juniper) offer switches with 48 10GE
ports and four 40GE ports that can each be split into four 10GE ports (for a total of 64 10GE ports). Two
40GE ports on each switch would be used as 10GE ports (for a total of 56 10GE ports per switch),
the remaining two 40GE ports would be used for an inter-switch link.
Alternatively, Cisco Nexus 5672 has 48 10GE ports and 6 40GE ports, for a total of 72 10GE ports
(giving you a considerable safety margin); Arista 7050SX-128 has 96 10GE and 8 40GE ports.
Every data center switching vendor can implement ECMP layer-2 fabric with no blocked links using
multi-chassis link aggregation (Arista: MLAG, Cisco: vPC, HP: IRF, Juniper: MC-LAG).
Some vendors offer layer-2 fabric solutions that provide optimal end-to-end forwarding across larger
fabrics (Cisco FabricPath, Brocade VCS Fabric, HP TRILL), other vendors allow you to merge multiple
switches into a single management-plane entity (HP IRF, Juniper Virtual Chassis, Dell Force10
stacking). In any case, it's not hard to implement an end-to-end layer-2 fabric with ~100 10GE ports.
MANAGEMENT NETWORK
A mission-critical data center infrastructure should have a dedicated out-of-band management
network disconnected from the user and storage data planes. Most network devices and high-end
servers have dedicated management ports that can be used to connect these devices to a separate
management infrastructure.
The management network does not have high bandwidth requirements (most devices have Fast
Ethernet or Gigabit Ethernet management ports); you can build it very effectively with a pair of GE
switches.
Do not use existing ToR switches or fabric extenders (FEX) connected to existing ToR
switches to build the management network.
CONCLUSIONS
One cannot design an optimal network infrastructure without a comprehensive set of input
requirements. When designing a networking infrastructure for a private or public cloud these
requirements include:
Most reasonably sized private cloud deployments require a few tens of high-end physical servers and
associated storage, either distributed or in the form of storage arrays. You can implement the network
infrastructure meeting these requirements with two ToR switches having between 64 and 128
10GE ports.
IN THIS CHAPTER:
DESIGN REQUIREMENTS
VLAN-BASED VIRTUAL NETWORKS
REDUNDANT SERVER CONNECTIVITY TO LAYER-2 FABRIC
OPTION 1: NON-LAG SERVER CONNECTIVITY
OPTION 2: SERVER-TO-NETWORK LAG
OVERLAY VIRTUAL NETWORKS
OPTION 1: NON-LAG SERVER CONNECTIVITY
OPTION 2: LAYER-2 FABRIC
OPTION 3: SERVER-TO-NETWORK LAG
CONCLUSIONS
Regardless of the virtualization details, the server team wants to implement redundant server-to-
network connectivity: each server will be connected to two ToR switches (see Figure 7-1).
The networking team has to build the network infrastructure before having all the relevant input
data; the infrastructure should thus be as flexible as possible.
Overlay virtual networks may be used in the private cloud, in which case a large layer-2 failure
domain is not an optimal solution. The leaf-and-spine fabric SHOULD also support layer-3 connectivity
with a separate subnet assigned to each ToR switch (or a redundant pair of ToR switches).
The networking team can build a layer-2-only leaf-and-spine fabric with two core switches using
multi-chassis link aggregation (MLAG see Figure 7-2) or they could deploy a layer-2 multipath
technology like FabricPath, VCS Fabric or TRILL as shown in Figure 7-3.
Only the network edge switches see MAC addresses of individual hosts in environments
using Provider Backbone Bridging (PBB) or TRILL/FabricPath-based fabrics.
The only caveat of non-LAG server-to-network connectivity is suboptimal traffic flow. Let's consider
two hypervisor hosts connected to the same pair of ToR switches as shown in Figure 7-6.
Even though the two hypervisors could communicate directly, the traffic between two VMs might
have to go all the way through the spine switches (see Figure 7-7) due to VM-to-uplink pinning,
which presents a VM MAC address on a single server uplink.
Conclusion: If the majority of the expected traffic flows between virtual machines and the outside
world (North-South traffic), non-LAG server connectivity is ideal. If the majority of the traffic flows
between virtual machines (East-West traffic) then the non-LAG design is clearly suboptimal unless
the chance of VMs residing on co-located hypervisors is exceedingly small (example: large cloud
with tens or even hundreds of ToR switches).
Switches in an MLAG group try to keep traffic arriving on orphan ports and destined to other orphan
ports within the MLAG group. Such traffic thus traverses the intra-stack (or peer) links instead of
leaf-and-spine links46 as shown in Figure 7-8.
46
vSwitch in MLAG environments
http://blog.ipspace.net/2011/01/vswitch-in-multi-chassis-link.html
Conclusion: Do not use MLAG or switch stacking in environments with non-LAG server-to-network
connectivity.
Most LAG solutions place traffic generated by a single TCP session onto a single uplink,
limiting the TCP session throughput to the bandwidth of a single uplink interface. Dynamic
NIC teaming available in Windows Server 2012 R2 can split a single TCP session into
multiple flowlets and distribute them across all uplinks.
Ethernet LAG was designed to work between a single pair of devices; bundling links connected to
different ToR switches requires Multi-Chassis Link Aggregation (MLAG) support in ToR switches 48.
Static port channel is the only viable alternative when using older hypervisors (example:
vSphere 5.0), but since this option doesn't use a handshake/link monitoring protocol, it's
impossible to detect wiring mistakes or misbehaving physical interfaces. Static port
channels are thus inherently unreliable and should not be used if at all possible.
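On the ToR switch side, the difference between the two options boils down to the port-channel mode, as in this hypothetical Cisco IOS-style snippet (interface names are placeholders):

! Preferred: LACP-negotiated port channel toward the hypervisor host
interface range TenGigabitEthernet1/0/1 - 2
 channel-group 10 mode active
!
! Static port channel (older hypervisors): no LACP, so miswired or
! misbehaving links cannot be detected
interface range TenGigabitEthernet1/0/3 - 4
 channel-group 20 mode on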
Switches participating in an MLAG group (or stack) exchange the MAC addresses received from the
attached devices, and a switch receiving a packet for a destination MAC address reachable over a
LAG link always uses a local member of a LAG link to reach the destination49 (see Figure 7-10).
49
MLAG and hot potato switching
http://blog.ipspace.net/2010/12/multi-chassis-link-aggregation-mlag-and.html
The only drawback of the server-to-network LAG design is the increased complexity introduced by
MLAG groups.
Figure 7-11: Redundant server connectivity requires the same IP subnet on adjacent ToR switches
In most setups the hypervisor associates its IP address with a single MAC address (ARP replies sent
by the hypervisor use a single MAC address), and that address cannot be visible over more than a
single server-to-switch link (or LAG).
Most switches would report MAC address flapping when receiving traffic from the same
source MAC address through multiple independent interfaces.
The traffic toward the hypervisor host (including all encapsulated virtual network traffic) would thus
use a single server-to-switch link (see Figure 7-12).
The traffic sent from a Linux hypervisor host could use multiple uplinks (with a different MAC
address on each active uplink) when the host uses balance-tlb or balance-alb bonding mode (see
Linux Bonding Driver Implementation Details) as shown in Figure 7-13.
Figure 7-14: All ToR switches advertise IP subnets with the same cost
Stackable switches are even worse. While it's possible to advertise an IP subnet shared by two ToR
switches with different metrics to attract the traffic to the primary ToR switch, the same approach
doesn't work with stackable switches, which treat all members of the stack as a single virtual IP
router, as shown in Figure 7-15.
Conclusion: Do not use non-LAG server connectivity in overlay virtual networking environments.
50
See Linux bonding driver documentation for more details
https://www.kernel.org/doc/Documentation/networking/bonding.txt
Balance-alb mode replaces the MAC address in the ARP replies sent by the Linux kernel with one of
the physical interface MAC addresses, effectively assigning different MAC addresses (and thus
uplinks) to IP peers, and thus achieving rudimentary inbound load distribution.
All other bonding modes (balance-rr, balance-xor, 802.3ad) use the same MAC address on multiple
active uplinks and thus require port channel (LAG) configuration on the ToR switch to work properly.
All edge switches participating in a layer-2 fabric would have full MAC address reachability
information and would be able to send the traffic to individual hypervisor hosts over an optimal path
(assuming the fabric links are not blocked by Spanning Tree Protocol) as illustrated in Figure 7-17.
Layer-2 transport fabrics have another interesting property: they allow you to spread the load
evenly across all ToR switches (and leaf-to-spine links) in environments using server uplinks in
primary/backup mode: all you have to do is spread the primary links evenly across all
ToR switches.
Unfortunately a single layer-2 fabric represents a single broadcast and failure domain51; using a
layer-2 fabric in combination with overlay virtual networks (which don't require layer-2 connectivity
between hypervisor hosts) is therefore suboptimal from the resilience perspective.
51
Layer-2 network is a single failure domain
http://blog.ipspace.net/2012/05/layer-2-network-is-single-failure.html
VXLAN and STT encapsulations use source ports in UDP or TCP headers to
increase the packet entropy and the effectiveness of ECMP load balancing. Most other
encapsulation mechanisms use GRE transport, effectively pinning the traffic between a pair
of hypervisors to a single path across the network.
CONCLUSIONS
The most versatile leaf-and-spine fabric design uses dynamic link aggregation between servers and
pairs of ToR switches. This design requires MLAG functionality on ToR switches, which does increase
the overall network complexity, but the benefits far outweigh the complexity increase: the design
works well with layer-2 fabrics (required by VLAN-based virtual networks) or layer-3 fabrics
(recommended for transport fabrics for overlay virtual networks) and usually results in optimal
traffic flow (the only exception being handling of traffic sent toward orphan ports; this traffic might
have to traverse the link between MLAG peers).
You might also use layer-2 fabrics without server-to-network link aggregation for VLAN-based virtual
networks where hypervisors pin VM traffic to one uplink or for small overlay virtual networks when
you're willing to trade the resilience of a layer-3 fabric for the reduced complexity of non-MLAG server
connectivity.
Finally, you SHOULD NOT use non-MLAG server connectivity in layer-3 fabrics or MLAG (or stackable
switches) in layer-2 environments without server-to-switch link aggregation.
IN THIS CHAPTER:
Data center is segmented into several security zones (web servers, application servers,
database servers, supporting infrastructure);
Servers belonging to different applications reside within the same security zone, increasing
the risk of lateral movements in case of web- or application server breach;
Large layer-2 segments are connecting all servers in the same security zone, further
increasing the risk of cross-protocol attack52;
All inter-zone traffic is controlled by a pair of central firewalls, which are becoming
exceedingly difficult to manage;
The central firewalls are also becoming a chokepoint, severely limiting the growth of
ACMEs application infrastructure.
The networking engineers designing next-generation data center for ACME would like to replace the
central firewalls with iptables deployed on application servers, but are reluctant to do so due to
potential security implications.
Satisfy the business-level security requirements of ACME Inc., including potential legal,
regulatory and compliance requirements;
52
Compromised security zone = Game Over
http://blog.ipspace.net/2013/04/compromised-security-zone-game-over-or.html
Effectively, they're looking for a scale-out solution, which will ensure approximately linear growth,
with a minimum amount of state to reduce the complexity and processing requirements.
While designing the overall application security architecture, they could use the following tools:
Packet filters (or access control lists, ACLs) are the bluntest of traffic filtering tools: they match
(and pass or drop) individual packets based on their source and destination network addresses and
transport layer port numbers. They keep no state (making them extremely fast and implementable
in simple hardware) and thus cannot check validity of transport layer sessions or fragmented
packets.
Some packet filters give you the option of permitting or dropping fragments based on network layer
information (source and destination addresses), others either pass or drop all fragments (and
sometimes the behavior is not even configurable).
Packet filters are easy to use in server-only environments, but become harder to maintain when
servers start establishing client sessions to other servers (example: application servers opening
MySQL sessions to database servers).
They are not the right tool in environments where clients establish ad-hoc sessions to random
destination addresses (example: servers opening random sessions to Internet-based web servers).
Packet filters with automatic reverse rules (example: XenServer vSwitch Controller) are
syntactic sugar on top of simple packet filters. Whenever you configure a filtering rule (example:
permit inbound HTTP traffic to a server), a rule matching the reverse traffic is created automatically.
ACLs that allow matches on established TCP sessions (typically matching TCP traffic with ACK or
RST bit set) make it easier to match outbound TCP sessions. In a server-only environment you can
use them to match inbound TCP traffic on specific port numbers and outbound traffic of established
TCP sessions (to prevent simple attempts to establish outbound sessions from hijacked servers); in
a client-only environment you can use them to match return traffic.
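A hypothetical Cisco IOS example of the established keyword in a server-only environment (the server address and port are placeholders):

ip access-list extended TO-SERVER
 ! Allow inbound session establishment to the web server
 permit tcp any host 192.0.2.10 eq 80
ip access-list extended FROM-SERVER
 ! Allow only return traffic of established sessions; blocks outbound session attempts
 permit tcp host 192.0.2.10 eq 80 any established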
Reflexive access lists (Cisco IOS terminology) are the simplest stateful tool in the filtering arsenal.
Whenever a TCP or UDP session is permitted by an ACL, the filtering device adds a 5-tuple matching
the return traffic of that session to the reverse ACL.
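A minimal reflexive ACL sketch on Cisco IOS (interface and list names are hypothetical) could look like this:

ip access-list extended OUTBOUND
 ! Permitted outgoing sessions create temporary reflexive entries
 permit tcp any any reflect CLIENT-SESSIONS
ip access-list extended INBOUND
 ! Incoming traffic is permitted only if it matches a reflexive entry
 evaluate CLIENT-SESSIONS
!
interface GigabitEthernet0/0
 ip access-group OUTBOUND out
 ip access-group INBOUND in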
Reflexive ACLs generate one filtering entry per transport layer session. Not surprisingly, you won't
find them in platforms that do packet forwarding and filtering in hardware; they would quickly
overload the TCAM (or whatever forwarding/filtering hardware the device is using), cause packet
punting to the main CPU53 and reduce the forwarding performance by orders of magnitude.
Even though reflexive ACLs generate per-session entries (and thus block unwanted traffic that might
have been permitted by other less-specific ACLs) they still work on individual packets and thus
cannot reliably detect and drop malicious fragments or overlapping TCP segments.
Transport layer session inspection combines reflexive ACLs with fragment reassembly and
transport-layer validation. It should detect dirty tricks targeting bugs in host TCP/IP stacks like
overlapping fragments or TCP segments.
53
Process, Fast and CEF Switching, and Packet Punting
http://blog.ipspace.net/2013/02/process-fast-and-cef-switching-and.html
Web Application Firewalls (WAF) have to go way beyond ALGs. ALGs try to help applications get
the desired connectivity and thus don't focus on malicious obfuscations. WAFs have to stop the
obfuscators; they have to parse application-layer requests like a real server would to detect injection
attacks55. Needless to say, you won't find full-blown WAF functionality in reasonably priced high-
bandwidth firewalls.
DESIGN ELEMENTS
ACME designers can use numerous design elements to satisfy the security requirements, including:
Scale-out packet filters require a high level of automation: they have to be deployed automatically
from a central orchestration system to ensure consistent configuration and prevent operator
mistakes.
In environments with an extremely high level of trust in server operating system hardening, one
could use iptables on individual servers. In most other environments it's better to deploy the packet
filters outside of the application servers; an intruder breaking into a server and gaining root access
could easily turn off the packet filter.
You could deploy packet filters protecting servers from the outside on first-hop switches (usually
Top-of-Rack or End-of-Row switches), or on hypervisors in a virtualized environment.
Packet filters deployed on hypervisors are a much better alternative: hypervisors are not limited by
the size of packet filtering hardware (TCAM), allowing the security team to write very explicit
application-specific packet filtering rules permitting traffic between individual IP addresses instead of
IP subnets (see also High Speed Multi-Tenant Isolation for more details).
All major hypervisors support packet filters on VM-facing virtual switch interfaces:
vSphere 5.5 and Windows Server 2012 R2 have built-in support for packet filters;
Linux-based hypervisors can use iptables in the hypervisor kernel, achieving the same
results as using iptables in the guest VM in a significantly more secure way;
The implementation details that affect the scalability or performance of VM NIC virtual firewalls vary
greatly between individual products:
The distributed firewall in VMware NSX, Juniper Virtual Gateway57, and Hyper-V firewalls using
filtering functionality of Hyper-V Extensible Switch58 use in-kernel firewalls which offer true
scale-out performance limited only by the number of available CPU resources;
vShield App or Zones uses a single firewall VM per hypervisor host and passes all guest
VM traffic through the firewall VM, capping the server I/O throughput to the throughput of
a single core VM (3-4 Gbps);
Cisco Nexus 1000V sends the first packets of every new session to Cisco Virtual Security
Gateway59, which might be deployed somewhere else in the data center, increasing the
session setup delay. Subsequent packets of the same session are switched in the Nexus
1000V VEM module60 residing in the hypervisor kernel;
You should ask the following questions when comparing individual VM NIC firewall products:
Per-application firewalls (or contexts) significantly reduce the complexity of the firewall rule set;
after all, a single firewall (or firewall context) contains only the rules pertinent to a single
application. It is also easily removed at the application retirement time, automatically reducing the
number of hard-to-audit stale firewall rules.
Virtual firewall appliances had significantly lower performance than their physical counterparts 62. The
situation changed drastically with the introduction of Xeon CPUs (and their AMD equivalents); the
performance of virtual firewalls and load balancers is almost identical to entry-level physical
products63.
Packet filters permitting only well-known TCP and UDP ports combined with hardened operating
systems offer similar protection as stateful firewalls; the real difference between the two is handling
of outgoing sessions (sessions established from clients in a data center to servers outside of the
data center). These sessions are best passed through a central proxy server, which can also provide
application-level payload inspection.
WAN edge packet filters combined with per-server (or VM NIC) packet filters are good
enough for environments with well-hardened servers or low security requirements;
WAN edge packet filters combined with per-application firewalls are an ideal solution for
security-critical applications in high-performance environment;
A high-performance data center might use packet filters in front of most servers and per-
application firewalls in front of critical applications (example: credit card processing).
Environments that require stateful firewalls between data center and external networks
could use a combination of WAN edge firewall and per-server packet filters, or a
combination of WAN edge firewall and per-application firewalls;
In extreme cases one could use three (or more) layers of defense: a WAN edge firewall
performing coarse traffic filtering and HTTP/HTTPS inspection, and another layer of stateful
firewalls or WAFs protecting individual applications combined with per-server protection
(packet filters or firewalls).
Distributed traffic control points (firewalls or packet filters) cannot be configured and
managed with the same tools as a single device. ACME operations team SHOULD use an
orchestration tool that will deploy the traffic filters automatically (most cloud orchestration
platforms and virtual firewall products include tools that can automatically deploy
configuration changes across a large number of traffic control points);
System administrators went through a similar process when they migrated workloads from
mainframe computers to x86-based servers.
Per-application traffic control is much simpler and easier to understand than a centralized
firewall ruleset, but it's impossible to configure and manage tens or hundreds of small
point solutions manually. The firewall (or packet filter) management SHOULD use the
automation, orchestration and management tools the server administrators already use to
manage large number of servers.
Application teams SHOULD become responsible for the whole application stack including
the security products embedded in it. They might not configure the firewalls or packet filters
themselves, but SHOULD own them in the same way they own all other specialized
components in the application stack like databases.
Simple tools like nmap probes deployed outside of and within the data center are good
enough to validate the proper implementation of L3-4 traffic control solutions including
packet filters and firewalls.
IN THIS CHAPTER:
The new private cloud should offer centralized security, quick application deployment capabilities,
and easy integration of existing application stacks that are using a variety of firewalls and load
balancers from numerous vendors.
Most application stacks rely on data stored in internal databases or in the central database server
(resident in the central data center); some applications need access to third-party data reachable
over the Internet or tightly-controlled extranet connected to the private WAN network (see Figure
9-3).
For more details, please read the Designing a Private Cloud Network Infrastructure chapter.
The cloud architecture team decided to virtualize the whole infrastructure, including large bare-
metal servers, each of which will be implemented as a single VM running on a dedicated physical server, and
network services appliances, which will be implemented with open-source or commercial products in
VM format.
IDS devices will be deployed as VMs on dedicated hardware infrastructure to ensure the requisite
high performance; high-speed IDS devices inspecting the traffic to and from the Internet will use
hypervisor bypass capabilities made possible with SR-IOV or similar technologies67,68.
Individual customers (ministries or departments) migrating their workloads into the centralized
private cloud infrastructure could also choose to continue using their existing load balancing
vendors, and simply migrate their own load balancing architecture into a fully virtualized
environment (Bring-Your-Own-Load Balancer approach).
NETWORK-LEVEL FIREWALLS
Most hypervisor- or cloud orchestration products support VM NIC-based packet filtering capabilities,
either in the form of simple access lists or in the form of distributed (semi)stateful firewalls.
The centralized private cloud infrastructure could use these capabilities to offer baseline security to
all tenants. Individual tenants could increase the security of their applications by using firewall
appliances offered by the cloud infrastructure (example: vShield Edge) or their existing firewall
products in VM format.
Yet again, the tenants might decide to use the default DPI/WAF product offered from the cloud
inventory catalog, or bring their own solution in VM format.
Modern Intel and AMD CPUs handle AES encryption in hardware71, resulting in a high-speed
encryption/decryption process as long as the encryption peers negotiate AES as the encryption algorithm.
The RSA algorithm performed during the SSL handshake is still computationally intensive; software
implementations might have performance that is orders of magnitude lower than the performance of
dedicated hardware used in physical appliances.
Total encrypted throughput and number of SSL transactions per second offered by a VM-
based load balancing or firewalling product should clearly be one of the major
considerations during your product selection process if you plan to implement SSL- or VPN
termination on these products.
72
Typical enterprise application deployment process is broken
http://blog.ipspace.net/2013/11/typical-enterprise-application.html
High-volume web sites might use a caching layer, in which case the physical load balancers
send the incoming requests to a set of reverse proxy servers, which further distribute
requests to web servers.
IN THIS CHAPTER:
Individual containers could be implemented with bare-metal servers, virtualized servers or even
independent private clouds (for example, using OpenStack). Multiple logical containers can share the
same physical infrastructure; in that case, each container uses an independent routing domain
(VRF) for complete layer-3 separation.
The Customer wants to implement high-speed traffic control (traffic filtering and/or firewalling)
between individual containers and the shared high-speed backbone. The solutions should be
redundant, support at least 10GE speeds, and be easy to manage and provision through a central
provisioning system.
They want to use the information from the central database to generate the traffic control rules
between individual containers and the layer-3 backbone, and a tool that will automatically push the
traffic control rules into devices connecting containers to the layer-3 backbone whenever the
information in the central database changes.
TCP sessions established from an outside client to a server within a container (example: web
application sessions). Target servers are identified by their IP address (specified in the
orchestration system database) or IP prefix that covers a range of servers;
TCP sessions established from one or more servers within a container to a well-known server in
another container (example: database session between an application server and a database
server). Source and target servers are identified by their IP addresses or IP prefixes;
UDP sessions established from one or more servers within a container to a well-known server in
another container (example: DNS and syslog). Source and target servers are identified by their
IP addresses or IP prefixes.
All applications are identified by their well-known port numbers; traffic passing a container boundary
does not use dynamic TCP or UDP ports73.
Servers within containers are not establishing TCP sessions with third-party servers outside of the
data center. There is no need for UDP communication between clients within the data center and
servers outside of the data center.
73
Are Your Applications Cloud-Friendly?
http://blog.ipspace.net/2013/11/are-your-applications-cloud-friendly.html
The use of stateful firewalls isolating individual containers from the shared backbone is not
mandated by regulatory requirements (example: PCI); the Customer can thus choose to implement
the traffic control rules with stateless filters, assuming that the devices used to implement traffic
filters recognize traffic belonging to an established TCP session (TCP packets with ACK or RST bit
set).
The following table maps the traffic categories listed in the Communication Patterns section into typical
ACL rules implementable on most layer-3 switches:
74
I Dont Need no Stinking Firewall or Do I?
http://blog.ipspace.net/2010/08/i-dont-need-no-stinking-firewall-or-do.html
Session establishment (toward the server): permit tcp any dst-server-ip eq dst-port
Return traffic (from the server):          permit tcp server-ip eq dst-port any established
TCAM size: typical data center top-of-rack (ToR) switches support a limited number of ACL
entries75. A few thousand ACL entries is more than enough when the traffic control rules use IP
prefixes to identify groups of servers; when an automated tool builds traffic control rules based
on IP addresses of individual servers, the number of ACL entries tends to explode due to
the Cartesian product76 of source and destination IP ranges.
Object groups available in some products are usually implemented as a Cartesian product to
speed up the packet lookup process.
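The following hypothetical Cisco IOS object-group ACL illustrates the effect: two source hosts and two destination hosts in a single rule typically expand into four hardware entries.

object-group network WEB-SERVERS
 host 10.1.1.11
 host 10.1.1.12
object-group network DB-SERVERS
 host 10.2.2.21
 host 10.2.2.22
!
ip access-list extended WEB-TO-DB
 ! Expands into the Cartesian product of the two groups (4 entries) in TCAM
 permit tcp object-group WEB-SERVERS object-group DB-SERVERS eq 3306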
75 Nexus 5500 has 1600 ingress and 2048 egress ACL entries
http://www.cisco.com/c/en/us/products/collateral/switches/nexus-5000-series-switches/data_sheet_c78-
618603.html
Arista 7050 supports 4K ingress ACL and 1K egress ACL entries.
http://www.aristanetworks.com/media/system/pdf/Datasheets/7050QX-32_Datasheet.pdf
Arista 7150 supports up to 20K ACL entries
http://www.aristanetworks.com/media/system/pdf/Datasheets/7150S_Datasheet.pdf
76 See http://en.wikipedia.org/wiki/Cartesian_product
Juniper offers a Junos Puppet client, but its current version cannot provision or manage access
control lists77. Arista provides Puppet installation instructions for EOS 78 but does not offer
agent-side code that would provision ACLs.
Standard Linux traffic filters (implemented, for example, with iptables or flow entries in Open
vSwitch) provide a few gigabits of throughput due to the overhead of kernel-based packet
forwarding79. Solutions that rely on additional hardware capabilities of modern network interface
cards (NICs) and poll-based user-mode forwarding easily achieve 10Gbps throughput that satisfies
the Customer's requirements. These solutions include:
79
Custom Stack: It Goes to 11
http://blog.erratasec.com/2013/02/custom-stack-it-goes-to-11.html
The solutions listed above are primarily frameworks, not ready-to-use traffic control
products. Integration- and other professional services are available for most of them.
Inserting new layer-3 appliances between container layer-3 switches and backbone switches
requires readdressing (due to additional subnets being introduced between existing adjacent layer-3
devices) and routing protocol redesign. Additionally, one would need robust routing protocol support
on the x86-based appliances. It's thus much easier to insert the x86-based appliances in the
forwarding path as transparent layer-2 devices.
Transparent appliances inserted in the forwarding path would not change the existing network
addressing or routing protocol configurations. The existing layer-3 switches would continue to run
VLAN-based interfaces to support logical containers that share the same physical infrastructure;
Transparent (bridge-like) traffic forwarding between two physical or VLAN interfaces. All non-IP
traffic should be forwarded transparently to support non-IP protocols (ARP) and any deployment
model (including scenarios where STP BPDUs have to be exchanged between L3 switches);
Ingress and egress IPv4 and IPv6 packet filters on physical or VLAN-based interfaces.
Ideally, the appliance would intercept LLDP packets sent by the switches and generate LLDP hello
messages to indicate its presence in the forwarding path.
Both approaches satisfy the technical requirements (assuming the customer uses DPDK-based OVS
to achieve 10Gbps+ performance); the Customer should thus select the best one based on the
existing environment, familiarity with orchestration tools or OpenFlow controllers, and the amount of
NEC's ProgrammableFlow could support the Customer's OVS deployment model but would
require heavily customized configuration (ProgrammableFlow is an end-to-end fabric-wide
OpenFlow solution) running on a non-mainstream platform (OVS is not one of the common
ProgrammableFlow switching elements).
Existing layer-3 switches MAY be used to implement the packet filters needed to isolate individual
containers, assuming the number of rules in the packet filters does not exceed the hardware capabilities of the
layer-3 switches (number of ingress and egress ACL entries).
The Customer SHOULD consider x86-based appliances that would implement packet filters in
software or NIC hardware. The appliances SHOULD NOT use Linux-kernel-based packet forwarding
(usermode poll-based forwarding results in significantly higher forwarding performance).
x86-based appliances SHOULD use the same configuration management tools that the Customer
uses to manage other Linux servers. Alternatively, the customer MAY consider an OpenFlow-based
solution composed of software (x86-based) OpenFlow switches and a cluster of OpenFlow
controllers.
80
RFC 2119: Key words for use in RFCs to Indicate Requirement Levels
http://tools.ietf.org/html/rfc2119
IN THIS CHAPTER:
Two ToR switches providing intra-rack connectivity and access to the corporate backbone;
Dozens of high-end servers, each server capable of running between 50 and 100 virtual
machines;
Storage elements, either a storage array, server-based storage nodes, or distributed storage
(example: VMware VSAN, Nutanix, Ceph).
Racks in smaller data centers (example: colocation) connect straight to the WAN backbone, racks in
data centers co-resident with significant user community connect to WAN edge routers, and racks in
larger scale-out data centers connect to WAN edge routers or internal data center backbone.
This case study focuses on failure domain analysis and workload mobility challenges. Typical
rack design is described in the Redundant Server-to-Network Connectivity case study, WAN
connectivity aspects in Redundant Data Center Internet Connectivity one, and security
aspects in High-Speed Multi-Tenant Isolation.
ACME Inc. wants each infrastructure rack to be an independent failure domain. Each infrastructure
rack must therefore have totally independent infrastructure and should not rely on critical services,
management or orchestration systems running in other racks.
A failure domain is the area of an infrastructure impacted when a key device or service
experiences problems.
ACME Inc. should therefore (in an ideal scenario) deploy an independent virtualization management
system (example: vCenter) and cloud orchestration system (example: vCloud Automation Center or
CloudStack) in each rack. Operational and licensing considerations might dictate a compromise
where multiple racks use a single virtualization or orchestration system.
A typical cloud orchestration system (example: vCloud Automation Center, CloudStack, OpenStack)
provides multi-tenancy aspects of IaaS service, higher-level abstractions (example: subnets and IP
address management, network services, VM image catalog), and API, CLI and GUI access.
Both systems usually provide non-real-time management-plane functionality and do not interact
with the cloud infrastructure data- or control plane. Failure of one of these systems thus represents
a management-plane failure: existing infrastructure continues to operate, but it's impossible to
add, delete or modify its services (example: start/stop VMs).
Some hypervisor solutions (example: VMware High Availability cluster) provide control-plane
functionality that can continue to operate, adapt to topology changes (example: server or network
failure), and provide uninterrupted service (including VM moves and restarts) without intervention of
a virtualization- or cloud management system. Other solutions might rely on high availability
algorithms implemented in an orchestration system; an orchestration system failure would thus
impact the high-availability functionality, making the orchestration system a mission-critical
component.
Use a server high-availability solution that works independently of the cloud orchestration
system;
Implement automated cloud orchestration system failover procedures;
Periodically test the proper operation of the cloud orchestration system failover.
81 A cloud orchestration system instance might be implemented as a cluster of multiple hosts running
(potentially redundant) cloud orchestration system components.
82
RFC 2119, Key words for use in RFCs to Indicate Requirement Levels
http://tools.ietf.org/html/rfc2119
Recommendation: ACME Inc. SHOULD NOT use a single critical management-, orchestration- or
service instance across multiple data centers: a data center failure or Data Center Interconnect
(DCI) link failure would render one or more dependent data centers inoperable.
Server- and virtualization administrators tend to prefer long-distance virtual subnets over other
approaches due to their perceived simplicity, but the end-to-end intra-subnet bridging paradigm
might introduce undesired coupling across availability zones or even data centers when implemented
with traditional VLAN-based bridging technologies.
There are several technologies that reliably decouple the subnet-level failure domain from infrastructure
availability zones83. Overlay virtual networking (transport of VM MAC frames across routed IP
infrastructure) is the most commonly used one in data center environments.
EoMPLS, VPLS and EVPN provide similar failure isolation functionality in MPLS-based
networks.
Summary: Long-distance virtual subnets in ACME Inc. cloud infrastructure MUST use overlay virtual
networks.
83
Decouple virtual networking from the physical world
http://blog.ipspace.net/2011/12/decouple-virtual-networking-from.html
Providing layer-2 connectivity inside a single rack doesn't increase the failure domain size: the
network within a rack is already a single failure domain. Extending a single VLAN across multiple
racks makes all interconnected racks a single failure domain84.
Recommendation: ACME Inc. MUST use layer-3 connectivity between individual racks and the
corporate backbone.
Using overlay virtual networks, it's easy to provide end-to-end layer-2 connectivity between VMs
without affecting the infrastructure failure domains. Unfortunately, one cannot use the same
approach for disk replication or bare-metal servers.
Virtualizing bare-metal servers in a one-VM-per-host setup solves the server clustering
challenges; storage replication remains a sore spot.
Recommendation: ACME Inc. SHOULD NOT use storage replication products that require end-to-
end layer-2 connectivity.
84
Layer-2 network is a single failure domain
http://blog.ipspace.net/2012/05/layer-2-network-is-single-failure.html
85
See also: Whose failure domain is it?
http://blog.ipspace.net/2014/03/whose-failure-domain-is-it.html
Recommendation: Rate-limiting of VXLAN traffic and broadcast storm control MUST be used when
using VXLAN (or any similar technology) to extend a VLAN across multiple availability zones, to
limit the damage a broadcast storm in one availability zone can cause in other availability
zones.
Live VM mobility across availability zones is harder to achieve and might not make much sense: the
tight coupling between infrastructure elements usually required for live VM mobility often turns the
components participating in the live VM mobility domain into a single failure domain.
Some virtualization vendors might offer a third option: warm VM mobility, where you pause a VM
(saving its memory to a disk file) and resume its operation on another hypervisor.
Cold VM mobility is used in almost every high-availability and disaster recovery solution. VMware
High Availability (and similar solutions from other hypervisor vendors) restarts a VM on another
cluster host after a server failure. VMware's SRM does something similar, but usually in a different
data center. Cold VM mobility is also the only viable technology for VM migration between multiple
cloud orchestration systems (for example, when migrating a VM from a private cloud into a public
cloud).
HOT VM MOBILITY
VMware's vMotion is probably the best-known example of hot VM mobility technology. vMotion
copies the memory pages of a running VM to another hypervisor, repeating the process for pages that
have been modified while the memory was being transferred. After most of the VM memory has been
successfully transferred, vMotion freezes the VM on the source hypervisor, moves its state to the other
hypervisor, and restarts it there.
A hot VM move must not disrupt the existing network connections and must thus preserve the
following network-level state:
The only mechanisms we can use today to meet all these requirements are:
Recommendation: ACME Inc. should keep the hot VM mobility domain as small as possible.
COLD VM MOVE
In a cold VM move, a VM is shut down and restarted on another hypervisor. The MAC address of the
VM could change during the move, as could its IP address, assuming the application running in the VM
uses DNS to advertise its availability.
Recommendation: The new cloud infrastructure built by ACME Inc. SHOULD NOT be used by
poorly written applications that are overly reliant on static IP addresses86.
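To illustrate the DNS-based approach (a minimal sketch added here, not part of the original text; the zone, record name, addresses, and the use of the dnspython library are all assumptions), a recovery script could re-register a restarted VM with a dynamic DNS update:

import dns.query
import dns.update

# Hypothetical zone, record and addresses; a real deployment would also use TSIG
# authentication and take these values from configuration.
update = dns.update.Update("acme.example")
update.replace("app1", 60, "A", "192.0.2.45")  # short TTL, VM's new IP address
dns.query.tcp(update, "192.0.2.1")             # authoritative DNS server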
VMs that rely on static IP addressing might also have a manually configured IP address of the first-hop
router. Networking and virtualization vendors offer solutions that reduce the impact of that bad
practice (first-hop localization, LISP) while significantly increasing the overall network complexity.
86
Are your applications cloud-friendly?
http://blog.ipspace.net/2013/11/are-your-applications-cloud-friendly.html
Hosts that communicated with the VM before the move might try to reach it using its old MAC
address (for example, due to stale ARP caches), requiring end-to-end layer-2 connectivity between the
old and new VM location.
Hyper-V Network Virtualization uses pure layer-3 switching in the hypervisor virtual switch.
VM moves across availability zones are thus theoretically possible as long as the
availability zones use a shared orchestration system.
Contrary to popular belief propagated by some networking and virtualization vendors, disaster
recovery does not require hot VM mobility and the associated long-distance virtual subnets; it's much
simpler to recreate the virtual subnets and restart the workload in a different availability zone using
a dedicated disaster-recovery orchestration tool.
This approach works well even for workloads that require static IP addressing within the
application stack: internal subnets (port groups) using VLANs or VXLAN segments are
recreated within the recovery data center prior to workload deployment.
Another popular scenario that requires hot VM mobility is disaster avoidance: live workload
migration prior to a predicted disaster.
Disaster avoidance between data centers is usually impractical due to bandwidth constraints88. While
it might be used between availability zones within a single data center, that use case is best avoided
due to additional complexity and coupling introduced between availability zones.
Increased latency between application components and traffic trombones89,90 are additional
challenges one must consider when migrating individual components of an application stack.
It's usually simpler to move the whole application stack as a single unit.
For example, VMware vSphere 5.5 (and prior releases) supports vMotion (hot VM mobility) between
hosts that use the same virtual distributed switch (vDS) and are managed by the same
vCenter server. Hot workload migration across availability zones can be implemented only when
those zones use the same vCenter server and the same vDS, resulting in a single management-
plane failure domain (and a single control-plane failure domain when using Cisco Nexus 1000V91).
vSphere 6.0 supports vMotion across distributed switches and even across multiple vCenters,
making it possible to implement hot workload mobility across multiple vCenter domains within the
same cloud orchestration system.
Recommendation: ACME Inc. can use inter-vCenter vMotion to implement hot workload mobility
between those availability zones in the same data center that use a single instance of a cloud
orchestration system.
Each data center within the ACME Inc. private cloud MUST use a separate instance of the cloud
orchestration system to limit the size of the management-plane failure domain. ACME Inc. thus
cannot use high availability or workload mobility implemented within a cloud orchestration system to
move workloads between data centers.
91
What exactly is Nexus 1000V?
http://blog.ipspace.net/2011/06/what-exactly-is-nexus-1000v.html
CONCLUSIONS
Infrastructure building blocks:
ACME Inc. will build its cloud infrastructure with standard rack-size compute/storage/network
elements;
Each infrastructure rack will be an independent data- and control-plane failure domain
(availability zone);
Each infrastructure rack must have totally independent infrastructure and SHOULD NOT rely on
critical services, management or orchestration systems running in other racks.
Network connectivity:
Each rack will implement a high-availability virtualization cluster that operates even when the
cloud orchestration system fails;
Hot VM mobility (example: vMotion) will be used within each rack;
Hot VM mobility MAY be used across racks in the same data center, assuming ACME Inc.
decides to use a single cloud orchestration system instance per data center;
Workload mobility between data centers will be implemented with a dedicated workload
orchestration- or disaster recovery tool.
A single management- or orchestration system instance will control a single rack or at most one
data center to reduce the size of the management-plane failure domain;
Management- and orchestration systems controlling multiple availability zones will have
automated failover/recovery procedures that will be thoroughly tested at regular intervals;
ACME Inc. SHOULD NOT use a single critical management-, orchestration- or service instance
across multiple data centers.
IN THIS CHAPTER:
ACME Inc. is building a private cloud and a disaster recovery site that will eventually serve as a
second active data center. They want to simplify disaster recovery procedures, and to be able to
seamlessly move workloads between the two sites once the second site becomes an active data
center.
ACME's cloud infrastructure design team is trying to find a solution that would allow them to
move quiescent workloads between sites with a minimum amount of manual interaction. They
considered VMware's SRM but found it lacking in the area of network services automation.
Figure 12-2: Typical workload architecture with network services embedded in the application stack
Load balancing and firewalling between application tiers is currently implemented with a central pair
of load balancers and firewalls, with all application-to-client and server-to-server traffic passing
through the physical appliances (non-redundant setup is displayed in Figure 12-4).
INFRASTRUCTURE CHALLENGES
The current data center infrastructure supporting badly-written enterprise applications generated a
number of problems92 that have to be avoided in the upcoming private cloud design:
Physical appliances are a significant chokepoint and would have to be replaced with a larger
model or an alternate solution in the future private cloud;
Current physical appliances support a limited number of virtual contexts. The existing workloads
are thus deployed on shared VLANs (example: web servers of all applications reside in the same
VLAN);
92
Sooner or later someone will pay for the complexity of the kludges you use
http://blog.ipspace.net/2013/09/sooner-or-later-someone-will-pay-for.html
Flexibility. Virtual appliances can be deployed on demand; the only required underlying physical
infrastructure is compute capacity (assuming one has the permanent or temporary licenses needed to
deploy new appliance instances).
Appliance mobility. A virtual appliance is treated like any other virtual machine by server
virtualization and/or cloud orchestration tools. It's as easy (or hard96) to move virtual appliances as
the associated application workload between availability zones, data centers, or even private and
public clouds.
Most physical appliances don't support any virtual networking technology other than VLANs, the
exception being F5 BIG-IP, which supports IP multicast-based VXLAN97.
Virtual appliances run on top of a hypervisor virtual switch and connect to whatever virtual
networking technology is offered by the underlying hypervisor with one or more virtual Ethernet
adapters, as shown in Figure 12-5.
The number of virtual Ethernet interfaces supported by a virtual appliance is often dictated
by hypervisor limitations. For example, vSphere supports up to 10 virtual interfaces per
VM98; KVM has much higher limits.
Configuration management and mobility. Virtual appliances are treated like any other virtual
server. Their configuration is stored on their virtual disk, and when a disaster recovery solution
replicates virtual disk data to an alternate location, the appliance configuration automatically
becomes available for immediate use at that location; all you need to do after the primary data
center failure is restart the application workloads and associated virtual appliances at the alternate
location99.
Reduced performance. Typical virtual appliances can handle a few Gbps of L4-7 traffic and a few
thousand SSL transactions per second100;
Appliance sprawl. Ease-of-deployment usually results in numerous instances of virtual
appliances, triggering the need for a completely different approach to configuration
management, monitoring and auditing (the situation is no different from the one we experienced
when server virtualization became widely used).
Shift in responsibilities. It's impossible to configure and manage per-application-stack virtual
appliances using the same methods, tools and processes as a pair of physical appliances.
Licensing challenges. Some vendors try to license virtual appliances using the same per-box
model they used in the physical world.
The move to virtual appliances enabled them to consider overlay virtual networks; it's trivial to
deploy virtual appliances on top of overlay virtual networks.
Finally, they decided to increase application security by containerizing individual workloads and
using VM NIC filters (aka microsegmentation) instead of appliance-based firewalls wherever possible.
100
Virtual appliance performance is becoming a non-issue
http://blog.ipspace.net/2013/04/virtual-appliance-performance-is.html
VM NIC firewalls will increase the packet filtering performance; the central firewalls will no
longer be a chokepoint;
Virtual appliances will reduce ACME's dependence on hardware appliances and increase the
overall network services (particularly load balancing) performance with a scale-out appliance
architecture;
Overlay virtual networks will ease the deployment of the large number of virtual network segments
that will be required to containerize the application workloads.
Most VM NIC firewalls don't offer the same level of security as their more traditional counterparts;
most of them offer stateful packet filtering capabilities similar to reflexive ACLs101.
In-kernel VM NIC firewalls rarely offer application-level gateways (ALG) or layer-7 payload
inspection (deep packet inspection DPI).
101
The spectrum of firewall statefulness
http://blog.ipspace.net/2013/03/the-spectrum-of-firewall-statefulness.html
ORGANIZATIONAL DRAWBACKS
The technical drawbacks identified by the ACME architects are insignificant compared to the
organizational and process changes that the new technologies require103,104:
The move from traditional firewalls to VM NIC firewalls requires a total re-architecture of the
application network security, including potential adjustments in security policies due to the lack of
deep packet inspection between application tiers105;
PHASED ONBOARDING
Faced with all the potential drawbacks, ACME's IT management team decided to implement a
gradual onboarding of application workloads.
New applications will be developed on the private cloud infrastructure and will include new
technologies and concepts in the application design, development, testing and deployment phases;
Moving an existing application stack to the new private cloud will always include security and
network services reengineering:
Load balancing rules (or contexts) from existing physical appliances will be migrated to per-
application virtual appliances;
Intra-application firewall rules will be replaced by equivalent rules implemented with VM NIC
firewalls wherever possible;
ORCHESTRATION CHALLENGES
Virtual appliances and overlay virtual networks enable simplified workload mobility, but they do not
solve the whole problem. Moving a complex application workload between instances of cloud
orchestration systems (sometimes even across availability zones) requires numerous orchestration
steps before it's safe to restart the application workload (a rough sketch of such a workflow follows
the list below):
Virtual machine definitions and virtual disks have to be imported into the target environment
(assuming the data is already present on-site due to storage replication or backup procedures);
Internal virtual networks (port groups) used by the application stack have to be recreated in the
target environment;
Outside interface(s) of virtual appliances have to be connected to the external networks in the
target environment;
Virtual appliances have to be restarted;
Configuration of the virtual appliance outside interfaces might have to be adjusted to reflect
different IP addressing scheme used in the target environment. IP readdressing might trigger
additional changes in DNS108;
108
IP renumbering in disaster avoidance data center designs
http://blog.ipspace.net/2012/01/ip-renumbering-in-disaster-avoidance.html
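A minimal sketch of such a workflow (added here, not part of the original text; every function called on the target object is a hypothetical placeholder for a cloud-orchestration-system API call, not a real SDK):

def migrate_application_stack(app, target):
    # Import VM definitions and virtual disks (data assumed to be already
    # present on-site thanks to storage replication or backup procedures).
    vms = target.import_vm_definitions(app.vm_definitions)

    # Recreate the internal virtual networks (port groups) used by the stack.
    for segment in app.internal_networks:
        target.create_virtual_network(segment)

    # Connect outside interfaces of virtual appliances to external networks,
    # restart the appliances, and adjust their outside addressing (which might
    # also trigger DNS changes).
    for appliance in app.virtual_appliances:
        target.attach_external_network(appliance.outside_interface)
        target.start_vm(appliance)
        target.update_outside_addressing(appliance, target.external_subnet)

    # Only now is it safe to restart the application VMs.
    for vm in vms:
        target.start_vm(vm)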
Some orchestration systems (example: vCloud Director) allow users to create application containers
that contain enough information to recreate virtual machines and virtual networks in a different
cloud instance, but even those environments usually require some custom code to connect migrated
workloads to external services.
Cloud architects sometimes decide to bypass the limitations of cloud orchestration systems
(example: lack of IP readdressing capabilities) by deploying stretched layer-2 subnets,
effectively turning multiple cloud instances into a single failure domain111. See the Scale-Out
Private Cloud Infrastructure chapter for more details.
Virtual routers could establish routing protocol adjacency (preferably using BGP 112) with first-hop
layer-3 switches in the physical cloud infrastructure (ToR or core switches depending on the data
center design).
One could use BGP peer templates on the physical switches, allowing them to accept BGP
connections from a range of directly connected IP addresses (outside IP addresses assigned to
virtual routers via DHCP), and use MD5 authentication to provide some baseline security.
A central BGP route server would be an even better solution. The route server could be
dynamically configured by the cloud orchestration system to perform virtual router authentication
and route filtering. Finally, you could assign the same loopback IP address to route servers in all
data centers (or availability zones), making it easier for the edge virtual router to find its BGP
neighbor.
112
Virtual appliance routing network engineers survival guide
http://blog.ipspace.net/2013/08/virtual-appliance-routing-network.html
Workload mobility between different availability zones of the same cloud orchestration system is
easy to achieve, as most cloud orchestration systems automatically create all underlying objects
(example: virtual networks) across availability zones as required.
It's also possible to solve the orchestration challenge of a disaster recovery solution by restarting
the cloud orchestration system at a backup location (which would result in automatic recreation of
all managed objects, including virtual machines and virtual networks).
Workload mobility across multiple cloud orchestration instances, or between private and public
clouds, requires extensive orchestration support, either available in the cloud orchestration system,
or implemented with an add-on orchestration tool.
113
Reduce costs and gain efficiencies with SDDC
http://blog.ipspace.net/2014/08/interview-reduce-costs-and-gain.html
IN THIS CHAPTER:
Should they build a new data center or try to implement disaster recovery in a public cloud?
If they decide to build a new data center, where should they build it? Should they own the
premises or use a colocation facility?
How could they optimize the infrastructure utilization if they decide to build their own
infrastructure or rent permanent infrastructure from a cloud provider?
Keep in mind that complete data center failures tend to be rare events; if they occur more
often than once a decade, someone did a poor job planning the data center location, or
designing or implementing the data center infrastructure. Implementing complex recovery
solutions (example: stretched LAN segments114 to enable live VM mobility115) to cope with
rare events is usually overkill that creates a permanent technical debt, which inevitably
raises the infrastructure operational expenses116.
A recovery point objective (RPO) measured in seconds or minutes requires accurate data to be present
at the DR site at the time of the disaster. There are numerous solutions one can use to satisfy this
requirement:
A short recovery time objective (RTO) requires warm data and at least some minimal warm/hot
infrastructure to be operational at the DR site. Expecting to get free capacity at a hosted DR site
the moment the disaster strikes, and to be able to set up the whole data center infrastructure in
hours, is ludicrous.
A recovery time objective measured in tens of minutes or hours usually precludes data recovery
from off-site backups; usable data should already be present at the DR site.
Synchronous storage replication is the all-encompassing solution that enables low RTO with minimal
loss of data (RPO close to zero). It is also hard to implement over longer distances, and requires
significant bandwidth.
117
Long-Distance vMotion, Stretched HA Clusters and Business Needs
http://blog.ipspace.net/2013/01/long-distance-vmotion-stretched-ha.html
APPLICATION REQUIREMENTS
Ideal applications118 would have a failure-resistant scale-out architecture, relying on DNS or other
similar tools (example: ZooKeeper) to find service endpoints.
Real-life applications often rely on fixed (or even hardcoded) IP addresses and convoluted firewall
rules or load balancing algorithms. Migrating such applications to a DR site quickly becomes an
exercise in futility, usually resulting in increasingly complex network-level kludges to support
impossible requirements dictated by the application teams (and often not justified by any real
business needs119).
They don't share VLANs with other workloads (in which case one can migrate IP subnets together
with the application);
The external service endpoint (example: external IP address on a load balancer) is not fixed.
You can use routing tricks, for example advertising a host route into BGP, to move the
external service endpoint across multiple locations without deploying stretched subnets.
These tricks are, however, best avoided, as they inevitably increase the network complexity
and operational costs.
Finally, one SHOULD use virtual appliances when migrating application stacks to a disaster recovery
site to simplify the migration process and ensure that the virtual appliance used at the DR site has
the same configuration as the primary production appliance121.
DEPLOYMENT SCENARIOS
A disaster recovery site could be implemented with dedicated data center infrastructure (on-
premises or colocated), third-party dedicated infrastructure, or public cloud infrastructure.
Data replication requirements have been addressed in the Technical Requirements Dictated by
Business Needs section.
Orchestration system compatibility is crucial for environments that haven't yet implemented
automated application deployment into a cloud infrastructure.
ACME uses semi-manual processes to create VM virtual disks, create VMs, and start/stop VMs. It's
absurd to expect easy migration into an environment that doesn't mirror the existing virtualization
environment.
122
Stretched Clusters: Almost as Good as Heptagonal Wheels
http://blog.ipspace.net/2011/06/stretched-clusters-almost-as-good-as.html
The requirement for pre-provisioned network infrastructure might severely limit the deployment
options; there are very few tools that would automatically mirror the network infrastructure setup
(subnets, VM NIC firewall rules, distributed routing functionality) to a DR infrastructure, particularly
if the destination environment uses different hypervisor, network virtualization, and orchestration
solutions (example: migrating workloads from a vCenter-based vSphere environment into a public
OpenStack-based cloud).
Organizations with fully automated application stack deployments that include dynamic provisioning
of virtual network infrastructure and services using a cloud orchestration system are obviously able
to quickly deploy their workloads on any infrastructure compatible with their deployment tools.
Automated failover capabilities tested under real-life conditions are crucial for a successful
migration to a DR site. Expecting a manual DR migration supported by a hodgepodge of kludges
relying on dubious technologies (live VM mobility or automatic VM restart across stretched high-
availability clusters) to work is an almost-guaranteed recipe for another disaster.
While the network and cloud architects started designing the DR data center, the application teams,
in conjunction with a concerned CFO, got a fantastic idea: wouldn't it be great if they could deploy
components of the same application stack across both locations, resulting in better utilization of the
new infrastructure?
The "let's use what we have" idea is indubitably alluring as long as:
It doesn't decrease the application performance (see below), which is hard to predict unless one
has an in-depth understanding of the application stack and the communication patterns between
individual components;
The infrastructure is not utilized beyond the point where the failure of a single data center would
overload the other data center(s).
In other words, if you plan to spread the application workload across two data centers, each one of
them shouldn't be more than 50% utilized, unless you're willing to shut down non-critical
applications to free up additional resources during the disaster recovery (tools like VMware SRM include
this step in their automated disaster recovery procedure).
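To make the arithmetic explicit (a short sketch added here, not part of the original text): with workload spread evenly across N data centers, the surviving sites can absorb the load of a failed one only if each site runs at no more than (N-1)/N utilization.

def max_safe_utilization(data_centers):
    # Highest per-site utilization that still lets the surviving sites absorb
    # the load of one failed data center, assuming evenly spread workload and
    # no non-critical workload that could be shut down during recovery.
    return (data_centers - 1) / data_centers

print(max_safe_utilization(2))  # 0.5  -> 50% per data center
print(max_safe_utilization(3))  # 0.67 -> roughly two thirds per data center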
Typical web applications organize the functionality of the generic architecture into the following
components (see also Figure 13-2):
Web servers;
Application servers;
Database servers;
123
PVLAN, VXLAN and Cloud Application Architectures
http://blog.ipspace.net/2012/08/pvlan-vxlan-and-cloud-application.html
The amount of data transferred between application components usually resembles the diagram in
Figure 13-3 (not to scale): the data transfer toward the web browser is pretty well optimized, while
the amount of data exchanged between the other application stack components is usually
significantly larger.
Figure 13-3: Total amount of data transferred between tiers in a typical web application
The number of requests made between a web browser and a web server (and the number of round-
trip times, RTT, a web transaction takes) is usually very well understood; this part of the
application stack tends to have the lowest bandwidth and highest latency, and is thus heavily
optimized in well-performing web applications.
For more details, watch the excellent video by Ilya Grigorik, and read his book.
The problem is compounded by the lack of good measurement tools. Measuring the total amount of
data exchanged between application stack components is relatively easy (NetFlow records should
provide an accurate answer), but it's really hard to measure the number of requests between
application components that are needed to fulfill a client request.
Figure 13-5: Typical transfer times in a single data center application deployment
When the components of an application stack reside in different geographical locations, the
bandwidth limitations between the components become significant.
Once we know the available bandwidth between the locations, we can modify the single-location
diagram from Figure 13-5 to reflect the new bandwidth constraints between the components of the
application stack. The results are shown in Figure 13-6.
Usable bandwidth between application tiers might be lower than the maximum available
bandwidth due to TCP constraints; see the TCP throughput calculator (based on the Mathis
formula) for more details.
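As a rough, back-of-the-envelope illustration (added here, not part of the original text; the constant and the simplified formula are approximations), the commonly cited Mathis estimate puts single-flow TCP throughput at roughly MSS/RTT multiplied by C divided by the square root of the loss probability:

from math import sqrt

def mathis_throughput_bps(mss_bytes, rtt_seconds, loss_probability, c=1.22):
    # Simplified Mathis estimate of single-flow TCP throughput; C is roughly
    # 1.22 without delayed ACKs. Real throughput also depends on window limits,
    # the congestion-control variant, and other factors.
    return (mss_bytes * 8 / rtt_seconds) * (c / sqrt(loss_probability))

# 1460-byte MSS, 50 ms RTT between locations, 0.01% packet loss:
print(mathis_throughput_bps(1460, 0.050, 0.0001) / 1e6)  # roughly 28 Mbps

Even with plenty of raw bandwidth between the locations, a single TCP session over a lossy 50 ms path is limited to a few tens of Mbps.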
Reduced bandwidth between the application and database servers results in increased response time
(increased total height of all orange bars), as illustrated in Figure 13-7.
You might decide to ignore the RTT between (for example) a web server and a database server;
after all, adding 20 to 50 msec of extra latency to web request processing doesn't seem like much.
The real problem is that you don't know how many SQL transactions (each one resulting in another
round trip) it takes to get the data out of the database.
Some applications are well written and try to minimize the number of SQL queries (or RPC or web
services requests), but it's not uncommon to see an application issuing hundreds of SQL queries for
each user request due to poor programming practices.
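A quick back-of-the-envelope calculation (added here, not part of the original text) shows why the number of round trips matters far more than the single-trip latency:

def added_latency_ms(sequential_queries, inter_dc_rtt_ms):
    # Extra response time caused by moving the database away from the
    # application servers, assuming the queries run sequentially (worst case).
    return sequential_queries * inter_dc_rtt_ms

print(added_latency_ms(5, 25))    # 125 ms  - barely noticeable
print(added_latency_ms(300, 25))  # 7500 ms - the page becomes unusable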
124
For more details, watch the TCP, HTTP and SPDY webinar
http://content.ipspace.net/get/SPDY
In most cases, one has to test the application under realistic conditions to approximate the impact of
distributed application stack components, using tools125 that can insert bandwidth limitations,
additional latency, or unreliable packet delivery (random drops) in the path between two application
stack components.
Does that mean that ACME cannot utilize the infrastructure in the second data center during regular
operations? Definitely not; they have at least two options:
Deploy some application stacks in the primary data center and other application stacks in the DR data
center, as long as those applications don't share common databases126.
Most distributed application deployments solve this problem with a single read-write database
instance (residing in the primary data center) and read-only database replicas in all other data
centers (see Figure 13-11). Application stacks in secondary data centers can execute only read-
only transactions; all read-write transactions have to be executed in the primary data center.
This approach works extremely well in environments that can tolerate eventual consistency
(example: on-line sales or bookings), particularly when the ratio between read-only and
read-write transactions is significant (in many environments the ratio exceeds 100:1).
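A minimal sketch (added here, not part of the original text; the connection strings and the helper function are hypothetical) of how an application might send read-only transactions to the nearby replica and read-write transactions to the single read-write instance:

# Hypothetical endpoints; a real deployment would take these from configuration
# or service discovery.
PRIMARY_DSN = "postgresql://db.primary.acme.example/app"      # read-write master
LOCAL_REPLICA_DSN = "postgresql://db.local.acme.example/app"  # read-only replica

def pick_dsn(read_only):
    # Read-only transactions stay in the local data center; anything that
    # modifies data crosses the WAN to the primary data center.
    return LOCAL_REPLICA_DSN if read_only else PRIMARY_DSN

# Usage sketch (connect() stands for the database driver's connection call):
# reporting_conn = connect(pick_dsn(read_only=True))
# ordering_conn  = connect(pick_dsn(read_only=False))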
When a user of an application stack in a secondary data center executes a read-write transaction, that
transaction must be redirected to the read-write application stack in the primary data center (see
Figure 13-12).
Applications that use the same database connection for read-only and read-write transactions can
use multiple HTTP host names or HTTP redirects (implemented on web servers or even load
balancers) to redirect read-write requests to the application stack in the primary data center.
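As an illustration (added here, not part of the original text; the primary-site hostname is hypothetical and the sketch assumes the Flask framework), the web tier could redirect anything that modifies data to the primary data center with an HTTP 307 redirect, which preserves the request method and body:

from flask import Flask, redirect, request

app = Flask(__name__)
PRIMARY_SITE = "https://app.primary.acme.example"  # hypothetical read-write site

@app.before_request
def send_writes_to_primary():
    # Read-only requests are served locally; everything else is redirected to
    # the application stack in the primary data center.
    if request.method not in ("GET", "HEAD", "OPTIONS"):
        return redirect(PRIMARY_SITE + request.full_path, code=307)

@app.route("/catalog")
def catalog():
    return "read-only content served from the local data center"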
129
Database Mirroring in SQL Server
https://msdn.microsoft.com/en-us/library/5h52hef8(v=vs.110).aspx
Multiple copies of the application stack running in different data centers enable the networking team to
use traditional load balancing mechanisms to send individual users to the nearest instance of the
application stack:
DNS-based load balancing (illustrated in Figure 13-13) works well for applications with an RTO
higher than a few minutes; browser-side DNS pinning usually interferes with the DNS time-to-
live mechanism, making very low DNS TTL values useless.
Anycast load balancing (advertising the same IP address range from multiple locations, as shown
in Figure 13-14) is an interesting solution used by numerous very large web properties. It
works well when the data centers are so far apart that there's no chance a single user could use
more than one of them at the same time due to ECMP load balancing.
The ideal solution depends on the organization's business needs, in particular the Recovery Time
Objective, Recovery Point Objective and the desired application availability, as well as the
organization's readiness for automated workload deployments.
Even organizations that have to build dedicated disaster recovery infrastructure due to manual
application deployment processes can optimize the utilization of that infrastructure by deploying
individual isolated application stacks or building application swimlanes in multiple data centers.
Regardless of the chosen disaster recovery solution, avoid designs that needlessly increase
infrastructure complexity during regular operations to provide a solution for rare events
like total data center failure.
Also keep in mind that most disaster recovery plans don't survive the first encounter with
reality; a disaster recovery plan that hasn't been thoroughly tested will often fail in
interestingly unexpected ways.