Technology brief
Contents

Introduction
Hardware infrastructure
Network architecture
    Network complexity
    Network loop protection
    Congestion control in converged networks
Configuration and deployment capability
    Virtual Connect Server Profile
    Cisco Service Profile
    Server adapter and PCIe bus scalability
Performance and scalability
Management
Conclusion
For more information
Introduction
This paper compares the network architecture, configuration, deployment, management capabilities, and performance of two systems:

- HP BladeSystem servers with Virtual Connect (VC)
- Cisco Unified Computing System (UCS)

Both systems connect physical and virtual servers to LAN and SAN networks and manage those connections up to the server-network edge. This paper does not compare HP CloudSystem Matrix to Cisco UCS. CloudSystem Matrix lets you readily provision and adjust infrastructure services to meet changing business demands. Neither Cisco UCS servers nor HP BladeSystem servers with VC alone provide that higher level of capability and services.
Hardware infrastructure
To compare HP BladeSystem with VC to Cisco UCS, we need to establish a hardware baseline. This ensures that we consider function and performance comparisons between equivalent systems as much as possible. Table 1 specifies the hardware (sample servers, NICs, switches, and interconnects) and primary management software required to compare BladeSystem and UCS.
Table 1: Comparable UCS and VC BladeSystem components

Component               | Cisco UCS                                                        | HP BladeSystem with VC
------------------------|------------------------------------------------------------------|------------------------------------------------------------------
Enclosure               | UCS 5108 blade server chassis (capacity: 8 half-height blades)   | c7000 enclosure (capacity: 16 half-height blades)
Domain interconnect     | Redundant UCS 6248UP Fabric Interconnects with 48 port connections to the FEX | Multi-enclosure stacking links between VC modules
Enclosure interconnect  | Cisco UCS 2208XP Series Fabric Extenders (FEX) with 8 x 10 Gb uplinks to the Fabric Interconnect and 32 midplane connections to the servers | Redundant VC FlexFabric 10 Gb/24-port modules with 16 x 10 Gb downlinks through the midplane
NICs                    | Cisco UCS 1280 Virtual Interface Card (VIC) with 8 x 10 GbE uplink ports | Integrated NC553i Dual Port FlexFabric Adapter with 2 x 10 Gb uplink ports
Servers                 | Cisco UCS B200 M2                                                | HP ProLiant BL460c G7
Connectivity management | UCS Manager                                                      | Virtual Connect Manager (VCM)
Both the HP and Cisco network hardware portfolios contain more options than those listed in Table 1. The technology and hardware compared in this paper represent the newest and highest-performing options available from HP and Cisco as of October 2011. This paper explores the capabilities and issues in both systems when you maximize performance and scalability. Check vendor options to find the appropriate level of scalability and performance for your business environment.
Network architecture
Cloud-computing and service-oriented applications are driving today's increasing demand for virtualization. As a result, a major shift is under way in data center traffic patterns. Server-to-server (east-west) communication generated by these new applications will likely account for up to 80% of all data center traffic by 2014, according to Gartner. HP BladeSystem and Cisco UCS network architectures are significantly different. Figure 1 illustrates the differences between the two network architectures.
Figure 1: VC has a flatter architecture than Cisco UCS in End Host Mode configuration.
The UCS design is hierarchical: most data traffic goes upstream to Layer 2 aggregation switches before heading back down to its target. UCS Fabric Interconnects support two operating modes, Switch Mode and End Host Mode. Cisco's best practices recommend End Host Mode for UCS configuration, and Cisco enables this mode by default. In End Host Mode, the Cisco UCS hierarchical model uses active-active alternating A and B fabrics (that is, topologies connecting network nodes through one or more switches). A dual-port NIC, dual FEX modules per enclosure, and dual fabric interconnect switches per UCS domain provide redundancy. In End Host Mode, the A and B fabrics are isolated until traffic reaches the Layer 2 aggregation switches. This means that server-to-server traffic must travel up to the Layer 2 aggregation switches and back through the FEX modules to the server. The exception is server-to-server traffic within the same fabric: a fabric interconnect can connect NICs associated with different servers if they are all part of the same fabric. Because the percentage of server-to-server traffic is increasing rapidly, the UCS architecture's reliance on fabric interconnect switches increases latency, adds another hop, and can create a network traffic bottleneck. This increases the probability of network congestion due to oversubscription, which can result in unpredictable network performance and application behavior.
In contrast, the HP BladeSystem design is flatter. With HP VC and ProLiant BladeSystem servers, the c7000 enclosure supports 16 half-height blade servers. VC FlexFabric interconnect modules provide redundant management and a fault-tolerant stacking solution. FlexFabric modules connect all LAN and SAN traffic through converged fabrics, with egress to external networks through the same VC module. Server-to-server traffic stays within the VC domain. The VC domain exists within a single enclosure, or within multiple enclosures when configured as a multi-enclosure domain with stacking links. Server-to-core (north-south) traffic is in its native LAN or SAN format when it exits or enters the VC domain at the uplink ports of the VC FlexFabric interconnect modules. All network traffic connects to Layer 2 aggregation switches using appropriate industry-standard network connection protocols (native Ethernet, Fibre Channel).
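As an illustration of the paths described above, a short sketch can count the links an east-west frame crosses in each design. The device labels are simplified stand-ins for the components already discussed, not exact product models:

```python
# Illustrative east-west (server-to-server) paths, following the descriptions above.
# In UCS End Host Mode, the A and B fabrics meet only at the L2 aggregation layer;
# in a VC domain, traffic stays inside the enclosure's FlexFabric modules.
ucs_path = ["source NIC", "FEX", "fabric interconnect",
            "L2 aggregation switch",
            "fabric interconnect", "FEX", "target NIC"]

vc_path = ["source NIC", "VC FlexFabric module", "target NIC"]

def hop_count(path):
    """Number of links a frame crosses along the path."""
    return len(path) - 1

print(f"UCS (End Host Mode, cross-fabric): {hop_count(ucs_path)} hops")
print(f"HP VC (same enclosure):            {hop_count(vc_path)} hops")
```

Each extra hop on the UCS path is a point where latency accumulates and oversubscription can bite.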
Network complexity
VC FlexFabric provides a simple way to connect 16 blade servers in a single c7000 enclosure at the server edge while reducing networking sprawl. Converged LAN and SAN traffic travels from the embedded FlexFabric adapter available in ProLiant G7 server blades to the VC FlexFabric 10 Gb/24-port interconnect module over dual 10 Gb uplinks. The VC configuration includes two FlexFabric modules for redundancy, creating a complete, redundant server-edge network from just two components, versus the 56 components that the comparable UCS configuration requires. To connect 16 servers in a redundant configuration using the recommended End Host Mode, Cisco's approach spans two 5108 chassis and requires 2 Fabric Interconnects, 32 cables, 4 FEX modules, and 16 VIC cards. This approach adds network hops, latency, and complexity, even for blades in the same enclosure (Figure 2). It also increases the number of possible fault events, such as down-link events caused by cable faults, and the probability of failover.
Figure 2: Best practice redundant configurations for 16 servers using all 8 X 10 Gb 1280 VIC adapter ports
Congestion control in converged networks

As converged traffic moves beyond the server-network edge and into multi-hop configurations, congestion control becomes a significant issue. The IEEE ratified the 802.1Qau-2010 standard, also known as Quantized Congestion Notification (QCN), in March 2010. HP actively participated in and drove the ratification efforts. QCN is one of the most significant standards for creating end-to-end converged data center networks that carry both LAN and SAN traffic. While existing Priority-based Flow Control (PFC) protects against link-level congestion, QCN addresses the end-to-end, switched, converged network infrastructure. It is a multi-hop protocol designed to protect the network against persistent oversubscribed congestion. To enable QCN in a network, the entire data path, including converged network adapters and switches, must support QCN. QCN does not guarantee a lossless environment in the LAN; it must work in conjunction with PFC to avoid dropping packets. The purpose of the QCN standard is to address current and future data center network densities and to create a solution that can keep pace with network growth. QCN-compliant, end-to-end network hardware is not yet broadly available, so HP has not yet implemented QCN.

Cisco does not support QCN and claims that QCN is not a requirement for deploying an end-to-end FCoE network. Instead, the Cisco multi-hop architecture includes full Fibre Channel switching services, such as Distributed Name Server, the Fabric Shortest Path First (FSPF) routing protocol, and zoning, in each fabric interconnect or data center switch. This Fibre Channel switching relies entirely on DCB-standard, PFC-based flow control. As a result, there are unanswered questions about congestion control for multi-hop converged networks:

- Will Cisco and other industry vendors adopt QCN? When will this happen?
- More important, how will Cisco interoperate with networks that support QCN?
- Does Cisco's multi-hop architecture, with its inclusion of Fibre Channel switching-based congestion control, add latency?
- Is the legacy FSPF protocol, developed for native FC networks, adequate to ensure end-to-end congestion management in arbitrarily complex data center networks?
- Does the Cisco multi-hop architecture require a Cisco-only solution for customers?

If you choose to implement the QCN standard, the next generation of HP VC hardware will offer end-to-end QCN support and will integrate with other QCN-compliant network infrastructures.
Virtual Connect Server Profile

A VC server profile defines the appropriate MAC, PXE, WWN, and SAN boot settings and connects the server to the appropriate networks and fabrics. VCM lets you use the iSCSI Boot Assistant to configure a server to boot from a remote iSCSI target as part of the VC server profile. This simplifies 90% of what can otherwise be a manual, error-prone process. Once you identify the iSCSI target, VCM automates most of the setup work and retrieves storage parameters directly from the target. The iSCSI Boot Assistant then attaches the boot configuration parameters to the server profile. You can move VC server profiles between Virtual Connect domains as long as the servers remain physically connected to the same networks.
The PCIe Gen 2 bus constraint is an industry-wide hardware limitation that you should factor into performance calculations. With UCS, you may be investing in network bandwidth that you cannot use. In contrast, HP does not configure VC adapters with 8 x 10 Gb ports on x8 or x16 PCIe Gen 2 connections. HP balances VC adapters at 2 x 10 Gb with a x8 PCIe Gen 2 bus connection. Consult hardware manufacturers' roadmaps for future generations of PCIe server bus implementations.
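The arithmetic behind this mismatch is straightforward. The sketch below is an illustration based only on the published PCIe Gen 2 per-lane rate and the adapter port counts from Table 1, not on vendor benchmark data:

```python
# PCIe Gen 2 signals at 5 GT/s per lane with 8b/10b encoding,
# leaving 4 Gb/s of usable bandwidth per lane, per direction.
USABLE_GBPS_PER_LANE = 5 * 8 / 10  # = 4.0

def pcie_gen2_bandwidth_gbps(lanes):
    """Usable one-direction bandwidth of a PCIe Gen 2 slot."""
    return lanes * USABLE_GBPS_PER_LANE

vic_1280_ports_gbps = 8 * 10  # 8 x 10 Gb ports on the UCS 1280 VIC (Table 1)
nc553i_ports_gbps = 2 * 10    # 2 x 10 Gb ports on the NC553i adapter (Table 1)

print(pcie_gen2_bandwidth_gbps(8))   # x8 slot:  32.0 Gb/s
print(pcie_gen2_bandwidth_gbps(16))  # x16 slot: 64.0 Gb/s
# Even a x16 Gen 2 slot (64 Gb/s) cannot feed 80 Gb/s of VIC ports,
# while a x8 slot (32 Gb/s) comfortably feeds the 20 Gb/s NC553i.
```

This is why the paper describes the HP 2 x 10 Gb adapter on a x8 connection as balanced: the bus can sustain everything the ports can carry.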
Figure 3: This single, fully utilized UCS enclosure shows a 4:1 oversubscription in an active-active configuration, and a 2:1 oversubscription in an active-standby configuration.
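The 4:1 ratio follows directly from the Table 1 port counts. A minimal sketch of the calculation, assuming every 10 Gb downlink carries traffic in the active-active case and half the server-side ports sit idle in the active-standby case:

```python
# Oversubscription = aggregate downlink bandwidth / aggregate uplink bandwidth.
def oversubscription(downlink_ports, uplink_ports, port_gbps=10):
    return (downlink_ports * port_gbps) / (uplink_ports * port_gbps)

# Per Table 1, each UCS 2208XP FEX has 32 x 10 Gb midplane connections
# to the servers and 8 x 10 Gb uplinks to the fabric interconnect.
print(oversubscription(32, 8))  # 4.0 -> the 4:1 ratio of Figure 3 (active-active)

# With half the server-side ports standing by, the ratio halves:
print(oversubscription(16, 8))  # 2.0 -> 2:1 (active-standby)
```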
The lack of available port connections between the FEX and fabric interconnect modules also affects scalability in UCS systems. Figure 4 shows that with paired 2208XP FEX modules and 6248UP fabric interconnect modules, the maximum number of fully utilized chassis drops from the first to the second generation of UCS hardware.
Figure 4: Fabric interconnect port limitations on fully utilized UCS server and chassis configurations drop scalability by almost half with the 2nd generation UCS hardware.
You should keep in mind that in most cases reduced scalability is the cost of increased bandwidth or decreased oversubscription. The comparisons presented in this paper show what happens to scalability when you maximize performance in VC and UCS systems.
Figure 5 compares the bandwidth and subscription rates for server-to-core traffic in VC and UCS network architectures. In this comparison, we have configured the HP and Cisco servers for 40 Gb/s of network I/O. The cable count for the HP configuration is slightly lower between the interconnect modules and Layer 2 aggregation switches. In this example, HP VC has a 2:1 oversubscription rate compared to a 10:1 oversubscription rate for UCS.
Management
We designed Virtual Connect Enterprise Manager (VCEM) as the primary management tool for multiple VC domains. VCEM is a highly scalable software solution. It centralizes network connection management and workload mobility for thousands of servers that use VC to connect to data and storage networks. VCEM uses the same profile format, content, and general operations as VCM. VCEM provides these core capabilities:

- A single intuitive console that controls up to 250 VC domains (up to 1,000 BladeSystem enclosures and 16,000 servers) in VC multi-enclosure domain configurations
- The ability to define and manage server profiles for multiple VC domains from a central management interface
- A central repository that administers more than 256K MAC addresses and WWNs for server-to-network connectivity, simplifying address assignments and eliminating the risk of conflicts
- Group-based management of VC domains using common configuration profiles that increase infrastructure consistency, limit configuration errors, simplify enclosure deployment, and enable configuration changes pushed to multiple VC domains
- Scripted and manual movement of server connection profiles and associated workloads between BladeSystem enclosures, so that you can add, change, and replace servers across the data center in minutes without affecting production or LAN and SAN availability
- Automated failover of server connection profiles to user-defined spare servers
- Discovery and aggregation of existing VC domain resources into the VCEM console and address repository
- Licenses per c-Class enclosure, simplifying deployment and support

In the Cisco hierarchical management approach, UCS Manager centralizes management of all software and hardware components across multiple chassis and VMs. You access Cisco UCS Manager through a GUI, a CLI, or an XML API. The Cisco UCS "fewer steps" approach to deployment can be an issue in configurations with multiple policy pools, blade types, firmware types, and VMware clusters, where all hardware requires a Service Profile documenting a complex data set. UCS Manager can only span a pair of fabric interconnects. Also, you cannot share UCS Service Profiles across different logical systems without using third-party software, with the associated costs of the software and training. HP provides system management beyond the scope of UCS Manager. As you go up the HP solution stack, you will find a complete management solution with our Matrix Operating Environment, data center service orchestration, and self-service catalogs. Cisco must turn to partner and third-party software to provide this level of service.
Conclusion
Cisco UCS technology mimics many established technologies that form the foundation of HP Virtual Connect. You can find Flex-10 and FlexFabric-like features and capabilities in the UCS Virtual Interface Card 1280, and the UCS Service Profile borrows heavily from VC server profile capabilities. But significant differences become evident when you compare the UCS and VC approaches to architecture and interoperability in the data center. HP bases VC on open, industry-standard technologies; Cisco UCS promotes standards and architecture that force customers into proprietary, Cisco-only solutions. HP VC creates a flat architecture for server-to-server traffic, moving that traffic through high-bandwidth midplane connections within the enclosure. UCS forces all server-to-server traffic upstream to Layer 2 aggregation switches and uses the fabric interconnects to perform all management tasks. VC has no server-to-server oversubscription and does not burden processors with the overhead of unneeded QoS processes, while high oversubscription of UCS server-to-server traffic requires QoS mechanisms to deal with network bottlenecks and reduced performance. Mature, tested VC technology has clear advantages over the emerging UCS technology for managing rapidly growing data centers with a heterogeneous mix of server and network solutions.
Copyright 2011 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. TC0000777, November 2011