
HP ExpertOne


Building HP FlexFabric Data Centers eBook (Exam HP2-Z34)

Hppress.com


© 2014 Hewlett-Packard Development Company, L.P.

Published by:

HP Press 660 4th Street, #802 San Francisco, CA 94107

All rights reserved. No part of this book may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without written permission from the publisher, except for the inclusion of brief quotations in a review.

ISBN: 978-1-937826-90-1

WARNING AND DISCLAIMER This book provides information about the topics covered in the Building HP FlexFabric Data Centers (HP2-Z34) certification exam. Every effort has been made to make this book as complete and as accurate as possible, but no warranty or fitness is implied.

The information is provided on an “as is” basis. The author, HP Press, and Hewlett-Packard Development Company, L.P., shall have neither liability nor responsibility to any person or entity with respect to any loss or damages arising from the information contained in this book or from the use of the discs or programs that may accompany it.

The opinions expressed in this book belong to the author and are not necessarily those of Hewlett-Packard Development Company, L.P.

TRADEMARK ACKNOWLEDGEMENTS All terms mentioned in this book that are known to be trademarks or service marks have been appropriately capitalized. HP Press or Hewlett-Packard Inc. cannot attest to the accuracy of this information. Use of a term in this book should not be regarded as affecting the validity of any trademark or service mark.

GOVERNMENT AND EDUCATION SALES This publisher offers discounts on this book when ordered in quantity for bulk purchases, which may include electronic versions. For more information, please contact U.S. Government and Education Sales 1-855-4HPBOOK (1-855-447-2665) or email sales@hppressbooks.com.

Feedback Information At HP Press, our goal is to create in-depth reference books of the best quality and value. Each book is crafted with care and precision, undergoing rigorous development that involves the expertise of members from the professional technical community.

Readers’ feedback is a continuation of the process. If you have any comments regarding how we could improve the quality of this book, or otherwise alter it to better suit your needs, you can contact us through email at feedback@hppressbooks.com. Please make sure to include the book title and ISBN in your message.

We appreciate your feedback.

Publisher: HP Press

Contributors and Reviewers: Olaf Borowski, Gerhard Roets, Vincent Gilles, Olivier Vallois

HP Press Program Manager: Michael Bishop


HP Headquarters

Hewlett-Packard Company

3000 Hanover Street

Palo Alto, CA

94304–1185

USA

Phone: (+1) 650-857-1501

Fax: (+1) 650-857-5518

HP, COMPAQ and any other product or service name or slogan or logo contained in the HP Press publications or web site are trademarks of HP and its suppliers or licensors and may not be copied, imitated, or used, in whole or in part, without the prior written permission of HP or the applicable trademark holder. Ownership of all such trademarks and the goodwill associated therewith remains with HP or the applicable trademark holder.

Without limiting the generality of the foregoing:

a. Microsoft, Windows and Windows Vista are either US registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries; and

b. Celeron, Celeron Inside, Centrino, Centrino Inside, Core Inside, Intel, Intel Logo, Intel Atom, Intel Atom Inside, Intel Core, Intel Core Inside, Intel Inside Logo, Intel Viiv, Intel vPro, Itanium, Itanium Inside, Pentium, Pentium Inside, ViiV Inside, vPro Inside, Xeon, and Xeon Inside are trademarks of Intel Corporation in the U.S. and other countries.


Special Acknowledgments

This book is based on the Building HP FlexFabric Data Centers course (Course ID: 00908176). HP Press would like to thank the courseware developers, Peter Debruyne, David Bombal, and Steve Sowell.

Thanks to Debi Pearson and Miriam Allred for their help preparing this eBook for publication.

Introduction

This study guide helps you prepare for the Building HP FlexFabric Data Centers exam (HP2-Z34). The HP2-Z34 elective exam is for candidates who want to acquire the HP ASE-FlexNetwork Architect V2 certification, or the HP ASE-FlexNetwork Integrator V1 certification. The exam tests you on specific Data Center topics and technologies such as Multitenant Device Context (MDC), Data Center Bridging (DCB), Multiprotocol Label Switching (MPLS), Fibre Channel over Ethernet (FCoE), Ethernet Virtual Interconnect (EVI), and Multi-Customer Edge (MCE). The exam will also cover topics on high availability and redundancy such as Transparent Interconnection of Lots of Links (TRILL) and Shortest Path Bridging MAC-in-MAC mode (SPBM).

HP ExpertOne Certification

HP ExpertOne is the first end-to-end learning and expertise program that combines comprehensive knowledge and hands-on real-world experience to help you attain the critical skills needed to architect, design, and integrate multivendor and multiservice converged infrastructure and cloud solutions. HP, the largest IT company in the world and the market leader in IT training, is committed to helping you stay relevant and keep pace with the demands of a dynamic, fast-moving industry.

The ExpertOne program takes into account your current certifications and experience, providing the relevant courses and study materials you need to pass the certification exams. As an ExpertOne certified member, your skills, knowledge, and real-world experience are recognized and valued in the marketplace. To continue your professional and career growth, you have access to a large ExpertOne community of IT professionals and decision-makers, including the world’s largest community of cloud experts. Share ideas, best practices, business insights, and challenges as you gain professional connections globally.


To learn more about HP ExpertOne certifications, including storage, servers, networking, converged infrastructure, cloud, and more, please visit hp.com/go/ExpertOne.

Audience

This study guide is designed for networking professionals who want to demonstrate their expertise in implementing HP FlexNetwork solutions by passing the HP2-Z34 certification exam. It is specifically targeted at networking professionals who want to extend their knowledge of how to design and implement HP FlexFabric solutions for the data center.

Assumed Knowledge

To understand the technologies and protocols covered in this study guide, networking professionals should have “on the job” experience. The associated training course, which includes numerous hands-on lab activities, provides a good foundation for the exam, but learners are also expected to have real-world experience.

Relevant Certifications

After you pass these exams, your achievement may be applicable toward more than one certification. To determine which certifications can be credited with this achievement, log in to The Learning Center and view the certifications listed on the exam’s More Details tab. You might be on your way to achieving additional HP certifications.

Preparing for Exam HP2-Z34

This self-study guide does not guarantee that you will have all the knowledge you need to pass the exam. It is expected that you will also draw on real-world experience and would benefit from completing the hands-on lab activities provided in the instructor-led training.

Recommended HP Training

Recommended training to prepare for each exam is accessible from the exam’s page in The Learning Center. See the exam attachment, “Supporting courses,” to view and register for the courses.


Obtain Hands-on Experience

You are not required to take the recommended, supported courses, and completion of training does not guarantee that you will pass the exams. HP strongly recommends a combination of training, thorough review of courseware and additional study references, and sufficient on-the-job experience prior to taking an exam.

Exam Registration

To register for an exam, go to hp.com/certification/learn_more_about_exams.html.


1 Datacenter Products and Technologies Overview

EXAM OBJECTIVES

In this chapter, you learn to:

Understand the components of the HP FlexFabric network architecture.

Describe common datacenter networking requirements.

Position the HP FlexFabric products.

Describe the HP IMC VAN Modules.

INTRODUCTION

This chapter introduces HP’s FlexFabric portfolio and describes how these products can be used to deploy simple, scalable, automated data center networking solutions. Specific data center technologies are also introduced. These include multi-tenant solutions such as MDC, MCE, and SPBM, along with hypervisor integration protocols like EVB and VEPA. Other connectivity solutions include MPLS L2VPN, VPLS, EVI, SPBM, and TRILL.

ASSUMED KNOWLEDGE

Because this course introduces the HP FlexFabric portfolio and datacenter technologies, learners are not expected to have prior knowledge about this topic. It is helpful, however, to be familiar with the requirements and growing trends of modern datacenters.

HP FlexFabric Overview


This chapter provides an overview of the components that are involved in the FlexFabric network architecture. It describes common data center networking requirements, positions HP FlexFabric products, and describes the HP data center technologies.

The World is Moving to a New Style of IT

Many IT functions and systems are continuing to change at a relatively brisk pace. As shown in Figure 1-1, new paradigms arise, such as cloud computing and networking, big data, BYOD and new security mechanisms, to name a few. With these new paradigms come new challenges and new requirements, influencing how we build networks going forward.


Figure 1-1:

The World is Moving to a New Style of IT

■ Cloud: We must understand how to build an agile, flexible and secure network edge, especially with regards to multi-tenancy.

■ Security: We have to rebuild the perimeter of the network wherever a device connects without degrading the quality of business experience.

■ Big Data: We have to enable the network to respond dynamically to real-time data analytics and to deal with the volume of traffic involved.

■ Mobility: We need to simplify the policy model in the campus by unifying wired and wireless networks. In the data center, we need to increase the agility and performance of mobile VMs.


A converged infrastructure can meet these needs by providing several key features, including:

A resilient fabric for less downtime and faster VM mobility

Network virtualization for faster data center provisioning

Software Defined Networking (SDN) – to simplify deployment and security – creating business agility and network alignment to business priorities.

Apps Are Changing - Networks Must Change

Applications are changing and the network infrastructure must be capable of handling these new application requirements. One significant trend is a massive increase in virtualization. Almost any service will be offered as a virtualized service, hosted inside a data center. These virtualized services can be in private clouds, a customer’s local data center, or public clouds. They might even be offered as a type of hybrid cloud service, which is a mix of private and public clouds.

Inside the data center, the bulk of data traffic is now server-to-server. This is mainly due to the change in application behavior, since (as shown in Figure 1-2) we see much more use of federated applications as opposed to monolithic application models of the past.


Figure 1-2:

Apps Are Changing - Networks Must Change


Previously, companies may have used a single email server that provided multiple functions. In today’s environment, companies may instead leverage a front-end server, a business logic server, and a back-end database system. In such a deployment, each client request towards the data center is handled by multiple services inside the data center. This results in similar client-server interactions as in the past, but with increased server-to-server traffic to fulfill those client requests.

Also, many storage services and protocols are now being supported by a converged network that handles both traditional client-server traffic and disk storage-related traffic.

Multi-tier Legacy Architecture in the Data Center (DC)

Federated applications and virtualization have changed the way traffic flows through the infrastructure. As packets must be passed between more and more servers, increased latency can impact performance and end-user productivity. Networks must be designed to mitigate these risks, while ensuring a stable, loop-free environment (see Figure 1-3). Network loops in a large data center environment can have egregious impacts on the business, so the ability to maintain loop-free paths is of particular importance.


Figure 1-3:

Multi-tier Legacy Architecture in the Data Center (DC)

HPN FlexFabric Value Proposition

HP’s FlexFabric approach focuses on three customer benefits: the network should be simple, scalable, and automated.

Simple – reducing operational complexity by up to 75%

■ Unified virtual/physical and LAN/SAN fabrics

■ OS/feature consistency, with no licensing complexity or cost

Scalable – double the fabric scaling, with up to 500% improved service delivery

■ Non-blocking reliable fabric for 100-10,000 hosts

■ Spine and leaf fabric optimized for Cloud, SDN

Automated – cutting network provisioning time from months to minutes

■ 300% faster time to service delivery, Software-Defined Network Fabric

■ Open, standards based programmability, SDN App Store and SDK

HP FlexFabric Product Overview

This product overview section begins with a discussion of core and aggregation switches. This is followed by an overview of access switches, and the IMC network management systems.

HP FlexFabric Core Switches

Figure 1-4 introduces the current portfolio of HP FlexFabric core switches. This includes the HP FlexFabric 12900, 12500, 11900 and the 7904 Switch Series.


Figure 1-4:

HP FlexFabric Core Switches


HP FlexFabric 12900 Switch Series

The HP FlexFabric 12900 Switch Series, shown in Figure 1-5, is an exceedingly capable core data center switch. The switch includes support for OpenFlow 1.3, laying a foundation for SDN and investment protection.


Figure 1-5:

HP FlexFabric 12900 Switch Series

It provides 36 Tbps of throughput in a non-blocking fabric, and supports up to 768 10Gbps ports and up to 256 40Gbps ports. The 12900-series supports Fibre Channel over Ethernet (FCoE) and Data Center Bridging (DCB).

The switch allows for In Service Software Upgrades (ISSU) to minimize downtime. Additionally, protocols like TRILL and SPB can be used to provide scalable connectivity between data center sites. All of these functions can be used in conjunction with IRF to offer a redundant, flexible platform.

HP 12500E Switch Series

The HP 12500E Switch Series, shown in Figure 1-6, allows for up to 24Tbps switching capacity. It is available in 8 and 18-slot chassis. It supports very large Layer 2 and Layer 3 address and routing tables, and data buffers. It allows for up to four units in an IRF system.



Figure 1-6:

HP 12500E Switch Series

The HP 12500 Switch Series has been updated, and so now supports high-density 10 gigabit, 40 gigabit, or 100 gigabit Ethernet modules - up to 400 gigabits per slot. It supports traditional Layer 2 and Layer 3 functions for both IPv4 and IPv6. These devices also feature support for more modern protocols, such as MPLS, VPLS, MDC, and EVI.

Wire-speed services provide a high-performance backbone while the energy-efficient design lowers operational costs.

HP 12500 Switch Series Overview

Figure 1-7 compares the features and capabilities of the 12500C and 12500E platforms. The 12500C is based on Comware5 while the 12500E is based on Comware7. The use of Comware7 results in enhanced MPU performance.



Figure 1-7:

HP 12500 Switch Series Overview

HP FlexFabric 11908 Switch Series

The HP FlexFabric 11900 Switch Series, shown in Figure 1-8, supports up to 7.7Tbps of throughput in a non-blocking fabric. This switch can be a good choice as a data center aggregation switch.


Figure 1-8:

HP FlexFabric 11908 Switch Series

HP FlexFabric 7900 Switch Series


The HP FlexFabric 7900 Switch Series, shown in Figure 1-9, is a next-generation compact modular data center core switch. It is based on the same architecture and Comware7 code as larger chassis-based switches.


Figure 1-9:

HP FlexFabric 7900 Switch Series

The feature set includes full support for IRF, TRILL, DCB, EVI, MDC, OpenFlow and VXLAN.

HP FlexFabric Access Switches

The HP 5900 Switch Series, shown in Figure 1-10, can serve as traditional top-of-rack access switches.


Figure 1-10:

HP FlexFabric Access Switches


The HP 5900AF Switch Series is available in various models, including versions with 48 1Gbps ports and versions with 48 10Gbps ports, each with 4 x 40Gbps uplink ports. It is also available with 48 1/10Gbps ports and 4 x 40Gbps uplink connections. The 1/10Gbps port version is especially convenient for data centers that are migrating servers from 1Gbps to 10Gbps interfaces.

The 5930 is a Top-of-Rack (ToR) switch with 32 40Gbps ports. This switch could be used to terminate 40 gigabit connections from blade server enclosures, or it could be deployed as a distribution or aggregation layer device to concentrate a set of HP 5900 Switch Series switches. Each of the 40Gbps ports can be split out as four 10Gbps ports with a special cable. This means that the 32 40Gbps ports could become 128 10Gbps ports, available in a 1U device.

The “CP” in the 5900CP model stands for Converged Ports. As the name implies, both Fibre Channel over Ethernet (FCoE) and native Fibre Channel (FC) are supported in a single, converged ToR access switch. All of the 5900 Switch Series shown here support FCoE, but only the 5900CP also supports native FC connectivity. The module installed in each port determines whether that port functions as a 10Gbps FCoE port, or as an 8Gbps FC port. The 5900CP supports FCoE-to-FC gateway functionality.

The HP FlexFabric 5900v is a virtual switch that can be installed as a replacement for the VMware switch on a Hypervisor. The 5900v is based on the VEPA protocol. This means that the 5900v does not switch inter-VM traffic locally; inter-VM traffic is sent to an external ToR switch to be serviced. This is why the 5900v must be deployed in combination with a physical switch which also supports the VEPA protocol. All Comware7-based 5900-series switches support VEPA.

HP blade enclosures can have interconnects installed. These interconnects must match the physical form factor of the blade enclosure. The HP 6125 XLG can provide this blade server interconnectivity.

This switch belongs to the HP 5900 Switch Series family of switches, as it provides 10Gbps access ports for blade servers, along with 4 x 40Gbps uplink ports. As a Comware7-based product, the 6125 XLG can be configured with the same protocols and features as traditional HP 5900 Switch Series. For example, features like FCoE and IRF are supported. This means that multiple 6125 XLG switches in the same blade enclosure can be grouped together as a single virtual IRF system. It also supports VEPA, and so can work with the 5900v switch running on a Hypervisor.

HP FlexFabric 5930 Switch Series


The HP FlexFabric 5930 Switch Series, shown in Figure 1-11, is built on the latest generation of ASICs, and so includes hardware support for VXLAN and NVGRE. VXLAN is an overlay virtualization technology which is largely promoted by VMware. NVGRE is an overlay technology which is largely promoted by Microsoft and used in its Hyper-V product.


Figure 1-11:

HP FlexFabric 5930 Switch Series

Since the HP FlexFabric 5930 Switch Series has hardware support for both technologies, both overlay technologies can be interconnected with traditional VLANs, with support for OpenFlow and SDN. With 32 40Gbps ports, it is suitable as a component in large-scale spine or leaf networks that can leverage IRF and TRILL.

HP FlexFabric 5900CP Converged Switch

The HP FlexFabric 5900CP supports 48 x 10Gbps converged ports. As shown in Figure 1-12, support for 4/8Gbps FC or 1/10Gbps Ethernet is available on all ports. It supports HP’s universal converged optic transceivers. The hardware optics in each port determines whether that port will function as a native FC port, or as an Ethernet port. The converged optics interface is a single device that can be configured to operate as either of the two. This means that the network administrator can easily change the operational mode of the physical interface via CLI configuration. This eliminates the need to unplug and reconnect transceivers for this purpose.



Figure 1-12:

HP FlexFabric 5900CP Converged Switch
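The mode change described above is driven from the switch CLI. The following is a hedged sketch only (the interface number is an example, and the exact command names should be verified in the 5900CP configuration guide for your software release); from system view, converting a converged port from Ethernet to native FC operation generally looks like this:

  interface Ten-GigabitEthernet 1/0/10
   port-type fc

After the conversion the port is managed as an FC interface; a corresponding command under that FC interface returns it to Ethernet operation.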

FlexFabric 5700 Datacenter ToR Switch

The HP FlexFabric 5700 Top-of-Rack switch is available in various combinations of 1Gbps and 10Gbps port configurations with 10Gbps or 40Gbps uplinks, as shown in Figure 1-13. This relatively new addition to the FlexFabric family offers L2 and L3 lite support, along with IRF support for up to nine switches to simplify management operations.


Figure 1-13:

FlexFabric 5700 Datacenter ToR Switch

The 5700 switch series delivers 960Gbps switching capacity and is SDN-ready.

HP HSR6800 Router Series


The HP HSR6800 Router Series, shown in Figure 1-14, provides comprehensive routing, firewall and VPN functions. It uses a 2Tbps backplane to support 420Mpps of routing throughput. This is a high-density WAN router that can support up to 31 10Gbps Ethernet ports and is 40/100Gbps ready.


Figure 1-14:

HP HSR6800 Router Series

Two of these carrier-class devices can be grouped into an IRF team to operate as a single, logical router entity. This eases configuration and change management, and eliminates the need for other redundancy protocols like VRRP.

Virtual Services Router

The Virtual Services Router (VSR) can be seen as a network function virtualization (NFV) technology. It is very easy to deploy the VSR on any branch, data center, or cloud infrastructure (see Figure 1-15 for more information). It is based on Comware7 and can be installed on a hypervisor, such as VMware ESXi or Linux KVM.



Figure 1-15:

Virtual Services Router

The VSR makes it very easy and convenient to support a multi-tenant data center. New router instances can be quickly deployed inside the hosted environment to provide routed functionality for a specific customer solution. VSR comes in multiple versions, with various licensing options to provide more advanced capabilities.

IMC VAN Fabric Manager

Basic data center management of devices is handled by IMC. The VAN Fabric Manager (VFM) is a software module that can be added to IMC. This module adds advanced traffic management capabilities for many data center protocols, such as SPB, TRILL, and IRF. Storage protocols such as DCB and FCoE are also supported (see Figure 1-16).



Figure 1-16:

IMC VAN Fabric Manager

It also manages data center interconnect protocols such as EVI, and provides zoning services for converged storage management.

You can easily view and manage information about VM migrations. VM migration records include the VM name, source and destination server, start and end times for the migration, and the name of the EVI service to which the VM belongs. You can also perform a migration replay, which allows you to play back the migration process, letting you view the source, destination, and route of a migration in a video.

HP FlexFabric Cloud: Virtualized DC Use Case

Figure 1-17 shows an example of an HP FlexFabric deployment. At the access layer, 5900v switches are deployed inside a blade server hypervisor environment, in conjunction with 5900-series switches with VEPA support.



Figure 1-17:

HP FlexFabric Cloud: Virtualized DC Use Case

With a deployment of HP blade systems, the 6125 XLGs can be used for interconnectivity.

In this scenario the access layer is directly connected to the core, which could be comprised of 12900 or 11900-series devices. Connectivity to remote locations can be provided by the HSR 6800 router, and the entire system can be managed from a single pane-of-glass with HP’s IMC. Additional insight and management for data center specific technologies can be provided by the addition of the VFM module for IMC.

Data Center Technologies Overview

The data center may provide support for multiple tenants. Multiple infrastructures may co-exist in an independent way.

The data center should also have support for Ethernet fabric technologies to provide interconnect between all the switches, as well as converged FC/FCoE support. This fabric should integrate with Hypervisor environments.

Also, data center interconnect and network overlay technologies are used to connect several multi-tenant data centers together in a scalable, seamless way.

Overview of DC Technologies

Figure 1-18 provides an overview of data center technologies and generalizes where these technologies are deployed.



Figure 1-18:

Overview of DC Technologies

■ Multi-tenant support is provided by technologies such as MDC, MCE and SPBM. Hypervisor integration is provided by the EVB and VEPA protocols, along with the 5900v switch product.

■ Overlay networking solutions are provided by VXLAN and SDN.

■ Data center interconnect technologies include MPLS L2VPN, VPLS, EVI, and SPBM.

■ OpenFlow technology can be used to understand, define, and control network behavior.

■ Large-scale Layer 2 Ethernet fabrics can be deployed using traditional link aggregation along with TRILL or SPBM.

■ IRF or Enhanced IRF can be used to improve manageability and redundancy in the Ethernet fabric.

■ Storage and Ethernet technologies can be converged with switches that support DCB, FCoE, and native FC.

Multi-tenant Support

Multi-tenancy support involves the ability to support multiple business units, customers, and services over a common infrastructure. This data center infrastructure must provide techniques to isolate multiple customers from each other.


Multi-tenant Isolation

Several isolation techniques are available, in two general categories. Physical isolation is one solution. However, this solution is less scalable due to the cost of purchasing separate hardware for each client, as well as the space, power, and cooling concerns. With logical isolation, isolated services and customers share a common hardware infrastructure. This reduces initial capital expenditures and improves return on investment.

Multi-tenant Isolation with MDC and MCE

One isolation technique is Multi-tenant Device Context (MDC). This technology creates a virtual device inside a physical device. This ensures customer isolation at the hardware layer, since ASICs or line cards are dedicated to each customer.

Since each MDC has its own configuration file, with separate administrative logins, isolation at the management layer is also achieved. There is also isolation of control planes, since each MDC has its own path selection protocol, such as TRILL, SPB, OSPF, or STP. Isolation at the data plane is achieved through separate routing tables and Layer 2 MAC address tables.

Another technology to provide Layer 3 routing isolation is Multi-Customer Edge (MCE). This is also known in the market as Virtual Routing and Forwarding (VRF). With VRF, separate virtual routing instances can be defined in a single physical router.

This technology maintains separate routing functionality and routing tables for each customer. However, the platform’s hardware limitations still apply. For example, ten MCEs might be configured on a device that has a hardware limit of 128,000 IPv4 routes. In this scenario, all ten customer MCE routing tables must share that 128,000-entry maximum.

Unlike MDC, which allows for different management planes per customer, MCE features a single management plane for all customers. In other words, a single administrator configures and manages all customer MCE instances.
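As a minimal sketch of the MCE/VRF concept on a Comware device (the instance name, VLAN interface, route distinguisher, and address are hypothetical, and the syntax should be checked for your platform and release), a per-customer routing instance is created from system view and an interface is bound to it:

  ip vpn-instance CustomerA
   route-distinguisher 65000:1
   quit
  interface Vlan-interface 100
   ip binding vpn-instance CustomerA
   ip address 10.1.1.1 24

Routes learned through that interface populate only the CustomerA routing table, although all instances still draw on the platform’s shared hardware route capacity.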

Multi-tenant Isolation for Layer 2

VLANs are the traditional method used to isolate Layer 2 networks, and this remains a prominent technology in data centers. However, the 4094 VLAN maximum can be a limiting factor for large, multi-tenant deployments. Another difficulty is preventing each client from using the same set of VLANs.

QinQ technology alleviates some of these concerns. Each customer has their own set of 4094 VLANs, using a typical 802.1q tag. An outer 802.1q tag is added, which is unique to each client. The data center uses this unique outer tag to move frames between customer devices. Before the frame is handed off to the client, the outer tag is removed.
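A rough sketch of port-based QinQ on a Comware customer-facing port is shown below (the VLAN and interface numbers are examples, and the commands should be verified for your platform and release); the port's own service VLAN acts as the customer-unique outer tag that is pushed in front of the customer's 802.1q tag:

  vlan 100
   quit
  interface Ten-GigabitEthernet 1/0/1
   port access vlan 100
   qinq enable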

A limitation of this technique involves the MAC address table. All customer VLANs traverse the provider network with a common outer 802.1q tag. Therefore, all client VLANs share the same MAC address table. It is possible for this to increase the odds of MAC address collision – multiple devices that use the same address.

Another option is Shortest Path Bridging using MAC in MAC mode (SPBM). SPBM can also isolate customers, similar to QinQ. Unlike QinQ, SPBM creates a new encapsulation, with the original customer frame as the payload of the new frame. This new outer frame includes a unique customer service identifier, providing a highly scalable solution.

SPBM supports up to 16 million service identifiers. Each of the 16 million customers can have their own set of 4094 VLANs. A common outer VLAN identifier tag can be used for all client VLANs, like with QinQ. Alternatively, different customer VLANs can use different identifiers. Compared to QinQ, SPBM provides increased scalability while limiting the issue of MAC address collision.

Virtual eXtensible LAN (VXLAN) is another technology that provides a virtualized VLAN for Hypervisor environments. A Virtual Machine (VM) can be assigned to a VXLAN, and use it to communicate with other VMs in the same VXLAN.

This technology requires some integration with traditional VLANs via a hardware gateway device. This functionality can be provided by the HP FlexFabric 5930 switch. VXLAN supports up to 16 million VXLAN IDs, so it is quite scalable.

VXLAN provides a single VXLAN ID space. While SPBM could be used to encapsulate 4094 traditional VLANs into a single customer service identifier, with VXLAN, a customer with 100 VLANs would use 100 VXLAN IDs. For this reason, some planning is required to ensure that each client uses a unique range of VXLAN IDs.

Network Overlay Functions

Network overlay functions provide a virtual network for a specific, typically VM-based service.

Software Defined Networking (SDN) can be considered a network overlay function, since it can centralize the control of traffic flows between devices, virtual or otherwise.

VXLAN is an SDN technology that can provide overlay networks for VMs. Each VM can be assigned to a unique VXLAN ID, as opposed to a physical, traditional VLAN ID. HP is developing solutions to integrate SDN and VXLAN solutions. This will enable inter-connectivity between VXLAN-assigned virtual services and physical hosts.

SDN: Powering Your Network Today and Tomorrow

SDN can be used to control the network behavior inside the data center. As shown in Figure 1-19, the SDN architecture consists of the infrastructure, control, and application layers.


Figure 1-19:

SDN: Powering Your Network Today and Tomorrow

The infrastructure layer consists of overlay technologies such as VXLAN or NVGRE, or it can consist of devices that support OpenFlow.

The control plane is to be delivered by the HP Virtual Application Network (VAN) SDN controller. This controller will be able to interact with VXLAN and OpenFlow-enabled devices. It will have the ability to be directly configured, or to be controlled by an external application, such as automation, cloud management, or security tools. The HP SDN app store will provide centralized availability for SDN-capable applications. Load-balancing will also be provided.
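To illustrate how a switch is handed over to such a controller, the following is a minimal Comware 7 OpenFlow sketch entered from system view (the instance number, VLAN, and controller address are examples, and the available options vary by platform and release):

  openflow instance 1
   classification vlan 10
   controller 1 address ip 192.0.2.10
   active instance

Once the instance is activated, flows for the classified VLAN are programmed by the controller rather than by the switch's local control plane.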

Data Center Ethernet Fabric Technologies

This section will focus on Ethernet fabric technologies for the data center. An Ethernet fabric should provide a high speed Layer 2 interconnect with efficient path selection. It should also provide scalability to enable ample bandwidth and link utilization.

Data Center Ethernet Fabric Technologies 2

IRF combines two or more devices into a single, logical device. IRF systems can be deployed at each layer in the data center. For example, they are often deployed at the core layer of a data center, and could also be used to aggregate access layer switches. Servers could also be connected to IRF systems at the access layer.

These layers can be interconnected by traditional multi-chassis link aggregations, which provide an active-active redundancy solution. Each IRF system is managed as an independent entity. If a customer has 200 physical access switches, they could be grouped into 100 IRFs, each IRF system containing two physical switches. If a new VLAN must be defined, it must be defined on each of the 100 IRF systems.
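For reference, a minimal sketch of the first member of such a two-switch IRF system follows (member numbers, the priority, and the physical port used for the IRF link are examples; the second switch is configured and renumbered in the same way, and the IRF link must be physically cabled between the members):

  irf member 1 priority 32
  irf-port 1/1
   port group interface Ten-GigabitEthernet 1/0/49
   quit
  irf-port-configuration active

In practice the selected physical interfaces are normally shut down before being bound to the IRF port and re-enabled afterwards, and the configuration is saved before activation.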

Enhanced IRF (EIRF) is the next generation of IRF technology, allowing for the grouping of 100 or more devices into a single logical device. Enhanced IRF can combine multiple layers into a single logical system. For instance, several aggregation and access layer switches can be combined into a single logical device.

Like traditional IRF, this provides a relatively easy active-active deployment model. However, with Enhanced IRF a large set of physical devices will be perceived as a single, very large switch with many line cards. If 100 physical switches were combined into a single EIRF system, they are all managed as a single entity. If a new VLAN must be defined, it only needs to be defined one time, as opposed to multiple times with traditional IRF. Also, EIRF eliminates the need to configure multi-chassis link aggregations as inter-switch links.

Data Center Ethernet Fabric Technologies 3

IRF and EIRF offer a compelling, HP Comware-based solution for building an Ethernet fabric. TRILL and SPBM offer other, standards-based technologies for data center connectivity. HP Comware IRF or EIRF technology can provide switch and link redundancy while connecting to a standards-based TRILL or SPBM fabric.

TRILL ensures that the shortest path for Layer 2 traffic is selected, while allowing maximum, simultaneous utilization of all available links. For example, two server access switches could connect to multiple aggregation switches and also be directly connected to each other. Traffic flow between servers on the two switches can utilize the direct connection between the two switches, while other traffic uses the access-to-aggregation switch links. This is an advantage over traditional STP-based path selection, which would require one of the links (likely the access-access connection) to be disabled for loop prevention.
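A hedged sketch of enabling TRILL on a Comware 7 switch and one of its fabric-facing ports is shown below (the interface number is an example, and parameters such as the TRILL nickname or link type are omitted; verify the commands in the TRILL configuration guide for your platform); the commands are entered from system view:

  trill
   quit
  interface Ten-GigabitEthernet 1/0/1
   trill enable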

TRILL can also take advantage of this active-active, multi-path connectivity for cases when switches have, say, four uplinks between them. The traffic will be load-balanced over all equal-cost links. This load balancing can be based on source/destination MAC address pairs, or source/destination IP addressing.

A limitation of TRILL is the fact that it supports a single VLAN space only. While TRILL provides for very efficient traffic delivery, it remains limited by the 4094 VLAN maximum.

SPBM is similar to TRILL in its ability to leverage routing-like functionality for efficient Layer 2 path selection. Compared to TRILL, SPBM offers a more deterministic method of providing load-sharing over multiple equal-cost paths. This allows the administrator to engineer specific paths for specific customer traffic.

SPBM also offers the potential for greater scalability than TRILL. This is because SPBM supports multiple VLAN spaces, since each customer’s traffic is uniquely tagged with a service identifier in the SPBM header.

RFC 7172 is a relatively recent standard that will allow the use of a 24-bit identifier, as opposed to the current 12-bit VLAN ID. This will allow greater scalability for multiple tenants. This feature is not currently supported on HP Comware switches.

Server Access Layer – Hypervisor Networking

Hypervisor networking is supported at the access layer of a data center deployment, in the form of VEPA and EVB. These technologies enable integration between virtual and physical environments.

For example, the HP Comware Hypervisor 5900v provides a replacement option for the Hypervisor’s own built-in software vSwitch. The 5900v sends inter-VM traffic to an external, physical switch for processing. This external switch must support VEPA technology to be used for this purpose.

Typically, most inter-VM traffic is handled by a physical switch anyway, since there are typically multiple ESX hosts. Traffic between VMs hosted by different ESX platforms is handled by an external physical switch. Only inter-VM traffic on the same ESX host is handled by that host’s internal vSwitch. The VEPA/EVB model ensures a more consistent traffic flow, since all inter-VM traffic is via an external switch.

This results in greater visibility and insight into inter-VM traffic flow. Traditional network analysis tools and port mirroring tools are thus capable of detailed traffic inspection and analysis.

Server Access Layer – Converged Storage

Storage convergence means that a single infrastructure has support for native Fibre Channel (FC), Fibre Channel over Ethernet (FCoE), and iSCSI.

With Fibre Channel technology, a physical Host Bus Adapter (HBA) is installed in each server to provide access to storage devices. To ensure lossless delivery of storage frames, FC uses a buffer-to-buffer credit system for flow control. A separate Ethernet interface is installed in the server to perform traditional Ethernet data communications.

FCoE is a technology that provides traditional FC and 10Gbps Ethernet support over a single Converged Network Adapter (CNA). The server’s application layer continues to perceive a separate adapter for each of these functions. Therefore, the CNA must accept traditional FC frames, encapsulate them in Ethernet, and send them over the converged network fabric. A suite of Data Center Bridging (DCB) protocols enhances the Ethernet standard. This ensures the lossless frame delivery that is required by FC.

iSCSI encapsulates traditional SCSI protocol communications inside a TCP/IP packet, which is then encapsulated in an Ethernet frame. The iSCSI protocol does not require that Ethernet be enhanced by DCB or any other special protocol suite. Instead, capabilities inherent to the TCP/IP protocol stack will mitigate packet loss issues.

However, enterprise-class iSCSI deployments should have robust QoS capabilities and hardware switches with enhanced buffer capabilities. This will help to ensure that iSCSI frame delivery is reliable, with minimal retransmissions.


Although DCB was originally developed to ensure lossless delivery for FCoE, it can also be used for iSCSI deployments. This minimizes frame drop and retransmission issues.

Server Access Layer – FC/FCoE

The 5900CP provides native FC fabric services. Since it provides both FCoE and native FC connections, it can act as a gateway between native FC and FCoE environments.

In addition to this FC-FCoE gateway service, other deployment scenarios are supported by the HP 5900CP. It can be used to interconnect a collection of traditional FC storage and server devices, or to connect a collection of FCoE-based systems.

Multiple Fibre Channel device roles are supported. The 5900CP can fill the FCF role to support full fabric services. It can also act as an NPV node to support endpoint ID virtualization.
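The following is an indicative sketch only (the VSAN, VFC, and Ethernet interface numbers are examples, and the exact commands, including any prerequisite system working mode, should be checked in the 5900CP FC and FCoE configuration guide); entered from system view, it places the switch in FCF mode and binds a virtual FC interface to a converged Ethernet port carrying FCoE traffic:

  fcoe-mode fcf
  vsan 10
   quit
  interface Vfc 1
   bind interface Ten-GigabitEthernet 1/0/1
   port trunk vsan 10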

Data Center Interconnect Technologies

Data center Interconnect technologies allow customer services to be interconnected across multiple data center sites. Two data center locations could be deployed, or multiple data centers could be spread over multiple locations for additional scalability and redundancy.

These technologies typically require options for path redundancy and scalable Layer 2 connectivity between the data centers. This ensures that all customer requirements can be met, such as the ability to move VMs to different physical hosts via technologies such as VMware’s vMotion.

Data Center Interconnect Technologies 2

Data centers can be connected using some traditional Layer 2 connection. This could be dark fiber connectivity between two sites, or some other connectivity available from a service provider. Once these physical connections are established, traditional VLAN trunk links and link aggregation can be configured to connect core devices at each site.

MPLS L2VPN is typically offered and deployed by a service provider, although some larger enterprises may operate their own internal MPLS infrastructure. Either way, L2VPN tunnels can be established to connect sites over the MPLS fabric.


In this way, MPLS L2VPN provides a kind of “pseudo wire” between sites. It is important to note that this connection lacks the intelligence to perform MAC learning or other Layer 2 services. It is simply a “dumb” connection between sites.

Data Center Interconnect Technologies 3

MPLS Virtual Private LAN Service (VPLS) is another option that is typically deployed by a service provider. Some enterprises may have their own MPLS infrastructure, over which they may wish to deploy a VPLS solution. Unlike MPLS L2VPN, VPLS has the intelligence to perform traditional Layer 2 functions, such as MAC learning for each connected site. Therefore, when a device at one location sends a unicast frame into the fabric, it can be efficiently forwarded to the correct site. This is more efficient than having to flood the frame to all sites.

Ethernet Virtual Interconnect (EVI) is an HP proprietary technology to interconnect data centers with Layer 2 functionality. This technology enables the transport of L2 VPN and VPLS without the need for an underlying MPLS infrastructure. Any typical IP routed connection between the data centers can be used to interconnect up to eight remote sites.

The advantage of EVI is that it is very easy to configure as compared to MPLS. MPLS requires expertise with several technologies, including IP backbone technologies, label switching, and routing. EVI also makes it easy to optimize the Ethernet flooding behavior.

Summary

In this chapter, you learned that HP’s FlexFabric provides a simple, scalable, automated approach to data center networking solutions.

You also learned that HP’s FlexFabric product portfolio includes core switches like the 12900, 12500, 11900, and 7904. It also includes 5900AF, 5930, 5900CP, 5900v, and 6125XLG access switches. For routing, the HSR 6800 and VSR are available. Improved visibility and management functions for TRILL/SPB and FCoE/FC are available with the IMC VAN fabric manager product.

You also learned that:

Technologies that support multi-tenant solutions include MDC, MCE, and SPBM. Hypervisor integration is provided by EVB and VEPA.

Overlay solutions include VXLAN and SDN, while data center interconnect technologies include MPLS L2VPN, VPLS, EVI, and SPBM.


■ Large-scale Layer 2 fabrics can be deployed using TRILL or SPBM, with IRF and EIRF providing improved manageability and redundancy.

■ The HP data center portfolio can create converged network support with DCB, FCoE, and native FC.

Learning Check

Answer each of the questions below.

1. HP’s FlexFabric includes the following components (choose all that apply)?

a. Core switches

b. Aggregation switches

c. MSM 4x0-series access points.

d. Access switches.

e. The 5900CP converged switch.

f. Both physical and virtual services routers

g. HP’s IMC management platform

2. The IMC VAN fabric manager provides which three capabilities (choose three)?

a. Unified SPB, TRILL, and IRF fabric management

b. VPN connectivity and performance management.

c. VXLAN system management

d. Unified DCB, FCoE, and FC SAN management.

e. EVI protocol management for data center interconnects.

f. Switch and router ACL configuration management.

3. Which two statements are true about multi-tenant isolation for Layer 2?

a. VLANs provide a traditional method to isolate Layer 2 networks that is limited to 4094 VLANs.

b. With QinQ technology, up to 256 customers can each have their own set of 4094 isolated VLANs.

c. DCB is an overlay technology that allows a converged infrastructure.

d. Shortest Path Bridging MAC-in-MAC mode can support 16 million isolated customers through the use of an I-SID.

4. Which technology can extend a Layer 2 VLAN across multiple data centers using a Layer 3 technology?

a. DCB.

b. EIRF.

c. SDN.

d. TRILL.

e. VXLAN.

Learning Check Answers

1. a, b, d, e, f, g

2. a, d, e

3. a, d

4. e


2 Multitenant Device Context

EXAM OBJECTIVES

In this chapter, you learn to:

Describe MDC features.

Explain MDC use cases.

Describe MDC architecture and operation.

Describe support for MDC on various hardware platforms.

Understand firmware updates and ISSU with MDC.

Describe supported IRF configurations with MDC.

INTRODUCTION

Multitenant Device Context (MDC) is a technology that can partition a physical device or an IRF fabric into multiple logical switches called "MDCs."

Each MDC uses its own hardware and software resources, runs independently of other MDCs, and provides services for its own customer. Creating, starting, rebooting, or deleting an MDC does not affect any other MDC. From the user's perspective, an MDC is a standalone device.

MDC Overview

Multitenant Device Context (MDC) can partition either a single physical device or an IRF fabric into multiple logical switches called "MDCs."

With MDC, physical networking platforms, such as HP 11900, 12500, and 12900 switches, can be virtualized to support multitenant networks. In other words, MDC provides customers with 1:N device virtualization capability to virtualize one physical switch into multiple logical switches, as shown in Figure 2-1.


Figure 2-1:

Feature overview

Other benefits of MDC include:

■ Complete separation of control planes, data planes and forwarding capabilities.

■ No additional software license required to enable MDC.

■ Reduced power, cooling and space requirements within the data center.

■ Up to 75% reduction of devices and cost when compared to deployments without 1:N device virtualization.

■ Modification of interface allocations without stopping MDCs.

IRF versus MDC

What is the difference between MDC and technologies like IRF?

The main difference is that in the case of IRF (N:1 virtualization), you are combining multiple physical devices into one logical device. With MDC on the other hand (1:N virtualization), you are splitting either a single device or a logical IRF device into separate discrete logical units.


The reason for doing this is to provide network features such as VLANs, routing, IRF and other features to different entities (customers, development networks), but still use the same hardware. Customers can also be given different feature sets inside the same logical "big box" device. Each of the MDCs operates as a totally independent device inside the same physical device (or IRF fabric).

Instead of buying additional core switches for different customers or business units, a single core switch or IRF fabric can be used to provide the same hardware feature set to multiple customers or business units.

MDC Features

Each MDC uses its own hardware and software resources, runs independently of other MDCs, and provides services for its own environment. Creating, starting, rebooting, or deleting an MDC does not affect the configuration or service of any other MDC. From the user's perspective, an MDC is a standalone device.

Each MDC is isolated from the other MDCs on the same physical device and cannot communicate with them via the switch fabric. To allow two MDCs on the same physical device to communicate with each other, you must physically connect a port allocated to one MDC to a port allocated to the other MDC using an external cable. It is not possible to make a connection between MDCs over the backplane of the switch.

Each MDC has its own management, control, and data planes, with the same table capacities as the physical device. For example, if the device has a 64-KB space for ARP entries, each MDC created on the device gets a separate 64-KB space for its own ARP entries.

Management of MDCs on the same physical device is done via the default MDC (admin MDC), or via management protocols such as SSH or telnet.
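For example, from the default MDC an administrator can list the MDCs configured on the device and open a console session into one of them (the MDC name is an example):

  display mdc
  switchto mdc CustomerA

Returning from the user MDC to the default MDC is done with the corresponding switchback command, after which the admin context is active again.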

MDC Applications

MDC can be used for applications such as the following:

■ Device renting

■ Service hosting

■ Staging of a new network on production equipment

■ Testing features such as SPB and routing that cannot be configured on a single device


■ Student labs

Instead of purchasing new devices, you can configure more MDCs on existing network devices to expand the network.

As an initial example, in Figure 2-2 a service provider provides access services to three companies, but only deploys a single physical device (or IRF stack). The provider configures an MDC for each company on the same hardware device to logically create three separate devices.


Figure 2-2:

MDC application example

The administrators of each of the three companies can log into their allocated MDC to maintain their own network without affecting any other MDC. The result is the same as deploying a separate gateway for each company.
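A hedged sketch of how the provider might create one of these MDCs from the default MDC follows (the MDC name, slot number, and interface range are examples, and interfaces generally have to be allocated in the groups required by the hardware); the commands are entered from system view:

  mdc CompanyA
   location slot 2
   allocate interface Ten-GigabitEthernet 2/0/1 to Ten-GigabitEthernet 2/0/16
   mdc start

Once started, the CompanyA administrator can be given login access to that MDC only, and sees it as an independent switch.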

Additional use cases will be discussed later in this chapter.

MDC Benefits Overview

MDC Benefits

Higher utilization of existing network resources and fewer hardware upgrade costs: Instead of purchasing new devices, you can configure more MDCs on existing network devices to expand the network. For example, when there are more user groups, you can configure more MDCs and assign them to the user groups. When there are more users in a group, you can assign more interfaces and other resources to the group.


Lower management and maintenance cost: Management and maintenance of multiple MDCs occur on a single physical device.

Independence and high security: Each MDC operates like a standalone physical device. It is isolated from other MDCs on the same physical device and cannot directly communicate with them. To allow two MDCs on the same physical device to communicate, you must physically connect a cable from a port allocated to one MDC to another port allocated to the other MDC.

MDC Features

An MDC can be considered a standalone device. Creating, running, rebooting, or deleting an MDC does not affect the configuration or service of any other MDC. This is because of Comware v7's container-based, OS-level virtualization technology, as shown in Figure 2-3.


Figure 2-3:

Feature overview

Each MDC is a new logical device defined on the existing physical device. The physical device could either be a single switch or an IRF fabric.

A traditional switching device has its own control, management and data planes. When you define a new MDC, the same features and restrictions of the physical device will apply to the new MDC, and the new MDC will have separate control and management planes. Each MDC has a separate telnet server process, separate SNMP process, separate LACP process, separate OSPF process, etc.

In addition, each MDC will also have an isolated data plane. This means that the VLANs defined in one MDC are totally independent of the VLANs defined in a different MDC. As an example, MDC1 can have VLANs 10, 20 and 30 configured. MDC2 can also have VLANs 10, 20 and 30 configured, but there is no communication between VLAN 10 on MDC1 and VLAN 10 on MDC2.

Each MDC also has its own hardware limits. This is because resources are assigned to MDCs down to the ASIC level.

A switch configured without multiple MDCs has a limit of 4094 VLANs in the overall chassis. However, once a new MDC is created, ASICs and line cards within the physical device are assigned to the new MDC and can be programmed by the new management and control plane. Each MDC is a new logical device inside the physical device and has a separate limit of 4094 VLANs. Other features such as the number of VRFs supported are also set per MDC, and what is configured in one MDC does not affect the limits of other MDCs.

In other words, if you have four MDCs on a chassis, the total chassis will support four times the hardware and software limits of the same chassis with a single MDC or a traditional chassis. As an example, rather than supporting only 4094 VLANs, the chassis supports 4 x 4094 VLANs, for a total of 16,376 VLANs (4094 per MDC across four MDCs).

MDCs share and compete for CPU resources. If an MDC needs a lot of CPU resources while the other MDCs are relatively idle, the MDC can access more CPU resources. If all MDCs need a lot of CPU resources, the physical device assigns CPU resources to MDCs according to their CPU weights.

Use the limit-resource cpu weight command to assign CPU weights to user MDCs.

Supported Platforms

Supported Products

MDC is supported on chassis-based platforms running the HP Comware 7 operating system. MDC is not supported by the HP Comware 5 operating system. As an example, the 12500 series switches require main processing units (MPUs) running HP Comware 7 and not HP Comware 5. This also applies to the HP 10500 series switches. See Figure 2-4 for supported platforms.

Figure 2-4: Supported platforms

MDC is only available on chassis-based switches and not fixed-port switches. This is due to the processing and memory requirements of running separate virtual switches within the same physical switch. If you configured three MDCs, that would require three LACP processes, three BGP processes, three OSPF processes, three Telnet processes, and so on. Fixed-port switches do not have enough memory to run multiple MDCs and create separate instances of all processes.

In contrast, chassis-based switches have the HP Comware operating system installed on the Main Processing Unit (MPU) and may also have the HP Comware operating system running on the line cards or Line Processing Units (LPUs) with their own memory. Chassis-based switches have more memory and can therefore run multiple MDCs.

All MDC-capable devices have a "default MDC" or "admin MDC." The default MDC can access and manage all hardware resources. User MDCs can be created, managed, or deleted via the default MDC. The default MDC is system predefined and cannot be created or deleted. The default MDC always uses the name "Admin" and the ID 1.

The number of MDCs available depends on the Main Processing Unit (MPU) capabilities and switch generation. The supported number of MDCs is in the range of four to nine:

■ The 11900 and 12500 switch series support four MDCs.

■ The HP FlexFabric 12900 switch series supports nine MDCs. This is because the switch has enhanced memory capabilities.

Note When you configure MDCs, follow these restrictions and guidelines:

Only MPUs with 4-GB memory or 8-GB memory space support configuring MDCs. The MDC feature and the enhanced IRF feature (4-chassis IRF) are mutually exclusive. When using MDC, the IRF Fabric is currently limited to 2 nodes.

The number of MDCs supported per LPU differs depending on LPU memory. Refer to Table 2-1 through Table 2-5 below for summary and SKUs with LPU memory.

Note The product details shown below are for reference only.

Table 2-1: MDC support per device and LPU


Table 2-2: LPUs with 512MB Memory

SKU     Description
JC068A  HP 12500 8-port 10-GbE XFP LEC Module
JC065A  HP 12500 48-port Gig-T LEC Module
JC476A  HP 12500 32-port 10-GbE SFP+ REC Module
JC069A  HP 12500 48-port GbE SFP LEC Module
JC075A  HP 12500 48-port GbE SFP LEB Module
JC073A  HP 12500 8-port 10-GbE XFP LEB Module
JC074A  HP 12500 48-port Gig-T LEB Module
JC064A  HP 12500 32-port 10-GbE SFP+ REB Module
JC070A  HP 12500 4-port 10-GbE XFP LEC Module

Table 2-3: LPUs with 1G Memory

SKU     Description
JC068B  HP 12500 8-port 10GbE XFP LEC Module
JC069B  HP 12500 48-port GbE SFP LEC Module
JC073B  HP 12500 8-port 10GbE XFP LEB Module
JC074B  HP 12500 48-port Gig-T LEB Module
JC075B  HP 12500 48-port GbE SFP LEB Module
JC064B  HP 12500 32-port 10GbE SFP+ REB Module
JC065B  HP 12500 48-port Gig-T LEC Module
JC476B  HP 12500 32-port 10-GbE SFP+ REC Module
JC659A  HP 12500 8-port 10GbE SFP+ LEF Module
JC660A  HP 12500 48-port GbE SFP LEF Module
JC780A  HP 12500 8-port 10GbE SFP+ LEB Module
JC781A  HP 12500 8-port 10GbE SFP+ LEC Module
JC782A  HP 12500 16-port 10-GbE SFP+ LEB Module
JC809A  HP 12500 48-port Gig-T LEC TAA Module
JC810A  HP 12500 8-port 10-GbE XFP LEC TAA Mod
JC811A  HP 12500 48-port GbE SFP LEC TAA Module
JC812A  HP 12500 32p 10-GbE SFP+ REC TAA Module
JC813A  HP 12500 8-port 10-GbE SFP+ LEC TAA Mod
JC814A  HP 12500 16p 10-GbE SFP+ LEC TAA Module
JC818A  HP 12500 48-port GbE SFP LEF TAA Module

Table 2-4: LPUs with 4G Memory

SKU     Description
JG792A  HP FF 12500 40p 1/10GbE SFP+ FD Mod
JG794A  HP FF 12500 40p 1/10GbE SFP+ FG Mod
JG796A  HP FF 12500 48p 1/10GbE SFP+ FD Mod
JG790A  HP FF 12500 16p 40GbE QSFP+ FD Mod
JG786A  HP FF 12500 4p 100GbE CFP FD Mod
JG788A  HP FF 12500 4p 100GbE CFP FG Mod

Refer to device release notes to determine support.

******ebook converter DEMO Watermarks*******

Table 2-5: Example of HP 12500-CMW710-R7328P01 support of Ethernet interface cards for ISSU and MDC



Use Case 1: Datacenter Change Management

Overview

A number of use cases are discussed in this chapter. In this first use case, MDC is used to better handle change management procedures in a data center.

Separate MDCs are created for a production network, a quality assurance (QA) network, and a development network. This is in line with procedures followed by enterprise resource planning (ERP) applications, which tend to have three separate installations.

Development Network


A separate development MDC allows for testing to be performed on a separate logical network, but still using the same physical switches as are used in the production network.

As an example, a customer may want to test a new load balancer for two to three weeks. The test can be performed on a temporary basis using the development network rather than the production network. However, as mentioned both networks use the same physical switches.

Rather than introducing the additional risk of a new untested device in the production network, comprehensive tests can be performed using the development network. Features of the new device can be tested, issues resolved and updated network configuration verified without affecting the current running network. The additional benefit of MDC is that the test will be relevant and consistent with the production network as the tests are being performed on the same hardware as the production network.

Quality Assurance (QA) Network

A Quality Assurance network is an identical logical copy of the production network. When a major change is required on the production network, the change can be validated on the QA network. Changes such as the addition of new VLANs, new routing protocols or new access control lists (ACLs) can be tested and validated in advance on the QA network before deploying the change on the production network.

The advantage of using MDC in this scenario is that all the MDCs are running on the same physical hardware. Thus the tests and configuration are validated as if they were running on the production network. This is a much better approach than using smaller test switches instead of actual production core switches to try to validate changes. Using different switches does not make the QA tests 100% valid as there could be differences in firmware or hardware capabilities between the QA network and the production network when tested on different hardware.


Note The QA process will validate feature configurations, but cannot be used to test or validate firmware updates. All MDCs in a physical device or IRF fabric run the same firmware version, and all MDCs are upgraded together during a firmware update.

Use Case 2: Customer Isolation


This second use case uses MDC for customer isolation.

In a data center, multiple customers could use the same core network infrastructure but be isolated using traditional network isolation technologies such as VLANs and VRFs.

A customer may however want further isolation in addition to traditional network isolation technologies. They may want isolation of their configurations, memory and CPU resources from other customers. MDC provides this functionality whereas traditional technologies such as VLANs don't provide this level of isolation.

This use case is limited by the number of supported MDCs on the physical switches. As an example, when using a 12500 series switch with 4GB MPU, this use case will only allow for isolation of two to three customers, as shown in Figure 2-5. This is because one MDC is used for the Admin MDC and the switch supports a maximum of four MDCs.

Figure 2-5: Use Case 2: Customer isolation

An additional use case for MDC isolation is where different levels of security are required within a single customer network. A customer may have a lower security level network and a higher security level network and may want to keep these separate from each other. These networks would be separated entirely by using multiple MDCs. This use case is however also restricted by the number of MDCs a switch can support.

MDCs are different from VRFs, as VRFs only separate the data plane and not the management plane of a device. In the example use case of different security level networks, multiple network administrators are involved. A lower level security zone administrator cannot configure or view the configuration of a higher level security zone. When configuring VRFs, however, the entire configuration would be visible to all network administrators.

Use Case 3: Infrastructure and Customer Isolation

The third MDC use case splits a switch logically into two separate devices. One MDC is used for core infrastructure and another MDC is used for customers, as shown in Figure 2-6. The benefit here is that the core data center infrastructure network is isolated from all customer networks. There are separate VLANs (4094), separate QinQ tags, and separate VRFs per MDC.

Figure 2-6: Use Case 3: Infrastructure and Customer Isolation

The data center core network is logically running a totally separate management network independent of all customer data networks. Both management and customer networks still use the same physical equipment.

Use Case 4: Hardware Limitation Workaround

In this fourth use case, MDC provides a workaround for hardware limitations on switches. As an example, a data center may use Shortest Path Bridging MAC mode (SPBM) or Transparent Interconnection of Lots of Links (TRILL). The current switch ASICs cannot provide the core SPBM service and layer 3 routing services at the same time.


SPB is essentially a replacement for Spanning Tree. One caveat of SPB is that core devices simply switch encapsulated packets and do not read the packet contents. This is similar to the behavior of P devices in an MPLS environment. A core SPBM device would therefore not be able to route packets between VLANs.

An SPB edge device is typically required for the routing. SPB-encapsulated packets would be decapsulated so that the device can view the IP frames and perform inter-VLAN routing.

If IP routing is required on the same physical core as the device configured for SPB, two MDCs would be configured, as shown in Figure 2-7. One MDC would be configured with SPB and be part of the SPB network. Another MDC would then be configured that is not running SPB to provide layer 3 functionality. A physical cable would be used to connect the two MDCs on the same chassis switch. The SPB MDC is thus connected to the layer 3 routing core MDC via a physical back-to-back cable.

Figure 2-7: Use Case 4: Hardware limitation workaround

This scenario would apply for both SPB and TRILL.

MDC Numbering and Naming

MDC 1 is created by default with HP Comware 7 and is named "Admin" in the default configuration. Non-default MDCs are allocated IDs 2 and above. Names are assigned to these MDCs as desired, such as "DevTest", "DMZ" and "Internal", as shown in Figure 2-8.


Figure 2-8: MDC numbering and naming

Architecture

It is important to realize that even though MDCs look like two, three or even four logical devices running on a physical device, there is still only one MPU with only one CPU.

Only one kernel is booted. On top of this kernel, multiple MDC contexts are started, and each MDC context has its own processes and allocated resources. But there is still only one kernel. This also explains why multiple MDCs need to run the same firmware version.

A device supporting MDCs is an MDC itself, called the "Admin" MDC. The default MDC always uses the name Admin and the ID 1. You cannot delete it or change its name or ID. By default, only one kernel is started and it starts one MDC and one MDC only (the Admin MDC). The Admin MDC is used to manage all other MDCs.

The moment a new MDC is defined, all the control plane protocols of the new MDC will run in that MDC process group. This process group is isolated from other process groups and they cannot interact with each other.

Processes that form part of the process group can be allocated a CPU weight to provide more processing to specific MDCs. CPU, disk usage and memory usage of process groups can also be restricted for any new MDC. Resource allocation will be covered later in this chapter.

This restriction does not apply to the Admin MDC. The Admin MDC will always have 100% access to the system. If necessary, it can take all CPU resources, or use all memory, or use the entire flash system. The Admin MDC can also access the files of the other MDCs, since these files are stored in a subfolder per MDC on the main flash.

It is important to remember that there is still a physical MPU dependency. If the physical MPU goes down, all of the MDCs running on top of the physical MPU will also go down. That is why it is worth considering the use of an IRF fabric for high availability.

As an example, two core physical chassis switches are configured as an IRF fabric. In addition, three MDCs are configured.

If the first physical switch is powered off, all MDCs (three in this example) will experience an IRF master failure and will activate the subordinate (the second chassis) as the new master.

Architecture, Control Plane

When a new MDC is defined, the MDC can be started. A new control plane is configured for the MDC. However, the MDC only has access to the Main Processing Unit (MPU). No line cards or interfaces are available to the MDC until they have been assigned by an administrator to the MDC.

This is similar to booting a chassis with only the MPU and no Line Processing Units (LPU) / line cards inserted in the chassis.

Using the display interface brief command, for example, would show no interfaces.

Architecture, ASICs

How do you assign line card interfaces to an MDC?

Because of the hardware restrictions on devices, the interfaces on some interface cards are grouped. Interfaces therefore need to be allocated to the MDC per ASIC (port group).


It is important to understand how ASICs are used within a chassis based switch.

In a chassis, each of the line cards has one or more local ASICs. This affects the data plane of the switch, as the data plane packet processing is done by the ASIC. When packets are received by the switch, functions such as VLAN lookups, MAC address lookups and so on are performed by ASICs. These ASICs also hold the VLAN table or the IP routing table.

One ASIC can be used by multiple physical interfaces. As an example, one ASIC on the line card can be used by 24 Gigabit Ethernet ports. Depending on the line card models there may be up to 4 ASICs on a physical line card. Another example is a 48 Gigabit Ethernet port line card which could have only two ASICs.

Architecture, ASIC Control

Why is this important to understand? Because each of these ASICs has its own hardware resources and limits. For each ASIC, as an example, there is a limit of 4094 VLANs.

The moment you define a new VLAN at the global chassis level, that VLAN will be programmed by the control plane into each of the ASICs on the chassis. If there are six different ASICs on a line card, each ASIC will be programmed with all globally configured VLANs. In a normal chassis all the ASICs are used by the MPU, so they are programmed by the single control plane.

Each ASIC can only have one control plane or ASIC programming process. The ASIC can have only one master and cannot be configured by other control planes. When creating a new MDC, a new control plane is created. Two control planes cannot modify the same ASIC.

By default, all ASICs and line cards are controlled by the Admin MDC. When creating a new MDC, the control of an ASIC can be changed from the default Admin MDC to that new MDC. This results in all physical interfaces that are bound to the ASIC also being moved to the new MDC. Individual interfaces cannot be assigned to an MDC. They are assigned indirectly to the MDC when the ASIC they use is assigned to the MDC.

All interfaces which are managed by one ASIC must be assigned to the same MDC. For example, 10500/11900/12900 series switches only support one MDC per LPU. In the configuration, this is enforced by the CLI through port groups. As shown in Figure 2-9, all interfaces which are bound to the same ASIC must be assigned as a port group to an MDC. An example of 12500/12500E LPU MDC port group implementation is given in Table 2-6.

Figure 2-9: Architecture, ASIC control

Table 2-6: Example 12500/12500E LPU MDC Port Group Implementation

HP Comware 7 will indicate which ports belong to a port group. The following sample output from an 11900 switch shows the port group reported during MDC interface allocation:

[DC1-SPINE-1-mdc-2-mdc2]allocate interface FortyGigE 1/1/0/1
Configuration of the interfaces will be lost. Continue? [Y/N]:y
Group error: all interfaces of one group must be allocated to the same mdc.
FortyGigE1/1/0/1
Port list of group 5:
FortyGigE1/1/0/1  FortyGigE1/1/0/2  FortyGigE1/1/0/3  FortyGigE1/1/0/4
FortyGigE1/1/0/5  FortyGigE1/1/0/6  FortyGigE1/1/0/7  FortyGigE1/1/0/8

Architecture, Hardware Limits

In addition to a new control plane being created, hardware limits change with the creation of a new MDC.

As an example, if 1000 VLANs were created using the Admin MDC, these VLANs would be programmed on each ASIC that is associated with the Admin MDC. However, ASICs associated with another MDC, such as the Development MDC, will not have the 1000 VLANs programmed. They only have the VLANs configured by an administrator of the Development MDC. The control plane of the Admin MDC does not control and can therefore not program the ASICs associated with the Development MDC.

If VLAN 10 was configured on the Admin MDC, that VLAN is not programmed onto the ASICs of the Development MDC. VLAN 10 would only be programmed on the ASICs if VLAN 10 was configured on the Development MDC. However, VLAN 10 on the Admin MDC is different and totally independent from VLAN 10 on the Development MDC. MAC addresses learned in the Development MDC are different from the MAC addresses learned in the Admin MDC.

There is no control plane synchronization between the ASICs of different MDCs. By default there is only one MDC and all ASICs have the same VLAN information. However, as soon as multiple MDCs are created, each ASIC in a different MDC is in effect part of a different switch, controlled and programmed separately. This principle applies to all the resources and features such as access lists, VRFs, VPN instances, routing table sizes etc.

This also means that if any MDC is running out of hardware resources at the ASIC level, the resource shortage will not impact any of the other MDCs.

This is ideal for heavy load environments. Customers could stress test a network with many VRFs, access lists or quality of service (QoS) rules without affecting other MDCs. A development MDC could run out of resources without affecting the production MDC for example.

However, while there is isolation of the data plane by isolating the ASICs, this is not the case for a number of other components. Switch hardware resources such as CPU, physical memory and the flash file systems are shared between MDCs.

Architecture, File System


Each MDC has its own configuration files on a dedicated part of the disk. An MDC administrator can therefore only modify or restart their own MDC.

Access to the switch CPU and physical memory by MDCs can also be restricted. There is also good isolation and separation of MDC access to these resources.

For the file system however, there is only one file system available on the flash card. The Admin MDC (which is the original MDC) has root access to the file system. This MDC has total control of the flash and has the privileges to perform operations such as formatting the file system. Any file system operations such as formatting the flash or using fixdisk are only available from the Admin MDC.

Configurations saved from the Admin MDC are typically saved to the root of the file system. Other MDCs only have access to a subset of the file structure. This is based on the MDC identifier. When a new MDC is defined, a folder is created on flash with the MDC identifier. MDC 2 for example, has a folder "2" created for it on flash. All files saved by MDC 2 are stored in this subfolder. Additionally, any file operations such as listing directories and files on the flash using DIR will only show files within this subfolder. From within the MDC, it appears that root access is provided, but in effect, only a subfolder is made available to the MDC.

The Admin MDC can view all the configuration files of other MDCs as they are subfolders in the root of the file system. This is something to consider in specific use cases.

Within the other MDCs, only the local MDC files are visible. MDC 2 would not be able to view the files of Admin MDC or other MDCs (such as MDC 3).

The Admin MDC can also be used to monitor and restrict the file space made available to other MDCs. The Admin MDC has full access and unlimited control over the file system, but other MDCs can be restricted from the Admin MDC if required.

Architecture, Console Ports

Console Port

Other components that are shared between the MDCs are the console port and the Management-Ethernet ports. The console port and AUX port of the physical chassis always belong to the Admin MDC (default MDC). Other MDCs do not have access to the physical console or AUX ports.


To access the console of the other MDCs, first access the admin MDC console and then use the switchto mdc command to switch to the console of a specific MDC. This is similar to the Open Application Platform (OAP) connect functionality used to connect to the console of subslots on other devices like the unified wireless controllers.

Management-Ethernet ports

The management interfaces of all MDCs share the same physical out-of-band (OOB) management Ethernet port. Unlike the console port, switching to this interface using the switchto command is not possible.

The management Ethernet interface is shared between all MDCs. When a new MDC is created, the system automatically shows the Management Ethernet interface of the MPU inside the MDC.

You must assign different IP addresses to the Management-Ethernet interfaces so MDC administrators can access and manage their respective MDCs. The IP addresses for the management Ethernet interfaces do not need to belong to the same network segment.

The interface can be configured from within each MDC as the interface is shared between all the MDCs. This means that the physical interface will accept configurations from all the MDCs. Network administrators or operators of the MDCs will need to agree on the configuration of the Management-Ethernet port.

Design Considerations

ASIC Restrictions

When designing an MDC solution, remember that ASIC binding determines the interface grouping that will need to be allocated to an MDC. Interfaces have to be assigned per ASIC.

Some of the line cards only have a single ASIC. This means that all the interfaces on the line card will need to be assigned to or removed from an MDC at the same time.

Some line cards may have 2 or more ASICs. This allows for a smaller number of interfaces to be assigned to an MDC at the same time.

The second consideration is that the number of MDCs will depend on the MPU generation and memory size.


The interfaces in a group must be assigned to or removed from the same MDC at the same time. You can see how the interfaces are grouped by viewing the output of the allocate interface or undo allocate interface command:

■ If the interfaces you specified for the command belong to the same group or groups and you have specified all interfaces in the group or groups for the command, the command outputs no error information.

■ Otherwise, the command displays the interfaces that failed to be assigned and the interfaces in the same group or groups.

Assigning or reclaiming a physical interface restores the settings of the interface to the defaults. For example, if the MDC administrator configures the interface, and later on the interfaces are assigned to a different MDC, the interface configuration settings are lost.

To assign all physical interfaces on an LPU to a non-default MDC, you must first reclaim the LPU from the default MDC by using the undo location and undo allocate interface commands. If you do not do so, some resources might still be occupied by the default MDC.
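As a hedged sketch (the slot number and interface range reuse the Dev MDC example from later in this chapter, and it is assumed these undo commands are entered under the default MDC's own MDC view), reclaiming an LPU from the default MDC might look like this:

[Switch] mdc Admin
[Switch-mdc-1-Admin] undo allocate interface GigabitEthernet 2/0/1 to GigabitEthernet 2/0/48
[Switch-mdc-1-Admin] undo location slot 2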

Platforms

The number of MDCs supported by a platform also needs to be considered. This depends on the MPU platform as well as the MPU generation.

You can create MDCs only on MPUs with a memory space that is equal to or greater than 4 GB. The maximum number of non-default MDCs depends on the MPU model.

Refer to earlier in this chapter for more details.

Basic Configuration Steps

Overview

The configuration steps for creating and enabling an MDC will be discussed. Basic MDC configuration is discussed first, and then advanced configuration options such as setting resource limits will be covered.

Step 1: Define the new MDC with the new ID and a new name.

Step 2: Authorize the MDC to use specific line cards. ASICs are not assigned at this point. Authorization is given so the next step can be used to assign interfaces.


Step 3: Allocate interfaces to the MDC. Remember to allocate per ASIC group.

Step 4: Start the MDC. This starts the new MDC control plane.

Step 5: Access the MDC console by using the switchto command.

Configuration Step 1: Define a New MDC

Step 1: Define the new MDC with the new ID and a new name.

This command needs to be entered from within the default Admin MDC. You cannot type this command from any non-default MDCs.

From the default MDC enter system view. Next, define a new MDC by specifying a name of your choice and ID of the MDC. This ID is used for the subfolder on the flash file system. See Figure 2-10 for an example.

Figure 2-10: Configuration step 1: Define a new MDC
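A minimal sketch of this step on the CLI (the switch name, prompts, and the MDC name Dev with ID 2 are the illustrative values used throughout this chapter):

<Switch> system-view
[Switch] mdc Dev id 2
[Switch-mdc-2-Dev]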

Once the MDC is configured, a new process group is defined. The process group is not started at this point as the MDC needs to be manually started in step 4. To create an MDC, see Table 2-7.

Table 2-7: Creating an MDC

Step 1: Enter system view.
Command: system-view

Step 2: Create an MDC and enter MDC view.
Command: mdc mdc-name [ id mdc-id ]
Remarks: By default, there is a default MDC with the name Admin and the ID 1. The default MDC is system predefined. You do not need to create it, and you cannot delete it. The MDC starts to work after you execute the mdc start command. This command is mutually exclusive with the irf mode enhanced command.

Configuration Step 2: Authorize MDC for a Line Card

When you create an MDC, the system automatically assigns CPU, storage space, and memory space resources to the MDC to ensure its operation. You can adjust the resource allocations as required (this is discussed in more detail later in this chapter).

An MDC needs interfaces to forward packets. However, the system does not automatically assign interfaces to MDCs and you must assign them manually.

By default, a non-default MDC can access only the resources on the MPUs. All LPUs of the device belong to the default MDC and a non-default MDC cannot access any LPUs or resources on the LPUs. To assign physical interfaces to an MDC, you must first authorize the MDC to use the interface cards to which the physical interfaces belong.

Step 2 is to authorize the MDC to access interfaces of a specific line card. This command is entered from the non-default MDC context. In Figure 2-11, MDC 2 with the name Dev is authorized to allocate interfaces on the line card in slot 2.

Figure 2-11: Configuration step 2: Authorize MDC for a line card
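A hedged sketch of the Figure 2-11 example (MDC Dev, line card in slot 2; in IRF mode the chassis number is also required, for example location chassis 1 slot 2):

[Switch] mdc Dev
[Switch-mdc-2-Dev] location slot 2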

This command does not assign any of the interfaces to the MDC at this point. It only authorizes the assignment of the interfaces on that line card. Interfaces will be assigned to the MDC in step 3, as outlined in Table 2-8.

Multiple MDCs can be authorized to use the same interface card.

Table 2-8: To authorize an MDC to use an interface card

Step 1: Enter system view.
Command: system-view

Step 2: Enter MDC view.
Command: mdc mdc-name [ id mdc-id ]

Step 3: Authorize the MDC to use an interface card.
Command (standalone mode): location slot slot-number
Command (IRF mode): location chassis chassis-number slot slot-number
Remarks: By default, all interface cards of the device belong to the default MDC, and a non-default MDC cannot use any interface card. You can authorize multiple MDCs to use the same interface card.

Configuration Step 3: Allocate Interfaces per ASIC

By default, all physical interfaces belong to the default MDC, and a non-default MDC has no physical interfaces to use for packet forwarding. To enable a non-default MDC to forward packets, you must assign it interfaces.

The console port and AUX port of the device always belong to the default MDC and cannot be assigned to a non-default MDC.


Important When you assign physical interfaces to MDCs on an IRF member device, make sure the default MDC always has at least one physical IRF port in the up state. Assigning the default MDC's last physical IRF port in the up state to a non-default MDC splits the IRF fabric. This restriction does not apply to 12900 series switches.

Only a physical interface that belongs to the default MDC can be assigned to a non-default MDC. The default MDC can use only the physical interfaces that are not assigned to a non-default MDC.

One physical interface can belong to only one MDC. To assign a physical interface that belongs to a non-default MDC to another non-default MDC, you must first remove the existing assignment by using the undo allocate interface command.

Assigning a physical interface to or reclaiming a physical interface from an MDC restores the settings of the interface to the defaults.

Remember that because of hardware restrictions, the interfaces on some interface cards are grouped. The interfaces that form part of the ASIC group may vary depending on the line card, and the interfaces in a group must be assigned to the same MDC at the same time.

When interfaces are allocated to the new MDC, they are removed from the default MDC and moved to the specified non-default MDC. All current interface configuration is reset on the interfaces when moved to the new MDC. These interfaces appear as new interfaces in the MDC. They will thus be assigned by default to VLAN 1. In Figure 2-12, interfaces Gigabit Ethernet 2/0/1 to 2/0/48 have been allocated to MDC 2, named Dev. To configure parameters for a physical interface assigned to an MDC, you must log in to the MDC.

Figure 2-12: Configuration step 3: Allocate interfaces per ASIC
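A hedged sketch of the Figure 2-12 example, allocating GigabitEthernet 2/0/1 through 2/0/48 to the Dev MDC (the confirmation prompt is modeled on the sample output shown earlier in this chapter):

[Switch-mdc-2-Dev] allocate interface GigabitEthernet 2/0/1 to GigabitEthernet 2/0/48
Configuration of the interfaces will be lost. Continue? [Y/N]:y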

In IRF mode on 12500 series switches, you must assign non-default MDCs physical interfaces for establishing IRF connections. A non-default MDC needs to use the physical IRF ports to forward packets between member devices. This is discussed in more detail later in this chapter.

After you change the configuration of a physical IRF port, you must use the save command to save the running configuration. Otherwise, after a reboot, the master and subordinate devices in the IRF fabric have different physical IRF port configurations, and you must use the undo allocate interface command and the undo port group interface command to restore the default and reconfigure the physical IRF port. Table 2-9 outlines the configuration procedure.

Table 2-9: Configuration Procedure

Step 1: Enter system view.
Command: system-view

Step 2: Enter MDC view.
Command: mdc mdc-name [ id mdc-id ]

Step 3: Assign physical interfaces to the MDC.
Command (approach 1, individual interfaces): allocate interface { interface-type interface-number }&<1-24>
Command (approach 2, a range of interfaces): allocate interface interface-type interface-number1 to interface-type interface-number2
Remarks: Use either or both approaches. By default, all physical interfaces belong to the default MDC, and a non-default MDC has no physical interfaces to use. You can assign multiple physical interfaces to the same MDC.

 

Configuration Step 4: Start MDC

Once interfaces are assigned to the MDC, the MDC can be started. The start command starts the control plane and management plane of the MDC, as shown in Figure 2-13. The data plane will be active for any interfaces which have been allocated to this MDC at the moment the MDC is started.

Figure 2-13: Configuration step 4: Start MDC
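A minimal sketch of starting the Dev MDC (prompts are illustrative):

[Switch] mdc Dev
[Switch-mdc-2-Dev] mdc start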

At this point you may notice that the total memory utilization of the switch will increase. This is because multiple additional processes for the MDC are being started. To start an MDC, see Table 2-10.


Important If you access the BootWare menus and select the Skip Current System Configuration option while the device starts up, all MDCs will start up without loading any configuration file.

Table 2-10: Starting an MDC

Step 1: Enter system view.
Command: system-view

Step 2: Enter MDC view.
Command: mdc mdc-name [ id mdc-id ]

Step 3: Start the MDC.
Command: mdc start

Configuration Step 5: Access the MDC

A non-default MDC operates as if it were a standalone device. From the system view of the default MDC, you can log in to a non-default MDC and enter MDC system view.

In Figure 2-14, the console is switched to the Dev MDC from the Admin MDC. The prompt will display as if you are accessing a new console session. Within the Dev MDC, you will need to enter the system-view again to configure the switch. In this example the host name is changed to Dev for the Dev MDC.

Figure 2-14: Configuration step 5: Access the MDC
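A hedged sketch of the Figure 2-14 flow (all prompts are illustrative and may differ on a real device):

[Switch] switchto mdc Dev
<Switch> system-view
[Switch] sysname Dev
[Dev] quit
<Dev> switchback
[Switch]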

In MDC system view, you can assign an IP address to the Management-Ethernet interface, or create a VLAN interface on the MDC and assign an IP address to the interface. This will allow administrators of the MDC to log in to the MDC by using Telnet or SSH.

To return from a user MDC to the default MDC, use the switchback or quit command. In this example, the switchback command is used to return to the Admin MDC, and the output shows the switch name as switch. Table 2-11 outlines how to log in to a non-default MDC from the system view of the default MDC.

Table 2-11: To log in to a non-default MDC from the system view of the default MDC

Step 1: Enter system view.
Command: system-view

Step 2: Log in to an MDC.
Command: switchto mdc mdc-name
Remarks: You can use this command to log in only to an MDC that is in the active state.

MDC Advanced Configuration Topics

Once basic configuration has been completed, multiple advanced options can be configured.

Options such as restricting MDC resource access to CPU, memory and file system access will be discussed in this chapter. Configuration of the Management-Ethernet interface and firmware updates will also be discussed.

Resource allocation to MDCs is explained in Table 2-12; the values may be modified if required.

Table 2-12: The default values shown will fit most customer deployments

Resource: CPU weight
Allocation information: Used to assign MPU and LPU CPU resources to each MDC according to its CPU weight. When MDCs need more CPU resources, the device assigns CPU resources according to their CPU weights. Specify CPU weights for MDCs using the limit-resource cpu weight command.
Default: 10 (100 max). By default, the default MDC has a CPU weight of 10 (unchangeable) on each MPU and each interface card, and each non-default MDC has a CPU weight of 10 on each MPU and each interface card that it is authorized to use.

Resource: Disk space
Allocation information: Used to limit the amount of disk space each MDC can use for configuration and log files. Specify disk space percentages for MDCs using the limit-resource disk command.
Default: 100% (100% max). By default, all MDCs share the disk space in the system, and an MDC can use all free disk space in the system.

Resource: Memory space
Allocation information: Used to limit the amount of memory space each MDC can use. Specify memory space percentages for MDCs using the limit-resource memory command.
Default: 100% (100% max). By default, all MDCs share the memory space in the system, and an MDC can use all free memory space in the system.

Although fabric modules are shared by MDCs, traffic between MDCs is isolated because the source and destination packet processors within the chassis are isolated.

Restricting MDC Resources: Limit CPU

All MDCs are authorized to use the same share of CPU resources. If one MDC takes too many CPU resources, the other MDCs might not be able to operate. To ensure correct operation of all MDCs, specify a CPU weight for each MDC.

The amount of CPU resources an MDC can use depends on the percentage of its CPU weight among the CPU weights of all MDCs that share the same CPU. For example, in Figure 2-15, three MDCs share the same CPU, setting their weights to 10, 10, and 5 is equivalent to setting their weights to 2, 2, and 1:

■ The two MDCs with the same weight can use the CPU for approximately the same period of time.

■ The third MDC can use the CPU for about half of the time of each of the other two MDCs.

Figure 2-15: Restricting MDC resources: Limit CPU

The CPU weight specified for an MDC takes effect on all MPUs and all LPUs that the MDC is authorized to use. Table 2-13 outlines how to specify a CPU weight for an MDC.

The resource limits are only used if required. If an MDC does not require any of the CPU resources, other MDCs can use all the available CPU. In other words, there is no hard limit on the CPU usage when CPU resources are available.

Table 2-13: How to specify a CPU weight for an MDC

Step 1: Enter system view.
Command: system-view

Step 2: Enter MDC view.
Command: mdc mdc-name [ id mdc-id ]

Step 3: Specify a CPU weight for the MDC.
Command: limit-resource cpu weight weight-value
Remarks: By default, the default MDC has a CPU weight of 10 (unchangeable) on each MPU and each interface card, and each non-default MDC has a CPU weight of 10 on each MPU and each interface card that it is authorized to use.
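A minimal sketch (the MDC name and the weight value of 5 from the Figure 2-15 discussion are illustrative):

[Switch] mdc Dev
[Switch-mdc-2-Dev] limit-resource cpu weight 5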

 

Restricting MDC Resources: Limit Memory


By default, MDCs on a device share and compete for the system memory space. All MDCs share the memory space in the system, and an MDC can use all free memory space in the system. If an MDC takes too much memory space, other MDCs may not be able to operate normally. To ensure correct operation of all MDCs, specify a memory space percentage for each MDC to limit the amount of memory space each MDC can use. Table 2-14 outlines how to specify a memory space percentage for an MDC.

The memory space to be assigned to an MDC must be greater than the memory space that the MDC is using. Before you specify a memory space percentage for an MDC, use the mdc start command to start the MDC and use the display mdc resource command to view the amount of memory space that the MDC is using.


Note An MDC cannot use more memory than the allocated value specified by the limit-resource memory command. This is in contrast to the CPU resource limit, which is a weighted value.

Table 2-14: How to specify a memory space percentage for an MDC

Step 1: Enter system view.
Command: system-view

Step 2: Enter MDC view.
Command: mdc mdc-name [ id mdc-id ]

Step 3: Specify a memory space percentage for the MDC.
Command (standalone mode): limit-resource memory slot slot-number ratio limit-ratio
Command (IRF mode): limit-resource memory chassis chassis-number slot slot-number ratio limit-ratio
Remarks: By default, all MDCs share the memory space in the system, and an MDC can use all free memory space in the system.
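A hedged sketch, checking current usage first as recommended above (the slot number and the 30 percent ratio are illustrative values):

[Switch] display mdc resource
[Switch] mdc Dev
[Switch-mdc-2-Dev] limit-resource memory slot 1 ratio 30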

Restricting MDC Resources: Limit Storage

By default, MDCs on a device share and compete for the disk space of the device's storage media, such as the Flash and CF cards. An MDC can use all free disk space in the system.

If an MDC occupies too much disk space, the other MDCs might not be able to save information such as configuration files and system logs. To prevent this, specify a disk space percentage for each MDC to limit the amount of disk space each MDC can use for configuration and log files. Table 2-15 outlines how to specify a disk space percentage for an MDC.

Before you specify a disk space percentage for an MDC, use the display mdc resource command to view the amount of disk space the MDC is using. The amount of disk space indicated by the percentage must be greater than that the MDC is using. Otherwise, the MDC cannot apply for more disk space and no more folders or files can be created or saved for the MDC.

If the device has more than one storage medium, the disk space percentage specified for an MDC takes effect on all the media.

Table 2-15: To specify a disk space percentage for an MDC

Step 1: Enter system view.
Command: system-view

Step 2: Enter MDC view.
Command: mdc mdc-name [ id mdc-id ]

Step 3: Specify a disk space percentage for the MDC.
Command (standalone mode): limit-resource disk slot slot-number ratio limit-ratio
Command (IRF mode): limit-resource disk chassis chassis-number slot slot-number ratio limit-ratio
Remarks: By default, all MDCs share the disk space in the system, and an MDC can use all free disk space in the system.
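A minimal sketch for an IRF member device (the chassis, slot, and 20 percent ratio are illustrative values):

[Switch] mdc Dev
[Switch-mdc-2-Dev] limit-resource disk chassis 1 slot 1 ratio 20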

Management Ethernet

When a non-default MDC is created, the system automatically provides access to the Management Ethernet interface of the MPU. The Management-Ethernet interfaces of all non-default MDCs use the same interface type and number and the same physical port and link as the default MDC's physical Management-Ethernet interface. However, you must assign a different IP address to the Management-Ethernet interface so MDC administrators can access and manage their respective MDCs; see Figure 2-16 for an example. The IP addresses for the Management-Ethernet interfaces do not need to belong to the same network segment.

Figure 2-16: Management Ethernet
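A hedged sketch of what Figure 2-16 illustrates: each MDC configures its own IP address on the shared management port. The interface name M-GigabitEthernet 0/0/0, the prompts, and the addresses are assumptions; the actual interface name depends on the platform.

Within the default (Admin) MDC:
[Switch] interface M-GigabitEthernet 0/0/0
[Switch-M-GigabitEthernet0/0/0] ip address 10.1.1.1 24
Within the Dev MDC:
[Dev] interface M-GigabitEthernet 0/0/0
[Dev-M-GigabitEthernet0/0/0] ip address 10.2.2.1 24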

Device Firmware Updates

To run Comware 7, MPUs must be fitted with 4GB SDRAM and also have a CF card of at least 1 GB in size. 4 GB SDRAM is fitted as standard in the JC072B and the JG497A, but the JC072A must be upgraded from 1 GB to 4 GB of SDRAM by using two memory upgrade kits (2 x JC609A). If required, 1 GB CF cards (JC684A) are available for purchase. If an upgraded JC072A needs to be returned for repair, be sure to retain the upgrade parts for use in the replacement unit.

As shown in Figure 2-17, due to physical memory limits, interface cards with 512 MB memory do not support ISSU, and the interfaces on each of these cards can be assigned to only one MDC. Except for these ISSU and MDC limitations, these cards provide full support for all other features.

Figure 2-17: Device firmware updates

Refer to earlier in this chapter for more detail.

Network Virtualization Types


In this section, MDC and IRF interoperability will be discussed.

IRF

Refer to the left-hand side of Figure 2-18. The network virtualization shown in the figure is the combination of multiple physical switches configured as a single logical fabric using IRF. Distributed link aggregation could then be used to connect multiple physical cables to the separate physical switches as a single logical link connected to a single logical device. Multi-Chassis Link Aggregation (MLAG) could be used for link aggregation between the IRF fabric and other switches.

IRF supports both 2 and 4 chassis configurations.

Figure 2-18: Network Virtualization Types

MDC

The middle of Figure 2-18 shows MDC on a single physical switch. This has been discussed at length previously in this chapter. We have discussed how the MDC technology provides multitenant device contexts, where multiple virtual or logical devices are created on a single physical chassis.

Each of these logical contexts provides unique VLAN and VRF resources and also provides hardware isolation inside the same physical chassis.

MDC and IRF

Although MDC can be deployed on a single chassis with redundant power supplies, redundant management modules (MPUs), and redundant line cards (LPUs), most customers deploy MDC together with HP Intelligent Resilient Framework (IRF).

IRF N:1 device virtualization together with MDC 1:N virtualization achieves a combined N:1 + 1:N device virtualization solution, as shown on the right-hand side of Figure 2-18. This achieves higher port densities together with chassis redundancy. Currently, only 2-chassis IRF with MDC is supported.

The right-hand side of Figure 2-18 shows MDC and IRF combined to provide a single virtual device with multiple device contexts. In this example, two physical switches are virtualized using IRF to create a single logical switch. The IRF fabric is then carved up into multiple MDCs to provide IRF resiliency for each of the MDCs defined in the IRF fabric.

This would be used to provide a common control plane, data plane and management plane for each MDC across 2 physical systems.

IRF-Based MDCs

When you configure MDCs, follow these guidelines (see Figure 2-19):

■ To configure both IRF and MDCs on a device, configure IRF first. Otherwise, the device will reboot and load the master's configuration rather than its own when it joins an IRF fabric as a subordinate member, and none of its settings except for the IRF port settings take effect.

■ Before assigning a physical IRF port to an MDC or reclaiming a physical IRF port from an MDC, you must use the undo port group interface command to restore the default. After assigning or reclaiming a physical IRF port, you must use the save command to save the running configuration.

Figure 2-19: IRF-Based MDCs

By default, when a new IRF fabric is created, only the default Admin MDC is created on the IRF fabric. All line cards are assigned to the Admin MDC by default. Line cards and interfaces will then need to be manually assigned to other MDCs as required.

It is important to note that at the time of this writing only 2 chassis IRF fabrics are currently supported in conjunction with the MDC feature. A 4 chassis IRF fabric which provides greater IRF scalability is not currently supported with MDC.

IRF-Based MDCs

As discussed previously, any new MDCs need to be authorized to use line cards before interfaces can be allocated to the MDC. Once authorized, port groups are used to allocate interfaces to the MDC.

What kind of combinations would be possible with IRF and MDCs?

Figure 2-20 shows various MDC and IRF scenarios.

Figure 2-20: IRF-Based MDCs

The first scenario is the most typical. Each MDC is allowed to allocate resources on both chassis 1 and chassis 2. This will provide redundancy for each of the configured MDCs.

This is not a required configuration. An MDC can be created without redundancy (as shown in the second scenario). In this example, only specific line cards on chassis 1 in the IRF fabric have been allocated to MDC 4. MDC4 does not have any IRF redundancy on chassis 2. The other MDCs have redundancy and have line cards allocated on both chassis 1 and chassis 2 in the IRF fabric.


In the third scenario, both MDC 3 and 4 have line cards allocated only on chassis 1, while MDC 1 and 2 have line cards allocated from both chassis 1 and 2. MDC 1 and 2 have redundancy in case of a chassis failure, but MDC 3 and 4 do not have any redundancy if chassis 1 fails.

In the same way, as seen in the fourth scenario, MDC 1 is only configured on chassis 1, while MDCs 2, 3 and 4 are only configured on chassis 2. This is also a supported configuration.

Scenario 5 and 6 show other supported variations of how MDCs can be configured within an IRF fabric.

As can be seen, various combinations are possible and the administrator can decide where MDCs operate. There is no limitation on where the MDCs need to be configured on the chassis devices in the IRF fabric.

MDCs and IRF Types

Overview

There are two ways to configure IRF in combination with MDC. This is dependent on the switch generation.

As shown in Figure 2-21, the method used by the 12500 and 12500E Series Switches has separate IRF links per MDC. The alternate method used on the 10500, 11900 and 12900 Series Switches uses a shared IRF link for all MDCs.


Figure 2-21: MDCs and IRF types

12500/12500E

When configuring IRF on the 12500/12500E Series Switches, a dedicated IRF link per MDC is required.

For MDC 2 on chassis 1 to communicate with MDC 2 on chassis 2, a dedicated IRF port needs to be configured on both chassis switches that are physically part of that MDC. For example, if line card 2 was assigned to MDC 2, then you would need to assign a physical port on line card 2 as an IRF port for MDC 2. If line card 3 was assigned to MDC 3, then a physical port on line card 3 would need to be configured as an IRF port for MDC 3. This would be configured for each MDC.

This configuration also results in all data packets for an MDC using the dedicated IRF port between the two chassis switches. As an example, if data is sent between MDC1 on chassis 1 and MDC1 on chassis 2, the data would traverse the dedicated IRF port connecting the two MDCs and not other IRF links.

This results in isolation of the data plane, as the IRF link of MDC 1 will not receive traffic from MDC 2 or other MDCs. The same applies to the other MDCs.

10500/11900/12900

The version of the IRF and MDC interoperability used on the 10500, 11900 and 12900 Series Switches uses a single shared IRF link for all MDCs rather than a dedicated IRF link per MDC.

This results in a change of packet flow between physical switches and MDCs. On a 12500 switch, a packet sent from one MDC to another uses the dedicated link for that MDC. There is no explicit specification of the source MDC when traffic traverses the IRF link. It is therefore important that the IRF link be correctly connected to the appropriate MDCs on both chassis switches. If an administrator accidentally cabled MDC 2 on chassis 1 to MDC 3 on chassis 2 on 12500 switches, traffic would flow between the two MDCs using that IRF physical link. VLAN 10 traffic in MDC 2 would end up as VLAN 10 traffic on MDC 3, for example. This breaks the original design principles of MDCs, as the switch fabric is now extended from one MDC to another, whereas MDCs should be separate logical switches. Each MDC should have a separate VLAN space, but in this example VLANs are shared.

IRF and MDC on 10500, 11900 and 12900 switches no longer require dedicated links per MDC. A shared IRF link is used and MDC traffic is differentiated using an additional tag.

Using the same example, if VLAN 10 traffic is sent from MDC 2 on chassis 1 to MDC 2 on chassis 2, an additional tag is added to the traffic across the IRF link. This allows chassis 2 to differentiate between the VLAN 10 traffic of MDC 2 and the VLAN 10 traffic of MDC 3.

The IRF port is part of the Admin MDC and direct MDC connections are no longer supported. IRF commands are not available in non-default MDCs.

Proper bandwidth provisioning is required however, as the IRF port will now be carrying traffic for multiple MDCs.

Configuration Examples

12500/12500E

Differences in IRF approaches are reflected in the configuration commands. When configuring IRF on 12500/12500E switches, the MDC is specified in the port group command. Even though the IRF configuration is completed using the Admin MDC, the IRF configuration associates specific IRF interfaces with specific MDCs.

In Figure 2-22, the IRF configuration of IRF port 1/1 is shown. Interface Gigabit Ethernet 1/3/0/1 is added to the IRF port, but is associated with MDC 2. The physical interface 1/3/0/1 must therefore be assigned to MDC 2. Gigabit Ethernet 1/3/0/24 could not be used with MDC 2, for example, as it has already been associated with MDC 3 using the allocate interface command. In this example, the interface is correctly associated with MDC 3.
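A hedged sketch of what the 12500/12500E side of Figure 2-22 likely resembles; the exact placement of the mdc keyword in the port group command may vary by software release, so treat this as illustrative only:

[Switch] irf-port 1/1
[Switch-irf-port1/1] port group interface GigabitEthernet 1/3/0/1 mdc 2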


Note MDC allows IRF fabrics to use 1 Gigabit Ethernet ports rather than only 10 Gigabit Ethernet ports.

Figure 2-22: Configuration examples

10500/11900/12900

The 10500, 11900 and 12900 Series Switches no longer use the MDC keyword when IRF is configured. The interfaces are simply bound to the IRF port (1/1 in this example). The main difference with these switches is that all the interfaces are part of the Admin MDC. It is no longer possible to bind interfaces associated with non-default MDCs to the IRF port.
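A comparable sketch for these platforms, again performed from the Admin MDC with illustrative interface numbering:

<Chassis1> system-view
[Chassis1] irf-port 1/1
# No mdc keyword: the member interface belongs to the Admin MDC and the
# resulting IRF link is shared by all MDCs.
[Chassis1-irf-port1/1] port group interface ten-gigabitethernet 1/0/0/5
[Chassis1-irf-port1/1] quit
[Chassis1] irf-port-configuration active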


More MDC and IRF Configuration Information

Because of port groups and ASIC limitations, it may not be possible to assign individual interfaces to IRF ports. Multiple physical interfaces may need to be associated with the IRF port at the same time. Groups of four interfaces are often associated as per the example shown in Figure 2-23.

Figure 2-23: More MDC and IRF configuration information

This is similar to the behavior on 5900 switches which also require that a group of four interfaces be configured for IRF. This doesn't mean that you have to use all four ports for IRF to function. You could as an example only physically cable two of the ports. But, you cannot use any of the four ports in the group for any other function apart from IRF once the group is used for IRF.

In Figure 2-23, port TenGigabitEthernet 1/0/0/5 is added to IRF. However, an error is displayed indicating that ports 1/0/0/5 to 1/0/0/8 need to be shut down. As the interfaces are part of a port group, they need to be allocated for IRF use as a group rather than individually. Once allocated, one of the interfaces could be used for the actual IRF functionality, but the entire group needs to be activated for IRF use (this is true for certain platforms such as the 5900 series switches but may be different on other platforms).
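The sequence below sketches this interaction on a platform that allocates ports in groups of four; the interface numbers follow Figure 2-23 and the exact grouping rules are platform dependent:

<Switch> system-view
# The whole four-port group must be shut down before any of its members can be bound to IRF.
[Switch] interface range ten-gigabitethernet 1/0/0/5 to ten-gigabitethernet 1/0/0/8
[Switch-if-range] shutdown
[Switch-if-range] quit
[Switch] irf-port 1/1
[Switch-irf-port1/1] port group interface ten-gigabitethernet 1/0/0/5
[Switch-irf-port1/1] quit
[Switch] irf-port-configuration active
# Re-enable only the ports that are actually cabled for IRF.
[Switch] interface ten-gigabitethernet 1/0/0/5
[Switch-Ten-GigabitEthernet1/0/0/5] undo shutdown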

10500/11900/12900 Link Failure Scenario


As per IRF best practices, multiple physical interfaces should form part of the IRF link between switches. If one of the physical interfaces goes down, IRF continues to use the remaining links. As long as at least one link is active between the switches, IRF will remain active. There will be reduced bandwidth between the IRF devices, but IRF functionality is not affected (no split brain).

However, as shown in Figure 2-24, when all physical links between the switches go down, an IRF split will occur.

Figure 2-24: 10500/11900/12900 link failure scenario

Since the Admin MDC is used for IRF port configuration, this is also the MDC where IRF MAD needs to be configured. There will be no MAD configuration in other MDCs. This also implies that the IRF MAD ports have to belong to the Admin MDC.

12500/12500E Link Failure Scenario

On the 12500/12500E switches, IRF configuration is more complicated.

There is a base IRF protocol running at the chassis level and, in addition, MDCs use the IRF physical interfaces to exchange data. The data sent by an MDC is for that particular MDC only. As an example, an IRF link configured in MDC 1 will only transport data between MDC 1 contexts. The link between MDC 2 contexts will only transport data for MDC 2. The links do not carry data for other MDC contexts, but are used by the base IRF protocol.


Refer to the first scenario in Figure 2-25. If the link between MDC 1 on chassis 1 and MDC 1 on chassis 2 fails, the base IRF protocol will remain online as there are still 3 active links between the chassis that can be used by the base IRF protocol.

Figure 2-25: 12500/12500E link failure scenario

However, the data plane connection for MDC 1 is down, which results in a split for MDC 1. In a traditional IRF system, that would result in a chassis split brain. In this example, by contrast, the base IRF protocol can determine that both chassis are still online and connected because the three remaining links are still active. The base IRF protocol running at the chassis level will trigger MDC 1 to shut down all external ports on the standby chassis, but the core IRF protocol and other MDCs continue to operate normally.

This is in effect a split brain scenario for MDC 1, but it is automatically resolved by the base IRF protocol because the remaining links are still active and can be used to detect the failure of the single MDC. Once again, MDC 1 is lost on the standby chassis, but MDC 2, 3 and 4 continue to operate normally.

In the second scenario, the IRF link that is part of MDC 2 is lost. In this example, as per the previous example, the base IRF protocol continues to function normally. This is because 3 out of 4 links are still up for the base IRF protocol. The data connection for MDC 2 is down in this example, and this results in a split brain for MDC 2. The IRF protocol will shut down the external facing interfaces of MDC 2 on the standby chassis. All other MDCs will continue to operate normally, and so will the base IRF protocol.

Another advantage of this setup is that if the IRF link for a given MDC is restored, the MDC is not rebooted and the ports on the slave device are restored automatically. There is no reboot of the slave device as long as there is an IRF connection between the switches.

A similar situation occurs in the third scenario. In this example, both MDC 1 and MDC 2 will have the external interfaces of the standby chassis shut down because of the split brain on those MDCs. The base IRF protocol will continue to operate as normal as there are still two remaining links up between the chassis. MDCs 3 and 4 will also continue to operate normally.

In the last example, all links between the chassis are lost. This means that there is no communication between the chassis IRF ports. This results in a split brain scenario for the base IRF protocol and all MDCs. This scenario requires an external multiple active detection method such as MAD BFD to resolve the split brain.

IRF-based MDC: IRF Fabric Split

An IRF fabric is split when no physical IRF ports connecting the chassis are active. As shown in Figure 2-26, this results in both chassis becoming active at the same time with the same IP address and the same MAC address, which causes multiple network issues and requires a split-brain mechanism such as Multi Active Detection (MAD) to resolve. One of the systems in the IRF fabric should shut down all external ports.

Figure 2-26: IRF-based MDC: IRF Fabric Split

Previously in this chapter, we discussed the scenario of a split in a single MDC where the standby MDC is automatically shut down. When the link recovers, the MDC is restarted and not the entire chassis. The base kernel and other MDCs will continue to operate normally.

However, when the entire IRF connection is lost, as in this example, the situation is different. When the link is recovered, the standby system will need to be rebooted when it rejoins the fabric. This is similar to a traditional IRF system.

Multi Active Detection (MAD)

When all physical IRF ports between chassis go down, an additional mechanism is required to resolve multiple active devices. In order to ensure that the split brain is detected and resolved, configure traditional MAD BFD or MAD LACP.

MAD BFD may be the preferred MAD method as there is no dependency on any other devices outside of the IRF fabric and MAD BFD is very fast at detecting the split.

As shown in Figure 2-27, MAD BFD is configured at the base IRF level and is thus configured using the Admin MDC. In addition, all MAD BFD links need to be assigned to the Admin MDC.
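A minimal MAD BFD sketch is shown below, configured from the Admin MDC of the IRF master; the VLAN number, member ports, and addresses are assumptions for this example:

[IRF] vlan 100
# The detection VLAN and its member ports must remain in the Admin MDC.
[IRF-vlan100] port ten-gigabitethernet 1/0/0/10 ten-gigabitethernet 2/0/0/10
[IRF-vlan100] quit
[IRF] interface vlan-interface 100
[IRF-Vlan-interface100] mad bfd enable
# One MAD IP address per IRF member ID.
[IRF-Vlan-interface100] mad ip address 192.168.100.1 24 member 1
[IRF-Vlan-interface100] mad ip address 192.168.100.2 24 member 2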

Figure 2-27: Multi Active Detection (MAD)

Summary

In this chapter, you learned about Multitenant Device Context (MDC). This is a technology that can partition a physical device or an IRF fabric into multiple logical switches called "MDCs."

MDC features and use cases were discussed in this chapter, including using a single physical switch for multiple customers, which provides separation while still leveraging a single device.


The MDC architecture, supported devices and operation were discussed. Upgrade restrictions and options were also discussed.

Lastly, support for MDC and IRF was discussed, including the differences between first and second generation switches such as the 12500 and 12900. The way IRF ports are configured and the results of link failures, including split brain scenarios, were also discussed.

Learning Check

Answer each of the questions below.

1. An administrator has configured two customer MDCs (MDC 2 and MDC 3) on a core 12500 switch. What should an administrator configure to allow traffic between the two MDCs?

a. Create routed ports in each MDC and configure inter-VLAN routing between the MDCs.

b. Configure VRFs in each MDC and enable route leaking between the VRFs.

c. Connect a physical cable from a port in MDC 2 to a port in MDC 3 and then configure the ports to be in the same VLAN on each MDC.

d. Configure routing between MDC 1 and the customer MDCs. Traffic between customer MDCs must be sent via the Admin MDC.

2. A network administrator has taken delivery of a new HP 12900 switch. How many MDCs exist when the switch is booted?

a. Zero

b. One

c. Two

d. Four

e. Nine

3. How are interfaces allocated to MDCs?

a. By individual interface

b. By interface group

c. By interface port

d. By MDC number

4. Which device requires separate IRF ports per MDC?


a. 10500

b. 12900

c. 11900

d. 12500

5. A 12500 switch is configured with 4 IRF ports, each of which is in a different MDC: Port 1 = MDC 1, Port 2 = MDC 2, Port 3 = MDC 3, Port 4 = MDC 4.

IRF Port 1 goes down. What is the result?

a. All MDCs go offline.

b. An IRF split occurs and MAD is required to resolve the split brain.

c. The core IRF protocol goes offline, but IRF within the MDCs continues as normal.

d. MDC 1 goes offline, but other MDCs continue as normal. The core IRF protocol requires MAD to resolve the split brain.

e. MDC 1 goes offline, but other MDCs continue as normal. The core IRF protocol continues as normal.

Learning Check Answers

1. c

2. b

3. b

4. d

5. e


3 Multi-CE (MCE)

EXAM OBJECTIVES

In this chapter, you learn to:

Describe MCE Features.

Describe MCE use cases.

Configure MCE.

Describe and configure route leaking.

Configure isolated management access.

INTRODUCTION

Multi-VPN-Instance CE (MCE) enables a switch to function as a Customer Edge (CE) device of multiple VPN instances in a BGP/MPLS VPN network, thus reducing network equipment investment. In the remainder of this module we will use Multi-CE or MCE when talking about Multi-VPN-Instance CE.

MPLS L3VPN Overview

MPLS L3VPN is an L3VPN technology used to interconnect geographically dispersed VPN sites, as shown in Figure 3-1. MPLS L3VPN uses BGP to advertise VPN routes and uses MPLS to forward VPN packets over a service provider backbone.


Figure 3-1: MPLS L3VPN overview

MPLS L3VPN provides flexible networking modes, excellent scalability, and convenient support for MPLS QoS and MPLS TE.


Note MPLS basics are discussed in chapter 3 and MPLS VPNs in other study guides. This study guide only covers the MCE feature without a detailed discussion of MPLS L3VPNs.

Basic MPLS L3VPN Architecture

A basic MPLS L3VPN architecture has the following types of devices:

■ Customer edge device (CE device or CE) - A CE device resides on a customer network and has one or more interfaces directly connected to a service provider network. A CE is not aware of any VPN and does not need to support MPLS.

■ Provider edge device (PE device or PE) - A PE device resides at the edge of a service provider network and connects to one or more CEs. All MPLS VPN services are processed on PEs.

■ Provider device (P device or P) - A P device is a core device on a service provider network. It is not directly connected to any CE. A P device has only basic MPLS forwarding capability and does not handle VPN routing information.

CEs and PEs mark the boundary between the service providers and the customers. A CE is usually a router. After a CE establishes adjacency with a directly connected PE, it redistributes its VPN routes to the PE and learns remote VPN routes from the PE. CEs and PEs use BGP/IGP to exchange routing information. You can also configure static routes between them.

After a PE learns the VPN routing information of a CE, it uses BGP to exchange VPN routing information with other PEs. A PE maintains routing information about only VPNs that are directly connected, rather than all VPN routing information on the provider network.

A P router maintains only routes to PEs. It does not need to know anything about VPN routing information.

When VPN traffic is transmitted over the MPLS backbone, the ingress PE functions as the ingress LSR, the egress PE functions as the egress LSR, while P routers function as the transit LSRs.

Site

A site has the following features:

■ A site is a group of IP systems with IP connectivity that does not rely on any service provider network.

■ The classification of a site depends on the topological relationship of the devices, rather than the geographical relationships, though the devices at a site are, in most cases, adjacent to each other geographically.

■ A device at a site can belong to multiple VPNs, which means that a site can belong to multiple VPNs.

■ A site is connected to a provider network through one or more CEs. A site can contain multiple CEs, but a CE can belong to only one site.

Sites connected to the same provider network can be classified into different sets by policies. Only the sites in the same set can access each other through the provider network. Such a set is called a VPN.

Terminology

VRF / VPN Instance

VPN instances, also called virtual routing and forwarding (VRF) instances, implement route isolation, data independence, and data security for VPNs.


A VPN instance has the following components:

■ A separate Label Forwarding Information Base (LFIB).

■ A separate routing table.

■ Interfaces bound to the VPN instance.

■ VPN instance administration information, including route distinguishers (RDs), route targets (RTs), and route filtering policies.

To associate a site with a VPN instance, bind the VPN instance to the PE's interface connected to the site. A site can be associated with only one VPN instance, and different sites can associate with the same VPN instance. A VPN instance contains the VPN membership and routing rules of associated sites.

With MPLS VPNs, routes of different VPNs are identified by VPN instances.

A PE creates and maintains a separate VPN instance for each directly connected site. Each VPN instance contains the VPN membership and routing rules of the corresponding site. If a user at a site belongs to multiple VPNs, the VPN instance of the site contains information about all the VPNs.

For independence and security of VPN data, each VPN instance on a PE has a separate routing table and a separate label forwarding information base (LFIB).

A VPN instance contains the following information: an LFIB, an IP routing table, interfaces bound to the VPN instance, and administration information of the VPN instance. The administration information includes the route distinguisher (RD), route filtering policy, and member interface list.

VPN-IPv4 Address

Each VPN independently manages its address space. The address spaces of VPNs might overlap. For example, if both VPN 1 and VPN 2 use the addresses on subnet 10.110.10.0/24, address space overlapping occurs.

BGP cannot process overlapping VPN address spaces. For example, if both VPN 1 and VPN 2 use the subnet 10.110.10.0/24 and each advertise a route destined for the subnet, BGP selects only one of them, resulting in the loss of the other route.

Multiprotocol BGP (MP-BGP) can solve this problem by advertising VPN-IPv4 addresses (also called VPNv4 addresses).

As shown in Figure 3-2, a VPN-IPv4 address consists of 12 bytes. The first eight bytes represent the RD, followed by a four-byte IPv4 prefix. The RD and the IPv4 prefix form a unique VPN-IPv4 prefix.

Figure 3-2: VPN-IPv4 address

An RD can be in one of the following formats:

■ When the Type field is 0, the Administrator subfield occupies two bytes, the Assigned number subfield occupies four bytes, and the RD format is 16-bit AS number:32-bit user-defined number. For example, 100:1.

■ When the Type field is 1, the Administrator subfield occupies four bytes, the Assigned number subfield occupies two bytes, and the RD format is 32-bit IPv4 address:16-bit user-defined number. For example, 172.1.1.1:1.

■ When the Type field is 2, the Administrator subfield occupies four bytes, the Assigned number subfield occupies two bytes, and the RD format is 32-bit AS number:16-bit user-defined number, where the minimum value of the AS number is 65536. For example, 65536:1.

To guarantee global uniqueness for a VPN-IPv4 address, do not set the Administrator subfield to any private AS number or private IP address.

Route Target Attribute

MPLS L3VPN uses route target community attributes to control the advertisement of VPN routing information. A VPN instance on a PE supports the following types of route target attributes:

■ Export target attribute—A PE sets the export target attribute for VPN-IPv4 routes learned from directly connected sites before advertising them to other PEs.

■ Import target attribute—A PE checks the export target attribute of VPN-IPv4 routes received from other PEs. If the export target attribute matches the import target attribute of a VPN instance, the PE adds the routes to the routing table of the VPN instance.

Route target attributes define which sites can receive VPN-IPv4 routes, and from which sites a PE can receive routes.


Like RDs, route target attributes can be one of the following formats:

■ 16-bit AS number:32-bit user-defined number. For example, 100:1.

■ 32-bit IPv4 address:16-bit user-defined number. For example, 172.1.1.1:1.

■ 32-bit AS number:16-bit user-defined number, where the minimum value of the AS number is 65536. For example, 65536:1.

MCE / VRF-Lite

Multi-CE or VRF-Lite supports multiple VPN instances on customer edge devices. This feature provides separate routing tables (VPNs) without requiring MPLS L3VPN, and it supports overlapping IP address spaces.

MCE Overview

BGP/MPLS VPN transmits private network data through MPLS tunnels over the public network. However, the traditional MPLS L3VPN architecture requires that each VPN instance use an exclusive CE to connect to a PE, as shown in Figure 3-3.

Figure 3-3: MCE overview

A private network is usually divided into multiple VPNs to isolate services. To meet these requirements, you can configure a CE for each VPN, which increases device expense and maintenance costs. Or, you can configure multiple VPNs to use the same CE and the same routing table, which sacrifices data security.

You can use the Multi-VPN-Instance CE (MCE) function in multi-VPN networks.


MCE allows you to bind each VPN to a VLAN interface. The MCE creates and maintains a separate routing table for each VPN.

This separates the forwarding paths of packets of different VPNs and, in conjunction with the PE, can correctly advertise the routes of each VPN to the peer PE, ensuring the normal transmission of VPN packets over the public network.

As shown in Figure 3-3, the MCE device creates a routing table for each VPN. VLAN interface 2 binds to VPN 1 and VLAN-interface 3 binds to VPN 2. When receiving a route, the MCE device determines the source of the routing information according to the number of the receiving interface, and then adds it to the corresponding routing table. The MCE connects to PE 1 through a trunk link that permits packets tagged with VLAN 2 or VLAN 3. PE 1 determines the VPN that a received packet belongs to according to the VLAN tag of the packet, and sends the packet through the corresponding tunnel.

You can configure static routes, RIP, OSPF, IS-IS, EBGP, or IBGP between an MCE and a VPN site and between an MCE and a PE.
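For illustration, a routing protocol or static route is simply tied to the relevant VPN instance. The sketch below assumes a VPN instance named vpn1 and illustrative addressing:

# OSPF process 100 runs only within VPN instance vpn1 (towards the site or the PE).
[MCE] ospf 100 vpn-instance vpn1
[MCE-ospf-100] area 0
[MCE-ospf-100-area-0.0.0.0] network 10.110.10.0 0.0.0.255
[MCE-ospf-100-area-0.0.0.0] quit
[MCE-ospf-100] quit
# A static route can likewise be scoped to the VPN instance.
[MCE] ip route-static vpn-instance vpn1 192.168.20.0 24 10.110.10.254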


Note To implement dynamic IP assignment for DHCP clients in private networks, you can configure DHCP server or DHCP relay agent on the MCE. When the MCE functions as the DHCP server, the IP addresses assigned to different private networks cannot overlap.

Feature Overview

MCE Features

MCE supports the configuration of additional routing tables within a single routing device. As an analogy, this can be compared to VLANs configured on Layer 2 switches. Each VLAN is a separate, isolated Layer 2 network, and each VPN instance is a separate, isolated Layer 3 network. Each VPN instance or VRF is a separate routing table which runs independently of other routing tables on the device.

In Layer 2 VLANs, a Layer 2 access port belongs to a single VLAN. In the same way, in VPN-instances, each Layer 3 routed interface belongs to a single VPN instance.

Examples of interfaces that belong to a single VPN instance include:

■ The Layer 3 interface of a VLAN. Example: interface vlan 10


■ Routed ports. Example: Gigabit Ethernet 1/0/2

■ Routed subinterfaces. Example: Gigabit Ethernet 1/0/2.10

■ Loopback interfaces. Example: interface loopback 1

In Figure 3-4, various interfaces have been defined in separate VPN instances. As an example, Gigabit Ethernet 1/0 and Gigabit Ethernet 2/0.10 are configured in the RED VPN instance, Gigabit Ethernet 2/0.20 is configured in the GREEN VPN instance, and loopback 10 and interface VLAN 10 are configured in the BLUE VPN instance.

Figure 3-4: Feature overview

Each VPN instance configured by a network administrator has separate interfaces and separate routing tables.
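A sketch of how the interfaces in Figure 3-4 might be bound is shown below, assuming the RED and BLUE VPN instances already exist. Binding an interface removes its IP configuration, so the (illustrative) addresses are applied after the bind:

[Device] interface gigabitethernet 2/0.10
[Device-GigabitEthernet2/0.10] ip binding vpn-instance RED
[Device-GigabitEthernet2/0.10] vlan-type dot1q vid 10
[Device-GigabitEthernet2/0.10] ip address 10.1.10.1 24
[Device-GigabitEthernet2/0.10] quit
[Device] interface vlan-interface 10
[Device-Vlan-interface10] ip binding vpn-instance BLUE
[Device-Vlan-interface10] ip address 10.2.10.1 24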

Supported Platforms

MCE is available on almost all Comware routing devices (switches and routers).

Comware 5 fixed port switches include the 3600v2, 5500, 5800 and 5820 switches. Comware 7 fixed port switches include the 5900, 5920 and 5930 switches. Chassis based switches running either Comware 5 or Comware 7 include the 7500 (Comware 5), 10500, 11900, 12500 and 12900 switches.

Routers that support MCE include the MSR, HSR and SR series routers.


Design Considerations

The number of VPN instances supported is hardware dependent, as shown in Figure 3-5. For software based routers, the restriction is typically a memory restriction.

Figure 3-5: Design considerations

For switches, this is typically restricted by the ASICs used in the switches.

Use Case 1: Multi-Tenant Datacenter

A number of use cases for MCE will now be discussed.

The first use case is a Multi-Tenant Data Center. This is a data center infrastructure provided by a hosting provider offering various services to customers.

A requirement in the environment is that each customer should have a separate routing infrastructure isolated from other customers.

Access control lists (ACLs) could be used to separate customers, but ACLs need to be individually configured, are often very complex, and are prone to errors. Customers would still be running within the same routing table instance, and a misconfigured ACL would allow access between customer networks. By default, traffic would be permitted between customers, and only with careful ACL configuration are customers blocked.

MCE in contrast creates separate routing tables and thus separates customer resources by design. No access is permitted between VPN instances by default. Only with explicit additional configuration (route leaking) is traffic permitted between the separate VPN instances. The MCE feature is also much simpler to configure and maintain than traditional ACLs.


Typically, to ensure that all of these customers can access a common internet gateway connection, MCE is combined with a virtual firewall per customer. The firewall used would also be VPN instance aware to ensure separation.

In Figure 3-6, the RED and GREEN customers are configured in separate VPN instances and cannot communicate with each other, even though they are using a shared network infrastructure. Both customers can also access the Internet via the common Internet firewall.

Figure 3-6: Use Case 1: Multi-tenant datacenter

Use Case 2: Campus with Independent Business Units

The second use case is a campus with independent business units, or teams or applications.

In some cases, external teams may be working at a customer site on a specific project, but may be located throughout the campus. The owner of the infrastructure may want to isolate the external team from the rest of the network, but allow them to communicate across different parts of the core infrastructure. This would create a separate isolated virtual network using the same equipment.

A second example may be the use of external application monitoring. An internal ERP application may be monitored by an external supplier or partner. MCE could be used to tightly control which networks are available to the external party. Only certain internal routes would be advertised and available to the external party.

A third example of service isolation is a managed voice over IP (VoIP) infrastructure. In this example, the entire VoIP infrastructure is managed and configured by an external partner. The internal VoIP addressing is isolated from the normal corporate infrastructure providing better security and separation. The external VoIP partner can manage the VoIP network, but has no access to the rest of the network.

A fourth example is a guest network. A network may consist of multiple locations connected via routed links. Each location may need to provide guest connectivity, but also use a centralized Internet connection. A remote site may be connected via a routed WAN link to the central site, and in this case, configuration of separate VPN instances may be beneficial to provide guest network isolation across routed networks.

Use Case 3: Overlapping IP Segments

In this third use case example, support for overlapping IP networks is required. This may occur when companies merge and the same IP address space is used by multiple parts of the business.

In this case each business or department is separated by VPN instances to isolate the networks and their addressing.

If connectivity between the instances is required, a VPN instance aware firewall could be used at the Layer 3 border between instances. This device would perform network address translation (NAT) between the VPN instances as well as provide firewall functionality.

Use Case 4: Isolated Management Network

A fourth use case of VPN instances is an isolated management network for network devices.

This would not be required for Layer 2 switches, as these devices do not have IP addresses in the customer network. The management subnet of a Layer 2 device is by default isolated from the customer or user portion of the network. This is because a Layer 2 switch only has one Layer 3 IP address, which is used exclusively for device management and is configured in a separate management VLAN.

On Inter-VLAN routing devices or Layer 3 devices however, the IP interfaces of the device are accessible by user or customer devices by design. Separation in this case would be required. A dedicated VPN instance would be created for the management interface of the device. Protocols such as SNMP, telnet, SSH and other traditional networking management protocols would operate inside the dedicated VPN-Instance and would not be accessible from the customer VPN instances.
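A hedged sketch of such an isolated management instance on a Comware device with a dedicated management Ethernet port follows; the instance name, RD, and addresses are assumptions:

[Switch] ip vpn-instance MGMT
[Switch-vpn-instance-MGMT] route-distinguisher 65000:999
[Switch-vpn-instance-MGMT] quit
# Bind the out-of-band management port to the management instance.
[Switch] interface m-gigabitethernet 0/0/0
[Switch-M-GigabitEthernet0/0/0] ip binding vpn-instance MGMT
[Switch-M-GigabitEthernet0/0/0] ip address 192.168.99.10 24
[Switch-M-GigabitEthernet0/0/0] quit
# Management reachability is routed only inside the MGMT instance.
[Switch] ip route-static vpn-instance MGMT 0.0.0.0 0 192.168.99.1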


Note Several HP Provision switches have OOB Management ports. The Provision OOB Management ports operate by default in their own IP routing space. There is no requirement to define a new routing table for management purposes. This is in contrast with HP Comware devices, which require administrators to define a management routing table (VPN instance) for the OOB Management port.

Use Case 5: Shared Services in Data Center

This last use case discussed is a shared services VPN instance in a data center. In the first use case discussed, VPN instances were used to separate customer networks. In this example, VPN instances are extended to provide shared services.

The type of shared services that a service provider may offer a customer includes central firewall facilities, backup facilities, network monitoring, hypervisor management and security services. All services could be provided either within a single VPN instance or by using multiple VPN instances.

Customers could continue using their own routing protocols such as OSPF within their customer VPN instances. The shared services instances may even use different routing protocols. Each VPN instance is still isolated and only specific routes are permitted between the VPN instances by using route leaking.

Basic Configuration Steps

The following is an overview of the basic configuration steps; a consolidated sketch follows the list:

1. Define a new VPN instance. This creates a new routing table or virtual routing and forwarding instance (VRF).

2. Each VPN instance is uniquely identified by a route distinguisher (RD). This is an eight-byte value used to uniquely identify routes in Multiprotocol BGP (MP-BGP). Even though MP-BGP is not used, the RD must be specified.

3. Layer 3 interfaces are then assigned to the VPN instance.

4. Note that binding an interface in step 3 removes all existing configuration from that interface. Any IP address or other configuration will need to be reconfigured.

5. Optionally, dynamic or static routing can be configured.
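A consolidated sketch of these steps, using a hypothetical VPN instance named RED, VLAN-interface 10, and illustrative addressing:

<MCE> system-view
[MCE] ip vpn-instance RED
[MCE-vpn-instance-RED] route-distinguisher 100:1
[MCE-vpn-instance-RED] quit
[MCE] interface vlan-interface 10
# Binding removes any previous interface configuration (step 4), so the address is applied afterwards.
[MCE-Vlan-interface10] ip binding vpn-instance RED
[MCE-Vlan-interface10] ip address 10.1.10.1 24
[MCE-Vlan-interface10] quit
# Step 5 (optional): routing scoped to the instance.
[MCE] ip route-static vpn-instance RED 10.1.20.0 24 10.1.10.254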

Configuration Step 1: Define VPN-Instance

A VPN instance is a collection of the VPN membership and routing rules of its associated site. See Figure 3-7 and Table 3-1 for the first configuration steps to create a VPN instance.

Figure 3-7: Configuration step 1: Define VPN-Instance

Table 3-1: The first configuration step is to create a VPN instance

Step 1. Enter system view.
Command: system-view

Step 2. Create a VPN instance and enter VPN instance view.
Command: ip vpn-instance vpn-instance-name
Remarks: By default, no VPN instance is created.

Once the VPN instance has been defined, a list of VPN instances can be displayed and the routing table of the VPN instance can be displayed.


By default, no interfaces will be bound to the VPN instance, and its routing table contains only the internal loopback routes in the 127.0.0.0 range. The display ip routing-table vpn-instance <name> command will display this, as shown in Figure 3-8.
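For example, assuming an instance named RED has been created, the verification commands are:

[MCE] display ip vpn-instance
[MCE] display ip routing-table vpn-instance RED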

Figure 3-8: Step 1: Define VPN-Instance (continued)

Configuration Step 2: Route Distinguisher

The second step is to configure the route-distinguisher (RD) of the VPN instance, as shown in Figure 3-9.

Figure 3-9: Configuration step 2: Route Distinguisher

BGP cannot process overlapping VPN address spaces. For example, if both VPN 1 and VPN 2 use the subnet 10.110.10.0/24 and each advertise a route destined for the subnet, BGP selects only one of them, resulting in the loss of the other route. Multiprotocol BGP (MP-BGP) can solve this problem by advertising VPN-IPv4 prefixes.

MCE does not require MP-BGP, but a unique RD is still required.

Use Table 3-2 to configure a Route Distinguisher and optional descriptions.

Table 3-2: How to configure an RD and optional descriptions


Step 1. Enter system view.
Command: system-view

Step 2. Create a VPN instance and enter VPN instance view.
Command: ip vpn-instance vpn-instance-name
Remarks: By default, no VPN instance is created.

Step 3. Configure an RD for the VPN instance.
Command: route-distinguisher route-distinguisher
Remarks: By default, no RD is specified for a VPN instance.

Step 4. (Optional.) Configure a description for the VPN instance.
Command: description text
Remarks: By default, no description is configured for a VPN instance.

Step 5. (Optional.) Configure a VPN ID for the VPN instance.
Command: vpn-id vpn-id
Remarks: By default, no VPN ID is configured for a VPN instance.

The command display ip vpn-instance [ instance-name vpn-instance-name ] displays information about a specified VPN instance or all VPN instances.

Syntax

display ip vpn-instance [ instance-name vpn-instance-name ]

instance-name vpn-instance-name

Displays information about the specified VPN instance. The vpn-instance-name is a case-sensitive string of 1 to 31 characters. If no VPN instance is specified, the command displays brief information about all VPN instances.

Example

Display brief information about all VPN instances, as shown in Figure 3-10.

Figure 3-10: Step 2: Route Distinguisher (continued)

Command output is shown in Table 3-3.

Table 3-3: Display VPN-instance route distinguisher command output

Field

Description

VPN-Instance Name

Name of the VPN instance.

RD

RD of the VPN instance.

Create Time

Time when the VPN instance was created.

Configuration Step 3: Define L3 Interface

Optionally, additional Layer 3 routed interfaces can be defined and bound to the VPN instance. This typically applies to switches, as most switches have only a single routed interface by default (interface VLAN 1). Additional Layer 3 interfaces can be created as routed ports, Layer 3 VLAN interfaces, routed subinterfaces, or loopback interfaces.
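For example, a routed port could be created and bound as follows (port number and addressing are illustrative); because the bind wipes existing interface configuration, the IP address is reapplied afterwards:

[MCE] interface gigabitethernet 1/0/2
# Convert the switch port to a Layer 3 routed port.
[MCE-GigabitEthernet1/0/2] port link-mode route
[MCE-GigabitEthernet1/0/2] ip binding vpn-instance RED
[MCE-GigabitEthernet1/0/2] ip address 172.16.1.1 24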

Use display interface brief to display brief Ethernet interface information. In the output in Figure 3-11, multiple interface types are shown, including a routed port, routed subinterface, loopback interface and VLAN interface.


Figure 3-11: Step 3: Define L3 Interface (continued)

Syntax

display interface [ interface-type [ interface-number | interface-number.subnumber ] ] brief [ description ]

interface-type

Specifies an interface type.

interface-number

Specifies an interface number.

interface-number.subnumber

Specifies a subinterface number, where interface-number is a main interface (which must be a Layer 3 Ethernet interface) number, and subnumber is the number of a subinterface created under the interface. The value range for the subnumber argument is 1 to 4094.

description

Displays the full description of the specified interface. If the keyword is not specified, the command displays at most the first 27 characters of the interface description. If the keyword is specified, the command displays all characters of the interface description.

Usage Guidelines

If no interface type is specified, this command displays information about all interfaces.


If an interface type is specified but no interface number or subinterface number is specified, this command displays information about all interfaces of that type.

If both the interface type and interface number are specified, this command displays information about the specified interface.

Examples

Display brief information about all interfaces.


The brief information of interface(s) under bridge mode:


Command output is shown in Table 3-4.

Table 3-4: Display brief information about all interfaces command output

Field

Description

The brief information of interface(s) under route mode:

Brief information about Layer 3 interfaces.

Link: ADM - administratively down; Stby - standby

ADM—The interface has been shut down by the network administrator. To recover its physical layer state, run the undo shutdown command. Stby—The interface is a standby interface.

Protocol: (s) – spoofing

If the network layer protocol of an interface is UP, but its link is an on-demand link or not present at all, this field displays UP (s), where s represents the spoofing flag. This attribute is typical of interface Null 0 and loopback interfaces.

Interface

Interface name.

 

Link

Physical link state of the interface: UP—The link is up. DOWN—The link is physically down. ADM—The link has been administratively shut down. To recover its physical state, run the undo shutdown command. Stby—The interface is a standby interface.

Description

Interface description configured by using the description command. If the description keyword is not specified in the display interface brief command, the Description field displays at most 27 characters. If the description keyword is specified in the display interface brief command, the field displays the full interface description.

The brief information of interface(s) under bridge mode:

Brief information about Layer 2 interfaces.

Speed or Duplex: (a)/A - auto; H - half; F – full

If the speed of an interface is automatically negotiated, its speed attribute includes the auto negotiation flag, indicated by the letter a in parentheses. If the duplex mode of an interface is automatically negotiated, its duplex mode attribute includes the following options:

(a)/A—Auto negotiation. H—Half negotiation. F—Full negotiation.

Type: A - access; T -

Li