
9771 WCE RNC PRODUCT ENGINEERING INFORMATION
UMT/IRC/APP/041734

INTERNAL

09/DEC/2014

Standard 02.07/EN
9771 WCE RNC Product Engineering Information

Copyright 2014 by Alcatel-Lucent. All Rights Reserved.

About Alcatel-Lucent
Alcatel-Lucent (Euronext Paris and NYSE: ALU) provides solutions that enable service providers,
enterprises and governments worldwide to deliver voice, data and video communication services to
end-users. As a leader in fixed, mobile and converged broadband networking, IP technologies,
applications, and services, Alcatel-Lucent offers the end-to-end solutions that enable compelling
communications services for people at home, at work and on the move. For more information, visit
Alcatel-Lucent on the Internet:
http://www.alcatel-lucent.com
Notice
The information contained in this document is subject to change without notice. At the time of
publication, it reflects the latest information on Alcatel-Lucent's offer; however, our policy of continuing
development may result in improvement or change to the specifications described.
Trademarks
Alcatel, Lucent Technologies, Alcatel-Lucent and the Alcatel-Lucent logo are trademarks of
Alcatel-Lucent. All other trademarks are the property of their respective owners. Alcatel-Lucent assumes no
responsibility for inaccuracies contained herein.


PUBLICATION HISTORY

June 2013
Issue V01/EN, Preliminary
Document creation started from a workshop and updated with R&D inputs

March 2014

Issue V01.01/EN Preliminary

Update on
Dimensioning view
vMWare feature description
Capacity evaluation
Connectivity Model
VM reference Configuration
IP Engineering Architecture.

June 2014

Issue V02.01/EN Preliminary

Update on tenant references for LR14.2W release


Update on external IP addresses configuration

June 2014
Issue V02.02/EN Preliminary
Correction on WCE RNC3G capacity licensing

August 2014

Issue V02.03/EN Preliminary


Update concerning WCE Dimensioning, Power Supply and Consumption
Update concerning the Platform Management Server description

November 2014

Issue V02.04/EN Standard

December 2014
Issue V02.05/EN Approved Standard

Update following R&D and PS remarks

December 2014

Issue V02.06/EN
Update for new hardware configurations (LR14.2W) due to some HP materials being designed out

December 2014
Issue V02.07/EN
Corrections of typos and editorial changes


TABLE OF CONTENTS

1 INTRODUCTION .................................................................................................................................. 11
1.1 OBJECT ......................................................................................................................................... 11
1.2 HOW THIS DOCUMENT IS ORGANIZED ..................................................................................................... 11
1.3 AUDIENCE FOR THIS DOCUMENT .......................................................................................................... 11
1.4 SCOPE OF THIS DOCUMENT ................................................................................................................. 11
1.5 RULES AND RECOMMENDATIONS .......................................................................................................... 12
1.6 RELATED DOCUMENTS........................................................................................................................ 13
1.7 STANDARDS ..................................................................................................................................... 13
1.7.1 3GPP .............................................................................................................. 13
1.8 PROCESS ........................................................................................................................................ 13
1.8.1 external .......................................................................................................... 13
1.9 PLM ............................................................................................................................................... 13
1.9.1 external .......................................................................................................... 13
1.10 TECHNICAL PUBLICATIONS ................................................................................................................ 13
1.10.1 external ........................................................................................................ 13
1.11 TECHNICAL PUBLICATIONS OPERATIONS ............................................................................................... 14
1.11.1 external ........................................................................................................ 14
1.12 R&D DOCUMENTS (INTERNAL) ........................................................................................................... 14
1.13 ENGINEERING ................................................................................................................................ 14
1.13.1 Global Document .............................................................................................. 14
1.13.2 external ........................................................................................................ 14
1.13.3 Internal ......................................................................................................... 14
1.14 I&C ............................................................................................................................................ 15
1.14.1 Customer Documentation .................................................................................... 15

2 ABBREVIATIONS AND DEFINITIONS ................................................................................................... 16


2.1 ABBREVIATIONS ................................................................................................................................ 16
2.2 DEFINITIONS .................................................................................................................................... 18

3 WCE ARCHITECTURE INTRODUCTION ............................................................................................... 19


3.1 WCE PLATFORM .............................................................................................................................. 19
3.2 WIRELESS CLOUD ELEMENT ARCHITECTURE ........................................................................................... 19
3.3 VIRTUALIZATION OVERVIEW ................................................................................................................ 21
3.3.1 Horizontal vs vertical scalability ............................................................................... 22
3.4 WCE RNC ARCHITECTURE ................................................................................................................ 23
3.4.1 3gOAM ............................................................................................................ 24
3.4.2 CMU Cell Management Unit ................................................................................. 25
3.4.3 UMU UE Management unit ................................................................................... 25
3.4.4 pc protocol converter ........................................................................................ 25
3.5 WCE SITE PREPARATION / COMMISSIONING / INTEGRATION .................................................................... 26

4 WCE HARDWARE & MODULES DESCRIPTION .................................................................................... 27


4.1 HP BLADE SYSTEM C7000 ENCLOSURE ................................................................................................. 27
4.2 HP PROLIANT BL460C G8 SERVER BLADE MODULE DESCRIPTION ................................................................ 32


4.2.1 blade server front view........................................................................................ 32


4.2.2 blade server description ...................................................................................... 34
4.3 CONNECTIVITY MODEL WITH HP BLADE SWITCH ...................................................................................... 35
4.3.1 HP6125XLG faceplate port assignment ..................................................................... 36
4.3.2 hp6125G port assignment ..................................................................................... 36
4.4 NETAPP E5400 SAN DESCRIPTION ...................................................................................................... 37
4.5 WCE PLATFORM MANAGEMENT SERVER ................................................................................................ 39
4.5.1 Characteristics .................................................................................................. 40
4.5.1.1 SERVER HARDWARE ......................................................................................... 40
4.5.1.2 PHYSICAL SPECIFICATIONS ................................................................................ 41
4.5.1.3 POWER REQUIREMENTS (AC & DC) .................................................................... 41
4.5.1.4 DISK ALLOCATION ........................................................................................... 41
4.6 WCE DIMENSIONS / POWER SUPPLY AND CONSUMPTION .......................................................................... 42
4.6.1 characteristics for modules used for central office version ............................................. 42
4.6.2 WCE telecom version Physical Specifications .............................................................. 43
4.6.3 WCE Cabinet single enclosure ................................................................................ 43
4.6.4 WCE Cabinet dual enclosure .................................................................................. 43
4.6.5 WCE POWER specifications Tables ........................................................................... 44
4.6.5.1 EXTERNAL POWER REQUIREMENTS ....................................................................... 44
4.6.5.2 INTERNAL POWER REQUIREMENTS ....................................................................... 44
4.6.6 c7000 enclosure power dissipation tables .................................................................. 44
4.6.6.1 C7000 BLADE SYSTEM ENCLOSURE ...................................................................... 44
4.6.6.2 BL 460 G8 BLADE SERVER E5-2680V2 CPU ........................................................ 44
4.6.7 system power dissipation ..................................................................................... 45
4.6.8 WCE System level power and heat dissipation ............................................................. 45
4.6.8.1 PRIMARY CABINET SINGLE ENCLOSURE 80% COMPUTE UTILIZATION ............................ 45
4.6.8.2 PRIMARY CABINET DUAL ENCLOSURE 80% COMPUTE UTILIZATION .............................. 45
4.6.8.3 EXPANSION CABINET SINGLE ENCLOSURE 80% COMPUTE UTILIZATION ......................... 46
4.6.8.4 EXPANSION CABINET DUAL ENCLOSURE 80% COMPUTE UTILIZATION ........................... 46
4.6.9 c7000 enclosure power sizing ................................................................................ 46
4.6.9.1 SITE POWER AND GROUND CABLING ..................................................................... 46
4.6.10 WCE Cabinet level power infrastructure requirements ................................................. 46
4.7 INTERNAL & EXTERNAL CONNECTIVITY DESCRIPTION................................................................................ 47

5 WCE SYSTEM ARCHITECTURE ........................................................................................................... 48


5.1 WCE IN UMTS ARCHITECTURE ........................................................................................................... 49
5.2 WCE INTERNAL LAN ......................................................................................................................... 50
5.3 WCE LINK AND SWITCHES REDUNDANCY ............................................................................................... 50
5.4 WCE TELECOM CONNECTIVITY TO EXTERNAL NETWORK ........................................................................... 51
5.5 NETWORK CONFIGURATION ................................................................................................................. 51
5.5.1 WCE1 Platform .................................................................................................. 51
5.5.2 WCE2 Platform .................................................................................................. 52
5.5.3 WCE4 platform single network ............................................................................... 53
5.5.4 WCE4 platform dual network ................................................................................. 54
5.6 DATA FLOWS .................................................................................................................................. 55
5.7 WCE OAM INTERFACES AND CONFIGURATION ........................................................................................ 57


5.7.1 in-rack aggregation switch configuration ................................................................... 58

6 WCE SOFTWARE PLATFORM ............................................................................................................. 59


6.1 LRC MANAGER VIEW ........................................................................................................................ 59
6.2 FUNCTIONALITY ............................................................................................................................... 60
6.3 LRC MGR PLATFORM INTERFACES ....................................................................................................... 60
6.4 SOFTWARE MANAGEMENT FOR WCE PLATFORM ...................................................................................... 61
6.5 WCE 3G LINUX DISTRIBUTION MANAGEMENT ......................................................................................... 62

7 WCE RNC REFERENCE ARCHITECTURE ............................................................................................. 62


7.1 UMTS RNC VM CONFIGURATION ........................................................................................................ 62
7.1.1 dynamic VM allocation - anti affinity rules ................................................................. 64
7.2 CMU VM STRUCTURE ....................................................................................................................... 65
7.2.1 Call Processing aka Call-P or c-plane ....................................................................... 66
7.2.2 user plane processing aka u-plane ........................................................................... 66
7.3 UMU VM STRUCTURE....................................................................................................................... 67
7.3.1 call processing aka Call-P or c-plane ........................................................................ 68
7.3.2 user plane processing aka u-plane ........................................................................... 68
7.4 PC VM STRUCTURE .......................................................................................................................... 69
7.4.1 overview ......................................................................................................... 69
7.4.2 mapping nodeb Ids to n@ ..................................................................................... 70
7.5 RESOURCE ALLOCATION ................................................................................................................... 71
7.5.1 Bottleneck elimination ........................................................................................ 72
7.5.2 No Single point of failure ..................................................................................... 73
7.6 RESOURCE RESERVATION .................................................................................................................. 74
7.7 WCE RNC CARRIER GRADE DESCRIPTION .............................................................................................. 74
7.8 CARRIER GRADE REQUIREMENT FOR A TENANT ........................................................................................ 75
7.9 WCE IP ARCHITECTURE .................................................................................................................... 77
7.9.1 tenant internal ip address .................................................................................... 78
7.9.2 rnc internal address usage .................................................................................... 79
7.9.3 external ip address dimensioning ............................................................................ 80
7.9.4 wce ip tunneling ................................................................................................ 80
7.9.5 deployment of the tunnel ..................................................................................... 81
7.9.6 scope of the tunneled ip addressing (192.168.x.y) ....................................................... 82
7.9.7 gre implementation note...................................................................................... 82
7.9.8 IP Engineering architecture with 2 rnc tenant ............................................................. 82

8 WCE PLATFORM OVERVIEW .............................................................................................................. 85


8.1 WCE PLATFORM COMPONENT ROLES .............................................................................................. 86
8.1.1 LRC Mgr .......................................................................................................... 86
8.1.2 VmWare Vcenter server (Vcs) ................................................................................ 87
8.1.3 vcenter server core services .................................................................................. 88
8.1.4 VMWare functions .............................................................................................. 88
8.1.5 Disk Access VM .................................................................................................. 90
8.1.5.1 REQUIREMENT FOR 3G DISK ACCESS .................................................................... 90
8.1.5.2 TENANT SOFTWARE DISK ................................................................................... 90
8.2 WCE RNC CPLANE AND UPLANE DATAPATH ........................................................................................... 91
8.3 VLAN TAGGING AND CONFIGURATION .................................................................................................... 92


8.3.1 Non Telecom VLAN description .............................................................................. 92


8.3.2 telecom vlan description ...................................................................................... 94
8.3.3 RNC telecom vlan separation ................................................................................. 94

9 WCE TRANSPORT OVERVIEW ............................................................................................................ 96


9.1 WCE TRANSPORT COMPONENT ........................................................................................................... 97
9.2 NIC TEAMING AND DATA FLOW ............................................................................................................. 99

10 WCE DIMENSIONING RULES ........................................................................................................... 100


10.1 OVERLOAD INDICATORS .................................................................................................................. 100
10.1.1 local overload ................................................................................................ 100
10.1.2 overload decision table ..................................................................................... 100
10.2 VRNC DIMENSIONING RULES....................................................................................................... 103
10.2.1 TRAFFIC ........................................................................................................ 103
10.2.2 vRNC Dimensioning Rules.................................................................................. 104
10.2.3 Coverage limits ............................................................................................... 104

11 WCE RNC 3G CAPACITY DESCRIPTION .......................................................................................... 104


11.1 WCE RNC 3G CAPACITY LICENSING ................................................................................................... 104
11.2 WCE RNC 3G CAPACITY STATUS .................................................................................................... 105
11.2.1 capacity evaluation from customer profile ............................................................. 106
11.2.2 Throughput estimation methodology ..................................................................... 109

12 WCE SHADOW UPGRADE DESCRIPTION ........................................................................................ 109


12.1 SHADOW MANAGEMENT GENERIC INFRASTRUCTURE .............................................................................. 110
12.2 SHADOW DEPLOYMENT PROCEDURE .................................................................................................. 111


LIST OF FIGURES
Figure 1: Wireless Cloud Element Structure....................................................................................................... 20
Figure 2: Virtual Platform ............................................................................................................................................ 21
Figure 3: Virtualization Operation ............................................................................................................................. 22
Figure 4: WIRELESS CLOUD ELEMENT RNC ARCHITECTURE .................................................................................. 24
Figure 5: HP Blade System c7000 Enclosure Front View ................................................................................... 27
Figure 6: HP Blade System c7000 Enclosure Rear View .................................................................................. 28
Figure 7: WCE Primary Cabinet Single Enclosure with OAM switches ............................................................... 30
Figure 8: WCE Primary Cabinet Dual C7000 Enclosure ......................................................................................... 31
Figure 9: WCE Expansion Cabinet Single C7000 Enclosure .................................................................................. 31
Figure 10: WCE Expansion Cabinet Dual C7000 Enclosure ................................................................................... 32
Figure 11: HP Blade Server BL460 G8 ..................................................................................................................... 33
Figure 12: HP Blade Server BL460 G8 Layout ....................................................................................................... 34
Figure 13: HP 6125XLG Faceplate Port Assignment ............................................................................................ 36
Figure 14: HP 6125G OAM Switch Ports Allocation ............................................................................................. 37
Figure 15: E5424 Controller Drive Tray Front view ............................................................................................... 38
Figure 16: NetApp E5424 Controller Drive .............................................................................................................. 39
Figure 17: HP DL380p WCE Management Servers ............................................................................................... 40
Figure 18: Management Server Disk Allocation .................................................................................................... 42
Figure 19: WCE UMTS Architecture .......................................................................................................................... 49
Figure 20: WCE Internal LAN ...................................................................................................................................... 50
Figure 21: WCE 4 Telecom Internal Configuration with link redundancy ....................................................... 51
Figure 22: WCE 1 Telecom Connectivity .............................................................................................................. 52
Figure 23: WCE 2 Telecom Connectivity .............................................................................................................. 53
Figure 24: Standard WCE4 Uplink Configuration (Single Network one pair of router) ............................... 54
Figure 25: Standard WCE4 Uplink Configuration (Dual Network two pair of routers)................................. 55
Figure 26: Packets Flow for two tenants ................................................................................................................. 56
Figure 27: Packets Flow for a single tenant ........................................................................................................... 56
Figure 28: OAM Network Context for a WCE 2 Configuration ............................................................................ 57
Figure 29: WCE4 OAM Configuration with in-rack aggregation switch (6125G) ........................................... 58
Figure 30: VMs Distribution ........................................................................................................................................ 63
Figure 31: CMU Role ..................................................................................................................................................... 65
Figure 32: VM CMU Structure ..................................................................................................................................... 66
Figure 33: UMU Role ..................................................................................................................................................... 67
Figure 34: VM UMU Structure ..................................................................................................................................... 68
Figure 35: PC Role......................................................................................................................................................... 69
Figure 36: VM PC Structure ........................................................................................................................................ 70
Figure 37: Carrier Grade Description ....................................................................................................................... 75
Figure 38: Tenant Description .................................................................................................................................... 77
Figure 39: IP Tunnelling ............................................................................................................................................... 81
Figure 40: Tunnelling Deployment ............................................................................................................................ 81
Figure 41: Tunnelling Addressing ............................................................................................................................. 82
Figure 42: RNC tenant VLAN Configuration ........................................................................................................... 83
Figure 43: IP@ requirements for UTRAN Telecom ............................................................................................... 84
Figure 44: IP@ requirements for UTRAN OAM ...................................................................................................... 85
Figure 45: VCenter Server ........................................................................................................................................... 87
Figure 46: Disk Access Tenant Configuration ....................................................................................................... 90
Figure 47: RNC Cplane and Uplane DataPaths ......................................................................................................... 92
Figure 48: WCE VLAN Non Telecom Strategy........................................................................................................... 92
Figure 49: Maximum set of telecom VLANs for the RNC ....................................................................................... 94
Figure 50: Maximal RNC Telecom VLAN Separation ............................................................................................... 95
Figure 51: RNC vNIC with IuPS terminates on UMU ................................................................................................ 96
Figure 52: WCE Transport Reference Architecture .............................................................................................. 96
Figure 53: WCE Transport Component .................................................................................................................... 98
Figure 54: NIC Teaming and Data Flow.................................................................................................................... 99
Figure 55: Shadow Upgrade ..................................................................................................................................... 112


LIST OF TABLES
Table 1: WCE Elements Physical Specifications ...................................................................................................... 43
Table 2: WCE Cabinet Single Enclosure Weight ...................................................................................................... 43
Table 3: WCE Cabinet Dual Enclosure Weight ......................................................................................................... 43
Table 4: WCE external power requirements ........................................................................................................... 44
Table 5: WCE internal power requirements ............................................................................................................ 44
Table 6: C7000 enclosure power dissipation ........................................................................................................... 44
Table 7: BL460G8 power dissipation ......................................................................................................................... 44
Table 8: WCE Primary Cabinet Power Level ............................................................................................................ 45
Table 9: WCE Primary Cabinet Dual Enclosure Power Level................................................................................ 45
Table 10: WCE Expansion Cabinet Single Enclosure Power Level ...................................................................... 46
Table 11: WCE Expansion Cabinet Dual Enclosure Power Level ......................................................................... 46
Table 12: LRC MGR Interface ....................................................................................................................................... 60
Table 13: WCE VM description .................................................................................................................................... 63
Table 14: WCE RNC Tenant Rules ............................................................................................................................... 64
Table 15: Multiple RNCs within a single data center ............................................................................................. 65
Table 16: NodeB Id Mapping ........................................................................................................................................ 71
Table 17: Carrier Grade Requirement....................................................................................................................... 77
Table 18: WCE IP Address Mapping ............................................................................................................................ 78
Table 19: WCE VMs Internal Address Usage ............................................................................................................. 79
Table 20: WCE Internal Reserved IP Ranges ............................................................................................................ 80
Table 21: WCE External IP Address Dimensioning .................................................................................................. 80
Table 22: Overload level actions .............................................................................................................................. 102
Table 23: WCE vRNC Dimensioning Rules ............................................................................................................... 103
Table 24: RNC Zones Traffic Profile......................................................................................................................... 107
Table 25: WCE Capacity evaluation zone1 ............................................................................................................. 107
Table 26: WCE Capacity evaluation zone 2 ............................................................................................................ 108


1 INTRODUCTION
1.1 OBJECT
This document aims at:
Providing the reader with a list of pointers to reference documents on some WCE-specific aspects where
Engineering inputs, margins and actions are limited, or which are a good source for understanding the WCE
background.
Providing a platform description for the WCE and a description of the tenant modules.
Providing the global architecture and network interfaces.
Providing the mandatory guidelines for usual capacity and dimensioning.
Consolidating information from miscellaneous sources and across several releases, where needed, to provide
a good view of the WCE RNC context and/or requirements.
Being a repository of Engineering Guidelines for the WCE RNC in relation to platform, system and some
functional aspects, in order to help take best advantage of its capabilities within customer contexts, to ease
avoidance of impactful re-engineering and to capture best practices.

Important Note: This document mainly focuses on topics related to the WCE platform itself, as well as
options offered on its side from the perspective of integration into a Network Architecture; please refer to the Iu
LR14.2 TEG for more information on a given interface, covering both ends and in more detail. Accordingly,
features related to Cell Selection/Re-selection, Call Management, Power Management
and associated algorithms are not covered here; please refer to the UPUG for those.
Please note that, as the range of applicable engineering rules is quite large, the content of this document is
subject to change without notice.

1.2 HOW THIS DOCUMENT IS ORGANIZED

1.3 AUDIENCE FOR THIS DOCUMENT


The Product Engineering Information document is primarily an external document and secondarily an
internal document; the target audience is:
Alcatel-Lucent Customers for the external version (Network Engineering and Network operations)

Alcatel-Lucent Network Engineering, Presales, Tendering, Sales, Product Marketing and Account teams for
the internal version (for information & alignment)
This version is an internal document. All technical rules shall be treated as confidential.

1.4 SCOPE OF THIS DOCUMENT


The targeted release of this version is LR14.2W. This is release 1 of the WCE RNC product.


1.5 RULES AND RECOMMENDATIONS


Engineering rules (mandatory to be followed) are presented as follows:

Rule:

Current known system/documentation restrictions are presented as follows:

Restriction:

Engineering recommendations (Alcatel-Lucent recommendations for optimal system behavior) are presented
as follows:

Engineering Recommendation:


1.6 RELATED DOCUMENTS


Reminder: This document does not aim at covering all topics where the WCE is involved; as such, the
references provided below are only those related to its scope.

1.7 STANDARDS
1.7.1 3GPP

All 3GPP specifications are available at:


http://www.3gpp.org

1.8 PROCESS
1.8.1 EXTERNAL

Tag Reference Title

[Ext_PRO_001] UMT/DCL/APP/042150 UMTS Overall Documentation Plan for


LR14.2W

1.9 PLM
1.9.1 EXTERNAL

Tag Reference Title

[Ext_PLM_001] CMN/CTRL/DD/036972 Wireless Cloud element Product Description

1.10 TECHNICAL PUBLICATIONS

NOTE: For any external communication of this content, it is the responsibility of each customer account
team to check the information against any previous communication made to the customer or any
customer-specific strategy.
1.10.1 EXTERNAL

Tag Reference Title

[Ext_NTP_001] 3MN-01652-0002-TQZZA 9771 WCE Functional Description


1.11 TECHNICAL PUBLICATIONS OPERATIONS


1.11.1 EXTERNAL

Tag Reference Title

1.12 R&D DOCUMENTS (INTERNAL)


Tag Reference Title
[Int_R&D_001] UMT/RNC/DD/XXXX LRC Mgr Functional Specifications

[Int_R&D_003] WCE System Charts (Standard Document) WCE System Charts:
https://wcdma-ll.app.alcatel-lucent.com/livelink/livelink.exe?func=ll&objId=66052722&objAction=browse&viewType=1

[Int_R&D_004] Telecom Link Topology Wiki documentation: https://umtsweb.ca.alcatel-lucent.com/wiki/bin/view/WcdmaRNC/TelecomLinkTopology

[Int_R&D_005] OAM Network Context Wiki documentation: https://umtsweb.ca.alcatel-lucent.com/wiki/bin/view/WcdmaRNC/OAMLinkTopology

[Int_R&D_006] Carrier Grade Context Wiki documentation: http://umtsweb.ca.alcatel-lucent.com/wiki/bin/view/WcdmaRNC/LRCCarrierGrade#WCE_3G_RNC_Carrier_Grade_Overvie

1.13 ENGINEERING

1.13.1 GLOBAL DOCUMENT

Tag Reference Title

[Gl_Eng_001] UMT/IRC/APP/011676 Iu TEG LR14-2 last edition

1.13.2 EXTERNAL

Tag Reference Title

[Ext_ENG_002] UMT/DCL/DD/0020 EN UTRAN Parameters User Guide (v15.02).

[Ext_ENG_003] UMT/IRC/APP/041273 WCE RNC CIQ V01.03/EN

[Ext_ENG_004] UMT/IRC/APP/41274 WCE Datafill Cookbook for LR14.2 V01.03/EN

1.13.3 INTERNAL

Tag Reference Title

[Int_ENG_001] UMT/PFO/APP/042330 WCE MOPG V02.02/EN


1.14 I&C
1.14.1 CUSTOMER DOCUMENTATION

Tag Reference Title

[Ext_I&C_001] 8BL 00704 0087 DRZZA SPP-70 Specification for 9771 WCE site preparation
https://all1.eu.alcatel-lucent.com/sites/Notesmigrati_2/Operation%20Support/Environmental%20Product%20Support/Site%20Preparation%20for%20Product/Site%20preparation%20documents%20for%20BSS%20equipments/Forms/AllItems.aspx

[Int_I&C_002] 3MN-01652-0003-RJZZA WCE RNC Hardware Installation (internal document)

[Int_I&C_003] 9YZ-04157-0125-RJZZA WCE RNC Platform Management Software Installation

[Int_I&C_004] IEH 522 Section 301 ed8 / 9YZ-04157-0143-RJZZA WCE RNC Commissioning and Integration Manual
https://wcdma-ll.app.alcatel-lucent.com/livelink/livelink.exe?func=ll&objId=67367614&objAction=browse&viewType=1

[Int_I&C_005] 9YZ-04157-0126-RJZZA WCE RNC Platform Commissioning Manual

[Ext_I&C_006] 3MN-01652-0013-PCZZA WCE RNC Maintenance


2 ABBREVIATIONS AND DEFINITIONS

2.1 ABBREVIATIONS
3GPP 3rd Generation Partnership Project

CMU Cell Management Unit

DA Disk Access

DPDK Data Plane Development Kit

DRS Distributed Resource Scheduler (VMware feature)

ESXi VMware VM management server

GRE Generic Routing Encapsulation

iLO Integrated Lights-Out

IRF Intelligent Resilient Framework

iSCSI Internet Small Computer System Interface

LRC Mgr Light Radio Controller Management

LKDI Licence Key Delivery Infrastructure

MAD Multi Active Detection

MC-LAG Multi Chassis Link Aggregation

MTF Messaging Transport Framework

NBAP NodeB Application Part -- signalling protocol responsible for the control
of the Node B by the RNC

NAS Network Attached Storage

NIC Network Interface Controller

NI Network Interface

NHR Next Hop router

OA Onboard Administrator

PC Protocol Converter

RAN Radio Access Network

RPM Red Hat Package Manager

SCSI Small Computer System Interface

SAN Storage Area Network

TBM Transport Bearer Management

TRM Transport Resource Management


UDP NAT UDP Protocol with Network Address Translation

UMU UE Management Unit

vDS Virtual Distributed Switch

VLAN Virtual Local Area Network

VNIC Virtual Network Interface Controller

VMM Virtual Machine Manager or Hypervisor

WCE Wireless Cloud Element

WMS Wireless Management System

Vxell VxWorks Emulator Call Server Application


2.2 DEFINITIONS
NEBS (Network Equipment Building System): Telcordia standards for power cabling, grounding,
and environmental safety, power and operation interfaces for telecommunications equipment. The NEBS
frame is used to house telecommunications equipment.


3 WCE ARCHITECTURE INTRODUCTION

3.1 WCE PLATFORM


The ALU system dedicated to managing and controlling the 3G tenant system is named the 9771
Wireless Cloud Element, abbreviated WCE.

The WCE architecture comprises the following layers, summarized in the sketch after this list:

HP hardware, managed by the HP Insight administrator
A VMware virtualisation layer providing virtual machines (VMs) and virtual switches (vDS), managed by
the vSphere Client.
The virtualisation layers are configured by the Alcatel-Lucent LRC Mgr software to provide further
isolation and flexibility; vCenter manages the LRC Mgr, and the LRC Mgr communicates with the WMS.
The UMTS RNC 3G application runs on Linux after migration from VxWorks via Vxell; it is
managed by the WMS, which provides a GUI view to the operator(s).
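
The Python sketch below simply summarizes this management layering; it is purely illustrative (the strings mirror the list above and are not a configuration format or data structure used by the product):

# Illustrative summary of the WCE management stack; not a product configuration format.
WCE_STACK = [
    {"layer": "HP hardware (c7000, blades, SAN)",            "managed_by": "HP Insight administrator"},
    {"layer": "VMware virtualisation (VMs, vDS)",            "managed_by": "vSphere Client"},
    {"layer": "ALU LRC Mgr (tenant isolation, flexibility)", "managed_by": "vCenter; reports to the WMS"},
    {"layer": "UMTS RNC 3G application (Linux via Vxell)",   "managed_by": "WMS (operator GUI)"},
]

for level in WCE_STACK:
    print(f'{level["layer"]:48} -> managed by {level["managed_by"]}')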

3.2 WIRELESS CLOUD ELEMENT ARCHITECTURE


The Wireless Cloud Element is a pivotal element in the success of any Radio Access Network
(RAN). Using cloud and virtualization technology, the WCE combines the Alcatel-Lucent Radio Network
products into one unified manageable package providing a flexible, scalable, and customizable wireless
platform solution. In its first release, the WCE hosts the following Alcatel-Lucent network application,
known as a tenant:
Universal Mobile Telecommunication System (UMTS) Radio Network Controller (RNC).
For the LR14.2W release, the description is based on the WCE RNC 3G element.

The list of tenants offered within the Wireless Cloud Element can be configured as individual
systems or they can be combined to share hardware computer processing units (CPU), disk storage, and
transport networking equipment.

This consolidated product reduces the system hardware required for each individual product
while increasing capacity and network flexibility demanded by the Radio Access Networks.

The Wireless Cloud Element needs to span a wide variety of deployment models:
Single or multi-technology (WCDMA/LTE) solutions
Large configurations built by adding additional commoditized hardware
Software deployment on top of existing cloud
Virtualization of the controller applications and independence from the computing platform allows
us to address the entire market while controlling the verification costs.


Wireless Cloud Element applications need to be modified to limit the volume of inter-VM messaging
and to mitigate the impact of extra latency and jitter, in order to allow for horizontal scalability. As a result,
the architecture of the controller applications differs from previous versions, as described in the following
sections:

Figure 1: Wireless Cloud Element Structure


3.3 VIRTUALIZATION OVERVIEW

Virtualization technology effectively provides an environment where each application sees itself
operating on a simple PC interconnected to other VMs by a simple LAN.
Each VM essentially has private:
CPU & Memory Space,
Network Interface
Storage
A centralized management system (vCenter Server in this case) manages the hardware to
provide the virtual environment. Applications are unaware of the existence of the vCenter Server.

Figure 2: Virtual Platform


In a virtual environment, each Virtual Machine (VM) sees itself operating on a simple x86 PC.
This virtual environment provides the CPU, memory, network interface (NIC) and storage device
via standard drivers.
The Virtual Machine Manager (VMM or Hypervisor) implements the simple PC abstraction by
ensuring that access to hardware is controlled, and possibly by running drivers for the applications without
their knowledge.
A Virtual Machine Manager (VMM) or Hypervisor creates a standard environment for each
Virtual Machine (VM) independent of other VMs.
While the Guest Application is executing normal x86 instructions, it uses the CPU as it would
without virtualization


When a Guest attempts to execute kernel instructions (e.g. a SYSCALL to enter a driver), the CPU
causes a VM exit, which allows the VMM to run. The VMM implements the restricted function on behalf
of the Guest, then uses a VM entry to allow the Guest to continue from where it was stopped.
The VMM can also be invoked for interrupts (e.g. TTi) and network I/O, which can significantly slow the
application compared with a non-virtualized implementation.
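
As a rough, purely illustrative order of magnitude, this overhead can be estimated as the exit rate multiplied by the cost of one exit/entry round trip. The figures below are assumed example values, not WCE measurements:

# Back-of-envelope estimate of the CPU time lost to VM exits.
# Both numbers are assumptions for illustration only, not WCE measurements.
exit_cost_us = 5.0        # assumed cost of one VM exit/entry round trip, in microseconds
exits_per_second = 20000  # assumed exit rate (interrupts, network I/O) for one vCPU

overhead_fraction = exits_per_second * exit_cost_us * 1e-6
print(f"CPU time lost to VM exits: {overhead_fraction:.1%}")  # 10.0% with these assumptions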

Figure 3: Virtualization Operation

3.3.1 HORIZONTAL VS VERTICAL SCABILITY

Virtual machines are created with a specific number of virtual cores, thus emulating a physical computer of
any size.
VMs with a large number of virtual cores show vertical scalability, while large numbers of smaller
VMs show horizontal scalability.
Applications that scale horizontally will generally provide higher ultimate capacity (see the illustrative
sketch below).
RNC software changes are still underway to achieve maximum scalability.
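
The difference between the two scaling directions can be illustrated with a toy calculation; the host count, core count and per-VM overhead below are assumed values for illustration only, not WCE dimensioning figures:

# Toy comparison of vertical vs horizontal scaling (illustrative assumptions only).
cores_per_host = 20     # vertical scaling is bounded by the largest single host
hosts = 8               # horizontal scaling can spread VMs across every host
per_vm_overhead = 0.05  # assumed fixed overhead fraction per VM instance

vertical_capacity = cores_per_host * (1 - per_vm_overhead)            # one large VM on one host
horizontal_capacity = hosts * cores_per_host * (1 - per_vm_overhead)  # many smaller VMs on all hosts
print(f"vertical: {vertical_capacity:.1f} units, horizontal: {horizontal_capacity:.1f} units")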


3.4 WCE RNC ARCHITECTURE

The attributes of the RNC Architecture for the Wireless Cloud Element are as follows:
There are four roles within the RNC application and three roles provided by the Wireless Cloud Element
Platform. The RNC application roles are:

3gOAM (1+1 sparing) - Provides the wireless (OMU) and OAM interfaces.
CMU or Cell Management Unit (N+M sparing) - Maintains UMTS cells, acts as the SS7 termination
point and provides a proxy to the platform for transport resource management.
UMU or UE Management Unit (Unspared role) - Provides the entire control and user plane processing
for individual UEs.
PC or Protocol Converter (N+1 sparing) - Provides both UDP and GTP-U NAT points and transport
bandwidth management for Wireless Cloud Element applications.

The RNC architecture for the Wireless Cloud Element was changed from that of the 9370 primarily to
enable scalability by moving the bulk of the computing load to an unspared, replicatable entity. The
number of VMs required to achieve the capacity targets is described in the RNC Capacity section.

The Wireless Cloud Element platform component roles are:

LRC Mgr (no sparing) - Manages the configuration of applications or tenants and the VMs used to
create them within the Wireless Cloud Element.
vCS or vCenter Server (no sparing) - The VMware-provided VM manager.
Disk Access - NAS front end to the SAN.

These roles allow for one or more of each of the Wireless Cloud Element applications to be configured and
brought into operation on a wide variety of hardware systems.

Each of the roles in the RNC application and Wireless Cloud Element platform are described in the following
sections.


Figure 4: WIRELESS CLOUD ELEMENT RNC ARCHITECTURE
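
For reference, the roles listed above and their sparing models can be summarized as follows (an illustrative Python summary of this section, not an internal data structure of the product):

# WCE RNC application and platform roles with their sparing models, as described above.
RNC_ROLES = {
    "3gOAM": "1+1 sparing - wireless (OMU) and OAM interfaces",
    "CMU":   "N+M sparing - UMTS cell management, SS7 termination, TBM proxy",
    "UMU":   "unspared    - per-UE control and user plane processing",
    "PC":    "N+1 sparing - UDP/GTP-U NAT and transport bandwidth management",
}
PLATFORM_ROLES = {
    "LRC Mgr":     "no sparing - configuration of tenants and their VMs",
    "vCS":         "no sparing - VMware vCenter Server",
    "Disk Access": "NAS front end to the SAN",
}
for role, description in {**RNC_ROLES, **PLATFORM_ROLES}.items():
    print(f"{role:12}: {description}")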

3.4.1 3GOAM

The 3gOAM role acts as the termination point for Operation and Maintenance of the RNC and consists of two
primary sub-roles:

3G application management termination, as implemented by the RNC 9370 OMU, largely
unchanged, and
Platform management via a Netconf interface (interface to the WMS).

In addition the 3gOAM acts as a host for monitoring and control of the internal components of the
virtual RNC. The 3gOAM node is 1+1 spared.
More information is provided on the UMTS RNC VM Description.


3.4.2 CMU CELL MANAGEMENT UNIT

The CMU is the role which is responsible for the creation and management of all of the UMTS
cells in the RAN. It consists primarily of the following sub-roles:

the C-Plane for cells which consists of the NodeB Core Process from the RNC 9370 architecture,
the U-Plane for cells which is an instance of the 9370 RAB processes but specially targeted to handle
common channel traffic on the cells,
the lower two layers of the SS7 networking stack (for IP only), specially the SCTP and M3UA
protocols, and
A proxy for a distributed version of TBM which will handle management of transport resources (UDP
port numbers and link bandwidth).

CMUs are N+M spared, where M is defined as the number of instances of the emulated VxWorks
running within a virtual machine.

More information is provided on the UMTS RNC VM Description

3.4.3 UMU UE MANAGEMENT UNIT

The UMU role is responsible for all aspects of UE management and consists of the following sub-roles:

the C-Plane for UEs, which consists of the UE Core Process from the RNC 9370 architecture,
the U-Plane for UE traffic, which is an instance of the 9370 RAB processes but specifically targeted to
handle UE traffic, and
the upper layer of the SS7 networking stack, specifically SCCP.

All of the context and processing for a single UE happens within a single UMU role and does not
depend on the presence of any other UMU role. In fact, the UMU roles are not aware of the existence of
any other UMU role. As UMUs do not support any form of sparing at the control, user or signalling
plane level, a failure of a UMU will cause the calls that were hosted on that UMU to be lost. Notification
of UMU failure is provided by the new MTF messaging system and will result in each of the
CMUs independently recovering the resources that the failed UMUs were using.

More information is provided on the UMTS RNC VM Description

3.4.4 PC PROTOCOL CONVERTER

One of the primary functions of the PC is UDP NAT, which allows private IP addresses unique
to the controllers to be hidden from external nodes, thus allowing for many network advantages such as
reduced consumption of externally visible addresses and enhanced security. As a central traffic
handling point, a failure of the PC will impact the RAN, but the impact is not uniform across all connections:
loss of a particular UE's traffic has very limited scope, while loss of the common channel traffic for a cell
will result in that cell becoming unavailable (which potentially has wide-ranging impact).

The Wireless Cloud Element architecture recognizes these differences and allows common channel
connections to be handled differently from other connections; specifically, common channel connections
terminate directly on the CMU that is associated with the cells, while other traffic is terminated on the PCs
but treated as fully dynamic and not shared between PCs. A PC failure will result in dropped connections,
so the Wireless Cloud Element will inform other nodes such that appropriate actions can be taken to clean
up the failed calls. Note that this behaviour is similar to the impact of a failed CMU, where current calls
will be dropped but the cells will stay active.
The transport bandwidth management function statically reserves bandwidth for common
channels, or other traffic, that is not accounted for by direct measurement within the PC.
The protocols as defined by the 3GPP standard require the NAT function to be stateful, as UDP
port numbers are exchanged via an NBAP protocol side channel; therefore, the PC requires two parts:

a component (TRM) that is responsible for allocation of UDP port numbers and setting up
of connections (this is a much simplified version of the 9370's TBM), and
the PC NAT component that implements the stateful NAT, again a simplified version of
the PC component from the 9370.
The simplifications result from the absence of a need to support ATM transport networks and the
actual conversion of traffic from AAL2/ATM to UDP/IP.
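
To make the TRM / NAT split concrete, the following minimal Python sketch shows a stateful UDP port mapping of the kind described above. It is a conceptual illustration only, not the PC implementation; the class names, port range and addresses are assumptions chosen for the example:

# Conceptual sketch of a stateful UDP NAT: a TRM-like allocator hands out externally
# visible UDP ports and records the mapping; the NAT component rewrites addresses on
# the data path. Illustrative only - not the WCE PC implementation.
class TrmAllocator:
    def __init__(self, external_ip, port_range=range(40000, 50000)):  # assumed port range
        self.external_ip = external_ip
        self.free_ports = iter(port_range)
        self.mappings = {}  # (internal_ip, internal_port) -> external_port

    def allocate(self, internal_ip, internal_port):
        # Reserve an external port for a connection whose port numbers are
        # exchanged over the NBAP side channel.
        external_port = next(self.free_ports)
        self.mappings[(internal_ip, internal_port)] = external_port
        return self.external_ip, external_port

class UdpNat:
    def __init__(self, trm):
        self.trm = trm

    def translate_outbound(self, src_ip, src_port, payload):
        # Rewrite the internal source address to the externally visible one.
        external_port = self.trm.mappings[(src_ip, src_port)]
        return self.trm.external_ip, external_port, payload

trm = TrmAllocator("203.0.113.10")      # documentation/example external address
trm.allocate("192.168.1.20", 5004)      # internal address hidden from external nodes
nat = UdpNat(trm)
print(nat.translate_outbound("192.168.1.20", 5004, b"user plane packet"))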

More information is provided on the UMTS RNC VM Description

3.5 WCE SITE PREPARATION / COMMISSIONING / INTEGRATION


For site preparation, refer to the CID document Site Engineering Method SPP70
[Ext_I&C_001]
For hardware installation, refer to 9771 WCE Hardware Installation [Int_I&C_002]
For Platform Management Software Installation, please refer to [Int_I&C_003]
For WCE commissioning and integration, please refer to [Int_I&C_004]
For the WCE platform commissioning manual, refer to [Int_I&C_005]
For RNC maintenance, refer to [Ext_I&C_006]


4 WCE HARDWARE & MODULES DESCRIPTION


The primary configuration cabinet (WCE1) contains all the necessary modules for operation on a
customer network. The Platform Management Server module has to be installed at the customer
site.
The WCE hardware contains an HP BladeSystem c7000 Enclosure, which accommodates BladeSystem c-
Class server blades, storage blades, and interconnect modules, and provides all the power, cooling, and I/O
infrastructure needed to support them throughout the next several years.
The HP c7000 chassis is composed of (refer to the figures below):
Up to 16 half-height (HH) server, storage, or other option blades per enclosure
Up to 8 interconnect modules simultaneously supporting a variety of network interconnect fabrics
such as Ethernet, Fibre Channel (FC), InfiniBand (IB), Internet Small Computer System Interface (iSCSI),
or Serial-Attached SCSI (SAS)
Up to 10 Active Cool 200 fan kits
Up to 6 power supplies
Redundant BladeSystem Onboard Administrator (OA) management modules (optional active-standby design)

4.1 HP BLADE SYSTEM C7000 ENCLOSURE


The following figure shows the c7000 enclosure front face:

Key elements visible on the front: up to 16 half-height BL460c G8 server blades, up to 6 power
supply units, and the HP Insight Display for enclosure management.

Figure 5: HP Blade System c7000 Enclosure Front View


The c7000 enclosure rear view is as follows:
OA (Onboard Administrator): HP c7000 hardware management GUI
6125G: WCE OAM management L2 switch
6125XLG: telecom traffic and IRF (Intelligent Resilient Framework) ring L2 switch

Figure 6: HP Blade System c7000 Enclosure Rear View

The heart of the c7000 enclosure management is the Onboard Administrator module. It performs
several management and maintenance functions for the entire enclosure:
Detecting component insertion and removal
Identifying components and required connectivity
Managing power and cooling
Controlling components
Managing component firmware upgrades
Each C7000 enclosure is equipped with an active and standby Onboard Administrator (OA) for
shelf management. The OAs appear in a tray at the bottom of the enclosure (below the interconnect
bays). At the center of the tray is a pair of 1Gbps ports which are used to connect multiple c7000 shelves
in an open daisy chain. This allows an OAM user to log into a single OA and have log-in access to all
OAs in the chain. The tray hosting the OAs also contains a 1G switch which connects the OAs of a given
shelf to all its integrated Lights Out (iLO) controllers. These iLO controllers are management
microcontrollers integrated onto each of the blades and provide blade hardware and firmware
management functions. The OAs also provide some limited management of the blade switches used in
the interconnect bays of each shelf. Each OA has an external 1Gbps management port to provide
connectivity to the HP Systems Insight Manager (HP SIM) software running on a remote server.


The WCE c7000 blade line-up consists of up to 16 half-height BL460c Gen8 server blades. The
WCE design for these blades consists of a pair of Intel Xeon e5-2680v2 10-core (Ivy Bridge) CPUs
running at 2.8GHz. Each blade is equipped with 64GB of DRAM, and the NIC (Network Interface Chip)
which connects to the interconnect fabric is an Emulex BE3 Converged Network Adapter (CNA). This
Emulex component offers two 10Gbps Ethernet ports with hardware iSCSI acceleration. Each 10Gbps
port is connected to a different blade switch for path redundancy. The blades are diskless and take
advantage of the hardware iSCSI capability of the BE3 to boot directly from the SAN used in the WCE
architecture. (See Hardware Description below)

Each c7000 enclosure is equipped with a pair of 6125XLG 10/40G Ethernet switches in the top
two interconnect bays. Each 6125XLG offers 16 x 10 Gbps ports for fabric interconnect (downlinks to
blades), 8 x 10 Gbps and 4 x 40 Gbps faceplate ports (uplinks), and 4 x 10 Gbps cross-connect
backplane ports between the 6125XLG switches in adjacent slots. The uplink ports provide for
connectivity to the external network (Next Hop Router) NHR ports as well as for inter-c7000 connectivity
in a multi-shelf WCE system. Connections to the storage array (SAN) are also made through the
6125XLG uplink ports.
On the WCE primary cabinet, the first c7000 enclosure (the lower one) contains two redundant
HP 6125G switches for OAM aggregation purposes.
The remaining element of WCE hardware is the storage array. This consists of an iSCSI SAN to
provide centralized storage for all the blades in the system. WCE uses a NetApp e5400 SAN as the
storage element. The NetApp e5400 has dual controllers with 12GB of replicated battery-backed write
cache. Each controller is equipped with a pair of 10Gbps iSCSI ports which are cross-connected to the
6125XLG switches in the primary (first) c7000 enclosure. The SAN is equipped with 24 SFF (small-form
factor) hard drives. An important factor in the selection of the NetApp e5400 for WCE is that it is available
in a DC powered, NEBS certified variant to meet Telecom customer needs.
WCE plans to make two cabinet/power configurations available, one suitable for deployment in a
central office (Telecom) environment (e.g. DC power, seismic rated cabinet, 50C operation), and the
other intended for datacenter deployment (AC power, no earthquake rating, 40C operation). The initial
offering to customers is the DC powered seismic system. A maximum of two c7000 enclosures are
supported in the Telecom rack. This is due to:
Available rack space in the seismic cabinet
Overall weight of the c7000 shelves and the seismic cabinet (e.g. one c7000 loaded with
blades weighs 227 kg)
Overall power consumption (one c7000 can use almost 6.4 kW) and heat release
The datacenter variant can support more c7000 enclosures in the physical cabinet racking space,
but for consistency with the seismic rack configuration, and due to weight and powering concerns of
multiple c7000 enclosures in a single cabinet, WCE has set the limit to 2 c7000 enclosures regardless of
cabinet type.
A large WCE system supports two cabinets, for up to four c7000 enclosures and 64 blades as a
single large network node. The two cabinets in the large system configuration must be located adjacent
to each other due to length limitations of the 40G networking cables used to interconnect the two cabinets
(maximum distance 100 m).
The following figures show the different available shelf configurations, from one cabinet with one
c7000 enclosure up to two fully equipped cabinets (primary and expansion, four c7000 enclosures).

Figure 7: WCE Primary Cabinet Single Enclosure with OAM switches


Figure 8: WCE Primary Cabinet Dual C7000 Enclosure

Figure 9: WCE Expansion Cabinet Single C7000 Enclosure


Figure 10: WCE Expansion Cabinet Dual C7000 Enclosure

4.2 HP PROLIANT BL460C G8 SERVER BLADE MODULE DESCRIPTION


4.2.1 BLADE SERVER FRONT VIEW

This is the HP blade server model provided with the LR14.2W release. This model may change
with blade evolution.


Up to 16 half-height BL460c G8 blades can be plugged in at the front of the c7000 enclosure.
The blades are equipped with 2 Intel Xeon processors with 10 cores per CPU and 64 GB of RAM.
The blade also uses a dual 10G Ethernet port network module for communication.

Figure 11: HP Blade Server BL460 G8


4.2.2 BLADE SERVER DESCRIPTION

Figure 12: HP Blade Server BL460 G8 Layout

1. Two (2) PCIe 3.0 mezzanine I/O expansion slots
2. FlexibleLOM adapter
3. MicroSDHC card connector
4. FlexibleLOM connectors (supporting one (1) FlexibleLOM)
5. Sixteen (16) DDR3 DIMM memory slots (8 per processor)
6. HP Smart Array P220i Controller connector
7. Up to two (2) Intel Xeon E5-2600 family processors
8. Internal USB 2.0 and Trusted Platform Module (TPM) connectors
9. Two (2) small form factor (SFF) hot-plug drive bays
10. HP c-Class Blade SUV (Serial, USB, VGA) connector
11. HP Smart Array P220i Controller with 512MB FBWC
12. Access panel


4.3 CONNECTIVITY MODEL WITH HP BLADE SWITCH


Each c7000 enclosure is equipped with a pair of 6125XLG 10/40G Ethernet switches in the top
two interconnect bays. Each 6125XLG offers 16 x 10 Gbps ports for fabric interconnect (downlinks to
blades), 8 x 10 Gbps and 4 x 40 Gbps faceplate ports (uplinks), and 4 x 10 Gbps cross-connect
backplane ports between the 6125XLG switches in adjacent slots. The uplink ports provide for
connectivity to the external network (Next Hop Router) NHR ports as well as for inter-c7000 connectivity
in a multi-shelf WCE system. Connections to the storage array (SAN) are also made through the
6125XLG uplink ports.
The WCE network interfaces can be configured in different ways and at different rates to meet
customer requirements. WCE network ports will operate at 10 Gbps or 1 Gbps. Each WCE shelf offers 8
uplink ports on each 6125XLG switch offering redundancy at the port level through various networking
protocols such as HP's IRF (Intelligent Resilient Framework) and MC-LAG (Multi-Chassis Link
Aggregation Groups). IRF is used in the WCE configuration, and the Next-Hop Routers (NHRs) connecting
the WCE to the network use MC-LAG in either an active-active or active-standby configuration.
There are some port restrictions on the WCE primary shelf for WCE infrastructure needs (i.e.
SAN storage ports, RNC/VMware OAM ports).
The WCE ports can be allocated to support a single common pair of NHR (common Core
Network and RAN interfaces) or dual pairs of NHR where the customer separates the Core Network
interfaces through one pair of NHR and the RAN interfaces through a different pair of NHR.

NB: For OAM aggregation purposes, an in-rack switch pair is installed on the primary WCE1 shelf.
Two HP6125G switches which support only 8 x 1 Gbps uplinks will be installed in the bays
just below the primary shelf's 6125 XLGs. The tenant OAM uplinks from the 6125XLGs will connect to
the 6125G faceplates. All the OA RJ-45 links and the two RJ-45 links from the SAN controllers will also
connect to the 6125G faceplates. Two 1 Gbps links (one from each 6125G) will connect to the customer's
OAM network edge equipment. This reduces the overall OAM link count from 12 to 2 and allows for 1
Gbps to 100 Mbps down-rating for all WCE OAM flows.

Depending on the WCE configuration, the connectivity and the port assignment will differ.
Please refer to the chapter Internal & External Connectivity Description (section 4.7).


4.3.1 HP6125XLG FACEPLATE PORT ASSIGNMENT

Figure 13: HP 6125XLG Faceplate Port Assignment


Summary of the HP 6125XLG ports:

HP 6125XLG: 8 x 10 Gb/s and 4 x 40 Gb/s uplink ports (towards the network)
HP 6125XLG: 16 x 10 Gb/s downlink ports (towards the blades)
HP 6125XLG: 4 x 10 Gb/s cross-connect ports

4.3.2 HP6125G PORT ASSIGNMENT

For OAM aggregation purposes, an in-rack switch pair is installed on the primary WCE1 shelf.
Two HP6125G switches which support only 8 x 1 Gbps uplinks will be installed in the bays just
below the primary shelf's 6125 XLGs. The tenant OAM uplinks from the 6125XLGs will connect to the
6125G faceplates. All the OA RJ-45 links and the two RJ-45 links from the SAN controllers will also
connect to the 6125G faceplates. Two 1 Gbps links (one from each 6125G) will connect to the customer's
OAM network edge equipment. This reduces the overall OAM link count from 12 to 2 and allows for 1
Gbps to 100 Mbps down-rating for all WCE OAM flows.
WCE provides an integrated pair of OAM switches to consolidate all the internal OAM interfaces
onto a single pair of OAM ports facing the customer OAM network.
WCE OAM ports are tri-speed and can be configured for operation as follows:
o 10Base-T
o 100Base-T (Fast Ethernet)
o 1000Base-T (Gigabit Ethernet)
The following WCE internal OAM ports are aggregated onto this pair of customer OAM ports:
o c7000 Onboard Administrator ports: 2 per c7000 enclosure in the WCE system
o NetApp e5400 SAN controller management ports: 2 per WCE system
o RNC/VMware OAM ports: 2 per WCE system
For a WCE1 system, 6 internal OAM ports are aggregated onto a single pair of external
OAM ports
For a WCE4 system, 12 internal OAM ports are aggregated onto a single pair of external
OAM ports


Figure 14: HP 6125G OAM Switch Ports Allocation

Summary of the HP 6125G ports:

HP 6125G: 16 x 1 Gb/s downlink ports (towards the blades)
HP 6125G: up to 8 x 1 Gb/s uplink ports (towards the network)
HP 6125G: up to 2 x 10 Gb/s IRF stacking ports
HP 6125G: one 10 Gb/s cross-link port

For the connectivity configurations, please see the chapter Internal & External Connectivity
Description (section 4.7).

4.4 NETAPP E5400 SAN DESCRIPTION


The storage subsystem is composed of the NetApp e5424 Storage Area Network (SAN) with 24
600GB 10K RPM SAS drives and fully redundant controllers. Software from Symantec running in a pair
of virtual machines within the WCE provides Network Attached Storage (NAS) functionality to all of the
WCEs virtual machines. The combination of these technologies provides highly available access to
storage volumes from virtual machines independent of where these virtual machines are physically
allocated.


Figure 15: E5424 Controller Drive Tray Front view

SAN STORAGE
Fully redundant path from host ports to drives
Each controller can access all drive ports
Each drive chip can access every drive
Top-down, bottom-up cabling ensures continuous access
Expandable via an expansion unit from 24 to 48 drives
DC powered, NEBS certified

The NetApp E5400 SAN has active components on the back panel. A failure of one of
these components does not cause any SAN outage; the SAN is configured using DDP, which is similar to
RAID 6 with two logical spare drives. The whole WCE cannot be brought down while the back panel is replaced.


Figure 16: NetApp E5424 Controller Drive

4.5 WCE PLATFORM MANAGEMENT SERVER


The Wireless Cloud Element requires external servers in order to manage the hardware
components (HP SIM and NetApp SANtricity) and the virtual machine environment (VMware vCenter).
- HP SIM: manages the WCE hardware components (both HP and the NetApp SAN)
- VMware vCenter: manages the WCE VMs
- SANtricity: manages the SAN software.
The WCE Platform Management Server runs Windows Server 2012 as the operating system
and includes Microsoft SQL Server 2014 as the database engine for use by the HP SIM and VMware
vCenter applications. Microsoft SQL Server 2012 shall also be used.

These servers are capable of managing a large number of Wireless Cloud Element systems and
therefore are intended to be installed as a central resource by the customer much like the OAM system.
Note that HP SIM, vCenter and SANtricity are all Windows applications, so these servers
are configured as Windows machines.
The Platform Management Server is physically connected to the first c7000 enclosure via the 6125G
switch module. Additional c7000s are linked from the 6125G switch to their Onboard Administrator
modules by Cat5 Ethernet cables included in the connectivity kits.
vCenter is a centralized management system. It is connected to all the ESXi hosts and to
each LRCE Mgr for VM creation and VM management.


vCenter would be used for:


- I&C of WCE Platform and tenants
- Upgrade of WCE platform and tenants
- Shutting down an ESXi server for blade replacement
- Monitoring for faults, ...

Figure 17: HP DL380p WCE Management Servers

The WCE Platform Management Server is the maintenance platform for HP hardware, VM
management, and NetApp SAN management
This server is not integrated into the WCE cabinets; it is installed into a rack provided by the
customer in an operations center.

4.5.1 CHARACTERISTICS

Central management server for one or more WCE network nodes


HP DL380p Gen 8 Rack Mount Server hardware platform
Microsoft Windows Server 2012/Microsoft SQL Server 2014 software platform
HP c7000 hardware monitoring/management/administration through HP System Insight
Manager
NetApp e5400 SAN monitoring/management/administration through SANtricity
Virtual Machine monitoring/management/administration through VMware vCenter

4.5.1.1 SERVER HARDWARE

Dual Intel Xeon e5-2658 2.1GHz, 8-core CPUs


64 GB DDR3-1600 DRAM
4-port GbE Network Adapter
2GB Flash-Backed Write Cache for hardware RAID controller
4 SAS 300GB 15KRPM hard drives
4 SATA 1TB 7.2KRPM hard drives

Optical drive
Redundant 750W AC or DC power supplies

4.5.1.2 PHYSICAL SPECIFICATIONS

Dimensions (HxDxW):
3.44 x 27.5 x 17.54 in
8.74 x 69.85 x 44.55 cm
Racking:
2 RU
Weight (approx.):
50 lb / 22.7 kg

4.5.1.3 POWER REQUIREMENTS (AC & DC)

100% Utilization: 380W


80% Utilization: 340W
50% Utilization: 280W

4.5.1.4 DISK ALLOCATION

Disks 1 & 2: fast 300GB 15K RPM SAS drives configured as RAID 1 for fault tolerance
Contain Windows Server 2012 OS, SQL, and WCE management applications

Disks 3 & 4 : fast 300GB 15K RPM SAS drives configured as RAID 1 for fault tolerance
Contain SQL databases for vCenter and HP System Insight Manager

Disk 5: 1TB Enterprise SATA drive; contains SQL event/error logs and the WCE load image
repository

Disks 6, 7 & 8 : 1TB Enterprise SATA drives configured as RAID 5 for fault tolerance
System backup drives for OS and SQL database emergency recovery


Figure 18: Management Server Disk Allocation

4.6 WCE DIMENSIONS / POWER SUPPLY AND CONSUMPTION


4.6.1 CHARACTERISTICS FOR MODULES USED FOR CENTRAL OFFICE VERSION

Redundant Power
Each c7000 has its own independent breaker panel with A and B feeds
Each of the six power supplies in the c7000 has a dedicated circuit breaker and -48V
feeder/return pair
The e5400 SAN has its own independent source of power, also with A and B feeds

NetApp e5400 SAN


Redundant 10GbE iSCSI Controllers
Redundant fan/power supply modules
24x SFF 600GB 10K RPM SAS HDD

c7000 BladeSystem
Redundant 6125G Blade Switches
Redundant 6125XLG Blade Switches
Redundant OnBoard Administrator OAM modules
N+N spared fan modules, 10 total.
N+N spared power supplies, 6 total.


4.6.2 WCE TELECOM VERSION PHYSICAL SPECIFICATIONS


36U Cabinet:
  Dimensions (HxDxW): 72 x 40 x 26.7 in (182.9 x 101.6 x 67.8 cm)
  Weight: 485 lb / 220 kg; maximum load 1200 lb / 544.3 kg
  Racking: 36 RU
C7000 Blade System:
  Dimensions (HxDxW): 17.4 x 32 x 17.6 in (44.2 x 81.3 x 44.7 cm)
  Weight (no blades): 248 lb / 112 kg (primary), 239 lb / 108 kg (expansion); seismic brace 35 lb / 15.9 kg
  Racking: 10 RU (c7000 enclosure) + 1 RU (seismic brace)
DC Breaker Panel:
  Dimensions (HxDxW): 3.47 x 17 x 30.65 in (8.81 x 43.18 x 77.85 cm)
  Weight: 45.5 lb / 20.6 kg
  Racking: 2 RU
BL460c G8 Blade:
  Weight: 14.2 lb / 6.44 kg (includes 2 x Xeon e5-2680v2 CPUs and 8 x 8GB LV DDR3 DIMMs)
NetApp e5400 SAN:
  Dimensions (HxDxW): 3.47 x 19.0 x 19.6 in (8.81 x 48.26 x 49.78 cm)
  Weight: 84.1 lb / 38.14 kg (loaded configuration)
  Racking: 2 RU

Table 1: WCE Elements Physical Specifications


4.6.3 WCE CABINET SINGLE ENCLOSURE

Cabinet equipped with single C7000 enclosure


Item                            Unit Weight (kg)   Quantity   Total Weight (kg)
36U Cabinet                           227              1            227
DC Breaker Panel                      20.6             1            20.6
C7000 Blade System Enclosure          95               1            95
BL460c G8 Blade                       6.44            16            103.04
NetApp e5400 SAN                      38.14            1            38.14
Total (kg)                                                          484

Table 2: WCE Cabinet Single Enclosure Weight


4.6.4 WCE CABINET DUAL ENCLOSURE

Cabinet equipped with dual C7000 enclosure


Item                            Unit Weight (kg)   Quantity   Total Weight (kg)
36U Cabinet                           227              1            227
DC Breaker Panel                      20.6             2            41.2
C7000 Blade System Enclosure          95               2            190
BL460c G8 Blade                       6.44            32            206.08
NetApp e5400 SAN                      38.14            1            38.14
Total (kg)                                                          703
Table 3: WCE Cabinet Dual Enclosure Weight


4.6.5 WCE POWER SPECIFICATIONS TABLES

4.6.5.1 EXTERNAL POWER REQUIREMENTS

DC Breaker Panel:
  Input voltage: -36 to -72 VDC
  Number of inputs: 6 (3 A and 3 B power feeds)
  Max current per input: 100 A
  Max output load per circuit: 80 A

Table 4: WCE external power requirements

4.6.5.2 INTERNAL POWER REQUIREMENTS

C7000 Blade System:
  Input voltage: -36 to -72 VDC
  Power supplies: 6, used for N+1 redundancy
  Max current per power supply: 75 A
NetApp e5400 SAN:
  Input voltage: -36 to -72 VDC
  Max current input: 21.7 A (at 48 V); 15.3 A (at 60 V)

Table 5: WCE internal power requirements

Please refer also to Power Dissipation Tables below

4.6.6 C7000 ENCLOSURE POWER DISSIPATION TABLES

4.6.6.1 C7000 BLADE SYSTEM ENCLOSURE

Utilization (%) 100 90 80 70 60 50 40 30 20 10 0


Power Dissipation (W)   772  744  716  688  659  631  602  573  544  514  484

Table 6: C7000 enclosure power dissipation

Power dissipation values include all infrastructure components in the enclosure:


2x 6125XLG Blade Switch
2X 6125G Blade Switch
2x OnBoard Administrator
6x Power Supplies
10x Fans

4.6.6.2 BL 460 G8 BLADE SERVER E5-2680V2 CPU

Utilization (%) 100 90 80 70 60 50 40 30 20 10 0


Power Dissipation (W)   277  258  238  216  196  177  157  138  118  96  76

Table 7: BL460G8 power dissipation


Power dissipation values include all elements of the blade active at this level of utilization. These
numbers are valid only for a BL460c G8 blade equipped as shown below.
2x Xeon e5-2680v2 CPUs
8x 8GB LV-DDR3 DIMMs
Dual Port 10GbE Network Interface
Different CPUs and more memory will change these power numbers.

4.6.7 SYSTEM POWER DISSIPATION

The power dissipation value for the c7000 enclosure and blades at a given utilization level is
determined by looking up the enclosure power at that utilization level (in %) and adding to it the blade
power at the same utilization level multiplied by the number of blades (see the sketch after the recommendation below).

Engineering Recommendation: Power Dissipation

For a system with 16 blades at a utilization level of 40%, the power dissipation
would be:
Enclosure power (602W) + BL460 G8 power (157W) x 16 blades
602+ (157x 16) = 3114W
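
As an illustration of the rule above, the following minimal Python sketch (not part of the product tooling) reproduces the lookup-and-multiply calculation using the values of Table 6 and Table 7; the function name and structure are purely illustrative.

```python
# Illustrative sketch: WCE enclosure power from the utilization/dissipation
# figures of Tables 6 and 7 above (utilization % -> power dissipation in W).
C7000_ENCLOSURE_W = {100: 772, 90: 744, 80: 716, 70: 688, 60: 659, 50: 631,
                     40: 602, 30: 573, 20: 544, 10: 514, 0: 484}
BL460C_G8_W = {100: 277, 90: 258, 80: 238, 70: 216, 60: 196, 50: 177,
               40: 157, 30: 138, 20: 118, 10: 96, 0: 76}

def enclosure_power(utilization_pct: int, blade_count: int) -> int:
    """Enclosure power + (blade power x number of blades), per section 4.6.7."""
    return (C7000_ENCLOSURE_W[utilization_pct]
            + BL460C_G8_W[utilization_pct] * blade_count)

# Reproduces the Engineering Recommendation example: 16 blades at 40% utilization.
print(enclosure_power(40, 16))   # 602 + (157 x 16) = 3114 W
```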

4.6.8 WCE SYSTEM LEVEL POWER AND HEAT DISSIPATION

4.6.8.1 PRIMARY CABINET SINGLE ENCLOSURE 80% COMPUTE UTILIZATION

Component               Qty   % Utilization   Max Component Power (W)   Max Shelf Power (W)
NetApp e5400             1        100                  566                       566
C7000 infrastructure     1         80                  716                       716
BL460c G8 Server        16         80                  238                      3810
Total Cabinet Power                                                             5092

WCE Primary Cabinet: Power Dissipation 5092 W; Heat Release 17391 BTU/hr; Rate of Heat Dissipation 4429 W/m2

Table 8: WCE Primary Cabinet Power Level

4.6.8.2 PRIMARY CABINET DUAL ENCLOSURE 80% COMPUTE UTILIZATION

Component               Qty   % Utilization   Max Component Power (W)   Max Shelf Power (W)
NetApp e5400             1        100                  566                       566
C7000 infrastructure     2         80                  716                      1431
BL460c G8 Server        32         80                  238                      7621
Total Cabinet Power                                                             9619

WCE Primary Cabinet: Power Dissipation 9619 W; Heat Release 32850 BTU/hr; Rate of Heat Dissipation 8365 W/m2

Table 9: WCE Primary Cabinet Dual Enclosure Power Level


4.6.8.3 EXPANSION CABINET SINGLE ENCLOSURE 80% COMPUTE UTILIZATION

Component               Qty   % Utilization   Max Component Power (W)   Max Shelf Power (W)
C7000 infrastructure     1         80                  716                       716
BL460c G8 Server        16         80                  238                      3809
Total Cabinet Power                                                             4526

WCE Expansion Cabinet: Power Dissipation 4526 W; Heat Release 15458 BTU/hr; Rate of Heat Dissipation 3936 W/m2

Table 10: WCE Expansion Cabinet Single Enclosure Power Level

4.6.8.4 EXPANSION CABINET DUAL ENCLOSURE 80% COMPUTE UTILIZATION

Component               Qty   % Utilization   Max Component Power (W)   Max Shelf Power (W)
C7000 infrastructure     2         80                  716                      1431
BL460c G8 Server        32         80                  238                      7621
Total Cabinet Power                                                             9053

WCE Expansion Cabinet: Power Dissipation 9053 W; Heat Release 30917 BTU/hr; Rate of Heat Dissipation 7873 W/m2

Table 11: WCE Expansion Cabinet Dual Enclosure Power Level

4.6.9 C7000 ENCLOSURE POWER SIZING

The site power infrastructure must be sized appropriately to accommodate a power draw of
6300 W at -40 VDC per c7000 enclosure, to allow for future-proofing and system capacity growth.

4.6.9.1 SITE POWER AND GROUND CABLING

Warning: Site power and ground cabling is not provided by Alcatel Lucent. Electrical
codes may vary by country, region and locality, thus the site power and ground cables and lugs
must be determined by the customer or installation provider. Power and ground cables should be
assembled by a certified electrician only and must be compliant to all local regulations. All
cabling must be compatible with operation in a 55 C environment.

4.6.10 WCE CABINET LEVEL POWER INFRASTRUCTURE REQUIREMENTS

The WCE cabinets have the following requirements:


Per c7000 Enclosure
Six -48VDC and return feeder pairs for the DC Breaker Panel
Maximum wire gauge on input terminals is AWG #2/0 (~67 mm2)
WCE cabinets are pre-wired for power to support two c7000 enclosures
Install the second DC Breaker Panel
Install the second c7000 enclosure and connect up the internal power cables


NOTES:
Auxiliary equipment is only required in the primary cabinet
Field wiring to the WCE cabinet must be designed to meet local electrical codes,
installed, and approved by a licensed electrician.

For Site preparation information and complementary knowledge about power supply requirements,
please refer to [Ext_I&C_001]

4.7 INTERNAL & EXTERNAL CONNECTIVITY DESCRIPTION


For the internal and external connectivity description (management server, internal cabinet
configuration, networking port allocation and OAM networking configuration), please refer to the R&D
hardware document [Int_R&D_003] and to the Iu LR14.2W TEG [Gl_ENG_001].


5 WCE SYSTEM ARCHITECTURE


5.1 WCE IN UMTS ARCHITECTURE

The WCE appears to the UMTS network as a single network element: the 6125XLG uplinks connect
through Next Hop Routers (NHRs) to the RAN (NodeBs) and to the Core Network (MSC, SGSN). On the
OAM side, the two 6125G switches located on WCE shelf 1 are connected to the 6125XLGs and linked to
the customer OAM edge switches; they aggregate the OA links (two 1 Gbps RJ-45 links per c7000), the
SAN controller links and the out-of-band OAM flows (down-rated from 10G to 1G). The WMS, NTP server,
HPI, SANtricity, vCenter and the WCE Management Server (whose iLO port may not be connected to a
network) reside in the customer OAM network.

Figure 19: WCE UMTS Architecture


5.2 WCE INTERNAL LAN


The largest WCE system (WCE4) comprises 4 shelves, each shelf containing 16 blades
and two L2 switches (6125XLG).
There is one primary shelf and 3 expansion shelves in a WCE4 system.
Each blade has two 10G NICs, each NIC connecting to one of the two switches in its
shelf via an internal fabric.
The two switches in a shelf are connected to each other via 4 x 10G ports on the internal
fabric.
Each switch has 4 x 40G uplink ports and 8 x 10G uplink ports on its faceplate.
The 40G uplinks are used to create a ring architecture, thus connecting all 8 switches
together on an internal L2 LAN.
This LAN is managed by the HP proprietary Intelligent Resilient Framework (IRF) to
present a single network element.
The 8 x 10G uplinks are used to connect to the external network (not shown); a small port tally sketch follows.
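
As a quick cross-check of the port counts listed above, the following illustrative Python sketch tallies the internal LAN ports of the largest (WCE4) system; the constants simply restate the figures from this section and are not configuration parameters.

```python
# Illustrative tally of the WCE4 internal LAN ports described above (sketch only).
SHELVES = 4
SWITCHES_PER_SHELF = 2
BLADES_PER_SHELF = 16

blade_nics_10g = SHELVES * BLADES_PER_SHELF * 2        # two 10G NICs per blade
fabric_downlinks = SHELVES * SWITCHES_PER_SHELF * 16   # 16 x 10G downlinks per switch
uplinks_10g = SHELVES * SWITCHES_PER_SHELF * 8         # 8 x 10G faceplate uplinks per switch
ring_ports_40g = SHELVES * SWITCHES_PER_SHELF * 4      # 4 x 40G ports per switch build the IRF ring

print(blade_nics_10g, fabric_downlinks, uplinks_10g, ring_ports_40g)   # 128 128 64 32
```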

Figure 20: WCE Internal LAN

5.3 WCE LINK AND SWITCHES REDUNDANCY


WCE's hardware configuration ensures there is no single point of failure
Redundant, active-active, 10 Gigabit Ethernet data-path links
Redundant switches
Link or switch failure is rapidly detected and traffic is moved to alternate links


Figure 21: WCE 4 Telecom Internal Configuration with link redundancy

5.4 WCE TELECOM CONNECTIVITY TO EXTERNAL NETWORK


On the network side, the WCE appears as a single network element.
On the WCE side, the network edge appears as a single network element.
The number of links required is determined by a combination of path redundancy and
application bandwidth requirements in the presence of a single fault.
Multi-chassis link aggregation (MC-LAG) is used to aggregate bandwidth as well as to
provide a quick failover mechanism. The number of LAG interfaces depends on whether there is
a single network edge (one pair of Next Hop Routers (NHRs) for both core and RAN), dual
networks (two pairs of NHRs, one for the core network and one for the RAN), or even
multiple networks (multiple pairs of NHRs). See the network configurations below.
For example, some RNC customers will have a single network edge for both core and
RAN interfaces, while others will provide separate networks for these interfaces.
Two pairs of routers are used when the Core Network nodes and the NodeBs are reachable through
distinct backbones: the UTRAN backbone and the Core backbone.

5.5 NETWORK CONFIGURATION


5.5.1 WCE1 PLATFORM

In the case of a WCE1 platform with one shelf and one pair of routers, there is one LAG
instance on the WCE side over the two blade switches. The LAG instance is composed of four
links, including two standby links. On the backbone side, an MC-LAG is created over the two
adjacent routers.
With two pairs of routers, there are two LAG instances on the WCE side over the two
blade switches:
One LAG composed of four links including two standby links, on the interface to
the Iub backbone and

One LAG composed of four links including two standby links, on the interface to
the Core backbone
On the backbone side, there is a MC LAG over the two adjacent routers.

Engineering Recommendation:
Because of the ring architecture (IRF) connecting all the blade switches
within a WCE platform on an internal L2 LAN, from the LAG point of
view the Ethernet ports from distinct blade switches on the same or different
shelves are considered to belong to the same Ethernet switch (the same
System Identifier value is used for all the blade switches within a WCE platform).

Figure 22: WCE 1 Telecom Connectivity

5.5.2 WCE2 PLATFORM

In the case of one pair of routers (single network), the WCE side has one LAG instance
over the four blade switches, composed of four links including two standby links.
On the backbone side, a MC-LAG is created over the two adjacent routers.

In case of two pairs of routers (dual network), the WCE side has two LAG instances over
the four blade switches.
- One LAG instance composed of 4 links (2 active and 2 standby) on the interface
to the Iub backbone and
- One LAG instance composed of 4 links (2 active and 2 standby) on the interface
to the Core backbone.


On the backbone side, a MC-LAG is created over the two adjacent routers on each
backbone.

Figure 23: WCE 2 Telecom Connectivity

5.5.3 WCE4 PLATFORM SINGLE NETWORK

On the WCE side, there is one LAG instance over the eight blade switches. The LAG instance is
composed of 4 active (or selected) 10 Gbps links from the IRF domain and 4 standby (or
unselected) links.
All 8 links are in the same MC-LAG interface. The solid lines indicate links that have been
selected from an LACP perspective. The dotted lines indicate links that are in-service but
unselected by the LAG distributor and are therefore not forwarding frames.


Figure 24: Standard WCE4 Uplink Configuration (Single Network one pair of router)

5.5.4 WCE4 PLATFORM DUAL NETWORK

Figure 25 below shows the recommended uplink configuration for a WCE4 system
connected to dual external networks in which the network edges implement MC-LAG in an
active/standby manner.
On the WCE side, two LAG instances are created in the WCE over the eight blade
switches:
One LAG instance composed of 8 links, including 4 standby links, on the interface
to the Iub backbone, and
One LAG instance composed of 8 links, including 4 standby links, on the interface
to the Core backbone.
On the backbone side, a multi-chassis LAG is created over the two adjacent routers on
each backbone. The LAG plan for the different WCE configurations is summarized in the sketch below.
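
For reference, the uplink LAG compositions described in sections 5.5.1 to 5.5.4 can be summarized as in the following illustrative Python sketch; the structure and names are for illustration only and do not correspond to any WCE configuration file.

```python
# Illustrative summary of the uplink LAG compositions described in sections
# 5.5.1 to 5.5.4 (names and structure are for illustration only).
LAG_PLAN = {
    # (platform, network edges): list of LAGs as (name, total links, standby links)
    ("WCE1", "single"): [("common", 4, 2)],
    ("WCE1", "dual"):   [("Iub", 4, 2), ("Core", 4, 2)],
    ("WCE2", "single"): [("common", 4, 2)],
    ("WCE2", "dual"):   [("Iub", 4, 2), ("Core", 4, 2)],
    ("WCE4", "single"): [("common", 8, 4)],
    ("WCE4", "dual"):   [("Iub", 8, 4), ("Core", 8, 4)],
}

for (platform, edge), lags in LAG_PLAN.items():
    for name, links, standby in lags:
        print(f"{platform}/{edge}: LAG '{name}' = {links} links "
              f"({links - standby} active / {standby} standby)")
```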


Figure 25: Standard WCE4 Uplink Configuration (Dual Network, two pairs of routers)

5.6 DATA FLOWS


In the first release of the WCE, the extent to which tenants can span blades is limited by
VMware's cluster size. The cluster size is currently 32, so a WCE4 would have at least two
tenants. To simplify deployment, these tenants will be configured in separate frames as shown in
Figure 26 below. So tenant-1 would generate packets emanating from shelf1 and shelf2 while
tenant-2 would generate packets emanating from shelf3 and shelf4. Once the packets enter the
local 6125, however, IRF will disseminate the packets over the uplinks throughout the ring as
depicted below by the blue and pink arrows. This behaviour is triggered by setting the IRF
parameter local-first as follows:
undo link-aggregation load-sharing mode local-first
This setting is chosen because local-first directs all packets entering the 6125 from the
blades to the local uplink. Normally this behaviour would be desirable, but it forces IRF to discard
packets if the local uplink is oversubscribed, even if there is excess uplink bandwidth available
elsewhere in the IRF domain. In order to take advantage of that excess bandwidth, the local first
behaviour must be overridden. So even though there are two distinct routing domains within the
WCE, all the uplinks (selected and unselected) are in the same LAG. The routing domains (or
traffic types within the routing domains) are separated from each other on the LAG by VLANs
defined in vSphere.


Figure 26: Packets Flow for two tenants

To get an even distribution of traffic over the LAG, the LAG hashing should be based on a
combination of source and destination IP address and source and destination L4 port, both in the
uplink (IRF perspective) and the downlink (NHR perspective). Within the IRF domain, this is
accomplished by:
link-aggregation global load-sharing mode destination-ip source-ip destination-
port source-port
Once the VMware restriction of 32 blades per cluster has been lifted, a single tenant may
occupy all 64 blades of a WCE4. As in the previous example, different traffic types within the
single routing domain may be separated by VLANs on the single LAG.

Figure 27: Packets Flow for a single tenant

Note: For all other external link configurations (active/active network edge, limited tenant
bandwidth) and the complete link connectivity, please refer to the Telecom Link Topology
[Int_R&D_004]


5.7 WCE OAM INTERFACES AND CONFIGURATION


The WCE OAM network context can be described as follows.
Each HP C7000 enclosure (aka shelf) has two Onboard Administrators; one active (OAa)
and one standby (OAs). Each OA has a 1 Gbps RJ-45 link through which it communicates with
the HP Insight (HPI) server. The figure below depicts a WCE2 configuration, so there are 4 OA links to
be connected to the OAM network (a WCE4 would have 8 OA links). The Blade Switch OAM path
is through the HP C7000 backplane and the active OA for each shelf.

Figure 28: OAM Network Context for a WCE 2 Configuration

The SAN's payload path is connected to the WCE via 10Gbps links on the 6125
faceplates. The SAN has two controllers and each controller has a 1 Gbps RJ-45 connection to
the SANtricity server. Tenant OAM traffic and VMware ESXi OAM traffic go through a pair of 10
Gbps faceplate ports. The configuration above assumes a pair of customer OAM network L2 edge
switches with 1 Gbps ports (hence the down-rating of the 10 Gbps tenant OAM links). However,
this interface to the OAM network may be serviced by a single L2 switch or L3 router, and it
may have 100 Mbps ports. The WCE Management Server is assumed to be somewhere in this
OAM network and may be serving more than one WCE instance. It has a dedicated iLO port
which may or may not be connected to the network.


5.7.1 IN-RACK AGGREGATION SWITCH CONFIGURATION

The WCE offers a standard configuration in which a second pair of switches is
included in the primary shelf only. These switches are HP 6125Gs, which support only 8 x 1
Gbps uplinks. They are installed in the bays just below the primary shelf's 6125XLGs.
The tenant OAM uplinks from the 6125XLGs will connect to the 6125G faceplates. All the OA RJ-
45 links and the two RJ-45 links from the SAN controllers will also connect to the 6125G
faceplates. Two 1 Gbps links (one from each 6125G) will connect to the customer's OAM network
edge equipment. This reduces the overall OAM link count from 12 to 2 and allows for 1 Gbps to
100 Mbps down-rating for all WCE OAM flows.


Figure 29: WCE4 OAM Configuration with in-rack aggregation switch (6125G)

Since the standby OAs must also communicate with the HPI, the 6125G pair cannot operate in
an active/standby mode. Therefore an IRF domain separate from the one formed by the 6125XLGs
will be configured for the 6125Gs. The 6125G crosslinks will provide the IRF link between the two
switches and one faceplate port on either switch will provide a MAD path.


Rule: 6125G OAM Switch Configuration

Reminder: Port 11 of each 6125XLG switch of the primary shelf is assigned to the
OAM. The throughput is downgraded to 1 Gb/s by rate adaptation. It is used to
communicate with the OMC. The OAM flow transmitted over port 11 is an aggregation of
the following flows:
RNC out-of-band OAM, Call Traces, ESXi to vCenter, LRCE Mgr to vCenter & WMS,
WNode and SEPE.
There are two OAs (Onboard Administrators) in each WCE shelf. For each OA, one
1 Gb/s Ethernet management port is connected to the OAM platform. HP SIM can remotely
manage all aspects of the WCE shelf hardware, including remote upgrades of firmware on
the blades.
There are two NetApp controllers:
Per SAN controller, one 1 Gb/s management port is connected to the OAM platform for
remote SAN management and configuration.
Nota Bene: For one WCE shelf, four 1 Gb/s management ports (2 OA + 2 SAN)
For four WCE shelves, ten 1 Gb/s management ports (8 OA + 2 SAN)
All these 1 Gb/s management ports may be aggregated by an Ethernet switch to
reduce the number of physical links up to the OAM platform

OAM network context details can be found in the R&D document located on the Wiki:
[Int_R&D_005]

6 WCE SOFTWARE PLATFORM

6.1 LRC MANAGER VIEW


The LRC Mgr is responsible for the initial creation and maintenance of the VMs that
compose a Wireless Cloud Element application. It also ensures that the network resources required
for the application are present in the network and that they are appropriately configured. The LRC Mgr
(and/or vCenter) uses a set of predefined rules or policies to control the placement of the VMs on
physical hardware. For example, these rules avoid an active and a spare being located on the
same server. Maintenance of VMs is outside of the scope of the applications, just as repair of
hardware is currently outside of the scope of 3G applications.
For more information on the LRC Mgr, please refer to the LRC Mgr Functional Specifications
[Int_R&D_001]


6.2 FUNCTIONALITY
The LRCE Mgr functionality includes:
Interface to the WMS for the purpose of initiating Tenant level actions (e.g. initial
deployment, resource allocation, removal)
Interface to VmWare vCenter server as a virtualization platform
Common mechanisms to handle networking and storage needs for LightRadio Cloud
Elements
Manage cloud-internal network resources: this includes ensuring a unique VLAN tagged
interface per Tenant Instance, and a unique IPV4 IP address to be used for communication
between the LRCE Mgr and the root VM of a Tenant Instance.
LRCE Mgr unifies usage of the VmWare vSphere features in a consistent manner for all
types of Elements/Tenants
LRCE Mgr provides functionality equivalent to that of hardware commissioning in the
non-virtualized environment. LRCE Mgr provides a mechanism to supply a minimum set of critical
configuration data into the guest OS space of the Tenant's VMs (similar to I&C parameters in a
non-virtualized environment).
LRCE Mgr provides APIs that can be used either by an external configuration interface (for
the initial Tenant creation) or by the Tenants themselves for various actions to be executed on
the Tenant's VMs.
Managing the Wireless Cloud Elements: deploying the Element/Tenant into the cloud
while respecting the Tenant deployment rules.

6.3 LRC MGR PLATFORM INTERFACES


The LRC Manager provides many services to the LRC tenants, one of which is an
abstraction layer between the tenants and the virtualization layer provided by VMware.

Interface: LRCE Mgr to vCenter
Purpose / type of functionality: VM management functions (query what is already deployed,
create/deploy a Tenant, create storage and networking infrastructure for the Tenants, add VMs
to a Tenant, define inter-VM rules specific to the Tenant)

Interface: Tenant to LRCE Mgr
Purpose / type of functionality: allows a Tenant's root VM to create additional VMs for the tenant
and to initiate some management actions on the Tenant's VMs

Interface: Tenant-internal LRC Platform interfaces
Purpose / type of functionality: LRC Platform provided APIs that can be used within the Tenant's
VMs' guest OS space to obtain some useful local VM information related to the calling VM

Interface: LRC to WMS
Purpose / type of functionality: triggers the initial Tenant deployment and Tenant deletion

Table 12: LRC MGR Interface


6.4 SOFTWARE MANAGEMENT FOR WCE PLATFORM


WCE Platform software includes:
Hardware related software
Virtualization related software
Platform VMs
DiskAccessTenant (optional)
For the first release, tenant software management is the responsibility of each tenant.

DiskAccess software management will use the local VMDKs of the two NAS-FE VMs.
RNC software management will require the DiskAccess tenant as a prerequisite. Once the
DiskAccess tenant exists, software management will be done by the LrcMgr and will use the Software
Disk mounted via the DiskAccess VM. 3gOAM VMs will simply use the software available on the Software
Disk (read-only mount).
Hardware Related Software Management:

For first release, HP provides the "cloud in a box" hardware. Software to be managed
from HP includes:

on-switch firmware (for HP c7000)


o OA
o OA tray
o blade iLO
o blade System ROM
o blade Power Mgmt Controller
o blade Emulex
o blade Driver
o 6125 Firmware
HP Insight (HP SIM)
NetApp e5400 controller firmware
NetApp e5400 NVSRAM firmware
NetApp Santricity

Virtualization Related Software Management :

For first release, VmWare provides our virtualization "cloud" layer. Software to be managed from
VmWare includes:

ESXi hypervisor on each server


vCenter server
vSphere client


Engineering Recommendation: vCenter Application

NB: The vCenter application is also described in the VMware vCenter server chapter.

ESXi server in c7000:

Each server needs its own 5.2 GB LUN at the SAN. If this has not been created already
as part of I&C for the whole LRC, it needs to be done for commissioning the new server.
The C7000 has an On-board Administrator (OA) which handles management of
hardware. A web-based OA-client GUI is run to commission the new server. This GUI only needs
IP connectivity to the OA, so it can be run locally at I&C time, or from WMS if server is being
added later for system growth.

6.5 WCE 3G LINUX DISTRIBUTION MANAGEMENT


Our product will be based upon RedHat Enterprise Linux 6.4 (RHEL 6.4 version 9). RHEL
is distributed via DVD format as a collection of RPMs (Redhat Package Manager Files). Each
RPM is individually versioned, as such, security patches and bug fixes are on a per RPM basis.
The biggest challenge we have is the need to have the Linux distribution versioned such that,
from a release management point of view, we can, at any time, for any software load, get an
exact image of the Linux distribution that was used.

7 WCE RNC REFERENCE ARCHITECTURE

7.1 UMTS RNC VM CONFIGURATION

The RNC tenant consists of four different VM types - 3gOAM, CMU, UMU and PC - and it
requires the Disk Access (DA) tenant (which consists of a pair of VMs) and the LRCE Mgr VM to
create a complete system. Only one DA and one LRC Mgr are required per WCE, independent of the
number of other tenants in the system. Within the RNC, a VxWorks emulator called VXell is used,
which is essentially a single thread locked to an individual processor core with Linux core affinity,
so it is important to also consider the number and location of the VXell instances. The size of each of
these VMs and the number of VXells is as follows:


VM        Max #   # Cores   # VXell   Virtual Disk [GB]   RAM [GB]   Note
3gOAM       2        2         1             100              8      Most of this is the 3gData disk
CMU        16        4         4               0              6      3 x Application VXell + 1 NI VXell
UMU        65        4         3               0              6
PC         30        4         2               0              6      2 x NAT VXell + 1 DPDK
LRC Mgr     1        1         0               -              4
DA          2        2         0              10              3      10 GB local disk, 125 MB shared disk

Table 13: WCE VM description

Note: DA VM configures (on the SAN) and hosts 50 GB data volume for each RNC and 50 GB
software volume to be shared by all RNCs in a cluster.

The VMware DRS system (in semi-automatic mode) will distribute these VMs to physical
servers following the VM-to-VM anti-affinity rules that we define. These rules are simple: no more
than one 3gOAM, CMU or PC can reside on a single server. The hardware used for the first
commercial release is the HP BL460c G8 blade with 20 cores, which is generally sufficient to
host 4 VMs, each with 4 vCPUs, without over-subscription.
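
As a minimal sizing sketch (assuming the per-VM figures of Table 13 and the 20-core / 64 GB BL460c G8 blade described in chapter 4), the following Python fragment checks that a candidate set of VMs fits on one blade; the names and structure are illustrative only.

```python
# Minimal sizing sketch (illustrative only), based on Table 13 and on the
# 20-core / 64 GB BL460c G8 blade described in chapter 4.
BLADE_CORES = 20
BLADE_RAM_GB = 64

# (vCPU, RAM GB) footprint per VM type, from Table 13.
VM_SIZE = {"3gOAM": (2, 8), "CMU": (4, 6), "UMU": (4, 6), "PC": (4, 6),
           "LRC Mgr": (1, 4), "DA": (2, 3)}

def fits_on_blade(vms):
    """Check that a candidate VM placement stays within one blade's resources."""
    cores = sum(VM_SIZE[v][0] for v in vms)
    ram = sum(VM_SIZE[v][1] for v in vms)
    return cores <= BLADE_CORES and ram <= BLADE_RAM_GB

print(fits_on_blade(["CMU", "UMU", "UMU", "PC"]))   # True: 16 vCPUs, 24 GB RAM
```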
Please see examples below:

Figure 30: VMs Distribution


RNC Tenant VM configuration with anti-affinity rules


VM type         Max per blade server   Rule
CMU                      1             No more than one CMU on each host
UMU                      N             Dynamic, depending on application vs. server load
PC                       1             No more than one PC on each host
3gOAM                    1             No more than one 3gOAM on each host
CMU & 3gOAM              1             Ensure 3gOAMs are not on a host with a CMU
Table 14: WCE RNC Tenant Rules

7.1.1 DYNAMIC VM ALLOCATION - ANTI AFFINITY RULES

WCE uses fully dynamic VM allocation: there is no static association between VMs and
blades.
VMs are created on any blade assigned to the cluster, even if it exists in another physical
frame.
Multiple tenants can share a single data center.
To ensure services are highly available, anti-affinity rules separate redundant VMs onto
different physical blades.
To avoid dual-failure scenarios that are not protected by 1+1 or N+1 sparing mechanisms,
the WCE's LRC Mgr has the ability to configure virtual machines with anti-affinity rules.
These simple rules ensure that a spare virtual machine is not allocated to the same server
as the active virtual machine during the dynamic virtual machine allocation phase (a minimal
placement-check sketch follows this list).
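
The following minimal Python sketch illustrates the anti-affinity rules of Table 14; it is not the LRC Mgr or DRS implementation, and the function name is purely illustrative.

```python
# Illustrative check of the anti-affinity rules of Table 14; this is a sketch,
# not the LRC Mgr / DRS implementation.
from collections import Counter

def placement_ok(vms_on_blade):
    """vms_on_blade: list of VM types placed on one physical blade."""
    counts = Counter(vms_on_blade)
    if counts["CMU"] > 1 or counts["PC"] > 1 or counts["3gOAM"] > 1:
        return False                      # at most one CMU, PC and 3gOAM per host
    if counts["3gOAM"] and counts["CMU"]:
        return False                      # 3gOAM must not share a host with a CMU
    return True                           # any number of UMUs is acceptable

print(placement_ok(["CMU", "UMU", "UMU", "PC"]))      # True
print(placement_ok(["CMU", "3gOAM", "UMU", "UMU"]))   # False
```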


Table 15: Multiple RNCs within a single data center

7.2 CMU VM STRUCTURE


The CMU is the N+M spared cell management unit which handles physical cells and common
channel traffic. The CMU structure comprises Call Processing, User Plane and Signalling (NI) components.

Figure 31: CMU Role


Figure 32: VM CMU Structure

7.2.1 CALL PROCESSING AKA CALL-P OR C-PLANE

CallP consists of several core processes. One of the primary core processes is the ue
core process, or UeCp, which handles User Equipment oriented processing and resides in the UMU;
the other is the nob core process, or NobCp, which handles NodeB oriented processing and
resides in the CMU. The UeCp contains several components such as UeCall, UeRrc and IuCall,
amongst others. The components within NobCp are NobRrc, which handles common channels,
NobCall, which handles NBAP procedures (like cell setup), and NobCch, which handles
common channels.

7.2.2 USER PLANE PROCESSING AKA U-PLANE

DPH serves 3 distinct functions:


Serves as the PSE process for Uplane-EE (receives UplaneEE messages and calls
the appropriate function in other application layers); typically used to transfer processing
from pollTasks to UplaneEE.
Handles VSock creation/deletion for the call
Acts as the interface (entry point) to CallP for SRB messages (RlcDataRequests,
RlcDataIndications, Paging requests, CBS messages, MCS messages)


7.3 UMU VM STRUCTURE


The UMU is the un-spared UE management unit which handles dedicated and shared high-speed UE
traffic. The following description shows the two main components of the UMU: Call Processing
and User Plane. In the figure 'UMU Structure' below, the Call Processing components are shown
in colour while the User Plane is shown in grey scale.

Figure 33: UMU Role


Figure 34: VM UMU Structure

7.3.1 CALL PROCESSING AKA CALL-P OR C-PLANE

CallP consists of two core processes. The first is the ue core process, or UeCp,
which handles User Equipment oriented processing and resides in the UMU; the other is the
nob core process, or NobCp, which handles NodeB oriented processing and resides in the CMU.
The UeCp contains several components such as UeCall, UeRrc and IuCall, amongst others. The
components within NobCp are NobRrc, which handles common channels, NobCall, which
handles NBAP procedures (like cell setup), and NobCch, which handles common channels.

7.3.2 USER PLANE PROCESSING AKA U-PLANE

DPH serves 3 distinct functions:


Serves as the PSE process for Uplane-EE (receives UplaneEE messages and calls
the appropriate function in other application layers); typically used to transfer processing
from pollTasks to UplaneEE.
Handles VSock creation/deletion for the call
Acts as the interface (entry point) to CallP for SRB messages (RlcDataRequests,
RlcDataIndications, Paging requests, CBS messages, MCS messages)


7.4 PC VM STRUCTURE
The PC is the load-shared protocol converter which acts as a NAT point and traffic shaper for the
RNC.
The PC as implemented for the Wireless Cloud Element platform differs from the
implementation of the PC in the 9370 RNC in several significant ways, such as the use of
internal IP addresses to represent external NodeBs and the use of the source IP address in the
UDP NAT translation. Please keep this in mind while reading further.

7.4.1 OVERVIEW

The PC role consists of two component parts: TRM, a native Linux application
which manages the connections in the system, and the "fast path", a highly optimized
packet processor application based on the 6WINDGate product provided by 6Wind, which uses
Intel's DPDK technology. In order to better balance load across the PCs and to allow new
features like dynamic load balancing across the PCs, the UMUs need to be able to address the
PCs with finer granularity than what is possible with a single IP address for each PC; therefore,
the concept of an internal IP address representing a number of NodeBs - referred to as n@ below
- is introduced to the system. In many respects this n@ can directly replace the IP address of
the PC as used within the 9370 RNC. UMUs, for example, will use this internal IP address when
sending traffic without specific knowledge of which PC terminates this traffic. Note that in the
following figure internal IP addresses are designated with lower-case letters, so n@1 is the
internal IP address for the NodeB with the external IP address N@1. The internal IP address n@1
is used for both the control part of the PC (TRM) and the data-path part of the PC (Fast
Path).

Figure 35: PC Role


Figure 36: VM PC Structure

To further illustrate the use of this address scheme, let's consider the case of traffic
flowing in the downlink direction - that is from the UMU to the NodeB. Once a connection is
operational, the UMU will communicate with the PC that is handling this connection with these
internal addresses (for example n@1). The IP infrastructure of the Wireless Cloud Element
ensures that this IP packet gets transferred to the PC that has bound n@1 but the UMU does not
have any specific knowledge of this binding. When this packet arrives at one of the PCs, it uses
the same NAT tables found in the 9370 implementation to create a new IP packet header with the
source IP address as that of the PC and the destination IP address of N@1. By using the same
NAT table technology as that of the 9370 RNC, each connection uses independent NAT tables
which support flexible address assignment at the NodeB including supporting multiple addresses
for a single interface.
In addition to the IP address translation discussed above, the PC continues to operate the same
Bandwidth Pool functionality as found in previous versions of the RNC, so there is much
more processing involved than discussed here. A significant difference between the 9370 RNC
Bandwidth Pool service and that of the Wireless Cloud Element is that all of the bandwidth pool
data is replicated across all of the PCs. This is possible because all of this data amounts to less than
20 MB and therefore does not contribute significantly to the overall memory requirements of the PC.
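
To make the downlink example above concrete, the following conceptual Python sketch models the n@-to-N@ rewrite performed by the PC NAT; all addresses, the table layout and the names are illustrative assumptions, not the fast-path implementation.

```python
# Conceptual sketch of the downlink address translation described above; the
# data structures and names are illustrative, not the PC fast-path implementation.

# Per-NodeB NAT entry: internal address n@ -> (external NodeB address N@, owning PC address).
NAT_TABLE = {
    "10.1.0.1": ("192.0.2.11", "10.2.0.5"),   # n@1 -> (N@1, PC that has bound n@1)
    "10.1.0.2": ("192.0.2.12", "10.2.0.5"),   # n@2 -> (N@2, same PC)
}

def translate_downlink(src_umu, dst_internal):
    """Rewrite a UMU -> n@ packet header into a PC -> N@ header, as the PC NAT does."""
    external_nodeb, pc_address = NAT_TABLE[dst_internal]
    return pc_address, external_nodeb      # new (source, destination) addresses

print(translate_downlink("10.3.0.7", "10.1.0.1"))   # ('10.2.0.5', '192.0.2.11')
```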

7.4.2 MAPPING NODEB IDS TO N@

In order to use the n@ addresses within the Wireless Cloud Element RNC application,
they must be mapped to physical NodeBs each identified by a unique NodeB Id. This mapping is
done once for any given configuration of NodeBs, based entirely on configuration data, and can be
considered pseudo-static data afterwards. As NodeBs are added to or deleted from the RNC, this
mapping will have to be redone. The goal of the mapping is to evenly distribute the load of the
Iub network across the entirety of the n@ addresses; however, as there are many n@ addresses
(most likely in the range of 100 to 1000), precise balancing is not required. The ability of the
system to dynamically balance the work represented by the n@ address further lessens the
requirement for precise balancing. The following table describes a simple assignment algorithm
that should be sufficient:

NBid    n@
237 0
99 1
37 2
126 3
...
33 511
12 510
174 509
...
Note: the NBids are sorted in decreasing bandwidth usage.
Table 16: NodeB Id Mapping

It is the role of the 3gOAM to do this initial NBid to n@ assignment and distribute this
information to all of the CMUs.
Once the NBid to n@ mapping is complete the system is ready for the NodeBs to attach
to the RNC.
Once the n@s have been mapped and distributed to all of the ARP tables in the system,
the PCs are ready for NodeB traffic.
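
A minimal Python sketch of an assignment of the kind illustrated by Table 16 is shown below: NodeBs sorted by decreasing bandwidth usage are dealt out over the n@ addresses in a serpentine (zigzag) pattern so that each n@ receives a roughly balanced share of load. The exact algorithm used by the 3gOAM is not specified here, so this is only an illustration; the 512-address count simply matches the table.

```python
def assign_n_addresses(nodebs_by_bw, n_addr_count=512):
    """nodebs_by_bw: NodeB Ids sorted by decreasing bandwidth usage.
    Returns {NodeB Id: n@ index} using a serpentine (zigzag) deal so that
    heavily and lightly loaded NodeBs are mixed on each n@ address."""
    mapping = {}
    for rank, nb_id in enumerate(nodebs_by_bw):
        pass_no, offset = divmod(rank, n_addr_count)
        index = offset if pass_no % 2 == 0 else n_addr_count - 1 - offset
        mapping[nb_id] = index
    return mapping

# With the ordering of Table 16, NodeB 237 gets n@0, 99 gets n@1, 37 gets n@2, ...
print(assign_n_addresses([237, 99, 37, 126]))   # {237: 0, 99: 1, 37: 2, 126: 3}
```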

7.5 RESOURCE ALLOCATION


By recomposing the component pieces of the 9370 RNC into the new virtual RNC
architecture, the two primary limitations of the 9370 were addressed:

N² Communication Mesh: By integrating both the control plane and user plane functions
into a single VM, there is no need for the 9370's pooled architecture and therefore no need to
establish a mesh of connections between all of the TMUs and all of the RABs. There is a need to
send messages between the UMUs and CMUs but the number of CMUs in a typical RNC will be
under ten and the UMUs do not require frequent communications with each other.
Scope of Failure: As both the control-plane and user-plane components for a single call
are hosted at a UMU, when that component fails it has minimal impact to the other nodes in the
system. Specifically, the CMUs and PCs need to clean-up any resources consumed by the failed
UMU but these actions do not need to be tightly coordinated between CMU and PC instances.
Another benefit of integrating the control-plane and user-plane components into a single
VM is automatic and fully dynamic balancing between the amounts of computing allocated to
each function. The ratio between control and user plane has shifted significantly, and occasionally

UMT/IRC/APP/041734 02.07/EN Standard 09/dec/2014 Page 71 sur 112


9771 WCE RNC Product Engineering Information

rapidly, with the introduction of new features and handsets. Automatic balancing reduces
operational expenses for the operator.

7.5.1 BOTTLENECK ELIMINATION

The 9370 RNC contains two central resource allocation components, RMAN and TBM, and a central SS7 protocol termination point, the NI. RMAN is a resource manager that allocates user-plane resources (RABs) as part of the connection setup and release procedures, and is also involved in recovery from failed nodes. TBM is a transport bearer manager that allocates UDP port numbers for the external links used within the RAN network. These ports change approximately once every ten seconds per active user, both because links are allocated for multiple services on a mobile device and because links are optimized to minimize mobile device power consumption. Given the central role of both RMAN and TBM in the operation of the RNC, and their use in each connection setup, takedown or modification, these nodes rapidly become bottlenecks as the scale of the RNC is increased.
The virtual RNC eliminates the RMAN bottleneck by integrating the control-plane and user-plane components into the new UMU virtual machine. By eliminating the pool of user-plane resources and enforcing a one-to-one relationship between the two components, RMAN becomes a trivial selection algorithm with only one choice. Importantly, eliminating RMAN required no major changes to either the control-plane or the user-plane component.
TBM was not eliminated but instead distributed to all of the PC virtual machines within the virtual RNC. Each of the PCs connects to a predetermined set of cell sites, each with their own set of UDP/IP links. A new component called TRM, composed of software from the 9370's TBM function, is used to allocate all of the ports on the links assigned to a given PC. The overall message sequence for allocation of resources is largely the same between the two versions of the RNC, thus maintaining maximum commonality between the two software streams.
The 3GPP standards that define the function of the 3G RNC specify that it must be associated with a single SS7 signalling point code. The 9370 RNC satisfies this requirement by terminating the higher layers of the SS7 protocol stack on a single, redundant node referred to as the Network Interface (NI). Since the rate of SS7 signalling is directly proportional to the size of the RNC, the centralized NI rapidly becomes a bottleneck.
The virtual RNC distributes the SS7 signalling associated with user equipment (SS7 connection-oriented messages) to the same UMU VMs handling the control and user plane for the particular user. Not only does this change allow for near linear scalability of overall capacity, but it also follows the previous pattern of limiting the scope of failure by co-locating all of the functions required for a particular user in a single location. Connectionless messages (e.g. paging requests for mobile terminated calls) are terminated on the CMU, as there may not be a UMU currently handling that user's mobile device. More specifically, the UMU hosts the SCCP protocol and the CMU hosts the SCTP and M3UA protocols, although the implementation is a little more complex than this in order to handle procedures like RANAP relocation.


7.5.2 NO SINGLE POINT OF FAILURE

In order to achieve 5x9s availability, a network function must be able to autonomously react to inevitable failures and continue to offer service. The typical sparing mechanisms used to ensure continued service are 1:1, N:1 and N:M.
1:1 systems have an active and a spare component, with the spare being in various states of readiness to take over from the active component.
N:1 systems are found where it is impractical to create a large number of spare components, so a single spare is used to protect N active components.
N:M systems are an extension of N:1 where more than one spare is available; typically these systems are used where it is likely that more than one of the M components may fail simultaneously.

The WCE recognizes that one or more of these mechanisms are likely to be active within a virtualized network function and provides a platform fully compatible with such sparing mechanisms. Primarily this is achieved by ensuring that when the platform components themselves fail, these failures result in a single fault in the application space. All components within the WCE mini data centre are redundant, as follows:
The HP c7000 chassis provides slots for 16 dual-socket Intel Xeon servers, a set of load-sharing power supplies and fan units, and a pair of 6125XLG Blade Switches.
Each of the servers has a pair of 10 GE NICs which are connected to the 6125 switches in an active-active configuration such that either a link or switch failure does not reduce bandwidth below 10 Gbps per blade.
The NetApp e5424 Storage Area Network (SAN) provides 24 600 GB SAS drives with DDP RAID 6 support. This isolation ensures that RAID operations, such as rebuilding a RAID set after a failed drive is replaced, do not consume link bandwidth from the network functions, as RAID operations are unpredictable and consume bandwidth for a long time (rebuild times can exceed 24 hours).
As iSCSI SAN technology only allows a single node to mount a unit, a fully redundant Network Attached Storage (NAS) function is provided by the WCE in the form of a pair of virtual machines running a Symantec cluster file system. This implementation of a NAS is more cost-effective than dedicated hardware while still providing highly available access to storage volumes from virtual machines, independent of where these virtual machines are physically allocated. The Disk Access virtual machines are redundant such that the only impact to applications with mounted volumes is a 20 to 30 second pause in connectivity; all mounts remain valid.


7.6 RESOURCE RESERVATION


When multiple virtual machines share a physical CPU, the virtual machine scheduler must ensure that both VMs progress by multiplexing the hardware, suspending one VM while the other operates. The suspension of VMs may result in unacceptable jitter for isochronous applications, so this type of over-subscription of the physical resources must be controlled.
The WCE uses both CPU and memory reservation to control the over-subscription of physical resources. If an application is jitter sensitive, both CPU and memory reservations should be set in the VM template when that VM is created, which results in the dynamic VM allocation procedure avoiding over-subscription. The LRC Mgr provides a simple mechanism to set both CPU and memory reservations.
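
As a minimal sketch only, assuming hypothetical host capacities, template values and function names (none of which come from the product), the following Python fragment illustrates the kind of admission check that reservations make possible: a VM is only placed on a host whose unreserved CPU (MHz) and memory (MB) can cover the reservations declared in its template.

# Minimal, hypothetical sketch of reservation-aware VM placement.
# Host capacities and template values are illustrative only; in the product
# this logic is handled by the LRC Mgr / Cloud O/S.

from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    cpu_mhz: int           # total CPU capacity of the host
    mem_mb: int            # total memory capacity of the host
    reserved_cpu: int = 0  # MHz already promised to placed VMs
    reserved_mem: int = 0  # MB already promised to placed VMs
    vms: list = field(default_factory=list)

def place_vm(hosts, vm_name, cpu_reservation_mhz, mem_reservation_mb):
    """Place a VM on the first host that can honour its reservations."""
    for host in hosts:
        free_cpu = host.cpu_mhz - host.reserved_cpu
        free_mem = host.mem_mb - host.reserved_mem
        if cpu_reservation_mhz <= free_cpu and mem_reservation_mb <= free_mem:
            host.reserved_cpu += cpu_reservation_mhz
            host.reserved_mem += mem_reservation_mb
            host.vms.append(vm_name)
            return host.name
    raise RuntimeError(f"no host can honour reservations for {vm_name}")

hosts = [Host("blade-01", cpu_mhz=40000, mem_mb=262144),
         Host("blade-02", cpu_mhz=40000, mem_mb=262144)]
print(place_vm(hosts, "UMU-1", cpu_reservation_mhz=9200, mem_reservation_mb=49152))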

7.7 WCE RNC CARRIER GRADE DESCRIPTION


Regardless of the changes introduced in the WCE RNC architecture and the different techniques required to build a carrier grade system, the ultimate goal for carrier grade remains the same, i.e. the WCE RNC shall be 99.999% available. This is often referred to as 5x9s availability, which can also be translated into 5.25 minutes of unplanned outage per system per year.
Based on RQMS, a total outage for the RNC is the total loss of capacity for origination and/or termination for voice and data traffic, due to causes affecting the RNC, for more than 30 seconds. A partial outage occurs when a loss of 10% or greater of the provisioned RNC capacity for origination and/or termination (for combined voice and data traffic) lasts for a period greater than 30 seconds. The duration of partial outages is weighted by the related service impact. For example, a partial outage lasting 20 minutes and affecting 10% of the RNC capacity accounts for 20 minutes x 10%, which is 2 minutes of outage time. RNC total outage is defined as the duration from when the last cell is lost to the recovery of the first cell.
The availability of the system can obviously be compromised by failures in the system. Reducing or eliminating the number of failures, the failure impact and the failure duration is the key to achieving a highly available system. With the experience gained from building the 9370 RNC, and taking into account the new architecture, WCE RNC general and role-specific carrier grade requirements are established as references for designing and testing for carrier grade.


Figure 37: Carrier Grade Description

7.8 CARRIER GRADE REQUIREMENT FOR A TENANT


The requirements below are defined and to be measured against a nominal system which
has the following configuration:
12 hosts that are not shared with other tenants
A dedicated SAN which is not shared with other tenants
2 3GOAM VMs, i.e. 1 active 3GOAM VM and 1 standby 3GOAM VM
10 + 1 PC VMs, i.e. 10 active PC VMs and 1 standby PC VM
9 + 1 CMU VMs, i.e. 9x3 active CMU Vxells and 1x3 standby CMU Vxells
A number of UMU VMs (See below on how to determine the number of UMU
VMs)
SNMP 9 traffic profile
Standard customer's configuration (e.g. no debug logs turned on)


1   General   5x9s availability.

2   General   In a nominal system configuration, a single HW component failure (e.g. single server failure, single link/port failure, single disk drive failure) should not cause any reportable outage which is equal to or greater than 10% of capacity and lasts longer than 30 s.

3   General   No single point of HW or SW failure.

4   General   72 hours of capacity run without any partial or total outage.

5   General   All internally generated CG related CRs fixed.

6   CMU       Sparing: CMU roles are N:M spared.
              Capacity impact: a single active CMU role failure should not cause more capacity loss than the capacity supported by the failed CMU role.
              Switchover / return to capacity: CMU role switchover/failover should be completed in 30 s, i.e. the time from the first cell lost to the last cell restored should be less than 30 s for all types of NodeB supported.
              Return to redundancy: a failed CMU role needs to be recovered as active or standby in 4 minutes (excluding server failure).

7   PC        Sparing: PCs are N:1 spared.
              Capacity impact: a single active PC role failure should not cause more capacity loss than the capacity supported by the failed PC role.
              Return to capacity: new calls coming in from the impacted NodeB after a PC failure should be handled by the spare PC in less than 30 s. Iu/Iur connections for any new calls are handled by the remaining PCs in the system, including the spare PC.
              Return to redundancy: a failed PC role needs to be recovered in 3 minutes (excluding server failure).
              Load balancing: any PC load balancing shall not impact the existing calls.

8   3GOam     Sparing: the 3GOam role is 1:1 spared.
              Switchover: 3GOam role switchover/failover should be completed in less than 30 s, i.e. from the time the active role goes down till the time the standby role becomes fully active.
              Return to redundancy: a failed 3GOam role needs to be recovered as standby in 4 minutes (excluding server failure).
              Return to service: the RNC northbound interface is available in 30 s.

9   UMU       Capacity: a single UMU role failure should not cause more capacity loss than the capacity supported by the failed UMU role.
              Return to capacity: a failed UMU role needs to be recovered in 4 minutes (excluding server failure).

10  RNC       Return to service: an RNC reset should complete within 7 minutes, i.e. from the time the last cell is down to the time the first call is up.
              Return to service: the RNC northbound interface is available in 7 minutes after an RNC reset.

11  RNC       Simultaneous multiple role failures need to be recovered automatically in 12 minutes.

12  RNC       After successful initialization, the RNC should continue to provide call processing service (i.e. originating and terminating calls) when disk access is lost and recovered, i.e. no partial or total service outage.

13  RNC       After successful creation or upgrade of the RNC, any failure of LrcMgr should not impact normal operation of the RNC and should cause no impact on services provided by the RNC, e.g. no partial or total service outage.

14  RNC       The RNC should recover automatically after a total power outage in 12 minutes (dead office recovery).

Table 17: Carrier Grade Requirement

Note: all the information regarding the carrier grade architecture, sparing model and VM failure handling is described in the R&D document [Int_R&D_006].

7.9 WCE IP ARCHITECTURE


A VLAN connects a cloud tenant's internal components with a VLAN tag that is unique within the cloud, thus allowing each tenant to reuse IP addresses, as shown below:

Figure 38: Tenant Description


7.9.1 TENANT INTERNAL IP ADDRESS

IP addresses used within the tenants are typically statically allocated by the application software and are in the 169.254.0.0 range, a link-local address (LLA) subnet intended only for communication within a local network. Routers do not forward packets with link-local addresses, so all internal communication is kept within the Wireless Cloud Element.

Addresses per Wireless Cloud Element shelf

OA 2

iLO 16

Fabric Switch 2

ESXi 16

Ports (see the following table) 32

Total per shelf 68

Others

6125 XLG Switch 2

6125G Switch 2

SAN 2 mgmt; 4 iSCSI on internal subnet


169.254.223.0

Wireless Cloud Element 4 shelf total 280


Table 18: WCE IP Address Mapping

Note: IPv6 addresses can be used for everything except ports and SAN disk access addresses. For these, addresses from the 169.254.0.0 subnet can be used since they do not need to be routable. The use of IPv6 addresses on the RNC OAM interface is also being investigated. Also note that either GRE or VPN type tunnelling can be used to reduce the number of public IP addresses required; see the WCE IP Tunneling section for more information.


7.9.2 RNC INTERNAL ADDRESS USAGE

Internal fixed IPs are calculated as follows:

Fixed IP           Fixed IP = <internal_subnet> + <hw_id> + 1

Internal subnet    169.254.0.0

Netmask ( /21)     255.255.248.0

HW_Id              16-bit HWId of the RTE instance

NOTE - although the internal encoding is described here for information, it could change at any time and assumptions must not be made about the IP address of a given RTE instance. Use the HwId APIs to derive an IP address since these will always be up to date.

Examples:
3GOAM-0-1 = 169.254.0.0 + 0x0001 + 1 = 169.254.0.2

CMU-2-1 = 169.254.0.0 + 0x1021 + 1 = 169.254.16.34

PC-4-2 = 169.254.0.0 + 0x2042 + 1 = 169.254.32.67

UMU-10-1 = 169.254.0.0 + 0x30a1 + 1 = 169.254.48.162


Table 19: WCE VMs Internal Address Usage
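
For illustration, the following Python sketch reproduces the documented encoding and the examples of Table 19; in a real deployment the HwId APIs should be used instead of hard-coding this arithmetic.

# Hedged sketch reproducing the internal fixed-IP derivation of Table 19.
# The encoding may change at any time; use the HwId APIs in practice.
import ipaddress

INTERNAL_SUBNET = ipaddress.ip_address("169.254.0.0")

def internal_fixed_ip(hw_id: int) -> ipaddress.IPv4Address:
    """Fixed IP = <internal_subnet> + <hw_id> + 1."""
    return INTERNAL_SUBNET + hw_id + 1

# Examples from Table 19:
assert str(internal_fixed_ip(0x0001)) == "169.254.0.2"     # 3GOAM-0-1
assert str(internal_fixed_ip(0x1021)) == "169.254.16.34"   # CMU-2-1
assert str(internal_fixed_ip(0x2042)) == "169.254.32.67"   # PC-4-2
assert str(internal_fixed_ip(0x30a1)) == "169.254.48.162"  # UMU-10-1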

Reserved IP Ranges:

The following IP ranges are reserved:


Address Range                        Netmask Length     Purpose       Size
169.254.0.1 - 169.254.0.50           169.254.0.0 /18    3GOAM         4K
169.254.16.1 - 169.254.16.151                           CMU           4K
169.254.32.1 - 169.254.32.149                           PC            4K
169.254.48.1 - 169.254.48.151                           UMU           4K
169.254.64.0 - 169.254.175.255                          reserved-1    24K
169.254.176.0 - 169.254.191.255                         reserved-2    4K
169.254.192.0 - 169.254.219.255                         HW & OAM      7K == 4K + 3K
169.254.220.0 - 169.254.221.255                         LRvG-MGR      512
169.254.222.0 - 169.254.222.255                         reserved-3    256
169.254.223.0 - 169.254.223.255                         iSCSI         256
169.254.224.0 - 169.254.239.255                         PXE/DHCP      4K
169.254.240.0 - 169.254.255.255                         reserved-4    4K

Table 20: WCE Internal Reserved IP Ranges

7.9.3 EXTERNAL IP ADDRESS DIMENSIONING

The number of external IP@ used depends on the network size and on feature activation. The following table describes the number of IP@s in a single-shelf RNC:

Plane   Interface   # IP@                 Total   Subnet

UP      Iub         1 / PC vxell          <48     /26
        Iur         up to 1 / PC vxell    <48     /26
        IuCS        up to 1 / PC vxell    <48     /26
        IuPS        up to 1 / PC vxell    <48     /26

CP      Iub CP      3 / CMU               <9      /27
        Iub CCU     3 / CMU               <9      /27
        IuxCP       1 / CMU (max 16)      <9      /28

OAM     Oam         1 / 3gOAM             2       /29

Up to LR15
Table 21: WCE External IP Address Dimensioning
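
As a quick, hedged sanity check of the subnet sizes above, the short Python fragment below uses the standard ipaddress module to list how many usable host addresses each prefix length provides. The prefix lengths come from the table; the base address 198.51.100.0 is just a documentation example, not a product value.

# Illustrative check of the address count behind each prefix length quoted
# in the external IP dimensioning table (/26, /27, /28, /29).
import ipaddress

for prefix in (26, 27, 28, 29):
    net = ipaddress.ip_network(f"198.51.100.0/{prefix}")
    usable = net.num_addresses - 2  # minus network and broadcast addresses
    print(f"/{prefix}: {net.num_addresses} addresses, {usable} usable hosts")
# /26 -> 62 usable hosts, /27 -> 30, /28 -> 14, /29 -> 6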

Engineering Recommendation: IP Addressing

Note: for complementary information on external IP addresses, please refer to the Iu LR14.2 TEG [Gl_ENG_001].

7.9.4 WCE IP TUNNELING

In order to limit the use of external, routable IP addresses in operators' networks, it is possible to use GRE (IPv4 in IPv4) tunnels, and possibly VPN tunnels, to allow local non-routable addresses to be used for the infrastructure components. The following diagram depicts how these tunnels could be used:


Figure 39: IP Tunnelling

7.9.5 DEPLOYMENT OF THE TUNNEL

The most significant benefit of the use of GRE (Generic Routing Encapsulation) arises when many Wireless Cloud Elements are configured within a single network, as the number of IP addresses required for the infrastructure can then be very significant. Such an operator's network is depicted in the following diagram:

Figure 40: Tunnelling Deployment


7.9.6 SCOPE OF THE TUNNELED IP ADDRESSING (192.168.X.Y)

Figure 41: Tunnelling Addressing

The following notes apply to the use of tunnels:


Internal address scope is limited to the WCE domain
Public/corporate addresses are used inside the corporate network
Addresses cannot be reused for the same WCE Management Server
Tunnelling should be optional and the technology selection will depend on the operator network

7.9.7 GRE IMPLEMENTATION NOTE

When using GRE, the size of the IP packet increases because an extra IP header is required, which can be a problem if the original packet was already at the maximum MTU size. In order to avoid this problem, the router performing the encapsulation must send an ICMP control message ("fragmentation needed and DF bit set") back to the Wireless Cloud Element.
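
As a hedged illustration of why this matters, the small Python check below compares the inner packet size against the path MTU once the extra outer headers are added, assuming a 20-byte outer IPv4 header and a 4-byte GRE header without optional fields; the 1500-byte MTU is only an example value.

# Illustrative MTU check for GRE (IPv4 in IPv4) encapsulation.
# Assumes a 20-byte outer IPv4 header and a 4-byte base GRE header;
# optional GRE fields (key, sequence number, checksum) would add more.
OUTER_IPV4_HEADER = 20
GRE_BASE_HEADER = 4

def fits_after_gre(inner_packet_len: int, path_mtu: int = 1500) -> bool:
    """Return True if the encapsulated packet still fits within the path MTU."""
    return inner_packet_len + OUTER_IPV4_HEADER + GRE_BASE_HEADER <= path_mtu

print(fits_after_gre(1476))  # True: 1476 + 24 = 1500
print(fits_after_gre(1500))  # False: already at the MTU; DF-set packets
                             # would trigger ICMP "fragmentation needed"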

7.9.8 IP ENGINEERING ARCHITECTURE WITH 2 RNC TENANT

For a configuration with 64 blade servers (WCE4), there are 2 RNC tenants. The subnetting is separated between the two tenants. With that respect, the IP addressing plan is as follows:

- One set of UTRAN Telecom and UTRAN OAM traffic IP requirements for Tenant#1
- One set of UTRAN Telecom and UTRAN OAM traffic IP requirements for Tenant#2

Additionally, for each tenant it is assumed that there is full traffic flow separation per interface and traffic type (UP and CP). Each flow has its own VLAN and its own subnet. This multi-VLAN solution is compliant with the legacy RNC configuration. With that respect, each RNC has its own set of VLANs.

Figure 42: RNC tenant VLAN Configuration (telecom routable external traffic: one subnet and one VLAN per UTRAN interface and per CP/UP; Uplane and Cplane flows reach the PC and CMU roles of the RNC tenant through the gateway router, with typical subnet/VLAN sizes of /26 for Uplane and /28 for Cplane)

Then the IP addressing could be described as on the following tables.


IP subnetting requirements for the UTRAN Telecom CP and UP part of a WCE tenant, applicable to both Tenant#1 and Tenant#2

Interface                          Subnet Size      Subnet Mask                                          Number of IP@   Reserved IP@
IuCS Cplane                        /28              255.255.255.240                                      13              16
IuPS Cplane                        /28              255.255.255.240                                      13              16
IuR Cplane                         /28              255.255.255.240                                      13              16
  OR Iu Cplane                     /28              255.255.255.240                                      13              16
  Comment: can be configured as one subnet for all 3 interfaces or as one subnet for each interface.
  Maximum number of CMUs is 13, with 1 NI IP@ per CMU instance.

IuCS Uplane                        /26, /27, /28    255.255.255.192, 255.255.255.224, 255.255.255.240    60, 28, 12      64, 32, 16
IuPS Uplane                        /26, /27, /28    255.255.255.192, 255.255.255.224, 255.255.255.240    60, 28, 12      64, 32, 16
IuR Uplane                         /26, /27, /28    255.255.255.192, 255.255.255.224, 255.255.255.240    60, 28, 12      64, 32, 16
  OR Iu Uplane                     /26, /27, /28    255.255.255.192, 255.255.255.224, 255.255.255.240    60, 28, 12      64, 32, 16
  Comment: maximum number of PCs is 30, with 2 IP@ per PC instance. Similar to Cplane; the Uplane terminations can also be shared.

Iub Cplane                         /26, /27         255.255.255.192, 255.255.255.224                     39, 27          64, 32
Iub CC Uplane                      /26, /27         255.255.255.192, 255.255.255.224                     39, 27          64, 32
  OR Iub Cplane + Iub CC Uplane    /26, /27         255.255.255.192, 255.255.255.224                     39, 26          64, 32
  Comment: maximum number of CMUs is 13, with 3 Iub IP@ per CMU instance.

Iub Uplane                         /26, /27, /28    255.255.255.192, 255.255.255.224, 255.255.255.240    60, 28, 12      64, 32, 16
  Comment: maximum number of PCs is 30, with 2 IP@ per PC instance.

IuPC                               /29              255.255.255.248                                      5               8
  Comment: can be its own subnet or can reuse one of the Iu Cplane subnets.
Figure 43: IP@ requirements for UTRAN Telecom


IP subnetting requirements for the UTRAN OAM for Tenant#1 and Tenant#2

IP subnet requirements on UTRAN OAM


Tenant #1
OAM 3 (/29) The first usable IP@ from the
OAM subnet is for NetConf, the
second IP@ is for SEPI and the
third is for CallTrace
Tenant #2
OAM 3 (/29) The first usable IP@ from the
OAM subnet is for NetConf, the
second IP@ is for SEPI and the
third is for CallTrace
OA/6125/iLO, SAN 82 (/25) Considering 64 blades: 1 for each OA, 1 for each iLO, 1 for each 6125, 1 for each SAN
ESXi / LrcMgr 65 (/25) 64 blade ESXi + 1 for LrcMgr

Figure 44: IP@ requirements for UTRAN OAM

8 WCE PLATFORM OVERVIEW


The WCE supports the segregation of different traffic flow types onto separate VLANs. A number of infrastructure VLANs are defined within the WCE to separate non-telecom flows, such as OAM, disk access and LRCE Manager flows, from tenant payload flows. In addition, each tenant will have its own internal VLAN for inter-component communication. Since each tenant's VLAN will be unique within the WCE, the 169 component IP addresses may be reused from one tenant to the next. In the case of the RNC tenant, VLANs are used to create distinct telecom flows such as IuCS, IuPS, Iur and Iub Uplane and Cplane. The overall WCE VLAN strategy is described in VLAN Tagging and Configuration.
Each VLAN-tagged frame will be marked with a p-bit according to application-assigned DSCP values. Switches in the WCE LAN provide up to 8 emission priority queues per port. P-bits are used to map outgoing frames to the appropriate priority queue. Minimum bandwidth guarantees may be provisioned against each queue. WCE switches support a variety of scheduling algorithms, including SP, WFQ and WRR. All scheduling algorithms are work-conserving.
Another responsibility of the WCE transport domain is to provide a consistent strategy for
failover across all transport elements in the L2 LAN regardless of the size or throughput of the
WCE. The objective is to have the LAN redirect traffic to alternate paths within 500 ms of the
occurrence of a transport fault. This strategy is described in Transport Failover Scenarios.
Security is a concern for all elements of the WCE. WCE switches provide various
transport security capabilities including several types of ACLs. See the Security section for the
overall WCE security strategy.


A subset of Ethernet OAM will be supported by the WCE's switches. The WCE will support IEEE 802.3ah / IEEE 802.1ag in order to verify and troubleshoot connectivity with adjacent L2 switches or NHRs.

8.1 WCE PLATFORM COMPONENT ROLES


The Wireless Cloud Element platform only provides minimal levels of carrier grade capabilities, specifically the ability to detect failed VMs and restart them (an operation that may take tens of seconds). Carrier grade behaviour is the responsibility of the applications. Fault detection is provided by process failure detection, heartbeats and similar mechanisms.

8.1.1 LRC MGR

The WCE's LRC Mgr is an application, residing in its own virtual machine, that acts as a bridge between a virtualized network function and a Cloud O/S. Given that there are many Cloud O/Ss currently in existence, often with incompatible APIs, the LRC Mgr isolates network elements from this complexity. LRC Mgr is not a carrier grade component in itself and is not required for the normal operation of any network element. LRC Mgr extends the functionality of standard Cloud O/S systems in order to lessen the overhead when virtualizing standard network functions. Specifically, the LRC Mgr introduces or refines the concept of a Tenant.
The OpenStack definition of a tenant is as follows: A container used to group or isolate
resources and/or identity objects. Depending on the service operator, a tenant may map to a
customer, account, organization, or project.
LRC Mgr explicitly extends this definition to a network function as virtualized network
functions typically consist of multiple sets of VMs with each set composed of multiple instances of
a specific VM (or servers in OpenStack parlance). Each of the sets of VMs may implement a
specific sub-function of the overall network element. For example, the 3G RNC tenant is
composed of the following sets of VMs: 3gOAM, CMU, UMU and PC. Each set of VMs is
described by its own VM template which describes requirements for virtual CPUs, memory, etc.
To ensure there are sufficient physical resources to create a tenant, the LRC Mgr provides the ability to specify a resource reservation for the entire tenant, for a single VM type, or both. These reservations take the form of MHz of compute and MB of dynamic memory.
LRC Mgr provides mechanisms to act on an entire tenant with a single operation. For
example it is possible to reset an entire tenant, power off or on an entire tenant or add a single
VM to an existing tenant with the LRC Mgr.
In recognition that network functions require, and will have implemented, application-specific sparing mechanisms to achieve high levels of availability, LRC Mgr supports and facilitates these mechanisms with the ability to create affinity and anti-affinity rules. Such rules ensure that VMs are appropriately distributed across the physical compute infrastructure so that the failure of one of the hosts does not result in a VM-level failure beyond what the application is designed to handle. For example, within the 3G tenant there exists a set of PC VMs that implement an N+1 sparing mechanism within the set. To ensure the N+1 mechanism continues to function correctly in a cloud environment, where the allocation of VMs to physical hardware is dynamic, no more than one of the VMs within this set may be allocated to a physical host.
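
As an illustrative sketch only (the actual rules are enforced by DRS and the LRC Mgr, and all names below are hypothetical), the following Python fragment shows the essence of an anti-affinity constraint: no two VMs of the same spared set may land on the same physical host.

# Hypothetical sketch of anti-affinity placement: VMs belonging to the same
# spared set (e.g. the N+1 set of PC VMs) must land on distinct hosts.
from collections import defaultdict

def place_with_anti_affinity(vms, hosts):
    """vms: list of (vm_name, set_name); hosts: list of host names.
    Returns {vm_name: host} or raises if the rule cannot be satisfied."""
    used_hosts_per_set = defaultdict(set)
    placement = {}
    for vm, vm_set in vms:
        candidate = next((h for h in hosts
                          if h not in used_hosts_per_set[vm_set]), None)
        if candidate is None:
            raise RuntimeError(f"anti-affinity violated: no free host for {vm}")
        placement[vm] = candidate
        used_hosts_per_set[vm_set].add(candidate)
        # Rotate the host list so placements spread across the infrastructure.
        hosts = hosts[1:] + hosts[:1]
    return placement

pcs = [(f"PC-{i}", "pc-set") for i in range(4)]        # 3+1 spared PC VMs
print(place_with_anti_affinity(pcs, ["blade-01", "blade-02",
                                     "blade-03", "blade-04"]))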

8.1.2 VMWARE VCENTER SERVER (VCS)

The vCenter Server is the central point for configuring, provisioning, and managing virtualized IT
environments or datacenters.

Figure 45: VCenter Server

vCenter Server aggregates physical resources from multiple ESX/ESXi hosts and
presents a central collection of flexible resources for the system administrator to provision to
virtual machines in the virtual environment. vCenter Server components are user access control,
core services, distributed services, plug-ins, and various interfaces.
The User Access Control component allows the system administrator to create and
manage different levels of access to vCenter Server for different classes of users.


For example, a user class might manage and configure the physical virtualization server
hardware in the datacenter. Another user class might manage virtual resources within a particular
resource pool in the virtual machine cluster.

8.1.3 VCENTER SERVER CORE SERVICES

Core Services include the following services:


Virtual machine provisioning - Guides and automates the provisioning of virtual
machines and their resources.
Host and VM Configuration - Allows the configuration of hosts and virtual machines.
Resources and virtual machine inventory management - Organizes virtual
machines and resources in the virtual environment and facilitates their management.
Statistics and logging - Logs and reports on the performance and resource use
statistics of datacenter elements, such as virtual machines, hosts, storage, and
clusters.
Alarms and event management - Tracks and warns users on potential resource
overuse or event conditions. You can set alarms to trigger on events and notify when
critical error conditions occur. Alarms are triggered only when they satisfy certain time
conditions to minimize the number of false triggers.
Task scheduler - Schedules actions such as vMotion to occur at a given time. (Not
supported in LR14.2W release)
Consolidation - Analyzes the capacity and use of a datacenters physical resources.
Provides recommendations for improving use by discovering physical systems that
can be converted to virtual machines and consolidated onto ESX/ESXi. Automates
the consolidation process, but also provides the user with flexibility in adjusting
consolidation parameters.
vApp - A vApp has the same basic operation as a virtual machine, but can contain
multiple virtual machines or appliances. With vApps, you can perform operations on
multitier applications as separate entities (for example, clone, power on and off, and
monitor). vApps package and manage those applications.
Distributed Services are solutions that extend VMware vSphere capabilities beyond a single physical server. These solutions include VMware DRS, VMware HA and VMware vMotion. Distributed Services allow the configuration and management of these solutions centrally from vCenter Server. (vMotion is not supported in the LR14.2W release.)
Multiple vCenter Server systems can be combined into a single connected group. When a
vCenter Server host is part of a connected group, you can view and manage the inventories of all
vCenter Server hosts in that group.

8.1.4 VMWARE FUNCTIONS

ESXi - hypervisor for a single server, with APIs for starting, restarting, killing and querying VMs, etc.
vCenter - centralized entity for managing many servers together. Uses the ESXi APIs.
Cluster - set of servers managed by the same vCenter. A server can belong to only one cluster.
Resource Pool - logical representation of the computing capacity (CPU, memory).
Resource reservation - represents resources (CPU, memory) that are guaranteed.
vMotion - ability to move a VM from one server to another server when both servers are managed by the same vCenter (not supported in the LR14.2W release).
vApp - set of VMs that logically comprise a single application. A vApp can be defined and deployed on a server or in a DRS-enabled cluster. DRS allows anti-affinity rules to be defined to enforce rules for individual VM placement.
HA - feature that runs in a cluster and sets up ESXi hosts to monitor each other. If a server in the cluster fails, the VMs on that server are restarted on other server(s).
Failover host - new HA feature available with vSphere 5.0 that allows one or more hosts to be specified as failover hosts. No VMs are placed on a failover host. If a server fails, HA will restart all of the VMs on a failover host.
DRS - feature that runs in a cluster and is based on vMotion. DRS has 3 modes:
   Manual - no automatic vMotion and no automatic startup of VMs on "the best" server. vSphere API indications about what is suggested by DRS are still available.
   Semi-automatic - no automatic vMotion for load balancing, but automatic startup of VMs on "the best" server.
   Automatic - load balancing via vMotion (at various levels depending on how balanced the servers should be) and automatic startup of VMs on "the best" server. The decisions are based on past performance data collected from the hosts in the DRS cluster and stored in the vCenter Server database, so for automatic DRS to function properly, vCenter Server must be up and running.
Anti-affinity - DRS feature that allows specification of VMs such that they cannot run on the same server.
DPM - power management feature for DRS-based clusters. Under-utilized servers can be shut down and powered back up when needed. Uses vMotion to move VMs to other servers.

Engineering Recommendation:
There is a VMware white paper which addresses the WCE Carrier Cloud solution:
https://umtsweb.ca.alcatel-lucent.com/wiki/pub/WcdmaRNC/LightRadioReferenceArchitecture/Alcatel_Lucent_WCE_Success_Story_FINAL_20131210-r1v70.pdf


8.1.5 DISK ACCESS VM

The WCE platform will include an optional Disk Access tenant. For release 1, the Disk
Access tenant must be deployed before any 3G tenant is deployed as the 3G tenant will make
use of this functionality.

8.1.5.1 REQUIREMENT FOR 3G DISK ACCESS

Single copy of 3G software on the SAN for all VMs to use (software download of a given version should not need to occur more than once)
Reliable spooling of counter data and call trace data, all written from the active 3gOAM VM
Reliable configuration data (specifically the MIB) read/written from the active 3gOAM VM
No single point of failure
In general, at minimum both 1:1 spared 3gOAM VMs need to access the same disk partition for software download, configuration data management and counter/CT spooling. The reliability comes from RAID, so software-based shadowing/mirroring should no longer be needed; the aim is to get out of the middleware business.
In order to be able to safely share the same disk across more than one VM, Disk Access software is used. The Disk Access VMs provide NFS and SFTP servers for shared disk access.

Figure 46: Disk Access Tenant Configuration

8.1.5.2 TENANT SOFTWARE DISK

A separate Disk per WCE -- Initially only used for 3g software but may be converted in a later
release to include LRC Platform software and other tenant software as well.


All 3gOAM VMs (across multiple RNC tenants in the same WCE) will have the ability to NFS
mount this disk via the Disk Access VMs.
file system provided by Disk Access VMs
partitions and directory structure used to separate software versions
populated with RNC software as part of I&C
LrcMgr northbound interface handles software version management on the disk after initial
deployment
3gOAM creates a RAM disk with that RNC's current software version
other VMs PXE boot from 3gOAM using RAM disk

8.2 WCE RNC CPLANE AND UPLANE DATAPATH


The Figure 46 below depicts the telecom data paths for the WCE RNC. Each CMU will
support SCTP associations to a group of NodeB (i.e. all the NodeB in that CMUs service group)
and to each core network element (MSC, SGSN, nRNC). This implies that each core network
element will have at least as many RNC associations as there are CMUs in the RNC.
The CMUs will also directly terminate the common channels (CCuplane) instead of the
common channels being forwarded through the PCs. All other RNC Uplane (IuCS, IuPS, Iur & Iub
Dch) traffic will go through the PCs. Each PC will manage a group of NodeBs. The PCs will have
dual functions. The current TBM functionality will be distributed over the PCs (i.e. performing
CAC, allocating UDP ports, etc). In addition, the PCs will continue to manage bandwidth pools
and do UDP & GTP NAT.

The RNC Cplane is only visible to the external network through the CMUs and the Uplane is only
visible to the external network through the PCs. This means that externally routable subnets for
all traffic types will be provisioned against those two roles. Since the Iu Cplane, Iub Cplane and
Iub common channel Uplane will probably be implemented on different OS/core instances, three
subnets will be needed for the CMUs. The customer will have the option to provision one subnet
for all Uplane traffic on the PCs or provision individual subnets for each of the four Uplane traffic
types.
As further simplifications to the existing RNC architecture, there will no longer be
separate PCs for the Iu and Iub legs of a given call. Instead, both legs of a call will be handled by
the same PC. As well, PCs will no longer be warm-spared but will be cold-spared instead. When
a PC fails, all the calls associated with that PC will be dropped and the PC will be re-instantiated
elsewhere in the cloud.


Figure 47: RNC Cplane and Uplane DataPaths

8.3 VLAN TAGGING AND CONFIGURATION


8.3.1 NON TELECOM VLAN DESCRIPTION

Figure 48: WCE VLAN Non Telecom Strategy



The coloured lines in this figure imply separate VLANs and VIDs. The point at which each of these coloured lines meets a VM implies a separate vNIC in a VM's template. The VID for each VLAN is configured by the customer in the port group VLAN attribute of the vNIC. Frames leaving the VM via the vNIC are tagged and assigned the appropriate VID by the virtual Distributed Switch (vDS). It is important that each tenant maintain its own unique internal VLAN (shown as dark blue and yellow lines) for communication between tenant components, so that the 169.254.0.0/16 IP subnet may be reused between tenants.
VMware also allows the assignment of a single p-bit value to a port group. However, this is insufficient for most application flows since any given flow (as identified by a VLAN or vNIC) will carry frames of different priorities within the flow. An example of this from the telecom traffic domain is the Iub Uplane flow, whose frames require different priorities depending on the type of call they represent (e.g. voice, streaming, interactive/background, etc.). Therefore, a more flexible p-bit marking mechanism is required. As frames leave the blade and enter the Blade Switch, the DSCP value from a frame's IP header is translated to a configured p-bit value and inserted into the Ethernet header (i.e. the VLAN tag) by the switch.
This tagging and p-bit marking strategy applies to all the VLANs in the figure above, with the exception of the ESXi/SAN and ESXi/vCenter paths, which are depicted by the solid black and red lines emanating from the ESXi elements.
Non Telecom VLAN Recommendations:
The WCE will support at least 3 non-telecom VLANs: OAM (red), LRCEMgr internal (grey) and iSCSI (black).
The 3G tenant introduces 2 more non-telecom VLANs: 3G tenant internal (blue) and DA internal (purple).
The iSCSI VLAN's VID will be 4094 and the p-bit will be 0. These values are hardcoded in the blade's BIOS by System House but may be changed by I&C.
The iSCSI vNIC must have the same VID (4094) configured in its port group.
The recommendation for the OAM VID is 4093. This value must be configured manually in each ESXi by I&C, followed by a reset of the blade.
The same VID must be configured in the OAM vNIC port group.
All other VLANs (LRCEMgr internal, DA internal, 3G tenant internal) will have a customer-selected VID configured in the appropriate port group.
P-bit values for all frames leaving VMs will be set by the Blade Switch downlink ports based on a configurable DSCP to p-bit mapping, for DSCP values other than 0.
The recommendation for call trace frames coming from the CMUs and UMUs is a DSCP value of CS1, which is the lowest possible value.
Another rule on the Blade Switch will identify OAM VLAN frames from the ESXi (DSCP = 0) based on VID = 4093 and set the p-bit to 2, according to recommendations. I&C or the customer may configure any other p-bit value in this rule.
I&C or the customer may also configure a rule for the iSCSI VLAN (VID = 4094) to override the 0 p-bit set by the BIOS.


Uplink ports to the SAN and OAM devices (WMS, vCenter) are configured as access ports
with PVIDs of 4094 and 4093 respectively. Rules to set the p-bits in ingress frames must be
defined on these ports.
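
Purely as an illustration of this kind of switch rule, a DSCP to p-bit translation could be modelled as below. Only the OAM (VID 4093, p-bit 2) and iSCSI (VID 4094, p-bit 0) defaults come from the recommendations above; the DSCP table itself, like the function name, is a hypothetical example of a configurable mapping.

# Illustrative model of the Blade Switch marking rules: DSCP -> p-bit for
# tagged frames, plus per-VID defaults for frames carrying DSCP 0.
DSCP_TO_PBIT = {46: 5, 34: 4, 26: 3, 18: 2, 8: 1}   # assumed example mapping
VID_DEFAULT_PBIT = {4093: 2, 4094: 0}               # OAM and iSCSI VLANs

def mark_pbit(dscp: int, vid: int) -> int:
    """Return the p-bit to insert into the VLAN tag of an outgoing frame."""
    if dscp != 0 and dscp in DSCP_TO_PBIT:
        return DSCP_TO_PBIT[dscp]
    # DSCP 0 (or unmapped): fall back to the per-VLAN default, else best effort.
    return VID_DEFAULT_PBIT.get(vid, 0)

print(mark_pbit(dscp=46, vid=300))   # 5: high-priority telecom frame
print(mark_pbit(dscp=0, vid=4093))   # 2: OAM VLAN frames from the ESXi
print(mark_pbit(dscp=0, vid=4094))   # 0: iSCSI VLAN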

8.3.2 TELECOM VLAN DESCRIPTION

In addition to the non-telecom VLANs described earlier, customers will want to separate
telecom traffic types onto different VLANs. Figure below depicts a typical way of separating
external/telecom traffic types for the RNC. Most customers want to separate Cplane and Uplane
on all major interfaces as well as separating IuPC traffic. As explained later, IuBC and iSMLC
traffic cannot be put on different VLANs due to the way these functions are implemented in the
RNC. The telecom VLANs are depicted terminating on the gateway router since this is likely the
scope of their relevance. As packets for the various RNC traffic types are forwarded to the
external network (i.e. to the left of the gateway router) they may be put onto a number of different
technologies as described in the Transport Overview.

Figure 49: Maximum set of telecom VLANs for the RNC

8.3.3 RNC TELECOM VLAN SEPARATION

The figure below shows which VLANs (both telecom and non-telecom) are associated with each VM type for the WCE platform and RNC VMs. Each VLAN requires a separate vNIC on a VM. Each colour indicates a different VID configured in the vNIC's port group. Although the WCE/RNC may be required to support as many as 15 telecom and non-telecom VLANs, each individual VM type (i.e. RNC role) only has to support a subset of those VLANs, based on the type of external traffic a given role generates. Note that the largest number of VLANs any role has to support is 8 (on the CMU).


Figure 50: Maximal RNC Telecom VLAN Separation

The red star under the OAM vNIC highlights the fact that this VLAN will carry several different types of application traffic, including 3G OAM, RNC CallTrace traffic from the 3G OAM, CMU and UMU, LRCE Manager traffic to the vCenter and WMS, ESXi traffic to the vCenter, and any other WCE tenant's OAM traffic. In addition, this VLAN will carry IuBC or iSMLC traffic if these functions are configured on the RNC. These functions must use the OAM VLAN because they use the OAM IP address of the OMU, which is part of the 3G OAM VM.
The Iub CP vNIC on the 3G OAM VM is there to allow the OMU to receive Attach
messages from the NodeBs. The first green star brings attention to the fact that this vNIC is on
the same VLAN as the Iub Cplane. The Attach functionality is assigned an IP address from the
Iub Cplane subnet and therefore must be on the same VLAN as the Iub Cplane.
The second green star in Figure above highlights the fact that this VLAN may carry a new
traffic type for the WCE version of the RNC. This new traffic type is the Iub Common Channel
(CC) Uplane. Common channels reside on the PC in the 9370 RNC but have been moved to the
CMU for the WCE. A few options were contemplated for this VLAN depending on the IP address
strategy adopted. The first option involves using the same IP address as the Iub Cplane for that
CMU. This means that the Iub CC Uplane frames must be carried on the Iub Cplane VLAN. This
is the preferred option for the final product due to IP address consumption concerns.
A second option involves defining a separate IP subnet for the Iub CC Uplane traffic. This
option would allow the Iub CC Uplane traffic to be carried on the same VLAN as the Iub Cplane or
on its own VLAN. The latter VLAN option is highlighted by the turquoise star in Figure above. This
IP address strategy is what is currently used in the labs because at the moment the software
cannot be configured to separate traffic for two different functions (Cplane and Uplane) on the
same IP address. This is a temporary arrangement until the software can make such a distinction.
Note that the CMU template will include a vNIC for the Iub CC Uplane. This will allow a customer
to segregate the Iub CC Uplane from other CMU traffic if IP addresses are not a concern.
Please note that the figure above shows the maximum VLAN configuration. If not all RNC functionalities are available, in particular the IuPS NAT function, then the IuPS Uplane terminates on the UMUs, as shown in the figure below:


Figure 51: RNC vNIC with IuPS terminates on UMU

9 WCE TRANSPORT OVERVIEW


As a platform for various telecom functions, the Wireless Cloud Element (WCE) must be able to interact with many diverse types of transport technologies, which could include IPoE, MPLS and VPLS VPNs, Ethernet, E-pipes, microwave, IPv6, etc. These technologies may vary from one customer's network to another for a given WCE tenant; they may vary between the core and the access network for a given tenant, and will almost certainly vary from tenant to tenant.
For the WCE, the platform transport domain defines the physical infrastructure required to
maintain the interface to the NHRs as well as support the virtual networking interface provided to
the applications. For the first release of the WCE, this means selecting the appropriate hardware
configuration to deliver the required capabilities. In future releases this will mean providing a well-
defined SLA that data center/cloud provider managers would use to ensure proper operation of
the WCE tenants. See Figure below

Figure 52: WCE Transport Reference Architecture

The WCE platform transport is built upon a single L2 Ethernet LAN. This is important for several reasons, the principal one being to alleviate customer concerns regarding IPv4 address usage by allowing internal WCE components to use link-local (169.254.0.0) addresses, and to facilitate failover of software components within the WCE. The platform provides a basic set of transport capabilities as shown in the figure above. This set includes 10 Gbps interfaces that may be down-rated to 1 Gbps if that is what is supported on the NHRs.

Engineering Recommendation:
For the Transport Engineering Guidelines WCE RNC tenant, please
refer to the Iu TEG LR14.2 [Gl_ENG_001]

9.1 WCE TRANSPORT COMPONENT


From a transport perspective, the WCE hardware environment comprises an HP c7000 enclosure (aka shelf) which contains 16 BL460 blades and a pair of 6125XLG blade switches. A WCE can have up to 4 enclosures, 64 blades and 8 switches. The blades have two 10 Gbps NICs, each of which is connected to a different blade switch via backplane connections. The two switches are inter-connected via four 10 Gbps backplane links and one 40 Gbps faceplate link. These are known as IRF links. IRF (Intelligent Resilient Framework) is a proprietary HP feature which allows multiple switches to appear as a single network element. An IRF domain may contain up to 10 switches. IRF also allows uplinks from different switches to be aggregated together, effectively implementing Multi-Chassis Link Aggregation Groups (MC-LAG). Each switch will be equipped with a single 10 Gbps uplink (see Telecom Link Topology). IRF allows all links in a LAG to be actively forwarding frames. However, many NHR suppliers implement MC-LAG in an active/standby fashion, as depicted in Figure 53 below. Consequently, LACP must be supported in the NHRs to determine which links will be selected (solid line in the figure) and unselected (dotted line) with respect to the LAG.


Figure 53: WCE Transport Component


Summary of requirement for NHR:
Static routes to WCE on downlink ports.
VLAN tagging on downlink frames.
DSCP to p-bit mapping and marking on downlink frames.
Multi-Chassis LAG on downlink ports.
Use of LACP to establish LAGs and specify selected and unselected links within
the LAGs.
Switchover from failed selected ports in less than 500 msec.
Switchover from failed router in less than 500 msec.
LAG hashing algorithm that uses both L3 & L4 data (e.g. src+dst IP & src+dst UDP).
Support configurable MTU size up to 1540 bytes on downlink ports.
Mapping of VLANs to appropriate transport technology (e.g. VPRNs, E-pipes, VPLS,
etc) on uplinks.
Fragmentation on uplink ports.

VMware supplies many of the software aspects of WCE transport.


VMs may have multiple vNICs to accommodate different traffic requirements. vNICs with
the same set of requirements are part of the same port group. Port group attributes include virtual
switch name, VLAN ID, teaming policy and L2 security options (promiscuous mode, MAC address
change lockdown, forged transmit blocking). In Figure above, vNICs of the same colour are from
the same port group.
The virtual Distributed Switch (vDS) acts as a forwarding engine between VMs, whether they are on the same blade or not. Each vNIC is assigned a MAC address by VMware. The vDS uses these MACs to determine whether a frame can be delivered directly to another VM or must be sent out the physical NIC (vmnic in VMware parlance) to another blade. On ingress, the vDS filters frames based on VLAN IDs. The vDS has a presence on all blades of a given VMware cluster.

9.2 NIC TEAMING AND DATA FLOW


The RNC tenant uses port group attributes to establish VLANs for different WCDMA traffic types and for port teaming policy enforcement. Port teaming may be active/standby (all traffic goes out one port and switches to the other only upon failure of the original port) or active/active (aka load-sharing). VMware offers 3 types of load-sharing: alternating mapping of vNICs to vmnics, hashing of source MAC addresses to vmnics and hashing of IP addresses to vmnics. Mapping of vNICs is statically determined as VMs power on. The other two forms of hashing are done dynamically and hence consume more blade CPU. Consequently, the RNC tenant has chosen vNIC mapping as the load-sharing technique. The algorithm proceeds in a 0-1-1-0 manner: the first vNIC to power on gets mapped to vmnic 0, the next two get mapped to vmnic 1, the fourth gets mapped back to vmnic 0, and then the pattern repeats (a short sketch of this mapping is given at the end of this section). The coloured lines in Figure 54 depict not only this mapping, but also the VLANs which are associated with the various WCDMA traffic types. The binding of vNICs to vmnics persists until a vmnic fails, either due to blade h/w failure or due to failure on the attached blade switch. When the vmnic recovers, the vDS resumes the original bindings.

Figure 54: NIC Teaming and Data Flow

Note the paths the various frames will take through the IRF domain. Since, in this
example, the NHR routers have implemented MC-LAG in an active/standby manner, there is only
one uplink path available in the domain. IRF will redirect frames over the IRF links to reach a
viable uplink.
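
As a minimal sketch of the 0-1-1-0 assignment described above (the function and variable names are illustrative, not VMware APIs):

# Illustrative sketch of the 0-1-1-0 vNIC-to-vmnic assignment: as vNICs power
# on, they are bound to the two physical NICs following the repeating pattern
# vmnic0, vmnic1, vmnic1, vmnic0, ...
PATTERN = (0, 1, 1, 0)

def vmnic_for(vnic_power_on_index: int) -> int:
    """Return the vmnic (0 or 1) bound to the i-th vNIC to power on."""
    return PATTERN[vnic_power_on_index % len(PATTERN)]

for i in range(8):
    print(f"vNIC #{i} -> vmnic{vmnic_for(i)}")
# vNIC #0 -> vmnic0, #1 -> vmnic1, #2 -> vmnic1, #3 -> vmnic0, #4 -> vmnic0, ...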


10 WCE DIMENSIONING RULES

10.1 OVERLOAD INDICATORS


PSE overload indicators: unchanged from the 9370 RAB.
CNP overload indicators:
1) CNP mailbox
CPU will not be used as an overload indicator for the UMU and CMU, since the U-Plane EE can use the maximum available CPU to process delay-tolerant traffic.
Note: the PSE overload thresholds and CNP overload thresholds used in the 9370 need to be verified for the CMU and UMU by LRC capacity testing.

10.1.1 LOCAL OVERLOAD

The PSE has 3 overload levels: minor, major, critical. The CNP has 5 overload levels: minor, major, critical, critical platform 1, critical platform 2.
In the 9370 a mapping is used between them:
PSE minor overload level <-> CNP minor overload level
PSE major overload level <-> CNP critical overload level
PSE critical overload level <-> CNP critical platform 2 overload level
This mapping is kept for the LRC for computing the local overload for the CMU and UMU:
Local PSE Overload Level = max of {PSE overload levels}
Local CNP Overload Level = max of {CNP overload levels}
Local Overload Level = max of {PSE overload levels, CNP overload levels}
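
A minimal sketch of this computation, assuming the PSE readings are first projected onto the CNP scale using the mapping above (the numeric enumeration values and function name are illustrative only):

# Illustrative computation of the local overload level for a CMU/UMU.
# CNP scale (0..5); PSE levels are projected onto it with the documented
# mapping (minor -> minor, major -> critical, critical -> critical platform 2).
CNP_LEVELS = {"none": 0, "minor": 1, "major": 2, "critical": 3,
              "critical_platform_1": 4, "critical_platform_2": 5}
PSE_TO_CNP = {"none": "none", "minor": "minor",
              "major": "critical", "critical": "critical_platform_2"}

def local_overload(pse_levels, cnp_levels):
    """Return the local overload level name for a set of PSE and CNP readings."""
    projected = [CNP_LEVELS[PSE_TO_CNP[p]] for p in pse_levels]
    raw = [CNP_LEVELS[c] for c in cnp_levels]
    worst = max(projected + raw, default=0)
    return next(name for name, value in CNP_LEVELS.items() if value == worst)

print(local_overload(["minor", "major"], ["minor"]))  # -> "critical"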

10.1.2 OVERLOAD DECISION TABLE

The strategy is to keep the 9370 behaviour unless changes are necessary. The changes from the 9370 are colour-coded in the table:
1) RRC Connection Request filtering in the CMU U-Plane due to TMU system overload is changed to local CNP overload.
2) Paging filtering in the NI due to RAB system overload is removed.
3) U-Plane CAC functions due to overload are removed. Should there be a need for CAC, it will be added in CallP and made OAM-configurable, like the PCH state transition case.
Overload control functions performed by the NI:
1) For the ingress direction (from CN to RNC), since SCCP Access runs in the C-Plane EE, it makes sense for sccpRouting to check the C-Plane EE queue overload level before sending messages to it.
2) For the egress direction, it does not make sense for the NI to drop messages that are already received, but this is what is implemented in the 9370.


Processor Level of Overload Corrective Action


RRC Connection CMU CallP NO OVERLOAD Nominal Behaviour
Request Minor
MAJOR
Reject 1 out of X
except emergency
CRITICAL calls [1]

CRITICAL PLATFORM 1
Reject all except
emergency calls1

CRITICAL PLATFORM 2 Reject all


CMU UPlane NO OVERLOAD Nominal Behaviour
(CCCH) MINOR Filter 1 out of 2 except
emergency calls [2]
MAJOR
Filter all except
CRITICAL
emergency calls

CRITICAL PLATFORM 2
Filter all
Paging CMU U-Plane No Overload Nominal Behavior
Request [3] MINOR Filter 1 out of 2
MAJOR CRITICAL Filter all
UMU Callp No OVERLOAD Nominal Behaviour
(RncCom) MINOR
MAJOR Filter 1 out of Y
CRITICAL
CRITICAL PLATFORM 1 Filter all
&2
NI No OVERLOAD Nominal Behaviour
(SccpRouting) MINOR Filter 1 out of 2
MAJOR Filter all
CRITICAL
CRITICAL PLATFORM 2 Filter all
SCCP messages NI No OVERLOAD Nominal Behaviour
except Paging (SccpRouting) MINOR Filter 1 out of 10
MAJOR Filter 1 out of 5
CRITICAL and Above Filter all
PCH upsizing or UMU Callp No OVERLOAD Nominal Behaviour
Downsizing MINOR
MAJOR Filter 1 out of Z
CRITICAL

CRITICAL PLATFORM 1 Filter all


&2
UPlane Background CMU and No OVERLOAD Nominal Behaviour
Class UMU UPlane MINOR Filter all
MAJOR
CRITICAL
UPlane delay CMU and No OVERLOAD Nominal Behaviour
tolerant class UMU UPlane MINOR Filter 1 out of 10
MAJOR Filter 1 out of 5
CRITICAL Filter all
Call Trace CMU and No OVERLOAD Nominal Behaviour
UMU Callp MINOR
Suspension of all
sessions except CTa,
CTb, OT-RNC and
CTn sessions

MAJOR
Suspension of CTa,
CTb, OT-RNC and
CTn sessions

CRITICAL Suspension of all


sessions
CMU and NO OVERLOAD Normal Behaviour
UMU UPlane Suspension of all
MINOR_OVERLOAD
sessions

MAJOR_OVERLOAD
CRITICAL_OVERLOAD

[1] A reject message is sent to the UE


[2] No reject message is sent to the UE
[3] Does not distinguish between Paging Type 1 and Paging Type 2

Table 22: Overload level actions


10.2 VRNC DIMENSIONING RULES

Engineering Recommendation:
For the LR14.2W release, one cluster can handle 32 blades; this is due to a
VMware restriction. Configurations range from WCE1, which handles 16 blade
servers, up to WCE4, which handles 64 blade servers. As already described in
the VM configuration section, the table below gives the rules applicable to
VM placement for LR14.2W.

On a WCE system, blades/hosts can be added one or more at a time, in service and with no
impact, and roles can then be dynamically added for growth. The minimum supported configuration
is 6 blades/hosts, but this does not give full carrier-grade availability.

One vRNC (virtual or logical RNC) can span one cluster of up to 32 hosts. The table below
gives the maximum composition of one vRNC:

Rules per blade server and per vRNC:
CNP: 255 nodes max per RNC
CMU: 1 per blade server (no more than one CMU on each blade server); 16 CMUs max per RNC
UMU: N per blade server (dynamic, depending on application vs server load); 65 UMUs max per RNC
PC: 1 per blade server (no more than one PC on each blade server); 30 PCs max per RNC
3gOAM: 1 per blade server (no more than one 3gOAM on each blade server, and not on the same
blade as a CMU); 2 3gOAM per RNC

Reminder: there is a maximum of 32 hosts in one cluster.
Table 23: WCE vRNC Dimensioning Rules
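
As a reading aid for Table 23, the sketch below checks a proposed VM layout against the per-blade and per-vRNC limits listed above. The rule values come from the table, while the data structure and function name are illustrative only:

```python
# Per-vRNC maxima from Table 23 (LR14.2W); per-blade rule: at most one CMU,
# one PC and one 3gOAM per blade, and 3gOAM not on the same blade as a CMU.
MAX_PER_VRNC = {"CMU": 16, "UMU": 65, "PC": 30, "3gOAM": 2}
MAX_HOSTS_PER_CLUSTER = 32

def check_layout(blades):
    """blades: list of lists of VM roles per blade, e.g. [["CMU", "UMU"], ["PC"]]."""
    errors = []
    if len(blades) > MAX_HOSTS_PER_CLUSTER:
        errors.append("more than 32 hosts in the cluster")
    totals = {}
    for i, roles in enumerate(blades):
        for role in ("CMU", "PC", "3gOAM"):
            if roles.count(role) > 1:
                errors.append(f"blade {i}: more than one {role}")
        if "3gOAM" in roles and "CMU" in roles:
            errors.append(f"blade {i}: 3gOAM co-located with CMU")
        for role in roles:
            totals[role] = totals.get(role, 0) + 1
    for role, limit in MAX_PER_VRNC.items():
        if totals.get(role, 0) > limit:
            errors.append(f"{role}: {totals[role]} exceeds per-vRNC max {limit}")
    return errors

print(check_layout([["CMU", "UMU", "UMU"], ["3gOAM", "UMU"], ["PC", "UMU"]]))  # []
```
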

10.2.1 TRAFFIC

One UMU VM is composed of 3 UMU-Vxells, each handling 4K total users (3K PCH, 1K
DCH/FACH):
Up to 780 000 users per vRNC
3K for PCH users, with a combined maximum of 3K URA_PCH and 1K CELL_PCH
1K for FACH/DCH users, with a combined maximum of 850 DCH and 700 FACH
This means a total of 585 000 PCH and 195 000 DCH/FACH users per vRNC (a small sketch
of this arithmetic follows).
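
A minimal sketch of the arithmetic behind these figures, using the per-Vxell values quoted above (the constant names are purely illustrative):

```python
# Per-UMU figures quoted above: 3 UMU-Vxells per UMU VM, each handling
# 3K PCH + 1K DCH/FACH users, with 65 UMUs max per vRNC in LR14.2W.
VXELLS_PER_UMU = 3
PCH_PER_VXELL = 3_000
DCH_FACH_PER_VXELL = 1_000
MAX_UMU_PER_VRNC = 65

pch_users = MAX_UMU_PER_VRNC * VXELLS_PER_UMU * PCH_PER_VXELL             # 585 000
dch_fach_users = MAX_UMU_PER_VRNC * VXELLS_PER_UMU * DCH_FACH_PER_VXELL   # 195 000
print(pch_users, dch_fach_users, pch_users + dch_fach_users)              # 585000 195000 780000
```
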


10.2.2 VRNC DIMENSIONING RULES

Reminder: The HP blades used on the WCE shelf are built with 20 cores each.

Rule:
WCE1 Conf: 16 blades X 20 = 320 Cores
WCE2 Conf: 32 blades X 20 = 640 Cores
WCE3 Conf: 48 blades X 20 = 960 Cores
WCE4 Conf: 64 blades X 20 = 1280 Cores
Virtual Machine Configuration:
1 x UMU VM uses 4 Cores
1 x CMU VM uses 4 Cores
1 x PC VM uses 4 Cores
1 x 3gOAM VM uses 2 Cores
1 x DA VM uses 2 Cores
1 x LRc Mgr uses 2 Cores

The maximum size of one vRNC is up to 65 UMU, 16 CMU, 30 PC, 2 3gOAM, 2 DA and 2 LRcMgr,
located on a 32-blade (WCE2) system configuration.

Rule:

This configuration consumes (65x4) + (16x4) + (30x4) + (2x2) + (2x2) + (2x1) =
454 cores, or about 70% core utilization on a WCE2 system.
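
The core budget can be checked with the small sketch below, which simply reproduces the rule's arithmetic; the per-VM core terms are taken as written in the formula above:

```python
# Core consumption of the maximum vRNC on a WCE2 (32 blades x 20 cores = 640 cores),
# reproducing the formula above: (65x4)+(16x4)+(30x4)+(2x2)+(2x2)+(2x1).
vm_cores = [
    (65, 4),  # UMU
    (16, 4),  # CMU
    (30, 4),  # PC
    (2, 2),   # 3gOAM
    (2, 2),   # DA
    (2, 1),   # LRcMgr term as written in the rule above
]
total_cores = sum(count * cores for count, cores in vm_cores)       # 454
wce2_cores = 32 * 20                                                # 640
print(total_cores, f"{100 * total_cores / wce2_cores:.1f}% of WCE2")  # 454, ~70.9%
```
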

10.2.3 COVERAGE LIMITS

Rule:
WCE in LR14 (Release 1) can manage up to 12000 cells with 2400 cells per
vRNC (up to five vRNCs per WCE).

11 WCE RNC 3G CAPACITY DESCRIPTION

11.1 WCE RNC 3G CAPACITY LICENSING

Capacity licensing is the ability to issue a license tied to a given capacity. The following RNC
capacity metrics are licensed:

Erlangs
Throughput (Mbps)
Cells
Ref Subscribers (future proof for anticipated M2M)

Using the call profile of the customer, the network configuration/engineering team will translate the
required erlangs and Mbps into a specific number of UMUs and CMUs. The RNC will not limit the


throughput of a cell, but rather the overall throughput of the RNC, by limiting the number of UMUs and
CMUs.

Each UMU is rated for the maximum capacity with each release. The number of UMUs licensed to the
customer will depend on the contract negotiations for the system. The number of UMUs licensed will
be applied to the License Key Delivery Infrastructure (LKDI). During provisioning of the Radio Network
Controller (RNC) tenant, the Management System (WMS) verifies the UMU licenses and provisions
the correct number of UMUs on the RNC.

The WCE RNC combines the control plane and user plane functions within a single virtual
machine, on a per user basis.
This User Management Unit (called UMU) is the single element responsible for all UE
management aspects: including both data traffic as well as signalling.
UMU is under licensing control: one license per UMU.
Please refer to the WCE MOPG for the UMU licensing order code [Int_Eng_001]

11.2 WCE RNC 3G CAPACITY STATUS

Engineering Recommendation:
For LR14.2W, the current status indicates that one blade server is able
to deliver about 100 Mbps. Therefore, 64 blade servers with Ivy Bridge
processors (scaling factor > 1.33 vs Sandy Bridge) are able to provide
6.4 Gbps of Iu throughput. The results include normal mode provisioning
but exclude the ciphering, multi-VLAN and GTPU-NAT features.

Configuration                         Measured or predicted Iu throughput
24 server blades (Ivy Bridge G8+)     2.4 Gbps
32 server blades                      3.2 Gbps
48 server blades                      4.8 Gbps
64 server blades                      6.4 Gbps
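
A minimal sketch of the scaling used in the table above, assuming the ~100 Mbps per Ivy Bridge blade figure quoted in the recommendation (the function name is illustrative):

```python
# Iu throughput projection at ~100 Mbps per Ivy Bridge blade server,
# as quoted in the engineering recommendation above.
MBPS_PER_BLADE = 100

def iu_throughput_gbps(blades):
    return blades * MBPS_PER_BLADE / 1000.0

for blades in (24, 32, 48, 64):
    print(blades, "blades ->", iu_throughput_gbps(blades), "Gbps")
# 24 -> 2.4, 32 -> 3.2, 48 -> 4.8, 64 -> 6.4
```
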

Traffic Profile Characteristics influence the Iu throughput a 9771 WCE can achieve.

Virtual Machines CPU Utilization: represents the CPU utilization of each Virtual
Machine while processing calls and handling traffic.


Traffic: represented by

RAB assignment successes (PS, CS) and number of SMS, which reflect the traffic type of
the vRNC on the WCE
HSDPA and HSUPA traffic of the NodeBs on the WCE vRNC
IuPS throughput:
Represents the amount of total IuPS traffic sent and received
This traffic represents all information going through the IuPS IP interface, including
protocol headers, signalling and data.

11.2.1 CAPACITY EVALUATION FROM CUSTOMER PROFILE

For estimating the number of blades and VMs for a given customer traffic profile, the RNC
RCT / Companion Tool is able to provide the evaluation.

This example concerns a call profile with the following characteristics:

1. Two different regions in the customer's network were identified to have distinctly different call
profiles, particularly in terms of User Plane throughput:
Zone 1 profile (high smartphone network): contains (8) 9370 RNCs, represented by
RNC-A through RNC-H
Zone 2 profile (data-at-home network): contains (5) 9370 RNCs, represented by RNC-I
through RNC-M

Zone 1
RNC           Throughput in Mbps (UL+DL)
RNC-A         362
RNC-B         428
RNC-C         359
RNC-D         381
RNC-E         375
RNC-F         138
RNC-G         329
RNC-H         341
Aggregated    2713


Zone 2
RNC           Throughput in Mbps (UL+DL)
RNC-I         625
RNC-J         634
RNC-K         753
RNC-L         667
RNC-M         542
Aggregated    3221

Table 24: RNC Zones Traffic Profile
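
The sketch below reproduces the zone aggregation of Table 24 and compares it with the simulated WCE off-load capacities quoted in the following paragraphs (~2750 Mbps for 32 blades, ~3300 Mbps for 22 blades); variable names are illustrative:

```python
# Aggregation of the Table 24 figures and comparison with the simulated WCE
# off-load capacities quoted below (Tables 25 and 26).
zone1 = {"RNC-A": 362, "RNC-B": 428, "RNC-C": 359, "RNC-D": 381,
         "RNC-E": 375, "RNC-F": 138, "RNC-G": 329, "RNC-H": 341}
zone2 = {"RNC-I": 625, "RNC-J": 634, "RNC-K": 753, "RNC-L": 667, "RNC-M": 542}

simulated_wce_capacity_mbps = {"Zone 1": 2750, "Zone 2": 3300}

for name, zone in (("Zone 1", zone1), ("Zone 2", zone2)):
    aggregated = sum(zone.values())                       # 2713 / 3221 Mbps
    capacity = simulated_wce_capacity_mbps[name]
    verdict = "fits" if capacity >= aggregated else "does not fit"
    print(name, aggregated, "Mbps required vs", capacity, "Mbps simulated ->", verdict)
# Zone 1: 2713 vs 2750 -> fits; Zone 2: 3221 vs 3300 -> fits
```
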

According to simulations for Zone 1, a 9771 WCE could off-load the following capacity
given the Zone 1 traffic profile:

Table 25: WCE Capacity evaluation zone 1

Note: capacities shown are from an application-layer perspective, not at the L1 level.
In this case, the (32) blade server configuration, given the Zone 1 traffic profile, achieves
~2750 Mbps.
Recall from the previous Zone 1 table: aggregated 2713 Mbps.

According to simulations for Zone 2, a 9771 WCE could off-load the following capacity
given Zone 2 Traffic Profile:


Table 26: WCE Capacity evaluation zone 2

Note: capacities shown are from an application-layer perspective, not at the L1 level.
In this case, the (22) blade server configuration, given the Zone 2 traffic profile, achieves
~3300 Mbps.
Recall from the previous Zone 2 table: aggregated 3221 Mbps.

The reference customer Iu throughput was as follows:

Zone 1: 2713 Mbps
Zone 2: 3221 Mbps
Given this, the following WCE configurations would be needed in order to swap all the existing
9370 RNCs in the complete customer network:
Zone 1: WCE with 32 blades (WCE2 configuration) replaces (8) 9370 RNCs
Zone 2: WCE with 22 blades (WCE2 configuration) replaces (5) 9370 RNCs
Given the capacity simulation results, installing (2) 9771 WCEs with a total of 54 blades
would off-load the capacity of the entire customer network (currently 13 9370 RNCs). Remember
that a complete configuration for two WCE2 systems is 64 blades.
Be aware that this is an estimation made with the Companion tool, which could be a little
optimistic; take a margin of about 20%, as the result clearly depends on the call profile
calculation.


11.2.2 THROUGHPUT ESTIMATION METHODOLOGY

To obtain an approximate value of the maximum IuPS throughput that could be reached when the
vRNC has the maximum number of VMs (65 active UMUs in LR14.2), the following formula is
used for the projection:
IuPS throughput of vRNC = Max total throughput per UMU * (1 + 0.024) * 65
The 2.4% is the coefficient that scales the throughput from the computing (binary) format to the
metric (decimal) format.
65 is the maximum number of UMUs in a vRNC in LR14.2.
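
A minimal sketch of this projection; the per-UMU throughput used in the example is a hypothetical value for illustration only, not a product figure:

```python
# IuPS throughput projection for a fully loaded vRNC (65 active UMUs in LR14.2).
SCALING = 1 + 0.024        # computing-to-metric format coefficient (2.4%)
MAX_UMU_PER_VRNC = 65

def vrnc_iups_throughput_mbps(max_throughput_per_umu_mbps):
    return max_throughput_per_umu_mbps * SCALING * MAX_UMU_PER_VRNC

# Hypothetical per-UMU figure of 100 Mbps, for illustration only.
print(round(vrnc_iups_throughput_mbps(100)))   # 6656
```
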

Rule: Cells Number Calculation


If you have the average throughput of a vRNC and the average cell throughput,
you can calculate the number of cells for a vRNC.
Example: average vRNC throughput: 3.8 Gbps
Cell throughput: 2 Mbps
Number of cells: 1900 cells
Then, depending on the average number of cells per NodeB, you can evaluate the
number of NodeBs per vRNC, as illustrated in the sketch below.
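
A small sketch of this rule; the function names and the 3-cells-per-NodeB figure used in the example are illustrative assumptions:

```python
# Cells and NodeB estimation from average throughputs, per the rule above.
def cells_per_vrnc(vrnc_throughput_mbps, cell_throughput_mbps):
    return int(vrnc_throughput_mbps // cell_throughput_mbps)

def nodebs_per_vrnc(cells, avg_cells_per_nodeb):
    return int(cells // avg_cells_per_nodeb)

cells = cells_per_vrnc(3800, 2)          # 3.8 Gbps vRNC, 2 Mbps per cell -> 1900 cells
print(cells, nodebs_per_vrnc(cells, 3))  # 1900 cells, e.g. 633 NodeBs at 3 cells/NodeB
```
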

Restriction:
Note: For the WCE, there is not yet a clearly defined capacity engineering
limit. We cannot talk about a maximum CPU engineering limit of 80%.
Engineering limits are specific to the metric and system they are used
against.

12 WCE SHADOW UPGRADE DESCRIPTION


The concept of a virtual machine is scaled up to the concept of a virtual or Shadow
Tenant - composed of one or more virtual machines within a virtual network function - that has all
of the functionality of the original network function providing tenant, but is hidden from the rest of
the network until a time chosen by the cloud operator. Operational procedures can be conducted
on the shadow tenant without impacting the provided service, and once the procedures are
complete, the operator can switch between the old and new instance of the tenant with minimal
service impact.
The Shadow Tenant concept has many advantages. The most obvious one is software
upgrade. It allows us to deploy a Shadow Tenant with the n+1 version of the software,
even including a different or patched version of Linux. It means we have complete flexibility to
change anything we want on a software upgrade.
This is something we could not do in the physical world. Then, when the Shadow Tenant
becomes the Service Providing Tenant, we will have a minimal outage. We won't have a scalability
problem with needing to boot up, load, and deliver configuration data from a centralized VM to many
VMs: this will all have been done in advance. It is a true improvement. It also means that after the
upgrade happens, the n-1 VMs still exist and become the new Shadow Tenant, which means that
rollback is also very fast.
Another 3G-specific use of the Shadow concept in release 1 is MIB activation. Today, in
some critical reconfigurations, the cnode MIB must be rebuilt and the RNC is reset to come up
with the new MIB. If we have 1000 VMs, an RNC reset will take much longer than the 5 minutes the
customer gets today. For WCE we can deploy a Shadow RNC with the new MIB and have a
minimal outage.
Additionally, now that our machines are virtual instead of physical, we can actually modify
the machines that we run on by deploying new VM templates. So, as part of deploying the
Shadow Tenant, we can modify the number of cores, the memory and CPU reservation values, and
the number and types of machines that make up the Tenant.

12.1 SHADOW MANAGEMENT GENERIC INFRASTRUCTURE


1. Shadow Management is a generic infrastructure. As with any generic platform /
middleware, the ShadowMgr and ShadowAgent implementations assume that different Tenants
will want very different implementations of the same action triggered by Shadow
Management.
2. ShadowMgr sends action requests to the ShadowAgent on the Tenant VM (such as 3GOAM
for the 3G tenant).
ShadowAgent invokes the tenant-specific action implementation. It is a requirement that
the actions are implemented to support retries (i.e., an action performs its functionality
irrespective of the result of its previous execution).
3. For the initial implementation, Shadow Management does not support automatic
recoveries. A failure in any phase (see below) of the Shadow Management activity requires the
Operator to retry the phase.
4. The switchover phase (phase 3) includes 3 steps at ShadowMgr (a sketch of this sequence
follows the phase list below):
Send a getStatus action request to both the Service Providing (SP) and Shadow
(Shw) ShadowAgents. This gives the current view of the state, as the switchover
phase can be done after a long delay from the completion of the earlier phase
(start Shadow, phase 2). This also helps in retries (as can be seen below).
On success from getStatus, send a switchover action request to the SP
ShadowAgent (3GOAM).
On success from switchover (SP), send a switchover action request to the Shw
ShadowAgent (3GOAM).

The following phases or states exist for the Shadow Management activity:
phase1: creation of the Shadow Tenant Instance,
phase2: start Shadow,
phase3: switchover,
phase3r: rollback,
phase4: removal of the old Service Providing Tenant Instance
12.2 SHADOW DEPLOYMENT PROCEDURE


The following high-level steps are involved in using a Shadow Tenant:
0. Pre-shadow work: this includes uploading and preparing new VM templates, as well as
anything needed for the Shadow tenant to power up and become Shadow-ready, such as the new
software to use or new configuration databases.
1. Create Shadow
LMS will configure a second TenantInstance under the correct Element on the
LrcMgr northbound interface
LrcMgr will deploy a Shadow Tenant. This will create the "root" VMs
which are specified in the Tenant template of the shadow tenant;
additional VMs are then added to the shadow tenant.
A new LrcMgr API will allow specifying a percentage of the CPU/memory
reservation defined in the VM template, so Shadow VMs can be created with full
memory reservation (100%) and no CPU reservation (0%). The service
providing tenant is responsible for powering up the shadow tenant root VM(s).
2. Power up Shadow
LMS will use a new "start TenantInstance" action at the LrcMgr northbound
interface
When all Shadow VMs are powered up and shadow-ready, the Shadow tenant
sends a response message to the shadow manager.
LrcMgr generates a northbound alarm for "Shadow Ready"
3. Shadow-swact (see the sketch after Figure 55 below)
LMS will issue a new northbound command "switchover Element" on the LrcMgr
netconf interface to trigger this
LrcMgr sends messages to the Shadow agents telling one side to become Shadow
and the other to become Service Providing
The newly-SP tenant asks LrcMgr to modify the CPU reservation of the newly-Shadow
VMs to make it 0% of the template value.
The newly-SP tenant asks LrcMgr to modify the CPU reservation of the newly-SP VMs
to make it 100% of the template value.
4. Remove Shadow
4. Remove Shadow


When the customer is convinced they will not want to roll back, the shadow
TenantInstance is deleted as a configuration change at the LrcMgr northbound interface.
LrcMgr will power down and remove the shadow VMs.

Figure 55: Shadow Upgrade
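
As referenced in step 3 above, the sketch below illustrates the CPU reservation swap between the newly Service Providing and newly Shadow VMs; the data structures and function are illustrative only, not the LrcMgr API:

```python
# Illustrative CPU reservation swap at shadow-swact: the newly-SP VMs get 100% of
# their template CPU reservation, the newly-Shadow VMs get 0% (memory stays at 100%).
def apply_swact_reservations(newly_sp_vms, newly_shadow_vms):
    for vm in newly_sp_vms:
        vm["cpu_reservation_pct"] = 100
    for vm in newly_shadow_vms:
        vm["cpu_reservation_pct"] = 0

sp = [{"name": "umu-1", "cpu_reservation_pct": 0, "mem_reservation_pct": 100}]
shadow = [{"name": "umu-1-old", "cpu_reservation_pct": 100, "mem_reservation_pct": 100}]
apply_swact_reservations(sp, shadow)
print(sp[0]["cpu_reservation_pct"], shadow[0]["cpu_reservation_pct"])   # 100 0
```
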

To ensure the shadow tenant does not interfere with operation of the service providing
tenant, the shadow tenant is isolated from the network until the switch of activity between the two
instances. This switch of activity is accomplished by selectively disabling and enabling virtual
network interfaces on the service providing and shadow tenant virtual machines.
A powerful by-product of the shadow upgrade is the ability to perform a rollback of an
entire network function to its prior state should the upgrade be unsuccessful. The freedom to
almost seamlessly switch between versions of a network element has the potential to radically
change traditional maintenance procedures.
End of the document
