
PE/DCL/DD/014282

GSM BSS 850/900/1800/1900 V18 OMC-R


Engineering Rules
Standard 04.02 October 2008

What's inside...
Product Overview
OMC-R Server Engineering Considerations
Client, X-terminal and RACE Engineering Considerations
Preventive Backup and Disaster Recovery Plan
OMC-R Solution Architecture and Interfaces
Bandwidth Requirements
Hardware Specifications
DCN Hardware Specifications & Recommendations
Appendix A: Software Load line-up
Appendix B: OMC-R LAN Engineering


Copyright 2008 Nortel, All Rights Reserved. The information contained herein is the property of Nortel and is strictly confidential. Except as expressly authorized in writing by Nortel, the holder shall keep all information contained herein confidential, shall disclose the information only to its employees with a need to know, and shall protect the information, in whole or in part, from disclosure and dissemination to third parties with the same degree of care it uses to protect its own confidential information, but with no less than reasonable care. Except as expressly authorized in writing by Nortel, the holder is granted no rights to use the information contained herein. Nortel, the Nortel logo and the Globemark are trademarks of Nortel. SOLARIS is a trademark of Sun Microsystems Inc. UNIX is a trademark licensed exclusively through X/Open Company Ltd. NIMS-PrOptima is a trademark of Mycom International.

Printed in Canada



Publication history
October 2008
Standard version 04.02. Version after internal IPOR 390810 review.

September 2008
Preliminary version 04.01. Applicable to the GSM BSS V18 release.

June 2008
Standard version 03.04. Removed all references to the Ultra 5 workstation as it is not supported in the V17 release.

March 2008
Standard version 03.03. Updated to Standard version. Added the 1800 MHz CPU to the SF V890 configuration.

September 2007
Preliminary version 03.02. Removed the WQA Server and Application Engineering guidelines section, as WQA is offered only as part of a services offering.

July 2007
Preliminary version 03.01. Applicable to the GSM BSS V17 release. BR+21 compliant.

October 2006
Preliminary version 02.02. Added the WQA Server and Application Engineering guidelines section.

May 2006
Preliminary version 02.01. Applicable to the GSM BSS V16 release.

March 2006
Preliminary version 01.05. Added engineering rules for concurrent activation of Call Path Trace, Call Drop Analysis and Radio Measurement Distribution.

October 2005
Preliminary version 01.04. Corrected and updated the list of OEM software installed on an OMC-R server in the "Appendix A: Software Load line-up" section. Rewrote Q3 on Ethernet link via TCP/IP in the "OMC-R Solution Architecture and Interfaces" section. Added a note about the need to provision 2 additional IP addresses for the BSCe3 CEM boards in the "OMC-R Solution Architecture and Interfaces" section.

July 2005
Standard version 01.03. Clarified the nominal hardware configuration with regard to OMC-R releases in the "Hardware Specifications & Internal Redundancy Strategy" section.

June 2005
Preliminary version 01.02. Added SIE OMC-R OMC Administration, PE/OMC/DD/000182 to the "Reference documents" section. Clarified the OMC-R Capacity Configuration naming and pointed to the OMC-R Capacity parameters in the "OMC-R Server Engineering Considerations" section. Clarified and updated the dimensioning of Call Trace and Call Path Trace in the "OMC-R Server Engineering Considerations" section. Changed the number of SDOs supported back to 5, as it is the contractually agreed value, in the "NIMS-PrOptima for GSM BSS Server Engineering Considerations" section. Pulled out the NTP time synchronization section and removed the information about the data availability KPI in the "NIMS-PrOptima for GSM BSS Server Engineering Considerations" section. Clarified the maximum number of simultaneous Graphic MMI in the "Client, X-terminal and RACE Engineering Considerations" section. Clarified the possibility to share a single switch for multiple PCUSNs locally connected to the OMC-R server in the "OMC-R Solution Architecture and Interfaces" section.

March 2005
Preliminary version 01.01.


Table of Contents

Table of Contents ... v
About this document ... vii
  Audience for this document ... vii
  Scope ... vii
  What's new in this release ... vii
Reference documents ... ix
  References ... ix
Product Overview ... 1-1
  OMC-R interconnection with the GSM/GPRS network ... 1-1
  OMC-R Architecture and Functions ... 1-2
  Hardware Platform ... 1-4
  New Items in the Release ... 1-4
OMC-R Server Engineering Considerations ... 2-1
  OMC-R architecture ... 2-1
  OMC-R Capacity ... 2-2
  Configuration Management ... 2-3
  Fault Management ... 2-4
  Performance Management ... 2-4
  Security Management ... 2-7
  Redundancy ... 2-8
  Purge ... 2-8
  Q3 (CMIP) interface ... 2-8
  SDO server ... 2-9
  PCUOAM server ... 2-9
Client, X-terminal and RACE Engineering Considerations ... 3-1
  OMC-R Client ... 3-1
  CIUS (centralized installation upgrade server) ... 3-2
  X-terminal ... 3-3
  RACE ... 3-6
  Client workstations allowed configurations ... 3-8
  Number and type of end-user connections to OMC-R ... 3-8
Preventive Backup and Disaster Recovery Plan ... 4-1
  Preventive backup ... 4-1
  Backup media or server ... 4-1
  Disaster Recovery Plan ... 4-2
OMC-R Solution Architecture and Interfaces ... 5-1
  OMC-R Solution Architecture ... 5-1
  OAM Interfaces ... 5-3
  OMC-R PCUSN Connection ... 5-5
  OMC-R BSC 3000 Interface ... 5-8
  OMC-R to NMS interface ... 5-12
Bandwidth Requirements ... 6-1
  OMC-R LAN ... 6-1
  OMC-R - Client bandwidth requirements ... 6-1
  STATX Workstation - X-terminal bandwidth requirements ... 6-1
  OMC to BSC bandwidth requirements ... 6-2
  BSC to OMC bandwidth requirements ... 6-3
  OMC - NMS bandwidth requirements ... 6-4
  OMC - SDO bandwidth requirements ... 6-4
  PCUOAM - OMC bandwidth requirements ... 6-5
Hardware Specifications ... 7-1
  Supported configurations ... 7-1
  Detailed hardware specifications ... 7-3
  Sun Server Hardware ... 7-8
DCN Hardware Specifications & Recommendations ... 8-1
  LAN switching device ... 8-1
  WAN routing switch ... 8-2
  Terminal Server ... 8-11
  Console Server ... 8-14
  Alarm Relay Box ... 8-16
  RS-422/RS232 Interface Converter ... 8-20
Appendix A: Software Load line-up ... 9-1
  Third Party software Load line-up ... 9-1
Appendix B: OMC-R LAN Engineering ... 10-1
  IP Addresses planning ... 10-1
  IPMP Implementation ... 10-3
  T5140 IP Address Planning ... 10-5
List of Terms ... 11-1



About this document


This document is intended for Network Designers and Application Engineers who perform network dimensioning tasks. The document provides all the required information for including OMC-R in GSM 850/900/1800/1900 networks.

Audience for this document


This OMC-R Engineering Information has been specifically prepared for the following audience:
- Network Engineers
- Installation Engineers
- Network & System Administrators
- Network Architects

Scope
This document applies to version 18 of the OMC-R, which is intended to manage the V18 BSS release and, temporarily, the previous BSS releases in order to allow release upgrade transitions. This version of the OMC-R supports the introduction of the new T5140 server with the ST2510 disk array device.

What's new in this release


Introduction of V18 features with engineering impacts on the OMC-R solution:
- 29389 Solaris 10 support
- 31737 New hardware introduction
- 32092 Removal of X25 and OSI software packages
- 34153-34157 Windows Vista compliancy for select client applications
- 34453 File System backup to tape
- 34497 BTS inventory
- 34535 Addition of TCU to multiple BSCs in parallel upgrade
- 34604 MDM 16.2 introduction and support
- 34617 End of support of synchronous PCUSN Data TRAU frame
- 34618 Support of NFS backups to non-local drives
- 34840 OMC-R upgrade enhancements





Reference documents
The documents listed below contain all the references used herein. Additional updates and corrections can be found in the OMC-R Release Notes.

References
[R1] V18.0 Release Documentation List, PE/DCL/APP/019983
[R2] V18.0 Feature Planning Guide, PE/BSS/APP/019562
[R3] V18.0 Release Reference Book, PE/SYS/DPL/019036
[R4] V18.0 External Release Definition, PE/BSS/DJD/022189
[R5] OMC-R Customer Product Overview, PE/OMC/DD/000170
[R6] NIMS-PrOptima Customer Product Overview, PRO/MKT/SYD/NOR/008
[R7] SFS OMC-R Data Server, PE/OMC/DD/000103
[R8] OMC-R Product Catalogue, PE/OMC/INF/0066
[R9] OMC-R Modelled Offer Provisioning Guide, to be defined
[R10] W-NMS OAM Engineering Guide, NTP 450-3101-638
[R11] Centralized Installation and Upgrade Service, PE/SYS/DD/006464
[R12] SIE OMC-R Web Access: RACE, PE/OMP/DD/0045
[R13] SIE OMC-R Data Server, PE/OMC/DD/102
[R14] SIE OMC-R OMC Administration, PE/OMC/DD/000182
[R15] OMC-R Customization Parameter Notebook, DS/OMC/APP/000019
[R16] GSM OAM System Specification, Firewall Support Information, PE/OMC/DD/004749
[R17] OMC-R OEM Equipment Installation Procedure, DS/OMC/APP/000001
[R18] OMC-R Version Upgrade Procedure, DS/OMC/APP/000002
[R19] OMC-R Software Installation Procedure, DS/OMC/APP/000003
[R20] SDO Installation Procedure, DS/OMC/APP/000008
[R21] OMC-R Preventive Maintenance Backup, DS/OMC/APP/000016
[R22] OMC-R System Global Restoration, DS/OMC/APP/000017
[R23] MultiOMC Workstation Configuration, DS/OMC/APP/000023


[R24] OMC-R Maintenance Checks, DS/OMC/APP/000024
[R25] OMC-R Stations Moving, DS/OMC/APP/000032
[R26] OMC-R Multi-MMI Display Configuration, DS/OMC/APP/000033
[R27] OMC-R Capacity Increase Procedure, DS/OMC/APP/0000037
[R28] OMC-R Monobox Preventive Maintenance Backup, DS/OMC/APP/000043
[R29] OMC-R Monobox System Global Restoration, DS/OMC/APP/000044
[R30] Cold Redundancy / Disaster Plan Procedure, DS/OMC/APP/008020



Product Overview


The OMC-R is the Operation and Maintenance Center of the Nortel GSM Base Station Sub-system (BSS). It is located at a remote site where the operations and maintenance functions for the network radio sub-system equipment (BSC, TCU, BTSs) attached to the OMC-R are centralized. The BTSs and TCUs are managed from the OMC-R through the BSC to which they are connected. When the GPRS service is added, the OMC-R also provides centralized management of the Nortel-manufactured PCUs.

The OMC-S/CEM or W-NMS OAM solutions (not described in this documentation) manage the Nortel GSM Network Switching Subsystem (NSS). The OMC-D or W-NMS OAM solutions (not described in this documentation) manage the Nortel General Packet Radio Service core subsystem (GPRS core). The CT2000 (not described in this documentation) offers a centralized configuration of the entire Nortel Networks BSS, including multiple OMC-Rs, as well as a centralized view of all BSS network parameters.

OMC-R interconnection with the GSM/GPRS network


The different communication links between the OMC-R and the GSM network equipment are the following:
- The OMC-R/BSC interconnection. This links the OMC-R to the BSCs it controls. There are multiple ways to establish this link: using a PSPDN network, the GSM A interface or private interconnection solutions.
- The OMC-R/PCUSN interconnection. This links the OMC-R to the PCUSNs it controls. There are multiple ways to establish this link: using a PSPDN network, the GSM A interface or private interconnection solutions.
- The OMC-R/NMS link. This links the OMC-R to one or more NMSs, i.e. central network management systems. There are two ways to establish this link: through the Q3 interface, if the NMS provides this interface, or through an ASCII type interface (also known as the non-Q3 interface).
- The Remote Access links. These links are used to administer the OMC-R from remote locations. They can be linked to the OMC-R central site through a PSTN connection or through private interconnection solutions.
The following drawing shows the OMC-R within a GSM network.




Figure 1-1: OMC-R interconnection with a GSM/GPRS network
[Figure: the BSS sub-system (BTS, BSC, TCU, PCUSN) connects to the MSC, SGSN and GGSN over the A, Ater, Abis, Agprs, Gb, Gn, Gr and Gs interfaces; the OMC-R server, OMC-R client workstation, DATAEXP workstation, WPS and CT2000 reach the network elements over the WAN/LAN.]

OMC-R Architecture and Functions

Network Operating Reliability


The OMC-R is the supervision system of the BSS. However, if the OMC-R fails this does not mean that the GSM network radio sub-system fails. If there is an OMC-R failure, the GSM network keeps working. The BSS stores supervision data, but it has a limited capacity (3 days). The only impact of an OMC-R failure is a temporary loss of supervision visibility on the network (during the failure) and a possible loss of supervision data.

Architecture
The OMC-R is composed of two logical entities which are part of the same physical equipment:


- One Mediation Device (MD) function to manage the BSC and PCUSN network elements. The MD handles mediation between the standard Q3 interface and the OMC-R/BSC and OMC-R/PCUSN interfaces. It converts Q3 requests into OMC-R/BSC and OMC-R/PCUSN interface requests, and BSS spontaneous event reports into Q3 notifications.
- One Manager (MNGR) function to interface with the OMC workstations.

The Q3 interface is used as the inner OMN standard interface as specified in the TMN model. It enables communication between the MD-R and a remote NMS (also known as an External Manager). The MNGR and MD-R also communicate internally through the Q3 interface.
Figure 1-2: OMC-R software architecture

OMC-R External Functions


OMC-R external functions correspond to the four functional areas defined in the TMN principles of the ITU-T M.3010 recommendation. These are accessible to the operating staff through the local HMI or a remote HMI.

Configuration management
This function handles control and synchronization of BSS equipment resources and configures BSS objects.

Fault management
This function manages and stores the flow of BSS information concerning BSS operational anomalies and breakdowns and the associated return-to-work procedure. The OMC-R furnishes this information to the operating staff through the HMI.

Performance (and Observation) management
This function handles the Call Monitoring feature and all the collecting and reporting functions of the performance counters.

Security management
For OMC-R external functions, security management refers to access security management for the OMC-R operating staff, and not to management of the GSM network security from the OMC-R.

OMC-R Internal Functions


Administration and Communication management
This required system function is available to the end user. Administration management handles OMC-R maintenance and operating functions through the local HMI, backup storage of data files, power-on and power-off, and command file management. Communication management handles data file transfers, communication with the supervised systems, and management of the Q3 interface.

Hardware Platform
The OMC-R system is composed of workstations and servers. It is made up of commercial third-party equipment (computers, communication equipment, etc.) that runs industry-standard software and Nortel Networks proprietary software. The Network Management functions are hosted on Sun servers: the OMC-R server hosts the Network Management platform and the Fault Management, Configuration Management and Performance Collection functions. These servers are based on Sun servers with or without external storage arrays. The client workstations supported for management of the wireless network are Sun workstations.

New Items in the Release


The OMC-R v18 release introduces and supports a new type of hardware: the Sun T5140 server and the Sun StorageTek 2510 disk array. The following upgrade paths to the T5140 server are supported:
- SunFire V880/T3 v16 -> T5140/ST2510 v18
- SunFire V880/T3 v17 -> T5140/ST2510 v18
Also, the v18 OMC-R no longer supports the BSC 2G; therefore the OMC-R no longer requires the OSI and X.25 software packages.


A major feature in the v18 release is Abis over IP; this feature consists of enabling packet-based backhaul transmission, as an alternative to TDM-based E1/T1 links, on the BSC-BTS Abis interface. All information on the Abis over IP feature and its impact on engineering is covered in an Abis over IP specific document; this document does not cover this feature.





OMC-R Server Engineering Considerations

OMC-R architecture


The OMC-R includes two entities, a local manager and an agent. The two entities communicate across the Q3 interface. The agent, called the MD, supports the mediation function. It converts Q3 format messages into requests in a standard format and forwards them to the OMC-R/BSC and OMC-R/PCUSN interfaces. Conversely, it converts messages coming from the BSS into Q3 format messages.
Figure 2-1: OMC-R General Architecture

[Figure: the External Manager (OSS) and the OMC-R Manager (local manager part) communicate with the OMC-R Agent (MD part) over the Q3 interface; the MD part holds the BDE and one MOD per supported BSS release (Vn-2, Vn-1, Vn), and each BSS holds its own BDA.]

BDA: Applicative Data Base, the database of the BSC
BDE: Management Database, the database of the OMC-R
MOD: Managed Object Dictionary, the BSS-OMC interface definition; as such, it is versioned.

The OMC-R provides the following functions:
- Man Machine Interface (MMI)
- Communication Management
- Configuration Management
- Fault Management
- Performance Collection
- Security Management
- Common Functions
- RACE management
- OMC-R databases
- GPRS management

OMC-R Capacity
The OMC-R is able to handle:
- up to 40 BSCs / 4800 cells for the Enhanced Capacity configuration option with a Sun Fire V890 based or T5140 based OMC-R server. This configuration is known as the Ultra High Capacity (UHC) configuration.
- up to 40 BSCs / 3200 cells for the Enhanced Capacity configuration option with a Sun Fire V880. This configuration is also known as the Very High Capacity (VHC) configuration.
- up to 30 BSCs / 2400 cells for the Basic Capacity configuration. The Basic Capacity configuration corresponds to the High Capacity (HC) configuration.
The OMC-R is by default configured with the Basic Capacity configuration. To increase the OMC-R capacity from Basic to Enhanced Capacity, one must first verify that the OMC-R server is compliant with the OMC-R very high capacity hardware requirements. The Enhanced configuration cannot be reached with all supported OMC-R server hardware configurations; see the "Hardware Specifications" section. Refer to SIE OMC-R OMC Administration, PE/OMC/DD/000182 for the appropriate values of the OMC-R configuration file variables related to the OMC-R Capacity.

Number of managed objects


The capacity limits for OMC-R exploitation depend on the number of managed objects, not on the volume of supervised traffic. The following table shows the capacity limits for the OMC-R in the V18 software release. The classes in bold are the main dimensioning factors for the OMC-R. Overriding these limits can be envisioned through an engineering service.




Table 2-1: OMC-R object management capacities

Managed Object | Description | Basic Capacity | Enhanced Capacity for SF V880 based OMC-R | Enhanced Capacity for SF V890/T5140 based OMC-R
Bts (cells) | maximum number of cells per OMC-R | 2400 | 3200 | 4800
Bsc | maximum number of BSCs per OMC-R | 30 | 40 | 40
BtsSiteManager | maximum number of sites per OMC-R | 2400 | 3200 | 4800
AdjacentCellHandover | defines a neighbor cell of a serving cell for handover management purposes | 76800 | 102400 | 153600
AdjacentCellReselection | defines reselection management parameters for a serving cell | 76800 | 102400 | 153600
Channel | max number of channels (TCH, etc.) | 76800 | 153600 | 230400
FrequencyHoppingSystem | max number of frequencyHoppingSystem objects (Hopping Seq Nb, Mob. All.) | 9600 | 19200 | 28800
LapdLink | max number of LAPD objects | 22380 | 44960 | 62400
PcmCircuit (BSC2G/BSC 3000) | max number of pcmCircuit objects | 3120/9000 (T1) or 7740 (E1) | 4160/12000 (T1) or 10320 (E1) | 4160/12000 (T1) or 10320 (E1)
Pcu | max number of PCUSNs | 30 | 40 | 40
Transceiver | maximum number of TRXs per OMC-R | 9600 | 19200 | 28800
Transcoder_2G (BSC2G/BSC 3000) | max number of TCU2G | 420/960 | 560/1280 | 560/1280
Transcoder_e3 (BSC 3000) | max number of TCUe3 | 60 | 80 | 80
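As an illustration of how these limits are typically applied when dimensioning, the short sketch below checks a planned network against a few of the Basic Capacity values from Table 2-1. It is illustrative only; the planned_network figures are hypothetical and the check is not part of the OMC-R product.

# Illustrative dimensioning check against a few Table 2-1 Basic Capacity limits.
# The planned_network figures below are hypothetical.
BASIC_CAPACITY = {
    "bts": 2400,             # maximum number of cells per OMC-R
    "bsc": 30,               # maximum number of BSCs per OMC-R
    "btsSiteManager": 2400,  # maximum number of sites per OMC-R
    "transceiver": 9600,     # maximum number of TRXs per OMC-R
}

planned_network = {"bts": 2100, "bsc": 28, "btsSiteManager": 1900, "transceiver": 8400}

exceeded = {name: (planned_network[name], limit)
            for name, limit in BASIC_CAPACITY.items()
            if planned_network[name] > limit}

if exceeded:
    print("Enhanced Capacity (and compliant hardware) is required:", exceeded)
else:
    print("The planned network fits within the Basic Capacity configuration.")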

Configuration Management

BSS Software File Transfer


The OMC-R can simultaneously transfer a BSS software version (EFT) to up to 10 different BSCs.


The OMC-R can simultaneously build the BDA of up to 10 different BSCs, whatever the build type (on-line, off-line). Also, it can simultaneously audit the BDA of up to 10 different BSCs.

Scope and Filter Limits


The depth for a scope GET, SET or ACTION command is limited to 3. There is no depth limit for a scope DELETE command.

Fault Management

Duration of Notification Data Storage


The Mediation Part stores the notification log of the current day and the last 3 days (D_MD_CONSULT_MAX). The Manager part of the OMC-R stores the fault data of the current day and the last 7 days (D_MGR_CONSULT_MAX).

Number and Duration of Storage for Alarms


The maximum number of outstanding alarms (N_ALARM_MAX) managed at the Manager part of the OMC-R is 10000. The OMC-R stores the alarm history of the current day and the last 7 days (D_MGR_CONSULT_MAX). The OMC-R can simultaneously manage 40 instances of alarm criteria.

Performance Management

Observation Reporting Period


The BSC observation granularity periods (T2) are:
- 15, 30 or 60 minutes for Fast Statistic Observations
- 1 day for General Statistic Observations
- 1, 2, 3, 4 or 5 minutes for Real Time Observations
- 15 minutes for Diagnostic Observations (1)
- 5, 10, 15, 30 or 60 minutes for Temporary Observations
Note: PCUSN counters are NOT available at the OMC-R Performance Monitor window level and they are not available to External Managers via the Q3 PM interface. However, PCUSN counters are available at the SDO level (observation files).

The value of T2 can be affected by the OMC-R server configuration, the number of BSCs managed, the number of cells per BSC and the number of neighbors (adjacentCellHandover) per BSC.

(1) Only available for BSC 3000: T2 ODIAG can be set to 5 mn only if one BTS is in the mdObjectList, or to 10 mn if there are fewer than 3 BTSs.



These limitations are linked to the number of cells handled per BSC and the number of neighbors (adjacentCellHandover) per cell. Depending on these values, the minimum T2 value for Fast Statistic Observations cannot always be 15 mn. These limitations can appear even with mixed BSC 2G/BSC 3000 in the OMC-R.

Nature and duration of Observation Data Storage


The Fast Statistic Observation, General Statistic Observation, Diagnostic Observation and Temporary Observation logs are stored in the OMC-R Mediation Part and Manager Part. The Mediation Part stores the observation logs of the current day and the last 3 days (D_MD_CONSULT_MAX). The Manager part of the OMC-R stores the observation files of the current day and the last 7 days (D_MGR_CONSULT_MAX).

BSC Call Monitoring Observation


The OMC-R can simultaneously manage the following number of Call Trace (traceControl) and Call Path Trace (callPathTrace) object classes:
Table 2-2: BSC Call monitoring capacity

Objects | Basic Capacity | Enhanced Capacity
traceControl | 30 | 40
callPathTrace | 30 | 40

Call Tracing
This function is the GSM 12.08 trace facility of the BSC. It is used to trace the activities associated with specific communications (identified by IMSI or IMEI) in a BSC and to transfer this data to the associated OMC-R. This function is invoked by the MSC. The following considerations apply:
- No more than one call trace object can be created per BSC.
- Only one Call Trace session per BSC can be activated.
- The OMC-R can process at any given time the traces of 10 IMSIs per day, in radio or other modes, in the whole BSS network it manages. It is assumed that each IMSI performs 90 communications per day, of 2 minutes duration with 1 handover on average, when the traces are done in radio or other modes.

Call Path Tracing


This function enables the BSC to trace the communication activities on specific devices of the BSC (CICs, TRXs or cells). This function is initiated at the OMC-R level. The following considerations apply:
- No more than one call path trace object can be created per BSC.
- Only one Call Path Trace session per BSC can be activated, monitoring 36 communications in parallel. It is assumed that communications are, on average, 2 minutes long with 2 handovers.
- The OMC-R can handle 6 or 8 active Call Path Trace sessions simultaneously (N_CPT_MAX), respectively, for the basic (HC) and enhanced (VHC/UHC) capacity configurations.
- The total duration of the call path trace session (H_CPT_DURATION_MAX) determines the amount of CPT data collected, and is therefore limited by the size of the related OMC-R server and SDO server partitions. Allowing only one day of storage of CPT data on the SDO, the maximum active CPT duration for 8 BSCs monitoring 36 parallel communications with T2 at 30 minutes is limited to 10 hours.
- For data transfer, FTAM is used for non-priority traces, and event reports are used for priority traces.

Duration of Trace Data Storage


The H_CPT_DURATION_MAX is available for 4 days (current + 3). If the H_CPT_DURATION_MAX is used up in one day, a backup of the CPT data must be done to free disk space for the next day. The storage duration of the C[P]T data cannot therefore be guaranteed. However, as long as the H_CPT_DURATION_MAX is not reached, the storage duration is more than one day and allows the user to retrieve these data.

Call Drop Analysis


This function enables the operator to analyze call drops and thus optimize the RF network based on the call drop causes. This function is activated directly at the OMC-R level so as to retrieve the call drop data from the BSC. When activated, the feature allows the BSC to ask for BTS data when a drop is detected by the BSC, and to store BTS and BSC data on the BSC disk; when deactivated, it warns the OMC-R that data are available. The CDA files are eventually stored on the SDO in XML format to be made available to an external post-processing tool.

Distributions on Radio measurements


This function helps the operator monitor the network by providing, for the TRXs of given cells and on a periodic time frame, the distribution at TDMA level of some measurements done on the radio interface. The activation of distribution retrieval is settable on a cell basis at the OMC-R level. The distributions are stored on the BSC on a cell basis and then automatically uploaded by the OMC-R via FTAM. The distribution files are eventually stored on the SDO in XML format to be made available to an external post-processing tool.



Concurrent activation of Call Path Trace, Call Drop Analysis and Radio Measurement Distribution
The activation of the Call Drop Analysis feature and, to a much lesser extent, the activation of the Radio Measurement Distribution feature both contribute significantly to increasing the amount of data stored on the OMC-R and SDO disks. There are therefore potential restrictions in using those functions in parallel with the Call Path Tracing activity, to avoid reaching the disk used-space threshold monitored by the OMC-R purge defense mechanism. With Call Drop Analysis activated, activating Call Path Tracing thereafter will potentially trigger the OMC-R defense mechanism after a given period of time, which depends on the OMC-R hardware configuration. Otherwise, there is no restriction in activating Radio Measurement Distribution and Call Path Tracing simultaneously.

With the following settings on those different functions activated simultaneously:
- Call Drop Analysis typically featuring 100 dropped calls per cell for potentially 3200 cells, assuming the data are kept on the SDO for 4 days,
- Radio Measurement Distribution monitoring 3200 cells per day, assuming the data are kept on the SDO for 4 days,
- Call Path Trace initiated for 8 BSCs monitoring 36 parallel communications with the BSC observation granularity period (T2) set at 30 minutes,
we can estimate that the maximum active CPT duration for 8 BSCs monitoring 36 parallel communications with T2 at 30 minutes is limited to 10 hours. Within this duration, the SDO disk usage is over 70% for an SF V880 with T3 Integrated OMC-R. If CPT exceeds the maximum CPT duration, the OMC data will either be purged (if the CPT database contains data older than the current day) and/or recording will be stopped (if the CPT database contains only current-day data). If CPT exceeds 10 hours, old SDO data will be purged on an SDO based on a U5 or SB150.

With an SDO device of 27 or 36 GB, i.e. either a dedicated SDO running on a U5 or SB150 with Multipack or an SDO integrated in an SF V880 with T3s, the SDO used disk space suggests activating those features in parallel only with precaution, and requires a daily backup of CPT data to prevent the oldest archive from being purged immediately at the next session of CDA or CPT. With an SDO device of 90 GB or more, i.e. either a dedicated SDO running on an SB1500 or an SDO integrated in an SF V890, CDA, RMD and CPT can be activated in parallel. However, the limitations described above for CPT running alone still apply; in that case it is not the SDO device that is the limiting factor, but the OMC Data partition size.

Security Management
The number of user profiles that can be created in the OMC-R is up to 250.



Redundancy
The OMC-R secures the storage of dynamic data in mirrored file systems, that is, identical files (or identical database tables) on two separate disk units. Moreover, in the SF V880 Integrated and SF V890 Integrated HDI, the system disks and OMC-R static data disks are also mirrored.

Purge
Three mechanisms are available at the OMC-R level to purge old data.
- The first consists of automatically deleting the old data daily in order to avoid saturation of the disks. The storage durations are defined in the previous sections. This mechanism does not require any operator action.
- The second is a defense mechanism used to automatically delete the oldest data when a filling threshold for any OMC-R partition is reached. This mechanism does not require any operator action.
- The third mechanism is provided to avoid saturation of the Call Trace and Call Path Trace partitions (database and ASN.1 files, but not the log files). The operator has the possibility to purge the current day or any day available at the OMC-R level, according to the storage duration of the Call Trace and Call Path Trace information.

Q3 (CMIP) interface
The Q3 interface is based on the CMIP, FTAM and ACSE protocols. The OMC-R can simultaneously manage on its Q3 interface:
- a transactional and event reporting data flow based on CMIP, with up to 8 CMISE commands every second
- a file transfer data flow based on FTAM
The transport layer used for the OMC-R Q3 interface is either X.25 or TCP/IP.

Event report
Event reporting throughput is generated by Fault Management event reports, Performance Management event reports, Call [Path] Trace event reports and transactions entailing a notification (attribute value change, object creation, object deletion). We assume that this last contribution corresponds to 25% of the transactional throughput.

The mediation part of the OMC-R may support up to 3 managers, including the local manager in the OMC-R, which supports the manager functions and the man-machine interface. As a consequence, an OMC-R can be connected to a maximum of 2 NMSs simultaneously. The mediation part's managed objects and resources cannot be dedicated to a manager. Therefore, consistent management of the OMC-R (mediation part) is supposed to be performed by the manager(s).


To avoid congestion of the OMC-R during a load of notifications, each external manager shall be able to acknowledge at least 16 notifications per second per OMC-R, on average over a day, on the Q3 interface. In the same way, during scope/filtered operations the external manager shall be able to receive 4 to 16 linked responses per second per OMC-R, on average over a day.

SDO server
The SDO allows the OMC-R data records and radio network configuration parameters to be retrieved in an ASCII readable format by peripheral OMC applications (which may get them using the 'rcp' or 'ftp' Unix commands).

Starting with BSS V16.0, the observation reports available from the SDO (Nortel OMC-R Data Server) are compressed when older than one day. The directories which store observation report files and have a day tag different from the current day are arranged into a single destination file (using the tar command), then the destination file is compressed (using the gzip command, Standard RFC 1952). At the end, the original data files are deleted. There are potential impacts on 3rd party post-processing tools: if the file retrieval is interrupted for some time (in case of an OAM link failure, for instance), file compression will have been applied by the SDO. In this case, the compressed files will have to be retrieved (instead of the regular ones) and uncompressed before being processed. Today, NMSs for Performance Management (such as METRICA) use SDO data files as inputs (observation files).

With the Integrated OMC-R server configuration, the SDO function is part of the OMC-R server with the same level of performance. For the legacy mono or dual OMC-R server configuration, the SDO function is hosted on a local (connected to the same LAN as the OMC-R server) or remote (connected to the OMC-R server via an XLAN) workstation.
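The daily archiving behaviour described above (previous-day observation directories are combined into one tar file, gzip-compressed, and the originals deleted) can be sketched as follows. This is an illustration only, assuming a hypothetical /SDO/observations/<day> directory layout; it is not the actual SDO implementation.

import gzip
import shutil
import tarfile
from pathlib import Path

def archive_previous_day(report_dir: Path) -> Path:
    """Bundle a previous-day observation directory into one tar file, compress it
    with gzip (RFC 1952) and delete the originals, mirroring the SDO behaviour
    described above. The directory layout is a hypothetical example."""
    tar_path = Path(str(report_dir) + ".tar")
    # Step 1: arrange the directory content into a single destination file (tar).
    with tarfile.open(tar_path, "w") as tar:
        tar.add(report_dir, arcname=report_dir.name)
    # Step 2: compress the destination file (gzip, RFC 1952).
    gz_path = Path(str(tar_path) + ".gz")
    with open(tar_path, "rb") as src, gzip.open(gz_path, "wb") as dst:
        shutil.copyfileobj(src, dst)
    # Step 3: delete the original data files and the intermediate tar file.
    tar_path.unlink()
    shutil.rmtree(report_dir)
    return gz_path

# Example usage for one previous-day directory (hypothetical path):
# archive_previous_day(Path("/SDO/observations/20081013"))

A post-processing tool that falls behind should therefore be prepared to fetch and uncompress the .tar.gz archives instead of the plain observation files.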

PCUOAM server
The MDM software is responsible for managing the PCUSN hardware that is part of the GPRS network. Connected to MDM, the OMC-R processes provide PCUSN alarms to the OMC-R MMI and transfer PCUSN counter data from the PCUSN to the OMC-R SDO. The PCUSN counters are not displayed at the OMC-R MMI. With the Integrated OMC-R server configuration, the PCUOAM function is part of the OMC-R server. For the legacy OMC-R server configuration, the PCUOAM function is hosted on a local (connected to the same LAN as the OMC-R server) or remote (connected to the OMC-R server via an XLAN) workstation. The maximum number of PCUSNs per OMC-R is the maximum number of BSCs an OMC-R can manage.

OMC-R Engineering Rules PE/DCL/DD/014282




Client, X-terminal and RACE Engineering Considerations

The OMC-R Client software application can only be hosted on a Unix workstation. However, it is also possible to emulate multiple client sessions using an X-Window terminal launched from a Unix workstation local to the OMC-R server, or from the Integrated OMC-R server itself. Besides, the Remote ACcess Equipment (RACE) application allows end users to interact with the OMC-R application from a PC running an Internet browser, to perform day-to-day operations and curative maintenance from a remote site.

OMC-R Client
The nominal OMC-R client is hosted on a Unix workstation, which can be local (connected to the same LAN as the active server) or remote (connected to the active server via an XLAN).
Table 3-1: OMC-R client capacity

Server configuration | Number of cumulated OMC-R Client Graphic MMI (OMC-R workstations or X terminals)
SF V880 | 16
SF V890/T5140 | 40

Session windows
During a user session on a workstation, the number of windows of various types that can be opened simultaneously is limited. The performance of the OMC-R system is guaranteed under the following conditions:
- maximum number of simultaneously opened or iconified windows = 10
- maximum number of (Current alarm list + Notification windows + State Change window + OMC-R browsers + Topological view) windows opened or iconified = 5
Since release V15.1R, the client workstation allows the use of a double-screen display to optimize window management, for example by having configuration management and performance management windows opened on one screen and fault management windows opened on a second screen. Benefiting from this feature also implies having a second graphics card installed on the workstation.

Man Machine interface


The whole users' activity can be:
- on average: 1 MMI unitary command per second for the whole set of users
- at maximum: 8 MMI unitary commands per second for the whole set of users, during 3 hours maximum
The average rate must not be exceeded over a 24-hour period.


An MMI unitary command is neither a "scope & filter" command nor a "run command file" command. A "scope & filter" command is counted as N unitary commands, where N is the number of object instances in the scope. A command file is counted as N unitary commands, where N is the number of unitary commands in the command file.

Multi OMC-R client workstation


A multi OMC-R client workstation is able to connect alternately to several OMC-Rs, i.e. one after the other. When connected to a remote OMC-R, nothing distinguishes this workstation, running the MmiKernel and MmiGraphic binaries, from a local workstation of the remote OMC-R.

Other OMC-R workstation functional roles


The primary role of the client workstation is to run the OMC-R client application, but the client workstation hardware can also combine, or be dedicated to, more specific activities:
- Centralized installation upgrade server (CIUS)
- X-terminal client
- RACE application server
- SDO workstation for the legacy OMC-R configuration
- PCUOAM server for the legacy OMC-R configuration

CIUS (centralized installation upgrade server)


Centralized Installation & Upgrade Services is based on Sun technologies (JumpStart, Live Upgrade and Flash Archive) which facilitate the installation and upgrade of Sun Solaris machines by automating them (JumpStart), by performing operating system upgrades on-line (Live Upgrade) and by speeding up the installation and upgrade phases (Flash Archive). Coupled with a graphical sequencer, this feature offers a fully "hands free", efficient and automatic installation and upgrade package. This service is made up of several components:
- Centralized Graphical Sequencer, which allows the installation and upgrade steps to be played remotely on each machine belonging to one OMC-R. This Graphical Sequencer relies on unitary functionalities (capture of the data mandatory for installation/upgrade, playing of install/upgrade scenarios).
- JumpStart service: this Sun technology allows the operating system to be installed from scratch, remotely and automatically.
- Live Upgrade service: this technology, available starting with Solaris 8, allows a machine to be installed without having to stop it at all. This technology is partially provided by Sun, but equivalent functionality will also be implemented in the OMC-R installation and configuration tools.
- Flash Archive: this Sun technology, only available starting with Solaris 8, dramatically reduces the duration of a JumpStart or Live Upgrade installation of the operating system and 3rd party software, by using an archive produced during a previous installation.

The CIUS is installed on an OMC-R local workstation with a minimum 20 GB internal hard disk. On the CIUS workstation, an internal DVD-ROM drive is required in order to download tools and software. This hardware requirement matches the following configurations:
- the SunBlade 150 workstation with 512 MB RAM and 80 GB disk
- the Sun Blade 1500 workstation
- the Ultra 45 workstation with 1.6 GHz CPU, 2 GB RAM and 250 GB disk
After the installation or upgrade, the workstation goes back to its OAM purpose. As Sun JumpStart needs to allow any machine to boot on the network using the RARP service, the JumpStart server residing on the CIUS and the Sun machines to upgrade must be on the same subnetwork. For the remote workstations which cannot boot using the Solaris image of the JumpStart server, a specific mechanism is set up: a boot from a Solaris CD-ROM with an automatic JumpStart installation from the JumpStart server. For the remote workstations with a low bandwidth, i.e. less than 1 Mbps, it is also possible to perform the installation from the CIUS CD-ROM and from the local tape drive with the application software, while the configuration files are fetched from the CIUS server. The workstations and the SF V8x0 servers can be installed from the CIUS even if they belong to different subnets.
Figure 3-1: workstation as a centralized installation upgrade server
[Figure: the install/upgrade (CIUS) workstation, holding a self-bootable, self-installable CD-ROM image, serves "boot net - install" operations over the LAN towards the OMC-R servers and the SDO, PCU OAM and client workstations (WS 1, WS 2).]

X-terminal
An X-terminal (X-Window terminal) can be connected either to the OMC-R server or to a client workstation set up to support X-terminal sessions, also named a STATX workstation, which can be local or remote.

According to the X-Window client-server relationship, in this type of configuration an X-terminal is an X11 server, and the OMC-R server or the STATX workstation is an X11 client. The benefit of an X-terminal is to deploy low-cost or low computing power hardware (obsolete workstations). However, the OMC-R server with a monitor and keyboard, or a nominal workstation running an X11 server, can also act as an X-terminal. Note that X-terminals based on a PC with X11 emulation software such as Exceed are not supported, as they could cause crashes on the extended workstation.
Figure 3-2: X-Window Client-Server relationship
[Figure: an X-terminal (X11 server) boots via TFTP from, and displays the sessions of, a STATX workstation or OMC-R server acting as X11 client and boot server; a workstation running its own X11 server can reach a second OMC-R server the same way through the switch/router.]




Table 3-2: Number of X Terminal sessions allowed

X11 client configuration | Maximal simultaneous number of X Terminal sessions
SF V880 based Integrated OMC-R server | 10
SF V890/T5140 OMC-R server | 40
STATX based on SB150 | 9
STATX based on SB1500 or Ultra45 | 40

Standard X-terminals can be diskless devices and therefore have the capability to boot their X11 server software via the network, using the standard IP boot protocol (TFTP), from an X11 boot server. The boot server can be the extended workstation itself or a distinct machine. X-terminals running on workstations boot their X11 software from their own disk, avoiding the need for a boot server. An X-terminal cannot operate without its X11 client; that is to say, for an OMC-R with a non-integrated OMC-R server, for which the X11 client can only run on the STATX workstation, this reinforces the requirement to have a minimum of two local workstations per site in case one workstation is out of service.

Multi-MMI workstation
The Multi-MMI workstation is a Unix machine running an X server, or more simply said an X-terminal. By definition, an X-terminal can be simultaneously attached to several X11 clients, i.e. STATX workstations or OMC-Rs, and can therefore run multiple simultaneous sessions towards different OMC-Rs. With the Multi-MMI workstation, it is possible to connect to several remote OMC-Rs at once through their local workstations set up to act as X11 clients (STATX). The STATX workstations running the client software (MmiKernel and MmiGraphic binaries) send the display back to this Multi-MMI machine or X-terminal. This implies having, in each remote OMC-R you want to reach, one local workstation set up as STATX; in the case of the Integrated OMC-R based on an SF V8x0, the server itself can also play the role of STATX.




Figure 3-3: Multi-MMI workstation
[Figure: a Multi OMC workstation on OMC-R site 2 connects to the local OMC-R 2 as a client and, through the switch/routers, reaches OMC-R 1 via the OMC-R 1 local STATX workstation acting as X11 client, the Multi OMC workstation acting as X-terminal.]

RACE
RACE (Remote ACcess Equipment) provides assistance for end users to ensure day-to-day operations, curative maintenance, etc., from a remote site but also from the OMC-R site. Despite its capabilities, RACE shall not be considered as a replacement for a workstation or an X-terminal. Indeed, it is an Internet technology based server application that runs on OMC-R workstations. An HTTP server also runs on these workstations. A web browser is used on the end-user side to manage the information.




Figure 3-4: Remote Access connection solutions
[Figure: a RACE PC reaches the central OMC-R site either directly on the OMC-R LAN or remotely through the Internet and a VPN router; the RACE and HTTP servers run on an OMC-R client workstation connected to the same LAN as the OMC-R server (SF V890), alongside the OMC-R clients, X-terminal, NIMS-PrOptima client PC, printer and console server.]

RACE & HTTP server


This application is the link between the RACE client and the applications that lie on the OMC-R workstation. It transmits the commands to the MmiKernel and the subscriptions to the relevant applications. In the other direction, it translates the internal MMI messages for the RACE client. The HTTP server, which receives the requests coming from the RACE client and transmits them to the RACE server, must be installed on the same OMC-R station as the RACE server.


The RACE client sends requests to an HTTP server, which is on the LAN of the OMC-R server and which transmits these requests to a RACE server running on an OMC-R workstation. Up to 3 RACE connections can be made to the same workstation, which is recommended to be a local MMI workstation. Special workstations such as the PCU OAM, SDO and X-terminal server are not allowed to act as RACE servers. On the client side, RACE requires a web browser and therefore the associated hardware able to run this browser. The recommended hardware is a high-end PC. Finally, on login the end user is offered two modes and has to choose one according to the available communication bandwidth. The RACE can be on a remote site and access the OMC-R LAN through a WAN access. A firewall can be added to improve security.



In relation to the standard configuration for the RACE server, we must configure the slots of the terminal server, and its modems, to support PPP through PSTN. The HTTP server is configured using the configuration shell of the OMC-R.

RACE Client
The RACE PC client is not productized by Nortel Networks and must be purchased locally following the requirements given in the "Hardware Specifications" section. One of the advantages of the RACE application is that all the software (HTML pages and Java applets) resides on the server and is downloaded to the client when necessary. Hence, if a correction has to be performed on the RACE application, the update is done on the server side and nothing is modified on the client PCs.
The PC client software requirements are:
- Microsoft Internet Explorer 5.0 or higher
- Operating system: Windows 2000, Windows XP or Windows Vista
- Java 1.2.2 Plug-in (if the Internet Explorer version used does not support the RACE applets, which are composed of Swing or JDK 1.2 components, the Java Plug-in 1.2.2 must be installed)
For a connection through a terminal server, a new connection has to be created on the PC using the Dial-up Networking option of Microsoft Windows. In addition, the TCP/IP layer of Microsoft Windows has to be configured in order to make proper use of the modem, by setting its maximum speed and whether data are transferred in compressed form and checked for errors. For more information refer to SIE OMC-R Web Access: RACE, PE/OMP/DD/0045.

Client workstations allowed configurations


The table below gives the supported functional configurations of the OMC-R workstations.
Table 3-3: workstation allowed configurations

Added functional role | Nominal client workstation | PCUOAM workstation | SDO workstation
STATX | Supported | Not supported | Not supported
RACE server | Supported | Not supported | Not supported
STATX + RACE server | Supported | Not supported | Not supported

Number and type of end-user connections to OMC-R


The table below gives the different modes of end-user connection used to interact with the OMC-R application and their engineering limitations.



Table 3-4: number and mode of end-user connections to an OMC-R

End-user connection mode | Specified as | Capacity
Local workstation | Required | Minimum of 2, maximum of 40
Remote workstation | Optional | No minimum, maximum of 38
X terminal | Optional | Refer to Table 3-2: Number of X Terminal sessions allowed
RACE application | Optional | Maximum of 3 RACE sessions per local workstation

Whenever those limits are exceeded, response times increase and the performance of the OMC-R degrades.





Preventive Backup and Disaster Recovery Plan


Preventive backup


The preventive backup can be performed on the following data types. Each type of data can be restored separately when needed.

OMC-R file system backup


Starting with OMC-R v18, the file system backup may be performed on-line to tape. This type of backup is needed for saving the file system of the OMC-R machines. This backup does not include the OMC-R environment data (saved on raw devices), and is therefore not sufficient for a complete OMC-R restoration (a database re-installation is needed). This backup can be done from the OMC-R or from a local workstation equipped with a DAT drive. The time required to perform a file system backup can vary significantly but typically takes about thirty minutes to complete; a file system restoration takes about thirty to ninety minutes.

OMC-R environment data archiving


This on-line type backup deals with the OMC-R environment data: the network and the OMC-R configuration data. The backup data include OMC-R configuration files and network description from the database. This backup is usually called the BDE backup. A backup operation takes about thirty (to sixty) minutes to complete (depending on the size of the databases).

OMC-R daily data


This on-line backup deals with data collected the last four days on the OMC-R server. The aim of this backup is to save daily data (i.e. all observations, notifications and alarms), before they are purged, so that they can be restored later for consultation. An archive/restore operation takes about ten to thirty minutes to complete (depending on the amount of data to save).

MDM configuration
This on-line type backup deals with MDM software configuration files and PCUSN view files.

Backup media or server


The data can be backed up either:
- on labeled DDS/DAT format tapes
- on a file system mounted via NFS, handled by a backup server
There is one Sun StorEdge DDS/DAT tape drive per OMC-R server and one external DDS/DAT tape drive per site, to be attached to a local workstation. The Sun StorEdge DDS/DAT tape media/drives in use versus the supported hardware configurations are described in the "Hardware Specifications" section. The maximum data transfer rates of the DDS/DAT tape drives before data compression are listed in Figure 4-1: DDS/DAT roadmap. The data compression ratio can vary depending on the type of data, but one can assume that the maximum compression ratio is 2:1.
Figure 4-1: DDS/DAT roadmap
Format | Media type | Native capacity | Native transfer rate
DDS1 | 90m tape | 2 GB | 183 KB/s
DDS2 | 120m tape | 4 GB | 720 KB/s
DDS3 | 125m tape | 12 GB | 1.5 MB/s
DDS4 | 150m tape | 20 GB | 1-3 MB/s
DAT 72 | 170m tape | 36 GB | 3 MB/s
(The original figure also indicates read/write compatibility of each drive generation with the preceding DDS formats.)
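As a purely illustrative back-of-the-envelope check of how the native rate and compression ratio combine (the 32 GB volume and the choice of a DAT 72 drive are assumptions, not stated requirements):

# Illustrative only: estimate the time to write a hypothetical 32 GB backup
# to a DAT 72 drive, assuming the 2:1 maximum compression ratio given above.
backup_size_mb = 32 * 1000        # hypothetical backup volume (MB)
native_rate_mb_s = 3.0            # DAT 72 native transfer rate (MB/s)
compression_ratio = 2.0           # assumed maximum compression ratio (2:1)

effective_rate_mb_s = native_rate_mb_s * compression_ratio     # ~6 MB/s to tape
duration_h = backup_size_mb / effective_rate_mb_s / 3600
print(f"Estimated backup time: {duration_h:.1f} h")             # ~1.5 h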

Disaster Recovery Plan


If a disaster occurs on an OMC-R site, provoking the loss of all the site supervision, the Disaster Recovery Plan feature allows an OMC-R server located on another site to be reconfigured to take over the supervision of the lost OMC-R site with the shortest possible downtime (less than 6 hours). It also covers the restoration procedure of the initial OMC-R site. It supposes that on each site there are one or more spare integrated OMC-R servers, with the OMC-R application correctly installed and ready to run, to be used as a recovery OMC-R server replacing the destroyed OMC-R located on another site. Since the PCUSNs and the spare OMC-R servers may be on different sub-networks, routing tables must be defined beforehand so that they are available to set up routes between the PCUSNs and the spare OMC-R servers when needed.

Daily operations
The daily backup is automated and the data are stored on an NFS server; the NFS server is not a local partition, so as to limit the disk space used on the OMC-R. A successful backup completion deletes the d-2 backup version, avoiding disk saturation. The data backed up daily are:


- Environment data (BDE)
- PCUOAM configuration files
- SDO configuration files (/SDO/base/config)
- Disaster configuration files <optional>
- EFT files <optional>
- daily data (the last three days) <optional>
<optional> means that the user has the possibility to configure the automatic backup with or without the EFT files and daily data.
Figure 4-2: Daily operations
[Figure: OMC-R OMCA-1 on site A and OMC-R OMCB-1 on site B, each with their workstations and spare integrated OMC-R servers, perform their daily backup over the network to a common NFS server.]

Recovery phase
The spare OMC-R server that will be used as the recovery server must be operating with the same OMC-R version as the destroyed machine. Since the spare OMC-R server, now called the recovery OMC-R server, is used without a workstation, a local workstation currently used by another operational OMC-R is reassigned to the recovery OMC-R server and is called the recovery workstation. This implies that this workstation must be pre-defined in the spare OMC-R server configuration. Then, knowing the name parameter of the destroyed machine and the NFS label, the last available backup is searched for and restored on this recovery server. For the specified NFS label, the recovery tool finds the information about the NFS server, verifies that it is reachable and searches for the last available backup of the destroyed OMC-R.




Figure 4-3: Recovery phase initialization
[Figure: the OMCA-1 backup data are restored from the NFS server onto a spare integrated OMC-R on site B, which becomes the recovery server.]

The OMC-R application is then stopped on the recovery server to restore the environment data, and started to restore the PCUOAM data and the three days of daily data. Some hardware-dependent parameters must be updated (T3 hostnames and IP addresses, Sybase server names) and the OMC-R re-configuration performed. Some post-operations are necessary: PCUOAM configuration files update, PCUSN synchronization, SDO data regeneration and daily backup configuration. After the recovery server is installed, the automatic centralized archiving task runs again. Thus, the recovery server performs its data backup on the NFS server just as the destroyed OMC-R was supposed to do.




Figure 4-4: Recovery phase completion
[Figure: the recovery server on site B, now supervising the OMCA-1 network, performs its daily backup to the NFS server just as the destroyed OMC-R did.]

Site restoration
The return to a normal state occurs by applying the reverse operations of the disaster plan, assuming that the failed integrated OMC-R is operational again with its usual OMC-R version. The site restoration is performed by restoring the OMC-R data from the NFS server and applying the post-operations used during the recovery server configuration. The workstation needs to be integrated again in its usual OMC-R configuration. The recovery server needs to be re-installed from scratch with the same OMC-R version in order to return to the spare integrated OMC-R pool.
Figure 4-5: Site restoration
[Figure: the OMC-R data are restored from the NFS server back onto the rebuilt OMCA-1 on site A, and the recovery server on site B is re-installed and returned to the spare integrated OMC-R pool.]


Engineering considerations
A maximum of approximately 32 GB per day of disk storage volume for daily data is required to back up an OMC-R with 3200 cells, i.e. 10 MB per day per cell. Thus the maximum storage volume required on the NFS server is the sum, over all OMC-R servers to back up, of: number of cells in OMC-R[i] x 4 (number of days of backup kept) x 10 MB. As a result, the nominal bandwidth between the OMC-R servers and the NFS server cannot be less than 100 Mb/s in order to complete the daily backup of each OMC-R in less than 3 hours. It is also recommended to have a redundancy solution for the NFS server.
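As an illustration only, the following minimal sketch applies the sizing rule above (10 MB per cell per day, 4 days kept per OMC-R). The helper names and the cell counts in the example are hypothetical and not part of the OMC-R tooling.

    # Illustrative sizing helper for the NFS backup rule above.
    MB_PER_CELL_PER_DAY = 10   # assumption from the rule: 10 MB/day/cell
    DAYS_KEPT = 4              # assumption from the rule: 4 days of daily data kept

    def nfs_storage_gb(cells_per_omcr):
        """Total NFS storage (GB) for a list of OMC-R cell counts."""
        total_mb = sum(cells * DAYS_KEPT * MB_PER_CELL_PER_DAY for cells in cells_per_omcr)
        return total_mb / 1024.0

    def daily_backup_hours(cells, link_mbps=100):
        """Rough transfer time (hours) of one OMC-R daily backup over the given link."""
        volume_bits = cells * MB_PER_CELL_PER_DAY * 1024 * 1024 * 8
        return volume_bits / (link_mbps * 1e6) / 3600.0

    # Example with two hypothetical OMC-Rs of 3200 and 2400 cells:
    print(nfs_storage_gb([3200, 2400]))    # ~219 GB required on the NFS server
    print(daily_backup_hours(3200, 100))   # ~0.75 h for one 3200-cell daily backup at 100 Mb/s

This first-order estimate is consistent with the rule that a 100 Mb/s link keeps each daily backup well under the 3-hour target.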


OMC-R Solution Architecture and Interfaces


OMC-R Solution Architecture


The OMC-R has a client-server system architecture, as shown in the figure below with the nominal hardware configuration.
Figure 5-1: Nominal OMC-R system architecture
The system architecture comprises the following equipment: a single or dual server, multiple clients, LAN equipment, optional servers (SDO, PCU OAM) when applicable, and other optional devices such as network printers or an uninterruptible power supply (UPS).


Clients
The local client workstations are connected to the OMC-R server over an Ethernet LAN. An X-terminal is logically linked to a workstation or to an integrated OMC-R server through an Ethernet LAN. It is not recommended to have the X-terminal access the workstation or the Integrated OMC-R server through a WAN (via routers) because of the bandwidth requirements of the X11 protocol. The RACE PC equipment is linked through a WAN to the OMC-R central site. The RACE PC equipment can also be directly connected to the OMC-R LAN.

Ethernet switch
The OMC-R LAN must be at minimum a 100 Mbps LAN built around a 100 Mbps Ethernet switch. 1000 Mbps Ethernet switches are also supported.

Console Server
For the Line Mode feature, a console server is used to connect serial port A of the OMC-R server to the local LAN, so that the server can be supervised through the Telnet protocol.

Remote Access VPN


The Nortel VPN Router provides IP routing, IPsec-based VPN, stateful firewall, encryption (DES, 3DES, AES, RC4), and authentication. It supports both site-to-site and remote access VPNs. The VPN Router is connected to the Ethernet LAN of the OMC-R server site. It gives access to the RACEs and support teams.

Ethernet routing switch


This routing device is connected to the OMC-R Ethernet LAN to provide access to the remote sites, the remote BSC 3000 nodes and the remote PCUSN nodes. Another type of remote site is the NMS, which manages one or several OMC-Rs through the TMN Q3 interface.

OMC-R/BSC2G X.25 routing switch


This equipment is no longer required starting OMC-R v18.

Optional servers of the OMC-R system not based on the Integrated OMC-R Server
The following optional servers are part of the OMC-R system but are not based on the Integrated OMC-R server. The PCU OAM server, hosted by a dedicated local workstation with additional memory and disk capacity, is connected to the Ethernet LAN. The PCU OAM can be installed only locally, i.e. on the same LAN as the OMC-R. The SDO (data server) is used to export OMC-R data in a pre-defined ASCII format. It can be installed on a dedicated workstation with an external disk when necessary. An SDO can be local or remote.


Other optional devices


All OMC-R equipment should be connected to a UPS or AC power backup. The AC power must be protected. This requirement is of major importance for servers and workstations with software that runs on Solaris (Unix) systems. A network printer can be connected to the OMC-R Ethernet LAN or be attached to a remote site workstation.

Security Feature - Solaris Secure Shell (SSH)


Solaris SSH provides a suite of tools that are secure alternatives to the traditional telnet, ftp, rlogin and rsh commands. Operators and administrators should always use the ssh utilities (ssh, sftp, scp) for their administrative tasks. Starting with the V17 release, the OMC-R servers and workstations support SSH v2 (Secure Shell), allowing remote users or management systems to use secure shell and secure file transfer. The objective is to ensure that passwords are never exchanged in clear text over the network: they are encrypted, so a malicious person connected to the network cannot retrieve passwords by packet sniffing.

OAM Interfaces
OMC-R server X.25 interface


This equipment is no longer required in OMC-R V18. If the X.25 card is present on the hardware, it will not be used, and the X.25 software package is no longer installed.

Integrated OMC-R Server IP interface


The Integrated OMC-R server requires 2 Ethernet links because it implements Solaris Internet Protocol Multipathing (IPMP) to support redundancy at the network adapter and connectivity level. With the Integrated OMC-R Server based on the SF V8x0 platforms, which have two Quad Fast Ethernet cards (V880) or two Quad Gigabit Ethernet cards (V890/T5140), 3 IP addresses are required to handle all server traffic on a single subnet. The active interface requires 2 IP addresses (one public address for server data communications and one private address for interface tests) and the standby interface requires 1 private test IP address (interface test only). Upon a failure of the active interface, the public data communication IP address transfers automatically to the standby interface. The private test address is used only to detect failure and recovery of an interface. In Solaris terms, it is marked as deprecated so that applications do not actually use this address for standard server communications. The public data address (assigned to the active interface) is used for standard server communication and migrates automatically between interfaces in the event of an interface failure. The standby interface does not have a public data address assigned; should the active interface fail, the public data address migrates to the standby interface. The private test address must be a valid routable address and must be in the same subnet as its associated data address. As soon as the interface originally set as active is back in service, the public data address is assigned back to this interface. Refer to the "Appendix B: OMC-R LAN Engineering" section. Note 1: The Integrated OMC-R server based on SF V8x0 has one Gb Ethernet port which is not used at the moment. Note 2: The Dual T3 external storage has two 100 Mb Ethernet ports which must be connected to the OMC-R LAN.
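For illustration only, the minimal sketch below shows how such an active/standby pair is typically declared with the classic Solaris 10 IPMP interface files. The interface names (qfe0, qfe4), the group name and the hostnames are hypothetical examples; the actual values are set by the OMC-R installation procedure, so this is not a configuration to apply manually.

    # /etc/hostname.qfe0 -- active interface: public data address plus deprecated, non-failover test address
    omcr-data netmask + broadcast + group oam_ipmp up \
    addif omcr-test0 deprecated -failover netmask + broadcast + up

    # /etc/hostname.qfe4 -- standby interface: deprecated test address only, flagged standby
    omcr-test1 netmask + broadcast + deprecated -failover standby group oam_ipmp up

With such a declaration, the data address follows the healthy interface while the two test addresses remain fixed and are used only for failure detection, which matches the behaviour described above.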

BSC X.25 interface


This equipment is no longer required starting OMC-R v18.

BSC 3000 IP interface


The interface between the BSC 3000 and the OMC-R is Ethernet TCP/IP, unlike the X.25 or A interface used with the BSC2G. The Ethernet LAN must be 100 Mbps full duplex. At the BSC 3000 Control Node, 2 OMU boards (1 active + 1 passive) provide connectivity toward the OMC-R, each of them using one 10/100Base-T Ethernet port. Each OMU Ethernet port has its own IP address, which gives two Ethernet ports at the BSC level, each of them "called" by the OMC-R on a different IP address. These two ports (one active and one passive) are both accessible for local maintenance with a local TML, but only the one corresponding to the active OMU "answers" the OMC-R "call". Note: For a BSC e3, 2 IP addresses should be reserved for the OMU boards plus 2 more for the CEM boards. These IP addresses need to be defined in the same IP subnetwork. The OMU boards are required to be connected to the IP network, while the CEM IP addresses are not used outside the BSC except if a TML is used remotely. The CEM boards can be reached from the OMC-R through the OMUs; however, it is recommended to connect the CEM boards to the IP network to ease remote technical support intervention from a TML. To facilitate local maintenance, an optional integrated hub has been introduced in the BSC 3000 order list (an 8-port dual speed 10/100Base-T hub). Another option is to connect the two OMU Ethernet links to an Ethernet switch, which also allows equipment interconnection (TML, RACE). Only the active OMU can be polled by the OMC-R. In case of an OMU or Ethernet link failure, the BSC 3000 OMUs switch over automatically (the passive OMU becomes active), and the BSC then informs the OMC-R to use the second IP address.

PCUSN IP interface
The interface between PCUSN and OMC-R is Ethernet TCP/IP.

The PCUSN hosts 2 Control Processors (CP), one active and one hot-standby, in a single shelf; the two CPs share the same IP address. In case of a switchover from the active to the hot-standby CP, a hub or switch connected to both CPs is required in order to keep the OAM connection alive; the IP and MAC addresses are automatically propagated from the failed CP to the standby CP. A local connection between the OMC-R and the PCUSN requires a switch, to isolate the PCUSN from IP traffic not intended for it, which could otherwise cause the PCUSN CP to crash under heavy traffic. A single switch can be shared to connect multiple local PCUSNs to the OMC-R.

OMC-R to GSM BSS Nodes end to end link solution


The OMC-R has the following external interfaces:
The OMC-R to PCUSN interface, using the TCP/IP communication protocol
The OMC-R to BSC 3000 interface, using the TCP/IP communication protocol
The OMC-R to NMS interface, using the TCP/IP communication protocol

OMC-R PCUSN Connection


OMC-R servers are connected to PCUSNs by a TCP/IP connection established either through a LAN for a PCUSN local to the OMC-R site, or through a WAN or on PCM time slots of the A interface for a PCUSN remote from the OMC-R site. The following scenarios exist, depending on the BSC location, design options, and customer requirements:
PCUSNs remote from the OMC-R site are connected via the Agprs interface using a router
PCUSNs remote from the OMC-R site are connected through a WAN
PCUSNs co-located with the OMC-R site are connected through the OMC-R LAN

Agprs interface link solution


This configuration is an alternative that can be used when the PCUSN is remote from the OMC-R site, to carry the PCUSN OAM traffic to the OMC-R LAN. Using a router, the IP traffic is encapsulated over PCM time slots up to the SGSN via the Agprs interface, and then de-encapsulated onto the OMC-R LAN. The router used at both ends requires the following interface modules:
one Ethernet interface (RJ-45 and 15-pin AUI connector) for connectivity to the OMC-R LAN or to the PCUSN Ethernet interface
one or more E1 or T1 interfaces
The figure below describes the OMC-R PCUSN Agprs interface link.



Figure 5-2: OMC-R PCUSN Agprs interface link

Ethernet routing switch solution


This solution is used when the PCUSN is remote from the OMC-R LAN. Interconnection is performed using a WAN (X.25, Frame Relay, PCM). The VPN Router 1750 or the Ethernet Routing Switch 8600 is proposed for WAN access at both ends. The VPN Router 1750 or ERS 8600 used at both ends requires the following interface modules:
one Ethernet interface (RJ-45 and 15-pin AUI connector) for connectivity to the OMC-R LAN or to the PCUSN Ethernet interface
one or more WAN access modules
This solution also covers the interconnection of remote BSC 3000 nodes and remote workstations to the OMC-R LAN. The figure below describes the OMC-R PCUSN WAN solution.



Figure 5-3: OMC-R PCUSN WAN solution

LAN connection
When the PCUSN is local to the OMC-R site, the PCUSN Ethernet interface is directly connected to the OMC-R LAN through a switch.


OMC-R BSC 3000 Interface
BSC 3000 co-located with the OMC-R LAN
This configuration presents the advantage of using the existing OMC-R LAN equipment, but an Ethernet switch is mandatory between the BSC 3000s and the OMC-R. The OMC-R LAN traffic being quite high, the active and passive OMUs need to be protected from this traffic load. If the OMC-R LAN is built with hubs, they should be replaced with an Ethernet switch, at least for the BSC connections. No additional equipment is needed (not even the BSC internal hub).
Simple connection using the existing LAN equipment (no redundancy)
Ethernet Switches ranging from 24 to 48 ports can be used for this type of configuration, and 2 Ethernet ports on the switch should be provisioned for each BSC 3000. In case of several co-located BSC 3000s, switches can be stacked to provide the required number of ports. The Ethernet Switches BS470 and BS5510 can be used in this configuration.
Figure 5-4: Simple connection using the existing LAN equipment (no redundancy)
Redundant Switch connection using Spanning Tree
This configuration uses a redundant Ethernet Switch setup and is recommended only as an interface toward the BSS OAM network (it can connect the co-located BSCs and the routers connecting the remote BSCs). This configuration can be made using the BS470 and BS5510 Ethernet Switches. These types of switches have optional or built-in Cascade Modules allowing the interconnection of several switches with a 2.5 Gbps link. Combining the cascade capability with Spanning Tree Protocol support (which detects and eliminates logical loops in the network), these switches are well adapted to this kind of configuration.



Figure 5-5: Redundant Switch connection using spanning tree

Remote BSC 3000


When the BSC e3 is remote from the OMC-R, the BSC and OMC-R LANs must be interconnected through a WAN with a throughput of 256 kbit/s. For a throughput of 256 kbit/s, the use of the Frame Relay protocol is advised. A minimum of 128 kbit/s can be used for normal operation (OBS, NOTIF, TRACE), except for software upgrades. For several collocated BSC 3000s, the minimum recommended throughput is 128 kbit/s for the site plus 128 kbit/s per BSC, in order to guarantee upload and download performance and for redundancy purposes. This supposes the introduction of IP routers at the OMC-R level and also at the BSC level. The solution proposed here aims to ensure increased redundancy at the transmission network level and at the OMC-R level (the most critical parts of the OMN). The BSC site contains only one router, but equipped with two WAN interfaces. Two cases of WAN are considered here: one based on a Frame Relay network and one based on a PCM network. Both types of WAN use the same type of routers at the OMC-R and at the BSC level, but equipped with different types of WAN interfaces. For the BSC 3000 site, one router should be considered, with one Ethernet interface (an extra Ethernet switch is required) and two WAN interfaces (V.11 or E1/T1) for redundancy purposes. The available router is the VPN Router 1750. If the customer requests an increased redundancy level, an additional router for the BSC should be considered. For the OMC-R site, a two-router configuration is to be considered. The two routers at the OMC-R level should be equipped with one Ethernet interface and one WAN interface and are configured to work in tandem using VRRP (Virtual Router Redundancy Protocol).



Figure 5-6: OMC-R / BSC 3000 remote connectivity using a WAN

The main idea is to keep the link between the OMC-R (QFE0 IP address) and the active OMU of the BSC (IP1 address) alive in case of failure of one WAN interface, one router or one Ethernet switch. There is still a single point of failure, which can be considered less critical: the router at the BSC site; however, only one BSC is affected by its failure.
E1/T1 alternatives
If the chosen type of transmission network for the WAN is E1/T1, then PCM concentration should be performed so that a minimum number of WAN interfaces is used at the OMC-R router level. This can be done at the WAN level (by using an SDH backbone):
Figure 5-7: E1/T1 links using WAN concentration

Or it can be done by concentrating the PCMs coming from several BSCs at the OMC-R level by using a digital cross-connect (which can be even the MSC):



Figure 5-8: E1/T1 links using concentration at the OMC-R site level

In the case of a small number of BSCs (2 to 3), a direct connection can be made between the BSCs and the two OMC-R routers; in this case additional E1/T1 interfaces should be provided (one for each connected BSC, but no more than 3):
Figure 5-9: E1/T1 links not concentrated (for BSS small configurations)

All the configurations presented above have two main dimensioning criteria:
1. The needed throughput for the BSC - OMC-R link
2. The number of available ports (Ethernet or WAN)


OMC-R to NMS interface


The OMC-R/NMS Q3 link solution is based on Ethernet. For the OMC-R/NMS non-Q3 link, it is possible to link the NMS to the OMC-R through the ASCII interface.

Q3 on Ethernet Link via TCP/IP.


The external manager (NMS) can be linked to the OMC-R via the existing Ethernet network instead of an X.25 network. It can be local or remote. This allows a better response time, as Ethernet offers more bandwidth than an X.25 WAN, and the TCP/IP stack used is well optimized. In addition, this solution is cheaper since only a simple Ethernet connection is required.

OMC-R/NMS link through the non Q3 interface


This is particularly useful when the NMS integrator does not provide the Q3 interface. The NMS is linked to a local workstation at the OMC-R central site using the TCP/IP protocol. This can be either a local link or a WAN link if the NMS and the OMC-R central site are not on the same LAN. Note that there is a maximum of ten simultaneous command-line sessions, which can include up to five RACE sessions. Each active NMS session uses one connection and therefore decrements the number of connections available for the actual RACEs.


Bandwidth Requirements


The following bandwidth requirements are expressed as the Recommended Throughput computed with the Highest values of the dimensioning Parameters (observations, traces, notifications per second, network elements), or RTHP. These throughput values can double in case of huge event flows. The physical bit rates configured in the data communication equipment used to interconnect the OMC-R equipment must therefore be higher than the RTHP; if the bit rate is lower than recommended, delays will occur in case of large event flows.

OMC-R LAN
The OMC-R network requires a fully dedicated Ethernet LAN for operation. An Ethernet switch will provide the necessary bandwidth of 100 Mbps for the Standard, Enhanced and Maximal Capacity models. 1000 Mbps Ethernet Switches are also supported.

OMC-R - Client bandwidth requirements


The local workstation will be connected to the OMC-R through the LAN at 100 Mbps. For the remote sites, the client workstations are connected to the OMC-R central site through a WAN with the following throughputs requirements.
Table 6-1: Client workstation minimum required throughputs (RTHP, kbps)
Standard Capacity (2400 cells): 216
Enhanced Capacity (3200 cells): 282
Enhanced Capacity (4800 cells): 312

Workstation - X-terminal bandwidth requirements


An X-terminal can be connected to a workstation - or to the OMC-R server itself if it is an Integrated OMC-R Server based on SF V8x0 - through a WAN with the following throughputs requirements.



Table 6-2: X-terminal minimum required throughputs (RTHP, kbps)
Standard Capacity (2400 cells): 648
Enhanced Capacity (3200 cells): 846
Enhanced Capacity (4800 cells): 934

Throughputs above can double in case of huge event flows. WAN physical bit rates parameter configuration must be higher than the minimum required throughputs to avoid delays occurring in case of huge event flows.

OMC to BSC bandwidth requirements


A BSC will be accessed by the OMC-R through an IP address (BSC 3000). The OMC BSC communication has the following minimum required throughputs.
Table 6-3: OMC-R to BSC minimum required throughputs (RTHP, kbps)
Standard Capacity (2400 cells): 923
Enhanced Capacity (3200 cells): 1228
Enhanced Capacity (4800 cells): 1587

To accomplish a BSC 3000 software download, i.e. approximately 300 MB with 10 downloads in parallel (taking a 1.3 ratio for protocol overhead and a 0.6 efficiency), an estimate of the required bandwidth is given below for several duration targets.
Table 6-4: BSC 3000 software download minimum required throughputs (RTHP, kbps)
Download target duration 1 hour: 14800
Download target duration 5 hours: 5800
Download target duration 10 hours: 2400


BSC to OMC bandwidth requirements
BSC 3000


When the BSC 3000 is remote from the OMC-R central site, the BSC 3000 requires a throughput of 256 kbps. A minimum of 128 kbps can be used for normal operations, excluding software upgrades. For several BSC 3000s collocated in the same remote site, the total throughput requirement is calculated as 128 kbps for the remote site plus 128 kbps per BSC 3000, as illustrated in the sketch below.
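A minimal sketch of this sizing rule (the helper name and the example site are hypothetical, not part of the OMC-R tooling):

    def remote_site_throughput_kbps(n_bsc3000):
        """Total OAM throughput (kbps) for a remote site hosting several BSC 3000s:
        128 kbps for the site plus 128 kbps per BSC 3000 (normal operations only)."""
        return 128 + 128 * n_bsc3000

    print(remote_site_throughput_kbps(3))  # 512 kbps for a site with 3 collocated BSC 3000s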


OMC-NMS bandwidth requirements


For an external Q3 manager connected to the OMC-R through IP, the throughput requirements are as follows.
Table 6-5: Q3 external manager minimum required throughputs (RTHP, kbps)
Standard Capacity (2400 cells): 1882
Enhanced Capacity (3200 cells): 2497
Enhanced Capacity (4800 cells): 3337

OMC - SDO bandwidth requirements


When the SDO function is hosted on a dedicated workstation (for legacy OMC-R server mono and dual configurations), the minimum throughputs between the active OMC-R server and the SDO workstation are as follows.
Table 6-6: OMC - SDO on workstation minimum required throughputs (RTHP, kbps)
Standard Capacity (2400 cells): 1881
Enhanced Capacity (3200 cells): 4296
Enhanced Capacity (4800 cells): N/A

The configured WAN physical bit rates must be higher than the minimum required throughputs; otherwise the SDO may not be able to complete its computation between two T2 periods and to compute the data within one day. The Peri OMC application, which collects all the result files from the SDO, has the following throughput requirements when using the V12 factorized SDO output format.
Table 6-7: SDO to Peri OMC minimum required throughputs with V12 format (RTHP, kbps)
Standard Capacity (2400 cells): 1662
Enhanced Capacity (3200 cells): 2183
Enhanced Capacity (4800 cells): 3029


PCUOAM - OMC bandwidth requirements


When the PCUOAM function is hosted on a dedicated workstation (for legacy OMC-R server mono and dual configurations), the average throughputs between the PCUOAM workstation and the active OMC-R server have been calculated as follows.
Table 6-8: PCUOAM - OMC on workstation minimum required throughputs (RTHP, kbps)
Standard Capacity (2400 cells): 174
Enhanced Capacity (3200 cells): 232
Enhanced Capacity (4800 cells): 335

The PCUOAM workstation receives counter observations from the PCUSN in a proprietary NMS format (FMIP), for which the average required throughputs are as follows.
Table 6-9: PCUSN to PCUOAM on workstation minimum required throughputs (RTHP, kbps)
Standard Capacity (2400 cells): 90
Enhanced Capacity (3200 cells): 120
Enhanced Capacity (4800 cells): 168


Hardware Specifications
Supported configurations
OMC-R server configurations
Table 7-1: OMC-R server supported configurations (configuration name: Integrated OMC-R server)
Sun Fire V880, 4x900 MHz, 8 GB RAM, T3 460 GB - Standard Capacity (2400 cells) - Supported
Sun Fire V880, 4x1200 MHz, 8 GB or 16 GB RAM, T3 460 GB - Enhanced Capacity (3200 cells) - Supported
Sun Fire V890, 4x1350 MHz, 16 GB RAM, 12x146 GB internal disks - Standard Capacity (2400 cells) or Enhanced Capacity (4800 cells) - Supported
Sun Fire V890, 4x1500 MHz, 16 GB RAM, 12x146 GB internal disks - Standard Capacity (2400 cells) or Enhanced Capacity (4800 cells) - Supported
Sun Fire V890, 4x1800 MHz, 16 GB RAM, 12x146 GB internal disks - Standard Capacity (2400 cells) or Enhanced Capacity (4800 cells) - Supported
T5140 server, 2x1200 MHz, 32 GB RAM, 4x146 GB internal disks and ST2510 disk array - Enhanced Capacity (4800 cells) - Nominal

OMC-R client configurations


Table 7-2: OMC-R client workstation supported configurations
Sun Blade 150, 650 MHz, 512 MB RAM, 40 GB HD - Supported with restrictions R1, R2
Sun Blade 150, 650 MHz, 512 MB RAM, 80 GB HD - Supported with restriction R2
Sun Blade 1500, 1500 MHz, 1024 MB, 120 GB HD - Supported
Sun Ultra 45, 1600 MHz, 2 GB, 250 GB HD - Nominal

R1: configuration requires an external DDS Tape Drive for software installation or upgrade. R2: supported as local OMC-R workstations only. Remote OMC-R workstations require DVD drive.

PCUOAM server configurations


Table 7-3: PCUOAM workstation supported configurations
Sun Blade 1500, 1500 MHz, 1024 MB, 2x120 GB HD - Supported


SDO server configurations


Table 7-4: SDO workstation supported configurations
Sun Blade 1500, 1500 MHz, 1024 MB, 2x120 GB HD - Supported

RACE Client
Client configuration


Table 7-5: RACE Client supported configuration
PC, 1 Pentium 4 CPU 2.4 GHz, 1 GB RAM, 40 GB HD, with W2K Professional Service Pack 3, Windows XP Professional Service Pack 1 or Windows Vista - Recommended configuration


Detailed hardware specifications
T5140 Server for OMC-R server


Table 7-6: T5140 hardware and ST2510 disk array specifications (T5140 Server)
CPU: 2 x 8-core 1.2-GHz UltraSPARC T2 Plus processors, 4-MB external cache per processor; minimum OS level required: Solaris 10 update 4 and required kernel patch
RAM: 32-GB memory
Hard disk: 4 x 146-GB 10000 RPM SAS (RAID 1) + ST2510 Disk Array (2 disks for the OS, the 2 others reserved for future needs)
Disk drive: 1 x DVD-ROM drive +/- R/W
Embedded communication interfaces: 1 x 10/100BASE-T self-sensing Ethernet port, 1 x embedded QGE board with 4 available Ethernet ports, 1 x ILOM board (serial/Ethernet)
Supply: 2 x power supply
Software license: Server License for Solaris
Interfaces boards: 1 QGE/PCI-Express and 1 QGE embedded interface used for external Ethernet connection; 2 QGE/PCI-Express and 2 QGE embedded interfaces used to connect the ST2510 Disk Array device; 1 QGE/PCI-Express and 1 QGE embedded interface currently free
Tape drive: no internal tape drive
Miscellaneous: 4 USB 2.0 ports, 1 RJ-45 serial management port, 1 DB-9 serial port; information on the ST2510 below

Sun StorageTek(ST) 2510 Disk Array


Table 7-7: ST2510 Disk Array hardware specifications
CPU/RAM: 2 x 512-MB cache iSCSI HW RAID controller
Hard disk: 12 x 300-GB 15000 RPM SAS drives
Embedded communication interfaces: 2 and 4 1-Gb/s iSCSI host ports per controller tray, 2 iSCSI Ethernet RJ45, 1 Ethernet RJ45 for Disk Array administration, 1 PS2 serial port for commissioning of the administration Ethernet port


Sun Fire V880 for OMC-R server


Table 7-8: Sun Fire V880 hardware specifications
CPU: 4 x 900-MHz or 4 x 1200-MHz UltraSPARC III Cu processors, 8-MB external cache per processor; minimum firmware level required: OBP 4.7.0
RAM: 8-GB or 16-GB memory
Hard disk: 6 x 73-GB 10000 RPM FC-AL disk drives
Disk drive: 1 x DVD-ROM drive
Embedded communication interfaces: 1 x 10/100BASE-T self-sensing Ethernet port, 1 x Gigabit Ethernet port, 2 x RS-232C/RS-423 serial ports
Supply: 3 x power supply
Software license: Server License for Solaris
Interfaces boards: 1 x Sun PGX64 / XVR-100 Graphics Accelerator, 2 x High Speed Serial Interface PCI Bus Adapter, 2 x Sun Quad Fast Ethernet PCI Bus Adapter, 2 x Single Fibre Channel PCI Network Adapter
Tape drive: 1 x 20-GB 4mm DDS-4 internal tape drive for 50-pin Narrow SCSI interface
Miscellaneous: 1 x serial port splitter cable

Sun Fire V890 for OMC-R server


Table 7-9: Sun Fire V890 hardware specifications
CPU: 4 x 1350-MHz or 4 x 1500-MHz or 4 x 1800-MHz UltraSPARC IV Cu processors (see Note 1), 16-MB external cache per processor
RAM: 16-GB memory
Hard disk: 12 x 146-GB 10000 RPM FC-AL disk drives
Disk drive: 1 x DVD-ROM drive
Embedded communication interfaces: 1 x 10/100BASE-T self-sensing Ethernet port, 1 x Gigabit Ethernet port, 2 x RS-232C/RS-423 serial ports
Supply: 3 x power supply
Software license: Server License for Solaris
Interfaces boards: 1 x XVR-100 Graphics Accelerator, 2 x High Speed Serial Interface PCI Bus Adapter, 2 x Sun Quad Gigabit Ethernet PCI Bus Adapter
Tape drive: 1 x DAT-72 internal tape drive for 50-pin Narrow SCSI interface
Miscellaneous: 1 x serial port splitter cable
Note 1: The UltraSPARC IV processor is a first-generation chip multithreading (CMT) processor with two threads per processor, enabling up to twice the application throughput of the UltraSPARC III processor.

Sun StorEdge T3 Disk Array


Table 7-10: Sun StorEdge T3 Disk Array hardware specifications
CPU/RAM: 1-GB controller; minimum firmware level required: 2.1.3
Hard disk: 9 x 73-GB 10000 RPM FC-AL drives. The minimum disk firmware version, according to the manufacturer, must be: Seagate 73GB ST373405FC A538, Seagate 73GB ST373307FC A207, Fujitsu 72GB MAN3735 1204, Fujitsu 72GB MAP3735 0801, Hitachi 72GB HEJ72F FQ0C
Embedded communication interfaces: native FC-AL connection

Sun Blade 150


Table 7-11: Sun Blade 150 hardware specifications
CPU: 650 MHz UltraSPARC module
RAM: 512-MB memory
Hard disk: 40 or 80 GB disk board
Disk drive: 1 x DVD-ROM drive
Software license: Server License for Solaris
Interfaces boards: 1 x single-channel, single-ended UltraSCSI host adapter, PCI (cables not included)
Graphic board: 1 x Sun PGX64 / XVR-100 Graphics Accelerator
Miscellaneous: video cable adaptor (24 inch) VGA <-> Sun display, localized power cord kit Continental Europe Type 6, country kit Unix Universal with USB interface
Optional external tape drive: 1 x external 20 GB 4mm DDS4, 1 x power cord kit, 1 x fast-wide 68-68-pin SCSI cable

Sun Blade 1500


Table 7-12: Sun Blade 1500 hardware specifications
CPU: 1 x 1500 MHz UltraSPARC-IIIi module
RAM: 1024-MB memory
Hard disk: 1 x 120 GB / 2 x 120 GB disk board (Workstation / PCUOAM & SDO)
Disk drive: 1 x DVD-ROM/CD-RW drive
Software license: Server License for Solaris
Interfaces boards: 1 x single-channel, single-ended UltraSCSI host adapter, PCI (cables not included)
Graphic board: 1 x XVR-100 Graphics Accelerator
Miscellaneous: video cable adaptor (24 inch) VGA <-> Sun display, localized power cord kit Continental Europe Type 6, country kit Unix Universal with USB interface
Optional external tape drive: 1 x external 20 GB 4mm DDS4, 1 x power cord kit, 1 x fast-wide 68-68-pin SCSI cable

Sun Ultra 45
Table 7-13: Sun Ultra 45 hardware specifications
CPU: 1 x 1600 MHz UltraSPARC-IIIi module
RAM: 2-GB memory
Hard disk: 1 x 250-GB 7200 RPM SATA disk
Disk drive: 1 x slim DVD-RW/CD-RW drive
Software license: Server License for Solaris
Graphic board: 1 x XVR-100 Graphics Accelerator
Miscellaneous: localized power cord kit Continental Europe Type 7, UNIX-style keyboard with USB connector


Sun Server Hardware
T5140 Server


The Sun T5140 server with the ST2510 disk array provides the following benefits compared to the Sun Fire V8x0 hardware platforms:
Lower power consumption
Reduced space requirements
Higher processing power for potential future capacity increases
The T5140 server is based on the UltraSPARC T2 processor. The picture below shows the rear side of the T5140 server:
Figure 7-1: T5140 server rear view
Figure legend:
1 Power supply (PS0)
2 Power supply (PS1)
3 PCIe or XAUI slot 0
4 PCIe or XAUI slot 1
5 PCIe slot 2
7 Serial system controller port (SER MGT) - access to the ILOM
8 Ethernet system controller port (NET MGT)
9 10/100/1000 Ethernet ports (left to right: NET0, NET1, NET2, NET3)
10 USB ports (left to right: 0, 1)
11 Host serial port, DB9 connector (TTYA) - access to the server

ILOM ports

The T5140 server comes with an integrated ILOM (Integrated Lights Out Manager V2.0) board that provides access to the server console through its serial port and/or Ethernet port. The T5140 has no tape drive, and no graphics board is present since the server is rack mounted; a video display is not required because the ILOM board offers the capability to manage the server remotely. The T5140 comes equipped with 4 internal disks: one pair of disks is used for the OS while the other pair is reserved for future use. The OMC-R data is stored on the ST2510 disk array. The T5140 server comes equipped with an embedded QGE board with 4 available Ethernet ports, and an additional QGE/PCI-Express board with 4 Ethernet ports is installed in the PCI-E slot. The ports on these boards are used as follows:
1 QGE/PCI-Express and 1 QGE embedded interface port are used for external Ethernet connections.
2 QGE/PCI-Express and 2 QGE embedded interface ports are used to connect to the ST2510 disk array device.
1 QGE/PCI-Express and 1 QGE embedded interface port are reserved for future growth.

The ST2510 disk array device is used to store the OMC-R data. The ST2510 comes equipped with the following:
12 x 300-GB 15 krpm SAS drives
2 x 512-MB cache iSCSI HW RAID controllers
2 redundant AC power supplies
2 redundant cooling fans
2 QGE/PCI-Express and 2 QGE embedded interface ports are used to connect directly to the ST2510 iSCSI interfaces, without the use of a hub or switch. These interfaces can be seen in the figure below:
Figure 7-2: Rear view of the ST2510 disk array (dual Ethernet iSCSI interfaces, Ethernet interface for disk array management (CAM), serial PS2 6-pin DIN connector)

SF V8x0
The Sun Fire V8x0 Server is a high-performance, reliable server. CPU is added in pairs via a dual processor/memory module. All memory is accessible by any processor, as workgroup servers do not implement domains or partitions. An internal storage array supports six Fiber Channel disks.

Hardware redundancy
Hardware redundancy is provided via internal component redundancy of power supplies, cooling modules, CPUs, memory boards, HSI boards, FC-AL controller boards and RAID disks. Hot-pluggable and hot-swappable components such as power supplies, cooling modules, and RAID disks can be changed in the event of failure without service interruption.
The SF V8x0 significantly enhances the availability of the server by reducing to zero the downtime linked to hard disk and power supply failures:
Power supply: The SF V8x0 has 3 separate power supplies, each with its own power cable. The server can work with only two power units.
Processing: Each server hosts 2 CPU/Memory boards with 2 CPUs each. In the event of a failure on one CPU, the corresponding CPU/Memory board is declared blacklisted and this board's CPU load is automatically balanced over the 2 remaining CPUs of the second board.
Network access: Network access redundancy is ensured with the 2 Quad Fast Ethernet (SF V880) or 2 Quad Gigabit Ethernet (SF V890) boards implementing IP Multipathing.

Automatic System Recovery (ASR)
The Sun Fire V8x0 provides automatic system recovery from the following types of hardware faults:
CPU modules
Memory modules
PCI buses
System I/O interfaces
Automatic system recovery allows the system to resume operation after experiencing certain hardware failures. The automatic self-test feature enables the system to detect failed hardware components. An auto-configuration capability designed into the system's boot firmware allows the system to de-configure failed components and restore system operation.

T3 Storage Array
Each T3 disk array is used in RAID5 mode (full redundancy) while the OMC application is using the 2 T3s in RAID1 with the Solstice Disk Suite software from Sun. So all disks can be changed in the event of failure without service interruption.


DCN Hardware Specifications & Recommendations


This section provides hardware specifications for the various DCN components that are supported with the OMC-R solution architecture, as well as recommendations that help engineer the OAM DCN. Hardware specifications and recommendations are provided for the:
LAN switching device
WAN routing device
Terminal Server
Console Server
Alarm Relay Box
RS-422/RS-232 Interface Converter

LAN switching device


For the OMC-R LAN, the following LAN switches are proposed:
Ethernet Switch 5510-48T, stackable - 48 ports 10/100/1000BaseT
Ethernet Switch 5510-24T, stackable - 24 ports 10/100/1000BaseT
Ethernet Switch 470-24T, stackable - 24 ports 10/100BaseT
Ethernet Switch 470-48T, stackable - 48 ports 10/100BaseT
The Ethernet Switches are recommended for local client workstation connections, local BSC 3000 connections, local PCUSN connections, and local Q3 external manager connections. The following sections provide specifications and recommendations for these switches.

Ethernet Switches
As mentioned above, the Ethernet Switches are recommended for LAN connections, including local client site connections and remote/local NE sites. The 5510 series suits the Sun Fire V890/T5140-based OMC-R LAN, as these servers connect using Gigabit Ethernet, which also optimizes backups towards remote NFS servers. All models provide network performance and reliability thanks to advanced layer 2, 3, and 4 packet classification, prioritization and quality of service (QoS) capabilities, web-based management, fail-safe stackability, and flexible high-speed uplinks.



Table 8-1: Switch specifications Features Number of ports Gigabit interface converter Cascading Module Stackability Up to 8 units

Ethernet Switch 5510-24/48T


24/48 ports 10/100/1000 BaseT

Ethernet Switch 470-24/48T


24/48 ports 10/100 BaseT

2 optional Mini-GBIC 1 port 1000-SX/LX 2 optional GBIC 1000-SX/LX Built-in

Switch configuration
Unless special features such as Spanning Tree are used, no configuration procedure is required. The Ethernet Switches operate with factory default settings, automatically learning the addresses of all end-stations and maintaining a table of more than 16 000 MAC addresses.
Switch supervision
For network management, the Ethernet Switches include a standards-compliant SNMP agent. Network management can also be performed in-band using the Telnet application. In addition, a serial console port allows out-of-band management using a standard VT100 or similar terminal (it gives access to the Console Interface screens).

WAN routing switch


For the OMC-R WAN access, the following WAN routers are proposed:
VPN Router 1750
The VPN Router is recommended for remote site client connections, remote BSC 3000 connections, remote PCUSN connections and remote Q3 external manager connections.
Access Stack Node Router
The ASN Router is recommended for remote BSC2G connections to the OMC-R using the X.25 protocol. This equipment also enables remote site client workstation connections, remote BSC 3000 connections, remote PCUSN connections, and remote Q3 external manager connections. Note: the ASN Router is not RoHS compliant and therefore cannot be deployed in countries where the RoHS restriction regulation applies. To implement the local X.25 router solution with RoHS compliant equipment, contact your local Nortel engineering representatives.
The following sections provide specifications and recommendations for these routers.


VPN Router 1750


The Contivity 1750 enables scalable, secure, and robust IP VPNs across the public data network. It provides routing, firewall, bandwidth management, encryption, authentication, and data integrity services to ensure secure tunneling across IP networks and the Internet. The recommended VPN router is the VPN Router 1750, which allows up to 500 VPN tunnels. The VPN Router is connected to the Ethernet LAN of the OMC-R server site. It gives access to the RACEs and support teams. The Nortel VPN Router acts as the primary WAN/Internet access device via frame relay, dial-up or leased line connection, or can be connected to an existing WAN or Internet access device via its standard Ethernet interface. The Contivity 1750 provides four PCI slots that support a combination of the following option cards:
Contivity Security Accelerator and Hardware Accelerator cards
SSL VPN Module 1000
10/100BASE Ethernet interface card
1000BASE-T Ethernet interface card
1000BASE-SX Ethernet interface card
56/64K CSU/DSU WAN interface card
ADSL WAN interface card
ISDN BRI interface card
T1 CSU/DSU WAN interface card (full-height card)
T1/E1 CSU/DSU WAN interface card (half-height card)
Quad T1/E1 CSU/DSU WAN interface card
V.90 modem interface card
Single V.35/X.21 WAN interface card (full-height card)
Single V.35/X.21 WAN interface card (half-height card)
HSSI WAN interface card
Table 8-2: VPN Router 1750 specifications
Two 10/100 Ethernet LAN ports on the base system
One serial port for out-of-band management of the Contivity 1750
Four expansion PCI slots that can contain interface cards, a hardware accelerator card, and the SSL VPN Module 1000
128 MB memory, upgradable to 256 MB total
Physical: height 5.25 in. (13.335 cm), width 17 in. (43.18 cm), depth 21 in. (53.34 cm), weight 28 lbs. (12.7 kg)
Electrical: voltage 100-240 VAC, current 5 A @ 100 VAC or 3 A @ 240 VAC, frequency 50-60 Hz
Environmental: operating temperature 32-104 F (0-40 C)

The Contivity 1750 is available in four models:
Contivity 1750 with five tunnels (56-bit or 128-bit)
Contivity 1750 with 500 tunnels (56-bit or 128-bit)
Figure 8-1: VPN Router 1750 front



Figure 8-2: VPN Router 1750 back

Table 8-3: Supported option cards for the Contivity 1750 (option card - maximum number - restrictions)
SSL VPN Module 1000 - 1 - Install this card in slot 1 only
Contivity Security Accelerator (CSA) / Hardware Accelerator - 1 - Install one CSA or one Hardware Accelerator card; do not install a Hardware Accelerator card in slot 4
10/100 Ethernet interface - 4
1000BASE-T interface (copper) / 1000BASE-SX interface (fiber) - 2 - Install two 1000BASE-T cards, two 1000BASE-SX cards, or one card of each type
56/64K CSU/DSU WAN interface - 4
ADSL WAN interface - 4
ISDN BRI S/T or U interface - 4
T1 CSU/DSU WAN interface (full-height) - 4
T1/E1 CSU/DSU WAN interface (half-height) - 4 - For E1 support, you must install the half-height interface card
Quad T1/E1 CSU/DSU WAN - 3
V.90 modem interface - 4 - If an SSL VPN Module 1000 is installed in slot 1, do not install the V.90 modem interface card in slot 2
Single V.35/X.21 WAN interface (full-height) - 4
Single V.35/X.21 WAN interface (half-height) - 4
HSSI WAN interface - 2 - Do not install in slot 4; install in slot 3 or 1 if possible. If an SSL VPN Module 1000 is installed, you can install only one HSSI WAN interface card

Access Stack Node Router


A router is a piece of network equipment that can interconnect networks that use different protocols and media. A router has two main functions regarding the interconnection of networks:
dynamic routing, by selection of the optimal path for sending data across complex networks
static routing, in point-to-point configurations, with the aim of building extended networks (LAN and WAN interconnection)
The router works at the network layer (layer 3) of the OSI stack. It performs a protocol conversion between two different types of networks (e.g. TCP/IP, X.25). In the telecommunication world, there are many interworking network protocols (e.g. X.400, IPX, LAT, ...). Routers that support several protocols are called multi-protocol routers. The Nortel Networks Access Stack Node (ASN) is a stackable router that provides scalable and cost-effective solutions for enterprise network centers. Note: the ASN Router is not RoHS compliant and therefore cannot be deployed in countries where the RoHS restriction regulation applies. To implement the local X.25 router solution with RoHS compliant equipment, contact your local Nortel engineering representatives. The ASN is a multi-protocol router that provides network connectivity through the following net modules (I/O modules):
10Base-T Dual Ethernet
100Base-T Ethernet
Dual Sync
Dual Sync/ISDN BRI
Quad BRI
Single-mode, Multimode, and Hybrid FDDI
Dual Token Ring
MCE1, MCT1
Hardware compression (for use with the Dual Sync and Dual Sync/ISDN BRI net modules)
The ASN has four positions in which you can install net modules.
Table 8-4: ASN router specifications
Core features:
Single-board module based on the MC68040 microprocessor
8 MB, 16 MB or 32 MB of DRAM
The ASN also supports an 8 MB PCMCIA flash memory card for non-volatile software and configuration storage
Interfaces:
Ethernet interface (15-pin AUI connector or 8-pin modular)
Token Ring interface (9-pin MAU connector)
FDDI (two MIC, one RJ-11 optical bypass)
Synchronous interface (44- and 50-pin connector to RS-422, RS-232, V.35, X.21 adapter cable)
ISDN BRI and ISDN PRI
100BASE-T interface (40-pin MII connector or 8-pin modular)
MCT1 (RJ-48C, 15-pin, DB connector)
MCE1 (BNC, 75 ohm, 8-pin modular, 120 ohm)

The ASN router exists in three base configurations:
Four-slot ASN chassis with a single 110/220 V AC power supply
Four-slot ASN chassis with redundant 110/220 V AC power supplies
Four-slot ASN chassis with redundant 48 V DC power supplies



You can stack ASNs, which means you can connect as many as four of them together to function as one logical router. Network management software treats all nodes in a stack as a single router, and considers each node a slot. You can use either of the following net modules to connect as many as four ASNs together in a stack:
Stack Packet Exchange (SPEX) net module
Stack Packet Exchange Hot-Swap (SPEX-HS) net module
The ASN offers dynamic random access memory (DRAM) configurations of 8, 16, and 32 megabytes (MB), as well as an optional Fast Packet Cache that enhances performance. An optional high-power redundant power supply unit (HRPSU) is also available. The HRPSU is an external power supply that you can connect to an ASN for continuous operation in the event of an internal power supply failure.
Figure 8-3: ASN - 4 ASN stacks



Figure 8-4: ASN front and rear view

Router software
Bay Networks ships the software image and a default configuration file on the PCMCIA flash memory cards. Router initialization using the file system stored on the PCMCIA card is called local booting. These files may also be downloaded to the router using a Bootstrap Protocol (BootP) or Trivial File Transfer Protocol (TFTP) device; this procedure is called network booting. In both cases it is strongly recommended to connect a console to the ASN, so that commands can be issued to the router and messages viewed.

Router configuration
For the initial configuration, the router can be accessed through a directly attached terminal or PC running terminal emulation software, or through a Telnet session. The ASN router uses a software application called Site Manager for router configuration and maintenance. It uses a graphical user interface (GUI) to make router configuration and management tasks easier. This application runs on another machine in the system. The router configuration consists in defining the following parameters:
Hostname and addresses (IP and X.25)
X.25 configuration (Packet-Level and LAPB protocols)
Remote X.25 access, with the mapping between IP addresses and X.25 addresses
Routing table

Router supervision
The ASN router offers an SNMP-based management solution. The SNMP agent is included in the router software.



A supervision and configuration option is the Technician Interface. This is a terminal (TTY-compatible) tool that supports SNMP-based access to the Management Information Base (MIB), displays the event log, and supports file system management.

Connection cables to the ASN router
The drawings below show how the OMC-R and the BSC2G are physically connected to the ASN router.
Figure 8-5: OMC-R to ASN connection cables (SF V880 HSI connectors to ASN QuadSync Net Module)
Tag A: RS-422 cable, L=3M (NTQQ0206) - Male DB37 to Male DB37
Tag B: 44-pin to F RS-422 DCE, L=4.57M (15 ft) (A0018042) - Female DB37 to Male DB44
Tag C: 50-pin to 44-pin cable adapter, L=3M (7947) - Female DB44 to Male DB50

Figure 8-6: BSC to ASN connection cables (BSC2G to ASN QuadSync Net Module)
Tag C: 50-pin to 44-pin cable adapter, L=3M (7947) - Female DB44 to Male DB50
Tag D: 44-pin to RS-232-C Standard, L=4.57M (15 ft) (7833) - Male DB44 to Male DB25

Terminal Server
For the OMC-R remote console and remote access, the following terminal server and rackmount modem system are supported but no longer proposed within the Nortel sellable offer. The set composed of the modem shelf and modems is used for the RACE connections through the PSTN using the V.34 protocol, which allows a 28.8 kbps speed. With the OMC-R, up to five modems can be supported.
Xyplex Maxserver 1620-014
The Xyplex Maxserver 1620-014 is a piece of equipment that acts as an intelligent bridge between 20 asynchronous serial ports and the Ethernet network. It allows both local and remote access via dial-up for a variety of devices.
Multitech CC1600-Series rackmount modem system
The scalable CC1600-Series 19-inch rackmountable card cage consolidates up to 16 V.34/33.6K modem cards for dial-up or 2- and 4-wire leased line service, providing dial-in remote access, dial-in/dial-out datacomm or dial-out faxing.
The following sections provide specifications and recommendations for this terminal server.

Xyplex Maxserver 1620


The terminal server gives the RACE PC clients access to the OMC-R LAN through the PSTN. It is also used for Line Mode access to the OMC-R server. It concentrates the asynchronous serial links coming from:
Modems used for connecting RACE PC clients via the PSTN; the ports numbered 2 to 6 are reserved for communication with RACE PC clients.
OMC-R server remote access (optional) using the Line Mode feature; serial port A of each OMC-R server is used (the Alarm Relay Box cannot be used in that case).
Firewall (optional); port number 1 is used in this case.
The Xyplex 1620 is connected to the OMC-R LAN with its 10Base-T interface.
Table 8-5: Maxserver 1620-014 specifications
Core features: 4 MB of DRAM; PCMCIA slot for a 2 MB or 4 MB flash memory card; Run, LAN, Console, Port Status and Memory Card Status diagnostic LEDs
Interfaces: 2 Ethernet connections: Ethernet/IEEE 802.3 AUI (10Base-5) and RJ-45 (10Base-T); 20 8-wire RJ-45 ports for serial RS-423/232 connections, 50 bps to 115.2 kbps
Protocols: network protocols: IP/IPX, LAT, TN3270 (optional), and IPX RIP; access protocols: Telnet, Rlogin, PPP, SLIP, CSLIP, XRemote, ARAP (optional), DECnet multi-session (optional)

The Xyplex Maxserver is usually bundled with a terminal board or distribution panel enabling adaptation from RJ45 connectors to 25-pin Sub-D connectors cabled as DTE.
Figure 8-7: Xyplex Maxserver 1620-014 views

Terminal server software
The software is downloaded by the terminal server at boot time using the standardized RARP + TFTP protocols. If the Line Mode Manager is used, the boot machine will be a non-OMC-R machine (i.e. one of the supervisory workstations).
Terminal server configuration
The firewall connection is made on port 1. The ports used for modem connections are numbers 2 to 6. A terminal console can be connected to a Maxserver 1620 port (except ports 1 to 6) via the distribution panel, to check the configuration or for maintenance purposes. The connection is made by linking the DCE port of the VT to the Xyplex by means of a point-to-point cable. When the Line Mode Manager feature is used, serial port A of each server is connected to one Xyplex port (except ports 1 to 6) and the OMC-R servers are accessed from any other workstation in the network.


Terminal Server supervision
All terminal server parameters can be observed and changed via any SNMP-based management system.

Multitech CC1600-Series rackmount modem system


An operational CC1600-Series system requires one chassis, one power supply (two are recommended for redundancy), and up to 16 Multi-Tech modems providing V.34/33.6K dial-in/dial-out services over the PSTN.
Table 8-6: CC1600-Series rackmount modem system specifications
Card cage: Power supply: 70 W fully loaded; 5 V DC & 15 V DC regulated; 1 cooling fan. Connectors: one 3-prong grounded receptacle, 16 DB25 data ports, and 16 RJ-11 jacks
Modems: MT2834BR 33.6K Rack Modem (d/u, 2wLL). Data standards: Enhanced V.34, V.34, V.32bis, V.32, V.22bis. Error correction: V.42. Data compression: MNP 5 and V.42bis. DTE interface: RS-232C/D

RS-232 point-to-point cables must be provisioned to connect the MultiTech shelf system (male DB25 connector) to the Xyplex Maxserver (female DB25 connector) on ports 2 to 6.
Figure 8-8: CC1600-series rackmount modem system view

Modem configuration
auto baud disable - parity none - flow xon - speed 9600 bit/s
modem control enable - access remote - dsrlogout disable - dtrwait forring
inactivity logout enabled - idle time-out 15 min


Console Server
The LX-8020S-102 is a secure standalone communication server designed for applications that require secure console or serial port management with high reliability and/or dual power. It includes comprehensive security features such as per-port access protection, RADIUS, Secure Shell v2.0, PPP PAP/CHAP, PPP dial-back, an on-board database, and menus. The LX-8020S-102 console management solution enables centrally located or remote personnel to connect to the console or craft ports of any network element or server. This serial connection allows administrators to manage and configure remote network devices and servers, as well as perform software upgrades, as if attached locally. The LX-8020S-102 is available with dual AC power supplies and provides 20 RS-232 DTE RJ45 serial ports.
Table 8-7: LX-8020S-102 console server specifications
Item: Description of LX-8020S-102AC
Processor/Speeds: 132 MHz RISC system board processor with integral encryption coprocessor
Memory: 16 MB Flash, 128 MB SDRAM
Serial Line Speed (20): DTE RS-232 - RJ-45 (up to 230 kbps, default = 9600 bps)
Ethernet Interface (2): 10/100 auto-sensing/MDI/MDIX
Height: 4.3 cm (1.71 in)
Depth: 25.4 cm (10.0 in)
Width: 44.4 cm (17.5 in) - fits in a 19-inch rack
Weight: LX-8000S w/modem - 3.58 kg (7.9 lbs)
Environment: 5% to 85% humidity long term, non-condensing; operating temperature 0 to 40°C (32 to 104°F), long term -5 to 50°C
Power Requirements: AC 100 - 240 VAC, 50 - 60 Hz, 0.5 A; Dual AC Supply Unit: 38 W (129 BTU)
Control Output Ratings: RTS/DTR: 5.0 V @ 1.6 mA (nominal), 2.5 V @ 7.6 mA (absolute maximum)
Real Time Clock Battery: lithium battery, capacity 48 mAh; power-down shelf-life > 3 years at 20°C

Figure 8-9: LX-8020S-102 console server front view

Figure 8-10: LX-8020S-102 console server back view


Alarm Relay Box


This piece of equipment converts digital information (coming from a host system via a serial link) into electrical signals intended to control electrical or electronic equipment, such as buzzers or lights. The most important BSS alarms notified to the OMC-R can thus be sent to an alarm panel (provided either by the customer or the NMS) via the relay box: the relay box acts as a controller for a visual and audible alarm panel when selected equipment anomalies are detected by the BSS.

For the OMC-R solution, the following Alarm Relay Box is supported but is no longer proposed within the Nortel sellable offer: Lorin V02 Satellite Systems 64 Relay Model.

The V02-64R is equipped with 64 relays, two RS232 interfaces and 64 LEDs on the front panel of the box indicating the relay status. The V02-64R is connected to the OMC-R server via the RS232 serial interface and connected to the alarm panel using dry electrical contacts, as shown below. The alarm panel and the V02-64R can be remote from the OMC-R site; in that case, V24 modems can be used on the RS232 interfaces. In a Dual Server configuration, the V02-64R of the active OMC-R server and the V02-64R of the backup OMC-R server must be connected to the same alarm panel.
Figure 8-11: Alarm panel and relay box connection to OMC-R
The figure shows the SF V8x0 OMC-R server connected to the V02-64R over an RS232 serial link; the relay box in turn drives the alarm system via the external alarm cable (dry contacts).

The following section provides specifications and recommendations for the V02-64R.

Lorin V02-64R
The model used in the OMC-R architecture is a 64 relay box (64 logical outputs) that can control 64 alarms in an alarm panel.
Figure 8-12: V02-64R rear view

Table 8-8: Lorin V02-64R specifications
64 logical outputs through 4 SubD37 female connectors (4 x 16-relay terminal blocks); outputs are organized in 8 groups of 8 relays; each output can be an optocoupled open collector or a mechanical relay
2 RS232 serial interfaces (SubD25 female connectors)
19-inch (2 U) standard rack (CE and UL/CSA qualification)
Power requirements: 110 - 240 VAC, 0.9 A max
Relay output characteristics: admissible current 3 A; admissible voltage 250 VDC; switching response time <= 10 ms; cut-off capability 750 VA

Software
The software driver managing the relay box is delivered with the OMC-R software application load.

Connection to relays
The equipment alarms are assigned to the relays by means of OMC-R configuration commands:
Relay 1: immediate alarms (all BSCs and BTSs linked to the OMC-R).
Relay 2: deferred alarms (all BSCs and BTSs linked to the OMC-R).
Relay 3: not used.
Relays 4 to 33: immediate alarms, mapped one per BSC. This mapping is dynamically defined in a configuration file on the OMC-R.
Relays 34 to 64: not controlled by the OMC-R.
Figure 8-13: Relay assignments

The figure illustrates this mapping: relays 1 and 2 carry the all-BSS immediate and deferred alarms, relay 3 is not used, relays 4 to 33 carry the per-BSC immediate alarms, and relays 34 to 64 are not used.

The default position of the relays is "open". The OMC-R holds the relays in the "closed" state as long as there is no alarm, using the "Work" position only, as described in the relay connectors pinout table below. If the V02-64R is switched off, all relays immediately open; after power-up, and until a command is received from the host, the relays remain open. This behaviour is fail-safe: a power loss on the relay box, or the loss of the controlling host, is therefore signalled on the alarm panel in the same way as an alarm.
Table 8-9: Relay connectors pinout Pin nbr 1 2 3 4 5 6 7 8 9 10 11 12 13 Connector 1 (1-16) Output Common Common Common Common Common Common Common Common Common Common Common Common Common Relay nbr 16 15 14 13 12 11 10 9 8 7 6 5 4 Connector 2 (17-32) Output Common Common Common Common Common Common Common Common Common Common Common Common Common Relay nbr 32 31 30 29 28 27 26 25 24 23 22 21 20 Connector 3 (33-48) Output Common Common Common Common Common Common Common Common Common Common Common Common Common Relay nbr 48 47 46 45 44 43 42 41 40 39 38 37 36 Connector 4 (49-64) Output Common Common Common Common Common Common Common Common Common Common Common Common Common Relay nbr 64 63 62 61 60 59 58 57 56 55 54 53 52



14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 Common Common Common Idle Idle N/A Work Work Work Work Work Work Work Work Work Work Work Work Work Work Work Work Idle Idle 16 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 13 9 3 2 1 1 5 Common Common Common Idle Idle N/A Work Work Work Work Work Work Work Work Work Work Work Work Work Work Work Work Idle Idle 32 31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 29 25 19 18 17 17 21 Common Common Common Idle Idle N/A Work Work Work Work Work Work Work Work Work Work Work Work Work Work Work Work Idle Idle 48 47 46 45 44 43 42 41 40 39 38 37 36 35 34 33 45 41 35 34 33 33 37 Common Common Common Idle Idle N/A Work Work Work Work Work Work Work Work Work Work Work Work Work Work Work Work Idle Idle 64 63 62 61 60 59 58 57 56 55 54 53 52 51 50 49 61 57 51 50 49 49 53

Connection to the OMC-R
The serial cable connecting the OMC-R to the V02-64R is a straight-through 25-pin RS232 cable. The V02-64R RS232 connector has a DCE pinout configuration.
A DIP switch selector defines the serial link transmission parameters. The default parameters are the following: 9600 baud, even parity, 8 data bits + 1 stop bit, RTS/CTS flow control off.
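The OMC-R server serial port must match these settings. As a minimal sketch only (the OMC-R application load normally configures the port itself), the parameters could be checked or applied on a Solaris host as follows; the device path /dev/term/a for serial port A is an assumption based on standard Solaris naming.

    # Display the current settings of serial port A
    stty -a < /dev/term/a
    # Apply the V02-64R defaults: 9600 baud, 8 data bits, even parity, 1 stop bit, no RTS/CTS
    stty 9600 cs8 parenb -parodd -cstopb -crtscts < /dev/term/a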


RS-422/RS232 Interface Converter


This interface converter allows the connection of equipment with incompatible electrical and mechanical interfaces: one interface is RS-422 (with an RS-449 connector) and the other is RS-232/V.24. Because the OMC-R and the BSC2G do not expose compatible interfaces, an RS-232/RS-422 converter is necessary to connect a BSC directly to an OMC-R. The RS-232/RS-422 interface converter is supported but is no longer proposed within the Nortel sellable offer; it can alternatively be supplied by Black Box. It is designed to provide bi-directional synchronous or asynchronous conversion of all commonly used RS-232 (V.24) and RS-422 (V.11) signals. The unit is designed to operate with one port configured as DTE and the other port as DCE. The maximum transmission rate is approximately 64 kbps in synchronous applications.

Figure 8-14: Direct link between the OMC-R and the BSC2G without modem

The figure shows the direct OMC/BSC link without modems: the SF V880 HSI connector (port 0) is linked through a DB25-DB25 V24 cable to the RS-232/V.24 side of the Black Box converter kit, and the RS-422/V.11 side of the converter is linked to the BSC through DB37 connectors. The cables involved are the NTQQ0206 (3 m) cable and a standard or custom-made cable of up to 50 feet.


Figure 8-15: Direct link between the OMC-R and the BSC2G using modem
OMC / BSC direct link using modem
The figure shows the same link extended with a micro modem kit: the SF V880 HSI connector (port 0) reaches the RS-232/V.24 side of the Black Box converter kit through two modems interconnected by a 4-wire line (DB25 connections on both modem sides), and the RS-422/V.11 side of the converter is connected to the BSC through DB37 connectors. The cables involved are NTQQ0206 (3 m), A0696643 (3 m), A0730409 (3 m) and A0696646 (100 m).


Appendix A: Software Load line-up


Third Party software Load line-up

OEM Software for OMC-R server
The following OEM software is installed on an OMC-R server.
Table 9-1: OEM software for OMC-R servers
SF V8x0/T5140 server:
Solaris 10 (1)
Sun HSI/P card driver V3.0.1
SunLink CMIP 9.0 (2)
SunLink FTAM 9.0 (2)
Acrobat Reader 7.0
Sybase ASE 15
IlogViews libraries 5.0
Java Help 20.02
MDM 16.2 (3)
Xyplex binary file 8.0.1


To check the OMC-R system V18 release software load line-up, refer to V18.0 External Release Definition, PE/BSS/DJD/022189.

(1) The Solaris media kit is to be ordered along with the hardware. Refer to the OMC-R Modelled Offer Provisioning Guide, to be defined.
(2) The SunLink OSI software product requires a licence which is bundled with the order code of the OMC-R server. Refer to the OMC-R Modelled Offer Provisioning Guide, to be defined.
(3) The MDM software is installed by default on the SF V8x0 based OMC-R server, whether or not the GSM BSS network includes GPRS Access elements (PCUSN). The MDM software requires a license which needs to be ordered for royalty tracking purposes. Refer to the OMC-R Modelled Offer Provisioning Guide, to be defined.

OEM Software for Client workstation


The following OEM software is installed on an OMC-R client workstation.



Table 9-2: OEM software for Client workstation
Sun Blade 150 / Sun Blade 1500:
Solaris 10 (1)
Acrobat Reader 5
IlogViews libraries 5.0
Lexmark Printer Driver 5.1.2
MDM 16.2 (2)
Sun http server for RACE 2.01
Sun http server config 1.0 (3)
Ultra 45:
Solaris 10 (1)

(1) The Solaris media kit is to be ordered along with the hardware. Refer to the OMC-R Modelled Offer Provisioning Guide, to be defined.
(2) The MDM software is required on the workstation only if the workstation is used as a dedicated PCU OAM server. The MDM software requires a license which needs to be ordered for royalty tracking purposes. Refer to the [R12] OMC-R Modelled Offer Provisioning Guide, PE/PFO/APP/5914.
(3) The Sun http server is required on the workstation only if the workstation serves as a RACE server.


Appendix B: OMC-R LAN Engineering


This section provides the engineering considerations for the OMC-R LAN.


IP Addresses planning
The table below details the number of Ethernet ports and IP addresses required for each piece of equipment on the OMC-R LAN.
Figure 10-1: IP Addresses planning
SF V8x0 Integrated OMC-R server: 3 IP addresses, 2 Ethernet ports. 3 IP addresses are required because of the IPMP implementation.
Dual T3 Storage Array: 2 IP addresses, 2 Ethernet ports. One IP address per T3.
Client workstation: 1 IP address, 1 Ethernet port.
RACE Client: 1 IP address, 1 Ethernet port. PC-based equipment.
Terminal Server or Console Server: 1 IP address, 1 Ethernet port.
Printer: 1 IP address, 1 Ethernet port.
PCUOAM Server: 1 IP address, 1 Ethernet port. Applicable whenever the PCUOAM server is not included with the integrated OMC-R server.
SDO Server: 1 IP address, 1 Ethernet port. Applicable whenever the SDO server is not included with the integrated OMC-R server.



BSC3000: 4 IP addresses, 4 Ethernet ports. Applicable whenever it is collocated with the OMC-R LAN.
TCU3000: (2) IP addresses, 2 Ethernet ports. Applicable whenever it is collocated with the OMC-R LAN. 2 IP addresses should be reserved for the CEM boards at commissioning; a permanent CEM connection to the IP network is not recommended.
PCUSN: 1 shared IP address, 2 Ethernet ports. Applicable whenever it is collocated with the OMC-R LAN. The PCUSN hosts 2 Control Processors (CP), one active and one hot-standby, sharing the same IP address. A hub connected to both CP Ethernet ports is mandatory in order to keep the OAM connection alive; IP and MAC addresses are automatically propagated from the failed CP to the standby CP.


IPMP Implementation
This section gives the IPMP implementation details supported on the Integrated OMC-R server. The OMC-R server uses two PCI Quad Fast/Gigabit Ethernet (QFE/QGE) adapter cards in a 1+1 redundancy arrangement, giving 8 physical ports. Implementing IPMP where traffic is handled on a single subnet (1 group, 2 interfaces, 3 IP addresses), the schema is as follows:
QFE/QGE Card 1:
qfe0/qge0 = OMC-R Server - OMC-R Clients & BSS NE
<IpAddress0> = the data IP address of the qfe0 interface
<IpAddress0T> = the test IP address of the qfe0 interface
QFE/QGE Card 2 (redundant):
qfe4/qge4 = OMC-R Server - OMC-R Clients & BSS NE
<IpAddress4T> = the test IP address of the qfe4 interface
A configuration sketch is given after Figure 10-2 below.
Figure 10-2: SF V8x0 IPMP based interface configuration

The figure shows the two network interface boards of the SF V8x0 (Ethernet 100/1000 Mb/s): the IPMP data and test interfaces described above are carried on port 0 (board 1) and port 4 (board 2); the remaining ports are not used.
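A minimal configuration sketch follows, assuming Solaris 10 probe-based IPMP with a single group; the group name omcr_ipmp is an arbitrary example, the address placeholders are those defined above, and the actual configuration is delivered by the OMC-R installation procedure (on the T5140 the interface names differ from qfe/qge).

    # /etc/hostname.qfe0 - data address plus non-failover test address on the first card
    <IpAddress0> netmask + broadcast + group omcr_ipmp up \
    addif <IpAddress0T> deprecated -failover netmask + broadcast + up

    # /etc/hostname.qfe4 - non-failover test address only on the redundant card
    <IpAddress4T> netmask + broadcast + deprecated -failover group omcr_ipmp up

    # Once the interfaces are plumbed, a failover can be exercised and reverted with:
    if_mpadm -d qfe0
    if_mpadm -r qfe0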



Figure 10-3 T5140 IPMP based interface configuration

The figure shows the two network interface boards of the T5140 server (Ethernet 100/1000 Mb/s): on each board, one port (port 0 / port 4) carries the IPMP data and test interfaces, two ports (ports 1-2 / ports 5-6) connect to the ST2510 disk array, and the last port (port 3 / port 7) is not used.


T5140 IP Address Planning


The IP address planning for the T5140 server and the ST2510 disk array is given in the tables below.
Table 10-1: IP Addresses planning for T5140 Server
External network: 1 IP address - access to ILOM (via SSH)
External network: 3 IP addresses - IPMP-related IP addresses
Internal network (Note 1): 4 IP addresses - connection to the ST2510 Disk Array iSCSI ports

Table 10-2: IP Addresses planning for ST2510 Disk Array
External network: 2 IP addresses - CAM ports connection to the server
Internal network (Note 1): 4 IP addresses - connection to the T5140 server iSCSI ports

Note 1: The internal network is configured automatically by CIUS on the server and the disk array. It provides direct connections between the server and the disk array with a set of IP addresses that must not be accessible from the outside, unless a special routing table configuration is put in place on the server. This network must have a subnet prefix different from any other subnet accessible from the server. The customer only has to provide this prefix, as the subnet suffixes are assigned automatically to ports on both the T5140 server and the ST2510 Disk Array, as shown in Table 10-3, to interconnect both devices as shown in Figure 10-4.



Table 10-3: Internal subnet address assignment
T5140 port 1: xxx.xxx.xxx.21 - ST2510 port A/1: xxx.xxx.xxx.31
T5140 port 2: xxx.xxx.xxx.22 - ST2510 port B/1: xxx.xxx.xxx.32
T5140 port 5: xxx.xxx.xxx.25 - ST2510 port A/2: xxx.xxx.xxx.35
T5140 port 6: xxx.xxx.xxx.26 - ST2510 port B/2: xxx.xxx.xxx.36
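As a purely illustrative example, if the customer chose 192.168.250.0/24 as the internal subnet prefix (any private prefix not otherwise reachable from the server can be used), the ports would be assigned as follows:

    T5140 port 1: 192.168.250.21    ST2510 port A/1: 192.168.250.31
    T5140 port 2: 192.168.250.22    ST2510 port B/1: 192.168.250.32
    T5140 port 5: 192.168.250.25    ST2510 port A/2: 192.168.250.35
    T5140 port 6: 192.168.250.26    ST2510 port B/2: 192.168.250.36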

Figure 10-4 Interconnection of the T5140 and ST2510 ports


List of Terms
ASCII
American Standard Code for Information Interchange


CD
Compact Disc

CDROM
Compact Disc Read-Only Memory

CEM


Core Element Manager

CPU
Central Processing Unit

DCN
Data Communication Network

DHCP
Dynamic Host Configuration Protocol

DNS
Domain Name Server

FTP
File Transfer Protocol

GB
GigaByte

GPRS
General Packet Radio Service

GSM
Global System for Mobile communications

GUI
Graphical User Interface

IP
Internet Protocol

IPMP
Internet Protocol Multi Pathing

KB


KiloByte

Mb
Megabit

MB
MegaByte

MDM
Multiservice Data Manager

MDP
Management Data Provider

MHz
MegaHertz

MSC
Mobile Switching Center

NE
Network Element

NFS
Network Files System

NMS
Network Management System

NTP
Nortel Technical Publication Network Time Protocol (context will differentiate the 2 variants)

OAM
Operation, Administration and Maintenance

OSS
Operations Support Systems

PC
Personal Computer

PEC
Product Engineering Code

RAM
Random Access Memory


RTHP
Recommended Throughput computed with Highest values of dimensioning Parameters

SF
Sun Fire

sec.
Second

SGSN
Serving GPRS Support Node

SIG
SS7/IP Gateway

SNMP
Simple Network Management Protocol

Specs
Specifications

TCP/IP
Transmission Control Protocol / Internet Protocol


GSM BSS 850/900/1800/1900 V18 OMC-R Engineering Rules


Copyright 2008 Nortel, All Rights Reserved. The information contained herein is the property of Nortel and is strictly confidential. Except as expressly authorized in writing by Nortel, the holder shall keep all information contained herein confidential, shall disclose it only to its employees with a need to know, and shall protect it, in whole or in part, from disclosure and dissemination to third parties with the same degree of care it uses to protect its own confidential information, but with no less than reasonable care. Except as expressly authorized in writing by Nortel, the holder is granted no rights to use the information contained herein. Nortel, the Nortel logo, the Globemark, How the World Shares Ideas and Preside are trademarks of Nortel.

PE/DCL/DD/014282 Standard 04.02 October 2008

