What's inside...
- Product Overview
- OMC-R Server Engineering Considerations
- Client, X-terminal and RACE Engineering Considerations
- Preventive Backup and Disaster Recovery Plan
- OMC-R Solution Architecture and Interfaces
- Bandwidth Requirements
- Hardware Specifications
- DCN Hardware Specifications & Recommendations
- Appendix A: Software Load line-up
- Appendix B: OMC-R LAN Engineering
Copyright 2008 Nortel, All Rights Reserved. The information contained herein is the property of Nortel and is strictly confidential. Except as expressly authorized in writing by Nortel, the holder shall keep all information contained herein confidential, shall disclose the information only to its employees with a need to know, and shall protect the information, in whole or in part, from disclosure and dissemination to third parties with the same degree of care it uses to protect its own confidential information, but with no less than reasonable care. Except as expressly authorized in writing by Nortel, the holder is granted no rights to use the information contained herein. Nortel, the Nortel logo and the Globemark are trademarks of Nortel. SOLARIS is a trademark of Sun Microsystems Inc. UNIX is a trademark licensed exclusively through X/Open Company Ltd. NIMS-PrOptima is a trademark of Mycom International.
Printed in Canada
Publication history
October 2008, Standard version 04.02: Version after internal IPOR 390810 review.

September 2008, Preliminary version 04.01: Applicable to GSM BSS V18 release.

June 2008, Standard version 03.04: Removed all references to the Ultra 5 workstation, as it is not supported in the V17 release.

March 2008, Standard version 03.03: Updated to Standard version. Added the 1800 MHz CPU to the SF V890 configuration.

September 2007, Preliminary version 03.02: Removed the WQA Server and Application Engineering guidelines section, as WQA is offered only as part of a services offering.

July 2007, Preliminary version 03.01: Applicable to GSM BSS V17 release. BR+21 compliant.

October 2006, Preliminary version 02.02: Added the WQA Server and Application Engineering guidelines section.

May 2006, Preliminary version 02.01: Applicable to GSM BSS V16 release.

March 2006, Preliminary version 01.05: Added engineering rules for concurrent activation of Call Path Trace, Call Drop Analysis and Radio Measurement Distribution.

October 2005, Preliminary version 01.04: Corrected and updated the list of OEM software installed on an OMC-R server in the "Appendix A: Software Load line-up" section. Rewrote Q3 on Ethernet link via TCP/IP in the "OMC-R Solution Architecture and Interfaces" section. Added a note about the need to provision 2 additional IP addresses for the BSCe3 CEM boards in the "OMC-R Solution Architecture and Interfaces" section.

July 2005, Standard version 01.03: Precised the nominal hardware configuration with regard to OMC-R releases in the "Hardware Specifications & Internal Redundancy Strategy" section.

June 2005, Preliminary version 01.02: Added SIE OMC-R OMC Administration, PE/OMC/DD/000182, to the Reference documents section. Clarified the OMC-R Capacity Configuration naming and pointed to the OMC-R capacity parameters in the "OMC-R Server Engineering Considerations" section. Clarified and updated the dimensioning of Call Trace and Call Path Trace in the "OMC-R Server Engineering Considerations" section. Changed the number of SDOs supported back to 5, the contractually agreed value, in "NIMS-PrOptima for GSM BSS Server Engineering Considerations". Pulled out the NTP time synchronization section and removed the information about the data availability KPI in "NIMS-PrOptima for GSM BSS Server Engineering Considerations". Clarified the maximum number of simultaneous Graphic MMIs in the "Client, X-terminal and RACE Engineering Considerations" section. Clarified the possibility of sharing a single switch for multiple PCUSNs locally connected to the OMC-R server in the "OMC-R Solution Architecture and Interfaces" section.

March 2005, Preliminary version 01.01.
OMC-R
Table of Contents

About this document  v
    Audience for this document  vii
    Scope  vii
    What's new in this release  vii
Reference documents  ix
    References  ix
Product Overview  1-1
    OMC-R interconnection with the GSM/GPRS network  1-1
    OMC-R Architecture and Functions  1-2
    Hardware Platform  1-4
    New Items in the Release  1-4
OMC-R Server Engineering Considerations  2-1
Client, X-terminal and RACE Engineering Considerations  3-1
Preventive Backup and Disaster Recovery Plan  4-1
OMC-R Solution Architecture and Interfaces  5-1
    OMC-R PCUSN Connection  5-5
    OMC-R BSC 3000 Interface  5-8
    OMC-R to NMS interface  5-12
Bandwidth Requirements  6-1
    OMC-R LAN  6-1
    OMC-R - Client bandwidth requirements  6-1
    STATX Workstation - X-terminal bandwidth requirements  6-1
    OMC to BSC bandwidth requirements  6-2
    BSC to OMC bandwidth requirements  6-3
    OMC-NMS bandwidth requirements  6-4
    OMC - SDO bandwidth requirements  6-4
    PCUOAM - OMC bandwidth requirements  6-5
Hardware Specifications  7-1
    Supported configurations  7-1
    Detailed hardware specifications  7-3
    Sun Server Hardware  7-8
DCN Hardware Specifications & Recommendations  8-1
Appendix A: Software Load line-up  9-1
Appendix B: OMC-R LAN Engineering  10-1
List of Terms  11-1
Scope
This document applies to version 18 of the OMC-R, which manages the V18 BSS release and, temporarily, the previous BSS releases to allow release upgrade transitions. This version of the OMC-R supports the introduction of the new T5140 server with the ST2510 disk array device.
Reference documents
The documents listed below contain all references used herein. Additional updates and corrections can be found in the OMC-R Release Notes.
References
[R1] V18.0 Release Documentation List, PE/DCL/APP/019983
[R2] V18.0 Feature Planning Guide, PE/BSS/APP/019562
[R3] V18.0 Release Reference Book, PE/SYS/DPL/019036
[R4] V18.0 External Release Definition, PE/BSS/DJD/022189
[R5] OMC-R Customer Product Overview, PE/OMC/DD/000170
[R6] NIMS-PrOptima Customer Product Overview, PRO/MKT/SYD/NOR/008
[R7] SFS OMC-R Data Server, PE/OMC/DD/000103
[R8] OMC-R Product Catalogue, PE/OMC/INF/0066
[R9] OMC-R Modelled Offer Provisioning Guide, to be defined
[R10] W-NMS OAM Engineering Guide, NTP 450-3101-638
[R11] Centralized Installation and Upgrade Service, PE/SYS/DD/006464
[R12] SIE OMC-R Web Access: RACE, PE/OMP/DD/0045
[R13] SIE OMC-R Data Server, PE/OMC/DD/102
[R14] SIE OMC-R OMC Administration, PE/OMC/DD/000182
[R15] OMC-R Customization Parameter Notebook, DS/OMC/APP/000019
[R16] GSM OAM System Specification, Firewall Support Information, PE/OMC/DD/004749
[R17] OMC-R OEM Equipment Installation Procedure, DS/OMC/APP/000001
[R18] OMC-R Version Upgrade Procedure, DS/OMC/APP/000002
[R19] OMC-R Software Installation Procedure, DS/OMC/APP/000003
[R20] SDO Installation Procedure, DS/OMC/APP/000008
[R21] OMC-R Preventive Maintenance Backup, DS/OMC/APP/000016
[R22] OMC-R System Global Restoration, DS/OMC/APP/000017
[R23] MultiOMC Workstation Configuration, DS/OMC/APP/000023
[R24] OMC-R Maintenance Checks, DS/OMC/APP/000024
[R25] OMC-R Stations Moving, DS/OMC/APP/000032
[R26] OMC-R Multi-MMI Display Configuration, DS/OMC/APP/000033
[R27] OMC-R Capacity Increase Procedure, DS/OMC/APP/0000037
[R28] OMC-R Monobox Preventive Maintenance Backup, DS/OMC/APP/000043
[R29] OMC-R Monobox System Global Restoration, DS/OMC/APP/000044
[R30] Cold Redundancy / Disaster Plan Procedure, DS/OMC/APP/008020
OMC-R
Product Overview
The OMC-R is the Operation and Maintenance Center of the Nortel GSM Base Station Subsystem (BSS). Located at a remote site, it centralizes the operations and maintenance functions for the network radio subsystem equipment (BSC, TCU, BTSs) attached to it. The BTSs and TCUs are managed from the OMC-R through the BSC to which they are connected. When GPRS service is deployed, the OMC-R also provides centralized management of the Nortel-manufactured PCUs. The OMC-S/CEM or W-NMS OAM solutions (not described in this documentation) manage the Nortel GSM Network Switching Subsystem (NSS). The OMC-D or W-NMS OAM solutions (not described in this documentation) manage the Nortel General Packet Radio Service core subsystem (GPRS core). The CT2000 (not described in this documentation) offers centralized configuration of the entire Nortel Networks BSS, including multiple OMC-Rs, as well as a centralized view of all BSS network parameters.
[Figure 1-1: OMC-R interconnection with the GSM/GPRS network: the BSS subsystem (BTS, BSC, TCU over the Abis and Ater interfaces, PCUSN over Agprs) attached to the MSC (A interface) and to the GPRS core (SGSN over Gb, GGSN over Gn), with the OMC-R server, WPS and CT2000 reached over WAN/LAN]
Architecture
The OMC-R is composed of two logical entities which are part of the same physical equipment:
- One Mediation Device (MD) function, to manage the BSC and PCUSN network elements. The MD handles mediation between the standard Q3 interface and the OMC-R/BSC and OMC-R/PCUSN interfaces: it converts Q3 requests into OMC-R/BSC and OMC-R/PCUSN interface requests, and BSS spontaneous event reports into Q3 notifications.
- One Manager (MNGR) function, to interface with the OMC workstations.
The Q3 interface is used as the internal standard interface, as specified in the TMN model. It enables communication between the MD-R and a remote NMS (also known as an External Manager). The MNGR and MD-R also communicate internally through the Q3 interface.
Figure 1-2: OMC-R software architecture
Fault management
This function manages and stores the flow of BSS information concerning BSS operational anomalies and breakdowns, and the associated return-to-work procedures. The OMC-R provides this information to the operating staff through the HMI.

Performance (and Observation) management
This function handles the Call Monitoring feature and all the collecting and reporting functions for the performance counters.

Security management
For OMC-R external functions, security management refers to access security management for the OMC-R operating staff, not to management of GSM network security from the OMC-R.
Hardware Platform
The OMC-R system is composed of workstations and servers. It is made up of commercial third-party equipment (computers, communication equipment, etc.) that runs industry-standard software and proprietary software developed by Nortel Networks. The Network Management functions are hosted on Sun servers: the OMC-R server hosts the Network Management platform and the Fault Management, Configuration Management and Performance Collection functions. These servers are Sun servers with or without external storage arrays. The client workstations supported for management of the wireless network are Sun workstations.
A major feature of the V18 release is Abis over IP: this feature consists of enabling packet-based backhaul transmission, as an alternative to TDM-based E1/T1 links, on the BSC-BTS Abis interface. All information on the Abis over IP feature and its engineering impact is covered in a dedicated Abis over IP document; this feature is not covered here.
[Figure: MOD and BDA versioning: the OMC-R hosts one Managed Object Dictionary per supported BSS release (MOD Vn-2, MOD Vn-1, MOD Vn), each facing the BDA of the corresponding BSS release]

BDA: Applicative Data Base, the database of the BSC.
BDE: Management Database, the database of the OMC.
MOD: Managed Object Dictionary, the BSS-OMC interface definition; as such, it is versioned.
OMC-R Server Engineering Considerations

The OMC-R provides the following functions:
- Man-Machine Interface (MMI)
- Communication Management
- Configuration Management
- Fault Management
- Performance Collection
- Security Management
- Common Functions
- RACE management
- OMC-R databases
- GPRS management
OMC-R Capacity
The OMC-R is able to handle:
- up to 40 BSCs / 4800 cells for the Enhanced Capacity configuration option with a Sun Fire V890-based or T5140-based OMC-R server. This configuration is known as the Ultra High Capacity (UHC) configuration.
- up to 40 BSCs / 3200 cells for the Enhanced Capacity configuration option with a Sun Fire V880. This configuration is also known as the Very High Capacity (VHC) configuration.
- up to 30 BSCs / 2400 cells for the Basic Capacity configuration, which corresponds to the High Capacity (HC) configuration.
The OMC-R is configured by default with the Basic Capacity configuration. To increase the OMC-R capacity from Basic to Enhanced, first verify that the OMC-R server complies with the OMC-R very high capacity hardware requirements; the Enhanced configuration cannot be reached with all supported OMC-R server hardware configurations. See the "Hardware Specifications" section. Refer to SIE OMC-R OMC Administration, PE/OMC/DD/000182 [R14] for the appropriate values of the OMC-R configuration file variables related to OMC-R capacity.
maximum number of cells per OMC-R: 2400
maximum number of BSCs per OMC-R: 30

Object class                      Definition                                                                 Maximum
AdjacentCellHandover              defines a neighbor cell of a serving cell for handover management purposes  76800
AdjacentCellReselection           defines reselection management parameters for a serving cell                76800
Channel                           max number of channels (TCH, etc.)                                          76800
FrequencyHoppingSystem            max number of frequencyHoppingSystem objects (Hopping Seq Nb, Mob. All.)    9600
LapdLink                          max number of LAPD objects                                                  60
PcmCircuit (BSC2G/BSC 3000)       max number of pcmCircuit objects                                            80
Pcu                                                                                                           80
Transceiver                       maximum number of TRX per OMC-R                                             9600
Transcoder_2G (BSC2G/BSC 3000)    max number of TCU2G                                                         420/960
Transcoder_e3 (BSC 3000)
The OMC-R can simultaneously build the BDA of up to 10 different BSCs, whatever the build type (on-line or off-line). It can also simultaneously audit the BDA of up to 10 different BSCs.
The value of T2 can be affected by the OMC-R server configuration, the number of BSCs managed, the number of cells per BSC, and the number of neighbors (adjacentCellHandover) per BSC.
1. Only available for BSC 3000: T2 ODIAG can be set to 5 mn only if one BTS is in mdOb-
These limitations are linked to the number of cells handled per BSC and the number of neighbors (adjacentCellHandover) per cell. Depending on these values, the minimum T2 value for Fast Statistic Observation cannot always be 15 mn. These limitations can appear even with mixed BSC 2G/BSC 3000 under the same OMC-R.
Capacity model      Maximum BSCs   Maximum PCUSNs
Basic Capacity      30             30
Enhanced Capacity   40             40
Call Tracing
This function is the GSM 12.08 trace facility of the BSC. It is used to trace the activities associated with specific communications (identified by IMSI or IMEI) in a BSC and to transfer this data to the associated OMC-R. This function is invoked by the MSC. The following considerations apply:
- No more than one call trace object can be created per BSC.
- Only one Call Trace session per BSC can be activated.
- The OMC-R can process at any given time the traces of 10 IMSIs per day, in radio or other modes, across the whole BSS network it manages. It is assumed that each IMSI performs 90 communications per day, of 2 minutes duration with 1 handover on average, when the traces are done in radio or other modes.
Only one Call Path Trace session per BSC can be activated, monitoring 36 communications in parallel. It is assumed that communications are, on average, 2 minutes long with 2 handovers. The OMC-R can handle 6 or 8 active Call Path Trace sessions simultaneously (N_CPT_MAX), respectively, for the Basic (HC) and Enhanced (VHC/UHC) capacity configurations. The total duration of the Call Path Trace session (H_CPT_DURATION_MAX) determines the amount of CPT data collected, and is therefore limited by the size of the related OMC-R server and SDO server partitions. Allowing only one day of storage of CPT data in the SDO, the maximum active CPT duration for 8 BSCs monitoring 36 parallel communications with T2 at 30 minutes is limited to 10 hours. For data transfer, FTAM is used for non-priority traces and event reports are used for priority traces.
Concurrent activation of Call Path Trace, Call Drop Analysis and Radio Measurement Distribution
The activation of the Call Drop Analysis feature and, to a much lesser extent, the activation of the Radio Measurement Distribution feature both significantly increase the amount of data stored on the OMC and SDO disks. There are therefore potential restrictions on using those functions in parallel with Call Path Tracing, to avoid reaching the disk-usage threshold monitored by the OMC-R purge defense mechanism. With Call Drop Analysis activated, subsequently activating Call Path Tracing can trigger the OMC-R defense mechanism after a period of time that depends on the OMC-R hardware configuration. There is otherwise no restriction on activating Radio Measurement Distribution and Call Path Tracing simultaneously. With the following settings for the functions activated simultaneously:
- Call Drop Analysis typically featuring 100 dropped calls per cell for potentially 3200 cells, with the data kept on the SDO for 4 days,
- Radio Measurement Distribution monitoring 3200 cells per day, with the data kept on the SDO for 4 days,
- Call Path Trace initiated for 8 BSCs monitoring 36 parallel communications with the BSC observation granularity period (T2) set at 30 minutes,
the maximum active CPT duration for 8 BSCs monitoring 36 parallel communications with T2 at 30 minutes is estimated to be limited to 10 hours. Within this duration, SDO disk usage exceeds 70% for an SF V880 with T3 Integrated OMC-R. If CPT exceeds the maximum CPT duration, the OMC data will be purged (if the CPT database contains data older than the current day) and/or recording will be stopped (if the CPT database contains only current-day data). If CPT exceeds 10 hours, old SDO data will be purged on an SDO based on a U5 or SB150. With an SDO device of 27 or 36 GB, i.e. either a dedicated SDO running on a U5 or SB150 with Multipack, or integrated in an SF V880 with T3s, the SDO disk usage suggests activating those features in parallel only with precaution, and requires a daily backup of CPT data to prevent the oldest archive from being purged immediately at the next CDA or CPT session. With an SDO device of 90 GB or more, i.e. either a dedicated SDO running on an SB1500 or integrated in an SF V890, CDA, RMD and CPT can be activated in parallel. Note, however, that the limitations described above for CPT running alone still apply; in that case the limiting factor is not the SDO device but the OMC data partition size.
Security Management
Up to 250 user profiles can be created in the OMC-R.
Redundancy
The OMC-R secures the storage of dynamic data in mirrored file systems, i.e. identical files (or identical database tables) on two separate disk units. Moreover, in the SF V880 Integrated and SF V890 Integrated HDI configurations, the system disks and the OMC-R static data disks are also mirrored.
Purge
Three mechanisms are available at OMC-R level to purge old data. The first consists of automatically deleting old data daily, to avoid disk saturation; the storage durations are defined in previous sections, and this mechanism requires no operator action. The second is a defense mechanism that automatically deletes the oldest data when the filling threshold of any OMC-R partition is reached; it also requires no operator action. The third mechanism is provided to avoid saturation of the Call Trace and Call Path Trace partitions (database and ASN.1 files, but not the log files): the operator can purge the current day or any day available at OMC-R level, according to the storage duration of the Call Trace and Call Path Trace information.
Q3 (CMIP) interface
The Q3 interface is based on the CMIP, FTAM and ACSE protocols. The OMC-R can simultaneously manage on its Q3 interface:
- a transactional and event-reporting data flow based on CMIP, with up to 8 CMISE commands every second
- a file transfer data flow based on FTAM
The transport layer used for the OMC-R Q3 interface is either X.25 or TCP/IP.

Event report
Event-reporting throughput is caused by Fault Management event reports, Performance Management event reports, Call [Path] Trace event reports, and transactions entailing a notification (attribute value change, object creation, object deletion). This last category is assumed to correspond to 25% of the transactional throughput.
The mediation part of the OMC-R may support up to 3 managers, including the local manager in the OMC-R, which provides the manager functions and the man-machine interface. In practice, an OMC-R can therefore be connected to a maximum of 2 NMSs simultaneously. The mediation part's managed objects and resources cannot be dedicated to one manager; consistent management of the OMC-R (mediation part) is therefore expected to be performed by the manager(s).
To avoid congestion of the OMC-R during a burst of notifications, each external manager shall be able to acknowledge at least 16 notifications per second per OMC-R, on average over a day, on the Q3 interface. Similarly, during scoped/filtered operations, the external manager shall be able to receive 4 to 16 linked responses per second per OMC-R, on average over a day.
SDO server
The SDO makes the OMC-R data records and radio network configuration parameters available in an ASCII-readable format for peripheral OMC applications (which may retrieve them using the 'rcp' or 'ftp' Unix commands). Starting with BSS V16.0, the observation reports available from the SDO (Nortel OMC-R Data Server) are compressed when older than one day: the directories that store observation report files and carry a day tag different from the current day are archived into a single destination file (using the tar command), and the destination file is then compressed (using the gzip command, standard RFC 1952). Finally, the original data files are deleted. There is a potential impact on 3rd-party post-processing tools: if file retrieval is interrupted for some time (in case of an OAM link failure, for instance), file compression will have been applied by the SDO in the meantime. In this case, the compressed files have to be retrieved (instead of the regular ones) and uncompressed before being processed. Today, NMSs for Performance Management (such as METRICA) use SDO data files as inputs (observation files). With the Integrated OMC-R server configuration, the SDO function is part of the OMC-R server, with the same level of performance. For the legacy mono or dual OMC-R server configurations, the SDO function is hosted on a local (connected to the same LAN as the OMC-R server) or remote (connected to the OMC-R server via an XLAN) workstation.
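For illustration only, here is a minimal Python sketch of the daily compression step described above. The directory layout under /SDO and the day-tag format are hypothetical; only the sequence (tar the non-current-day directory, gzip it, delete the originals) is taken from this section.

    import datetime
    import shutil
    import tarfile
    from pathlib import Path

    # Hypothetical root holding one day-tagged observation-report directory per day.
    SDO_DATA_ROOT = Path("/SDO/data")

    def compress_old_observation_dirs(root: Path = SDO_DATA_ROOT) -> None:
        """Archive every day-tagged directory except the current day's.

        Mirrors the SDO behavior described above: each non-current-day
        directory is packed into a single tar archive, the archive is
        gzip-compressed (RFC 1952), and the original files are deleted.
        """
        today_tag = datetime.date.today().strftime("%Y%m%d")  # assumed tag format
        for day_dir in root.iterdir():
            if not day_dir.is_dir() or day_dir.name == today_tag:
                continue
            archive = root / (day_dir.name + ".tar.gz")
            # "w:gz" tars and gzip-compresses in one pass (tar + gzip equivalent).
            with tarfile.open(archive, "w:gz") as tar:
                tar.add(day_dir, arcname=day_dir.name)
            shutil.rmtree(day_dir)  # original data files are deleted afterwards

    if __name__ == "__main__":
        compress_old_observation_dirs()

A post-processing tool that falls behind must therefore look for the compressed archives instead of the plain report files, and uncompress them before parsing.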
PCUOAM server
The MDM software is responsible for managing the PCUSN hardware that is part of the GPRS network. OMC-R processes connect to the MDM to provide PCUSN alarms to the OMC-R MMI and to transfer PCUSN counter data from the PCUSN to the OMC-R SDO. The PCUSN counters are not displayed at the OMC-R MMI. With the Integrated OMC-R server configuration, the PCUOAM function is part of the OMC-R server. For the legacy OMC-R server configuration, the PCUOAM function is hosted on a local (connected to the same LAN as the OMC-R server) or remote (connected to the OMC-R server via an XLAN) workstation. The PCUSN per OMC-R limit is the maximum number of BSCs an OMC-R can manage.
Client, X-terminal and RACE Engineering Considerations

The OMC-R Client software application can only be hosted on a Unix workstation. It is also possible to emulate multiple client sessions using an X-Window terminal, launched either from a Unix workstation local to the OMC-R server or from the Integrated OMC-R server itself. In addition, the Remote ACcess Equipment (RACE) application allows end users to interact with the OMC-R application from a PC running an Internet browser, to perform day-to-day operations and curative maintenance from a remote site.
OMC-R Client
The nominal OMC-R client is hosted on a Unix workstation, which can be local (connected to the same LAN as the active server) or remote (connected to the active server via an XLAN).
Table 3-1: OMC-R client capacity

Server configuration   Number of cumulated OMC-R Client Graphic MMIs (OMC-R workstations or X-terminals)
SFV880                 16
SFV890/T5140           40
Session windows
During a user session on a workstation, the number of windows of various types that can be opened simultaneously is limited. The performance of the OMC-R system is guaranteed under the following conditions:
- maximum of simultaneously opened or iconified windows = 10
- maximum of (Current alarm list + Notification windows + State Change window + OMC-R browsers + Topological view) opened or iconified = 5
Since release V15.1R, the client workstation allows the use of a double-screen display to optimize window management, for example by having configuration management and performance management windows open on one screen and fault management windows open on a second screen. Benefiting from this feature also implies having a second graphics card installed on the workstation.
An MMI unitary command is neither a "scope & filter" command nor a "run command file" command. A "scope & filter" command is counted as N unitary commands, where N is the number of object instances in the scope. A command file is counted as N unitary commands, where N is the number of unitary commands in the command file.
The CIUS is installed on an OMC-R local workstation with a minimum 20 GB internal hard disk. On the CIUS workstation, an internal DVD-ROM drive is required in order to download tools and software. This hardware requirement matches the following configurations:
- the SunBlade 150 workstation with 512 MB RAM and 80 GB disk
- the Sun Blade 1500 workstation
- the Ultra 45 workstation with 1.6 GHz CPU, 2 GB RAM and 250 GB disk
After the installation or upgrade, the workstation goes back to its OAM purpose. As Sun JumpStart needs to allow any machine to boot on the network using the RARP service, the JumpStart server residing on the CIUS server and the Sun machines to upgrade must be on the same subnetwork. For remote workstations which cannot boot using the Solaris image of the JumpStart server, a specific mechanism is provided: a boot from a Solaris CD-ROM with an automatic JumpStart installation from the JumpStart server. For remote workstations with low bandwidth, i.e. less than 1 Mbps, it is also possible to perform the installation from the CIUS CD-ROM and from the local tape drive with the application software, while the configuration files are fetched from the CIUS server. The workstations and the SF V8x0 servers can be installed from the CIUS even if they belong to different subnets.
Figure 3-1: Workstation as a centralized installation/upgrade server
[The install/upgrade server on the OMC-R LAN serving the SDO and PCU OAM workstations, the client workstations WS 1 and WS 2, and the servers]
X-terminal
An X-terminal (X-Window terminal) can be connected either to the OMC-R server or to a client workstation set up to support X-terminal sessions (also named a STATX workstation), which can be local or remote.
According to the X-Window client-server relationship, in this type of configuration an X-terminal is an X11 server, and the OMC-R server or the STATX workstation is the X11 client. The benefit of an X-terminal is to deploy low-cost or low-computing-power hardware (such as an obsolete workstation). In practice, the OMC-R server with a monitor and keyboard, or a nominal workstation running an X11 server, can also act as an X-terminal. Note that X-terminals based on a PC with X11 emulation software such as Exceed are not supported, as they could cause crashes on the extended workstation.
Figure 3-2: X-Window client-server relationship
[An X-terminal (X11 server) booting via tftp and attaching over X11 to the OMC-R A STATX workstation (X11 client and boot server), and a workstation with an X11 server running attached to OMC-R B, with a switch/router between the OMC-R A and OMC-R B servers]
Standard X-terminals can be diskless devices and therefore have the capability to boot their X11 server software over the network, using the standard IP boot protocol (TFTP), from an X11 boot server. The boot server can be the extended workstation itself or a distinct machine. X-terminals running on workstations boot their X11 software from their own disk, avoiding the need for a boot server. An X-terminal cannot operate without its X11 client; for an OMC-R with a non-integrated OMC-R server, where the X11 client can only run on the STATX workstation, this reinforces the requirement to have a minimum of two local workstations per site, in case one workstation is out of service.
Multi-MMI workstation
The Multi-MMI workstation is a Unix machine running an X server, or, more simply, an X-terminal. By definition, an X-terminal can be simultaneously attached to several X11 clients, i.e. STATX workstations or OMC-Rs, and can therefore run multiple simultaneous sessions for different OMC-Rs. With the Multi-MMI workstation, one can connect to several remote OMC-Rs at once through their local workstations set up to act as X11 clients (STATX). The STATX workstations running the client software (the MmiKernel and MmiGraphic binaries) send the display back to the Multi-MMI machine or X-terminal. This implies having, in each remote OMC-R to be reached, one local workstation set up as STATX; in the case of an Integrated OMC-R based on SF V8x0, the server itself can also play the role of the STATX.
[Figure: Multi-MMI configuration: a Multi OMC workstation connecting to OMC-R site 1 through the local OMC-R workstation, and to OMC-R site 2 as a client through its workstation set up as STATX]
RACE
RACE (Remote ACcess Equipment) provides assistance to end users for day-to-day operations, curative maintenance, etc., from a remote site but also from the OMC-R site. Despite its capabilities, RACE shall not be considered a replacement for a workstation or an X-terminal. It is an Internet-technology-based server application that runs on OMC-R workstations; an HTTP server also runs on these workstations. A web browser is used on the end-user side to manage the information.
[Figure: the OMC-R LAN with the OMC-R clients, an X-terminal and a printer, and a RACE PC attached through an RS232 link]
The RACE client sends requests to an HTTP server, which is on the LAN of the OMC-R server and which transmits these requests to a RACE server running on an OMC-R workstation. Up to 3 RACE connections can be made to the same workstation, which is recommended to be a local MMI workstation. Special workstations such as PCU OAM, SDO and X-terminal servers are not allowed to act as RACE servers. On the client side, RACE requires a web browser, and therefore hardware able to run this browser; the recommended hardware is a high-end PC. Finally, at login the end user is offered two modes and has to choose one according to the available communication bandwidth. The RACE can be located on a remote site and access the OMC-R LAN through a WAN access. A firewall can be added to improve security.
[Figure: remote RACE access: a RACE PC reaching the OMC-R LAN (server QGE ports, LX 8020S console server, optional hub, switch/router) through a VPN router over the Internet]
In addition to the standard configuration for the RACE server, the slots of the terminal server, and its modems, must be configured to support PPP over the PSTN. The HTTP server is configured using the configuration shell of the OMC-R.
RACE Client
The RACE PC client is not productized by Nortel Networks and must be purchased locally, following the requirements given in the "Hardware Specifications" section. One of the advantages of the RACE application is that all the software resides on the server (HTML pages and Java applets) and is downloaded to the client when necessary. Hence, if a correction has to be applied to the RACE application, the update is done on the server side and nothing is modified on the client PCs. The PC client software requirements are:
- Microsoft Internet Explorer 5.0 or higher
- Operating system: Windows 2000, Windows XP or Windows Vista
- Java 1.2.2 Plug-in (if the Internet Explorer version used does not support the RACE applets, which are composed of Swing and JDK 1.2 components, the Java Plug-in 1.2.2 must be installed)
For a connection through a terminal server, a new connection has to be created on the PC using the Dial-up Networking option of Microsoft Windows. In addition, the TCP/IP layer of Microsoft Windows has to be configured to make proper use of the modem: its maximum speed, data compression, and error checking. For more information refer to SIE OMC-R Web Access: RACE, PE/OMP/DD/0045 [R12].
Whenever those limits are overrun, response times increase and OMC-R performance degrades.
Preventive Backup and Disaster Recovery Plan
The preventive backup can be performed on the following data types. Each type of data can be restored separately when needed.
MDM configuration
This on-line backup covers the MDM software configuration files and the PCUSN view files.
... on a file system mounted via NFS and handled by a backup server.
There is one Sun StorEdge DDS/DAT tape drive per OMC-R server, and one external DDS/DAT tape drive per site to be attached to a local workstation. The Sun StorEdge DDS/DAT tape media/drives in use versus the supported hardware configurations are described in the "Hardware Specifications" section. The maximum data transfer rates of the DDS/DAT tape drives before data compression are listed in Figure 4-1: DDS/DAT roadmap. The data compression ratio varies with the type of data, but one can assume a maximum compression ratio of 2:1 (a back-of-the-envelope duration estimate follows the figure below).
Figure 4-1: DDS/DAT roadmap
[Roadmap showing read/write compatibility across DDS1, DDS2, DDS3, DDS4 and DAT 72: native capacities of 12 GB (125m tape), 20 GB (150m tape) and 36 GB (170m tape), with native transfer rates ranging from 183 KB/s up to 3 MB/s]
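As a rough planning aid, the figures above allow a back-of-the-envelope duration estimate; a minimal sketch follows. The 3 MB/s native rate comes from the roadmap figure, the 2:1 maximum compression ratio from the text above, and the 32 GB example volume from the "Engineering considerations" subsection later in this chapter; real durations will be longer, since 2:1 is a best case that depends on the type of data.

    def tape_backup_hours(volume_gb: float,
                          native_rate_mb_s: float = 3.0,   # DAT 72 native rate (figure above)
                          compression_ratio: float = 2.0   # maximum ratio assumed in the text
                          ) -> float:
        """Best-case duration (hours) to stream volume_gb to a DDS/DAT drive."""
        effective_rate = native_rate_mb_s * compression_ratio  # MB/s after compression
        return volume_gb * 1024 / effective_rate / 3600

    # Example: ~32 GB of daily data on a DAT 72 drive, best-case 2:1 compression.
    print(f"{tape_backup_hours(32.0):.1f} h")  # about 1.5 h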
Daily operations
The daily backup is automated, and the data are stored on an NFS server; the backup destination is not a local partition, so as to limit disk space usage on the OMC-R. A successful backup completion deletes the d-2 backup version, avoiding disk saturation (the retention rule is sketched after the list below). Data backed up daily are:
OMC-R
- Environment data (BDE)
- PCUOAM configuration files
- SDO configuration files (/SDO/base/config)
- Disaster configuration files
- <optional> EFT files
- <optional> daily data (the last three days)
<optional> means the user can configure the automatic backup with or without the EFT files and daily data.
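The d-2 retention rule referenced above can be pictured with the following sketch. The NFS path layout and directory naming are hypothetical; only the rule itself (a successful daily backup deletes the d-2 version) is taken from the text.

    import datetime
    import shutil
    from pathlib import Path

    # Hypothetical layout: one dated backup directory per OMC-R on the NFS server.
    NFS_BACKUP_ROOT = Path("/nfs/omcr_backups/OMCA-1")

    def prune_after_successful_backup(root: Path, today: datetime.date) -> None:
        """Delete the d-2 backup once today's backup has completed successfully.

        Keeps only d (today) and d-1, which bounds the disk space used on the
        NFS server, as described above.
        """
        stale = root / (today - datetime.timedelta(days=2)).isoformat()
        if stale.is_dir():
            shutil.rmtree(stale)

    # Called only after the daily backup for `today` completed without error.
    prune_after_successful_backup(NFS_BACKUP_ROOT, datetime.date.today())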
Figure 4-2: Daily operations
[Sites A and B: the OMC-Rs (OMCA-1, OMCB-1) and their workstations, with spare integrated OMC-Rs on each site, performing their daily backup over the network to the NFS server]
Recovery phase
The spare OMC-R server that will be used as the recovery server must be operating with the same OMC-R version as the destroyed machine. Since the spare OMC-R server, now called the recovery OMC-R server, is used without a workstation, a local workstation currently used by another operational OMC-R is reassigned to the recovery OMC-R server and called the recovery workstation. This implies that this workstation must be pre-defined in the spare OMC-R server configuration. Then, knowing the name parameter of the destroyed machine and the NFS label, the last available backup is located and restored on the recovery server. For the specified NFS label, the recovery tool finds the information about the NFS server, verifies that it is reachable, and searches for the last available backup of the destroyed OMC-R.
[Figure: recovery phase: the spare integrated OMC-R on Site B, acting as recovery server, restores the OMCA-1 data from the NFS server over the network]
The OMC-R application is then stopped on the recovery server to restore the environment data, and started to restore the PCUOAM data and the three days of daily data. Some hardware-dependent parameters must be updated (T3 hostnames and IP addresses, Sybase server names) and the OMC-R re-configuration performed. Some post-operations are necessary: PCUOAM configuration file updates, PCUSN synchronization, SDO data regeneration, and daily backup configuration. After the recovery server is installed, the automatic centralized archiving task runs again: the recovery server performs its data backup on the NFS server just as the destroyed OMC-R was supposed to do.
[Figure: after recovery, the recovery server on Site B performs the daily backup to the NFS server in place of OMCA-1]
Site restoration
The return to a normal state occurs by applying the reverse of the disaster plan, assuming that the failed integrated OMC-R is operational again in its usual OMC-R version. The site restoration is performed by restoring the OMC-R data from the NFS server and applying the post-operations used during the recovery server configuration. The workstation needs to be integrated again into its usual OMC-R configuration. The recovery server needs to be reinstalled from scratch with the same OMC-R version, to return to the spare integrated OMC-R pool.
Figure 4-5: Site restoration
[The restored OMC-R on Site A (OMCA-1) retrieves its data from the NFS server over the network, while the recovery server on Site B is reinstalled and returns to the spare integrated OMC-R pool]
Engineering considerations
A maximum of approximately 32 GB per day of disk storage volume is required to back up the daily data of an OMC-R with 3200 cells, i.e. 10 MB per day per cell. The maximum storage volume required on the NFS server is therefore the sum, over all the OMC-R servers to back up, of: (number of cells in OMC-R[i]) x 4 (number of days backed up) x 10 MB. As a result, the nominal bandwidth between the OMC-R servers and the NFS server cannot be less than 100 Mb/s if the daily backup of each OMC-R is to complete within 3 hours. A redundancy solution for the NFS server is also recommended.
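A worked example of the sizing formula above, as a minimal Python sketch; the per-OMC-R cell counts are hypothetical, while the 10 MB per cell per day, the 4 retained days and the 3-hour backup window are taken from the text.

    def nfs_storage_gb(cells_per_omcr: list[int],
                       days_kept: int = 4,         # number of days backed up (text above)
                       mb_per_cell_day: int = 10   # ~10 MB per day per cell (text above)
                       ) -> float:
        """Total NFS storage: sum over OMC-Rs of cells x days x 10 MB."""
        return sum(cells * days_kept * mb_per_cell_day for cells in cells_per_omcr) / 1024

    def required_link_mbps(daily_volume_gb: float, window_hours: float = 3.0) -> float:
        """Average throughput needed to move one OMC-R's daily backup in the window."""
        return daily_volume_gb * 1024 * 8 / (window_hours * 3600)

    fleet = [3200, 2400, 2400]                               # hypothetical cell counts
    print(f"storage: {nfs_storage_gb(fleet):.0f} GB")        # ~313 GB for 4 days
    print(f"per-OMC-R: {required_link_mbps(32.0):.0f} Mb/s") # ~24 Mb/s raw

A single 3200-cell OMC-R needs roughly 24 Mb/s of raw throughput to complete within the window; with several OMC-Rs sharing the NFS server plus NFS/TCP overhead, this is consistent with the 100 Mb/s recommendation above.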
OMC-R Solution Architecture and Interfaces
The OMC-R is a client-server system; its architecture and nominal hardware configuration are shown in the figure below.
Figure 5-1: Nominal OMC-R system architecture
[The OMC-R server, OMC-R clients, X-terminal and printer on the OMC-R LAN (optional hub), a console server attached to the server, and a switch/router toward the TCP/IP WAN with a VPN router for remote access]
The system architecture implements the following equipment:
- a single server or dual server,
- multiple clients,
- LAN equipment,
- optional servers (SDO, PCU OAM) when applicable,
- other optional devices such as network printers or an uninterruptible power supply (UPS).
Clients
The connection of the local client workstations to the OMC-R server is made with an Ethernet LAN. An X-terminal is logically linked to a workstation or to an integrated OMC-R server through an Ethernet LAN. It is not recommended to have the X-terminal access the workstation or the Integrated OMC-R server through a WAN (via routers) because of the bandwidth requirements of the X11 protocol. The RACE PC equipment is linked through a WAN to the OMC-R central site. The RACE PC equipment can be also directly connected to the OMC-R LAN.
Ethernet switch
The OMC-R LAN must be at minimum a 100 Mbps LAN built around a 100 Mbps Ethernet switch. 1000 Mbps Ethernet switches are also supported.
Console Server
For the Line-Mode feature, a console server is used to connect serial port A of the OMC-R server to the local LAN, so that it can be supervised through the Telnet protocol.
Optional servers of the OMC-R system not based on the Integrated OMC-R Server
The following optional servers are part of an OMC-R system that is not based on the Integrated OMC-R server. The PCU OAM server, hosted on a dedicated local workstation with additional memory and disk capacity, is connected to the Ethernet LAN; the PCU OAM can only be installed locally, i.e. on the same LAN as the OMC-R. The SDO (data server) is used to export OMC-R data in a pre-defined ASCII format; it can be installed on a dedicated workstation with an external disk when necessary. An SDO can be local or remote.
... the private test address must be a valid routable address, and it must be in the same subnet as its associated data address. As soon as the interface originally set as active comes back into service, the public data address is assigned back to this interface. Refer to the "Appendix B: OMC-R LAN Engineering" section.
Note 1: The Integrated OMC-R server based on SF V8x0 has one Gb Ethernet port, which is not used at the moment.
Note 2: The Dual T3 external storage has two 100 Mb Ethernet ports, which must be connected to the OMC-R LAN.
PCUSN IP interface
The interface between PCUSN and OMC-R is Ethernet TCP/IP.
The PCUSN hosts two Control Processors (CPs) in a single shelf, one active and one hot-standby, and the two CPs share the same IP address. In case of a switchover from the active to the hot-standby CP, a hub or switch connected to both CPs is required in order to keep the OAM connection alive; the IP and MAC addresses are automatically propagated from the failed CP to the standby CP. A local connection between the OMC-R and the PCUSN requires the use of a switch, to isolate the PCUSN from IP traffic not intended for it, which could cause the PCUSN CP to crash under too heavy a load. A single switch can be shared to connect multiple local PCUSNs to the OMC-R.
[Figure: PCUSN OAM connectivity: the PCUSN control processors, attached through a hub or switch, reach the OMC-R server and workstation via the OMC-R LAN switch and router; the PCUSN cross-connects also carry the Gb interface toward the SGSN]
LAN connection
When the PCUSN is local to the OMC-R site, the PCUSN Ethernet interface is directly connected to the OMC-R LAN through a switch.
OMC-R BSC 3000 Interface

BSC 3000 co-located with the OMC-R LAN
This configuration presents the advantage of using the existing OMC-R LAN equipment, but an Ethernet switch is mandatory between the BSC 3000s and the OMC-R: the OMC LAN traffic being quite high, the active and passive OMUs need to be protected from this traffic load. If the OMC-R LAN is built with hubs, the hub should be replaced with an Ethernet switch, at least for the BSC connections. No additional equipment is needed (not even the BSC internal hub).

Simple connection using the existing LAN equipment (no redundancy)
Ethernet switches ranging from 24 to 48 ports can be used for this type of configuration; for each BSC 3000, 2 Ethernet ports should be provisioned on the switch. In case of several co-located BSC 3000s, switches can be stacked to provide the required number of ports. The BS470 and BS5510 Ethernet switches can be used in this configuration.
Figure 5-4: Simple connection using the existing LAN equipment (no redundancy)
[The OMC-R workstation and BSC 1 (IP 1) attached to a single switch over 100 Mbps links]
Redundant switch connection using Spanning Tree
This configuration uses a redundant Ethernet switch configuration and is recommended only as an interface toward the BSS OAM network (it can connect the co-located BSCs and the routers connecting the remote BSCs). This configuration can be made using the BS470 and BS5510 Ethernet switches. These switches have optional or built-in cascade modules allowing the interconnection of several switches with a 2.5 Gbps link. Combining the cascade capability with Spanning Tree Protocol support (which detects and eliminates logical loops in the network), these switches are perfectly adapted to this kind of configuration.
[Figure: redundant switch connection: the OMC-R ports QFE 0 and QFE 4 attached to Switch 1 and Switch 2 respectively, both switches connected to BSC 1 (IP 1, IP 2)]
[Figure: redundant routed connection: the OMC-R reaching the remote BSC 1 (IP 1, IP 2) through Switch 1/Router 1 and Switch 2/Router 2, with VRRP running between the routers]
The main idea is to keep the link between the OMC-R (QFE 0 IP address) and the active OMU of the BSC (IP 1 address) alive in case of failure of one WAN interface, one router or one Ethernet switch. There is still a single point of failure, which can be considered less critical: the router at the BSC site; in case of its failure, only one BSC is affected.

E1/T1 alternatives
If the chosen type of transmission network for the WAN is E1/T1, then a PCM concentration should be made so that a minimum number of WAN interfaces are used at the OMC-R router level. This can be done at the WAN level (by using an SDH backbone):
Figure 5-7: E1/T1 links using WAN concentration
[Router 1 and Router 2 terminating concentrated E1/T1 links toward BSC 1, BSC 2 and BSC 3]
Or it can be done by concentrating the PCMs coming from several BSCs at the OMC-R level, by using a digital cross-connect (which can even be the MSC):
[Figure: E1/T1 links concentrated at the OMC-R through a digital cross-connect (DxC) in the PCM network, collecting the PCMs from BSC 2 through BSC n]
In case of a small number of BSCs (2 to 3), a direct connection can be made between the BSCs and the two OMC-R routers; in this case, additional E1/T1 interfaces should be provided (one for each connected BSC, but no more than 3):
Figure 5-9: E1/T1 links not concentrated (for small BSS configurations)
[BSC 1, BSC 2 and BSC 3 connected over direct E1/T1 links, through SDH/SONET, to Router 1 and Router 2 on the OMC side]
All the configurations presented above have two main dimensioning criteria:
1. The needed throughput for the BSC - OMC-R link
2. The number of available ports (Ethernet or WAN)
Bandwidth Requirements
The following bandwidth requirements are expressed as the Recommended Throughput computed with the Highest values of the dimensioning Parameters (observations, traces, notifications per second, network elements), or RTHP. These throughput values can nevertheless double in case of huge event flows. The physical bit rates configured in the data communication equipment used to interconnect the OMC-R equipment must therefore be higher than the RTHP; if the bit rate is lower than recommended, delays will occur in case of large event flows.
OMC-R LAN
The OMC-R network requires a fully dedicated Ethernet LAN for operation. An Ethernet switch provides the necessary bandwidth of 100 Mbps for the Standard, Enhanced and Maximal Capacity models. 1000 Mbps Ethernet switches are also supported.
The throughputs above can double in case of huge event flows. The WAN physical bit rate configuration must be higher than the minimum required throughputs, to avoid delays in case of huge event flows.
To accomplish a BSC 3000 software download, i.e. approximately 300 MB with 10 downloads in parallel (taking a 1.3 ratio for protocol overhead and a 0.6 efficiency), an estimate of the required bandwidth is given below for several duration targets (the arithmetic is sketched after the table).
Table 6-4: BSC 3000 software download minimum required throughputs

Download target duration (hours)   RTHP (kbps)
1                                  14800
5                                  5800
10                                 2400
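The arithmetic behind the table can be sketched as follows, using only the parameters stated above (300 MB load, 10 parallel downloads, 1.3 protocol overhead, 0.6 efficiency). This linear model reproduces the 1-hour figure (about 14,800 kbps); the published 5- and 10-hour values are higher than pure linear scaling would give, so the table values take precedence for provisioning.

    def download_rthp_kbps(duration_hours: float,
                           image_mb: float = 300.0,   # BSC 3000 software load (text above)
                           parallel: int = 10,        # simultaneous downloads (text above)
                           overhead: float = 1.3,     # protocol overhead ratio (text above)
                           efficiency: float = 0.6    # link efficiency (text above)
                           ) -> float:
        """Throughput required to finish all downloads within the target duration."""
        payload_kb = image_mb * 1024 * parallel * overhead / efficiency
        return payload_kb * 8 / (duration_hours * 3600)

    for hours in (1, 5, 10):
        print(f"{hours:>2} h -> {download_rthp_kbps(hours):,.0f} kbps")
    # 1 h -> ~14,791 kbps, matching the table's 14800 kbps figure.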
The WAN physical bit rate configuration must be higher than the minimum required throughputs, to avoid the SDO being unable to complete its processing between two T2 periods and to process a full day of data. The Peri OMC application collecting all the result files from the SDO has the following throughput requirements when using the V12 factorized SDO output format.
Table 6-7: SDO to Peri OMC minimum required throughputs with the V12 format

OMC-R capacity model             RTHP (kbps, V12 format)
Standard Capacity (2400 cells)   1662
Enhanced Capacity (3200 cells)   2183
Enhanced Capacity (4800 cells)   3029
The PCUOAM workstation receives the PCUSN counter observations from the PCUSN in a proprietary NMS format (FMIP), whose average required throughputs are as follows.
Table 6-9: PCUSN to PCUOAM on workstation minimum required throughputs

OMC-R capacity model             RTHP (kbps)
Standard Capacity (2400 cells)   90
Enhanced Capacity (3200 cells)   120
Enhanced Capacity (4800 cells)   168
Hardware Specifications
Supported configurations

OMC-R server configurations
Table 7-1: OMC-R server supported configurations

Configuration name: Integrated OMC-R server

OMC-R server hardware specifications                                                      Capacity model(s)                                                Notes
Sun Fire V880, 4 x 900 MHz, 8 GB RAM, T3 460 GB                                           Standard Capacity (2400 cells)                                   Supported
Sun Fire V880, 4 x 1200 MHz, 8 GB or 16 GB RAM, T3 460 GB                                 Enhanced Capacity (3200 cells)                                   Supported
Sun Fire V890, 4 x 1350 MHz, 16 GB RAM, 12 x 146 GB internal disks                        Standard Capacity (2400 cells), Enhanced Capacity (4800 cells)   Supported
Sun Fire V890, 4 x 1500 MHz, 16 GB RAM, 12 x 146 GB internal disks                        Standard Capacity (2400 cells), Enhanced Capacity (4800 cells)   Supported
Sun Fire V890, 4 x 1800 MHz, 16 GB RAM, 12 x 146 GB internal disks                        Standard Capacity (2400 cells), Enhanced Capacity (4800 cells)   Supported
T5140 server, 2 x 1200 MHz, 32 GB RAM, 4 x 146 GB internal disks and ST2510 disk array    Enhanced Capacity (4800 cells)                                   Nominal
R1: the configuration requires an external DDS tape drive for software installation or upgrade. R2: supported as local OMC-R workstations only; remote OMC-R workstations require a DVD drive.
Sun Ultra 45
Table 7-13: Sun Ultra 45 hardware specifications

Feature            Value
CPU                1 x 1600 MHz UltraSPARC-IIIi module
RAM                2 GB memory
Hard disk          1 x 250 GB 7200 RPM SATA disk
Disk drive         1 x slim DVD-RW/CD-RW drive
Software license   Server license for Solaris
Graphic board      1 x XVR-100 graphics accelerator
[Figure: T5140 server rear panel]
Figure legend:
1 Power supply (PS0)
2 Power supply (PS1)
3 PCIe or XAUI slot 0
4 PCIe or XAUI slot 1
5 PCIe slot 2
7 Serial system controller port (SER MGT), access to the ILOM
8 Ethernet system controller port (NET MGT)
9 10/100/1000 Ethernet ports (left to right: NET0, NET1, NET2, NET3)
10 USB ports (left to right: 0, 1)
11 Host serial port, DB9 connector (TTYA), access to the server
ILOM ports
The T5140 server comes with an integrated ILOM (Integrated Lights Out Manager V2.0) board that provides access to the server console through its serial port and/or Ethernet port. The T5140 does not hold a tape drive, and no graphics board is present, as the server is rack-mounted; a video display is not required, since the ILOM board offers the capability to manage the server remotely. The T5140 comes equipped with 4 internal disks; one pair of disks is used for the OS while the other pair is reserved for future use. The OMC-R data is stored on the ST2510 disk array. The T5140 server comes equipped with an embedded QGE board with 4 available Ethernet ports, and an additional QGE/PCI-Express board with 4 Ethernet ports is installed in the PCI-E slot. The ports on these boards are used as follows:
- 1 QGE/PCI-Express and 1 embedded QGE interface port are used for external Ethernet connections.
- 2 QGE/PCI-Express and 2 embedded QGE interface ports are used to connect to the ST2510 disk array device.
- 1 QGE/PCI-Express and 1 embedded QGE interface port are reserved for future growth.
The ST2510 disk array device is used to store the OMC-R data. The ST2510 comes equipped with the following:
- 12 x 300 GB 15k rpm SAS drives
- 2 x 512 MB cache iSCSI HW RAID controllers
- 2 redundant AC power supplies
- 2 redundant cooling fans
The 2 QGE/PCI-Express and 2 embedded QGE interface ports are used to connect to the ST2510 iSCSI interfaces directly, without the use of a hub or switch. These interfaces can be seen in the figure below:
Figure 7-2: Rear view of the ST2510 disk array
[Each controller exposes a dual Ethernet iSCSI interface, an Ethernet interface for disk array management (CAM), and a serial PS2 6-pin DIN connector]
SF V8x0
The Sun Fire V8x0 server is a high-performance, reliable server. CPUs are added in pairs via dual processor/memory modules. All memory is accessible by any processor, as these workgroup servers do not implement domains or partitions. An internal storage array supports six Fibre Channel disks.
Hardware redundancy
Hardware redundancy is provided via internal component redundancy of power supplies, cooling modules, CPUs, memory boards, HSI boards, FC-AL controller boards and RAID disks. Hot-pluggable and hot-swappable components such as power supplies, cooling modules and RAID disks can be changed in the event of failure without service interruption.
The SF V8x0 significantly enhances the availability of the server by reducing to zero the downtime linked to hard disk and power supply failures:

Power supply
The SF V8x0 has 3 power supplies, each with its own power cable. The server can work with only two power units.

Processing
Automatic System Recovery (ASR)
The Sun Fire V8x0 provides automatic system recovery from the following types of hardware faults:
- CPU modules
- Memory modules
- PCI buses
- System I/O interfaces
Automatic system recovery allows the system to resume operation after experiencing certain hardware failures. The automatic self-test feature enables the system to detect failed hardware components, and an auto-configuration capability designed into the system's boot firmware allows the system to de-configure failed components and restore system operation.
T3 Storage Array
Each T3 disk array is used in RAID 5 mode, while the OMC application uses the two T3s in RAID 1 with the Solstice DiskSuite software from Sun, providing full redundancy. All disks can therefore be changed in the event of failure without service interruption.
DCN Hardware Specifications & Recommendations
This section provides hardware specifications for the various DCN components supported with the OMC-R solution architecture, as well as recommendations to help engineer the OAM DCN. Hardware specifications and recommendations are provided for the:
- LAN switching device
- WAN routing device
- Terminal Server
- Console Server
- Alarm Relay Box
- RS-422/RS-232 Interface Converter
Ethernet Switches
As mentioned above, Ethernet switches are recommended for LAN connections, including local client site connections and remote/local NE sites. The 5510 series suits the Sun Fire V890/T5140-based OMC-R LAN, as these servers connect using the Gigabit Ethernet protocol, which also optimizes backups to remote NFS servers. All models provide network performance and reliability, as they provide advanced layer 2, 3 and 4 packet classification, prioritization and quality of service (QoS) capabilities, web-based management, fail-safe stackability, and flexible high-speed uplinks.
Switch configuration
Unless special features such as Spanning Tree are used, no configuration procedure is required: the Ethernet switches operate with factory default settings, automatically learning the addresses of all end-stations and maintaining a table of more than 16,000 MAC addresses.

Switch supervision
For network management, the Ethernet switches include a standards-compliant SNMP agent. Network management can also be performed in-band using the Telnet application. In addition, a serial console port allows out-of-band management using a standard VT100 or similar terminal (giving access to the Console Interface screens).
The Contivity 1750 is available in four models:
- Contivity 1750 with five tunnels (56-bit or 128-bit)
- Contivity 1750 with 500 tunnels (56-bit or 128-bit)
Figure 8-1: VPN Router 1750 front
Table 8-3: Supported option cards for the Contivity 1750

Option card                                  Maximum number   Restrictions
SSL VPN Module 1000                          1                Install this card in slot 1 only
Contivity Security Accelerator (CSA) /      1                Install one CSA or one Hardware Accelerator card.
Hardware Accelerator                                          Do not install a Hardware Accelerator card in slot 4
10/100 Ethernet interface                    4
1000BASE-T interface (copper) /              2                Install two 1000BASE-T cards, two 1000BASE-SX cards,
1000BASE-SX interface (fiber)                                 or one card of each type
56/64K CSU/DSU WAN interface                 4
ADSL WAN interface                           4
ISDN BRI S/T or U interface                  4
T1 CSU/DSU WAN interface (full-height)       4
T1/E1 CSU/DSU WAN interface (half-height)    4                For E1 support, you must install the half-height
                                                              interface card
Single V.35/X.21 WAN interface (full-height) 4
Single V.35/X.21 WAN interface (half-height) 4
HSSI WAN interface                           2                Do not install in slot 4; install in slot 3 or 1 if
                                                              possible. If an SSL VPN Module 1000 is installed, you
                                                              can install only one HSSI WAN interface card
- 100Base-T Ethernet
- Dual Sync
- Dual Sync/ISDN BRI
- Quad BRI
- Single-mode, multimode and hybrid FDDI
- Dual Token Ring
- MCE1, MCT1
- Hardware compression (for use with the Dual Sync and Dual Sync/ISDN BRI net modules)
The ASN has four positions in which net modules can be installed.
ASN router specifications
Core features:
- Single-board module based on the MC68040 microprocessor
- 8 MB, 16 MB or 32 MB of DRAM
- An 8 MB PCMCIA flash memory card for non-volatile software and configuration storage
Interfaces:
- Ethernet interface (15-pin AUI connector or 8-pin modular)
- Token Ring interface (9-pin MAU connector)
- FDDI (two MIC, one RJ-11 optical bypass)
- Synchronous interface (44- and 50-pin connector to RS-422, RS-232, V.35, X.21 adapter cable)
- ISDN BRI and ISDN PRI
- 100BASE-T interface (40-pin MII connector or 8-pin modular)
- MCT1 (RJ-48C, 15-pin DB connector)
- MCE1 (BNC 75 ohm, 8-pin modular 120 ohm)
The ASN router exists in three base configurations:
- Four-slot ASN chassis with a single 110/220 V AC power supply
- Four-slot ASN chassis with redundant 110/220 V AC power supplies
- Four-slot ASN chassis with redundant 48 V DC power supplies
[Figure: ASN router front and rear views: net module positions (#1, #3), redundant power supply unit (HRPSU), console/modem port and hardware diagnostic port]
Router software
Bay Networks ships the software image and a default configuration file on the PCMCIA flash memory cards. Initializing the router from the file system stored on the PCMCIA card is called local booting.
These files may also be downloaded to the router from a Bootstrap Protocol (BootP) or Trivial File Transfer Protocol (TFTP) device. This procedure is called network booting. In both cases it is strongly recommended to connect a console to the ASN so that commands can be issued to the router and messages viewed.
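For illustration, a Solaris host can serve as the TFTP boot device roughly as follows (a minimal sketch only: the image file name asn.exe is hypothetical, and the BootP side of the procedure is omitted):

   # enable the stock TFTP service by uncommenting this line in /etc/inetd.conf:
   #   tftp  dgram  udp6  wait  root  /usr/sbin/in.tftpd  in.tftpd -s /tftpboot
   # place the router image and configuration file in the served directory
   mkdir -p /tftpboot
   cp asn.exe config /tftpboot/
   # signal inetd to re-read its configuration
   pkill -HUP inetd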
Router configuration
For the initial configuration, the router can be accessed through a directly attached terminal, a PC running terminal emulation software, or a Telnet session.
The ASN router uses a software application called Site Manager for router configuration and maintenance. Site Manager uses a graphical user interface (GUI) to make router configuration and management tasks easier; it runs on another machine in the system. The router configuration consists of defining the following parameters:
- hostname and addresses (IP and X.25)
- X.25 configuration (Packet-Level and LAPB protocols)
- remote X.25 access, with the mapping between IP addresses and X.25 addresses
- routing table
Router supervision
The ASN router offers an SNMP-based management solution. The SNMP agent is included in the router software.
Connection cables to ASN router
The drawings below show how the OMC-R and the BSC2G are physically connected to the ASN router.

Figure 8-5: OMC-R to ASN connection cables
[Figure: the OMC-R serial port is connected to ASN Port 0 through cables A, B and C in sequence, using DB37, DB44 and DB50 connectors]

Tag   Part number        Cable description                                Connection ends
A     NTQQ0206 (3M)      RS-422 cable, L=3 m                              Male DB37 - Male DB37
B     A0018042 (4.57M)   44-pin to female RS-422 DCE, L=4.57 m (15 ft)    Female DB37 - Male DB44
C     7947 (3M)          50-pin to 44-pin cable adapter, L=3 m            Female DB44 - Male DB50
[Figure: the BSC2G is connected to the ASN through cables C and D, using DB50, DB44 and DB25 connectors]

Tag   Part number     Cable description
C     7947 (3M)       50-pin to 44-pin cable adapter, L=3 m
D     7833 (4.57M)    44-pin to RS-232-C standard, L=4.57 m (15 ft)
Terminal Server
For the OMC-R remote console and remote access, the following terminal server and rackmount modem system are supported but are no longer proposed within the Nortel sellable offer. The set composed of the modem shelf and modems is used for RACE connections through the PSTN using the V.34 protocol, which allows speeds up to 28.8 kbit/s. With the OMC-R, up to five modems can be supported.
Xyplex Maxserver 1620-014
The Xyplex Maxserver 1620-014 is an intelligent bridge between 20 asynchronous serial ports and the Ethernet network. It allows both local and remote access via dial-up for a variety of devices.
Multitech CC1600-Series rackmount modem system
The scalable CC1600-Series 19-inch rack-mountable card cage consolidates up to 16 V.34/33.6K modem cards for dial-up or 2- and 4-wire leased-line service, providing dial-in remote access, dial-in/dial-out datacomm, or dial-out faxing.
The following sections provide specifications and recommendations for this terminal server.
The Xyplex Maxserver is usually bundled with a terminal board or distribution panel enabling adaptation from RJ45 connectors to 25-pin Sub-D connectors cabled as DTE.
Figure 8-7: Xyplex Maxserver 1620-014 views
[Figure: front view showing the CARD, RUN, LAN and CONSOLE indicators; rear view showing serial ports 1 to 20 connected to the distribution panel (RJ-45), the Ethernet interface, and the power supply]
Terminal server software
The software is downloaded by the terminal server at boot time using the standardized RARP + TFTP protocols. If the Line Mode Manager is used, the boot machine is a non-OMC-R machine (that is, one of the supervisory workstations).
Terminal server configuration
The firewall connection is made on port 1.
The ports used for modem connections are numbers 2 to 6. A terminal console can be connected to any other Maxserver 1620 port (that is, any port except ports 1 to 6) via the distribution panel to check the configuration or for maintenance purposes. The connection is made by linking the DCE port of the VT to the Xyplex with a point-to-point cable. If the Line Mode Manager feature is used, serial port A of each server is connected to one Xyplex port (again excluding ports 1 to 6), and the OMC-R servers can then be accessed from any other workstation in the network.
Terminal Server supervision All terminal server parameters can be observed and changed via any SNMP-based management system.
RS-232 point-to-point cables must be provisioned to connect the MultiTech shelf system (male DB25 connector) to the Xyplex Maxserver (female DB25 connector) on ports 2 to 6.
Figure 8-8: CC1600-series rackmount modem system view
Modem configuration:
- auto baud: disable
- parity: none
- flow: xon
- speed: 9600 bit/s
- modem control: enable
- access: remote
- dsrlogout: disable
- dtrwait: forring
- inactivity logout: enabled
- idle time-out: 15 min
Console Server
The LX-8020S-102 is a secure standalone communication server designed for secure console or serial port management in environments requiring high reliability and/or dual power. It includes comprehensive security features such as per-port access protection, RADIUS, Secure Shell v2.0, PPP PAP/CHAP, PPP dial-back, an on-board database, and menus. The LX-8020S-102 console management solution enables centrally located or remote personnel to connect to the console or craft ports of any network element or server. This serial connection allows administrators to manage and configure remote network devices and servers, and to perform software upgrades, as if attached locally. The LX-8020S-102 is available with dual AC power supplies and provides 20 RS-232 DTE RJ45 serial ports.
Table 8-7: LX-8020S-102 console server specifications

Item                     Description of LX-8020S-102AC
Processor/Speeds         132 MHz RISC system board processor with integral encryption coprocessor
Memory                   16 MB Flash, 128 MB SDRAM
Serial Line Speed (20)   DTE RS-232, RJ-45 (up to 230 kbit/s; default = 9600 bit/s)
Ethernet Interface (2)   10/100 Auto Sensing / MDI / MDIX
Height                   4.3 cm (1.71 in)
Depth                    25.4 cm (10.0 in)
Width                    44.4 cm (17.5 in); fits in a 19-inch rack
Weight                   LX-8000S w/modem: 3.58 kg (7.9 lbs)
Environment              5% to 85% humidity long term, non-condensing; operating temperature 0 to 40 C (32 to 104 F), long term -5 to 50 C
The following section provides specifications and recommendations for the V02-64R.
[Figure: the OMC-R connected to the alarm system through RS232 serial links]
Lorin V02-64R
The model used in the OMC-R architecture is a 64 relay box (64 logical outputs) that can control 64 alarms in an alarm panel.
Figure 8-12: V02-64R rear view
Table 8-8: Lorin V02-64R specifications
- 64 logical outputs through 4 SubD37 female connectors (4 x 16-relay terminal blocks); outputs are organized in 8 groups of 8 relays
- Each output can be an optocoupled open collector or a mechanical relay
- 2 RS232 serial interfaces (SubD25 female connectors)
- 19-inch (2U) standard rack (CE and UL/CSA qualified)
- Power requirements: 110 - 240 VAC, 0.9 A max
- Relay output characteristics:
  - Admissible current: 3 A
  - Admissible voltage: 250 VDC
  - Switching response time: <= 10 ms
  - Cut-off capability: 750 VA
Software
The software driver managing the relay box is delivered with the OMC-R software application load.
Connection to relays
The equipment alarms are assigned to the relays by means of OMC-R configuration commands.
Relay 1: immediate alarms (all BSC and BTS linked to the OMC-R). Relay 2: deferred alarms (all BSC and BTS linked to the OMC-R).
The remaining relays (3 to 64) are not used.
The default position of the relays is "open". The OMC-R holds the relays in the "closed" state as long as there is no alarm, using the "Work" position only, as described in the relay connectors pinout table below. If the V02-64R is switched off, all relays open immediately. After power-up, and until a command is received from the host, the relays remain open.
Table 8-9: Relay connectors pinout (pins 1 to 13; all listed outputs are Common)

Pin nbr   Connector 1 (1-16)   Connector 2 (17-32)   Connector 3 (33-48)   Connector 4 (49-64)
1         relay 16             relay 32              relay 48              relay 64
2         relay 15             relay 31              relay 47              relay 63
3         relay 14             relay 30              relay 46              relay 62
4         relay 13             relay 29              relay 45              relay 61
5         relay 12             relay 28              relay 44              relay 60
6         relay 11             relay 27              relay 43              relay 59
7         relay 10             relay 26              relay 42              relay 58
8         relay 9              relay 25              relay 41              relay 57
9         relay 8              relay 24              relay 40              relay 56
10        relay 7              relay 23              relay 39              relay 55
11        relay 6              relay 22              relay 38              relay 54
12        relay 5              relay 21              relay 37              relay 53
13        relay 4              relay 20              relay 36              relay 52
Connection to the OMC-R
The serial cable connecting the OMC-R to the V02-64R is a straight-through 25-pin RS232 cable. The V02-64R RS232 connector has a DCE pinout.
A DIP switch selector defines the serial link transmission parameters. The default parameters are the following:
- 9600 baud
- even parity
- 8 data bits + 1 stop bit
- RTS/CTS off
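For illustration, the matching line discipline can be set from a Solaris shell when testing the link manually (a minimal sketch only: the device path /dev/ttya is an assumption, and in normal operation the OMC-R software driver configures the port itself):

   # 9600 baud, even parity, 8 data bits, 1 stop bit, no RTS/CTS flow control
   stty 9600 parenb -parodd cs8 -cstopb -crtscts < /dev/ttya
   # display the resulting settings for verification
   stty -a < /dev/ttya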
Figure 8-14: Direct link between the OMC-R and the BSC2G without modem
[Figure: the OMC-R is connected to the BSC through a V24 cable (DB25 - DB25) and an NTQQ0206 RS-422 cable (DB37 - DB37, 3 m)]
Figure 8-15: Direct link between the OMC-R and the BSC2G using modem
[Figure: OMC / BSC direct link using modem. The OMC-R is connected to BSC Port 0 through a Blackbox kit converting RS422/V.11 to RS232/V.24, using a V24 cable (DB25 - DB25), an NTQQ0206 cable (DB37 - DB37, 3 m), an A0730409 cable (3 m) and a 4-wire A0696646 cable (100 m)]
Appendix A: Software Load line-up
To check the OMC-R system V18 release software load line-up, refer to V18.0 External Release Definition, PE/BSS/DJD/022189.
(1) The Solaris media kit is to be ordered along with the hardware. Refer to the OMC-R Modelled Offer Provisioning Guide, to be defined.
(2) The SunLink OSI software product requires a license, which is bundled with the order code of the OMC-R server. Refer to the OMC-R Modelled Offer Provisioning Guide, to be defined.
(3) The MDM software is installed by default on the SF V8x0 based OMC-R server, whether or not the GSM BSS network includes GPRS access elements (PCUSN). The MDM software requires a license, which needs to be ordered for royalty tracking purposes. Refer to the OMC-R Modelled Offer Provisioning Guide, to be defined.
(1) The Solaris media kit is to be ordered along with the hardware. Refer to the OMC-R Modelled Offer Provisioning Guide, to be defined.
(2) The MDM software is required on the workstation only if the workstation is used as a dedicated PCU OAM server. The MDM software requires a license, which needs to be ordered for royalty tracking purposes. Refer to the [R12] OMC-R Modelled Offer Provisioning Guide, PE/PFO/APP/5914.
(3) The Sun http server is required on the workstation only if the workstation serves as a RACE server.
IP Addresses planning
The table below details the number of Ethernet ports and IP addresses for each device on the OMC-R LAN.
Table 10-1: IP Addresses planning

Device                               IP @   Ethernet ports   Notes
SF V8x0 Integrated OMC-R server       3           2          3 IP @ are required because of the IPMP implementation
Dual T3 Storage Array                 2           2          One IP @ per T3
Client workstation                    1           1
RACE Client                           1           1          PC based equipment
Terminal Server or Console Server     1           1
Printer                               1           1
PCUOAM Server                         1           1          Whenever the PCUOAM server is not included with the integrated OMC-R server
SDO Server                            1           1          Whenever the SDO server is not included with the integrated OMC-R server
IPMP Implementation
This section gives the IPMP implementation details supported on the Integrated OMC-R server. The OMC-R server uses two PCI Quad Fast/Gigabit Ethernet (QFE/QGE) adapter cards with 1+1 redundancy, providing 8 physical ports. Implementing IPMP with traffic handled on a single subnet (1 group, 2 interfaces, 3 IP addresses), the scheme is as follows:
QFE/QGE Card 1: qfe0/qge0 = OMC-R Server - OMC-R Clients & BSS NE
  <IpAddress0>  = the data IP address of the qfe0 interface
  <IpAddress0T> = the test IP address of the qfe0 interface
QFE/QGE Card 2 (redundant): qfe4/qge4 = OMC-R Server - OMC-R Clients & BSS NE
  <IpAddress4T> = the test IP address of the qfe4 interface

Figure 10-2: SF V8X0 IPMP based interface configuration
[Figure: SF V8x0 and T5140 servers; on each server, Network Interface Boards 1 and 2 each provide an Ethernet 100/1000 Mb/s port in the IPMP group, and the remaining ports are not used]
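For illustration, the scheme above maps onto the standard Solaris IPMP ifconfig commands roughly as follows (a minimal sketch only: the host names and the group name omc_ipmp are hypothetical, and the actual configuration is applied by the OMC-R installation procedure):

   # active interface: data address plus a non-failover test address
   ifconfig qfe0 plumb omcserver netmask + broadcast + group omc_ipmp up
   ifconfig qfe0 addif omcserver-qfe0-test deprecated -failover netmask + broadcast + up
   # redundant interface: test address only, flagged as standby
   ifconfig qfe4 plumb omcserver-qfe4-test deprecated -failover netmask + broadcast + group omc_ipmp standby up
   # check the resulting configuration
   ifconfig -a

With this configuration, the in.mpathd daemon monitors both interfaces through the test addresses and migrates the data address to qfe4 if qfe0 fails.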
Table 10-2: IP Addresses planning for ST2510 Disk Array

Network    Number of IP addresses   Details
External   2                        CAM ports connection to server
Internal   4 (see Note 1)           Connection to T5140 servers iSCSI ports
Note 1: The internal network is configured automatically by CIUS on the server and the disk array. It provides direct connections between the server and the disk array, with a set of IP addresses that must not be accessible from the outside unless a special routing table is configured on the server. This network must have a subnet prefix different from any other subnet accessible from the server. The customer must only provide this prefix, as the subnet suffixes are assigned automatically to ports on both the T5140 server and the ST2510 Disk Array, as shown in Table 10-3, to interconnect both devices as shown in Figure 10-4.
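For illustration, the isolation of the internal prefix can be checked from the server (a minimal sketch only: the prefix 192.168.128.0/24 is a hypothetical example, not a mandated value):

   # confirm the internal prefix is reachable only through the direct
   # server-to-array interfaces and is not routed elsewhere
   netstat -rn | grep '192.168.128'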
List of Terms
ASCII
American Standard Code for Information Interchange
CD
Compact Disc
CPU
Central Processing Unit
DCN
Data Communication Network
DHCP
Dynamic Host Configuration Protocol
DNS
Domain Name Server
FTP
File Transfer Protocol
GB
GigaByte
GPRS
General Packet Radio Service
GSM
Global System for Mobile communications
GUI
Graphical User Interface
IP
Internet Protocol
IPMP
Internet Protocol Multi Pathing
Mb
Megabit
MB
MegaByte
MDM
Multiservice Data Manager
MDP
Management Data Provider
MHz
MegaHertz
MSC
Mobile Switching Center
NE
Network Element
NFS
Network File System
NMS
Network Management System
NTP
Nortel Technical Publication, or Network Time Protocol (the context differentiates the two variants)
OAM
Operation, Administration and Maintenance
OSS
Operations Support Systems
PC
Personal Computer
PEC
Product Engineering Code
RAM
Random Access Memory
RTHP
Recommended Throughput computed with Highest values of dimensioning Parameters
SF
Sun Fire
sec.
Second
SGSN
Serving GPRS Support Node
SIG
SS7/IP Gateway
SNMP
Simple Network Management Protocol
Specs
Specifications
TCP/IP
Transmission Control Protocol / Internet Protocol