
ASON/GMPLS Optical Control Plane Tutorial

MUPBED Workshop at TNC2007, Copenhagen


Acknowledgement: The author thanks all colleagues from the OIF for their work, which has been the basis for this tutorial. The responsibility for the content of this tutorial is with the author.
Hans-Martin Foisel, T-Systems / Deutsche Telekom
OIF Carrier WG Chair, OIF Vice President
www.oiforum.com

ASON/GMPLS Tutorial Outline


Introduction
Requirements & Architecture
Signaling
Routing
Control Plane Management
OIF Interoperability Demonstrations
Control Plane Applications / Use Cases
Concluding remarks

Optical Control Plane Goals


[Figure: end-to-end transport network spanning access, edge, metro core, and long-haul core; client interfaces include E-1, DS-1, E-3, DS-3, ATM, FR, 10/100bT, Ethernet, OC-3/12/48/192, STM-1o/4/16/64, FC and FICON; the management plane (TMF-814) and the IP/optical control plane sit above the transport layers]

Offer real multi-vendor and multi-carrier inter-working
Enhance service offering with Ethernet and IP / Optical
Provide end-to-end service activation
Integrated cross-domain provisioning of switched connection services
Provide accurate inventory management

Realizing Optical Control Plane Goals


Framework Elements
Robust and scalable transport infrastructure that facilitates carriage of desired services
Management plane that complements the control plane in facilitating deployment and management of services
Control plane architecture spanning user and provider networks that supports multiple provider business models and user service requests
Control plane protocols based upon existing and emerging protocols of the data world
Robust Data Communications Network architecture and mechanisms that enable interaction of the protocols running at each node

Intelligent Transport Networks introduce ...


A distributed Control Plane
Signaling protocols for dynamic setup and teardown of connections

Routing protocols for automatic routing

Building on concepts/protocols from the data world

Key Concepts Derived from the Data World


Distributed processing/knowledge/storage
Directory services
E.g., DNS, X.500

Open Distributed Processing

Standardized route determination and topology dissemination protocols


Routing information exchange mechanisms
E.g., RIP, OSPF, BGP, IS-IS/ES-IS

Flexibility in binding time decisions


Difference between provisioning and auto-discovery

Security based upon logical versus physical barriers


E.g., authentication, integrity, encryption

Differentiate between provisioning and more dynamic connection management

Survivability


Distributed restoration using signaling

Leveraging Existing Protocol Solutions


Caveats
Internet serving a community of users with common goals and mutual trust: the classical Internet architecture
Protocol solutions developed for the classical Internet bring along their underlying principles and architectural assumptions
Commercialization of the Internet: more business-critical infrastructure and availability requirements

Transport business & operational requirements:


Control plane architecture enabling boundaries for policy and information sharing

Optical Control Plane Capabilities


Optical Control Plane (distributed intelligence) working alongside the Management System

[Figure: control plane reacting to bandwidth requests or releases from clients and to network failures]

Control Plane functions: Signalling, Routing, Discovery

Benefits: improved bandwidth usage/efficiency, scheduled/unscheduled BoD, OSS simplification, autodiscovery

Related Standards Development Organizations


ITU-T: Recommendations
IETF: RFCs (GMPLS protocols)
OIF: Implementation Agreements
TMF: Solution Sets
MEF: Technical Specifications (Ethernet services)

Topics covered across these bodies include ASON architecture & requirements, ASON/GMPLS E-NNI and UNI, control plane management, interoperability results, use cases, and signalling for Ethernet services

Protocols and Architectures


Control Plane capabilities are implemented in protocols, whose elements can be combined to support different architectures/implementations. Different SDOs contribute various protocol elements and architectural components.

[Figure: control plane solutions assembled from IETF RFCs, OIF Implementation Agreements, and ITU-T Recommendations]

Control Plane Specifications - Example


Requirements & Architecture: ITU-T G.8080; IETF RFC 3495; TMF 509
Auto-Discovery: ITU-T G.7714, G.7714.1; IETF RFC 4204, RFC 4207
Signaling: ITU-T G.7713, G.7713.2; IETF RFC 3473, RFC 3474, RFC 3946, RFC 4208; OIF UNI 1.0, UNI 2.0, E-NNI 1.0, E-NNI 2.0
Routing: ITU-T G.7715, G.7715.1, G.7715.2; IETF RFC 4202; OIF E-NNI OSPF 1.0
DCN/SCN: ITU-T G.7712
Management: ITU-T G.7718, G.7718.1; IETF GMPLS MIB RFCs; TMF 814

11

Optical Internetworking Forum (OIF)


Mission: To foster the development and deployment of interoperable products and services for data switching and routing using optical networking technologies The OIF is the only industry group that brings together professionals from the data and optical worlds Its 100+ member companies represent the entire industry ecosystem:
Carriers and network users Component and systems vendors Testing and software companies

12

OIF Technical Committee Working Groups

13

Optical Control Plane

Implementation Agreement Status: OIF Control Plane IA Dashboard


Signaling
OIF-UNI-01.0-R2 OIF-UNI-01.0-R2-RSVP OIF-ENNI-01.0

Routing
OIF-ENNI-01.0-OSPF

Security
OIF-SEP-01.0 OIF-SEP-02.1 OIF-SMI-01.0 OIF-SMI-02.1

Management
OIF-CDR-01.0 Control Plane Logging and Auditing with Syslog

OIF-UNI-02.0 OIF-UNI-02.0-RSVP OIF-ENNI-02.0

Draft

Straw Ballot

Letter Ballot

Approved IA

http://www.oiforum.com/public/impagreements.html
14

ASON/GMPLS Tutorial Outline


Introduction
Requirements & Architecture
Signaling
Routing
Control Plane Management
OIF Interoperability Demonstrations
Control Plane Applications / Use Cases
Concluding remarks

15

Business deployment considerations

16

Optical Control Plane

Business Deployment Considerations

Optical control plane viability depends upon supporting business as well as technical requirements:
Service Provider business models Commercial and operational practices Services and network infrastructure heterogeneity Control and management plane heterogeneity Network and equipment interoperability

Forms foundation of fundamental optical control plane architecture principles


17

Service Provider Business Models


Internet Service Provider (ISP)
Delivers IP-based services Owns all of its infrastructure (i.e., including fiber and duct to the customer premises) Leases some of its fiber or transport capability from a third party

Classical Service Provider


Offers L1/L2/L3 services Owns its transport network infrastructure Sells services to customers who may resell to others

Carriers Carrier (Service Broker)


Provides optical networking services May not own transport infrastructure(s) supporting those services (connection carried over third party networks)

Research networks (NRENs, GEANT2, Internet2)


18

Commercial & Operational Practices (1)


Enable protection of commercial business operating practices and resources from external scrutiny or control
A network operator is likely to support a number of user services networks; a trust relationship cannot be assumed between the network and these users (or among the various users)
A network operator will not relinquish control of its resources outside of its administrative boundaries, as the network is a prime asset

Support a pay for service commercial model


Network operators differentiate their services by defining their own branded bundles of functionality, service quality, support, and pricing plans Provided value added services must be verifiable and billable in a value preserving way
19

Commercial & Operational Practices (2)


Protect security and reliability of optical transport network
Optical transport network connection persistence must not be affected by failures of its control plane, including failures of the control communications network
Signaling Communications Network (SCN)

The network must be safeguarded against attacks that may compromise its control plane, or seek unauthorized use of its resources
Control plane security

20

Services Heterogeneity
A wide range of services may be offered; e.g.,
Classical data (e.g., best effort Internet, Frame Relay) Ethernet (e.g., EPL, EVPL, EPLAN, EVPLAN) L1/L2/L3 Virtual Private Network (VPN) SONET/SDH switched connection (e.g., STS-n, VC-n) OTH switched connection (e.g., ODU, OCh)

Many different service deployment scenarios; e.g.,


All services interface at the IP level Various services interface at L1, L2, and L3 Various options for L1 and L2 topologies and re-configurability in access, metro, and core networks

21

Network Infrastructure Heterogeneity


Extremely diverse network of networks, with widely varying topologies, deployed technologies, services/applications supported Support operator-specific criteria including cost, performance, and survivability characteristics
Breadth of existing and emerging data plane technologies Choice of infrastructure granularity options Flexible capacity adjustment schemes Range of single- and multi-layer survivability strategies Differing infrastructure evolution strategies

22

Control & Management Heterogeneity


Control plane-based subnetworks Management plane-based subnetworks (with various operations support system environments) Hybrid control plane / management plane scenarios; e.g.,
Use of signaling protocols in combination with centralized route calculation Mix of control plane and management plane based subnetworks
Network Provider A
EMS 1 NMS

EMS 2

Management Plane Based Connection Control


SNC 1

SNC 2

Control Plane Based Connection Control

E X A M P L E

Network Connection

23

Optical Control Plane

Network Operator Deployment Observations


Optimal network layering, convergence choices, equipment selection dependent upon multiple factors
Network size, geography, projected growth Service offerings portfolio, QoS committed in SLAs Cost, performance, resiliency trade-offs Operations support system environment Whether services traverse multiple operator domains

Differing network operator transport infrastructure, control & management deployments and evolution strategies Optical control plane architecture must support multidimensional heterogeneity

24

Heterogeneity & Research Projects

NOBEL

25

Fundamental optical control plane architecture principles

26

Optical Control Plane

Fundamental Architecture Principles (1)


Decouple services from service delivery mechanisms
Wide range of network infrastructure options Network operator specific optimizations

Decouple QoS from realization mechanisms


Wide range of survivability options Network operator specific approaches

Introduce call construct, which reflects a service association that is distinct from infrastructure/realization mechanisms

27

Optical Control Plane

Fundamental Architecture Principles (2)


Provide boundaries of policy and information sharing
Range of network operator business models Varying trust relationships among users and providers, among users, among providers Targeted solutions, scalability considerations (scope of information dissemination), etc.

Establish modular architecture with interfaces at policy decision points

28

Optical Control Plane

Fundamental Architecture Principles (3)


Provide for various distributions of control functionality among physical platforms
Different distributions of routing and signaling control Fully centralized to fully distributed system designs

Decouple topology of the controlled network from that of the network supporting control plane communications (SCN)
The transmission medium may be different for control plane messages and transport plane data

Identifiers to distinguish transport resources from, and among, signaling and routing control entities, and SCN addresses

29

ASON architecture and standards status

30

Optical Control Plane


ITU-T ASON Recommendation Framework
Rec. G.8080 Automatically Switched Optical Network (ASON)

Protocol-neutral Recommendations:
DCN/SCN - G.7712; Signaling - G.7713; Auto-discovery - G.7714; Routing - G.7715; Initialization - G.7716; Management framework - G.7718 (information model - G.7718.1)

Protocol-specific Recommendations:
Signaling - G.7713.1 (PNNI-based), G.7713.2 (GMPLS RSVP-TE), G.7713.3 (GMPLS CR-LDP); Auto-discovery - G.7714.1 (discovery for SDH/OTN); Routing - G.7715.1 (link state), G.7715.2 (remote path query)

31

Optical Control Plane

ITU-T ASON Architecture


ITU-T G.8080/Y.1304, Architecture of the Automatically Switched Optical Network
First version approved Nov. 2001; several subsequent Amendments; first major revision approved June 2006
Subsumes and deprecates ITU-T Rec. G.807, Requirements for Automatically Switched Transport Networks, approved July 2001

Architecture considers business and operational aspects of realworld deployments


Call and connection separation, connection persistence, customer/network address space isolation, domain constructs, reference points and interfaces Leverages transport layer network constructs utilized in all transport network architecture and equipment Recommendations Applicable to all connection-oriented transport networks (whether circuit or packet)

32

ITU-T ASON Architecture


Calls and Connections
Objective: Support ability to offer enhanced/new types of transport services facilitated by:
Automatic provisioning of transport network connections Span one or more managerial/administrative domains

Involves both a Service and Connection perspective


Call : Support the provisioning of end-to-end services while preserving the independent nature of the various businesses involved Connection : Automatically provision network connections (in support of a service) that span one or more managerial/administrative domains

33

ITU-T ASON Architecture


Domains
ASON domains represent generalization of existing traditional concepts
Transport definitions of administrative/management domains Internet administrative regions

Domains may express differing:


Administrative and/or managerial responsibilities Trust relationships, addressing schemes Distributions of control functionality Infrastructure capabilities, survivability techniques, etc.

Domains are established by network operator policies

34

ITU-T ASON Architecture


Interfaces (1)
Service demarcation points are where call control is provided. Inter-domain interfaces are service demarcation points.
[Figure: the UNI as service demarcation point between user equipment (IP/MPLS, Ethernet/ATM/FR, SONET/SDH/OTN routers) and the provider network; call control on both sides of the UNI; the provider management system and optical control plane sit above the transport plane]

Design modularized around open interfaces at domain boundaries: UNI, E-NNI, I-NNI
35

ITU-T ASON Architecture


Interfaces (2)

[Figure: UNIs between clients and Providers A/B, E-NNIs between providers and between domains, I-NNI within a domain]

UNI separates the concerns of the user and provider:
"Modularity is good. If you can keep things separate, do so." (RFC 1958, Section 3.6)
Objects referenced are User objects, and are named in User terms

UNI enables:
Client-driven end-to-end service activation
Multi-vendor inter-working
Multi-client: IP, Ethernet, TDM, etc.
Multi-service: SONET/SDH, Ethernet, etc.
Service monitoring interface for SLA management

ITU-T ASON Architecture


Interfaces (3)

[Figure: UNI-C/UNI-N pairs at the UNIs between clients and Providers A/B, E-NNI between providers and between domains, I-NNI within a domain]

E-NNI enables:
End-to-end service activation
Multi-vendor inter-working
Multi-carrier inter-working
Independence of survivability schemes for each domain

I-NNI supports:
Intra-domain connection establishment
Explicit connection operations on individual switches
37

ITU-T ASON Architecture


Call Control & Interfaces

Call state is maintained at network access points, and at key network transit points where it is necessary or desirable to apply policy
Calls that span multiple domains are comprised of call segments, with call control provided at service demarcation points (UNI/E-NNI)
One or more connections are established in support of individual call segments, with the scope of connection control typically limited to a single call segment
[Figure: a call from a client across Domain A and Domain B decomposes into UNI, Domain A, E-NNI, Domain B, and UNI call segments, each supported by one or more connections]

38

Components of Control Plane enabled Network Domains


[Figure: a control-plane-enabled network domain comprises the management plane (including control plane management), the DCN, the control plane, and the data plane]

39

Optical Control Plane Service


Permanent Connection
All intra-/inter-domain calls and connections are provisioned by Management Plane actions
[Figure: a permanent connection spans client domain C1, transport network provider domains TN1 and TN2, and client domain C2; every segment is provisioned via the respective management plane over the DCN]

C: Client network domain; TN: Transport Network provider domain

Optical Control Plane Service


Soft Permanent Connection
Management plane of a transport network provider domain is initiating a call/connection
[Figure: SPC initiated by the management plane of a transport network provider domain; permanent connections are provisioned in client domains C1 and C2, while a switched connection across TN1 and TN2 is established by the control plane over the E-NNI]

Soft Permanent Connection (SPC)
C: Client network domain; TN: Transport Network provider domain

Optical Control Plane Service


Switched Connection
Management plane of a client domain is initiating a call/connection
[Figure: switched connection initiated from client domain C1: the request enters the network over the UNI, crosses TN1 and TN2 via the E-NNI, and reaches C2 over the UNI; control plane functions in each domain establish the switched connection]

C: Client network domain; TN: Transport Network provider domain

G.805 Transport Foundation

43

G.805 Foundation Elements Transport Resources


Introduction of automated control doesn't remove/change the attributes of transport resources
Control Plane needs to be able to configure the same attributes

Introduction of automated control doesn't modify the functional components that exist within the transport plane

44

Transport Network/Equipment Architecture


Informal Specification Approaches
Described in terms of network elements, facilities, and cross-connections
Facilities identified in terms of their physical layer characteristics
Cross-connections between constituents of facilities or embedded facilities
DS1 Service example

[Figure: DS1 service example: DS1 - 3:3 DCS - DS3 - 3:1 DCS - SONET - regenerator - SONET - 3:1 DCS - SONET - 3:3 DCS - DS1]

45

Transport Network/Equipment Architecture


Informal Specification Approaches
Issues
Model specific to the technologies used in the NEs
Difficult to understand network topology without understanding details of the NEs
Subject to differing interpretations of equipment specifications/behaviors arising from natural language descriptions
Usage of different terminology; e.g., in doing a functional decomposition, different specifiers may group functionality in different ways but use the same term to denote the functional block

Development of more formalized specification techniques initiated during 1988 time frame
46

Transport Network Constructs


Formal Specification Techniques
Recognize new challenges of emergent multi-carrier, multi-vendor telecommunications environment
Increasingly complex networks and behaviors, arising from deployment of multi-technology networks & equipment No single network architecture, or single set of network elements, that suited all operators

Better support for multi-carrier/multi-vendor interoperability
Unambiguous specifications that don't impose unnecessary architectural constraints
Network operator transport infrastructure technology deployment choices and evolution strategies Network equipment provider innovation re equipment types

47

Transport Network Constructs

Formal Specification Techniques - G.805


Describes the generic characteristics of networks using a common language
Transcends technology and physical architecture choices Provides a view of functions or entities that may be distributed among a number of equipments

Defines elements that support the modeling of topological and functional concepts
Topology refers to how elements of the network are interconnected Functions refer to how signals are transformed during their passage through the network

Defines small number of architectural components that may be interconnected to represent various network/equipment configurations

48

Transport Network Constructs


Topological G.805 Layers
A layer is defined in terms of its set of signal properties - its characteristic information
Networks can be represented in terms of a stack of client/server relationships
Helps manage the complexity created by the presence of different types of characteristic information in networks
Allows the management of each layer to be similar

49

Transport Network Constructs


Topological G.805 Example
DS3 client carried over an STM-N signal:

DS3 client layer: DS3 signal (DS3 Client Layer Network)
VC-3 path layer: DS3 payload mapped into a C-3 container, VC-3 path overhead inserted (VC-3 Path Layer Network)
Multiplex section layer: alignment and multiplexing of the VC-3s, multiplex section overhead generated, mapped into the STM-1 frame (Multiplex Section Layer Network)
Regenerator section layer: regenerator section overhead generated, each STM-1 mapped and multiplexed into the STM-N frame (Regenerator Section Layer Network)
Physical media layer: conversion into the STM-N physical interface (Physical Media Layer Network)

50

Transport Network Constructs

DS-1 Service Architecture & Equipment


[Figure: DS-1 service decomposed into trails and connections across muxes, 3:1 and 3:3 DCSs, and a regenerator: a DS-1 path trail supported by DS-1 path and line connections, DS-3 path trails and connections, STS-1 trails and connections, SONET line trails and connections, and section trails/connections over optical trails]

51

Transport Functional Modeling


Topological G.805 Partitioning
Even for a single layer, complexity arises from the many different network nodes and the connections between them
Partitioning is defined as the division of layer networks into separate subnetworks that are interconnected by links representing the available transport capacity between them
Helps manage complexity by using the principle of recursion to tailor the amount of detail to be understood at a particular time according to the need of the viewer
Allows the management of each partition to be similar
52

Transport Functional Modeling


Topological G.805 Partitioning Example

[Figure: a DS3 layer network partitioned (horizontally) into subnetworks interconnected by links]

53

G.805 Transport Network Constructs


Architectural Component Definitions

Functional Entities:
Adaptation: adapts the client signal into a form suitable for the server layer
Termination: where information concerning the integrity and supervision of the adapted information may be generated and added, extracted and analyzed

Topological Entities:
Trail: provides an end-to-end connection offering the means to check transport quality
Network Connection: same scope as a trail, but without ensuring integrity
Link: represents available transport capacity between subnetworks (static)
Link Connection: transfers information transparently across a link
Subnetwork: describes flexible connectivity
Subnetwork Connection: transfers information across a subnetwork

Points:
Termination Connection Point (TCP): any binding involving a termination function source or sink
Connection Point (CP): any binding involving an adaptation source or sink
Access Point (AP): delimits a layer network
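To make these definitions concrete, here is a minimal Python sketch of how a few of the G.805 constructs relate; the class and attribute names are illustrative assumptions, not a normative model.

```python
# Illustrative only: a toy object model of selected G.805 constructs.
# Class/attribute names are assumptions for this sketch, not normative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class LinkConnection:
    layer: str                       # transfers info transparently across a link

@dataclass
class SubnetworkConnection:
    layer: str                       # transfers info across a (flexible) subnetwork

@dataclass
class NetworkConnection:
    layer: str                       # same scope as a trail, but no integrity check
    segments: List[object] = field(default_factory=list)

@dataclass
class Trail:
    layer: str                       # network connection + trail termination (integrity check)
    network_connection: NetworkConnection

# A server-layer trail (e.g. VC-3) supports a client-layer link connection (e.g. DS3)
vc3_trail = Trail("VC-3", NetworkConnection("VC-3",
    [SubnetworkConnection("VC-3"), LinkConnection("VC-3"), SubnetworkConnection("VC-3")]))
ds3_link_connection = LinkConnection("DS3")   # realized via DS3/VC-3 adaptation over vc3_trail
print(len(vc3_trail.network_connection.segments))  # 3
```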

54

Transport Functional Entities


Trail Termination
Trail Termination Source: adds overhead (OH) to the input information (payload) to allow the integrity of the transfer to be monitored
Trail Termination Sink: removes the overhead, outputs the remaining payload information, and determines the integrity of the transfer
The characteristic information for a trail is the payload plus the overhead

[Figure: payload enters the trail termination source, is carried together with the OH over the network connection, and leaves the trail termination sink as payload]

55

Transport Functional Entities


Adaptation
Adaptation Source: converts client layer characteristic information (CI) into a form suitable for transport over a trail in the server layer network; this is termed Adapted Information
Adaptation Sink: converts the adapted information from the server layer network back into the client layer characteristic information

[Figure: client layer CI is adapted, carried as client layer adapted information over a server layer trail, and converted back; the server layer CI carries the adapted information]

56

G.805 Transport Network Constructs


Multi-layer Architecture: DS3 over STM-N

[Figure: DS3 client signal enters the VC-3/DS3 adaptation (AP); the VC-3 trail runs between VC-3 trail terminations (TCPs); the VC-3 network connection is composed of VC-3 subnetwork connections and link connections across VC-3 subnetworks (CPs); each VC-3 link connection is in turn supported by the STM-1 MS/VC-3 adaptation, the STM-1 MS trail termination and the STM-1 trail, and so on down the layers]

57

Key Observations
Each layer network has its own topology
NEs may have different neighbors in different layer networks
NEs do not necessarily appear in all layer networks
NEs may perform different functions within a layer network, or in different layer networks

Link connections in a client layer are created by configuring trails and adaptation functions in a server layer
Differences in server layer networks are transparent to the client

58

Control Components

59

G.8080 Control Plane Constructs


Topological Entity Definitions
Subnetwork Point (SNP): abstraction of a G.805 CP or TCP; SNPs are associated to form a connection
Subnetwork Point Pool (SNPP): a set of subnetwork points that are grouped for the purposes of routing
SNPP link: a link associated with SNPPs in different subnetworks
Routing Area: defined by a set of subnetworks, the SNPP links that interconnect them, and the SNPPs representing the ends of the SNPP links exiting that routing area
A routing area may contain smaller routing areas interconnected by SNPP links. The limit of subdivision results in a routing area that contains a single subnetwork.
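A minimal sketch of these topological entities, assuming a simple containment model; the names and fields below are illustrative only, not the normative G.8080 information model.

```python
# Illustrative only: toy representation of SNP, SNPP, SNPP link and Routing Area.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SNPP:                          # pool of SNPs grouped for routing
    name: str
    snps: List[str] = field(default_factory=list)

@dataclass
class SNPPLink:                      # link between SNPPs in different subnetworks
    a_end: SNPP
    z_end: SNPP

@dataclass
class RoutingArea:                   # subnetworks + interconnecting SNPP links (+ child RAs)
    name: str
    subnetworks: List[str] = field(default_factory=list)
    snpp_links: List[SNPPLink] = field(default_factory=list)
    children: List["RoutingArea"] = field(default_factory=list)

inner = RoutingArea("RA.1.1", subnetworks=["SN-A"])
outer = RoutingArea("RA.1", subnetworks=["SN-A", "SN-B"],
                    snpp_links=[SNPPLink(SNPP("SN-A/east", ["snp-1"]), SNPP("SN-B/west", ["snp-7"]))],
                    children=[inner])
print(outer.children[0].name)   # RA.1.1 is contained in RA.1
```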

60

G.8080 Control Plane Constructs


Topological Entity Relationships

[Figure: relationship between the architectural entities in the transport plane and control plane: control plane SNPs correspond to transport plane CPs and TCPs; an SNC corresponds to a subnetwork connection, an SNP link connection to a link connection, and SNPPs group the SNPs at the ends of an SNPP link; adaptation, trail termination and trails remain transport plane entities]

SNP: Subnetwork Point; SNPP: SNP Pool

61

G.8080 Control Plane Constructs


Control plane architecture is described in terms of components and interfaces
Components represent logical functions (abstract entities) rather than physical implementations; the actual location/distribution of the components is not constrained
To facilitate the construction of different scenarios, the architecture leverages the Unified Modeling Language (UML)
Not all of the reference points (UNI, E-NNI) need to be instantiated
A single instantiation of a G.8080 control plane may control multiple layer networks, with an explicit definition of the interlayer interaction (including none)

62

Introduction to ASON Components


[Figure: ASON control plane components with monitor, policy and config ports]

LRM - Link Resource Manager; CCC - Calling/Called Party Call Controller; NCC - Network Call Controller; CC - Connection Controller; RC - Routing Controller; PC - Protocol Controller; DA - Discovery Agent; TAP - Termination & Adaptation Performer; TP - Traffic Policing Component

Link Resource Manager


Responsible for the control-plane local link connection inventory
Resources provided through configuration or discovery
Receives requests for resources from the Connection Controller
Provides information to Routing to facilitate topology advertisements

64

Call Controller
Responsible for providing a service across the network
Orchestrates components to meet the requested service
Different domains can have different policies
Invoked by a management request or by signaling messages
Interacts with peer Call Controllers via the Protocol Controller

65

Connection Controller
Responsible for establishing connections across a domain
Requests the route to use from the Routing Controller
Requests specific local link resources from the LRM
Interacts with peer Connection Controllers via the Protocol Controller

66

Routing Controller
Responsible for providing paths between two points in the network
Maintains the topology view
Paths are calculated to meet service constraints (e.g., signal type, diversity)
Interacts with peer Routing Controllers via the Protocol Controller

67

Protocol Controller(s)
Responsible for providing protocol-specific behavior
Can be separate per client function (e.g., CCC/NCC and CC), or a merged function

68

Example Component Interactions


[Figure: example component interactions: Call Request and Call Accept are exchanged between NCCs; Connection Requests and Connection Indications are exchanged among CCs, which invoke the path computation function in the routing component]
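The interaction sketched in the figure can be illustrated with a few toy Python classes; the interfaces below are simplified assumptions for this tutorial context, not the G.8080 component interfaces themselves.

```python
# Minimal, assumed sketch of call/connection setup across NCC, CC, RC and LRM.
class LinkResourceManager:
    def __init__(self, free_links):
        self.free = set(free_links)
    def allocate(self, link):
        self.free.discard(link)                 # hand a link connection to the CC
        return link

class RoutingController:
    def __init__(self, topology):
        self.topology = topology                # {node: [neighbor, ...]} intra-domain view
    def compute_path(self, src, dst):
        # trivial breadth-first search standing in for real path computation
        frontier, seen = [[src]], {src}
        while frontier:
            path = frontier.pop(0)
            if path[-1] == dst:
                return path
            for nxt in self.topology.get(path[-1], []):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(path + [nxt])
        return None

class ConnectionController:
    def __init__(self, rc, lrm):
        self.rc, self.lrm = rc, lrm
    def setup_connection(self, src, dst):
        path = self.rc.compute_path(src, dst)   # ask the RC for a route
        return [self.lrm.allocate((a, b)) for a, b in zip(path, path[1:])]

class NetworkCallController:
    def __init__(self, cc):
        self.cc = cc
    def call_request(self, src, dst):
        # call-level policy would be applied here, then connection setup is driven
        return {"call": (src, dst), "connections": [self.cc.setup_connection(src, dst)]}

topo = {"A": ["B"], "B": ["C"], "C": []}
ncc = NetworkCallController(ConnectionController(
    RoutingController(topo), LinkResourceManager({("A", "B"), ("B", "C")})))
print(ncc.call_request("A", "C"))
```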

69

Identifiers

70

Identifiers Names & Addresses


An identifier provides a set of characteristics for an entity that makes it uniquely recognizable
Name: identifies an entity
Unique only if it is unique within the context, or namespace, in which it is being used
The same entity may have more than one name in different namespaces

Address: identifies a position in a specific topology


Unique for the topology
Typically hierarchically composed; this allows address summarization for locations that are close together

Addresses should reflect connectivity, not identity
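As a small illustration of hierarchical addresses and summarization, using IP prefixes purely as an example of a hierarchical address space (the addresses below are made up):

```python
# Locations that are "close together" share a prefix, so one summary advertisement
# can stand in for many individual addresses.
import ipaddress

node_addresses = [
    ipaddress.ip_address("10.1.1.1"),   # nodes within routing area 10.1.0.0/16
    ipaddress.ip_address("10.1.2.7"),
    ipaddress.ip_address("10.1.9.3"),
]
summary = ipaddress.ip_network("10.1.0.0/16")
print(all(addr in summary for addr in node_addresses))   # True: one summary covers all three
```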

71

Categories of Identifiers
Management plane identifiers Transport plane identifiers (G.805) Identifiers for transport resources that are used by the control plane Identifiers for Signaling & Routing Protocol Controllers (PCs) Identifiers for locating PCs in the SCN

Identifiers to distinguish transport resources from, and among, signaling and routing control entities, and SCN addresses

72

Identifier Spaces
[Figure: identifier spaces: management plane identifiers (e.g., CTP, TTP); DCN addresses (MCN/SCN); control plane identifiers for signaling and routing protocol controllers (PC IDs) and for G.8080 components (CCC, NCC, CC, RC), plus UNI/E-NNI transport resource identifiers (TRIs), SNPP/SNP IDs and node IDs; data plane identifiers (e.g., G.805 CP, TCP)]

73

Relationship with GMPLS Architecture

74

Relationship with GMPLS Architecture Models


Differing terminology and descriptive techniques:
More classical MPLS terminology (e.g., LSP) as compared to transport functional modeling terminology
Natural language architecture descriptions as compared to the formalized control plane component architecture

Peer model (also called the integrated model): corresponds to the ASON architecture with no UNI or E-NNI interfaces instantiated
Assumes a community of users with mutual trust and shared goals
No inherent policy or security boundaries
Routing and signaling protocols flow within the network without any filtering or other constraints imposed

75

Relationship with GMPLS Architecture Models


Overlay model: most closely corresponds to the ASON architecture with UNI (and no E-NNI interfaces instantiated)
Edge nodes are not aware of the topology of the core nodes (the core acts more as a closed system)
Core and edge nodes may have a routing protocol interaction for exchange of reachability information to other edge nodes

Augmented model: most closely corresponds to an ASON architecture in which E-NNI interfaces have been instantiated
Reflects the case of policy-driven exchange of routing and topology information between core and edge nodes

76

ASON/GMPLS Tutorial Outline


Introduction
Requirements & Architecture
Signaling
Routing
Control Plane Management
OIF Interoperability Demonstrations
Control Plane Applications / Use Cases
Concluding remarks

77

Signaling in Transport Networks


Essentially a Management Plane function
Distributed Connection Management

Signaling has existed for many years in telephony, ISDN, ATM, and MPLS. Signalling is extended for transport networks due to
Fixed granularities defined by the multiplexing hierarchy
Protection functions in the data plane
Separation of the data plane from the control and management planes
Addressing/naming: separation of spaces between the data plane and the control plane

Connection centric rather than Protocol centric


Connection exists even if control plane ceases

78

Protocols and Architectures


Signaling capabilities are implemented in protocols, whose pieces can be combined according to different architectures. Different SDOs contribute pieces and architectures.
[Figure: control plane solutions assembled from IETF RFCs, OIF Implementation Agreements, and ITU-T Recommendations]

Signaling in ASON Architecture


Architectural concepts for ASON signaling include:
Calls, connections, call/connection separation Reference points, Addressing

Signaling protocols implemented at UNI, INNI, ENNI reference points.


Call and Connection setup implemented in protocol with user/service and network addressing.
[Figure: a call across Domain A and Domain B decomposes into UNI, Domain A, E-NNI, Domain B, and UNI call segments, each supported by connections]

80

ASON Protocol-Neutral Signaling


ITU-T Rec. G.7713/Y.1704, Distributed Call and Connection Management (DCM)
First version approved Nov. 2001; several subsequent Amendments; first major revision consented Feb. 2006
Protocol-neutral specifications encompassing UNI, I-NNI and E-NNI, supporting both soft-permanent and switched connections

Provides distributed call and connection management requirements


Operations procedures, signaling network resilience to user and network defects, signal flow exception handling Restoration for single and multiple rerouting domains

Includes attribute specifications, message specifications, state diagrams, Call and Connection Controller management Basis for mapping to specific protocol solutions (G.7713.x series)

81

Protocol Specific Signaling


ITU-T Recommendations for ASON signaling protocol extensions Approved March 03
Rec. G.7713.1, DCM Signaling Mechanism Using PNNI Rec. G.7713.2, DCM Signaling Mechanism Using GMPLS RSVP-TE Rec. G.7713.3, DCM Signaling Mechanism Using GMPLS CR-LDP

IETF base GMPLS signaling protocol RFCs Approved by IESG, published Jan. 03
RFC 3471, GMPLS Signaling Functional Description RFC 3472, GMPLS CR-LDP Extensions RFC 3473, GMPLS RSVP-TE Extensions

IETF Informational RFCs containing ASON GMPLS signaling protocol extensions (aligned with G.7713.2 & G.7713.3) and IANA Code Point Assignments Approved by IESG, published March 03
RFC 3474, IANA Assignments for GMPLS RSVP-TE Usage and Extensions for ASON RFC 3475, IANA Assignments for GMPLS CR-LDP Usage and Extensions for ASON RFC 3476, IANA Assignments for LDP, RSVP, and RSVP-TE Extensions for Optical UNI Signaling

82

OIF User Network Interface


Signaling Specifications

Control Plane work driven by Carrier Working Group requirements


Architecture consistent with ITU-T ASON Recs. G.8080 and G.7713
Signaling specifications in IAs based upon IETF GMPLS RFCs and ITU-T Recs. G.7713.2/3
Specifies detailed usage of selected options in the protocols
Defines the signaling protocols and mechanisms implemented by client and transport network equipment from different vendors to invoke services
Feature focus on SDH/SONET VC-3/STS-1 and higher

OIF-UNI-01.0-R2-Common - User Network Interface (UNI) 1.0 Signaling Specification, Release 2: Common Part
OIF-UNI-01.0-R2-RSVP - RSVP Extensions for User Network Interface (UNI) 1.0 Signaling, Release 2

OIF UNI 1.0 Signaling Specification published Oct. 01

OIF UNI1.0R2: UNI 1.0 Signaling Specification, Release 2 published Feb. 04

Updates UNI 1.0, but does not change UNI 1.0 functionality
Reflects subsequent developments in other standards bodies
Builds upon lessons learned from the OIF's multi-vendor interoperability event conducted at OFC 2003

83

OIF User Network Interface


Signaling Specifications (cont)
OIF UNI 2.0
Incorporates architectural enhancements per ITU-T ASON Rec. G.8080 and G.7713 evolution

Base features
Support of Ethernet services (almost complete)
Support of G.709 (complete)
Enhanced security (complete)
Call/connection separation (complete)
Support of sub-STS-1 granularity (complete)

84

OIF External Network Node Interface


Signaling Specifications
Control Plane work driven by Carrier Working Group requirements
Architecture consistent with ITU-T ASON Recs. G.8080, G.7713, G.7715, G.7715.1
Signaling specifications in IAs based upon IETF GMPLS RFCs and ITU-T Recs. G.7713.2/3
Specifies detailed usage of selected options in the protocols

OIF E-NNI 1.0, Intra-Carrier E-NNI Signaling IA, published Feb. 04


Enables end-to-end connection management by providing a uniform way for carriers to interconnect network domains; feature support consistent with UNI 1.0/1.0R2

OIF E-NNI 2.0, E-NNI Signaling IA, work in progress


Updated with E-NNI Signaling 1.0 Principal Ballot comments (from Feb. 04)
Updated to reflect ITU-T Recommendation and IETF RFC progress
Includes updates based upon lessons learned from the 2004 and 2005 OIF worldwide interoperability demonstrations
Includes features to support UNI 2.0

85

ITU-T/OIF and IETF


ITU-T G.7713.2, OIF UNI 1.0 R2 and OIF E-NNI 1.0 are consistent; the OIF IAs additionally specify detailed usage of selected options in the protocols. Both build on the signaling protocols defined in the IETF GMPLS RFCs (RFC 3473 and other base RFCs).

Due to a concerted effort, the signaling protocols are mostly the same:
Same RSVP-TE PATH/RESV processing
Same RSVP-TE refresh mechanism
No change to defined RSVP objects
No new messages

What are the differences between the ITU-T/OIF and IETF ASON/GMPLS signaling protocols?
Three new call-related objects, and some new C-Types associated with the UNI and E-NNI
Need for usage of ResvTear/ResvErr (no change to procedures if used)
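Purely as an illustration of where the ASON call-related information rides in a GMPLS RSVP-TE Path message, here is a pseudo-message sketch; the object and field names are placeholders chosen for this tutorial, not the exact encodings defined in G.7713.2 / RFC 3474.

```python
# Illustrative pseudo Path message: standard GMPLS RSVP-TE objects plus a
# hypothetical call-related object (names/fields are NOT the normative encodings).
path_message = {
    "SESSION":         {"dst": "node-Z", "tunnel_id": 7},
    "SENDER_TEMPLATE": {"src": "node-A", "lsp_id": 1},
    "LABEL_REQUEST":   {"switching_type": "TDM"},
    "ERO":             ["node-A", "node-B", "node-Z"],            # explicit route
    # ASON addition (call/connection separation): call-related information
    "CALL_ID_LIKE":    {"source": "operator-A", "local_id": 0x2A},  # hypothetical name
}
print(sorted(path_message))
```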

86

Signaling Protocol Interworking Scenario


Dynamic signalling and routing control over an OTN/SONET/SDH network
Dynamic signalling for Ethernet services using the ASON interlayer architecture

[Figure: clients attach to Provider A via the OIF UNI (ITU-T/OIF side) and to Provider C via the IETF UNI; Provider A and Provider B interconnect via the OIF E-NNI, with protocol interworking between Provider B and Provider C]

OIF signalling based on G.7713, G.7713.2, G.7713.3; Ethernet services based on G.8010, G.8011, MEF 10; OIF E-NNI routing based on G.7715, G.7715.1
IETF side based on RFC 3472, RFC 3473, RFC 3946, RFC 4203, RFC 4139, RFC 4208

OIF ASON/GMPLS Interworking Project


OIF guideline document on signaling protocol interworking of ASON/GMPLS network domains
The document defines signaling protocol interworking methods between network domains utilizing OIF/ITU-T and IETF GMPLS
Interworking of ASON UNI and E-NNI (based on GMPLS RSVP-TE with ASON extensions, per G.7713.2 and OIF IAs) and IETF interfaces (based on GMPLS RSVP-TE, per RFC 3473 and RFC 4208)

Detailed interworking scenarios and functions; e.g.,


Required translation, resolution or re-mapping of address and identifier objects
List of messages or objects supported in one specification but not the other, along with the resultant behavior
List of objects which are examined or processed in one specification, but are tunneled or opaque to the other

Describes pragmatic implementations of interoperable solutions

88

Interlayer Call Technology


Client makes an Ethernet call to the destination
Network triggers SONET/SDH calls to match the Ethernet service request
Control plane sets up the Ethernet and SONET/SDH connections, and controls GFP/VCAT
[Figure: the client UNI-C issues an Ethernet call; the interlayer call is invoked at the ingress UNI-N, which triggers a SONET/SDH call between OXC UNI-Ns; GFP/VCAT adaptation at the edges carries the Ethernet connection over the SONET/SDH connections while the Ethernet call progresses and completes at the far-end client UNI-C]
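A small worked example of the VCAT dimensioning implied above: how many SONET/SDH members are needed to carry a given Ethernet rate. The payload rates below are nominal figures (VC-3 ~48.384 Mbit/s, VC-4 ~149.76 Mbit/s) and GFP overhead is ignored in this sketch.

```python
# How many VCAT members does an Ethernet service need?
import math

def vcat_members(ethernet_mbps, member_payload_mbps):
    # smallest number of members whose combined payload covers the Ethernet rate
    return math.ceil(ethernet_mbps / member_payload_mbps)

print(vcat_members(1000, 149.76))   # GbE over VC-4-Xv -> 7 members (VC-4-7v)
print(vcat_members(1000, 48.384))   # GbE over VC-3-Xv -> 21 members (VC-3-21v)
```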
89

Interlayer Signaling
Interlayer architecture enables a business boundary between layers. Service separation between layers is at the interlayer NCC relationship. Note that VCAT is a separate layer.

[Figure: Ethernet MAC clients with ETH NCCs at each end; a layer boundary separates them from the VC-3 NCCs; interlayer relationships run between the ETH and VC-3 NCCs, within-layer relationships between peer NCCs]

90

ASON/GMPLS Tutorial Outline


Introduction
Requirements & Architecture
Signaling
Routing
Control Plane Management
OIF Interoperability Demonstrations
Control Plane Applications / Use Cases
Concluding remarks

91

Basics of IP Routing
IP routing protocol: exchange of information between IP routers that allows them to determine how to forward IP packets
There are different types of routing protocols:
Distance Vector (RIP, IGRP)
Path Vector (BGP)
Link State (OSPF, IS-IS)

Link State routing protocols in particular support distribution of the network topology as links and nodes
For IP, every router must have exactly the same network topology information (links, nodes, and link weights)
Every router must run exactly the same path computation algorithm
Failure to ensure these last two requirements can result in routing loops and black holes
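The following sketch illustrates the two requirements above: every node holds the same topology database and runs the same shortest-path computation (Dijkstra). The topology and weights are made up for illustration.

```python
# Link-state routing in miniature: a shared topology database plus Dijkstra.
import heapq

# topology database assembled from flooded link advertisements: node -> {neighbor: weight}
topology_db = {
    "NE1": {"NE2": 2, "NE3": 2},
    "NE2": {"NE1": 2, "NE3": 1, "NE4": 4},
    "NE3": {"NE1": 2, "NE2": 1, "NE4": 2},
    "NE4": {"NE2": 4, "NE3": 2},
}

def dijkstra(db, src):
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, w in db[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    return dist, prev

dist, prev = dijkstra(topology_db, "NE1")
print(dist["NE4"])   # 4, via NE3 (identical result on every node running the same algorithm)
```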
92

Operation of Link State Routing Protocols


[Figure: nodes A through J exchanging link-state information]

Nodes establish routing adjacencies
Exchange local link information
Forward received link/node information

93

Routing Topology Database


Link State Advertisements (LSAs) and other advertisements form the topology database
Identify a link by its remote link endpoint
Carry link information, e.g., capacity, weight

Periodic or triggered updates are reliably flooded
Neighbors keep identical topology databases
Each node ends up with the full topology of the network

[Figure: nodes A and B exchange database summaries and request missing information until their topology databases are identical]

94

Shortest Path Calculation Determines Packet Forwarding


Shortest Path Techniques
Links are characterized by a single link weight

[Figure: example network of NEs 1-7 with per-link weights used for the shortest path calculation]

95

How is this Useful for Transport Networks?


Basic Network Inventory
Routing Protocols provide network link inventory Useful for operations and planning

Topology and Resource Utilization


Required for distributed connection path selection/computation

Disaster Recovery
Want timely information on what's available in the network (nodes, links, spare capacity, etc.)

96

Extended for Non-IP Networks in IETF GMPLS


New link and router advertisements in RFC 3630 and RFCs 4202/4203
Kept separate from IP link information to avoid confusion; opaque LSAs are kept out of the IP topology DB
Link Switching Type and Metric
Non-IP types, e.g., TDM, WDM

Link Characteristics, e.g., Protection


Linear (1+1, 1:1, 1:N), Ring, etc...

Diverse Routing Information


Shared Risk Link Groups (SRLGs)

Other non-IP link characteristics

97

ASON Routing Specifications and Activities


Protocol-neutral routing fundamentals (G.7715, G.7715.1, G.7715.2)
Function of routing and routing protocols in ASON ASON link state routing New routing protocol requirements for ASON

Protocol-specific routing (OIF)


OSPF extensions based on ASON routing requirements Application of topology abstraction

Future work
PCE

98

ASON Routing

Routing Components
[Figure: ASON routing components (RC, CC, LRM, PC) with monitor, policy and config ports]

CC - Connection Controller; RC - Routing Controller; LRM - Link Resource Manager; PC - Protocol Controller

The primary function of ASON transport routing is to provide path computation to connection management (control plane). Key modules:
Path computation and the associated distribution of topology information are done by the Routing Controller (RC)
Conversion into a specific routing protocol and the associated protocol functions (e.g., state machines) are done by the Protocol Controller (PC)

ASON Routing
IP Routing and Transport Network Routing

[Figure: comparison of IP routing/forwarding and transport routing/forwarding: IP routers peer via OSPF LSAs, build a topology database, run a shortest path algorithm (Dijkstra), and populate an IP forwarding table consulted for every packet; Routing Controllers peer via a G.7715-compliant protocol, build an L1 bearer topology, run a source route algorithm, and drive signaling that establishes cross-connects (e.g., SDH paths) in the data plane]

The data planes in transport networks and in classic IP networks differ:
For classic IP, every packet is forwarded based on address translation
For label switching (generalized to TDM or WDM), once a cross-connection is made, data flows without needing further path computation

100

ASON Routing
Some differences between IP and Transport Network Routing:

Distribution of routing protocol entities - Classic IP routing: always distributed | Transport routing: domain-specific, may be distributed or centralized
Path computation - Classic IP routing: identical path computation algorithm at each node | Transport routing: may be different path computation algorithms at different nodes
Forwarding process - Classic IP routing: path computed for each packet at each node | Transport routing: path computed only at connection setup, usually only at the source
Forwarding dependency - Classic IP routing: data cannot be forwarded without a stable routing database | Transport routing: data can be forwarded on existing connections, but new connections cannot be created
Looping - Classic IP routing: potential problem any time the routing table changes | Transport routing: prevented by strict source routing

101

ASON Routing
Specifications
ITU-T Rec. G.7715, ASON Routing, Approved in July 02
Applicable after network has been subdivided into Routing Areas, and necessary network resources accordingly assigned Focus upon inter-domain routing supporting optical transport networking application Provides architecture, requirements, high-level attributes, messages, and state diagrams from a protocol-neutral perspective

Protocol neutral routing requirements include support for, e.g.,


Hierarchically contained Routing Areas Non-congruent routing adjacency topology and transport network topology Independence from intra-domain protocol and control distribution choices Policy constraints on information exchange (e.g., imposed at E-NNI) Architectural evolution (levels, aggregation, segmentation) Multiple links between nodes, allowing for link and node diversity.

Encompasses different classes of protocols (e.g., link-state, path vector) Facilitates comparison of specific inter-domain routing protocol proposals against quantifiable requirements
102

Link State Routing


Objective
Disseminate and update a common network topology view across all nodes in a domain

Basic Link State Routing Functions:


Hello/Link Adjacency Procedure Database Synchronization Procedure Periodic or Event-driven Link Status Updates

Link State Routing Protocols


OSPF IS-IS PNNI
103

ASON Routing

Architecture & Requirements Link State


ITU-T Rec. G.7715.1/Y1706.1, ASON Routing Architecture and Requirements for Link State Protocols, Approved Feb. 04
Based upon ASON foundation Recommendations (G.8080, G.7715) Further architectural analysis for link state routing

Encompasses exchange of routing information between hierarchical routing levels, including visibility re reachability and topology Node and Link routing attributes
Path computation and routing are impacted by layer specific, layer independent, and client/server adaptation information elements Routing protocol must be applicable to any transport layer network, and representation of routing attributes should not preclude their applicability to other transport network layers Layer specific characteristics (per link attribute)
104

G.7715.1 Link Characteristics


Layer Specific Characteristic          | Capability | Usage
Signal Type                            | Mandatory  | Optional
Link Weight                            | Mandatory  | Optional
Resource Class                         | Mandatory  | Optional
Local Connection Type                  | Mandatory  | Optional
Link Capacity                          | Mandatory  | Optional
Link Availability                      | Optional   | Optional
Diversity Support                      | Optional   | Optional
Local Client Adaptations Supported     | Optional   | Optional
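As an illustration, a per-link attribute record carrying the characteristics in the table might look like the following sketch; field names and types are assumptions for clarity, not the normative encoding.

```python
# Illustrative per-link attribute record for G.7715.1-style link characteristics.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class LinkAttributes:
    signal_type: str                                   # mandatory capability
    link_weight: int                                   # mandatory capability
    resource_class: int                                # mandatory capability
    local_connection_type: str                         # mandatory capability
    link_capacity: int                                 # mandatory capability
    link_availability: Optional[float] = None          # optional
    diversity_support: Optional[List[str]] = None      # optional, e.g. SRLG ids
    local_client_adaptations: Optional[List[str]] = None  # optional

link = LinkAttributes("VC-4", link_weight=10, resource_class=0,
                      local_connection_type="bidirectional", link_capacity=16,
                      diversity_support=["SRLG-17"])
print(link.signal_type, link.link_capacity)
```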

105

Comparison with IP Link State Routing Protocols


ASON Link State Routing relies on basic link state functions
Adjacency Database synchronization Periodic or event-driven advertisements

Differences
Control plane and data plane topology may be different
Automated discovery of routing peers cannot be done based on SCN topology; data plane neighbors may not be neighbors in the SCN

Optical routing advertisements are for Traffic Engineering rather than IP routing table
Optical link state advertisements are marked as opaque and not used for IP routing

Instead a separate transport topology database is created


106

Separation of Data and Control Plane


Pre-ASON, routing protocols have assumed a Label Switching Router:
A single node with both data and control plane functions
A single source for data, signaling and routing messages
ASON explicitly separates these:
Data plane entities can be separate from control plane entities
The routing entity can be separate from the signaling entity
Routing implication: must be able to separately identify the data plane entity (link or node) from the routing controller

107

Examples of different Distributions


Possible distribution of control
Fully distributed (1:1) - each network element also participates in the control plane
Fully centralized (1:n) - only one network element or proxy participates in the control plane
Variable (m:n) - a small number of network elements or proxy servers participate in the control plane

Some potential applications


Proxy for a legacy (management controlled) domain Centralization of interoperability/E-NNI translation functions for ease of administration

108

Client Reachability Advertisement


Routing protocols have assumed a peer model where the client is a full peer to the network elements:
Clients are advertised as IP address reachability
Access links are part of the TE topology
ASON explicitly separates the client and network address spaces:
Clients are identified by a separate namespace
Routing to clients needs to be supported by a separate mechanism
Client reachability advertisement Directory type service

109

Layering in the Data Plane


Pre-ASON, optical routing specifications gave a single parameter for link capacity:
Assumes that any signal type can use the link, subject to pure bandwidth availability
Does not take into account layering issues
ASON requires routing to advertise per-signal-type connection availability:
Takes into account possible limitations (a link supports some signal types but not others)
Takes into account blocking issues (a smaller signal type can block a larger signal type due to positioning in the frame)

110

Hierarchy in the Routing Architecture


Pre-ASON, routing protocols have had limited hierarchy support:
OSPF and IS-IS have limited levels (see the next slide for OSPF)
PNNI has richer hierarchy, up to 104 theoretical levels
ASON requires flexible hierarchy in the routing architecture:
To match transport network organization
For greater scalability
For greater policy control
Protocol extensions to support hierarchy are needed

111

Routing Hierarchy compared to OSPF


Existing routing protocols need extension to meet ASON requirements. E.g., for OSPF,
Area boundaries fall within a router (vs. IS-IS area boundaries, which fall on links, so a router belongs to a single RA)
Needs extensions for more than two hierarchical routing levels
Requires operator intervention for re-definition of areas

Transport network architecture (G.805) allows more flexible partitioning and multiple levels

112

ASON Routing Hierarchy


[Figure: routing area hierarchy: RA at level 1 contains RA.1, RA.2 and RA.3 at level 2, which are further subdivided (e.g., RA.1.1, RA.1.2, RA.2.1, RA.2.2) at level 3]

In ASON, multiple levels of hierarchy are supported
Domains at lower levels are encompassed by higher levels
Domains are organized as part of carrier administration
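A minimal sketch of such a containment hierarchy, assuming dotted routing area names purely for illustration:

```python
# Dotted RA names encode containment, so level and ancestry can be derived directly.
def level(ra_name):
    return ra_name.count(".") + 1           # "RA" -> 1, "RA.1" -> 2, "RA.1.2" -> 3

def contains(parent, child):
    return child == parent or child.startswith(parent + ".")

print(level("RA.1.2"))             # 3
print(contains("RA.1", "RA.1.2"))  # True: RA.1 encompasses RA.1.2
print(contains("RA.2", "RA.1.2"))  # False
```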

113

Protocol Extension Work in Standards


ITU-T
Has defined requirements but not protocol at this point

IETF
Has begun work through analysis of ASON requirements and evaluation of existing routing protocols Some initial proposals for extensions are in progress Will need review through OSPF and IS-IS groups

OIF
Has developed and tested prototype extensions to meet ASON requirements Working with IETF/ITU-T to extend the standards
114

OIF External Network Node Interface


Routing Specifications
E-NNI Routing 1.0, Intra-Carrier E-NNI Routing using OSPF, approved by Q1/07 Principal Ballot
Consistent with ITU-T ASON Recs. G.8080, G.7715 and G.7715.1 architecture and requirements Prototypes an instantiation of a routing protocol addressing ASON routing requirements

Intended to enable interoperable multi-domain SPC and SC services similar to those implemented for the OIF Worldwide Interoperability Demonstrations in 2004 and 2005
Documents routing protocol requirements supporting the ENNI 1.0 interface, and prototype encodings used in OIF Interop testing Will support services provided by OIF UNI 1.0R2, UNI2.0 and ENNI Signaling 1.0
115

OIF E-NNI Prototype Extensions


Separation of Routing Controller and Node Identifier
The Routing Controller is the control plane entity; the Node ID identifies the transport plane entity
Enabled by the addition of Local/Remote Node ID parameters in the link status update
Identifies the link ends (data plane topology) separately from the advertising entity (control plane topology)

Advertisement of TNA
TNA is the OIF's terminology for the client address
Reachability to a TNA is advertised through an OSPF prototype extension
This supports a separate client namespace, which in theory could be non-IPv4

116

OIF E-NNI Prototype Extensions


Link Bandwidth
The OIF extension specifies available connections for each signal type (e.g., STS-1/VC-3, STS-3c/VC-4, etc.), which is more detailed and accurate than a simple measure of total available bandwidth for the link
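A sketch of what such a per-signal-type advertisement conveys, and why it is more useful than a single bandwidth number; the structure below is illustrative only, not the OIF encoding.

```python
# Per-signal-type link advertisement: how many new connections of each type fit.
link_advertisement = {
    "local_node_id": "RC-A/node-3",
    "remote_node_id": "RC-B/node-1",
    "available_connections": {"STS-1/VC-3": 12, "STS-3c/VC-4": 4, "STS-48c/VC-4-16c": 0},
}

def can_accept(adv, signal_type):
    # admission check per signal type, independent of total raw bandwidth
    return adv["available_connections"].get(signal_type, 0) > 0

print(can_accept(link_advertisement, "STS-3c/VC-4"))       # True
print(can_accept(link_advertisement, "STS-48c/VC-4-16c"))  # False: blocked even if raw capacity remains
```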

Routing Hierarchy
Currently not implemented but under study
Leaking of information up and down levels, and protection from looping, are key elements

117

E-NNI Topology Advertisement


[Figure: client devices attach via the OIF UNI to Domains A, B and C of a carrier network; each domain's Routing Controller (RC) communicates over the SCN with its peer RCs]

Each domain's Routing Controller (RC) advertises to its peers across the E-NNI boundary
An abstracted topology can be advertised

118

Routing Domain Abstraction Models


Abstraction must improve scalability, yet provide more than just reachability information

[Figure: a real domain topology and the abstract topology advertised for it]

Abstraction Models:
1. Abstract node - the domain collapsed to a single node; most scalable, least accurate
2. Abstract link - a series of interconnected edge nodes; less scalable, more accurate
3. Pseudo-node - a variation of abstract link that also shows potential server layer connectivity
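The abstraction models can be sketched as simple graph transformations; the code below assumes a domain is given as its internal links plus its border nodes, and uses a full mesh of border nodes as a simplified stand-in for the abstract-link model.

```python
# Illustrative topology abstraction: abstract node vs. abstract links between borders.
domain_links = {("b1", "i1"), ("i1", "i2"), ("i2", "b2"), ("b1", "b3"), ("i1", "b3")}
border_nodes = {"b1", "b2", "b3"}

def abstract_node(domain_name):
    # 1. whole domain collapsed to one node: most scalable, least accurate
    return {"nodes": {domain_name}, "links": set()}

def abstract_links(borders):
    # 2. advertise only abstract links between border nodes (full mesh as a simplification)
    borders = sorted(borders)
    links = {(a, b) for i, a in enumerate(borders) for b in borders[i + 1:]}
    return {"nodes": set(borders), "links": links}

print(abstract_node("Domain-C"))
print(abstract_links(border_nodes))
```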

119

ASON/GMPLS Tutorial Outline


Introduction
Requirements & Architecture
Signaling
Routing
Control Plane Management
OIF Interoperability Demonstrations
Control Plane Applications / Use Cases
Concluding remarks

120

Motivation of Control Plane Management


To achieve and sustain automatic call & connection service management (service management), there are many things that need to be managed

Control plane (Cp) entity management


Initialization, configuration, policy setting Ongoing monitoring, maintenance, recovery

Transport plane (Tp) management for ASON


ASON functionality installation & configuration
ASON resource provisioning (e.g., names) & hand-over (from Mp to Cp)
Ongoing monitoring, maintenance & recovery

Control plane (Cp) & Management plane (Mp) ongoing interaction for:
Connection management by Mp as needed
Centralized routing (i.e., Mp-calculated)
Call performance measurement
Management of call admission control
Transfer of calls/connections between Mp and Cp

Challenges of Control Plane Management


Ensure consistent management policy across multicarrier environment, e.g.,
Network wide consistency for Cp configuration, such as time-out setting for timers

Balance between delegation (to Cp) and ultimate control (by Mp) (i.e., centralized vs. distributed) e.g.,
Avoid duplication of data & process Maintain consistency between Mp and Cp database Restore consistency without affecting active services

Smooth migration from Mp-driven service management (call/connection mgmt) to hybrid or Cp-driven service management
Fault correlation and root cause analysis across Cp and Tp in a multi-domain, multi-layer environment
122

Scope of Cp Management & Interactions

[Figure: the management plane directs the control plane, the transport plane and the data communication network, and receives reports from each; the data communication network supports the control plane and the transport plane; the control plane directs, and receives reports from, the transport plane]

123

Transport Resources in Mp and Cp View


[Figure: relationship between the architectural entities in the Transport plane, Management plane and Control plane: a trail across a subnetwork composed of SNCs and link connections, with the transport entities (adaptation function, trail termination function, CP, TCP) seen as CTP and TTP in the management plane view and as SNP, SNPP and SNPP link in the control plane view.]

CP: Connection point; TCP: Termination connection point
CTP: Connection Termination Point; TTP: Trail Termination Point
SNP: Subnetwork Point; SNPP: SNP Pool; SNC: Subnetwork connection
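The name hand-over between the two views can be pictured with a small sketch (Python; the resource and name formats are hypothetical): the same transport resource keeps its management plane (CTP) name while receiving a control plane (SNP) name when it is allocated to the Cp.

resources = {}   # transport resource id -> {"ctp": ..., "snp": ..., "owner": ...}

def register_ctp(resource_id: str, ctp_name: str) -> None:
    # The Mp knows the resource first, under its CTP name.
    resources[resource_id] = {"ctp": ctp_name, "snp": None, "owner": "Mp"}

def hand_over_to_cp(resource_id: str, snp_name: str) -> None:
    # The Mp allocates the resource to the Cp, which names it as an SNP.
    resources[resource_id].update(snp=snp_name, owner="Cp")

register_ctp("NE1/7/1/3/VC4-5", "CTP:NE1/7/1/3/VC4-5")
hand_over_to_cp("NE1/7/1/3/VC4-5", "SNP:NE1/SNPP-7/5")
print(resources["NE1/7/1/3/VC4-5"])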

124

Standards for Control Plane Management


[Figure: Management plane (NMS, EMSs), Control plane, Transport plane and network elements, with the applicable standards: TMF MTNM v3.5, GR-1110-CORE, ITU-T G.7718, ITU-T G.7718.1, OIF-CDR-01.0, G.8080, G.7710, M.3010.]
125

Architecture & Requirements


Rec. G.7718/Y.1709, Framework for ASON Management, Approved Feb. 05
Deemed essential for supporting viable network deployments

Addresses the management aspects of the ASON control plane and the interactions between the OSS (NMS, EMS) and the ASON control plane
Provides architecture and requirements context
Management perspective on control plane components and constructs, control-related services, domain, transport resources, policy
Management of restoration and protection

ASON management requirements


FCAPS
Heavy input from service providers
126

G.7718 ASON Management Requirements


Fundamental requirements:
Impact of Mp failure, Mp-Cp interface failure, and Cp failure

Configuration management
Control plane resources
Identifiers, addresses, protocol parameters (signaling & routing)

Routing areas
RA hierarchies, (dis) aggregation, assignment of Cp resources

Transport resources (in control plane view)


(de)allocation, names and identifiers, discovery, topology, resource and capacity inventory

Call and connection


setup(SPC)/modification/release

Policy

Fault management
Control plane components, resource/connection/call (service),

Performance management
Control plane components

Accounting management
Usage and call details record
127

TMF MTNM v3.5 Control Plane Management


MTNM for Multi-technology management
TMF 513 Requirements & Use cases
TMF 608 Protocol-neutral model (UML)
TMF 814 CORBA solution
TMF 814A Implementation Statement Templates and Guideline

Version 3.5 additions: Control plane & VLAN management
Key modeling approaches:
Re-use of the v3.0 multi-layer approach for Routing area (ML-RA), SNPP (ML-SNPP), SNPP Link (ML-SNPP Link)
Re-use of the Subnetwork Connection (SNC) object for the Cp connection

Scope:
Limited to retrieval of Control Plane resources, retrieval of network topology and end-to-end Call/Connection management (provisioning of SPCs)
128

OIF-CDR-01.0 for OIF UNI 1.0 Billing


OIF-CDR-01.0, Call Detail Records for OIF UNI 1.0 Billing, Approved April 02
Implementation Agreement (IA) for the usage measurement functions that an optical switching system needs to perform in order to enable carriers to bill for OIF UNI 1.0 optical connections using their legacy billing systems
Usage measurement functions: Automatic Message Accounting (AMA)
Data generation: UNI 1.0 CDR information content, as generic as possible
Data formatting (resulting in a CDR):
Billing AMA Format (BAF)
ASCII CDR (ACDR) Format
XML CDR (XCDR) Format
Data transmission (of the CDR): typically via FTP between the management system and the billing system
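As a rough illustration of the XCDR case (Python; the element names below are assumed for the example and are not the field set defined in OIF-CDR-01.0), a switching system could generate a per-call XML record and write it to a file for later FTP transfer to the billing system:

import xml.etree.ElementTree as ET
from datetime import datetime, timezone

def build_xcdr(call_id, src_tna, dst_tna, signal_type, setup, release):
    # Assemble one call detail record as an XML element tree.
    cdr = ET.Element("CallDetailRecord")
    ET.SubElement(cdr, "CallId").text = call_id
    ET.SubElement(cdr, "SourceTNA").text = src_tna
    ET.SubElement(cdr, "DestinationTNA").text = dst_tna
    ET.SubElement(cdr, "SignalType").text = signal_type
    ET.SubElement(cdr, "SetupTime").text = setup.isoformat()
    ET.SubElement(cdr, "ReleaseTime").text = release.isoformat()
    ET.SubElement(cdr, "DurationSeconds").text = str(int((release - setup).total_seconds()))
    return cdr

record = build_xcdr("call-0001", "192.0.2.1", "192.0.2.9", "STS-48c",
                    datetime(2005, 6, 7, 9, 0, tzinfo=timezone.utc),
                    datetime(2005, 6, 7, 11, 30, tzinfo=timezone.utc))
ET.ElementTree(record).write("call-0001.xcdr.xml", encoding="utf-8", xml_declaration=True)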

129

ASON/GMPLS Tutorial Outline


Introduction Requirements & Architecture Signaling Routing Control Plane Management OIF Interoperability Demonstrations Control Plane Applications Use Cases Concluding remarks

130

Interoperability Demonstrations
Objectives / Goals OIF Perspective
Member evaluation, validation and proof of concept of current OIF draft specifications & IAs for interoperable network solutions
Feedback and assessment from the multi-vendor testing environment to the standardization/specification work

Carrier Perspective
Early adoption and evaluation of interoperability testing results demonstrated in a multi-vendor environment
Feedback to the vendor community on early implementations and integrations, based on practical experiences and lessons learned

Industry Perspective
Showcase OIF contributions, build market awareness of emerging technologies, services and networking solutions
Public forums (optical conferences & exhibitions) utilized

131

Interoperability Demos Role in Standards to Deployment


[Figure: progression from standards & specifications (OIF, ITU-T, IETF) via interoperability tests & demonstrations (OIF) and field trials (carrier sites) to deployment, with feedback flowing back to standardization.]

OIF supports the close relation of standardization, R&D and early implementations
OIF performs and organizes the next major step towards implementation, interoperability evaluations of prototype implementations:
Proof of concept
Feedback to standardization
Fosters follow-up activities

132

Ethernet Switched Connection Characteristics


[Figure: Ethernet clients attach via OIF UNI (UNI-C/UNI-N) to Carrier A, B and C domains, interconnected by OIF E-NNI. The Ethernet layer call/connection flow runs end to end over the SONET/SDH layer call/connection flow.]
OIF UNI 2.0 support for Ethernet clients
OIF UNI 2.0 call control based on ASON specifications
Transport devices integrate multi-layer functions at control plane and data plane level
Ethernet Private Line Service (E-Line Service Type) triggered by OIF UNI 2.0 connection requests and provisioned by E-NNI
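A sketch of the kind of call parameters a UNI-C hands to the network for such an Ethernet Private Line request (Python; this only illustrates the information involved, not the actual UNI 2.0 RSVP-TE based message encoding, and the attribute names and TNAs are assumed):

from dataclasses import dataclass, asdict

@dataclass
class EthernetCallRequest:
    source_tna: str
    destination_tna: str
    service_type: str = "EPL"       # E-Line, port based
    bandwidth_mbps: int = 1000      # mapped by the network onto the server layer
    protection: str = "unprotected"

request = EthernetCallRequest("tna-client-A", "tna-client-B", bandwidth_mbps=600)
# The UNI-N would translate this into a SONET/SDH server layer connection
# (e.g., a VCAT group sized to the requested bandwidth) signalled across the E-NNI.
print(asdict(request))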

133

2005 Worldwide Interoperability Demo


7 participating carrier labs around the world: China, Japan, France, Germany, Italy and USA
13 participating vendors
First multi-layer & multi-domain call/connection demonstration
Orchestrates actions between client and server layers
Integration of control plane (UNI 2.0 Ethernet, E-NNI) and NG-SONET/SDH (GFP-F/VCAT/LCAS) functions
Show of on-demand Ethernet Private Line service using the creation of end-to-end calls and connections across multiple network layers, network domains, multiple vendors' equipment and multiple carrier labs
OIF IAs based on ITU-T ASON standards, including:
Requirements and Architecture (G.8080, G.7713, G.7715, G.7715.1)
Signaling protocols (G.7713.2)

World Interoperability Demonstration public observation: SUPERCOMM 2005 (June 7-9, 2005, Chicago, IL)
134

Interoperability Demonstrations
Global Test Network Topology
[Figure: global test network topology connecting carrier labs in the USA (AT&T, Verizon), Europe (Deutsche Telekom, France Telecom, Telecom Italia) and Asia (NTT, China Telecom), with participating vendors Alcatel, Avici, Ciena, Cisco, Fujitsu, Huawei, Lambda OS, Lucent, Mahi, Marconi, Nortel, Sycamore and Tellabs distributed across the labs.]
135

OIF Interoperability Labs in 2005


Lannion, France
Waltham, MA-USA

Beijing, China
Berlin, Germany
Torino, Italy
Musashino, Japan

Middletown, NJ-USA

SuperComm 2005 booth


136

137

2007 Worldwide Interoperability Demo


On-Demand Ethernet Services over multi-domain transport networks
7 participating carrier labs around the world: China, Japan, France, Germany, Italy and USA
Public demonstration at ECOC2007, Sept 16-20, 2007:
ECOC2007 workshop on Global Interoperability in Multi-Domain and Multi-Layer ASON/GMPLS Networks
ECOC2007 exhibition: live demonstration of the OIF Worldwide Interoperability Test results
ECOC2007 accompanying program: lab tours to DT premises, demonstrating live the ASON/GMPLS functions of the OIF Worldwide Test Network and the MUPBED European-scale network, and giving visitors hands-on access to the real telecom world

138

ASON/GMPLS Tutorial Outline


Introduction Requirements & Architecture Signaling Routing Control Plane Management OIF Interoperability Demonstrations Control Plane Applications Use Cases Concluding remarks

139

Application 1: CP for Bandwidth Defragmentation


Scenario
After running the NG-SONET/SDH network for a while, available time slots on SONET/SDH links become fragmented (i.e., many discontinuous, small clusters of bandwidth). Network Operations can invoke the control plane on a regular basis to (1) identify the clusters on each span in the network, and (2) run a defragmentation algorithm to pack in-use time slots into a contiguous space (see the sketch below).

Core Technologies
NG-SONET/SDH
Defragmentation over a single vendor domain
OTN Control Plane (Auto-Discovery & Self-Inventory)
OTN Mgmt Plane (EMS/NMS update)

[Figure: three sites interconnected by SONET paths 1-2, 2-3 and 1-3.]
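A toy version of the defragmentation step (Python; real systems would move connections hitlessly, e.g. by bridge-and-roll, and per-span constraints are ignored here): given which timeslots on a span are in use, plan moves that pack the active connections into the lowest contiguous timeslots.

def plan_defragmentation(in_use):
    # Return (old_slot, new_slot) moves; processed in this order, each target
    # slot is already free when its move is executed.
    moves = []
    for new_slot, old_slot in enumerate(sorted(in_use), start=1):
        if old_slot != new_slot:
            moves.append((old_slot, new_slot))
    return moves

# Fragmented span: three connections scattered across the timeslot space.
print(plan_defragmentation([2, 7, 11]))   # [(2, 1), (7, 2), (11, 3)]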

140

Application 2: A-Z Provisioning via EMS/NMS and Control Plane


Scenario
NMS/EMS receives a service order for a SONET STS / SDH VC from an enterprise customer that has three sites in the region. The order specifies points A & Z (e.g., from Site 1 to Site 2), payload rate, transparency, protection class, and other constraints. The NMS/EMS issues a command to the source node (attached to Site 1), which then triggers the control plane to set up the SONET/SDH path to Site 2 according to the requirements specified in the order. Similarly, when the customer terminates the service, the NMS/EMS will invoke the control plane to tear down the path (see the sketch after the figure below).

Core Technologies
OTN Control Plane (E-NNI, I-NNI) OTN Mgmt Plane (EMS/NMS SPC support)
[Figure: three sites interconnected by SONET paths 1-2, 2-3 and 1-3; A = Site 1, Z = Site 2.]
141

Application 3: Bandwidth on Demand (BoD) in Transport Networks


Scenario
An enterprise customer with three sites subscribes to a BoD SONET/SDH service with a range of SONET/SDH payload rates. The service plan applies to all SONET/SDH connections between the sites. Based on business needs, the customer uses UNI signaling to dial up the service between any two sites, sends information over the SONET/SDH path for an unspecified period of time, then hangs up.

Core Technologies
NG-SONET/SDH GFP/VCAT
OTN Control Plane (O-UNI, E-NNI and I-NNI)
OTN Mgmt Plane (EMS/NMS SC support, TMF814)

Two Sub-Cases
Case 3a: with NG-SONET/SDH Virtual Concatenation (VCAT)
Case 3b: without VCAT

[Figure: three sites interconnected by SONET paths 1-2, 2-3 and 1-3.]

142

Application 3 (cont): Scheduled BoD


Customers with highly predictable traffic profiles
Service bandwidth provisioned according to user-provided time-of-day and/or day-of-week schedules, with the capability to make bandwidth changes as needed
Automatic tailoring of service bandwidth to the traffic profile
[Chart: Measured Bandwidth Usage of a SAN Application, in Mb/s, over one week (19-25 March), varying between roughly 40 and 200 Mb/s, with levels around 100 Mb/s and 200 Mb/s marked. Source: EMC]

143

Application 4: GbE Service with Bandwidth Schedule


Scenario
An enterprise customer with three sites subscribes to a GbE service with customized bandwidth schedules for weekdays and weekend/holidays as shown below.
Weekdays
Schedule\Path    8am-5pm    6pm-11pm    12am-7am
Path 1-2         200M       300M        50M
Path 1-3         100M       200M        500M
Path 2-3         50M        100M        500M

Weekend
Schedule\Path    8am-5pm    6pm-11pm    12am-7am
Path 1-2         50M        50M         10M
Path 1-3         50M        50M         10M
Path 2-3         50M        50M         10M

Core Technologies
NG-SONET/SDH GFP/VCAT/LCAS
OTN Control Plane (E-NNI, I-NNI)
OTN Mgmt Plane (EMS/NMS with scheduling support)

[Figure: three sites with GbE access, interconnected by paths 1-2, 2-3 and 1-3.]
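A small sketch of how an EMS/NMS scheduler could derive the committed bandwidth per path from the tables above (Python; the time windows are approximated to whole hours, holidays are omitted, and in a real deployment the result would drive LCAS-based resizing of the underlying VCAT group):

from datetime import datetime

WEEKDAY = {"Path 1-2": (200, 300, 50), "Path 1-3": (100, 200, 500), "Path 2-3": (50, 100, 500)}
WEEKEND = {"Path 1-2": (50, 50, 10), "Path 1-3": (50, 50, 10), "Path 2-3": (50, 50, 10)}

def committed_mbps(path: str, when: datetime) -> int:
    table = WEEKEND if when.weekday() >= 5 else WEEKDAY
    day, evening, night = table[path]
    if 8 <= when.hour < 18:      # 8am-5pm window
        return day
    if 18 <= when.hour < 24:     # 6pm-11pm window
        return evening
    return night                 # 12am-7am window

print(committed_mbps("Path 1-3", datetime(2007, 9, 17, 10, 0)))   # Monday morning -> 100
print(committed_mbps("Path 1-3", datetime(2007, 9, 16, 2, 0)))    # Sunday night -> 10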

144

Application 5: BoD - GbE Service


Scenario
An enterprise customer with three sites subscribes to a BoD GbE service with a specified peak rate (P). The service plan applies to all GbE connections between the sites. Based on business needs, the customer uses UNI signaling to dial up the service between any two sites, sends information at rates <= P for a certain period of time, then hangs up.

Core Technologies

OTN Control Plane (O-UNI, E-NNI, and I-NNI)


OTN Mgmt Plane (EMS/NMS support)

[Figure: three sites with GbE access via UNI, interconnected by paths 1-2, 2-3 and 1-3.]

145

Application 6: OSS Simplification


Traditional OSS
[Figure: the traditional OSS itself performs Service Activation, Service Assurance, Accounting & Security functions over the transport network: customer and facility assignments, path computation, parameter mapping, CoS assignment, equipment testing, fault correlation and isolation, protection & restoration, service circuit / customer / facility inventory, net topology, billing exceptions, admission control and resource access control.]

NG-OSS
[Figure: the NG-OSS keeps Service Assurance and Accounting & Security functions (fault correlation, billing exceptions, admission control, resource access control) and takes passive roles for all control-plane-supported functions, while the NG-OTN control plane handles net topology, path computation, fault isolation, equipment testing, parameter mapping, CoS assignment, service circuit and protection & restoration for the transport network.]
146

Application 7: Control Plane for Auto-Discovery and Self-Inventory


Scenario
Upon start-up of a CP-equipped network, all NEs will discover each other, identify resources, and create a high-quality network database containing the complete topological view of the network and a highly accurate resource map. During network operation, the database will be instantly updated to reflect any change of network state, such as resource usage/addition, path setup/tear-down, etc. A high-quality network database is essential to the high-quality OAM&P required for NG-OTN (see the sketch below).

Core Technologies
OTN Control Plane (I-NNI, E-NNI)
OTN Mgmt Plane (EMS/OSS update)

[Figure: three sites interconnected by SONET paths 1-2, 2-3 and 1-3.]
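The self-inventory idea can be sketched with a minimal event-driven database (Python; the event names and fields are assumptions made for illustration):

class NetworkDatabase:
    def __init__(self):
        self.links = {}          # link id -> {"a": node, "z": node, "free_slots": n}
        self.connections = {}    # connection id -> list of link ids

    def on_link_discovered(self, link_id, a, z, free_slots):
        self.links[link_id] = {"a": a, "z": z, "free_slots": free_slots}

    def on_connection_setup(self, conn_id, route):
        self.connections[conn_id] = route
        for link_id in route:
            self.links[link_id]["free_slots"] -= 1

    def on_connection_release(self, conn_id):
        for link_id in self.connections.pop(conn_id):
            self.links[link_id]["free_slots"] += 1

db = NetworkDatabase()
db.on_link_discovered("1-2", "NE1", "NE2", 48)
db.on_link_discovered("2-3", "NE2", "NE3", 48)
db.on_connection_setup("c1", ["1-2", "2-3"])
print(db.links)   # free_slots is now 47 on both links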

147

Application 8: Control Plane enabled Application-Network Interworking

Applications communicate with the Adaptation Function through an API
The Adaptation Function administrates access to the UNI
The application integrates an API or manual control (see the sketch below)
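A hypothetical sketch of that API relationship (Python; the class and method names are invented for illustration and do not correspond to any standardized interface): the application asks the Adaptation Function for connectivity, and the Adaptation Function checks policy before issuing the UNI request on the application's behalf.

class AdaptationFunction:
    def __init__(self, allowed_apps):
        self.allowed_apps = set(allowed_apps)

    def request_connection(self, app_id, src_tna, dst_tna, mbps):
        if app_id not in self.allowed_apps:
            raise PermissionError(f"{app_id} is not authorized to use the UNI")
        # A real implementation would build and send the UNI signaling message
        # (e.g., through a UNI-C agent); here we just return a call handle.
        return f"call:{app_id}:{src_tna}->{dst_tna}@{mbps}M"

af = AdaptationFunction(allowed_apps={"grid-scheduler"})
print(af.request_connection("grid-scheduler", "tna-site1", "tna-site2", 600))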

148

ASON/GMPLS Tutorial Outline


Introduction Requirements & Architecture Signaling Routing Control Plane Management OIF Interoperability Demonstrations Control Plane Applications Use Cases Concluding remarks

149

ASON Reqts. & Architecture Recap


Requirements intended to enable support for business/commercial operating practices
Formalized specification technique utilizing components and interfaces that can be associated in various ways to describe actual control plane implementations
The actual location/distribution of the control plane components is not constrained, allowing for the range of fully distributed to centralized implementations
Architecture does not require that the reference points always be instantiated as external interfaces (UNI, E-NNI); instantiation of interfaces and degree of information sharing are based upon operator business model/policy
A single instantiation of an ASON control plane may control multiple layer networks with an explicit definition of the interlayer interaction (including none)

Reference point concepts similar to those of Resource and Admission Control Function (RACF) model

150

Standards Development Organizations (SDO) Interaction



1999/2000 MPLS: flat peer model, data/signaling congruent, IP only, data behavior (e.g., connection tear-down w/o request)
[Figure: ITU-T ASON umbrella, OIF Implementation Agreements, IETF GMPLS umbrella.]

2001: Carrier requirements across IETF, OIF and ITU-T regarding the need to support commercial business & operational practices
2003: Evolution of the GMPLS signaling protocol, used as the normative base for ASON extensions
2004-2006: Ongoing communications among all three SDOs on requirements and protocol work

Goal - Evolution towards convergence of requirements & protocols

151

Network of the Future / Future Internet / Clean Slate Internet Design (FIND, GENI)
Activities in Europe and USA
Goal: Basic re-design of the (multi-layer) network architecture, including Internet

Paradigm shift: the customer view (business and residential) imposes a number of additional, mostly non-technical requirements
The Internet turned into a non-trusted business environment
Service-centric design of architectures, protocols and networks
Usability / ease of use is a major aspect for future applications and services, requiring significant efforts in automation

Fundamental technical changes in network functions imposed by clean slate design


Naming & addressing
Routing & signaling
Security functionality, especially authentication (advanced AAA)
Scalability
Optimization of topologies and hierarchies
Commercial role of the Internet (non-trusted environment)
Monitoring functionality (regarding network functionality)

152

Technical Implications of Network re-design


Clean slate design will shake the technical foundations of protocol design as well as network architectures and operations
Protocols: protocols and architectures are expected to change considerably (optics, slim modular protocol stack)
Data plane: multi-technology environment for provisioning of end-to-end services
Control and management plane: the Internet might actually look more telco-like, an intriguing thought!

153

Thank you!!
Q&A
Hans-Martin.Foisel@t-systems.com

www.oiforum.com
154

Backup

OIF documents and links
Reference Material for ITU-T ASON and Transport Recommendations
Glossary

156

OIF Documents
OIF presentation and newsletters
www.oiforum.com

OIF Implementation Agreements


http://www.oiforum.com/public/impagreements.html

OIF workshops on ASON/GMPLS implementations in test and carrier networks


http://www.oiforum.com/public/meetOIW050806.html http://www.oiforum.com/public/meetOIW073106testbeds.html http://www.oiforum.com/public/meetOIW101606.html

157

ITU-T Recommendations
Accessibility Information
Go to the publications link and choose download per URL:
http://www.itu.int/publications/EBookshop.html

There is an explicit button from the download publications page where you can register up front for 3 free Recommendations

158

Some Key ITU-T ASON Recommendations


Fundamental (Protocol-Neutral) Architecture & Requirements
G.8080, Architecture for the automatically switched optical network (ASON), 2006 Revision to be published imminently
G.7713, Distributed call and connection management (DCM), 2006 Revision, to be published imminently

G.7718, Framework for ASON Management, February 05
G.7714, Generalized automatic discovery for transport entities, August 05 revision
ITU-T G.7715/Y.1706 - Architecture and Requirements for Routing in the Automatic Switched Optical Networks, July 2002
ITU-T G.7715.1/Y.1706 - ASON Routing Architecture and requirements for Link State Protocols, Feb. 04
ITU-T G.7712/Y.1703 - Architecture and specification of data communication network, March 03
ITU-T G.7716 - Control Plane Initialization, Reconfiguration, and Recovery, target Consent Nov. 06

159

Textbooks covering ITU-T Architecture Aspects (e.g., Functional Modeling, ASON)


Broadband Networking: ATM, SDH, and SONET; Michael Sexton and Andrew Reid; ISBN 0-89006-578-0 (see in particular Chapters 2-4)
http://www.amazon.com/gp/product/0890065780/ref=sib_rdr_dp/103-20036979480609?%5Fencoding=UTF8&me=ATVPDKIKX0DER&no=283155&st=books&n=283155

Achieving Global Information Networking; Varma and Stephant et al; ISBN: 0890069999 (see in particular Chapters 1-4)
http://www.amazon.com/gp/product/0890069999/ref=dp_return_1/1032003697-9480609?%5Fencoding=UTF8&n=283155&s=books

SDH/SONET Explained in Functional Models : Modeling the Optical Transport Network; Huub van Helvoort; ISBN 0-470-09123-1
http://www.amazon.com/gp/product/0470091231/ref=sib_rdr_dp/10320036979480609?%5Fencoding=UTF8&me=ATVPDKIKX0DER&no=283155&st=books&n=283 155

Optical Networking Standards : A Comprehensive Guide for Professionals ; Khurram Kazi; ISBN: 0387240624 (to be published June 2006; see for example - Chapters 2, 16)
http://www.amazon.com/gp/product/0387240624/qid=1147161139/sr=11/ref=sr_1_1/103-2003697-9480609?s=books&v=glance&n=283155

160

Some Key ITU-T Functional Modeling Rec.


Fundamental Architecture & Equipment
ITU-T Rec. G.803, Architecture of transport networks based on the synchronous digital hierarchy (SDH), March 2003
ITU-T Rec. G.805, Generic functional architecture of transport networks, March 2000
ITU-T Rec. G.809, Functional architecture of connectionless layer networks, March 2003
ITU-T Rec. G.872, Architecture of optical transport networks, November 2001
ITU-T Rec. G.8010, Architecture of Ethernet Layer Networks, February 2004
ITU-T Rec. G.8110, MPLS layer network architecture, January 2005
ITU-T G.8110.1, Architecture of Transport MPLS (T-MPLS) Layer Network, publication imminent
ITU-T G.783, Characteristics of synchronous digital hierarchy (SDH) equipment functional blocks, March 2006
ITU-T G.8021, Characteristics of Ethernet transport network equipment functional blocks
ITU-T G.8121, Characteristics of Transport MPLS (T-MPLS) Equipment Functional Blocks, publication imminent
Etc.
161

Glossary
ACDR: ASCII CDR
AMA: Automatic message accounting
ASON: Automatically switched optical network
AP: Access point
API: Application programming interface
BAF: Billing AMA Format
BoD: Bandwidth on Demand
CC: Connection controller
CCC: Calling/called call controller
CDR: Call detail record
CORBA: Common object request broker architecture
CP: Connection point
Cp: Control plane
DA: Discovery agent
DCM: Distributed Call and Connection Mngmt
ECF: Equipment control function
EMF: Equipment management function
EMS: Element management system
E-NNI: External NNI
ETF: Equipment transport function
FCAPS: Fault, Configuration, Accounting, Performance, Security
FTP: File transfer protocol
IA: Implementation agreement
I-NNI: Internal NNI
LCAS: Link capacity adjustment scheme
LRM: Link resource manager
MIB: Management information base
Mp: Management plane
NCC: Network call controller
NE: Network element
NMS: Network management system
MLRA: Multi-layer routing area
MLSNPP: Multi-layer SNPP
MTNM: Multi-technology network management
NNI: Network-network interface
OH: Overhead
OSF: Operations system function
OSS: Operations support system
OTN: Optical transport network
PC: Protocol controller
RA: Routing area
RC: Routing controller
SC: Switched connection
SCN: Signaling communication network
SNC: Subnetwork connection
SPC: Soft permanent connection
SNP: Subnetwork point
SNPP: SNP Pool
SRG: Shared risk group
STM: Synchronous Transport Module
TAF: Transport atomic function
TAP: Termination & adaptation performer
TCE: Transport capability exchange
TCP: Termination connection point
TNA: Transport network address
TP: Termination point
Tp: Transport plane
TTP: Trail termination point
UNI: User-network interface
UML: Unified modeling language
VC: Virtual container
VCAT: Virtual concatenation
VLAN: Virtual local area network
WSF: Workstation function
XCDR: XML CDR format
XML: Extensible Markup Language

162
