
SYNCHRONOUS DIGITAL HIERARCHY OVERVIEW

Outline

Background (analog telephony, TDM, PDH)
SONET/SDH history and motivation
Architecture (path, line, section)
Rates and frame structure
Payloads and mappings
Protection and rings
VCAT and LCAS
Handling packet data

Background

The Present PSTN


[Figure: the PSTN — subscriber lines (the last mile) terminate on class 5 switches, which are interconnected across the network via tandem switches]

Analog voltages over copper wire are used only in the last mile; inside the network, digital signals are carried using Time Division Multiplexing.
Physical links make extensive use of optical fiber and wireless. T1/E1, PDH and SONET/SDH synchronous protocols are used. Signaling can be channel/trunk associated or carried over a separate network (SS7).

TDM Timing

Time Division Multiplexing relies on all channels (timeslots) having precisely the same timing (frequency and phase).
In order to enforce this, the TDM device itself frequently performs the digitization of the analog signals.

If The Inputs Are Already Digital

If the TDM switch does not digitize the analog signals itself, there can be a problem: the clocks used to digitize the inputs do not have identical frequencies, so we get byte slips!
(well, actually, we get bit slips first)
Exaggerated pictorial example: component signals numbered 1-9 are multiplexed, but because one input clock runs slightly slow or fast, a sample is eventually repeated or lost and the slip appears in the TDM output.
Numerical example: the clock is derived from an 8000 Hz quartz crystal. Typical crystal accuracy is 50 ppm, so 2 crystals can differ by 100 ppm, i.e. 0.8 samples per second — a slip of one full sample every 1.25 seconds.
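As a quick sanity check on that arithmetic, here is a minimal sketch in Python (the 8000 Hz rate and the ppm values are taken from the slide above):

def slip_period_seconds(nominal_hz: float, ppm_offset: float) -> float:
    """Seconds until two clocks that differ by `ppm_offset` parts-per-million
    drift apart by one full sample at `nominal_hz` samples per second."""
    samples_per_second_drift = nominal_hz * ppm_offset * 1e-6
    return 1.0 / samples_per_second_drift

# Two +/-50 ppm crystals can differ by up to 100 ppm:
drift = 8000 * 100e-6                    # 0.8 samples gained or lost per second
print(drift)                             # 0.8
print(slip_period_seconds(8000, 100))    # 1.25 -> one byte slip every 1.25 s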


The Fix

We must ensure that all the clocks have the same frequency.
Every telephony network has an accurate clock called a stratum 1, or Primary Reference Clock.
All other clocks are directly or indirectly locked to it (master-slave).
A TDM receiving device can lock onto the source clock based on the incoming data (FLL, PLL).
For this to work, we must ensure that the data has enough transitions

(special line coding, scrambling bits, etc.)


Comparing Clocks

A clock is said to be isochronous (isos=equal, chronos=time) if its ticks are equally spaced in time.
2 clocks are said to be synchronous (syn=same, chronos=time) if they tick in step, i.e. have precisely the same frequency.
2 clocks are said to be plesiochronous (plesio=near, chronos=time) if they are nominally of the same frequency but are not locked to each other.

PDH principle

If we want yet higher rates, we can mux together TDM signals (tributaries).
We could demux the TDM timeslots and directly remux them, but that is too complex.
The TDM inputs are already digital, so we must either
insist that the mux provide the clock to all tributaries (not always possible — a tributary may already be locked to another network), or
somehow transport each tributary with its own clock across a higher speed network with a different clock (without spoiling remote clock recovery).

PDH Hierarchies
level   Europe (CEPT)        North America        Japan
0       64 kbps              64 kbps              64 kbps
1       E1   2.048 Mbps      T1   1.544 Mbps      J1   1.544 Mbps
        (30 channels)        (24 channels)        (24 channels)
2       E2   8.448 Mbps      T2   6.312 Mbps      J2   6.312 Mbps
3       E3  34.368 Mbps      T3  44.736 Mbps      J3  32.064 Mbps
4       E4 139.264 Mbps      T4 274.176 Mbps      J4  97.728 Mbps

Each level multiplexes several signals of the level below (30 or 24 channels of 64 kbps at level 1, 4 level-1 signals at level 2, and so on).

Framing and Overhead

In addition to locking onto the bit-rate, we need to recognize the frame structure.
We identify frames by adding a Frame Alignment Signal (FAS).
The FAS is part of the frame overhead (which also includes "C-bits", OAM, etc.).
Each layer in the PDH hierarchy adds its own overhead. For example:

E1: 2 overhead bytes per 32 bytes, i.e. 6.25 % overhead
E2: 4 E1s = 8.192 Mbps out of 8.448 Mbps, so there is an additional 0.256 Mbps (about 3 %) of overhead;
altogether 4*30*64 kbps = 7.680 Mbps of voice out of 8.448 Mbps, i.e. 9.09 % overhead

PDH overhead
digital signal   data rate (Mbps)   voice channels   overhead percentage
T1                 1.544               24              0.52 %
T2                 6.312               96              2.66 %
T3                44.736              672              3.86 %
T4               274.176             4032              5.88 %
E1                 2.048               30              6.25 %
E2                 8.448              120              9.09 %
E3                34.368              480             10.61 %
E4               139.264             1920             11.76 %

Overhead always increases with data rate !
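A small sketch (plain Python; the rates and channel counts come straight from the table above) that reproduces the overhead percentages:

# (signal, line rate in Mbps, number of 64 kbps voice channels)
PDH_SIGNALS = [
    ("T1", 1.544, 24), ("T2", 6.312, 96), ("T3", 44.736, 672), ("T4", 274.176, 4032),
    ("E1", 2.048, 30), ("E2", 8.448, 120), ("E3", 34.368, 480), ("E4", 139.264, 1920),
]

for name, rate_mbps, channels in PDH_SIGNALS:
    payload_mbps = channels * 0.064          # each voice channel is 64 kbps
    overhead = 1 - payload_mbps / rate_mbps  # everything that is not voice payload
    print(f"{name}: {overhead:.2%} overhead")
# T1: 0.52% ... E4: 11.76% -- the overhead fraction grows with the rate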

OAM

Analog channels and 64 kbps digital channels do not have mechanisms to check signal validity and quality:
major faults could go undetected for long periods of time
it is hard to characterize and localize faults when they are reported
minor defects might go unnoticed indefinitely
The solution is to add mechanisms based on overhead.
As PDH networks evolved, more and more overhead was dedicated to Operations, Administration and Maintenance (OAM) functions, including:
monitoring for a valid signal
defect reporting
alarm indication/inhibition (AIS)

PDH Justification
In addition to the FAS, PDH overhead includes justification control (C-bits) and justification opportunity stuffing (R-bits).
Assume the tributary bitrate is B plus or minus some tolerance.
Positive justification: payload space is provided for the highest bitrate
  if the tributary actually runs at the maximum bitrate, then all payload and R bits are filled
  if the tributary rate is lower than the maximum, then sometimes there are not enough incoming bits, so the R-bits are left unfilled and the C-bits indicate this
Negative justification: payload space is provided for the lowest bitrate
  if the tributary actually runs at the minimum bitrate, then the payload space suffices
  if the tributary rate is higher than the minimum, then sometimes there are not enough positions to accommodate the bits, so R-bits in the overhead carry data and the C-bits indicate this
Positive/Negative justification: payload space is provided for the nominal bitrate B
  positive or negative justification is applied as required

SONET/SDH Motivation and History


First step
With the divestiture of the US Bell system a new need arose: MCI and NYNEX couldn't directly interconnect optical trunks.
The Interexchange Carrier Compatibility Forum asked T1 to solve the problem: a multivendor / multioperator fiber-optic communications standard was needed.
Three main tasks:
Optical interfaces (wavelengths, power levels, etc.) — proposal submitted to T1X1 (Aug 1984); T1.106 standard on single mode optical interfaces (1988)
Operations (OAM) system — proposal submitted to T1M1; T1.119 standard
Rates, formats, definition of network elements — Bellcore (Yau-Chau Ching and Rodney Boehm) proposal (Feb 1985), proposed to T1X1; the term SONET was coined; T1.105 standard (1988)

PDH limitations

Rate limitations:
only copper interfaces were defined
need to mux/demux the whole hierarchy of levels (hard to pull out a single timeslot)
overhead percentage increases with rate
At least three different systems (Europe, North America, Japan):
E: 2.048, 8.448, 34.368, 139.264 Mbps
T: 1.544, 3.152, 6.312, 44.736, 91.053, 274.176 Mbps
J: 1.544, 3.152, 6.312, 32.064, 97.728, 397.2 Mbps
So a completely new mechanism was needed.

Idea Behind SONET


Synchronous Optical NETwork
Designed for optical transport (high bitrate)
Direct mapping of lower levels into higher ones
Carries all PDH types in one universal hierarchy
ITU version = Synchronous Digital Hierarchy: different terminology, but interoperable
Overhead doesn't increase with rate
OAM designed in from the beginning

Standardization !

The original Bellcore proposal:
a hierarchy of signals, all multiples of a basic rate (50.688 Mbps)
basic rate of about 50 Mbps, to carry a DS3 payload
bit-oriented mux mechanisms to carry DS1, DS2, DS3
Many other proposals were merged into the 1987 draft document (rate 49.920 Mbps).
In the summer of 1986 CCITT expressed interest in cooperation:
it needed a rate of about 150 Mbps to carry an E4
it wanted byte-oriented muxing
Initial compromise attempt: the US wanted 13 rows * 180 columns, CEPT wanted 9 rows * 270 columns.
Final compromise: the US would use a basic rate of 51.84 Mbps (9 rows * 90 columns); CEPT would use three times that rate — 155.52 Mbps (9 rows * 270 columns).

SONET/SDH Architecture


Layers

SONET was designed with definite layering concepts.
Physical layer: optical fiber (linear or ring)
  when the fiber reach is exceeded, regenerators are used
  regenerators are not mere amplifiers — they use their own overhead
  the fiber between regenerators is called a section (regenerator section in SDH)
Line layer: the link between SONET muxes (Add/Drop Multiplexers)
  input and output at this level are Virtual Tributaries (Virtual Containers in SDH)
  actually 2 layers:
    lower order VC (for low bitrate payloads)
    higher order VC (for high bitrate payloads)
Path layer: the end-to-end path of client data (tributaries)
  client data (payload) may be PDH, ATM or packet data

SONET architecture
[Figure: an end-to-end path crosses two ADMs; path termination and line termination occur at the ADMs and section termination at each regenerator — the path rides on lines, which in turn ride on sections between adjacent elements]

SONET (SDH) has 3 layers:
path — the end-to-end data connection, which muxes tributary signals (SDH: path)
  there are STS paths and Virtual Tributary (VT) paths
line — the protected, multiplexed SONET payload (SDH: multiplex section)
section — the physical link between adjacent elements (SDH: regenerator section)
Each layer has its own overhead to support the needed functionality.
(SDH terminology given in parentheses)

STS, OC, etc.


A SONET signal is called a Synchronous Transport Signal (STS).
The basic signal is STS-1; all others are multiples of it — STS-N.
The (optical) physical layer signal corresponding to an STS-N is an OC-N.

SONET     STS-1    STS-3     STS-12     STS-48     STS-192
Optical   OC-1     OC-3      OC-12      OC-48      OC-192
rate      51.84M   155.52M   622.080M   2488.32M   9953.28M
                   (*3)      (*4)       (*4)       (*4)

Rates and Frame Structure


SONET / SDH Frames


Framing

Synchronous Transport Signals are bit-serial signals (OCs are their optical equivalents).
Like all TDM signals, there are framing bits at the beginning of the frame.
However, it is convenient to draw SONET/SDH signals as rectangles of bytes.

SONET STS-1 frame


[Figure: the STS-1 frame drawn as 9 rows by 90 columns, with the framing bytes at the start]
Each STS-1 frame is 90 columns * 9 rows = 810 bytes.
There are 8000 STS-1 frames per second, so each byte represents 64 kbps (and each column 576 kbps).
Thus the basic STS-1 rate is 51.840 Mbps.

SDH STM-1 frame


[Figure: the STM-1 frame drawn as 9 rows by 270 columns]

Synchronous Transport Modules are the corresponding bit-serial signals for SDH.
Each STM-1 frame is 270 columns * 9 rows = 2430 bytes.
There are 8000 STM-1 frames per second.
Thus the basic STM-1 rate is 155.520 Mbps — 3 times the STS-1 rate!
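The frame arithmetic on the last two slides is easy to check in a few lines of Python (a sketch; the 8000 frames/s and the column counts are taken from the slides):

FRAMES_PER_SECOND = 8000   # one frame every 125 microseconds
ROWS = 9

def rate_mbps(columns: int) -> float:
    """Line rate of a 9-row SONET/SDH structure with the given number of columns."""
    bytes_per_frame = ROWS * columns
    return bytes_per_frame * 8 * FRAMES_PER_SECOND / 1e6

print(rate_mbps(90))    # STS-1  -> 51.84
print(rate_mbps(270))   # STM-1  -> 155.52
print(rate_mbps(1))     # one column -> 0.576 Mbps, so one byte -> 0.064 Mbps
print(rate_mbps(87))    # STS-1 SPE (90 columns minus 3 of TOH) -> 50.112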

SONET/SDH Rates
SONET     STS-1    STS-3     STS-12     STS-48     STS-192
SDH       —        STM-1     STM-4      STM-16     STM-64
columns   90       270       1080       4320       17280
rate      51.84M   155.52M   622.080M   2488.32M   9953.28M

An STS-N has 90N columns; an STM-M corresponds to an STS-N with N = 3M.
SDH rates increase by a factor of 4 at each step.
STS/STM signals can carry PDH tributaries, for example:
STS-1 can carry 1 T3, or 28 T1s, or 1 E3, or 21 E1s
STM-1 can carry 3 E3s, or 63 E1s, or 3 T3s, or 84 T1s

SONET/SDH tributaries
       STS-1   STS-3/STM-1   STS-12/STM-4   STS-48/STM-16   STS-192/STM-64
T1     28      84            336            1344            5376
T3     1       3             12             48              192
E1     21      63            252            1008            4032
E3     1       3             12             48              192
E4     —       1             4              16              64

E3 and T3 are carried as Higher Order Paths (HOPs); E1 and T1 are carried as Lower Order Paths (LOPs).
(the numbers above are for direct mapping)

STS-1 frame structure


[Figure: the STS-1 frame — the first 3 of the 90 columns are Transport Overhead (TOH), consisting of 3 rows of section overhead and 6 rows of line overhead; the remaining 87 columns form the Synchronous Payload Envelope (SPE)]

Section overhead is 3 rows * 3 columns = 9 bytes = 576 kbps: framing, performance monitoring, management.
Line overhead is 6 rows * 3 columns = 18 bytes = 1152 kbps: protection switching, line maintenance, mux/concatenation, the SPE pointer.
The SPE is 9 rows * 87 columns = 783 bytes = 50.112 Mbps.
Similarly, STM-1 has 9 (different) columns of section + line overhead!

STM-1 frame structure


[Figure: the STM-1 frame — the first 9 of the 270 columns are Section Overhead (SOH): RSOH, the AU pointer row and MSOH; the remaining 261 columns are payload]

STM-1 has 9 (different) columns of transport overhead!
RS overhead is 3 rows * 9 columns.
The pointer overhead is 1 row * 9 columns.
MS overhead is 5 rows * 9 columns.
The SPE is 9 rows * 261 columns.

Even Higher Rates


[Figure: an STM-N (STS-3N) frame has 9 rows and 270*N columns, of which the first 9*N are overhead]

3 STS-1s can form an STS-3.
4 STM-1s (STS-3s) can form an STM-4 (STS-12).
4 STM-4s (STS-12s) can form an STM-16 (STS-48), etc. for STM-N (STS-3N).
The combining procedure is byte-interleaving.

Byte-interleaving

[Figure: byte-interleaving — bytes are taken from the component signals in round-robin order to build the higher rate frame]
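A toy sketch of the byte-interleaving idea (plain Python; a real framer of course works on whole 810-byte frames and rewrites overhead, which is not modelled here):

def byte_interleave(tributaries: list[bytes]) -> bytes:
    """Round-robin byte interleaving of N equal-length byte streams,
    e.g. 3 STS-1 streams -> 1 STS-3 stream."""
    n = len(tributaries)
    length = len(tributaries[0])
    assert all(len(t) == length for t in tributaries)
    out = bytearray(n * length)
    for i, trib in enumerate(tributaries):
        out[i::n] = trib          # byte k of tributary i lands at position k*n + i
    return bytes(out)

a = bytes([0xA1] * 4)
b = bytes([0xB2] * 4)
c = bytes([0xC3] * 4)
print(byte_interleave([a, b, c]).hex())   # a1b2c3a1b2c3a1b2c3a1b2c3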

Scrambling
SONET/SDH receivers recover the clock from the incoming signal.
An insufficient number of 0-1 transitions degrades clock recovery.
In order to guarantee sufficient transitions, SONET/SDH employs a scrambler.

All data except the first row of section overhead is scrambled.
The scrambler is a 7-bit frame-synchronous scrambler with generating polynomial X7 + X6 + 1, initialized to all ones at the start of each frame.
A short scrambler is sufficient for voice, but NOT for packet data, which may contain long stretches of zeros.
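A sketch of that frame scrambler in Python (assuming, as is usual for x^7 + x^6 + 1, that the 7-bit state is reset to all ones at the start of the scrambled region and that the generated bit is XORed with the data, MSB first):

def sonet_scramble(data: bytes) -> bytes:
    """Frame-synchronous scrambler, generating polynomial x^7 + x^6 + 1.
    Applying it twice (with the same reset) gives back the original data."""
    state = 0b1111111                                 # reset to all ones each frame
    out = bytearray()
    for byte in data:
        scrambled = 0
        for bit in range(7, -1, -1):                  # MSB first
            prn = (state >> 6) & 1                    # scrambler output bit
            scrambled |= (((byte >> bit) & 1) ^ prn) << bit
            feedback = ((state >> 6) ^ (state >> 5)) & 1   # taps x^7 and x^6
            state = ((state << 1) | feedback) & 0x7F
        out.append(scrambled)
    return bytes(out)

frame_payload = bytes(16)                               # a long run of zero bytes
print(sonet_scramble(frame_payload).hex())              # no long runs of zeros any more
print(sonet_scramble(sonet_scramble(frame_payload)) == frame_payload)   # True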

Scrambling
When carrying packet data an additional payload scrambler is used:
modern standards use the 43-bit self-synchronous scrambler X43 + 1
it runs continuously over the ATM payload bytes (suspended for the 5 bytes of cell header "cell tax")
it runs continuously over HDLC payloads
each output bit is y(n) = x(n) + y(n-43) (mod 2)
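And a sketch of the x^43 + 1 self-synchronous scrambler (bit-level Python; the 43-bit history is assumed to start at zero here, whereas in a real framer it simply carries over from the previous bits):

def scramble_x43(bits: list[int], history: int = 0) -> list[int]:
    """Self-synchronous scrambler y(n) = x(n) XOR y(n-43)."""
    state = history & ((1 << 43) - 1)     # last 43 *output* bits
    out = []
    for x in bits:
        y = x ^ ((state >> 42) & 1)       # XOR with the output bit 43 places back
        state = ((state << 1) | y) & ((1 << 43) - 1)
        out.append(y)
    return out

def descramble_x43(bits: list[int], history: int = 0) -> list[int]:
    """Descrambler x(n) = y(n) XOR y(n-43); the state is the received stream itself."""
    state = history & ((1 << 43) - 1)
    out = []
    for y in bits:
        out.append(y ^ ((state >> 42) & 1))
        state = ((state << 1) | y) & ((1 << 43) - 1)
    return out

data = [0] * 100                                    # a long run of zeros
print(descramble_x43(scramble_x43(data)) == data)   # True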

STS-1 Overhead
The STS-1 transport overhead occupies the first 3 columns of the frame:

section overhead   A1   A2   J0
                   B1   E1   F1
                   D1   D2   D3
line overhead      H1   H2   H3
                   B2   K1   K2
                   D4   D5   D6
                   D7   D8   D9
                   D10  D11  D12
                   S1   M0   E2

The STS-1 overhead consists of:
3 rows of section overhead — frame sync (A1, A2), section trace (J0), error control (B1), section orderwire (E1), Embedded Operations Channel (D1-D3)
6 rows of line overhead — pointer and pointer action (H1-H3), error control (B2), Automatic Protection Switching signaling (K1, K2), Data Communications Channel (D4-D12), Synchronization Status Message (S1), Far End Block Error (M0), line orderwire (E2)

STM-1 Overhead
The STM-1 section overhead occupies the first 9 columns of the frame:

RSOH   A1  A1  A1   A2  A2  A2   J0  m   m
       B1  m   m    E1  m   m    F1  res res
       D1  m   m    D2  m   m    D3  res res
       AU pointers (row 4)
MSOH   B2  B2  B2   K1           K2
       D4           D5           D6
       D7           D8           D9
       D10          D11          D12
       S1           M1           E2  res

m = media dependent bytes (defined for SONET radio); res = reserved for national use

A1, A2, J0 (Section Overhead)


A1, A2 — framing bytes: A1 = 11110110, A2 = 00101000.
SONET/SDH framing always uses equal numbers of A1 and A2 bytes.
J0 — regenerator section trace (in early SONET, a counter called C1):
enables the receiver to verify that the section connection is still OK
enables identifying individual STS/STMs after muxing
J0 cycles through a 16-byte sequence: the first byte is 1 C1 C2 C3 C4 C5 C6 C7 (MSB set, then a CRC-7 over the previous frame of the sequence); the other 15 bytes are 0 S S S S S S S (MSB clear, then a 7-bit character) — the 15 characters form the section access point identifier.

B1, E1, F1, D1-3 (Section Overhead)


B1 — Bit Interleaved Parity-8 (BIP-8): even parity over the bits of the bytes of the previous frame, computed after scrambling; there is only 1 B1 for a multiplexed STS/STM.
E1 — section orderwire: a 64 kbps voice link for technicians, regenerator to regenerator.
F1 — a 64 kbps link for user purposes.
D1 + D2 + D3 — a 192 kbps messaging channel used by section terminations as the Embedded Operations Channel (SONET) or Data Communications Channel (SDH).
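BIP-8 is simple enough to show in a few lines (a Python sketch; it just folds every byte of the covered block together with XOR, which is exactly "even parity computed per bit position"):

from functools import reduce

def bip8(block: bytes) -> int:
    """Bit Interleaved Parity-8: bit i of the result makes the number of ones
    in bit position i, over the whole block, even."""
    return reduce(lambda acc, b: acc ^ b, block, 0)

previous_frame = bytes([0xF6, 0x28, 0x01, 0x55, 0xAA])   # toy stand-in for a frame
b1 = bip8(previous_frame)
print(f"B1 = {b1:08b}")
# The receiver recomputes bip8() over the received frame and compares it with the
# B1 byte carried in the *next* frame; a mismatch indicates bit errors.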

Pointers (Line Overhead)


In SONET, the pointers are considered part of the line overhead.
For STS-1, H1+H2 form the pointer and H3 is the pointer action byte.
H1+H2 indicate the offset (in bytes) from H3 to the start of the SPE
(i.e. if the offset is 0 then the J1 POH byte immediately follows H3 in the same row).
The 4 MSBs are the New Data Flag (NDF); the 10 LSBs are the actual offset value (0 - 782).
When the offset is 522 the STS-1 SPE fits within a single STS-1 frame; in all other cases the SPE straddles two frames.
When the offset is a multiple of 87, the SPE is rectangular.
To compensate for clock differences we have pointer justification:
on negative justification, H3 carries the extra data byte
on positive justification, the byte after H3 is a stuffing byte
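A small sketch of how a receiver could turn the H1/H2 offset into a frame position (Python; the row/column bookkeeping is my own illustration of the convention just described — offset 0 is the payload byte right after H3, and each row holds 87 payload bytes in columns 4..90 — not code from any standard):

def j1_position(offset: int) -> tuple[str, int, int]:
    """Map an STS-1 pointer offset (0..782) to (frame, row, column) of the J1 byte."""
    assert 0 <= offset <= 782
    if offset < 6 * 87:                    # rows 4..9 of the same frame (522 positions)
        return ("same frame", 4 + offset // 87, 4 + offset % 87)
    rest = offset - 6 * 87                 # rows 1..3 of the next frame (261 positions)
    return ("next frame", 1 + rest // 87, 4 + rest % 87)

print(j1_position(0))     # ('same frame', 4, 4)  J1 right after H3
print(j1_position(522))   # ('next frame', 1, 4)  SPE contained in a single frame
print(j1_position(87))    # ('same frame', 5, 4)  a multiple of 87 -> rectangular SPE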

SONET Justification
If the tributary rate is above nominal, negative justification is needed:
while fewer than 8 extra bits have accumulated in the buffer, NDF stays at 0110 and the offset is unchanged
when 8 extra bits have accumulated, NDF is set to 1001, the extra byte is placed into H3, and the offset is decremented by 1 (byte)
If the tributary rate is below nominal, positive justification is needed:
while fewer than 8 bits are missing from the buffer, NDF stays at 0110 and the offset is unchanged
when 8 bits are missing, NDF is set to 1001, the byte after H3 is sent as a stuffing byte, and the offset is incremented by 1 (byte)

B2, K1, K2, D4-D12 (Line Overhead)

B2 — BIP-8 over the line overhead plus the previous envelope (computed before scrambling); there are N B2s for a muxed STS-N/STM-N.
K1 and K2 are used for Automatic Protection Switching (see later).
D4 - D12 form a 576 kbps Data Communications Channel between multiplexers, usually carrying manufacturer specific OAM functions.

S1, M0, E2 (Line Overhead)


S1 — Synchronization Status Message: indicates the stratum level of the clock (unknown, stratum 1, ..., do not use).
M0 — Far End Block Error: reports the number of BIP violations detected at the far end.
E2 — line orderwire: a 64 kbps voice link for technicians, line mux to line mux.

Payloads and Mappings


STS-1 HOP SPE Structure

We saw that the pointer in the line overhead points to the STS path overhead (POH) after re-arranging.
The POH is one column of 9 rows (9 bytes = 576 kbps).

STS-1 HOP:
1 column of the SPE is the POH
2 more columns (fixed stuff, columns 30 and 59 of the SPE) are reserved
We are left with 84 columns = 756 bytes = 48.384 Mbps for payload
This is enough for an E3 (34.368M) or a T3 (44.736M)

STS-1 Path overhead


The POH column consists of the bytes J1, B3, C2, G1, F2, H4, F3, K3, N1 (top to bottom).
1 column of overhead per path (576 kbps).
The POH is responsible for:
path type identification
path performance monitoring
status (including status of mapped payloads)
virtual concatenation
path protection
path trace

J1, B3, C2 (Path Overhead)


J1 — path trace: enables the receiver to verify that the path connection is still OK.
B3 — BIP-8: even bit parity over the bytes of the previous payload (computed without scrambling).
C2 — path signal label: identifies the payload type. Examples:

C2 (hex)   payload type
00         unequipped
01         equipped, nonspecific
02         LOP (TUG structured)
04         E3/T3
12         E4
13         ATM
16         PoS (RFC 1662)
18         LAPS (X.85)
1A         10G Ethernet
1B         GFP
CF         PoS (RFC 1619)

G1, F2, H4, F3, K3, N1 (Path Overhead)


G1 — path status: conveys status and performance back to the path originator; the 4 MSBs are the path FEBE, 1 bit is RDI, 3 bits are unused.
F2 and F3 — user specific communications channels.
H4 — used for LOP multiframe sync and for VCAT (see later).
K3 (4 MSBs) — path APS.
N1 — Tandem Connection Monitoring: a messaging channel for tandem connections.

LOP
To carry lower rate payloads, the 84 available columns are divided into 7 * 12 interleaved columns, i.e. 7 Virtual Tributary Groups (VTGs).
A VT group is 12 columns of 9 rows, i.e. 108 bytes or 6.912 Mbps.
A VT group is composed of VTs:
there are different types of VT, in order to carry different types of payload
all VTs in a VT group must be of the same type (no mixing)
but different VT groups in the same SPE can have different VT types
A VT can have 3, 4, 6 or 12 columns.

SONET/SDH : VT/VC types


VT/STS       SDH VC    columns   rate (Mbps)   payload                      VTs per group
VT1.5        VC-11     3         1.728         DS1 (1.544)                  4
VT2          VC-12     4         2.304         E1 (2.048)                   3
VT3          —         6         3.456         DS1C (3.152)                 2
VT6          VC-2      12        6.912         DS2 (6.312)                  1
STS-1 (HOP)  VC-3      —         48.384        E3 (34.368), DS3 (44.736)    —
STS-3c       VC-4      —         149.760       E4 (139.264)                 —
standard PDH rates map efficiently into SONET/SDH !

LO Path Overhead
The LOP overhead is responsible for timing, PM and REI; LO path APS signaling uses 4 bits of the K4 byte.
The LO pointer and path overhead are spread over a 500 µs multiframe of four 125 µs frames, identified by the 2 LSBs of H4:

H4 = xxxxxx00   V1 (pointer)   V5
H4 = xxxxxx01   V2 (pointer)   J2
H4 = xxxxxx10   V3 (pointer)   N2
H4 = xxxxxx11   V4 (pointer)   K4

Per 125 µs frame a VC-11 occupies 27 bytes (25 of them payload) and a VC-12 occupies 36 bytes (34 of them payload).

Payload Capacity
A VT1.5/VC-11 has 3 columns = 27 bytes per frame = 1.728 Mbps, but 2 bytes per frame are used for overhead (one of V1/V2/V3/V4 and one of V5/J2/N2/K4), so only 25 bytes = 1.6 Mbps are available for payload.
Similarly, a VT2/VC-12 has 4 columns = 36 bytes per frame = 2.304 Mbps, but 2 bytes are used for overhead, so only 34 bytes = 2.176 Mbps are available.
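The same 8000-frames-per-second arithmetic gives the usable VT capacities (a sketch; 2 overhead bytes per frame as described above):

FRAMES_PER_SECOND = 8000

def vt_rates(columns: int, overhead_bytes: int = 2) -> tuple[float, float]:
    """(gross, usable) rate in Mbps of a VT with the given number of columns."""
    gross_bytes = 9 * columns
    usable_bytes = gross_bytes - overhead_bytes
    to_mbps = lambda b: b * 8 * FRAMES_PER_SECOND / 1e6
    return to_mbps(gross_bytes), to_mbps(usable_bytes)

print(vt_rates(3))   # VT1.5 / VC-11: (1.728, 1.6)
print(vt_rates(4))   # VT2   / VC-12: (2.304, 2.176)
print(vt_rates(12))  # VT6   / VC-2 : (6.912, 6.784)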

LOP Overhead

V5 consists of:
BIP (2 bits), REI (1 bit), RFI (1 bit), signal label (3 bits: unequipped, async, bit-sync, byte-sync, test, AIS), RDI (1 bit)
J2 is the LO path trace.
N2 is the network operator byte — it may be used for LOP tandem connection monitoring (LO-TCM).
K4 is used for LO VCAT and LO APS.

SDH Containers

Tributary payloads are not placed directly into SDH.
Payloads are placed (adapted) into containers.
Containers are made into virtual containers by adding POH.
Next, the pointer is added: pointer + VC is a TU or an AU.
A Tributary Unit adapts a lower order VC into a higher order VC.
An Administrative Unit adapts a higher order VC into the SDH frame.
TUs and AUs are grouped together until they are big enough; we finally get an Administrative Unit Group.
To the AUG we add the SOH to make the STM frame.
Formally:
C-n, n = 11, 12, 2, 3, 4
VC-n = POH + C-n
TU-n = pointer + VC-n (n = 11, 12, 2, 3)
AU-n = pointer + VC-n (n = 3, 4)
TUG = N * TU-n
AUG = N * AU-n
STM-N = SOH + AUG

Multiplexing

An AUG may contain a VC-4 carrying an E4, or it may contain 3 AU-3s, each with a VC-3 carrying an E3; in the latter case the AUG contains 3 pointers, one per AU-3.
[Figure: the three byte-interleaved AU-3 pointers (H1 H1 H1 H2 H2 H2 H3 H3 H3) each point to the J1 byte at the top of one VC-3 POH column (J1 B3 C2 G1 F2 H4 F3 K3 N1)]

More Multiplexing

Similarly, we can hierarchically build complex structures:
lower rate STMs can be combined into higher rate STMs
AUGs can be combined into STMs
AUs can be combined into AUGs
TUGs can be combined into higher order VCs
lower rate TUs can be combined into TUGs
etc.
But only certain combinations are allowed by the standards.

All SDH Mappings


STM-N <- xN AUG
AUG <- AU-4 <- VC-4 <- C-4 (E4 139.264 M, ATM 149.760 M)
AUG <- x3 AU-3 <- VC-3 <- C-3 (E3 34.368 M, T3 44.736 M, ATM 48.384 M); an STM-0 carries a single AU-3
VC-4 <- x3 TUG-3 <- TU-3 <- VC-3
VC-3 or TUG-3 <- x7 TUG-2
TUG-2 <- x1 TU-2 <- VC-2 <- C-2 (T2 6.312 M, ATM 6.874 M)
TUG-2 <- x3 TU-12 <- VC-12 <- C-12 (E1 2.048 M, ATM 2.144 M)
TUG-2 <- x4 TU-11 <- VC-11 <- C-11 (T1 1.544 M, ATM 1.6 M)

All SONET Mappings


STS-N <- xN STS-1 (or STS-3c, with pointer processing)
STS-3c SPE <- E4 139.264 M, ATM 149.760 M
STS-1 SPE <- E3 34.368 M, T3 44.736 M, ATM 48.384 M
STS-1 SPE <- x7 VTG
VTG <- x1 VT6 <- VT6 SPE (T2 6.312 M, ATM 6.874 M)
VTG <- x3 VT2 <- VT2 SPE (E1 2.048 M, ATM 2.144 M)
VTG <- x4 VT1.5 <- VT1.5 SPE (T1 1.544 M, ATM 1.6 M)

Tributary Mapping Types

When mapping tributaries into VCs, PDH-like bit-stuffing is used.
For E1 and T1 there are several options:
asynchronous mapping (framing-agnostic)
bit synchronous mapping
byte synchronous mapping (timeslot aligned)
E4 into VC-4 and E3/T3 into VC-3 are always asynchronous.
T1 into VC-11 may use any of the 3 (in byte synchronous mapping the framing bit is placed in the VC overhead).
E1 into VC-12 may be asynchronous or byte synchronous.

WAN-PHY (10 GbE in STM-64)


10GBASE-W: 10 GbE 10GBASE-R (64B/66B coding) can be directly mapped into an STM-64 (with contiguous concatenation — see later) without need for GFP.
The MAC creates a "stretched InterPacket Gap" to compensate for the payload rate being < 10G.
This is the fastest connection commonly used for Internet traffic.
Complication: SDH clock accuracy is 4.6 ppm, while the GbE clock accuracy is 20 ppm.
Defined in 802.3-2005 Clause 50.
This is a special case where the bit-rates work out relatively well: the STM-64 SPE has 64 * (270 - 9) = 16704 columns; one column of POH (starting with J1) and 63 columns of fixed stuff leave 16640 columns for payload.

Protection and Rings


What is protection ?
SONET/SDH needs to be highly reliable (five nines), and down-time should be minimal (less than 50 msec), so systems must repair themselves — there is no time for manual intervention.
Upon detection of a failure (dLOS, dLOF, high BER) the network must reroute traffic (protection switching) from the working channel to the protection channel.
The Network Element that detects the failure (the tail-end NE) initiates the protection switching.
The head-end NE must change its forwarding or send duplicate traffic.
Protection switching is unidirectional.
Protection switching may be revertive (automatically reverting to the working channel once it is repaired).
[Figure: head-end NE and tail-end NE connected by a working channel and a protection channel]

How Does It Work?


Head-end and tail-end NEs have bridges (muxes).
Head-end and tail-end NEs maintain a bidirectional signaling channel.
The signaling is carried in the K1 and K2 bytes of the protection channel:
K1 — tail-end status and requests
K2 — head-end status
[Figure: head-end bridge and tail-end bridge connected by the working channel, the protection channel and the signaling channel]

Linear 1+1 protection


The simplest form of protection.
Can be at the OC-N level (different physical fibers), at the STM/VC level (called SubNetwork Connection Protection), or end-to-end path (called trail protection).
The head-end bridge always sends the data on both channels.
The tail-end chooses which channel to use based on BER, dLOS, etc.
No signaling is needed.
If non-revertive, there is no distinction between working and protection channels.
BW utilization is 50%.

Linear 1:1 protection


The head-end bridge normally sends data only on the working channel.
When the tail-end detects a failure it signals (using K1) to the head-end.
The head-end then starts sending data over the protection channel.
When not in use, the protection channel can carry (discounted) extra traffic (pre-emptible unprotected traffic).
May be at any layer (but only OC-N level protection protects against fiber cuts).

Linear 1:N protection


In order to save BW we allocate 1 protection channel for every N working channels.
N is limited to 14 by the 4-bit channel number in the K1 byte sent from tail-end to head-end:
0 — protection channel
1-14 — working channels
15 — extra traffic channel

Two Fiber vs. Four-Fiber Rings


Ring based protection is popular in North America (100K+ rings).
Full protection against physical fiber cuts.
Simpler and less expensive than mesh topologies.
Protection at the line (multiplex section) or path layer.
Four-fiber rings are fully redundant at the OC level and can support bidirectional routing at the line layer.
Two-fiber rings (2 fibers in opposite directions) support unidirectional routing at the line layer.

Unidirectional vs. bidirectional


Unidirectional routing: the working channel B-to-A travels in the same direction around the ring (e.g. clockwise) as A-to-B.
Management simplicity: A-B and B-A can occupy the same timeslots.
Inefficient: ring BW is wasted, and there is excessive delay in one direction.
Bidirectional routing: A-B and B-A travel in opposite directions, both using the shortest route.
Spatial reuse: timeslots can be reused in other sections of the ring.

UPSR vs. BLSR (MS-SPRing)


               UPSR               BLSR (MS-SPRing)
direction      unidirectional     bidirectional
switching      path switching     line switching
fibers         two-fiber          two-fiber or four-fiber

Of all the possible combinations, only a few are in use.
Unidirectional Path Switched Rings protect individual tributaries — an extension of 1+1 to a ring topology.
Bidirectional Line Switched Rings (two-fiber and four-fiber versions) are called Multiplex Section Shared Protection Rings in SDH; they simultaneously protect all tributaries in the STM — an extension of 1:1 to a ring topology.

UPSR
The working channel is in one direction, the protection channel in the opposite direction.
All traffic is added in both directions; the decision as to which copy to use is made at the drop point (no signaling).
Normally non-revertive, so effectively there are two diverse paths.
A good match for access networks: one resilient access ring is less expensive than a fiber pair per customer.
Inefficient for core networks: no spatial reuse, every signal occupies every span in both directions, and each node needs to continuously monitor every tributary to be dropped.

BLSR
Switching is at the line level, so less monitoring is needed.
When a failure is detected, the tail-end NE signals the head-end NE.
Works for unidirectional and bidirectional fiber cuts, and for NE failures.
Two-fiber version: half of the OC-N capacity is devoted to protection, so only half the capacity is available for traffic.
Four-fiber version: a fully redundant OC-N is devoted to protection; twice as many NEs as compared to the two-fiber version.
[Figure: example recovery from a unidirectional fiber cut]

VCAT and LCAS


Concatenation
Payloads that don't fit into standard VT/VC sizes can be accommodated by concatenating several VTs / VCs.
For example, 10 Mbps doesn't fit into any VT or VC, so without concatenation we need to put it into an STS-1 (48.384 Mbps payload) and the remaining 38.384 Mbps cannot be used.
We would like to be able to divide the 10 Mbps among 7 VT1.5/VC-11s = 7 * 1.600 = 11.20 Mbps, or 5 VT2/VC-12s = 5 * 2.176 = 10.88 Mbps.

Concatenation (cont.)
There are 2 ways to concatenate X VTs or VCs:
Contiguous Concatenation (G.707 11.1)
  HOP: STS-Nc (SONET) or VC-4-Nc (SDH); LOP: 1-7 VC-2-Nc into a VC-3
  since the result has to fit into a SONET/SDH payload, only STS-Nc with N = 3 * 4^n, or VC-4-Nc with N = 4^n
  the components are transported together and in phase
  requires support at intermediate network elements
Virtual Concatenation (VCAT, G.707 11.2)
  HOP: STS-1-Xv or STS-Nc-Xv (SONET), VC-3/4-Xv (SDH); LOP: VT-1.5/2/3/6-Xv (SONET), VC-11/12/2-Xv (SDH)
  LOP: X <= 64, HOP: X <= 256 (limited by the bits available in the header)
  the payload is split over multiple STSs / STMs
  fragments may follow different routes
  requires support only at the path terminations
  requires buffering and differential delay alignment

Contiguous Concatenation: STS-3c


STS-3: 270 columns of 9 rows — 9 columns of section and line overhead, 3 columns of path overhead (one per STS-1), and 258 columns of SPE payload:
258 columns * 0.576 Mbps = 148.608 Mbps
STS-3c: also 270 columns — 9 columns of section and line overhead, but only 1 column of path overhead, leaving 260 columns of SPE payload:
260 columns * 0.576 Mbps = 149.760 Mbps

STS-N vs. STS-Nc


Although both have a raw rate of 155.520 Mbps, the STS-3c has 2 more payload columns (1.152 Mbps) available.
More generally, an STS-Nc gains (N-1) columns over an STS-N:
STS-12c gains 11 columns = 6.336 Mbps
STS-48c gains 47 columns = 27.072 Mbps
STS-192c gains 191 columns = 110.016 Mbps!
However, an STS-Nc signal is not as easily separable when we want to add/drop component signals.

Virtual Concatenation

VCAT is an inverse multiplexing mechanism (round-robin over the members, sequenced via the H4 byte).
VCAT members may travel along different routes through the SONET/SDH network.
Intermediate network elements don't need to know about VCAT (unlike contiguous concatenation, which must be handled by all intermediate nodes).

SDH virtually concatenated VCs


VC          capacity (Mbps)                 if all members in one VC-3    if all members in one VC-4
VC-11-Xv    1.600, 3.200, ..., 1.600*X      X <= 28  (44.800 Mbps)        X <= 64  (102.400 Mbps)
VC-12-Xv    2.176, 4.352, ..., 2.176*X      X <= 21  (45.696 Mbps)        X <= 63  (137.088 Mbps)
VC-2-Xv     6.784, 13.568, ..., 6.784*X     X <= 7   (47.488 Mbps)        X <= 21  (142.464 Mbps)

So we have many permissible rates: 1.600, 2.176, 3.200, 4.352, 4.800, 6.400, 6.528, 6.784, 8.000, ...

SONET virtually concatenated VTs


VT          capacity (Mbps)                 if all members in one STS-1   if all members in one STS-3c
VT1.5-Xv    1.600, 3.200, ..., 1.600*X      X <= 28  (44.800)             X <= 64  (102.400)
VT2-Xv      2.176, 4.352, ..., 2.176*X      X <= 21  (45.696)             X <= 63  (137.088)
VT3-Xv      3.328, 6.656, ..., 3.328*X      X <= 14  (46.592)             X <= 42  (139.776)
VT6-Xv      6.784, 13.568, ..., 6.784*X     X <= 7   (47.488)             X <= 21  (142.464)

So we have many permissible rates: 1.600, 2.176, 3.200, 3.328, 4.352, 4.800, 6.400, 6.528, 6.656, 6.784, ...

Efficiency Comparison
rate (Mbps)   without VCAT           efficiency   with VCAT               efficiency
10            STS-1 / VC-3           21%          VT2-5v / VC-12-5v       92%
100           STS-3c / VC-4          67%          STS-1-2v / VC-3-2v      ~100%
1000          STS-48c / VC-4-16c     42%          STS-3c-7v / VC-4-7v     95%

Using VCAT increases the efficiency to close to 100%!
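A sketch that reproduces the efficiency figures (Python; the container payload rates are those given earlier in the slides):

# Payload capacities in Mbps, taken from the earlier slides.
PAYLOAD_MBPS = {
    "STS-1 / VC-3":        48.384,
    "STS-3c / VC-4":       149.760,
    "STS-48c / VC-4-16c":  16 * 149.760,   # 2396.16
    "VT2-5v / VC-12-5v":   5 * 2.176,      # 10.88
    "STS-1-2v / VC-3-2v":  2 * 48.384,     # 96.768
    "STS-3c-7v / VC-4-7v": 7 * 149.760,    # 1048.32
}

CASES = [(10, "STS-1 / VC-3", "VT2-5v / VC-12-5v"),
         (100, "STS-3c / VC-4", "STS-1-2v / VC-3-2v"),
         (1000, "STS-48c / VC-4-16c", "STS-3c-7v / VC-4-7v")]

for rate, plain, vcat in CASES:
    print(f"{rate:>4} Mbps: {rate / PAYLOAD_MBPS[plain]:5.0%} without VCAT, "
          f"{rate / PAYLOAD_MBPS[vcat]:5.0%} with VCAT")
# 10 Mbps: 21% -> 92%;  1000 Mbps: 42% -> 95%.
# Note: the 100 Mbps case prints slightly above 100% (100 / 96.768);
# the slide's table simply lists it as ~100%.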

PDH VCAT
Recently ITU-T G.7043 expanded VCAT to E1, T1, E3 and T3.
It enables bonding of up to 16 PDH signals to support higher rates.
Only bonding of like PDH signals is allowed (e.g. you can't mix E1s and T1s).
The multiframe is always per G.704/G.832 (e.g. T1 ESF = 24 frames, E1 = 16 frames).
1 byte per multiframe per member is VCAT overhead (SQ, MFI, MST, CRC).
Supports LCAS (to be discussed next).
[Figure: 4 bonded E1s; in each E1 one octet per multiframe carries the VCAT overhead, alongside TS0]

PDH VCAT Overhead Octet


[Figure: the VCAT overhead octet within the frames of an E1 multiframe, alongside TS0]
There is one VCAT overhead octet per multiframe, so the net rate is:
T1: (24*24 - 1 =) 575 data bytes per 3 ms multiframe = 191.666 kB/s
E1: (16*31 - 1 =) 495 data bytes per 2 ms multiframe = 247.5 kB/s
T3 and E3 can also be used.
We will show the overhead octet format later (when using LCAS, the overhead octet is called the VLI).
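Checking those net-rate figures (a sketch; frame sizes and multiframe lengths as on the slide, with the E1 counted as 31 usable bytes per frame outside TS0):

def pdh_vcat_net_rate(bytes_per_frame: int, frames_per_multiframe: int,
                      multiframe_ms: float) -> float:
    """Net payload rate in kB/s after removing the one VCAT overhead octet
    carried in each multiframe."""
    data_bytes = bytes_per_frame * frames_per_multiframe - 1
    return data_bytes / multiframe_ms      # bytes per ms == kB/s

print(pdh_vcat_net_rate(24, 24, 3.0))   # T1 ESF: 575 bytes / 3 ms = 191.67 kB/s
print(pdh_vcat_net_rate(31, 16, 2.0))   # E1:     495 bytes / 2 ms = 247.5  kB/s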

Delay Compensation
802.3ad Ethernet link aggregation "cheats": each identifiable flow is restricted to one link, so it doesn't help a single high-BW flow.
VCAT is completely general:
it works even with a single flow
VCG members may travel over completely separate paths
so the VCAT mechanism must compensate for differential delay
There is a requirement to compensate for differential delays of up to about half a second.
Compensation must be to the bit level, but since frames have a Frame Alignment Signal the VCAT mechanism only needs to identify individual frames.

VCAT Buffering

Since VCAT components may take different paths, at the egress the members are no longer in the proper temporal relationship.
The VCAT path termination function buffers the members and outputs them in the proper order (relying on POH sequencing); up to 512 ms of differential delay can be tolerated.
VCAT defines a multiframe to enable delay compensation; the length of the multiframe determines the delay that can be accommodated.
The H4 byte in each member's POH contains:
SQ — sequence indicator (identifies the component; the number of bits limits X)
MFI — multiframe indicator (multiframe sequencing used to measure the differential delay)

Multiframes and Superframes


Here is how we compensate for 512 ms of differential delay.
512 ms corresponds to a superframe of 4096 TDM frames (4096 * 0.125 ms = 512 ms).
For HOP SDH VCAT and PDH VCAT (H4 byte or PDH VCAT overhead octet) the basic multiframe is 16 frames, so we need 256 multiframes per superframe (256 * 16 = 4096).
The MultiFrame Indicator is divided into two parts:
MFI1 (4 bits) appears once per frame and counts from 0 to 15 to sequence the multiframe
MFI2 (8 bits) appears once per multiframe and counts from 0 to 255
For LOP SDH (bit 2 of the K4 byte) a 32-bit string is built and a 5-bit MFI is dedicated to sequencing: 32 multiframes of 16 ms give the needed 512 ms.

Link Capacity Adjustment Scheme


LCAS is defined in G.7042 (also numbered Y.1305).
LCAS extends VCAT by allowing dynamic BW changes.
LCAS is a protocol for dynamically adding/removing VCAT members:
hitless BW modification
similar in spirit to the Link Aggregation Control Protocol for Ethernet links
LCAS is not a control plane or management protocol:
it doesn't allocate the members — control protocols are still needed to perform the actual allocation
LCAS is a handshake protocol:
it enables the path ends to negotiate the addition / deletion of members
it guarantees that there will be no loss of data during the change
it can determine that a proposed member is ill suited
it allows automatic removal of a faulty member

LCAS how does it work?


LCAS is unidirectional (for symmetric BW the procedure must be performed twice, once per direction).
LCAS functions can be initiated by the source or by the sink.
LCAS assumes that all VCG members are error-free; LCAS messages are CRC protected.
LCAS messages are sent in advance:
the sink processes messages after differential delay compensation
each message describes the link state at the time of the next message
so the receiver can switch to the new configuration in time
LCAS messages are carried in:
the upper nibble of the H4 byte for HO SONET/SDH
the K4 byte for LO SONET/SDH
the VCAT overhead octet for PDH VCAT
LCAS messages employ redundancy:
messages from source to sink are member specific
messages from sink to source are replicated on all members

LCAS control messages


LCAS adds fields to the basic VCAT ones.
Fields in messages from source to sink:
MFI — MultiFrame Indicator
SQ — SeQuence indicator (member ID inside the VCAT group)
CTRL — ConTRoL (IDLE, being ADDed, NORMal, End of Sequence, Do Not Use)
GID — Group IDentification (identifies the VCAT group)
Fields in messages from sink to source (identical in all members):
MST — Member STatus (1 bit for each VCG member)
RS-Ack — ReSequence Acknowledgement
Fields in both directions:
CRC — Cyclic Redundancy Check
The precise format depends on the VCAT type (H4, K4, PDH).
Note: in the H4 format SQ is 8 bits, so up to 256 VCG members; for PDH SQ is only 4 bits, so up to 16 VCG members.

H4 format
Over the 16-frame multiframe, the LCAS information is multiplexed into one nibble of H4 while the other nibble carries MFI1 (counting 0000 to 1111):

MFI1 (frame)    H4 information nibble
0               MFI2 bits 1-4
1               MFI2 bits 5-8
2               CTRL
3               0 0 0 GID
4-5             reserved (0000)
6               CRC-8 bits 1-4
7               CRC-8 bits 5-8
8               MST bits 1-4
9               MST bits 5-8
10              0 0 0 RS-Ack
11-13           reserved (0000)
14              SQ bits 1-4
15              SQ bits 5-8

H4 Format
CRC-8 (when using K4 it is a CRC-3) covers the previous 14 frames (not synced on the multiframe); the polynomial is x8 + x2 + x + 1.
MST:
each VCG member carries the status of all members, so we need 256 bits of member status
this is done by muxing the MST bits: 8 MST bits per multiframe and 32 multiframes in an MST multiframe
no special sequencing is needed — just MFI2 mod 32
GID:
a single-bit identifier; all members of a VCG carry the same bit
the bit cycles through a 2^15 - 1 LFSR sequence
different VCGs use different phase offsets of the sequence
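The CRC-8 itself is an ordinary shift-register CRC; a bitwise sketch in Python over an arbitrary bit string (the polynomial x^8 + x^2 + x + 1 is from the slide; how the covered bits are gathered from the H4 nibbles is not modelled here):

def crc8_lcas(bits: list[int]) -> int:
    """CRC-8 with generator polynomial x^8 + x^2 + x + 1 (0x07), initial value 0."""
    crc = 0
    for bit in bits:
        msb = (crc >> 7) & 1
        crc = (crc << 1) & 0xFF
        if msb ^ bit:
            crc ^= 0x07       # x^2 + x + 1 (the x^8 term is the implicit shift-out)
    return crc                # the remainder to transmit in the CRC-8 nibbles

control_packet_bits = [1, 0, 1, 1, 0, 0, 1, 0] * 7   # stand-in for the covered field bits
print(f"CRC-8 = {crc8_lcas(control_packet_bits):08b}")
# The receiver runs the same computation and compares it with the received CRC.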

LCAS Adding a Member (1)


When more/less BW is needed, we need to add/remove VCAT members.
Adding/removing VCAT members first requires provisioning (management).
LCAS handles the assignment of member sequence numbers and ensures service is not disrupted.
Example: adding a 4th member to group g.
Initial state — the source sends, per member:
  GID=g SQ=1 CTRL=NORM
  GID=g SQ=2 CTRL=NORM
  GID=g SQ=3 CTRL=EOS
Step 1: the NMS provisions the new member; the source sends CTRL=IDLE for the new member and the sink answers MST=FAIL for it:
  GID=g SQ=1 CTRL=NORM
  GID=g SQ=2 CTRL=NORM
  GID=g SQ=3 CTRL=EOS
  GID=g SQ=FF CTRL=IDLE

LCAS Adding a Member (2)


Step 2: the source sends CTRL=ADD and an SQ for the new member; the sink answers MST=OK for the new member if it has been provisioned, is being received OK, and the sink can compensate for its delay — otherwise it sends MST=FAIL and the source reports this to the NMS:
  GID=g SQ=1 CTRL=NORM
  GID=g SQ=2 CTRL=NORM
  GID=g SQ=3 CTRL=EOS
  GID=g SQ=4 CTRL=ADD
Step 3: the source sends CTRL=EOS for the new member, which starts to carry traffic; the sink sends RS-Ack:
  GID=g SQ=1 CTRL=NORM
  GID=g SQ=2 CTRL=NORM
  GID=g SQ=3 CTRL=NORM
  GID=g SQ=4 CTRL=EOS
Note 1: several new members may be added at once.
Note 2: removing a member is similar — the source puts CTRL=IDLE for the member to be removed and stops using it, and all member sequence numbers must be adjusted.

LCAS Service Preservation


To preserve service integrity, if the sink detects a failure of a VCAT member, LCAS can temporarily remove the member (if the service can tolerate the BW reduction).
Example — initial state:
  GID=g SQ=1 CTRL=NORM
  GID=g SQ=2 CTRL=NORM
  GID=g SQ=3 CTRL=NORM
  GID=g SQ=4 CTRL=EOS
Step 1: the sink sends MST=FAIL for member 2; the source sends CTRL=DNU for it (with special treatment if the failed member is the EOS — renumber so that the EOS is an active member) and ceases to use member 2:
  GID=g SQ=1 CTRL=NORM
  GID=g SQ=2 CTRL=DNU
  GID=g SQ=3 CTRL=NORM
  GID=g SQ=4 CTRL=EOS
Step 2: the sink sends MST=OK indicating the defect is cleared; the source returns CTRL to NORM and starts using the member again.
Note: if the NMS decides to permanently remove the member, proceed as in the previous slide.

Handling Packet Data


Packet over SONET


Currently defined in RFC 2615 (PPP over SONET), which obsoletes RFC 1619.
SONET/SDH can provide a point-to-point, byte-oriented, full-duplex synchronous link.
PPP is ideal for data transport over such a link.
PoS uses PPP in HDLC framing to provide a byte-oriented interface to the SONET/SDH infrastructure.
The POH signal label (C2) indicates PoS as C2=16 (C2=CF if no payload scrambler is used).

PoS Architecture
Protocol stack: IP over PPP over HDLC framing over SONET/SDH.
PoS is based on PPP in HDLC framing.
Since SONET/SDH is byte oriented, byte stuffing is employed.
A special scrambler is used to protect SONET/SDH timing.
PoS operates on IP packets; if the IP is delivered over Ethernet:
the Ethernet is terminated (the frame is removed)
the Ethernet must be reconstituted at the far end
this requires routers at the edges of the SONET/SDH network

PoS Details
The IP packet is encapsulated in PPP (default MTU is 1500 bytes; up to 64,000 bytes are allowed if negotiated by PPP).
The FCS is generated and appended.
PPP in HDLC framing with byte stuffing is applied.
The 43-bit scrambler is run over the SPE.
The byte stream is placed octet-aligned in the SPE (e.g. the 149.760 Mbps of an STM-1).
HDLC frames may cross SPE boundaries.

POS Problems
PoS is BW efficient, but it has its disadvantages:
the BW must be predetermined
HDLC byte stuffing causes BW expansion and nondeterminacy
BW allocation is tightly constrained by SONET/SDH capacities (e.g. GbE requires a full OC-48 pipe)
PoS requires removing the Ethernet headers, so we lose RPR, VLANs, 802.1p, multicasting, etc.
PoS requires IP routers at the network edges.

LAPS

In 2001 the ITU-T introduced its own protocols for transporting packets over SDH:
X.85 — IP over SDH using LAPS
X.86 — Ethernet over LAPS
Both use the ISO HDLC format and are built on the series of ITU LAPx HDLC-based protocols.
They implement connectionless, byte-oriented protocols over SDH.
X.85 is very close to (but not quite the same as) IETF PoS.

GFP Architecture
A new approach, not based on HDLC.
Defined in ITU-T G.7041 (also numbered Y.1303); originally developed in T1X1 to fix ATM limitations.
(Like ATM) it uses HEC-delineated frames instead of HDLC framing.
Protocol stack: clients (Ethernet MAC, IP, HDLC/PPP, others) over a GFP client specific part, over the GFP common part, over SDH, OTN or other transport.
The client may be PDU-oriented (Ethernet MAC, IP) or block-oriented (GbE, Fibre Channel).
GFP frames are octet aligned, contain at most 65,535 bytes, and consist of a header + payload area.
Any idle time between GFP frames is filled with GFP idle frames.

GFP Frame Structure


Frame layout: core header (PLI 2B + cHEC 2B), payload header (4-64B), payload, optional payload FCS (4B).
Every GFP frame has a 4-byte core header:
a 2-byte Payload Length Indicator (PLI); PLI values 0, 1, 2, 3 are reserved for control frames
a 2-byte core Header Error Control (cHEC): a CRC-16 with polynomial X16 + X12 + X5 + 1
the entire core header is XORed with B6AB31E0
Idle GFP frames have PLI=0 and no payload area.
Non-idle GFP frames have at least 4 bytes in the payload area; the payload has its own header.
There are 2 payload modes: GFP-F and GFP-T.
The payload may optionally be protected with a CRC-32 FCS.
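A sketch of generating the core header (Python; the CRC-16 polynomial and the B6AB31E0 XOR mask are from the slide, and the cHEC is assumed here to be the plain CRC of the two PLI bytes with a zero initial value):

def crc16_itu(data: bytes) -> int:
    """CRC-16 with generator x^16 + x^12 + x^5 + 1 (0x1021), initial value 0."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def gfp_core_header(payload_length: int) -> bytes:
    """Build the 4-byte GFP core header: PLI, cHEC, then XOR with B6AB31E0."""
    pli = payload_length.to_bytes(2, "big")
    chec = crc16_itu(pli).to_bytes(2, "big")
    raw = pli + chec
    mask = bytes.fromhex("B6AB31E0")
    return bytes(a ^ b for a, b in zip(raw, mask))

print(gfp_core_header(0).hex())      # an idle frame's core header (PLI = 0)
print(gfp_core_header(1504).hex())   # e.g. a 1500-byte client PDU + 4-byte payload header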

GFP Payload Header


The GFP payload header consists of:
a type field (2B):
  PTI — Payload Type Identifier (3b): PTI=000 for client data, PTI=100 for client management (OAM: dLOS, dLOF)
  PFI — Payload FCS Indicator (1b): PFI=1 means there is a payload FCS
  EXI — Extension Header Identifier (4b)
  UPI — User Payload Identifier (8b): values for Ethernet, IP, PPP, FC, RPR, MPLS, etc.
a type HEC, tHEC (2B, CRC-16)
an extension header (0-60B): either null or a linear extension (used for payload type muxing)
an extension HEC, eHEC (2B, CRC-16)

GFP Modes
GFP-F — frame-mapped GFP: good for PDU-based protocols (Ethernet, IP, MPLS) or HDLC-based ones (PPP); the client PDU is placed in the GFP payload field.
GFP-T — transparent GFP: good for protocols that exploit physical layer capabilities, in particular the 8B/10B line code used in Fibre Channel, GbE, FICON, ESCON, DVB, etc.
Were we to use GFP-F for these, we would lose the 8B/10B control information; GFP-T is transparent to these codes.
Also, GFP-T needn't wait for an entire PDU to be received (which would add delay).
