
10.

Congestion Control in
Data Networks and Internets

Chandra Prakash LPU
Congestion

Dictionary definition
Excessive crowding

Network definition
When one part of the subnet (e.g.
one or more routers in an area)
becomes overloaded, congestion
results.

Introduction
Congestion occurs when the number of packets
transmitted approaches network capacity

Packet-switched networks get congested!

Objective of congestion control:
keep the number of packets that are entering/within
the network below the level at which performance
drops off dramatically
Queuing Theory

Data network is a network of queues


At each node (data network switch, internet router)
there is a queue of packets for each outgoing channel.
Queuing Theory
If the arrival rate at any queue exceeds the transmission rate from
the node, then queue size grows without bound and packet
delay goes to infinity
Rule of Thumb Design Point:
When utilization of the line for which packets are queuing exceeds 80%,
queue length grows at an alarming rate.

ρ = λ·Ts = λ·L/R < 0.8

Generally 80% utilization is critical
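The rule of thumb above can be checked in a few lines; a minimal sketch in Python (function and parameter names are illustrative, not from the slides):

```python
# Utilization rho = lambda * Ts at a single queue, where Ts = L/R is the
# time to transmit one packet of L bits on an R bps link.
def utilization(arrival_rate_pps, packet_len_bits, link_rate_bps):
    ts = packet_len_bits / link_rate_bps      # service time per packet, seconds
    return arrival_rate_pps * ts

# 1000 packets/s of 1000-bit packets on a 2 Mbps link:
rho = utilization(1000, 1000, 2_000_000)
print(rho)         # 0.5 -> below the 80% rule-of-thumb design point
```

At 2000 packets/s the same link would reach ρ = 1.0, the point at which the queue grows without bound.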
Input & Output Queues at a Node
[Figure: packets pass through nodal processing; each outgoing channel
has service time Ts = L/R]
Two buffers at each port:

One to accept the arriving packets

A second to hold packets that are waiting to depart.
Strategies During Congestion
Two Possible Strategies at Node:
Because routers are receiving packets faster than they can
forward them, one of two things must happen:
1. Discard
drop any incoming packet if no buffer space is available;
queued packets can also be discarded to make room for those that are
arriving.
2. Exercise flow control over neighbors
May cause congestion to propagate throughout network
The subnet must prevent additional packets from entering the
congested region until those already present can be
processed.
Factors that Cause Congestion
Packet arrival rate exceeds the outgoing link capacity.

Insufficient memory to store arriving packets

Bursty traffic:
Bursty traffic refers to an uneven pattern of data transmission:
sometimes a very high data rate, at other times very low

Slow processor

Effects of Congestion
Packets arriving are stored at input buffers
Routing decision made
Packet moves to output buffer
Packets queued for output transmitted as fast as possible
Statistical time division multiplexing
If packets arrive too fast to be routed, or to be output, buffers
will fill
Can discard packets
Can use flow control
Can propagate congestion through network

Queue Interaction in Data Network
(delay propagation)
Ideal Performance


Assumes infinite buffers and no variable overhead for packet
transmission or congestion control
Throughput increases with offered load up to full
capacity
Packet delay increases with offered load approaching
infinity at full capacity
Power = throughput / delay, or a measure of the balance
between throughput and delay
Higher throughput results in higher delay
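The power metric can be made concrete with a toy calculation. The (load, throughput, delay) sample points below are invented for illustration; only the ratio throughput/delay comes from the slide:

```python
# Power = normalized throughput / delay. With invented (load, throughput,
# delay) samples, power peaks at a moderate load, not at full capacity,
# because delay grows much faster than throughput near saturation.
samples = [   # (offered load, normalized throughput, delay)
    (0.2, 0.2, 1.2),
    (0.5, 0.5, 2.0),
    (0.8, 0.8, 5.0),
    (1.0, 1.0, 20.0),
]
power = {load: tput / delay for load, tput, delay in samples}
best_load = max(power, key=power.get)
print(best_load)   # 0.5 -> the best balance between throughput and delay
```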
Ideal Network Utilization
[Figure: normalized throughput and delay versus offered load (Ts = L/R);
power shows the relationship between normalized throughput and delay]
Practical Performance
Buffers are finite
Overheads occur in exchanging congestion control
messages

With no congestion control, increased load eventually
causes moderate congestion: throughput increases at
slower rate than load

Further increased load causes packet delays to
increase and eventually throughput to drop to zero
Effects of Congestion
What's happening here?
buffers fill
packets discarded
sources re-transmit
routers generate more
traffic to update paths
good packets resent
delays propagate
Common Congestion Control
Mechanisms
Congestion Control

Backpressure
Request from destination to source to reduce rate

Policing
Measuring and restricting packets as they enter the network

Choke packet


Backpressure
If node becomes congested it can slow down or halt flow of
packets from other nodes
May mean that other nodes have to apply control on incoming
packet rates
Request from destination to source to reduce rate
Propagates back to source
Can restrict to logical connections generating most traffic
Used in connection-oriented networks that allow hop-by-hop congestion
control (e.g. X.25)
Not used in ATM or frame relay
Only recently developed for IP

Congestion Control
Choke packet
A more direct way of telling the source to slow down.
Specific message back to source
A choke packet is a control packet generated at a congested
node and transmitted to restrict traffic flow.
The source, on receiving the choke packet, must reduce its
transmission rate by a certain percentage.
e.g. ICMP source quench
From router or destination
Source cuts back until it no longer receives source quench messages
Sent for every discarded packet, or in anticipation of discard
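A sender's reaction to choke packets might be sketched as follows. The back-off and recovery factors are illustrative assumptions; the slides only say that the source reduces its rate by a certain percentage and cuts back until quench messages stop:

```python
# Hedged sketch of a source reacting to choke packets (e.g. ICMP source
# quench). Cut the rate by a fixed factor per choke packet; creep back up
# while no choke packets arrive. All names and factors are illustrative.
class ChokeAwareSender:
    def __init__(self, rate_pps, cut=0.5, recover=1.1):
        self.rate = rate_pps
        self.max_rate = rate_pps          # never exceed the initial rate
        self.cut, self.recover = cut, recover

    def on_choke_packet(self):
        self.rate *= self.cut             # reduce rate immediately

    def on_quiet_interval(self):          # no choke packets seen lately
        self.rate = min(self.rate * self.recover, self.max_rate)

s = ChokeAwareSender(1000)
s.on_choke_packet()
s.on_choke_packet()
print(s.rate)        # 250.0 after two halvings
```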



Implicit congestion signaling
Transmission delay may increase with congestion
Packet may be discarded
Source can detect these as implicit indications of
congestion
End system responsibility
no action from network
reaction depends on round trip time
Useful on connectionless (datagram) networks
e.g. IP based
(TCP includes congestion and flow control)
Used in frame relay LAPF



Explicit congestion signaling
Network responsibility to alert end systems of growing congestion
End systems take steps to reduce offered load
Backwards
network notifies the source
Congestion avoidance information flows in the opposite direction to the packet
Indicates that packets the user transmits on this logical connection may
encounter congested resources.
Information is transmitted either
by altering bits in a data packet headed for the source to be controlled or
by transmitting separate control packets to the source.
Forwards
Network notifies the destination
Congestion avoidance information flows in the same direction as the packet
Indicates that this packet, on this logical connection, has encountered congested
resources.
Explicit congestion signaling
Direction
Backward
Forward
Categories
Binary
Credit-based
Rate-based
Categories of Explicit Signaling
Binary
A bit set in a packet indicates congestion
On receiving it, the source reduces its traffic flow
Credit based
Similar to token based technique
Indicates how many packets/octets source may send
Common for end to end flow control
Rate based
Supply explicit data rate limit to source over a logical
connection
e.g. ATM

Traffic Shaping

Another method of congestion control is to shape the
traffic before it enters the network.
Traffic shaping controls the rate at which packets are sent
(not just how many). Used in ATM and Integrated Services
networks.
At connection set-up time, the sender and carrier negotiate
a traffic pattern (shape).
Two traffic shaping algorithms are:
Leaky Bucket
Token Bucket

The Leaky Bucket Algorithm
Used to control the rate at which traffic enters the network.
It is implemented as a single-server queue with constant
service time.
If the bucket (buffer) overflows then packets are
discarded.
The Leaky Bucket Algorithm
(a) A leaky bucket with water. (b) A leaky bucket with packets.

Leaky Bucket Algorithm, cont.

The leaky bucket enforces a constant output rate
(average rate) regardless of the burstiness of the input.
Does nothing when input is idle.
The host injects one packet per clock tick onto the
network. This results in a uniform flow of packets,
smoothing out bursts and reducing congestion.
When packets are all the same size (as in ATM cells), one
packet per tick works well. For variable-length packets,
it is better to allow a fixed number of bytes per tick,
e.g. 1024 bytes per tick allows one 1024-byte packet, two
512-byte packets, or four 256-byte packets per tick.
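The byte-counting variant above can be sketched as a small shaper. This is a minimal illustration, not a production implementation; all names are assumptions:

```python
# Minimal leaky-bucket shaper, byte-counting variant: each tick releases at
# most `bytes_per_tick` worth of queued packets; a packet arriving to a full
# buffer is discarded (the bucket "overflows").
from collections import deque

class LeakyBucket:
    def __init__(self, bytes_per_tick, buffer_bytes):
        self.rate = bytes_per_tick
        self.capacity = buffer_bytes
        self.queued = 0                    # bytes currently buffered
        self.q = deque()

    def arrive(self, pkt_bytes):
        if self.queued + pkt_bytes > self.capacity:
            return False                   # bucket overflows: discard packet
        self.q.append(pkt_bytes)
        self.queued += pkt_bytes
        return True

    def tick(self):
        budget, sent = self.rate, []
        while self.q and self.q[0] <= budget:
            pkt = self.q.popleft()
            budget -= pkt
            self.queued -= pkt
            sent.append(pkt)
        return sent                        # packets released this tick

lb = LeakyBucket(bytes_per_tick=1024, buffer_bytes=4096)
for size in (512, 512, 1024, 256):
    lb.arrive(size)
print(lb.tick())     # [512, 512] -> one 1024-byte budget per tick
print(lb.tick())     # [1024]
```

A burst of four packets thus leaves the node spread over three ticks: the smoothing the slide describes.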
Token Bucket Algorithm
In contrast to the LB, the Token Bucket Algorithm,
allows the output rate to vary, depending on the size of
the burst.
In the TB algorithm, the bucket holds tokens. To
transmit a packet, the host must capture and destroy
one token.
Tokens are generated by a clock at the rate of one
token every t sec.
Idle hosts can capture and save up tokens (up to the
max. size of the bucket) in order to send larger bursts
later.
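The token-bucket behaviour above can be sketched as follows (names and the one-token-per-tick granularity are illustrative assumptions):

```python
# Hedged token-bucket sketch: one token is added per tick, capped at the
# bucket size; sending a packet consumes one token. An idle host therefore
# accumulates tokens and can later emit a burst.
class TokenBucket:
    def __init__(self, bucket_size):
        self.capacity = bucket_size
        self.tokens = 0

    def tick(self):
        self.tokens = min(self.tokens + 1, self.capacity)

    def try_send(self):
        if self.tokens >= 1:
            self.tokens -= 1               # capture and destroy one token
            return True
        return False                       # no token: packet must wait

tb = TokenBucket(bucket_size=3)
for _ in range(5):                         # host stays idle for 5 ticks
    tb.tick()
print(tb.tokens)                           # 3 -> savings capped at bucket size
print([tb.try_send() for _ in range(4)])   # [True, True, True, False]
```

Note the contrast with the leaky bucket: here the host can send three packets back-to-back after idling, rather than one per tick.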
The Token Bucket Algorithm
(a) Before. (b) After.
Leaky Bucket vs Token Bucket

LB discards packets; TB does not. TB discards tokens.

With TB, a packet can only be transmitted if there are
enough tokens to cover its length in bytes.

LB sends packets at an average rate. TB allows for
large bursts to be sent faster by speeding up the output.

TB allows saving up tokens (permissions) to send large
bursts. LB does not allow saving.

Traffic Management in Congested
Networks: Some Considerations
Fairness
Congestion effects should be distributed equally to traffic flows
Various flows should suffer equally
Last-in-first-discarded may not be fair
Node can maintain a separate queue for each logical connection or
for each source-destination pair.
With all queues of equal length, discard from the queue with the highest traffic load
Quality of Service (QoS)
Flows treated differently; Differentiation based on application
requirements
Voice, video: delay sensitive, loss insensitive
File transfer, mail: delay insensitive, loss sensitive
Interactive computing: delay and loss sensitive
Traffic Management in Congested
Networks: Some Considerations
Reservations
Traffic contract between user and network (e.g. ATM)
Network agrees to give a defined QoS so long as
the traffic flow is within contract parameters
Policing: excess traffic is either discarded or handled on a
best-effort basis

Congestion Control in Packet
Switched Networks

Send control packet to some or all source nodes
Requires additional traffic during congestion
Rely on routing information
May react too quickly
End to end probe packets
Adds to overhead
Add congestion info to packets as they cross nodes
Either backwards or forwards
Frame Relay Congestion Control
I.370 defines the objectives for frame relay congestion control
to be the following:

Minimize frame discard
Maintain QoS (per-connection bandwidth)
Minimize monopolization of network
Simple to implement, little overhead
Minimal additional network traffic
Resources distributed fairly
Limit spread of congestion
Operate effectively regardless of flow
Have minimal impact on other systems in the network
Minimize variance in QoS
Discuss
Congestion control is difficult for a frame relay network.
Why ???

Limited tools available to frame handlers
Frame Relay protocol has been streamlined to maximize throughput
and efficiency.
A consequence of this is that a frame handler cannot control the
flow of frames coming from a subscriber or an adjacent frame
handler using a typical sliding-window flow control protocol as
found in HDLC.
Discuss
Congestion control is the joint responsibility of the
network and the end users.

The network (i.e., the collection of frame handlers) is in the
best position to monitor the degree of congestion, while
the end users are in the best position to control congestion
by limiting the flow of traffic.
Frame Relay Techniques
Frame Relay Traffic Rate
Management Parameters
Committed Information Rate (CIR)
Average data rate in bits/second that the network agrees to
support for a connection
Data Rate of User Access Channel (Access Rate)
Fixed rate link between user and network (for network access)
Committed Burst Size (Bc)
Maximum amount of data over an interval that the network agrees to transfer
Excess Burst Size (Be)
Maximum amount of data, above Bc, over an interval that the
network will attempt to transfer
Committed Information Rate
(CIR) Operation
Σi CIRi,j ≤ AccessRatej

AccessRate: maximum line speed of the connection to the Frame Relay
network (i.e., peak data rate)
CIR: average data rate (bps) committed to the user by the Frame
Relay network
Bc + Be: maximum amount of data over a time period allowed for this
connection by the Frame Relay network
Current rate: the rate at which the user is currently sending over
the channel
Discuss
CIR provides a way of discriminating among frames in
determining which frames to discard in the face of
congestion.

Discrimination is indicated by using Discard Eligibility (DE) bit
in LAPF frame.

If user data rate ≤ CIR, the DE bit is not set.
If user data rate > CIR, the DE bit is set.
Such frames may get through, or may be discarded if congestion is
encountered. A maximum rate is defined, such that any frames
above the maximum are discarded at the entry frame handler.
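The three-way decision at the entry frame handler can be sketched as a small function. This is a simplified model of the behaviour described above; the function name, parameters, and example rates are illustrative:

```python
# Per-frame admission decision at the entry frame handler, following the
# slides: within CIR -> forward with DE clear; between CIR and the maximum
# rate -> forward with DE set (discard-eligible under congestion); above
# the maximum rate -> discard at entry.
def classify_frame(measured_rate_bps, cir_bps, max_rate_bps):
    if measured_rate_bps <= cir_bps:
        return "forward"               # DE bit not set
    if measured_rate_bps <= max_rate_bps:
        return "forward, DE=1"         # may be discarded if congestion occurs
    return "discard"

print(classify_frame(48_000, 64_000, 128_000))    # forward
print(classify_frame(96_000, 64_000, 128_000))    # forward, DE=1
print(classify_frame(256_000, 64_000, 128_000))   # discard
```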
Frame Relay Traffic Rate
Management Parameters
[Figure: cumulative bits transmitted over measurement interval T;
the line of slope CIR (bps) separates frames sent with DE clear from
frames marked DE, up to Bc; traffic beyond the Max. Rate line is
discarded]
Relationship of Congestion
Parameters
From ITU-T I.370
Note that T = Bc / CIR
Congestion Avoidance with
Explicit Signaling
Two general strategies considered:
Hypothesis 1: Congestion always occurs slowly, almost always
at egress nodes
forward explicit congestion avoidance
Hypothesis 2: Congestion grows very quickly in internal nodes
and requires quick action
backward explicit congestion avoidance

Frame Relay - 2 Bits for Explicit
Signaling
Forward Explicit Congestion Notification
For traffic in same direction as received frame
This frame has encountered congestion

Backward Explicit Congestion Notification
For traffic in opposite direction of received frame
Frames transmitted may encounter congestion
Congestion Control: BECN/FECN
Explicit Signaling Response

Network Response
each frame handler monitors its queuing behavior
and takes action
use FECN/BECN bits
some/all connections notified of congestion
User (end-system) Response
receipt of BECN/FECN bits in frame
BECN at sender: reduce transmission rate
FECN at receiver: notify peer (via LAPF or higher
layer) to restrict flow
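The two user responses listed above can be sketched together. The policy values (the 25% rate cut, the flag name) are illustrative assumptions; the slides only say a sender reduces its rate on BECN and a receiver notifies its peer on FECN:

```python
# Sketch of the end-system responses: BECN seen at the sender -> cut the
# transmission rate; FECN seen at the receiver -> record that the peer
# should be told (e.g. via LAPF or a higher layer) to restrict its flow.
def handle_frame_bits(role, becn, fecn, state):
    if role == "sender" and becn:
        state["rate"] *= 0.75              # reduce transmission rate
    if role == "receiver" and fecn:
        state["notify_peer"] = True        # ask peer to slow down
    return state

st = handle_frame_bits("sender", becn=True, fecn=False,
                       state={"rate": 1000.0})
print(st["rate"])    # 750.0
```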