
A Comparison of Active Queue Management Algorithms Using OPNET Modeler

Chengyu Zhu, Oliver W. W. Yang


School of Information Technology and Engineering
University of Ottawa
Ottawa, Ontario, Canada
Email: {czhu, yang}@site.uottawa.ca

James Aweya, Michel Ouellette, Delfin Y. Montuno
Nortel Networks
Ottawa, Ontario, Canada
Email: {aweyaj, ouellett, delfin}@nortelnetworks.com

Abstract—A number of active queue management algorithms for IP routers, such as RED (Random Early Detection), SRED (Stabilized RED), and BLUE, have been proposed in the past few years. This paper compares them with the DRED (Dynamic-RED) algorithm. The evaluation is done using the OPNET Modeler, which provides a convenient and easy-to-use platform for simulating large-scale networks. The performance metrics under investigation are the queue size, the drop probability, and the packet loss rate. We found that the DRED algorithm stabilizes the queue size very well while keeping the link utilization high and helping to control the packet loss rate. The benefits of stabilized queues in a network are high resource utilization, bounded delays, more certain buffer provisioning, and network performance that is independent of the traffic intensity and the number of TCP connections.

Keywords: TCP, Congestion Control, Active Queue Management, Random Early Detection, Control Theory

1. Introduction
Over the last decade, TCP (Transmission Control Protocol) congestion control [1] has been used to adaptively control the rates of individual connections sharing IP (Internet Protocol) network links. However, TCP congestion control over current drop-tail networks has one serious drawback: a TCP source reduces its transmission rate only after detecting packet loss. Therefore, even with techniques such as the congestion avoidance, slow start, fast retransmit, and fast recovery mechanisms [9], the performance of TCP congestion control over drop-tail networks is inadequate in a heavily loaded network. Active queue management has been proposed as a solution for preventing packet loss due to buffer overflow. RED (Random Early Detection) [2], an active queue management algorithm, was recommended by the IETF for deployment in IP routers/networks [3]. The basic idea behind an active queue management algorithm is to convey congestion notification early enough to the senders, so that the senders can reduce their transmission rates before the queue overflows and sustained packet loss occurs. It is now widely accepted that a RED-controlled queue performs better than a drop-tail queue. However, the inherent design of RED makes it difficult to parameterize RED queues to give good performance under different network scenarios. Several algorithms, such as SRED [4] and BLUE [5], discard packets with a load-dependent probability whenever the queue buffer in a router appears to be congested.

In this paper we describe the Dynamic-RED (DRED) active queue management algorithm proposed in [7]. DRED is a new active queue management technique that uses a simple feedback control approach to randomly discard packets.¹ We also compare the performance of DRED with those of RED, BLUE, and SRED in terms of queue size, drop probability, and packet loss rate. To the best of our knowledge, such a comparison does not exist in the literature.

This paper is organized as follows. Section 2 gives the details of the DRED algorithm. Section 3 summarizes the RED, BLUE, and SRED algorithms so that readers can appreciate the comparison. Section 4 describes the OPNET models used by all four algorithms. Section 5 compares their performance. Conclusions are provided in Section 6.

¹ Although this paper focuses on packet discarding or dropping as a means for congestion notification to the TCP sources, the discussion equally applies to packet marking (ECN) [6].

2. The Dynamic-RED (DRED) Algorithm


In this section we describe the concept of Dynamic-Random
Early Detection (DRED) in the context of control theory.
Figure 1 shows a block diagram of the closed-loop feedback
control system.
Figure 1. Block diagram of the closed-loop feedback control system: the reference input T is compared with the feedback signal q to form the actuating (error) signal e = T − q; the controller produces the control signal (drop probability) p_d, which acts on the process, subject to disturbances (perturbations) d, to yield the controlled output q.

The objective of the DRED algorithm is to stabilize the actual queue size q(n) at a target queue size T, a threshold value independent of the network traffic load. The actual queue size is sampled every Δt units of time and fed back to produce an error signal e(n) = q(n) − T. This error signal is then used in the DRED controller to adapt the drop probability p_d, so that e(n) is kept as small as possible. The basic strategy of the DRED controller is to pass the error through a discrete-time first-order low-pass filter with filter gain β to obtain a filtered error signal ê(n), and then to adjust the drop probability of DRED using ê(n), where

$$\hat{e}(n) = (1-\beta)\,\hat{e}(n-1) + \beta\,e(n). \qquad (1)$$
The drop probability of DRED can then be expressed as

$$p_d(n) = \begin{cases} 1 & \text{if } p_d(n-1) + \alpha\,\hat{e}(n)/B > 1 \\ p_d(n-1) + \alpha\,\hat{e}(n)/B & \text{if } 0 \le p_d(n-1) + \alpha\,\hat{e}(n)/B \le 1 \\ 0 & \text{if } p_d(n-1) + \alpha\,\hat{e}(n)/B < 0 \end{cases} \qquad (2)$$

where α is a control gain and B is the buffer size.
Figure 2. Drop Probability Computations. The basic block of the flowchart performs the following steps: initialize the timer to Δt, set n = 0, and set p_d(n) = ê(n) = 0; then, each time the timer expires, reset the timer to Δt, increment n, sample the current queue size q(n), compute the current error e(n) = q(n) − T, compute the filtered error ê(n) = (1 − β)ê(n − 1) + βe(n) if filtering is desired (otherwise ê(n) = e(n)), and compute the current drop probability p_d(n) = min{max{p_d(n − 1) + α ê(n)/B, 0}, 1}.


Table 1. Control Parameters in the DRED Algorithm

| Parameter | Function | Recommended Value |
|---|---|---|
| Sampling interval Δt | Time interval for taking a measurement and applying a computed control. | 10 packet transmission times, or any suitable value |
| Control gain α | Controls the reaction and stability of the control system. | 0.00005 |
| Filter gain β | Controls the reaction speed of the filter. | 0.002 |
| Control target T | Decides the buffer utilization level and average queuing delay. | T = B/2 |
| Buffer size B | Buffer size in the router. | 1 or 2 times the bandwidth-delay product |
| No-drop threshold L | Keeps link utilization high. | L = 0.9T |

In order to keep resource utilization high, DRED does not drop packets until q(n) is greater than the no-drop threshold L, which is set to L = 0.9T in this study. Figure 2 shows the flowchart of the drop probability computations. Table 1 lists all the parameters that affect the control performance, along with their recommended values.
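For concreteness, the following Python sketch implements the computations of Figure 2 with the recommended parameters of Table 1. It is our own illustration of the published algorithm, not the authors' OPNET code; the class and method names are hypothetical.

```python
import random

class DRED:
    """Sketch of the DRED drop-probability computation (Figure 2, eqs. (1)-(2))."""

    def __init__(self, buffer_size, alpha=0.00005, beta=0.002):
        self.B = buffer_size          # buffer size B
        self.T = buffer_size / 2      # control target T = B/2
        self.L = 0.9 * self.T         # no-drop threshold L = 0.9T
        self.alpha = alpha            # control gain
        self.beta = beta              # filter gain
        self.pd = 0.0                 # drop probability p_d(n)
        self.e_hat = 0.0              # filtered error signal

    def on_timer(self, q):
        """Called every sampling interval Dt with the current queue size q(n)."""
        e = q - self.T                                              # e(n) = q(n) - T
        self.e_hat = (1 - self.beta) * self.e_hat + self.beta * e  # eq. (1)
        # eq. (2): adjust the drop probability and clamp it to [0, 1]
        self.pd = min(max(self.pd + self.alpha * self.e_hat / self.B, 0.0), 1.0)

    def should_drop(self, q):
        """Per-packet decision: no drops while q(n) is below the no-drop threshold L."""
        if q <= self.L:
            return False
        return random.random() < self.pd
```

A caller would invoke on_timer() every Δt and should_drop() on every packet arrival; note that the control computation runs at the sampling rate, not per packet, which is part of what makes DRED cheap to implement.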
3. Related Work
In this section, we give a brief overview of the other three
active queue management algorithms, RED, BLUE, and
SRED. These will be used for the performance comparison.
3.1 RED (Random Early Detection) [2]
A router running the RED algorithm detects congestion and measures the traffic load by using the average queue size avg. This is calculated with an exponentially weighted moving average filter, expressed as

$$avg \leftarrow (1 - w_q)\,avg + w_q\,q \qquad (3)$$

where w_q is the queue weight.
When the average queue size is smaller than the minimum threshold min_th, no packets are dropped. Once the average queue size exceeds the minimum threshold, the router considers the network congested and randomly drops arriving packets with a given drop probability. The probability that a packet arriving at the RED queue is dropped depends on the average queue length, the time elapsed since the last packet was dropped, and the maximum drop probability parameter max_p. The drop probability P_a is computed as

$$P_a = \frac{P_b}{1 - count \cdot P_b} \qquad (4)$$

where $P_b = max_p \cdot \frac{avg - min_{th}}{max_{th} - min_{th}}$, max_p is the maximum value for P_b, and count is a variable that keeps track of the number of packets that have been forwarded since the last drop. If the average queue size is larger than the maximum threshold max_th, all arriving packets are dropped.
It is shown in [8] that the average queue length avg is related to the number of active connections N in the system as follows:

$$avg \approx 0.91\, N^{2/3} \left( max_{th} / max_p \right)^{1/3} \qquad (5)$$

For the RED algorithm, this equation implies that avg increases with N until max_th is reached, and there is always an N at which max_th will be exceeded. Since most existing routers operate with a limited amount of buffer, max_th is usually small and can easily be exceeded even with small N. Dropping all incoming packets may then result in global synchronization, which is usually followed by a sustained period of low link utilization.
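A minimal sketch of the computations in equations (3) and (4), using the default values listed later in Table 2, might look as follows. This is a simplified illustration, not the full algorithm of [2]; in particular, the handling of the count variable is reduced to its essentials.

```python
import random

class RED:
    """Simplified sketch of RED's average-queue and drop-probability computations."""

    def __init__(self, min_th=118, max_th=352, max_p=0.1, w_q=0.002):
        self.min_th, self.max_th = min_th, max_th
        self.max_p, self.w_q = max_p, w_q
        self.avg = 0.0      # EWMA of the queue size
        self.count = 0      # packets forwarded since the last drop

    def on_arrival(self, q):
        """Return True if the arriving packet should be dropped, given queue size q."""
        self.avg = (1 - self.w_q) * self.avg + self.w_q * q           # eq. (3)
        if self.avg < self.min_th:                                    # below min_th: no drops
            self.count = 0
            return False
        if self.avg >= self.max_th:                                   # above max_th: drop all
            self.count = 0
            return True
        p_b = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        p_a = 1.0 if self.count * p_b >= 1 else p_b / (1 - self.count * p_b)  # eq. (4)
        if random.random() < p_a:
            self.count = 0
            return True
        self.count += 1
        return False
```

Unlike DRED, every quantity here is recomputed on every packet arrival, and the drop decision depends on the average rather than the instantaneous queue size.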


3.2 BLUE [5]


Instead of calculating the average queue size, BLUE uses buffer overflow and link-idle events to manage congestion. If the queue size keeps exceeding a certain value for a given period of time, called the freeze time, the drop probability is increased by a constant value d1. On the other hand, when the link remains idle for the freeze time, the drop probability is decreased by a constant value d2. The BLUE algorithm can be approximated by a closed-loop negative feedback system in which the control variable is increased when the process output exceeds a threshold value and decreased when the process output falls below the threshold value, as shown in Figure 3. Let e be the error signal, defined as the difference between the control target and the process output (the actual queue size). Then the following control law can be defined in discrete time:

$$p(n) = p(n-1) - \delta \cdot f(e(n)) \qquad (6)$$

where the drop probability updates are limited to ±δ, which can be implemented with a relay (see the control system in Figure 3). The drop probability of BLUE can then be expressed as

$$p(n) = p(n-1) + \delta \cdot \mathrm{sgn}\!\left(\frac{q(n)-T}{T}\right) \qquad (7)$$

where T is the control target and δ corresponds to the adjustment parameters d1 and d2. If δ is set to the drop probability adjustment parameter d1 and the control target is set to B/2, the drop probability of BLUE can be expressed as in equation (7). This type of control leads to a system whose process output oscillates [10]. Thus, the queue size in a router can theoretically be unstable with the BLUE algorithm.
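The event-driven nature of BLUE is easy to see in code. The following sketch uses the d1, d2, initial-probability, and freeze-time values from Table 2; it is our own illustration under the assumption that the freeze time is expressed in the same time units as the event timestamps, not the authors' implementation.

```python
class BLUE:
    """Sketch of BLUE's event-driven drop-probability updates (Section 3.2)."""

    def __init__(self, d1=0.00025, d2=0.000025, freeze_time=0.01, p0=0.05):
        self.d1, self.d2 = d1, d2           # increase / decrease step sizes
        self.freeze_time = freeze_time      # minimum time between two updates
        self.p = p0                         # drop probability
        self.last_update = -freeze_time     # allow an update at time 0

    def on_buffer_overflow(self, now):
        """Queue exceeded its limit: raise the drop probability by d1."""
        if now - self.last_update >= self.freeze_time:
            self.p = min(self.p + self.d1, 1.0)
            self.last_update = now

    def on_link_idle(self, now):
        """Link went idle: lower the drop probability by d2."""
        if now - self.last_update >= self.freeze_time:
            self.p = max(self.p - self.d2, 0.0)
            self.last_update = now
```

Because p only moves in fixed steps of d1 or d2, separated by at least one freeze time, the update behaves like the relay of equations (6)-(7), which is precisely why the queue size can oscillate.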
Figure 3. A feedback control system with a relay: the actuating (error) signal e = T − q drives a relay with output ±1, producing the control signal (manipulating variable) p, which acts on the process, subject to disturbances d, to yield the controlled output q.

3.3 SRED (Stabilized RED) [4]


The SRED algorithm has its own features for managing the queue size. First, SRED estimates the number of active connections (or flows), i.e. the traffic load, in order to adjust the drop probability. The traffic load is estimated by first initializing the Hit parameter to 0 and creating an empty zombie list (a zombie can be viewed as a container holding one source and destination address taken from a packet). For each packet arrival, the packet's source and destination addresses are deposited in the list. Once the list is full, a random zombie is picked from the list for each subsequent packet arrival, and its content is compared with the source and destination of the new packet. If there is a match, Hit is set to one. Otherwise, Hit is set to zero, and with a certain probability p the content of this zombie may be replaced by the source and destination of the new packet. The Hit frequency P(t) is updated after each packet arrival and can be estimated using an exponentially weighted moving average filter:

$$P(t) = (1-\alpha)\,P(t-1) + \alpha\,\mathrm{Hit}(t) \qquad (8)$$

where α = p/M, p is the probability of updating the zombie list when Hit is zero, and M is the number of zombies in the list. It was shown that P(t)⁻¹ is a good estimate of the number of active connections.

Second, the SRED algorithm can stabilize the queue size at a level independent of the number of active connections. The basic drop probability P_sred of SRED is related to the queue size q as follows:

$$P_{sred}(q) = \begin{cases} P_{max} & \text{if } B/3 \le q < B \\ P_{max}/4 & \text{if } B/6 \le q < B/3 \\ 0 & \text{if } q < B/6 \end{cases} \qquad (9)$$

where P_max = 0.15. The full SRED drop probability P_zap combines the Hit frequency, the basic drop probability, and the Hit value:

$$P_{zap} = P_{sred}(q) \cdot \min\!\left(1, \frac{1}{\left(256\, P(t)\right)^2}\right) \cdot \left(1 + \frac{\mathrm{Hit}(t)}{P(t)}\right) \qquad (10)$$
Figure 4. OPNET Network Model Bottleneck Configuration

4. OPNET Models and Network Configuration


In this section, we describe the network configuration and the OPNET simulation model. Figure 4 shows the OPNET network model, which represents a simple bottleneck network configuration with two routers and a number of subnet nodes. Each subnet contains a number of TCP sources (e.g., 100 TCP sources per subnet). This configuration can represent the interconnection of LANs to WANs, or dial-up users accessing the network through WANs, as in the case of an ISP network.

Figure 5. OPNET Process Model of the IP Router

Figure 6. OPNET Process Model of a TCP Source

Figure 5 shows the process model implemented in the routers rt-1 and rt-2. The process model of the router, which is an extension of the M/M/1 queue model, consists of five main states: the init state, the arrival state, the svc_start state, the svc_compl state, and the idle state. The init state initializes the parameters for the simulation. In the arrival state, the DRED algorithm shown in Figure 2 is implemented instead of the drop-tail method of the acb_fifo model. The packet service process and the sending-out process are implemented in the svc_start state and the svc_compl state, respectively. The model transits into the idle state automatically when no processing is to be performed. The feedback control of Figure 1, along with the algorithm in Figure 2, is implemented in the DRED state, which calculates the DRED error signal and the DRED drop probability. The get_stat1 and get_stat2 states collect statistics at certain intervals.
The other three active queue management algorithms, RED, BLUE, and SRED, are also implemented in the arrival state. The drop probability of the BLUE algorithm is calculated in the buffer_full state and the link_idle state. The drop probability of the RED algorithm is computed in the arrival state. For the SRED algorithm, the Hit frequency and the drop probability are calculated in the arrival state. At the beginning of each simulation, the active queue management algorithm can be selected from the simulation tool.
Figure 6 shows the process model of a TCP source in the network model src_subnet_n. The process model implements the TCP-Reno version (i.e., with fast retransmit and fast recovery). The init state initializes the source parameters for the simulation. The end_tx state sends out the TCP packets. The timer state calculates the timeout period. The ack state handles acknowledgements received from the destination. The model transits into the wait state automatically when no processing is to be performed. The st_21 state collects statistics at specified intervals, and the st_18 state collects statistics at the end of a simulation.
5. Performance Evaluation
In this section, we evaluate the performance of DRED and compare it with those of RED, BLUE, and SRED. The TCP sources are based on a TCP-Reno implementation; the Reno version uses the fast-retransmit and fast-recovery mechanisms. The TCP connections are modeled as greedy FTP connections; that is, they always have data to send as long as their congestion windows permit. The maximum segment size (MSS) for TCP is set to 536 bytes. The receiver's advertised window size is set sufficiently large that TCP connections are not constrained at the destination. The ACK-every-segment strategy is used at the TCP destinations. The TCP timer granularity (tick) is set to 500 msec, and the minimum retransmission timeout is set to two ticks.
Because of the different design approaches of the algorithms, it is difficult to compare them fairly. To be as fair as possible, we set most of the parameter values in each algorithm to the recommended values from the original papers [2,4,5,7], except the d1 and d2 values in the BLUE algorithm and the control target T in DRED (which is set between the minimum and maximum thresholds of RED). Simulations were also conducted to select a suitable buffer size for each algorithm. After many simulation runs, we settled on a buffer size of 586 packets for RED, 586 packets for DRED, 450 packets for BLUE, and 860 packets for SRED. Note that the buffer size is approximately equal to one bandwidth-delay product for RED and DRED.
Table 2. Default Values of Parameters

| Algorithm | Parameter | Default Value |
|---|---|---|
| RED | Minimum threshold (min_th) | 118 packets |
| RED | Maximum threshold (max_th) | 352 packets |
| RED | Maximum value for P_b (max_p) | 0.1 |
| RED | Queue weight (w_q) | 0.002 |
| RED | Buffer size | 586 packets |
| DRED | Sampling interval (Δt) | 10 packet transmission times |
| DRED | Control gain (α) | 0.00005 |
| DRED | Filter gain (β) | 0.002 |
| DRED | Control target (T) | 293 packets |
| DRED | Buffer size (B) | 586 packets |
| DRED | No-drop threshold (L) | 264 packets |
| DRED | Initial drop probability | 0.05 |
| BLUE | Initial drop probability | 0.05 |
| BLUE | Freeze time period | 0.01 |
| BLUE | Increase drop probability (d1) | 0.00025 |
| BLUE | Decrease drop probability (d2) | 0.000025 |
| BLUE | Buffer size (B) | 450 packets |
| SRED | Number of zombies in the zombie list (M) | 1000 |
| SRED | Maximum drop probability (P_max) | 0.15 |
| SRED | Refresh probability to update the zombie list (p) | 0.25 |
| SRED | Buffer size (B) | 860 packets |

The queue size, drop probability, and loss rate are plotted at 10 msec intervals. We ran the simulations on a PIII-800 NT workstation; it takes about 15 minutes to complete a simulation of 100 seconds of simulated time.

Figure 4 shows a network configuration with a total of 1000 TCP sources grouped into 10 subnets, with 100 sources per subnet. In this symmetric system, the interconnection between the routers is a T3 (45 Mbps) link, and the remaining links have an equal data rate so that there is no other bottleneck. All links have a propagation delay of 10 msec, and the round-trip time (RTT) of the network is 60 msec. We used this model to run simulations with different numbers of sources and to compare the performance of all four active queue management algorithms. Due to lack of space, we present simulation results only for 1000 sources. The default values for each algorithm are given in Table 2.
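As a quick check on the buffer sizing above, assume each packet occupies 576 bytes on the wire (the 536-byte MSS plus 40 bytes of TCP/IP headers, an assumption on our part). One bandwidth-delay product of the T3 bottleneck with the 60 msec RTT then works out to

$$\text{BDP} = \frac{45 \times 10^{6}\ \text{bit/s} \times 0.060\ \text{s}}{8\ \text{bit/byte} \times 576\ \text{byte/packet}} \approx 586\ \text{packets},$$

which matches the 586-packet buffers used for RED and DRED.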
Figure 7 shows the queue size of each algorithm with 1000 TCP connections. It can be seen that the queue sizes are unstable in both the RED and BLUE algorithms. This is due to the design principles of the two algorithms. For RED, the queue size oscillates violently around the thresholds. For RED and BLUE, the queue size shows periods of buffer underflow and overflow, and the queue size increases/decreases as the number of connections increases/decreases (load-dependent behavior). Like SRED, the queue size of DRED is stable around the target buffer occupancy, but SRED needs a larger buffer than DRED to achieve the same performance. Both DRED and SRED are load-independent.
Figure 8 shows the drop probability of each algorithm with 1000 TCP connections. The drop probability of RED is the highest of all four algorithms. The drop probability of DRED adapts faster than those of BLUE and SRED. BLUE's drop probability does not react fast enough, leading to periods of buffer overflow and underflow.

Figure 9 shows the packet loss rate of each algorithm with 1000 TCP connections. Among them, the packet loss rate of RED is the highest. It is interesting to note that the drop probability of DRED appears to be a good indicator of the real packet loss rate.

Figure 10 shows the histogram of the queue size for each algorithm. We can clearly see that the queue size stabilizes around the control target for both the DRED and SRED algorithms. The RED and BLUE algorithms have a hard time stabilizing the queue size.
6. Concluding Remarks
From the simulation results, we can clearly see that the RED algorithm does not perform as well as the other congestion control algorithms in a heavily loaded network. It is difficult both to stabilize the RED queue size and to parameterize a RED queue to give good performance under different network scenarios and over a wide range of load levels. Like RED, BLUE has a hard time stabilizing the queue size. It took some fine-tuning of BLUE to obtain the results presented here, and even with fine-tuning, BLUE does not respond well when the traffic load changes: it reacts slowly, leading to periods of buffer overflow and underflow. In contrast, DRED and SRED both stabilize the queue size very well, resulting in a more predictable packet delay inside the network. However, the drop probability of SRED is not as smooth as that of DRED; it can randomly reach 100%, causing a higher drop probability and packet loss rate than DRED. In addition, DRED is much simpler to implement than SRED, since it does not require any per-flow accounting mechanism.

There remain many simulation scenarios under which the four algorithms can be evaluated and compared, e.g., different network configurations, short-lived and long-lived flows, and other traffic variations found in the Internet. We hope to present these in another paper as soon as the results are available.
References
[1] V. Jacobson, "Congestion Avoidance and Control," Proc. ACM SIGCOMM '88, Aug. 1988, pp. 314-329.
[2] S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance," IEEE/ACM Trans. Networking, Vol. 1, No. 4, Aug. 1993, pp. 397-413.
[3] B. Braden, D. Clark, J. Crowcroft, B. Davie, D. Estrin, S. Floyd, V. Jacobson, G. Minshall, C. Partridge, L. Peterson, K. K. Ramakrishnan, S. Shenker, J. Wroclawski, and L. Zhang, "Recommendations on Queue Management and Congestion Avoidance in the Internet," IETF RFC 2309, Apr. 1998.
[4] T. J. Ott, T. V. Lakshman, and L. H. Wong, "SRED: Stabilized RED," Proc. IEEE INFOCOM '99, New York, NY, Mar. 21-25, 1999, pp. 1346-1355.
[5] W. Feng, D. D. Kandlur, D. Saha, and K. G. Shin, "BLUE: A New Class of Active Queue Management Algorithms," Technical Report CSE-TR-387-99, Dept. of EECS, University of Michigan, Apr. 1999.
[6] S. Floyd, "TCP and Explicit Congestion Notification," ACM Computer Communication Review, Vol. 24, No. 5, Oct. 1994, pp. 10-23.
[7] J. Aweya, M. Ouellette, and D. Y. Montuno, "A Control Theoretic Approach to Active Queue Management," Computer Networks (Elsevier Science), Vol. 36, Issue 2-3, July 2001, pp. 203-235.
[8] R. Morris, "Scalable TCP Congestion Control," Proc. IEEE INFOCOM 2000, pp. 1176-1183.
[9] W. R. Stevens, "TCP Slow Start, Congestion Avoidance, Fast Retransmit, and Fast Recovery Algorithms," IETF RFC 2001, Jan. 1997.
[10] K. Astrom and T. Hagglund, PID Controllers: Theory, Design, and Tuning, Instrument Society of America, 1995.

Figure 7. Queue size for 1000 TCP connections: (a) RED has a hard time maintaining the queue size between the two thresholds min_th and max_th; (b) DRED maintains the queue size around the control target T; (c) like RED, BLUE suffers from queue-size oscillations (in this scenario); (d) SRED effectively controls the queue size (between B/6 and B/3) at the expense of a larger buffer size.

Figure 8. Drop probability for 1000 TCP connections: (a) RED; (b) DRED continuously adjusts its drop probability to reflect any change in the traffic load; (c) compared to DRED, BLUE's drop probability is less adaptive (due to its design) and reacts more slowly to traffic-load changes, leading to larger queue deviations; (d) SRED's drop probability levels correspond to P_max = 0.15 and P_max/4 = 0.0375.

Figure 9. Packet loss rate for 1000 TCP connections: (a) RED; (b) DRED; (c) BLUE; (d) SRED.

Figure 10. Histogram of queue size: (a) RED; (b) DRED stabilizes the queue size around the control target T; (c) like RED, BLUE has a hard time keeping the queue size away from buffer underflow and overflow; (d) like DRED, SRED is effective at stabilizing the queue size.
