
TCP over 2.5G and 3G wireless networks


Dongmei Zhang, Runtong Zhang, Zhigang Kan, Renaud Cuny, Jussi Ruutu, Jian Ma
NOKIA Research Center

Email: {Dongmei Zhang, Runtong Zhang, Zhigang Kan, Renaud Cuny, Jussi Ruutu, Jian Ma}@nokia.com

Abstract—Transmission Control Protocol (TCP), standardized by the IETF, is one of the most widely used transport layer protocols in the Internet.
Recently, much development and deployment activity has centered around GPRS, UMTS and IMT-2000, also referred to as 2.5G/3G
wireless networks. However, TCP was not designed with wireless networks in mind; in particular, its flow control features can
perform less than optimally over wireless interfaces. A number of TCP optimization techniques have been studied to enhance
TCP performance in various wireless environments. This paper proposes a profile of such techniques that is particularly effective for
2.5G/3G wireless networks.

1. INTRODUCTION

Transmission Control Protocol (TCP) is one of the most significant protocols in the current Internet. According to
some estimates, over 90% of Internet traffic is transported over TCP. For example, web browsing, e-mail, and
FTP run over TCP.

The mobile Internet will support existing applications and, consequently, also the TCP protocol. However, the original TCP
protocol [1] dates back to 1981, when wireless networks did not have the position they have nowadays. As
a consequence, TCP contains certain features that are not well suited to the special characteristics of wireless
networks.

In particular, the heart of TCP, namely its flow control and retransmission mechanisms, may cause problems over
wireless interfaces. These problems originate mainly from the fact that basic TCP assumes that all packet losses are due
to network congestion, not bit errors. When this assumption is combined with the coarse flow control scheme of
TCP, the performance of TCP transmissions over wireless networks can be severely degraded.

Fortunately, a number of TCP optimization methods exist that can be used to improve the situation. This
paper first introduces the basic features of the TCP protocol. Next, several TCP optimization methods are introduced
and their operation and performance are studied. Finally, conclusions are drawn about the use of TCP optimization
methods in wireless networks.

2. TCP FLOW CONTROL

TCP provides a connection-oriented, reliable, byte stream service. There are several mechanisms that guarantee reliability,
among which the acknowledgment and the advertised window are two important concepts. If the TCP sender does not receive
an acknowledgment (ACK) for a given segment within a certain time, the segment is retransmitted. Through the
advertised window, the receiving TCP informs the sender of how much buffer space the receiver has, which
prevents a fast host from overrunning the buffers of a slower host.

Besides the above simple flow control, TCP also supports four intertwined congestion control algorithms: slow
start, congestion avoidance, fast retransmit and fast recovery.

Slow start operates on the observation that the rate at which new packets should be injected into the network is the rate at
which acknowledgments are returned by the other end. Slow start adds another window to the sender's TCP: the
congestion window. By setting the congestion window, slow start adjusts the sending rate of the TCP sender.
When severe congestion is detected (a retransmission timeout), the congestion window is reduced to one segment,
which decreases the sending rate immediately; congestion avoidance then governs the subsequent, more gradual growth of the window.

The fast retransmit algorithm is based on the idea that when the third duplicate acknowledgment is received, TCP
assumes that a segment has been lost and retransmits only that one segment, without waiting for a retransmission
timer to expire. Then the fast recovery algorithm, rather than slow start, is triggered. Fast recovery does not
reduce the flow as abruptly as slow start does, because there is still data flowing between the two ends.
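
As an illustration only, the following Python sketch models the window evolution described above (slow start, congestion avoidance, fast retransmit and fast recovery) in units of segments; the class name, thresholds and simplifications are ours, not part of any TCP implementation.

class SimplifiedTcpSender:
    """Toy model of TCP congestion control, counted in segments (not bytes)."""

    def __init__(self):
        self.cwnd = 1.0        # congestion window
        self.ssthresh = 64.0   # slow-start threshold (assumed initial value)
        self.dup_acks = 0

    def on_new_ack(self):
        self.dup_acks = 0
        if self.cwnd < self.ssthresh:
            self.cwnd += 1.0                 # slow start: roughly doubles per RTT
        else:
            self.cwnd += 1.0 / self.cwnd     # congestion avoidance: ~1 segment per RTT

    def on_duplicate_ack(self):
        self.dup_acks += 1
        if self.dup_acks == 3:               # fast retransmit on the third duplicate ACK
            self.ssthresh = max(self.cwnd / 2, 2.0)
            self.cwnd = self.ssthresh        # fast recovery: no fall back to one segment
            # the missing segment would be retransmitted here

    def on_timeout(self):
        self.ssthresh = max(self.cwnd / 2, 2.0)
        self.cwnd = 1.0                      # severe congestion signal: restart with slow start
        self.dup_acks = 0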

3. ISSUES WITH 2.5G/3G WIRELESS NETWORKS

Hosts in a wired network are connected by cables, and hence there are few transmission errors due to interference
from the environment. In contrast, hosts in wireless networks move frequently while communicating and, since they
share the medium, they experience a lot of interference from the environment [2].

3.1 High Bit Error Rate (BER)

TCP regards all data loss as a notification of network congestion and lowers its sending rate accordingly. This is
appropriate behavior in wired networks, where data loss due to bit errors is uncommon. However, the wireless BER is
high because wireless transmission is vulnerable to interference from the environment. If TCP unnecessarily
performs many congestion control reactions in the case of corruption losses, the end-to-end throughput may be
degraded.

Especially in the 2.5G/3G networks, in order to improve the performance of the wireless link, the link layer applies
ARQ (Automatic Repeat reQuest) and FEC (Forward Error Correction). In general, link layer ARQ and FEC
can provide an almost error-free packet service to the upper layer traffic. However, the retransmissions performed by ARQ
introduce latency and jitter into the link layer flow, which results in a relatively large bandwidth-delay product for the
2.5G/3G networks.
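
As a rough illustration of what this means for TCP, the bandwidth-delay product of such a link can be estimated as in the short sketch below; the bit rate and round-trip time are assumed example figures, not measurements. A sender needs a window at least this large to keep the link full.

# Illustrative figures only: a 384 kbit/s 3G bearer with ~600 ms round-trip time,
# part of which comes from link-layer ARQ retransmissions.
link_rate_bps = 384_000
rtt_s = 0.6

bdp_bytes = link_rate_bps / 8 * rtt_s
print(f"bandwidth-delay product ~ {bdp_bytes:.0f} bytes")   # ~28800 bytes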

3.2 Mobility

Wireless hosts may move frequently while communicating. When the movement of a host causes a handover to
another access node, i.e. another wireless link, data already in flight to the wireless host is lost. If the sender has a
large window and the path has a large bandwidth-delay product, such losses can be considerable. Besides, the
TCP sender interprets this loss as congestion and invokes congestion control mechanisms, which is also
unnecessary [3].

3.3 Narrow bandwidth

Bandwidth is a scarce resource in current wireless networks. The maximum bandwidth for circuit switching is
2 Mbit/s according to the 3G standard, and it is for indoor use only. The outdoor bit rate is even lower, around
384 kbit/s. Bandwidth also varies greatly in wireless networks. From the network infrastructure point of view, a
device that connects a wireless link to a wired link is likely to become congested by traffic towards wireless hosts.

4. TCP OPTIMIZATION METHODS

4.1 Parameter optimization

A careful setting of parameters in the TCP sender and receiver is necessary to achieve good performance over
wireless links.

4.1.1 MTU size

One of the link layer parameters is the MTU (Maximum Transfer Unit). A larger MTU allows TCP to grow the
congestion window faster [17], because the window is counted in units of segments. On an error-prone link, a smaller
link PDU size gives a better chance of successful transmission. With layer two ARQ and transparent link
layer fragmentation, the network layer can enjoy a larger MTU even under relatively high BER (Bit Error Rate)
conditions. Without these features in the link, a smaller MTU is better. TCP over 2.5G/3G SHOULD allow designers
the freedom to choose the MTU from a small value (such as 576 B) to a large value (up to 1500 B).
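
A small numeric sketch, with MSS values we assume only for illustration, of why the MTU matters: since the congestion window grows in segments, a connection with MSS = 1460 bytes reaches a given window in bytes sooner than one with MSS = 536 bytes.

def slow_start_bytes(mss, rtts):
    """Window in bytes after a number of RTTs of pure slow start (doubling each RTT)."""
    return mss * (2 ** rtts)

for mss in (536, 1460):          # typical MSS for a 576-byte and a 1500-byte MTU
    print(mss, [slow_start_bytes(mss, r) for r in range(5)])
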
4.1.2 Path MTU discovery

Path MTU discovery allows a sender to determine the maximum end-to-end transmission unit for a given routing
path. RFC 1191 [18] and RFC 1981 [19] describe the MTU discovery procedure for IPv4 and IPv6 respectively. This
allows TCP senders to employ larger segment sizes (without causing IP layer fragmentation) instead of assuming
the default MTU. TCP over 2.5G/3G implementations SHOULD implement Path MTU Discovery. Path MTU
Discovery requires intermediate routers to support the generation of the necessary ICMP messages.
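
A minimal sketch of how a host might enable Path MTU Discovery on a socket. The numeric constants come from Linux's <linux/in.h>; they are stated as assumptions here because not every Python build exports them by name, and "example.com" is only a placeholder peer.

import socket

# Linux-specific constants from <linux/in.h> (assumed; not all Python builds export them)
IP_MTU_DISCOVER = 10
IP_PMTUDISC_DO = 2      # always set the Don't Fragment bit
IP_MTU = 14             # query the path MTU cached for a connected socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
sock.connect(("example.com", 80))
# After some traffic (and possible ICMP "fragmentation needed" feedback),
# the kernel's current path MTU estimate can be read back:
path_mtu = sock.getsockopt(socket.IPPROTO_IP, IP_MTU)
print("current path MTU estimate:", path_mtu)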

4.1.3 Advertised window size

The other parameter is the advertised window negotiated during the TCP session. If this value is large, the sender
might send many segments at once, causing overflow in the last element before the slow link (the RNC in the case of
UMTS). Packet losses can dramatically decrease TCP throughput, and therefore the mobile receiver should not
advertise too large a window if the wireless link is thin. One problem is that the available bandwidth of a connection
is not always known exactly in advance. Especially in 3G mobile networks, this bandwidth
might vary from tens of kbit/s to hundreds of kbit/s for a single user. Dynamic methods like window pacing might
provide some solutions.
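
A minimal receiver-side sketch of capping the advertised window: limiting the socket receive buffer bounds the window the mobile host can advertise. The 16 KB value and the server address are illustrative assumptions for a thin link, not recommendations.

import socket

RCV_BUF_BYTES = 16 * 1024   # assumed example cap for a ~64 kbit/s channel

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Must be set before the connection is established so the window scale/AW is negotiated accordingly.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, RCV_BUF_BYTES)
sock.connect(("server.example.com", 8080))   # placeholder peer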

4.2 Flow related methods

4.2.1 Fast TCP

The basic idea of the Fast TCP mechanism arises from the fact that controlling the ACK flow can indirectly affect the
dynamics of TCP behavior. Therefore, when congestion on the forward path is detected in a router (or a similar
device), for example from link utilization or buffer occupancy information,
the backward ACKs are delayed so as to relieve the congestion quickly and enhance the TCP throughput.
Delaying ACKs effectively controls the rate at which the sender transmits into the network, and may result in
little or no queuing at the bottleneck router [5][6].

Figure 1. The Fast TCP mechanism: a Fast-TCP node between the TCP source and the TCP destination holds forward and ACK buffers and delays backward ACKs.

The Fast TCP mechanism requires only a modification of the router, so the implementation is inexpensive.
However, because an IP network is connectionless, the ACKs of a forward data flow may not pass
through the same router as the data; therefore, delaying the backward ACKs is better employed in the access router than in an arbitrary
intermediate router. A detailed implementation of Fast TCP in the IPSO router is given in [7].
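
The sketch below is our own simplified illustration of the core decision, not the IPSO implementation of [7]: an access router scales the delay applied to backward ACKs with the occupancy of the forward buffer. All thresholds and delay values are assumptions.

def ack_delay_s(forward_queue_bytes, buffer_size_bytes,
                base_delay_s=0.0, max_extra_delay_s=0.2):
    """Delay to apply to a backward ACK, growing with forward buffer occupancy."""
    occupancy = forward_queue_bytes / buffer_size_bytes
    if occupancy < 0.5:              # uncongested: forward ACKs immediately
        return base_delay_s
    # Between 50% and 100% occupancy, increase the delay linearly so the TCP
    # source slows down before the forward buffer overflows.
    return base_delay_s + max_extra_delay_s * (occupancy - 0.5) / 0.5

# Example: a 64 KB forward buffer that is 75% full
print(ack_delay_s(48 * 1024, 64 * 1024))   # 0.1 s of extra delay on this ACK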

Figure 2 shows the forward buffer occupancy in a router when the flows traverse a wireless link, with and
without the Fast TCP method. The benefit of Fast TCP can be observed in this simulation example, since no
buffer overflow occurs.

Figure 2. Forward buffer occupancy

4.2.2 Window pacing

In the window pacing method, the idea is to modify the advertised window (AW) of TCP acknowledgments if a
network element gets congested. When an acknowledgment is received in such an element, the buffering situation for
the flow in the reverse direction is checked and the AW is possibly modified. Then the acknowledgment is
forwarded to the next hop as usual. There is one major limitation with this method: it is rather difficult to decide
on the new AW value to set. The reason is that the value must depend not only on the available buffer space in the
reverse direction but also on the number of TCP connections that share the same buffer. In normal routers, where
the aggregate traffic might be composed of hundreds of TCP connections, it is almost impossible to use the window
pacing method. However, in some particular elements where the buffering memory is divided into many parts, each
part dedicated to one or a few users, this technique might be very useful, especially if the element is a potential
bottleneck of the path. This can happen, for example, when a wireless network is connected to an IP network. The
algorithm used should, however, be robust enough to adapt to various situations. In [8] the authors propose a method
to increase or decrease the AW value slowly according to the average queue length. This algorithm is reported to
clearly outperform RED and drop-from-front mechanisms in the related environments.
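
The advertised-window rewrite can be sketched as follows. This is loosely in the spirit of [8] but with our own assumed proportional rule and constants, so it should be read as an illustration rather than the algorithm of [8].

def pace_advertised_window(advertised_window, avg_queue_bytes,
                           buffer_size_bytes, n_connections, mss=1460):
    """Return a possibly reduced advertised window to write into a backward ACK."""
    free_bytes = max(buffer_size_bytes - avg_queue_bytes, 0)
    fair_share = free_bytes // max(n_connections, 1)   # split remaining buffer per flow
    # Never advertise more than the receiver did, never less than one MSS.
    return min(advertised_window, max(fair_share, mss))

# Example: the mobile advertises 64 000 bytes, but the 128 KB buffer in the RNC is
# half full and shared by 4 flows -> each ACK is rewritten to carry a 16 KB window.
print(pace_advertised_window(64_000, 64 * 1024, 128 * 1024, 4))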

A simulation system built at Nokia Research Center models TCP flows that go over a WCDMA link (including the
RLC, MAC and physical layers). We tested the window pacing mechanism in the last element before the wireless
link (the RNC). In the following scenario, one file of 400,000 bytes is sent over one 64 kbit/s channel from a server to
a mobile equipment. The frame error rate is around 10%. A corrupted frame can be retransmitted (3 times
at most). The advertised window is set to 64 kbytes. The same simulation has been run 20 times with different
seeds, and the results show the average over these samples.

Frame Error Rate = 10%, AW = 64 000 bytes          TCP        TCP + Window Pacing
Data lost at PDCP (bytes)                          89 293     29 208
Data sent from the server (bytes)                  590 090    452 547
Data received in the mobile (bytes)                498 624    422 520
Mean transfer time for a 400 000 byte file (s)     154.6      76.25
Max transfer time (s)                              188.3      117.7

Figure 3. Window pacing in the RNC

We can see in this example that the advertised window originally set by the mobile is obviously too high for this
channel bit rate, causing many packet losses in the RNC. The window pacing mechanism allows this AW value to
be decreased gradually according to the buffering situation. The results show far fewer packet losses in the RNC and
increased TCP throughput.
4.2.3 Random Early Detection (RED)

In the RED algorithm, packets are dropped by the element before it becomes really congested. An average queue length
is calculated. If the average queue length is below a minimum threshold, all incoming packets are accepted by the
element; if the average queue length is greater than a maximum threshold, all incoming packets are dropped; if the
average queue length is between the two thresholds, packets are dropped with a probability that varies linearly
from 0 to maxp [9]. The goal of RED is to inform sources early, by dropping segments, that congestion is likely to
occur. The main effect is to keep the queues in routers quite small in order to ease the acceptance of bursts of new
segments. Another advantage is that the buffering delay is shorter, so interactive applications (web transfers, telnet
traffic) have better subjective performance. RED is mostly used in wired networks where the aggregate traffic is
made up of many TCP flows. The mechanism is also intended to avoid global synchronisation, which happens when, part
of the time, many flows are transmitting and, part of the time, only a few flows are transmitting. In wireless networks
this technique can also be used in network elements; however, RED is mostly useful when many flows share the same
buffer, which is not always the case in radio networks. Still, it is now accepted that RED can provide numerous
advantages and seemingly no disadvantages, so it should be widely deployed [9].
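
The drop decision described above can be sketched as follows. This is a simplified version of RED as described in [9]; the averaging weight, thresholds and maxp used in the example are assumed values.

import random

def red_should_drop(avg_queue, min_th, max_th, max_p=0.1):
    """Simplified RED drop decision based on the average queue length."""
    if avg_queue < min_th:
        return False                      # accept everything below the minimum threshold
    if avg_queue >= max_th:
        return True                       # drop everything above the maximum threshold
    # In between, drop with a probability growing linearly from 0 to max_p.
    p = max_p * (avg_queue - min_th) / (max_th - min_th)
    return random.random() < p

def update_avg(avg_queue, instantaneous_queue, weight=0.002):
    """The average queue is usually an exponentially weighted moving average."""
    return (1 - weight) * avg_queue + weight * instantaneous_queue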

4.3 End system modifications

4.3.1 Explicit Congestion Notification (ECN)

ECN [10] became an IETF (Internet Engineering Task Force) RFC (RFC 2481) in 1999. With this
algorithm, routers detect congestion before the queue overflows; routers are no longer limited to packet drops as an
indication of congestion. Routers can instead set a Congestion Experienced (CE) bit in the packet header.
When the receiver accepts a packet with CE set, it sends the notification back in a packet travelling in the reverse
direction. The sender then reduces its current window and slows down the transmission rate, so that the network can avoid congestion
to some extent.

ECN [10] provides useful information that helps avoid further deterioration, but it still has some limitations for wireless applications.
The proposals in [11][12] enhance ECN by adjusting the window size in different ways based on the different congestion notifications, so that
the bandwidth utilization is improved.
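
A minimal sender-side sketch of the basic ECN reaction described above, with our own assumed bookkeeping: an ACK carrying the ECN-Echo indication is treated like a single packet loss, so the window is halved at most once per round-trip time and nothing is retransmitted.

def react_to_ecn_echo(cwnd, ssthresh, ecn_echo, reacted_this_rtt):
    """Sender reaction to an ACK that may carry the ECN-Echo flag (units: segments).

    The once-per-RTT flag is managed by the caller; constants are illustrative.
    """
    if ecn_echo and not reacted_this_rtt:
        ssthresh = max(cwnd / 2, 2.0)
        cwnd = ssthresh                  # slow down, but retransmit nothing
        reacted_this_rtt = True
    return cwnd, ssthresh, reacted_this_rtt

# Example: a sender with a 20-segment window receives an ECN-Echo ACK.
print(react_to_ecn_echo(cwnd=20.0, ssthresh=64.0, ecn_echo=True, reacted_this_rtt=False))
# -> (10.0, 10.0, True)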

4.3.2 Selective Acknowledgment (SACK)

In [13], the authors explain that TCP may experience poor performance when multiple packets are lost from one
window of data. SACK is a strategy that improves the original TCP behavior in the face of multiple dropped
segments. With selective acknowledgments, the data receiver can inform the sender about all segments that have
arrived successfully, so the sender needs to retransmit only the segments that have actually been lost. This improvement
to the TCP mechanism is very useful in wireless environments, where packets can be lost not only to congestion but also
to transmission errors. This option is increasingly deployed in TCP stacks nowadays because it
drastically increases TCP throughput in some cases.
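
A small sketch of how a sender could use SACK information: given the cumulative acknowledgment and the SACK blocks reported by the receiver, only the holes between them need to be retransmitted. The byte-range bookkeeping is our own simplification, not the wire format of [13].

def segments_to_retransmit(sent_segments, cum_ack, sack_blocks):
    """Return the (start, end) byte ranges that are still missing at the receiver.

    sent_segments: list of (start, end) ranges the sender has transmitted.
    cum_ack:       highest byte covered by the cumulative ACK.
    sack_blocks:   list of (start, end) ranges reported in SACK options.
    """
    def covered(seg):
        start, end = seg
        if end <= cum_ack:
            return True                              # already cumulatively acknowledged
        return any(s <= start and end <= e for s, e in sack_blocks)

    return [seg for seg in sent_segments if not covered(seg)]

# Example: segments 3 and 5 were lost; only those two ranges are retransmitted.
sent = [(0, 1460), (1460, 2920), (2920, 4380), (4380, 5840), (5840, 7300), (7300, 8760)]
print(segments_to_retransmit(sent, cum_ack=2920, sack_blocks=[(4380, 5840), (7300, 8760)]))
# -> [(2920, 4380), (5840, 7300)]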

4.3.3 Increasing TCP's initial window

RFC 2414 [14] describes a modification to TCP that allows an increase of the initial congestion window, with an upper
bound of 4380 bytes. This represents a change from RFC 2001 [15], which says that the congestion
window must be initialized to only one segment. In the new RFC, up to four segments can be sent at the beginning
of the connection, depending on the MSS.

The advantages of starting with a larger window are listed below [14].
1. When the initial window is one segment, a receiver employing delayed ACKs is forced to wait for a timeout
before generating an ACK. With an initial window of at least two segments, the receiver will generate an ACK
after the second data segment arrives. This eliminates the wait on the timeout (often up to 200 msec).
2. For connections transmitting only a small amount of data, a larger initial window reduces the transmission time
(assuming at most moderate segment drop rates). For many email (SMTP) and web page (HTTP) transfers that
are less than 4K bytes, the larger initial window would reduce the data transfer time to a single RTT.
3. For connections that will be able to use large congestion windows, this modification eliminates up to three
RTTs and a delayed ACK timeout during the initial slow-start phase.

For these reasons, increasing TCP's initial window should be most useful in wireless networks, where end-to-end
delays can be quite large.
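
For reference, RFC 2414 bounds the larger initial window by the formula min(4*MSS, max(2*MSS, 4380 bytes)); the small sketch below simply evaluates it for a few common MSS values (the sample values are our own choice).

def initial_window_bytes(mss):
    """Upper bound on the initial congestion window per RFC 2414."""
    return min(4 * mss, max(2 * mss, 4380))

for mss in (536, 1460, 2190):
    print(mss, initial_window_bytes(mss))
# 536 -> 2144 (4 segments), 1460 -> 4380 (3 segments), 2190 -> 4380 (2 segments)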

4.4 Snoop method

The so-called snoop method has some similarities with Split TCP methods. Since it does not break the TCP
connection, it can be regarded as an improvement over Split TCP, because the end-to-end semantics is retained [16].
The base station contains a snoop module that looks at every packet of every TCP connection passing through it.
The module caches the packets that are sent by the fixed host to the wireless host but have not yet been acknowledged
by the wireless host. On receiving a packet from the fixed host, the module stores the packet in its cache and then
passes it to the wireless host. If packets are lost on the wireless link, the base station gets repeated
acknowledgments for the lost segment from the wireless host. On detecting this loss, the snoop module checks whether it
has the packet in its cache; if so, it retransmits the packet locally and suppresses the duplicate ACK towards the fixed host;
otherwise it forwards the ACK to the fixed host and lets the sender recover from the loss [3].
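
A highly simplified sketch of the snoop module's bookkeeping at the base station, written as our own illustration of the behavior described above rather than the implementations discussed in [3] or [16].

class SnoopModule:
    """Caches downlink TCP segments and handles duplicate ACKs locally."""

    def __init__(self):
        self.cache = {}          # first sequence number of segment -> payload
        self.dup_acks = {}       # acknowledged sequence number -> count seen

    def on_data_from_fixed_host(self, seq, segment, send_to_wireless):
        self.cache[seq] = segment            # remember the not-yet-acknowledged segment
        send_to_wireless(segment)

    def on_ack_from_wireless_host(self, ack_seq, send_to_wireless, forward_to_fixed):
        # Drop cached segments that this cumulative ACK now covers.
        for seq in [s for s in self.cache if s < ack_seq]:
            del self.cache[seq]

        self.dup_acks[ack_seq] = self.dup_acks.get(ack_seq, 0) + 1
        if self.dup_acks[ack_seq] > 1 and ack_seq in self.cache:
            # Loss on the wireless link: retransmit locally and hide the
            # duplicate ACK from the fixed host so its TCP does not back off.
            send_to_wireless(self.cache[ack_seq])
            return
        forward_to_fixed(ack_seq)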

5. CONCLUSION

In this article we have presented the basic features of the TCP protocol. Since TCP was not originally designed
with wireless networks in mind, its performance can be degraded over wireless interfaces. Several methods for
optimizing TCP performance have been introduced and studied.

REFERENCES
[1] IETF RFC 793, "Transmission Control Protocol", 1981.
[2] J. Liu, "Transport Layer Protocols for Wireless Networks", Ph.D. thesis, University of South Carolina, 1999.
[3] N. Deshpande, "TCP Extensions for Wireless Networks".
[4] Internet Draft, "End-to-end Performance Implications of Slow Links", July 2000.
[5] J. Ma, "A Simple Fast TCP Flow Control in IP Network or IP/ATM Subnet – a Significant Improvement over IP", November 1997.
[6] J. Ma, J. Wu, and P. Zhang, "ACK Delay Control in IP Networks", ICT'99, 1999.
[7] D. Zhang, Y. Shi, and J. Ma, "Fast TCP Implementation in IPSO Router", NERD conference, 2000.
[8] L. Kalampoukas, A. Varma, and K. K. Ramakrishnan, "Explicit Window Adaptation: A Method to Enhance TCP Performance", IEEE, 1998.
[9] IETF RFC 2309, "Recommendations on Queue Management and Congestion Avoidance in the Internet", 1998.
[10] K. Ramakrishnan and S. Floyd, "A Proposal to Add Explicit Congestion Notification (ECN) to IP", RFC 2481, January 1999.
[11] X. Li, F. Peng, J. Wu, S. Cheng, and J. Ma, "A New Proposal to Enhance the Performance of TCP in Wireless Networks", Nokia invention report, October 1999.
[12] X. Li, F. Peng, J. Wu, S. Cheng, and J. Ma, "Enhancement of ECN for Wireless Network Application", Nokia invention report, September 1999.
[13] IETF RFC 2018, "TCP Selective Acknowledgment Options", 1996.
[14] IETF RFC 2414, "Increasing TCP's Initial Window", 1998.
[15] IETF RFC 2001, "TCP Slow Start, Congestion Avoidance, Fast Retransmit, and Fast Recovery Algorithms", 1997.
[16] G. Montenegro, S. Dawkins, M. Kojo, V. Magret, and N. Vaidya, "Long Thin Networks", RFC 2757, January 2000.
[17] S. Dawkins and G. Montenegro, "End-to-end Performance Implications of Slow Links", Internet draft, November 2000.
[18] J. Mogul and S. Deering, "Path MTU Discovery", RFC 1191, November 1990.
[19] J. McCann, S. Deering, and J. Mogul, "Path MTU Discovery for IP version 6", RFC 1981, August 1996.
