Abstract
Service prioritization among different traffic classes is an important goal for the
Internet. Conventional approaches to solving this problem consider the existing best-
effort class as the low-priority class, and attempt to develop mechanisms that provide
“better-than-best-effort” service.
The key mechanisms unique to TCP-LP congestion control are the use of one-way
packet delays for early congestion indications and a TCP-transparent congestion
avoidance policy. The following results were observed in our project:
1) TCP-LP is largely non-intrusive to TCP traffic.
2) Both single and aggregate TCP-LP flows are able to successfully utilize excess
network bandwidth; moreover, multiple TCP-LP flows share excess bandwidth fairly.
3) Substantial amounts of excess bandwidth are available to the low-priority class, even
in the presence of "greedy" TCP flows.
4) Despite their low-priority nature, TCP-LP flows are able to utilize significant amounts
of available bandwidth in a wide-area network environment.
Synopsis
Since TCP is the dominant protocol for best-effort traffic, we design TCP-LP to
realize a low-priority service as compared to the existing best effort service. Namely, the
objective is for TCP-LP flows to utilize the bandwidth left unused by TCP flows in a
non-intrusive, or TCP-transparent, fashion.
In contrast to probing-based approaches, TCP-LP is algorithmic, with the goal of
transmitting at the rate of the available bandwidth. Consequently, competing TCP-LP flows obtain their fair share of
the available bandwidth, as opposed to probing flows which infer the total available
bandwidth, overestimating the fraction actually available individually when many flows
are simultaneously probing. Moreover, as the available bandwidth changes over time,
TCP-LP provides a mechanism to continuously adapt to changing network conditions.
ANALYSIS
In the Internet protocol suite, TCP is the intermediate layer between the Internet
Protocol (IP) below it and an application above it. Applications often need reliable, pipe-
like connections to each other, whereas the Internet Protocol does not provide such
streams, but rather only unreliable packets. TCP performs the task of the transport layer in the
simplified OSI model of computer networks. The Transmission Control Protocol (TCP) is
one of the core protocols of the Internet Protocol suite.
Using TCP, applications on networked hosts can create connections to one another,
with in-order delivery of data from sender to receiver. TCP also distinguishes data for multiple
concurrent applications (e.g., a web server and an e-mail server) running on the same
host. TCP supports many of the Internet's most popular application protocols and resulting
applications, including the World Wide Web, e-mail, and Secure Shell.
Applications send streams of octets (8-bit bytes) to TCP for delivery through the
network, and TCP divides the byte stream into appropriately sized segments, usually
delineated by the maximum transmission unit (MTU) size of the data link layer of the
network to which the computer is attached. TCP then passes the resulting packets to the Internet
Protocol for delivery through the network to the TCP module of the entity at the other end.
TCP checks to make sure that no packets are lost by giving each packet a sequence number,
which is also used to make sure that the data are delivered to the entity at the other end in
the correct order. The TCP module at the far end sends back an acknowledgement for
packets which have been successfully received; a timer at the sending TCP will cause a
timeout if an acknowledgement is not received within a reasonable round-trip time (RTT),
and the data will then be re-transmitted. TCP checks that no bytes are damaged by using a
checksum; one is computed at the sender for each block of data before it is sent, and
checked at the receiver.
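The checksum mentioned above is the standard 16-bit one's-complement Internet checksum. A minimal sketch in Java (illustrative only; this is not the project's code, and a real TCP implementation also covers a pseudo-header):

```java
// Sketch of the 16-bit one's-complement Internet checksum that TCP
// applies to each segment (illustrative; not taken from the project code).
public class InternetChecksum {
    // Computes the checksum over a byte array (padding with a zero byte
    // if the length is odd), returning a 16-bit value.
    public static int checksum(byte[] data) {
        long sum = 0;
        for (int i = 0; i < data.length; i += 2) {
            int hi = data[i] & 0xff;
            int lo = (i + 1 < data.length) ? (data[i + 1] & 0xff) : 0;
            sum += (hi << 8) | lo;
            // Fold any carry out of the low 16 bits back in.
            while ((sum & 0xffff0000L) != 0) {
                sum = (sum & 0xffff) + (sum >> 16);
            }
        }
        return (int) (~sum & 0xffff); // one's complement of the sum
    }

    public static void main(String[] args) {
        byte[] data = {0x00, 0x01, (byte) 0xf2, 0x03,
                       (byte) 0xf4, (byte) 0xf5, (byte) 0xf6, (byte) 0xf7};
        System.out.printf("checksum = 0x%04x%n", checksum(data)); // 0x220d
    }
}
```

The receiver recomputes the same sum over the received bytes (including the checksum field) and flags damage if the result is nonzero.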
CONGESTION CONTROL
The final part of TCP is congestion throttling. Acknowledgements for data sent,
or the lack of acknowledgements, are used by senders to implicitly interpret network
conditions between the TCP sender and receiver. Coupled with timers, TCP senders
and receivers can alter the behavior of the flow of data. This is generally referred to as
flow control, congestion control, and/or network congestion avoidance. TCP uses a
number of mechanisms to achieve high performance and avoid congesting the network.
Enhancing TCP to reliably handle loss, minimize errors, manage congestion and go fast
in very high-speed environments are ongoing areas of research and standards
developments.
PRIOR WORK
LIMITATIONS:
TCP Vegas can improve TCP throughput over the Internet by avoiding packet
loss. However, these studies were based on Internet paths that existed in the early 1990s,
which generally involved at least one T1-speed link and consequently allowed any given
flow to consume a significant fraction of the available bandwidth. The studies also did not
isolate the impact of the congestion avoidance algorithm (i.e., CAM) from the enhanced
loss recovery mechanism.
To overcome the drawbacks of TCP Vegas, "The Incremental
Deployability of RTT-Based Congestion Avoidance for High Speed TCP Internet
Connections" [4] by Jim Martin, Arne Nilsson, and Injong Rhee was proposed.
This work focuses on end-to-end congestion avoidance algorithms that use round-trip time
(RTT) fluctuations as an indicator of the level of congestion.
They studied delay-based congestion avoidance (DCA) in today's best-effort Internet, where IP switches are subject to thousands of
TCP flows, resulting in congestion with time scales that span orders of magnitude. Their
results suggested that RTT-based congestion avoidance may not be reliably incrementally
deployed in this environment. Through extensive measurement and simulation, it was
found that when TCP/DCA (i.e., a TCP/Reno sender that is extended with DCA) is
deployed over a high-speed Internet path, the flow generally experiences degraded
throughput compared to an unmodified TCP/Reno flow. It was shown that the congestion
information contained in RTT samples is not sufficient to reliably predict packet loss, and
that the congestion reaction by a DCA flow (assuming that the flow consumes a small
fraction of the resources at the bottleneck) has minimal impact on the congestion level
over the path when the total DCA traffic at the bottleneck consumes less than 10% of the
bottleneck bandwidth.
LIMITATIONS:
In the last decade a large body of work has been devoted to providing quality of
service to individual real-time flows. Admission control is the common element of these
Integrated Services (IntServ) architectures; that is, flows must request service from the
network and are accepted (or rejected) depending on the level of available resources.
Typically this involves a signaling mechanism to carry the reservation request to
all the routers along the path. While such architectures provide excellent quality of
service, they have significant scalability problems. "Endpoint Admission Control:
Architectural Issues and Performance" [5] by Lee Breslau, Edward W. Knightly, and Scott
Shenker notes that the traditional approach to implementing admission control, as
exemplified by the Integrated Services proposal in the IETF, uses a signaling protocol to
establish reservations at all routers along the path.
LIMITATIONS:
1. Endpoint admission control certainly has its flaws. The
set-up delay is substantial, on the order of seconds, which may limit
its appeal for certain applications.
2. The utilization and loss rate can degrade somewhat under sufficiently high loads even
with slow start probing.
3. The quality of service is not predictable across settings.
While these performance problems are not insignificant, there are two far greater barriers
to adoption.
First, as of yet there is no proposed mechanism to enforce the uniformity
of the admission thresholds, or even to enforce the use of admission control at all in this
service class. That is, users could send packets with the appropriate admission-control DS
field without using admission control. A similar problem is faced by the current best-effort
congestion control paradigm, where users can send best-effort traffic without
using any congestion control. (It was further contended that the real complexity of
out-of-band marking lies in the virtual queue, as one could easily achieve
exactly the same results by doing out-of-band virtual dropping instead of out-of-band
marking; this is equivalent to using a threshold of 1, which relates it to
the problem of setting the thresholds uniformly.)
ADVANTAGES:
The algorithm is adaptive: it requires no a priori traffic statistics and effectively tracks
changes in network conditions. Network simulator experiments revealed that Delphi
gives accurate cross-traffic estimates for higher link utilization levels.
LIMITATIONS:
Many distributed applications can make use of large background transfers of data
that humans are not waiting for, in order to improve availability, reliability, latency, or consistency.
However, given the rapid fluctuations of available network bandwidth and changing
resource costs due to technology trends, hand-tuning the aggressiveness of background
transfers risks (1) complicating applications, (2) being too aggressive and interfering
with other applications, and (3) being too timid and failing to gain the benefits of
background transfers.
Our goal is for the operating system to manage network resources in order to
provide a simple abstraction of near zero-cost background transfers. "TCP Nice: A
Mechanism for Background Transfers" [9] by Arun Venkataramani, Ravi
Kokku, and Mike Dahlin can provably bound the interference inflicted by background
flows on foreground flows in a restricted network model. Their microbenchmarks and
case-study applications suggest that in practice it interferes little with foreground flows,
reaps a large fraction of spare network bandwidth, and simplifies application construction
and deployment. For example, in their prefetching case-study application, aggressive
prefetching improves demand performance by a factor of three when Nice manages
resources, whereas the same prefetching hurts demand performance by a factor of six under
standard network congestion control. TCP Nice dramatically reduces the interference inflicted by
background flows on foreground flows. It does so by modifying TCP congestion control
to be more sensitive to congestion than traditional protocols such as TCP Reno or TCP
Vegas: it detects congestion earlier, reacts to it more aggressively, and allows
much smaller effective minimum congestion windows.
SYSTEM DESIGN
TCP-LP (Low Priority) is an end-point protocol that achieves two-
class service prioritization without any support from the network. The key observation is
that end-to-end differentiation can be achieved by having different end-host applications
employ different congestion control algorithms as dictated by their performance
objectives. Since TCP is the dominant protocol for best-effort traffic, we design TCP-LP
to realize a low-priority service as compared to the existing best effort service. Namely,
the objective is for TCP-LP flows to utilize the bandwidth left unused by TCP flows in a
non-intrusive, or TCP-transparent, fashion. Moreover, TCP-LP is a distributed algorithm
that is realized as a sender-side modification of the TCP protocol.
IMPLEMENTATION PLAN
First, we develop a reference model to formalize the two design objectives: TCP-
LP transparency to TCP, and (TCP-like) fairness among multiple TCP-LP flows
competing to share the excess bandwidth. The reference model consists of a two level
hierarchical scheduler in which the first level provides TCP packets with strict priority
over TCP-LP packets and the second level provides fairness among micro flows within
each class. TCP-LP aims to achieve this behavior in networks with non-differentiated
(first-come-first-served) service.
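The two-level reference scheduler described above can be sketched as two FIFO queues with strict priority between them. This is a conceptual model only; class and method names below are our own, not taken from the project code:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Conceptual sketch of the reference model: TCP packets receive strict
// priority over TCP-LP packets, and service within each class is FIFO.
// (Illustrative model only; names are our own.)
public class StrictPriorityScheduler {
    private final Deque<String> tcpQueue = new ArrayDeque<>();
    private final Deque<String> lpQueue = new ArrayDeque<>();

    public void enqueueTcp(String pkt) { tcpQueue.addLast(pkt); }
    public void enqueueLp(String pkt)  { lpQueue.addLast(pkt); }

    // A TCP-LP packet is served only when no TCP packet is waiting.
    public String next() {
        if (!tcpQueue.isEmpty()) return tcpQueue.pollFirst();
        return lpQueue.pollFirst(); // null when both queues are empty
    }
}
```

For example, if a TCP-LP packet is queued first and a TCP packet arrives afterwards, the scheduler still serves the TCP packet first, which is exactly the behavior TCP-LP tries to approximate end-to-end.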
ALGORITHM DESCRIPTION
To achieve low priority service in the presence of TCP traffic, it is necessary for
TCP-LP to infer congestion earlier than TCP. In principle, the network could provide
such early congestion indicators. For example, TCP-LP flows could use a type-of-service
bit to indicate low priority, and routers could use Explicit Congestion Notification (ECN)
messages to inform TCP-LP flows of lesser congestion levels than TCP flows. However,
given the absence of such network support, we devise an endpoint realization of this
functionality by using packet delays as early congestion indicators for TCP-LP, as compared to
the packet drops used by TCP. In this way, TCP-LP and TCP implicitly coordinate in a
distributed manner to provide the desired priority levels.
DELAY THRESHOLD:
TCP-LP measures one-way packet delays and employs a simple delay-threshold-
based method for early inference of congestion. Denote di as the one-way delay of the
packet with sequence number i, and dmin and dmax as the minimum and maximum
one-way packet delays experienced throughout the connection's lifetime. Thus, dmin is an
estimate of the one-way propagation delay and dmax - dmin is an estimate of the maximum
queueing delay. Next, denote gamma as the delay smoothing parameter and sdi as the
smoothed one-way delay, computed as sdi = (1 - gamma) * sd(i-1) + gamma * di. An early
indication of congestion is inferred by a TCP-LP flow whenever the smoothed one-way delay
exceeds a threshold within the range of the minimum and maximum delay, i.e., whenever
sdi > dmin + (dmax - dmin) * delta, where delta (0 < delta < 1) denotes the threshold
parameter. (Note that as UDP flows are non-responsive, they would also be considered
high priority and multiplexed with the TCP flows.)
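The smoothing and threshold test described above can be sketched in a few lines. The parameter values used in the example below (gamma, delta) are illustrative assumptions, not the project's tuning:

```java
// Sketch of TCP-LP's delay-threshold test: exponentially smooth the
// one-way delay samples and flag early congestion when the smoothed delay
// crosses a point between the minimum and maximum observed delays.
// (Illustrative; parameter values are assumptions, not the project's.)
public class DelayThreshold {
    // Exponentially weighted smoothing: sdi = (1 - gamma)*sd(i-1) + gamma*di.
    public static double smooth(double sdPrev, double d, double gamma) {
        return (1 - gamma) * sdPrev + gamma * d;
    }

    // Early congestion is inferred when sd > dmin + (dmax - dmin) * delta.
    public static boolean earlyCongestion(double sd, double dmin,
                                          double dmax, double delta) {
        return sd > dmin + (dmax - dmin) * delta;
    }
}
```

For instance, with dmin = 10 ms, dmax = 110 ms, and delta = 0.15, the threshold is 25 ms: a smoothed delay of 30 ms signals early congestion, while 20 ms does not.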
DELAY MEASUREMENT
TCP-LP obtains samples of one-way packet delays using the TCP timestamp
option. Each TCP packet carries two four-byte timestamp fields. A TCP-LP sender
timestamps one of these fields with its current clock value when it sends a data packet.
On the other side, the receiver echoes back this timestamp value and, in addition,
timestamps the ACK packet with its own current time. In this way, the TCP-LP sender
measures one-way packet delays. Note that the sender and receiver clocks do not have to
be synchronized, since we are only interested in the relative time difference. Moreover,
drift between the two clocks is not significant here, as resets of dmin and dmax on
timescales of minutes can be applied. Finally, we note that by using one-way packet
delay measurements instead of round-trip times, cross-traffic in the reverse direction does
not influence TCP-LP's inference of early congestion. Minimum and maximum one-way
packet delays are initially estimated during the slow-start phase and are used after the
first packet loss, i.e., in the congestion avoidance phase.
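The measurement step above amounts to subtracting the sender's echoed timestamp from the receiver's timestamp and tracking the extremes. A minimal sketch (our own naming; real timestamp-field handling is more involved):

```java
// Sketch of one-way delay sampling from the TCP timestamp fields: the
// sender subtracts its own transmit timestamp (echoed back in the ACK)
// from the receiver's timestamp. Clocks need not be synchronized, since
// only the relative difference matters; dmin/dmax track the extremes.
// (Illustrative only; names are our own.)
public class OneWayDelay {
    private long dmin = Long.MAX_VALUE;
    private long dmax = Long.MIN_VALUE;

    // Records one sample and returns the measured one-way delay.
    public long sample(long senderTimestamp, long receiverTimestamp) {
        long d = receiverTimestamp - senderTimestamp;
        if (d < dmin) dmin = d;
        if (d > dmax) dmax = d;
        return d;
    }

    public long dmin() { return dmin; }
    public long dmax() { return dmax; }
}
```

Because only the difference is used, a constant offset between the two clocks cancels out; only clock drift matters, which is why periodic resets of dmin and dmax suffice.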
CONGESTION AVOIDANCE POLICY
Our control model is based on the assumptions of the original AIMD algorithm; we show
that both efficiency and fairness of AIMD can be improved.
Consider for simplicity a scenario with one TCP-LP and one TCP flow. The
reference strict-priority scheduler serves TCP-LP packets only when there are
no TCP packets in the system. However, whenever TCP packets arrive, the
scheduler immediately begins service of the higher-priority TCP packets.
Similarly, after serving the last packet from the TCP class, the strict-priority
scheduler immediately starts serving TCP-LP packets. Note that it is impossible
to exactly achieve this behavior from the network endpoints, as TCP-LP operates
on timescales of round-trip times, while the reference scheduling model operates on
timescales of packet transmission times. Thus, our goal is to develop a congestion control
policy that is able to approximate the desired dynamic behavior.
Upon an early congestion indication, TCP-LP halves its congestion window and
enters an inference phase; if no further congestion indication arrives, it then resumes
increasing its congestion window by one per round-trip time (as with TCP flows in this
phase). We observe that, as with router-assisted early congestion indication, consecutive
packets from the same flow often experience a similar network congestion state.
Consequently, as suggested for ECN flows, TCP-LP also reacts to a congestion indication
event at most once per round-trip time. Thus, in order to prevent TCP-LP from over-reacting
to bursts of congestion-indicated packets, TCP-LP ignores succeeding congestion
indications if the source has reacted to a previous delay-based congestion indication or to
a dropped packet within the last round-trip time. Finally, the minimum congestion window
for TCP-LP flows in the inference phase is set to 1. In this way, TCP-LP flows
conservatively ensure that an excess bandwidth of at least one packet per round-trip time
is available before probing for additional bandwidth.
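The reaction rules above can be sketched as a small state machine: halve on indication (never below one packet), react at most once per round-trip time, and grow additively otherwise. This is a simplified illustration of the policy described in the text, not the project's actual implementation:

```java
// Simplified sketch of the TCP-LP window policy described above: on an
// early congestion indication the window is halved (floor of 1 packet)
// and further indications within the same RTT are ignored; a quiet RTT
// re-arms the reaction and grows the window by one.
// (Our own naming; the real protocol's inference phase is richer.)
public class TcpLpWindow {
    private int cwnd;
    private boolean reactedThisRtt = false;

    public TcpLpWindow(int initialWindow) { this.cwnd = initialWindow; }

    // Called on a delay-based congestion indication or a packet drop.
    public void onCongestionIndication() {
        if (reactedThisRtt) return;     // react at most once per RTT
        cwnd = Math.max(1, cwnd / 2);   // multiplicative decrease, floor 1
        reactedThisRtt = true;
    }

    // Called at a round-trip boundary with no congestion observed.
    public void onQuietRtt() {
        reactedThisRtt = false;
        cwnd += 1;                      // additive increase
    }

    public int cwnd() { return cwnd; }
}
```

Starting from a window of 8, a congestion indication drops it to 4; a second indication in the same round-trip is ignored; a quiet round-trip then raises it to 5.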
[Flow diagram: when congestion is detected (Yes branch), the congestion avoidance policy (AIMD) is applied; otherwise TCP data transmission proceeds, with simultaneous background file transfer.]
Module Description:
This module is used to check whether congestion has occurred in the
network. This is achieved by sending a timestamp in the header of each TCP packet.
For each packet, we attach the information to be transferred along with the packet
header. Among these measurements, the timestamp is the key piece of information used
to identify congestion.
There are two approaches to identifying congestion: loss-based and delay-based.
The loss-based approach is used in TCP Reno; it infers congestion from the loss of a
packet during transmission.
The delay-based approach, in contrast, detects congestion without packet loss.
Instead, we take the round-trip queueing delay for the transmission and compare it against
the timestamp values carried in the header of each packet.
If the round-trip queueing delay exceeds the threshold derived from these
timestamps, congestion is inferred.
Congestion Avoidance Policy.
There are a number of approaches to identifying congestion and preventing it, but here we
consider AIMD (Additive Increase, Multiplicative Decrease).
Many applications require fast data transfer over high speed and long distance
networks. However, standard TCP fails to fully utilize the network capacity in high-speed
and long distance networks due to its conservative congestion control (CC) algorithm.
Some works have been proposed to improve the connection’s throughput by adopting
more aggressive loss-based CC algorithms, which may severely decrease the throughput
of regular TCP flows sharing the network path. On the other hand, pure delay-based
approaches may not work well if they compete with loss-based flows.
Many distributed applications can make use of large background transfers of data
that humans are not waiting for in order to improve service quality. For example, a broad
range of applications and services, such as data backup, prefetching, enterprise data
distribution, Internet content distribution, and peer-to-peer storage, can trade increased
network bandwidth consumption for improved service quality.
This is achieved by transferring the file in a sharable thread, which has the
capability to transfer more than one file at a time. Here we transfer TCP in one thread and
TCP-LP in another thread, so TCP-LP occupies the unused bandwidth left by the TCP
transmission due to reverse-traffic delay.
Low-priority data is transferred through the unused bandwidth, as compared to the
"fair share" of bandwidth targeted by TCP. This module achieves both TCP and
TCP-LP communication; simultaneous transmission of both TCP and TCP-LP utilizes the
maximum bandwidth.
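The two-thread idea above can be sketched as a toy simulation: one thread stands in for the foreground TCP flow and one for the background TCP-LP flow, running concurrently while their transferred byte counts accumulate independently. No real sockets or files are involved; names are our own:

```java
import java.util.concurrent.atomic.AtomicLong;

// Toy sketch of simultaneous transfer: one thread plays the foreground
// TCP flow and one the background TCP-LP flow; both run concurrently and
// their byte counts accumulate in a shared total.
// (Illustrative simulation only; no real network transfer is performed.)
public class SimultaneousTransfer {
    public static long transferBoth(long tcpBytes, long lpBytes)
            throws InterruptedException {
        AtomicLong total = new AtomicLong();
        Thread tcp = new Thread(() -> total.addAndGet(tcpBytes)); // foreground
        Thread lp  = new Thread(() -> total.addAndGet(lpBytes));  // background
        tcp.start();
        lp.start();
        tcp.join(); // wait for both transfers to complete
        lp.join();
        return total.get();
    }
}
```

In the real system each thread would drive its own socket, with the TCP-LP thread throttled by the delay-based policy so that it consumes only bandwidth the TCP thread leaves unused.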
4. TCP-LP Performance Evaluation against TCP
For every new technique we propose, there should be a comparison against the
existing system and its shortcomings. Here we compare the two services, TCP
and TCP-LP, in terms of bandwidth and throughput. The bandwidth of the TCP and
TCP-LP transmissions is measured and the two are compared.
We show the output as a chart that compares the two services for
high priority and low priority.
WHAT IS JAVA?
Java is: simple, architecture-neutral, object-oriented, portable, distributed,
high-performance, interpreted, multithreaded, robust, dynamic, and secure.
Java is also unusual in that each Java program is both compiled and interpreted.
With a compiler, you translate a Java program into an intermediate language called Java
bytecodes; this platform-independent code is then parsed and run by an interpreter on the
computer.
Compilation happens just once; interpretation occurs each time the program is
executed. The figure illustrates how this works.
[Figure: a Java source program is compiled once into bytecodes, which the Java interpreter then runs on any platform.]
You can think of Java bytecodes as the machine code instructions for the Java
Virtual Machine (Java VM). Every Java interpreter, whether it's a Java development
tool or a Web browser that can run Java applets, is an implementation of the Java VM.
The Java VM can also be implemented in hardware.
Java bytecodes help make "write once, run anywhere" possible. You can compile
your Java program into bytecodes on any platform that has a Java compiler. The
bytecodes can then be run on any implementation of the Java VM. For example, the same
Java program can run on Windows NT, Solaris, and Macintosh.
JAVA PLATFORM
You’ve already been introduced to the Java VM. It’s the base for the Java
platform and is ported onto various hardware-based platforms.
A specialized program known as a server serves and supports clients on a network.
Examples of servers include Web servers, proxy servers, mail servers, print servers, and
boot servers. Another specialized program is a servlet. Servlets are similar to applets in
that they are runtime extensions of applications. Instead of working in browsers, though,
servlets run within Java Web servers, configuring or tailoring the server.
How does the Java API support all of these kinds of programs? With
packages of software components that provide a wide range of functionality. The core
API is the API included in every full implementation of the platform.
The Essentials: Objects, strings, threads, numbers, input and output, data structures,
system properties, date and time, and so on.
Internationalization: Help for writing programs that can be localized for users.
JAVA PROGRAM
• Java API
• Java Program
• Hardware
The API and Virtual Machine insulate the Java program from hardware
dependencies. As a platform-independent environment, Java can be a bit slower than
native code. However, smart compilers, well-tuned interpreters, and just-in-time
bytecode compilers can bring Java's performance close to that of native code without
threatening portability.
However, Java is not just for writing cute, entertaining applets for
the World Wide Web (WWW). Java is a general-purpose, high-level programming
language and a powerful software platform. Using the generous Java API, you can write
many types of programs.
Description
This article presents a new socket class which supports both TCP and
UDP communication, and which provides some advantages over other
classes that you may find here or in other socket-programming
articles. First of all, this class does not have limitations such as the need to
provide a window handle; that limitation is bad if all you want is
a simple console application, so this library does not impose it.
It also provides threading support automatically, handling the
socket connection and disconnection to a peer, and it features some options
not yet found in other socket classes that I have seen so far. It supports both
client and server sockets. A server socket can be referred to as a socket that
can accept many connections, while a client socket is a socket that is
connected to a server socket. You may still use this class to communicate
between two applications without establishing a connection. In the latter
case, you will want to create two UDP server sockets (one for each
application). This class also helps reduce the coding needed to create chat-like
applications and IPC (Inter-Process Communication) between two or more
applications (processes). Reliable communication between two peers is also
supported with TCP/IP, with error handling. You may want to use the smart
addressing operation to control the destination of the data being transmitted
(UDP only). The TCP operation of this class deals only with communication
between two peers.
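In the spirit of the client/server sockets described above, here is a minimal self-contained TCP pair in Java: a server socket accepts one connection and echoes back a single line. This is our own sketch, not the article's class:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Minimal TCP client/server pair: the server accepts one connection and
// echoes one line back to the client. (Our own sketch, for illustration.)
public class EchoDemo {
    public static String echoOnce(String message) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) { // ephemeral port
            Thread serverThread = new Thread(() -> {
                try (Socket peer = server.accept();
                     BufferedReader in = new BufferedReader(
                         new InputStreamReader(peer.getInputStream()));
                     PrintWriter out = new PrintWriter(
                         peer.getOutputStream(), true)) {
                    out.println(in.readLine()); // echo the line back
                } catch (IOException ignored) { }
            });
            serverThread.start();
            try (Socket client = new Socket("localhost", server.getLocalPort());
                 PrintWriter out = new PrintWriter(
                     client.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                     new InputStreamReader(client.getInputStream()))) {
                out.println(message);          // send one line
                String reply = in.readLine();  // receive the echo
                serverThread.join();
                return reply;
            }
        }
    }
}
```

Binding the server to port 0 asks the OS for any free port, which keeps the example runnable without configuration.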
Analysis of Network
TCP/IP stack
IP datagram’s
TCP
Internet addresses
In order to use a service, you must be able to find it. The Internet uses an
address scheme for machines so that they can be located. The address is a 32-bit
integer which gives the IP address. This encodes a network ID and additional
addressing. The network ID falls into various classes according to the size of the
network address.
Network address
Class A uses 8 bits for the network address, with 24 bits left over for other
addressing. Class B uses 16-bit network addressing, class C uses 24-bit network
addressing, and class D uses all 32.
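The class of an address follows from its leading bits, so it can be read off the first octet. A small sketch of that mapping (classful addressing is historical, but it matches the split described here):

```java
// Sketch of mapping the first octet of an IPv4 address to its historical
// class: A = leading bit 0, B = 10, C = 110, D = 1110 (multicast).
public class AddressClass {
    public static char classify(int firstOctet) {
        if (firstOctet < 128) return 'A'; // 0xxxxxxx: 8-bit network part
        if (firstOctet < 192) return 'B'; // 10xxxxxx: 16-bit network part
        if (firstOctet < 224) return 'C'; // 110xxxxx: 24-bit network part
        return 'D';                       // 1110xxxx: multicast
    }
}
```

For example, 10.x.x.x is class A, 172.x.x.x is class B, and 192.x.x.x is class C.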
Subnet address
Host address
Eight bits are finally used for host addresses within our subnet. This places a
limit of 256 machines that can be on the subnet.
Total address
Port addresses
• TCP-LP transfers the low-priority file simultaneously with normal TCP flows.
• Background file transfer proceeds after reducing congestion at the end points.
• Large volumes of data are transmitted by utilizing the bandwidth left unused by TCP.
• This enables the physical channel to increase the data transfer rate.
Conclusion
TCP-LP achieves low-priority service without the support of the network.
TCP-LP is largely non-intrusive to TCP traffic; at the same time, TCP-LP
flows can successfully utilize a large portion of the excess network bandwidth.
TCP-LP bandwidth utilization increases alongside TCP utilization.
File transfer times of best-effort web traffic are significantly reduced when long-
lived bulk data transfers use TCP-LP rather than TCP.
Future Enhancement
There is a saying that nothing can be called a failure until one stops trying.
So, in the near future, the following will be completed:
This project focused on end-to-end congestion avoidance and low-priority data
transfer. In order to utilize the entire unused bandwidth, it could be deployed
network-wide by covering all the terminals in the network.
TCP-LP DATA FLOW DIAGRAM
[Data flow diagram: source → transmission network → LP destination]
Unit testing
Unit testing involves the design of test cases that validate that the internal
program logic is functioning properly, and that program inputs produce valid outputs. All
decision branches and internal code flow should be validated. It is the testing of
individual software units of the application; it is done after the completion of an
individual unit and before integration. This is structural testing that relies on knowledge of
the unit's construction and is invasive. Unit tests perform basic tests at the component level
and test a specific business process, application, and/or system configuration. Unit tests
ensure that each unique path of a business process performs accurately to the documented
specifications and contains clearly defined inputs and expected results.
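A unit test in this style exercises one unit in isolation with defined inputs and expected results. A minimal plain-Java sketch (the helper under test is our own trivial example, not a project module):

```java
// Minimal illustration of a component-level unit test: a single unit
// (here a trivial bandwidth-split helper, our own example) is exercised
// in isolation with explicit inputs and expected results.
public class UnitTestSketch {
    // Unit under test: bandwidth left over for the low-priority class.
    static int excessBandwidth(int linkCapacity, int tcpUsage) {
        return Math.max(0, linkCapacity - tcpUsage);
    }

    // Run with "java -ea UnitTestSketch" so the assertions are checked.
    public static void main(String[] args) {
        assert excessBandwidth(100, 60) == 40 : "normal case";
        assert excessBandwidth(100, 120) == 0 : "overloaded link";
        System.out.println("all unit tests passed");
    }
}
```

Each assertion names its case, so a failure points directly at the input that violated the documented expectation.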
Tests, Scripts and Cases
Unit Tests
[Table: Test No. | Test Case | Expected Result | Pass]
Test case
Individual modules are executed separately, and finally all modules are executed together.
The project is run on the network and checked for the LP service. If the
remote system does not have the LP application, a message is shown indicating that the
LP application should be enabled at both ends.