
CN2

June- 2015
1.
A. Explain store-and-forward packet switching?
Ans: The major components of the network are the ISP’s equipment (routers connected by transmission lines), shown inside
the shaded oval, and the customers’ equipment, shown outside the oval. Host H1 is directly connected to one of the ISP’s
routers, A, perhaps as a home computer that is plugged into a DSL modem. In contrast, H2 is on a LAN, which might be an
office Ethernet, with a router, F,
owned and operated by the customer. This router has a leased line to the ISP’s equipment. We have shown F as being
outside the oval because it does not belong to the ISP. For the purposes of this chapter, however, routers on customer
premises are considered part of the ISP network because they run the same algorithms as the ISP’s routers (and our main
concern here is algorithms). This equipment is used as follows. A host with a packet to send transmits it to the nearest
router, either on its own LAN or over a point-to-point link to the ISP. The packet is stored there until it has fully arrived
and the link has finished its processing by verifying the checksum. Then it is forwarded to the next router along the path
until it reaches the destination host, where it is delivered. This mechanism is store-and-forward packet switching, as we
have seen in previous chapters.
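A minimal sketch of this mechanism (the path, packet format, and checksum function here are illustrative, not from the text): each router stores the whole packet, verifies its checksum, and only then forwards it toward the destination.

# Illustrative store-and-forward forwarding loop.
import zlib

def checksum_ok(payload: bytes, expected: int) -> bool:
    # The packet can only be verified once it has fully arrived.
    return zlib.crc32(payload) == expected

def forward(path, payload: bytes, expected: int):
    for router in path:
        # The packet is stored at this router until it has fully arrived,
        # then the checksum is verified before forwarding.
        if not checksum_ok(payload, expected):
            print(f"{router}: checksum failed, packet discarded")
            return
        print(f"{router}: stored, verified, forwarding")
    print("delivered to destination host")

data = b"hello"
forward(["A", "E", "F"], data, zlib.crc32(data))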

B. Briefly discuss the various design issues of the network layer?
Ans: i). Store-and-forward packet switching: A host with a packet to send transmits it to the nearest router. The packet is stored there until it has fully arrived and the link has finished its processing by verifying the checksum. Then it is forwarded to the next router along the path until it reaches the destination host. This mechanism is store-and-forward packet switching.
ii). Services provided to the transport layer: the network layer services have been designed with the following goals:
• The services should be independent of the router technology.
• The transport layer should be shielded from the number, type, and topology of the routers present.
• The network addresses made available to the transport layer should use a uniform numbering plan, even across LANs and WANs.
iii). Implementation of connectionless service: if connectionless service is offered, packets are injected into the network individually and routed independently of each other. No advance setup is needed. In this context, the packets are frequently called datagrams.
iv). Implementation of connection-oriented service: if connection-oriented service is used, a path from the source router all the way to the destination router must be established before any data packets can be sent. This connection is called a virtual circuit.
v). Comparison of virtual-circuit and datagram networks:

C. Differentiate service and protocol?


Ans: Services and protocols are distinct concepts. A service is a set of primitives (operations) that a layer provides to the
layer above it. The service defines what operations the layer is prepared to perform on behalf of its users, but it says
nothing at all about how these operations are implemented. A service relates to an interface between two layers, with the
lower layer being the service provider and the upper layer being the service user.
A protocol, in contrast, is a set of rules governing the format and meaning of the packets, or messages that are exchanged
by the peer entities within a layer. Entities use protocols to implement their service definitions. They are free to change
their protocols at will, provided they do not change the service visible to their users. In this way, the service and the
protocol are completely decoupled.
Thus we can say that services relate to the interfaces between layers. In contrast, protocols relate to the packets sent between peer entities on different machines.

D. Compare Virtual circuit and Datagram Subnet?


2. Explain hierarchical routing in detail?
Ans: In hierarchical routing, routers are classified in groups known as regions. Each router has only the information about the routers in its own region and has no information about routers in other regions, so routers just save one record in their table for every other region. In this example, we have classified our network into five regions. If A wants to send packets to any router in region 2 (2a, 2b, 2c and 2d), it sends them to B, and so on. As you can see, in this type of routing the tables can be summarized, so network efficiency improves. This example shows two-level hierarchical routing; we can also use three- or four-level hierarchical routing.
In three-level hierarchical routing, the network is classified into a number of clusters. Each cluster is made up of a number of regions, and each region contains a number of routers. Hierarchical routing is widely used in Internet routing and makes use of several routing protocols.
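To make the table-size saving concrete, here is a minimal sketch of a two-level table for a router in region 1; the router ids, region labels, and next hops are invented for illustration:

# Hypothetical two-level hierarchical routing table for router 1A.
# Full entries for local routers, one summarized entry per remote region.
full_table = {                # without hierarchy: one entry per router
    "1B": "1B", "1C": "1C",
    "2A": "1B", "2B": "1B", "2C": "1B", "2D": "1B",
    "3A": "1C", "3B": "1C",
}
hierarchical_table = {        # with hierarchy: remote regions summarized
    "1B": "1B", "1C": "1C",   # routers in our own region
    "region 2": "1B",         # one record for all of region 2
    "region 3": "1C",         # one record for all of region 3
}

def next_hop(dest: str) -> str:
    region = dest[0]          # first character encodes the region
    if region == "1":         # local region: exact per-router entry
        return hierarchical_table[dest]
    return hierarchical_table[f"region {region}"]

print(next_hop("2C"))         # -> 1B: everything for region 2 goes via 1B
print(len(full_table), "entries without hierarchy,",
      len(hierarchical_table), "with hierarchy")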

B. Explain SPIN and ZigBee protocols?


Ans: SPIN stands for Sensor Protocol for Information via Negotiation. SPIN is a family of protocols that come in different flavors with different features. These protocols are designed to address the deficiencies of flooding and gossiping.
SPIN uses three types of messages: ADV, REQ and DATA. The ADV message is broadcast by a node which has some data; it describes the type of data held by the advertising node. Interested nodes that receive the ADV message send a REQ message requesting the data, and the node holding the data sends it to them. After receiving the data, those nodes send their own ADV messages, and the process continues. This can be seen in the figure below.

SPIN protocol

Node 1 sends an ADV message to all its neighbors, 2 and 3. Node 3 requests the data using a REQ message, in response to which node 1 sends the data to node 3 using a DATA message. After receiving the data, node 3 sends ADV messages to its neighbors 4 and 5, and the process continues. It does not send one to node 1, because node 3 knows that it received the data from node 1.
The data is described in the ADV packet using high-level data descriptors, which are good enough to identify the data. These high-level data descriptors are called meta-data. The meta-data of two different pieces of data should be different, and the meta-data of two similar pieces of data should be similar. The use of meta-data prevents the actual data from being flooded throughout the network; the actual data is given only to the nodes that need it. This protocol also makes nodes more intelligent: every node has a resource manager, which informs the node about the amount of each resource it has left. Accordingly, the node can decide whether it can act as a forwarding node or not.
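The ADV/REQ/DATA handshake can be sketched as a toy simulation; the node ids, neighbour lists, and meta-data tag below are invented for illustration:

# Toy SPIN exchange: ADV -> REQ -> DATA, repeated by each new data holder.
neighbours = {1: [2, 3], 2: [1], 3: [1, 4, 5], 4: [3], 5: [3]}

def spin(start: int, meta: str):
    have_data = {start}
    frontier = [(start, None)]        # (node, node it got the data from)
    while frontier:
        node, source = frontier.pop(0)
        for nb in neighbours[node]:
            if nb == source:          # don't advertise back to the sender
                continue
            print(f"ADV({meta}): {node} -> {nb}")
            if nb not in have_data:   # only interested nodes request data
                print(f"REQ({meta}): {nb} -> {node}")
                print(f"DATA: {node} -> {nb}")
                have_data.add(nb)
                frontier.append((nb, node))

spin(1, "temp-reading")               # node 1 holds the data initially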
ZigBee
ZigBee is an open global standard for wireless technology designed to use low-power digital radio signals for personal
area networks. ZigBee operates on the IEEE 802.15.4 specification and is used to create networks that require a low data
transfer rate, energy efficiency and secure networking. It is employed in a number of applications such as building
automation systems, heating and cooling control and in medical devices.
ZigBee is designed to be simpler and less expensive than other personal area network technologies such as Bluetooth. ZigBee is a cost- and energy-efficient wireless network standard. It employs a mesh network topology, allowing it to provide high reliability and a reasonable range.
One of ZigBee's defining features is the secure communications it is able to provide. This is accomplished through the use
of 128-bit cryptographic keys. This system is based on symmetric keys, which means that both the recipient and
originator of a transaction need to share the same key. These keys are either pre-installed, transported by a "trust center"
designated within the network or established between the trust center and a device without being transported. Security in a
personal area network is most crucial when ZigBee is used in corporate or manufacturing networks.
3.
A. Distinguish between the leaky bucket and token bucket congestion control algorithms?
Ans: • Traffic shaping (also referred to as packet shaping) is the technique of delaying and restricting certain packets
traveling through a network to increase the performance of packets that have been given priority.

• Classes are defined to separate the packets into groupings so that they can each be shaped separately, allowing some classes to pass through a network more freely than others. Traffic shapers are usually placed at the boundaries of a network to shape the traffic entering or leaving the network.

• Traffic shaping is a mechanism to control the amount and rate of the traffic sent to the network. The two traffic shaping
techniques are:
i. Leaky Bucket Algorithm

• A leaky bucket is a bucket with a hole at the bottom. Water flows out of the bucket at a constant rate, independent of the rate at which water enters the bucket. If the bucket is full, any additional water entering the bucket is thrown out.

• The same technique is applied to control congestion in network traffic. Every host in the network has a buffer with a finite queue length.
• Packets arriving when the buffer is full are thrown away. The buffer may drain onto the subnet either at some number of packets per unit time, or at some total number of bytes per unit time.
• A FIFO queue is used for holding the packets.
• If the arriving packets are of fixed size, then the process removes a fixed number of packets from the queue at each tick of the clock.
• If the arriving packets are of different sizes, then the fixed output rate cannot be based on the number of departing packets; instead, it is based on the number of departing bytes or bits.
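A minimal byte-counting leaky bucket sketch (the queue capacity, drain rate, and packet sizes are arbitrary illustrative values):

from collections import deque

class LeakyBucket:
    def __init__(self, capacity_bytes: int, rate_bytes_per_tick: int):
        self.capacity = capacity_bytes
        self.rate = rate_bytes_per_tick
        self.queue = deque()          # FIFO queue holding the packets
        self.used = 0

    def arrive(self, packet_bytes: int) -> bool:
        if self.used + packet_bytes > self.capacity:
            return False              # bucket full: packet is thrown away
        self.queue.append(packet_bytes)
        self.used += packet_bytes
        return True

    def tick(self):
        budget = self.rate            # constant output rate per clock tick
        while self.queue and self.queue[0] <= budget:
            sent = self.queue.popleft()
            budget -= sent
            self.used -= sent
            print(f"sent {sent}-byte packet")

b = LeakyBucket(capacity_bytes=1000, rate_bytes_per_tick=300)
for size in (200, 200, 200, 900):     # the 900-byte packet overflows
    print("queued" if b.arrive(size) else "dropped")
b.tick()                              # drains at most 300 bytes this tick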
Comparison Of Token Bucket and Leaky Bucket Algorithm:

Leaky bucket:
• When the host has to send a packet, the packet is thrown into the bucket.
• The bucket leaks at a constant rate.
• Bursty traffic is converted into uniform traffic by the leaky bucket.
• In practice the bucket is a finite queue that outputs at a finite rate.

Token bucket:
• In this, the bucket holds tokens generated at regular intervals of time.
• The bucket has a maximum capacity.
• If there is a ready packet, a token is removed from the bucket and the packet is sent.
• If there is no token in the bucket, the packet cannot be sent.

Some advantages of the token bucket over the leaky bucket:
• If the bucket is full, the token bucket discards tokens, not packets, while the leaky bucket discards packets.
• The token bucket can send large bursts at a faster rate, while the leaky bucket always sends packets at a constant rate.
The token bucket is an algorithm used in packet switched computer networks and telecommunications networks. It can be
used to check that data transmissions, in the form of packets, conform to defined limits on bandwidth and burstiness (a
measure of the unevenness or variations in the traffic flow). It can also be used as a scheduling algorithm to determine the timing of transmissions that will comply with the limits set for the bandwidth and burstiness; see network scheduler.

The token bucket algorithm is based on an analogy of a fixed capacity bucket into which tokens, normally representing a
unit of bytes or a single packet of predetermined size, are added at a fixed rate. When a packet is to be checked for
conformance to the defined limits, the bucket is inspected to see if it contains sufficient tokens at that time. If so, the
appropriate number of tokens, e.g. equivalent to the length of the packet in bytes, are removed ("cashed in"), and the
packet is passed, e.g., for transmission. The packet does not conform if there are insufficient tokens in the bucket, and the
contents of the bucket are not changed. Non-conformant packets can be treated in various ways:
• They may be dropped.
• They may be enqueued for subsequent transmission when sufficient tokens have accumulated in the bucket.
• They may be transmitted, but marked as being non-conformant, possibly to be dropped subsequently if the network is
overloaded.
A conforming flow can thus contain traffic with an average rate up to the rate at which tokens are added to the bucket,
and have a burstiness determined by the depth of the bucket. This burstiness may be expressed in terms of either a jitter
tolerance, i.e. how much sooner a packet might conform (e.g. arrive or be transmitted) than would be expected from the
limit on the average rate, or a burst tolerance or maximum burst size, i.e. how much more than the average level of traffic
might conform in some finite period.

Algorithm
The token bucket algorithm can be conceptually understood as follows:

• A token is added to the bucket every 1/r seconds.

• The bucket can hold at most b tokens. If a token arrives when the bucket is full, it is discarded.
• When a packet (network layer PDU) of n bytes arrives, n tokens are removed from the bucket, and the packet is sent to the network.
• If fewer than n tokens are available, no tokens are removed from the bucket, and the packet is considered to be non-conformant.
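Those four steps translate almost directly into code; this is a hedged sketch, with the rate r and depth b chosen arbitrarily for illustration:

class TokenBucket:
    def __init__(self, r: float, b: float):
        self.r, self.b = r, b          # tokens per second, bucket depth
        self.tokens = b                # bucket starts full
        self.last = 0.0

    def conforms(self, n: int, now: float) -> bool:
        # Accrue r tokens per second since the last check, capped at b.
        self.tokens = min(self.b, self.tokens + (now - self.last) * self.r)
        self.last = now
        if self.tokens >= n:           # enough tokens: cash them in
            self.tokens -= n
            return True
        return False                   # non-conformant: bucket unchanged

tb = TokenBucket(r=100.0, b=500.0)     # 100 tokens/s, burst depth 500
print(tb.conforms(400, now=0.0))       # True: the burst fits the depth
print(tb.conforms(400, now=0.5))       # False: only 150 tokens remain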

Uses
The token bucket can be used in either traffic shaping or traffic policing. In traffic policing, nonconforming packets may
be discarded (dropped) or may be reduced in priority (for downstream traffic management functions to drop if there is
congestion). In traffic shaping, packets are delayed until they conform. Traffic policing and traffic shaping are commonly
used to protect the network against excess or excessively bursty traffic, see bandwidth management and congestion
avoidance. Traffic shaping is commonly used in the network interfaces in hosts to prevent transmissions being discarded
by traffic management functions in the network.

B. Explain the following:


i). HOP-BY-HOP choke packets: Sending a choke packet back to the source host does not work well at high speeds or over long distances, because the reaction is so slow: by the time the choke packet reaches the source, megabits of traffic are already in the pipe. The alternative, described below as hop-by-hop backpressure, is to have the choke packet take effect at every hop it passes through.
Hop-by-Hop Backpressure
At high speeds or over long distances, many new packets may be transmitted after congestion has been signaled because of the delay before the signal takes effect. Consider, for example, a host in San Francisco (router A in Fig. 5-26) that is sending traffic to a host in New York (router D in Fig. 5-26) at the OC-3 speed of 155 Mbps. If the New York host begins to run out of buffers, it will take about 40 msec for a choke packet to get back to San Francisco to tell it to slow down. An ECN indication will take even longer because it is delivered via the destination. Choke packet propagation is illustrated as the second, third, and fourth steps in Fig. 5-26(a). In those 40 msec, another 6.2 megabits will have been sent. Even if the host in San Francisco completely shuts down immediately, the 6.2 megabits in the pipe will continue to pour in and have to be dealt with. Only in the seventh diagram in Fig. 5-26(a) will the New York router notice a slower flow. An alternative approach is to have the choke packet take effect at every hop it passes through, as shown in the sequence of Fig. 5-26(b).
Here, as soon as the choke packet reaches F, F is required to reduce the flow to D. Doing so will require F to devote more
buffers to the connection, since the source is still sending away at full blast, but it gives D immediate relief, like a
headache remedy in a television commercial. In the next step, the choke packet reaches E, which tells E to reduce the
flow to F. This action puts a greater demand on E’s buffers but gives F immediate relief. Finally, the choke packet
reaches A and the flow genuinely slows down. The net effect of this hop-by-hop scheme is to provide quick relief at the
point of congestion, at the price of using up more buffers upstream. In this way, congestion can be nipped in the bud
without losing any packets.
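As a quick check of the figures above: 155 Mbps × 0.040 s = 6.2 megabits, which is exactly the amount of traffic already committed to the pipe before the source can react.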

ii). Resource Reservation protocol: (RSVP)

• RSVP is a signaling protocol which helps IP to create a flow and to make resource reservations.

• It is an independent protocol and can also be used with other models.

• RSVP supports multicasting (one-to-many or many-to-many distribution), where data can be sent to a group of destination computers simultaneously.
For example: IP multicast is a technique for one-to-many communication through an IP infrastructure in the network.

• RSVP can also be used for unicasting (transmitting data to a single destination) to provide resource reservation for all types of traffic.

The two important types of RSVP messages are:

1. Path messages:

• The receivers in a flow make the reservation in RSVP, but the receivers do not know the path traveled by the packets before the reservation, and the path is required for the reservation. To solve this problem, RSVP uses Path messages.
• A Path message travels from the sender and reaches all receivers by multicasting; the Path message stores the necessary information for the receivers.

2. Resv messages:
After receiving a Path message, the receiver sends a Resv message. The Resv message travels back toward the sender and makes a resource reservation on the routers that support RSVP.
The Resource Reservation Protocol (RSVP) is a Transport Layer protocol designed to reserve resources across a network
for an integrated services Internet. RSVP operates over an IPv4 or IPv6 Internet Layer and provides receiver-initiated
setup of resource reservations for multicast or unicast data flows with scaling and robustness.
RSVP can be used by either hosts or routers to request or deliver specific levels of quality of service (QoS) for application
data streams or flows. RSVP is not a routing protocol and was designed to interoperate with current and future routing
protocols. RSVP-TE, the traffic engineering extension of RSVP, is becoming more widely accepted nowadays in many
QoS-oriented networks.
The main attributes of RSVP are:
• RSVP requests resources for simplex flows: a traffic stream in only one direction from sender to one or more receivers.
• RSVP is not a routing protocol but works with current and future routing protocols.
• RSVP is receiver oriented: in that the receiver of a data flow initiates and maintains the resource reservation for that
flow.
• RSVP maintains “soft state” of the host and routers’ resource reservations, hence supporting dynamic automatic
adaptation to network changes.
• RSVP provides several reservation styles and allows for future styles to be added to protocol revisions to fit varied
applications.
• RSVP transports and maintains traffic and policy control parameters that are opaque to RSVP.
5.
A. Explain how flow control and buffering are done in the transport layer?
Ans: Transport Layer
o The transport layer is the 4th layer from the top.
o The main role of the transport layer is to provide the communication services directly to the application processes
running on different hosts.
o The transport layer provides a logical communication between application processes running on different hosts.
Although the application processes on different hosts are not physically connected, application processes use the
logical communication provided by the transport layer to send the messages to each other.
o The transport layer protocols are implemented in the end systems but not in the network routers.
o A computer network provides more than one protocol to the network applications. For example, TCP and UDP are two transport layer protocols that provide different sets of services to the network applications.
o All transport layer protocols provide multiplexing/demultiplexing service. It also provides other services such as
reliable data transfer, bandwidth guarantees, and delay guarantees.
o Each of the applications in the application layer has the ability to send a message by using TCP or UDP. The
application communicates by using either of these two protocols. Both TCP and UDP will then communicate with
the internet protocol in the internet layer. The applications can read and write to the transport layer. Therefore, we
can say that communication is a two-way process.
Services provided by the Transport Layer
The services provided by the transport layer are similar to those of the data link layer. The data link layer provides the
services within a single network while the transport layer provides the services across an internetwork made up of many
networks. The data link layer controls the physical layer while the transport layer controls all the lower layers.
The services provided by the transport layer protocols can be divided into five categories:
o End-to-end delivery
o Addressing
o Reliable delivery
o Flow control
o Multiplexing
End-to-end delivery:
The transport layer transmits the entire message to the destination. Therefore, it ensures the end-to-end delivery of an
entire message from a source to the destination.
Reliable delivery:
The transport layer provides reliability services by retransmitting the lost and damaged packets.
The reliable delivery has four aspects:
o Error control
o Sequence control
o Loss control
o Duplication control

Error Control
o The primary role of reliability is error control. In reality, no transmission will be 100 percent error-free. Therefore, transport layer protocols are designed to provide error-free transmission.
o The data link layer also provides the error handling mechanism, but it ensures only node-to-node error-free
delivery. However, node-to-node reliability does not ensure the end-to-end reliability.
o The data link layer checks for the error between each network. If an error is introduced inside one of the routers,
then this error will not be caught by the data link layer. It only detects those errors that have been introduced
between the beginning and end of the link. Therefore, the transport layer performs the checking for the errors
end-to-end to ensure that the packet has arrived correctly.
Sequence Control
o The second aspect of the reliability is sequence control which is implemented at the transport layer.
o On the sending end, the transport layer is responsible for ensuring that the packets received from the upper layers
can be used by the lower layers. On the receiving end, it ensures that the various segments of a transmission can
be correctly reassembled.
Loss Control
Loss Control is a third aspect of reliability. The transport layer ensures that all the fragments of a transmission arrive at the destination, not just some of them. On the sending end, all the fragments of a transmission are given sequence numbers by the transport layer. These sequence numbers allow the receiver's transport layer to identify the missing segments.
Duplication Control
Duplication Control is the fourth aspect of reliability. The transport layer guarantees that no duplicate data arrive at the
destination. Sequence numbers are used to identify the lost packets; similarly, it allows the receiver to identify and discard
duplicate segments.
Flow Control
Flow control is used to prevent the sender from overwhelming the receiver. If the receiver is overloaded with too much data, the receiver discards packets and asks for their retransmission. This increases network congestion and thus reduces the system performance. The transport layer is responsible for flow control. It uses the sliding window protocol, which makes data transmission more efficient and controls the flow of data so that the receiver does not become overwhelmed. The sliding window protocol is byte oriented rather than frame oriented.
The transport layer provides a flow control mechanism between the end points of a connection. TCP prevents data loss due to a fast sender and a slow receiver by imposing flow control techniques: it uses the sliding window method, which is accomplished by the receiver sending a window back to the sender informing it of the size of the data it can receive.
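A toy sketch of the receiver-advertised window (the buffer size and send amounts are illustrative): the sender may never have more unacknowledged bytes outstanding than the window the receiver last advertised.

RECEIVER_BUFFER = 8    # bytes the receiver can hold (illustrative)
buffered = 0           # bytes held at the receiver, not yet consumed
unacked = 0            # bytes the sender currently has in flight

def advertised_window() -> int:
    return RECEIVER_BUFFER - buffered

def send(nbytes: int) -> bool:
    global unacked
    if unacked + nbytes > advertised_window():
        return False   # would overwhelm the receiver: sender must wait
    unacked += nbytes
    return True

def deliver_and_ack(nbytes: int):
    global buffered, unacked
    buffered += nbytes # data now sits in the receiver's buffer
    unacked -= nbytes  # the ACK also carries the new, smaller window

print(send(6))         # True: the full window of 8 is open
print(send(6))         # False: only 2 bytes of window remain
deliver_and_ack(6)     # window shrinks to 2 until the data is consumed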
Flow Control & Buffers
The sender's transport layer must worry about overwhelming both the network and the receiver. The network may exceed
the carrying capacity, and the receiver may run out of buffers.

Buffers are statically allocated kernel memory so that storing received TPDUs can be done quickly.

If the network layer is reliable the transport layer need not buffer transmitted data, since it relies on the network layer to
get the data through.

If the network layer is unreliable, then the sending transport entity has to buffer all TPDUs until they are acknowledged.
This gives the receiving transport entity the choice of buffering. If it does not, it knows the sender will eventually resend,
though the time spent transmitting and receiving the TPDU has been wasted. Why might the receiver not buffer? It might
not have a buffer to put the received TPDU in. Remember that the transport entities handle many connections
simultaneously. The buffer pool available to the transport entity may be exhausted by other connections.

Several possible means of managing buffers exist.

If all TPDUs are the same size, then a pool of same-size buffers can be maintained, and each connection has a linked list
corresponding to its received TPDUs. This is not a good scheme if TPDUs vary in size, since you'd have to make your
buffers as big as the largest possible TPDU, so small packets would waste buffer space.

You could have a pool of varying sized buffers, then pick one that fits as well as possible, maintaining the same sort of
linked list per connection. This is a little more complicated to manage, since you can't just grab any free buffer.

The other possibility is to assign a block of memory to each connection and manage it as a circular buffer. But how big should the circular buffer block be? If the connection is busy, it should be large; if it is a slow connection, a large block wastes memory.
Buffering isn't the only thing that limits the flow control in the transport layer. Suppose the receiver had an infinite supply
of memory to dedicate to buffers. You still have the limit of the subnet's carrying capacity. This is the issue of congestion
control.
Congestion control
If routers in the subnet can exchange x packets per second on direct links, and there are k hops between sender and receiver, then the subnet can carry at most k*x packets per second in aggregate (store-and-forward network). Anything more than this causes congestion in the network.

One scheme is to have the sender monitor the carrying capacity of the network by measuring the time required for sending a TPDU and receiving an ack for it. Then, with a capacity of C TPDUs/second and a round trip time of r seconds, the sender should be allowed a window of C * r TPDUs. This keeps the pipe full. Since the network capacity may change rapidly due to congestion, the estimates of C and r must be continually updated.
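A back-of-the-envelope version of that rule, with the measured values invented purely for illustration:

# Keep the pipe full: window ≈ capacity C × round-trip time r.
c_tpdus_per_sec = 1200          # estimated network capacity C
r_rtt_seconds = 0.05            # measured round-trip time r

window = c_tpdus_per_sec * r_rtt_seconds
print(f"allowed window: {window:.0f} TPDUs in flight")   # 60 TPDUs

# Because capacity changes with congestion, the estimates should be
# refreshed continually, e.g. with an exponentially weighted average.
alpha, new_sample = 0.8, 0.07
r_rtt_seconds = alpha * r_rtt_seconds + (1 - alpha) * new_sample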

B. Explain TCP protocol?


Ans:
TCP: TCP Header, TCP Connection Establishing and Releasing, TCP transmission policy, TCP timer management.
Page number: 556 and 409

6. Explain the following


A. UDP protocol: The User Datagram Protocol is an Open Systems Interconnection (OSI) transport layer protocol for client-server network applications. UDP uses a simple transmission model and does not employ handshaking dialogs for reliability, ordering or data integrity. The protocol assumes that error checking and correction are not required, thus avoiding that processing at the network interface level.
UDP is widely used in video conferencing and real-time computer games. The protocol permits individual packets to be
dropped and UDP packets to be received in a different order than that in which they were sent, allowing for better
performance.
UDP network traffic is organized in the form of datagrams, each of which comprises one message unit. The first eight bytes of a datagram contain header information, while the remaining bytes contain message data. A UDP datagram header contains four fields of two bytes each:

• Source port number


• Destination port number
• Datagram size
• Checksum

UDP (User Datagram Protocol) is an alternative communications protocol to Transmission Control Protocol (TCP) used
primarily for establishing low-latency and loss-tolerating connections between applications on the internet.

Both UDP and TCP run on top of the Internet Protocol (IP) and are sometimes referred to as UDP/IP or TCP/IP. But there
are important differences between the two.

Where UDP enables process-to-process communication, TCP supports host-to-host communication. TCP sends individual
packets and is considered a reliable transport medium; UDP sends messages, called datagrams, and is considered a best-
effort mode of communications.

In addition, where TCP provides error and flow control, no such mechanisms are supported in UDP. UDP is considered a
connectionless protocol because it doesn't require a virtual circuit to be established before any data transfer occurs.

UDP provides two services not provided by the IP layer. It provides port numbers to help distinguish different user requests and, optionally, a checksum capability to verify that the data arrived intact.

TCP has emerged as the dominant protocol used for the bulk of internet connectivity due to its ability to break large data
sets into individual packets, check for and resend lost packets, and reassemble packets in the correct sequence. But these
additional services come at a cost in terms of additional data overhead and delays called latency.
In contrast, UDP just sends the packets, which means that it has much lower bandwidth overhead and latency. With UDP,
packets may take different paths between sender and receiver and, as a result, some packets may be lost or received out of
order.
Applications of UDP
UDP is an ideal protocol for network applications in which perceived latency is critical, such as in gaming and voice and
video communications, which can suffer some data loss without adversely affecting perceived quality. In some cases,
forward error correction techniques are used to improve audio and video quality in spite of some loss.

UDP can also be used in applications that require lossless data transmission when the application is configured to manage
the process of retransmitting lost packets and correctly arranging received packets. This approach can help to improve
the data transfer rate of large files compared to TCP.

In the Open Systems Interconnection (OSI) communication model, UDP, like TCP, is in Layer 4, the transport layer. UDP
works in conjunction with higher level protocols to help manage data transmission services including Trivial File Transfer
Protocol (TFTP), Real Time Streaming Protocol (RTSP), Simple Network Protocol (SNP) and domain name system
(DNS) lookups.
User datagram protocol features

The user datagram protocol has attributes that make it advantageous for use with applications that can tolerate lost data.

• It allows packets to be dropped and received in a different order than they were transmitted, making it suitable for
real-time applications where latency might be a concern.

• It can be used for transaction-based protocols, such as DNS or Network Time Protocol.

• It can be used where a large number of clients are connected and where real-time error correction isn't necessary,
such as gaming, voice or video conferencing, and streaming media.

UDP header composition

The User Datagram Protocol header has four fields, each of which is 2 bytes. They are:

• source port number, which is the number of the sender;

• destination port number, the port the datagram is addressed to;

• length, the length in bytes of the UDP header and any encapsulated data; and

• checksum, which is used in error checking. Its use is required in IPv6 and optional in IPv4.
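Python's standard socket module makes the connectionless, no-handshake nature of UDP visible directly; the port number and payload below are arbitrary choices:

import socket

# Receiver: bind and wait for one datagram (no connection is accepted).
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 9999))

# Sender: no handshake or connection setup, just send the datagram.
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"hello via UDP", ("127.0.0.1", 9999))

data, addr = recv.recvfrom(4096)    # one whole message unit at a time
print(data, "from", addr)
send.close()
recv.close()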

B. Virtual private network: Virtual Private Network (VPN) | An Introduction

VPN stands for virtual private network. A virtual private network (VPN) is a technology that creates a safe and encrypted connection over a less secure network, such as the internet. A virtual private network is a way to extend a private network using a public network such as the internet. As the name suggests, with a virtual “private network” a user can be part of a local network while sitting at a remote location. It makes use of tunneling protocols to establish a secure connection.
Let's understand VPN with an example:
Think of a situation where the corporate office of a bank is situated in Washington, USA. This office has a local network consisting of, say, 100 computers. Suppose other branches of the bank are in Mumbai, India and Tokyo, Japan. The traditional method of establishing a secure connection between the head office and a branch was to have a leased line between them, which was very costly as well as a troublesome job. VPN lets us overcome this issue in an effective manner.
The situation is described below:
• All 100 computers of the corporate office at Washington are connected to the VPN server (which is a well-configured server containing a public IP address and a switch to connect all computers present in the local network, i.e. in the US head office).
• A person sitting in the Mumbai office connects to the VPN server using a dial-up or VPN client connection, and the VPN server returns an IP address which belongs to the range of IP addresses of the corporate office's local network.
• Thus the person from the Mumbai branch becomes local to the head office, and information can be shared securely over the public internet.
• So this is the intuitive way of extending a local network even across the geographical borders of the country.

Definition - What does Virtual Private Network (VPN) mean?


A virtual private network (VPN) is a private network that is built over a public infrastructure. Security mechanisms, such
as encryption, allow VPN users to securely access a network from different locations via a public telecommunications
network, most frequently the Internet.
In some cases, virtual area network (VAN) is a VPN synonym.
Techopedia explains Virtual Private Network (VPN)
VPN data security remains constant through encrypted data and tunneling protocols. The key VPN advantage is that it is
less expensive than a private wide area network (WAN) buildout. As with any network, an organization's goal is to
provide cost-effective business communication.
In a remote-access VPN, an organization uses an outside enterprise service provider (ESP) to establish a network access
server (NAS). Remote users then receive VPN desktop software and connect to the NAS via a toll-free number, which
accesses the organization's network. In a site-to-site VPN, many sites use secure data encryption to connect over a
network (usually the Internet).

A virtual private network (VPN) is programming that creates a safe and encrypted connection over a less secure network,
such as the public internet. A VPN works by using the shared public infrastructure while maintaining privacy through
security procedures and tunneling protocols. In effect, the protocols, by encrypting data at the sending end and decrypting
it at the receiving end, send the data through a "tunnel" that cannot be "entered" by data that is not properly encrypted. An
additional level of security involves encrypting not only the data, but also the originating and receiving network addresses. In the early days of the internet, VPNs were developed to provide branch office employees with an inexpensive,
safe way to access corporate applications and data. Today, VPNs are often used by remote corporate employees, gig
economy freelance workers and business travelers who require access to sites that are geographically restricted. The two
most common types of VPNs are remote access VPNs and site-to-site VPNs.

8.Explain the following


A. DNS (Domain Name System):

DNS is a host name to IP address translation service. DNS is a distributed database implemented in a hierarchy of name
servers. It is an application layer protocol for message exchange between clients and servers.
Requirement
Every host is identified by an IP address, but remembering numbers is very difficult for people, and IP addresses are also not static; therefore, a mapping is required from domain names to IP addresses. So DNS is used to convert the domain name of a website to its numerical IP address.
Domain :
There are various kinds of domain:
1. Generic domains: .com (commercial), .edu (educational), .mil (military), .org (non-profit organization), .net (similar to commercial); all these are generic domains.
2. Country domains: .in (India), .us, .uk.
3. Inverse domain: used to find the domain name of a website from its IP address (IP to domain name mapping). DNS can thus provide both mappings; for example, to find the IP address of geeksforgeeks.org we type nslookup www.geeksforgeeks.org.
Organization of Domain

It is very difficult to find the IP address associated with a website, because there are millions of websites and we should be able to obtain the IP address of any of them immediately, without much delay; hence the organization of the database is very important.
A DNS record holds the domain name, the IP address, the validity (time to live), and all other information related to that domain name. These records are stored in a tree-like structure.

Namespace – the set of possible names, flat or hierarchical. A naming system maintains a collection of bindings of names to values; given a name, a resolution mechanism returns the corresponding value.
Name server – an implementation of the resolution mechanism. DNS (Domain Name System) is the name service of the Internet. A zone is an administrative unit; a domain is a subtree.

Name to Address Resolution

The host requests the DNS name server to resolve the domain name, and the name server returns the IP address corresponding to that domain name, so that the host can then connect to that IP address.

Hierarchy of Name Servers


Root name servers – contacted by name servers that cannot resolve a name. A root server contacts an authoritative name server if the name mapping is not known, gets the mapping, and returns the IP address to the host.
Top-level servers – responsible for com, org, edu, etc., and all top-level country domains like uk, fr, ca, in, etc. They have information about authoritative domain servers and know the names and IP addresses of each authoritative name server for the second-level domains.
Authoritative name servers – the organization's DNS server, providing an authoritative hostname-to-IP mapping for the organization's servers. It can be maintained by the organization or a service provider. In order to reach cse.dtu.in, we have to ask the root DNS server, which points us to the top-level domain server, which in turn points to the authoritative name server that actually contains the IP address. The authoritative domain server then returns the associated IP address.

Domain Name Server

The client machine sends a request to the local name server, which, if it does not find the address in its database, sends a request to the root name server, which in turn routes the query to an intermediate or authoritative name server. The root name server can also contain some hostname-to-IP-address mappings. The intermediate name server always knows who the authoritative name server is. So finally the IP address is returned to the local name server, which in turn returns the IP address to the host.
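From the host's point of view, name-to-address resolution is a single library call; the local resolver and name servers perform the recursive walk described above (the hostname is just the example used earlier):

import socket

host = "www.geeksforgeeks.org"                   # example name from the text
print(host, "->", socket.gethostbyname(host))    # one IPv4 address

# getaddrinfo returns every address record (IPv4 and IPv6):
for family, _, _, _, sockaddr in socket.getaddrinfo(host, 80):
    print(family.name, sockaddr)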
B. MPEG-7: MPEG-7 is a multimedia content description standard. It was standardized in ISO/IEC 15938 (Multimedia content description interface).[1][2][3][4] The description is associated with the content itself, to allow fast and efficient searching for material that is of interest to the user. MPEG-7 is formally called Multimedia Content Description Interface. Thus, it is not a standard which deals with the actual encoding of moving pictures and audio, like MPEG-1, MPEG-2 and MPEG-4. It uses XML to store metadata, and can be attached to timecode in order to tag particular events, or synchronise lyrics to a song, for example.
It was designed to standardize:

• a set of Description Schemes ("DS") and Descriptors ("D")


• a language to specify these schemes, called the Description Definition Language ("DDL")
• a scheme for coding the description

MPEG-7 is intended to provide complementary functionality to the previous MPEG standards, representing information
about the content, not the content itself ("the bits about the bits"). This functionality is the standardization of multimedia
content descriptions. MPEG-7 can be used independently of the other MPEG standards - the description might even be
attached to an analog movie. The representation that is defined within MPEG-4, i.e. the representation of audio-visual
data in terms of objects, is however very well suited to what will be built on the MPEG-7 standard. This representation is
basic to the process of categorization. In addition, MPEG-7 descriptions could be used to improve the functionality of
previous MPEG standards. With these tools, we can build an MPEG-7 Description and deploy it. According to the requirements document, “a Description consists of a Description Scheme (structure) and the set of Descriptor Values (instantiations) that describe the Data.” A Descriptor Value is “an instantiation of a Descriptor for a given data set (or subset thereof).” The Descriptor is the syntactic and semantic definition of the content. Extraction algorithms are outside the scope of the standard because their standardization isn’t required to allow interoperability.
There are many applications and application domains which will benefit from the MPEG-7 standard. A few application
examples are:

• Digital library: Image/video catalogue, musical dictionary.


• Multimedia directory services: e.g. yellow pages.
• Broadcast media selection: Radio channel, TV channel.
• Multimedia editing: Personalized electronic news service, media authoring.
• Security services: Traffic control, production chains...
• E-business: Searching process of products.
• Cultural services: Art-galleries, museums...
• Educational applications.
• Biomedical applications.

C. SMTP:
Simple Mail Transfer Protocol (SMTP)
Email is emerging as one of the most valuable services on the internet today. Most of the internet systems use SMTP as a
method to transfer mail from one user to another. SMTP is a push protocol and is used to send the mail, whereas POP (Post Office Protocol) or IMAP (Internet Message Access Protocol) are used to retrieve those mails at the receiver’s side.
SMTP Fundamentals
SMTP is an application layer protocol. The client who wants to send the mail opens a TCP connection to the SMTP
server and then sends the mail across the connection. The SMTP server is always in listening mode. As soon as it detects a TCP connection from a client, the SMTP process accepts the connection on that port (25). After successfully establishing the TCP connection, the client process sends the mail instantly.
SMTP Protocol
The SMTP model is of two types:
1. End-to-end method
2. Store-and-forward method
The end-to-end model is used to communicate between different organizations, whereas the store-and-forward method is used within an organization. An SMTP client who wants to send mail will contact the destination host's SMTP directly in order to send the mail to the destination. The SMTP server will keep the mail with itself until it is successfully copied to the receiver's SMTP.
The client SMTP is the one which initiates the session, so let us call it the client-SMTP, and the server SMTP is the one which responds to the session request, so let us call it the receiver-SMTP. The client-SMTP starts the session and the receiver-SMTP responds to the request.

Model of SMTP system


In the SMTP model, the user deals with a user agent (UA), for example Microsoft Outlook, Netscape or Mozilla. To exchange mail using TCP, an MTA (Message Transfer Agent) is used. Users sending mail do not have to deal with the MTA; it is the responsibility of the system admin to set up the local MTA. The MTA maintains a small queue of mails so that it can schedule repeat delivery of mail in case the receiver is not available. The MTA delivers the mail to the mailboxes, and the information can later be downloaded by the user agents.
Both the SMTP client and the SMTP server should have two components:
1. User agent (UA)
2. Local MTA
Communication between the sender and the receiver:
The sender's user agent prepares the message and sends it to the MTA. The MTA's function is to transfer the mail across the network to the receiver's MTA. To send mail, a system must have the client MTA, and to receive mail, a system must have a server MTA.

SENDING EMAIL:
Mail is sent by a series of request and response messages between the client and a server. The message which is sent across consists of a header and a body. A null line is used to terminate the mail header; everything after the null line is considered the body of the message, which is a sequence of ASCII characters. The message body contains the actual information to be read by the recipient.
RECEIVING EMAIL:
The user agent at the server side checks the mailboxes at particular time intervals. If any mail has been received, it informs the user about it. When the user tries to read the mail, it displays a list of mails with a short description of each mail in the mailbox. By selecting any of the mails, the user can view its contents on the terminal.

Some SMTP Commands:


• HELO – Identifies the client to the server, fully qualified domain name, only sent once per session
• MAIL – Initiate a message transfer, fully qualified domain of originator
• RCPT – Follows MAIL, identifies an addressee, typically the fully qualified name of the addressee and for multiple
addressees use one RCPT for each addressee
• DATA – send data line by line
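Python's smtplib issues these same commands under the hood; in this hedged sketch the server name, addresses, and message body are placeholders, not real endpoints:

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"        # placeholder originator
msg["To"] = "bob@example.com"            # placeholder addressee
msg["Subject"] = "SMTP demo"
msg.set_content("Header and body are separated by a null (blank) line.")

with smtplib.SMTP("mail.example.com", 25) as server:  # TCP port 25
    server.set_debuglevel(1)    # print each SMTP command and reply
    server.send_message(msg)    # wraps MAIL FROM, RCPT TO, and DATA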

June 2016 (1, 2, 3, 5, 6, 8)
1.

A). Explain the terms:

i).Protocol: A network protocol defines rules and conventions for communication between network devices. Network
protocols include mechanisms for devices to identify and make connections with each other, as well as formatting rules
that specify how data is packaged into sent and received messages. Some protocols also support message
acknowledgment and data compression designed for reliable and/or high-performance network communication.
ii). SAP: A Service Access Point (SAP) is an identifying label for network endpoints used in Open Systems
Interconnection (OSI) networking.
The SAP is a conceptual location at which one OSI layer can request the services of another OSI layer. As an example,
PD-SAP or PLME-SAP in IEEE 802.15.4 can be mentioned, where the Media Access Control (MAC) layer requests
certain services from the Physical Layer. Service access points are also used in IEEE 802.2 Logical Link Control in Ethernet and similar Data Link Layer protocols.

iii). Subnet: A subnetwork or subnet is a logical subdivision of an IP network.[1] The practice of dividing a network into two or more networks is called subnetting.

iv). Internet: The Internet (contraction of interconnected network) is the global system of interconnected computer
networks that use the Internet protocol suite (TCP/IP) to link devices worldwide. It is a network of networks that consists
of private, public, academic, business, and government networks of local to global scope, linked by a broad array of
electronic, wireless, and optical networking technologies.

v). PDU: In telecommunications, a protocol data unit (PDU) is a single unit of information transmitted among peer
entities of a computer network. A PDU is composed of protocol specific control information and user data. In the layered
architectures of communication protocol stacks, each layer implements protocols tailored to the specific type or mode of
data exchange.

B. Bring out the design issues of network layer. Compare VC and datagram subnets? REPEATED

C. Briefly describe the different network topologies?


Ans BUS Topology
Bus topology is a network type in which every computer and network device is connected to a single cable. When it has exactly two endpoints, it is called linear bus topology.

Features of Bus Topology

1. It transmits data only in one direction.

2. Every device is connected to a single cable


Advantages of Bus Topology

1. It is cost effective.

2. Cable required is least compared to other network topology.

3. Used in small networks.

4. It is easy to understand.

5. Easy to expand by joining two cables together.


Disadvantages of Bus Topology

1. If the cable fails, then the whole network fails.

2. If network traffic is heavy or there are many nodes, the performance of the network decreases.

3. Cable has a limited length.

4. It is slower than the ring topology.

RING Topology
It is called ring topology because it forms a ring: each computer is connected to another computer, with the last one connected to the first, so each device has exactly two neighbours.

Features of Ring Topology

1. A number of repeaters are used for Ring topology with large number of nodes, because if someone wants to send

some data to the last node in the ring topology with 100 nodes, then the data will have to pass through 99 nodes to

reach the 100th node. Hence to prevent data loss repeaters are used in the network.

2. The transmission is unidirectional, but it can be made bidirectional by having 2 connections between each Network

Node, it is called Dual Ring Topology.

3. In Dual Ring Topology, two ring networks are formed, and data flow is in opposite direction in them. Also, if one

ring fails, the second ring can act as a backup, to keep the network up.

4. Data is transferred in a sequential manner that is bit by bit. Data transmitted, has to pass through each node of the

network, till the destination node.


Advantages of Ring Topology

1. Transmission is not affected by high traffic or by adding more nodes, as only the node holding the token can transmit data.

2. Cheap to install and expand


Disadvantages of Ring Topology

1. Troubleshooting is difficult in ring topology.


2. Adding or deleting computers disturbs the network activity.

3. Failure of one computer disturbs the whole network.

STAR Topology
In this type of topology all the computers are connected to a single hub through a cable. This hub is the central node, and all other nodes are connected to the central node.

Features of Star Topology

1. Every node has its own dedicated connection to the hub.

2. Hub acts as a repeater for data flow.

3. Can be used with twisted pair, Optical Fibre or coaxial cable.


Advantages of Star Topology

1. Fast performance with few nodes and low network traffic.

2. Hub can be upgraded easily.

3. Easy to troubleshoot.

4. Easy to setup and modify.

5. Only that node is affected which has failed, rest of the nodes can work smoothly.
Disadvantages of Star Topology

1. Cost of installation is high.

2. Expensive to use.

3. If the hub fails then the whole network is stopped because all the nodes depend on the hub.

4. Performance is based on the hub, that is, it depends on the hub's capacity.

MESH Topology
It is a point-to-point connection to other nodes or devices. All the network nodes are connected to each other. A mesh has n(n-1)/2 physical channels to link n devices; for example, fully meshing n = 10 devices requires 10 × 9 / 2 = 45 links.
There are two techniques to transmit data over the Mesh topology, they are :

1. Routing

2. Flooding

MESH Topology: Routing


In routing, the nodes have routing logic, as per the network requirements: for example, routing logic to direct the data to its destination using the shortest distance, or routing logic which has information about broken links and avoids those nodes. We can even have routing logic to re-configure around failed nodes.

MESH Topology: Flooding


In flooding, the same data is transmitted to all the network nodes, hence no routing logic is required. The network is robust, and it is very unlikely to lose the data, but flooding leads to unwanted load over the network.

Types of Mesh Topology

1. Partial Mesh Topology : In this topology some of the systems are connected in the same fashion as mesh topology

but some devices are only connected to two or three devices.

2. Full Mesh Topology : Each and every node or device is connected to every other.
Features of Mesh Topology

1. Fully connected.

2. Robust.

3. Not flexible.
Advantages of Mesh Topology

1. Each connection can carry its own data load.

2. It is robust.

3. Fault is diagnosed easily.

4. Provides security and privacy.


Disadvantages of Mesh Topology

1. Installation and configuration is difficult.

2. Cabling cost is more.

3. Bulk wiring is required.

TREE Topology
It has a root node and all other nodes are connected to it forming a hierarchy. It is also called hierarchical topology. It
should at least have three levels to the hierarchy.

Features of Tree Topology

1. Ideal if workstations are located in groups.

2. Used in Wide Area Network.


Advantages of Tree Topology

1. Extension of bus and star topologies.

2. Expansion of nodes is possible and easy.

3. Easily managed and maintained.

4. Error detection is easily done.


Disadvantages of Tree Topology

1. Heavily cabled.

2. Costly.

3. If more nodes are added maintenance is difficult.

4. Central hub fails, network fails.

HYBRID Topology
It is a mixture of two or more different topologies. For example, if ring topology is used in one department of an office and star topology is used in another, connecting these topologies will result in a hybrid topology (ring topology and star topology).

Features of Hybrid Topology

1. It is a combination of two or more topologies.

2. Inherits the advantages and disadvantages of the topologies included


Advantages of Hybrid Topology

1. Reliable, as error detection and troubleshooting are easy.

2. Effective.

3. Scalable as size can be increased easily.

4. Flexible.
Disadvantages of Hybrid Topology

1. Complex in design.

2. Costly.

2.
A. Explain hierarchical routing and flooding algorithm?
Ans: hierarchical routing REPEATED
Another static algorithm is flooding, in which every incoming packet is sent out on every outgoing line except the one it
arrived on. Flooding obviously generates vast numbers of duplicate packets, in fact, an infinite number unless some
measures are taken to damp the process. One such measure is to have a hop counter contained in the header of each
packet, which is decremented at each hop, with the packet being discarded when the counter reaches zero. Ideally, the hop
counter should be initialized to the length of the path from source to destination. If the sender does not know how long the
path is, it can initialize the counter to the worst case, namely, the full diameter of the subnet. An alternative technique for damming the flood is to keep track of which packets have been flooded, to avoid sending them out a second time. One way to achieve this goal is to have the source router put a sequence number in each packet it receives from its hosts. Each router then needs a list per source router telling which sequence numbers originating at that source have already been seen. If an incoming packet is on the list, it is not flooded. To prevent the list from growing without bound, each list should be
augmented by a counter, k, meaning that all sequence numbers through k have been seen. When a packet comes in, it
is easy to check if the packet is a duplicate; if so, it is discarded. Furthermore, the full list below k is not needed, since k
effectively summarizes it. A variation of flooding that is slightly more practical is selective flooding. In this algorithm
the routers do not send every incoming packet out on every line, only on those lines that are going approximately in the
right direction. There is usually little point in sending a westbound packet on an eastbound line unless the topology is
extremely peculiar and the router is sure of this fact. Flooding is not practical in most applications, but it does have some
uses. For example, in military applications, where large numbers of routers may be blown to bits at any instant, the
tremendous robustness of flooding is highly desirable. In distributed database applications, it is sometimes necessary to
update all the databases concurrently, in which case flooding can be useful. In wireless networks, all messages transmitted
by a station can be received by all other stations within its radio range, which is, in fact, flooding, and some algorithms
utilize this property. A fourth possible use of flooding is as a metric against which other routing algorithms can be
compared. Flooding always chooses the shortest path because it chooses every possible path in parallel. Consequently, no
other algorithm can produce a shorter delay (if we ignore the overhead generated by the flooding process itself).
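
To make the duplicate-suppression idea concrete, here is a minimal sketch in Python of flooding with a hop counter and a per-source sequence-number check. Everything here (the Router class, the dict-based packet format, the triangle topology) is illustrative, not part of any standard:

```python
class Router:
    def __init__(self, name):
        self.name = name
        self.neighbors = []   # directly connected Router objects
        self.seen = {}        # source name -> highest sequence number seen (the counter k)

    def flood(self, packet, arrived_from=None):
        src, seq, hops = packet["src"], packet["seq"], packet["hops"]
        if hops <= 0:
            return            # hop counter exhausted: discard the packet
        if self.seen.get(src, -1) >= seq:
            return            # duplicate: k summarizes all sequence numbers <= k
        self.seen[src] = seq
        # Send out on every line except the one the packet arrived on.
        for nbr in self.neighbors:
            if nbr is not arrived_from:
                nbr.flood({"src": src, "seq": seq, "hops": hops - 1}, self)

# Usage: a triangle topology; the hop counter starts at the worst-case diameter.
a, b, c = Router("A"), Router("B"), Router("C")
a.neighbors, b.neighbors, c.neighbors = [b, c], [a, c], [a, b]
a.flood({"src": "A", "seq": 0, "hops": 3})
```

Keeping only the highest sequence number per source mirrors the counter-k optimization described above, assuming sequence numbers are flooded roughly in order.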

B. Explain IEEE 802.15.4.?


802.15.4 is a simple packet data protocol for lightweight wireless networks.
► Channel access is via Carrier Sense Multiple Access with collision avoidance and optional time slotting
► Message acknowledgement and an optional beacon structure
► Multi-level security
► Three bands, 27 channels specified:
• 2.4 GHz: 16 channels, 250 kbps
• 868.3 MHz: 1 channel, 20 kbps (BPSK) / 100 kbps (O-QPSK) / 250 kbps (ASK)
• 902-928 MHz: 10 channels, 40 kbps (BPSK) / 250 kbps (ASK or O-QPSK)
► Works well for long battery life and selectable latency for controllers, sensors, remote monitoring and portable electronics
► Configured for maximum battery life, it has the potential to last as long as the shelf life of most batteries

802.15.4 General Characteristics
• Data rates of 250 kb/s, 100 kb/s, 40 kb/s and 20 kb/s
• Star or peer-to-peer operation
• Support for low latency devices
• CSMA-CA channel access
• Dynamic device addressing
• Fully handshaked protocol for transfer reliability
• Low power consumption
• Frequency bands of operation, either:
• 16 channels in the 2.4 GHz ISM band;
• or 10 channels in the 915 MHz ISM band and 1 channel in the European 868 MHz band.
IEEE standard 802.15.4 intends to offer the fundamental lower network layers of a type of wireless personal area network
(WPAN) which focuses on low-cost, low-speed ubiquitous communication between devices. It can be contrasted with
other approaches, such as Wi-Fi, which offer more bandwidth and require more power. The emphasis is on very low cost
communication of nearby devices with little to no underlying infrastructure, intending to exploit this to lower power
consumption even more.
The basic framework conceives a 10-meter communications range with a transfer rate of 250 kbit/s. Tradeoffs are
possible to favor more radically embedded devices with even lower power requirements, through the definition of not
one, but several physical layers. Lower transfer rates of 20 and 40 kbit/s were initially defined, with the 100 kbit/s rate
being added in the current revision.
Even lower rates can be considered with the resulting effect on power consumption. As already mentioned, the main
identifying feature of IEEE 802.15.4 among WPANs is the importance of achieving extremely low manufacturing and
operation costs and technological simplicity, without sacrificing flexibility or generality.
Important features include real-time suitability by reservation of Guaranteed Time Slots (GTS), collision avoidance
through CSMA/CA and integrated support for secure communications. Devices also include power management
functions such as link quality and energy detection. The standard does have provisions for supporting time and rate
sensitive applications because of its ability to operate in pure CSMA/CA or TDMA access modes. The TDMA mode of
operation is supported via the GTS feature of the standard.[4]
IEEE 802.15.4-conformant devices may use one of three possible frequency bands for operation (868/915/2450 MHz).
Architecture:
The physical layer
The physical layer is the initial layer in the OSI reference model used worldwide. The physical layer (PHY) ultimately
provides the data transmission service, as well as the interface to the physical layer management entity, which offers
access to every layer management function and maintains a database of information on related personal area networks.
Thus, the PHY manages the physical RF transceiver and performs channel selection and energy and signal management
functions. It operates on one of three possible unlicensed frequency bands:

• 868.0–868.6 MHz: Europe, allows one communication channel (2003, 2006, 2011[5])
• 902–928 MHz: North America, up to ten channels (2003), extended to thirty (2006)
• 2400–2483.5 MHz: worldwide use, up to sixteen channels (2003, 2006)
The MAC layer
The medium access control (MAC) enables the transmission of MAC frames through the use of the physical channel.
Besides the data service, it offers a management interface and itself manages access to the physical channel and
network beaconing. It also controls frame validation, guarantees time slots and handles node associations. Finally, it
offers hook points for secure services.
Note that the IEEE 802.15 standard does not use 802.1D or 802.1Q, i.e., it does not exchange standard Ethernet frames.
The physical frame-format is specified in IEEE802.15.4-2011 in section 5.2. It is tailored to the fact that most IEEE
802.15.4 PHYs only support frames of up to 127 bytes (adaptation layer protocols such as 6LoWPAN provide
fragmentation schemes to support larger network layer packets).
Higher layers
No higher-level layers and interoperability sublayers are defined in the standard. Other specifications - such as ZigBee,
SNAP, and 6LoWPAN/Thread - build on this standard. The RIOT, OpenWSN, TinyOS, Unison RTOS, DSPnano
RTOS, nanoQplus, Contiki and Zephyr operating systems also make use of IEEE 802.15.4 hardware and software.
3.
A. Define open loop and closed loop. Explain the different congestion control approaches for datagram subnets?
Ans: Congestion control refers to the techniques used to control or prevent congestion. Congestion control techniques
can be broadly classified into two categories:

Open Loop Congestion Control


Open loop congestion control policies are applied to prevent congestion before it happens. The congestion control is
handled either by the source or the destination.
Policies adopted by open loop congestion control –
1. Retransmission Policy :
It is the policy that governs retransmission of packets. If the sender feels that a sent packet is lost or
corrupted, the packet needs to be retransmitted. This retransmission may increase congestion in the network.
To prevent this, retransmission timers must be designed to prevent congestion while still optimizing
efficiency.
2. Window Policy :
The type of window at the sender side may also affect congestion. In a Go-Back-N window, several packets are
resent even though some of them may have been received successfully at the receiver side. This duplication may
increase congestion in the network and make it worse.
Therefore, a Selective Repeat window should be adopted, as it resends only the specific packets that may have been lost.
3. Discarding Policy :
A good discarding policy adopted by routers is one in which the routers prevent congestion by partially
discarding corrupted or less sensitive packets while still maintaining the quality of the message.
In case of audio file transmission, for example, routers can discard less sensitive packets to prevent congestion while
maintaining the quality of the audio file.
4. Acknowledgment Policy :
Since acknowledgements are also part of the load on the network, the acknowledgment policy imposed by the
receiver may also affect congestion. Several approaches can be used to prevent acknowledgement-related
congestion.
The receiver can send a cumulative acknowledgement for N packets rather than acknowledging each packet individually.
The receiver can also send an acknowledgement only when it has data to send or a timer expires.
5. Admission Policy :
An admission mechanism can also be used to prevent congestion. Switches should first check the
resource requirements of a network flow before transmitting it further. If there is congestion in the network, or a
likelihood of it, a router should refuse to establish a virtual circuit connection to prevent further
congestion.
All the above policies are adopted to prevent congestion before it happens in the network.
Closed Loop Congestion Control
Closed loop congestion control technique is used to treat or alleviate congestion after it happens. Several techniques are
used by different protocols; some of them are:
1. Backpressure :
Backpressure is a technique in which a congested node stops receiving packets from its upstream node. This may cause
the upstream node or nodes to become congested and in turn reject data from the nodes above them. Backpressure is a
node-to-node congestion control technique that propagates in the direction opposite to the data flow. The backpressure
technique can be applied only to virtual circuits, where each node has information about its upstream node.

In the diagram above, the 3rd node is congested and stops receiving packets; as a result, the 2nd node may become congested
due to the slowing down of its output data flow. Similarly, the 1st node may get congested and inform the source to slow
down.
2. Choke Packet Technique :
The choke packet technique is applicable to both virtual circuit networks and datagram subnets. A choke packet is a
packet sent by a node to the source to inform it of congestion. Each router monitors its resources and the utilization
of each of its output lines. Whenever the resource utilization exceeds a threshold value set by the
administrator, the router sends a choke packet directly to the source, giving it feedback to reduce the traffic. The
intermediate nodes through which the packet has traveled are not warned about the congestion.

3. Implicit Signaling :
In implicit signaling, there is no communication between the congested node(s) and the source. The source guesses
that there is congestion somewhere in the network. For example, when a sender sends several packets and receives no
acknowledgment for a while, one assumption is that the network is congested.
4. Explicit Signaling :
In explicit signaling, if a node experiences congestion it can explicitly send a packet to the source or destination to
inform it of the congestion. The difference between the choke packet technique and explicit signaling is that in explicit
signaling the signal is included in the packets that carry data, rather than in a separate packet as in the choke packet technique.
Explicit signaling can occur in either forward or backward direction.
• Forward Signaling : In forward signaling, the signal is sent in the direction of the congestion. The destination is
warned about congestion, and the receiver in this case adopts policies to prevent further congestion.
• Backward Signaling : In backward signaling, the signal is sent in the direction opposite to the congestion. The
source is warned about congestion and needs to slow down.
Congestion control approaches which can be used in the datagram subnets. The techniques are:

1. Choke Packets.
2. Load Shedding.
3. Jitter control.
Choke Packets:
This approach can be used in virtual circuits as well as in datagram subnets. In this technique each router
associates a real variable with each of its output lines. This real variable, say "u", has a value between 0 and 1 and
it indicates the percentage utilization of that line. If the value of "u" goes above the threshold, that output
line enters a "warning" state. The router checks each newly arriving packet to see if its output line is
in the warning state. If it is, the router sends a choke packet back to the
sending host, which is then required to reduce the traffic it sends. Several variations on this congestion
control algorithm have been proposed, depending on the value of the thresholds.
Load shedding:
Admission control, choke packets, and fair queuing are techniques suitable for light congestion. But if these
techniques cannot make the congestion disappear, then the load shedding technique is used. The
principle of load shedding states that when routers are being inundated by packets that they cannot handle, they
should just throw packets away. A router which is flooded with packets due to congestion can drop packets at random. A
better policy for dropping a packet depends on the type of packet: the policy for file transfer is called wine (old is better
than new) and that for multimedia is called milk (new is better than old). To implement such an intelligent discard policy,
cooperation from the sender is essential. Applications should mark their packets in priority classes so that, when packets
are to be discarded, routers can first drop packets from the lowest class.
Jitter control:
Jitter is defined as the variation in delay for packets belonging to the same flow. Real-time audio and
video cannot tolerate jitter; on the other hand, jitter does not matter if the packets are carrying information
contained in a file. For audio and video transmission, it does not matter much whether packets take 20 msec or 30 msec
to reach the destination, provided that the delay remains constant. When a packet arrives at a router, the
router checks to see whether the packet is behind or ahead of its schedule and by how much time. This information is stored
in the packet and updated at every hop. If the packet is ahead of schedule, the router holds it slightly
longer, and if the packet is behind schedule, the router tries to send it out as quickly as
possible.

Leaky Bucket The leaky bucket mechanism is usually used to smooth the burstiness of the traffic by limiting the traffic
peak rate and the maximum burst size. This mechanism, as its name describes, uses the analogy of a leaky bucket to
describe the traffic policing scheme. The bucket’s parameters such as its size and the hole’s size are analogous to the
traffic policing parameters such as the maximum burst size and maximum rate, respectively. The leaky bucket shapes the
traffic with a maximum rate of up to the bucket rate. The bucket size determines the maximum burst size before the leaky
bucket starts to drop packets. The mechanism works in the following way. The arriving packets are inserted at the top of
the bucket. At the bottom of the bucket, there is a hole through which traffic can leak out at a maximum rate of r bytes per
second. The bucket size is b bytes (i.e., the bucket can hold at most b bytes). Let us follow the leaky bucket operation by
observing the example shown in Figure 3.10. We assume first that the bucket is empty. • Figure 3.10 (A): Incoming traffic
with rate R which is less than the bucket rate r. The outgoing traffic rate is equal to R. In this case when we start with an
empty bucket, the burstiness of the incoming traffic is the same as the burstiness of the outgoing traffic as long as R < r. •
Figure 3.10 (B): Incoming traffic with rate R which is greater than the bucket rate r. The outgoing traffic rate is equal to r
(bucket rate). • Figure 3.10 (C): Same as (B) but the bucket is full. Non-conformant traffic is either dropped or sent as
best effort traffic.
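
The following is a minimal sketch, in Python, of the leaky bucket policer just described: a bucket of size b bytes draining at rate r bytes per second. The class and method names and the use of wall-clock time are illustrative choices, not taken from the text:

```python
import time

class LeakyBucket:
    """Sketch of a leaky bucket policer: bucket size b bytes, drain rate r bytes/s."""

    def __init__(self, b, r):
        self.b = b            # maximum burst the bucket can absorb (bytes)
        self.r = r            # leak (output) rate in bytes per second
        self.level = 0.0      # bytes currently held in the bucket
        self.last = time.monotonic()

    def offer(self, nbytes):
        """Return True if a packet of nbytes conforms; False if it must be
        dropped or sent as best effort traffic."""
        now = time.monotonic()
        # The bucket leaks continuously at rate r through the hole at the bottom.
        self.level = max(0.0, self.level - (now - self.last) * self.r)
        self.last = now
        if self.level + nbytes > self.b:
            return False      # bucket would overflow: non-conformant traffic
        self.level += nbytes
        return True
```

A packet that would overflow the bucket is reported as non-conformant, mirroring case (C) above.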

Token Bucket The token bucket mechanism is almost the same as the leaky bucket mechanism but it preserves the
burstiness of the traffic. The token bucket of size b bytes is filled with tokens at rate r (bytes per second). When a packet
arrives, it retrieves a token from the token bucket (given such a token is available) and the packet is sent to the outgoing
traffic stream. As long as there are tokens in the token bucket, the outgoing traffic rate and pattern will be the same as the
incoming traffic rate and pattern. If the token bucket is empty, incoming packets have to wait until there are tokens
available in the bucket, and then they continue to send. Figure 3.11 shows an example of the token bucket mechanism. •
Figure 3.11 (A): The incoming traffic rate is less than the token arrival rate. In this case the outgoing traffic rate is equal
to the incoming traffic rate. • Figure 3.11 (B): The incoming traffic rate is greater than the token arrival rate. In case there
are still tokens in the bucket, the outgoing traffic rate is equal to the incoming traffic rate. • Figure 3.11 (C): If the
incoming traffic rate is still greater than the token arrival rate (e.g., long traffic burst), eventually all the tokens will be
exhausted. In this case the incoming traffic has to wait for the new tokens to arrive in order to be able to send out.
Therefore, the outgoing traffic is limited at the token arrival rate. The token bucket preserves the burstiness of the traffic
up to the maximum burst size. The outgoing traffic will maintain a maximum average rate equal to the token rate, r.
Therefore, the token bucket is used to control the average rate of the traffic. In practical traffic policing, we use a
combination of the token bucket and leaky bucket mechanisms connected in series (token bucket, then leaky bucket). The
token bucket enforces the average data rate to be bounded by the token bucket rate, while the leaky bucket (with rate p) enforces the peak
data rate to be bounded by the leaky bucket rate. Traffic policing, in cooperation with other QoS mechanisms, enables QoS
support.
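
A matching sketch of the token bucket, again in Python with illustrative names: tokens accumulate at rate r bytes per second up to the bucket size b, and a burst can be sent as long as tokens remain:

```python
import time

class TokenBucket:
    """Sketch of a token bucket: tokens arrive at r bytes/s, bucket holds b bytes."""

    def __init__(self, b, r):
        self.b = b                 # bucket size = maximum burst (bytes)
        self.r = r                 # token arrival rate (bytes per second)
        self.tokens = b            # start with a full bucket
        self.last = time.monotonic()

    def send(self, nbytes):
        """Return True if enough tokens exist to send nbytes now; otherwise the
        caller waits for new tokens to arrive and retries."""
        now = time.monotonic()
        # Tokens accumulate at rate r, capped at the bucket size b.
        self.tokens = min(self.b, self.tokens + (now - self.last) * self.r)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes  # consume tokens; burstiness is preserved
            return True
        return False
```

In the series combination described above, a packet would be offered to the token bucket first (bounding the average rate) and then to the leaky bucket (bounding the peak rate).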

B. Explain how admission control and packet scheduling helps to achieve good quality of services in network layer?
Ans: Packet Scheduling Mechanisms Packet scheduling is the mechanism that selects a packet for transmission from
the packets waiting in the transmission queue. It decides which packet from which queue and station are scheduled for
transmission in a certain period of time. Packet scheduling controls bandwidth allocation to stations, classes, and
applications. As shown in Figure 3.6, there are two levels of packet scheduling mechanisms: 1. Intrastation packet
scheduling: The packet scheduling mechanism that retrieves a packet from a queue within the same host. 2. Interstation
packet scheduling: The packet scheduling mechanism that retrieves a packet from a queue from different hosts. Packet
scheduling can be implemented using hierarchical or flat approaches. • Hierarchical packet scheduling: Bandwidth is
allocated to stations—that is, each station is allowed to transmit at a certain period of time. The amount of bandwidth
assigned to each station is controlled by interstation policy and module. When a station receives the opportunity to
transmit, the intrastation packet scheduling module will decide which packets to transmit. This approach is scalable
because interstation packet scheduling maintains the state by station (not by connection or application). Overall
bandwidth is allocated based on stations (in fact, they can be groups, departments, or companies). Then, stations will have
the authority to manage or allocate their own bandwidth portion to applications or classes within the host.
Packet scheduling mechanism deals with how to retrieve packets from queues, which is quite similar to a queuing
mechanism. Since in intrastation packet scheduling the status of each queue in a station is known, the intrastation packet
scheduling mechanism is virtually identical to a queuing mechanism. Interstation packet scheduling mechanism is slightly
different from a queuing mechanism because queues are distributed among hosts and there is no central knowledge of the
status of each queue. Therefore, some interstation packet scheduling mechanisms require a signaling procedure to
coordinate the scheduling among hosts. Because of the similarities between packet scheduling and queuing mechanisms
we introduce a number of queuing schemes (First In First Out [FIFO], Strict Priority, and Weighted Fair Queue [WFQ]) and
briefly discuss how they support QoS services.
3.4.1 First In First Out (FIFO) First In First Out (FIFO) is the simplest queuing mechanism. All packets are inserted to
the tail of a single queue. Packets are scheduled in order of their arrival. Figure 3.7 shows FIFO packet scheduling. FIFO
provides best effort service—that is, it does not provide service differentiation in terms of bandwidth and delay. The high
bandwidth flows will get a larger bandwidth portion than the low bandwidth flows. In general, all flows will experience
the same average delay. If a flow increases its bandwidth aggressively, other flows will be affected by getting less
bandwidth, causing increased average packet delay for all flows. It is possible to improve QoS support by adding 1)
traffic policing to limit the rate of each flow and 2) admission control.
3.4.2 Strict Priority Queues are assigned a priority order. Strict priority packet scheduling schedules packets based on
the assigned priority order. Packets in higher priority queues always transmit before packets in lower priority queues. A
lower priority queue has a chance to transmit packets only when there are no packets waiting in a higher priority queue.
Figure 3.8 illustrates the strict priority packet scheduling mechanism. Strict priority provides differentiated services
(relative services) in both bandwidth and delay. The highest priority queue always receives bandwidth (up to the total
bandwidth) and the lower priority queues receive the remaining bandwidth. Therefore, higher priority queues always
experience lower delay than the lower priority queues. Aggressive bandwidth spending by the high priority queues can
starve the low priority queues. Again, it is possible to improve the QoS support by adding 1) traffic policing to limit the
rate of each flow and 2) admission control.
3.4.3 Weighted Fair Queue (WFQ) Weighted Fair Queue schedules packets based on the weight ratio of each queue. Weight,
wi , is assigned to each queue i according to the network policy. For example, there are three queues A, B, C with weights
w1, w2, w3, respectively. Queues A, B, and C receive the following ratios of available bandwidth: w1/(w1+w2+w3),
w2/(w1+w2+w3), and w3/(w1+w2+w3), respectively, as shown in Figure 3.9.
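
As a rough illustration of these ratios, the sketch below uses a weighted round-robin, a common approximation of WFQ; the queue names and weights are invented. Each queue may transmit a number of packets per round proportional to its weight, so over many rounds the bandwidth split approaches w_i/(w1+w2+w3):

```python
from collections import deque

# Three queues A, B, C with weights w1=3, w2=2, w3=1: they receive
# 3/6, 2/6 and 1/6 of the available bandwidth respectively.
queues = {"A": deque(), "B": deque(), "C": deque()}
weights = {"A": 3, "B": 2, "C": 1}

def schedule_round():
    """One weighted round: each queue may send up to `weight` packets.
    Over many rounds this approximates the WFQ bandwidth ratios."""
    sent = []
    for name, q in queues.items():
        for _ in range(weights[name]):
            if q:
                sent.append(q.popleft())
    return sent

# Usage: enqueue some packets and run one round.
for name in queues:
    for i in range(5):
        queues[name].append(f"{name}-pkt{i}")
print(schedule_round())   # A gets 3 slots, B gets 2, C gets 1
```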
Bandwidth abuse from a specific queue will not affect other queues. WFQ can provide the required bandwidth and the
delay performance is directly related to the allocated bandwidth. A queue with high bandwidth allocation (large weight)
will experience lower delay. This may lead to some mismatch between the bandwidth and delay requirements. Some
applications may require low bandwidth and low delay. In this case WFQ will allocate high bandwidth to these
applications in order to guarantee the low delay bound. Some applications may require high bandwidth and high delay.
WFQ still has to allocate high bandwidth in order for the applications to operate. Of course, applications will satisfy the
delay but sometimes far beyond their needs. This mismatch can lead to low bandwidth utilization. However, in real life,
WFQ mostly schedules packets that belong to aggregated flows, groups, and classes (instead of individual flows) where
the goal is to provide link sharing among groups. In this case delay is of less concern. The elementary queuing
mechanisms introduced above will be the basis of a number of packet scheduling variations. Before we move our
discussion to the next QoS mechanisms, it is worth mentioning that in some implementations the channel access
mechanism and packet scheduling mechanism are not mutually exclusive. There is some overlap between these two
mechanisms and sometimes they are blended into one solution. When we discuss QoS support of each wireless
technology in later chapters, in some cases, we will discuss both mechanisms together.
3.7 Admission Control Admission control is the mechanism that makes the decision whether to allow a new session to
join the network. This mechanism will ensure that existing sessions’ QoS will not be degraded and the new session will
be provided QoS support. If there are not enough network resources to accommodate the new sessions, the admission
control mechanism may either reject the new session or admit the session while notifying the user that the network cannot
provide the required QoS. Admission control and resource reservation signaling mechanisms closely cooperate with each
other. Both are implemented in the same device. There are two admission control approaches: • Explicit admission
control: This approach is based on explicit resource reservation. Applications will send the request to join the network
through the resource reservation signaling mechanism. The request that contains QoS parameters is forwarded to the
admission control mechanism. The admission control mechanism decides to accept or reject the application based on the
application’s QoS requirements, available resources, performance criteria, and network policy. • Implicit admission
control: There is no explicit resource reservation signaling. The admission control mechanism relies on bandwidth over-
provisioning and traffic control (i.e., traffic policing). The location of the admission control mechanism depends on the
network architecture. For example, in case we have a wide area network such as a high-speed backbone that consists of a
number of interconnected routers, the admission control mechanism is implemented on each router. In shared media
networks, such as wireless networks, there is a designated entity in the network (e.g., station, access point, gateway, base
station) that hosts the admission control agent. This agent is in charge of making admission control decisions for the
entire wireless network. This concept is similar to the SBM (subnet bandwidth manager) which serves as the admission
control agent in 802 networks. In ad hoc wireless networks, the admission control functionality can be distributed among
all hosts. In infrastructure wireless networks where all communication passes through the access point or base station, the
admission control functionality can be implemented in the access point or base station.

5
A . Explain the working of TCP protocol along with the TCP segment header format?

Ans: TCP Header Format:

Source and Destination Port Number


These identify the sending and receiving applications. Along with the source and destination IP addresses in the IP
header, they identify the connection (the socket pair).
Sequence Number
The sequence number of the first data byte in this segment. If the SYN bit is set, the sequence number is the initial
sequence number and the first data byte is initial sequence number + 1.

Acknowledgement Number
If the ACK bit is set, this field contains the value of the next sequence number the sender of the segment is expecting to
receive. Once a connection is established this is always sent.

Hlen
The number of 32-bit words in the TCP header. This indicates where the data begins. The length of the TCP header is
always a multiple of 32 bits.

Flags
There are six flags in the TCP header. One or more can be turned on at the same time.
URG The URGENT POINTER field contains valid data

ACK The acknowledgement number is valid

PSH The receiver should pass this data to the application as soon
as possible

RST Reset the connection

SYN Synchronize sequence numbers to initiate a connection.

FIN The sender is finished sending data.


Window
This is the number of bytes, starting with the one specified by the acknowledgment number field, that the receiver is
willing to accept. This is a 16-bit field, limiting the window to 65535 bytes.

Checksum
This covers both the header and the data. It is calculated by prepending a pseudo-header to the TCP segment; this consists
of three 32-bit words which contain the source and destination IP addresses, a byte set to 0, a byte set to 6 (the protocol
number for TCP in an IP datagram header) and the segment length (in bytes). The 16-bit one's complement sum of this data
is calculated (i.e., the pseudo-header and the segment are considered a sequence of 16-bit words). The 16-bit one's
complement of this sum is stored in the checksum field. This is a mandatory field that must be calculated and stored by
the sender, and then verified by the receiver.
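
A sketch of this computation in Python for IPv4; the function names are illustrative. The pseudo-header (source and destination IP addresses, a zero byte, protocol 6, and the TCP length in bytes) is prepended, the 16-bit one's complement sum is taken, and its complement is returned:

```python
import struct, socket

def ones_complement_sum16(data: bytes) -> int:
    """16-bit one's complement sum of data (zero-padded to an even length)."""
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for (word,) in struct.iter_unpack("!H", data):
        total += word
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return total

def tcp_checksum(src_ip: str, dst_ip: str, segment: bytes) -> int:
    """Checksum over pseudo-header + TCP segment. Assumes the checksum field
    inside `segment` has been zeroed first, as the sender would do."""
    pseudo = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip) +
              struct.pack("!BBH", 0, 6, len(segment)))  # zero, protocol 6, TCP length
    return 0xFFFF - ones_complement_sum16(pseudo + segment)

# Usage sketch: a bare 20-byte header with the checksum field left as zero.
hdr = struct.pack("!HHIIHHHH", 1234, 80, 0, 0, (5 << 12), 65535, 0, 0)
print(hex(tcp_checksum("10.0.0.1", "10.0.0.2", hdr)))
```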

Urgent Pointer
The urgent pointer is valid only if the URG flag is set. This pointer is a positive offset that must be added to the sequence
number field of the segment to yield the sequence number of the last byte of urgent data. TCP's urgent mode is a way for
the sender to transmit emergency data to the other end. This feature is rarely used.

1. Introduction
The Transmission Control Protocol (TCP) standard is defined in the Request For Comment (RFC) standards document
number 793 [10] by the Internet Engineering Task Force (IETF). The original specification written in 1981 was based on
earlier research and experimentation in the original ARPANET. The design of TCP was heavily influenced by what has
come to be known as the "end-to-end argument" [3].

As it applies to the Internet, the end-to-end argument says that by putting excessive intelligence in physical and link
layers to handle error control, encryption or flow control you unnecessarily complicate the system. This is because these
functions will usually need to be done at the endpoints anyway, so why duplicate the effort along the way? The result of
an end-to-end network then, is to provide minimal functionality on a hop-by-hop basis and maximal control between end-
to-end communicating systems.

The end-to-end argument helped determine how two characteristics of TCP operate; performance and error handling. TCP
performance is often dependent on a subset of algorithms and techniques such as flow control and congestion control.
Flow control determines the rate at which data is transmitted between a sender and receiver. Congestion control defines
the methods for implicitly interpreting signals from the network in order for a sender to adjust its rate of transmission.
The term congestion control is a bit of a misnomer. Congestion avoidance would be a better term, since TCP cannot
control congestion per se; ultimately only intermediate devices, such as IP routers, are able to control congestion.

Congestion control is currently a large area of research and concern in the network community. A companion study on
congestion control examines the current state of activity in that area [9].

Timeouts and retransmissions handle error control in TCP. Although delay could be substantial, particularly if one were
to implement real-time applications, the use of both techniques offers error detection and error correction, thereby
guaranteeing that data will eventually be sent successfully.

The nature of TCP and the underlying packet switched network provide formidable challenges for managers, designers
and researchers of networks. Once relegated to low-speed data communication applications, the Internet and in part TCP
are being used to support very high speed communications of voice, video and data. It is unlikely that the Internet
protocols will remain static as the applications change and expand. Understanding the current state of affairs will assist us
in understanding protocol changes made to support future applications.

1.1 Transmission Control Protocol


TCP is often described as a byte stream, connection-oriented, reliable delivery transport layer protocol. In turn, we will
discuss the meaning for each of these descriptive terms.

1.1.1 Byte Stream Delivery


TCP interfaces between the application layer above and the network layer below. When an application sends data to TCP,
it does so in 8-bit byte streams. It is then up to the sending TCP to segment or delineate the byte stream in order to
transmit data in manageable pieces to the receiver. It is this lack of "record boundaries" which gives it the name "byte
stream delivery service".

1.1.2 Connection-Oriented
Before two communicating TCPs can exchange data, they must first agree upon the willingness to communicate.
Analogous to a telephone call, a connection must first be made before two parties exchange information.

1.1.3 Reliability
A number of mechanisms help provide the reliability TCP guarantees. Each of these is described briefly below.

Checksums. All TCP segments carry a checksum, which is used by the receiver to detect errors with either the TCP
header or data.

Duplicate data detection. It is possible for packets to be duplicated in a packet switched network; therefore TCP keeps track
of the bytes received in order to discard duplicate copies of data that have already been received.

Retransmissions. In order to guarantee delivery of data, TCP must implement retransmission schemes for data that may be
lost or damaged. The use of positive acknowledgements by the receiver to the sender confirms successful reception of
data. The lack of positive acknowledgements, coupled with a timeout period (see timers below) calls for a retransmission.

Sequencing. In packet switched networks, it is possible for packets to be delivered out of order. It is TCP's job to properly
sequence segments it receives so it can deliver the byte stream data to an application in order.

Timers. TCP maintains various static and dynamic timers on data sent. The sending TCP waits for the receiver to reply
with an acknowledgement within a bounded length of time. If the timer expires before receiving an acknowledgement, the
sender can retransmit the segment.

1.2 TCP Header Format


Remember that the combination of the TCP header and TCP data in one packet is called a TCP segment. Figure 1 depicts the
format of all valid TCP segments. The size of the header without options is 20 bytes. We will briefly define each field of
the TCP header below.

Figure 1 - TCP Header Format


1.2.1 Source Port
A 16-bit number identifying the application the TCP segment originated from within the sending host. The port numbers
are divided into three ranges, well-known ports (0 through 1023), registered ports (1024 through 49151) and private ports
(49152 through 65535). Port assignments are used by TCP as an interface to the application layer. For example, the
TELNET server is always assigned to the well-known port 23 by default on TCP hosts. A complete pair of IP addresses
(source and destination) plus a complete pair of TCP ports (source and destination) define a single TCP connection that is
globally unique. See [5] for further details.

1.2.2 Destination Port


A 16-bit number identifying the application the TCP segment is destined for on a receiving host. Destination ports use the
same port number assignments as those set aside for source ports [5].

1.2.3 Sequence Number


A 32-bit number identifying the current position of the first data byte in the segment within the entire byte stream for the
TCP connection. After reaching 2^32 - 1, this number will wrap around to 0.

1.2.4 Acknowledgement Number


A 32-bit number identifying the next data byte the sender expects from the receiver. Therefore, the number will be one
greater than the most recently received data byte. This field is only used when the ACK control bit is turned on (see
below).

1.2.5 Header Length


A 4-bit field that specifies the total TCP header length in 32-bit words (or in multiples of 4 bytes if you prefer). Without
options, a TCP header is always 20 bytes in length. The largest a TCP header may be is 60 bytes. This field is required
because the size of the options field(s) cannot be determined in advance. Note that this field is called "data offset" in the
official TCP standard, but header length is more commonly used.

1.2.6 Reserved
A 6-bit field currently unused and reserved for future use.

1.2.7 Control Bits


Urgent Pointer (URG). If this bit field is set, the receiving TCP should interpret the urgent pointer field (see below).

Acknowledgement (ACK). If this bit field is set, the acknowledgement field described earlier is valid.

Push Function (PSH). If this bit field is set, the receiver should deliver this segment to the receiving application as soon
as possible. An example of its use may be to send a Control-BREAK request to an application, which can jump ahead of
queued data.

Reset the Connection (RST). If this bit is present, it signals the receiver that the sender is aborting the connection and all
queued data and allocated buffers for the connection can be freely relinquished.

Synchronize (SYN). When present, this bit field signifies that the sender is attempting to "synchronize" sequence numbers.
This bit is used during the initial stages of connection establishment between a sender and receiver.
No More Data from Sender (FIN). If set, this bit field tells the receiver that the sender has reached the end of its byte
stream for the current TCP connection.

1.2.8 Window
A 16-bit integer used by TCP for flow control in the form of a data transmission window size. This number tells the
sender how much data the receiver is willing to accept. The maximum value for this field would limit the window size to
65,535 bytes; however, a "window scale" option can be used to make use of even larger windows.

1.2.9 Checksum
A TCP sender computes a value based on the contents of the TCP header and data fields. This 16-bit value will be
compared with the value the receiver generates using the same computation. If the values match, the receiver can be very
confident that the segment arrived intact.

1.2.10 Urgent Pointer


In certain circumstances, it may be necessary for a TCP sender to notify the receiver of urgent data that should be
processed by the receiving application as soon as possible. This 16-bit field tells the receiver when the last byte of urgent
data in the segment ends.

1.2.11 Options
In order to provide additional functionality, several optional parameters may be used between a TCP sender and receiver.
Depending on the option(s) used, the length of this field will vary in size, but it cannot be larger than 40 bytes due to the
size of the header length field (4 bits). The most common option is the maximum segment size (MSS) option. A TCP
receiver tells the TCP sender the maximum segment size it is willing to accept through the use of this option. Other
options are often used for various flow control and congestion control techniques.

1.2.12 Padding
Because options may vary in size, it may be necessary to "pad" the TCP header with zeroes so that the segment ends on a
32-bit word boundary as defined by the standard [10].

1.2.13 Data
Although not used in some circumstances (e.g. acknowledgement segments with no data in the reverse direction), this
variable length field carries the application data from TCP sender to receiver. This field coupled with the TCP header
fields constitutes a TCP segment.
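
To tie the field descriptions together, here is a sketch in Python that unpacks the fixed 20-byte header from a raw segment using the layout above; the function and dictionary key names are illustrative:

```python
import struct

def parse_tcp_header(segment: bytes) -> dict:
    """Unpack the fixed 20-byte TCP header from a raw segment."""
    (src_port, dst_port, seq, ack,
     off_flags, window, checksum, urg_ptr) = struct.unpack("!HHIIHHHH", segment[:20])
    header_len = (off_flags >> 12) * 4        # data offset in 32-bit words -> bytes
    flags = off_flags & 0x3F                  # URG, ACK, PSH, RST, SYN, FIN bits
    return {
        "src_port": src_port, "dst_port": dst_port,
        "seq": seq, "ack": ack,
        "header_len": header_len,
        "URG": bool(flags & 0x20), "ACK": bool(flags & 0x10),
        "PSH": bool(flags & 0x08), "RST": bool(flags & 0x04),
        "SYN": bool(flags & 0x02), "FIN": bool(flags & 0x01),
        "window": window, "checksum": checksum, "urgent_ptr": urg_ptr,
        "options": segment[20:header_len],    # present only if header_len > 20
    }

# Usage: parse a hand-built SYN segment (made-up ports and sequence number).
raw = struct.pack("!HHIIHHHH", 1234, 80, 1000, 0, (5 << 12) | 0x02, 65535, 0, 0)
print(parse_tcp_header(raw))   # SYN set, header_len 20
```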

B. Explain with suitable diagram the connection establishment and connection release by transport layer protocols?
Ans: 2. Connection Establishment and Termination
TCP provides a connection-oriented service over packet switched networks. Connection-oriented implies that there is a
virtual connection between two endpoints. There are three phases in any virtual connection: the connection
establishment, data transfer and connection termination phases.
2.1 Three-Way Handshake
In order for two hosts to communicate using TCP they must first establish a connection by exchanging messages in what
is known as the three-way handshake. The diagram below depicts the process of the three-way handshake.

Figure 2 - TCP Connection Establishment


From figure 2, it can be seen that there are three TCP segments exchanged between two hosts, Host A and Host B.
Reading down the diagram depicts events in time.

To start, Host A initiates the connection by sending a TCP segment with the SYN control bit set and an initial sequence
number (ISN) we represent as the variable x in the sequence number field.

At some moment later in time, Host B receives this SYN segment, processes it and responds with a TCP segment of its
own. The response from Host B contains the SYN control bit set and its own ISN represented as variable y. Host B also
sets the ACK control bit to indicate the next expected byte from Host A should contain data starting with sequence
number x+1.

When Host A receives Host B's ISN and ACK, it finishes the connection establishment phase by sending a final
acknowledgement segment to Host B. In this case, Host A sets the ACK control bit and indicates the next expected byte
from Host B by placing acknowledgement number y+1 in the acknowledgement field.

In addition to the information shown in the diagram above, an exchange of the source and destination ports to use for this
connection is also included in each sender's segments.
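
In practice the three-way handshake is carried out by the operating system inside the socket calls. A minimal Python sketch follows (the loopback address and port 5050 are arbitrary choices): `connect()` sends the SYN, and the handshake completes inside the peer's `accept()`:

```python
import socket, threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 5050))   # illustrative loopback address and port
srv.listen(1)                   # the server is now prepared to accept SYNs

def acceptor():
    conn, peer = srv.accept()   # SYN / SYN+ACK / ACK complete inside accept()
    conn.close()

t = threading.Thread(target=acceptor)
t.start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", 5050))   # sends SYN, receives SYN+ACK, sends the final ACK
cli.close()
t.join(); srv.close()
```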

2.2 Data Transfer


Once ISNs have been exchanged, communicating applications can transmit data between each other. Most of the
discussion surrounding data transfer requires us to look at flow control and congestion control techniques which we
discuss later in this document and refer to other texts [9]. A few key ideas will be briefly made here, while leaving the
technical details aside.
A simple TCP implementation will place segments into the network for a receiver as long as there is data to send and as
long as the sender does not exceed the window advertised by the receiver. As the receiver accepts and processes TCP
segments, it sends back positive acknowledgements, indicating where in the byte stream it is. These acknowledgements
also contain the "window" which determines how many bytes the receiver is currently willing to accept. If data is
duplicated or lost, a "hole" may exist in the byte stream. A receiver will continue to acknowledge the most current
contiguous place in the byte stream it has accepted.
If there is no data to send, the sending TCP will simply sit idly by waiting for the application to put data into the byte
stream or to receive data from the other end of the connection.
If data queued by the sender reaches a point where data sent will exceed the receiver's advertised window size, the sender
must halt transmission and wait for further acknowledgements and an advertised window size that is greater than zero
before resuming.
Timers are used to avoid deadlock and unresponsive connections. Delayed transmissions are used to make more efficient
use of network bandwidth by sending larger "chunks" of data at once rather than in smaller individual pieces.

2.3 Connection Termination


In order for a connection to be released, four segments are required to completely close a connection. Four segments are
necessary due to the fact that TCP is a full-duplex protocol, meaning that each end must shut down independently. The
connection termination phase is shown in figure 3 below.
Figure 3 - TCP Connection Termination
Notice that instead of SYN control bit fields, the connection termination phase uses the FIN control bit fields to signal the
close of a connection.

To terminate the connection in our example, the application running on Host A signals TCP to close the connection. This
generates the first FIN segment from Host A to Host B. When Host B receives the initial FIN segment, it immediately
acknowledges the segment and notifies its destination application of the termination request. Once the application on Host
B also decides to shut down the connection, it then sends its own FIN segment, which Host A will process and respond to
with an acknowledgement.

6.A: VPN: repeated


B: Write a note on RPC used to send a message to a remote location?
Ans: Remote Procedure call (RPC)
Remote Procedure Call (RPC) is a powerful technique for constructing distributed, client-server based applications.
It is based on extending the conventional local procedure calling, so that the called procedure need not exist in the same
address space as the calling procedure. The two processes may be on the same system, or they may be on different
systems with a network connecting them.
When making a Remote Procedure Call:

1. The calling environment is suspended, procedure parameters are transferred across the network to the environment
where the procedure is to execute, and the procedure is executed there.
2. When the procedure finishes and produces its results, its results are transferred back to the calling environment, where
execution resumes as if returning from a regular procedure call.
NOTE: RPC is especially well suited for client-server (e.g. query-response) interaction in which the flow of
control alternates between the caller and callee. Conceptually, the client and server do not both execute at the same
time. Instead, the thread of execution jumps from the caller to the callee and then back again.
Working of RPC
The following steps take place during a RPC:
1. A client invokes a client stub procedure, passing parameters in the usual way. The client stub resides within the
client’s own address space.
2. The client stub marshalls (packs) the parameters into a message. Marshalling includes converting the representation of
the parameters into a standard format, and copying each parameter into the message.
3. The client stub passes the message to the transport layer, which sends it to the remote server machine.
4. On the server, the transport layer passes the message to a server stub, which demarshalls (unpacks) the parameters and
calls the desired server routine using the regular procedure call mechanism.
5. When the server procedure completes, it returns to the server stub (e.g., via a normal procedure call return), which
marshalls the return values into a message. The server stub then hands the message to the transport layer.
6. The transport layer sends the result message back to the client transport layer, which hands the message back to the
client stub.
7. The client stub demarshalls the return parameters and execution returns to the caller.
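
As a concrete sketch of these steps, the example below uses Python's standard xmlrpc library, in which the framework generates the stubs and does the marshalling automatically; the host, port number (8000) and the `add` procedure are invented for illustration:

```python
# Server side (one program): expose an ordinary procedure to remote callers.
from xmlrpc.server import SimpleXMLRPCServer

def add(a, b):
    return a + b                      # executes in the server's address space

server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
server.register_function(add, "add")  # the library acts as the server stub
server.serve_forever()                # blocks, waiting for requests
```

The client-side proxy plays the role of the client stub: the call looks local, but the parameters are marshalled, shipped across the network, executed remotely, and the result is returned as if from a regular procedure call.

```python
# Client side (a second program).
from xmlrpc.client import ServerProxy

proxy = ServerProxy("http://localhost:8000/")
print(proxy.add(2, 3))   # prints 5, computed on the server
```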
ADVANTAGES
1. RPC provides ABSTRACTION, i.e., the message-passing nature of network communication is hidden from the user.
2. RPC often omits many of the protocol layers to improve performance. Even a small performance improvement is
important because a program may invoke RPCs often.
3. RPC enables the usage of applications in a distributed environment, not only in the local environment.
4. With RPC, code re-writing / re-developing effort is minimized.
5. Process-oriented and thread-oriented models are supported by RPC.

C. Compare Integrated Services and differentiated services?


Ans:
QoS Service             Best Effort        IntServ                                      DiffServ
Isolation               No isolation       Per flow isolation                           Per aggregation isolation
Guarantee               No guarantee       Per flow                                     Per aggregation (Traffic Class)
Service Scope           End-to-end         End-to-end                                   Per domain
Setup                   No setup           Per flow setup                               Long-term setup
Scalability             Highly scalable    Not scalable (each router maintains          Scalable (edge routers maintain per-aggregate
                                           per-flow state)                              state; core routers per-class state)
Suitable for Real       No                 Yes, resource reservation                    Yes, LLQ
Time traffic
Admission Control       No                 Deterministic, based on flows                Statistical, based on Traffic Classes
Applicability           Internet default   Small networks and flow                      Networks of any size
                                           aggregation scenarios
Resource Reservation    Not available      Per flow on each node in the                 Per Traffic Class on every node in the domain
                                           source-destination path
Complexity              Low                High                                         Medium

8.
A. Explain the role of DNS. What are the resource records ? Briefly explain?
Ans: We can define DNS Resource Records simply as DNS Server database entries. Resource Records are usually a name to IP
Address (IPv4 or IPv6) mapping (or vice versa). DNS Resource Records are used to answer DNS client queries. Resource
Records are added to the DNS server for the portion of the DNS namespace which the DNS Server is hosting.
Resource Records (RRs) are the DNS data records. Their precise format is defined in RFC 1035 §3.2.1. The most
important fields in a resource record are Name, Class, Type, and Data. Name is a domain name, Class and Type are two-
byte integers, and Data is a variable-length field to be interpreted in the context of Class and Type. Almost all Internet
applications use Class 1, the Internet Class. For the Internet Class, many standard Types have been defined. The complete
list can be found in the current Assigned Numbers RFC. Only those most important to DNS operation are shown here.

Address (A) RRs

Address (A) records match domain names to IP addresses, and are both the most important and the most mundane aspect of
DNS. See RFC 1035 §3.4.1 for a more detailed description of the A RR, though there is really very little to describe. The
data section consists entirely of a 32-bit IP address. Most DNS operations are queries for A records matching a given
domain name. Hosts can have multiple IP addresses, corresponding to multiple physical network interfaces, so it is
permissible for multiple A records to match a given domain name. Normally, only the first one is used, so choose a host's
most reliable IP address and put it first when constructing name server databases.

+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
| ADDRESS |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+

where:

ADDRESS A 32 bit Internet address.

A resource record, commonly referred to as an RR, is the unit of information entry in DNS zone files; RRs are the basic
building blocks of host-name and IP information and are used to resolve all DNS queries. Resource records exist in many
types to provide extended name-resolution services.

Different types of RRs have different formats, as they contain different data. In general, however, many RRs share a
common format, as the address resource record layout shown above illustrates.
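
As a small illustration (using Python's standard socket module; the host name here is just an example), a client can ask the resolver for the addresses that a name's A records map to:

```python
import socket

# Query the resolver for the IPv4 addresses (A records) of a sample name.
host = "www.example.com"
infos = socket.getaddrinfo(host, None, family=socket.AF_INET)
addresses = sorted({info[4][0] for info in infos})
print(addresses)   # each address was delivered in an A resource record
```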
B. Describe the architecture of SNMP protocol?
Ans: The Simple Network Management Protocol (SNMP) architecture includes four layers.
As the following figure illustrates, the SNMP architecture includes the following layers:
• SNMP Network Managers
• Master agents
• Subagents
• Managed components
Figure 1. SNMP architecture
A network can have multiple SNMP Network Managers. Each workstation can have one master agent. The SNMP
Network Managers and master agents use SNMP protocols to communicate with each other. Each managed component
has a corresponding subagent and MIBs. SNMP does not specify the protocol for communications between master agents
and subagents.
• SNMP network managers

An SNMP Network Manager is a program that asks for information from master agents and displays that information.
You can use most SNMP Network Managers to select the items to monitor and the form in which to display the
information.
• Master agents

A master agent is a software program that provides the interface between an SNMP Network Manager and a subagent.
• Subagents

A subagent is a software program that provides information to a master agent.


• Managed components

A managed component is hardware or software that provides a subagent. For example, database servers, operating
systems, routers, and printers can be managed components if they provide subagents.
• Management Information Bases

A Management Information Base (MIB) is a group of tables that specify the information that a subagent provides to a
master agent. MIBs follow SNMP protocols.

June 2018(1 2 3 5 6 8)
1
A . Explain the implementation of connection oriented and connectionless services?

Ans: Connection Oriented Services


There is a sequence of operations to be followed by the users of connection oriented service. These are:
1. Connection is established.
2. Information is sent.
3. Connection is released.
In connection oriented service we have to establish a connection before starting the communication. When connection is
established, we send the message or the information and then we release the connection.
Connection oriented service is more reliable than connectionless service. We can resend the message in connection oriented
service if there is an error at the receiver's end. An example of a connection oriented protocol is TCP (Transmission Control
Protocol).

Connection Less Services


It is similar to the postal services, as it carries the full address where the message (letter) is to be carried. Each message is
routed independently from source to destination. The order of message sent can be different from the order received.
In connectionless service, data is transferred in one direction from source to destination without checking whether the
destination is still there or whether it is prepared to accept the message. Authentication is not needed in this. An example of
a connectionless protocol is UDP (User Datagram Protocol).

Difference: Connection oriented and Connectionless service


1. In connection oriented service authentication is needed, while connectionless service does not need any
authentication.
2. A connection oriented protocol makes a connection, checks whether the message is received or not, and sends it again if
an error occurs, while a connectionless protocol does not guarantee message delivery.
3. Connection oriented service is more reliable than connectionless service.
4. Connection oriented service interface is stream based and connectionless is message based.
Connection-oriented Requires that a session connection (analogous to a phone call) be established before any data can be
sent. This method is often called a "reliable" network service. It can guarantee that data will arrive in the same order.
Connection-oriented services set up virtual links between end systems through a network, as shown in Figure 1. Note that
the packet on the left is assigned the virtual circuit number 01. As it moves through the network, routers quickly send it
through virtual circuit 01.

Connectionless Does not require a session connection between sender and receiver. The sender simply starts sending
packets (called datagrams) to the destination. This service does not have the reliability of the connection-oriented method,
but it is useful for periodic burst transfers. Neither system must maintain state information for the systems that they send
transmission to or receive transmission from. A connectionless network provides minimal services.

Definition of Connection-oriented Service


Connection-oriented service is analogous to the telephone system, which requires communication entities to establish a
connection before sending data. TCP provides connection-oriented service, as do ATM, Frame
Relay and MPLS hardware. It uses a handshake process to establish the connection between the sender and receiver.
A handshake process includes some steps which are:

• The client requests the server to set up a connection for transfer of data.
• The server program notifies its TCP that the connection can be accepted.
• The client transmits a SYN segment to the server.
• The server sends SYN+ACK to the client.
• The client transmits the 3rd segment, i.e. just an ACK segment.
• The connection is now established and data transfer can begin; when the transfer completes, the connection is terminated.
More precisely, it sets up a connection, uses that connection, and then terminates the connection.
Reliability is achieved by having the recipient acknowledge each message. Sequencing and flow control are provided, which is
why packets received at the receiving end are always in order. Conceptually, it resembles circuit switching for the transmission of data.

Definition of Connection-less Service


Connection-less service is analogous to the postal system, in which packets of data (usually known as datagrams) are
transmitted from source to destination directly. Each packet is treated as an individual entity, which allows
communication entities to send data before establishing communication. Each packet carries a destination address to
identify the intended recipient.
Packets don’t follow a fixed path that is the reason the packets received at receiver end can be out of order. It
uses packet switching for transmission of data.
Most network hardware, the Internet Protocol (IP), and the User Datagram Protocol (UDP) provides connection-less
service.
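
A minimal sketch of connectionless delivery with UDP sockets in Python (the loopback address and port 6060 are arbitrary choices): no connection is set up, and each datagram carries the full destination address:

```python
import socket

# Receiver: simply binds and waits; no connection is ever established.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 6060))

# Sender: each sendto() names the full destination address.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"hello", ("127.0.0.1", 6060))

data, addr = rx.recvfrom(1024)   # no handshake, no delivery guarantee
print(data, addr)
tx.close(); rx.close()
```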

B. Explain link state routing protocol?


Ans: Link-State Routing Protocols
Link-state algorithms (also known as shortest path first algorithms) flood only incremental changes that have occurred
since the last routing table update. During this incremental update, each router sends only that portion of the routing table
that describes the state of its own links, as opposed to its entire routing table.
Link-state routing protocols require routers to periodically send routing updates to their neighboring routers in the
internetwork. In addition, link-state routing protocols are quick to converge their routing updates across the network in
comparison to distance vector protocols.
The speed at which they converge makes link-state protocols less prone to routing loops than distance vector protocols.
However, link-state protocols also require more CPU power and system memory. One of the primary reasons that
additional CPU power and memory are needed is that link-state protocols are based on the distributed map concept, which
means that every router has a copy of the network map that is regularly updated. In addition to the size of the routing
table, the number of routers in an area and the number of adjacencies amongst routers also have an effect on router memory
and CPU usage in link-state protocols. These factors were obvious in the old fully meshed asynchronous transfer mode
(ATM) networks, where some routers had 50 or more OSPF adjacent peers and performed poorly.
Link-state protocols are based on link-state algorithms, which are also called shortest path first (SPF) algorithms or
Dijkstra algorithms. "SPF in Operation," later in this tutorial, covers the SPF algorithm in more detail.
A simple way to understand how link-state technology operates is to picture the network as a large jigsaw puzzle; the
number of pieces in your puzzle depends on the size of your network. Each piece of the puzzle holds only one router or
one LAN. Each router "draws" itself on that jigsaw piece, including arrows to other routers and LANs. Those pieces are
then replicated and sent throughout the network from router to router (via link-state advertisements [LSAs]), until each
router has a complete and accurate copy of each piece of the puzzle. Each router then assembles these pieces by using the
SPF algorithm.
NOTE The principle of link-state routing is that all the routers within an area maintain an identical copy of the network
topology. From this map, each router performs a series of calculations that determine the best routes. This network
topology is contained within a link-state database, where each record represents the links to a particular node in the
network.
Each record contains the following pieces of information:
• Interface identifier
• Link number
• Metric information regarding the state of the link
Armed with that information, each router can quickly compute the shortest path from itself to all other routers.
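As a rough illustration of that computation, here is a minimal sketch of Dijkstra's SPF in Python (the lsdb dict of per-node neighbor metrics is a hypothetical stand-in for the link-state database, not any vendor's implementation):

import heapq

def spf(lsdb, source):
    # Dijkstra's shortest path first over a link-state database.
    # lsdb maps each node to a dict of {neighbor: link_metric}.
    dist = {source: 0}
    prev = {}
    pq = [(0, source)]                       # (cost so far, node)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                         # stale queue entry
        for v, metric in lsdb.get(u, {}).items():
            nd = d + metric
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u                  # best previous hop toward v
                heapq.heappush(pq, (nd, v))
    return dist, prev

# Example: three routers with symmetric link metrics.
lsdb = {"A": {"B": 1, "C": 4}, "B": {"A": 1, "C": 2}, "C": {"A": 4, "B": 2}}
print(spf(lsdb, "A")[0])                     # {'A': 0, 'B': 1, 'C': 3}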
The SPF algorithm determines how the various pieces of the puzzle fit together. Figure below illustrates all of these
pieces put together in operation.

Link-state protocols such as OSPF flood all the routing information when they first become active in link-state packets.
After the network converges, they send only small updates via link-state packets.

2.
A. Compare static routing VS dynamic routing algorithm?
Ans:
BASIS FOR COMPARISON | STATIC ROUTING | DYNAMIC ROUTING
Configuration | Manual | Automatic
Routing table building | Routing locations are hand-typed | Locations are dynamically filled in the table
Routes | User defined | Routes are updated according to changes in topology
Routing algorithms | Doesn't employ complex routing algorithms | Uses complex routing algorithms to perform routing operations
Implemented in | Small networks | Large networks
Link failure | Link failure obstructs rerouting | Link failure doesn't affect rerouting
Security | Provides high security | Less secure due to sending broadcasts and multicasts
Routing protocols | No routing protocols are involved in the process | Routing protocols such as RIP, EIGRP, etc. are involved in the routing process
Additional resources | Not required | Needs additional resources to store the information

B. Explain Broadcast routing technique and various methods for doing it?
Ans: In some applications, hosts need to send messages to many or all other hosts.For example, a service distributing
weather reports, stock market updates, or live radio programs might work best by sending to all machines and letting
those that are interested read the data. Sending a packet to all destinations simultaneously is called broadcasting. Various
methods have been proposed for doing it. One broadcasting method that requires no special features from the network is
for the source to simply send a distinct packet to each destination. Not only is the method wasteful of bandwidth and
slow, but it also requires the source to have a complete list of all destinations. This method is not desirable in practice,
even though it is widely applicable. An improvement is multidestination routing, in which each packet contains either a
list of destinations or a bit map indicating the desired destinations. When a packet arrives at a router, the router checks all
the destinations to determine the set of output lines that will be needed. (An output line is needed if it is the best route to
at least one of the destinations.) The router generates a new copy of the packet for each output line to be used and
includes in each packet only those destinations that are to use the line. In effect, the destination set is partitioned among
the output lines. After a sufficient number of hops, each packet will carry only one destination like a normal packet.
Multidestination routing is like using separately addressed packets, except that when several packets must follow the
same route, one of them pays full fare and the rest ride free. The network bandwidth is therefore used more efficiently.
However, this scheme still requires the source to know all the destinations, plus it is as much work for a router to
determine where to send one multidestination packet as it is for multiple distinct packets. We have already seen a better
broadcast routing technique: flooding. When implemented with a sequence number per source, flooding uses links
efficiently with a decision rule at routers that is relatively simple. Although flooding is ill-suited for ordinary point-to-
point communication, it rates serious consideration for broadcasting. However, it turns out that we can do better still once
the shortest path routes for regular packets have been computed. The idea for reverse path forwarding is elegant and
remarkably simple once it has been pointed out (Dalal and Metcalfe, 1978). When a broadcast packet arrives at a router,
the router checks to see if the packet arrived on the link that is normally used for sending packets toward the source of the
broadcast. If so, there is an excellent chance that the broadcast packet itself followed the best route from the router and is
therefore the first copy to arrive at the router. This being the case, the router forwards copies of it onto all links except the
one it arrived on. If, however, the broadcast packet arrived on a link other than the preferred one for reaching the source,
the packet is discarded as a likely duplicate.

An example of reverse path forwarding is shown in Fig. 5-15. Part (a) shows a network, part (b) shows a sink tree for
router I of that network, and part (c) shows how the reverse path algorithm works. On the first hop, I sends packets to
F, H, J, and N, as indicated by the second row of the tree. Each of these packets arrives on the preferred path to I
(assuming that the preferred path falls along the sink tree) and is so indicated by a circle around the letter. On the second
hop, eight packets are generated, two by each of the routers that received a packet on the first hop. As it turns out, all
eight of these arrive at previously unvisited routers, and five of these arrive along the preferred line. Of the six packets
generated on the third hop, only three arrive on the preferred path (at C, E, and K); the others are duplicates. After five
hops and 24 packets, the broadcasting terminates, compared with four hops and 14 packets had the sink tree been
followed exactly. The principal advantage of reverse path forwarding is that it is efficient while being easy to implement.
It sends the broadcast packet over each link only once in each direction, just as in flooding, yet it requires only that
routers know how to reach all destinations, without needing to remember sequence numbers (or use other mechanisms to
stop the flood) or list all destinations in the packet. Our last broadcast algorithm improves on the behavior of reverse path
forwarding. It makes explicit use of the sink tree—or any other convenient spanning tree—for the router initiating the
broadcast. A spanning tree is a subset of the network that includes all the routers but contains no loops. Sink trees are
spanning trees. If each router knows which of its lines belong to the spanning tree, it can copy an incoming broadcast
packet onto all the spanning tree lines except the one it arrived on. This method makes excellent use of bandwidth,
generating the absolute minimum number of packets necessary to do the job. In Fig. 5-15, for example, when the sink tree
of part (b) is used as the spanning tree, the broadcast packet is sent with the minimum 14 packets. The only problem is
that each router must have knowledge of some spanning tree for the method to be applicable. Sometimes this information
is available (e.g., with link state routing, all routers know the complete topology, so they can compute a spanning tree) but
sometimes it is not (e.g., with distance vector routing).
Broadcast routing
By default, the broadcast packets are not routed and forwarded by the routers on any network. Routers create broadcast
domains. But it can be configured to forward broadcasts in some special cases. A broadcast message is destined to all
network devices.
Broadcast routing can be done in two ways (algorithm):
• A router creates a data packet and then sends it to each host one by one. In this case, the router creates multiple
copies of single data packet with different destination addresses. All packets are sent as unicast but because they
are sent to all, it simulates as if router is broadcasting.
This method consumes a lot of bandwidth, and the router must know the destination address of each node.
• Secondly, when router receives a packet that is to be broadcasted, it simply floods those packets out of all interfaces.
All routers are configured in the same way.

This method is easy on router's CPU but may cause the problem of duplicate packets received from peer routers.
Reverse path forwarding is a technique, in which router knows in advance about its predecessor from where it
should receive broadcast. This technique is used to detect and discard duplicates.
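A minimal sketch of that per-router decision (hypothetical table names; next_hop is assumed to come from the router's unicast routing table):

def rpf_forward(packet_source, arrival_link, links, next_hop):
    # Reverse path forwarding decision at one router.
    # next_hop[source] is the link normally used to reach the source.
    if arrival_link == next_hop[packet_source]:
        # Arrived on the preferred path back to the source:
        # forward a copy on every other link.
        return [l for l in links if l != arrival_link]
    return []                                # likely duplicate: discard

# Example: this router's best route to source I is via link "to-I".
links = ["to-I", "to-A", "to-B"]
print(rpf_forward("I", "to-I", links, {"I": "to-I"}))   # ['to-A', 'to-B']
print(rpf_forward("I", "to-A", links, {"I": "to-I"}))   # []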

C. Explain IEEE 802.15.4 Zigbee protocol?


Ans: How is ZigBee related to IEEE 802.15.4?
• ZigBee takes full advantage of the physical radio and MAC layers specified by IEEE 802.15.4 (the lower layers).
• ZigBee adds the logical network, security and application software (the higher layers).
• ZigBee continues to work closely with the IEEE to ensure an integrated and complete solution for the market.
3.
A. Explain the congestion prevention policies at various layers?
Ans: Policies adopted by open loop congestion control –

1. Retransmission Policy :
This is the policy that governs how retransmission of packets is handled. If the sender feels that a sent packet is lost
or corrupted, the packet needs to be retransmitted. Retransmission may increase the congestion in the network.
To prevent this, retransmission timers must be designed to avoid congestion while still optimizing
efficiency.
2. Window Policy :
The type of window at the sender side may also affect congestion. In Go-back-N, several packets in the window are
resent even though some of them may have been received successfully at the receiver side. This duplication may increase
congestion in the network and make it worse.
Therefore, the selective repeat window should be adopted, as it resends only the specific packet that may have been lost.
3. Discarding Policy :
A good discarding policy adopted by the routers is one in which the routers prevent congestion by partially
discarding corrupted or less sensitive packets while still maintaining the quality of a message.
In case of audio file transmission, routers can discard less sensitive packets to prevent congestion and also maintain
the quality of the audio file.
4. Acknowledgment Policy :
Since acknowledgements are also part of the load on the network, the acknowledgement policy imposed by the
receiver may also affect congestion. Several approaches can be used to prevent congestion related to
acknowledgements.
The receiver should send an acknowledgement for N packets rather than for every single packet,
and should send an acknowledgement only if it has a packet to send or a timer expires.
5. Admission Policy :
In the admission policy, a mechanism should be used to prevent congestion. Switches in a flow should first check the
resource requirement of a network flow before transmitting it further. If there is a chance of congestion, or there is
already congestion in the network, a router should refuse to establish a virtual-circuit connection to prevent further
congestion.

B. Describe the following techniques used for QoS:


1). Over provisioning: An alternative to complex QoS control mechanisms is to provide high quality
communication by generously over-provisioning a network so that capacity is based on peak traffic load estimates.
This approach is simple for networks with predictable peak loads. The performance is reasonable for many
applications. This might include demanding applications that can compensate for variations in bandwidth and delay
with large receive buffers, which is often possible for example in video streaming. Over-provisioning can be of
limited use, however, in the face of transport protocols (such as TCP) that over time exponentially increase the
amount of data placed on the network until all available bandwidth is consumed and packets are dropped. Such
greedy protocols tend to increase latency and packet loss for all users.
Commercial VoIP services are often competitive with traditional telephone service in terms of call quality even
though QoS mechanisms are usually not in use on the user's connection to their ISP and the VoIP provider's
connection to a different ISP. Under high load conditions, however, VoIP may degrade to cell-phone quality or
worse. The mathematics of packet traffic indicate that such a network requires just 60% more raw capacity under
conservative assumptions.[5]
The amount of over-provisioning in interior links required to replace QoS depends on the number of users and their
traffic demands, which limits the usability of over-provisioning. Newer, more bandwidth-intensive applications and the
addition of more users erode the headroom of over-provisioned networks. This then requires a physical upgrade of the
relevant network links, which is an expensive process. Thus over-provisioning cannot be blindly assumed on the
Internet.

The Differentiated Services (DiffServ) [RFC2475] approach involves the reservation of network resources such as output
interface buffer space/queues and percentages of link bandwidth assigned for each type of traffic. Traffic types (which
may also be called RFSs - Resource-Facing Services) aggregate flows with similar delay and packet loss requirements.
DiffServ in itself makes no absolute guarantees other than that different traffic types will be treated in different ways,
according to the QoS parameters configured.

This configuration is required on each QoS-enabled interface in the network and there is no interaction between nodes so
packets are treated on a per-hop basis (PHB). This means that each router on the path between the source and destination
may have different QoS parameters configured. For example, the amount of bandwidth assigned for a traffic type on a
backbone network link is likely to be greater than that on an access link, as the backbone may be carrying many flows of
that type, from different sources.

Varying the bandwidth parameter is usual on any network with a backbone and many access links, and is unlikely to be
harmful. However, varying other parameters may result in traffic receiving priority treatment in one router and a different
treatment in the next, so a consistent approach to setting QoS parameters across a network is important. For this reason
when two different networks, for example JANET and GÉANT, attempt to interwork using DiffServ, it is vital that both
parties understand the other’s QoS architecture. To simplify inter-domain configuration, the IETF recommended two
main types of PHB: Expedited Forwarding (EF) [RFC3246] and Assured Forwarding (AF) [RFC2597]. EF provides the
best quality of treatment, in terms of latency/loss parameters, which a router can give to packets. AF is mostly designed
for traffic which needs guaranteed delivery but is more tolerant to packet delay/loss than traffic which requires EF.
Neither the EF nor the AF definition specifies particular details of router configuration such as queuing, admission control,
policing and shaping types and parameters; these are left to the implementation.

DiffServ is a stateless architecture in that a packet enters a router, is classified as necessary, and then placed in the
appropriate queue on the output interface. The router does not attempt to track flows and, once a packet has been
transmitted, it is forgotten.

According to the DiffServ approach, any network router can carry out traffic classification, i.e. decide what PHB should
be applied to arriving packets, independently. However, DiffServ defines a special field in the IP packet called DSCP
(Differential Services Code Point) which can be used as an attribute indicating the desirable PHB for this packet. The
DSCP field is usually intended to be used within a network where routers trust each other, as in a single administrative
domain. In such a case, only the edge routers of the network perform classification and mark ingress packets with a
specific DSCP value; all the core routers can then trust this choice and treat packets accordingly. By extension, DSCP
values can also be used as a means to coordinate traffic handling between trusting networks such as JANET and Regional
Networks. The use of the DSCP field is not mandatory; it is a tool for loose coordination of a network of routers which is
intended to decrease the amount of packet processing work for the core routers.
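As an end-host illustration (a sketch, assuming a platform that exposes IP_TOS; routers along the path may honor, re-mark, or ignore the value depending on policy), an application can request EF treatment by writing DSCP 46 into the upper six bits of the TOS byte:

import socket

EF_DSCP = 46                        # Expedited Forwarding code point
tos = EF_DSCP << 2                  # DSCP occupies the upper 6 bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
# Datagrams sent on this socket now carry DSCP 46 in the IP header.
sock.sendto(b"voice sample", ("192.0.2.1", 5004))   # hypothetical address/port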

2). Buffering: 1) FIFO Queuing: In first-in, first-out (FIFO) queuing, packets wait in a buffer (queue) until the node
(router or switch) is ready to process them. If the average arrival rate is higher than the average processing rate, the queue
will fill up and new packets will be discarded. Figure9 shows a conceptual view of a FIFO queue.

Fig9: FIFO queue


2) Priority Queuing: In priority queuing, packets are first assigned to a priority class. Each priority class has its own
queue. The packets in the highest-priority queue are processed first. Packets in the lowest-priority queue are processed
last. Note that the system does not stop serving a queue until it is empty.
Figure10 shows priority queuing with two priority levels (for simplicity).

Fig10: Priority queuing


A priority queue can provide better QoS than the FIFO queue because higher priority traffic, such as multimedia, can
reach the destination with less delay.
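A minimal sketch of that discipline (two classes for simplicity, matching the figure; the class and method names are hypothetical):

from collections import deque

class PriorityQueuing:
    # Two-class priority queuing: the high-priority queue is always
    # served first; the low queue is served only when high is empty.
    def __init__(self):
        self.high = deque()
        self.low = deque()

    def enqueue(self, packet, high_priority=False):
        (self.high if high_priority else self.low).append(packet)

    def dequeue(self):
        if self.high:
            return self.high.popleft()
        if self.low:
            return self.low.popleft()
        return None

q = PriorityQueuing()
q.enqueue("bulk-1")
q.enqueue("voice-1", high_priority=True)
print(q.dequeue(), q.dequeue())      # voice-1 bulk-1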

3).leaky bucket: 1) Leaky Bucket: A technique called leaky bucket can smooth out bursty traffic. Bursty chunks are
stored in the bucket and sent out at an average rate.
A simple leaky bucket implementation is shown in Figure11. A FIFO queue holds the packets. If the traffic
consists of fixed-size packets, the process removes a fixed number of packets from the queue at each tick of the
clock. If the traffic consists of variable-length packets, the fixed output rate must be based on the number of bytes
or bits.

Fig11: Leaky bucket implementation


The following is an algorithm for variable-length packets (a runnable sketch follows the steps):
1. Initialize a counter to n at the tick of the clock.
2. If n is greater than the size of the packet, send the packet and decrement the counter by the packet size. Repeat
this step until n is smaller than the packet size.
3. Reset the counter and go to step 1.
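A minimal sketch of that per-tick loop (hypothetical byte counts; note that a large packet at the head of the FIFO queue also blocks smaller packets behind it):

from collections import deque

def leaky_bucket_tick(queue, n):
    # One clock tick of the variable-length leaky bucket: up to n bytes
    # may leave; a packet larger than the remaining budget waits for
    # the next tick, when the counter is reset to n again.
    sent = []
    while queue and queue[0] <= n:
        size = queue.popleft()
        n -= size
        sent.append(size)
    return sent

queue = deque([200, 400, 500, 300])      # packet sizes in bytes
print(leaky_bucket_tick(queue, 1000))    # [200, 400]; 500 waits (and 300 behind it)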

5.
A. Explain connection establishment in transport layer with different protocol scenarios ?

Ans: To aid in our understanding of the connect, accept, and close functions and to help us debug TCP applications
using the netstat program, we must understand how TCP connections are established and terminated, and TCP's
state transition diagram.

Three-Way Handshake

The following scenario occurs when a TCP connection is established:

1. The server must be prepared to accept an incoming connection. This is normally done by
calling socket, bind, and listen and is called a passive open.
2. The client issues an active open by calling connect. This causes the client TCP to send a "synchronize"
(SYN) segment, which tells the server the client's initial sequence number for the data that the client
will send on the connection. Normally, there is no data sent with the SYN; it just contains an IP header,
a TCP header, and possible TCP options (which we will talk about shortly).
3. The server must acknowledge (ACK) the client's SYN and the server must also send its own SYN
containing the initial sequence number for the data that the server will send on the connection. The
server sends its SYN and the ACK of the client's SYN in a single segment.
4. The client must acknowledge the server's SYN.

The minimum number of packets required for this exchange is three; hence, this is called TCP's three-way
handshake. We show the three segments in Figure 2.2.

Figure 2.2. TCP three-way handshake.


We show the client's initial sequence number as J and the server's initial sequence number as K. The
acknowledgment number in an ACK is the next expected sequence number for the end sending the ACK. Since
a SYN occupies one byte of the sequence number space, the acknowledgment number in the ACK of each SYN
is the initial sequence number plus one. Similarly, the ACK of each FIN is the sequence number of the FIN plus
one.

An everyday analogy for establishing a TCP connection is the telephone system [Nemeth 1997].
The socket function is the equivalent of having a telephone to use. bind is telling other people your telephone
number so that they can call you. listen is turning on the ringer so that you will hear when an incoming call
arrives. connect requires that we know the other person's phone number and dial it. accept is when the person
being called answers the phone. Having the client's identity returned by accept (where the identity is the client's
IP address and port number) is similar to having the caller ID feature show the caller's phone number. One
difference, however, is that accept returns the client's identity only after the connection has been established,
whereas the caller ID feature shows the caller's phone number before we choose whether to answer the phone
or not. If the DNS is used (Chapter 11), it provides a service analogous to a telephone book. getaddrinfo is similar
to looking up a person's phone number in the phone book. getnameinfo would be the equivalent of having a
phone book sorted by telephone numbers that we could search, instead of a book sorted by name.
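A minimal sketch tying those primitives together (loopback client and server in one process; the port number is hypothetical). The kernel performs the three-way handshake itself: connect returns once the client's ACK is sent, and accept returns on the server side once the connection is established:

import socket
import threading

ready = threading.Event()

def server():
    # Passive open: socket, bind, listen -- then block in accept.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("127.0.0.1", 9999))      # hypothetical loopback port
    s.listen(1)
    ready.set()                      # tell the client we are listening
    conn, addr = s.accept()          # returns after the 3-way handshake
    print("server: connection from", addr)
    conn.close()
    s.close()

t = threading.Thread(target=server)
t.start()
ready.wait()
c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
c.connect(("127.0.0.1", 9999))       # active open: SYN, SYN+ACK, ACK
print("client: connected")
c.close()
t.join()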

TCP Options

Each SYN can contain TCP options. Commonly used options include the following:

• MSS option. With this option, the TCP sending the SYN announces its maximum segment size, the
maximum amount of data that it is willing to accept in each TCP segment, on this connection. The
sending TCP uses the receiver's MSS value as the maximum size of a segment that it sends. We will see
how to fetch and set this TCP option with the TCP_MAXSEG socket option (Section 7.9).
• Window scale option. The maximum window that either TCP can advertise to the other TCP is 65,535,
because the corresponding field in the TCP header occupies 16 bits. But, high-speed connections,
common in today's Internet (45 Mbits/sec and faster, as described in RFC 1323 [Jacobson, Braden, and
Borman 1992]), or long delay paths (satellite links) require a larger window to obtain the maximum
throughput possible. This newer option specifies that the advertised window in the TCP header must be
scaled (left-shifted) by 0–14 bits, providing a maximum window of almost one gigabyte (65,535 x 214).
Both end-systems must support this option for the window scale to be used on a connection. We will see
how to affect this option with the SO_RCVBUF socket option (Section 7.5).

To provide interoperability with older implementations that do not support this option, the following
rules apply. TCP can send the option with its SYN as part of an active open. But, it can scale its
windows only if the other end also sends the option with its SYN. Similarly, the server's TCP can send
this option only if it receives the option with the client's SYN. This logic assumes that implementations
ignore options that they do not understand, which is required and common, but unfortunately, not
guaranteed with all implementations.

• Timestamp option. This option is needed for high-speed connections to prevent possible data corruption
caused by old, delayed, or duplicated segments. Since it is a newer option, it is negotiated similarly to
the window scale option. As network programmers there is nothing we need to worry about with this
option.

These common options are supported by most implementations. The latter two are sometimes called the "RFC
1323 options," as that RFC [Jacobson, Braden, and Borman 1992] specifies the options. They are also called the
"long fat pipe options," since a network with either a high bandwidth or a long delay is called a long fat pipe.
Chapter 24 of TCPv1 contains more details on these options.
B. If TCP round trip RTT is currently 30 msec and the following acknowledgements come in after 26, 32 and 24 msec
respectively. What is the new RTT using Jacobson algorithm? Use Alpha = 0.9.
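Ans: Jacobson's algorithm updates the smoothed estimate as SRTT = α × SRTT + (1 - α) × M, where M is the newest measurement and α = 0.9:
After M = 26 ms: SRTT = 0.9 × 30 + 0.1 × 26 = 29.6 ms
After M = 32 ms: SRTT = 0.9 × 29.6 + 0.1 × 32 = 29.84 ms
After M = 24 ms: SRTT = 0.9 × 29.84 + 0.1 × 24 = 29.256 ms
So the new RTT estimate is approximately 29.3 ms.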

C. explain TCP connection establishment?


Ans: Repeated

6.
A. explain label switching and MPLS with neat diagram?
Ans: A label switching router (LSR) makes up the core of a label-switched network. Label-switched networks are made
up of predetermined paths, called label-switched paths, (LSPs) which are the result of establishing source-destination
pairs by the process called Multi-Protocol Label Switching (MPLS). Label switching routers support MPLS, which
ensures that all of the packets carried in a specific route will remain in the same path over a backbone. Label switching is
a technique of network relaying to overcome the problems perceived by traditional IP-table switching (also known as
traditional layer 3 hop-by-hop routing[1]). Here, the switching of network packets occurs at a lower level, namely the data
link layer rather than the traditional network layer.
Each packet is assigned a label number and the switching takes place after examination of the label assigned to each
packet. The switching is much faster than IP-routing. New technologies such as Multiprotocol Label Switching (MPLS)
use label switching. The established ATM protocol also uses label switching at its core.

Multiprotocol Label Switching (MPLS)

Multiprotocol Label Switching (MPLS) is a protocol-agnostic routing technique designed to speed up and shape traffic
flows across enterprise wide area and service provider networks.

MPLS allows most data packets to be forwarded at Layer 2 -- the switching level -- rather than having to be passed up
to Layer 3 -- the routing level. For this reason, it is often informally described as operating at Layer 2.5.

MPLS was created in the late 1990s as a more efficient alternative to traditional IP routing, which requires each router to
independently determine a packet's next hop by inspecting the packet's destination IP address before consulting its
own routing table. This process consumes time and hardware resources, potentially resulting in degraded performance for
real-time applications such as voice and video.

In an MPLS network, the very first router to receive a packet determines the packet's entire route upfront, the identity of
which is quickly conveyed to subsequent routers using a label in the packet header.

While router hardware has improved exponentially since MPLS was first developed -- somewhat diminishing its
significance as a more efficient traffic management technology -- it remains important and popular due to its various other
benefits, particularly security, flexibility and traffic engineering.
Components of MPLS

One of the defining features of MPLS is its use of labels -- the L in MPLS. Sandwiched between Layers 2 and 3, a label is
a four-byte -- 32-bit -- identifier that conveys the packet's predetermined forwarding path in an MPLS network. Labels
can also contain information related to quality of service (QoS), indicating a packet's priority level.

MPLS labels consist of four parts (see the packing sketch after this list):

• Label value: 20 bits


• Experimental: 3 bits
• Bottom of stack: 1 bit
• Time to live: 8 bits
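Packing those four fields into the 32-bit label stack entry can be sketched as follows (illustrative helper names, not part of any library; the layout follows the field widths listed above):

def pack_label(label, exp, s, ttl):
    # 32-bit MPLS label stack entry:
    # label (20 bits) | EXP (3 bits) | bottom-of-stack (1 bit) | TTL (8 bits)
    assert label < 2**20 and exp < 8 and s < 2 and ttl < 256
    return (label << 12) | (exp << 9) | (s << 8) | ttl

def unpack_label(entry):
    return (entry >> 12, (entry >> 9) & 0x7, (entry >> 8) & 0x1, entry & 0xFF)

entry = pack_label(label=16000, exp=5, s=1, ttl=64)
print(hex(entry), unpack_label(entry))   # 0x3e80b40 (16000, 5, 1, 64)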
The paths, which are called label-switched paths (LSPs), enable service providers to decide ahead of time the best way for
certain types of traffic to flow within a private or public network.
How an MPLS network works
In an MPLS network, each packet gets labeled on entry into the service provider's network by the ingress router, also
known as the label edge router (LER). This is also the router that decides the LSP the packet will take until it reaches its
destination address.
All the subsequent label-switching routers (LSRs) perform packet forwarding based only on those MPLS labels -- they
never look as far as the IP header. Finally, the egress router removes the labels and forwards the original IP packet toward
its final destination.
When an LSR receives a packet, it performs one or more of the following actions:
• Push: Adds a label. This is typically performed by the ingress router.
• Swap: Replaces a label. This is usually performed by LSRs between the ingress and egress routers.
• Pop: Removes a label. This is most often done by the egress router.
This diagram illustrates how a simple MPLS network works.

Advantages of MPLS
Service providers and enterprises can use MPLS to implement QoS by defining LSPs that can meet specific service-level
agreements on traffic latency, jitter, packet loss and downtime. For example, a network might have three service levels
that prioritize different types of traffic -- e.g., one level for voice, one level for time-sensitive traffic and one level for
best-effort traffic.

B. Explain the UDP header format?


Ans: UDP header
UDP is a simple, datagram-oriented, transport layer protocol: each output operation by a process produces exactly one
UDP datagram, which causes one IP datagram to be sent. The encapsulation of a UDP datagram as an IP datagram looks
like this:

UDP Header Format


Each UDP message is called a user datagram. Conceptually, a user datagram consists of two parts: a UDP Header and a
UDP data area. The header is divided in four 16-bit fields as shown:
Source and Destination Port
The port numbers identify the sending process and the receiving process. TCP and UDP use the destination port number
to demultiplex incoming data from IP. Since IP has already demultiplexed the incoming IP datagram to either TCP or
UDP (based on the protocol value in the IP header), this means the TCP port numbers are looked at by TCP, and the UDP
port numbers by UDP. The TCP port numbers are independent of the UDP port numbers.

Length
The length in bytes of the UDP header and the encapsulated data. The minimum value for this field is 8.
Checksum
This is computed as the 16-bit one's complement of the one's complement sum of a pseudo header of information from the
IP header, the UDP header, and the data, padded as needed with zero bytes at the end to make a multiple of two bytes. If
the checksum is set to zero, then checksumming is disabled. The designers chose to make the checksum optional to allow
implementations to operate with little computational overhead. If the computed checksum is zero, then this field must be
set to 0xFFFF.
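Since the header is just four 16-bit fields in network byte order, it can be illustrated with Python's struct module (a sketch; the checksum is shown as 0, i.e. disabled, to avoid computing the pseudo-header sum):

import struct

def udp_header(src_port, dst_port, payload, checksum=0):
    # Build the 8-byte UDP header: four 16-bit fields, big-endian.
    # Length covers header plus data; checksum 0 means "disabled".
    length = 8 + len(payload)
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

hdr = udp_header(5000, 53, b"query")
print(struct.unpack("!HHHH", hdr))       # (5000, 53, 13, 0)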

C. Note on VPN?

8.
A. Explain the DNS name space and various resource records?
Ans: 5) DNS namespace: DNS is the name service provided by the Internet for TCP/IP networks. DNS is broken up
into domains, a logical organization of computers that exist in a larger network. The domains exist at different levels and
connect in a hierarchy that resembles the root structure of a tree. Each domain extends from the node above it, beginning
at the top with the root-level domain. Under the root-level domain are the top-level domains, under those are the second-
level domains, and on down into subdomains. DNS namespace identifies the structure of the domains that combine to
form a complete domain name. For example, in the domain name sub.secondary.com, "com" is the top-level domain,
"secondary" identifies the secondary domain name (commonly a site hosted by an organization and/or business), and
"sub" identifies a subdomain within the larger network. This entire DNS domain structure is called the DNS namespace.
The name assigned to a domain or computer relates to its position in the namespace.

Alternatively referred to as a namespace, a domain namespace is a name service
provided by the Internet for Transmission Control Protocol/Internet Protocol
(TCP/IP) networks. DNS is broken up into domains, a logical organization of
computers that exist in a larger network. Below is an example of the hierarchy of
domain naming on the Internet.

In the above example, all websites are broken into regional sections based on
the TLD (top-level domain). In the example of http://support.computerhope.com it
has a ".com" TLD, with "computerhope" as its second level domain that is local to
the .com TLD, and "support" as its subdomain, which is determined by its server.

There are different types of Resource Records. The most important types of Resource Records are: 1) IPv4 host address (A),
2) IPv6 host address (AAAA, pronounced "quad-A"), 3) CNAME (alias), 4) Pointer (PTR), 5) Mail Exchanger (MX), and
6) Service (SRV).
DNS Resource Record Type | Explanation
A Record | IPv4 Host Record, used for mapping a domain name to an IPv4 address
AAAA Record (pronounced "quad-A") | IPv6 Host Record, used for mapping a domain name to an IPv6 address
CNAME Record (Canonical Name) | Alias Record, used for mapping an alias to a DNS domain name; CNAME records are useful for using more than one name for a single host
MX Record | Mail Exchanger, used for mapping a DNS domain name to its mail server; MX records are used by e-mail applications to locate the mail server for a DNS domain, based on the destination e-mail address
PTR Record | Pointer, used for reverse lookup (IP address to domain name resolution)
SRV Record | Service record, used to map available services; mainly used by Active Directory in Microsoft Windows Server

Type | Meaning | Value
SOA | Start of authority | Parameters for this zone
A | IPv4 address of a host | 32-bit integer
AAAA | IPv6 address of a host | 128-bit integer
MX | Mail exchange | Priority; domain willing to accept e-mail
NS | Name server | Name of a server for this domain
CNAME | Canonical name | Domain name
PTR | Pointer | Alias for an IP address
SPF | Sender policy framework | Text encoding of mail sending policy
SRV | Service | Host that provides it
TXT | Text | Descriptive ASCII text
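As a quick illustration of how an application queries these records through the system resolver (a standard-library call; the hostname is just an example), an A/AAAA lookup in Python:

import socket

# Resolve a name to its address records via the system resolver.
# getaddrinfo returns (family, type, proto, canonname, sockaddr) tuples.
for family, _, _, _, sockaddr in socket.getaddrinfo("example.com", 80):
    print(family, sockaddr[0])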
B. Compare POP3 and IMAP?
Ans:
BASIS FOR COMPARISON | POP3 | IMAP
Basic | To read the mail it has to be downloaded first | The mail content can be checked partially before downloading
Organize | The user cannot organize mails in the mailbox of the mail server | The user can organize the mails on the server
Folder | The user cannot create, delete or rename mailboxes on a mail server | The user can create, delete or rename mailboxes on the mail server
Content | A user cannot search the content of mail before downloading | A user can search the content of mail for a specific string of characters before downloading
Partial Download | The user has to download the mail to access it | The user can partially download the mail if bandwidth is limited
Functions | POP3 is simple and has limited functions | IMAP is more powerful, more complex and has more features than POP3

C. Write a note on JPEG?


Ans: JPEG is a commonly used method of lossy compression for digital images, particularly for those images produced
by digital photography. The degree of compression can be adjusted, allowing a selectable tradeoff between storage size
and image quality. JPEG typically achieves 10:1 compression with little perceptible loss in image quality.[2]
JPEG compression is used in a number of image file formats. JPEG/Exif is the most common image format used
by digital cameras and other photographic image capture devices; along with JPEG/JFIF, it is the most common format
for storing and transmitting photographic images on the World Wide Web.[3] These format variations are often not
distinguished, and are simply called JPEG.
The term "JPEG" is an initialism/acronym for the Joint Photographic Experts Group, which created the standard.
The MIME media type for JPEG is image/jpeg, except in older Internet Explorer versions, which provide a MIME type
of image/pjpeg when uploading JPEG images.[4] JPEG files usually have a filename extension of .jpg or .jpeg.
JPEG/JFIF supports a maximum image size of 65,535×65,535 pixels,[5] hence up to 4 gigapixels for an aspect ratio of 1:1.

JPEG stands for Joint Photographic Experts Group. It is the first international standard in image compression, and it is widely
used today. It can be lossy as well as lossless, but the technique we are going to discuss here is the lossy
compression technique.
How JPEG compression works
The first step is to divide the image into blocks, each with dimensions of 8 x 8.

Let’s for the record, say that this 8x8 image contains the following values.

The pixel intensities originally range from 0 to 255. We shift this to the range -128 to 127 by
subtracting 128 from each pixel value. After subtracting 128 from each of the pixel
values, we get the following results.

Now we will compute using this formula.
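(The formula itself was lost with the figure; assuming the usual textbook notation with the output stored in A(j,k) as described below, it is the standard 8 x 8 forward DCT:

A(j,k) = \frac{1}{4} C(j)\, C(k) \sum_{x=0}^{7} \sum_{y=0}^{7} p(x,y)\, \cos\left[\frac{(2x+1)j\pi}{16}\right] \cos\left[\frac{(2y+1)k\pi}{16}\right]

where C(0) = 1/\sqrt{2}, C(u) = 1 for u > 0, and p(x,y) is the level-shifted pixel value.)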


The result of this computation is stored in, let's say, the matrix A(j,k).
There is a standard matrix used for quantization in JPEG compression, called the luminance
quantization matrix.
This matrix is given below.

We get the following result after applying quantization with this matrix.

Now we will perform the real trick of JPEG compression, the zig-zag movement. The zig-zag
sequence for the above matrix is shown below. You perform the zig-zag scan until only zeroes lie ahead. Hence our
image is now compressed.

Summarizing JPEG compression


The first step is to convert the image to Y'CbCr, pick just the Y' channel, and break it into 8 x 8 blocks. Then, starting
from the first block, map the range from -128 to 127. After that, compute the discrete cosine transform (DCT) of the
matrix. The result of this should be quantized. The last step is to apply encoding in the zig-zag manner until all
remaining coefficients are zero.
Save this one dimensional array and you are done.
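The zig-zag step can be illustrated with a short sketch (a hypothetical helper: it walks the anti-diagonals of an 8 x 8 block, alternating direction, and drops the trailing run of zeros as described above):

def zigzag(block):
    # Traverse an 8x8 coefficient block in JPEG zig-zag order by
    # grouping indices on each anti-diagonal (i + j constant).
    n = 8
    order = []
    for s in range(2 * n - 1):
        diag = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        if s % 2 == 0:
            diag.reverse()           # even diagonals run upward
        order.extend(diag)
    seq = [block[i][j] for i, j in order]
    while seq and seq[-1] == 0:      # drop the trailing zeros
        seq.pop()
    return seq

block = [[0] * 8 for _ in range(8)]
block[0][0], block[0][1], block[1][0] = 52, -3, 7
print(zigzag(block))                 # [52, -3, 7]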

DEC-JAN 2018/2019


1.
A. Bring out the design issues of network layer . compare VC and datagram subnet?
B.Explain different network topologies?

2.
A. Differentiate static routing and dynamic routing?
Ans: Repeated (the same static vs. dynamic routing comparison as in June-2015, Q2-A above).

B. Describe the following algorithm of network layer: 400 PG


i). Dijkstra routing: 390
ii).Broadcast routing:
iii). Multicast routing:

3.
A. How is congestion controlled by using hop-by-hop choke packets and random early detection techniques? Explain.
Ans: Random Early Detection
Dealing with congestion when it first starts is more effective than letting it gum up the works and then trying to deal with
it. This observation leads to an interesting twist on load shedding, which is to discard packets before all the buffer space
is really exhausted. The motivation for this idea is that most Internet hosts do not yet get congestion signals from routers
in the form of ECN. Instead, the only reliable indication of congestion that hosts get from the network is packet loss.
After all, it is difficult to build a router that does not drop packets when it is overloaded. Transport protocols such as TCP
are thus hardwired to react to loss as congestion, slowing down the source in response. The reasoning behind this logic is
that TCP was designed for wired networks and wired networks are very reliable, so lost packets are mostly due to buffer
overruns rather than transmission errors. Wireless links must recover transmission errors at the link layer (so they are not
seen at the network layer) to work well with TCP. This situation can be exploited to help reduce congestion. By having
routers drop packets early, before the situation has become hopeless, there is time for the source to take action before it is
too late. A popular algorithm for doing this is called RED (Random Early Detection) (Floyd and Jacobson, 1993). To
determine when to start discarding, routers maintain a running average of their queue lengths. When the average queue
length on some link exceeds a threshold, the link is said to be congested and a small fraction of the packets are dropped at
random. Picking packets at random makes it more likely that the fastest senders will see a packet drop; this is the best
option since the router cannot tell which source is causing the most trouble in a datagram network. The affected sender
will notice the loss when there is no acknowledgement, and then the transport protocol will slow down. The lost packet is
thus delivering the same message as a choke packet, but implicitly, without the router sending any explicit signal. RED
routers improve performance compared to routers that drop packets only when their buffers are full, though they may
require tuning to work well. For example, the ideal number of packets to drop depends on how many senders need to be
notified of congestion. However, ECN is the preferred option if it is available. It works in exactly the same manner, but
delivers a congestion signal explicitly rather than as a loss; RED is used when hosts cannot receive explicit signals.
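A minimal sketch of the RED drop decision described above (min_th, max_th and max_p are hypothetical tuning parameters; a real implementation also maintains the running average queue length and counts packets since the last drop):

import random

def red_drop(avg_queue, min_th, max_th, max_p):
    # Classic RED: no drops below min_th, certain drop above max_th,
    # and a probability rising linearly between the two thresholds.
    if avg_queue < min_th:
        return False
    if avg_queue >= max_th:
        return True
    p = max_p * (avg_queue - min_th) / (max_th - min_th)
    return random.random() < p

print(red_drop(avg_queue=30, min_th=20, max_th=40, max_p=0.1))  # True ~5% of the time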

B.Distinguish between leaky bucket and token bucket and describe how the good quality of service is achieved by these
algorithm?
Ans:

Weighted Fair Queuing: A better scheduling method is weighted fair queuing. In this technique, the packets are still
assigned to different classes and admitted to different queues. The queues, however, are weighted based on the priority of
the queues; higher priority means a higher weight. The system processes packets in each queue in a round-robin fashion
with the number of packets selected from each queue based on the corresponding weight.
• Traffic Shaping :
Traffic shaping is a mechanism to control the amount and the rate of the traffic sent to the network. Two
techniques can shape traffic: leaky bucket and token bucket.
1) Leaky Bucket 2)Token Bucket
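The key difference is that the leaky bucket forces a constant output rate, while the token bucket lets an idle host accumulate credit that permits a later burst. A minimal token-bucket sketch (tick-based, with hypothetical parameter values):

def token_bucket(packets, rate, capacity, tokens=0):
    # Tokens arrive at `rate` per tick, capped at `capacity`; a packet
    # of size s departs only when s tokens are available (and consumes
    # them), so saved-up tokens allow bursts.
    departures = []
    t = 0
    for size in packets:
        while tokens < size:         # wait for enough tokens
            t += 1
            tokens = min(capacity, tokens + rate)
        tokens -= size
        departures.append(t)
    return departures

# Three 1000-byte packets, 500 tokens/tick, bucket capacity 2000:
print(token_bucket([1000, 1000, 1000], rate=500, capacity=2000))  # [2, 4, 6]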
5.
A. Why the flow control and buffering is required in transport layer? Explain how it is done?
Ans: The main reason for flow control at the transport layer is to avoid and control congestion. The sending
and the receiving entity can adjust the rate, thereby helping to reduce end-to-end traffic congestion. This is
very much applicable to TCP.

Flow Control basically means that TCP will ensure that a sender is not overwhelming a receiver by sending packets
faster than it can consume. It’s pretty similar to what’s normally called Back pressure in the Distributed Systems
literature. The idea is that a node receiving data will send some kind of feedback to the node sending the data to let it
know about its current condition.
It’s important to understand that this is not the same as Congestion Control. Although there’s some overlap between the
mechanisms TCP uses to provide both services, they are distinct features. Congestion control is about preventing a node
from overwhelming the network (i.e. the links between two nodes), while Flow Control is about the end-node.
How it works
When we need to send data over a network, this is normally what happens.
The sender application writes data to a socket, the transport layer (in our case, TCP ) will wrap this data in a segment and
hand it to the network layer (e.g. IP ), that will somehow route this packet to the receiving node.
On the other side of this communication, the network layer will deliver this piece of data to TCP, which will make it
available to the receiver application as an exact copy of the data sent, meaning it will not deliver packets out of order, and
will wait for a retransmission in case it notices a gap in the byte stream.
If we zoom in, we will see something like this.
TCP stores the data it needs to send in the send buffer, and the data it receives in the receive buffer. When the application
is ready, it will then read data from the receive buffer.
Flow Control is all about making sure we don’t send more packets when the receive buffer is already full, as the receiver
wouldn’t be able to handle them and would need to drop these packets.
To control the amount of data that TCP can send, the receiver will advertise its Receive Window (rwnd), that is, the spare
room in the receive buffer.

Every time TCP receives a packet, it needs to send an ack message to the sender, acknowledging it received that packet
correctly, and with this ack message it sends the value of the current receive window, so the sender knows if it can keep
sending data.
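A minimal sketch of the sender-side check this implies (hypothetical byte counters; real TCP tracks these per connection):

def usable_window(rwnd, last_byte_sent, last_byte_acked):
    # Flow control invariant: unacknowledged bytes in flight must
    # never exceed the receiver's advertised window (rwnd).
    in_flight = last_byte_sent - last_byte_acked
    return max(0, rwnd - in_flight)  # bytes we may still send now

print(usable_window(rwnd=65535, last_byte_sent=120000, last_byte_acked=80000))
# 25535 -- with 40000 bytes in flight, only 25535 more may be sent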

B. Explain the TCP segment header format and discuss how the TCP connection is established and released in the transport layer?

6.
A. VPN
B. Explain the differences between integrated services and differentiated services and their uses?
Ans:uses not got

8.
A. Explain Domain Name system?
B. Describe SMTP protocol?

Dec- Jan 2017/2016

1.
A. compare VC And datagram subnet?
B. Explain distance vector routing protocol with an example subnet and discuss count to infinity problem?
Ans: A distance-vector routing (DVR) protocol requires that a router inform its neighbors of topology changes
periodically. Historically known as the old ARPANET routing algorithm (or known as Bellman-Ford algorithm).
Bellman Ford Basics – Each router maintains a Distance Vector table containing the distance between itself and ALL
possible destination nodes. Distances, based on a chosen metric, are computed using information from the neighbors'
distance vectors.
Information kept by a DV router -
• Each router has an ID.
• Associated with each link connected to a router, there is a link cost (static or dynamic).
• Intermediate hops.

Distance Vector Table Initialization -


• Distance to itself = 0
• Distance to ALL other routers = infinity.

Distance Vector Algorithm –


1. A router transmits its distance vector to each of its neighbors in a routing packet.
2. Each router receives and saves the most recently received distance vector from each of its neighbors.
3. A router recalculates its distance vector when:
• It receives a distance vector from a neighbor containing different information than before.
• It discovers that a link to a neighbor has gone down.
The DV calculation is based on minimizing the cost to each destination
Dx(y) = Estimate of least cost from x to y
C(x,v) = Node x knows cost to each neighbor v
Dx = [Dx(y): y ∈ N ] = Node x maintains distance vector
Node x also maintains its neighbors' distance vectors
– For each neighbor v, x maintains Dv = [Dv(y): y ∈ N ]
Note –

• From time-to-time, each node sends its own distance vector estimate to neighbors.
• When a node x receives new DV estimate from any neighbor v, it saves v’s distance vector and it updates its own
DV using B-F equation:
• Dx(y) = min { C(x,v) + Dv(y)} for each node y ∈ N
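A minimal sketch of that update (hypothetical dicts standing in for the router's tables; the example costs follow a classic three-node topology):

INF = float("inf")

def dv_update(self_id, cost_to_neighbors, neighbor_vectors, nodes):
    # Recompute x's distance vector: Dx(y) = min over neighbors v
    # of { c(x,v) + Dv(y) }, with distance to itself fixed at 0.
    dx = {self_id: 0}
    for y in nodes:
        if y == self_id:
            continue
        dx[y] = min(
            (cost_to_neighbors[v] + neighbor_vectors[v].get(y, INF)
             for v in cost_to_neighbors),
            default=INF,
        )
    return dx

# x's direct neighbors: y at cost 2, z at cost 7; c(y,z) = 1.
cost = {"y": 2, "z": 7}
vectors = {"y": {"x": 2, "y": 0, "z": 1}, "z": {"x": 7, "y": 1, "z": 0}}
print(dv_update("x", cost, vectors, ["x", "y", "z"]))  # {'x': 0, 'y': 2, 'z': 3}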
Example – Consider 3 routers X, Y and Z as shown in the figure. Each router has its own routing table, and every routing table
contains the distance to the destination nodes.
Consider router X: X shares its routing table with its neighbors, and the neighbors share their routing tables with X; the
distance from node X to each destination is then calculated using the Bellman-Ford equation.
Dx(y) = min { C(x,v) + Dv(y)} for each node y ∈ N
As we can see, the distance from X to Z is smaller when Y is the intermediate node (hop), so that entry is updated in
X's routing table.

Similarly for Z also –

Finally the routing table for all –


Advantages of Distance Vector routing –
• It is simpler to configure and maintain than link state routing.
Disadvantages of Distance Vector routing –
• It is slower to converge than link state.
• It is at risk from the count-to-infinity problem.
• It creates more traffic than link state since a hop count change must be propagated to all routers and processed on
each router. Hop count updates take place on a periodic basis, even if there are no changes in the network topology,
so bandwidth-wasting broadcasts still occur.
• For larger networks, distance vector routing results in larger routing tables than link state since each router must
know about all other routers. This can also lead to congestion on WAN links.

The main issue with Distance Vector Routing (DVR) protocols is Routing Loops, since Bellman-Ford Algorithm cannot
prevent loops. This routing loop in DVR network causes Count to Infinity Problem. Routing loops usually occur when
any interface goes down or two routers send updates at the same time.
Counting to infinity problem:

So in this example, the Bellman-Ford algorithm will converge for each router, they will have entries for each other. B will
know that it can get to C at a cost of 1, and A will know that it can get to C via B at a cost of 2.

If the link between B and C is disconnected, then B will know that it can no longer get to C via that link and will remove
it from its table. Before it can send any updates, it is possible that it will receive an update from A which will be
advertising that it can get to C at a cost of 2. B can get to A at a cost of 1, so it will update a route to C via A at a cost of 3.
A will then receive updates from B later and update its cost to 4. They will then go on feeding each other bad information
toward infinity which is called as Count to Infinity problem.

Solution for Count to Infinity problem:-

Route Poisoning:
When a route fails, distance vector protocols spread the bad news about a route failure by poisoning the route. Route
poisoning refers to the practice of advertising a route, but with a special metric value called Infinity. Routers consider
routes advertised with an infinite metric to have failed. Each distance vector routing protocol uses the concept of an actual
metric value that represents infinity. RIP defines infinity as 16. The main disadvantage of poison reverse is that it can
significantly increase the size of routing announcements in certain fairly common network topologies.

Split horizon:
If the link between B and C goes down, and B had received a route from A , B could end up using that route via A. A
would send the packet right back to B, creating a loop. But according to Split horizon Rule, Node A does not advertise its
route for C (namely A to B to C) back to B. On the surface, this seems redundant since B will never route via node A
because the route costs more than the direct route from B to C.
Consider the following network topology showing Split horizon-

• In addition to these, we can also use split horizon with route poisoning, where both of the above techniques are used
together to achieve efficiency and limit the increase in the size of routing announcements.
• Split horizon with Poison reverse technique is used by Routing Information Protocol (RIP) to reduce routing loops.
Additionally, Holddown timers can be used to avoid the formation of loops. Holddown timer immediately starts
when the router is informed that attached link is down. Till this time, router ignores all updates of down route
unless it receives an update from the router of that downed link. During the timer, If the down link is reachable
again, routing table can be updated.

2.
A. Explain ad Hoc routing algorithm with discovery and route maintenance stages?
Ans: 413

B. Explain broadcast and multicast routing protocols?

3.
A. i). Hop by hop choke packet
ii). Load shedding: 425
iii). Jitter control: 574

B. A computer on a 6-Mbps network is regulated by a token bucket. The token bucket is filled at a rate of 1 Mbps and is
initially filled to capacity with 8 megabits. How long can the computer transmit at the full 6 Mbps?
Ans:
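Let the burst length be S seconds, with bucket capacity C = 8 megabits, token arrival rate ρ = 1 Mbps and maximum output rate M = 6 Mbps. During the burst the computer sends M·S bits, which must equal the initial bucket contents plus the tokens that arrive during the burst:
C + ρS = MS
S = C / (M - ρ) = 8 / (6 - 1) = 1.6 seconds.
So the computer can transmit at the full 6 Mbps for 1.6 s.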

5.
A. Explain with a neat diagram TCP header. What is the total size of minimum TCP MTU including TCP and IP
overhead but not including datalink layer overhead?
Ans: The default segment is 536 bytes. TCP adds 20 bytes and so does IP, making the default 576 bytes.

B. Explain flow control and buffering technique used in transport layer?

C. Write a note on transport service primitive?


Ans: A service is specified by a set of primitives. A primitive means an operation. To access the service, a user process can
access these primitives. These primitives are different for connection oriented service and connectionless service.
There are five types of service primitives:

1. LISTEN : When a server is ready to accept an incoming connection, it executes the LISTEN primitive. It blocks
waiting for an incoming connection.
2. CONNECT : It connects to the server by establishing a connection. A response is awaited.
3. RECEIVE : Then the RECEIVE call blocks the server while it waits for a request.
4. SEND : Then the client executes the SEND primitive to transmit its request, followed by the execution of RECEIVE
to get the reply.
5. DISCONNECT : This primitive is used for terminating the connection. After this primitive one can't send any
message. When the client sends a DISCONNECT packet, the server also sends a DISCONNECT packet to
acknowledge the client. When the server's packet is received by the client, the process is terminated.
Connection Oriented Service Primitives

• There are 4 types of primitives for Connection Oriented Service :


CONNECT This primitive makes a connection

DATA, DATA-ACKNOWLEDGE, EXPEDITED-DATA Data and information are sent using this primitive

DISCONNECT Primitive for closing the connection

RESET Primitive for resetting the connection


Connectionless Oriented Service Primitives

• There are 2 types of primitives for Connectionless Oriented Service:

UNIDATA This primitive sends a packet of data

FACILITY, REPORT Primitive for enquiring about the performance of the network, like delivery statistics.
• Consider an application with a server and a number of remote clients.
a. To start with, the server executes a LISTEN primitive, typically by calling a library procedure that makes a
system call to block the server until a client turns up.
b. For lack of a better term, we will reluctantly use the somewhat ungainly acronym TPDU (Transport Protocol
Data Unit) for a message sent from transport entity to transport entity.
c. Thus, TPDUs (exchanged by the transport layer) are contained in packets (exchanged by the network layer).
d. In turn, packets are contained in frames (exchanged by the data link layer).
e. When a frame arrives,the data link layer processes the frame header and passes the contents of the frame
payload field up to the network entity.

f. When a client wants to talk to the server, it executes a CONNECT primitive.


g. The transport entity carries out this primitives by blocking the caller and sending a packet to the server.
h. Encapsulated in the payload of this packet is a transport layer message for the server’s transport entity.
i. The client’s CONNECT call causes a CONNECTION REQUEST TPDU to be sent to the server.
j. When it arrives, the transport entity checks to see that the server is blocked on a LISTEN.
k. It then unblocks the server and sends a CONNECTION ACCEPTED TPDU back to the client.
l. When this TPDU arrives, the client is unblocked and the connection is established. Data can now be exchanged using
the SEND and RECEIVE primitives.
m. In the simplest form, either party can do a(blocking)RECEIVE to wait for the other party to do a SEND. When the
TPDU arrives, the receiver is unblocked.
n. It can then process the TPDU and send a reply. As long as both sides can keep track of whose turn it is to send, this
scheme works fine.
o. When a connection is no longer needed, it must be released to free up table space within the two transport entities.

6.
A. Explain TCP connection establishment. Write a note on the silly window syndrome problem?
Ans: Silly window syndrome (SWS) is a problem in computer networking caused by poorly implemented TCP flow control. A
serious problem can arise in the sliding window operation when the sending application program creates data slowly, the
receiving application program consumes data slowly, or both. If a server with this problem is unable to process all
incoming data, it requests that its clients reduce the amount of data they send at a time (by shrinking the window
advertised in its TCP segments). If the server continues to be unable to process all incoming data, the window becomes
smaller and smaller, sometimes to the point that the data transmitted is smaller than the packet header, making data
transmission extremely inefficient. The name of the problem comes from the window size shrinking to a "silly" value.
Since there is a certain amount of overhead associated with processing each packet, the increased number of packets
means increased overhead to process a decreasing amount of data. The end result is thrashing.
When there is no synchronization between the sender and receiver regarding the capacity of the data flow or the size of
the packets, the silly window syndrome arises. When the syndrome is created by the sender, Nagle's algorithm is used:
the sender transmits the first segment even if it is a small one, and then waits until an ACK is received or a
maximum-sized segment (one MSS) has accumulated. When the syndrome is created by the receiver, David D. Clark's
solution is used: it keeps the window closed until either another segment of maximum segment size (MSS) can be
received or the buffer is half empty.
There are 3 causes of SWS:

1. When the server advertises its free buffer space as 0
2. When the client is able to generate only 1 byte of data at a time
3. When the server is able to consume only 1 byte of data at a time
During SWS the efficiency of communication is almost 0, so the duration of SWS should be kept as short as possible.
Send-side silly window avoidance
A heuristic method in which the sending TCP lets the sending application make "write" calls and collects the data
transferred in each call before transmitting it as one larger segment. The sending TCP delays sending segments until it
can accumulate a reasonable amount of data, which is known as clumping. Nagle's algorithm, sketched below, is the
classic form of this heuristic.
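Below is a simplified, non-authoritative Python sketch of Nagle's algorithm; the class, its fields, and the transmit()
helper are illustrative assumptions, not part of any real TCP stack's API.

def transmit(segment):
    # Stand-in for handing a segment down to the network layer.
    print("transmitting", len(segment), "bytes")

class NagleSender:
    def __init__(self, mss):
        self.mss = mss               # maximum segment size
        self.buffer = b""            # application data not yet sent
        self.unacked_bytes = 0       # bytes sent but not yet acknowledged

    def app_write(self, data):       # the application's "write" call
        self.buffer += data
        self._try_send()

    def on_ack(self, acked_bytes):   # an ACK arrival may release clumped data
        self.unacked_bytes -= acked_bytes
        self._try_send()

    def _try_send(self):
        # Send when a full segment has accumulated, or when nothing is in
        # flight (which also lets the very first small segment go out).
        while self.buffer and (len(self.buffer) >= self.mss
                               or self.unacked_bytes == 0):
            segment, self.buffer = self.buffer[:self.mss], self.buffer[self.mss:]
            self.unacked_bytes += len(segment)
            transmit(segment)

Real TCP stacks enable this behavior by default; latency-sensitive applications can disable it with the standard
TCP_NODELAY socket option.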
Receive-side silly window avoidance
A heuristic method in which the receiver maintains an internal record of the available window and delays advertising an
increase in window size to the sender until the window can advance by a significant amount. This amount depends on the
receiver's buffer size and the maximum segment size. This method prevents small window advertisements when the
receiving application extracts data octets slowly; Clark's rule is sketched below.
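A compact sketch of Clark's rule in Python (the function name and parameters are illustrative assumptions, not from the
original text):

def advertised_window(free_space, buffer_size, mss):
    # Clark's receive-side SWS avoidance (illustrative sketch): advertise a
    # non-zero window only once free buffer space reaches one MSS or half
    # the buffer, whichever is smaller; until then advertise zero so the
    # sender holds back small segments.
    if free_space >= min(mss, buffer_size // 2):
        return free_space
    return 0

For example, with a 4096-byte buffer and an MSS of 1460, advertised_window(100, 4096, 1460) returns 0, while
advertised_window(2048, 4096, 1460) returns 2048.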
B. Explain the approaches used to implement Integrated and differentiated services?
Ans: not got

8.
i).Explain the role of DNS in application layer?
An application-layer protocol defines how applications on different systems pass messages to each other. An application-
layer protocol defines: the types of messages exchanged, the syntax of the various message types, the meaning of the
information, and the rules for determining when and how a process sends and responds to messages.

One application layer protocol is the Domain Name System (DNS), a name-resolution system critical to the functioning of
the World Wide Web (WWW) and its services; it is responsible for translating fully qualified domain names, such
as www.zymitry.com, into machine-readable IP addresses. The Domain Name System is what allows users to use
alphanumeric names to navigate the WWW, email systems, FTP services, and others, instead of having to use these
systems' Internet Protocol (IP) addresses. DNS differs from most other application protocols in that users usually have
no direct interaction with it; instead, applications such as web browsers and FTP clients use it behind the scenes. The
Domain Name System provides the name translation used by these services, as the short sketch below illustrates.

ii).Explain SNMP and data types supported by SNMP?


Simple Network Management Protocol (SNMP) is an Internet Standard protocol for collecting and organizing
information about managed devices on IP networks and for modifying that information to change device behavior.
Devices that typically support SNMP include cable modems, routers, switches, servers, workstations, printers, and
more.[1]
SNMP is widely used in network management for network monitoring. SNMP exposes management data in the form of
variables on the managed systems, organized in a management information base (MIB), which describes the system status
and configuration. These variables can then be remotely queried (and, in some circumstances, manipulated) by managing
applications, as the sketch below illustrates.
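As an illustrative sketch only: an SNMP GET of the standard sysDescr variable using the third-party pysnmp library's
high-level API, as commonly documented (the agent address and the "public" community string are placeholders, and the
exact API varies between pysnmp versions):

from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

# Query sysDescr.0 from a (placeholder) SNMPv2c agent at 192.0.2.1.
error_indication, error_status, error_index, var_binds = next(
    getCmd(SnmpEngine(),
           CommunityData('public', mpModel=1),       # SNMPv2c community
           UdpTransportTarget(('192.0.2.1', 161)),   # placeholder agent address
           ContextData(),
           ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0))))

if error_indication:
    print(error_indication)          # e.g., request timed out
else:
    for name, value in var_binds:
        print(name, "=", value)      # the device's description string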
Three significant versions of SNMP have been developed and deployed. SNMPv1 is the original version of the protocol.
More recent versions, SNMPv2c and SNMPv3, feature improvements in performance, flexibility and security.
SNMP is a component of the Internet Protocol Suite as defined by the Internet Engineering Task Force (IETF). It consists
of a set of standards for network management, including an application layer protocol, a database schema, and a set
of data objects.[2]

Data types: SNMP's Structure of Management Information (SMI) defines the data types that MIB variables may take. The
simple (universal) types are INTEGER, OCTET STRING, OBJECT IDENTIFIER, and NULL; the application-wide types include
IpAddress, Counter, Gauge, TimeTicks, and Opaque (SMIv2 further adds types such as Counter32, Counter64, Gauge32, and
Unsigned32).

iii).Describe SMTP protocol used for email services?


Ans: Email is one of the most valuable services on the internet today. Most internet systems use SMTP to transfer mail
from one user to another. SMTP is a push protocol used to send mail, whereas POP (Post Office Protocol) and IMAP
(Internet Message Access Protocol) are used to retrieve those mails at the receiver’s side.
SMTP Fundamentals
SMTP is an application layer protocol. The client that wants to send mail opens a TCP connection to the SMTP
server and then sends the mail across that connection. The SMTP server is always in listening mode on port 25; as soon
as a client initiates a TCP connection on that port, the SMTP process accepts it. After successfully establishing the TCP
connection, the client process sends the mail immediately.
SMTP Protocol
The SMTP model is of two types:
1. End-to-end method
2. Store-and-forward method
The end-to-end model is used to communicate between different organizations, whereas the store-and-forward method is
used within an organization. An SMTP client that wants to send mail contacts the destination host’s SMTP server directly
in order to deliver the mail; the sending SMTP server keeps the mail until it has been successfully copied to the
receiver’s SMTP server.
The SMTP entity that initiates the session is called the client-SMTP, and the one that responds to the session request is
called the receiver-SMTP: the client-SMTP starts the session and the receiver-SMTP responds to its requests. A minimal
sending sketch follows below.
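As a small illustration of the push model described above, here is a minimal sending sketch using Python's standard
smtplib; the server name and addresses are placeholders, not values from the text:

import smtplib
from email.message import EmailMessage

# Build a simple message; all addresses are illustrative placeholders.
msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Hello via SMTP"
msg.set_content("SMTP pushes this message over a TCP connection to port 25.")

# Open a TCP connection to the (placeholder) SMTP server on port 25 and push
# the message; smtplib handles the HELO/MAIL FROM/RCPT TO/DATA dialogue.
with smtplib.SMTP("mail.example.com", 25) as server:
    server.send_message(msg)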
