June 2015
1.
A. Explain store-and-forward packet switching.
Ans: The major components of the network are the ISP’s equipment (routers connected by transmission lines), shown inside
the shaded oval, and the customers’ equipment, shown outside the oval. Host H1 is directly connected to one of the ISP’s
routers, A, perhaps as a home computer that is plugged into a DSL modem. In contrast, H2 is on a LAN, which might be an
office Ethernet, with a router, F,
owned and operated by the customer. This router has a leased line to the ISP’s equipment. We have shown F as being
outside the oval because it does not belong to the ISP. For the purposes of this chapter, however, routers on customer
premises are considered part of the ISP network because they run the same algorithms as the ISP’s routers (and our main
concern here is algorithms). This equipment is used as follows. A host with a packet to send transmits it to the nearest
router, either on its own LAN or over a point-to-point link to the ISP. The packet is stored there until it has fully arrived
and the link has finished its processing by verifying the checksum. Then it is forwarded to the next router along the path
until it reaches the destination host, where it is delivered. This mechanism is store-and-forward packet switching, as we
have seen in previous chapters.
B. Briefly discuss the various design issues of the network layer.
Ans: i). Store-and-forward packet switching:
• A host with a packet to send transmits it to the nearest router.
• The packet is stored there until it has fully arrived.
• The link finishes its processing by verifying the checksum.
• The packet is then forwarded to the next router along the path until it reaches the destination host.
• This mechanism is store-and-forward packet switching.
ii). Services provided to the transport layer: the network layer services have been designed with the following goals:
• The services should be independent of the router technology.
• The transport layer should be shielded from the number, type, and topology of the routers present.
• The network addresses made available to the transport layer should use a uniform numbering plan, even across LANs and WANs.
iii). Implementation of connectionless service: if connectionless service is offered, packets are injected into the network individually and routed independently of each other. No advance setup is needed. In this context, the packets are frequently called datagrams.
iv). Implementation of connection-oriented service: if connection-oriented service is used, a path from the source router all the way to the destination router must be established before any data packets can be sent. This connection is called a virtual circuit.
v). Comparison of virtual-circuit (VC) and datagram networks.
SPIN protocol
Node 1 sends an ADV message to all its neighbors, 2 and 3. Node 3 requests the data using a REQ message, in response to which node 1 sends the data to node 3 in a DATA message. After receiving the data, node 3 sends an ADV message to its neighbors 4 and 5, and the process continues. It does not advertise back to node 1, because node 3 knows that it received the data from node 1.
The data is described in the ADV packet using high-level data descriptors, which are good enough to identify the data. These high-level data descriptors are called meta-data. The meta-data of two different data items should be different, and the meta-data of two similar data items should be similar. The use of meta-data prevents the actual data from being flooded throughout the network: the actual data is given only to the nodes that need it. This protocol also makes nodes more intelligent; every node has a resource manager, which informs the node about the amount of each resource left in the node. Accordingly, the node can decide whether it can act as a forwarding node or not.
ZigBee
ZigBee is an open global standard for wireless technology designed to use low-power digital radio signals for personal
area networks. ZigBee operates on the IEEE 802.15.4 specification and is used to create networks that require a low data
transfer rate, energy efficiency and secure networking. It is employed in a number of applications such as building
automation systems, heating and cooling control and in medical devices.
ZigBee is designed to be simpler and less expensive than other personal area network technologies such as
Bluetooth. ZigBee is a cost- and energy-efficient wireless network standard. It employs a mesh network topology, allowing
it to provide high reliability and a reasonable range.
One of ZigBee's defining features is the secure communications it is able to provide. This is accomplished through the use
of 128-bit cryptographic keys. This system is based on symmetric keys, which means that both the recipient and
originator of a transaction need to share the same key. These keys are either pre-installed, transported by a "trust center"
designated within the network or established between the trust center and a device without being transported. Security in a
personal area network is most crucial when ZigBee is used in corporate or manufacturing networks.
3.
A. Distinguish between the leaky bucket and token bucket congestion control algorithms.
Ans: • Traffic shaping (also referred to as packet shaping) is the technique of delaying and restricting certain packets
traveling through a network to increase the performance of packets that have been given priority.
• Classes are defined to separate the packets into groupings so that they can each be shaped separately allowing some
classes to pass through a network more freely than others. Traffic shapers are usually placed at the boundaries of a
network to shape the traffic either entering or leaving the network.
• Traffic shaping is a mechanism to control the amount and rate of the traffic sent to the network. The two traffic shaping
techniques are:
i. Leaky Bucket Algorithm
• A leaky bucket is a bucket with a hole at the bottom. Water flows out of the bucket at a constant rate, independent
of the rate at which water enters the bucket. If the bucket is full, any additional water entering the bucket is thrown out.
• The same technique is applied to control congestion in network traffic. Every host in the network has a buffer with a
finite queue length.
• Packets arriving when the buffer is full are thrown away. The buffer may drain onto the subnet either at some number
of packets per unit time, or at some total number of bytes per unit time.
• A FIFO queue is used for holding the packets.
• If the arriving packets are of fixed size, then the process removes a fixed number of packets from the queue at each tick
of the clock.
• If the arriving packets are of different sizes, then the fixed output rate will not be based on the number of departing
packets.
• Instead, it will be based on the number of departing bytes or bits.
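As a sketch, the byte-based variant above can be written as a small class. This is a hypothetical minimal implementation (class name, capacity, and drain rate are arbitrary illustrative choices), not any particular router's shaper:

```python
from collections import deque

class LeakyBucket:
    """Minimal byte-based leaky bucket: finite FIFO queue, fixed drain rate."""

    def __init__(self, capacity_bytes, drain_bytes_per_tick):
        self.capacity = capacity_bytes
        self.rate = drain_bytes_per_tick
        self.queue = deque()          # FIFO of packet sizes, as described above
        self.used = 0
        self.dropped = 0

    def arrive(self, size):
        if self.used + size > self.capacity:   # bucket full: packet is thrown away
            self.dropped += 1
        else:
            self.queue.append(size)
            self.used += size

    def tick(self):
        """Drain up to `rate` bytes onto the subnet; returns bytes sent."""
        budget, sent = self.rate, 0
        while self.queue and self.queue[0] <= budget:
            size = self.queue.popleft()
            budget -= size
            self.used -= size
            sent += size
        return sent

lb = LeakyBucket(capacity_bytes=1000, drain_bytes_per_tick=500)
for size in [400, 400, 400]:      # a burst: the third packet overflows the bucket
    lb.arrive(size)
out = [lb.tick() for _ in range(4)]
print(out, lb.dropped)            # → [400, 400, 0, 0] 1
```

The key property is visible in the output: a three-packet burst leaves the bucket as a smooth one-packet-per-tick flow, with the overflow dropped at the entrance.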
Comparison of Token Bucket and Leaky Bucket Algorithms:
• Leaky bucket: when the host has to send a packet, the packet is thrown into the bucket.
• Token bucket: the bucket holds tokens generated at regular intervals of time.
The token bucket algorithm is based on an analogy of a fixed capacity bucket into which tokens, normally representing a
unit of bytes or a single packet of predetermined size, are added at a fixed rate. When a packet is to be checked for
conformance to the defined limits, the bucket is inspected to see if it contains sufficient tokens at that time. If so, the
appropriate number of tokens, e.g. equivalent to the length of the packet in bytes, are removed ("cashed in"), and the
packet is passed, e.g., for transmission. The packet does not conform if there are insufficient tokens in the bucket, and the
contents of the bucket are not changed. Non-conformant packets can be treated in various ways:
• They may be dropped.
• They may be enqueued for subsequent transmission when sufficient tokens have accumulated in the bucket.
• They may be transmitted, but marked as being non-conformant, possibly to be dropped subsequently if the network is
overloaded.
A conforming flow can thus contain traffic with an average rate up to the rate at which tokens are added to the bucket,
and have a burstiness determined by the depth of the bucket. This burstiness may be expressed in terms of either a jitter
tolerance, i.e. how much sooner a packet might conform (e.g. arrive or be transmitted) than would be expected from the
limit on the average rate, or a burst tolerance or maximum burst size, i.e. how much more than the average level of traffic
might conform in some finite period.
Algorithm
The token bucket algorithm can be conceptually understood as follows:
• The bucket can hold at most b tokens, where b is its capacity. If a token arrives when the bucket is full, it is discarded.
• When a packet (network layer PDU) of n bytes arrives, n tokens are removed from the bucket, and the packet is sent to
the network.
• If fewer than n tokens are available, no tokens are removed from the bucket, and the packet is considered to be non-
conformant.
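The three rules above can be sketched directly. This is a minimal illustrative version (the values of b and r are arbitrary; one token pays for one byte, per the description above):

```python
class TokenBucket:
    """Token bucket conformance check: bucket of capacity b tokens,
    refilled at r tokens per tick; a packet of n bytes costs n tokens."""

    def __init__(self, b, r):
        self.b = b            # bucket capacity in tokens
        self.r = r            # tokens added per tick
        self.tokens = b       # start full

    def tick(self):
        # Tokens arriving to a full bucket are discarded.
        self.tokens = min(self.b, self.tokens + self.r)

    def conforms(self, n):
        """Remove n tokens and pass the packet, or mark it non-conformant."""
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False          # bucket unchanged; packet is non-conformant

tb = TokenBucket(b=1000, r=200)
burst = tb.conforms(900)      # a full bucket permits a 900-byte burst
big = tb.conforms(200)        # only 100 tokens left: non-conformant
tb.tick()                     # +200 tokens
late = tb.conforms(200)       # now conformant again
print(burst, big, late)       # → True False True
```

This shows the difference from the leaky bucket in action: an idle period fills the bucket, so a burst up to b bytes can pass at once, while the long-term average rate is still bounded by r tokens per tick.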
Uses
The token bucket can be used in either traffic shaping or traffic policing. In traffic policing, nonconforming packets may
be discarded (dropped) or may be reduced in priority (for downstream traffic management functions to drop if there is
congestion). In traffic shaping, packets are delayed until they conform. Traffic policing and traffic shaping are commonly
used to protect the network against excess or excessively bursty traffic; see bandwidth management and congestion
avoidance. Traffic shaping is commonly used in the network interfaces of hosts to prevent transmissions being discarded
by traffic management functions in the network.
Hop-by-Hop Backpressure
At high speeds or over long distances, many new packets may be transmitted after congestion has been signaled because
of the delay before the signal takes effect. Consider, for example, a host in San Francisco (router A in Fig. 5-26) that is
sending traffic to a host in New York (router D in Fig. 5-26) at the OC-3 speed of 155 Mbps. If the New York host begins
to run out of buffers, it will take about 40 msec for a choke packet to get back to San Francisco to tell it to slow down. An
ECN indication will take even longer because it is delivered via the destination. Choke packet propagation is illustrated as
the second, third, and fourth steps in Fig. 5-26(a). In those 40 msec, another 6.2 megabits will have been sent. Even if the
host in San Francisco completely shuts down immediately, the 6.2 megabits in the pipe will continue to pour in and have
to be dealt with. Only in the seventh diagram in Fig. 5-26(a) will the New York router notice a slower flow. An alternative
approach is to have the choke packet take effect at every hop it passes through, as shown in the sequence of Fig. 5-26(b).
Here, as soon as the choke packet reaches F, F is required to reduce the flow to D. Doing so will require F to devote more
buffers to the connection, since the source is still sending away at full blast, but it gives D immediate relief, like a
headache remedy in a television commercial. In the next step, the choke packet reaches E, which tells E to reduce the
flow to F. This action puts a greater demand on E’s buffers but gives F immediate relief. Finally, the choke packet
reaches A and the flow genuinely slows down. The net effect of this hop-by-hop scheme is to provide quick relief at the
point of congestion, at the price of using up more buffers upstream. In this way, congestion can be nipped in the bud
without losing any packets.
• The RSVP is a signaling protocol, which helps IP to create a flow and to make resource reservation.
• RSVP helps to support multicasting (one-to-many or many-to-many distribution), where data can be sent to a group of
destination computers simultaneously.
For example: IP multicast is a technique for one-to-many communication through an IP infrastructure in the network.
• RSVP can also be used for unicasting (transmitting data to a single destination) to provide resource reservation for
all types of traffic.
1. Path messages:
• The receivers in a flow make the reservation in RSVP, but the receivers do not know the path traveled by the packets
before the reservation. The path is required for the reservation. To solve this problem, RSVP uses path messages.
• A path message travels from the sender and reaches all receivers by multicasting, and the path message stores the
necessary information for the receivers.
2. Resv messages:
After receiving a path message, the receiver sends a Resv message. The Resv message travels to the sender and makes a
resource reservation on the routers that support RSVP.
The Resource Reservation Protocol (RSVP) is a Transport Layer protocol designed to reserve resources across a network
for an integrated services Internet. RSVP operates over an IPv4 or IPv6 Internet Layer and provides receiver-initiated
setup of resource reservations for multicast or unicast data flows with scaling and robustness.
RSVP can be used by either hosts or routers to request or deliver specific levels of quality of service (QoS) for application
data streams or flows. RSVP is not a routing protocol and was designed to interoperate with current and future routing
protocols. RSVP-TE, the traffic engineering extension of RSVP, is becoming more widely accepted nowadays in many
QoS-oriented networks.
The main attributes of RSVP are:
• RSVP requests resources for simplex flows: a traffic stream in only one direction from sender to one or more receivers.
• RSVP is not a routing protocol but works with current and future routing protocols.
• RSVP is receiver oriented: in that the receiver of a data flow initiates and maintains the resource reservation for that
flow.
• RSVP maintains “soft state” of the host and routers’ resource reservations, hence supporting dynamic automatic
adaptation to network changes.
• RSVP provides several reservation styles and allows for future styles to be added to protocol revisions to fit varied
applications.
• RSVP transports and maintains traffic and policy control parameters that are opaque to RSVP.
5.
A. Explain how flow control and buffering are done in the transport layer.
Ans: Transport Layer
o The transport layer is the 4th layer from the top.
o The main role of the transport layer is to provide the communication services directly to the application processes
running on different hosts.
o The transport layer provides a logical communication between application processes running on different hosts.
Although the application processes on different hosts are not physically connected, application processes use the
logical communication provided by the transport layer to send the messages to each other.
o The transport layer protocols are implemented in the end systems but not in the network routers.
o A computer network provides more than one protocol to the network applications. For example, TCP and UDP
are two transport layer protocols that provide different sets of services to the application layer.
o All transport layer protocols provide multiplexing/demultiplexing service. It also provides other services such as
reliable data transfer, bandwidth guarantees, and delay guarantees.
o Each of the applications in the application layer has the ability to send a message by using TCP or UDP. The
application communicates by using either of these two protocols. Both TCP and UDP will then communicate with
the internet protocol in the internet layer. The applications can read and write to the transport layer. Therefore, we
can say that communication is a two-way process.
Services provided by the Transport Layer
The services provided by the transport layer are similar to those of the data link layer. The data link layer provides the
services within a single network while the transport layer provides the services across an internetwork made up of many
networks. The data link layer controls the physical layer while the transport layer controls all the lower layers.
The services provided by the transport layer protocols can be divided into five categories:
o End-to-end delivery
o Addressing
o Reliable delivery
o Flow control
o Multiplexing
End-to-end delivery:
The transport layer transmits the entire message to the destination. Therefore, it ensures the end-to-end delivery of an
entire message from a source to the destination.
Reliable delivery:
The transport layer provides reliability services by retransmitting the lost and damaged packets.
The reliable delivery has four aspects:
o Error control
o Sequence control
o Loss control
o Duplication control
Error Control
o The primary role of reliability is error control. In reality, no transmission is 100 percent error-free.
Therefore, transport layer protocols are designed to provide error-free transmission.
o The data link layer also provides the error handling mechanism, but it ensures only node-to-node error-free
delivery. However, node-to-node reliability does not ensure the end-to-end reliability.
o The data link layer checks for the error between each network. If an error is introduced inside one of the routers,
then this error will not be caught by the data link layer. It only detects those errors that have been introduced
between the beginning and end of the link. Therefore, the transport layer performs the checking for the errors
end-to-end to ensure that the packet has arrived correctly.
Sequence Control
o The second aspect of the reliability is sequence control which is implemented at the transport layer.
o On the sending end, the transport layer is responsible for ensuring that the packets received from the upper layers
can be used by the lower layers. On the receiving end, it ensures that the various segments of a transmission can
be correctly reassembled.
Loss Control
Loss Control is a third aspect of reliability. The transport layer ensures that all the fragments of a transmission arrive at
the destination, not just some of them. On the sending end, all the fragments of a transmission are given sequence numbers by the
transport layer. These sequence numbers allow the receiver's transport layer to identify the missing segments.
Duplication Control
Duplication Control is the fourth aspect of reliability. The transport layer guarantees that no duplicate data arrive at the
destination. Sequence numbers are used to identify lost packets; similarly, they allow the receiver to identify and discard
duplicate segments.
Flow Control
Flow control is used to prevent the sender from overwhelming the receiver. If the receiver is overloaded with too much
data, then the receiver discards the packets and asks for their retransmission. This increases network
congestion and thus reduces system performance. The transport layer is responsible for flow control. It uses the
sliding window protocol, which makes the data transmission more efficient and controls the flow of data so that the
receiver does not become overwhelmed. The sliding window protocol is byte oriented rather than frame oriented.
The transport layer provides a flow control mechanism between adjacent layers of the TCP/IP model. TCP also prevents
data loss due to a fast sender and a slow receiver by imposing flow control techniques. It uses the
sliding window protocol, which is accomplished by the receiver sending a window back to the sender informing it of the
size of data it can receive.
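A byte-oriented sliding window can be sketched as simple sender-side bookkeeping. This is a hypothetical minimal model (class and variable names are ours): the receiver's advertised window caps the sender's unacknowledged bytes.

```python
class SlidingWindowSender:
    """Sender-side view of byte-oriented flow control: at most
    `window` unacknowledged bytes may be outstanding at once."""

    def __init__(self, window):
        self.window = window      # receiver-advertised window, in bytes
        self.base = 0             # oldest unacknowledged byte
        self.next_seq = 0         # next byte to send

    def can_send(self, n):
        return self.next_seq + n - self.base <= self.window

    def send(self, n):
        if not self.can_send(n):
            return False          # would overwhelm the receiver: hold the data
        self.next_seq += n
        return True

    def ack(self, ack_no, new_window=None):
        """Cumulative ACK slides the window; the receiver may re-advertise it."""
        self.base = max(self.base, ack_no)
        if new_window is not None:
            self.window = new_window

s = SlidingWindowSender(window=4096)
s.send(3000)            # 3000 bytes now in flight
blocked = s.send(2000)  # 5000 > 4096: blocked until an ACK arrives
s.ack(3000)             # receiver has consumed the first 3000 bytes
ok = s.send(2000)
print(blocked, ok)      # → False True
```

The design point: the receiver never has to discard and re-request data, because the sender simply cannot put more bytes in flight than the receiver has advertised room for.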
Flow Control & Buffers
The sender's transport layer must worry about overwhelming both the network and the receiver. The network may exceed
the carrying capacity, and the receiver may run out of buffers.
Buffers are statically allocated kernel memory so that storing received TPDUs can be done quickly.
If the network layer is reliable the transport layer need not buffer transmitted data, since it relies on the network layer to
get the data through.
If the network layer is unreliable, then the sending transport entity has to buffer all TPDUs until they are acknowledged.
This gives the receiving transport entity the choice of buffering. If it does not, it knows the sender will eventually resend,
though the time spent transmitting and receiving the TPDU has been wasted. Why might the receiver not buffer? It might
not have a buffer to put the received TPDU in. Remember that the transport entities handle many connections
simultaneously. The buffer pool available to the transport entity may be exhausted by other connections.
If all TPDUs are the same size, then a pool of same-size buffers can be maintained, and each connection has a linked list
corresponding to its received TPDUs. This is not a good scheme if TPDUs vary in size, since you'd have to make your
buffers as big as the largest possible TPDU, so small packets would waste buffer space.
You could have a pool of varying sized buffers, then pick one that fits as well as possible, maintaining the same sort of
linked list per connection. This is a little more complicated to manage, since you can't just grab any free buffer.
The other possibility is to assign a block of memory to each connection, then manage it as a circular buffer. But how big
should the circular buffer block be? If the connection is busy, a large block is appropriate; but for a slow connection, a
large block wastes memory.
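The per-connection circular buffer described above can be sketched like this (the block size is an arbitrary illustrative choice, and a real kernel would avoid byte-at-a-time copies):

```python
class CircularBuffer:
    """Fixed block of memory managed as a ring, one per connection."""

    def __init__(self, size):
        self.buf = bytearray(size)
        self.size = size
        self.head = 0        # next write position
        self.tail = 0        # next read position
        self.count = 0       # bytes currently stored

    def put(self, data):
        if self.count + len(data) > self.size:
            return False     # no room: caller must drop or delay the TPDU
        for byte in data:
            self.buf[self.head] = byte
            self.head = (self.head + 1) % self.size
        self.count += len(data)
        return True

    def get(self, n):
        n = min(n, self.count)
        out = bytearray()
        for _ in range(n):
            out.append(self.buf[self.tail])
            self.tail = (self.tail + 1) % self.size
        self.count -= n
        return bytes(out)

cb = CircularBuffer(8)
cb.put(b"abcdef")
first = cb.get(4)            # → b"abcd"
cb.put(b"ghij")              # write wraps around the end of the block
rest = cb.get(6)             # → b"efghij"
print(first, rest)
```

Because the head and tail wrap modulo the block size, a small fixed block keeps serving a connection indefinitely, which is exactly why sizing it is the trade-off discussed above.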
Buffering isn't the only thing that limits the flow control in the transport layer. Suppose the receiver had an infinite supply
of memory to dedicate to buffers. You still have the limit of the subnet's carrying capacity. This is the issue of congestion
control.
Congestion control
If routers in the subnet can exchange x packets per second on direct links, and there are k hops between sender and
receiver, then the end-to-end rate cannot exceed x packets per second, the rate of the slowest link along the
store-and-forward path. Anything more than this causes congestion in the network.
One scheme is to have the sender monitor the carrying capacity of the network by measuring the time required to send a
TPDU and receive its acknowledgement. Then, with a capacity of C TPDUs/second and a round-trip time of r seconds,
the sender should be allowed a window of C * r TPDUs. This keeps the pipe full. Since the network capacity
may change rapidly due to congestion, the estimates of C and r must be continually updated.
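The window rule above is the bandwidth-delay product, and the same arithmetic reproduces the 6.2-megabit figure from the choke-packet example earlier. A quick check (the helper name is ours; the TPDU figures in the second example are made up):

```python
def pipe_capacity(rate_per_sec, rtt_sec):
    """Data in flight = capacity * round-trip time (bandwidth-delay product)."""
    return rate_per_sec * rtt_sec

# The choke-packet example: OC-3 at 155 Mbps, 40 msec back to San Francisco.
in_flight = pipe_capacity(155e6, 0.040)
print(in_flight / 1e6, "megabits")       # → 6.2 megabits, as in the text

# Window sizing: C = 1000 TPDUs/s with an RTT of r = 0.05 s keeps
# C * r TPDUs in the pipe (illustrative figures).
window_tpdus = pipe_capacity(1000, 0.05)
print(window_tpdus, "TPDUs")             # → 50.0 TPDUs
```

A window smaller than C * r leaves the pipe partly idle; a larger one only builds queues, which is why the estimates must track congestion.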
UDP (User Datagram Protocol) is an alternative communications protocol to Transmission Control Protocol (TCP) used
primarily for establishing low-latency and loss-tolerating connections between applications on the internet.
Both UDP and TCP run on top of the Internet Protocol (IP) and are sometimes referred to as UDP/IP or TCP/IP. But there
are important differences between the two.
Both UDP and TCP provide process-to-process communication on top of IP's host-to-host delivery. TCP delivers a
checked, ordered stream and is considered a reliable transport protocol; UDP sends messages, called datagrams, and is
considered a best-effort mode of communication.
In addition, where TCP provides error and flow control, no such mechanisms are supported in UDP. UDP is considered a
connectionless protocol because it doesn't require a virtual circuit to be established before any data transfer occurs.
UDP provides two services not provided by the IP layer. It provides port numbers to help distinguish different user
requests and, optionally, a checksum capability to verify that the data arrived intact.
TCP has emerged as the dominant protocol used for the bulk of internet connectivity due to its ability to break large data
sets into individual packets, check for and resend lost packets, and reassemble packets in the correct sequence. But these
additional services come at a cost in terms of additional data overhead and delays called latency.
In contrast, UDP just sends the packets, which means that it has much lower bandwidth overhead and latency. With UDP,
packets may take different paths between sender and receiver and, as a result, some packets may be lost or received out of
order.
Applications of UDP
UDP is an ideal protocol for network applications in which perceived latency is critical, such as in gaming and voice and
video communications, which can suffer some data loss without adversely affecting perceived quality. In some cases,
forward error correction techniques are used to improve audio and video quality in spite of some loss.
UDP can also be used in applications that require lossless data transmission when the application is configured to manage
the process of retransmitting lost packets and correctly arranging received packets. This approach can help to improve
the data transfer rate of large files compared to TCP.
In the Open Systems Interconnection (OSI) communication model, UDP, like TCP, is in Layer 4, the transport layer. UDP
works in conjunction with higher level protocols to help manage data transmission services including Trivial File Transfer
Protocol (TFTP), Real Time Streaming Protocol (RTSP), Simple Network Protocol (SNP) and domain name system
(DNS) lookups.
User datagram protocol features
The user datagram protocol has attributes that make it advantageous for use with applications that can tolerate lost data.
• It allows packets to be dropped and received in a different order than they were transmitted, making it suitable for
real-time applications where latency might be a concern.
• It can be used for transaction-based protocols, such as DNS or Network Time Protocol.
• It can be used where a large number of clients are connected and where real-time error correction isn't necessary,
such as gaming, voice or video conferencing, and streaming media.
The User Datagram Protocol header has four fields, each of which is 2 bytes. They are:
• source port number, the port of the sending process (optional in IPv4; zero if unused);
• destination port number, the port of the receiving process;
• length, the length in bytes of the UDP header and any encapsulated data; and
• checksum, which is used in error checking. Its use is required in IPv6 and optional in IPv4.
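The four 2-byte fields can be packed exactly as they appear on the wire. The port numbers and payload below are made up for illustration, and the checksum is left at zero (which IPv4 permits; a real stack computes it over a pseudo-header plus the payload):

```python
import struct

def udp_header(src_port, dst_port, payload, checksum=0):
    """Build the 8-byte UDP header: four 16-bit big-endian fields."""
    length = 8 + len(payload)          # header bytes + data bytes
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

hdr = udp_header(5353, 53, b"query-bytes")    # illustrative ports and payload
fields = struct.unpack("!HHHH", hdr)
print(len(hdr), fields)                        # → 8 (5353, 53, 19, 0)
```

Note that the length field counts the 8 header bytes plus the 11 payload bytes, giving 19, and that the whole header is fixed at 8 bytes, a fraction of TCP's minimum 20.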
VPN stands for virtual private network. A virtual private network (VPN) is a technology that creates a safe and encrypted
connection over a less secure network, such as the internet. A virtual private network is a way to extend a private network
using a public network such as the internet. As the name suggests, it is a virtual "private network": a user can be part of a
local network while sitting at a remote location. It makes use of tunneling protocols to establish a secure connection.
Let's understand VPN with an example:
Think of a situation where the corporate office of a bank is situated in Washington, USA. This office has a local network
consisting of, say, 100 computers. Suppose other branches of the bank are in Mumbai, India and Tokyo, Japan. The
traditional method of establishing a secure connection between the head office and a branch was to have a leased line between
the branch and the head office, which was very costly as well as troublesome. VPN lets us overcome this issue in an
effective manner.
The situation is described below:
• All 100 computers of the corporate office at Washington are connected to the VPN server (which is a well-
configured server containing a public IP address and a switch to connect all computers present in the local network,
i.e. in the US head office).
• A person sitting in the Mumbai office connects to the VPN server using a dial-up window, and the VPN server returns
an IP address which belongs to the series of IP addresses of the corporate office's local network.
• Thus the person from the Mumbai branch becomes local to the head office, and information can be shared securely over the
public internet.
• So this is the intuitive way of extending a local network even across the geographical borders of a country.
A virtual private network (VPN) is programming that creates a safe and encrypted connection over a less secure network,
such as the public internet. A VPN works by using the shared public infrastructure while maintaining privacy through
security procedures and tunneling protocols. In effect, the protocols, by encrypting data at the sending end and decrypting
it at the receiving end, send the data through a "tunnel" that cannot be "entered" by data that is not properly encrypted. An
additional level of security involves encrypting not only the data, but also the originating and receiving network
addresses. In the early days of the internet, VPNs were developed to provide branch office employees with an inexpensive,
safe way to access corporate applications and data. Today, VPNs are often used by remote corporate employees, gig
economy freelance workers and business travelers who require access to sites that are geographically restricted. The two
most common types of VPNs are remote access VPNs and site-to-site VPNs.
DNS is a host name to IP address translation service. DNS is a distributed database implemented in a hierarchy of name
servers. It is an application layer protocol for message exchange between clients and servers.
Requirement
Every host is identified by an IP address, but remembering numbers is very difficult for people, and IP addresses are not
static. Therefore, a mapping is required to change a domain name to an IP address. So DNS is used to
convert the domain names of websites to their numerical IP addresses.
Domain:
There are various kinds of domains:
1. Generic domains: .com (commercial), .edu (educational), .mil (military), .org (non-profit organization), .net (similar to
commercial); all these are generic domains.
2. Country domains: .in (India), .us, .uk.
3. Inverse domain: used if we want to know the domain name of a website, i.e. IP to domain name mapping. So DNS
can provide both mappings; for example, to find the IP address of geeksforgeeks.org we can type
nslookup www.geeksforgeeks.org.
Organization of Domain
It is very difficult to find out the IP address associated with a website, because there are millions of websites and for all
of them we should be able to produce the IP address immediately. There should not be a lot of delay, so the organization
of the database is very important.
DNS record – the domain name, IP address, validity, time to live, and all the information related to
that domain name. These records are stored in a tree-like structure.
Namespace – the set of possible names, flat or hierarchical. A naming system maintains a collection of bindings of names to
values; given a name, a resolution mechanism returns the corresponding value.
Name server – an implementation of the resolution mechanism. DNS (Domain Name System) is the name service of the
Internet. A zone is an administrative unit; a domain is a subtree.
The host requests the DNS name server to resolve the domain name, and the name server returns the IP address
corresponding to that domain name to the host, so that the host can then connect to that IP address.
The client machine sends a request to the local name server, which, if it does not find the address in its database, sends
a request to the root name server, which in turn will route the query to an intermediate or authoritative name server. The
root name server can also contain some hostname to IP address mappings. The intermediate name server always knows
who the authoritative name server is. So finally the IP address is returned to the local name server, which in turn returns
the IP address to the host.
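The local → root → intermediate → authoritative sequence described above can be modeled with a toy in-memory hierarchy. Every server name, zone, and address below is invented for illustration (the address is from the TEST-NET range):

```python
# Toy model of iterative DNS resolution: the local server answers from its
# cache or walks root -> intermediate -> authoritative.
ROOT_REFERRALS = {"org": "org-intermediate"}         # root knows the TLD servers
INTERMEDIATE = {"example.org": "example-auth"}       # intermediate knows the authority
AUTHORITATIVE = {"example.org": {"www.example.org": "192.0.2.10"}}

def resolve(name, cache):
    """Local name server: answer from cache, else query down the hierarchy."""
    if name in cache:
        return cache[name], "cache"
    tld = name.split(".")[-1]
    zone = ".".join(name.split(".")[-2:])
    _referral = ROOT_REFERRALS[tld]                  # step 1: ask the root
    _authority = INTERMEDIATE[zone]                  # step 2: ask the intermediate
    addr = AUTHORITATIVE[zone][name]                 # step 3: authoritative answer
    cache[name] = addr                               # local server caches the result
    return addr, "authoritative"

cache = {}
addr1, src1 = resolve("www.example.org", cache)
addr2, src2 = resolve("www.example.org", cache)      # second lookup hits the cache
print(addr1, src1, src2)                             # → 192.0.2.10 authoritative cache
```

The caching step is why real lookups are usually fast: only the first query for a name walks the full hierarchy, and repeats are answered locally until the record's time to live expires.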
B. MPEG-7: MPEG-7 is a multimedia content description standard. It was standardized in ISO/IEC 15938 (Multimedia
content description interface).[1][2][3][4] This description will be associated with the content itself, to allow fast and efficient
searching for material that is of interest to the user. MPEG-7 is formally called Multimedia Content Description Interface.
Thus, it is not a standard which deals with the actual encoding of moving pictures and audio, like MPEG-1, MPEG-
2 and MPEG-4. It uses XML to store metadata, and can be attached to timecode in order to tag particular events,
or synchronise lyrics to a song, for example.
It was designed to standardize: a set of Description Schemes and Descriptors; a language to specify these schemes, called the Description Definition Language (DDL); and a scheme for coding the description.
MPEG-7 is intended to provide complementary functionality to the previous MPEG standards, representing information
about the content, not the content itself ("the bits about the bits"). This functionality is the standardization of multimedia
content descriptions. MPEG-7 can be used independently of the other MPEG standards - the description might even be
attached to an analog movie. The representation that is defined within MPEG-4, i.e. the representation of audio-visual
data in terms of objects, is however very well suited to what will be built on the MPEG-7 standard. This representation is
basic to the process of categorization. In addition, MPEG-7 descriptions could be used to improve the functionality of
previous MPEG standards.With these tools, we can build an MPEG-7 Description and deploy it. According to the
requirements document,1 “a Description consists of a Description Scheme (structure) and the set of Descriptor Values
(instantiations) that describe the Data.” A Descriptor Value is “an instantiation of a Descriptor for a given data set (or
subset thereof).” The Descriptor is the syntactic and semantic definition of the content. Extraction algorithms are outside the
scope of the standard because their standardization is not required to allow interoperability.
There are many applications and application domains which will benefit from the MPEG-7 standard.
C. SMTP:
Simple Mail Transfer Protocol (SMTP)
Email is emerging as one of the most valuable services on the internet today. Most of the internet systems use SMTP as a
method to transfer mail from one user to another. SMTP is a push protocol and is used to send the mail whereas POP
(post office protocol) or IMAP (internet message access protocol) are used to retrieve those mails at the receiver’s side.
SMTP Fundamentals
SMTP is an application layer protocol. The client who wants to send the mail opens a TCP connection to the SMTP
server and then sends the mail across the connection. The SMTP server is always in listening mode. As soon as it detects
a TCP connection from a client on port 25, the SMTP process establishes the connection. After successfully
establishing the TCP connection, the client process sends the mail.
SMTP Protocol
The SMTP model is of two types:
1. End-to- end method
2. Store-and- forward method
The end-to-end model is used to communicate between different organizations, whereas the store-and-forward method is
used within an organization. An SMTP client that wants to send mail will contact the destination host's SMTP server directly
in order to send the mail to the destination. The SMTP server will keep the mail to itself until it is successfully copied to
the receiver’s SMTP.
The SMTP client is the one which initiates the session, so let us call it the client-SMTP, and the SMTP server is the one
which responds to the session request, so let us call it the receiver-SMTP. The client-SMTP starts the session and the
receiver-SMTP responds to the request.
SENDING EMAIL:
Mail is sent by a series of request and response messages between the client and the server. The message which is sent
consists of a header and a body. A null line is used to terminate the mail header; everything after the
null line is considered the body of the message, which is a sequence of ASCII characters. The message body contains
the actual information read by the recipient.
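The header/body split described above (header terminated by a null line, body after it) can be seen in a few lines of Python. The addresses and message text are invented for illustration:

```python
# The header ends at the first null (blank) line; everything after it
# is the body. Message fields here are invented for illustration.
raw = (
    "From: alice@example.com\r\n"
    "To: bob@example.com\r\n"
    "Subject: Hello\r\n"
    "\r\n"                      # the null line that terminates the header
    "This is the message body.\r\n"
)

header_part, _, body = raw.partition("\r\n\r\n")
headers = dict(line.split(": ", 1) for line in header_part.split("\r\n"))

print(headers["Subject"])   # -> Hello
print(body.strip())         # -> This is the message body.
```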
RECEIVING EMAIL:
The user agent at the receiver's side checks the mailboxes at particular time intervals. If any mail is received it
informs the user about it. When the user tries to read the mail, it displays a list of mails with a short description of
each mail in the mailbox. By selecting any of the mails, the user can view its contents on the terminal.
June 2016(1 2 3 5 6 8)
1.
i).Protocol: A network protocol defines rules and conventions for communication between network devices. Network
protocols include mechanisms for devices to identify and make connections with each other, as well as formatting rules
that specify how data is packaged into sent and received messages. Some protocols also support message
acknowledgment and data compression designed for reliable and/or high-performance network communication.
ii). SAP: A Service Access Point (SAP) is an identifying label for network endpoints used in Open Systems
Interconnection (OSI) networking.
The SAP is a conceptual location at which one OSI layer can request the services of another OSI layer. As an example,
PD-SAP or PLME-SAP in IEEE 802.15.4 can be mentioned, where the Media Access Control (MAC) layer requests
certain services from the Physical Layer. Service access points are also used in IEEE 802.2 Logical Link
Control in Ethernet and similar Data Link Layer protocols.
iii). Subnet: A subnetwork or subnet is a logical subdivision of an IP network. The practice of dividing a network
into two or more networks is called subnetting.
iv). Internet: The Internet (contraction of interconnected network) is the global system of interconnected computer
networks that use the Internet protocol suite (TCP/IP) to link devices worldwide. It is a network of networks that consists
of private, public, academic, business, and government networks of local to global scope, linked by a broad array of
electronic, wireless, and optical networking technologies.
v). PDU: In telecommunications, a protocol data unit (PDU) is a single unit of information transmitted among peer
entities of a computer network. A PDU is composed of protocol specific control information and user data. In the layered
architectures of communication protocol stacks, each layer implements protocols tailored to the specific type or mode of
data exchange.
B. Bring out the design issues of network layer. Compare VC and datagram subnets? REPEATED
Advantages of Bus Topology
1. It is cost effective.
2. It is easy to understand.
Disadvantages of Bus Topology
1. If network traffic is heavy or nodes are more, the performance of the network decreases.
RING Topology
It is called ring topology because it forms a ring as each computer is connected to another computer, with the last one
connected to the first. Exactly two neighbours for each device.
1. A number of repeaters are used for Ring topology with large number of nodes, because if someone wants to send
some data to the last node in the ring topology with 100 nodes, then the data will have to pass through 99 nodes to
reach the 100th node. Hence to prevent data loss repeaters are used in the network.
2. The transmission is unidirectional, but it can be made bidirectional by having 2 connections between each network node.
3. In Dual Ring Topology, two ring networks are formed, and data flow is in opposite direction in them. Also, if one
ring fails, the second ring can act as a backup, to keep the network up.
4. Data is transferred in a sequential manner, that is, bit by bit. Data transmitted has to pass through each node of the
network, till it reaches the destination node.
Advantages of Ring Topology
1. The transmitting network is not affected by high traffic or by adding more nodes, as only the nodes having tokens can
transmit data.
STAR Topology
In this type of topology all the computers are connected to a single hub through a cable. This hub is the central node and
all other nodes are connected to the central node.
Advantages of Star Topology
1. Easy to troubleshoot.
2. Only the node which has failed is affected; the rest of the nodes can work smoothly.
Disadvantages of Star Topology
1. Expensive to use.
2. If the hub fails then the whole network is stopped because all the nodes depend on the hub.
MESH Topology
It is a point-to-point connection to other nodes or devices. All the network nodes are connected to each other. Mesh
has n(n-1)/2 physical channels to link n devices.
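The n(n-1)/2 link count stated above can be checked with a one-line helper:

```python
# Number of point-to-point channels in a full mesh of n devices,
# per the n(n-1)/2 formula in the text.
def mesh_links(n):
    """Each of the n devices connects to the other n-1; halve to avoid double counting."""
    return n * (n - 1) // 2

print(mesh_links(5))   # -> 10 channels for 5 devices
```

The division by 2 is because each channel joins a pair of devices, so counting n-1 links per device counts every channel twice.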
There are two techniques to transmit data over the Mesh topology, they are :
1. Routing
2. Flooding
1. Partial Mesh Topology : In this topology some of the systems are connected in the same fashion as in mesh topology, but some devices are connected to only two or three other devices.
2. Full Mesh Topology : Each and every nodes or devices are connected to each other.
Features of Mesh Topology
1. Fully connected.
2. Robust.
3. Not flexible.
Advantages of Mesh Topology
1. It is robust.
TREE Topology
It has a root node and all other nodes are connected to it forming a hierarchy. It is also called hierarchical topology. It
should at least have three levels to the hierarchy.
Disadvantages of Tree Topology
1. Heavily cabled.
2. Costly.
HYBRID Topology
It is a mixture of two or more different topologies. For example, if in an office ring topology is used in one
department and star topology is used in another, connecting these topologies will result in a Hybrid
Topology (ring topology and star topology).
Advantages of Hybrid Topology
1. It is effective.
2. It is flexible.
Disadvantages of Hybrid Topology
1. Complex in design.
2. Costly.
2.
A. Explain hierarchical routing and flooding algorithm?
Ans: hierarchical routing REPEATED
Another static algorithm is flooding, in which every incoming packet is sent out on every outgoing line except the one it
arrived on. Flooding obviously generates vast numbers of duplicate packets, in fact, an infinite number unless some
measures are taken to damp the process. One such measure is to have a hop counter contained in the header of each
packet, which is decremented at each hop, with the packet being discarded when the counter reaches zero. Ideally, the hop
counter should be initialized to the length of the path from source to destination. If the sender does not know how long the
path is, it can initialize the counter to the worst case, namely, the full diameter of the subnet. An alternative technique for
damming the flood is to keep track of which packets have been flooded, to avoid sending them out a second time. One way
to achieve this goal is to have the source router put a sequence number in each packet it receives from its hosts. Each router
then needs a list per source router telling which sequence numbers originating at that source have already been seen.
If an incoming packet is on the list, it is not flooded.To prevent the list from growing without bound, each list should be
augmented by a counter, k, meaning that all sequence numbers through k have been seen. When a packet comes in, it
is easy to check if the packet is a duplicate; if so, it is discarded. Furthermore, the full list below k is not needed, since k
effectively summarizes it. A variation of flooding that is slightly more practical is selective flooding.In this algorithm
the routers do not send every incoming packet out on every line, only on those lines that are going approximately in the
right direction. There is usually little point in sending a westbound packet on an eastbound line unless the topology is
extremely peculiar and the router is sure of this fact. Flooding is not practical in most applications, but it does have some
uses. For example, in military applications, where large numbers of routers may be blown to bits at any instant, the
tremendous robustness of flooding is highly desirable. In distributed database applications, it is sometimes necessary to
update all the databases concurrently, in which case flooding can be useful. In wireless networks, all messages transmitted
by a station can be received by all other stations within its radio range, which is, in fact, flooding, and some algorithms
utilize this property. A fourth possible use of flooding is as a metric against which other routing algorithms can be
compared. Flooding always chooses the shortest path because it chooses every possible path in parallel. Consequently, no
other algorithm can produce a shorter delay (if we ignore the overhead generated by the flooding process itself).
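The duplicate-suppression scheme described above (a per-source counter k meaning "all sequence numbers through k seen", plus a set of higher numbers) can be sketched as follows. The class and names are invented for illustration:

```python
# Sketch of flooding duplicate suppression: per source, keep a counter k
# summarizing all sequence numbers <= k, plus a set of numbers above k.
class FloodFilter:
    def __init__(self):
        self.k = {}      # per-source: all seq numbers <= k already seen
        self.seen = {}   # per-source: seq numbers > k already seen

    def should_flood(self, source, seq):
        """Return True if this (source, seq) has not been seen before."""
        k = self.k.get(source, -1)
        seen = self.seen.setdefault(source, set())
        if seq <= k or seq in seen:
            return False            # duplicate: do not flood again
        seen.add(seq)
        # advance k over any now-contiguous prefix; k summarizes those
        # numbers, so the set below k can be discarded
        while k + 1 in seen:
            k += 1
            seen.remove(k)
        self.k[source] = k
        return True

f = FloodFilter()
print(f.should_flood("A", 0))   # -> True  (first time)
print(f.should_flood("A", 0))   # -> False (duplicate, summarized by k)
print(f.should_flood("A", 1))   # -> True
```

This is exactly why the list does not grow without bound: once the sequence numbers up to k are contiguous, k alone represents them.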
• 868.0–868.6 MHz: Europe, allows one communication channel (2003, 2006, 2011[5])
• 902–928 MHz: North America, up to ten channels (2003), extended to thirty (2006)
• 2400–2483.5 MHz: worldwide use, up to sixteen channels (2003, 2006)
The MAC layer
The medium access control (MAC) enables the transmission of MAC frames through the use of the physical channel.
Besides the data service, it offers a management interface and itself manages access to the physical channel and
network beaconing. It also controls frame validation, guarantees time slots and handles node associations. Finally, it
offers hook points for secure services.
Note that the IEEE 802.15 standard does not use 802.1D or 802.1Q, i.e., it does not exchange standard Ethernet frames.
The physical frame-format is specified in IEEE802.15.4-2011 in section 5.2. It is tailored to the fact that most IEEE
802.15.4 PHYs only support frames of up to 127 bytes (adaptation layer protocols such as 6LoWPAN provide
fragmentation schemes to support larger network layer packets).
Higher layers
No higher-level layers and interoperability sublayers are defined in the standard. Other specifications - such as ZigBee,
SNAP, and 6LoWPAN/Thread - build on this standard. RIOT, OpenWSN, TinyOS, Unison RTOS, DSPnano
RTOS, nanoQplus, Contiki and Zephyr operating systems also use a few items of IEEE 802.15.4 hardware and software.
3.
A. Define open loop and closed loop. Explain the different congestion control approaches for datagram subnets?
Ans: Congestion control refers to the techniques used to control or prevent congestion. Congestion control techniques
can be broadly classified into two categories: open-loop congestion control (policies applied to prevent congestion
before it happens) and closed-loop congestion control (techniques applied to treat or remove congestion after it happens).
In the above diagram, the 3rd node is congested and stops receiving packets; as a result, the 2nd node may get congested
due to the slowing down of the output data flow. Similarly, the 1st node may get congested and inform the source to slow
down.
2. Choke Packet Technique :
The choke packet technique is applicable to both virtual-circuit networks and datagram subnets. A choke packet is a
packet sent by a node to the source to inform it of congestion. Each router monitors its resources and the utilization
at each of its output lines. Whenever the resource utilization exceeds the threshold value set by the
administrator, the router directly sends a choke packet to the source, giving it feedback to reduce the traffic. The
intermediate nodes through which the packets have traveled are not warned about congestion.
3. Implicit Signaling :
In implicit signaling, there is no communication between the congested nodes and the source. The source guesses
that there is congestion somewhere in the network. For example, when a sender sends several packets and there is no
acknowledgment for a while, one assumption is that there is congestion.
4. Explicit Signaling :
In explicit signaling, if a node experiences congestion it can explicitly send a packet to the source or destination to
inform it about the congestion. The difference between the choke packet technique and explicit signaling is that in explicit
signaling the signal is included in the packets that carry data, rather than in a separate packet as in the choke packet technique.
Explicit signaling can occur in either forward or backward direction.
• Forward Signaling : In forward signaling, the signal is sent in the direction of the congestion. The destination is
warned about congestion. The receiver in this case adopts policies to prevent further congestion.
• Backward Signaling : In backward signaling, the signal is sent in the opposite direction of the congestion. The
source is warned about congestion and needs to slow down.
Congestion control approaches which can be used in the datagram subnets. The techniques are:
1. Choke Packets.
2. Load Shedding.
3. Jitter control.
Choke Packets:
This approach can be used in virtual circuits as well as in the datagram subnets. In this technique each router
associates a real variable with each of its output lines. This real variable, say “u”, has a value between 0 and 1, and
it indicates the percentage utilization of that line. If the value of “u” goes above the threshold, then that output
line will enter into a “warning” state. The router will check each newly arriving packet to see if its output line is
in the “warning state”. If it is in the warning state, the router will send a choke packet back to the
sending host. On receiving a choke packet, the host is expected to reduce the traffic it sends; it then ignores
further choke packets for a fixed interval. Several variations on the congestion
control algorithm have been proposed, depending on the values of the thresholds.
Load shedding:
Admission control, choke packets, fair queuing are the techniques suitable for light congestion. But if these
techniques cannot make the congestion disappear, then the load shedding technique is to be used. The
principle of load shedding states that when routers are being inundated by packets that they cannot handle, they
should just throw packets away. A router which is flooded with packets due to congestion can drop packets at
random, but the policy for dropping a packet usually depends on the type of packet. The policy for file transfer is
called wine (old is better than new) and that for multimedia is called milk (new is better than old). To implement
such an intelligent discard policy, cooperation from the sender is essential. The applications should mark their
packets in priority classes to indicate how important they are; the routers can then first drop packets from the lowest class.
Jitter control:
Jitter is defined as the variation in delay for the packets belonging to the same flow. Real-time audio and
video cannot tolerate jitter; on the other hand, jitter does not matter if the packets are carrying information
contained in a file. For audio and video transmission, if the packets take 20 msec to 30 msec to reach the
destination, it does not matter, provided that the delay remains constant. When a packet arrives at a router, the
router will check to see whether the packet is behind or ahead of schedule, and by what time. This information is
stored in the packet and updated at every hop. If the packet is ahead of schedule, the router will hold it for a
slightly longer time, and if the packet is behind schedule, the router will try to send it out as quickly as
possible.
Leaky Bucket The leaky bucket mechanism is usually used to smooth the burstiness of the traffic by limiting the traffic
peak rate and the maximum burst size. This mechanism, as its name describes, uses the analogy of a leaky bucket to
describe the traffic policing scheme. The bucket’s parameters such as its size and the hole’s size are analogous to the
traffic policing parameters such as the maximum burst size and maximum rate, respectively. The leaky bucket shapes the
traffic with a maximum rate of up to the bucket rate. The bucket size determines the maximum burst size before the leaky
bucket starts to drop packets. The mechanism works in the following way. The arriving packets are inserted at the top of
the bucket. At the bottom of the bucket, there is a hole through which traffic can leak out at a maximum rate of r bytes per
second. The bucket size is b bytes (i.e., the bucket can hold at most b bytes). Let us follow the leaky bucket operation by
observing the example shown in Figure 3.10. We assume first that the bucket is empty. • Figure 3.10 (A): Incoming traffic
with rate R which is less than the bucket rate r. The outgoing traffic rate is equal to R. In this case when we start with an
empty bucket, the burstiness of the incoming traffic is the same as the burstiness of the outgoing traffic as long as R < r. •
Figure 3.10 (B): Incoming traffic with rate R which is greater than the bucket rate r. The outgoing traffic rate is equal to r
(bucket rate). • Figure 3.10 (C): Same as (B) but the bucket is full. Non-conformant traffic is either dropped or sent as
best effort traffic.
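The leaky bucket behaviour described above (cases A, B, C of Figure 3.10) can be sketched as a simple simulation. The class and the parameter values are invented for illustration; r and b correspond to the bucket rate and bucket size in the text:

```python
# Minimal leaky bucket sketch: arriving packets fill a bucket of size b
# bytes, traffic leaks out at r bytes/sec, and packets that would
# overflow the bucket are non-conformant (dropped or sent best effort).
class LeakyBucket:
    def __init__(self, rate, size):
        self.rate = rate        # r: leak rate in bytes per second
        self.size = size        # b: bucket capacity in bytes
        self.level = 0.0        # bytes currently held in the bucket
        self.last = 0.0         # time of the last update

    def offer(self, t, nbytes):
        """Offer a packet of nbytes at time t; True if it conforms."""
        # drain what has leaked out since the last packet
        self.level = max(0.0, self.level - (t - self.last) * self.rate)
        self.last = t
        if self.level + nbytes > self.size:
            return False        # bucket would overflow: non-conformant
        self.level += nbytes
        return True

lb = LeakyBucket(rate=100, size=500)       # leak 100 B/s, hold 500 B
print(lb.offer(0.0, 400))   # -> True  (bucket: 400 of 500 bytes)
print(lb.offer(0.0, 200))   # -> False (400 + 200 > 500, dropped)
print(lb.offer(2.0, 200))   # -> True  (200 B leaked out meanwhile)
```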
Token Bucket The token bucket mechanism is almost the same as the leaky bucket mechanism but it preserves the
burstiness of the traffic. The token bucket of size b bytes is filled with tokens at rate r (bytes per second). When a packet
arrives, it retrieves a token from the token bucket (given such a token is available) and the packet is sent to the outgoing
traffic stream. As long as there are tokens in the token bucket, the outgoing traffic rate and pattern will be the same as the
incoming traffic rate and pattern. If the token bucket is empty, incoming packets have to wait until there are tokens
available in the bucket, and then they continue to send. Figure 3.11 shows an example of the token bucket mechanism. •
Figure 3.11 (A): The incoming traffic rate is less than the token arrival rate. In this case the outgoing traffic rate is equal
to the incoming traffic rate. • Figure 3.11 (B): The incoming traffic rate is greater than the token arrival rate. In case there
are still tokens in the bucket, the outgoing traffic rate is equal to the incoming traffic rate. • Figure 3.11 (C): If the
incoming traffic rate is still greater than the token arrival rate (e.g., long traffic burst), eventually all the tokens will be
exhausted. In this case the incoming traffic has to wait for the new tokens to arrive in order to be able to send out.
Therefore, the outgoing traffic is limited at the token arrival rate. The token bucket preserves the burstiness of the traffic
up to the maximum burst size. The outgoing traffic will maintain a maximum average rate equal to the token rate, r.
Therefore, the token bucket is used to control the average rate of the traffic. In practical traffic policing, we use a
combination of the token bucket and leaky bucket mechanisms connected in series (token bucket, then leaky bucket). The
token bucket enforces the average data rate to be bound to token bucket rate while the leaky bucket (p) enforces the peak
data rate to be bound to leaky bucket rate. Traffic policing, in cooperation with other QoS mechanisms, enables QoS
support.
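The token bucket (cases A, B, C of Figure 3.11) can be sketched the same way. Again the class and values are invented for illustration; note how, unlike the leaky bucket, a full bucket of tokens lets a burst of up to b bytes pass at once:

```python
# Minimal token bucket sketch: tokens arrive at rate r into a bucket of
# size b; a packet spends tokens equal to its length and must wait when
# the bucket is empty.
class TokenBucket:
    def __init__(self, rate, size):
        self.rate = rate        # r: token arrival rate (bytes/sec)
        self.size = size        # b: bucket capacity (bytes)
        self.tokens = size      # start with a full bucket
        self.last = 0.0

    def conforms(self, t, nbytes):
        """True if a packet of nbytes can be sent immediately at time t."""
        # add the tokens that arrived since the last packet, capped at b
        self.tokens = min(self.size, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if nbytes <= self.tokens:
            self.tokens -= nbytes   # spend the tokens, send at once
            return True
        return False                # empty bucket: must wait for tokens

tb = TokenBucket(rate=100, size=300)
print(tb.conforms(0.0, 300))   # -> True  (a full burst of b bytes)
print(tb.conforms(0.0, 50))    # -> False (bucket exhausted, must wait)
print(tb.conforms(1.0, 50))    # -> True  (100 tokens arrived meanwhile)
```

This is why the token bucket preserves burstiness up to the maximum burst size while bounding the average rate at r.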
B. Explain how admission control and packet scheduling helps to achieve good quality of services in network layer?
Ans: Packet Scheduling Mechanisms Packet scheduling is the mechanism that selects a packet for transmission from
the packets waiting in the transmission queue. It decides which packet from which queue and station are scheduled for
transmission in a certain period of time. Packet scheduling controls bandwidth allocation to stations, classes, and
applications. As shown in Figure 3.6, there are two levels of packet scheduling mechanisms: 1. Intrastation packet
scheduling: The packet scheduling mechanism that retrieves a packet from a queue within the same host. 2. Interstation
packet scheduling: The packet scheduling mechanism that retrieves a packet from a queue from different hosts. Packet
scheduling can be implemented using hierarchical or flat approaches. • Hierarchical packet scheduling: Bandwidth is
allocated to stations—that is, each station is allowed to transmit at a certain period of time. The amount of bandwidth
assigned to each station is controlled by interstation policy and module. When a station receives the opportunity to
transmit, the intrastation packet scheduling module will decide which packets to transmit. This approach is scalable
because interstation packet scheduling maintains the state by station (not by connection or application). Overall
bandwidth is allocated based on stations (in fact, they can be groups, departments, or companies). Then, stations will have
the authority to manage or allocate their own bandwidth portion to applications or classes within the host.
Packet scheduling mechanism deals with how to retrieve packets from queues, which is quite similar to a queuing
mechanism. Since in intrastation packet scheduling the status of each queue in a station is known, the intrastation packet
scheduling mechanism is virtually identical to a queuing mechanism. Interstation packet scheduling mechanism is slightly
different from a queuing mechanism because queues are distributed among hosts and there is no central knowledge of the
status of each queue. Therefore, some interstation packet scheduling mechanisms require a signaling procedure to
coordinate the scheduling among hosts. Because of the similarities between packet scheduling and queuing mechanisms
we introduce a number of queuing schemes (First In First Out [FIFO], Strict Priority, and Weight Fair Queue [WFQ]) and
briefly discuss how they support QoS services.
3.4.1 First In First Out (FIFO) First In First Out (FIFO) is the simplest queuing mechanism. All packets are inserted to
the tail of a single queue. Packets are scheduled in order of their arrival. Figure 3.7 shows FIFO packet scheduling. FIFO
provides best effort service—that is, it does not provide service differentiation in terms of bandwidth and delay. The high
bandwidth flows will get a larger bandwidth portion than the low bandwidth flows. In general, all flows will experience
the same average delay. If a flow increases its bandwidth aggressively, other flows will be affected by getting less
bandwidth, causing increased average packet delay for all flows. It is possible to improve QoS support by adding 1)
traffic policing to limit the rate of each flow and 2) admission control.
3.4.2 Strict Priority Queues are assigned a priority order. Strict priority packet scheduling schedules packets based on
the assigned priority order. Packets in higher priority queues always transmit before packets in lower priority queues. A
lower priority queue has a chance to transmit packets only when there are no packets waiting in a higher priority queue.
Figure 3.8 illustrates the strict priority packet scheduling mechanism. Strict priority provides differentiated services
(relative services) in both bandwidth and delay. The highest priority queue always receives bandwidth (up to the total
bandwidth) and the lower priority queues receive the remaining bandwidth. Therefore, higher priority queues always
experience lower delay than the lower priority queues. Aggressive bandwidth spending by the high priority queues can
starve the low priority queues. Again, it is possible to improve the QoS support by adding 1) traffic policing to limit the
rate of each flow and 2) admission control.
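Strict priority scheduling as described (a lower priority queue transmits only when all higher priority queues are empty) can be sketched as follows. The queue labels and packet names are invented for illustration:

```python
# Strict priority sketch: smaller number = higher priority; the
# scheduler always serves the highest non-empty priority queue.
from collections import deque

queues = {0: deque(), 1: deque(), 2: deque()}   # 0 is highest priority

def enqueue(priority, packet):
    queues[priority].append(packet)

def schedule():
    """Pick the next packet to transmit, or None if all queues are empty."""
    for prio in sorted(queues):
        if queues[prio]:
            return queues[prio].popleft()
    return None

enqueue(2, "bulk-1")
enqueue(0, "voice-1")
enqueue(1, "video-1")
print(schedule())   # -> voice-1 (priority 0 always goes first)
print(schedule())   # -> video-1
print(schedule())   # -> bulk-1
```

The starvation problem mentioned above is visible directly in `schedule()`: as long as queue 0 keeps receiving packets, queues 1 and 2 never get a turn.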
3.4.3 Weight Fair Queue (WFQ) Weight Fair Queue schedules packets based on the weight ratio of each queue. Weight,
wi , is assigned to each queue i according to the network policy. For example, there are three queues A, B, C with weights
w1, w2, w3, respectively. Queues A, B, and C receive the following ratios of available bandwidth: w1/(w1+w2+w3),
w2/(w1+w2+w3), and w3/(w1+w2+w3), respectively, as shown in Figure 3.9.
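The bandwidth ratios above can be computed directly; the weights and total bandwidth here are illustrative values:

```python
# WFQ bandwidth shares: queue i receives wi / (sum of all weights) of
# the available bandwidth, as in the w1/(w1+w2+w3) ratios in the text.
def wfq_shares(weights, bandwidth):
    """Return each queue's bandwidth allocation under WFQ."""
    total = sum(weights.values())
    return {q: bandwidth * w / total for q, w in weights.items()}

shares = wfq_shares({"A": 3, "B": 2, "C": 1}, bandwidth=60)
print(shares)   # -> {'A': 30.0, 'B': 20.0, 'C': 10.0}
```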
Bandwidth abuse from a specific queue will not affect other queues. WFQ can provide the required bandwidth and the
delay performance is directly related to the allocated bandwidth. A queue with high bandwidth allocation (large weight)
will experience lower delay. This may lead to some mismatch between the bandwidth and delay requirements. Some
applications may require low bandwidth and low delay. In this case WFQ will allocate high bandwidth to these
applications in order to guarantee the low delay bound. Some applications may require high bandwidth and high delay.
WFQ still has to allocate high bandwidth in order for the applications to operate. Of course, applications will satisfy the
delay but sometimes far beyond their needs. This mismatch can lead to low bandwidth utilization. However, in real life,
WFQ mostly schedules packets that belong to aggregated flows, groups, and classes (instead of individual flows) where
the goal is to provide link sharing among groups. In this case delay is of less concern. The elementary queuing
mechanisms introduced above will be the basis of a number of packet scheduling variations. Before we move our
discussion to the next QoS mechanisms, it is worth mentioning that in some implementations the channel access
mechanism and packet scheduling mechanism are not mutually exclusive. There is some overlap between these two
mechanisms and sometimes they are blended into one solution. When we discuss QoS support of each wireless
technology in later chapters, in some cases, we will discuss both mechanisms together.
3.7 Admission Control Admission control is the mechanism that makes the decision whether to allow a new session to
join the network. This mechanism will ensure that existing sessions’ QoS will not be degraded and the new session will
be provided QoS support. If there are not enough network resources to accommodate the new sessions, the admission
control mechanism may either reject the new session or admit the session while notifying the user that the network cannot
provide the required QoS. Admission control and resource reservation signaling mechanisms closely cooperate with each
other. Both are implemented in the same device. There are two admission control approaches: • Explicit admission
control: This approach is based on explicit resource reservation. Applications will send the request to join the network
through the resource reservation signaling mechanism. The request that contains QoS parameters is forwarded to the
admission control mechanism. The admission control mechanism decides to accept or reject the application based on the
application’s QoS requirements, available resources, performance criteria, and network policy. • Implicit admission
control: There is no explicit resource reservation signaling. The admission control mechanism relies on bandwidth over-
provisioning and traffic control (i.e., traffic policing). The location of the admission control mechanism depends on the
network architecture. For example, in case we have a wide area network such as a high-speed backbone that consists of a
number of interconnected routers, the admission control mechanism is implemented on each router. In shared media
networks, such as wireless networks, there is a designated entity in the network (e.g., station, access point, gateway, base
station) that hosts the admission control agent. This agent is in charge of making admission control decisions for the
entire wireless network. This concept is similar to the SBM (subnet bandwidth manager) which serves as the admission
control agent in 802 networks. In ad hoc wireless networks, the admission control functionality can be distributed among
all hosts. In infrastructure wireless networks where all communication passes through the access point or base station, the
admission control functionality can be implemented in the access point or base station.
5
A. Explain the working of TCP protocol along with the TCP segment header format?
Acknowledgement Number
If the ACK bit is set, this field contains the value of the next sequence number the sender of the segment is expecting to
receive. Once a connection is established this is always sent.
Hlen
The number of 32-bit words in the TCP header. This indicates where the data begins. The length of the TCP header is
always a multiple of 32 bits.
Flags
There are six flags in the TCP header. One or more can be turned on at the same time.
URG The URGENT POINTER field contains valid data
PSH The receiver should pass this data to the application as soon
as possible
Checksum
This covers both the header and the data. It is calculated by prepending a pseudo-header to the TCP segment; this consists
of three 32-bit words which contain the source and destination IP addresses, a byte set to 0, a byte set to 6 (the protocol
number for TCP in an IP datagram header) and the segment length (in bytes). The 16-bit one's complement sum of this
data is calculated (i.e., the pseudo-header and the segment are treated as a sequence of 16-bit words). The 16-bit one's
complement of this sum is stored in the checksum field. This is a mandatory field that must be calculated and stored by
the sender, and then verified by the receiver.
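The checksum computation described above can be sketched in Python. The IP addresses, port numbers, and payload below are invented for illustration; the pseudo-header layout follows the description in the text:

```python
# 16-bit one's complement checksum over pseudo-header + TCP segment.
import struct

def ones_complement_sum(data):
    """16-bit one's complement sum over data (padded to even length)."""
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for (word,) in struct.iter_unpack("!H", data):
        total += word
        total = (total & 0xFFFF) + (total >> 16)   # wrap the carry back in
    return total

def tcp_checksum(src_ip, dst_ip, segment):
    # pseudo-header: source IP, destination IP, a zero byte, protocol 6,
    # and the TCP segment length in bytes
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 6, len(segment))
    return (~ones_complement_sum(pseudo + segment)) & 0xFFFF

src = bytes([192, 0, 2, 1])
dst = bytes([192, 0, 2, 2])
# toy 20-byte header (checksum field zeroed) plus a 2-byte payload
segment = b"\x00\x14\x00\x50" + b"\x00" * 16 + b"hi"
print(hex(tcp_checksum(src, dst, segment)))
```

The receiver's verification uses the same routine: summing the data with the checksum field filled in yields 0xFFFF, whose complement is 0.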
Urgent Pointer
The urgent pointer is valid only if the URG flag is set. This pointer is a positive offset that must be added to the sequence
number field of the segment to yield the sequence number of the last byte of urgent data. TCP's urgent mode is a way for
the sender to transmit emergency data to the other end. This feature is rarely used.
1. Introduction
The Transmission Control Protocol (TCP) standard is defined in Request for Comments (RFC) 793 [10] by the Internet
Engineering Task Force (IETF). The original specification, written in 1981, was based on earlier research and
experimentation in the original ARPANET. The design of TCP was heavily influenced by what has
come to be known as the "end-to-end argument" [3].
As it applies to the Internet, the end-to-end argument says that by putting excessive intelligence in physical and link
layers to handle error control, encryption or flow control you unnecessarily complicate the system. This is because these
functions will usually need to be done at the endpoints anyway, so why duplicate the effort along the way? The result of
an end-to-end network then, is to provide minimal functionality on a hop-by-hop basis and maximal control between end-
to-end communicating systems.
The end-to-end argument helped determine how two characteristics of TCP operate: performance and error handling. TCP
performance is often dependent on a subset of algorithms and techniques such as flow control and congestion control.
Flow control determines the rate at which data is transmitted between a sender and receiver. Congestion control defines
the methods for implicitly interpreting signals from the network in order for a sender to adjust its rate of transmission.
The term congestion control is a bit of a misnomer; congestion avoidance would be a better term, since TCP cannot
control congestion per se. Ultimately, only intermediate devices such as IP routers are in a position to control congestion.
Congestion control is currently a large area of research and concern in the network community. A companion study on
congestion control examines the current state of activity in that area [9].
Timeouts and retransmissions handle error control in TCP. Although the delay can be substantial, particularly for
real-time applications, the use of both techniques offers error detection and error correction, thereby guaranteeing that
data will eventually be delivered successfully.
The nature of TCP and the underlying packet switched network provide formidable challenges for managers, designers
and researchers of networks. Once relegated to low-speed data communication applications, the Internet, and in part TCP,
are being used to support very high-speed communications of voice, video, and data. It is unlikely that the Internet
protocols will remain static as the applications change and expand. Understanding the current state of affairs will assist us
in understanding protocol changes made to support future applications.
1.1.2 Connection-Oriented
Before two communicating TCPs can exchange data, they must first agree upon the willingness to communicate.
Analogous to a telephone call, a connection must first be made before two parties exchange information.
1.1.3 Reliability
A number of mechanisms help provide the reliability TCP guarantees. Each of these is described briefly below.
Checksums. All TCP segments carry a checksum, which is used by the receiver to detect errors with either the TCP
header or data.
Duplicate data detection. It is possible for packets to be duplicated in a packet-switched network; therefore TCP keeps
track of bytes received in order to discard duplicate copies of data that has already been received.
Retransmissions. In order to guarantee delivery of data, TCP must implement retransmission schemes for data that may be
lost or damaged. The use of positive acknowledgements by the receiver to the sender confirms successful reception of
data. The lack of positive acknowledgements, coupled with a timeout period (see timers below) calls for a retransmission.
Sequencing. In packet switched networks, it is possible for packets to be delivered out of order. It is TCP's job to properly
sequence segments it receives so it can deliver the byte stream data to an application in order.
Timers. TCP maintains various static and dynamic timers on data sent. The sending TCP waits for the receiver to reply
with an acknowledgement within a bounded length of time. If the timer expires before receiving an acknowledgement, the
sender can retransmit the segment.
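The interplay of positive acknowledgements and retransmission timers described above can be illustrated with a small simulation. This is not real TCP: the channel model, segment format, and retry limit are invented for the sketch.

```python
def send_with_retransmission(segment, channel, max_tries=5):
    """Resend `segment` until the receiver's cumulative ACK confirms it."""
    for attempt in range(1, max_tries + 1):
        ack = channel(segment)                  # returns None on loss (timeout)
        if ack == segment["seq"] + len(segment["data"]):
            return attempt                      # number of transmissions needed
    raise TimeoutError("no acknowledgement after %d tries" % max_tries)

def lossy_channel_factory(drop_first_n):
    """Model a channel that loses the first n transmissions, then delivers."""
    state = {"sent": 0}
    def channel(segment):
        state["sent"] += 1
        if state["sent"] <= drop_first_n:
            return None                         # segment (or its ACK) was lost
        return segment["seq"] + len(segment["data"])  # cumulative ACK
    return channel
```

The lack of a matching acknowledgement within the timeout is what triggers each retransmission, exactly as the Retransmissions and Timers paragraphs describe.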
1.2.6 Reserved
A 6-bit field currently unused and reserved for future use.
Acknowledgement (ACK). If this bit field is set, the acknowledgement field described earlier is valid.
Push Function (PSH). If this bit field is set, the receiver should deliver this segment to the receiving application as soon
as possible. An example of its use may be to send a Control-BREAK request to an application, which can jump ahead of
queued data.
Reset the Connection (RST). If this bit is present, it signals the receiver that the sender is aborting the connection and all
queued data and allocated buffers for the connection can be freely relinquished.
Synchronize (SYN). When present, this bit field signifies that the sender is attempting to "synchronize" sequence numbers.
This bit is used during the initial stages of connection establishment between a sender and receiver.
No More Data from Sender (FIN). If set, this bit field tells the receiver that the sender has reached the end of its byte
stream for the current TCP connection.
1.2.8 Window
A 16-bit integer used by TCP for flow control in the form of a data transmission window size. This number tells the
sender how much data the receiver is willing to accept. The maximum value for this field would limit the window size to
65,535 bytes; however, a "window scale" option can be used to make use of even larger windows.
1.2.9 Checksum
A TCP sender computes a value based on the contents of the TCP header and data fields. This 16-bit value will be
compared with the value the receiver generates using the same computation. If the values match, the receiver can be very
confident that the segment arrived intact.
1.2.11 Options
In order to provide additional functionality, several optional parameters may be used between a TCP sender and receiver.
Depending on the option(s) used, the length of this field will vary in size, but it cannot be larger than 40 bytes due to the
size of the header length field (4 bits). The most common option is the maximum segment size (MSS) option. A TCP
receiver tells the TCP sender the maximum segment size it is willing to accept through the use of this option. Other
options are often used for various flow control and congestion control techniques.
1.2.12 Padding
Because options may vary in size, it may be necessary to "pad" the TCP header with zeroes so that the segment ends on a
32-bit word boundary as defined by the standard [10].
1.2.13 Data
Although not used in some circumstances (e.g. acknowledgement segments with no data in the reverse direction), this
variable length field carries the application data from TCP sender to receiver. This field coupled with the TCP header
fields constitutes a TCP segment.
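Putting the fields of the preceding sections together, the 20-byte fixed header can be packed with Python's struct module. This is a sketch: options and checksum computation are omitted, and any field values used with it are arbitrary examples.

```python
import struct

# Flag bit values occupying the low 6 bits of the offset/flags word
FIN, SYN, RST, PSH, ACK, URG = 0x01, 0x02, 0x04, 0x08, 0x10, 0x20

def build_tcp_header(src_port, dst_port, seq, ack, flags, window,
                     checksum=0, urgent=0):
    """Pack the 20-byte fixed TCP header (no options)."""
    offset_flags = (5 << 12) | flags   # Hlen = 5 thirty-two-bit words, then flags
    return struct.pack("!HHIIHHHH", src_port, dst_port, seq, ack,
                       offset_flags, window, checksum, urgent)
```

Because no options are present, the header length is exactly five 32-bit words, which is why the data offset nibble is fixed at 5 here.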
B. Explain with suitable diagram the connection establishment and connection release by transport layer protocols?
Ans:2. Connection Establishment and Termination
TCP provides a connection-oriented service over packet switched networks. Connection-oriented implies that there is a
virtual connection between two endpoints. There are three phases in any virtual connection. These are the connection
establishment, data transfer and connection termination phases.
2.1 Three-Way Handshake
In order for two hosts to communicate using TCP they must first establish a connection by exchanging messages in what
is known as the three-way handshake. The diagram below depicts the process of the three-way handshake.
To start, Host A initiates the connection by sending a TCP segment with the SYN control bit set and an initial sequence
number (ISN) we represent as the variable x in the sequence number field.
At some moment later in time, Host B receives this SYN segment, processes it and responds with a TCP segment of its
own. The response from Host B contains the SYN control bit set and its own ISN represented as variable y. Host B also
sets the ACK control bit to indicate the next expected byte from Host A should contain data starting with sequence
number x+1.
When Host A receives Host B's ISN and ACK, it finishes the connection establishment phase by sending a final
acknowledgement segment to Host B. In this case, Host A sets the ACK control bit and indicates the next expected byte
from Host B by placing acknowledgement number y+1 in the acknowledgement field.
In addition to the information shown in the diagram above, the source and destination ports to use for this connection are
also exchanged in each sender's segments.
To terminate the connection in our example, the application running on Host A signals TCP to close the connection. This
generates the first FIN segment from Host A to Host B. When Host B receives the initial FIN segment, it immediately
acknowledges the segment and notifies its destination application of the termination request. Once the application on Host
B also decides to shut down the connection, it then sends its own FIN segment, which Host A will process and respond
with an acknowledgement
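The handshake and termination exchanges themselves are carried out by the operating system's TCP implementation. A loopback client/server sketch using Python's socket API shows the passive open (bind/listen/accept) and the active open (connect) that trigger them; the addresses and payload are illustrative.

```python
import socket
import threading

def echo_server(ready):
    """Passive open: socket, bind, listen, then accept one connection."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))          # port 0: let the kernel pick a free port
    srv.listen(1)
    ready["port"] = srv.getsockname()[1]
    ready["event"].set()
    conn, _ = srv.accept()              # returns once the handshake completes
    conn.sendall(conn.recv(1024))       # echo the data back
    conn.close()                        # sends FIN toward the client
    srv.close()

def run_demo():
    ready = {"event": threading.Event()}
    t = threading.Thread(target=echo_server, args=(ready,))
    t.start()
    ready["event"].wait()
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(("127.0.0.1", ready["port"]))  # active open: SYN, SYN+ACK, ACK
    cli.sendall(b"ping")
    data = cli.recv(1024)
    cli.close()                                # FIN/ACK termination exchange
    t.join()
    return data
```

Capturing this exchange with a packet sniffer on the loopback interface would show exactly the SYN, SYN+ACK, ACK segments of the three-way handshake followed by the FIN/ACK pairs at close.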
1. The calling environment is suspended, procedure parameters are transferred across the network to the environment
where the procedure is to execute, and the procedure is executed there.
2. When the procedure finishes and produces its results, its results are transferred back to the calling environment, where
execution resumes as if returning from a regular procedure call.
NOTE: RPC is especially well suited for client-server (e.g. query-response) interaction in which the flow of
control alternates between the caller and callee. Conceptually, the client and server do not both execute at the same
time. Instead, the thread of execution jumps from the caller to the callee and then back again.
Working of RPC
The following steps take place during a RPC:
1. A client invokes a client stub procedure, passing parameters in the usual way. The client stub resides within the
client’s own address space.
2. The client stub marshalls(pack) the parameters into a message. Marshalling includes converting the representation of
the parameters into a standard format, and copying each parameter into the message.
3. The client stub passes the message to the transport layer, which sends it to the remote server machine.
4. On the server, the transport layer passes the message to a server stub, which demarshalls(unpack) the parameters and
calls the desired server routine using the regular procedure call mechanism.
5. When the server procedure completes, it returns to the server stub (e.g., via a normal procedure call return), which
marshalls the return values into a message. The server stub then hands the message to the transport layer.
6. The transport layer sends the result message back to the client transport layer, which hands the message back to the
client stub.
7. The client stub demarshalls the return parameters and execution returns to the caller.
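The stub behaviour in steps 1-7 can be sketched with Python's standard xmlrpc modules, which perform the marshalling and transport on our behalf; the `add` procedure and loopback address are illustrative choices, not part of any particular RPC system described above.

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

def start_server():
    """Server side: register a procedure and serve it in a background thread."""
    srv = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
    srv.register_function(lambda a, b: a + b, "add")   # the remote procedure
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv

def call_add(port, a, b):
    """Client side: the proxy acts as the client stub (steps 1-3 and 7)."""
    proxy = ServerProxy("http://127.0.0.1:%d/" % port)
    return proxy.add(a, b)   # parameters are marshalled into an XML request
```

From the caller's point of view `proxy.add(2, 3)` looks like an ordinary local call, which is precisely the abstraction RPC provides.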
ADVANTAGES
1. RPC provides abstraction, i.e., the message-passing nature of network communication is hidden from the user.
2. RPC often omits many of the protocol layers to improve performance. Even a small performance improvement is
important because a program may invoke RPCs often.
3. RPC enables applications to be used in a distributed environment, not only in the local environment.
4. With RPC, code re-writing/re-developing effort is minimized.
5. RPC supports both process-oriented and thread-oriented models.
8.
A. Explain the role of DNS. What are resource records? Explain briefly.
Ans: We can define DNS Resource Records simply as DNS Server database entries. Resource Records are usually a name to IP
Address (IPv4 or IPv6) mapping (or vice versa). DNS Resource Records are used to answer DNS client queries. Resource
Records are added to the DNS server for the portion of the DNS namespace which the DNS Server is hosting.
Resource Records (RRs) are the DNS data records. Their precise format is defined in RFC 1035 §3.2.1. The most
important fields in a resource record are Name, Class, Type, and Data. Name is a domain name, Class and Type are two-
byte integers, and Data is a variable-length field to be interpreted in the context of Class and Type. Almost all Internet
applications use Class 1, the Internet Class. For the Internet Class, many standard Types have been defined. The complete
list can be found in the current Assigned Numbers RFC. Only those most important to DNS operation are shown here.
Address (A) records match domain names to IP addresses, and are both the most important and the most mundane aspect of
DNS. See RFC 1035 §3.4.1 for a more detailed description of the A RR, though there is really very little to describe. The
data section consists entirely of a 32-bit IP address. Most DNS operations are queries for A records matching a given
domain name. Since hosts can have multiple IP addresses, corresponding to multiple physical network interfaces, it is
permissible for multiple A records to match a given domain name. Normally, only the first one is used, so choose a host's
most reliable IP address and put it first when constructing name server databases.
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
| ADDRESS |
+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+
where:
A resource record, commonly referred to as an RR, is the unit of information entry in DNS zone files; RRs are the basic
building blocks of host-name and IP information and are used to resolve all DNS queries. Resource records exist as many
types to provide extended name-resolution services.
Different types of RRs have different formats, as they contain different data. In general, however, many RRs share a
common format, as the address (A) record layout shown above illustrates.
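As a small illustration of A-record resolution, the standard resolver interface can be queried for the IPv4 addresses matching a name. This is a sketch using Python's socket module; the name passed in is whatever the caller wants resolved.

```python
import socket

def a_records(name):
    """Return the unique IPv4 addresses (A records) resolved for `name`."""
    infos = socket.getaddrinfo(name, None, socket.AF_INET, socket.SOCK_STREAM)
    # each entry is (family, type, proto, canonname, sockaddr); keep the address
    return sorted({sockaddr[0] for *_, sockaddr in infos})
```

Returning all addresses rather than just the first mirrors the fact that multiple A records may match one domain name.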
B. Describe the architecture of SNMP protocol?
Ans: The Simple Network Management Protocol (SNMP) architecture includes four layers.
As the following figure illustrates, the SNMP architecture includes the following layers:
• SNMP Network Managers
• Master agents
• Subagents
• Managed components
Figure 1. SNMP architecture
A network can have multiple SNMP Network Managers. Each workstation can have one master agent. The SNMP
Network Managers and master agents use SNMP protocols to communicate with each other. Each managed component
has a corresponding subagent and MIBs. SNMP does not specify the protocol for communications between master agents
and subagents.
• SNMP network managers
An SNMP Network Manager is a program that asks for information from master agents and displays that information.
You can use most SNMP Network Managers to select the items to monitor and the form in which to display the
information.
• Master agents
A master agent is a software program that provides the interface between an SNMP Network Manager and a subagent.
• Subagents
A managed component is hardware or software that provides a subagent. For example, database servers, operating
systems, routers, and printers can be managed components if they provide subagents.
• Management Information Bases
A Management Information Base (MIB) is a group of tables that specify the information that a subagent provides to a
master agent. MIBs follow SNMP protocols.
June 2018 (1, 2, 3, 5, 6, 8)
1
A . Explain the implementation of connection oriented and connectionless services?
Ans: Connectionless: does not require a session connection between sender and receiver. The sender simply starts sending
packets (called datagrams) to the destination. This service does not have the reliability of the connection-oriented method,
but it is useful for periodic burst transfers. Neither system must maintain state information for the systems to which it
sends transmissions or from which it receives them. A connectionless network provides minimal services.
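The contrast with a connection-oriented service shows up clearly in a minimal UDP sketch: the sender transmits a datagram with no prior handshake, and neither side keeps per-peer connection state. The loopback addresses and payload are illustrative.

```python
import socket

def udp_demo():
    """Connectionless exchange: no handshake, the sender just sends."""
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", 0))                    # receiver: just a bound port
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tx.sendto(b"datagram", rx.getsockname())     # no connect() needed
    data, sender = rx.recvfrom(1024)             # sender address arrives per packet
    tx.close()
    rx.close()
    return data
```

Because each datagram carries its own source address, the receiver learns who sent it packet by packet instead of from connection state.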
Link-state protocols such as OSPF flood all the routing information when they first become active in link-state packets.
After the network converges, they send only small updates via link-state packets.
2.
A. Compare static routing VS dynamic routing algorithm?
Ans:
BASIS FOR COMPARISON    STATIC ROUTING                               DYNAMIC ROUTING
Routing table building  Routes are hand-typed.                       Routes are dynamically filled in the table.
Routing algorithms      Doesn't employ complex routing algorithms.   Uses complex routing algorithms to perform routing operations.
Link failure            Link failure obstructs rerouting.            Link failure doesn't affect rerouting.
Security                Provides high security.                      Less secure due to sending broadcasts and multicasts.
Routing protocols       No routing protocols are involved.           Routing protocols such as RIP and EIGRP are involved in the routing process.
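Conceptually, a static routing table is just a hand-typed mapping from prefixes to next hops, consulted with a longest-prefix match at forwarding time. The prefixes and next-hop addresses below are invented for the sketch.

```python
import ipaddress

# A hand-typed (static) routing table: prefix -> next hop.
STATIC_ROUTES = {
    "10.0.0.0/8":  "192.168.1.1",
    "10.1.0.0/16": "192.168.1.2",
    "0.0.0.0/0":   "192.168.1.254",   # default route
}

def next_hop(dst):
    """Longest-prefix match over the static table."""
    addr = ipaddress.ip_address(dst)
    best = max((ipaddress.ip_network(p) for p in STATIC_ROUTES
                if addr in ipaddress.ip_network(p)),
               key=lambda n: n.prefixlen)       # prefer the most specific prefix
    return STATIC_ROUTES[str(best)]
```

A dynamic routing protocol would keep rewriting this table as topology changes; with static routing, a link failure leaves the stale entries in place, which is exactly the drawback noted above.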
B. Explain Broadcast routing technique and various methods for doing it?
Ans: In some applications, hosts need to send messages to many or all other hosts. For example, a service distributing
weather reports, stock market updates, or live radio programs might work best by sending to all machines and letting
those that are interested read the data. Sending a packet to all destinations simultaneously is called broadcasting. Various
methods have been proposed for doing it. One broadcasting method that requires no special features from the network is
for the source to simply send a distinct packet to each destination. Not only is the method wasteful of bandwidth and
slow, but it also requires the source to have a complete list of all destinations. This method is not desirable in practice,
even though it is widely applicable. An improvement is multidestination routing, in which each packet contains either a
list of destinations or a bit map indicating the desired destinations. When a packet arrives at a router, the router checks all
the destinations to determine the set of output lines that will be needed. (An output line is needed if it is the best route to
at least one of the destinations.) The router generates a new copy of the packet for each output line to be used and
includes in each packet only those destinations that are to use the line. In effect, the destination set is partitioned among
the output lines. After a sufficient number of hops, each packet will carry only one destination like a normal packet.
Multidestination routing is like using separately addressed packets, except that when several packets must follow the
same route, one of them pays full fare and the rest ride free. The network bandwidth is therefore used more efficiently.
However, this scheme still requires the source to know all the destinations, plus it is as much work for a router to
determine where to send one multidestination packet as it is for multiple distinct packets. We have already seen a better
broadcast routing technique: flooding. When implemented with a sequence number per source, flooding uses links
efficiently with a decision rule at routers that is relatively simple. Although flooding is ill-suited for ordinary point-to-
point communication, it rates serious consideration for broadcasting. However, it turns out that we can do better still once
the shortest path routes for regular packets have been computed. The idea for reverse path forwarding is elegant and
remarkably simple once it has been pointed out (Dalal and Metcalfe, 1978). When a broadcast packet arrives at a router,
the router checks to see if the packet arrived on the link that is normally used for sending packets toward the source of the
broadcast. If so, there is an excellent chance that the broadcast packet itself followed the best route from the router and is
therefore the first copy to arrive at the router. This being the case, the router forwards copies of it onto all links except the
one it arrived on. If, however, the broadcast packet arrived on a link other than the preferred one for reaching the source,
the packet is discarded as a likely duplicate.
An example of reverse path forwarding is shown in Fig. 5-15. Part (a) shows a network, part (b) shows a sink tree for
router I of that network, and part (c) shows how the reverse path algorithm works. On the first hop, I sends packets to
F, H, J, and N, as indicated by the second row of the tree. Each of these packets arrives on the preferred path to I
(assuming that the preferred path falls along the sink tree) and is so indicated by a circle around the letter. On the second
hop, eight packets are generated, two by each of the routers that received a packet on the first hop. As it turns out, all
eight of these arrive at previously unvisited routers, and five of these arrive along the preferred line. Of the six packets
generated on the third hop, only three arrive on the preferred path (at C, E, and K); the others are duplicates. After five
hops and 24 packets, the broadcasting terminates, compared with four hops and 14 packets had the sink tree been
followed exactly. The principal advantage of reverse path forwarding is that it is efficient while being easy to implement.
It sends the broadcast packet over each link only once in each direction, just as in flooding, yet it requires only that
routers know how to reach all destinations, without needing to remember sequence numbers (or use other mechanisms to
stop the flood) or list all destinations in the packet. Our last broadcast algorithm improves on the behavior of reverse path
forwarding. It makes explicit use of the sink tree—or any other convenient spanning tree—for the router initiating the
broadcast. A spanning tree is a subset of the network that includes all the routers but contains no loops. Sink trees are
spanning trees. If each router knows which of its lines belong to the spanning tree, it can copy an incoming broadcast
packet onto all the spanning tree lines except the one it arrived on. This method makes excellent use of bandwidth,
generating the absolute minimum number of packets necessary to do the job. In Fig. 5-15, for example, when the sink tree
of part (b) is used as the spanning tree, the broadcast packet is sent with the minimum 14 packets. The only problem is
that each router must have knowledge of some spanning tree for the method to be applicable. Sometimes this information
is available (e.g., with link state routing, all routers know the complete topology, so they can compute a spanning tree) but
sometimes it is not (e.g., with distance vector routing).
Broadcast routing
By default, the broadcast packets are not routed and forwarded by the routers on any network. Routers create broadcast
domains. But it can be configured to forward broadcasts in some special cases. A broadcast message is destined to all
network devices.
Broadcast routing can be done in two ways (algorithm):
• A router creates a data packet and then sends it to each host one by one. In this case, the router creates multiple
copies of single data packet with different destination addresses. All packets are sent as unicast but because they
are sent to all, it simulates as if router is broadcasting.
This method consumes lots of bandwidth, and the router must know the destination address of each node.
• Secondly, when router receives a packet that is to be broadcasted, it simply floods those packets out of all interfaces.
All routers are configured in the same way.
This method is easy on router's CPU but may cause the problem of duplicate packets received from peer routers.
Reverse path forwarding is a technique in which a router knows in advance the predecessor (link) from which it
should receive a broadcast. This technique is used to detect and discard duplicates.
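The duplicate check at the heart of reverse path forwarding can be sketched as follows, assuming each router already knows its preferred (unicast) outgoing link toward any source; the link names in any example are arbitrary labels.

```python
def rpf_forward(router, arrival_link, source, unicast_next_link, links):
    """Return the links to copy a broadcast packet onto, or [] to discard it.

    unicast_next_link: dict (router, destination) -> preferred outgoing link.
    links: dict router -> list of that router's links.
    """
    if arrival_link != unicast_next_link[(router, source)]:
        return []                                   # likely a duplicate: drop it
    # Arrived on the link we would use to reach the source, so this is
    # probably the first copy: flood it out every other link.
    return [l for l in links[router] if l != arrival_link]
```

Note that the router needs no per-packet sequence numbers and no list of destinations, only its ordinary unicast routing information, which is the efficiency argument made above.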
1. Retransmission Policy :
This policy governs how the retransmission of packets is handled. If the sender suspects that a sent packet is lost or
corrupted, the packet needs to be retransmitted, and this retransmission may increase congestion in the network.
To prevent congestion, retransmission timers must be designed both to prevent congestion and to optimize
efficiency.
2. Window Policy :
The type of window at the sender side may also affect congestion. With a Go-Back-N window, several packets are
resent even though some of them may have been received successfully at the receiver side. This duplication may
increase congestion in the network and make it worse.
Therefore, a Selective Repeat window should be adopted, as it resends only the specific packets that may have been lost.
3. Discarding Policy :
A good discarding policy lets routers prevent congestion by partially discarding corrupted or less sensitive packets
while maintaining the quality of the message.
In the case of audio file transmission, for example, routers can discard less sensitive packets to prevent congestion
while maintaining the quality of the audio file.
4. Acknowledgment Policy :
Since acknowledgements are also part of the load on the network, the acknowledgement policy imposed by the
receiver may also affect congestion. Several approaches can be used to prevent congestion related to
acknowledgements:
the receiver can send one acknowledgement for N packets rather than acknowledging each packet individually, or
send an acknowledgement only when it has a packet to send or a timer expires.
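The first approach, acknowledging every N packets cumulatively, can be sketched as a small simulation. The segment sizes and the value of N are arbitrary, and real receivers also bound the delay with a timer rather than waiting indefinitely.

```python
def cumulative_acks(segments, n):
    """Yield the ACK numbers a receiver would send, one per n segments.

    segments: list of in-order segment lengths (bytes).
    """
    acks, next_expected = [], 0
    for i, seg_len in enumerate(segments, start=1):
        next_expected += seg_len
        if i % n == 0:                 # ack only every n-th segment
            acks.append(next_expected) # cumulative: covers all earlier bytes too
    if len(segments) % n:              # final ack (a timer expiry in practice)
        acks.append(next_expected)
    return acks
```

For five 100-byte segments with n = 2 this produces three acknowledgements instead of five, which is precisely the reduction in load the policy aims for.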
5. Admission Policy :
Under an admission policy, a mechanism should be used to prevent congestion: switches should first check the
resource requirements of a network flow before transmitting it further. If there is congestion in the network, or a
risk of it, a router should refuse to establish a virtual circuit connection in order to prevent further
congestion.
The Differentiated Services (DiffServ) [RFC2475] approach involves the reservation of network resources such as output
interface buffer space/queues and percentages of link bandwidth assigned for each type of traffic. Traffic types (which
may also be called RFSs - Resource-Facing Services) aggregate flows with similar delay and packet loss requirements.
DiffServ in itself makes no absolute guarantees other than that different traffic types will be treated in different ways,
according to the QoS parameters configured.
This configuration is required on each QoS-enabled interface in the network and there is no interaction between nodes so
packets are treated on a per-hop basis (PHB). This means that each router on the path between the source and destination
may have different QoS parameters configured. For example, the amount of bandwidth assigned for a traffic type on a
backbone network link is likely to be greater than that on an access link, as the backbone may be carrying many flows of
that type, from different sources.
Varying the bandwidth parameter is usual on any network with a backbone and many access links, and is unlikely to be
harmful. However, varying other parameters may result in traffic receiving priority treatment in one router and a different
treatment in the next, so a consistent approach to setting QoS parameters across a network is important. For this reason
when two different networks, for example JANET and GÉANT, attempt to interwork using DiffServ, it is vital that both
parties understand the other’s QoS architecture. To simplify inter-domain configuration, the IETF recommended two
main types of PHB: Expedited Forwarding (EF) [RFC3246] and Assured Forwarding (AF) [RFC2597]. EF assumes the
best quality of treatment in terms of latency/lost parameters which a router can provide to packets. AF is mostly designed
for traffic which needs guaranteed delivery but is more tolerant to packet delays/loss than traffic which requires EF.
Neither the EF nor the AF definition specifies particular details of router configuration, such as queuing, admission
control, policing and shaping types and parameters; these are left to the implementation.
DiffServ is a stateless architecture in that a packet enters a router, is classified as necessary, and then placed in the
appropriate queue on the output interface. The router does not attempt to track flows and, once a packet has been
transmitted, it is forgotten.
According to the DiffServ approach, any network router can carry out traffic classification, i.e. decide what PHB should
be applied to arriving packets, independently. However, DiffServ defines a special field in the IP packet called DSCP
(Differential Services Code Point) which can be used as an attribute indicating the desirable PHB for this packet. The
DSCP field is usually intended to be used within a network where routers trust each other, as in a single administrative
domain. In such a case, only the edge routers of the network perform classification and mark ingress packets with a
specific DSCP value; all the core routers can then trust this choice and treat packets accordingly. By extension, DSCP
values can also be used as a means to coordinate traffic handling between trusting networks such as JANET and Regional
Networks. The use of the DSCP field is not mandatory; it is a tool for loose coordination of a network of routers which is
intended to decrease the amount of packet processing work for the core routers.
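On end hosts, a DSCP marking can be requested through the IP_TOS socket option, with the 6-bit DSCP value placed in the upper bits of the former TOS byte. This is a sketch; 46, the standard EF code point, is used only as an example value, and whether the network honours the marking depends entirely on router policy.

```python
import socket

EF_DSCP = 46   # Expedited Forwarding code point, used here as an example

def mark_socket(sock, dscp):
    """Set the DSCP on outgoing packets and return the resulting TOS byte."""
    tos = dscp << 2                  # DSCP occupies the 6 high bits of the byte
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
    return sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
```

In a trusting domain the edge router would normally overwrite or police this value, so host-set DSCPs are a request, not a guarantee.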
2). Buffering (FIFO Queuing): In first-in, first-out (FIFO) queuing, packets wait in a buffer (queue) until the node
(router or switch) is ready to process them. If the average arrival rate is higher than the average processing rate, the queue
will fill up and new packets will be discarded. Figure 9 shows a conceptual view of a FIFO queue.
3). Leaky Bucket: A technique called leaky bucket can smooth out bursty traffic. Bursty chunks are
stored in the bucket and sent out at an average rate.
A simple leaky bucket implementation is shown in Figure11. A FIFO queue holds the packets. If the traffic
consists of fixed-size packets, the process removes a fixed number of packets from the queue at each tick of the
clock. If the traffic consists of variable-length packets, the fixed output rate must be based on the number of bytes
or bits.
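The fixed-size-packet case can be sketched as a per-tick simulation: arrivals are queued (and dropped when the bucket is full), then drained at a constant rate each clock tick. The rate and capacity values are parameters of the sketch, not prescribed by the technique.

```python
from collections import deque

def leaky_bucket(arrivals, rate, capacity):
    """Simulate a leaky bucket on fixed-size packets.

    arrivals: packets arriving at each clock tick.
    rate: packets drained per tick.  capacity: queue size; excess is dropped.
    Returns the number of packets sent at each tick.
    """
    queue, sent = deque(), []
    for burst in arrivals:
        for _ in range(burst):
            if len(queue) < capacity:
                queue.append(1)              # enqueue packet; drop when full
        out = min(rate, len(queue))          # constant drain rate per tick
        for _ in range(out):
            queue.popleft()
        sent.append(out)
    return sent
```

A burst of 5 packets with a drain rate of 2 per tick leaves the bucket as 2, 2, 1 over three ticks, showing how the burst is smoothed to the average rate.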
5.
A. Explain connection establishment in transport layer with different protocol scenarios ?
Ans: To aid in our understanding of the connect, accept, and close functions and to help us debug TCP applications
using the netstat program, we must understand how TCP connections are established and terminated, and TCP's
state transition diagram.
Three-Way Handshake
1. The server must be prepared to accept an incoming connection. This is normally done by
calling socket, bind, and listen and is called a passive open.
2. The client issues an active open by calling connect. This causes the client TCP to send a "synchronize"
(SYN) segment, which tells the server the client's initial sequence number for the data that the client
will send on the connection. Normally, there is no data sent with the SYN; it just contains an IP header,
a TCP header, and possible TCP options (which we will talk about shortly).
3. The server must acknowledge (ACK) the client's SYN and the server must also send its own SYN
containing the initial sequence number for the data that the server will send on the connection. The
server sends its SYN and the ACK of the client's SYN in a single segment.
4. The client must acknowledge the server's SYN.
The minimum number of packets required for this exchange is three; hence, this is called TCP's three-way
handshake. We show the three segments in Figure 2.2.
An everyday analogy for establishing a TCP connection is the telephone system [Nemeth 1997].
The socket function is the equivalent of having a telephone to use. bind is telling other people your telephone
number so that they can call you. listen is turning on the ringer so that you will hear when an incoming call
arrives. connect requires that we know the other person's phone number and dial it. accept is when the person
being called answers the phone. Having the client's identity returned by accept (where the identity is the client's
IP address and port number) is similar to having the caller ID feature show the caller's phone number. One
difference, however, is that accept returns the client's identity only after the connection has been established,
whereas the caller ID feature shows the caller's phone number before we choose whether to answer the phone
or not. If the DNS is used (Chapter 11), it provides a service analogous to a telephone book. getaddrinfo is similar
to looking up a person's phone number in the phone book. getnameinfo would be the equivalent of having a
phone book sorted by telephone numbers that we could search, instead of a book sorted by name.
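The telephone analogy above maps directly onto the socket API. A minimal Python sketch of the same primitives follows; the loopback address and OS-chosen port are illustrative, not taken from the text:

```python
import socket

# Server side: "get a phone" (socket), "publish the number" (bind),
# "turn on the ringer" (listen), then "answer the call" (accept).
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))            # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

# Client side: "dial the number" (connect); this triggers the three-way
# handshake (SYN, SYN+ACK, ACK) shown in Figure 2.2.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))

conn, addr = server.accept()             # addr is the "caller ID": (IP, port)
print(addr[0])                           # -> 127.0.0.1

client.close(); conn.close(); server.close()
```

Note that accept returns the caller's identity only after the handshake has completed, exactly as described above.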
TCP Options
Each SYN can contain TCP options. Commonly used options include the following:
• MSS option. With this option, the TCP sending the SYN announces its maximum segment size, the
maximum amount of data that it is willing to accept in each TCP segment, on this connection. The
sending TCP uses the receiver's MSS value as the maximum size of a segment that it sends. We will see
how to fetch and set this TCP option with the TCP_MAXSEG socket option (Section 7.9).
• Window scale option. The maximum window that either TCP can advertise to the other TCP is 65,535,
because the corresponding field in the TCP header occupies 16 bits. But, high-speed connections,
common in today's Internet (45 Mbits/sec and faster, as described in RFC 1323 [Jacobson, Braden, and
Borman 1992]), or long delay paths (satellite links) require a larger window to obtain the maximum
throughput possible. This newer option specifies that the advertised window in the TCP header must be
scaled (left-shifted) by 0–14 bits, providing a maximum window of almost one gigabyte (65,535 × 2^14).
Both end-systems must support this option for the window scale to be used on a connection. We will see
how to affect this option with the SO_RCVBUF socket option (Section 7.5).
To provide interoperability with older implementations that do not support this option, the following
rules apply. TCP can send the option with its SYN as part of an active open. But, it can scale its
windows only if the other end also sends the option with its SYN. Similarly, the server's TCP can send
this option only if it receives the option with the client's SYN. This logic assumes that implementations
ignore options that they do not understand, which is required and common, but unfortunately, not
guaranteed with all implementations.
• Timestamp option. This option is needed for high-speed connections to prevent possible data corruption
caused by old, delayed, or duplicated segments. Since it is a newer option, it is negotiated similarly to
the window scale option. As network programmers there is nothing we need to worry about with this
option.
These common options are supported by most implementations. The latter two are sometimes called the "RFC
1323 options," as that RFC [Jacobson, Braden, and Borman 1992] specifies the options. They are also called the
"long fat pipe options," since a network with either a high bandwidth or a long delay is called a long fat pipe.
Chapter 24 of TCPv1 contains more details on these options.
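As a sketch of how these options surface through the socket API: SO_RCVBUF sizes the receive buffer (a large buffer is what prompts the kernel to negotiate window scaling on the SYN), and TCP_MAXSEG, where the platform exposes it, reads or caps the announced MSS. The buffer size requested below is illustrative:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# SO_RCVBUF sizes the receive buffer; a large buffer is what makes the
# kernel negotiate the window scale option on the SYN.
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 256 * 1024)
rcvbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(rcvbuf)  # Linux typically reports about double the requested value

# TCP_MAXSEG reads (or caps) the MSS announced in the SYN; not every
# platform exposes a meaningful value before the connection is set up.
if hasattr(socket, "TCP_MAXSEG"):
    print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG))

s.close()
```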
B. If the current TCP round-trip time (RTT) estimate is 30 msec and the following acknowledgements come in after 26, 32 and 24 msec
respectively, what is the new RTT estimate using the Jacobson algorithm? Use alpha = 0.9.
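Ans: Treating the question's formulation as the exponentially weighted moving average SRTT = alpha * SRTT + (1 - alpha) * M (the full Jacobson/Karels algorithm also tracks the mean deviation, which this question does not ask for), the estimate can be computed as follows:

```python
def smoothed_rtt(srtt, samples, alpha=0.9):
    # EWMA smoothing: new SRTT = alpha * old SRTT + (1 - alpha) * measurement
    for m in samples:
        srtt = alpha * srtt + (1 - alpha) * m
    return srtt

print(round(smoothed_rtt(30, [26, 32, 24]), 3))  # -> 29.256
```

Applying the three samples in order gives 29.6, then 29.84, and finally about 29.256 msec.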
6.
A. Explain label switching and MPLS with a neat diagram?
Ans: A label switching router (LSR) makes up the core of a label-switched network. Label-switched networks are made
up of predetermined paths, called label-switched paths, (LSPs) which are the result of establishing source-destination
pairs by the process called Multi-Protocol Label Switching (MPLS). Label switching routers support MPLS, which
ensures that all of the packets carried in a specific route will remain in the same path over a backbone. Label switching is
a technique of network relaying to overcome the problems perceived by traditional IP-table switching (also known as
traditional layer 3 hop-by-hop routing). Here, the switching of network packets occurs at a lower level, namely the data
link layer rather than the traditional network layer.
Each packet is assigned a label number and the switching takes place after examination of the label assigned to each
packet. The switching is much faster than IP-routing. New technologies such as Multiprotocol Label Switching (MPLS)
use label switching. The established ATM protocol also uses label switching at its core.
Multiprotocol Label Switching (MPLS) is a protocol-agnostic routing technique designed to speed up and shape traffic
flows across enterprise wide area and service provider networks.
MPLS allows most data packets to be forwarded at Layer 2 -- the switching level -- rather than having to be passed up
to Layer 3 -- the routing level. For this reason, it is often informally described as operating at Layer 2.5.
MPLS was created in the late 1990s as a more efficient alternative to traditional IP routing, which requires each router to
independently determine a packet's next hop by inspecting the packet's destination IP address before consulting its
own routing table. This process consumes time and hardware resources, potentially resulting in degraded performance for
real-time applications such as voice and video.
In an MPLS network, the very first router to receive a packet determines the packet's entire route upfront, the identity of
which is quickly conveyed to subsequent routers using a label in the packet header.
While router hardware has improved exponentially since MPLS was first developed -- somewhat diminishing its
significance as a more efficient traffic management technology-- it remains important and popular due to its various other
benefits, particularly security, flexibility and traffic engineering.
Components of MPLS
One of the defining features of MPLS is its use of labels -- the L in MPLS. Sandwiched between Layers 2 and 3, a label is
a four-byte -- 32-bit -- identifier that conveys the packet's predetermined forwarding path in an MPLS network. Labels
can also contain information related to quality of service (QoS), indicating a packet's priority level.
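The 32-bit label stack entry has a fixed layout per RFC 3032: a 20-bit label, a 3-bit traffic class field (carrying the QoS/priority information mentioned above), a 1-bit bottom-of-stack flag, and an 8-bit TTL. A small sketch of packing one, with illustrative field values:

```python
def pack_mpls_label(label, tc, s, ttl):
    # 32-bit MPLS label stack entry (RFC 3032):
    # label (20 bits) | traffic class (3 bits) | bottom-of-stack (1 bit) | TTL (8 bits)
    assert 0 <= label < 2**20 and 0 <= tc < 8 and s in (0, 1) and 0 <= ttl < 256
    return (label << 12) | (tc << 9) | (s << 8) | ttl

entry = pack_mpls_label(label=100, tc=5, s=1, ttl=64)
print(f"{entry:#010x}")  # -> 0x00064b40
```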
Advantages of MPLS
Service providers and enterprises can use MPLS to implement QoS by defining LSPs that can meet specific service-level
agreements on traffic latency, jitter, packet loss and downtime. For example, a network might have three service levels
that prioritize different types of traffic -- e.g., one level for voice, one level for time-sensitive traffic and one level for best
effort traffic.
Length
The length in bytes of the UDP header and the encapsulated data. The minimum value for this field is 8.
Checksum
This is computed as the 16-bit one's complement of the one's complement sum of a pseudo header of information from the
IP header, the UDP header, and the data, padded as needed with zero bytes at the end to make a multiple of two bytes. If
the checksum is set to zero, then checksumming is disabled. The designers chose to make the checksum optional to allow
implementations to operate with little computational overhead. If the computed checksum is zero, then this field must be
set to 0xFFFF.
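The checksum computation described above can be sketched as follows; the input is assumed to be the already-assembled pseudo header, UDP header, and data:

```python
def ones_complement_sum16(data: bytes) -> int:
    # Pad to an even length with a zero byte, as the text describes.
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return total

def udp_checksum(pseudo_header_and_segment: bytes) -> int:
    csum = ~ones_complement_sum16(pseudo_header_and_segment) & 0xFFFF
    return 0xFFFF if csum == 0 else csum  # a computed zero is sent as 0xFFFF
```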
C. Note on VPN?
8.
A. Explain the DNS name space and various resource records?
Ans: DNS namespace: DNS is the name service provided by the Internet for TCP/IP networks. DNS is broken up
into domains, a logical organization of computers that exist in a larger network. The domains exist at different levels and
connect in a hierarchy that resembles the root structure of a tree. Each domain extends from the node above it, beginning
at the top with the root-level domain. Under the root-level domain are the top-level domains, under those are the second-
level domains, and on down into subdomains. DNS namespace identifies the structure of the domains that combine to
form a complete domain name. For example, in the domain name sub.secondary.com, "com" is the top-level domain,
"secondary" identifies the secondary domain name (commonly a site hosted by an organization and/or business), and
"sub" identifies a subdomain within the larger network. This entire DNS domain structure is called the DNS namespace.
The name assigned to a domain or computer relates to its position in the namespace.
Websites are grouped into sections based on the TLD (top-level domain). In the example of http://support.computerhope.com,
".com" is the TLD, "computerhope" is the second-level domain local to the .com TLD, and "support" is a subdomain,
which is determined by its server.
There are different types of Resource Records. The most important types of Resource Records are: 1) IPv4 host address (A),
2) IPv6 host address (AAAA, pronounced "quad-A"), 3) CNAME (Alias), 4) Pointer (PTR), 5) Mail Exchanger (MX),
6) Service (SRV).
DNS Resource Record Type: Explanation
A Record: IPv4 host record, used for mapping a domain name to an IPv4 address.
AAAA Record (pronounced "quad-A"): IPv6 host record, used for mapping a domain name to an IPv6 address.
CNAME Record (Canonical Name): Alias record, used for mapping an alias of a DNS domain name. CNAME records are useful for pointing more than one name at a single host; they allow using different names for the same host.
MX Record: Mail Exchanger, used for mapping a DNS domain name to its mail server. MX records are used by e-mail applications to locate the mail server for a DNS domain, based on the destination e-mail address. An MX record stores the mail server information for a particular domain.
PTR Record: Pointer, used for reverse lookup (IP address to domain name resolution).
SRV Record: Service record, used to map available services. Mainly used by Active Directory in Microsoft Windows Server.
BASIS FOR COMPARISON: POP3 vs IMAP
Basic: With POP3 the mail has to be downloaded before it can be read; with IMAP the mail content can be checked partially before downloading.
Organize: With POP3 the user can not organize mails on the mail server; with IMAP the user can organize the mails on the server.
Folder: With POP3 the user can not create, delete or rename mailboxes; with IMAP the user can create, delete or rename mailboxes on the server.
Content: With POP3 a user can not search the content of mail before downloading; with IMAP a user can search the content of mail on the server before downloading.
Partial: With POP3 the user has to download the mail in full; with IMAP the user can partially download the mail, which helps when bandwidth is limited.
Functions: POP3 is simple and has limited functions; IMAP is more powerful, more complex and has more features.
JPEG stands for Joint Photographic Experts Group. It is the first international standard in image compression and is widely
used today. JPEG can be lossy as well as lossless, but the technique we discuss here is the lossy
compression technique.
How JPEG compression works
The first step is to divide the image into blocks, each with dimensions of 8×8.
Let us say, for the record, that this 8×8 image contains the following values.
The range of the pixel intensities is 0 to 255. We shift this range to -128 to 127 by subtracting 128 from each pixel
value, which gives the following results.
Each shifted block is then transformed with the two-dimensional DCT and its coefficients are quantized; the quantized
coefficients are what the next step scans. Now we perform the zig-zag scan used in JPEG compression. The zig-zag
sequence for the above matrix is shown below. The scan continues until only zeroes remain ahead, so the trailing zeroes
need not be stored. Hence our image is now compressed.
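The zig-zag movement can be sketched as follows; a 3×3 block is used here for brevity instead of JPEG's 8×8, and the values are illustrative:

```python
def zigzag(block):
    # Traverse an N x N block along anti-diagonals, alternating direction,
    # as the JPEG entropy-coding stage does for its 8x8 coefficient blocks.
    n = len(block)
    out = []
    for d in range(2 * n - 1):
        cells = [(i, d - i) for i in range(n) if 0 <= d - i < n]
        if d % 2 == 0:
            cells.reverse()  # even diagonals run bottom-left to top-right
        out.extend(block[i][j] for i, j in cells)
    return out

# Values placed so the zig-zag order reads 1..9.
demo = [[1, 2, 6],
        [3, 5, 7],
        [4, 8, 9]]
print(zigzag(demo))  # -> [1, 2, 3, 4, 5, 6, 7, 8, 9]
```

Because quantization leaves mostly zeroes in the high-frequency corner, the zig-zag order groups them at the end of the sequence, where they can be cut off.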
2.
A. Differentiate static routing and dynamic routing?
Ans:
BASIS FOR COMPARISON: STATIC ROUTING vs DYNAMIC ROUTING
Routing table building: In static routing, routes are typed in by hand; in dynamic routing, routes are filled into the table dynamically.
Link failure: In static routing, a link failure obstructs rerouting; in dynamic routing, a link failure does not affect rerouting.
Security: Static routing provides high security; dynamic routing is less secure due to sending broadcasts and multicasts.
Routing protocols: Static routing involves no routing protocols; dynamic routing involves routing protocols such as RIP, EIGRP, etc. in the routing process.
3.
A. How is congestion controlled by using hop-by-hop choke packets and random early detection techniques? Explain.
Ans: Random Early Detection
Dealing with congestion when it first starts is more effective than letting it gum up the works and then trying to deal with
it. This observation leads to an interesting twist on load shedding, which is to discard packets before all the buffer space
is really exhausted. The motivation for this idea is that most Internet hosts do not yet get congestion signals from routers
in the form of ECN. Instead, the only reliable indication of congestion that hosts get from the network is packet loss.
After all, it is difficult to build a router that does not drop packets when it is overloaded. Transport protocols such as TCP
are thus hardwired to react to loss as congestion, slowing down the source in response. The reasoning behind this logic is
that TCP was designed for wired networks and wired networks are very reliable, so lost packets are mostly due to buffer
overruns rather than transmission errors. Wireless links must recover transmission errors at the link layer (so they are not
seen at the network layer) to work well with TCP. This situation can be exploited to help reduce congestion. By having
routers drop packets early, before the situation has become hopeless, there is time for the source to take action before it is
too late. A popular algorithm for doing this is called RED (Random Early Detection) (Floyd and Jacobson, 1993). To
determine when to start discarding, routers maintain a running average of their queue lengths. When the average queue
length on some link exceeds a threshold, the link is said to be congested and a small fraction of the packets are dropped at
random. Picking packets at random makes it more likely that the fastest senders will see a packet drop; this is the best
option since the router cannot tell which source is causing the most trouble in a datagram network. The affected sender
will notice the loss when there is no acknowledgement, and then the transport protocol will slow down. The lost packet is
thus delivering the same message as a choke packet, but implicitly, without the router sending any explicit signal. RED
routers improve performance compared to routers that drop packets only when their buffers are full, though they may
require tuning to work well. For example, the ideal number of packets to drop depends on how many senders need to be
notified of congestion. However, ECN is the preferred option if it is available. It works in exactly the same manner, but
delivers a congestion signal explicitly rather than as a loss; RED is used when hosts cannot receive explicit signals.
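The RED drop decision described above can be sketched as follows; the threshold, weight and drop-probability values are illustrative (real routers tune them):

```python
import random

class RedQueue:
    # Minimal RED sketch; threshold and weight values are illustrative.
    def __init__(self, min_th=5, max_th=15, max_p=0.1, w=0.2):
        self.min_th, self.max_th, self.max_p, self.w = min_th, max_th, max_p, w
        self.avg = 0.0        # running (EWMA) average of the queue length
        self.queue = []

    def enqueue(self, pkt):
        # Update the average, then decide: accept, early-drop, or hard-drop.
        self.avg = (1 - self.w) * self.avg + self.w * len(self.queue)
        if self.avg >= self.max_th:
            return False      # average queue too long: drop unconditionally
        if self.avg >= self.min_th:
            p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
            if random.random() < p:
                return False  # early random drop: implicit congestion signal
        self.queue.append(pkt)
        return True

q = RedQueue()
accepted = sum(q.enqueue(i) for i in range(9))
print(accepted)  # -> 9: the average stays below min_th, so nothing is dropped
```

Using the averaged rather than instantaneous queue length keeps RED from reacting to short, harmless bursts.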
B. Distinguish between leaky bucket and token bucket and describe how good quality of service is achieved by these
algorithms.
Ans:
Weighted Fair Queuing: A better scheduling method is weighted fair queuing. In this technique, the packets are still
assigned to different classes and admitted to different queues. The queues, however, are weighted based on the priority of
the queues; higher priority means a higher weight. The system processes packets in each queue in a round-robin fashion
with the number of packets selected from each queue based on the corresponding weight.
• Traffic Shaping :
Traffic shaping is a mechanism to control the amount and the rate of the traffic sent to the network. Two
techniques can shape traffic: leaky bucket and token bucket.
1) Leaky Bucket 2)Token Bucket
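A minimal token-bucket sketch follows; the rate and capacity values are illustrative. Unlike a leaky bucket, which emits at a constant rate no matter how long the host has been idle, a token bucket saves up permission to send, allowing bursts up to the bucket capacity:

```python
class TokenBucket:
    # Token-bucket shaper sketch: tokens arrive at `rate` units per second,
    # up to `capacity`; a packet may be sent only if enough tokens exist.
    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens = capacity     # the bucket starts full
        self.last = 0.0

    def allow(self, size, now):
        # Refill according to elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False

tb = TokenBucket(rate=1000, capacity=500)   # 1000 bytes/s, 500-byte bucket
print(tb.allow(400, now=0.0))  # -> True  (bucket starts full)
print(tb.allow(400, now=0.0))  # -> False (only 100 tokens left)
print(tb.allow(400, now=0.5))  # -> True  (0.5 s refills 500 tokens, capped)
```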
5.
A. Why are flow control and buffering required in the transport layer? Explain how they are done.
Ans: The main reason for flow control at the transport layer is to avoid and control congestion. The sending
and the receiving entity can adjust the rate, thereby helping to reduce the end-to-end traffic congestion. This is
very much applicable to TCP.
Flow Control basically means that TCP will ensure that a sender is not overwhelming a receiver by sending packets
faster than it can consume. It’s pretty similar to what’s normally called Back pressure in the Distributed Systems
literature. The idea is that a node receiving data will send some kind of feedback to the node sending the data to let it
know about its current condition.
It’s important to understand that this is not the same as Congestion Control. Although there’s some overlap between the
mechanisms TCP uses to provide both services, they are distinct features. Congestion control is about preventing a node
from overwhelming the network (i.e. the links between two nodes), while Flow Control is about the end-node.
How it works
When we need to send data over a network, this is normally what happens.
The sender application writes data to a socket; the transport layer (in our case, TCP) will wrap this data in a segment and
hand it to the network layer (e.g., IP), which will somehow route this packet to the receiving node.
On the other side of this communication, the network layer will deliver this piece of data to TCP, which will make it
available to the receiver application as an exact copy of the data sent, meaning it will not deliver packets out of order and
will wait for a retransmission in case it notices a gap in the byte stream.
If we zoom in, we will see something like this.
TCP stores the data it needs to send in the send buffer, and the data it receives in the receive buffer. When the application
is ready, it will then read data from the receive buffer.
Flow Control is all about making sure we don’t send more packets when the receive buffer is already full, as the receiver
wouldn’t be able to handle them and would need to drop these packets.
To control the amount of data that TCP can send, the receiver will advertise its Receive Window (rwnd), that is, the spare
room in the receive buffer.
Every time TCP receives a packet, it needs to send an ack message to the sender, acknowledging it received that packet
correctly, and with this ack message it sends the value of the current receive window, so the sender knows if it can keep
sending data.
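The advertised-window mechanism can be sketched as follows; the buffer sizes are illustrative. The receiver reports its spare buffer space (rwnd) on each ACK, and the sender never sends more than that:

```python
class Receiver:
    # Receiver with a fixed-size buffer; rwnd is the spare room it advertises.
    def __init__(self, buf_size):
        self.buf_size, self.buffered = buf_size, 0

    def deliver(self, nbytes):
        # Called when a segment arrives; returns the rwnd sent on the ACK.
        self.buffered += nbytes
        return self.buf_size - self.buffered

    def app_read(self, nbytes):
        # The application draining the buffer reopens the window.
        self.buffered -= min(nbytes, self.buffered)

rx = Receiver(buf_size=1000)
rwnd = rx.deliver(min(600, 1000))   # sender limits itself to the advertised window
print(rwnd)                         # -> 400: at most 400 more bytes may be sent
rx.app_read(500)
print(rx.buf_size - rx.buffered)    # -> 900: the window reopens as the app reads
```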
B. Explain the TCP segment header format and discuss how a TCP connection is established and released in the transport layer?
6.
A. VPN
B. Explain the differences between integrated services and differented services and their uses?
8.
A. Explain Domain Name system?
B. Describe SMTP protocol?
1.
A. Compare VC and datagram subnets?
B. Explain distance vector routing protocol with an example subnet and discuss count to infinity problem?
Ans: A distance-vector routing (DVR) protocol requires that a router inform its neighbors of topology changes
periodically. Historically known as the old ARPANET routing algorithm (or known as Bellman-Ford algorithm).
Bellman Ford Basics – Each router maintains a Distance Vector table containing the distance between itself and ALL
possible destination nodes. Distances,based on a chosen metric, are computed using information from the neighbors’
distance vectors.
Information kept by a DV router:
• Each router has an ID.
• Associated with each link connected to a router, there is a link cost (static or dynamic).
• Intermediate hops.
• From time-to-time, each node sends its own distance vector estimate to neighbors.
• When a node x receives new DV estimate from any neighbor v, it saves v’s distance vector and it updates its own
DV using B-F equation:
• Dx(y) = min { C(x,v) + Dv(y)} for each node y ∈ N
Example – Consider 3 routers X, Y and Z as shown in the figure. Each router has its own routing table, and every routing
table contains the distance to the destination nodes.
Consider router X: X will share its routing table with its neighbors, and the neighbors will share their routing tables with
X; the distance from node X to each destination is then calculated using the Bellman-Ford equation.
Dx(y) = min { C(x,v) + Dv(y)} for each node y ∈ N
As we can see, the distance from X to Z is smaller when Y is the intermediate node (hop), so this route is updated in
routing table X.
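One distance-vector update at router X can be sketched in a few lines; the link costs used here are illustrative, not taken from the figure:

```python
# One distance-vector update at router x, using the Bellman-Ford equation
# Dx(y) = min over neighbors v of { C(x, v) + Dv(y) }.
cost = {"y": 2, "z": 7}                  # C(x, v): x's link costs (illustrative)
neighbor_dv = {                          # Dv(y): vectors received from neighbors
    "y": {"x": 2, "y": 0, "z": 1},
    "z": {"x": 7, "y": 1, "z": 0},
}

dx = {"x": 0}
for dest in ("y", "z"):
    dx[dest] = min(cost[v] + neighbor_dv[v][dest] for v in cost)

print(dx)  # -> {'x': 0, 'y': 2, 'z': 3}: x reaches z via y, since 2 + 1 < 7
```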
The main issue with Distance Vector Routing (DVR) protocols is Routing Loops, since Bellman-Ford Algorithm cannot
prevent loops. This routing loop in DVR network causes Count to Infinity Problem. Routing loops usually occur when
any interface goes down or two-routers send updates at the same time.
Counting to infinity problem:
So in this example, the Bellman-Ford algorithm will converge for each router, they will have entries for each other. B will
know that it can get to C at a cost of 1, and A will know that it can get to C via B at a cost of 2.
If the link between B and C is disconnected, then B will know that it can no longer get to C via that link and will remove
it from its table. Before it can send any updates, it is possible that it will receive an update from A which is
advertising that it can get to C at a cost of 2. B can get to A at a cost of 1, so it will update a route to C via A at a cost of 3.
A will then receive updates from B later and update its cost to 4. They will then go on feeding each other bad information
toward infinity which is called as Count to Infinity problem.
Route Poisoning:
When a route fails, distance vector protocols spread the bad news about a route failure by poisoning the route. Route
poisoning refers to the practice of advertising a route, but with a special metric value called Infinity. Routers consider
routes advertised with an infinite metric to have failed. Each distance vector routing protocol uses the concept of an actual
metric value that represents infinity. RIP defines infinity as 16. The main disadvantage of poison reverse is that it can
significantly increase the size of routing announcements in certain fairly common network topologies.
Split horizon:
If the link between B and C goes down, and B had received a route from A , B could end up using that route via A. A
would send the packet right back to B, creating a loop. But according to Split horizon Rule, Node A does not advertise its
route for C (namely A to B to C) back to B. On the surface, this seems redundant since B will never route via node A
because the route costs more than the direct route from B to C.
Consider the following network topology showing Split horizon-
• In addition to these, split horizon can be combined with route poisoning, using both techniques together to achieve
efficiency while limiting the increase in the size of routing announcements.
• Split horizon with Poison reverse technique is used by Routing Information Protocol (RIP) to reduce routing loops.
Additionally, Holddown timers can be used to avoid the formation of loops. Holddown timer immediately starts
when the router is informed that attached link is down. Till this time, router ignores all updates of down route
unless it receives an update from the router of that downed link. During the timer, If the down link is reachable
again, routing table can be updated.
2.
A. Explain the ad hoc routing algorithm with its route discovery and route maintenance stages?
Ans: 413
3.
A. i). Hop by hop choke packet
ii). Load shedding: 425
iii). Jitter control: 574
B. A computer on a 6 Mbps network is regulated by a token bucket. The token bucket is filled at a rate of 1 Mbps and is
initially filled to capacity with 8 megabits. How long can the computer transmit at the full 6 Mbps?
Ans:
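While the bucket drains at the full rate M it is also refilled at rate ρ, so the burst length S satisfies C + ρS = MS, i.e. S = C / (M − ρ). Worked out:

```python
# The bucket drains at the full rate M while refilling at rate rho,
# so the burst length S satisfies C + rho * S = M * S, i.e. S = C / (M - rho).
C   = 8e6   # bucket capacity, bits (8 megabits)
rho = 1e6   # token arrival rate, bits/s (1 Mbps)
M   = 6e6   # maximum network rate, bits/s (6 Mbps)

S = C / (M - rho)
print(S)    # -> 1.6 (seconds at the full 6 Mbps)
```

So the computer can transmit at the full 6 Mbps for 1.6 seconds.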
5.
A. Explain with a neat diagram TCP header. What is the total size of minimum TCP MTU including TCP and IP
overhead but not including datalink layer overhead?
Ans: The default segment is 536 bytes. TCP adds 20 bytes and so does IP, making the default 576 bytes.
1. LISTEN: When a server is ready to accept an incoming connection it executes the LISTEN primitive. It blocks
waiting for an incoming connection.
2. CONNECT: It connects to the server by establishing a connection. A response is awaited.
3. RECEIVE: Then the RECEIVE call blocks the server.
4. SEND: Then the client executes the SEND primitive to transmit its request, followed by the execution of RECEIVE
to get the reply.
5. DISCONNECT: This primitive is used for terminating the connection. After this primitive one cannot send any
message. When the client sends a DISCONNECT packet, the server also sends a DISCONNECT packet to
acknowledge the client. When the server's packet is received by the client, the connection is terminated.
Connection Oriented Service Primitives
DATA, DATA-ACKNOWLEDGE, EXPEDITED-DATA: Data and information are sent using these primitives.
FACILITY, REPORT: Primitives for enquiring about the performance of the network, such as delivery statistics.
• Consider an application with a server and a number of remote clients.
a. To start with, the server executes a LISTEN primitive, typically by calling a library procedure that makes a
system call to block the server until a client turns up.
b. For lack of a better term, we will reluctantly use the somewhat ungainly acronym TPDU (Transport Protocol
Data Unit) for messages sent from transport entity to transport entity.
c. Thus, TPDUs (exchanged by the transport layer) are contained in packets (exchanged by the network layer).
d. In turn, packets are contained in frames (exchanged by the data link layer).
e. When a frame arrives, the data link layer processes the frame header and passes the contents of the frame
payload field up to the network entity.
6.
A. Explain TCP connection establishment. Write a note on silly window syndrome problem?
Ans: Silly window syndrome is a problem in computer networking caused by poorly implemented TCP flow control. A
serious problem can arise in the sliding window operation when the sending application program creates data slowly, the
receiving application program consumes data slowly, or both. If a server with this problem is unable to process all
incoming data, it requests that its clients reduce the amount of data they send at a time (the window setting on a
TCP packet). If the server continues to be unable to process all incoming data, the window becomes smaller and smaller,
sometimes to the point that the data transmitted is smaller than the packet header, making data transmission extremely
inefficient. The name of this problem is due to the window size shrinking to a "silly" value.
Since there is a certain amount of overhead associated with processing each packet, the increased number of packets
means increased overhead to process a decreasing amount of data. The end result is thrashing.
When there is no synchronization between the sender and receiver regarding the capacity of the flow of data or the size of the
packet, the silly window syndrome problem is created. When the silly window syndrome is created by the sender, Nagle's
algorithm is used. Nagle's solution requires that the sender send the first segment even if it is a small one, then that it wait
until an ACK is received or a maximum sized segment (MSS) is accumulated. When the silly window syndrome is
created by the receiver, David D. Clark's solution is used. Clark's solution closes the window until another
segment of maximum segment size (MSS) can be received or the buffer is half empty.
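Nagle's rule can be sketched as a predicate, a simplification of the real implementation (which also honours the PSH flag and retransmission timers):

```python
def nagle_should_send(segment_len, mss, unacked_data):
    # Nagle's rule: send immediately if a full MSS worth of data is ready,
    # or if nothing is outstanding; otherwise hold small segments until an
    # ACK returns and the small pieces can be coalesced.
    return segment_len >= mss or not unacked_data

print(nagle_should_send(1460, mss=1460, unacked_data=True))   # -> True  (full segment)
print(nagle_should_send(10,   mss=1460, unacked_data=True))   # -> False (wait for ACK)
print(nagle_should_send(10,   mss=1460, unacked_data=False))  # -> True  (first small piece)
```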
There are 3 causes of SWS:
8.
i). Explain the role of DNS in the application layer?
An application-layer protocol defines how applications on different systems pass messages to each other. An application-
layer protocol defines; the types of messages exchanged, the syntax of the various message types, the meaning of the
information, and rules for determining when and how a process sends and responds to messages.
One application layer protocol is the Domain Name System which is a name-resolution system critical to World Wide
Web (WWW) function and services which is responsible for translating fully qualified domain names such
as www.zymitry.com, into machine-readable IP addresses. The Domain Name System is what allows users to use
alphanumeric names to navigate the WWW, email systems, FTP services, and others, instead of having to use these
systems' Internet Protocol (IP) addresses. The Domain Name System protocol is different from most other protocols
because users usually have no direct interaction with it; instead, applications such as web browsers and FTP
clients use it on their behalf. The Domain Name System provides the name translation used behind the scenes by various services.
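The behind-the-scenes translation can be sketched with getaddrinfo, the "phone book" lookup that applications such as browsers use; localhost is queried here so the example needs no network access:

```python
import socket

# getaddrinfo performs the name-to-address translation DNS provides.
# Each result tuple's last element is the socket address (IP, port, ...).
infos = socket.getaddrinfo("localhost", 80, proto=socket.IPPROTO_TCP)
addresses = {info[4][0] for info in infos}
print(addresses)   # typically contains '127.0.0.1' and/or '::1'
```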