
UNIT – II

DATA LINK LAYER

Objective:
To familiarize about the Data Link Layer and its significance in the Data
Communication.

Syllabus:
Data Link Layer
Design issues-Services provided to the Network Layer, Framing, Error Detection and Correction,
CRC, Elementary Protocols- Stop and wait, Sliding Window Protocols, A One bit, Go Back n,
Selective Repeat, HDLC, PPP

Learning Outcomes:
Students will be able to

1. Understand the design issues, framing, error detection and correction methodologies
adopted in data transmission
2. Explain the services the Data Link layer provides to the upper layers and the protocols it
uses.
Learning Material

2.1 Design issues:


The data link layer uses the services of the physical layer to send and receive bits over
communication channels. It has a number of functions, including:

 Providing a well-defined service interface to the network layer.


 Dealing with transmission errors.
 Regulating the flow of data so that slow receivers are not swamped by fast senders.
The data link layer takes the packets it gets from the network layer and encapsulates them into
frames for transmission. Each frame contains a frame header, a payload field for holding the
packet, and a frame trailer, as illustrated in Figure.

Figure 2.1: Relationship between packets and frames

2.2 Services Provided to the Network Layer:


 Unacknowledged Connectionless service
 Acknowledged Connectionless service
 Acknowledged Connection-Oriented service

Unacknowledged Connectionless service:


• No recovery of lost or corrupted frames.
• Used when the error rate is very low.
• Suited to real-time traffic, such as speech or video.
• Losses are taken care of at higher layers.
• Used on a reliable medium, like coax cable or optical fiber, where the error rate is low.
• Appropriate for voice, where late data are worse than bad data.
• It consists of having the source machine send independent frames to the destination
machine without having the destination machine acknowledge them.
• Examples: Ethernet, Voice over IP, and other channels where real-time operation is more
important than the quality of transmission.

Acknowledged Connectionless service


• Returns information about whether a frame has safely arrived.
• Uses time-outs and retransmission, so a frame may be received more than once.
• Unreliable channels, such as wireless systems.
• Useful on unreliable medium like wireless.
• Acknowledgements add delays.
• Adding Ack in the DLL rather than in the NL is just an optimization and not a
requirement. Leaving it for the NL is inefficient as a large message (packet) has to be
resent in that case in contrast to small frames here.
• On reliable channels, like fiber, the overhead associated with the Ack is not justified.
• Each frame sent by the Data Link layer is acknowledged, so the sender knows whether a
specific frame has been received or lost.
• Typically the protocol uses a specific time period; if it passes without an
acknowledgement, the frame is re-sent.
• This service is useful for communication over an unreliable channel (e.g.,
802.11 WiFi).
• The network layer does not know the frame size or the other restrictions of the data
link layer, so the data link layer needs its own mechanism to
optimize the transmission.

Acknowledged Connection-oriented service


• A connection is established before any data is sent.
• Provides the network layer with a reliable bit stream.
• Most reliable, Guaranteed service –
– Each frame sent is indeed received
– Each frame is received exactly once
– Frames are received in order
• Special care has to be taken to ensure this in connectionless services
• Source and Destination establish a connection first.
• Each frame sent is numbered
– Data link layer guarantees that each frame sent is indeed received.
– It guarantees that each frame is received only once and that all frames are
received in the correct order.
• Ex: Satellite channel communication, Long-distance telephone communication, etc.

2.3 Framing:
The usual approach is for the data link layer to break up the bit stream into discrete frames,
compute a short token called a checksum for each frame, and include the checksum in the frame
when it is transmitted.
• Large block of data may be broken up into small frames at the source because:
– limited buffer size at the receiver
– A larger block of data has higher probability of error
• With smaller frames, errors are detected sooner, and only a smaller
amount of data needs to be retransmitted
– On a shared medium, such as Ethernet and Wireless LAN, small frame size can
prevent one station from occupying medium for long periods
• Need to indicate the start and end of a block of data
• Use preamble (e.g., flag byte) and postamble
• If the receiver ever loses synchronization, it can just search for the flag byte.
• Frame: preamble + control info + data + postamble
• Problem: it is possible that the flag byte’s bit pattern occur in the data
• Two popular solutions:
Byte stuffing: The sender inserts a special escape byte (e.g., ESC) just before each “accidental” flag
byte in the data (much as, in the C language, a quote character inside a string is written as \").
The receiver’s link layer removes this special byte before the data are given to the network layer.

Bit stuffing: each frame begins and ends with the flag byte “01111110”.


• Whenever the sender encounters five consecutive 1s in the data, it automatically stuffs a
0 bit into the outgoing bit stream. When the receiver sees five consecutive incoming 1
bits, followed by a 0 bit, it automatically deletes the 0 bit.
• To provide service to the network layer the data link layer must use the service provided
to it by physical layer.
• Stream of data bits provided to data link layer is not guaranteed to be without errors.
• Errors could be:
– Number of received bits does not match number of transmitted bits (deletion or
insertion)
– Bit Value
• It is up to data link layer to correct the errors if necessary.
• Transmission of the data link layer starts with breaking up the bit stream
– into discrete frames
– Computation of a checksum for each frame, and
– Include the checksum into the frame before it is transmitted.
• The receiver recomputes the checksum for every received frame; if it differs from the
checksum carried in the frame, the receiver knows an error has occurred and has to deal with it.
• Framing is more difficult than it might first appear.
Fixed-Size Framing: Frames can be of fixed or variable size. In fixed-size framing, there is
no need for defining the boundaries of the frames; the size itself can be used as a delimiter.
An example of this type of framing is the ATM wide-area network, which uses frames of
fixed size called cells.
Variable-Size Framing: The main discussion in this chapter concerns variable-size framing,
prevalent in local area networks. In variable-size framing, one needs a way to define the end
of the frame and the beginning of the next. Historically, two approaches were used for this
purpose: a character-oriented approach and a bit-oriented approach.

2.3.1 Framing Methods


1. Character Count
2. Flag bytes with byte stuffing
3. Flag bits with bit stuffing
4. Physical layer coding violations
1. Framing with Character Count
A character stream. (a) Without errors. (b) With one error.

Figure 2.2: Character Count with and without errors


Problem with Framing with CC:
• What if the count is garbled
• Even with a checksum, the receiver may know that the frame is bad, but there is no way
to tell where the next frame starts.
• Asking for retransmission doesn’t help either because the start of the retransmitted frame
is not known
• No longer used
Byte Count Framing Method:
• It uses a field in the header to specify the number of bytes in the frame.
• Once the header is received, the count in it is used to determine the end of the
frame. The trouble with this algorithm is that when the count is garbled in transmission,
the destination gets out of synchronization with the transmission.
– The destination may be able to detect that the frame is in error, but it has no
means (in this algorithm) of correcting it.
2. Flag Bytes with Byte Stuffing Framing Method:
• This method solves the frame-boundary problem by having each frame start and end with
special bytes.
• If they are the same (beginning and ending byte in the frame) they are called flag byte.
• In the next figure this byte is shown as FLAG.
• If the actual data contain a byte that is identical to the FLAG byte (e.g., in a picture or a
binary data stream), the convention is to insert an escape (ESC) character just before the
accidental “FLAG” byte.

Figure 2.3: Flag Bytes with Byte Stuffing


A frame delimited by flag bytes.
Four examples of byte sequences before and after byte stuffing.
Problem: this method is tied to a fixed 8-bit character size, so it cannot handle a
heterogeneous environment with other character sizes.
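The escape-byte idea can be sketched in a few lines of Python. The FLAG and ESC values below are illustrative placeholders rather than values mandated by any particular protocol, and the sketch assumes one-byte characters as discussed above.

FLAG = 0x7E   # illustrative flag byte
ESC = 0x7D    # illustrative escape byte

def byte_stuff(payload: bytes) -> bytes:
    """Insert ESC before every accidental FLAG or ESC byte in the data."""
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)        # escape the troublesome byte
        out.append(b)
    return bytes(out)

def byte_unstuff(stuffed: bytes) -> bytes:
    """Remove the ESC bytes inserted by byte_stuff()."""
    out = bytearray()
    escaped = False
    for b in stuffed:
        if escaped:
            out.append(b)          # the byte after ESC is always data
            escaped = False
        elif b == ESC:
            escaped = True         # drop the ESC itself
        else:
            out.append(b)
    return bytes(out)

data = bytes([0x41, FLAG, 0x42, ESC, 0x43])
assert byte_unstuff(byte_stuff(data)) == data   # the round trip restores the original data

The receiving link layer removes the escape bytes in exactly this way before handing the data to the network layer.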
3. Flag Bits with Bit Stuffing Framing Method
• This method achieves the same thing as Byte Stuffing method by using Bits (1) instead of
Bytes (8 Bits).
• It was developed for High-level Data Link Control (HDLC) protocol.
• Each frame begins and ends with a special bit pattern:
– 01111110 or 0x7E <- the flag byte
– Whenever the sender’s data link layer encounters five consecutive 1s in the data it
automatically stuffs a 0 bit into the outgoing bit stream.
– USB uses bit stuffing.

Figure 2.4: Bit stuffing


(a) The original data. (b) The data as they appear on the line.
(c) The data as they are stored in the receiver’s memory after destuffing.
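A minimal sketch of the stuffing rule above, operating on strings of '0'/'1' characters purely for readability (a real implementation would work on the bit stream itself):

def bit_stuff(bits: str) -> str:
    """After five consecutive 1s, stuff a 0 into the outgoing stream."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            out.append('0')   # the stuffed bit
            run = 0
    return ''.join(out)

def bit_unstuff(bits: str) -> str:
    """Delete the 0 that follows five consecutive incoming 1s."""
    out, run, drop_next = [], 0, False
    for b in bits:
        if drop_next:          # this is the stuffed 0: discard it
            drop_next = False
            run = 0
            continue
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            drop_next = True
            run = 0
    return ''.join(out)

original = '011011111111101'
assert bit_unstuff(bit_stuff(original)) == original   # the stuffed 0s disappear at the receiver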

2.4 Error Control:

• Having solved the problem of marking the start and end of each frame, the DLL must next
handle errors in transmission and their detection.
– Ensuring that all frames are delivered to the network layer at the destination and
in proper order.
• Unacknowledged connectionless service: it is OK for the sender to output frames
regardless of whether they are received.
• Reliable connection-oriented service: it is NOT OK.
• Reliable connection-oriented service usually will provide a sender with some feedback
about what is happening at the other end of the line.
– Receiver Sends Back Special Control Frames.
– If the Sender Receives positive Acknowledgment it will know that the frame has
arrived safely.
• A timer and frame sequence numbers at the sender are necessary to handle the case when
there is no response (positive or negative) from the receiver.
• Error control in the data link layer is based on automatic repeat request, which is the
retransmission of data.
2.5 Flow Control:
• An important design issue for the case when the sender is running on a fast, powerful
computer and the receiver is running on a slow, low-end machine.
Two approaches:
I. Feedback-based flow control
II. Rate-based flow control
Feedback-based flow control: the receiver sends back information to the sender giving it
permission to send more data, or telling the sender how the receiver is doing.
Rate-based flow control: a built-in mechanism limits the rate at which the sender may
transmit data, without the need for feedback from the receiver.
2.6 Error Detection & Correction:
• Network designers have developed two basic strategies for dealing with errors.
• Both add redundant information to the data that is sent.
• One strategy is to include enough redundant information to enable the receiver to deduce
what the transmitted data must have been.
• The other is to include only enough redundancy to allow the receiver to deduce that an
error has occurred (but not which error) and have it request a retransmission. The former
strategy uses error-correcting codes and the latter uses error-detecting codes.
• The use of error-correcting codes is often referred to as FEC (Forward Error
Correction).

2.6.1 Error-Correcting Codes

1. Hamming codes.
2. Binary convolution codes.
3. Reed-Solomon codes.
4. Low-Density Parity Check codes.
1. Hamming Code:
To understand how errors can be handled, it is necessary to first look closely at what an
error really is. Given any two code words that may be transmitted or received—say, 10001001
and 10110001—it is possible to determine how many corresponding bits differ. In this case, 3
bits differ. To determine how many bits differ, just XOR the two codewords and count the
number of 1 bits in the result.
For example:
      10001001
XOR   10110001
--------------
      00111000
The number of bit positions in which two code words differ is called the Hamming distance
(Hamming, 1950). Its significance is that if two code words are a Hamming distance d apart, it
will require d single-bit errors to convert one into the other.
Given the algorithm for computing the check bits, it is possible to construct a complete
list of the legal code words, and from this list to find the two code words with the smallest
Hamming distance.
This distance is the Hamming distance of the complete code. In most data transmission
applications, all 2^m possible data messages are legal, but due to the way the check bits are
computed, not all of the 2^n possible code words are used.
In fact, when there are r check bits, only the small fraction 2^m / 2^n = 1/2^r of the
possible messages will be legal code words. It is the sparseness with which the message is
embedded in the space of code words that allows the receiver to detect and correct errors.
As a simple example of an error-correcting code, consider a code with only four valid
code words:
0000000000, 0000011111, 1111100000, and 1111111111
This code has a distance of 5, which means that it can correct double errors or detect quadruple
errors. If the codeword 0000000111 arrives and we expect only single- or double-bit errors, the
receiver will know that the original must have been 0000011111. If, however, a triple error
changes 0000000000 into 0000000111, the error will not be corrected properly. Alternatively, if
we expect all of these errors, we can detect them. None of the received code words are legal code
words so an error must have occurred. It should be apparent that in this example we cannot both
correct double errors and detect quadruple errors because this would require us to interpret a
received codeword in two different ways.
Figure 2.5: Example of a Hamming code correcting a single-bit error.
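The XOR-and-count rule and the minimum-distance decoding used in the example above can be checked with a short Python sketch (the code words are the ones given in the text):

def hamming_distance(a: int, b: int) -> int:
    """Number of bit positions in which two code words differ: popcount of a XOR b."""
    return bin(a ^ b).count('1')

# The two code words from the text differ in 3 positions.
assert hamming_distance(0b10001001, 0b10110001) == 3

# The four valid code words of the distance-5 example code.
code_words = ['0000000000', '0000011111', '1111100000', '1111111111']

def decode(received: str) -> str:
    """Return the legal code word closest in Hamming distance to the received word."""
    return min(code_words, key=lambda c: hamming_distance(int(c, 2), int(received, 2)))

print(decode('0000000111'))   # -> 0000011111, as argued in the text (distance 2)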

2.6.2 Error-Detecting Codes


Error-correcting codes are widely used on wireless links, which are notoriously noisy and error
prone when compared to optical fibers. Without error-correcting codes, it would be hard to get
anything through.
1. Parity.
2. Checksums.
3. Cyclic Redundancy Checks (CRCs).
On highly reliable channels, such as fiber, these error-detecting codes are more efficient than
error-correcting codes, since the occasional error can simply be handled by retransmission.
1. Parity:
• The parity bit is chosen so that the number of 1 bits in the codeword is even (or odd).
Doing this is equivalent to computing the (even) parity bit as the modulo 2 sum or XOR
of the data bits.
• For example, when 1011010 is sent in even parity, a bit is added to the end to make it
10110100. With odd parity 1011010 becomes 10110101.
• A code with a single parity bit has a distance of 2, since any single-bit error produces a
codeword with the wrong parity. This means that it can detect single-bit errors.
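A one-line sketch of the rule above, using the 1011010 example from the text:

def even_parity_bit(data_bits: str) -> str:
    """The (even) parity bit is the modulo 2 sum (XOR) of the data bits."""
    return str(data_bits.count('1') % 2)

data = '1011010'
print(data + even_parity_bit(data))                 # -> 10110100  (even parity)
print(data + str(1 - int(even_parity_bit(data))))   # -> 10110101  (odd parity)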
2. Checksum:
• The second kind of error-detecting code, the checksum, is closely related to groups of
parity bits. The word ‘‘checksum’’ is often used to mean a group of check bits associated
with a message, regardless of how they are calculated.
• A group of parity bits is one example of a checksum. However, there are other, stronger
checksums based on a running sum of the data bits of the message.
• The checksum is usually placed at the end of the message, as the complement of the sum
function.
• This way, errors may be detected by summing the entire received codeword, both data
bits and checksum.
• If the result comes out to be zero, no error has been detected.
• A well-known example is the 16-bit Internet checksum, which is a sum of the message bits divided into 16-bit words.
• Because this method operates on words rather than on bits, as in parity, errors that leave
the parity unchanged can still alter the sum and be detected.
• For example, if the lowest order bit in two different words is flipped from a 0 to a 1, a
parity check across these bits would fail to detect an error.
• However, two 1s will be added to the 16-bit checksum to produce a different result.
• The error can then be detected.
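The description above can be turned into a small sketch of a 16-bit one's complement checksum. This is a simplified illustration of the idea (sum the message as 16-bit words, send the complement, and have the receiver verify that everything sums to "zero"), not the definitive specification of any particular header checksum.

def ones_complement_sum(data: bytes) -> int:
    """Add the message as 16-bit words, folding any carry back in (end-around carry)."""
    if len(data) % 2:
        data += b'\x00'                              # pad odd-length messages with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)     # fold the carry back into the low 16 bits
    return total

def checksum16(data: bytes) -> int:
    """The checksum placed at the end of the message: the complement of the sum."""
    return ~ones_complement_sum(data) & 0xFFFF

msg = b'\x12\x34\x56\x78'
cs = checksum16(msg)
# The receiver sums data plus checksum; 0xFFFF is "zero" in one's complement, i.e. no error detected.
print(hex(ones_complement_sum(msg + cs.to_bytes(2, 'big'))))   # -> 0xffff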
3. CRC (Cyclic Redundancy Check):
 The CRC is also known as a polynomial code. Polynomial codes are based upon treating
bit strings as representations of polynomials with coefficients of 0 and 1 only.
 A k-bit frame is regarded as the coefficient list for a polynomial with k terms, ranging
from x^(k-1) to x^0. Such a polynomial is said to be of degree k − 1.
 The high-order (leftmost) bit is the coefficient of x^(k-1), the next bit is the coefficient of
x^(k-2), and so on.
 For example, 110001 has 6 bits and thus represents a six-term polynomial with
coefficients 1, 1, 0, 0, 0, and 1: 1x^5 + 1x^4 + 0x^3 + 0x^2 + 0x^1 + 1x^0.
 Polynomial arithmetic is done modulo 2, according to the rules of algebraic field theory.
It has no carries for addition or borrows for subtraction. Both addition and
subtraction are identical to exclusive OR. For example, 10011011 + 11001010 = 01010001 (a bitwise XOR).

 When the polynomial code method is employed, the sender and receiver must agree upon
a generator polynomial, G(x), in advance.
 Both the high- and low order bits of the generator must be 1. To compute the CRC for
some frame with m bits corresponding to the polynomial M(x), the frame must be longer
than the generator polynomial.
 The idea is to append a CRC to the end of the frame in such a way that the polynomial
represented by the check summed frame is divisible by G(x). When the receiver gets the
check summed frame, it tries dividing it by G(x). If there is a remainder, there has been a
transmission error.

The algorithm for computing the CRC is as follows:

1. Let r be the degree of G(x). Append r zero bits to the low-order end of the frame so it
now contains m + r bits and corresponds to the polynomial x^r M(x).
2. Divide the bit string corresponding to G(x) into the bit string corresponding to x^r M(x),
using modulo 2 division.
3. Subtract the remainder (which is always r or fewer bits) from the bit string
corresponding to x^r M(x) using modulo 2 subtraction. The result is the check-summed
frame to be transmitted. Call its polynomial T(x).
Figure 2.6: Example calculation of the CRC.
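The three steps above can be sketched directly with bit strings; the frame and generator below are illustrative values only (any generator whose high- and low-order bits are 1 is handled the same way):

def mod2_remainder(dividend: str, generator: str) -> str:
    """Modulo-2 (XOR) long division; returns the remainder of len(generator)-1 bits."""
    r = len(generator) - 1
    bits = list(dividend)
    for i in range(len(bits) - r):
        if bits[i] == '1':                      # only divide where the leading bit is 1
            for j, g in enumerate(generator):
                bits[i + j] = str(int(bits[i + j]) ^ int(g))
    return ''.join(bits[-r:])

def crc_frame(frame: str, generator: str) -> str:
    """Steps 1-3: append r zeros, divide by G(x), and append the remainder to the frame."""
    r = len(generator) - 1
    return frame + mod2_remainder(frame + '0' * r, generator)

t = crc_frame('1101011111', '10011')            # transmitted, check-summed frame T(x)
print(t)                                        # -> 11010111110010
print(mod2_remainder(t, '10011'))               # receiver's check: 0000 means no error detected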
2.7 Elementary DLL protocols

Figure 2.7: categories of protocols

• Protocols are classified into various types depending on different channels. They are:
1) noiseless channel
2) noisy channel
1) noiseless channel:
Protocols for noiseless channel are further classified into two
i) simplest protocol
ii) stop-and-wait protocol
i) simplest protocol:
 In the simplest protocol, the transmission of data is in a single direction only, i.e. a
simplex transmission methodology is adopted, and it is assumed that no errors take
place on the physical channel.
 The sender/receiver can generate/consume infinite amount of data. The DLL on the
sender side takes the packet from the network layer and then adds the header and
trailer to create frame and transmits it to the physical layer.
 The receiver-side DLL removes the header from the frame and passes the packet up to
the network layer. In this protocol, the receiver is assumed never to overflow, i.e. it will
never be overwhelmed.
ii) Stop-and-wait Protocol:
 The most unrealistic assumption of the unrestricted simplex protocol is dropped here: the
receiver is no longer assumed to process incoming data infinitely fast. The communication
channel is still assumed to be error free and the data traffic is still simplex; the major
difficulty is how to keep the sender from flooding the receiver with data faster than the
receiver can process it.
 Basically, if the receiver requires a time ∆t to pass a frame from the physical layer up to
the network layer, the sender must transmit at an average rate of less than one frame per
∆t. Additionally, assuming that no queuing and no automatic buffering are done in the
receiver’s hardware, the sender must not transmit a new frame until the old one has been
fetched, because the new one would overwrite the old one.
 Generally, to solve this problem, the receiver needs to provide feedback to the sender.
After passing a packet to its network layer, the receiver sends a little dummy frame back
to the sender, which in turn permits the sender to transmit the next frame.
 Stop-and-wait is a protocol in which the sender sends one frame and then waits for an
acknowledgement before proceeding further.
 The advantage of stop and wait protocol is its simplicity. Each frame is checked and
acknowledged before the next frame is sent.
 The disadvantage is its inefficiency. Stop and wait is very slow. Each frame must travel
all the way to the receiver and an acknowledgement must travel all the way back before
the next frame can be transmitted.
 In other words, each frame is alone on the line. Each frame sent and received uses the
entire time needed to traverse the link. If the distance between devices is long, the time
spent waiting for ACKs between each frame can add significantly to the total
transmission time.
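The behaviour just described can be sketched as two simple loops: the sender transmits one frame, blocks until the dummy acknowledgement frame comes back, and only then fetches the next packet. The queues below stand in for the (error-free) physical channel and are purely illustrative.

import threading
from queue import Queue

to_receiver = Queue()    # forward channel carrying data frames
to_sender = Queue()      # reverse channel carrying the dummy acknowledgement frames

def sender(packets):
    for packet in packets:
        to_receiver.put(packet)     # send one frame ...
        to_sender.get()             # ... then stop and wait for the acknowledgement

def receiver(count):
    for _ in range(count):
        frame = to_receiver.get()   # accept the frame and pass it up to the network layer
        print('delivered', frame)
        to_sender.put('ACK')        # dummy frame permitting the next transmission

packets = ['p0', 'p1', 'p2']
t = threading.Thread(target=receiver, args=(len(packets),))
t.start()
sender(packets)
t.join()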

Figure 2.10: Design of Stop-and-Wait Protocol


Figure 2.11

2.8 Sliding Window protocols:


 In the previous protocols, data frames were transmitted in one direction only. In most
practical situations, there is a need to transmit data in both directions.
 One way of achieving full-duplex data transmission is to run two instances of one of the
previous protocols, each using a separate link for simplex data traffic (in different
directions).
 Each link is then comprised of a ‘‘forward’’ channel (for data) and a ‘‘reverse’’ channel
(for acknowledgements).
 In both cases the capacity of the reverse channel is almost entirely wasted. A better idea
is to use the same link for data in both directions.
 In this model the data frames from A to B are intermixed with the acknowledgement
frames from A to B. By looking at the kind field in the header of an incoming frame, the
receiver can tell whether the frame is data or an acknowledgement.
 Although interleaving data and control frames on the same link is a big improvement
over having two separate physical links, yet another improvement is possible.
 When a data frame arrives, instead of immediately sending a separate control frame, the
receiver restrains itself and waits until the network layer passes it the next packet.
 The acknowledgement is attached to the outgoing data frame (using the ack field in the
frame header).
 In effect, the acknowledgement gets a free ride on the next outgoing data frame. The
technique of temporarily delaying outgoing acknowledgements so that they can be
hooked onto the next outgoing data frame is known as piggybacking.
 The next three protocols are bidirectional protocols that belong to a class called sliding
window protocols.
 The three differ among themselves in terms of efficiency, complexity, and buffer
requirements, as discussed later. In these, as in all sliding window protocols, each
outbound frame contains a sequence number, ranging from 0 up to some maximum.
 The maximum is usually 2^n − 1 so the sequence number fits exactly in an n-bit field.
The stop-and-wait sliding window protocol uses n = 1, restricting the sequence numbers
to 0 and 1, but more sophisticated versions can use an arbitrary n.
 The essence of all sliding window protocols is that at any instant of time, the
sender maintains a set of sequence numbers corresponding to frames it is permitted to send.
These frames are said to fall within the sending window.
 Similarly, the receiver also maintains a receiving window corresponding to the set of
frames it is permitted to accept. The sender’s window and the receiver’s window need
not have the same lower and upper limits or even have the same size.
 In some protocols they are fixed in size, but in others they can grow or shrink over the
course of time as frames are sent and received.
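Only the bookkeeping is sketched here: with an n-bit sequence number the window is a set of consecutive numbers taken modulo 2^n, and it slides as frames are sent or acknowledged. The window size and starting position below are arbitrary illustrative values.

N_BITS = 3
MAX_SEQ = 2 ** N_BITS - 1                # sequence numbers run from 0 to MAX_SEQ

def window(lower, size):
    """The set of sequence numbers in the window, wrapping around modulo 2^n."""
    return [(lower + i) % (MAX_SEQ + 1) for i in range(size)]

lower, size = 6, 4
print(window(lower, size))               # -> [6, 7, 0, 1]: the window wraps past MAX_SEQ

# When the frame with the lowest number in the window is acknowledged, the window slides up.
lower = (lower + 1) % (MAX_SEQ + 1)
print(window(lower, size))               # -> [7, 0, 1, 2]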

Figure 2.13: A sliding window of size 1, with a 3-bit sequence number. (a) Initially.
(b) After the first frame has been sent. (c) After the first frame has been
Received. (d) After the first acknowledgement has been received.

Noisy channel protocols: noiseless-channel protocols specify only how to control the flow of
data, whereas noisy-channel protocols also specify how to control errors. Noisy-channel
protocols are classified as follows:
i) Stop and wait automated repeat request (ARQ)
ii) Sliding window protocol using Go-back-n ARQ
iii) Sliding window protocol using selective Repeat ARQ

1. Stop and wait automated repeat request (ARQ):


 The corrupted and lost frames need to be resent in this protocol. If the receiver does not
respond when there is an error, how can the sender know which frame to resend? To
remedy this problem, the sender keeps a copy of the sent frame.
 At the same time, it starts a timer. If the timer expires and there is no ACK for the sent
frame, the frame is resent, the copy is held, and the timer is restarted. Since the protocol
uses the stop-and-wait mechanism, there is only one specific frame that needs an ACK
even though several copies of the same frame can be in the network
 Under normal circumstances, one of the two data link layers goes first and transmits the
first frame. In other words, only one of the data link layer programs should contain the
to_physical_layer and start_timer procedure calls outside the main loop.
 The starting machine fetches the first packet from its network layer, builds a frame from
it, and sends it. When this (or any) frame arrives, the receiving data link layer checks to
see if it is a duplicate, just as in protocol 3.
 If the frame is the one expected, it is passed to the network layer and the receiver’s
window is slid up.
 The acknowledgement field contains the number of the last frame received without error.
If this number agrees with the sequence number of the frame the sender is trying to send,
the sender knows it is done with the frame stored in buffer and can fetch the next packet
from its network layer.
 If the sequence number disagrees, it must continue trying to send the same frame.
Whenever a frame is received, a frame is also sent back. Now let us examine protocol 4
to see how resilient it is to pathological scenarios.
 Assume that computer A is trying to send its frame 0 to computer B and that B is trying to
send its frame 0 to A.
 Suppose that A sends a frame to B, but A’s timeout interval is a little too short.
Consequently, A may time out repeatedly, sending a series of identical frames, all with
seq = 0 and ack = 1. When the first valid frame arrives at computer B, it will be accepted
and frame expected will be set to a value of 1.
 All the subsequent frames received will be rejected because B is now expecting frames
with sequence number 1, not 0. Furthermore, since all the duplicates will have ack = 1
and B is still waiting for an acknowledgement of 0, B will not go and fetch a new packet
from its network layer. After every rejected duplicate comes in, B will send A a frame
containing seq = 0 and ack = 0.
 Eventually, one of these will arrive correctly at A, causing A to begin sending the next
packet. No combination of lost frames or premature timeouts can cause the protocol to
deliver duplicate packets to either network layer, to skip a packet, or to deadlock.
 The protocol is correct. However, to show how subtle protocol interactions can be, we
note that a peculiar situation arises if both sides simultaneously send an initial packet.
 This synchronization difficulty is illustrated by Fig. 2-15. In part (a), the normal
operation of the protocol is shown. In (b) the peculiarity is illustrated.
 If B waits for A’s first frame before sending one of its own, the sequence is as shown in
(a), and every frame is accepted.
 However, if A and B simultaneously initiate communication, their first frames cross, and
the data link layers then get into situation (b). In (a) each frame arrival brings a new
packet for the network layer; there are no duplicates. In (b) half of the frames contain
duplicates, even though there are no transmission errors. Similar situations can occur as a
result of premature timeouts, even when one side clearly starts first. In fact, if multiple
premature timeouts occur, frames may be sent three or more times, wasting valuable
bandwidth.

Figure 2-15. Two scenarios for protocol 4. (a) Normal case. (b) Abnormal case. The notation is
(seq, ack, packet number). An asterisk indicates where a network layer accepts a packet.
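The sender-side rule described above (keep a copy of the outstanding frame, resend it when the "timer" expires, and alternate a 1-bit sequence number so the receiver can discard duplicates) can be sketched as follows. The loss pattern and the channel function are invented purely for the simulation; a real implementation would use an actual timer instead of an immediate retry.

loss_schedule = iter([True, False, False, False])   # drop only the very first transmission
expected = 0                                        # sequence number the receiver expects next
delivered = []

def channel(frame):
    """Deliver the frame to the receiver and return its ACK, or None if the frame is 'lost'."""
    global expected
    if next(loss_schedule, False):
        return None                       # frame (or its ACK) lost: the sender gets no response
    seq, packet = frame
    if seq == expected:                   # a new frame: hand the packet to the network layer
        delivered.append(packet)
        expected = 1 - expected           # now expect the other sequence number
    return seq                            # a duplicate is re-acknowledged but not delivered again

def send_with_arq(packets):
    seq = 0
    for packet in packets:
        frame = (seq, packet)             # keep a copy of the unacknowledged frame
        while channel(frame) != seq:
            pass                          # "timer expired": resend the stored copy
        seq = 1 - seq                     # 1-bit sequence number: 0, 1, 0, 1, ...

send_with_arq(['p0', 'p1', 'p2'])
print(delivered)                          # -> ['p0', 'p1', 'p2'] despite the lost frame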

2. A Protocol Using Go-Back-N:


 In go-back-n, the receiver simply discards all frames following an error, sending no
acknowledgements for the discarded frames.
 This strategy corresponds to a receive window of size 1. In other words, the data link layer
refuses to accept any frame except the next one it must give to the network layer.
 If the sender’s window fills up before the timer runs out, the pipeline will begin to empty.
Eventually, the sender will time out and retransmit all unacknowledged frames in order,
starting with the damaged or lost one.
 This approach can waste a lot of bandwidth if the error rate is high. In Fig. 2.16(a) we see
go-back-n, the case in which the receiver’s window size is 1. Frames 0 and 1 are correctly
received and acknowledged. Frame 2, however, is damaged or lost.
 The sender, unaware of this problem, continues to send frames until the timer for frame 2
expires. Then it backs up to frame 2 and starts over with it, sending 2, 3, 4, etc. all over
again.
 The other general strategy for handling errors when frames are pipelined is called selective
repeat. When it is used, a bad frame that is received is discarded, but any good frames
received after it are accepted and buffered.
 When the sender times out, only the oldest unacknowledged frame is retransmitted. If that
frame arrives correctly, the receiver can deliver to the network layer, in sequence, all the
frames it has buffered. Selective repeat corresponds to a receiver window larger than 1.
 This approach can require large amounts of data link layer memory if the window is large.
Selective repeat is often combined with having the receiver send a negative
acknowledgement (NAK) when it detects an error, for example, when it receives a checksum
error or a frame out of sequence.
 NAKs stimulate retransmission before the corresponding timer expires and thus improve
performance.
 In Fig. 2.16(b), frames 0 and 1 are again correctly received and acknowledged and frame 2 is
lost. When frame 3 arrives at the receiver, the data link layer there notices that it has missed a
frame, so it sends back a NAK for 2 but buffers 3. When frames 4 and 5 arrive, they, too, are
buffered by the data link layer instead of being passed to the network layer. Eventually, the
NAK 2 gets back to the sender, which immediately resends frame 2.
 When that arrives, the data link layer now has 2, 3, 4, and 5 and can pass all of them to the
network layer in the correct order.
 It can also acknowledge all frames up to and including 5, as shown in the figure. If the NAK
should get lost, eventually the sender will time out for frame 2 and send it (and only it) of its
own accord, but that may be quite a while later.
Figure 2.16: Pipelining and error recovery. Effect of an error when (a) receiver’s window size is
1 and (b) receiver’s window size is large.
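A compact simulation of the go-back-n behaviour described above: the receiver's window is 1, so anything after a damaged or lost frame is discarded, and the sender eventually backs up to the first unacknowledged frame and resends everything from there. The window size and the frame chosen to be lost are illustrative.

WINDOW = 4
frames = list(range(8))                        # frames 0..7 to be sent

def go_back_n(lost_on_first_try=(2,)):
    """Receiver window of size 1: out-of-order frames are discarded, not buffered."""
    lost = set(lost_on_first_try)
    delivered, expected, base = [], 0, 0
    while base < len(frames):
        # Send every frame currently inside the sender's window.
        for seq in range(base, min(base + WINDOW, len(frames))):
            if seq in lost:
                lost.discard(seq)              # lost this time; the retransmission will succeed
                continue
            if seq == expected:                # only the next in-sequence frame is accepted
                delivered.append(frames[seq])
                expected += 1
            # any frame after the gap is simply discarded by the receiver
        base = expected                        # cumulative ACK: go back to the first unacked frame
    return delivered

print(go_back_n())                             # -> [0, 1, 2, 3, 4, 5, 6, 7]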

3. A Protocol Using Selective Repeat

 In this protocol, both sender and receiver maintain a window of outstanding and
acceptable sequence numbers, respectively.
 The sender’s window size starts out at 0 and grows to some predefined maximum. The
receiver’s window, in contrast, is always fixed in size and equal to the predetermined
maximum.
 The receiver has a buffer reserved for each sequence number within its fixed window.
Associated with each buffer is a bit (arrived) telling whether the buffer is full or empty.
Whenever a frame arrives, its sequence number is checked by the function between to see
if it falls within the window.

Figure 2-17. (a) Initial situation with a window of size 7. (b) After 7 frames have been sent and
received but not acknowledged. (c) Initial situation with a window size of 4. (d) After 4 frames
have been sent and received but not acknowledged.
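The window-membership test mentioned above (the between check) has to work in a circular sequence space. A small sketch, where the window runs from lower up to but not including upper; the names and values are illustrative:

def between(a, b, c):
    """True if a <= b < c circularly, i.e. b lies in the window starting at a and ending before c."""
    return (a <= b < c) or (c < a <= b) or (b < c < a)

MAX_SEQ = 7
lower = 6                                    # window of size 4 starting at 6: {6, 7, 0, 1}
upper = (lower + 4) % (MAX_SEQ + 1)          # one past the last acceptable sequence number
print([s for s in range(MAX_SEQ + 1) if between(lower, s, upper)])   # -> [0, 1, 6, 7]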
 Non sequential receive introduces further constraints on frame sequence numbers
compared to protocols in which frames are only accepted in order.
 We can illustrate the trouble most easily with an example. Suppose that we have a 3-bit
sequence number, so that the sender is permitted to transmit up to seven frames before
being required to wait for an acknowledgement.
 Initially, the sender’s and receiver’s windows are as shown in Fig. 2-17(a). The sender
now transmits frames 0 through 6.
 The receiver’s window allows it to accept any frame with a sequence number between 0
and 6 inclusive.
 All seven frames arrive correctly, so the receiver acknowledges them and advances its
window to allow receipt of 7, 0, 1, 2, 3, 4, or 5, as shown in Fig. 2-17(b). All seven
buffers are marked empty.
 It is at this point that disaster strikes in the form of a lightning bolt hitting the telephone
pole and wiping out all the acknowledgements.
 The protocol should operate correctly despite this disaster.
 The sender eventually times out and retransmits frame 0.
 When this frame arrives at the receiver, a check is made to see if it falls within the
receiver’s window.
 Unfortunately, in Fig. 2-17(b) frame 0 is within the new window, so it is accepted as a
new frame.
 The receiver also sends a (piggybacked) acknowledgement for frame 6, since 0 through 6
have been received.
 The sender is happy to learn that all its transmitted frames did actually arrive correctly, so
it advances its window and immediately sends frames 7, 0, 1, 2, 3, 4, and 5.
 Frame 7 will be accepted by the receiver and its packet will be passed directly to the
network layer. Immediately thereafter, the receiving data link layer checks to see if it has
a valid frame 0 already, discovers that it does, and passes the old buffered packet to the
network layer as if it were a new packet.
 Consequently, the network layer gets an incorrect packet, and the protocol fails.
 The essence of the problem is that after the receiver advanced its window, the new range
of valid sequence numbers overlapped the old one.
 Consequently, the following batch of frames might be either duplicates (if all the
acknowledgements were lost) or new ones (if all the acknowledgements were received).
The poor receiver has no way of distinguishing these two cases.
 The way out of this dilemma lies in making sure that after the receiver has advanced its
window there is no overlap with the original window.
 To ensure that there is no overlap, the maximum window size should be at most half the
range of the sequence numbers.
 This situation is shown in Fig. 2-17(c) and Fig. 2-17(d). With 3 bits, the sequence
numbers range from 0 to 7.
 Only four unacknowledged frames should be outstanding at any instant. That way, if the
receiver has just accepted frames 0 through 3 and advanced its window to permit
acceptance of frames 4 through 7, it can unambiguously tell if subsequent frames are
retransmissions (0 through 3) or new ones (4 through 7). In general, the window size for
protocol 6 will be (MAX SEQ + 1)/2.

2.9 Data link layer in HDLC:

Configurations and Transfer Modes


HDLC provides two common transfer modes that can be used in different configurations:
normal response mode (NRM) and asynchronous balanced mode (ABM).

Normal Response Mode

In normal response mode (NRM), the station configuration is unbalanced. We
have one primary station and multiple secondary stations. A primary station can send
commands; a secondary station can only respond. NRM is used for both point-to-point
and multipoint links, as shown in Figure.

Figure 2.19: Normal response mode

Asynchronous Balanced Mode

In asynchronous balanced mode (ABM), the configuration is balanced. The link is point-to-point,
and each station can function as a primary and a secondary (acting as peers), as shown in Figure .
This is the common mode today.
Figure 2.20: Asynchronous Balanced Mode

Frames

 To provide the flexibility necessary to support all the options possible in the modes and
configurations just described, HDLC defines three types of frames: information frames
(I-frames), supervisory frames (S-frames), and unnumbered frames (U-frames).
 Each type of frame serves as an envelope for the transmission of a different type of
message. I-frames are used to transport user data and control information relating to user
data (piggybacking). S-frames are used only to transport control information. U-frames
are reserved for system management. Information carried by U-frames is intended for
managing the link itself.

Frame Format
 Each frame in HDLC may contain up to six fields, as shown in Figure: a beginning flag
field, an address field, a control field, an information field, a frame check sequence (FCS)
field, and an ending flag field.
 In multiple-frame transmissions, the ending flag of one frame can serve as the beginning
flag of the next frame.

Figure 2.21: HDLC FRAMES


Fields
Let us now discuss the fields and their use in different frame types.
 Flag field. The flag field of an HDLC frame is an 8-bit sequence with the bit pattern
01111110 that identifies both the beginning and the end of a frame and serves as a
Synchronization pattern for the receiver.
 Address field. The second field of an HDLC frame contains the address of the Secondary
station. If a primary station created the frame, it contains a to address. If a secondary creates
the frame, it contains a from address. An address field can be 1 byte or several bytes long,
depending on the needs of the network. One byte can identify up to 128 stations (1 bit is used
for another purpose). Larger networks require multiple-byte address fields. If the address field
is only 1 byte, the last bit is always a 1. If the address is more than 1 byte, all bytes but the last
one will end with 0; only the last will end with 1. Ending each intermediate byte with 0
indicates to the receiver that there are more address bytes to come.
 Control field. The control field is a 1- or 2-byte segment of the frame used for flow and
error control. The interpretation of bits in this field depends on the frame type. We
discuss this field later and describe its format for each frame type.
 Information field. The information field contains the user's data from the network layer
or management information. Its length can vary from one network to another.
 FCS field. The frame check sequence (FCS) is the HDLC error detection field. It can
contain either a 2- or 4-byte ITU-T CRC.
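The extension rule for the address field described above can be sketched as follows. The sketch assumes that "ends with" means the low-order bit of each address byte (an interpretation chosen for illustration), and it is given only the bytes that follow the opening flag.

def read_hdlc_address(after_flag: bytes):
    """Collect address bytes until one whose low-order bit is 1 (the final address byte)."""
    address = bytearray()
    for i, byte in enumerate(after_flag):
        address.append(byte)
        if byte & 0x01:                       # this byte "ends with 1": last address byte
            return bytes(address), after_flag[i + 1:]   # control, information, FCS follow
    raise ValueError('no terminating address byte found')

# Two-byte address: the first byte ends with 0 (more to come), the second ends with 1 (the last).
addr, rest = read_hdlc_address(bytes([0b10101010, 0b01010111, 0x00, 0x00]))
print(addr.hex(), rest.hex())                 # -> aa57 0000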

2.10 POINT-TO-POINT PROTOCOL:


Although HDLC is a general protocol that can be used for both point-to-point and
Multipoint configurations, one of the most common protocols for point-to-point access is the
Point-to-Point Protocol (PPP). Today, millions of Internet users who need to connect their home
computers to the server of an Internet service provider use PPP. The majority of these users have
a traditional modem; they are connected to the Internet through a telephone line, which provides
the services of the physical layer. But to control and manage the transfer of data, there is a need
for a point-to-point protocol at the data link layer. PPP is by far the most common.
PPP provides several services:
1. PPP defines the format of the frame to be exchanged between devices.
2. PPP defines how two devices can negotiate the establishment of the link and the exchange of
data.
3. PPP defines how network layer data are encapsulated in the data link frame.
4. PPP defines how two devices can authenticate each other.
5. PPP provides multiple network layer services supporting a variety of network layer protocols.
6. PPP provides connections over multiple links.
7. PPP provides network address configuration. This is particularly useful when a home user
needs a temporary network address to connect to the Internet.
On the other hand, to keep PPP simple, several services are missing:
1. PPP does not provide flow control. A sender can send several frames one after another with no
concern about overwhelming the receiver.
2. PPP has a very simple mechanism for error control. A CRC field is used to detect errors. If the
frame is corrupted, it is silently discarded; the upper-layer protocol needs to take care of the
problem. Lack of error control and sequence numbering may cause a packet to be received out of
order.
3. PPP does not provide a sophisticated addressing mechanism to handle frames in a multipoint
configuration.

Framing
PPP is a byte-oriented protocol. Framing is done according to the discussion of byte oriented
Protocols at the beginning of this chapter.
Frame Format
Figure shows the format of a PPP frame. The description of each field follows:

Figure 2.22: PPP Frame format

 Flag. A PPP frame starts and ends with a 1-byte flag with the bit pattern 01111110. Although
this pattern is the same as that used in HDLC, there is a big difference. PPP is a byte-oriented
protocol; HDLC is a bit-oriented protocol. The flag is treated as a byte, as we will explain
later.
 Address. The address field in this protocol is a constant value and set to 11111111 (broadcast
address). During negotiation (discussed later), the two parties may agree to omit this byte.
 Control. This field is set to the constant value 11000000 (imitating unnumbered frames in
HDLC). As we will discuss later, PPP does not provide any flow control. Error control is also
limited to error detection. This means that this field is not needed at all, and again, the two
parties can agree, during negotiation, to omit this byte.
 Protocol. The protocol field defines what is being carried in the data field: either user data or
other information. We discuss this field in detail shortly. This field is by default 2 bytes long,
but the two parties can agree to use only 1 byte.
 Payload field. This field carries either the user data or other information that we will discuss
shortly. The data field is a sequence of bytes with the default of a maximum of 1500 bytes; but
this can be changed during negotiation. The data field is byte stuffed if the flag byte pattern
appears in this field. Because there is no field defining the size of the data field, padding is
needed if the size is less than the maximum default value or the maximum negotiated value.
 FCS. The frame check sequence (FCS) is simply a 2-byte or 4-byte standard CRC.
Byte Stuffing
The similarity between PPP and HDLC ends at the frame format. PPP, as we discussed before, is
a byte-oriented protocol totally different from HDLC. As a byte-oriented protocol, the flag in
PPP is a byte and needs to be escaped whenever it appears in the data section of the frame. The
escape byte is 01111101, which means that every time a flag-like pattern appears in the data,
this extra byte is stuffed to tell the receiver that the next byte is not a flag.
Transition Phases
A PPP connection goes through phases which can be shown in a transition phase
Diagram

Figure 2.23: Transition Phases


 Dead. In the dead phase the link is not being used. There is no active carrier (at the
physical layer) and the line is quiet.
 Establish. When one of the nodes starts the communication, the connection goes into
this phase. In this phase, options are negotiated between the two parties. If the negotiation
is successful, the system goes to the authentication phase (if authentication is required) or
directly to the networking phase. The link control protocol packets, discussed shortly, are used
for this purpose. Several packets may be exchanged here.
 Authenticate. The authentication phase is optional; the two nodes may decide, during the
establishment phase, not to skip this phase. However, if they decide to proceed with
authentication, they send several authentication packets, discussed later. If the result is
successful, the connection goes to the networking phase; otherwise, it goes to the termination
phase.
 Network. In the network phase, negotiation for the network layer protocols takes place.
PPP specifies that two nodes establish a network layer agreement before data at the
network layer can be exchanged. The reason is that PPP supports multiple protocols at
the network layer. If a node is running multiple protocols simultaneously at the network
layer, the receiving node needs to know which protocol will receive the data.
 Open. In the open phase, data transfer takes place. When a connection reaches this phase,
the exchange of data packets can be started. The connection remains in this phase until
one of the endpoints wants to terminate the connection.
 Terminate. In the termination phase the connection is terminated. Several packets are
exchanged between the two ends for house cleaning and closing the link.
Assignment-Cum-Tutorial Questions
A. Questions testing the remembering / understanding level of students
I) Multiple Choice Questions

1. Fragmentation means................... [ ]
A. adding of small packets to form large packet
B. breaking large packet into small packets
C. combining large packets in to a single packet
D. forwarding a packet through different networks
2.An error-detecting code inserted as a field in a block of data to be transmitted is known as
A. Frame check sequence B. Error detecting code [ ]
C. Checksum D. flow control
3. Error detecting code is [ ]
a) an error-detecting code based on a summation operation performed on the bits to be checked
b) a check bit appended to an array of binary digits to make the sum of all the binary digits even (or odd)
c) a code in which each expression conforms to specific rules of construction, so that if certain
errors occur in an expression, the resulting expression will not conform to the rules of
construction and thus the presence of the error is detected
d) the ratio of the data units in error to the total number of data units

4. The data link layer takes the packets from _____ and encapsulates them into frames for
transmission. [ ]
A. network layer B. physical layer C. transport layer D. application layer

5. Which one of the following task is not done by data link layer? [ ]
A. framing B. error control C. flow control D. channel coding

6. CRC stands for


A. cyclic redundancy check B. code repeat check
C. code redundancy check D. cyclic repeat check [ ]

7. The technique of temporarily delaying outgoing acknowledgements so that they can
be hooked onto the next outgoing data frame is called [ ]
A. piggybacking B. cyclic redundancy check
C. fletcher’s checksum D. none of the mentioned

8. Error detection at the data link layer is achieved by? [ ]


A. Bit stuffing B .Cyclic redundancy codes
C. Hamming code D. Equalization
9.Automatic repeat request error management mechanism is provided by [ ]
A. logical link control sub layer
B. media access control sub layer
C. network interface control sub layer
D. none of the mentioned
10. When 2 or more bits in a data unit has been changed during the transmission, the error is
called [ ]
A. random error
B. burst error
C. inverted error
D. none of the mentioned
11. What does the abbreviation FCS stand for in the HDLC frame format? -----------------.

II) Descriptive Questions


1. What are the design issues of Data link layer?
2. Describe the services provided to the network layer?
3. Explain framing & discuss framing methods.
4. Discuss in detail error control and flow control.
5. What are Error correcting codes & Explain Hamming code with an example?
6. What are error detecting codes explain CRC & checksum with examples?
7. Explain Elementary data link protocols?
8. Define Sliding window protocol and briefly explain categories of protocols.
9. Explain Data link layer in HDLC?
10. Discuss briefly PPP (Point to Point Protocol).

B. Question testing the ability of students in applying the concepts.

Problems
1. Find out the CRC for message m[x] = 11101101, generator g[x] = 1101.
2. Find the checksum at the sender’s side for the data 7, 11, 12, 0, 6, which is to be
transmitted to the receiver side using the 1’s complement method.
3. Do character stuffing to the following (using DLE)?
A DLE DLE DLE DLE B
4. Do bit stuffing for 111100000001111111111110001111000111111110011?
5. Using polynomial method, calculate the redundant bits.
M(x)=111110000010 and G(x)=11011
6. Perform the bit stuffing for 011011111111101001.
7. Consider the message 1001111 transmitted through the channel. Obtain the redundancy
bits and the transmitted unit needed using the Hamming code. Assume bit 8 has been
changed; how can it be located?
C. Questions testing the analyzing/evaluating/creative ability of students
1. Imagine that you are writing the data link software for a line used to send data to you, but not
from you. The other end uses HDLC, with a 3-bit sequence number and a window size of
seven frames. You would like to buffer as many out-of-sequence frames as possible to enhance
efficiency, but you are not allowed to modify the software on the sending side. Is it possible to
have a receiver window greater than 1, and still guarantee that the protocol will never fail? If
so, what is the largest window that can be safely used?
