
Master of Computer Application (MCA) Semester 6 MC0087 Internetworking with TCP/IP 4 Credits (Book ID: B1008)

1. What is fragmentation? Explain its significance

Ans: When a data packet travels from one host to another, it can pass through different physical networks. Each physical network has a maximum frame size, called the maximum transmission unit (MTU), which limits the length of a datagram that can be placed in one physical frame. IP implements a process to fragment datagrams exceeding the MTU: it creates a set of datagrams within the maximum size, and the receiving host reassembles the original datagram.

IP requires that each link support a minimum MTU of 68 octets. This is the sum of the maximum IP header length (60 octets) and the minimum possible length of data in a non-final fragment (8 octets). If any network provides a lower value than this, fragmentation and reassembly must be implemented in the network interface layer, transparently to IP. IP implementations are not required to handle unfragmented datagrams larger than 576 bytes, but in practice most implementations accommodate larger values.

An unfragmented datagram has an all-zero fragmentation information field; that is, the more fragments (MF) flag bit is zero and the fragment offset is zero. The following steps fragment a datagram:

1. The DF (don't fragment) flag bit is checked to see if fragmentation is allowed. If the bit is set, the datagram is discarded and an ICMP error is returned to the originator.

2. Based on the MTU value, the data field is split into two or more parts. All newly created data portions must have a length that is a multiple of 8 octets, with the exception of the last data portion.

3. Each data portion is placed in an IP datagram. The headers of these datagrams are minor modifications of the original: the more fragments flag bit is set in all fragments except the last, and the fragment offset field in each is set to the location this data portion occupied relative to the beginning of the original unfragmented datagram. The offset is measured in 8-octet units.
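The splitting arithmetic in steps 2 and 3 can be sketched in Python. This is a toy model for illustration only, not a real IP implementation; the function name and dictionary fields are invented for this sketch:

```python
def fragment(payload: bytes, header_len: int, mtu: int):
    """Split a datagram payload into fragments that fit the MTU.

    Every fragment's data length except the last is kept a multiple
    of 8 octets, and offsets are stored in 8-octet units.
    """
    max_data = mtu - header_len          # room left for data in each frame
    max_data -= max_data % 8             # keep non-final fragments 8-octet aligned
    if max_data <= 0:
        raise ValueError("MTU too small to carry any fragment data")
    fragments = []
    offset = 0
    while offset < len(payload):
        chunk = payload[offset:offset + max_data]
        more = (offset + len(chunk)) < len(payload)   # MF flag
        fragments.append({"offset_units": offset // 8,
                          "more_fragments": more,
                          "data": chunk})
        offset += len(chunk)
    return fragments

frags = fragment(b"x" * 4000, header_len=20, mtu=1500)
# With a 20-byte header and MTU 1500, each non-final fragment carries
# 1480 data octets (a multiple of 8), so 4000 bytes become 3 fragments.
```

Note how the receiver could rebuild the payload purely from the offsets, which is why fragments may arrive out of order or via different routes.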
If options were included in the original datagram, the high-order bit of the option type byte determines whether this information is copied to all fragment datagrams or only to the first. For example, source route options are copied in all fragments. The header length field of each new datagram is set, the total length field is set, and the header checksum field is recalculated.

4. Each of these fragmented datagrams is now forwarded as a normal IP datagram. IP handles each fragment independently; the fragments can traverse different routers to the intended destination, and they can be subject to further fragmentation if they pass through networks specifying a smaller MTU.

At the destination host, the data is reassembled into the original datagram. The identification field set by the sending host is used together with the source and destination IP addresses in the datagram; fragmentation does not alter this field. In order to reassemble the fragments, the receiving host allocates a storage buffer when the first fragment arrives and also starts a timer. When subsequent fragments of the datagram arrive, the data is copied into the buffer at the location indicated by the fragment offset field. When all fragments have arrived, the complete original unfragmented datagram is restored and processing continues as for unfragmented datagrams. If the timer expires while fragments remain outstanding, the datagram is discarded. The initial value of this timer is called the IP datagram time to live (TTL) value; it is implementation-dependent, and some implementations allow it to be configured. The netstat command can be used on some IP hosts to list the details of fragmentation.

2. Briefly discuss the functions of transport layer

Ans: The transport layer accepts data from the session layer, breaks it into packets, and delivers these packets to the network layer.
It is the responsibility of the transport layer to guarantee successful arrival of data at the destination device. It provides an end-to-end dialog; that is, the transport layer at the source device communicates directly with the transport layer at the destination device, using message headers and control messages for this purpose. It separates the upper layers from the low-level details of data transmission and ensures efficient delivery. The OSI model provides a connection-oriented service at the transport layer. The transport layer is responsible for determining the type of service that is to be provided to the upper layer. Normally it delivers packets in the same order in which they were sent; however, it can also facilitate the transmission of isolated messages. In broadcast networks there is no guarantee that such isolated messages are delivered to the destination devices, or that they arrive in the same order as they were sent from the source. If the network layer does not provide adequate service for data transmission, the transport layer compensates: data loss due to poor network management is handled by the transport layer, which checks for any packets that are lost or damaged along the way.

3. What is CIDR? Explain.

Ans: CIDR (Classless Inter-Domain Routing, sometimes known as supernetting) is a way to allocate and specify the Internet addresses used in inter-domain routing more flexibly than with the original system of Internet Protocol (IP) address classes. As a result, the number of available Internet addresses has been greatly increased. CIDR is now the routing system used by virtually all gateway hosts on the Internet's backbone network, and the Internet's regulating authorities now expect every Internet service provider (ISP) to use it for routing.

The original Internet Protocol defines IP addresses in four major classes of address structure, Classes A through D. Each of these classes allocates one portion of the 32-bit Internet address format to a network address and the remaining portion to the specific host machines within the network specified by the address. One of the most commonly used classes is (or was) Class B, which allocates space for up to 65,534 host addresses. A company that needed more than 254 host machines but far fewer than 65,534 would essentially be "wasting" most of the block of addresses allocated. For this reason, the Internet was, until the arrival of CIDR, running out of address space much more quickly than necessary. CIDR effectively solved the problem by providing a new and more flexible way to specify network addresses in routers. (With a new version of the Internet Protocol, IPv6, a 128-bit address is possible, greatly expanding the number of possible addresses on the Internet; however, it will be some time before IPv6 is in widespread use.)
Using CIDR, each IP address has a network prefix that identifies either an aggregation of network gateways or an individual gateway. The length of the network prefix is also specified as part of the IP address and varies depending on the number of bits that are needed (rather than any arbitrary class assignment structure). A destination IP address or route that describes many possible destinations has a shorter prefix and is said to be less specific; a longer prefix describes a destination gateway more specifically. Routers are required to use the most specific, or longest, network prefix in the routing table when forwarding packets.

A CIDR network address looks like this: 192.30.250.0/18. The "192.30.250.0" is the network address itself, and the "/18" says that the first 18 bits are the network part of the address, leaving the last 14 bits for specific host addresses. CIDR lets one routing table entry represent an aggregation of networks that exist in the forward path and that don't need to be specified on that particular gateway, much as the public telephone system uses area codes to channel calls toward a certain part of the network. This aggregation of networks in a single address is sometimes referred to as a supernet.

CIDR is supported by the Border Gateway Protocol, the prevailing exterior (interdomain) gateway protocol. (The older exterior or interdomain gateway protocols, Exterior Gateway Protocol and Routing Information Protocol, do not support CIDR.) CIDR is also supported by the OSPF interior or intradomain gateway protocol.

Example: subnet the block 211.17.180.0/24 into 32 subnets.

a) The given /24 block has 256 addresses (0-255). Divide 256 by 32 to determine that each subnet will have 8 addresses. Using binary, determine the netmask that will achieve a total of 8 addresses in each network. Since 0 is the first value in a range of 8 addresses, we want zero through 7, or (000-111).
This confirms that only 3 bits are required to represent the addresses within each of the 32 subnets. (xxxxxyyy) illustrates the binary layout of the last octet: the "xxxxx" represents the additional 5 bits used to define the network portion of the IP address, and the "yyy" portion represents the host portion. This means that the resulting netmask is /24 + 5 bits = /29, or 255.255.255.248:

11111111.11111111.11111111.11111000 = 255.255.255.248

b) The subnet mask above leaves 3 bits to define the host portion of the IP address.
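As a sanity check (not part of the original worked answer), parts (a) and (b) can be verified with Python's standard ipaddress module before continuing with the binary enumeration:

```python
import ipaddress

# Split 211.17.180.0/24 into /29 subnets, as in the worked example.
block = ipaddress.ip_network("211.17.180.0/24")
subnets = list(block.subnets(new_prefix=29))

print(len(subnets))                      # 32 subnets
print(subnets[0].netmask)                # 255.255.255.248
print(subnets[0][0], subnets[0][-1])     # 211.17.180.0 211.17.180.7
print(subnets[-1][0], subnets[-1][-1])   # 211.17.180.248 211.17.180.255
```

The first and last subnets printed here match the hand-derived ranges in part (c) below exactly.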

binary 111 = decimal 7 (considering the range 0-7, this gives 8 addresses per subnet)

c) xxxxxyyy represents the bits used to define the network and host portions of the last octet:

00000yyy = first subnet in range
00001yyy = second subnet in range
00010yyy = third subnet in range
00011yyy = fourth subnet in range
...
11100yyy = 29th subnet in range
11101yyy = 30th subnet in range
11110yyy = 31st subnet in range
11111yyy = 32nd subnet in range

The first and last addresses in subnet 1:
00000000 = first address in subnet 1 (decimal 0): 211.17.180.0/29
00000111 = last address in subnet 1 (decimal 7): 211.17.180.7/29

The first and last addresses in subnet 32:
11111000 = first address in subnet 32 (decimal 248): 211.17.180.248/29
11111111 = last address in subnet 32 (decimal 255): 211.17.180.255/29

Q.4. What is congestion? Mention a few algorithms to overcome congestion

Ans: Congestion occurs when the load offered to a network exceeds the capacity of its links and routers, so that queues build up and packets are delayed or dropped. TCP is the popular transport protocol for best-effort traffic in the Internet. However, TCP is not well-suited for many applications, such as streaming multimedia, because TCP congestion control algorithms introduce large variations in the congestion window size (and corresponding large variations in the sending rate). Such variability in the sending rate is not acceptable to many multimedia applications. Hence, many multimedia applications are built over UDP and use no congestion control at all. The absence of congestion control in applications built over UDP may lead to congestion collapse on the Internet. In addition, the UDP flows may starve any competing TCP flows. To overcome these adverse effects, congestion control needs to be incorporated into all applications using the Internet, whether at the transport layer or provided by the application itself. Furthermore, the congestion control algorithms must be TCP-friendly, i.e. the TCP-friendly flows should not gain more throughput than competing TCP flows in the long run.
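For reference, the additive-increase/multiplicative-decrease (AIMD) rule that defines standard TCP congestion avoidance, and that TCP-friendly algorithms generalize, can be sketched as follows (a simplified per-RTT model; the function and parameter names are illustrative):

```python
def aimd_update(cwnd: float, loss: bool, alpha: float = 1.0, beta: float = 0.5) -> float:
    # One round-trip of generic AIMD congestion avoidance.
    # alpha = additive increase (segments per RTT),
    # beta  = multiplicative decrease factor applied on loss.
    # alpha=1, beta=0.5 corresponds to standard TCP behaviour.
    if loss:
        return max(1.0, cwnd * beta)   # multiplicative decrease, floor of 1 segment
    return cwnd + alpha                # additive increase

cwnd = 10.0
cwnd = aimd_update(cwnd, loss=False)   # 11.0: window grows by alpha per RTT
cwnd = aimd_update(cwnd, loss=True)    # 5.5: window is halved on a loss
```

The sawtooth produced by this rule is the source of the rate variability the answer describes; TCP-friendly schemes choose gentler alpha/beta pairs (or equation-based rates) to smooth it while matching TCP's long-run throughput.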
Thus, in recent years, many researchers have focused on developing TCP-friendly transport protocols which are suitable for many applications that currently use UDP. In this direction, the IETF is currently working on developing a new protocol, called the Datagram Congestion Control Protocol (DCCP), that provides an unreliable datagram service with congestion control. DCCP is designed to use any suitable TCP-friendly congestion control algorithm. With a multitude of TCP-friendly congestion control algorithms available, some important questions that need to be answered are: What are the strengths and weaknesses of the various TCP-friendly algorithms? Is there a single algorithm which is uniformly superior to the others? The first step in answering these questions is to study the short-term and long-term behavior of these algorithms.

Although the goal of all TCP-friendly algorithms is to emulate the behavior of TCP in the long term, these algorithms may have an adverse impact in the short term on competing TCP flows. Since TCP-friendly algorithms are designed for smoother sending rates than TCP, these algorithms may react slowly to new connections that share a common bottleneck link. Such a slower response may have a deleterious effect on TCP flows. For example, a TCP connection suffering losses in its slow start phase may enter the congestion avoidance phase with a small window, and consequently obtain less throughput than other competing flows. Hence, it is clear that a detailed study is required of the short-term (transient) behavior of TCP-friendly flows in addition to their long-term behavior.

In this paper, we study the transient behavior of three TCP-friendly congestion control algorithms: general AIMD congestion control, TFRC, and the binomial congestion control algorithm. Prior work has studied the transient behavior of these algorithms when RED queues are used at the bottleneck link. However, as droptail queues are still widely used in practice, in this paper we study the transient behavior of these algorithms with droptail queues. Past work has also identified certain unfairness of AIMD and binomial congestion control algorithms to TCP with droptail queues, but has not identified the reasons for this unfairness. In this paper, we analyze the reasons for this unfairness, and validate the analysis by simulations.

The rest of the paper is organized as follows. In Section II, we briefly overview the various TCP-friendly congestion control algorithms proposed in the literature. In Section III, we define the transient behaviors studied in this paper, and analyze the expected transient behaviors of the various TCP-friendly congestion control algorithms.
Section IV analyzes in detail the reasons for unfairness of AIMD and binomial congestion control algorithms with droptail queues. We present our simulation results in Section V, and we conclude in Section VI.

A few algorithms to overcome congestion:
A. Equation-based congestion control (e.g. TFRC)
B. General AIMD-based congestion control
C. Binomial congestion control

Q.5. Explain the following with respect to Transport Protocols:
a. User Datagram Protocol (UDP)
b. Transmission Control Protocol (TCP)

Ans: User Datagram Protocol (UDP): The User Datagram Protocol (UDP) is a transport layer protocol defined for use with the IP network layer protocol. It is defined by RFC 768, written by Jon Postel. It provides a best-effort datagram service to an End System (IP host). The service provided by UDP is an unreliable service that provides no guarantees for delivery and no protection from duplication (e.g. if this arises due to software errors within an Intermediate System (IS)). The simplicity of UDP reduces the overhead of using the protocol, and the service may be adequate in many cases.

UDP provides a minimal, unreliable, best-effort, message-passing transport to applications and upper-layer protocols. Compared to other transport protocols, UDP and its UDP-Lite variant are unique in that they do not establish end-to-end connections between communicating end systems. UDP communication consequently does not incur connection establishment and teardown overheads, and there is minimal associated end system state. Because of these characteristics, UDP can offer a very efficient communication transport to some applications, but has no inherent congestion control or reliability.
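UDP's connectionless, message-oriented service can be seen directly in the sockets API. The following minimal loopback sketch (addresses, port choice, and payload are arbitrary, for illustration only) sends one datagram with no connection setup at all:

```python
import socket

# Receiver: bind to a local port; no connection is accepted or established.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))            # port 0: let the OS pick a free port
addr = recv.getsockname()

# Sender: no connect() handshake is needed before sending.
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"hello", addr)

data, peer = recv.recvfrom(2048)       # one datagram = one message boundary
print(data)                            # b'hello'
send.close(); recv.close()
```

Contrast this with TCP below, where a connection must be established before any data can flow.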
A second unique characteristic of UDP is that it provides no inherent congestion control: on many platforms, applications can send UDP datagrams at the line rate of the link interface, which is often much greater than the available path capacity, and doing so would contribute to congestion along the path. Applications therefore need to be designed responsibly.

One increasingly popular use of UDP is as a tunneling protocol, where a tunnel endpoint encapsulates the packets of another protocol inside UDP datagrams and transmits them to another tunnel endpoint, which decapsulates the UDP datagrams and forwards the original packets contained in the payload. Tunnels establish virtual links that appear to directly connect locations that are distant in the physical Internet topology, and can be used to create virtual (private) networks. Using UDP as a tunneling protocol is attractive when the payload protocol is not supported by middleboxes that may exist along the path, because many middleboxes do support UDP transmissions.

UDP does not provide any communications security. Applications that need to protect their communications against eavesdropping, tampering, or message forgery therefore need to provide security services separately, using additional protocol mechanisms.

Protocol Header

A computer may send UDP packets without first establishing a connection to the recipient. A UDP datagram is carried in a single IP packet and is hence limited to a maximum payload of 65,507 bytes for IPv4 and 65,527 bytes for IPv6. The transmission of large IP packets usually requires IP fragmentation. Fragmentation decreases communication reliability and efficiency and should therefore be avoided. To transmit a UDP datagram, a computer completes the appropriate fields in the UDP header (PCI) and forwards the data together with the header for transmission by the IP network layer.

The UDP protocol header consists of 8 bytes of Protocol Control Information (PCI), made up of four fields, each 2 bytes in length:

Source Port: UDP packets from a client use this as a service access point (SAP) to indicate the session on the local client that originated the packet. UDP packets from a server carry the server SAP in this field.

Destination Port: UDP packets from a client use this as a service access point (SAP) to indicate the service required from the remote server. UDP packets from a server carry the client SAP in this field.

UDP Length: The number of bytes comprising the combined UDP header information and payload data.

UDP Checksum: A checksum to verify that the end-to-end data has not been corrupted by routers or bridges in the network, or by the processing in an end system. The algorithm used is the standard Internet checksum. Because the checksum covers the IP addresses, port numbers, and protocol number, it allows the receiver to verify that it was the intended destination of the packet; and because it covers the length field, it verifies that the packet was not truncated or padded. It therefore protects an application against receiving corrupted payload data in place of, or in addition to, the data that was sent. In cases where this check is not required, the value 0x0000 is placed in this field, in which case the data is not checked by the receiver.
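The 8-byte layout above can be made concrete with Python's struct module. This is a simplified sketch: the checksum here covers only the header and payload, whereas a real UDP checksum also covers an IP pseudo-header (source and destination IP addresses and the protocol number), omitted for brevity.

```python
import struct

def internet_checksum(data: bytes) -> int:
    # Standard Internet checksum: one's-complement of the
    # one's-complement sum of 16-bit words.
    if len(data) % 2:
        data += b"\x00"                          # pad to a 16-bit boundary
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                           # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def udp_header(src_port: int, dst_port: int, payload: bytes) -> bytes:
    length = 8 + len(payload)                    # header (8 bytes) + data
    # Checksum is computed with the checksum field set to zero.
    zero_csum = struct.pack("!HHHH", src_port, dst_port, length, 0)
    csum = internet_checksum(zero_csum + payload)
    return struct.pack("!HHHH", src_port, dst_port, length, csum)

hdr = udp_header(12345, 53, b"query")
src, dst, length, csum = struct.unpack("!HHHH", hdr)
print(src, dst, length)                          # 12345 53 13
```

A receiver can validate a segment by summing the header (checksum included) and payload: a correct packet folds to zero.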

As with other transport protocols, the UDP header and data are not processed by Intermediate Systems (IS) in the network, and are delivered to the final destination in the same form as originally transmitted. At the final destination, the UDP protocol layer receives packets from the IP network layer. These are checked using the checksum (when it is non-zero, this checks correct end-to-end operation of the network service), and all invalid PDUs are discarded. UDP does not make any provision for error reporting if the packets are not delivered. Valid data are passed to the appropriate session layer protocol identified by the source and destination port numbers (i.e. the session service access points).

UDP and UDP-Lite may also be used for multicast and broadcast, allowing senders to transmit to multiple receivers.

Transmission Control Protocol (TCP): The Transmission Control Protocol (TCP) is a connection-oriented, reliable protocol. It provides a reliable transport service between pairs of processes executing on End Systems (ES), using the network layer service provided by the IP protocol.
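The connection-oriented service can be contrasted with UDP using a minimal TCP echo exchange over loopback (an illustrative sketch; the addresses and message are arbitrary):

```python
import socket
import threading

# Server: listen, accept one connection, echo one message back.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))             # port 0: let the OS pick a free port
srv.listen(1)
addr = srv.getsockname()

def serve():
    conn, _ = srv.accept()             # three-way handshake completes here
    conn.sendall(conn.recv(1024))      # echo the received bytes back
    conn.close()

t = threading.Thread(target=serve)
t.start()

# Client: connect() must establish the end-to-end connection before data flows.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(addr)
cli.sendall(b"hello tcp")
reply = cli.recv(1024)
print(reply)                           # b'hello tcp'
cli.close(); t.join(); srv.close()
```

Unlike the UDP sketch earlier, no data can be exchanged here until connect()/accept() have completed the handshake, and delivery within the connection is reliable and ordered.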

(Figure: TCP providing reliable data transfer to FTP over an IP network using Ethernet)

TCP is stream oriented; that is, TCP protocol entities exchange streams of data. Individual bytes of data (e.g. from an application or session layer protocol) are placed in memory buffers and transmitted by TCP in transport Protocol Data Units (for TCP these are usually known as "segments"). The reliable, flow-controlled TCP service is much more complex than UDP, which provides only a best-effort service. To implement the service, TCP uses a number of protocol timers that ensure reliable and synchronised communication between the two End Systems. For most networks, approximately 90% of current traffic uses this transport service. It is used by such applications as telnet, the World Wide Web (WWW), FTP, and electronic mail. The transport header contains a service access point which indicates the service being used (e.g. 23 = Telnet; 25 = Mail; 69 = TFTP; 80 = WWW (HTTP)). The port numbers associated with these services generally have the same values as those used for UDP services (a full list of all port numbers is provided in the reference at the end of this page).

6. With diagram explain the components of a VoIP networking system.

Ans: IP Telephony Server(s): This is the heart of the IP Telephony system, which provides complete call control, dial plan control, and all the basic voice applications. (In smaller systems, all the functionalities of the application servers mentioned below can also be bundled into this server.)

Application Servers: Sometimes applications like IVR (Interactive Voice Response / Auto Attendant), call recording, voice mail, and database integration need to be hosted on separate servers, especially for larger VoIP installations.

IP Phones: These phones connect directly to the IP network (RJ-45 based UTP cables) and provide all the voice functionalities hitherto provided by analog phones, such as caller ID display, speaker phone, speed dial keys, memory, etc.
Soft Phones: These are basically software utilities that have all the telephony functions but use the computer and a headset with microphone to make and receive calls.

Wi-Fi Phones / Dual-Mode Cell Phones: Wi-Fi phones are based on IP technology; they connect to the wireless network and act as mobile extensions. Certain cell phones come with Wi-Fi adaptors and can be used as a Wi-Fi phone (if the manufacturer supports this). Cell phones can also connect to the IP Telephony server through 3G/CDMA networks for making a VoIP call.

Analog Telephony Adapters (ATA): These are specialised devices that connect to the LAN at one end and to FXO ports (analog trunks) or FXS ports (analog extensions) at the other end.

PRI Cards: These are used to connect PRI/E1/T1 trunk lines to IP Telephony servers. Usually they plug directly into a PCI/PCI Express slot in the server.

Computer IP Network: An IP-based computer network is used to carry the voice signals across the enterprise, and sometimes even to remote locations.

IP phones are much more expensive when compared to the cost of analog phones. The voice call quality (over IP networks) depends on a number of parameters, such as the configuration of the right QoS parameters, latency, jitter, and available bandwidth across the network. IP networks need to be built with sufficient redundancy and security for continuous availability of IP Telephony services; if there is a DoS attack on the network, for example, the telephones become inactive along with the computers. Scaling of IP Telephony systems needs to be planned properly; failing this, the IP Telephony server may not be able to handle high concurrent call loads. There are hardware- or license-based restrictions on the maximum number of concurrent calls that a single server can handle, and on the maximum number of end points that can connect to a single server.
