
COURSE MATERIAL
COMPUTER NETWORKS Aug – Dec 2009
DEPTT. : IT Paper Code : IT-305E SEMESTER: V

UNIT 1: OSI REFERENCE MODEL & NETWORK ARCHITECTURE

The readings referred to in the table below are recommended material from
A - Forouzan, “Data Communications & Networking”, 3e
B - Tanenbaum, “Computer Networks”, 4e

LECTURE NO. 1 READINGS: A - PAGE 1 to 2, B - PAGE 49 to 68

INTRODUCTION TO COMPUTER NETWORKS:


“Computer communications” refers to the electrical transmission of data from one system to another; it
describes the manner in which computers exchange information with each other.
“Networking” refers to the concept of connecting a group of systems for the express purpose of sharing
information.
A computer network is a group of interconnected computers. In the world of Computer Networks:
• The connected entities of a network are computers or other devices (nodes).
• The link through which communication takes place is called the network medium.
• The rules that govern the manner in which data are exchanged between devices are defined by a
common network protocol.
“A Computer Network is a collection of computers & other devices that use a common network protocol
to share resources with each other over a network medium”
A computer network is any set of computers or devices connected to each other with the ability to exchange
data.[2] Examples of different networks are:
• Local area network (LAN), which is usually a small network constrained to a small geographic
area.
• Wide area network (WAN) that is usually a larger network that covers a large geographic area.
• Wireless LANs and WANs (WLAN & WWAN) are the wireless equivalent of the LAN and
WAN.
Networks can be interconnected to allow communication using a variety of media, including twisted-pair
copper wire cable, coaxial cable, optical fiber, and various wireless technologies.
The devices can be separated by a few meters (e.g. via Bluetooth) or nearly unlimited distances (e.g. via the
interconnections of the Internet).

ARPANET: In the mid-1960s, the mainframe computers in research organisations were standalone
devices. Computers from different manufacturers were unable to communicate with each other. The
Advanced Research Projects Agency (ARPA) in the Department of Defense was interested in finding a
way to connect computers so that the researchers it funded could share their findings.
In 1967, at an Association for Computing Machinery meeting, ARPA presented its ideas for
ARPANET, a small network of connected computers. The idea was that each host computer would be
attached to a specialized computer called an Interface Message Processor (IMP). The IMPs would in turn
be connected to one another. Each IMP had to be able to communicate with other IMPs as well as with its
own attached host.
In 1969, work began on the ARPAnet, grandfather to the Internet. Designed as a computer version of the
nuclear bomb shelter, ARPAnet protected the flow of information between military installations by creating
a network of geographically separated computers that could exchange information via a newly developed
protocol (rule for how computers interact) called NCP (Network Control Protocol). One opposing view to
ARPAnet's origins comes from Charles M. Herzfeld, the former director of ARPA. He claimed that
ARPAnet was not created as a result of a military need, stating "it came out of our frustration that there
were only a limited number of large, powerful research computers in the country and that many research
investigators who should have access were geographically separated from them." ARPA stands for the
Advanced Research Projects Agency, a branch of the military that developed top secret systems and
weapons during the Cold War. The first data exchange over this new network occurred between computers
at UCLA and Stanford Research Institute. On their first attempt to log into Stanford's computer by typing
"login", UCLA researchers crashed the system when they typed the letter 'g'.
 Four computers were the first connected in the original ARPAnet. They were located in the
respective computer research labs of UCLA (Honeywell DDP 516 computer), Stanford Research
Institute (SDS-940 computer), UC Santa Barbara (IBM 360/75), and the University of Utah (DEC
PDP-10). As the network expanded, different models of computers were connected, creating
compatibility problems. The solution rested in a better set of protocols called TCP/IP
(Transmission Control Protocol/Internet Protocol), which the ARPANET adopted in 1983.
LECTURE NO. 2 READINGS: A - PAGE 16, A - PAGE 8 to 13
INTERNET:
The Internet is a global system of interconnected computer networks that interchange data by packet
switching using the standardized Internet Protocol Suite (TCP/IP). It is a "network of networks" that
consists of millions of private and public, academic, business, and government networks of local to global
scope that are linked by copper wires, fiber-optic cables, wireless connections, and other technologies.
The Internet carries various information resources and services, such as electronic mail, online chat, file
transfer and file sharing, online gaming, and the inter-linked hypertext documents and other resources of
the World Wide Web (WWW).
The Internet is a specific internetwork. It consists of a worldwide interconnection of governmental,
academic, public, and private networks based upon the networking technologies of the Internet Protocol
Suite. It is the successor of the Advanced Research Projects Agency Network (ARPANET) developed by
DARPA of the U.S. Department of Defense. The Internet is also the communications backbone underlying
the World Wide Web (WWW). The 'Internet' is most commonly spelled with a capital 'I' as a proper noun,
for historical reasons and to distinguish it from other generic internetworks.
Participants in the Internet use several hundred documented, and often standardized,
protocols compatible with the Internet Protocol Suite and an addressing system (IP
Addresses) administered by the Internet Assigned Numbers Authority and address registries. Service
providers and large enterprises exchange information about the reachability of their address spaces through
the Border Gateway Protocol (BGP), forming a redundant worldwide mesh of transmission paths.
Private network
In Internet terminology, a private network is typically a network that uses private IP address space,
following the standards set by RFC 1918 and RFC 4193. These addresses are common in home and office
local area networks (LANs), as using globally routable addresses is seen as impractical or unnecessary.
Private IP addresses were originally created due to the shortage of publicly registered IP addresses created
by the IPv4 standard, but are also a feature of the next generation Internet Protocol, IPv6.
These addresses are private because they are not globally assigned, meaning they are not allocated to any
specific organisation; instead, any organisation needing private address space can use these addresses
without needing approval from a regional Internet registry (RIR). Consequently, they are not routable on
the public Internet, meaning that if such a private network wishes to connect to the Internet, it must use
either a Network Address Translation (NAT) gateway, or a proxy server.
The most common use of these addresses is in home networks, since most Internet Service Providers (ISPs)
only allocate a single IP address to each customer, but many homes have more than one networking device
(for example, several computers, or a printer). In this situation, a NAT gateway is almost always used to
provide Internet connectivity. They are also commonly used in corporate networks which, for security
reasons, are not connected directly to the Internet, meaning globally routable addresses are unnecessary.
Often a proxy, SOCKS gateway, or similar is used to provide restricted Internet access to internal users. In
both cases, private addresses are seen as adding security to the internal network, since it's impossible for an
Internet host to connect directly to an internal system.
Because many internal networks use the same private IP addresses, a common problem when trying to
merge two such networks (e.g. during a company merger or takeover) is that both organisations have
allocated the same IPs in their networks. In this case, either one network must renumber, often a difficult
and time-consuming task, or a NAT router must be placed between the networks to translate one network's
addresses before they can reach the other side.
It is not uncommon for private address space to "leak" onto the Internet in various ways. Poorly configured
private networks often attempt reverse DNS lookups for these addresses, putting extra load on the Internet's
root nameservers. The AS112 project mitigates this load by providing special "blackhole" anycast
nameservers for private addresses, which only return "not found" answers for these queries. Organisational
edge routers are usually configured to drop ingress IP traffic sourced from these networks; such traffic can
occur either by accident or from malicious hosts using a spoofed source address. Less commonly, ISP edge
routers will drop such ingress traffic from customers, which reduces the impact on the Internet of such
misconfigured or malicious hosts on the customer's network.
A common misconception is that these addresses are not routable. However, while not routable on the
public Internet, they are routable within an organisation or site.
The Internet Engineering Task Force (IETF) has directed IANA to reserve the following IPv4 address
ranges for private networks, as published in RFC 1918:
RFC 1918 name   IP address range                number of addresses   classful description      largest CIDR block (subnet mask)   host id size
24-bit block    10.0.0.0 – 10.255.255.255       16,777,216            single class A            10.0.0.0/8 (255.0.0.0)             24 bits
20-bit block    172.16.0.0 – 172.31.255.255     1,048,576             16 contiguous class Bs    172.16.0.0/12 (255.240.0.0)        20 bits
16-bit block    192.168.0.0 – 192.168.255.255   65,536                256 contiguous class Cs   192.168.0.0/16 (255.255.0.0)       16 bits
Note that classful addressing is obsolete and no longer used on the Internet. For example, while 10.0.0.0/8
would be a single class A network, it is not uncommon for organisations to divide it into smaller /16 or /24
networks.
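These reserved ranges can be checked programmatically. The sketch below uses Python's standard
ipaddress module, which encodes the same RFC 1918 blocks; the helper name is_rfc1918 is illustrative:

```python
import ipaddress

def is_rfc1918(addr: str) -> bool:
    """True if addr falls in one of the three RFC 1918 private blocks."""
    ip = ipaddress.IPv4Address(addr)
    blocks = [
        ipaddress.IPv4Network("10.0.0.0/8"),      # 24-bit block
        ipaddress.IPv4Network("172.16.0.0/12"),   # 20-bit block
        ipaddress.IPv4Network("192.168.0.0/16"),  # 16-bit block
    ]
    return any(ip in net for net in blocks)

# As noted above, 10.0.0.0/8 may be subdivided into smaller networks
# inside a site, e.g. into 256 /16 subnets:
subnets = list(ipaddress.IPv4Network("10.0.0.0/8").subnets(new_prefix=16))
```

Note that ipaddress also exposes an is_private attribute that covers these ranges (along with a few
other reserved ones, such as loopback).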
Network Classification
The following list presents categories used for classifying networks.
Network topology
Computer networks may be classified according to the network topology upon which the network is based,
such as Bus network, Star network, Ring network, Mesh network, Star-bus network, Tree or Hierarchical
topology network. Network Topology signifies the way in which devices in the network see their logical
relations to one another. The use of the term "logical" here is significant. That is, network topology is
independent of the "physical" layout of the network. Even if networked computers are physically placed in
a linear arrangement, if they are connected via a hub, the network has a Star topology, rather than a Bus
Topology. In this regard the visual and operational characteristics of a network are distinct; the logical
network topology is not necessarily the same as the physical layout. Networks may also be classified based
on the method used to convey the data; these include digital and analog networks.
1. Bus network
A bus network topology is a network architecture in which a set of clients are connected via a shared
communications line, called a bus. There are several common instances of the bus architecture, including
one in the motherboard of most computers, and those in some versions of Ethernet networks. Bus networks
are the simplest way to connect multiple clients, but may have problems when two clients want to transmit
at the same time on the same bus. Thus systems which use bus network architectures normally have some
scheme of collision handling or collision avoidance for communication on the bus, quite often using Carrier
Sense Multiple Access (CSMA) or a bus master which controls access to the shared bus resource.
CSMA is a probabilistic Media Access Control (MAC) protocol in which a node verifies the absence of
other traffic before transmitting on a shared transmission medium, such as an electrical bus or a band of
the electromagnetic spectrum. "Carrier Sense" describes the fact that a transmitter listens for a carrier
wave before trying to send; that is, it tries to detect the presence of an encoded signal from another station
before attempting to transmit. If a carrier is sensed, the station waits for the transmission in progress to
finish before initiating its own transmission. "Multiple Access" describes the fact that multiple stations
send and receive on the medium; transmissions by one node are generally received by all other stations
using the medium.

The bus topology makes the addition of new devices straightforward. The term used to describe clients is
station or workstation in this type of network. Bus network topology uses a broadcast channel which means
that all attached stations can hear every transmission and all stations have equal priority in using the
network to transmit[1] data.
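The carrier-sense rule can be illustrated with a toy simulation. This is only a sketch: the csma_send
helper, the station name, and the fixed probability that the in-progress transmission ends are all
assumptions for illustration, not real channel behaviour:

```python
import random

def csma_send(station: str, channel_busy: bool, max_attempts: int = 5) -> int:
    """Return the attempt number on which the station begins transmitting.

    Carrier sense: the station listens before sending and defers while the
    shared medium is busy. In this toy model the other transmission ends
    with probability 0.5 per attempt.
    """
    rng = random.Random(42)  # fixed seed so the sketch is repeatable
    for attempt in range(1, max_attempts + 1):
        if not channel_busy:
            return attempt                 # medium idle: transmit now
        channel_busy = rng.random() < 0.5  # maybe the other sender finished
    return max_attempts

print(csma_send("A", channel_busy=False))  # idle medium: transmits on attempt 1
```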
Advantages and disadvantages of a bus network
Advantages
• Easy to implement and extend
• Well suited for temporary or small networks not requiring high speeds (quick setup)
• Cheaper than other topologies.
• Cost effective as only a single cable is used
• Cable faults are easily identified
Disadvantages
• Limited cable length and number of stations.
• If there is a problem with the cable, the entire network goes down.
• Maintenance costs may be higher in the long run.
• Performance degrades as additional computers are added or on heavy traffic.
• Proper termination is required (both ends of the bus must be terminated).
• Significant Capacitive Load (each bus transaction must be able to stretch to most distant link).
• It works best with limited number of nodes.
• It is slower than the other topologies.
2. Star networks are one of the most common computer network topologies. In its simplest form, a star
network consists of one central switch, hub or computer, which acts as a conduit to transmit messages.
Thus, the hub and leaf nodes, and the transmission lines between them, form a graph with the topology of a
star. If the central node is passive, the originating node must be able to tolerate the reception of an echo of
its own transmission, delayed by the two-way transmission time (i.e. to and from the central node) plus any
delay generated in the central node. An active star network has an active central node that usually has the
means to prevent echo-related problems.
The star topology reduces the chance of network failure by connecting all of the systems to a central node.
When applied to a bus-based network, this central hub rebroadcasts all transmissions received from any
peripheral node to all peripheral nodes on the network, sometimes including the originating node. All
peripheral nodes may thus communicate with all others by transmitting to, and receiving from, the central
node only. The failure of a transmission line linking any peripheral node to the central node will result in
the isolation of that peripheral node from all others, but the rest of the systems will be unaffected.
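The rebroadcast and link-isolation behaviour just described can be sketched with a toy Hub class
(illustrative only, not a model of any real device):

```python
class Hub:
    """Toy model of a star topology: the hub rebroadcasts every frame
    to all attached peripheral nodes except the sender."""

    def __init__(self):
        self.nodes = {}           # node name -> list of received frames
        self.failed_links = set() # nodes whose link to the hub is down

    def attach(self, name):
        self.nodes[name] = []

    def send(self, sender, frame):
        if sender in self.failed_links:
            return []             # sender's link is down: frame goes nowhere
        delivered = []
        for name, inbox in self.nodes.items():
            if name != sender and name not in self.failed_links:
                inbox.append(frame)
                delivered.append(name)
        return delivered

hub = Hub()
for n in ["A", "B", "C", "D"]:
    hub.attach(n)
print(hub.send("A", "hello"))   # ['B', 'C', 'D']
hub.failed_links.add("B")       # B's link fails: only B is isolated
print(hub.send("A", "again"))   # ['C', 'D']
```

A failed link isolates exactly one peripheral node, while a failed hub (not modelled here) would
isolate every node at once.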
Advantages
• Better performance: Passing of data packets through unnecessary nodes is prevented by this
topology. At most 3 devices and 2 links are involved in any communication between any two
devices which are part of this topology. This topology places a heavy load on the central hub;
however, if the central hub has adequate capacity, then very high network utilization by one device
in the network does not affect the other devices in the network.
• Isolation of devices: Each device is inherently isolated by the link that connects it to the hub. This
makes the isolation of individual devices fairly straightforward, and amounts to disconnecting
the device from the hub. This isolated nature also prevents any non-centralized failure from
affecting the network.
• Benefits from centralization: As the central hub is the bottleneck, increasing capacity of the
central hub or adding additional devices to the star, can help scale the network very easily. The
central nature also allows the inspection of traffic through the network. This can help analyze all the
traffic in the network and determine suspicious behavior.
• Simplicity: The topology is easy to understand, establish, and navigate. The simple topology
obviates the need for complex routing or message passing protocols. As noted earlier, the isolation
and centralization simplifies fault detection, as each link or device can be probed individually.
Disadvantages
The primary disadvantage of a star topology is the high dependence of the system on the functioning of the
central hub. While the failure of an individual link only results in the isolation of a single node, the failure
of the central hub renders the network inoperable, immediately isolating all nodes. The performance and
scalability of the network also depend on the capabilities of the hub. Network size is limited by the number
of connections that can be made to the hub, and performance for the entire network is capped by its
throughput. While in theory traffic between the hub and a node is isolated from other nodes on the network,
other nodes may see a performance drop if traffic to another node occupies a significant portion of the
central node's processing capability or throughput. Furthermore, wiring up of the system can be very
complex.
3. A ring network is a network topology in which each node connects to exactly two other nodes,
forming a single continuous pathway for signals through each node - a ring. Data travels from node to
node, with each node along the way handling every packet.
Because a ring topology provides only one pathway between any two nodes, ring networks may be
disrupted by the failure of a single link. A node failure or cable break might isolate every node attached to
the ring. FDDI networks overcome this vulnerability by sending data on a clockwise and a
counterclockwise ring: in the event of a break data is wrapped back onto the complementary ring before it
reaches the end of the cable, maintaining a path to every node along the resulting "C-Ring". 802.5 networks
-- also known as IBM Token Ring networks -- avoid the weakness of a ring topology altogether: they
actually use a star topology at the physical layer and a Multistation Access Unit to imitate a ring at the
data link layer.
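The single-pathway property can be sketched as follows: a frame is forwarded clockwise from node to
node, and a single broken link makes the destination unreachable. The node names and the
broken_links representation are illustrative:

```python
def ring_deliver(ring, src, dst, broken_links=frozenset()):
    """Forward a frame clockwise from src; return the list of hops,
    or None if a broken link stops it before reaching dst.

    broken_links holds (node, next_node) pairs whose link has failed.
    """
    n = len(ring)
    i = ring.index(src)
    path = [src]
    while path[-1] != dst:
        cur = ring[i % n]
        nxt = ring[(i + 1) % n]
        if (cur, nxt) in broken_links:
            return None          # single break: no alternate pathway
        path.append(nxt)
        i += 1
    return path

ring = ["A", "B", "C", "D"]
print(ring_deliver(ring, "A", "C"))                             # ['A', 'B', 'C']
print(ring_deliver(ring, "A", "C", broken_links={("B", "C")}))  # None
```

A dual-ring scheme such as FDDI would instead wrap the frame onto the counter-rotating ring at the
break, which is exactly the vulnerability this single-ring sketch exposes.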
Advantages
• Very orderly network where every device has access to the token and the opportunity to transmit
• Performs better than a star topology under heavy network load
• Can create much larger network using Token Ring
• Does not require network server to manage the connectivity between the computers
Disadvantages
• One malfunctioning workstation or bad port in the MAU can create problems for the entire
network
• Moves, adds and changes of devices can affect the network
• Network adapter cards and MAU's are much more expensive than Ethernet cards and hubs
• Much slower than an Ethernet network under normal load
LECTURE NO 3 READINGS: A-PAGE 8 TO 13
4. Mesh networking is a way to route data, voice and instructions between nodes. It allows for
continuous connections and reconfiguration around broken or blocked paths by “hopping” from node to
node until the destination is reached. A mesh network whose nodes are all connected to each other is a fully
connected network. Mesh networks differ from other networks in that the component parts can all connect
to each other via multiple hops, and they generally are not mobile. Mesh networks can be seen as one type
of ad hoc network. Mobile ad-hoc networks (MANETs) and mesh networks are therefore closely related,
but MANETs also have to deal with the problems introduced by the mobility of the nodes. Mesh networks
are self-healing: the network can still operate even when a node breaks down or a connection goes bad. As
a result, a very reliable network is formed. This concept is applicable to wireless networks, wired networks,
and software interaction. Wireless mesh networks are the most topical application of mesh architectures.
Wireless mesh was originally developed for military applications but has undergone significant evolution
in the past decade. As the cost of radios plummeted, single-radio products evolved to support more radios
per mesh node, with the additional radios providing specific functions such as client access, backhaul
service, or scanning radios for high-speed handover in mobility applications. The mesh node design also
became more modular: one box could support multiple radio cards, each operating at a different
frequency.
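The self-healing behaviour can be sketched as re-running a shortest-path search that skips failed
nodes; the four-node topology below is hypothetical:

```python
from collections import deque

def shortest_path(links, src, dst, down=frozenset()):
    """BFS over an undirected mesh; nodes in 'down' are skipped,
    modelling re-routing around a failed node."""
    adj = {}
    for a, b in links:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in sorted(adj.get(path[-1], ())):
            if nxt not in seen and nxt not in down:
                queue.append(path + [nxt])
                seen.add(nxt)
    return None                  # no surviving path

mesh = [("A", "B"), ("B", "D"), ("A", "C"), ("C", "D")]
print(shortest_path(mesh, "A", "D"))              # ['A', 'B', 'D']
print(shortest_path(mesh, "A", "D", down={"B"}))  # ['A', 'C', 'D'] - rerouted
```

With redundant links, losing node B still leaves a path; a ring or bus would not recover.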
5. Tree Topology
Tree topologies integrate multiple star topologies together onto a bus. In its simplest form, only hub devices
connect directly to the tree bus, and each hub functions as the "root" of a tree of devices. This bus/star
hybrid approach supports future expandability of the network much better than a bus (limited in the number
of devices due to the broadcast traffic it generates) or a star (limited by the number of hub connection
points) alone

6. Hybrid topology

The hybrid topology is a type of network topology that is composed of one or more interconnections of two
or more networks that are based upon the same physical topology, but where the physical topology of the
resulting network does not meet the definition of the original physical topology of the interconnected
networks. For example, an interconnection of two or more networks that are based upon the physical star
topology might create a hybrid topology which resembles a mixture of the physical star and physical bus
topologies, or a mixture of the physical star and physical tree topologies, depending upon how the
individual networks are interconnected. By contrast, the physical topology of a network that results from
an interconnection of two or more networks based upon the physical distributed bus topology retains the
topology of a physical distributed bus network.
 Star-bus
A type of network topology in which the central nodes of one or more individual star networks are
connected together using a common 'bus' network whose physical topology is based upon the physical
linear bus topology, the endpoints of the common 'bus' being terminated with the characteristic impedance
of the transmission medium where required. For example, two or more hubs connected to a common
backbone with drop cables through the port on the hub that is provided for that purpose (e.g., a properly
configured 'uplink' port) would comprise the physical bus portion of the physical star-bus topology, while
each of the individual hubs, combined with the individual nodes connected to them, would comprise the
physical star portion of the topology.

 Hybrid mesh
A type of hybrid physical network topology that is a combination of the physical partially connected
topology and one or more other physical topologies, with the mesh portion of the topology consisting of
redundant or alternate connections between some of the nodes in the network. The physical hybrid mesh
topology is commonly used in networks which require a high degree of availability.
LECTURE NO. 4 READINGS: A-PAGE 13 to 15
Types of networks
1.Personal Area Network (PAN)
A Personal Area Network (PAN) is a computer network used for communication among computer
devices close to one person. Some examples of devices that are used in a PAN are printers, fax machines,
telephones, PDAs and scanners. The reach of a PAN is typically about 20-30 feet (approximately 6-9
meters), but this is expected to increase with technology improvements.
2. Local Area Network (LAN)
A Local Area Network (LAN) is a computer network covering a small physical area, like a home, office,
or small group of buildings, such as a school or an airport. Current LANs are most likely to be based on
Ethernet technology.
For example, a library may have a wired or wireless LAN for users to interconnect local devices (e.g.,
printers and servers) and to connect to the Internet. On a wired LAN, PCs in the library are typically
connected by category 5 (Cat5) cable, running the IEEE 802.3 protocol through a system of interconnected
devices that eventually connect to the Internet.
3.Metropolitan Area Network (MAN)
A Metropolitan Area Network (MAN) is a network that connects two or more Local Area Networks or
Campus Area Networks together but does not extend beyond the boundaries of the immediate town/city.
Routers, switches and hubs are connected to create a Metropolitan Area Network.
4.Wide Area Network (WAN)
A Wide Area Network (WAN) is a computer network that covers a broad area (i.e., any network whose
communications links cross metropolitan, regional, or national boundaries [1]). Less formally, a WAN is a
network that uses routers and public communications links [1]. Contrast with personal area networks
(PANs), local area networks (LANs), campus area networks (CANs), or metropolitan area networks
(MANs) which are usually limited to a room, building, campus or specific metropolitan area (e.g., a city)
respectively. The largest and most well-known example of a WAN is the Internet. A WAN is a data
communications network that covers a relatively broad geographic area (e.g., from one city or country to
another) and that often uses transmission facilities provided by common carriers, such as
telephone companies. WAN technologies generally function at the lower three layers of the OSI reference
model: the physical layer, the data link layer, and the network layer.
5. Internetwork
Internetworking involves connecting two or more distinct computer networks or network segments via
a common routing technology. The result is called an internetwork (often shortened to internet): two or
more networks or network segments connected using devices that operate at layer 3 (the 'network' layer) of
the OSI Basic Reference Model, such as a router. Any interconnection among or between public, private,
commercial, industrial, or governmental networks may also be defined as an internetwork.
In modern practice, the interconnected networks use the Internet Protocol. There are at least three variants
of internetwork, depending on who administers and who participates in them:
• Intranet
• Extranet
• Internet
Intranets and extranets may or may not have connections to the Internet. If connected to the Internet, the
intranet or extranet is normally protected from being accessed from the Internet without proper
authorization. The Internet is not considered to be a part of the intranet or extranet, although it may serve as
a portal for access to portions of an extranet.
Intranet: An intranet is a set of networks, using the Internet Protocol and IP-based tools such as web
browsers and file transfer applications, that is under the control of a single administrative entity. That
administrative entity closes the intranet to all but specific, authorized users. Most commonly, an intranet is
the internal network of an organization. A large intranet will typically have at least one web server to
provide users with organizational information.
Extranet: An extranet is a network or internetwork that is limited in scope to a single organization or
entity but which also has limited connections to the networks of one or more other (usually, but not
necessarily, trusted) organizations or entities (e.g. a company's customers may be given access to some part
of its intranet creating in this way an extranet, while at the same time the customers may not be considered
'trusted' from a security standpoint). Technically, an extranet may also be categorized as a CAN, MAN,
WAN, or other type of network, although, by definition, an extranet cannot consist of a single LAN; it must
have at least one connection with an external network.
Internet: The Internet is a specific internetwork: a worldwide interconnection of governmental, academic,
public, and private networks based upon the networking technologies of the Internet Protocol Suite. It is
described in detail in the INTERNET section under Lecture No. 2 above.

LECTURE NO. 5 READINGS: A-PAGE 27 to 29, B PAGE-37 TO 49

OSI MODEL:
The Open Systems Interconnection Basic Reference Model (OSI Reference Model or OSI Model) is an
abstract description for layered communications and computer network protocol design. It was developed
as part of the Open Systems Interconnection (OSI) initiative. In its most basic form, it divides network
architecture into seven layers which, from top to bottom, are the Application, Presentation, Session,
Transport, Network, Data Link, and Physical layers. It is therefore often referred to as the OSI Seven
Layer Model.
 A layer is a collection of conceptually similar functions that provides services to the layer above it
and receives services from the layer below it. For example, a layer that provides error-free
communications across a network provides the path needed by applications above it, while it calls
the next lower layer to send and receive packets that make up the contents of the path.
Description of OSI layers
OSI Model
Group          Data unit   Layer             Function
Host layers    Data        7. Application    Network process to application
Host layers    Data        6. Presentation   Data representation and encryption
Host layers    Data        5. Session        Interhost communication
Host layers    Segment     4. Transport      End-to-end connections and reliability
Media layers   Packet      3. Network        Path determination and logical addressing
Media layers   Frame       2. Data Link      Physical addressing (MAC & LLC)
Media layers   Bit         1. Physical       Media, signal and binary transmission
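Encapsulation down this stack can be sketched by wrapping the payload with one header per layer on
the way down and stripping them in reverse order on the way up. The bracketed header strings below
are purely illustrative, not real protocol formats:

```python
LAYERS = ["Application", "Presentation", "Session",
          "Transport", "Network", "Data Link"]  # the physical layer sends bits

def encapsulate(data: str) -> str:
    """Wrap the payload with one illustrative header per layer, top-down."""
    for layer in LAYERS:
        data = f"[{layer[:3].upper()}]{data}"
    return data

def decapsulate(pdu: str) -> str:
    """Strip headers bottom-up, as the receiving host's stack would."""
    for layer in reversed(LAYERS):
        prefix = f"[{layer[:3].upper()}]"
        assert pdu.startswith(prefix), f"expected header {prefix}"
        pdu = pdu[len(prefix):]
    return pdu

wire = encapsulate("hello")
print(wire)               # [DAT][NET][TRA][SES][PRE][APP]hello
print(decapsulate(wire))  # hello
```

Each layer only inspects its own header, which is why the layers can be designed and replaced
independently.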
Layer 7: Application Layer
The application layer is the OSI layer closest to the end user, which means that both the OSI application
layer and the user interact directly with the software application. This layer interacts with software
applications that implement a communicating component. Such application programs fall outside the scope
of the OSI model. Application layer functions typically include identifying communication partners,
determining resource availability, and synchronizing communication. When identifying
communication partners, the application layer determines the identity and availability of communication
partners for an application with data to transmit. When determining resource availability, the application
layer must decide whether sufficient network resources for the requested communication exist. In
synchronizing communication, all communication between applications requires cooperation that is
managed by the application layer. Some examples of application layer implementations include Telnet,
File Transfer Protocol (FTP), and Simple Mail Transfer Protocol (SMTP).
Layer 6: Presentation Layer
The Presentation Layer establishes a context between Application Layer entities, in which the higher-layer
entities can use different syntax and semantics, as long as the Presentation Service understands both and the
mapping between them. The presentation service data units are then encapsulated into Session Protocol
Data Units, and moved down the stack.
This layer provides independence from differences in data representation (e.g., encryption) by translating
from application to network format, and vice versa. The presentation layer works to transform data into the
form that the application layer can accept. This layer formats and encrypts data to be sent across a network,
providing freedom from compatibility problems. It is sometimes called the syntax layer.
Layer 5: Session Layer
The Session Layer controls the dialogues/connections (sessions) between computers. It establishes,
manages and terminates the connections between the local and remote application. It provides for full-
duplex, half-duplex, or simplex operation, and establishes checkpointing, adjournment, termination, and
restart procedures. The OSI model made this layer responsible for "graceful close" of sessions, which is a
property of TCP, and also for session checkpointing and recovery, which is not usually used in the Internet
Protocol Suite. The Session Layer is commonly implemented explicitly in application environments that
use remote procedure calls (RPCs). It offers various services, including:
1. Dialog control: The session layer allows two systems to enter into a dialog. It allows the communication between two processes to take place in either half-duplex or full-duplex mode.
2. Synchronization: It allows a process to add checkpoints, or synchronization points, to a stream of data. For example, if a system is sending a file of 2000 pages, it is advisable to insert checkpoints after every 100 pages to ensure that each 100-page unit is received and acknowledged independently.
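The 2000-page example above can be sketched in code. This is a hypothetical simulation, not a real session-layer protocol: the function names, the in-process "receiver", and the ack mechanism are all illustrative assumptions.

```python
# Hypothetical sketch of session-layer checkpointing: a 2000-page file is
# sent in 100-page units, each acknowledged independently, so a failure
# only forces retransmission of the unit since the last checkpoint.
CHECKPOINT = 100

def send_with_checkpoints(total_pages, deliver):
    """deliver(unit) returns True on ack; a unit is retried until acked."""
    page = 0
    while page < total_pages:
        unit = list(range(page, min(page + CHECKPOINT, total_pages)))
        while not deliver(unit):      # resend only this 100-page unit
            pass
        page += len(unit)             # the checkpoint advances after the ack
    return page

# Simulated receiver that loses the ack for the unit starting at page 500:
attempts = {"count": 0}
def flaky_receiver(unit):
    attempts["count"] += 1
    if unit[0] == 500 and attempts["count"] == 6:
        return False                  # sender retries pages 500-599 only
    return True

pages_sent = send_with_checkpoints(2000, flaky_receiver)
```

Only one 100-page unit is retransmitted after the simulated failure; the 500 pages already acknowledged are not resent.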
LECTURE NO. 6 READINGS: - DO-
Layer 4: Transport Layer
The Transport Layer provides transparent transfer of data between end users, providing reliable data
transfer services to the upper layers. It is responsible for process-to-process delivery of the entire message. It ensures that the whole message arrives intact and in order, overseeing both error and flow control at the source-to-destination level. The Transport Layer controls the reliability of a given link through flow control, segmentation/desegmentation, and error control. Some protocols are state- and connection-oriented. This means that the Transport Layer can keep track of the segments and retransmit those that fail.
1. Service point addressing: Computers often run several programs at the same time. For this reason, source-to-destination delivery means delivery not only from one computer to the next but also from a specific process on one computer to a specific process on another computer. The transport layer header must therefore include a type of address called a port address. The transport layer gets the entire message to the correct process on the correct computer.
2. Segmentation & reassembly: A message is divided into transmittable segments, each segment containing a sequence number. These numbers enable the transport layer to reassemble the message correctly upon arrival at the destination and to identify and replace packets that were lost in transmission.
3. Connection control: The transport layer can be either connectionless or connection-oriented. A connectionless transport layer treats each segment as an independent packet and delivers it to the transport layer at the destination machine. A connection-oriented transport layer makes a connection with the transport layer at the destination machine before delivering the packets. After all the data is transferred, the connection is terminated.
4. Flow control: Like the data link layer, the transport layer performs flow control, but it is performed end to end rather than across a single link.
5. Error control: It is performed process to process rather than across a single link. The sending transport layer makes sure that the entire message arrives at the receiving transport layer without error (damage, loss, or duplication). Error correction is usually achieved through retransmission.
Although not developed under the OSI Reference Model and not strictly conforming to the OSI definition
of the Transport Layer, the best known examples of a Layer 4 protocol are the Transmission Control
Protocol (TCP) and User Datagram Protocol (UDP).
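The transport-layer ideas above, a connection-oriented handshake and a port acting as the service point address, can be demonstrated on the loopback interface with Python's standard socket module. The echo-upper behaviour and the thread layout are illustrative choices, not part of TCP itself.

```python
# Minimal sketch of process-to-process delivery over TCP on loopback:
# the port number steers the data to the right process on the machine.
import socket
import threading

def echo_upper(server_sock):
    conn, _ = server_sock.accept()
    data = conn.recv(1024)            # TCP delivers the bytes intact and in order
    conn.sendall(data.upper())        # echo back, transformed
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))         # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]        # this port is the service point address

t = threading.Thread(target=echo_upper, args=(server,))
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))   # connection-oriented: handshake first
client.sendall(b"hello transport layer")
reply = client.recv(1024)
client.close()
t.join()
server.close()
```

Replacing SOCK_STREAM with SOCK_DGRAM (and dropping connect/accept) would give the connectionless UDP variant described in point 3 above.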
Layer 3: Network Layer
The Network Layer provides the functional and procedural means of transferring variable length data
sequences from a source to a destination via one or more networks, while maintaining the quality of service
requested by the Transport Layer. The Network Layer performs network routing functions, and might also
perform fragmentation and reassembly, and report delivery errors. Routers operate at this layer, sending data throughout the extended network and making the Internet possible. The best-known example of a Layer 3 protocol is the Internet Protocol (IP). It manages the connectionless transfer of data one hop at a time, from end system to ingress router, router to router, and from egress router to destination end system. It is not responsible for reliable delivery to a next hop, but only for the detection of errored packets so they may be discarded. When the medium of the next hop cannot accept a packet at its current length, IP is responsible for fragmenting the packet into pieces small enough for the medium to accept.
1. Logical addressing: The physical addressing implemented by the data link layer handles the addressing problem locally. The network layer adds a header to the packet coming from the upper layer that includes the logical addresses of the sender and receiver.
2. Routing: When independent networks are connected to create internetworks, routers route the packets to their final destination.
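The fragmentation-and-reassembly idea described above can be sketched in a few lines. This is only an illustration of the mechanism, not the real IPv4 fragment format (which carries the offset in 8-byte units inside the IP header); the function names and MTU value are assumptions.

```python
# Illustrative sketch: when the next hop's MTU is smaller than the payload,
# split it into fragments carrying an offset so the destination can
# reassemble them, even if they arrive out of order.
def fragment(payload, mtu):
    """Split payload into (offset, chunk) fragments no larger than mtu."""
    return [(off, payload[off:off + mtu]) for off in range(0, len(payload), mtu)]

def reassemble(fragments):
    """Rebuild the payload from fragments, regardless of arrival order."""
    return b"".join(chunk for _, chunk in sorted(fragments))

data = b"x" * 3000                    # a 3000-byte payload
frags = fragment(data, 1480)          # e.g. Ethernet MTU minus a 20-byte IP header
frags.reverse()                       # simulate out-of-order arrival
rebuilt = reassemble(frags)
```

With a 1480-byte limit, the 3000-byte payload becomes three fragments (1480 + 1480 + 40 bytes).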
Layer 2: Data Link Layer
The Data Link Layer is the second layer in the OSI model, above the Physical Layer; it ensures that error-free data is transferred between adjacent nodes in the network.
1. Framing: It breaks the datagrams passed down by the layers above into frames ready for transfer. This is called framing. The layer provides two main functionalities:
• Reliable data transfer service between two peer network layers
• Flow Control mechanism which regulates the flow of frames such that data congestion is not there
at slow receivers due to fast senders.
2. Error Control
• The bit stream transmitted by the physical layer is not guaranteed to be error free. The data link
layer is responsible for error detection and correction. The most common error control method is
to compute and append some form of a checksum to each outgoing frame at the sender's data link
layer and to recompute the checksum and verify it with the received checksum at the receiver's
side. If both of them match, then the frame is correctly received; else it is erroneous. The
checksums may be of two types:
• Error detecting: The receiver can only detect the error in the frame and inform the sender about it.
• Error detecting and correcting: The receiver can not only detect the error but also correct it.
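The compute-append-recompute scheme described above can be sketched with CRC-32 from the standard library. CRC-32 is an error-detecting code (the first kind listed above): it can report a damaged frame but not repair it. The frame layout here is an illustrative assumption.

```python
# Sender appends a CRC-32 to each frame; receiver recomputes and compares.
import struct
import zlib

def make_frame(payload: bytes) -> bytes:
    """Append a 4-byte CRC-32 checksum to the payload (sender side)."""
    return payload + struct.pack("!I", zlib.crc32(payload))

def check_frame(frame: bytes):
    """Recompute the checksum; return the payload if it matches, else None."""
    payload, received = frame[:-4], struct.unpack("!I", frame[-4:])[0]
    return payload if zlib.crc32(payload) == received else None

good = check_frame(make_frame(b"data link layer"))

corrupted = bytearray(make_frame(b"data link layer"))
corrupted[0] ^= 0xFF                   # flip bits, as noise on the link would
bad = check_frame(bytes(corrupted))    # mismatch -> frame flagged erroneous
```

An error-correcting code (e.g. a Hamming code) would go further and repair the flipped bits; the checksum here only detects them.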
3. Flow Control
Consider a situation in which the sender transmits frames faster than the receiver can accept them. If the
sender keeps pumping out frames at high rate, at some point the receiver will be completely swamped and
will start losing some frames. This problem may be solved by introducing flow control. Most flow control
protocols contain a feedback mechanism to inform the sender when it should transmit the next frame.
4. Access control: When two or more devices are connected to the same link, data link layer protocols are necessary to determine which device has control over the link at any given time.
5. Physical addressing: If frames are to be distributed to different systems on the network, the data link layer adds a header to the frame to define the sender and receiver of the frame.
Layer 1: Physical Layer
The Physical Layer is the first level in the seven-layer OSI model of computer networking. It translates
communications requests from the Data Link Layer into hardware-specific operations to effect transmission
or reception of electronic signals.
The Physical Layer is a fundamental layer upon which all higher level functions in a network are based.
However, due to the plethora of available hardware technologies with widely varying characteristics, this is
perhaps the most complex layer in the OSI architecture. The implementation of this layer is often termed
PHY.
The Physical Layer defines the means of transmitting raw bits rather than logical data packets over a
physical link connecting network nodes. The bit stream may be grouped into code words or symbols and
converted to a physical signal that is transmitted over a hardware transmission medium. The Physical Layer
provides an electrical, mechanical, and procedural interface to the transmission medium. The shapes of the
electrical connectors, which frequencies to broadcast on, which modulation scheme to use and similar low-
level parameters are specified here.
List of Physical Layer services
The major functions and services performed by the Physical Layer are:
• Bit-by-bit delivery
• Providing a standardized interface to physical transmission media, including
o Mechanical specification of electrical connectors and cables, for example maximum
cable length
o Electrical specification of transmission line signal level and impedance
o Radio interface, including electromagnetic spectrum frequency allocation and
specification of signal strength, analog bandwidth, etc.
o Specifications for IR over optical fiber or a wireless IR communication link
• Modulation
• Line coding
• Bit synchronization in synchronous serial communication
• Start-stop signalling and flow control in asynchronous serial communication
• Circuit mode multiplexing, as opposed to the statistical multiplexing performed at the higher level
o Establishment and termination of circuit switched connections
• Carrier sense and collision detection utilized by some level 2 multiple access protocols
• Equalization filtering, training sequences, pulse shaping and other signal processing of physical
signals
• Forward error correction, for example bitwise convolutional coding
• Bit-interleaving and other channel coding
The Physical Layer is also concerned with
• Point-to-point, multipoint or point-to-multipoint line configuration
• Physical network topology, for example bus, ring, mesh or star network
• Serial or parallel communication
• Simplex, half duplex or full duplex transmission mode
• Autonegotiation
Physical Layer examples
• V.92 telephone network modems
• IRDA Physical Layer
• USB Physical Layer
• Firewire
• EIA RS-232, EIA-422, EIA-423, RS-449, RS-485
• ITU Recommendations: see ITU-T
• DSL
• ISDN
• T1 and other T-carrier links, and E1 and other E-carrier links
• 10BASE-T, 10BASE2, 10BASE5, 100BASE-TX, 100BASE-FX, 100BASE-T, 1000BASE-T,
1000BASE-SX and other varieties of the Ethernet physical layer
• Varieties of 802.11
• SONET/SDH
LECTURE NO. 7 READINGS: B -PAGE532 to 533,41-44
TCP/IP
Definition: Transmission Control Protocol (TCP) and Internet Protocol (IP) are, technically speaking, two distinct network protocols. TCP and IP are so commonly used together, however, that TCP/IP has become the standard terminology for referring to either or both of the protocols. TCP/IP (Transmission Control Protocol/Internet Protocol) is the basic communication language, or protocol, of the Internet. It can also be
used as a communications protocol in a private network (either an intranet or an extranet). When you are
set up with direct access to the Internet, your computer is provided with a copy of the TCP/IP program just
as every other computer that you may send messages to or get information from also has a copy of TCP/IP.
TCP/IP is a two-layer program. The higher layer, Transmission Control Protocol, manages the assembling
of a message or file into smaller packets that are transmitted over the Internet and received by a TCP layer
that reassembles the packets into the original message. The lower layer, Internet Protocol, handles the
address part of each packet so that it gets to the right destination. Each gateway computer on the network
checks this address to see where to forward the message. Even though some packets from the same
message are routed differently than others, they'll be reassembled at the destination.
TCP/IP uses the client/server model of communication in which a computer user (a client) requests and is
provided a service (such as sending a Web page) by another computer (a server) in the network. TCP/IP
communication is primarily point-to-point, meaning each communication is from one point (or host
computer) in the network to another point or host computer. TCP/IP and the higher-level applications that
use it are collectively said to be "stateless" because each client request is considered a new request
unrelated to any previous one (unlike ordinary phone conversations that require a dedicated connection for
the call duration). Being stateless frees network paths so that everyone can use them continuously. (Note
that the TCP layer itself is not stateless as far as any one message is concerned. Its connection remains in
place until all packets in a message have been received.)
Many Internet users are familiar with the even higher layer application protocols that use TCP/IP to get to
the Internet. These include the World Wide Web's Hypertext Transfer Protocol (HTTP), the File Transfer Protocol (FTP), Telnet, which lets you log on to remote computers, and the Simple Mail Transfer Protocol (SMTP). These and other protocols are often packaged together with TCP/IP as a "suite."
Personal computer users with an analog phone modem connection to the Internet usually get to the Internet
through the Serial Line Internet Protocol (SLIP) or the Point-to-Point Protocol (PPP). These protocols
encapsulate the IP packets so that they can be sent over the dial-up phone connection to an access
provider's modem.
Protocols related to TCP/IP include the User Datagram Protocol (UDP), which is used instead of TCP for
special purposes. Other protocols are used by network host computers for exchanging router information.
These include the Internet Control Message Protocol (ICMP), the Interior Gateway Protocol (IGP), the
Exterior Gateway Protocol (EGP), and the Border Gateway Protocol (BGP).
Introduction to TCP/IP
TCP and IP were developed by a Department of Defense (DOD) research project to connect a number of different networks designed by different vendors into a network of networks (the "Internet"). It was initially
successful because it delivered a few basic services that everyone needs (file transfer, electronic mail,
remote logon) across a very large number of client and server systems. Several computers in a small
department can use TCP/IP (along with other protocols) on a single LAN. The IP component provides
routing from the department to the enterprise network, then to regional networks, and finally to the global
Internet. On the battlefield a communications network will sustain damage, so the DOD designed TCP/IP
to be robust and automatically recover from any node or phone line failure. This design allows the
construction of very large networks with less central management. However, because of the automatic
recovery, network problems can go undiagnosed and uncorrected for long periods of time.
As with all other communications protocol, TCP/IP is composed of layers:
IP - is responsible for moving packet of data from node to node. IP forwards each packet based on a four
byte destination address (the IP number). The Internet authorities assign ranges of numbers to different
organizations. The organizations assign groups of their numbers to departments. IP operates on gateway
machines that move data from department to organization to region and then around the world.
TCP - is responsible for verifying the correct delivery of data from client to server. Data can be lost in the
intermediate network. TCP adds support to detect errors or lost data and to trigger retransmission until the
data is correctly and completely received.
Sockets - is a name given to the package of subroutines that provide access to TCP/IP on most systems.
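The "Sockets" package of subroutines mentioned above is exposed in Python through the standard socket module. A minimal UDP exchange on loopback shows the interface: one sendto(), one recvfrom(), no connection setup. The message text is an arbitrary example.

```python
# The Sockets API, sketched with a single UDP datagram on loopback.
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))            # OS assigns a free UDP port
addr = receiver.getsockname()              # (IP address, port) of the receiver

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"one datagram", addr)       # no connect(): UDP is connectionless

data, source = receiver.recvfrom(1024)     # payload plus the sender's address
sender.close()
receiver.close()
```

The same socket calls (with SOCK_STREAM and a connect/accept handshake) give access to TCP, which is why "sockets" is commonly described as the access layer to all of TCP/IP.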
History of TCP/IP
In the late 1960's, most computer users bought a single large system for all of their data processing needs.
As their needs expanded, they rarely bought a different system from a different vendor. Instead, they added
on to their existing platforms, or they replaced it with a newer, larger model. Cross-platform connectivity
was essentially unheard of, nor was it expected by customers.
These systems used proprietary networking architectures and protocols. For the most part, networking
consisted of plugging dumb terminals or line printers into an intelligent communications controller. And
just as the networking protocols were proprietary, the network nodes were proprietary as well. To this day
you still can't plug an IBM terminal into a DEC midrange system and expect it to work. The protocols in
use are completely incompatible with each other.
In an effort to cut the costs of development, the Advanced Research Projects Agency (ARPA) of the
Department of Defense (DOD) began coordinating the development of a vendor-independent network to
tie major research sites together. The logic behind this is clear: the cost and time to develop an application
on one system was too much for each site to re-write the application on different systems. Since each
facility used different computers with proprietary networking technology, the need for a vendor-
independent network was the first priority. In 1968, work began on a private packet-switched network.
In the early 1970's, authority of the project was transferred to the Defense Advanced Research Projects
Agency (DARPA). Although the original ARPAnet protocols were written for use with the ARPA packet-
switched network, they were also designed to be usable on other networks as well, and in 1981, DARPA
switched their focus to the TCP/IP protocol suite, placing it into the public domain. Shortly thereafter,
TCP/IP was adopted by the University of California at Berkeley, who began bundling it with their freely
distributed version of UNIX. In 1983, DARPA mandated that all new systems connecting to the ARPA
network had to use TCP/IP, thus guaranteeing its long-term success.
During the same time period, other government agencies like the National Science Foundation (NSF) were
building their own networks, as were private regional network service providers. These other networks also
used TCP/IP as the native protocols, since they were completely "open" as well as readily available on a
number of different platforms.
When these various regional and government networks began connecting to each other, the term "Internet"
came into use. To "internet" (with a lowercase "i") means to interconnect networks. You can create an
internet of Macintosh networks using AppleTalk and some routers, for example. The term "Internet" (with
a capital "I") refers to the global network of TCP/IP-based systems, originally consisting of ARPA and
some regional networks.
The TCP/IP Reference Model
TCP/IP originated out of the investigative research into networking protocols that the US Department of
Defense (DoD) initiated in 1969. In 1968, the DoD Advanced Research Projects Agency (ARPA) began
researching the network technology that is called packet switching.
The original focus of this research was that the network be able to survive loss of subnet hardware, with
existing conversations not being broken off. In other words, DoD wanted connections to remain intact as
long as the source and destination nodes were functioning, even if some of the machines or transmission
lines in between were suddenly put out of operation. The network initially constructed as a result of this research, to provide communication that could function in wartime, was called ARPANET and gradually became known as the Internet. The TCP/IP protocols played an important role in the
development of the Internet. In the early 1980s, the TCP/IP protocols were developed. In 1983, they
became standard protocols for ARPANET.
Because of the history of the TCP/IP protocol suite, it's often referred to as the DoD protocol suite or the
Internet protocol suite.
Figure 3: TCP/IP model layers
1.Network Access Layer – The lowest layer of the TCP/IP protocol hierarchy. It defines how to use the
network to transmit an IP datagram. Unlike higher-level protocols, Network Access Layer protocols must
know the details of the underlying network (its packet structure, addressing, etc.) to correctly format the
data being transmitted to comply with the network constraints. The TCP/IP Network Access Layer can
encompass the functions of all three lower layers of the OSI reference Model (Physical, Data Link and
Network layers).
As new hardware technologies appear, new Network Access protocols must be developed so that TCP/IP
networks can use the new hardware. Consequently, there are many access protocols - one for each physical
network standard.
An access protocol is a set of rules that defines how hosts access the shared medium. Access protocols have to be simple, rational and fair for all hosts.
Functions performed at this level include encapsulation of IP datagrams into the frames transmitted by the
network, and mapping of IP addresses to the physical addresses used by the network. One of TCP/IP's
strengths is its universal addressing scheme. The IP address must be converted into an address that is
appropriate for the physical network over which the datagram is transmitted.
2.Internet layer – Provides services that are roughly equivalent to the OSI Network layer. The primary
concern of the protocol at this layer is to manage the connections across networks as information is passed
from source to destination. The Internet Protocol (IP) is the primary protocol at this layer of the TCP/IP
model.
3.Transport layer – It is designed to allow peer entities on the source and destination hosts to carry on a
conversation, just as in the OSI transport layer. Two end-to-end transport protocols have been defined here
TCP and UDP. Both protocols will be discussed later.
4.Application Layer – includes the OSI Session, Presentation and Application layers as shown in the
Figure 4. An application is any process that occurs above the Transport Layer. This includes all of the
processes that involve user interaction. The application determines the presentation of the data and controls
the session. There are numerous application layer protocols in TCP/IP, including Simple Mail Transfer
Protocol (SMTP) and Post Office Protocol (POP) used for e-mail, Hyper Text Transfer Protocol (HTTP)
used for the World-Wide-Web, and File Transfer Protocol (FTP). Most application layer protocols are associated with one or more port numbers. Port numbers will be discussed later.
LECTURE NO. 8 READINGS: B-PAGE 433,535-538 A-601
INTERNET PROTOCOL: The Internet Protocol Suite (commonly TCP/IP) is the set of
communications protocols used for the Internet and other similar networks. It is named from two of the
most important protocols in it: the Transmission Control Protocol (TCP) and the Internet Protocol (IP),
which were the first two networking protocols defined in this standard. IP is the primary network protocol used on the Internet, developed in the 1970s. On the Internet and many other networks, IP is often used together with the Transmission Control Protocol (TCP) and referred to interchangeably as TCP/IP.
IP supports unique addressing for computers on a network. Most networks use the IP version 4 (IPv4)
standard that features IP addresses four bytes (32 bits) in length. The newer IP version 6 (IPv6) standard
features addresses 16 bytes (128 bits) in length.
Data on an IP network is organized into IP packets. Each IP packet includes both a header (that specifies
source, destination, and other information about the data) and the message data itself.
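The address sizes stated above (4 bytes for IPv4, 16 bytes for IPv6) can be checked directly with the standard library: inet_pton converts a textual address into its binary, network-order form. The addresses used are the documentation-reserved examples.

```python
# Converting textual IP addresses to their binary form shows the
# 32-bit (IPv4) vs 128-bit (IPv6) difference described in the text.
import socket

v4 = socket.inet_pton(socket.AF_INET, "192.0.2.1")       # IPv4 example address
v6 = socket.inet_pton(socket.AF_INET6, "2001:db8::1")    # IPv6 example address

len(v4)   # 4 bytes  = 32 bits
len(v6)   # 16 bytes = 128 bits
```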
IP corresponds to the Network layer (Layer 3) in the OSI model, whereas TCP corresponds to the Transport
layer (Layer 4) in OSI. In other words, the term TCP/IP refers to network communications where the TCP
transport is used to deliver data across IP networks.
The average person on the Internet works in a predominately TCP/IP environment. Web browsers, for
example, use TCP/IP to communicate with Web servers. Today's IP networking represents a synthesis of
several developments that began to evolve in the 1960s and 1970s, namely the Internet and LANs (Local
Area Networks), which emerged in the mid- to late-1980s, together with the invention of the World Wide
Web by Tim Berners-Lee in 1989 (and which exploded with the availability of the first popular web
browser: Mosaic).
The Internet Protocol Suite, like many protocol suites, may be viewed as a set of layers. Each layer solves a
set of problems involving the transmission of data, and provides a well-defined service to the upper layer
protocols based on using services from some lower layers. Upper layers are logically closer to the user and
deal with more abstract data, relying on lower layer protocols to translate data into forms that can
eventually be physically transmitted.
The User Datagram Protocol (UDP)
The User Datagram Protocol (UDP) is a transport layer protocol defined for use with the IP network layer
protocol. It is defined by RFC 768 written by John Postel. It provides a best-effort datagram service to an
End System (IP host).
The service provided by UDP is an unreliable service that provides no guarantees for delivery and no
protection from duplication (e.g. if this arises due to software errors within an Intermediate System (IS)).
The simplicity of UDP reduces the overhead from using the protocol and the services may be adequate in
many cases.
UDP provides a minimal, unreliable, best-effort, message-passing transport to applications and upper-layer
protocols. Compared to other transport protocols, UDP and its UDP-Lite variant are unique in that they do
not establish end-to-end connections between communicating end systems. UDP communication
consequently does not incur connection establishment and teardown overheads, and there is minimal associated end system state. Because of these characteristics, UDP can offer a very efficient communication transport to some applications, but it has no inherent congestion control or reliability: on many platforms, applications can send UDP datagrams at the line rate of the link interface, which is often much greater than the available path capacity, and doing so would contribute to congestion along the path. Applications therefore need to be designed responsibly (RFC 4505).
One increasingly popular use of UDP is as a tunneling protocol, where a tunnel endpoint encapsulates the
packets of another protocol inside UDP datagrams and transmits them to another tunnel endpoint, which
decapsulates the UDP datagrams and forwards the original packets contained in the payload. Tunnels
establish virtual links that appear to directly connect locations that are distant in the physical Internet
topology, and can be used to create virtual (private) networks. Using UDP as a tunneling protocol is
attractive when the payload protocol is not supported by middleboxes that may exist along the path,
because many middleboxes support UDP transmissions.
UDP does not provide any communications security. Applications that need to protect their
communications against eavesdropping, tampering, or message forgery therefore need to separately provide
security services using additional protocol mechanisms.
Protocol Header
A computer may send UDP packets without first establishing a connection to the recipient. A UDP
datagram is carried in a single IP packet and is hence limited to a maximum payload of 65,507 bytes for
IPv4 and 65,527 bytes for IPv6. The transmission of large IP packets usually requires IP fragmentation. Fragmentation decreases communication reliability and efficiency and should therefore be avoided.
To transmit a UDP datagram, a computer completes the appropriate fields in the UDP header (PCI) and
forwards the data together with the header for transmission by the IP network layer.
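The payload limits quoted above (65,507 bytes for IPv4 and 65,527 bytes for IPv6) follow from the 16-bit length fields involved, as this small arithmetic check shows:

```python
# The UDP length field is 16 bits, so header + payload <= 65,535 bytes.
# For IPv4, the whole IP packet is also capped at 65,535 bytes, so the
# minimum 20-byte IP header must be subtracted as well as the 8-byte
# UDP header.
MAX_16BIT = 2**16 - 1               # 65,535

ipv4_payload = MAX_16BIT - 20 - 8   # IP total-length limit minus both headers
ipv6_payload = MAX_16BIT - 8        # UDP length-field limit minus UDP header
```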
The UDP protocol header consists of 8 bytes of Protocol Control Information (PCI)
The UDP header consists of four fields each of 2 bytes in length:
• Source Port (UDP packets from a client use this as a service access point (SAP) to indicate the
session on the local client that originated the packet. UDP packets from a server carry the server
SAP in this field)
• Destination Port (UDP packets from a client use this as a service access point (SAP) to indicate
the service required from the remote server. UDP packets from a server carry the client SAP in
this field)
• UDP length (The number of bytes comprising the combined UDP header information and payload
data)
• UDP Checksum (A checksum to verify that the end to end data has not been corrupted by routers
or bridges in the network or by the processing in an end system. The algorithm to compute the
checksum is the Standard Internet Checksum algorithm. This allows the receiver to verify that it
was the intended destination of the packet, because it covers the IP addresses, port numbers and
protocol number, and it verifies that the packet is not truncated or padded, because it covers the
size field. Therefore, this protects an application against receiving corrupted payload data in place
of, or in addition to, the data that was sent. In the cases where this check is not required, the value
of 0x0000 is placed in this field, in which case the data is not checked by the receiver. )
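The four 2-byte fields just listed can be packed into the 8-byte UDP header with the struct module, using network byte order ("!"). The port numbers and payload are arbitrary examples, and the checksum is left at 0x0000, which per the text means "not computed".

```python
# Build the 8-byte UDP header: source port, destination port, length
# (header + payload), checksum.
import struct

def udp_header(src_port, dst_port, payload, checksum=0x0000):
    length = 8 + len(payload)        # UDP length covers header and data
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

hdr = udp_header(12345, 53, b"example query")      # 13-byte example payload

# A receiver would unpack the same four fields to find the session SAPs:
src, dst, length, csum = struct.unpack("!HHHH", hdr)
```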
As with other transport protocols, the UDP header and data are not processed by Intermediate Systems (IS)
in the network, and are delivered to the final destination in the same form as originally transmitted.
At the final destination, the UDP protocol layer receives packets from the IP network layer. These are
checked using the checksum (when >0, this checks correct end-to-end operation of the network service)
and all invalid PDUs are discarded. UDP does not make any provision for error reporting if the packets are
not delivered. Valid data are passed to the appropriate session layer protocol identified by the source and
destination port numbers (i.e. the session service access points).
UDP and UDP-Lite also may be used for multicast and broadcast, allowing senders to transmit to multiple
receivers.
Using UDP
Application designers are generally aware that UDP does not provide any reliability, e.g., it does not
retransmit any lost packets. Often, this is a main reason to consider UDP as a transport. Applications that
do require reliable message delivery therefore need to implement appropriate protocol mechanisms in their
applications (e.g. tftp).
UDP's best effort service does not protect against datagram duplication, i.e., an application may receive
multiple copies of the same UDP datagram. Application designers therefore need to verify that their
application gracefully handles datagram duplication and may need to implement mechanisms to detect
duplicates.
The Internet may also significantly delay some packets with respect to others, e.g., due to routing
transients, intermittent connectivity, or mobility. This can cause reordering, where UDP datagrams arrive at
the receiver in an order different from the transmission order. Applications that require ordered delivery
must restore datagram ordering themselves.
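A common way for applications to handle the duplication and reordering described above is to tag each UDP message with a sequence number. The sketch below is an illustrative application-level mechanism, not part of UDP itself; the framing and names are assumptions.

```python
# Receiver-side sketch: drop duplicate datagrams and restore the
# transmission order using per-message sequence numbers.
def receive_in_order(datagrams):
    """datagrams: iterable of (seq, payload), possibly duplicated/reordered."""
    seen = {}
    for seq, payload in datagrams:
        if seq not in seen:                      # ignore duplicate copies
            seen[seq] = payload
    return [seen[s] for s in sorted(seen)]       # restore original order

arrived = [(2, b"c"), (0, b"a"), (1, b"b"), (2, b"c")]   # reordered + duplicate
messages = receive_in_order(arrived)
```

A real application would also bound the buffer and time out on gaps (a lost datagram never arrives), which is exactly the kind of extra machinery the next sentence says TCP would provide for free.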
The burden of needing to code all these protocol mechanisms can be avoided by using TCP!
LECTURE NO. 9 READINGS: -DO-
Ports
Generally, clients set the source port number to a unique number that they choose themselves - usually
based on the program that started the connection. Since this number is returned by the server in responses,
this lets the sender know which "conversation" incoming packets are to be sent to. The destination port of
packets sent by the client is usually set to one of a number of well-known ports. These usually correspond
to one of a number of different applications, e.g. port 23 is used for telnet, and port 80 is used for web
servers.
A server process (program), listens for UDP packets received with a particular well-known port number
and tells its local UDP layer to send packets matching this destination port number to the server program. It
determines which client these packets come from by examining the received IP source address and the
received unique UDP source port number. Any responses which the server needs to send to back to a client
are sent with the source port number of the server (the well-known port number) and the destination port
selected by the client. Most people do not memorise the well known ports, instead they look them up in
table (e.g. see below).
Port  Keyword   Description
20    FTP-DATA  File Transfer [Default Data]
21    FTP       File Transfer [Control]
23    TELNET    Telnet
25    SMTP      Simple Mail Transfer
37    TIME      Time
69    TFTP      Trivial File Transfer
79    FINGER    Finger
110   POP3      Post Office Protocol v3
123   NTP       Network Time Protocol
143   IMAP2     Interim Mail Access Protocol v2
161   SNMP      Simple Network Management Protocol
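The table above can be expressed as a small lookup. This dictionary is a hand-built illustration of the subset shown here; real systems consult a full services database (such as /etc/services on Unix):

```python
# Well-known port numbers from the table above (illustrative subset only).
WELL_KNOWN_PORTS = {
    20: "FTP-DATA", 21: "FTP", 23: "TELNET", 25: "SMTP",
    37: "TIME", 69: "TFTP", 79: "FINGER", 110: "POP3",
    123: "NTP", 143: "IMAP2", 161: "SNMP",
}

def service_name(port):
    """Return the well-known service for a port, or None if not in this table."""
    return WELL_KNOWN_PORTS.get(port)

print(service_name(23))    # TELNET
print(service_name(80))    # None (HTTP is not in this subset)
```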
If a client/server application executes on a host with more than one IP
interface, the application needs to ensure that it sends any UDP
responses with an IP source address that matches the IP destination
address of the UDP datagram that carried the request.
TCP:
The Transmission Control Protocol (TCP) is one of the core protocols of the Internet Protocol Suite.
TCP is so central that the entire suite is often referred to as "TCP/IP." Whereas IP handles lower-level
transmissions from computer to computer as a message makes its way across the Internet, TCP operates at
a higher level, concerned only with the two end systems, for example a Web browser and a Web server. In
particular, TCP provides reliable, ordered delivery of a stream of bytes from one program on one computer
to another program on another computer. Besides the Web, other common applications of TCP include e-
mail and file transfer. Among its management tasks, TCP controls message size, the rate at which messages
are exchanged, and network traffic congestion.
TCP segment structure
A TCP segment consists of two sections:
• header
• data
The TCP header consists of 11 fields, of which only 10 are required. The eleventh field is optional and
aptly named "options".
TCP Header (each row is 32 bits wide)

Bit offset  0–3          4–7       8–15                             16–31
0           Source port                                             Destination port
32          Sequence number
64          Acknowledgment number
96          Data offset  Reserved  CWR ECE URG ACK PSH RST SYN FIN  Window Size
128         Checksum                                                Urgent pointer
160         Options (optional)
160/192+    Data
• Source port (16 bits) – identifies the sending port
• Destination port (16 bits) – identifies the receiving port
• Sequence number (32 bits) – has a dual role
• If the SYN flag is set, then this is the initial sequence number and the sequence number of the first
data byte is this sequence number plus 1
• If the SYN flag is not set, then this is the sequence number of the first data byte
• Acknowledgement number (32 bits) – if the ACK flag is set, then the value of this field is the
sequence number of the next byte that the receiver expects to receive.
• Data offset (4 bits) – specifies the size of the TCP header in 32-bit words. The minimum size
header is 5 words and the maximum is 15 words thus giving the minimum size of 20 bytes and
maximum of 60 bytes. This field gets its name from the fact that it is also the offset from the start
of the TCP packet to the data.
• Reserved (4 bits) – for future use and should be set to zero
• Flags (8 bits) (aka Control bits) – contains 8 1-bit flags
• CWR (1 bit) – Congestion Window Reduced (CWR) flag is set by the sending host to indicate that
it received a TCP segment with the ECE flag set (added to header by RFC 3168).
• ECE (ECN-Echo) (1 bit) – indicate that the TCP peer is ECN capable during 3-way handshake
(added to header by RFC 3168).
• URG (1 bit) – indicates that the URGent pointer field is significant
• ACK (1 bit) – indicates that the ACKnowledgment field is significant
• PSH (1 bit) – Push function
• RST (1 bit) – Reset the connection
• SYN (1 bit) – Synchronize sequence numbers
• FIN (1 bit) – No more data from sender
• Window (16 bits) – the size of the receive window, which specifies the number of bytes (beyond
the sequence number in the acknowledgment field) that the receiver is currently willing to receive
(see Flow control)
• Checksum (16 bits) – The 16-bit checksum field is used for error-checking of the header and data
• Urgent pointer (16 bits) – if the URG flag is set, then this 16-bit field is an offset from the
sequence number indicating the last urgent data byte
• Options (Variable bits) – the total length of the option field must be a multiple of a 32-bit word
and the data offset field adjusted appropriately
• 0 - End of options list
• 1 - No operation (NOP, Padding)
• 2 - Maximum segment size (see maximum segment size)
• 3 - Window scale (see window scaling for details)
• 4 - Selective Acknowledgement permitted (see selective acknowledgments for details)
• 5 - Selective Acknowledgement (SACK) blocks
• 6 - Echo (obsoleted by option 8)
• 7 - Echo reply (obsoleted by option 8)
• 8 - Timestamp (see TCP Timestamps for details)
The data field is not part of the header. Its contents are whatever the upper-layer protocol supplies;
that protocol is not recorded in the header and is presumed based on the port selection.
• Data (Variable bits): As you might expect, this is the payload, or data portion of a TCP packet.
The payload may be any number of application layer protocols. The most common are HTTP,
Telnet, SSH, FTP, but other popular protocols also use TCP.
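The fixed 20-byte portion of the header described above can be unpacked directly from its wire representation. The following is a minimal sketch (the function name and the hand-crafted segment are our own, and only a few of the flags are extracted):

```python
import struct

def parse_tcp_header(raw):
    """Parse the fixed 20-byte TCP header into the fields listed above."""
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH", raw[:20])
    data_offset = (offset_flags >> 12) & 0xF    # header length in 32-bit words (5..15)
    flags = offset_flags & 0xFF                 # the 8 control bits, CWR..FIN
    return {
        "src_port": src_port, "dst_port": dst_port,
        "seq": seq, "ack": ack,
        "data_offset": data_offset,             # 5 words = minimum 20-byte header
        "syn": bool(flags & 0x02), "ack_flag": bool(flags & 0x10),
        "fin": bool(flags & 0x01),
        "window": window, "checksum": checksum, "urgent": urgent,
    }

# A hand-crafted SYN segment: port 12345 -> 80, seq 1000, data offset 5, SYN set.
hdr = struct.pack("!HHIIHHHH", 12345, 80, 1000, 0, (5 << 12) | 0x02, 65535, 0, 0)
fields = parse_tcp_header(hdr)
print(fields["dst_port"], fields["syn"], fields["data_offset"])   # 80 True 5
```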
LECTURE NO.10 READINGS: A-PAGE 477
IP addressing
Every machine on the Internet has a unique identifying number, called an IP Address. A typical IP address
looks like this:
• 216.27.61.137
To make it easier for us humans to remember, IP addresses are normally expressed in decimal format as a
"dotted decimal number" like the one above. But computers communicate in binary form. Look at the same
IP address in binary:
• 11011000.00011011.00111101.10001001
The four numbers in an IP address are called octets, because they each have eight positions when viewed in
binary form. If you add all the positions together, you get 32, which is why IP addresses are considered 32-
bit numbers. Since each of the eight positions can have two different states (1 or 0), the total number of
possible combinations per octet is 2^8 or 256. So each octet can contain any value between 0 and 255.
Combine the four octets and you get 2^32 or a possible 4,294,967,296 unique values!
Out of the almost 4.3 billion possible combinations, certain values are restricted from use as typical IP
addresses. For example, the IP address 0.0.0.0 is reserved for the default network and the address
255.255.255.255 is used for broadcasts.
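The dotted-decimal/binary relationship described above is plain base conversion, and can be sketched in a few lines (the helper names are our own):

```python
def ip_to_int(dotted):
    """Convert dotted-decimal (e.g. '216.27.61.137') to its 32-bit integer value."""
    a, b, c, d = (int(o) for o in dotted.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def ip_to_binary(dotted):
    """Render each octet as 8 bits, matching the binary form shown above."""
    return ".".join(format(int(o), "08b") for o in dotted.split("."))

print(ip_to_binary("216.27.61.137"))  # 11011000.00011011.00111101.10001001
print(ip_to_int("255.255.255.255"))   # 4294967295
```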
An IP multicast address is in the range 224.0.0.0 through 239.255.255.255. In hexadecimal that is
E0.00.00.00 to EF.FF.FF.FF. To be a multicast address, the first three bits of the most significant byte must
be set and the fourth bit must be clear. That leaves 28 bits of the IP address for the multicast group ID,
but an Ethernet multicast address has room for only 23 of them. Therefore 5 of the multicasting bits cannot
be mapped into an Ethernet frame; the 5 bits that are not mapped are the 5 most significant bits of the
group ID.
The 28 IP multicast bits are called the multicast group ID. A host group listening to a multicast can span
multiple networks. There are some assigned hostgroup addresses by the internet assigned numbers
authority (IANA). Some of the assignments are listed below:
• 224.0.0.1 = All systems on the subnet
• 224.0.0.2 = All routers on the subnet
• 224.0.1.1 = Network time protocol (NTP)
• 224.0.0.9 = For RIPv2
• 224.0.1.2 = Silicon Graphics' dogfight application
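The IP-to-Ethernet multicast mapping discussed above (23 of the 28 group-ID bits carried, the top 5 lost) can be sketched as follows; the function name is our own:

```python
def multicast_mac(group_ip):
    """Map an IP multicast group address to its Ethernet multicast MAC address.

    Only the low-order 23 bits of the 28-bit group ID fit in the MAC;
    the 5 most significant group-ID bits are not mapped.
    """
    a, b, c, d = (int(o) for o in group_ip.split("."))
    if not 224 <= a <= 239:
        raise ValueError("not a class D multicast address")
    # Ethernet multicast prefix 01:00:5e, then the 23 low-order group bits.
    return "01:00:5e:%02x:%02x:%02x" % (b & 0x7F, c, d)

print(multicast_mac("224.0.0.1"))    # 01:00:5e:00:00:01
print(multicast_mac("224.128.0.1"))  # 01:00:5e:00:00:01 (same MAC: top 5 bits lost)
```

Note how two different group addresses can map to the same MAC, which is the practical consequence of the 5 unmapped bits.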
Network Addressing
IP addresses are broken into 4 octets (IPv4) separated by dots called dotted decimal notation. An octet is a
byte consisting of 8 bits. The IPv4 addresses are in the following form:
192.168.10.1
There are two parts of an IP address:
• Network ID
• Host ID
The various classes of networks specify additional or fewer octets to designate the network ID versus the
host ID.
Class   1st Octet   2nd Octet   3rd Octet   4th Octet
A       Net ID      Host ID     Host ID     Host ID
B       Net ID      Net ID      Host ID     Host ID
C       Net ID      Net ID      Net ID      Host ID
When a network is set up, a netmask is also specified. The netmask determines the class of the network as
shown below, except for CIDR. When the netmask is setup, it specifies some number of most significant
bits with a 1's value and the rest have values of 0. The most significant part of the netmask with bits set to
1's specifies the network address, and the lower part of the address will specify the host address. When
setting addresses on a network, remember there can be no host address of 0 (no host address bits set), and
there can be no host address with all bits set.
Class A-E networks
The addressing scheme for class A through E networks is shown below. Note: We use the 'x' character here
to denote don't care situations which includes all possible numbers at the location. It is many times used to
denote networks.
Network Type Address Range Normal Netmask Comments
Class A 001.x.x.x to 126.x.x.x 255.0.0.0 For very large networks
Class B 128.1.x.x to 191.254.x.x 255.255.0.0 For medium size networks
Class C 192.0.1.x to 223.255.254.x 255.255.255.0 For small networks
Class D 224.x.x.x to 239.255.255.255 Used to support multicasting
Class E 240.x.x.x to 247.255.255.255 Reserved for experimental use
RFCs 1518 and 1519 define a system called Classless Inter-Domain Routing (CIDR) which is used to
allocate IP addresses more efficiently. This may be used with subnet masks to establish networks rather
than the class system shown above. A class C subnet may be 8 bits but using CIDR, it may be 12 bits.
There are some network addresses reserved for private use by the Internet Assigned Numbers Authority
(IANA) which can be hidden behind a computer which uses IP masquerading to connect the private
network to the internet. There are three sets of addresses reserved. These address are shown below:
• 10.x.x.x
• 172.16.x.x - 172.31.x.x
• 192.168.x.x
Other reserved or commonly used addresses:
• 127.0.0.1 - The loopback interface address. All 127.x.x.x addresses are used by the loopback
interface which copies data from the transmit buffer to the receive buffer of the NIC when used.
• 0.0.0.0 - This is reserved for hosts that don't know their address and use BOOTP or DHCP
protocols to determine their addresses.
• 255 - The value of 255 is never used as an address for any part of the IP address. It is reserved for
broadcast addressing. Please remember, this is exclusive of CIDR. When using CIDR, all bits of
the address can never be all ones.
To further illustrate, a few examples of valid and invalid addresses are listed below:
1. Valid addresses:
o 10.1.0.1 through 10.1.0.254
o 10.0.0.1 through 10.0.0.254
o 10.0.1.1 through 10.0.1.254
2. Invalid addresses:
o 10.1.0.0 - Host IP can't be 0.
o 10.1.0.255 - Host IP can't be 255.
o 10.123.255.4 - No network or subnet can have a value of 255.
o 0.12.16.89 - No Class A network can have an address of 0.
o 255.9.56.45 - No network address can be 255.
o 10.34.255.1 - No network address can be 255.
Network/Netmask specification
Sometimes you may see a network interface card (NIC) IP address specified in the following manner:
192.168.1.1/24
The first part indicates the IP address of the NIC which is "192.168.1.1" in this case. The second part "/24"
indicates the netmask value meaning in this case that the first 24 bits of the netmask are set. This makes the
netmask value 255.255.255.0. If the last part of the line above were "/16", the netmask would be
255.255.0.0.
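The "/24 means the first 24 netmask bits are set" rule above is easy to compute; a minimal sketch (the function name is our own):

```python
def prefix_to_netmask(prefix_len):
    """Convert a /N prefix length to a dotted-decimal netmask."""
    mask = (0xFFFFFFFF << (32 - prefix_len)) & 0xFFFFFFFF
    return ".".join(str((mask >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(prefix_to_netmask(24))   # 255.255.255.0
print(prefix_to_netmask(16))   # 255.255.0.0
```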
IP ADDRESS CLASSES
The octets serve a purpose other than simply separating the numbers. They are used to create classes of IP
addresses that can be assigned to a particular business, government or other entity based on size and need.
The octets are split into two sections: Net and Host. The Net section always contains the first octet. It is
used to identify the network that a computer belongs to. Host (sometimes referred to as Node) identifies the
actual computer on the network. The Host section always contains the last octet. There are five IP classes
plus certain special addresses:
• Default Network - The IP address of 0.0.0.0 is used for the default network.
• Class A - This class is for very large networks, such as a major international company might have.
IP addresses with a first octet from 1 to 126 are part of this class. The other three octets are used to
identify each host. This means that there are 126 Class A networks each with 16,777,214 (2^24 - 2)
possible hosts for a total of 2,147,483,648 (2^31) unique IP addresses. Class A networks account for
half of the total available IP addresses. In Class A networks, the high-order bit (the very first
binary digit) in the first octet is always 0.
Net Host or Node
115. 24.53.107
• Loopback - The IP address 127.0.0.1 is used as the loopback address. This means that it is used
by the host computer to send a message back to itself. It is commonly used for troubleshooting and
network testing.
Other IP Classes
• Class B - Class B is used for medium-sized networks. A good example is a large college campus.
IP addresses with a first octet from 128 to 191 are part of this class. Class B addresses also include
the second octet as part of the Net identifier. The other two octets are used to identify each host.
This means that there are 16,384 (2^14) Class B networks each with 65,534 (2^16 - 2) possible hosts
for a total of 1,073,741,824 (2^30) unique IP addresses. Class B networks make up a quarter of the
total available IP addresses. Class B networks have a first bit value of 1 and a second bit value of 0
in the first octet.
Net Host or Node
145.24. 53.107
• Class C - Class C addresses are commonly used for small to mid-size businesses. IP addresses
with a first octet from 192 to 223 are part of this class. Class C addresses also include the second
and third octets as part of the Net identifier. The last octet is used to identify each host. This means
that there are 2,097,152 (2^21) Class C networks each with 254 (2^8 - 2) possible hosts for a total of
536,870,912 (2^29) unique IP addresses. Class C networks make up an eighth of the total available
IP addresses. Class C networks have a first bit value of 1, second bit value of 1 and a third bit
value of 0 in the first octet.
Net Host or Node
195.24.53. 107
• Class D - Used for multicasts, Class D is slightly different from the first three classes. It has a first
bit value of 1, second bit value of 1, third bit value of 1 and fourth bit value of 0. The other 28 bits
are used to identify the group of computers the multicast message is intended for. Class D
accounts for 1/16th (268,435,456 or 2^28) of the available IP addresses.
Net Host or Node
224. 24.53.107
• Class E - Class E is used for experimental purposes only. Like Class D, it is different from the
first three classes. It has a first bit value of 1, second bit value of 1, third bit value of 1 and fourth
bit value of 1. The remaining 28 bits are reserved for experimental use rather than identifying
networks and hosts. Class E accounts for 1/16th (268,435,456 or 2^28) of the available IP addresses.
Net Host or Node
240. 24.53.107
• Broadcast - Messages that are intended for all computers on a network are sent as broadcasts.
These messages always use the IP address 255.255.255.255.
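The first-octet ranges above fully determine an address's class, so classification is a simple comparison chain (the function name is our own):

```python
def ip_class(dotted):
    """Classify an IPv4 address by its first octet, per the ranges above."""
    first = int(dotted.split(".")[0])
    if first == 0:
        return "default network"
    if first == 127:
        return "loopback"
    if 1 <= first <= 126:
        return "A"
    if 128 <= first <= 191:
        return "B"
    if 192 <= first <= 223:
        return "C"
    if 224 <= first <= 239:
        return "D"
    return "E"

print(ip_class("115.24.53.107"))   # A
print(ip_class("195.24.53.107"))   # C
print(ip_class("224.24.53.107"))   # D
```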
LECTURE NO. 11 READINGS:-DO-
LECTURE NO. 12 READINGS:- A-PAGE 486,B-PAGE 449
SUBNET ADDRESSING:
Subnetting is the process of breaking down a main class A, B, or C network into subnets for routing
purposes. A subnet mask is the same basic thing as a netmask with the only real difference being that you
are breaking a larger organizational network into smaller parts, and each smaller section will use a different
set of address numbers. This will allow network packets to be routed between subnetworks. When doing
subnetting, the number of bits in the subnet mask determine the number of available subnets. Two to the
power of the number of bits minus two is the number of available subnets. When setting up subnets the
following must be determined:
• Number of segments
• Hosts per segment
Subnetting provides the following advantages:
• Network traffic isolation - There is less network traffic on each subnet.
• Simplified Administration - Networks may be managed independently.
• Improved security - Subnets can isolate internal networks so they are not visible from external
networks.
A 14 bit subnet mask on a class B network only allows 2 node addresses for WAN links. A routing
algorithm like OSPF or EIGRP must be used for this approach. These protocols allow the variable length
subnet masks (VLSM). RIP and IGRP don't support this. Subnet mask information must be transmitted on
the update packets for dynamic routing protocols for this to work. The router subnet mask is different than
the WAN interface subnet mask.
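The "two to the power of the number of bits minus two" arithmetic above, including the 14-bit class B example, can be sketched directly (the function name is our own, and the classic minus-two convention is assumed):

```python
def subnet_capacity(host_bits_total, subnet_bits):
    """Usable subnets and hosts per subnet under the classic 2**n - 2 rule.

    host_bits_total: host bits in the unsubnetted class (16 for class B).
    """
    host_bits = host_bits_total - subnet_bits
    return (2 ** subnet_bits - 2, 2 ** host_bits - 2)

# The 14-bit class B subnet mask mentioned above leaves only 2 host bits:
subnets, hosts = subnet_capacity(16, 14)
print(subnets, hosts)   # 16382 2 -> just the 2 node addresses needed for a WAN link
```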
One network ID is required by each of:
• Subnet
• WAN connection
One host ID is required by each of:
• Each NIC on each host.
• Each router interface.
Types of subnet masks:
• Default - Fits into a Class A, B, or C network category
• Custom - Used to break a default network such as a Class A, B, or C network into subnets.
Although the individual subscribers do not need to tabulate network numbers or provide explicit routing, it
is convenient for most Class B networks to be internally managed as a much smaller and simpler version of
the larger network organizations. It is common to subdivide the two bytes available for internal assignment
into a one byte department number and a one byte workstation ID.
The enterprise network is built using commercially available TCP/IP router boxes. Each router has small
tables with 255 entries to translate the one byte department number into selection of a destination Ethernet
connected to one of the routers. Messages to the PC Lube and Tune server (130.132.59.234) are sent
through the national and New England regional networks based on the 130.132 part of the number.
Arriving at Yale, the 59 department ID selects an Ethernet connector in the C& IS building. The 234 selects
a particular workstation on that LAN. The Yale network must be updated as new Ethernets and departments
are added, but it is not affected by changes outside the university or by the movement of machines within a
department.
INTERNET CONTROL PROTOCOL:
In computer networking, Internet Protocol Control Protocol (IPCP) is a network control protocol for
establishing and configuring Internet Protocol over a Point-to-Point Protocol link. IPCP uses the same
packet exchange mechanism as the Link Control Protocol. IPCP packets may not be exchanged until PPP
has reached the Network-Layer Protocol phase, and any IPCP packets received before this phase is reached
should be silently discarded.
IPCP packet format:
Code      ID        Length     IP Information
1 byte    1 byte    2 bytes    variable

IPCP packet encapsulated in a PPP frame (PPP protocol field = 8021 hex):
Flag   Address   Control   8021 (hex)   Payload (and padding)   FCS   Flag
The Code field of an IPCP packet identifies one of the following packet types:
• Configure-request
• Configure-ack
• Configure-nak
• Configure-reject
• Terminate-request
• Terminate-ack
• Code-reject
After the configuration is done, the link is able to carry IP data as the payload of the PPP frame. The
protocol field value is then 0021 (hex), indicating that IP data is being carried.
LECTURE NO.13 READINGS: A-PAGE 514
ARP:
The address resolution protocol (arp) is a protocol used by the Internet Protocol (IP) [RFC826], specifically
IPv4, to map IP network addresses to the hardware addresses used by a data link protocol. The protocol
operates below the network layer as a part of the interface between the OSI network and OSI link layer. It
is used when IPv4 is used over Ethernet.
The term address resolution refers to the process of finding an address of a computer in a network. The
address is "resolved" using a protocol in which a piece of information is sent by a client process executing
on the local computer to a server process executing on a remote computer. The information received by the
server allows the server to uniquely identify the network system for which the address was required and
therefore to provide the required address. The address resolution procedure is completed when the client
receives a response from the server containing the required address.
An Ethernet network uses two hardware addresses which identify the source and destination of each frame
sent by the Ethernet. The destination address (all 1's) may also identify a broadcast packet (to be sent to all
connected computers). The hardware address is also known as the Medium Access Control (MAC) address,
in reference to the standards which define Ethernet. Each computer network interface card is allocated a
globally unique 6 byte link address when the factory manufactures the card (stored in a PROM). This is the
normal link source address used by an interface. A computer sends all packets which it creates with its own
hardware source link address, and receives all packets which match the same hardware address in the
destination field or one (or more) pre-selected broadcast/multicast addresses.
The Ethernet address is a link layer address and is dependent on the interface card which is used. IP
operates at the network layer and is not concerned with the link addresses of individual nodes which are to
be used.The address resolution protocol (arp) is therefore used to translate between the two types of
address. The arp client and server processes operate on all computers using IP over Ethernet. The processes
are normally implemented as part of the software driver that drives the network interface card.
There are four types of arp messages that may be sent by the arp protocol. These are identified by four
values in the "operation" field of an arp message. The types of message are:
1. ARP request
2. ARP reply
3. RARP request
4. RARP reply
The format of an arp message is shown below:
Format of an arp message used to resolve the remote MAC Hardware Address (HA)
To reduce the number of address resolution requests, a client normally caches resolved addresses for a
(short) period of time. The arp cache is of a finite size, and would become full of incomplete and obsolete
entries for computers that are not in use if it was allowed to grow without check. The arp cache is therefore
periodically flushed of all entries. This deletes unused entries and frees space in the cache. It also removes
any unsuccessful attempts to contact computers which are not currently running.
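The cache behaviour described above can be sketched as a dictionary with expiry. The class name, the timeout value, and the lazy (lookup-time) flushing are our own simplification of the periodic flush the text describes:

```python
class ArpCache:
    """Illustrative ARP cache: entries expire so stale mappings are removed."""

    def __init__(self, timeout=60.0):
        self.timeout = timeout
        self.entries = {}          # ip -> (mac, time_added)

    def add(self, ip, mac, now):
        """Record a resolved address with the time it was learned."""
        self.entries[ip] = (mac, now)

    def lookup(self, ip, now):
        """Return the cached MAC, or None if absent/expired (caller must then ARP)."""
        entry = self.entries.get(ip)
        if entry is None:
            return None
        mac, added = entry
        if now - added > self.timeout:
            del self.entries[ip]   # flush modelled lazily at lookup time
            return None
        return mac


cache = ArpCache(timeout=60.0)
cache.add("10.0.0.5", "00:11:22:33:44:55", now=0.0)
print(cache.lookup("10.0.0.5", now=30.0))   # 00:11:22:33:44:55
print(cache.lookup("10.0.0.5", now=120.0))  # None (entry expired and removed)
```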
Example of use of the Address Resolution Protocol (arp)
The figure below shows the use of arp when a computer tries to contact a remote computer on the same
LAN (known as "sysa") using the "ping" program. It is assumed that no previous IP datagrams have been
received from this computer, and therefore arp must first be used to identify the MAC address of the remote
computer.
(Figure: the arp request is broadcast on the LAN; the target replies with a unicast arp response.)
The arp request message ("who is X.X.X.X tell Y.Y.Y.Y", where X.X.X.X and Y.Y.Y.Y are IP addresses)
is sent using the Ethernet broadcast address, and an Ethernet protocol type value of 0x0806. Since it is
broadcast, it is received by all systems in the same collision domain (LAN). This ensures that if the target
of the query is connected to the network, it will receive a copy of the query. Only this system responds;
the other systems discard the packet silently.
The target system forms an arp response ("X.X.X.X is hh:hh:hh:hh:hh:hh", where hh:hh:hh:hh:hh:hh is the
Ethernet source address of the computer with the IP address of X.X.X.X). This packet is unicast to the
address of the computer sending the query (in this case Y.Y.Y.Y). Since the original request also included
the hardware address (Ethernet source address) of the requesting computer, this is already known, and
doesn't require another arp message to find this out.
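The request in the example can be laid out as the 28-byte ARP body used for IPv4 over Ethernet. This is a sketch; the helper name and the sample addresses are our own:

```python
import struct

ARP_REQUEST, ARP_REPLY = 1, 2   # the "operation" values for the message types above

def build_arp_request(sender_mac, sender_ip, target_ip):
    """Build the 'who is target_ip, tell sender_ip' ARP body (28 bytes)."""
    return struct.pack(
        "!HHBBH6s4s6s4s",
        1,                  # hardware type: Ethernet
        0x0800,             # protocol type: IPv4
        6, 4,               # hardware / protocol address lengths
        ARP_REQUEST,        # operation
        sender_mac, sender_ip,
        b"\x00" * 6,        # target MAC unknown - that is what we are asking
        target_ip,
    )

pkt = build_arp_request(bytes.fromhex("001122334455"),
                        bytes([192, 168, 1, 10]),    # Y.Y.Y.Y (the requester)
                        bytes([192, 168, 1, 1]))     # X.X.X.X (the target)
print(len(pkt))   # 28
```

In a real exchange this body would be carried in an Ethernet frame addressed to the broadcast address with protocol type 0x0806, as described in the example.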
RARP:
RARP (Reverse Address Resolution Protocol) is a protocol by which a physical machine in a local area
network can request to learn its IP address from a gateway server's Address Resolution Protocol (ARP)
table or cache. A network administrator creates a table in a local area network's gateway router that maps
the physical machine (or Media Access Control - MAC address) addresses to corresponding Internet
Protocol addresses. When a new machine is set up, its RARP client program requests from the RARP
server on the router to be sent its IP address. Assuming that an entry has been set up in the router table, the
RARP server will return the IP address to the machine which can store it for future use.
RARP is available for Ethernet, Fiber Distributed-Data Interface, and Token Ring LANs.
LECTURE NO.15 READINGS: A-PAGE 525
ICMP:
Internet Control Message Protocol
Internet Control Message Protocol (ICMP) defined by RFC 792 and RFC 1122 is used for network error
reporting and generating messages that require attention. The errors reported by ICMP are generally related
to datagram processing. ICMP only reports errors involving fragment 0 of any fragmented datagrams. The
IP, UDP or TCP layer will usually take action based on ICMP messages. ICMP generally belongs to the IP
layer of TCP/IP but relies on IP for support at the network layer. ICMP messages are encapsulated inside IP
datagrams.
ICMP will report the following network information:
• Timeouts
• Network congestion
• Network errors such as an unreachable host or network.
The ping command is also supported by ICMP, and this can be used to debug network problems.
ICMP Messages:
The ICMP message consists of an 8 bit type, an 8 bit code, an 8 bit checksum, and contents which vary
depending on code and type. The below table is a list of ICMP messages showing the type and code of the
messages and their meanings.
Type Codes Description Purpose
0 0 Echo reply Query
3 0 Network Unreachable Error
3 1 Host Unreachable Error
3 2 Protocol Unreachable Error
3 3 Port Unreachable Error
3 4 Fragmentation needed with don't fragment bit set Error
3 5 Source route failed Error
3 6 Destination network unknown Error
3 7 Destination host unknown Error
3 8 Source host isolated Error
3 9 Destination network administratively prohibited Error
3 10 Destination host administratively prohibited Error
3 11 Network Unreachable for TOS Error
3 12 Host Unreachable for TOS Error
3 13 Communication administratively prohibited by filtering Error
3 14 Host precedence violation Error
3 15 Precedence cutoff in effect Error
4 0 Source quench Error
5 0 Redirect for network Error
5 1 Redirect for host Error
5 2 Redirect for type of service and network Error
5 3 Redirect for type of service and host Error
8 0 Echo request Query
9 0 Normal router advertisement Query
9 16 Router does not route common traffic Query
10 0 Router Solicitation Query
11 0 Time to live is zero during transit Error
11 1 Time to live is zero during reassembly Error
12 0 IP header bad Error
12 1 Required option missing Error
12 2 Bad length Error
13 0 Timestamp request Query
14 0 Timestamp reply Query
15 0 Information request Query
16 0 Information reply Query
17 0 Address mask request Query
18 0 Address mask reply Query
ICMP is used for many different functions, the most important of which is error reporting. Some of these
are "port unreachable", "host unreachable", "network unreachable", "destination network unknown", and
"destination host unknown". Some not related to errors are:
• Timestamp request and reply allows one system to ask another one for the current time.
• Address mask and reply is used by a diskless workstation to get its subnet mask at boot time.
• Echo request and echo reply is used by the ping program to test to see if another unit will respond
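The echo request/reply mechanism above can be sketched in code. The following Python snippet (an illustrative sketch, not part of the course text) builds an ICMP echo request header and computes the standard 16-bit Internet checksum over it:

```python
import struct

def internet_checksum(data: bytes) -> int:
    """Compute the 16-bit one's-complement Internet checksum (RFC 1071)."""
    if len(data) % 2:
        data += b"\x00"                           # pad to an even length
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold carries back in
    return ~total & 0xFFFF

def build_echo_request(identifier: int, sequence: int,
                       payload: bytes = b"ping") -> bytes:
    """Build an ICMP Echo Request (type 8, code 0) with a valid checksum."""
    # Header fields: type, code, checksum (zero while computing), id, sequence.
    header = struct.pack("!BBHHH", 8, 0, 0, identifier, sequence)
    checksum = internet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, checksum, identifier, sequence) + payload

packet = build_echo_request(identifier=1, sequence=1)
# Summing the whole packet, checksum field included, must yield zero.
print(internet_checksum(packet))  # 0
```

Sending such a packet would require a raw socket (and usually root privileges), which is why the ping program is normally installed setuid or with raw-socket capability.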
LECTURE NO.16 READINGS: B-PAGE 579
DOMAIN NAME SYSTEM:
DNS (Domain Name System) is used on the Internet as well on many private networks. Networks using
Microsoft Active Directory directory service use DNS to resolve computer names and to locate computers
within their local networks and the Internet. Networks based on Windows 2000 Server and Windows
Server 2003 use DNS as a primary means of locating resources in Active Directory.
The domain namespace is the naming scheme that provides the hierarchical structure for the DNS database.
Each node, referred to as a domain, represents a partition of the DNS database. The DNS database is
indexed by name, so each domain must have a name. As you add domains to the hierarchy, the name of the
parent domain is added to its child domain (subdomain). A domain’s name identifies its position in the
hierarchy.
At the top of the DNS hierarchy, there is a single domain called the root domain, which is represented by a
single period (.).
Top level domains are grouped by organization type or geographic location. Top level domains are
controlled by the Internet Architecture Board (IAB), an Internet authority controlling the assignment of
domain names, among other things. Examples are .com, .gov and .net
Anyone can register a second level domain name. Second level domain names are registered to individuals
and organizations by a number of different domain registry companies. A second level name has two name
parts: a top level name and a unique second level name such as microsoft.com.
A DNS name server stores the zone database file. Name servers can store data for one zone or multiple
zones. A name server is said to have authority for the domain name space that the zone encompasses. One
name server contains the master zone database file, referred to as the primary zone database file, for the
specified zone. As a result, there must be at least one name server for a zone. Changes to a zone, such as
adding domains or hosts, are performed on the server that contains the primary zone database file.
Name resolution is the process of resolving names to IP addresses. It is similar to looking up a name in a
telephone book, in which the name is associated with a telephone number. For example, when you connect
to the Microsoft Web site, you use the name www.microsoft.com. DNS resolves www.microsoft.com to its
associated IP address. The mapping of names to IP addresses is stored in the DNS distributed database.
DNS name servers resolve forward and reverse lookup queries. A forward lookup query resolves a name to
an IP address, and a reverse lookup query resolves an IP address to a name. A name server can resolve a
query only for a zone for which it has authority. If a name server cannot resolve the query, it passes the
query to other name servers that can resolve it. The name server caches the query results to reduce the DNS
traffic on the network.
1. The client passes a forward lookup query for www.microsoft.com to its local name server.
2. The local name server checks its zone database file to determine whether it contains the name-to-IP
address mapping for the client query. The local name server does not have authority for the microsoft.com
domain, so it passes the query to one of the DNS root servers, requesting resolution of the host name. The
root name server sends back a referral to the com name server.
3. The local name server sends a request to a com name server, which responds with a referral to the
Microsoft name server.
4. The local name server sends a request to the Microsoft name server. Because the
Microsoft name server has authority for that portion of the domain namespace, when it receives the request,
it returns the IP address for www.microsoft.com to the local name server.
5. The local name server sends the IP address for www.microsoft.com to the client.
6. The name resolution is complete, and the client can access www.microsoft.com.
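The six-step lookup above can be simulated with a toy referral chain. The zone contents, server names, and the IP address below are invented for illustration; real resolvers also cache referrals and answers:

```python
# Each simulated name server maps a suffix it has delegated (or a full name
# it is authoritative for) to the next server's key or to the final address.
root_server = {"com.": "com_server"}
servers = {
    "com_server": {"microsoft.com.": "ms_server"},
    "ms_server": {"www.microsoft.com.": "207.46.19.254"},  # illustrative address
}

def resolve(name, server=root_server):
    """Return the IP for `name`, following one referral per matching suffix,
    just as the local name server walks root -> com -> microsoft."""
    for suffix, target in server.items():
        if name.endswith(suffix):
            if target in servers:                  # a referral: ask that server
                return resolve(name, servers[target])
            return target                          # an authoritative answer
    raise KeyError("cannot resolve " + name)

print(resolve("www.microsoft.com."))  # 207.46.19.254
```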
Structure of DNS

Fig. Structure of Domain Name System (DNS)


The structure of DNS is similar to that of the Unix file system. It is a tree-like structure in which the
root is known as the root DNS server. Each node in the tree is associated with a resource record, which
holds the information associated with it, and can have any number of branches. There can be a maximum of
127 levels in a tree; however, you will never find any domain name that long. Each node in the tree
represents one label of a domain name, and a label can contain a maximum of 63 characters.
The full domain name of any node in the tree is the sequence of labels on the path from that node to the
root, read from the node to the root with a dot separating the names in the path. No two nodes with the
same parent can have the same name. This guarantees that each domain name in the DNS tree is unique in
the entire DNS structure. E.g. you cannot have two directories named “Program Files” in your C drive root
directory, but you can have a directory named “Program Files” in your C drive root directory and another
directory of that name in your “Windows” directory.
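A minimal sketch of the structural limits just described (at most 63 characters per label, at most 127 levels, no empty labels):

```python
def valid_domain_name(name: str) -> bool:
    """Check the structural DNS limits: each label at most 63 characters,
    at most 127 levels, and no empty labels within the name."""
    labels = name.rstrip(".").split(".")   # drop the trailing root dot, if any
    if len(labels) > 127:
        return False
    return all(0 < len(label) <= 63 for label in labels)

print(valid_domain_name("www.microsoft.com"))   # True
print(valid_domain_name("a" * 64 + ".com"))     # False: label too long
```

(A full validator would also restrict the permitted characters and the total name length of 255 octets; this sketch checks only the tree limits mentioned above.)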

E-MAIL:
The birth of electronic mail (email) occurred in the early 1960s. The mailbox was a file in a user's home
directory that was readable only by that user. Primitive mail applications appended new text messages to
the bottom of the file, forcing the user to wade through the constantly growing file to find any
particular message. This system was only capable of sending messages to users on the same system.
The first network transfer of an electronic mail message file took place in 1971 when a computer engineer
named Ray Tomlinson sent a test message between two machines via ARPANET — the precursor to the
Internet. Communication via email soon became very popular, comprising 75 percent of ARPANET's
traffic in less than two years.
Today, email systems based on standardized network protocols have evolved into some of the most widely
used services on the Internet. Red Hat Enterprise Linux offers many advanced applications to serve and
access email.
This chapter reviews modern email protocols in use today and some of the programs designed to send and
receive email.
Email Protocols
Today, email is delivered using a client/server architecture. An email message is created using a mail client
program. This program then sends the message to a server. The server then forwards the message to the
recipient's email server, where the message is then supplied to the recipient's email client.
To enable this process, a variety of standard network protocols allow different machines, often running
different operating systems and using different email programs, to send and receive email.
The following protocols discussed are the most commonly used in the transfer of email.
Mail Transport Protocols
Mail delivery from a client application to the server, and from an originating server to the destination
server, is handled by the Simple Mail Transfer Protocol (SMTP) .
Mail Access Protocols
There are two primary protocols used by email client applications to retrieve email from mail servers: the
Post Office Protocol (POP) and the Internet Message Access Protocol (IMAP).
Unlike SMTP, both of these protocols require connecting clients to authenticate using a username and
password. By default, passwords for both protocols are passed over the network unencrypted.
POP
The default POP server under Red Hat Enterprise Linux is /usr/sbin/ipop3d and is provided by the imap
package. When using a POP server, email messages are downloaded by email client applications. By
default, most POP email clients are automatically configured to delete the message on the email server after
it has been successfully transferred, however this setting usually can be changed.
POP is fully compatible with important Internet messaging standards, such as Multipurpose Internet Mail
Extensions (MIME), which allow for email attachments.
POP works best for users who have one system on which to read email. It also works well for users who do
not have a persistent connection to the Internet or the network containing the mail server. Unfortunately for
those with slow network connections, POP requires client programs, upon authentication, to download the
entire content of each message. This can take a long time if any messages have large attachments.
The most current version of the standard POP protocol is POP3.
There are, however a variety of lesser-used POP protocol variants:
• APOP — POP3 with MD5 authentication. An encoded hash of the user's password is sent from
the email client to the server rather than sending an unencrypted password.
• KPOP — POP3 with Kerberos authentication. Refer to Chapter 18 Kerberos for more information
about Kerberos.
• RPOP — POP3 with RPOP authentication. This uses a per-user ID, similar to a password, to
authenticate POP requests. However, this ID is not encrypted, so RPOP is no more secure than
standard POP.
For added security, it is possible to use Secure Socket Layer (SSL) encryption for client authentication and
data transfer sessions. This can be enabled by using the ipop3s service or by using the /usr/sbin/stunnel
program. Refer to Section 12.5.1 Securing Communication for more information.
IMAP
The default IMAP server under Red Hat Enterprise Linux is /usr/sbin/imapd and is provided by the imap
package. When using an IMAP mail server, email messages remain on the server where users can read or
delete them. IMAP also allows client applications to create, rename, or delete mail directories on the server
to organize and store email.
IMAP is particularly useful for those who access their email using multiple machines. The protocol is also
convenient for users connecting to the mail server via a slow connection, because only the email header
information is downloaded for messages until opened, saving bandwidth. The user also has the ability to
delete messages without viewing or downloading them.
For convenience, IMAP client applications are capable of caching copies of messages locally, so the user
can browse previously read messages when not directly connected to the IMAP server.
IMAP, like POP, is fully compatible with important Internet messaging standards, such as MIME, which
allow for email attachments.
Email Program Classifications
In general, all email applications fall into at least one of three classifications. Each classification plays a
specific role in the process of moving and managing email messages. While most users are only aware of
the specific email program they use to receive and send messages, each one is important for ensuring that
email arrives at the correct destination.
Mail Transfer Agent
A Mail Transfer Agent (MTA) transfers email messages between hosts using SMTP. A message may
involve several MTAs as it moves to its intended destination.
While the delivery of messages between machines may seem rather straightforward, the entire process of
deciding if a particular MTA can or should accept a message for delivery is quite complicated. In addition,
due to problems from spam, use of a particular MTA is usually restricted by the MTA's configuration or
access configuration for the network on which the MTA resides.
Many modern email client programs can act as an MTA when sending email. However, this action should
not be confused with the role of a true MTA. The sole reason email client programs are capable of sending
email like an MTA is because the host running the application does not have its own MTA. This is
particularly true for email client programs on non-Unix-based operating systems. However, these client
programs only send outbound messages to an MTA they are authorized to use and do not directly deliver
the message to the intended recipient's email server.
Since Red Hat Enterprise Linux installs two MTAs, Sendmail and Postfix, email client programs are often
not required to act as an MTA. Red Hat Enterprise Linux also includes a special purpose MTA called
Fetchmail.

Mail Delivery Agent


A Mail Delivery Agent (MDA) is invoked by the MTA to file incoming email in the proper user's mailbox.
In many cases, the MDA is actually a Local Delivery Agent (LDA), such as mail or Procmail.
Any program that actually handles a message for delivery to the point where it can be read by an email
client application can be considered an MDA. For this reason, some MTAs (such as Sendmail and Postfix)
can fill the role of an MDA when they append new email messages to a local user's mail spool file. In
general, MDAs do not transport messages between systems nor do they provide a user interface; MDAs
distribute and sort messages on the local machine for an email client application to access.
Mail User Agent
A Mail User Agent (MUA) is synonymous with an email client application. An MUA is a program that, at
the very least, allows a user to read and compose email messages. Many MUAs are capable of retrieving
messages via the POP or IMAP protocols, setting up mailboxes to store messages, and sending outbound
messages to an MTA.
MUAs may be graphical, such as Mozilla Mail, or have a very simple, text-based interface, such as mutt.
Mail Transport Agents
Red Hat Enterprise Linux includes two primary MTAs, Sendmail and Postfix. Sendmail is configured as
the default MTA, although it is easy to switch the default MTA to Postfix.
Sendmail
Sendmail's core purpose, like other MTAs, is to safely transfer email among hosts, usually using the SMTP
protocol. However, Sendmail is highly configurable, allowing control over almost every aspect of how
email is handled, including the protocol used. Many system administrators elect to use Sendmail as their
MTA due to its power and scalability.
Purpose and Limitations
It is important to be aware of what Sendmail is and what it can do as opposed to what it is not. In these days
of monolithic applications that fulfill multiple roles, Sendmail may seem like the only application needed to
run an email server within an organization. Technically, this is true, as Sendmail can spool mail to each
user's directory and deliver outbound mail for users. However, most users actually require much more than
simple email delivery. They usually want to interact with their email using an MUA that uses POP or
IMAP, to download their messages to their local machine. Or, they may prefer a Web interface to gain
access to their mailbox. These other applications can work in conjunction with Sendmail, but they actually
exist for different reasons and can operate separately from one another.
LECTURE NO. 17 READINGS: A-PAGE 705
SMTP:Simple Mail Transfer Protocol
Simple Mail Transfer Protocol (SMTP) is used to send mail across the internet. There are four types of
programs used in the process of sending and receiving mail. They are:
• MUA - Mail users agent. This is the program a user will use to type e-mail. It usually incorporates
an editor for support. The user types the mail and it is passed to the sending MTA.
• MTA - Message transfer agent is used to pass mail from the sending machine to the receiving
machine. There is an MTA program running on both the sending and receiving machines. Sendmail
is an MTA.
• LDA - Local delivery agent on the receiving machine receives the mail from its MTA. This
program is usually procmail.
• Mail notifier - This program notifies the recipient that they have mail. Normally this requires two
programs, biff and comsat. Biff allows the administrator or user to turn on comsat service.
The MTAs on both machines use SMTP (Simple Mail Transfer Protocol) to pass mail between
them, usually on port 25.
Other components of mail service include:
• Directory services - A list of users on a system. Microsoft provides a Global Address List and a
Personal Address Book.
• Post Office - This is where the messages are stored.
The primary purpose of SMTP is to transfer email between mail servers. However, it is critical for email
clients as well. To send email, the client sends the message to an outgoing mail server, which in turn
contacts the destination mail server for delivery. For this reason, it is necessary to specify an SMTP server
when configuring an email client.
Under Red Hat Enterprise Linux, a user can configure an SMTP server on the local machine to handle mail
delivery. However, it is also possible to configure remote SMTP servers for outgoing mail.
One important point to make about the SMTP protocol is that it does not require authentication. This allows
anyone on the Internet to send email to anyone else or even to large groups of people. It is this
characteristic of SMTP that makes junk email or spam possible. Modern SMTP servers attempt to
minimize this behavior by allowing only known hosts access to the SMTP server. Those servers that do not
impose such restrictions are called open relay servers.
By default, Sendmail (/usr/sbin/sendmail) is the default SMTP program under Red Hat Enterprise Linux.
However, a simpler mail server application called Postfix (/usr/sbin/postfix) is also available.
SMTP Commands:
• HELO - Sent by client with domain name such as mymachine.mycompany.com.
• MAIL - From <myself@mymachine.mycompany.com>
• RCPT - To <myfriend@theirmachine.theirorg.org>
• DATA - Sends the contents of the message. The headers are sent, then a blank line, then the
message body is sent. A line with "." and no other characters indicates the end of the message.
• QUIT
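The command sequence above can be assembled into a complete client-side session. The helper name, addresses, and domain below are illustrative, not from the course text:

```python
def smtp_dialogue(sender, recipient, subject, body,
                  domain="mymachine.mycompany.com"):
    """Return the client side of a minimal SMTP session,
    in the order the commands above are issued."""
    return [
        "HELO " + domain,
        "MAIL FROM:<%s>" % sender,
        "RCPT TO:<%s>" % recipient,
        "DATA",
        "Subject: %s" % subject,
        "",                       # a blank line separates headers from the body
        body,
        ".",                      # a line with only "." ends the message
        "QUIT",
    ]

for line in smtp_dialogue("myself@mymachine.mycompany.com",
                          "myfriend@theirmachine.theirorg.org",
                          "hello", "How are you?"):
    print(line)
```

In practice a client would send each line over a TCP connection to port 25 and read the server's numeric reply (e.g. 250) before sending the next command.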
If you recall from the DNS section mail servers are specified in DNS configuration files as follows:
dept1.mycompany.com. IN MX 5 mail.mycompany.com.
dept1.mycompany.com. IN MX 10 mail1.mycompany.com.
dept1.mycompany.com. IN MX 15 mail2.mycompany.com.
The host dept1.mycompany.com may not be directly connected to the internet or network but may be
connected periodically using a PPP line. The servers mail, mail1, and mail2 are used as mail forwarders to
send mail to the host dept1. The one with the lowest number, 5, is normally used for sending the mail, but
the others are used when the first one or ones are down.
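The forwarder-selection rule just described (lowest preference number first, falling back to the next server when one is down) can be sketched as:

```python
# MX records as (preference, host) pairs, matching the zone file above.
mx_records = [
    (5, "mail.mycompany.com."),
    (10, "mail1.mycompany.com."),
    (15, "mail2.mycompany.com."),
]

def delivery_order(records, down=()):
    """Hosts in the order an MTA would try them: lowest preference first,
    skipping any servers currently known to be down."""
    return [host for pref, host in sorted(records) if host not in down]

print(delivery_order(mx_records))
print(delivery_order(mx_records, down={"mail.mycompany.com."}))
```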
POP:
The Post Office Protocol version 3 (POP3) is an application-layer Internet standard protocol used by
local e-mail clients to retrieve e-mail from a remote server over a TCP/IP connection. POP3 and
IMAP4 (Internet Message Access Protocol) are the two most prevalent Internet standard protocols
for e-mail retrieval. Virtually all modern e-mail clients and servers support both.
POP3 has made earlier versions of the protocol, informally called POP1 and POP2, obsolete. In
contemporary usage, the less precise term POP almost always means POP3 in the context of e-mail
protocols.
The design of POP3 and its procedures supports end-users with intermittent connections (such as dial-up
connections), allowing these users to retrieve e-mail when connected and then to view and manipulate the
retrieved messages without needing to stay connected. Although most clients have an option to leave mail
on server, e-mail clients using POP3 generally connect, retrieve all messages, store them on the user's PC
as new messages, delete them from the server, and then disconnect. Most e-mail clients support either
POP3 or IMAP to retrieve messages; however, fewer Internet Service Providers (ISPs) support IMAP. The
fundamental difference between POP3 and IMAP4 is that POP3 offers access to a mail drop; the mail exists
on the server until it is collected by the client. Even if the client leaves some or all messages on the server,
the client's message store is considered authoritative. In contrast, IMAP4 offers access to the mail store; the
client may store local copies of the messages, but these are considered to be a temporary cache; the server's
store is authoritative.
Clients with a leave mail on server option generally use the POP3 UIDL (Unique IDentification Listing)
command. Most POP3 commands identify specific messages by their ordinal number on the mail server.
This creates a problem for a client intending to leave messages on the server, since these message numbers
may change from one connection to the server to another. For example if a mailbox contains five messages
at last connect, and a different client then deletes message #3, the next connecting user will find the last two
messages' numbers decremented by one. UIDL provides a mechanism to avoid these numbering issues. The
server assigns a string of characters as a permanent and unique ID for the message. When a POP3-
compatible e-mail client connects to the server, it can use the UIDL command to get the current mapping
from these message IDs to the ordinal message numbers. The client can then use this mapping to determine
which messages it has yet to download, which saves time when downloading. Whether using POP3 or
IMAP to retrieve messages, e-mail clients typically use the SMTP_Submit profile of the Simple Mail
Transfer Protocol (SMTP) to send messages. E-mail clients are commonly categorized as either POP or
IMAP clients, but in both cases the clients also use SMTP.
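The UIDL bookkeeping described above can be sketched as follows; the unique IDs are invented for illustration:

```python
def messages_to_fetch(uidl_listing, seen_uids):
    """Given the server's UIDL output (ordinal number -> unique ID) and the
    set of IDs already downloaded, return the ordinals still to be fetched.
    Ordinals may shift between connections, but the unique IDs do not."""
    return [ordinal for ordinal, uid in uidl_listing.items()
            if uid not in seen_uids]

# Five messages on the server; the client has already seen the first two.
listing = {1: "a001", 2: "a002", 3: "a003", 4: "a004", 5: "a005"}
print(messages_to_fetch(listing, {"a001", "a002"}))  # [3, 4, 5]
```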
POP4
While not yet an official standardized mail protocol, a proposal has been outlined for a POP4 specification,
complete with a working server implementation.
The proposed POP4 extension adds basic folder management, multipart message support, as well as
message flag management, allowing for a light protocol which supports some popular IMAP features which
POP3 currently lacks.
No progress has been observed in the POP4 specification since 2003.
LECTURE NO. 18 READINGS: A-PAGE 718
IMAP:
History
IMAP was designed by Mark Crispin in 1986 as a remote mailbox protocol, in contrast to the widely used
POP, a protocol for retrieving the contents of a mailbox.
Original IMAP
The original Interim Mail Access Protocol was implemented as a Xerox Lisp machine client and a TOPS-
20 server.
No copies of the original interim protocol or its software exist; all known installations of the original
protocol were updated to IMAP2. Although some of its commands and responses were similar to IMAP2,
the interim protocol lacked command/response tagging and thus its syntax was incompatible with all other
versions of IMAP.
IMAP2
The interim protocol was quickly replaced by the Interactive Mail Access Protocol (IMAP2), defined in
RFC 1064 and later updated by RFC 1176. IMAP2 introduced command/response tagging and was the
first publicly distributed version.
IMAP2bis
With the advent of MIME, IMAP2 was extended to support MIME body structures and add mailbox
management functionality (create, delete, rename, message upload) that was absent in IMAP2. This
experimental revision was called IMAP2bis; its specification was never published in non-draft form. Early
versions of Pine were widely distributed with IMAP2bis support (Pine 4.00 and later supports IMAP4rev1).
IMAP4
An IMAP Working Group formed in the IETF in the early 1990s and took over responsibility for the
IMAP2bis design. The IMAP WG decided to rename IMAP2bis to IMAP4 to avoid confusion with a
competing IMAP3 proposal from another group that never got off the ground. The expansion of the IMAP
acronym also changed to the Internet Message Access Protocol.
Some design flaws in the original IMAP4 (defined by RFC 1730) that came out in implementation
experience led to its revision and replacement by IMAP4rev1 two years later. There were very few IMAP4
client or server implementations due to its short lifetime.
IMAP4rev1
The current version of IMAP since 1996, IMAP version 4 revision 1 (IMAP4rev1), is defined by RFC
3501 which revised the earlier RFC 2060.
IMAP4rev1 is backwards compatible with IMAP2 and IMAP2bis; and is largely backwards compatible
with IMAP4. However, the older versions are either extinct or nearly so.
Unlike many older Internet protocols, IMAP4 natively supports encrypted login mechanisms. Plain-text
transmission of passwords in IMAP4 is also possible. Because the encryption mechanism to be used must
be agreed between the server and client, plain-text passwords are used in some combinations of clients and
servers (typically Microsoft Windows clients and non-Windows servers). It is also possible to encrypt
IMAP4 traffic using SSL, either by tunneling IMAP4 communications over SSL on port 993, or by issuing
STARTTLS within an established IMAP4 session (see RFC 2595).
Advantages over POP3
Connected and disconnected modes of operation
When using POP3, clients typically connect to the e-mail server briefly, only as long as it takes to
download new messages. When using IMAP4, clients often stay connected as long as the user interface is
active and download message content on demand. For users with many or large messages, this IMAP4
usage pattern can result in faster response times.
Multiple clients simultaneously connected to the same mailbox
The POP3 protocol requires the currently connected client to be the only client connected to the mailbox. In
contrast, the IMAP protocol specifically allows simultaneous access by multiple clients and provides
mechanisms for clients to detect changes made to the mailbox by other, concurrently connected, clients.
Access to MIME message parts and partial fetch
Nearly all internet e-mail is transmitted in MIME format, allowing messages to have a tree structure where
the leaf nodes are any of a variety of single part content types and the non-leaf nodes are any of a variety of
multipart types. The IMAP4 protocol allows clients to separately retrieve any of the individual MIME parts
and also to retrieve portions of either individual parts or the entire message. These mechanisms allow
clients to retrieve the text portion of a message without retrieving attached files or to stream content as it is
being fetched.
Message state information
Through the use of flags defined in the IMAP4 protocol, clients can keep track of message state; for
example, whether or not the message has been read, replied to, or deleted. These flags are stored on the
server, so different clients accessing the same mailbox at different times can detect state changes made by
other clients. POP3 provides no mechanism for clients to store such state information on the server so if a
single user accesses a mailbox with two different POP3 clients, state information--such as whether a
message has been accessed--cannot be synchronized between the clients. The IMAP4 protocol supports
both pre-defined system flags and client defined keywords. System flags indicate state information such as
whether a message has been read. Keywords, which are not supported by all IMAP servers, allow messages
to be given one or more tags whose meaning is up to the client. Adding user created tags to messages is an
operation supported by some web-based email services, such as Gmail.
Multiple mailboxes on the server
IMAP4 clients can create, rename, and/or delete mailboxes (usually presented to the user as folders) on the
server, and move messages between mailboxes. Multiple mailbox support also allows servers to provide
access to shared and public folders.
Server-side searches
IMAP4 provides a mechanism for a client to ask the server to search for messages meeting a variety of
criteria. This mechanism avoids requiring clients to download every message in the mailbox in order to
perform these searches.
Built-in extension mechanism
Reflecting the experience of earlier Internet protocols, IMAP4 defines an explicit mechanism by which it
may be extended. Many extensions to the base protocol have been proposed and are in common use.
IMAP2bis did not have an extension mechanism, and POP3 now has one defined by RFC 2449.
Disadvantages of IMAP
While IMAP remedies many of the shortcomings of POP, this inherently introduces additional complexity.
Much of this complexity (e.g., multiple clients accessing the same mailbox at the same time) is
compensated for by server-side workarounds such as maildir or database backends.
Unless the mail store and searching algorithms on the server are carefully implemented, a client can
potentially consume large amounts of server resources when searching massive mailboxes.
IMAP4 clients need to explicitly request new email message content potentially causing additional delays
on slow connections such as those commonly used by mobile devices. A private proposal, push IMAP,
would extend IMAP to implement push e-mail by sending the entire message instead of just a notification.
However, push IMAP has not been generally accepted and current IETF work has addressed the problem in
other ways (see the Lemonade Profile for more information).
Unlike some proprietary protocols which combine sending and retrieval operations, sending a message and
saving a copy in a server-side folder with a base-level IMAP client requires transmitting the message
content twice, once to SMTP for delivery and a second time to IMAP to store in a sent mail folder.
FTP:
File Transfer Protocol (FTP) is a network protocol used to transfer data from one computer to another
through a network such as the Internet.
FTP is a file transfer protocol for exchanging and manipulating files over a TCP computer network. An
FTP client may connect to an FTP server to manipulate files on that server.
FTP sites are typically used for uploading and downloading files to a central server computer, for the sake
of file distribution.
In order to download and upload files to an FTP site, you need to connect using special FTP software.
There are both commercial and free FTP software programs, and some browser-based free FTP programs
as well.[1]
The typical information needed to connect to an FTP site is:
1. The "server address" or "hostname". This is the network address of the computer you wish to
connect to, such as ftp.microsoft.com.
2. The username and password. These are the credentials you use to access the specific files on the
computer you wish to connect to.
Connection methods
FTP runs exclusively over TCP. It defaults to listen on port 21 for incoming connections from FTP clients.
A connection to this port from the FTP Client forms the control stream on which commands are passed
from the FTP client to the FTP server and on occasion from the FTP server to the FTP client. FTP uses out-
of-band control, which means it uses a separate connection for control and data. Thus, for the actual file
transfer to take place, a different connection is required which is called the data stream. Depending on the
transfer mode, the process of setting up the data stream is different. Port 21 for control (or program), port
20 for data.
In active mode, the FTP client opens a dynamic port, sends the FTP server the dynamic port number on
which it is listening over the control stream and waits for a connection from the FTP server. When the FTP
server initiates the data connection to the FTP client it binds the source port to port 20 on the FTP server.
In order to use active mode, the client sends a PORT command, with the IP and port as argument. The
format for the IP and port is "h1,h2,h3,h4,p1,p2". Each field is a decimal representation of 8 bits of the host
IP, followed by the chosen data port. For example, a client with an IP of 192.168.0.1, listening on port
49154 for the data connection will send the command "PORT 192,168,0,1,192,2". The port fields should be
interpreted as p1×256 + p2 = port, or, in this example, 192×256 + 2 = 49154.
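The PORT argument encoding just described can be checked with a small pair of helpers (a sketch, with illustrative names):

```python
def encode_port_argument(ip: str, port: int) -> str:
    """Encode an IP address and data port as the h1,h2,h3,h4,p1,p2 argument."""
    return ",".join(ip.split(".") + [str(port // 256), str(port % 256)])

def decode_port_argument(arg: str):
    """Recover (ip, port) from a PORT argument, using port = p1*256 + p2."""
    fields = arg.split(",")
    ip = ".".join(fields[:4])
    port = int(fields[4]) * 256 + int(fields[5])
    return ip, port

print(encode_port_argument("192.168.0.1", 49154))   # 192,168,0,1,192,2
print(decode_port_argument("192,168,0,1,192,2"))    # ('192.168.0.1', 49154)
```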
In passive mode, the FTP server opens a dynamic port, sends the FTP client the server's IP address to
connect to and the port on which it is listening (a 16-bit value broken into a high and low byte, as explained
above) over the control stream and waits for a connection from the FTP client. In this case, the FTP client
binds the source port of the connection to a dynamic port.
To use passive mode, the client sends the PASV command to which the server would reply with something
similar to "227 Entering Passive Mode (127,0,0,1,192,52)". The syntax of the IP address and port are the
same as for the argument to the PORT command.
In extended passive mode, the FTP server operates exactly the same as passive mode, however it only
transmits the port number (not broken into high and low bytes) and the client is to assume that it connects
to the same IP address that was originally connected to. Extended passive mode was added by RFC 2428 in
September 1998.
While data is being transferred via the data stream, the control stream sits idle. This can cause problems
with large data transfers through firewalls which time out sessions after lengthy periods of idleness. While
the file may well be successfully transferred, the control session can be disconnected by the firewall,
causing an error to be generated.
The FTP protocol supports resuming of interrupted downloads using the REST command. The client passes
the number of bytes it has already received as argument to the REST command and restarts the transfer. Some command-line clients, for example, offer an often-ignored but valuable command, "reget" (meaning "get again"), which continues an interrupted "get" command after a communications failure, ideally to completion.
Resuming uploads is not as easy. Although the FTP protocol supports the APPE command to append data
to a file on the server, the client does not know the exact position at which a transfer got interrupted. It has
to obtain the size of the file some other way, for example over a directory listing or using the SIZE
command.
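Both mechanisms are exposed by Python's standard ftplib module: SIZE via FTP.size() and REST via the rest= argument of retrbinary(). A hedged sketch of resuming a download (the host, path and credentials are placeholders):

```python
import os
from ftplib import FTP

def resume_offset(local_path):
    """Bytes already on disk = the REST offset to send to the server."""
    return os.path.getsize(local_path) if os.path.exists(local_path) else 0

def resume_download(host, remote_path, local_path,
                    user="anonymous", passwd="guest@"):
    """Resume an interrupted FTP download, skipping bytes already received."""
    offset = resume_offset(local_path)
    ftp = FTP(host)
    ftp.login(user, passwd)
    ftp.voidcmd("TYPE I")              # binary mode; REST offsets count bytes
    with open(local_path, "ab") as f:
        # retrbinary sends REST <offset> before RETR when rest= is given
        ftp.retrbinary(f"RETR {remote_path}", f.write, rest=offset)
    ftp.quit()
```

For an upload, the client would first call ftp.size(remote_path) to learn where the transfer stopped, then send the remaining bytes with APPE, as described above.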
In ASCII mode (see below), resuming transfers can be troublesome if client and server use different end of
line characters.
The objectives of FTP, as outlined by its RFC, are:
1. To promote sharing of files (computer programs and/or data).
2. To encourage indirect or implicit use of remote computers.
3. To shield a user from variations in file storage systems among different hosts.
4. To transfer data reliably and efficiently.
LECTURE NO.19 READINGS: A-PAGE 718
Criticisms of FTP
• Passwords and file contents are sent in clear text, which can be intercepted by eavesdroppers.
There are protocol enhancements that remedy this, for instance by using SSL, TLS or Kerberos.
• Multiple TCP/IP connections are used, one for the control connection, and one for each download,
upload, or directory listing. Firewalls may need additional logic and/or configuration changes to
account for these connections.
• It is hard to filter active mode FTP traffic on the client side by using a firewall, since the client
must open an arbitrary port in order to receive the connection. This problem is largely resolved by
using passive mode FTP.
• It is possible to abuse the protocol's built-in proxy features to tell a server to send data to an
arbitrary port of a third computer; see FXP.
• FTP is a high latency protocol due to the number of commands needed to initiate a transfer.
• No integrity check on the receiver side. If a transfer is interrupted, the receiver has no way to
know if the received file is complete or not. Some servers support undocumented extensions to
calculate for example a file's MD5 sum (e.g. using the SITE MD5 command), XCRC, XMD5,
XSHA or CRC checksum, however even then the client has to make explicit use of them. In the
absence of such extensions, integrity checks have to be managed externally.
• No date/timestamp attribute transfer. Uploaded files are given a new current timestamp, unlike
other file transfer protocols such as SFTP, which allow attributes to be included. There is no way
in the standard FTP protocol to set the time-last-modified (or time-created) datestamp that most
modern filesystems preserve. There is a draft of a proposed extension that adds new commands for
this, but most popular FTP servers do not yet support it.
Security problems
The original FTP specification is an inherently insecure method of transferring files because there is no
method specified for transferring data in an encrypted fashion. This means that under most network
configurations, user names, passwords, FTP commands and transferred files can be "sniffed" or viewed by
anyone on the same network using a packet sniffer. This is a problem common to many Internet protocol
specifications written prior to the creation of SSL such as HTTP, SMTP and Telnet. The common solution
to this problem is to use either SFTP (SSH File Transfer Protocol), or FTPS (FTP over SSL), which adds
SSL or TLS encryption to FTP as specified in RFC 4217.
Anonymous FTP
A host that provides an FTP service may additionally provide Anonymous FTP access as well. Under this
arrangement, users do not strictly need an account on the host. Instead the user typically enters 'anonymous'
or 'ftp' when prompted for username. Although users are commonly asked to send their email address as
their password, little to no verification is actually performed on the supplied data.
As modern FTP clients typically hide the anonymous login process from the user, the ftp client will supply
dummy data as the password (since the user's email address may not be known to the application). For
example, the following ftp user agents specify the listed passwords for anonymous logins:
• Mozilla Firefox (2.0) — mozilla@example.com
• KDE Konqueror (3.5) — anonymous@
• wget (1.10.2) — -wget@
• lftp (3.4.4) — lftp@
The Gopher protocol has been suggested as an alternative to anonymous FTP, as well as Trivial File
Transfer Protocol and File Service Protocol.
Data format
While transferring data over the network, several data representations can be used. The two most common
transfer modes are:
1. ASCII mode
2. Binary mode: In "Binary mode", the sending machine sends each file byte for byte and as such the
recipient stores the bytestream as it receives it. (The FTP standard calls this "IMAGE" or "I"
mode)
In "ASCII mode", any form of data that is not plain text will be corrupted. When a file is sent using an
ASCII-type transfer, the individual letters, numbers, and characters are sent using their ASCII character
codes. The receiving machine saves these in a text file in the appropriate format (for example, a Unix
machine saves it in a Unix format, a Windows machine saves it in a Windows format). Hence if an ASCII
transfer is used it can be assumed plain text is sent, which is stored by the receiving computer in its own
format. Translating between text formats might entail substituting the end of line and end of file characters
used on the source platform with those on the destination platform, e.g. a Windows machine receiving a file
from a Unix machine will replace the line feeds with carriage return-line feed pairs. It might also involve
translating characters; for example, when transferring from an IBM mainframe to a system using ASCII,
EBCDIC characters used on the mainframe will be translated to their ASCII equivalents, and when
transferring from the system using ASCII to the mainframe, ASCII characters will be translated to their
EBCDIC equivalents.
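The end-of-line substitution described above amounts to a simple byte rewrite. A sketch (illustrative, not taken from any FTP implementation):

```python
def unix_to_dos(data: bytes) -> bytes:
    """Translate Unix line endings (LF) to Windows ones (CR LF),
    as an ASCII-mode receiver on Windows would."""
    # Normalise first so that existing CR LF pairs are not doubled.
    return data.replace(b"\r\n", b"\n").replace(b"\n", b"\r\n")

def dos_to_unix(data: bytes) -> bytes:
    """Translate CR LF pairs back to a bare LF."""
    return data.replace(b"\r\n", b"\n")

print(unix_to_dos(b"one\ntwo\n"))  # → b'one\r\ntwo\r\n'
```

This also makes the danger of ASCII mode visible: applied to binary data, the same substitution silently corrupts any byte sequence that happens to contain 0x0A.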
By default, most FTP clients use ASCII mode. Some clients try to determine the required transfer-mode by
inspecting the file's name or contents, or by determining whether the server is running an operating system
with the same text file format.
The FTP specifications also list the following transfer modes:
1. EBCDIC mode - this transfers bytes as in image mode, except that text is encoded in EBCDIC
rather than ASCII.
2. Local mode - this is designed for use with systems that are word-oriented rather than byte-
oriented. For example mode "L 36" can be used to transfer binary data between two 36-bit
machines. In L mode, the words are packed into bytes rather than being padded. Given the
predominance of byte-oriented hardware nowadays, this mode is rarely used. However, some FTP
servers accept "L 8" as being equivalent to "I".
In practice, these additional transfer modes are rarely used. They are however still used by some legacy
mainframe systems.
The text (ASCII/EBCDIC) modes can also be qualified with the type of carriage control used (e.g.
TELNET NVT carriage control, ASA carriage control), although that is rarely used nowadays.
NNTP:
The Network News Transfer Protocol or NNTP is an Internet application protocol used primarily for
reading and posting Usenet articles (aka netnews), as well as transferring news among news servers. Brian
Kantor of the University of California, San Diego and Phil Lapsley of the University of California,
Berkeley completed RFC 977, the specification for the Network News Transfer Protocol, in March 1986.
Other contributors included Stan Barber from the Baylor College of Medicine and Erik Fair of Apple
Computer.
Usenet was originally designed around the UUCP network, with most article transfers taking place over
direct computer-to-computer telephone links. Readers and posters would log into the same computers that
hosted the servers, reading the articles directly from the local disk.
As local area networks and the Internet became more commonly used, it became desirable to allow
newsreaders to be run on personal computers, and a means of employing the Internet to handle article
transfers was desired. A newsreader, also known as a news client, is an application software that reads
articles on Usenet (organized into newsgroups), either directly from the news server's disks or via
NNTP.
Because networked Internet-compatible filesystems were not yet widely available, it was decided to
develop a new protocol that resembled SMTP, but was tailored for reading newsgroups.
The well-known TCP port 119 is reserved for NNTP. When clients connect to a news server with SSL,
TCP port 563 is used. This is sometimes referred to as NNTPS.
In October 2006, the IETF released RFC 3977 which updates the NNTP protocol and codifies many of the
additions made over the years since RFC 977. The IMAP protocol can also be used for reading
newsgroups.
LECTURE NO.20 READINGS: A-PAGE 731 to 736,528
HTTP:
Hypertext Transfer Protocol (HTTP) is a communications protocol. Its use for retrieving inter-linked
text documents (hypertext) led to the establishment of the World Wide Web.
HTTP is a request/response standard between a client and a server. A client is the end-user, the server is the
web site. The client making a HTTP request—using a web browser, spider, or other end-user tool—is
referred to as the user agent. The responding server—which stores or creates resources such as HTML files
and images—is called the origin server. In between the user agent and origin server may be several
intermediaries, such as proxies, gateways, and tunnels. HTTP is not constrained to using TCP/IP and its
supporting layers, although this is its most popular application on the Internet. Indeed HTTP can be
"implemented on top of any other protocol on the Internet, or on other networks. HTTP only presumes a
reliable transport; any protocol that provides such guarantees can be used."
Typically, an HTTP client initiates a request. It establishes a Transmission Control Protocol (TCP)
connection to a particular port on a host (port 80 by default; see List of TCP and UDP port numbers). An
HTTP server listening on that port waits for the client to send a request message. Upon receiving the
request, the server sends back a status line, such as "HTTP/1.1 200 OK", and a message of its own, the
body of which is perhaps the requested resource, an error message, or some other information.
HTTP predominantly uses TCP and not UDP because much data must be sent for a webpage, and TCP
provides transmission control, presents the data in order, and provides error correction. See the difference
between TCP and UDP.
Resources to be accessed by HTTP are identified using Uniform Resource Identifiers (URIs) (or, more
specifically, Uniform Resource Locators (URLs)) using the http: or https: URI schemes.
Request message
The request message consists of the following:
• Request line, such as GET /images/logo.gif HTTP/1.1, which requests a resource called
/images/logo.gif from server
• Headers, such as Accept-Language: en
• An empty line
• An optional message body
The request line and headers must all end with <CR><LF> (that is, a carriage return followed by a line
feed). The empty line must consist of only <CR><LF> and no other whitespace. In the HTTP/1.1 protocol,
all headers except Host are optional.
A request line containing only the path name is accepted by servers to maintain compatibility with HTTP
clients before the HTTP/1.0 specification.
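A request message in the format above can be composed and sent by hand over a raw TCP socket. A minimal sketch (the host and path are placeholders, and this is not a full HTTP client):

```python
import socket

def build_get_request(host, path="/"):
    """Compose an HTTP/1.1 GET message: request line, headers, blank line."""
    return (
        f"GET {path} HTTP/1.1\r\n"   # request line
        f"Host: {host}\r\n"          # Host is the only mandatory HTTP/1.1 header
        f"Connection: close\r\n"
        f"\r\n"                      # empty line (bare CR LF) ends the headers
    ).encode("ascii")

def raw_get(host, path="/", port=80):
    """Send the request over TCP and return the server's status line."""
    with socket.create_connection((host, port)) as s:
        s.sendall(build_get_request(host, path))
        reply = s.recv(4096)
    return reply.split(b"\r\n", 1)[0].decode()
```

Calling raw_get("example.com") would typically return a status line such as "HTTP/1.1 200 OK", matching the exchange described in the text.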
Request methods
(Figure: an HTTP request made using telnet, showing the request, the response headers and the response body.)
HTTP defines eight methods (sometimes referred to as "verbs") indicating the desired action to be
performed on the identified resource. What this resource represents, whether pre-existing data or data that
is generated dynamically, depends on the implementation of the server. Often, the resource corresponds to
a file or the output of an executable residing on the server.
HEAD
Asks for the response identical to the one that would correspond to a GET request, but without the response
body. This is useful for retrieving meta-information written in response headers, without having to
transport the entire content.
GET
Requests a representation of the specified resource. Note that GET should not be used for operations that
cause side-effects, such as using it for taking actions in web applications. One reason for this is that GET
may be used arbitrarily by robots or crawlers, which should not need to consider the side effects that a
request should cause. See safe methods below.
POST
Submits data to be processed (e.g., from an HTML form) to the identified resource. The data is included in
the body of the request. This may result in the creation of a new resource, the update of existing
resources, or both.
PUT
Uploads a representation of the specified resource.
DELETE
Deletes the specified resource.
TRACE
Echoes back the received request, so that a client can see what intermediate servers are adding or changing
in the request.
OPTIONS
Returns the HTTP methods that the server supports for specified URL. This can be used to check the
functionality of a web server by requesting '*' instead of a specific resource.
CONNECT
Converts the request connection to a transparent TCP/IP tunnel, usually to facilitate SSL-encrypted
communication (HTTPS) through an unencrypted HTTP proxy.
Safe methods
Some methods (for example, HEAD, GET, OPTIONS and TRACE) are defined as safe, which means they
are intended only for information retrieval and should not change the state of the server. In other words,
they should not have side effects, beyond relatively harmless effects such as logging, caching, the serving
of banner advertisements or incrementing a web counter. Making arbitrary GET requests without regard to
the context of the application's state should therefore be considered safe.
By contrast, methods such as POST, PUT and DELETE are intended for actions which may cause side
effects either on the server, or external side effects such as financial transactions or transmission of email.
Such methods are therefore not usually used by conforming web robots or web crawlers, which tend to
make requests without regard to context or consequences.
Despite the prescribed safety of GET requests, in practice their handling by the server is not technically
limited in any way, and careless or deliberate programming can just as easily (or more easily, due to lack of
user agent precautions) cause non-trivial changes on the server. This is discouraged, because it can cause
problems for Web caching, search engines and other automated agents, which can make unintended
changes on the server.
HTTP versions
HTTP has evolved into multiple, mostly backwards-compatible protocol versions. RFC 2145 describes the
use of HTTP version numbers. The client tells in the beginning of the request the version it uses, and the
server uses the same or earlier version in the response.
HTTP/0.9 (1991)
Deprecated. Supports only one command, GET, which does not specify the HTTP version. Does not
support headers. Since this version does not support POST, the information a client can pass to the server is
limited by the URL length.
HTTP/1.0 (May 1996)
This is the first protocol revision to specify its version in communications and is still in wide use, especially
by proxy servers.
HTTP/1.1 (1997-1999)[3][4]
Current version; persistent connections are enabled by default and work well with proxies. Also supports
request pipelining, allowing multiple requests to be sent before the first response is received, letting the
server prepare for the workload and potentially deliver the requested resources to the client more quickly.
HTTP/1.2
The initial 1995 working drafts of the document PEP—an Extension Mechanism for HTTP (which
proposed the Protocol Extension Protocol, abbreviated PEP) were prepared by the World Wide Web
Consortium and submitted to the Internet Engineering Task Force. PEP was originally intended to become
a distinguishing feature of HTTP/1.2.[5] In later PEP working drafts, however, the reference to HTTP/1.2
was removed. The experimental RFC 2774, HTTP Extension Framework, largely subsumed PEP. It was
published in February 2000.
The major changes between HTTP/1.0 and HTTP/1.1 include the way HTTP handles caching; how it
optimizes bandwidth and network connections usage, manages error notifications; how it transmits
messages over the network; how internet addresses are conserved; and how it maintains security and
integrity.[6]
Status codes
In HTTP/1.0 and since, the first line of the HTTP response is called the status line and includes a numeric
status code (such as "404") and a textual reason phrase (such as "Not Found"). The way the user agent
handles the response primarily depends on the code and secondarily on the response headers. Custom status
codes can be used since, if the user agent encounters a code it does not recognize, it can use the first digit of
the code to determine the general class of the response.
Also, the standard reason phrases are only recommendations and can be replaced with "local equivalents" at
the web developer's discretion. If the status code indicates a problem, the user agent may display the
reason phrase to the user to provide further information about the nature of the problem. The standard also
allows the user agent to attempt to interpret the reason phrase, though this can be unwise, since the
standard explicitly specifies that status codes are machine-readable and reason phrases are human-readable.
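The rule that the first digit determines the general class of a response is trivial to apply in code. A sketch (the class names follow the groupings in the HTTP/1.1 specification):

```python
STATUS_CLASSES = {
    1: "Informational",
    2: "Success",
    3: "Redirection",
    4: "Client Error",
    5: "Server Error",
}

def status_class(code: int) -> str:
    """Classify any status code, known or custom, by its first digit."""
    return STATUS_CLASSES.get(code // 100, "Unknown")

print(status_class(404))  # → Client Error
print(status_class(299))  # → Success, even though 299 has no assigned meaning
```

This is exactly how a user agent can handle a custom code it does not recognize: 299 is unknown, but its class (2xx, Success) still tells the client how to proceed.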
Persistent connections
In HTTP/0.9 and 1.0, the connection is closed after a single request/response pair. In HTTP/1.1 a keep-
alive mechanism was introduced, whereby a connection could be reused for more than one request.
Such persistent connections reduce lag perceptibly, because the client does not need to re-negotiate the
TCP connection after the first request has been sent.
Version 1.1 of the protocol made bandwidth optimization improvements to HTTP/1.0. For example,
HTTP/1.1 introduced chunked transfer encoding to allow content on persistent connections to be streamed,
rather than buffered. HTTP pipelining further reduces lag time, allowing clients to send multiple requests
before a previous response has been received to the first one. Another improvement to the protocol was
byte serving, which is when a server transmits just the portion of a resource explicitly requested by a client.
HTTP session state
HTTP is a stateless protocol. The advantage of a stateless protocol is that hosts do not need to retain
information about users between requests, but this forces web developers to use alternative methods for
maintaining users' states. For example, when a host needs to customize the content of a website for a user,
the web application must be written to track the user's progress from page to page. A common method for
solving this problem involves sending and receiving cookies. Other methods include server side sessions,
hidden variables (when the current page is a form), and URL encoded parameters (such as /index.php?
session_id=some_unique_session_code).
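Cookie-based session tracking can be sketched with Python's standard http.cookies module; the session_id name below simply reuses the example from the text:

```python
from http.cookies import SimpleCookie

# The server sets the session cookie in its response headers...
response_cookie = SimpleCookie()
response_cookie["session_id"] = "some_unique_session_code"
header = response_cookie.output(header="Set-Cookie:")
print(header)  # → Set-Cookie: session_id=some_unique_session_code

# ...and the client echoes it back on later requests, restoring the state
# that the stateless protocol itself does not carry.
request_cookie = SimpleCookie()
request_cookie.load("session_id=some_unique_session_code")
print(request_cookie["session_id"].value)  # → some_unique_session_code
```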
IPv6
An IPv6 address is 128 bits long. It is written as eight groups of 16 bits each, in hexadecimal, as follows:
2b63:1478:1ac5:37ef:4e8c:75df:14cd:93f2
Extension headers can be added to IPv6 for new features.
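Python's standard ipaddress module handles this notation, including the compressed form in which a run of zero groups collapses to "::". A quick sketch using the address above:

```python
import ipaddress

addr = ipaddress.IPv6Address("2b63:1478:1ac5:37ef:4e8c:75df:14cd:93f2")
print(addr.exploded)      # the full eight-group form, with leading zeros
print(addr.packed.hex())  # the raw 128 bits, as 16 bytes

# Runs of zero groups compress to '::'
loopback = ipaddress.IPv6Address("0:0:0:0:0:0:0:1")
print(loopback.compressed)  # → ::1
```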
It is the next-generation Internet Layer protocol for packet-switched internetworks and the Internet. IPv6
has a much larger address space than IPv4. This is based on the definition of a 128-bit address, whereas
IPv4 used only 32 bits. The new address space thus supports 2^128 (about 3.4×10^38) addresses. This
expansion provides flexibility in allocating addresses and routing traffic and eliminates the need for
network address translation (NAT). NAT gained wide-spread deployment as an effort to alleviate IPv4
address exhaustion.
IPv6 also implements new features that simplify aspects of address assignment (stateless address
autoconfiguration) and network renumbering (prefix and router announcements) when changing Internet
connectivity providers. The IPv6 subnet size has been standardized by fixing the size of the host identifier
portion of an address to 64 bits to facilitate automatic mechanism for forming the host identifier from Link
Layer media addressing information (MAC address).
Network security is integrated by design in the IPv6 architecture. Internet Protocol Security (IPsec) was
originally developed for IPv6, but found wide-spread optional deployment first in IPv4 into which it was
re-engineered. The IPv6 specifications mandate IPsec implementation as a fundamental interoperability
requirement.
IPv6 packet format
The IPv6 packet is composed of two main parts: the header and the payload.
Header
+ Bits     0–3       4–7       8–11    12–15    16–23         24–31
0          Version   Traffic Class     Flow Label
32         Payload Length              Next Header   Hop Limit
64–191     Source Address
192–319    Destination Address
The header is in the first 40 octets (320 bits) of the packet and contains:
• Version - version 6 (4-bit IP version).
• Traffic class - packet priority (8-bits). Priority values subdivide into ranges: traffic where the
source provides congestion control and non-congestion control traffic.
• Flow label - QoS management (20 bits). Originally created for giving real-time applications
special service, but currently unused.
• Payload length - payload length in bytes (16 bits). When set to zero, the actual length is carried
in a hop-by-hop "Jumbo payload" option.
• Next header - Specifies the next encapsulated protocol. The values are compatible with those
specified for the IPv4 protocol field (8 bits).
• Hop limit - replaces the time to live field of IPv4 (8 bits).
• Source and destination addresses - 128 bits each.
The payload can have a size of up to 64KiB in standard mode, or larger with a "jumbo payload" option.
Fragmentation is handled only in the sending host in IPv6: routers never fragment a packet, and hosts are
expected to use PMTU discovery.
The protocol field of IPv4 is replaced with a Next Header field. This field usually specifies the transport
layer protocol used by a packet's payload.
In the presence of options, however, the Next Header field specifies the presence of an extra options
header, which then follows the IPv6 header; the payload's protocol itself is specified in a field of the
options header. This insertion of an extra header to carry options is analogous to the handling of AH and
ESP in IPsec for both IPv4 and IPv6.
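The fixed 40-octet header described above can be unpacked field by field. A sketch (not a full IPv6 stack, just the fixed header; field names are illustrative):

```python
import struct

def parse_ipv6_header(packet: bytes):
    """Split the 40-octet fixed IPv6 header into its fields."""
    if len(packet) < 40:
        raise ValueError("an IPv6 header is 40 octets")
    # First 8 octets: 32-bit version/class/label word, then three short fields.
    first_word, payload_len, next_header, hop_limit = struct.unpack(
        "!IHBB", packet[:8])
    return {
        "version": first_word >> 28,               # top 4 bits, always 6
        "traffic_class": (first_word >> 20) & 0xFF,
        "flow_label": first_word & 0xFFFFF,        # low 20 bits
        "payload_length": payload_len,
        "next_header": next_header,                # e.g. 6 = TCP, 17 = UDP
        "hop_limit": hop_limit,
        "source": packet[8:24],                    # 128-bit addresses
        "destination": packet[24:40],
    }
```

A Next Header value naming an extension header (rather than a transport protocol) signals that an options header follows, as described above.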
LECTURE NO.21 READINGS:
http://en.wikipedia.org/wiki/Local_area_network,
http://www.wb.nic.in/nicnet/lan.asp
INTRODUCTION TO LANs:
A local area network (LAN) is a computer network covering a small physical area, like a home, office, or
small group of buildings, such as a school, or an airport. The defining characteristics of LANs, in contrast
to wide-area networks (WANs), include their usually higher data-transfer rates, smaller geographic range,
and lack of a need for leased telecommunication lines.
Ethernet over unshielded twisted pair cabling, and Wi-Fi are the two most common technologies currently,
but ARCNET, Token Ring and many others have been used in the past.
FEATURES OF LAN:
Internet Access over LAN
There are various methods of connecting a LAN to the Internet gateway, which are explained below:
• Dial-up
• Leased Line
• ISDN
• VSAT Technology
• RF Technology (Wireless Access)
• Cable Modem
Dial-Up
A common way of accessing the Internet over a LAN is the dial-up approach. In this method, the remote
user's PC is first linked to the local gateway over an existing dial-up line using modems; once the user has
reached the local gateway, further routing up to the Internet is handled by the gateway itself. The routing
procedures are transparent to the end user.
Leased line
A leased line provides reliable, high-speed service, starting as low as 2.4 kbps and ranging as high as
45 Mbps (T3 service). A leased-line connection is an affordable way to link two or more sites for a fixed
monthly charge. Leased lines can be either fiber-optic or copper. High-capacity leased-line service is an
excellent way to provide data, voice and video links between sites, and it delivers a consistent amount of
bandwidth for all your communication needs.
ISDN
Integrated Services Digital Network (ISDN) is a digital telephone system. ISDN involves the digitization of
the telephone network so that voice, data, graphics, text, music, video and other source material can be
delivered to end users from a single end-user terminal over existing telephone wiring.
ISDN BRI (Basic Rate ISDN) delivers two 64 kbps channels, called B channels, and one 16 kbps D channel.
ISDN offers speeds of 64 kbps and 128 kbps and is an alternative for those needing greater bandwidth than
dial-up service. To use the ISDN service, the user needs an ISDN terminal adapter and an ISDN card in the
system.
VSAT
VSAT technology has emerged as a very useful, everyday application of modern telecommunications.
VSAT stands for 'Very Small Aperture Terminal' and refers to 'receive/transmit' terminals installed at
dispersed sites connecting to a central hub via satellite using small diameter antenna dishes (0.6 to 3.8
meter). VSAT technology represents a cost effective solution for users seeking an independent
communications network connecting a large number of geographically dispersed sites. VSAT networks
offer value-added satellite-based services capable of supporting the Internet, data, voice/fax etc. over LAN.
Generally, these systems operate in the Ku-band and C-band frequencies.
Cable Modem
Internet access over cable modem is a relatively new and fast-emerging technology. A cable modem is a
device that allows high-speed data access via a cable TV (CATV) network. A cable modem typically has
two connections, one to the cable wall outlet and the other to the PC, enabling the typical array of Internet
services at speeds 100 to 1000 times those of a telephone modem. Cable modem speeds range from
500 kbps to 10 Mbps.
COMPONENTS OF LAN:
Basic LAN components
There are essentially five basic components of a LAN:
• Network devices such as workstations, printers and file servers, which are normally accessed by all
other computers
• Network communication devices, i.e. devices such as hubs, routers and switches, used for network
operations
• Network Interface Cards (NICs), one for each network device that needs to access the network
• Cable as a physical transmission medium
• A Network Operating System - the software required to control the use of the network
LECTURE NO.22: READINGS:
http://www.wb.nic.in/nicnet/lan.asp,
http://www.rocw.raifoundation.org/computing/BCA/datacommunication/lecture-notes/lecture-21.pdf
USAGE OF LANs
LAN STANDARDS:
There are many LAN standards, such as Ethernet, Token Ring and FDDI.
IEEE 802 STANDARDS:
The Data Link Layer and IEEE
Discussions of Local Area Network (LAN) technology frequently refer to the IEEE 802 standards, which
define networking connections for the interface card and the physical connections, describing how they are
made. The 802 standards were published by the Institute of Electrical and Electronics Engineers (IEEE).
The 802.3 standard is commonly called Ethernet, but the IEEE standard does not exactly define the
original Ethernet framing that is still common today, which causes a great deal of confusion. There are
several types of common Ethernet frames, and many network cards support more than one type.
The ethernet standard data encapsulation method is defined by RFC 894. RFC 1042 defines the IP to link
layer data encapsulation for networks using the IEEE 802 standards. The 802 standards define the two
lowest levels of the seven layer network model and primarily deal with the control of access to the network
media. The network media is the physical means of carrying the data such as network cable. The control of
access to the media is called media access control (MAC). The 802 standards are listed below:
• 802.1 - Internetworking
• 802.2 - Logical Link Control *
• 802.3 - Ethernet or CSMA/CD, Carrier-Sense Multiple Access with Collision detection LAN *
• 802.4 - Token-Bus LAN *
• 802.5 - Token Ring LAN *
• 802.6 - Metropolitan Area Network (MAN)
• 802.7 - Broadband Technical Advisory Group
• 802.8 - Fiber-Optic Technical Advisory Group
• 802.9 - Integrated Voice/Data Networks
• 802.10 - Network Security
• 802.11 - Wireless Networks
• 802.12 - Demand Priority Access LAN, 100 Base VG-AnyLAN
*The entries marked with an asterisk should be memorized for network certification testing.
LECTURE NO.21: CHANNEL ACCESS METHODS:
In telecommunications and computer networks, a channel access method or multiple access method
allows several terminals connected to the same physical medium to transmit over it and to share its
capacity. Examples of shared physical media are bus networks, ring networks, hub networks, wireless
networks and half-duplex point-to-point links. The corresponding terminology is defined in the IETF's
Mobile Ad Hoc Networking terminology documents.
Multiple access protocols and control mechanisms are called media access control (MAC) for Data links,
which is provided by the Data Link Layer in the OSI model and the Link Layer of the TCP/IP model.
A multiple access method is built on a multiplexing method that allows several data streams or signals to
share the same communication channel or physical medium. Multiplexing is provided by the Physical Layer.
Note that multiplexing also may be used in full-duplex point-to-point communication between nodes in a
switched network, which should not be considered as multiple access.
List of channel access methods
1. Circuit mode and channelization methods
The following are common circuit mode and channelization channel access methods:
• Frequency division multiple access (FDMA)
o Orthogonal frequency division multiple access (OFDMA)
o Wavelength division multiple access (WDMA)
• Time-division multiple access (TDMA)
o Multi-Frequency Time Division Multiple Access (MF-TDMA)
• Spread spectrum multiple access (SSMA)
o Direct-sequence spread spectrum (DSSS)
o Frequency-hopping spread spectrum (FHSS)
o Orthogonal Frequency-Hopping Multiple Access (OFHMA)
o Code division multiple access (CDMA) - the overarching form of DS-SS and FH-SS
o Multi-carrier code division multiple access (MC-CDMA)
• Space division multiple access (SDMA)
2. Packet mode methods
The following are examples of packet mode channel access methods:
• Contention based random multiple access methods:
o Aloha
o Slotted Aloha
o Multiple Access with Collision Avoidance (MACA)
o Multiple Access with Collision Avoidance for Wireless (MACAW)
o Carrier sense multiple access (CSMA)
o Carrier sense multiple access with collision detection (CSMA/CD)
o Carrier sense multiple access with collision avoidance (CSMA/CA)
 Distributed Coordination Function (DCF)
 Point Coordination Function (PCF)
o Carrier sense multiple access with collision avoidance and Resolution using Priorities
(CSMA/CARP)
• Token passing:
o Token ring
o Token bus
• Polling
• Resource reservation (scheduled) packet-mode protocols:
o Dynamic Time Division Multiple Access (Dynamic TDMA)
o Packet reservation multiple access (PRMA)
o Reservation ALOHA (R-ALOHA)
3. Duplexing methods
Where these methods are used for dividing forward and reverse communication channels, they are known
as duplexing methods, such as:
• Time division duplex (TDD)
• Frequency division duplex (FDD)

ALOHA
ALOHAnet, also known as ALOHA, was a pioneering computer networking system developed at the
University of Hawaii. It was first deployed in 1970, and while the network itself is no longer used, one of
the core concepts in the network is the basis for the widely used Ethernet.

ALOHA was important because it used a shared medium for transmission. This revealed the need for medium access control schemes such as CSMA/CD, later used by Ethernet.
Unlike the ARPANET where each node could only talk to a node on the other end of the wire, in ALOHA
all nodes were communicating on the same frequency. This meant that some sort of system was needed to
control who could talk at what time. ALOHA's situation was similar to issues faced by Ethernet (non-
switched) and Wi-Fi networks.
ALOHA's scheme was very simple. Because data was sent via a teletype, the data rate usually did not go beyond 80 characters per second.
When two stations tried to talk at the same time, both transmissions were garbled. Then data had to be
manually resent. ALOHA proved that it was possible to have a useful network without solving this
problem, and this sparked interest in others, most significantly Bob Metcalfe and other researchers working
at Xerox PARC. This team went on to create the Ethernet protocol.
The ALOHA protocol
The ALOHA protocol is an OSI layer 2 protocol for LAN networks with broadcast topology.
The first version of the protocol was basic:
• If you have data to send, send the data
• If the message collides with another transmission, try resending "later"
Many people have made a study of the protocol. The critical aspect is the "later" concept. The quality of the
backoff scheme chosen significantly influences the efficiency of the protocol, the ultimate channel
capacity, and the predictability of its behavior.
The difference between Aloha and Ethernet on a shared medium is that Ethernet uses CSMA/CD, which
broadcasts a jamming signal to notify all computers connected to the channel that a collision occurred,
forcing computers on the network to reject their current packet or frame. The use of a jamming signal
enables early release of the transmission medium where transmission delays dominate propagation delays,
and is appropriate for many Ethernet variants. As Aloha was a wireless system, there were additional
problems, such as the hidden node problem, which meant that protocols which work well on a small scale
wired LAN would not always work. Even though the extent of the Hawaiian island network is about 400
km in diameter, propagation delays were almost certainly small in comparison with transmission delays, so
the protocol used had to be one which was robust enough to cope.
Pure Aloha had a maximum throughput of about 18.4%. This means that about 81.6% of the total available
bandwidth was essentially wasted due to losses from packet collisions. The basic throughput calculation
involves the assumption that the aggregate arrival process follows a Poisson distribution with an average
number of arrivals of 2G arrivals per 2X seconds. Therefore, the lambda parameter in the Poisson
distribution becomes 2G. The mentioned peak is reached for G = 0.5 resulting in a maximum throughput of
0.184, i.e. 18.4%.
An improvement to the original Aloha protocol was Slotted Aloha, which introduced discrete timeslots and
increased the maximum throughput to 36.8%. A station can send only at the beginning of a timeslot, and
thus collisions are reduced. In this case, the average number of aggregate arrivals is G arrivals per 2X seconds, so the lambda parameter becomes G. The maximum throughput is reached for G = 1.
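The two maxima quoted above can be verified from the standard throughput formulas S = G·e^(-2G) for pure ALOHA (vulnerable period of two frame times) and S = G·e^(-G) for slotted ALOHA; a short Python check:

```python
import math

def pure_aloha_throughput(G):
    """S = G * e^(-2G): a frame succeeds only if no other arrival
    falls within its two-frame-time vulnerable period."""
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G):
    """S = G * e^(-G): timeslots halve the vulnerable period."""
    return G * math.exp(-G)

# Peak throughputs: G = 0.5 gives 1/(2e), G = 1 gives 1/e
print(round(pure_aloha_throughput(0.5), 3))    # 0.184
print(round(slotted_aloha_throughput(1.0), 3)) # 0.368
```

Here G is the offered load in frames per frame time; as in the text, the formulas assume Poisson arrivals.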
It should be noted that Aloha's characteristics are still not much different from those experienced today by
Wi-Fi, and similar contention-based systems that have no carrier sense capability. There is a certain amount
of inherent inefficiency in these systems. For instance 802.11b sees about a 2-4 Mbit/s real throughput with
a few stations talking, versus its theoretical maximum of 11 Mbit/s. It is typical to see these types of
networks' throughput break down significantly as the number of users and message burstiness increase. For
these reasons, applications which need highly deterministic load behavior often use token-passing schemes
(such as token ring) instead of contention systems. For instance ARCNET is very popular in embedded
applications. Nonetheless, contention based systems also have significant advantages, including ease of
management and speed in initial communication.
Because listen-before-send (CSMA, Carrier Sense Multiple Access), as used in Ethernet, works much better than Aloha in all cases where the stations can hear each other, Slotted Aloha is now used mainly on low-bandwidth tactical satellite communications networks by the US military, on subscriber-based satellite communications networks, and in contactless RFID technologies.
LECTURE NO. 23: READINGS: A-PAGE 312
CSMA:
Carrier sense multiple access
Carrier Sense Multiple Access (CSMA) is a probabilistic Media Access Control (MAC) protocol in
which a node verifies the absence of other traffic before transmitting on a shared transmission medium,
such as an electrical bus, or a band of the electromagnetic spectrum.
"Carrier Sense" describes the fact that a transmitter listens for a carrier wave before trying to send. That is,
it tries to detect the presence of an encoded signal from another station before attempting to transmit. If a
carrier is sensed, the station waits for the transmission in progress to finish before initiating its own
transmission.
"Multiple Access" describes the fact that multiple stations send and receive on the medium. Transmissions
by one node are generally received by all other stations using the medium.
Types of CSMA
• 1-persistent CSMA
When the sender (station) is ready to transmit data, it checks if the physical medium is busy. If so, it senses
the medium continually until it becomes idle, and then it transmits a piece of data (a frame). In case of a
collision, the sender waits for a random period of time and attempts to transmit again.
• p-persistent CSMA
This protocol is a generalization of 1-persistent CSMA. When the sender is ready to send data, it checks
continually if the medium is busy. If the medium becomes idle, the sender transmits a frame with a
probability p. In case the transmission did not happen (the probability of this event is 1-p) the sender waits
until the next available time slot and transmits again with the same probability p. This process repeats until
the frame is sent or some other sender starts transmitting. In the latter case the sender waits a random
period of time, checks the channel, and if it is idle, transmits with a probability p, and so on.
• Nonpersistent CSMA
When the sender is ready to send data, it checks if the medium is busy. If so, it waits for a random amount
of time and checks again. When the medium becomes idle, the sender starts transmitting. If collision
occurs, the sender waits for a random amount of time, and checks the medium, repeating the process.
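The three variants above differ only in what the sender does around a busy or idle medium. A minimal, hypothetical per-slot decision helper for the p-persistent case (function and state names are our own, not from any standard API):

```python
import random

def p_persistent_step(channel_idle, p, rng):
    """One decision step of p-persistent CSMA, idealized as slotted:
    keep sensing while the medium is busy; when idle, transmit with
    probability p, otherwise defer to the next slot and retry."""
    if not channel_idle:
        return "keep_sensing"
    return "transmit" if rng.random() < p else "defer_one_slot"

rng = random.Random(1)
steps = [p_persistent_step(True, 0.25, rng) for _ in range(20000)]
share = steps.count("transmit") / len(steps)
# share comes out close to p = 0.25; 1-persistent CSMA is the
# special case p = 1, transmitting as soon as the medium is idle
```

Nonpersistent CSMA differs in that a busy medium triggers a random wait rather than continuous sensing.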

CSMA/CD
Carrier Sense Multiple Access With Collision Detection (CSMA/CD), in computer networking, is a
network control protocol in which
• a carrier sensing scheme is used.
• a transmitting data station that detects another signal while transmitting a frame, stops transmitting
that frame, transmits a jam signal, and then waits for a random time interval (known as "backoff
delay" and determined using the truncated binary exponential backoff algorithm) before trying to
send that frame again.
CSMA/CD is a modification of pure Carrier Sense Multiple Access (CSMA).
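The truncated binary exponential backoff mentioned above can be sketched as follows; this is a simplified illustration of the rule (wait a random number of slot times drawn from a doubling, capped range), not the full IEEE 802.3 procedure:

```python
import random

def backoff_delay_slots(collision_count, rng):
    """After the n-th consecutive collision, wait k slot times with
    k uniform in [0, 2**min(n, 10) - 1]; give up after 16 collisions."""
    if collision_count > 16:
        raise RuntimeError("excessive collisions: frame dropped")
    return rng.randint(0, 2 ** min(collision_count, 10) - 1)

rng = random.Random(7)
for n in (1, 3, 12):
    k = backoff_delay_slots(n, rng)
    assert 0 <= k <= 2 ** min(n, 10) - 1  # range doubles, capped at 1023
```

The doubling range is what makes repeated collisions between the same stations increasingly unlikely.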
A Shared Medium
The Ethernet network may be used to provide shared access by a group of attached nodes to the physical
medium which connects the nodes. These nodes are said to form a Collision Domain. All frames sent on
the medium are physically received by all receivers; however, the Medium Access Control (MAC) header contains a MAC destination address which ensures that only the specified destination actually forwards the received frame (the other computers all discard the frames which are not addressed to them).
Consider a LAN with four computers each with a Network Interface Card (NIC) connected by a common
Ethernet cable:

One computer (Blue) uses a NIC to send a frame to the shared medium, which has a destination address
corresponding to the source address of the NIC in the red computer.

The cable propagates the signal in both directions, so that the signal (eventually) reaches the NICs in all
four of the computers. Termination resistors at the ends of the cable absorb the frame energy, preventing
reflection of the signal back along the cable.
All the NICs receive the frame and each examines it to check its length and checksum. The header
destination MAC address is next examined, to see if the frame should be accepted, and forwarded to the
network-layer software in the computer.

Only the NIC in the red computer recognises the frame destination address as valid, and therefore this NIC
alone forwards the contents of the frame to the network layer. The NICs in the other computers discard the
unwanted frame.
The shared cable allows any NIC to send whenever it wishes, but if two NICs happen to transmit at the
same time, a collision will occur, resulting in the data being corrupted.
ALOHA & Collisions
To control which NICs are allowed to transmit at any given time, a protocol is required. The simplest
protocol is known as ALOHA (actually a Hawaiian word, meaning "hello"). ALOHA allows any
NIC to transmit at any time, but states that each NIC must add a checksum/CRC at the end of its
transmission to allow the receiver(s) to identify whether the frame was correctly received.
ALOHA is therefore a best effort service, and does not guarantee that the frame of data will actually reach
the remote recipient without corruption. It therefore relies on ARQ protocols to retransmit any data which
is corrupted. An ALOHA network only works well when the medium has a low utilisation, since this leads
to a low probability of the transmission colliding with that of another computer, and hence a reasonable
chance that the data is not corrupted.
Collision Detection (CD)
A second element to the Ethernet access protocol is used to detect when a collision occurs. When there is
data waiting to be sent, each transmitting NIC also monitors its own transmission. If it observes a collision
(excess current above what it is generating, i.e. > 24 mA for coaxial Ethernet), it stops transmission
immediately and instead transmits a 32-bit jam sequence. The purpose of this sequence is to ensure that any other node which may currently be receiving this frame will receive the jam signal in place of the correct 32-bit MAC CRC; this causes the other receivers to discard the frame due to a CRC error.
To ensure that all NICs start to receive a frame before the transmitting NIC has finished sending it, Ethernet
defines a minimum frame size (i.e. no frame may have less than 46 bytes of payload). The minimum frame
size is related to the distance which the network spans, the type of media being used and the number of
repeaters which the signal may have to pass through to reach the furthest part of the LAN. Together these
define a value known as the Ethernet Slot Time, corresponding to 512 bit times at 10 Mbps.
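The slot-time arithmetic above can be checked directly; the figures below assume the classic 10 Mbps parameters (512-bit slot time, with addresses, type/length field and CRC around the payload):

```python
SLOT_TIME_BITS = 512            # Ethernet slot time, in bit times
BIT_RATE = 10_000_000           # 10 Mbps

slot_time_us = SLOT_TIME_BITS / BIT_RATE * 1e6   # 51.2 microseconds
min_frame_bytes = SLOT_TIME_BITS // 8            # 64 bytes
# 6 B dest + 6 B src + 2 B type/length + 4 B CRC = 18 B of overhead
min_payload_bytes = min_frame_bytes - 18         # 46 bytes, as in the text

print(slot_time_us, min_frame_bytes, min_payload_bytes)
```

Note that the 18-byte overhead figure excludes the preamble, which is not counted in the minimum frame size.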
When two or more transmitting NICs each detect a corruption of their own data (i.e. a collision), each
responds in the same way by transmitting the jam sequence. The following sequence depicts a collision:

At time t=0, a frame is sent on the idle medium by NIC A.


A short time later, NIC B also transmits. (In this case, the medium, as observed by the NIC at B happens to
be idle too).

After a period, equal to the propagation delay of the network, the NIC at B detects the other transmission
from A, and is aware of a collision, but NIC A has not yet observed that NIC B was also transmitting. B
continues to transmit, sending the Ethernet Jam sequence (32 bits).

After one complete round trip propagation time (twice the one way propagation delay), both NICs are
aware of the collision. B will shortly cease transmission of the Jam Sequence, however A will continue to
transmit a complete Jam Sequence. Finally the cable becomes idle.

Collision detection is used to improve CSMA performance by terminating transmission as soon as a collision is detected, reducing the probability of a second collision on retry.
Methods for collision detection are media dependent, but on an electrical bus such as Ethernet, collisions can be detected by comparing transmitted data with received data. If they differ, another transmitter is overlaying the first transmitter's signal (a collision), and transmission terminates immediately. A jam signal is sent which causes all transmitters to back off by random intervals, reducing the probability of a collision when the first retry is attempted. CSMA/CD is a layer 2 protocol in the OSI model.
Collisions are detected by monitoring the collisionDetect signal provided by the Physical Layer. When a collision is detected during a frame transmission, the transmission is not terminated immediately. Instead, the transmission continues until additional bits specified by jamSize have been transmitted (counting from the time collisionDetect went on). This collision enforcement, or jam, guarantees that the duration of the collision is sufficient to ensure its detection by all transmitting stations on the network.
It should be emphasized that describing the MAC layer in a computer language does not imply that its procedures must be implemented as a program executed by a computer. The implementation may consist of any appropriate technology, including hardware, firmware, software, or any combination. For example, a NIC (Network Interface Card) may contain hardware for a complete implementation of the Physical and MAC layers; it then takes layer-three packets from the operating system and performs the rest of the activity described above on its own hardware. Alternatively, the NIC can be a simple device that leaves the MAC-layer intelligence to the operating system: the NIC merely provides the proper signals to the operating system, which performs all the intelligent functions of the MAC layer.
Reference: IEEE Std 802.3-2002 (Revision of IEEE Std 802.3, 2000 Edition), Part 3.
Ethernet is the classic CSMA/CD protocol. However, CSMA/CD is no longer used in the 10 Gigabit
Ethernet specifications, due to the requirement of switches replacing all hubs and repeaters. Similarly,
while CSMA/CD operation (half duplex) is defined in the Gigabit Ethernet specifications, few
implementations support it and in practice it is nonexistent. Also, in Full Duplex Ethernet, collisions are
impossible since data is transmitted and received on different wires, and each segment is connected directly
to a switch. Therefore, CSMA/CD is not used on Full Duplex Ethernet networks.

TOKEN PASSING:
In telecommunication, token passing is a channel access method where a signal called a token is passed
around between nodes that authorizes the node to communicate.
Token passing schemes are a technique in which only the system which has the token can communicate.
The token is a control mechanism which gives authority to the system to communicate or use the resources
of that network. Once the communication is over, the token is passed to the next candidate in a sequential
manner. The most well-known examples are token ring and ARCNET.
Token passing schemes provide round-robin scheduling. If the packets are equally sized, the scheduling is
max-min fair.
The advantage over contention based channel access is that collisions are eliminated, and that the channel
bandwidth can be fully utilized without idle time when demand is heavy.
The disadvantage is that even when demand is light, a station wishing to transmit must wait for the token,
increasing latency.
For example:
TOKEN RING: Token ring local area network (LAN) technology is a local area network protocol which
resides at the data link layer (DLL) of the OSI model. It uses a special three-byte frame called a token that
travels around the ring. Token ring frames travel completely around the loop.
Token frame
When no station is transmitting a data frame, a special token frame circles the loop. This special token
frame is repeated from station to station until arriving at a station that needs to transmit data. When a
station needs to transmit data, it converts the token frame into a data frame for transmission. Once the
sending station receives its own data frame, it converts the frame back into a token. If a transmission error
occurs and no token frame, or more than one, is present, a special station referred to as the Active Monitor
detects the problem and removes and/or reinserts tokens as necessary (see Active and standby monitors).
On 4 Mbit/s Token Ring, only one token may circulate; on 16 Mbit/s Token Ring, there may be multiple
tokens.
The special token frame consists of three bytes as described below (J and K are special non-data characters,
referred to as code violations).
Token priority
Token ring specifies an optional medium access scheme allowing a station with a high-priority
transmission to request priority access to the token.
8 priority levels, 0-7, are used. When the station wishing to transmit receives a token or data frame with a
priority less than or equal to the station's requested priority, it sets the priority bits to its desired priority.
The station does not immediately transmit; the token circulates around the medium until it returns to the
station. Upon sending and receiving its own data frame, the station downgrades the token priority back to
the original priority.
Token ring frame format
A data token ring frame is an expanded version of the token frame that is used by stations to transmit media
access control (MAC) management frames or data frames from upper layer protocols and applications.
Token Ring and IEEE 802.5 support two basic frame types: tokens and data/command frames. Tokens are
3 bytes in length and consist of a start delimiter, an access control byte, and an end delimiter.
Data/command frames vary in size, depending on the size of the Information field. Data frames carry
information for upper-layer protocols, while command frames contain control information and have no data
for upper-layer protocols. Token ring can be connected to physical rings via equipment such as 100Base-
TX equipment and CAT5e UTP cable.
Data/Command Frame
SD AC FC DA SA PDU from LLC (IEEE 802.2) CRC ED FS
8 bits 8 bits 8 bits 48 bits 48 bits up to 18200x8 bits 32 bits 8 bits 8 bits
Token Frame
SD AC ED
8 bits 8 bits 8 bits
Abort Frame
SD ED
8 bits 8 bits
Starting Delimiter
consists of a special bit pattern denoting the beginning of the frame. The bits from most significant to least
significant are J,K,0,J,K,0,0,0. J and K are code violations. Since Manchester encoding is self clocking, and
has a transition for every encoded bit 0 or 1, the J and K codings violate this, and will be detected by the
hardware.
J K 0 J K 0 0 0
1 bit 1 bit 1 bit 1 bit 1 bit 1 bit 1 bit 1 bit
Access Control
this byte field consists of the following bits from most significant to least significant bit order:
P,P,P,T,M,R,R,R. The P bits are priority bits, T is the token bit which when set specifies that this is a token
frame, M is the monitor bit which is set by the Active Monitor (AM) station when it sees this frame, and R
bits are reserved bits.
Bits 0–2 3 4 5–7
Priority Token Monitor Reservation
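The Access Control layout above can be unpacked with simple bit operations; a small illustrative helper (the function name and dictionary keys are our own, not from any standard API):

```python
def decode_access_control(ac):
    """Decode a Token Ring Access Control byte, whose bits from most
    to least significant are P,P,P,T,M,R,R,R."""
    return {
        "priority":    (ac >> 5) & 0b111,  # P bits
        "token":       (ac >> 4) & 1,      # T bit
        "monitor":     (ac >> 3) & 1,      # M bit, set by the Active Monitor
        "reservation": ac & 0b111,         # R bits
    }

fields = decode_access_control(0b110_1_1_010)
print(fields)  # priority 6, token 1, monitor 1, reservation 2
```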
Frame Control
a one-byte field that contains bits describing the data portion of the frame contents. It indicates whether the frame contains data or control information. In control frames, this byte specifies the type of control information.
Bits 0–2 3
Frame type Control bits
Frame type: 01 indicates an LLC (IEEE 802.2) data frame, in which case the control bits are ignored; 00 indicates a MAC frame, in which case the control bits indicate the type of MAC control frame.

Destination address
a six-byte field used to specify the physical address of the destination(s).
Source address
a six-byte field containing the physical address of the sending station: either the locally administered address (LAA) or the universally administered address (UAA) of the sending station's adapter.
Data
a variable-length field of 0 or more bytes containing MAC management data or upper-layer information; the maximum allowable size depends on ring speed, up to a maximum of 4500 bytes.
Frame Check Sequence
a four-byte field storing a CRC computed over the frame, used by the receiver to verify frame integrity.
Ending Delimiter
The counterpart to the starting delimiter, this field marks the end of the frame and consists of the following
bits from most significant to least significant: J,K,1,J,K,1,I,E. I is the intermediate frame bit and E is the
error bit.
J K 1 J K 1 I E
1 bit 1 bit 1 bit 1 bit 1 bit 1 bit 1 bit 1 bit
Frame Status: a one-byte field used as a primitive acknowledgement scheme indicating whether the frame was recognized and copied by its intended receiver.

A C 0 0 A C 0 0
1 bit 1 bit 1 bit 1 bit 1 bit 1 bit 1 bit 1 bit
A = 1: address recognized; C = 1: frame copied
Abort Frame: used to abort transmission by the sending station.
LECTURE NO. 24 READINGS: A-PAGE 314 to 316
Ethernet
The IEEE 802.3 standard defines ethernet at the physical and data link layers of the OSI network model.
Most ethernet systems use the following:
• Carrier-sense multiple-access with collision detection (CSMA/CD) for controlling access to the
network media.
• Use baseband broadcasts
• A method for packing data into data packets called frames
• Transmit at 10Mbps, 100Mbps, and 1Gbps.
Types of Ethernet
• 10Base5 - Uses Thicknet coaxial cable which requires a transceiver with a vampire tap to connect each computer. There is a drop cable from the transceiver to the Attachment Unit Interface (AUI). The AUI may be a DIX port on the network card. There is a transceiver for each network card on
the network. This type of ethernet is subject to the 5-4-3 rule meaning there can be 5 network
segments with 4 repeaters, and three of the segments can be connected to computers. It uses bus
topology. Maximum segment length is 500 Meters with the maximum overall length at 2500
meters. Minimum length between nodes is 2.5 meters. Maximum nodes per segment is 100.
• 10Base2 - Uses Thinnet coaxial cable. Uses a BNC connector and bus topology requiring a
terminator at each end of the cable. The cable used is RG-58A/U or RG-58C/U with an impedance
of 50 ohms. RG-58U is not acceptable. Uses the 5-4-3 rule meaning there can be 5 network
segments with 4 repeaters, and three of the segments can be connected to computers. The
maximum length of one segment is 185 meters. Barrel connectors can be used to link smaller
pieces of cable on each segment, but each barrel connector reduces signal quality. Minimum
length between nodes is 0.5 meters.
• 10BaseT - Uses Unshielded twisted pair (UTP) cable. Uses star topology. Shielded twisted pair
(STP) is not part of the 10BaseT specification. Not subject to the 5-4-3 rule. They can use
category 3, 4, or 5 cable, but perform best with category 5 cable. Category 3 is the minimum.
Require only 2 pairs of wire. Cables in ceilings and walls must be plenum rated. Maximum
segment length is 100 meters. Minimum length between nodes is 2.5 meters. Maximum number of
connected segments is 1024. Maximum number of nodes per segment is 1 (star topology). Uses
RJ-45 connectors.
• 10BaseF - Uses Fiber Optic cable. Can have up to 1024 network nodes. Maximum segment length
is 2000 meters. Uses specialized connectors for fiber optic. Includes three categories:
o 10BaseFL - Used to link computers in a LAN environment, which is not commonly done
due to high cost.
o 10BaseFP - Used to link computers with passive hubs to get cable distances up to 500
meters.
o 10BaseFB - Used as a backbone between hubs.
• 100BaseT - Also known as fast ethernet. Uses RJ-45 connectors. Topology is star. Uses
CSMA/CD media access. Minimum length between nodes is 2.5 meters. Maximum number of
connected segments is 1024. Maximum number of nodes per segment is 1 (star topology).
IEEE802.3 specification.
o 100BaseTX - Requires category 5 two pair cable. Maximum distance is 100 meters.
o 100BaseT4 - Requires category 3 cable with 4 pair. Maximum distance is 100 meters.
o 100BaseFX - Can use fiber optic to transmit up to 2000 meters. Requires two strands of
fiber optic cable.
100VG-AnyLAN - Requires category 3 cable with 4 pair. Maximum distance is 100 meters with cat 3 or 4
cable. Can reach 150 meters with cat 5 cable. Can use fiber optic to transmit up to 2000 meters. This
ethernet type supports transmission of Token-Ring network packets in addition to ethernet packets. IEEE
802.12 specification. Uses demand-priority media access control. The topology is star. It uses a series of
interlinked cascading hubs. Uses RJ-45 connectors.
Types of ethernet frames
• Ethernet 802.2 - These frames contain fields similar to the ethernet 802.3 frames with the addition
of three Logical Link Control (LLC) fields. Novell NetWare 4.x networks use it.
• Ethernet 802.3 - It is mainly used in Novell NetWare 2.x and 3.x networks. The frame type was
developed prior to completion of the IEEE 802.3 specification and may not work in all ethernet
environments.
• Ethernet II - This frame type combines the 802.3 preamble and SFD fields and includes a protocol type field where the 802.3 frame contained a length field. TCP/IP networks and networks that use multiple protocols normally use this type of frame.
• Ethernet SNAP - This frame type builds on the 802.2 frame type by adding a type field indicating
what network protocol is being used to send data. This frame type is mainly used in AppleTalk
networks.
The packet size of all the above frame types is between 64 and 1,518 bytes.
Ethernet Message Formats
The ethernet data format is defined by RFC 894 and 1042. The addresses specified in the ethernet protocol
are 48 bit addresses.
The types of data passed in the type field are as follows:
1. 0800 IP Datagram
2. 0806 ARP request/reply
3. 8035 RARP request/reply
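These type values let a receiver demultiplex Ethernet II frames. A hypothetical sketch of reading the 2-byte type field that follows the two 6-byte MAC addresses (the mapping mirrors the list above; the helper names are our own):

```python
# EtherType values listed in the text
ETHERTYPES = {
    0x0800: "IP Datagram",
    0x0806: "ARP request/reply",
    0x8035: "RARP request/reply",
}

def classify(frame):
    """Read the 2-byte type field of an Ethernet II frame
    (bytes 12-13, after the 6-byte destination and source addresses)."""
    ethertype = int.from_bytes(frame[12:14], "big")
    return ETHERTYPES.get(ethertype, f"unknown (0x{ethertype:04x})")

# zeroed addresses, ARP type, minimum 46-byte payload
frame = bytes(6) + bytes(6) + (0x0806).to_bytes(2, "big") + bytes(46)
print(classify(frame))  # ARP request/reply
```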
There is a maximum size of each data packet for the ethernet protocol. This size is called the maximum
transmission unit (MTU). What this means is that sometimes packets may be broken up as they are passed
through networks with MTUs of various sizes. SLIP and PPP protocols will normally have a smaller MTU
value than ethernet. This document does not describe serial line interface protocol (SLIP) or point to point
protocol (PPP) encapsulation.
LAYER 2 & 3 SWITCHING
Layer 2 switches are frequently installed in the enterprise for high-speed connectivity between end stations
at the data link layer. Layer 3 switches are a relatively new phenomenon, made popular by (among others)
the trade press.
Layer 2 Switches
Bridging involves segmentation of local-area networks (LANs) at the Layer 2 level. A multiport bridge
typically learns about the Media Access Control (MAC) addresses on each of its ports and transparently
passes MAC frames destined to those ports. These bridges also ensure that frames destined for MAC
addresses that lie on the same port as the originating station are not forwarded to the other ports.
Layer 2 switches effectively provide the same functionality. They are similar to multiport bridges in that
they learn and forward frames on each port. The major difference is the involvement of hardware that
ensures that multiple switching paths inside the switch can be active at the same time. For example,
consider Figure 1, which details a four-port switch with stations A on port 1, B on port 2, C on port 3 and D
on port 4. Assume that A desires to communicate with B, and C desires to communicate with D. In a single
CPU bridge, this forwarding would typically be done in software, where the CPU would pick up frames
from each of the ports sequentially and forward them to appropriate output ports. This process is highly
inefficient in a scenario like the one indicated previously, where the traffic between A and B has no relation
to the traffic between C and D.
Figure 1: Layer 2 switch with External Router for Inter-VLAN traffic and connecting to the Internet


Enter hardware-based Layer 2 switching. Layer 2 switches with their hardware support are able to forward such frames in parallel so that A and B and C and D can have simultaneous conversations. The parallelism has many advantages. Assume that A and B are NetBIOS stations, while C and D are Internet Protocol (IP) stations. There may be no reason for communication between A and C or between A and D. Layer 2 switching allows this coexistence without sacrificing efficiency.
Characteristics
Layer 2 switches themselves act as IP end nodes for Simple Network Management Protocol (SNMP) management, Telnet, and Web-based management. Such management functionality involves the presence of an IP stack on the switch along with User Datagram Protocol (UDP), Transmission Control Protocol (TCP), Telnet, and SNMP functions. The switches themselves have a MAC address so that they can be addressed as a Layer 2 end node while also providing transparent switch functions. Layer 2 switching does not, in general, involve changing the MAC frame. However, there are situations when switches change the MAC frame.
The same principles also apply to Layer 2 switches, and most commercial Layer 2 switches support the Spanning-Tree Protocol. The previous discussion provides an outline of Layer 2 switching functions. Layer 2 switching is MAC-frame based, does not in general involve altering the MAC frame, and provides transparent switching of MAC frames in parallel. Since these switches operate at Layer 2, they are protocol independent. However, Layer 2 switching does not scale well because of broadcasts. Although VLANs alleviate this problem to some extent, there is definitely a need for machines on different VLANs to communicate. One example is the situation where an organization has multiple intranet servers on separate subnets (and hence VLANs), causing a lot of intersubnet traffic. In such cases, use of a router is unavoidable; Layer 3 switches enter at this point.

Layer 3 Switches
One school uses this term to describe fast IP routing via hardware, while another school uses it to describe Multi Protocol Over ATM (MPOA). For the purpose of this discussion, Layer 3 switches are superfast routers that do Layer 3 forwarding in hardware. In this article, we will mainly discuss Layer 3 switching in the context of fast IP routing, with a brief discussion of the other areas of application.

Evolution
Consider the Layer 2 switching context shown in Figure 1. Layer 2 switches operate well when there is
very little traffic between VLANs. Such VLAN traffic requires a router, either attached to one of the ports
as a one-armed router or present internally within the switch. To augment Layer 2 functionality we need a
router, which leads to a loss of performance since routers are typically slower than switches. This scenario
leads to the question: Why not implement a router in the switch itself, as discussed in the previous section,
and do the forwarding in hardware?

Although this setup is possible, it has a limitation: Layer 2 switches need to operate only on the Ethernet
MAC frame, which leads to a well-defined forwarding algorithm that can be implemented in hardware.
That algorithm cannot be extended easily to Layer 3 protocols: first, there are multiple Layer 3 routable
protocols such as IP, IPX, AppleTalk, and so on; second, the forwarding decision in such protocols is
typically more complicated than a Layer 2 forwarding decision.

What is the engineering compromise? Because IP is the most common among all Layer 3 protocols today,
most of the Layer 3 switches today perform IP switching at the hardware level and forward the other
protocols at Layer 2 (that is, bridge them). The second issue of complicated Layer 3 forwarding decisions is
best illustrated by IP option processing, which typically causes the length of the IP header to vary,
complicating the building of a hardware forwarding engine. However, a large number of IP packets do not
include IP options, so it may be overkill to design this processing into silicon. The compromise is that the
most common (fast-path) forwarding decision is designed into silicon, whereas the others are handled
typically by a CPU on the Layer 3 switch.

To summarize, Layer 3 switches are routers with fast forwarding done via hardware. IP forwarding
typically involves a route lookup, decrementing the Time To Live (TTL) count and recalculating the
checksum, and forwarding the frame with the appropriate MAC header to the correct output port. Lookups
can be done in hardware, as can the decrementing of the TTL and the recalculation of the checksum. The
routers run routing protocols such as Open Shortest Path First (OSPF) or Routing Information Protocol
(RIP) to communicate with other Layer 3 switches or routers and build their routing tables. These routing
tables are looked up to determine the route for an incoming packet.
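The fast-path operations just listed (route lookup, TTL decrement, checksum recalculation) can be sketched in software. This is only an illustrative model, assuming a 20-byte option-free IPv4 header and a flat route table keyed by the full destination address; real Layer 3 switches do these steps in silicon with prefix-based lookups.

```python
import struct

def ipv4_checksum(header: bytes) -> int:
    """Ones'-complement sum of 16-bit words, per the standard IPv4 rule
    (the checksum field itself must be zeroed before computing)."""
    total = sum(struct.unpack("!%dH" % (len(header) // 2), header))
    while total > 0xFFFF:                      # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def forward(header: bytearray, routes: dict) -> str:
    """One fast-path step on a 20-byte IPv4 header: check and decrement
    the TTL, refresh the header checksum, and look up the output port."""
    if header[8] <= 1:
        return "drop"                          # TTL expired
    header[8] -= 1
    header[10:12] = b"\x00\x00"                # zero the old checksum
    header[10:12] = struct.pack("!H", ipv4_checksum(bytes(header)))
    dst = ".".join(str(b) for b in header[16:20])
    return routes.get(dst, "drop")             # hypothetical flat route table
```

A valid IPv4 header has the property that re-running the checksum over the whole header (checksum field included) yields zero, which makes the refresh step easy to verify.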

Combined Layer 2/Layer 3 Switches


We have implicitly assumed that Layer 3 switches also provide Layer 2 switching functionality, but this
assumption does not always hold true. Layer 3 switches can act like traditional routers hanging off multiple
Layer 2 switches and provide inter-VLAN connectivity. In such cases, there is no Layer 2 functionality
required in these switches. This concept can be illustrated by extending the topology in Figure 1: consider
placing a pure Layer 3 switch between the Layer 2 switch and the router. The Layer 3 switch would
offload the router from inter-VLAN processing.
Figure 2: Combined Layer2/Layer3 Switch connecting directly to the Internet


Figure 2 illustrates the combined Layer 2/Layer 3 switching functionality. The combined Layer 2/Layer 3
switch replaces the traditional router also. A and B belong to IP subnet 1, while C and D belong to IP
subnet 2. Since the switch in consideration is also a Layer 2 switch, it switches traffic between A and B at
Layer 2. Now consider the situation when A wishes to communicate with C. A sends the IP packet
addressed to the MAC address of the Layer 3 switch, but with an IP destination address equal to C's IP
address. The Layer 3 switch strips out the MAC header and switches the frame to C after performing the
lookup, decrementing the TTL, recalculating the checksum, and inserting C's MAC address in the
destination MAC address field. All of these steps are done in hardware at very high speeds.

Now how does the switch know that C's IP destination address is on Port 3? When it performs learning at
Layer 2, it only knows C's MAC address. There are multiple ways to solve this problem. The switch can
perform an Address Resolution Protocol (ARP) lookup on all the IP subnet 2 ports for C's MAC address
and determine C's IP-to-MAC mapping and the port on which C lies. The other method is for the switch to
determine C's IP-to-MAC mapping by snooping into the IP header on reception of a MAC frame.
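The second (snooping) method amounts to maintaining a small learning table. The sketch below is illustrative only; the `L3Switch` class and its method names are invented for the example, and a real switch would also age entries out.

```python
class L3Switch:
    """Toy model of IP address learning: snoop the source MAC and source IP
    of each received frame to build an IP -> (MAC, port) map."""

    def __init__(self):
        self.mac_table = {}   # MAC address -> port  (ordinary Layer 2 learning)
        self.ip_table = {}    # IP address  -> (MAC address, port)

    def snoop(self, port, src_mac, src_ip):
        """Called on every received frame that carries an IP packet."""
        self.mac_table[src_mac] = port
        self.ip_table[src_ip] = (src_mac, port)

    def next_hop(self, dst_ip):
        """Return (MAC, port) for a directly attached destination, if learned."""
        return self.ip_table.get(dst_ip)
```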

Characteristics
Configuration of the Layer 3 switches is an important issue. When the Layer 3 switches also perform Layer
2 switching, they learn the MAC addresses on the ports; the only configuration required is the VLAN
configuration. For Layer 3 switching, the switches can be configured with the ports corresponding to each
of the subnets, or they can perform IP address learning. This process involves snooping into the IP header
of the MAC frames and determining the subnet on that port from the source IP address. When the Layer 3
switch acts as a one-armed router for a Layer 2 switch, the same port may carry multiple IP subnets.

Management of the Layer 3 switches is typically done via SNMP. Layer 3 switches also have MAC
addresses for their ports; this can be one per port, or all ports can share the same MAC address. The
Layer 3 switches typically use this MAC address for SNMP, Telnet, and Web management communication.

Conceptually, the ATM Forum's LAN Emulation (LANE) specification is closer to the Layer 2 switching
model, while MPOA is closer to the Layer 3 switching model. Numerous Layer 2 switches are equipped
with ATM interfaces and provide a LANE client function on that ATM interface. This scenario allows the
bridging of MAC frames across an ATM network from switch to switch. MPOA is closer to combined
Layer 2/Layer 3 switching, though the MPOA client does not have any routing protocols running on it.
(Routing is left to the MPOA server under the Virtual Router model.)

Do Layer 3 switches completely eliminate the need for the traditional router? No, routers are still needed,
especially where connections to the wide area are required. Layer 3 switches may still connect to such
routers to learn their tables and route packets to them when these packets need to be sent over the WAN.
The switches will be very effective on the workgroup and the backbone within an enterprise, but most
likely will not replace the router at the edge of the WAN (read Internet in many cases). Routers perform
numerous other functions like filtering with access lists, inter-Autonomous System (AS) routing with
protocols such as the Border Gateway Protocol (BGP), and so on. Some Layer 3 switches may completely
replace the need for a router if they can provide all these functions (see Figure 2).
FAST ETHERNET:
Definition: Fast Ethernet supports a maximum data rate of 100 Mbps. It is so named because original
Ethernet technology supported only 10 Mbps. Fast Ethernet began to be widely deployed in the mid-1990s
as the need for greater LAN performance became critical to universities and businesses.
A key element of Fast Ethernet's success was its ability to coexist with existing network installations.
Today, many network adapters support both traditional and Fast Ethernet. These so-called "10/100"
adapters can usually sense the speed of the line automatically and adjust accordingly. Just as Fast Ethernet
improved on traditional Ethernet, Gigabit Ethernet improves on Fast Ethernet, offering rates up to 1000
Mbps instead of 100 Mbps.
Also Known As: 100 Mbps Ethernet
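The rate figures above translate into transfer times by simple arithmetic: time equals bits divided by bits per second. A rough sketch, ignoring all protocol overhead and assuming the link runs at its full nominal rate:

```python
def transfer_time_seconds(size_bytes: float, rate_mbps: float) -> float:
    """Ideal, overhead-free time to move size_bytes over a rate_mbps link.
    1 Mbps is taken as exactly 1,000,000 bits per second."""
    return size_bytes * 8 / (rate_mbps * 1_000_000)

# A 100 MB file takes about 80 s at 10 Mbps but only 8 s at Fast
# Ethernet's 100 Mbps, which is why the tenfold rate increase mattered.
```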

GIGABIT ETHERNET
Definition: Gigabit Ethernet is an extension to the family of Ethernet computer networking and
communication standards. The Gigabit Ethernet standard supports a theoretical maximum data rate of 1
Gbps (1000 Mbps).
At one time, it was believed that achieving Gigabit speeds with Ethernet required fiber optic or other
special cables. However, Gigabit Ethernet can be implemented on ordinary twisted pair copper cable
(specifically, the CAT5e and CAT6 cabling standards).
Migration of existing computer networks from 100 Mbps Fast Ethernet to Gigabit Ethernet is happening
slowly. Much legacy Ethernet technology exists (in both 10 and 100 Mbps varieties), and these older
technologies offer sufficient performance in many cases.
Today, Gigabit Ethernet is found mainly in research institutions. A decrease in cost, an increase in
demand, and improvements in other aspects of LAN technology will be required before Gigabit Ethernet
surpasses other forms of wired networking in terms of adoption.
Also Known As: 1000 Mbps Ethernet

LECTURE NO.25 READINGS: A-PAGE389,390,396


LAN INTERCONNECTING DEVICES:
Repeaters
A repeater is an electronic device that receives a signal and retransmits it at a higher power level, or to the
other side of an obstruction, so that the signal can cover longer distances without degradation. In most
twisted-pair Ethernet configurations, repeaters are required for cable runs longer than 100 meters.
Hubs
A hub contains multiple ports. When a packet arrives at one port, it is copied to all the other ports of the
hub for transmission; the destination address in the frame is not changed to a broadcast address. The hub
works in a rudimentary way: it simply repeats the data to all of the nodes connected to it.[2]
Bridges
A network bridge connects multiple network segments at the data link layer (layer 2) of the OSI model.
Bridges do not promiscuously copy traffic to all ports, as hubs do, but learn which MAC addresses are
reachable through specific ports. Once the bridge associates a port and an address, it will send traffic for
that address only to that port. Bridges do send broadcasts to all ports except the one on which the broadcast
was received.
Bridges learn the association of ports and addresses by examining the source address of the frames they see
on various ports. Once a frame arrives through a port, its source address is stored and the bridge assumes
that MAC address is associated with that port. The first time a previously unknown destination address
is seen, the bridge forwards the frame to all ports other than the one on which the frame arrived.
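The learning, flooding, and filtering rules just described can be sketched as a small table-driven model. This is a toy illustration (real bridges also age out table entries and run Spanning-Tree); the class name and integer port numbering are invented for the example.

```python
class LearningBridge:
    """Toy transparent bridge: learn source MACs, filter known destinations,
    flood unknown destinations and broadcasts out every other port."""

    BROADCAST = "ff:ff:ff:ff:ff:ff"

    def __init__(self, ports):
        self.ports = set(ports)
        self.table = {}                        # MAC address -> port

    def receive(self, in_port, src_mac, dst_mac):
        """Return the set of ports the frame is forwarded out of."""
        self.table[src_mac] = in_port          # learn the source association
        if dst_mac == self.BROADCAST:
            return self.ports - {in_port}      # broadcast: all ports but ingress
        out = self.table.get(dst_mac)
        if out is None:
            return self.ports - {in_port}      # unknown destination: flood
        if out == in_port:
            return set()                       # already on the right segment
        return {out}                           # known destination: filter
```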
Bridges come in three basic types:
1. Local bridges: Directly connect local area networks (LANs)
2. Remote bridges: Can be used to create a wide area network (WAN) link between LANs. Remote
bridges, where the connecting link is slower than the end networks, largely have been replaced by
routers.
3. Wireless bridges: Can be used to join LANs or connect remote stations to LANs.
Switches
A switch is a device that performs switching. Specifically, it forwards and filters OSI Layer 2 datagrams
(chunks of data communication) between ports (connected cables) based on the MAC addresses in the
packets.[3] This is distinct from a hub in that the switch forwards each datagram only to the ports involved
in the communication rather than to all ports connected. Strictly speaking, a switch is not capable of routing
traffic based on IP address (Layer 3), which is necessary for communicating between network segments or
within a large or complex LAN. Some switches are capable of routing based on IP addresses but are still
called switches as a marketing term. A switch normally has numerous ports, with the intention being that
most or all of the network is connected directly to the switch, or to another switch that is in turn connected
to a switch.[4]

Switch is a marketing term that encompasses routers and bridges, as well as devices that may distribute
traffic on load or by application content (e.g., a Web URL identifier). Switches may operate at one or more
OSI model layers, including physical, data link, network, or transport (i.e., end-to-end). A device that
operates simultaneously at more than one of these layers is called a multilayer switch.
Overemphasizing the ill-defined term "switch" often leads to confusion when first trying to understand
networking. Many experienced network designers and operators recommend starting with the logic of
devices dealing with only one protocol level, not all of which are covered by OSI. Multilayer device
selection is an advanced topic that may lead to selecting particular implementations, but multilayer
switching is simply not a real-world design concept.
Routers
Routers are networking devices that forward data packets between networks using headers and forwarding
tables to determine the best path to forward the packets. Routers work at the network layer of the TCP/IP
model or layer 3 of the OSI model. Routers also provide interconnectivity between like and unlike media
(RFC 1812). This is accomplished by examining the header of a data packet and making a decision on the
next hop to which it should be sent (RFC 1812). They use preconfigured static routes, the status of their
hardware interfaces, and routing protocols to select the best route between any two subnets. A router is
connected to at least two networks, commonly two LANs or WANs, or a LAN and its ISP's network. Some
DSL and cable modems, for home (and even office) use, have been integrated with routers to allow
multiple home/office computers to access the Internet through the same connection. Many of these newer
devices also include wireless access points (WAPs) or wireless routers to allow IEEE 802.11b/g
wireless-enabled devices to connect to the network without the need for cabled connections.
GATEWAYS:
A gateway is a device that connects a networked computer with other computers on dissimilar networks.
The gateway is capable of converting data frames and network protocols into the format needed by another
network. It is a node on a network that serves as an entrance to another network. In enterprises, the gateway
is the computer that routes the traffic from a workstation to the outside network that is serving the Web
pages. In homes, the gateway is the ISP that connects the user to the Internet.
In enterprises, the gateway node often acts as a proxy server and a firewall. The gateway is also associated
with both a router, which uses headers and forwarding tables to determine where packets are sent, and a
switch, which provides the actual path for the packet in and out of the gateway.
(2) A computer system located on earth that switches data signals and voice signals between satellites and
terrestrial networks.
(3) An earlier term for router, now obsolete in this sense.

LECTURE NO.26 READINGS:

http://compnetworking.about.com/cs/lanvlanwan/g/bldef_wan.htm
A-497
WAN:
Definition: A WAN spans a large geographic area, such as a state, province or country. WANs often
connect multiple smaller networks, such as local area networks (LANs) or metro area networks (MANs).
The world's most popular WAN is the Internet. Some segments of the Internet, like VPN-based extranets,
are also WANs in themselves. Finally, many WANs are corporate or research networks that utilize leased
lines.
WANs generally utilize different and much more expensive networking equipment than do LANs. Key
technologies often found in WANs include SONET, Frame Relay, and ATM.

ROUTING:
Routing (also spelled routeing) is the process of selecting paths in a network along which to send network
traffic. Routing is performed for many kinds of networks, including the telephone network, electronic data
networks (such as the Internet), and transportation (transport) networks. This article is concerned primarily
with routing in electronic data networks using packet switching technology.
In packet switching networks, routing directs forwarding, the transit of logically addressed packets from
their source toward their ultimate destination through intermediate nodes; typically hardware devices called
routers, bridges, gateways, firewalls, or switches. Ordinary computers with multiple network cards can also
forward packets and perform routing, though they are not specialized hardware and may suffer from limited
performance. The routing process usually directs forwarding on the basis of routing tables which maintain a
record of the routes to various network destinations. Thus constructing routing tables, which are held in the
routers' memory, becomes very important for efficient routing. Most routing algorithms use only one
network path at a time, but multipath routing techniques enable the use of multiple alternative paths.
Routing, in a more narrow sense of the term, is often contrasted with bridging in its assumption that
network addresses are structured and that similar addresses imply proximity within the network. Because
structured addresses allow a single routing table entry to represent the route to a group of devices,
structured addressing (routing, in the narrow sense) outperforms unstructured addressing (bridging) in large
networks, and has become the dominant form of addressing on the Internet, though bridging is still widely
used within localized environments.
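Structured addressing lets a single table entry cover a whole group of destinations; forwarding then means picking the most specific (longest) matching prefix. A sketch of that lookup, assuming a plain dictionary of CIDR prefixes (a real router would use a trie rather than a linear scan, and the next-hop names here are invented):

```python
import ipaddress

def longest_prefix_match(table, dst):
    """table maps CIDR prefix strings to next hops; return the next hop
    whose prefix matches dst most specifically, or None if nothing matches."""
    addr = ipaddress.ip_address(dst)
    best, best_len = None, -1
    for prefix, hop in table.items():
        net = ipaddress.ip_network(prefix)
        # A longer prefix length means a more specific (smaller) group.
        if addr in net and net.prefixlen > best_len:
            best, best_len = hop, net.prefixlen
    return best
```

This is the sense in which one structured entry such as 10.0.0.0/8 stands in for millions of individual addresses, something flat (bridged) MAC addressing cannot do.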

Routing schemes differ in their delivery semantics:


• unicast delivers a message to a single specified node;
• broadcast delivers a message to all nodes in the network;
• multicast delivers a message to a group of nodes that have expressed interest in receiving the
message;
• anycast delivers a message to any one out of a group of nodes, typically the one nearest to the
source.
Unicast is the dominant form of message delivery on the Internet, and this article focuses on unicast routing
algorithms.
Adaptive routing
Adaptive routing describes the capability of a system, through which routes are characterised by their
destination, to alter the path that the route takes through the system in response to a change in conditions.
The adaptation is intended to allow as many routes as possible to remain valid (that is, have destinations
that can be reached) in response to the change.
People using a transport system can display adaptive routing. For example, if a local railway station is
closed, people can alight from a train at a different station and use another method, such as a bus, to reach
their destination.
The term is commonly used in data networking to describe the capability of a network to 'route around'
damage, such as loss of a node or a connection between nodes, so long as other path choices are available.
There are several protocols used to achieve this:
• RIP
• OSPF
• IS-IS
• IGRP/EIGRP

Systems that do not implement adaptive routing are described as using static routing, where routes through
a network are described by fixed paths (statically). A change, such as the loss of a node, or loss of a
connection between nodes, is not compensated for. This means that any traffic wishing to take an affected
path must either wait for the failure to be repaired before restarting its journey, or fail to reach its
destination and give up the journey.
LECTURE NO.27 READINGS: A-PAGE638
Congestion control:
Congestion control concerns controlling traffic entry into a telecommunications network, so as to avoid
congestive collapse, by attempting to avoid oversubscription of any of the processing or link capabilities
of the intermediate nodes and networks and by taking resource-reducing steps, such as reducing the rate of
sending packets. It should not be confused with flow control, which prevents the sender from
overwhelming the receiver.
Theory of congestion control
The modern theory of congestion control was pioneered by Frank Kelly, who applied microeconomic
theory and convex optimization theory to describe how individuals controlling their own rates can interact
to achieve an "optimal" network-wide rate allocation.
Examples of "optimal" rate allocation are max-min fair allocation and Kelly's suggestion of proportionally
fair allocation, although many others are possible.
The mathematical expression for optimal rate allocation is as follows. Let xi be the rate of flow i, Cl be the
capacity of link l, and rli be 1 if flow i uses link l and 0 otherwise. Let x, c and R be the corresponding
vectors and matrix. Let U(x) be an increasing, strictly concave function, called the utility, which measures
how much benefit a user obtains by transmitting at rate x. The optimal rate allocation then satisfies

    maximize  Σi U(xi)   such that   R x ≤ c.

The Lagrange dual of this problem decouples, so that each flow sets its own rate, based only on a "price"
signalled by the network. Each link capacity imposes a constraint, which gives rise to a Lagrange
multiplier, pl. The sum of these Lagrange multipliers,

    yi = Σl pl rli,

is the price to which the flow responds.
Congestion control then becomes a distributed optimisation algorithm for solving the above problem. Many
current congestion control algorithms can be modelled in this framework, with pl being either the loss
probability or the queueing delay at link l.
A major weakness of this model is that it assumes all flows observe the same price, while sliding window
flow control causes "burstiness" which causes different flows to observe different loss or delay at a given
link.
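The distributed-optimisation view can be made concrete with a small simulation. The sketch below assumes U(x) = log x (proportional fairness, so each flow responds to its price with xi = 1/yi) and an invented two-link, three-flow topology; each link raises its price pl when its load exceeds capacity and lowers it otherwise. The step size and iteration count are arbitrary illustrative choices.

```python
# rli matrix: flow 0 crosses both links, flows 1 and 2 use one link each.
R = [[1, 1, 0],
     [1, 0, 1]]
C = [1.0, 1.0]        # link capacities

def dual_rate_control(R, C, gamma=0.05, iters=5000):
    """Gradient iteration on the dual: each flow i sees the path price
    yi = sum_l pl*rli and sets xi = 1/yi (since U'(x) = 1/x for U = log);
    each link l nudges its price by gamma * (load - capacity)."""
    n_links, n_flows = len(R), len(R[0])
    p = [1.0] * n_links
    x = [0.0] * n_flows
    for _ in range(iters):
        y = [sum(p[l] * R[l][i] for l in range(n_links)) for i in range(n_flows)]
        x = [1.0 / yi for yi in y]
        for l in range(n_links):
            load = sum(R[l][i] * x[i] for i in range(n_flows))
            p[l] = max(1e-6, p[l] + gamma * (load - C[l]))   # prices stay positive
    return x

# Proportionally fair optimum for this topology: x0 = 1/3, x1 = x2 = 2/3.
```

The long flow, which consumes capacity on both links, pays two prices and so settles at half the rate of the short flows, exactly the behaviour proportional fairness predicts.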
Classification of congestion control algorithms
There are many ways to classify congestion control algorithms:
• By the type and amount of feedback received from the network: Loss; delay; single-bit or multi-bit
explicit signals
• By incremental deployability on the current Internet: Only sender needs modification; sender and
receiver need modification; only router needs modification; sender, receiver and routers need
modification.
• By the aspect of performance it aims to improve: high bandwidth-delay product networks; lossy
links; fairness; advantage to short flows; variable-rate links
• By the fairness criterion it uses: max-min, proportional, "minimum potential delay"

WAN TECHNOLOGIES:
WANs are all about exchanging information across wide geographic areas. They are also, as you can
probably gather from reading about the Internet, about scalability—the ability to grow to accommodate the
number of users on the network, as well as to accommodate the demands those users place on network
facilities. Although the nature of a WAN—a network reliant on communications for covering sometimes
vast distances—generally dictates slower throughput, longer delays, and a greater number of errors than
typically occur on a LAN, a WAN is also the fastest, most effective means of transferring computer-based
information currently available.
The Way of a WAN
To at least some extent, WANs are defined by their methods of transmitting data packets. True, the means
of communication must be in place. True, too, the networks making up the WAN must be up and running.
And the administrators of the network must be able to monitor traffic, plan for growth, and alleviate
bottlenecks. But in the end, part of what makes a WAN a WAN is its ability to ship packets of data from
one place to another, over whatever infrastructure is in place. It is up to the WAN to move those packets
quickly and without error, delivering them and the data they contain in exactly the same condition they left
the sender, even if they must pass through numerous intervening networks to reach their destination.
Picture, for a moment, a large network with many subnetworks, each of which has many individual users.
To the users, this large network is (or should be) transparent—so smoothly functioning that it is invisible.
After all, they neither know nor care whether the information they need is on server A or server B, whether
the person with whom they want to communicate is in city X or city Y, or whether the underlying network
runs this protocol or that one. They know only that they want the network to work, and that they want their
information needs satisfied accurately, efficiently, and as quickly as possible.
Now picture the same situation from the network's point of view. It "sees" hundreds, thousands, and
possibly even tens of thousands of network computers or terminals and myriad servers of all kinds—print,
file, mail, and even servers offering Internet access—not to mention different types of computers,
gateways, routers, and communications devices. In theory, any one of these devices could communicate
with, or transmit information through, any other device. Any PC, for instance, could decide to access any of
the servers on the network, no matter whether that server is in the same building or in an office in another
country. To complicate matters even more, two PCs might try to access the same server, and even the same
resource, at the same time. And of course, the chance that only one node anywhere on the network is active
at any given time is minuscule, even in the coldest, darkest hours of the night.
So, in both theory and practice, this widespread network ends up interconnecting thousands or hundreds of
thousands of individual network "dots," connecting them temporarily but on demand. How can it go about
the business of shuffling data ranging from quick e-mails to large (in terms of bytes) documents and even
larger graphic images, sound files, and so on, when the possible interconnections between and among
nodes would make a bowl of spaghetti look well organized by comparison? The solution is in the routing,
which involves several different switching technologies.
Switching of any type involves moving something through a series of intermediate steps, or segments,
rather than moving it directly from start point to end point. Trains, for example, can be switched from track
to track, rather than run on a single, uninterrupted piece of track, and still reach their intended destination.
Switching in networks works in somewhat the same way: Instead of relying on a permanent connection
between source and destination, network switching relies on series of temporary connections that relay
messages from station to station. Switching serves the same purpose as the direct connection, but it uses
transmission resources more efficiently.
WANs (and LANs, including Ethernet and Token Ring) rely primarily on packet switching, but they also
make use of circuit switching, message switching, and the relatively recent, high-speed packet-switching
technology known as cell relay.
Circuit Switching
Circuit switching involves creating a direct physical connection between sender and receiver, a connection
that lasts as long as the two parties need to communicate. In order for this to happen, of course, the
connection must be set up before any communication can occur. Once the connection is made, however, the
sender and receiver can count on "owning" the bandwidth allotted to them for as long as they remain
connected.
Although both the sender and receiver must abide by the same data transfer speed, circuit switching does
allow for a fixed (and rapid) rate of transmission. The primary drawback to circuit switching is the fact that
any unused bandwidth remains exactly that: unused. Because the connection is reserved only for the two
communicating parties, that unused bandwidth cannot be "borrowed" for any other transmission.
The most common form of circuit switching happens in that most familiar of networks, the telephone
system, but circuit switching is also used in some networks. Currently available ISDN lines, also known as
narrowband ISDN, and the form of T1 known as switched T1 are both examples of circuit-switched
communications technologies.
Message Switching
Unlike circuit switching, message switching does not involve a direct physical connection between sender
and receiver. When a network relies on message switching, the sender can fire off a transmission—after
addressing it appropriately—whenever it wants. That message is then routed through intermediate stations
or, possibly, to a central network computer. Along the way, each intermediary accepts the entire message,
scrutinizes the address, and then forwards the message to the next party, which can be another intermediary
or the destination node.
What's especially notable about message-switching networks, and indeed happens to be one of their
defining features, is that the intermediaries aren't required to forward messages immediately. Instead, they
can hold messages before sending them on to their next destination. This is one of the advantages of
message switching. Because the intermediate stations can wait for an opportunity to transmit, the network
can avoid, or at least reduce, heavy traffic periods, and it has some control over the efficient use of
communication lines.

Packet Switching
Packet switching, although it is also involved in routing data within and between LANs such as Ethernet
and Token Ring, is also the backbone of WAN routing. It's not the highway on which the data packets
travel, but it is the dispatching system and to some extent the cargo containers that carry the data from
place to place. In a sense, packet switching is the Federal Express or United Parcel Service of a WAN.
In packet switching, all transmissions are broken into units called packets, each of which contains
addressing information that identifies both the source and destination nodes. These packets are then routed
through various intermediaries, known as Packet Switching Exchanges (PSEs), until they reach their
destination. At each stop along the way, the intermediary inspects the packet's destination address, consults
a routing table, and forwards the packet at the highest possible speed to the next link in the chain leading to
the recipient.
As they travel from link to link, packets are often carried on what are known as virtual circuits—temporary
allocations of bandwidth over which the sending and receiving stations communicate after agreeing on
certain "ground rules," including packet size, flow control, and error control. Thus, unlike circuit switching,
packet switching typically does not tie up a line indefinitely for the benefit of sender and receiver.
Transmissions require only the bandwidth needed for forwarding any given packet, and because packet
switching is also based on multiplexing messages, many transmissions can be interleaved on the same
networking medium at the same time.
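Packetization itself is easy to sketch: break the payload into units, tag each with source, destination, and a sequence number, and let the receiver reorder and rejoin them. The field names below are illustrative only and do not correspond to any particular protocol's header format.

```python
def packetize(message: bytes, src, dst, size=4):
    """Split a message into fixed-size packets, each carrying the addressing
    information that lets it travel independently through the network."""
    return [{"src": src, "dst": dst, "seq": n, "data": message[i:i + size]}
            for n, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    """Receiver side: reorder by sequence number and rejoin the payload."""
    return b"".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))
```

Because each packet names its own destination and position, the network is free to route packets over different paths and deliver them out of order; the sequence numbers let the recipient restore the original message.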
Connectionless and Connection-Oriented Services
So packet-switched networks transfer data over variable routes in little bundles called packets. But how do
these networks actually make the connection between the sender and the recipient? The sender can't just
assume that a transmitted packet will eventually find its way to the correct destination. There has to be
some kind of connection—some kind of link between the sender and the recipient. That link can be based
on either connectionless or connection-oriented services, depending on the type of packet-switching
network involved.
• In a (so to speak) connectionless "connection," an actual communications link isn't established
between sender and recipient before packets can be transmitted. Each transmitted packet is
considered an independent unit, unrelated to any other. As a result, the packets making up a
complete message can be routed over different paths to reach their destination.
In a connection-oriented service, the communications link is made before any packets are transmitted.
Because the link is established before transmission begins, the packets comprising a message all follow the
same route to their destination. In establishing the link between sender and recipient, a connection-oriented
service can make use of either switched virtual circuits (SVCs) or permanent virtual circuits (PVCs):
• Using a switched virtual circuit is comparable to calling someone on the telephone. The
caller connects to the called computer, they exchange information, and then they
terminate the connection.
• Using a permanent virtual circuit, on the other hand, is more like relying on a leased line.
The line remains available for use at all times, even when no transmissions are passing
through it.
Types of Packet-Switching Networks
As you've seen, packet-based data transfer is what defines a packet-switching network. But—to confuse the
issue a bit—referring to a packet-switching network is a little like referring to tail-wagging canines as dogs.
Sure, they're dogs. But any given dog can also be a collie or a German shepherd or a poodle. Similarly, a
packet-switching network might be, for example, an X.25 network, a frame relay network, an ATM
(Asynchronous Transfer Mode) network, an SMDS (Switched Multimegabit Data Service), and so on.
X.25 packet-switching networks
Originating in the 1970s, X.25 is a connection-oriented, packet-switching protocol, originally based on the
use of ordinary analog telephone lines, that has remained a standard in networking for about twenty years.
Computers on an X.25 network carry on full-duplex communication, which begins when one computer
contacts the other and the called computer responds by accepting the call.
Although X.25 is a packet-switching protocol, its concern is not with the way packets are routed from
switch to switch between networks, but with defining the means by which sending and receiving computers
(known as DTEs) interface with the communications devices (DCEs) through which the transmissions
actually flow. X.25 has no control over the actual path taken by the packets making up any particular
transmission, and as a result the packets exchanged between X.25 networks are often shown as entering a
cloud at the beginning of the route and exiting the cloud at the end.
A recommendation of the ITU (formerly the CCITT), X.25 relates to the lowest three network layers—
physical, data link, and network— in the ISO reference model:
• At the lowest (physical) layer, X.25 specifies the means—electrical, mechanical, and so on—by
which communication takes place over the physical media. At this level, X.25 covers standards
such as RS-232, the ITU's V.24 specification for international connections, and the ITU's V.35
recommendation for high-speed modem signaling over multiple telephone circuits.
• At the next (data link) level, X.25 covers the link access protocol, known as LAPB (Link Access
Protocol, Balanced), that defines how packets are framed. The LAPB ensures that two
communicating devices can establish an error-free connection.
• At the highest level (in terms of X.25), the network layer, the X.25 protocol covers packet formats
and the routing and multiplexing of transmissions between the communicating devices.
On an X.25 network, transmissions are typically broken into 128-byte packets. They can, however, be as
small as 64 bytes or as large as 4096 bytes.
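As a rough illustration of the packet sizes above, segmentation into X.25-style packets can be sketched in Python (the function name and size check are illustrative, not part of any real X.25 implementation):

```python
def segment(message: bytes, packet_size: int = 128) -> list:
    """Split a message into fixed-size packets (the last may be shorter)."""
    if not 64 <= packet_size <= 4096:
        raise ValueError("X.25 packet sizes range from 64 to 4096 bytes")
    return [message[i:i + packet_size]
            for i in range(0, len(message), packet_size)]

packets = segment(b"x" * 300)
print([len(p) for p in packets])   # [128, 128, 44]
```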
LECTURE NO.28 READINGS:
http://www.networkdictionary.com/protocols/dqdb.php
http://www.nationmaster.com/encyclopedia/Synchronous-optical-networking
DQDB:
Distributed-queue dual-bus
In telecommunication, a distributed-queue dual-bus network (DQDB) is a distributed multi-access
network that (a) supports integrated communications using a dual bus and distributed queuing, (b) provides
access to local or metropolitan area networks, and (c) supports connectionless data transfer, connection-
oriented data transfer, and isochronous communications, such as voice communications.
IEEE 802.6 is an example of a network providing DQDB access methods.
DQDB Concept of Operation
The DQDB Medium Access Control (MAC) algorithm is generally credited to Robert Newman who
developed this algorithm in his PhD thesis in the 1980s at the University of Western Australia. To
appreciate the innovative value of the DQDB MAC algorithm, it must be seen against the background of
LAN protocols at that time, which were based on broadcast (such as ethernet IEEE 802.3) or a ring (like
token ring IEEE 802.5 and FDDI). The DQDB may be thought of as two token rings, one carrying data in
each direction around the ring. The ring is broken between two of the nodes in the ring. (An advantage of
this is that if the ring breaks somewhere else, the broken link can be closed to form a ring with only one
break again. This gives reliability which is important in Metropolitan Area Networks (MAN), where repairs
may take longer than in a LAN because the damage may be inaccessible).
The DQDB standard IEEE 802.6 was developed while ATM (Broadband ISDN) was still in early
development, but there was strong interaction between the two standards. ATM cells and DQDB frames
were harmonized. They both settled on essentially a 48-byte data frame with a 5-byte header. In the DQDB
algorithm, a distributed queue was implemented by communicating queue state information via the header.
Each node in a DQDB network maintains a pair of state variables which represent its position in the
distributed queue and the size of the queue. The headers on the reverse bus communicated requests to be
inserted in the distributed queue so that upstream nodes would know that they should allow DQDB cells to
pass unused on the forward bus. The algorithm was remarkable for its extreme simplicity.
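A much-simplified sketch of the pair of state variables described above: a request counter (the size of the queue downstream) and a countdown counter (the node's position once it joins the queue). The names `rq` and `cd` and the single-node framing are assumptions for illustration; the real IEEE 802.6 machinery is more involved.

```python
class DQDBNode:
    """One node's pair of state variables, per the counting rule above."""
    def __init__(self):
        self.rq = 0     # request counter: size of the queue downstream
        self.cd = None  # countdown counter: None means nothing queued here

    def reverse_bus_request(self):
        """A request bit from a downstream node passes on the reverse bus."""
        self.rq += 1    # count it (for our next segment if we are waiting)

    def queue_segment(self):
        """Join the distributed queue at the current queue position."""
        self.cd, self.rq = self.rq, 0

    def forward_bus_empty_slot(self):
        """An empty slot passes on the forward bus; True if we fill it."""
        if self.cd is None:
            self.rq = max(0, self.rq - 1)  # queue ahead of us shrank
            return False
        if self.cd == 0:
            self.cd = None                 # our turn: transmit in this slot
            return True
        self.cd -= 1                       # let the slot pass downstream
        return False

node = DQDBNode()
node.reverse_bus_request()   # two downstream nodes queued before us
node.reverse_bus_request()
node.queue_segment()         # our position in the distributed queue is 2
slots = [node.forward_bus_empty_slot() for _ in range(3)]
print(slots)                 # [False, False, True]: we take the third slot
```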
Currently DQDB systems are being installed by many carriers in entire cities, with lengths that reach up to 160 km (100 miles) at the speed of a DS3 line (44.736 Mbit/s) [5]. Other implementations use optical fiber for lengths of up to 100 km and speeds around 150 Mbit/s.
DQDB: Distributed Queue Dual Bus Defined in IEEE 802.6
Distributed Queue Dual Bus (DQDB) is a data link layer communication protocol for Metropolitan Area Networks (MANs), specified in the IEEE 802.6 standard. DQDB is designed for data as well as voice and video transmission based on cell
cell switching technology (similar to ATM). DQDB, which permits multiple systems to interconnect using
two unidirectional logical buses, is an open standard that is designed for compatibility with carrier
transmission standards such as SMDS, which is based on the DQDB standards.
For a MAN to be effective it requires a system that can function across long, "city-wide" distances of
several miles, have a low susceptibility to error, adapt to the number of nodes attached and have variable
bandwidth distribution. Using DQDB, networks can be thirty miles long and function in the range of 34
Mbps to 155 Mbps. The data rate fluctuates due to many hosts sharing a dual bus as well as the location of
a single host in relation to the frame generator, but there are schemes to compensate for this problem
making DQDB function reliably and fairly for all hosts.
The DQDB is composed of two bus lines with stations attached to both and a frame generator at the end
of each bus. The buses run in parallel in such a fashion as to allow the frames generated to travel across the
stations in opposite directions.
(Diagram of the basic DQDB architecture omitted.)
Protocol Structure - DQDB: Distributed Queue Dual Bus Defined in IEEE 802.6
The DQDB cell has a format similar to the ATM cell: a 5-byte header followed by a 48-byte payload (cell and header diagrams omitted).
SDH/SONET:
Synchronous optical networking (SONET) and Synchronous Digital Hierarchy (SDH), are two closely
related multiplexing protocols for transferring multiple digital bit streams using lasers or light-emitting
diodes (LEDs) over the same optical fiber. The method was developed to replace the Plesiochronous
Digital Hierarchy (PDH) system for transporting larger amounts of telephone calls and data traffic over the
same fiber wire without synchronization problems.
SONET and SDH were originally designed to transport circuit mode communications (e.g., T1, T3) from a variety of different sources. The primary difficulty in doing this prior to SONET was that the synchronization sources of these different circuits differed, meaning each circuit was actually operating at a slightly different rate and with a different phase. SONET allowed for the simultaneous transport of many different circuits of differing origin within one single framing protocol. In a sense, then, SONET is not itself a communications protocol per se, but a transport protocol.
Due to SONET's essential protocol neutrality and transport-oriented features, SONET was the obvious
choice for transporting ATM (Asynchronous Transfer Mode) frames, and so quickly evolved mapping
structures and concatenated payload containers so as to transport ATM connections. In other words, for
ATM (and eventually other protocols such as TCP/IP and ethernet), the internal complex structure
previously used to transport circuit-oriented connections is removed, and replaced with a large and
concatenated frame (such as STS-3c) into which ATM frames, IP packets, or ethernet is placed.
Both SDH and SONET are widely used today: SONET in the U.S. and Canada, and SDH in the rest of the world. Although the SONET standards were developed before SDH, their relative penetration in the worldwide market dictates that SONET is now considered the variant.
The two protocols are standardized according to the following:
• SDH or Synchronous Digital Hierarchy standard developed by the International
Telecommunication Union (ITU), documented in standard G.707 and its extension G.708
• SONET or Synchronous Optical Networking standard as defined by GR-253-CORE from
Telcordia and T1.105 from American National Standards Institute
Structure of SONET/SDH signals
(Diagram omitted.)
SONET and SDH often use different terms to describe identical features or functions, sometimes leading to
confusion that exaggerates their differences. With a few exceptions, SDH can be thought of as a superset of
SONET. The two main differences between the two:
• SONET can use either of two basic units for framing while SDH has one
• SDH has additional mapping options which are not available in SONET.
The basic unit of transmission
The basic unit of framing in SDH is a STM-1 (Synchronous Transport Module level - 1), which operates at
155.52 Mbit/s. SONET refers to this basic unit as an STS-3c (Synchronous Transport Signal - 3,
concatenated), but its high-level functionality, frame size, and bit-rate are the same as STM-1.
SONET offers an additional basic unit of transmission, the STS-1 (Synchronous Transport Signal - 1),
operating at 51.84 Mbit/s - exactly one third of an STM-1/STS-3c. Some manufacturers also support the
SDH equivalent STM-0, but this is not part of the standard.
Framing
In packet oriented data transmission such as Ethernet, a packet frame usually consists of a header and a
payload, with the header of the frame being transmitted first, followed by the payload (and possibly a
trailer, such as a CRC). In synchronous optical networking, this is modified slightly. The header is termed
the overhead and the payload still exists, but instead of the overhead being transmitted before the payload,
it is interleaved, with part of the overhead being transmitted, then part of the payload, then the next part of
the overhead, then the next part of the payload, until the entire frame has been transmitted. In the case of an
STS-1, the frame is 810 octets in size while the STM-1/STS-3c frame is 2430 octets in size. For STS-1, the
frame is transmitted as 3 octets of overhead, followed by 87 octets of payload. This is repeated nine times
over until 810 octets have been transmitted, taking 125 microseconds. In the case of an STS-3c/STM-1
which operates three times faster than STS-1, 9 octets of overhead are transmitted, followed by 261 octets
of payload. This is also repeated nine times over until 2,430 octets have been transmitted, also taking 125
microseconds. For both SONET and SDH, this is normally represented by the frame being displayed
graphically as a block: of 90 columns and 9 rows for STS-1; and 270 columns and 9 rows for STM1/STS-
3c. This representation aligns all the overhead columns, so the overhead appears as a contiguous block, as
does the payload.
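The interleaved transmission order for an STS-1 frame described above can be sketched as a toy model, with 'O' and 'P' standing in for overhead and payload octets:

```python
ROWS, OH_COLS, PAYLOAD_COLS = 9, 3, 87   # STS-1 geometry from the text

def serialize(overhead, payload):
    """Interleave per-row overhead and payload into the transmitted order."""
    stream = []
    for row in range(ROWS):
        stream += overhead[row * OH_COLS:(row + 1) * OH_COLS]
        stream += payload[row * PAYLOAD_COLS:(row + 1) * PAYLOAD_COLS]
    return stream

overhead = ["O"] * (ROWS * OH_COLS)       # 27 overhead octets
payload  = ["P"] * (ROWS * PAYLOAD_COLS)  # 783 payload octets
frame = serialize(overhead, payload)
print(len(frame))    # 810 octets per 125-microsecond frame
print(frame[:5])     # ['O', 'O', 'O', 'P', 'P']
```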
The internal structure of the overhead and payload within the frame differs slightly between SONET and
SDH, and different terms are used in the standards to describe these structures. However, the standards are
extremely similar in implementation, such that it is easy to interoperate between SDH and SONET at
particular bandwidths.
It is worth noting that the choice of a 125 microsecond interval is not an arbitrary one. What it means is that
the same octet position in each frame comes past every 125 microseconds. If one octet is extracted from the
bitstream every 125 microseconds, this gives a data rate of 8 bits per 125 microseconds - or 64 kbit/s, the
basic DS0 telecommunications rate. This relation allows an extremely useful behaviour of synchronous
optical networking, which is that low data rate channels or streams of data can be extracted from high data
rate streams by simply extracting octets at regular time intervals - there is no need to understand or decode
the entire frame. This is not possible in PDH networking. Furthermore, it shows that a relatively simple
device is all that is needed to extract a datastream from an SDH framed connection and insert it into a
SONET framed connection and vice versa.
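The octet-extraction property just described can be illustrated with a toy model (the STS-1 frame size and the chosen octet position are purely illustrative):

```python
FRAME_OCTETS = 810        # STS-1 frame, for illustration
FRAMES_PER_SECOND = 8000  # one frame every 125 microseconds

def extract_channel(stream, position):
    """Take the octet at `position` out of every frame in the stream."""
    return stream[position::FRAME_OCTETS]

# One second of frames; every octet is labelled with its column position.
stream = list(range(FRAME_OCTETS)) * FRAMES_PER_SECOND
channel = extract_channel(stream, position=42)
print(len(channel))       # 8000 octets per second
print(len(channel) * 8)   # 64000 bit/s: exactly the DS0 rate
```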
In practice, the terms STS-1 and OC-1 are sometimes used interchangeably, though the OC-N format refers
to the signal in its optical form. It is therefore incorrect to say that an OC-3 contains 3 OC-1s: an OC-3 can
be said to contain 3 STS-1s.
SDH Frame
(Figure: an STM-1 frame; diagram omitted.)
For the sake of simplicity, the frame is shown as a rectangular structure of 270 columns and 9 rows, although the protocol does not transmit the bytes in this order in practice. The first 3 rows of the first 9 columns contain the Regenerator Section Overhead (RSOH) and the last 5 rows of the first 9 columns contain the Multiplex Section Overhead (MSOH). The 4th row from the top contains the pointers.
The STM-1 (Synchronous Transport Module level - 1) frame is the basic transmission format for SDH: the fundamental frame, or first level, of the synchronous digital hierarchy. The STM-1 frame is transmitted in exactly 125 microseconds; therefore, there are 8000 frames per second. The STM-1 frame consists of overhead plus a virtual container
capacity. The first 9 columns of each frame make up the Section Overhead, and the last 261 columns make
up the Virtual Container (VC) capacity. The VC plus the pointers (H1, H2, H3 bytes) is called the AU
(Administrative Unit).
Carried within the VC capacity, which has its own frame structure of 9 rows and 261 columns, is the Path
Overhead and the Container. The first column is for Path Overhead; it’s followed by the payload container,
which can itself carry other containers. Virtual Containers can have any phase alignment within the
Administrative Unit, and this alignment is indicated by the Pointer in row four.
The Section overhead of an STM-1 signal (SOH) is divided into two parts: the Regenerator Section
Overhead (RSOH) and the Multiplex Section Overhead (MSOH). The overheads contain information from
the system itself, which is used for a wide range of management functions, such as monitoring transmission
quality, detecting failures, managing alarms, data communication channels, service channels, etc.
The STM frame is continuous and is transmitted in a serial fashion, byte-by-byte, row-by-row.
The STM-1 frame contains:
• Total content: 9 rows x 270 bytes = 2430 bytes
• Overhead: 9 rows x 9 bytes
• Payload: 9 rows x 261 bytes
• Period: 125 μs
• Bit rate: 155.520 Mbit/s (2430 x 8 bits x 8000 frames/s)
• Payload capacity: 150.336 Mbit/s (2349 x 8 bits x 8000 frames/s)
The transmission of the frame is done row by row, from the top left corner.
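The figures in the list above can be checked directly from the frame geometry:

```python
rows, cols, oh_cols, frames_per_s = 9, 270, 9, 8000
total   = rows * cols              # 2430 bytes per frame
payload = rows * (cols - oh_cols)  # 2349 bytes per frame
print(total, payload)                    # 2430 2349
print(total * 8 * frames_per_s / 1e6)    # 155.52  (Mbit/s line rate)
print(payload * 8 * frames_per_s / 1e6)  # 150.336 (Mbit/s payload capacity)
```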
LECTURE NO.29 READINGS: A-PAGE 446,441 to 443,
FRAME RELAY:
In the context of computer networking, frame relay consists of an efficient data transmission technique
used to send digital information. It is a message forwarding "relay race" like system in which data packets,
called frames, are passed from one or many start-points to one or many destinations via a series of
intermediate node points.
Network providers commonly implement frame relay for voice and data as an encapsulation technique,
used between local area networks (LANs) over a wide area network (WAN). Each end-user gets a private
line (or leased line) to a frame-relay node. The frame-relay network handles the transmission over a
frequently-changing path transparent to all end-users.
With the advent of MPLS, VPN and dedicated broadband services such as cable modem and DSL, the end
may loom for the frame relay protocol and encapsulation. However many rural areas remain lacking DSL
and cable modem services. In such cases the least expensive type of "always-on" connection remains a 64-
kbit/s frame-relay line. Thus a retail chain, for instance, may use frame relay for connecting rural stores
into their corporate WAN.
A basic Frame relay network
Design
The designers of frame relay aimed at a telecommunication service for cost-efficient data transmission for
intermittent traffic between local area networks (LANs) and between end-points in a wide area network
(WAN). Frame relay puts data in variable-size units called "frames" and leaves any necessary error-
correction (such as re-transmission of data) up to the end-points. This speeds up overall data transmission.
For most services, the network provides a permanent virtual circuit (PVC), which means that the customer
sees a continuous, dedicated connection without having to pay for a full-time leased line, while the service-
provider figures out the route each frame travels to its destination and can charge based on usage.
An enterprise can select a level of service quality - prioritizing some frames and making others less
important. Frame relay can run on fractional T-1 or full T-carrier system carriers. Frame relay complements
and provides a mid-range service between basic rate ISDN, which offers bandwidth at 128 kbit/s, and
Asynchronous Transfer Mode (ATM), which operates in somewhat similar fashion to frame relay but at
speeds from 155.520 Mbit/s to 622.080 Mbit/s.
Frame relay has its technical base in the older X.25 packet-switching technology, designed for transmitting
data on analog voice lines. Unlike X.25, whose designers expected analog signals, frame relay offers a fast
packet technology, which means that the protocol does not attempt to correct errors. When a frame relay
network detects an error in a frame, it simply drops that frame. The end points have the responsibility for
detecting and retransmitting dropped frames. (However, digital networks offer an incidence of error
extraordinarily small relative to that of analog networks.)
Frame relay often serves to connect local area networks (LANs) with major backbones as well as on public
wide-area networks (WANs) and also in private network environments with leased lines over T-1 lines. It
requires a dedicated connection during the transmission period. Frame relay does not provide an ideal path
for voice or video transmission, both of which require a steady flow of transmissions. However, under
certain circumstances, voice and video transmission do use frame relay.
Frame relay relays packets at the data link layer (layer 2) of the Open Systems Interconnection (OSI) model
rather than at the network layer (layer 3). A frame can incorporate packets from different protocols such as
Ethernet and X.25. It varies in size up to a thousand bytes or more.
Frame Relay originated as an extension of Integrated Services Digital Network (ISDN). Its designers aimed
to enable a packet-switched network to transport the circuit-switched technology. The technology has
become a stand-alone and cost-effective means of creating a WAN.
Frame Relay switches create virtual circuits to connect remote LANs to a WAN. The Frame Relay network
exists between a LAN border device, usually a router, and the carrier switch. The technology used by the
carrier to transport the data between the switches varies from carrier to carrier (i.e., Frame Relay
does not rely directly on the transportation mechanism to function).
The sophistication of the technology requires a thorough understanding of the terms used to describe how
Frame Relay works. Without a firm understanding of Frame Relay, it is difficult to troubleshoot its
performance.
Frame Relay has become one of the most extensively-used WAN protocols. Its cheapness (compared to
leased lines) provided one reason for its popularity. The extreme simplicity of configuring user equipment
in a Frame Relay network offers another reason for Frame Relay's popularity.
The frame-relay frame structure closely mirrors that defined for LAP-D. Traffic analysis can distinguish the frame relay format from LAP-D by its lack of a control field.
Each frame relay PDU consists of the following fields:
1. Flag Field. The flag is used to perform high-level data link synchronization which indicates the
beginning and end of the frame with the unique pattern 01111110. To ensure that the 01111110
pattern does not appear somewhere inside the frame, bit stuffing and destuffing procedures are
used.
2. Address Field. Each address field may occupy either octet 2 to 3, octet 2 to 4, or octet 2 to 5,
depending on the range of the address in use. A two-octet address field comprises the
EA=ADDRESS FIELD EXTENSION BITS and the C/R=COMMAND/RESPONSE BIT.
3. DLCI-Data Link Connection Identifier Bits. The DLCI serves to identify the virtual connection so
that the receiving end knows which information connection a frame belongs to. Note that this
DLCI has only local significance. A single physical channel can multiplex several different virtual
connections.
4. FECN, BECN, DE bits. These bits report congestion:
o FECN=Forward Explicit Congestion Notification bit
o BECN=Backward Explicit Congestion Notification bit
o DE=Discard Eligibility bit
5. Information Field. A system parameter defines the maximum number of data bytes that a host
can pack into a frame. Hosts may negotiate the actual maximum frame length at call set-up time.
The standard specifies the maximum information field size (supportable by any network) as at
least 262 octets. Since end-to-end protocols typically operate on the basis of larger information
units, frame relay recommends that the network support the maximum value of at least 1600 octets
in order to avoid the need for segmentation and reassembling by end-users.
6. Frame Check Sequence (FCS) Field. Since one cannot completely ignore the bit error-rate of the
medium, each switching node needs to implement error detection to avoid wasting bandwidth due
to the transmission of erred frames. The error detection mechanism used in frame relay uses the
cyclic redundancy check (CRC) as its basis.
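The bit stuffing and destuffing procedures mentioned for the flag field in item 1 above can be sketched as follows. This is a simplified model operating on bit strings, not an implementation of any particular protocol stack:

```python
def stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1 bits."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")   # break the run so 01111110 cannot appear
            run = 0
    return "".join(out)

def destuff(bits: str) -> str:
    """Remove the 0 that follows every run of five consecutive 1 bits."""
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            i += 2            # skip the stuffed 0 as well
            run = 0
        else:
            i += 1
    return "".join(out)

data = "0111111101111110"     # contains the flag pattern
stuffed = stuff(data)
print(stuffed)                    # 011111011011111010
print("01111110" in stuffed)      # False: the flag can no longer occur
print(destuff(stuffed) == data)   # True
```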
The frame relay network uses a simplified protocol at each switching node. It achieves simplicity by omitting link-by-link flow control. As a result, the offered load largely determines the performance of frame relay networks. When the offered load is high, due to the bursts in some services, temporary overload at some frame relay nodes causes a collapse in network throughput. Therefore, frame-relay networks require effective mechanisms to control congestion.
Congestion control in frame-relay networks includes the following elements:
1. Admission Control. This provides the principal mechanism used in frame relay to ensure that resource requirements are guaranteed once a connection is accepted. It also serves generally to achieve high network performance. The network decides whether to accept a new connection request based on the relation of the requested traffic descriptor to the network's residual capacity. The traffic descriptor is a set of parameters, communicated to the switching nodes at call set-up time or at service-subscription time, which characterizes the connection's statistical properties. It consists of three elements:
o Committed Information Rate (CIR). The average rate (in bit/s) at which the network guarantees to transfer information units over a measurement interval T. This interval is defined as T = Bc/CIR.
o Committed Burst Size (Bc). The maximum number of committed information units transmittable during the interval T.
o Excess Burst Size (Be). The maximum number of uncommitted information units (in bits) that the network will attempt to carry during the interval.
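As a sketch of how the three traffic-descriptor parameters interact (the numeric values below are invented for illustration): traffic up to Bc within the interval T is carried as committed, the next Be bits are carried but are discard-eligible, and anything beyond that is dropped.

```python
CIR = 64_000   # committed information rate, bit/s (illustrative)
BC  = 32_000   # committed burst size, bits (illustrative)
BE  = 16_000   # excess burst size, bits (illustrative)

T = BC / CIR
print(T)       # 0.5: the measurement interval T = Bc/CIR, in seconds

def classify(bits_in_interval: int) -> str:
    """How the edge node treats a load observed over one interval T."""
    if bits_in_interval <= BC:
        return "committed"
    if bits_in_interval <= BC + BE:
        return "discard-eligible"
    return "discarded"

for load in (20_000, 40_000, 60_000):
    print(load, classify(load))
```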
Once the network has established a connection, the edge node of the frame relay network must monitor the
connection's traffic flow to ensure that the actual usage of network resources does not exceed this
specification. Frame relay defines some restrictions on the user's information rate. It allows the network to
enforce the end user's information rate and discard information when the subscribed access rate is
exceeded.
Explicit congestion notification is proposed as the congestion avoidance policy. It tries to keep the network
operating at its desired equilibrium point so that a certain Quality of Service (QOS) for the network can be
met. To do so, special congestion control bits have been incorporated into the address field of the frame
relay: FECN and BECN. The basic idea is to avoid data accumulation inside the network. FECN means
Forward Explicit Congestion Notification. The FECN bit can be set to 1 to indicate that congestion was
experienced in the direction of the frame transmission, so it informs the destination that congestion has
occurred. BECN means Backwards Explicit Congestion Notification. The BECN bit can be set to 1 to
indicate that congestion was experienced in the network in the direction opposite of the frame transmission,
so it informs the sender that congestion has occurred.
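A sketch of unpacking the two-octet address field described above, assuming the standard two-octet layout (octet 1: DLCI high 6 bits, C/R, EA=0; octet 2: DLCI low 4 bits, FECN, BECN, DE, EA=1). The helper name is hypothetical:

```python
def parse_address(o1: int, o2: int) -> dict:
    """Unpack a two-octet frame relay address field into its bit fields."""
    return {
        "dlci": ((o1 >> 2) << 4) | (o2 >> 4),  # virtual connection id
        "cr":   (o1 >> 1) & 1,                 # command/response bit
        "fecn": (o2 >> 3) & 1,                 # forward congestion notice
        "becn": (o2 >> 2) & 1,                 # backward congestion notice
        "de":   (o2 >> 1) & 1,                 # discard eligibility
    }

# DLCI 16 with the BECN bit set: the sender is being told of
# congestion in the direction opposite to this frame's travel.
print(parse_address(0b00000100, 0b00000101))
```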
WIRELESS LINKS:
A WWAN differs from a WLAN (wireless LAN) in that it uses Mobile telecommunication cellular network
technologies such as WiMAX (though it is better categorized as a WMAN technology), UMTS, GPRS, CDMA2000, GSM, CDPD, Mobitex, HSDPA or 3G to transfer data. It can also use LMDS and Wi-Fi to
connect to the Internet. These cellular technologies are offered regionally, nationwide, or even globally and
are provided by a wireless service provider for a monthly usage fee.[1] WWAN connectivity allows a user
with a laptop and a WWAN card to surf the web, check email, or connect to a Virtual Private Network
(VPN) from anywhere within the regional boundaries of cellular service. Various computers now have
integrated WWAN capabilities (Such as HSDPA in Centrino). This means that the system has a cellular
radio (GSM/CDMA) built in, which allows the user to send and receive data. There are two basic means
that a mobile network may use to transfer data:
• Packet-switched Data Networks (GPRS/CDPD)
• Circuit-switched dial-up connections
Since radio communications systems do not provide a physically secure connection path, WWANs
typically incorporate encryption and authentication methods to make them more secure. Unfortunately
some of the early GSM encryption techniques were flawed, and security experts have issued warnings that
cellular communication, including WWANs, is no longer secure.[2] UMTS (3G) encryption was developed
later and has yet to be broken.
Examples of providers for WWAN include Sprint Nextel, Verizon, and AT&T.
ATM:
In electronic digital data transmission systems, the network protocol Asynchronous Transfer Mode
(ATM) encodes data traffic into small fixed-sized cells. The standards for ATM were first developed in the
mid 1980s. The goal was to design a single networking strategy that could transport real-time video and
audio as well as image files, text and email. Two groups, the International Telecommunications Union and
the ATM Forum were involved in the creation of the standards.
ATM, as a connection-oriented technology, establishes a virtual circuit between the two endpoints before
the actual data exchange begins. ATM is a cell relay, packet switching protocol which provides data link
layer services that run over Layer 1 links. This differs from other technologies based on packet-switched
networks (such as the Internet Protocol or Ethernet), in which variable sized packets (known as frames
when referencing Layer 2) are used. ATM exposes properties from both circuit- and packet switched
networking, making it suitable for wide area data networking as well as real-time media transport. It is a
core protocol used in the SONET/SDH backbone of the public switched telephone network.
When purchasing ATM service, you generally have a choice of four different types of service:
 constant bit rate (CBR): specifies a fixed bit rate so that data is sent in a steady stream. This is
analogous to a leased line.
 variable bit rate (VBR): provides a specified throughput capacity but data is not sent evenly. This is a
popular choice for voice and videoconferencing data.
 available bit rate (ABR): provides a guaranteed minimum capacity but allows data to be bursted at
higher capacities when the network is free.
 unspecified bit rate (UBR): does not guarantee any throughput levels. This is used for applications, such as file transfer, that can tolerate delays.
ATM addressing
A Virtual Channel (VC) provides the transport of ATM cells which have the same unique identifier,
called the Virtual Channel Identifier (VCI). This identifier is encoded in the cell header. A virtual channel
represents the basic means of communication between two end-points, and is analogous to an X.25 virtual
circuit.[1]
A Virtual Path (VP) transports ATM cells belonging to virtual channels which share a common identifier,
called the Virtual Path Identifier (VPI), which is also encoded in the cell header. A virtual path, in other
words, is a grouping of virtual channels which connect the same end-points, and which share a traffic
allocation. This two layer approach can be used to separate the management of routers and bandwidth from
the setup of individual connections.
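As a sketch, the VPI and VCI described above can be pulled out of a 5-byte UNI-format cell header. The parser is illustrative, assuming the standard UNI bit layout (GFC 4 bits, VPI 8, VCI 16, PT 3, CLP 1, HEC 8):

```python
def parse_uni_header(h: bytes) -> dict:
    """Unpack a 5-byte ATM UNI cell header into its fields."""
    assert len(h) == 5, "ATM cell header is 5 bytes"
    return {
        "gfc": h[0] >> 4,                                          # flow control
        "vpi": ((h[0] & 0x0F) << 4) | (h[1] >> 4),                 # path id
        "vci": ((h[1] & 0x0F) << 12) | (h[2] << 4) | (h[3] >> 4),  # channel id
        "pt":  (h[3] >> 1) & 0x7,                                  # payload type
        "clp": h[3] & 1,                                           # loss priority
        "hec": h[4],                                               # header checksum
    }

# Cells with the same VPI belong to one virtual path even when their
# VCIs (the individual channels) differ. Example: VPI 1, VCI 42.
hdr = bytes([0x00, 0x10, 0x02, 0xA0, 0x00])
cell = parse_uni_header(hdr)
print(cell["vpi"], cell["vci"])   # 1 42
```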
ATM concepts
Why cells?
The designers of ATM utilized small data cells in order to reduce jitter (delay variance, in this case) in the
multiplexing of data streams. Reduction of jitter (and also end-to-end round-trip delays) is particularly
important when carrying voice traffic, because the conversion of digitized voice into an analog audio signal
is an inherently real-time process, and to do a good job, the codec that does this needs an evenly spaced (in
time) stream of data items. If the next data item is not available when it is needed, the codec has no choice
but to produce silence or guess - and if the data is late, it is useless, because the time period when it should
have been converted to a signal has already passed.
Now consider a speech signal reduced to packets, and forced to share a link with bursty data traffic (traffic
with some large data packets). No matter how small the speech packets could be made, they would always
encounter full-size data packets, and under normal queuing conditions, might experience maximum
queuing delays.
At the time of the design of ATM, 155 Mbit/s SDH (135 Mbit/s payload) was considered a fast optical
network link, and many PDH links in the digital network were considerably slower, ranging from 1.544 to
45 Mbit/s in the USA (2 to 34 Mbit/s in Europe).
At this rate, a typical full-length 1500 byte (12000-bit) data packet would take 77.42 µs to transmit. In a
lower-speed link, such as a 1.544 Mbit/s T1 link, a 1500 byte packet would take up to 7.8 milliseconds.
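The serialization-delay figures quoted above can be reproduced directly: the time to put a 1500-byte (12000-bit) packet on the wire is the packet size divided by the link rate.

```python
PACKET_BITS = 1500 * 8   # a full-length 1500-byte packet

delays = {
    "155 Mbit/s SDH": PACKET_BITS / 155e6,
    "1.544 Mbit/s T1": PACKET_BITS / 1.544e6,
}
for name, d in delays.items():
    print(f"{name}: {d * 1e6:.1f} microseconds")
# 155 Mbit/s SDH: 77.4 microseconds
# 1.544 Mbit/s T1: 7772.0 microseconds (about 7.8 ms)
```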
A queuing delay induced by several such data packets might exceed the figure of 7.8 ms several times over,
in addition to any packet generation delay in the shorter speech packet. This was clearly unacceptable for
speech traffic, which needs to have low jitter in the data stream being fed into the codec if it is to produce
good-quality sound. A packet voice system can produce this in a number of ways:
• Have a playback buffer between the network and the codec, one large enough to tide the codec
over almost all the jitter in the data. This allows smoothing out the jitter, but the delay introduced
by passage through the buffer would require echo cancellers even in local networks; this was
considered too expensive at the time. Also, it would have increased the delay across the channel,
and conversation is difficult over high-delay channels.
• Build a system which can inherently provide low jitter (and minimal overall delay) to traffic which
needs it.
• Operate on a 1:1 user basis (i.e., a dedicated pipe).
The design of ATM aimed for a low-jitter network interface. However, to be able to provide short queueing
delays, but also be able to carry large datagrams, it had to have cells. ATM broke up all packets, data, and
voice streams into 48-byte chunks, adding a 5-byte routing header to each one so that they could be
reassembled later. The choice of 48 bytes was political rather than technical.[2] When the CCITT was
standardizing ATM, parties from the United States wanted a 64-byte payload because this was felt to be a
good compromise between larger payloads optimized for data transmission and shorter payloads optimized
for real-time applications like voice; parties from Europe wanted 32-byte payloads because the small size
(and therefore short transmission times) simplify voice applications with respect to echo cancellation. Most
of the European parties eventually came around to the arguments made by the Americans, but France and a
few others held out for a shorter cell length. With 32 bytes, France would have been able to implement an
ATM-based voice network with calls from one end of France to the other requiring no echo cancellation.
48 bytes (plus 5 header bytes = 53) was chosen as a compromise between the two sides, but it was ideal for
neither and everybody has had to live with it ever since. 5-byte headers were chosen because it was thought
that 10% of the payload was the maximum price to pay for routing information. ATM multiplexed these
53-byte cells instead of packets. Doing so reduced the worst-case queuing jitter by a factor of almost 30,
removing the need for echo cancellers.
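The arithmetic behind that "factor of almost 30" can be checked directly. The sketch below (illustrative names, figures taken from the text) compares the serialization delay of a full-size 1500-byte packet against a 53-byte ATM cell on a 1.544 Mbit/s T1 link:

```python
# Serialization delay of a full-size packet vs. one ATM cell.
# Link rate and packet sizes are the ones quoted in the text.

def serialization_delay_ms(size_bytes, rate_bps):
    """Time (ms) to clock size_bytes onto a link running at rate_bps."""
    return size_bytes * 8 / rate_bps * 1000

T1 = 1.544e6                                  # T1 link, bits per second
packet = serialization_delay_ms(1500, T1)     # full-size data packet
cell = serialization_delay_ms(53, T1)         # one 53-byte ATM cell

print(f"1500-byte packet on T1: {packet:.2f} ms")   # ~7.77 ms
print(f"53-byte cell on T1:     {cell:.3f} ms")     # ~0.275 ms
print(f"ratio: {packet / cell:.1f}")                # ~28.3
```

A voice cell waiting behind one maximum-size packet thus waits ~7.8 ms, but behind one cell only ~0.27 ms, which is the queuing-jitter reduction the designers were after.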
Structure of an ATM cell
An ATM cell consists of a 5-byte header and a 48-byte payload. The payload size of 48 bytes was chosen
as described above ("Why cells?").
ATM defines two different cell formats: NNI (Network-Network Interface) and UNI (User-Network
Interface). Most ATM links use UNI cell format.
Diagram of the UNI ATM cell (one row per header byte, bit 7 down to bit 0):
  Byte 1:     GFC (4 bits) | VPI (upper 4 bits)
  Byte 2:     VPI (lower 4 bits) | VCI (upper 4 bits)
  Byte 3:     VCI (middle 8 bits)
  Byte 4:     VCI (lower 4 bits) | PT (3 bits) | CLP (1 bit)
  Byte 5:     HEC (8 bits)
  Bytes 6-53: Payload (48 bytes; the user data may be less than 48 bytes)

Diagram of the NNI ATM cell (one row per header byte, bit 7 down to bit 0):
  Byte 1:     VPI (upper 8 bits)
  Byte 2:     VPI (lower 4 bits) | VCI (upper 4 bits)
  Byte 3:     VCI (middle 8 bits)
  Byte 4:     VCI (lower 4 bits) | PT (3 bits) | CLP (1 bit)
  Byte 5:     HEC (8 bits)
  Bytes 6-53: Payload (48 bytes)

GFC = Generic Flow Control (4 bits) (default: all four bits zero)
VPI = Virtual Path Identifier (8 bits UNI) or (12 bits NNI)
VCI = Virtual channel identifier (16 bits)
PT = Payload Type (3 bits)
CLP = Cell Loss Priority (1-bit)
HEC = Header Error Control (8-bit CRC, polynomial = x^8 + x^2 + x + 1)
ATM uses the PT field to designate various special kinds of cells for operations, administration, and
maintenance (OAM) purposes, and to delineate packet boundaries in some AALs.
Several of ATM's link protocols use the HEC field to drive a CRC-Based Framing algorithm, which allows
locating the ATM cells with no overhead required beyond what is otherwise needed for header protection.
The 8-bit CRC is used to correct single-bit header errors and detect multi-bit header errors. When multi-bit
header errors are detected, the current and subsequent cells are dropped until a cell with no header errors is
found.
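As a sketch of how the HEC is computed: it is a CRC-8 with generator x^8 + x^2 + x + 1 (0x07) over the first four header bytes. ITU-T I.432 additionally XORs the remainder with 0x55; that final XOR is included here on that assumption, and the function name is illustrative:

```python
# Sketch of the ATM HEC: bitwise CRC-8, generator polynomial
# x^8 + x^2 + x + 1 (0x07), over the 4 leading header bytes.
# The final XOR with 0x55 follows ITU-T I.432 (assumption noted above).

def atm_hec(header4: bytes) -> int:
    crc = 0
    for byte in header4:
        crc ^= byte
        for _ in range(8):
            # Shift left; on carry-out, subtract (XOR) the generator.
            crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc ^ 0x55

# An all-zero UNI header (GFC = VPI = VCI = PT = CLP = 0):
print(hex(atm_hec(bytes(4))))  # 0x55: CRC of all zeros is 0, then XOR 0x55
```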
A UNI cell reserves the GFC field for a local flow control/submultiplexing system between users. This was
intended to allow several terminals to share a single network connection, in the same way that two ISDN
phones can share a single basic rate ISDN connection. All four GFC bits must be zero by default.
The NNI cell format replicates the UNI format almost exactly, except that the 4-bit GFC field is re-
allocated to the VPI field, extending the VPI to 12 bits. Thus, a single NNI ATM interconnection is capable
of addressing almost 2^12 (4096) VPs of up to almost 2^16 (65536) VCs each (in practice some of the VP and VC numbers are
reserved).

LECTURE NO. 30

REMOTE MONITORING TECHNIQUES:


Polling: Any equipment or system in a network (router, switch, server, RF antenna) sends hello packets to the neighbouring equipment in order to check whether the network link is up or down. When polling completes successfully, the network links are working.
LECTURE NO. 31

Class of Service
Class of Service (CoS) is a way of managing traffic in a network by grouping similar types of traffic (for
example, e-mail, streaming video, voice, large document file transfer) together and treating each type as a
class with its own level of service priority. Unlike Quality of Service (QoS) traffic management, Class of
Service technologies do not guarantee a level of service in terms of bandwidth and delivery time; they offer
a "best-effort." On the other hand, CoS technology is simpler to manage and more scalable as a network
grows in structure and traffic volume. One can think of CoS as "coarsely-grained" traffic control and QoS
as "finely-grained" traffic control.
Class of Service (CoS) is a 3 bit field within a layer two Ethernet frame header when using IEEE 802.1Q.
It specifies a priority value of between 0 (signifying best-effort) and 7 (signifying priority real-time data)
that can be used by Quality of Service disciplines to differentiate traffic.
Voice terminology
Class of Service as related to legacy telephone systems, is often used to define the permissions an extension
will have on a PBX or Centrex. The Class of Service acronym is normally written as COS vs. CoS as is
often used in data networking parlance. Certain groups of users may have a need for extended voice mail
message retention while another group may need the ability to forward calls to a cell phone, and still others
have no need to make calls outside the office. Permissions for a group of extensions can be changed by
modifying a COS variable applied to the entire group.
COS is also used on trunks to define if they are full-duplex, incoming only, or outgoing only.
Most IP Phones tag the VoIP packets with CoS marking of 5 or 6 in the Ethernet header of the outgoing
frame.

There are three main CoS technologies:


802.1p Layer 2 Tagging
Type of Service (ToS): 8 bits in the IP header are reserved for the service type. They can be divided into 5 subfields: the 3 precedence bits, the D, T and R bits, and the remaining bits (now the ECN field). The 3 precedence bits have a value from 0 to 7 and are used to indicate the importance of a datagram.
Default is 0 (higher is more important). Bits 3, 4 and 5 represent the following:
• D: requests low delay
• T: requests high throughput
• R: requests high reliability
Support for ToS in routers may become a ‘MUST’ in the future, but for now it’s only a ‘SHOULD’. A
router maintains a ToS value for each route in its routing table. Routes learned through a protocol that does
not support ToS are assigned a ToS of zero. Routers use the ToS to choose a destination for the packet.
1. The router locates in its routing table all available routes to the destination.
2. If there are none, the router drops the packet because the destination is unreachable.
3. If one or more of those routes have a TOS that exactly matches the TOS specified in the packet,
the router chooses the route with the best metric.
4. Otherwise, the router repeats the above step, except looking at routes whose TOS is zero.
5. If no route was chosen above, the router drops the packet because the destination is unreachable.
The router returns an ICMP Destination Unreachable error specifying the appropriate code: either
Network Unreachable with Type of Service (code 11) or Host Unreachable with Type of Service
(code 12).
Bits 0-2: Precedence | Bit 3: D | Bit 4: T | Bit 5: R | Bits 6-7: ECN field
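The route-selection steps above can be sketched as follows. This is a simplified model: routes are hypothetical dictionaries matched by exact destination string rather than by longest-prefix lookup, and the ICMP error generation is reduced to returning None:

```python
# Sketch of ToS-based route selection: prefer routes whose ToS
# exactly matches the packet's ToS, else fall back to ToS-zero routes.

def select_route(routes, dest, tos):
    candidates = [r for r in routes if r["dest"] == dest]
    if not candidates:
        return None                                    # destination unreachable
    exact = [r for r in candidates if r["tos"] == tos]
    if exact:
        return min(exact, key=lambda r: r["metric"])   # best metric, exact ToS
    default = [r for r in candidates if r["tos"] == 0]
    if default:
        return min(default, key=lambda r: r["metric"]) # best metric, ToS zero
    return None                   # no usable route: drop, send ICMP unreachable

table = [
    {"dest": "10.0.0.0/8", "tos": 0, "metric": 10, "via": "r1"},
    {"dest": "10.0.0.0/8", "tos": 4, "metric": 20, "via": "r2"},
]
print(select_route(table, "10.0.0.0/8", 4)["via"])  # r2: exact ToS match wins
print(select_route(table, "10.0.0.0/8", 2)["via"])  # r1: falls back to ToS 0
```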

Differentiated Services: Differentiated Services (DiffServ) is a computer networking architecture that
specifies a simple, scalable and coarse-grained mechanism for classifying and managing network traffic and
providing Quality of Service (QoS) guarantees on modern IP networks. DiffServ can, for example, be used
to provide low-latency, guaranteed service (GS) to critical network traffic such as voice or video while
providing simple best-effort traffic guarantees to non-critical services such as web traffic or file transfers.
QUALITY OF SERVICE:
QoS (Quality of Service) refers to a broad collection of networking technologies and techniques. The goal
of QoS is to provide guarantees on the ability of a network to deliver predictable results. Elements of
network performance within the scope of QoS often include availability (uptime), bandwidth (throughput),
latency (delay), and error rate.
QoS involves prioritization of network traffic. QoS can be targeted at a network interface, toward a given
server or router's performance, or in terms of specific applications. A network monitoring system must
typically be deployed as part of QoS, to ensure that networks are performing at the desired level.
QoS is especially important for the new generation of Internet applications such as VoIP, video-on-demand
and other consumer services. Some core networking technologies like Ethernet were not designed to
support prioritized traffic or guaranteed performance levels, making it much more difficult to implement
QoS solutions across the Internet.
Applications requiring QoS
A defined Quality of Service may be required for certain types of network traffic, for example:
• streaming multimedia may require guaranteed throughput to ensure that a minimum level of
quality is maintained.
• IPTV offered as a service from a service provider such as AT&T's U-verse
• IP telephony or Voice over IP (VOIP) may require strict limits on jitter and delay
• Video Teleconferencing (VTC) requires low jitter and latency
• Alarm signalling (e.g., Burglar alarm)
• dedicated link emulation requires both guaranteed throughput and imposes limits on maximum
delay and jitter
• a safety-critical application, such as remote surgery may require a guaranteed level of availability
(this is also called hard QoS).
• a remote system administrator may want to prioritize variable, and usually small, amounts of SSH
traffic to ensure a responsive session even over a heavily-laden link.
• online games, such as fast paced real time simulations with multiple players. Lack of QoS may
produce 'lag'.
These types of service are called inelastic, meaning that they require a certain minimum level of bandwidth
and a certain maximum latency to function.
By contrast, elastic applications can take advantage of however much or little bandwidth is available. Bulk
file transfer applications that rely on TCP are generally elastic.
QoS mechanisms
An alternative to complex QoS control mechanisms is to provide high quality communication by
generously over-provisioning a network so that capacity is based on peak traffic load estimates. This
approach is simple and economical for networks with predictable and light traffic loads. The performance
is reasonable for many applications. This might include demanding applications that can compensate for
variations in bandwidth and delay with large receive buffers, which is often possible for example in video
streaming.
Commercial VoIP services are often competitive with traditional telephone service in terms of call quality
even though QoS mechanisms are usually not in use on the user's connection to his ISP and the VoIP
provider's connection to a different ISP. Under high load conditions, however, VoIP quality degrades to
cell-phone quality or worse. The mathematics of packet traffic indicate that a network with QoS can handle
four times as many calls with tight jitter requirements as one without QoS. The amount of over-
provisioning in interior links required to replace QoS depends on the number of users and their traffic
demands. As the Internet now services close to a billion users, there is little possibility that over-
provisioning can eliminate the need for QoS when VoIP becomes more commonplace.
For narrowband networks more typical of enterprises and local governments, however, the costs of
bandwidth can be substantial and over-provisioning is hard to justify. In these situations, two
distinctly different philosophies were developed to engineer preferential treatment for packets which
require it.
Early work used the "IntServ" philosophy of reserving network resources. In this model, applications used
the Resource reservation protocol (RSVP) to request and reserve resources through a network. While
IntServ mechanisms do work, it was realized that in a broadband network typical of a larger service
provider, Core routers would be required to accept, maintain, and tear down thousands or possibly tens of
thousands of reservations. It was believed that this approach would not scale with the growth of the
Internet, and in any event was antithetical to the notion of designing networks so that Core routers do little
more than simply switch packets at the highest possible rates.
The second and currently accepted approach is "DiffServ" or differentiated services. In the DiffServ model,
packets are marked according to the type of service they need. In response to these markings, routers and
switches use various queuing strategies to tailor performance to requirements. (At the IP layer,
differentiated services code point (DSCP) markings use the 6 bits in the IP packet header. At the MAC
layer, VLAN IEEE 802.1Q and IEEE 802.1D can be used to carry essentially the same information)
Routers supporting DiffServ use multiple queues for packets awaiting transmission from bandwidth
constrained (e.g., wide area) interfaces. Router vendors provide different capabilities for configuring this
behavior, to include the number of queues supported, the relative priorities of queues, and bandwidth
reserved for each queue.
In practice, when a packet must be forwarded from an interface with queuing, packets requiring low jitter
(e.g., VoIP or VTC) are given priority over packets in other queues. Typically, some bandwidth is allocated
by default to network control packets (e.g., ICMP and routing protocols), while best effort traffic might
simply be given whatever bandwidth is left over.
Additional bandwidth management mechanisms may be used to further engineer performance, to include:
• Traffic shaping (rate limiting):
o Token bucket
o Leaky bucket
o TCP rate control - artificially adjusting TCP window size as well as controlling the rate
of ACKs being returned to the sender
• Scheduling algorithms:
o Weighted fair queuing (WFQ)
o Class based weighted fair queuing
o Weighted round robin (WRR)
o Deficit weighted round robin (DWRR)
o Hierarchical Fair Service Curve (HFSC)
• congestion avoidance:
o RED, WRED - Lessens the possibility of port queue buffer tail-drops and this lowers the
likelihood of TCP global synchronization
o Policing (marking/dropping the packet in excess of the committed traffic rate and burst
size)
o Explicit congestion notification
o Buffer tuning
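As an illustration of the first shaping mechanism in the list above, here is a minimal token-bucket sketch. The class name, parameters and the caller-supplied clock are all illustrative, not any particular vendor's implementation:

```python
# Minimal token-bucket rate limiter: tokens accumulate at a fixed
# rate up to a burst limit; a packet is conforming only if enough
# tokens are available to "pay" for its size.

class TokenBucket:
    def __init__(self, rate, burst):
        self.rate = rate        # tokens (bytes) added per second
        self.burst = burst      # bucket depth: maximum burst size in bytes
        self.tokens = burst     # start full
        self.last = 0.0

    def allow(self, size, now):
        # Refill in proportion to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size  # conforming packet: spend tokens, forward it
            return True
        return False             # non-conforming: drop (police) or queue (shape)

tb = TokenBucket(rate=1000, burst=1500)   # 1000 B/s sustained, 1500 B burst
print(tb.allow(1500, now=0.0))  # True  - initial burst fits
print(tb.allow(500,  now=0.0))  # False - bucket now empty
print(tb.allow(500,  now=1.0))  # True  - 1000 tokens refilled after 1 s
```

Whether a non-conforming packet is dropped or merely delayed is what distinguishes policing from shaping.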
As mentioned, while DiffServ is used in many sophisticated enterprise networks, it has not been widely
deployed in the Internet. Internet peering arrangements are already complex, and there appears to be no
enthusiasm among providers for supporting QoS across peering connections, or agreement about what
policies should be supported in order to do so.
One compelling example of the need for QoS on the Internet relates to this issue of congestion collapse.
The Internet relies on congestion avoidance protocols, as built into TCP, to reduce traffic load under
conditions that would otherwise lead to Internet Meltdown. QoS applications such as VoIP and IPTV,
because they require largely constant bitrates and low latency cannot use TCP, and cannot otherwise reduce
their traffic rate to help prevent meltdown either. QoS contracts limit traffic that can be offered to the
Internet and thereby enforce traffic shaping that can prevent it from becoming overloaded, hence they're an
indispensable part of the Internet's ability to handle a mix of real-time and non-real-time traffic without
meltdown.
Asynchronous Transfer Mode (ATM) network protocol has an elaborate framework to plug in QoS
mechanisms of choice. Shorter data units and built-in QoS were some of the unique selling points of ATM
in the telecommunications applications such as video on demand, voice over IP.
QoS Priority Levels
Priority Level   Traffic Type
0                Best Effort
1                Background
2                Standard (Spare)
3                Excellent Load (Business Critical)
4                Controlled Load (Streaming Multimedia)
5                Voice and Video (Interactive Media and Voice)
                 [less than 100 ms latency and jitter]
6                Layer 3 Network Control Reserved Traffic
                 [less than 10 ms latency and jitter]
7                Layer 2 Network Control Reserved Traffic
                 [lowest latency and jitter]

LECTURE NO.32 READINGS: B-PAGE776

FIREWALLS
Firewalls are mainly used as a means to protect an organization's internal network from those on the outside
(internet). It is used to keep outsiders from gaining information to secrets or from doing damage to internal
computer systems. Firewalls are also used to limit the access of individuals on the internal network to
services on the internet along with keeping track of what is done through the firewall.

Types of Firewalls
1. Packet Filtering - Blocks selected network packets.
2. Circuit Level Relay - SOCKS is an example of this type of firewall. This type of proxy is not
aware of applications but simply cross-links your connection to another outside connection. It can log
activity, but not in as much detail as an application proxy. It only works with TCP connections, and
doesn't provide for user authentication.
3. Application Proxy Gateway - The users connect to the outside using the proxy. The proxy gets the
information and returns it to the user. The proxy can record everything that is done. This type of
proxy may require a user login to use it. Rules may be set to allow some functions of an
application to be done and other functions denied. The "get" function may be allowed in the FTP
application, but the "put" function may not.
Proxy Servers can be used to perform the following functions.
• Control outbound connections and data.
• Monitor outbound connections and data.
• Cache requested data which can increase system bandwidth performance and decrease the time it
takes for other users to read the same data.
Application proxy servers can perform the following additional functions:
• Provide for user authentication.
• Allow and deny application specific functions.
• Apply stronger authentication mechanisms to some applications.
Packet Filtering Firewalls
In a packet filtering firewall, data is forwarded based on a set of firewall rules. This firewall works at the
network level. Packets are filtered by type, source address, destination address, and port information. These
rules are similar to the routing rules explained in an earlier section and may be thought of as a set of
instructions similar to a case statement or if statement. This type of firewall is fast, but cannot allow access
to a particular user since there is no way to identify the user except by using the IP address of the user's
computer, which may be an unreliable method. Also the user does not need to configure any software to
use a packet filtering firewall such as setting a web browser to use a proxy for access to the web. The user
may be unaware of the firewall. This means the firewall is transparent to the client.
Circuit Level Relay Firewall
A circuit level relay firewall is also transparent to the client. It listens on a port such as port 80 for http
requests and redirects the request to a proxy server running on the machine. Basically, the redirect function
is set up using ipchains, and the proxy then filters the packets at the port that received the redirect.
LECTURE NO.33
VLAN: A virtual LAN, commonly known as a VLAN, is a group of hosts with a common set of
requirements that communicate as if they were attached to the Broadcast domain, regardless of their
physical location. A VLAN has the same attributes as a physical LAN, but it allows for end stations to be
grouped together even if they are not located on the same network switch. Network reconfiguration can be
done through software instead of physically relocating devices.
Uses
VLANs are created to provide the segmentation services traditionally provided by routers in LAN
configurations. VLANs address issues such as scalability, security, and network management. Routers in
VLAN topologies provide broadcast filtering, security, address summarization, and traffic flow
management. By definition, switches may not bridge IP traffic between VLANs as it would violate the
integrity of the VLAN broadcast domain.
This is also useful if one wants to create multiple Layer 3 networks on the same Layer 2 switch. For
example if a DHCP server (which will broadcast its presence) were plugged into a switch it would serve
anyone on that switch that was configured to do so. By using VLANs you easily split the network up so
some hosts won't use that server and default to Link-local addresses.
Virtual LANs are essentially Layer 2 constructs, compared with IP subnets which are Layer 3 constructs. In
a LAN employing VLANs, a one-to-one relationship often exists between VLANs and IP subnets, although
it is possible to have multiple subnets on one VLAN or have one subnet spread across multiple VLANs.
Virtual LANs and IP subnets provide independent Layer 2 and Layer 3 constructs that map to one another
and this correspondence is useful during the network design process.
By using VLAN, one can control traffic patterns and react quickly to relocations. VLANs provide the
flexibility to adapt to changes in network requirements and allow for simplified administration.
Technologies able to implement VLANs are:
• Asynchronous Transfer Mode (ATM)
• Fiber Distributed Data Interface (FDDI)
• Fast Ethernet
• Gigabit Ethernet
• 10 Gigabit Ethernet
• HiperSockets
Protocols and design
The protocol most commonly used today in configuring virtual LANs is IEEE 802.1Q. The IEEE
committee defined this method of multiplexing VLANs in an effort to provide multivendor VLAN support.
Prior to the introduction of the 802.1Q standard, several proprietary protocols existed, such as Cisco's ISL
(Inter-Switch Link, a variant of IEEE 802.10) and 3Com's VLT (Virtual LAN Trunk). ISL is no longer
supported by Cisco.
Both ISL and IEEE 802.1Q tagging perform explicit tagging as the frame is tagged with VLAN
information explicitly. ISL uses an external tagging process that does not modify the existing Ethernet
frame whereas 802.1Q uses an internal tagging process that does modify the Ethernet frame. This internal
tagging process is what allows IEEE 802.1Q tagging to work on both access and trunk links, because the
frame appears to be a standard Ethernet frame.
The IEEE 802.1Q header contains a 4-byte tag header containing a 2-byte tag protocol identifier (TPID)
and a 2-byte tag control information (TCI). The TPID has a fixed value of 0x8100 that indicates that the
frame carries the 802.1Q/802.1p tag information. The TCI contains the following elements:
• Three-bit user priority
• One-bit canonical format indicator (CFI)
• Twelve-bit VLAN identifier (VID)-Uniquely identifies the VLAN to which the frame belongs
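The 4-byte tag layout above (2-byte TPID followed by a 2-byte TCI holding priority, CFI and VID) can be sketched as a small parser. The function name is illustrative:

```python
# Sketch: unpack a 4-byte 802.1Q tag header into its TPID and the
# three TCI fields (3-bit priority, 1-bit CFI, 12-bit VID).

def parse_dot1q(tag: bytes):
    tpid = int.from_bytes(tag[0:2], "big")
    tci = int.from_bytes(tag[2:4], "big")
    return {
        "tpid": tpid,              # fixed value 0x8100 for 802.1Q
        "priority": tci >> 13,     # 3-bit user priority (the 802.1p CoS field)
        "cfi": (tci >> 12) & 0x1,  # 1-bit canonical format indicator
        "vid": tci & 0x0FFF,       # 12-bit VLAN identifier
    }

# Tag for VLAN 100 at priority 5 (a typical VoIP CoS marking):
tag = bytes([0x81, 0x00]) + ((5 << 13) | 100).to_bytes(2, "big")
fields = parse_dot1q(tag)
print(fields)  # {'tpid': 33024, 'priority': 5, 'cfi': 0, 'vid': 100}
```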
The 802.1Q standard can create an interesting scenario on the network. Recalling that the maximum size
for an Ethernet frame as specified by IEEE 802.3 is 1518 bytes, this means that if a maximum-sized
Ethernet frame gets tagged, the frame size will be 1522 bytes, a number that violates the IEEE 802.3
standard. To resolve this issue, the 802.3 committee created a subgroup called 802.3ac to extend the
maximum Ethernet size to 1522 bytes. Network devices that do not support a larger frame size will process
the frame successfully but may report these anomalies as a "baby giant."
Inter-Switch Link (ISL) is a Cisco proprietary protocol used to interconnect multiple switches and maintain
VLAN information as traffic travels between switches on trunk links. This technology provides one method
for multiplexing bridge groups (VLANs) over a high-speed backbone. It is defined for Fast Ethernet and
Gigabit Ethernet, as is IEEE 802.1Q. ISL has been available on Cisco routers since Cisco IOS Software
Release 11.1.
With ISL, an Ethernet frame is encapsulated with a header that transports VLAN IDs between switches and
routers. ISL does add overhead to the packet as a 26-byte header containing a 10-bit VLAN ID. In addition,
a 4-byte CRC is appended to the end of each frame. This CRC is in addition to any frame checking that the
Ethernet frame requires. The fields in an ISL header identify the frame as belonging to a particular VLAN.
A VLAN ID is added only if the frame is forwarded out a port configured as a trunk link. If the frame is to
be forwarded out a port configured as an access link, the ISL encapsulation is removed.
Early network designers often configured VLANs with the aim of reducing the size of the collision domain
in a large single Ethernet segment and thus improving performance. When Ethernet switches made this a
non-issue (because each switch port is a collision domain), attention turned to reducing the size of the
broadcast domain at the MAC layer. Virtual networks can also serve to restrict access to network resources
without regard to physical topology of the network, although the strength of this method remains debatable
as VLAN Hopping [1] is a common means of bypassing such security measures.
Virtual LANs operate at Layer 2 (the data link layer) of the OSI model. Administrators often configure a
VLAN to map directly to an IP network, or subnet, which gives the appearance of involving Layer 3 (the
network layer). In the context of VLANs, the term "trunk" denotes a network link carrying multiple
VLANs, which are identified by labels (or "tags") inserted into their packets. Such trunks must run between
"tagged ports" of VLAN-aware devices, so they are often switch-to-switch or switch-to-router links rather
than links to hosts. (Note that the term 'trunk' is also used for what Cisco calls "channels" : Link
Aggregation or Port Trunking). A router (Layer 3 device) serves as the backbone for network traffic going
across different VLANs.
On Cisco devices, VTP (VLAN Trunking Protocol) maintains VLAN configuration consistency across the
entire network. VTP uses Layer 2 trunk frames to manage the addition, deletion, and renaming of VLANs
on a network-wide basis from a centralized switch in the VTP server mode. VTP is responsible for
synchronizing VLAN information within a VTP domain and reduces the need to configure the same VLAN
information on each switch.
VTP minimizes the possible configuration inconsistencies that arise when changes are made. These
inconsistencies can result in security violations, because VLANs can cross-connect when duplicate names
are used. They also could become internally disconnected when they are mapped from one LAN type to
another, for example, Ethernet to ATM LANE ELANs or FDDI 802.10 VLANs. VTP provides a mapping
scheme that enables seamless trunking within a network employing mixed-media technologies.
VTP provides the following benefits:
• VLAN configuration consistency across the network
• Mapping scheme that allows a VLAN to be trunked over mixed media
• Accurate tracking and monitoring of VLANs
• Dynamic reporting of added VLANs across the network
• Plug-and-play configuration when adding new VLANs
As beneficial as VTP can be, it does have disadvantages that are normally related to the Spanning Tree
Protocol (STP) as a bridging loop propagating throughout the network can occur. Cisco switches run an
instance of STP for each VLAN, and since VTP propagates VLANs across the campus LAN, VTP
effectively creates more opportunities for a bridging loop to occur.
Before creating VLANs on the switch that will be propagated via VTP, a VTP domain must first be set up.
A VTP domain for a network is a set of all contiguously trunked switches with the same VTP domain
name. All switches in the same management domain share their VLAN information with each other, and a
switch can participate in only one VTP management domain. Switches in different domains do not share
VTP information.
Using VTP, each Catalyst Family Switch advertises the following on its trunk ports:
• Management domain
• Configuration revision number
• Known VLANs and their specific parameters
LECTURE NO. 34 READINGS: A-PAGE 738

Proxy Server:
A proxy server is a server that acts as an intermediary between a workstation user and the Internet so that
the enterprise can ensure security, administrative control, and caching service. A proxy server is associated
with or part of a gateway server that separates the enterprise network from the outside network and a
firewall server that protects the enterprise network from outside intrusion.
A proxy server receives a request for an Internet service (such as a Web page request) from a user. If it
passes filtering requirements, the proxy server, assuming it is also a cache server , looks in its local cache
of previously downloaded Web pages. If it finds the page, it returns it to the user without needing to
forward the request to the Internet. If the page is not in the cache, the proxy server, acting as a client on
behalf of the user, uses one of its own IP addresses to request the page from the server out on the Internet.
When the page is returned, the proxy server relates it to the original request and forwards it on to the user.
To the user, the proxy server is invisible; all Internet requests and returned responses appear to be directly
with the addressed Internet server. (The proxy is not quite invisible; its IP address has to be specified as a
configuration option to the browser or other protocol program.)
An advantage of a proxy server is that its cache can serve all users. If one or more Internet sites are
frequently requested, these are likely to be in the proxy's cache, which will improve user response time. In
fact, there are special servers called cache servers. A proxy can also do logging.
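The request-handling path just described (filter, then cache lookup, then fetch on behalf of the user) can be sketched as follows. This is a toy model: `fetch_upstream` is a stand-in for a real HTTP request, and the blacklist/cache structures are illustrative:

```python
# Toy sketch of a caching proxy's decision path: apply filtering
# rules, serve from the local cache on a hit, otherwise fetch
# upstream on the client's behalf and store the result.

cache = {}
blacklist = {"http://blocked.example/"}

def fetch_upstream(url):
    return f"<html>content of {url}</html>"   # placeholder for a real fetch

def proxy_get(url):
    if url in blacklist:
        return None                # filtering: request denied
    if url in cache:
        return cache[url]          # cache hit: no upstream request needed
    body = fetch_upstream(url)     # cache miss: act as a client for the user
    cache[url] = body              # store the copy for later requests
    return body

# The second request for the same URL is served from the cache:
print(proxy_get("http://example.com/") is proxy_get("http://example.com/"))  # True
```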
The functions of proxy, firewall, and caching can be in separate server programs or combined in a single
package. Different server programs can run on different computers. For example, a proxy server may be in
the same machine with a firewall server, or it may be on a separate server and forward requests through the
firewall.
Proxy servers implement one or more of the following functions:-
Caching proxy server
A caching proxy server accelerates service requests by retrieving content saved from a previous request
made by the same client or even other clients. Caching proxies keep local copies of frequently requested
resources, allowing large organizations to significantly reduce their upstream bandwidth usage and cost,
while significantly increasing performance. Most ISPs and large businesses have a caching proxy. These
machines are built to deliver superb file system performance (often with RAID and journaling) and also
contain hot-rodded versions of TCP. Caching proxies were the first kind of proxy server.
The HTTP 1.0 and later protocols contain many types of headers for declaring static (cacheable) content
and verifying content freshness with the origin server, e.g. ETag (validation tags), If-Modified-Since
(date-based validation), Expires (timeout-based invalidation), etc. Other protocols, such as DNS, support
expiry only and contain no support for validation.
Some poorly-implemented caching proxies have had downsides (e.g., an inability to use user
authentication). Some problems are described in RFC 3143 (Known HTTP Proxy/Caching Problems).
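As a rough illustration of the two styles of freshness checking named above, the sketch below separates a timeout-based (Expires-style) test from a validation-based (ETag-style) one. Real HTTP caching rules are considerably more involved; the dictionary layout here is an assumption for illustration only.

```python
import time

def is_fresh(entry, now=None):
    """Expires-style check: the cached copy is usable without contacting
    the origin server while the expiry timestamp has not passed."""
    now = time.time() if now is None else now
    return now < entry["expires"]

def revalidate(entry, origin_headers):
    """ETag-style check: ask the origin whether the cached copy is still
    current by comparing validation tags."""
    return entry.get("etag") == origin_headers.get("ETag")
```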
Another important use of the proxy server is to reduce hardware cost. An organization may have many
systems working on the same network or under the control of one server; in this situation it is impractical
to give every system its own connection to the Internet. Instead, those systems can be connected to one
proxy server, and the proxy server to the main server.
Web proxy
A proxy that focuses on WWW traffic is called a "web proxy". The most common use of a web proxy is to
serve as a web cache. Most proxy programs (e.g. Squid) provide a means to deny access to certain URLs in
a blacklist, thus providing content filtering. This is usually used in a corporate environment, though with
the increasing use of Linux in small businesses and homes, this function is no longer confined to large
corporations. Some web proxies reformat web pages for a specific purpose or audience (e.g., cell phones
and PDAs).
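The blacklist-based content filtering just described can be illustrated with a toy check (Squid's actual ACL syntax is far richer). Here a URL is denied when its host matches, or is a subdomain of, a blacklisted domain:

```python
from urllib.parse import urlparse

def allowed(url, blacklist):
    """Return True if the URL's host is not covered by the blacklist."""
    host = urlparse(url).hostname or ""
    # deny the blacklisted domain itself and any subdomain of it
    return not any(host == d or host.endswith("." + d) for d in blacklist)
```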
AOL dialup customers used to have their requests routed through an extensible proxy that 'thinned' or
reduced the detail in JPEG pictures. This sped up performance, but caused trouble, either when more
resolution was needed or when the thinning program produced incorrect results. This is why in the early
days of the web many web pages would contain a link saying "AOL Users Click Here" to bypass the web
proxy and to avoid the bugs in the thinning software.

Introduction to Network Operating Systems
1. Introduction
Network Operating Systems extend the facilities and services provided by computer operating systems to
support a set of computers, connected by a network. The environment managed by a network operating
system consists of an interconnected group of machines that are loosely connected. By loosely connected,
we mean that such computers possess no hardware connections at the CPU – memory bus level, but are
connected by external interfaces that run under the control of software. Each computer in this group runs
an autonomous operating system, yet cooperates with the others to provide a variety of facilities including
file sharing, data sharing, peripheral sharing, remote execution and cooperative computation. Network
operating systems are autonomous operating systems that support such cooperation. The group of
machines comprising the management domain of the network operating system is called a distributed
system. A close cousin of the network operating system is the distributed operating system. A distributed
operating system is an extension of the network operating system that supports even higher levels of
cooperation and integration of the machines on the network (features include task migration, dynamic
resource location, and so on) (1,2).
An operating system is low-level software controlling the inner workings of a machine. Typical functions
performed by an operating system include managing the CPU among many concurrently executing tasks,
managing memory allocation to the tasks, handling of input and output and controlling all the
peripherals. Applications programs and often the human user are unaware of the existence of the features
of operating systems as the features are embedded and hidden below many layers of software. Thus, the
term low-level software is used. Operating systems were developed, in many forms, since the early
1960’s and have matured in the 1970’s. The emergence of networking in the 1970’s and its explosive
growth since the early 1980’s have had a significant impact on the networking services provided by an
operating system. As more network management features moved into the operating systems, network
operating systems evolved.
Like regular operating systems, network operating systems provide services to the programs that run on
top of the operating system. However, the type of services and the manner in which the services are
provided are quite different. The services tend to be much more complex than those provided by regular
operating systems. In addition, the implementation of these services requires the use of multiple
machines, message passing and server processes.
The set of typical services provided by a network operating system includes (but is not limited to):
1. Remote logon and file transfer
2. Transparent, remote file service
3. Directory and naming service
4. Remote procedure call service
5. Object and Brokerage service
6. Time and synchronization service
7. Remote memory service
The network operating system is an extensible operating system. It provides mechanisms to easily add
and remove services, to reconfigure resources, and to support multiple services of the
same kind (for example, two kinds of file systems). Such features make network operating systems
indispensable in large networked environments.
In the early 1980’s network operating systems were mainly research projects. Many network and
distributed operating systems were built. These include such names as Amoeba, Argus, Berkeley Unix,
Choices, Clouds, Cronus, Eden, Mach, Newcastle Connection, Sprite, and the V-System. Many of the
ideas developed by these research projects have now moved into the commercial products. The
commonly available network operating systems include Linux (freeware), Novell Netware,
SunOS/Solaris, Unix and Windows NT.
In addition to the software technology that goes into networked systems, theoretical foundations of
distributed (or networked) systems have been developed. This theory covers topics such as distributed
algorithms, concurrency control, state management, deadlock handling and so on.
Services for Network Operating Systems
System-wide services are the main facility a network operating system provides. These services come in
many flavors and types. Services are functions provided by the operating system that form a substrate
used by those applications which need to interact beyond the boundaries imposed by the
process concept.
A service is provided by a server and accessed by clients. A server is a process or task that continuously
monitors incoming service requests (similar to telephone operators). When a service request comes in,
the server process reacts to the request, performs the task requested and then returns a response to the
requestor. Often, one or more such server processes run on a computer and the computer is called a
server. However, a server process does not have to run on a server machine, and the two terms are often,
and confusingly, used interchangeably.
What is a service? In regular operating systems, the system call interface or API (Application
Programming Interface) defines the set of services provided by the operating system. For example,
operating system services include process creation facilities, file manipulation facilities and so on. These
services (or system calls) are predefined and static. However, this is not the case in a network operating
system. Network operating systems do provide a set of static, predefined services, or system calls like the
regular operating system, but in addition provides a much larger, richer set of dynamically creatable and
configurable services. Additional services are added to the network operating system by the use of server
processes and associated libraries.
Any process making a request to a server process is called a client. A client makes a request by sending a
message to a server containing details of the request and awaiting a response. For each server, there is a
well-defined protocol defining the requests that can be made to that server and the responses that are
expected. In addition, any process can make a request; that is, any process can become a client, even
temporarily. For example, a server process can obtain services from yet another server process, and while
it is doing so, it can be termed a temporary client.
Services provided by a network operating system include file service, name service, object service, time
service, memory service and so on.
1. Peripheral Sharing Service
Peripherals connected to one computer are often shared by other computers, by the use of peripheral
sharing services. These services go by many names, such as remote device access, printer sharing, shared
disks and so on. A computer having a peripheral device makes it available by exporting it. Other
computers can connect to the exported peripheral. After a connection is made, to a user on the machine
connected to a shared peripheral, that peripheral appears to be local (that is, connected to the user's
machine). The sharing service is the most basic service provided by a network operating system.
2. File Service
The most common service that a network operating system provides is file service. File services allow
users of a set of computers to access files and other persistent storage objects from any computer connected
to the network. The files are stored on one or more machines called the file server(s). The machines that
use these files, often called workstations, have transparent access to these files.
Not only is the file service a common service, but it is also the most important service in the network
operating system. Consequently, it is the most heavily studied and optimized service. There are many
different, often non-interoperable protocols for providing file service (3).
The first full-fledged implementation of a file service system was done by Sun Microsystems and is
called the Sun Network File System (Sun-NFS). Sun-NFS has become an industry standard network file
system for computers running the Unix operating system. Sun-NFS can also be used from computers
running Windows (all varieties) and MacOS but with some limitations.
Under Sun-NFS a machine on a network can export a file system tree (i.e. a directory and all its contents
and subdirectories). A machine that exports one or more directories is called a file server. After a
directory has been exported, any machine connected to the file server (could be connected over the
Internet) can import, or mount that file tree. Mounting is a process, by which the exported directory, all
its contents, and all its subdirectories appear to be a local directory on the machine that mounted it.
Mounting is a common method used in Unix systems to build unified file systems from a set of disk
partitions. The mounting of one exported directory from one machine to a local directory on another
machine via Sun-NFS is termed remote mounting.
Figure 1 shows two file servers, each exporting a directory containing many directories and files. These
two exported directories are mounted on a set of workstations, each workstation mounting both the
exported directories from each of the file servers. This configuration results in a uniform file space
structure at each workstation.
While many different configurations are possible by the innovative use of remote mounting, the system
configuration shown in Figure 1 is quite commonly used. This is called the dataless workstation
configuration. In such a setup, all files, data and critical applications are kept on the file servers and
mounted on the workstations. The local disks of the workstations only contain the operating system,
some heavily used applications and swap space.
Sun-NFS works by using a protocol defined for remote file service. When an application program makes
a request to read (or write) a file, it makes a local system call to the operating system. The operating
system then consults its mounting tables to determine if the file is a local file or a remote file. If the file is
local, the conventional file access mechanisms handle the task. If the file is remote, the operating system
creates a request packet conforming to the NFS protocol and sends the packet to the machine having the
file.
The remote machine runs a server process, also called a daemon, named nfsd. Nfsd receives the request
and reads (or writes) the file, as requested by the application and returns a confirmation to the requesting
machine. Then the requesting machine informs the application of the success of the operation. Of course,
the application does not know whether the execution of the file operation was local or remote.
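The mount-table consultation just described can be sketched as follows. The mount table maps a local mount point to a remote server name; the two handler arguments are stand-ins for the real local file code and the NFS request machinery, and all names are illustrative.

```python
def read_file(path, mounts, local_read, remote_read):
    """Dispatch a file access either locally or to a remote server.
    mounts: dict of local mount point -> remote server name."""
    # longest mount-point prefix wins, as with nested Unix mounts
    for prefix in sorted(mounts, key=len, reverse=True):
        if path == prefix or path.startswith(prefix.rstrip("/") + "/"):
            return remote_read(mounts[prefix], path)  # NFS-style remote request
    return local_read(path)                           # conventional local access
```

As the text notes, the application cannot tell which branch was taken; transparency lives entirely in this dispatch step.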
Similar to Sun-NFS, there are several other protocols for file service. These include Appleshare for
Macintosh computers, the SMB protocol for Windows 95/NT and the DFS protocol used in the Andrew
file system. Of these, the Andrew file system is the most innovative.
Andrew, developed at CMU in the late 1980’s is a scalable file system. Andrew is designed to handle
hundreds of file servers and many thousands of workstations without degrading the file service
performance. Degraded performance in other file systems is the result of bottlenecks at file servers and
network access points. The key feature that makes Andrew a scalable system is the use of innovative file
caching strategies. The Andrew file system is also available commercially and is called DFS (Distributed
File System).
In Andrew/DFS when an application accesses a file, the entire file is transmitted from the server to the
workstation, or a special intermediate file storage system, closer to the workstation. Then the application
uses the file, in a manner similar to NFS. After the user running the application logs out of the
workstation, the file is sent back to the server. Such a system however has the potential of suffering from
file inconsistencies if the same user uses two workstations at two locations.
In order to keep a file consistent when it is used concurrently, the file server uses a callback protocol.
The server can recall the file in use by a workstation if another workstation uses it simultaneously. Under
the callback scheme, the server stores the file and both workstations reach the file remotely. Performance
suffers, but consistency is retained. Since concurrent access to a file is rare, the callback protocol is very
infrequently used; and thus does not hamper the scalability of the system.
3. Directory or Name Service
A network of computers managed by a network operating system can get rather large. A particular
problem in large networks is the maintenance of information about the availability of services and their
physical location. For example, a particular client needs access to a database. There are many different
database services running on the network. How would the client know whether the particular service it is
interested in is available, and if so, on what server?
Directory services, sometimes called name services address such problems. Directory services are the
mainstay of large network operating systems. When a client application needs to access a server process,
it contacts the directory server and requests the address of the service. The directory server identifies the
service by its name – all services have unique names. Then the directory server informs the client of the
address of the service – the address contains the name of the server. The directory server is responsible
for knowing the current locations and availability of all services and hence can inform the client of the
unique network address (somewhat like a telephone number) of the service.
The directory service is thus a database of service names and service addresses. All servers register
themselves with the directory service upon startup. Clients find server addresses upon startup. Clients can
retain the results of a directory lookup for the duration of its life, or can store it in a file and thus retain it
potentially forever. Retaining addresses of services is termed address caching. Address caching causes
gains in performance and reduces loads on the directory server. Caching also has disadvantages. If the
system is reconfigured and the service address changes, then the cached data is wrong and can indeed
cause serious disruptions if some other service is assigned that address. Thus, when caching is used,
clients and servers have to verify the accuracy of cached information.
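A minimal sketch of this register-and-lookup idea is shown below. Real directory services such as X.500 or LDAP add hierarchy, replication and access control on top of this basic name-to-address table; the class name and address format are made up for illustration.

```python
class DirectoryServer:
    """Toy directory service: a database of service names and addresses."""

    def __init__(self):
        self.table = {}             # unique service name -> service address

    def register(self, name, address):
        self.table[name] = address  # servers register themselves on startup

    def lookup(self, name):
        return self.table.get(name) # None means the service is unavailable
```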
The directory service is just like any other service, i.e. it is provided by a service process. So there are
two problems:
1. How does the client find the address of the directory service?
2. What happens if the directory service process crashes?
Making the address of the directory service a constant solves the first problem. Different systems have
different techniques for doing this, but a client always has enough information about contacting the
directory service.
To ensure the directory service is robust and not dependent on one machine, the directory service is often
replicated or mirrored. That is, there are several independent directory servers and all of them contain
(hopefully) the same information. A client is aware of all these services and contacts any one. As long as
one directory service is reachable, the client gets the information it seeks. However, keeping the directory
servers consistent, i.e. holding the same information, is not a simple task. This is generally done by using one
of many replication control protocols (see section on Theoretical Foundations).
The directory service has been subsequently expanded not just to handle service addresses, but higher
level information such as user information, object information, web information and so on. A standard
for worldwide directory services over large networks such as the Internet has been developed and is
known as the X.500 directory service. However, the deployment of X.500 has been limited and thus its
importance has eroded. At present, a simpler directory service called LDAP (Lightweight Directory
Access Protocol) is gaining momentum, and most network operating systems provide support for this
protocol.
4. RPC service
A particular mechanism for implementing the services in a network operating system is called Remote
Procedure Calls or RPC. The RPC mechanism is discussed later in the section entitled Mechanisms for
Network Operating Systems. The RPC mechanism needs the availability of an RPC server accessible by
an RPC client. However, a particular system may contain tens if not hundreds or even thousands of RPC
servers. In order to avoid conflicts and divergent communication protocols, the network operating system
provides support for building, managing, and accessing RPC servers.
Each RPC service is an application-defined service. However, the operating system also provides an RPC
service: a meta-service that allows the application-specific RPC services to be used in a
uniform manner. This service provides several features:
1. Management of unique identifiers (or addresses) for each RPC server.
2. Tools for building client and server stubs for packing and unpacking (also known as marshalling and
unmarshalling) of arguments between clients and servers.
3. A per-machine RPC listening service.
The RPC service defines a set of unique numbers that can be used by all RPC servers on the network.
Each specific RPC server is assigned one of these numbers (addresses). The operating system manages
the creation and assignment of these identifiers. The operating system also provides tools that allow the
programmers of RPC services to build a consistent client-server interface. This is done by the use of
language processing tools and stub generators, which embed routines in the client and server code. These
routines package the data sent from the client to the server (and vice versa) in some predefined format,
which is also machine independent.
When a client uses the number to contact the service, it looks up the directory and finds the name of the
physical machine that contains the service. Then it sends an RPC request to the RPC listener on that
machine. The RPC listener is an operating system provided service that redirects RPC calls to the actual
RPC server process that should handle the call.
RPC services are available in all network operating systems. The three most common types of RPC
systems are Sun RPC, DCE RPC and Microsoft RPC.
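The stub-and-listener idea above can be sketched as follows, with JSON standing in for the real machine-independent wire format (e.g. XDR in Sun RPC), a made-up service number, and a module-level table playing the role of the per-machine RPC listener. None of this is a real RPC API.

```python
import json

SERVERS = {}  # RPC number -> handler; the listener's dispatch table

def register(number, handler):
    """Assign a unique RPC number (address) to a server routine."""
    SERVERS[number] = handler

def call(number, proc, *args):
    """Client stub: marshal the call, dispatch it, unmarshal the reply."""
    request = json.dumps({"proc": proc, "args": list(args)})  # marshalling
    msg = json.loads(request)                   # unmarshalling at the server
    result = SERVERS[number](msg["proc"], msg["args"])
    return json.loads(json.dumps(result))       # marshal the reply back
```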
5. Object and Brokerage Service
The success and popularity of RPC services coupled with the object-orientation frenzy of the mid-1980’s
led to the development of Object Services and then to Brokerage services. The concept of object services
is as follows.
Services in networked environments can be thought of as basic services and composite services. Each
basic service is implemented by an object. An object is an instance of a class, while a class is inherited
from one or more base or composite classes. The object is a persistent entity that stores data in a
structured form, and may contain other objects. The object has an external interface, visible from clients
and is defined by the public methods the object supports.
Composite services are composed of multiple objects (basic and composite) which can be embedded or
linked. Thus we can build a highly structured service infrastructure that is flexible, modular and has
unlimited growth potential.
In order to achieve the above concept, the network operating systems started providing uniform methods
of describing, implementing and supporting objects (similar to the support for RPC).
While the concept sounds very attractive in theory, there are some practical problems. These are:
1. How does a client access a service?
2. How does a client know of the available services and the interfaces they offer?
3. How does one actually build objects (or services)?
We discuss the questions in reverse order. The services or objects are built using a language that allows
the specification of objects, classes, and methods; and allows for inheritance and overloading. While C++
seems to be a natural choice, C++ does not provide the features of defining external service interfaces
and does not have the power of remote linking. Therefore, languages have been defined, based on C++
that provide such features.
The client knows of the object interface due to the predefined type of the object providing the service.
The programming language provides and enforces the type information. Hence at compile time, the client
can be configured by the compiler to use the correct interface – based on the class of object the client is
using. However, such a scheme makes the client use a static interface. That is, once a client has been
compiled, the service cannot be updated with new features that change the interface. This need for
dynamic interface management leads to the need for Brokerage Services.
After the client knows of the existence of the service, and the interface it offers, the client accesses the
service using two key mechanisms – the client stub and the ORB (Object Request Broker). The client
stub transforms a method invocation into a transmittable service request. Embedded in the service request
is ample information about the type of service requested and the arguments (and type of these arguments)
and the type of expected results. The client stub then sends a message to the ORB handling requests of
this type.
The ORB is just one of the many services a brokerage system provides. The ORB is responsible for
handling client requests and is an intermediary between the client and the object. Thus, the ORB is a
server-side stub that receives incoming service requests and converts them to correct formats, and sends
them to the appropriate objects.
The Brokerage Service is a significantly more complex entity. It is responsible for handling:
1. Names and types of objects and their locations and types.
2. Controlling the concurrency of method invocations on objects, if they happen concurrently.
3. Event notification and error handling.
4. Managing the creation and deletion of objects and updates of objects as they happen, dynamically.
5. Handling the persistence and consistency of objects. Some critical objects may need transaction
management.
6. Handle queries about object capabilities and interfaces.
7. Handle reliability and replication.
8. Provide Trader Services.
The Trader Service mentioned above is interesting. The main power in object services is unleashed when
clients can pick and choose services dynamically. For example, a client wants access to a database object
containing movies. Many such services may exist on the network offering different or even similar
features. The client can first contact the trader, get information about services (including quality, price,
range of offerings and so on) and then decide to use one of them. This is, of course, based on the
successful, real-world business model. Trader services thus offer viable and useful methods of interfacing
clients and objects on a large network.
The object and brokerage services depend heavily upon standards, as all programs running on a network
have to conform to the same standard, in order to inter-operate. As of writing, the OSF-DCE (Open
Software Foundation, Distributed Computing Environment) is the oldest multi-platform standard, but has
limited features (does not support inheritance, dynamic interfaces and so on). The CORBA (Common
Object Request Broker Architecture) standard is gaining importance as a much better standard and is
being deployed quite aggressively. Its competitor, the DCOM (Distributed Component Object Model)
standard, is also gaining momentum, but its availability seems to be currently limited to the Windows
family of operating systems.
6. Group Communication Service
Group communication is an extension of multicasting for communicating process groups. When the
recipient of a message is a set of processes the message is called a multicast message (a single recipient
message – unicast, all processes are recipients – broadcast). A process group is a set of processes whose
membership may change over time. If a process sends a multicast message to a process group, all
processes that are members of the group will receive this message. Simple implementations of
multicasting do not work for group communication, for reasons such as the following:
1. A process may leave the group and then receive messages sent to the group by a process that is not yet
aware of the membership change.
2. Process P1 sends a multicast. In response to the multicast, process P2 sends another multicast.
However, P2’s message arrives at P3 before P1’s message. This is causally inconsistent.
3. Some processes, which are members of the group, may not receive a multicast due to message loss or
corruption.
Group communication protocols solve such problems by providing several important multicasting
primitives. These include reliable multicasting, atomic multicasting, causally-related multicasting as well
as dynamic group membership maintenance protocols.
The main provision in a group communication system is the provision of multicasting primitives. Some
of the important ones are:
Reliable Multicast: The multicast is sent to all processes and then retransmitted to processes that did not
get the message, until all processes get the multicast. Reliable multicasts may not deliver all messages if
some network problems arise.
Atomic Multicast: Similar to the reliable multicast, but guarantees that all processes will receive the
message. If it is not possible for all processes to receive the message, then no process will receive the
message.
Totally Ordered Multicast: All the multicasts are ordered strictly, that is all the receivers get all the
messages in exactly the same order. Totally ordered multicasting is expensive to implement and is not
necessary (in most cases). Causal multicasting is powerful enough for use by applications that need
ordered multicasting.
Causally Ordered Multicast: If two multicast messages are causally related in some way then all
recipients of these multicasts will get them in the correct order.
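One common way to detect the causal violation in point 2 above is with vector clocks (an assumption for illustration; the text does not prescribe a mechanism). A message from sender s carrying clock V is deliverable at a process with local clock C when V[s] == C[s] + 1 and V[k] <= C[k] for every other process k; otherwise it is held back until the missing causally earlier messages arrive.

```python
def deliverable(sender, v_msg, v_local):
    """Causal-order delivery test using vector clocks (lists indexed by
    process id). Returns True if the message may be delivered now."""
    if v_msg[sender] != v_local[sender] + 1:
        return False                      # a message from the sender is missing
    return all(v_msg[k] <= v_local[k]     # no unseen causal dependencies
               for k in range(len(v_msg)) if k != sender)
```

In the scenario from point 2, P2's reply (which causally follows P1's multicast) fails this test at P3 until P1's message has been delivered.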
Implicit in the notion of multicasting is the notion of dynamic process groups. A multicast is sent to a
process group and all current members of that group receive the message. The sender does not have to
belong to the group.
Group communications is especially useful in building fault-tolerant services. For example, a set of
separate servers providing the same service is assigned to a group, and all service requests are sent via
causally ordered multicasting. Now all the servers will do exactly the same thing, and if one server fails, it
can be removed from the group. This approach is used in the ISIS system (4).
7. Time, Memory and Locking Services
Managing time on a distributed system is inherently difficult. Each machine runs its own
clock, and these clocks drift independently. In fact, there is no method to even "initially" synchronize the
clocks. Time servers provide a notion of time to any program interested in time, based on one of many
clock algorithms (see section on theoretical foundations). Time services have two functions: provide
consistent time information to all processes on the system and to provide a clock synchronization method
that ensures all clocks on all systems appear to be logically synchronized.
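One of the standard clock algorithms alluded to here is Cristian's algorithm (chosen as an example; the text does not name one): a client asks the time server for the current time and corrects the reply by half the measured round-trip time, on the assumption that network delay is roughly symmetric.

```python
def estimate_time(server_time, t_sent, t_received):
    """Cristian-style estimate of the current time at the client.
    server_time: timestamp in the server's reply;
    t_sent/t_received: local clock readings around the request."""
    rtt = t_received - t_sent
    return server_time + rtt / 2.0   # assume symmetric one-way delay
```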
Memory services provide a logically shared memory segment to processes not running on the same
machine. The method used for this service is described later. A shared memory server provides the
service and processes can attach to a shared memory segment which is automatically kept consistent by
the server.
There is often a need for locking a resource on the network, by a process. This is especially true in
systems using shared memory. While locking is quite common and simple on a single computer, it is not
so easy on a network. Thus, networks use a locking service. A locking service is typically a single server
process that tracks all locked resources. When a process asks for a lock on a resource, the server grants
the lock if that lock is currently not in use, else it makes the requesting process wait till the lock is
released.
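The lock server behaviour just described can be sketched as below. Waiting is modelled by queueing the requester rather than by blocking a real process, and the class and method names are illustrative, not drawn from any particular system.

```python
from collections import deque

class LockServer:
    """Toy single-server lock manager tracking all locked resources."""

    def __init__(self):
        self.holder = {}    # resource -> process currently holding the lock
        self.waiting = {}   # resource -> queue of processes waiting for it

    def acquire(self, resource, process):
        if resource not in self.holder:
            self.holder[resource] = process
            return True     # lock granted immediately
        self.waiting.setdefault(resource, deque()).append(process)
        return False        # caller must wait until the lock is released

    def release(self, resource):
        queue = self.waiting.get(resource)
        if queue:
            self.holder[resource] = queue.popleft()  # hand lock to next waiter
        else:
            self.holder.pop(resource, None)          # lock becomes free
```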
8. Other Services
A plethora of other services exists in network operating systems. These services can be loosely divided
into two classes (1) services provided by the core network operating system and (2) services provided by
applications.
Services provided by the operating system are generally low-level services used by the operating system
itself, or by applications. These services of course vary from one operating system to another. The
following is a brief overview of services provided by most operating systems that use the TCP/IP
protocol suite for network communications:
1. Logon services: These include telnet, rlogin, ftp, rsh and other authentication services that allow
users on one machine to access facilities of other machines.
2. Mail services: These include SMTP (Simple Mail Transfer Protocol), POP (Post Office Protocol),
and IMAP (Internet Message Access Protocol). These services provide the underlying framework for
transmitting and accessing electronic mail. The mail application provides a nicer interface to the end
user, but uses several of these low-level protocols to actually transmit and receive mail messages.
3. User Services: These include finger, rwho, whois and talk.
4. Publishing services: These include HTTP (Hyper Text Transfer Protocol), NNTP (Network News
Transfer Protocol), Gopher and WAIS. These protocols provide the backbone of the Internet
information services such as the WWW and the news network.
Application-defined services, on the other hand, are used by specific applications that run on the network
operating system. One of the major attributes of a network operating system is that it can provide support
for distributed applications. These application programs span machine boundaries and user boundaries.
That is, these applications use resources (both hardware and software) of multiple machines and input
from multiple users to perform a complex task. Examples include parallel processing and CSCW
(Computer Supported Cooperative Work).
Such distributed applications use the RPC services or object services provided by the underlying system
to build services specific to the type of computation being performed. Parallel processing systems use the
message passing and RPC mechanisms to provide remote job spawning and distribution of computational
workload among all available machines on the network. CSCW applications provide services such as
whiteboards and shared workspaces, which can be used by multiple persons at different locations on the
network.
A particular, easy-to-understand application is a calendaring program. In calendaring applications, a
server maintains information about the appointments and free periods of a set of people. All individuals set
up their own schedules using a front-end program, which sends this data to the server. If a person
wants to set up a meeting, he or she can query the server for a list of free periods for a specified set of
people. After the server provides some alternatives, the person schedules a particular time and informs all
the participants. While the scheduling decision is pending, the server marks the appointment time as
temporarily unavailable on the calendars of all participating members. Thus, the calendaring application
provides its own unique service: the calendar server.
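The calendar server described above can be sketched as a tiny in-memory service. This is only an illustration: the class name, the representation of schedules as busy hour slots, and the tentative-hold method are all assumptions, not part of any particular product.

```python
# A minimal sketch of a calendar server that tracks busy periods per person,
# answers free-period queries, and tentatively holds a slot while a
# scheduling decision is pending.

class CalendarServer:
    def __init__(self):
        self.busy = {}                     # person -> set of busy hour slots

    def upload(self, person, busy_slots):
        """Front-end program sends a person's schedule to the server."""
        self.busy[person] = set(busy_slots)

    def free_periods(self, people, slots):
        """Return the slots in which every listed person is free."""
        return [s for s in slots
                if all(s not in self.busy.get(p, set()) for p in people)]

    def hold(self, people, slot):
        """Tentatively mark a slot unavailable while a decision is pending."""
        for p in people:
            self.busy.setdefault(p, set()).add(slot)

server = CalendarServer()
server.upload("alice", [9, 10])            # alice is busy at 9:00 and 10:00
server.upload("bob", [10, 11])             # bob is busy at 10:00 and 11:00
options = server.free_periods(["alice", "bob"], range(9, 13))
server.hold(["alice", "bob"], options[0])  # hold the first common free slot
```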
LECTURE NO. 35 &36 READINGS: A-PAGE 738
Mechanisms for Network Operating Systems
Network operating systems provide three basic mechanisms that are used to support the services
provided by the operating system and applications. These mechanisms are (1) Message Passing, (2)
Remote Procedure Calls and (3) Distributed Shared Memory. These mechanisms support a feature called
Inter-Process Communication, or IPC. While all three mechanisms are suitable for all kinds of
interprocess communication, programmers favor RPC and DSM over message passing.
1. Message Passing
Message passing is the most basic mechanism provided by the operating system. This mechanism allows
a process on one machine to send a packet of raw, uninterpreted stream of bytes to another process.
In order to use the message passing system, a process wanting to receive messages (the receiving
process) creates a port (or mailbox). A port is an abstraction for a buffer in which incoming messages
are stored. Each port has a unique system-wide address, which is assigned when the port is created. A
port is created by the operating system upon a request from the receiving process, at the
machine where the receiving process executes. The receiving process may then choose to register the port
address with a directory service.
After a port is created, the receiving process can request the operating system to retrieve a message from
the port and provide the received data to the process. This is done via a receive system call. If there are
no messages in the port, the process is blocked by the operating system until a message arrives. When a
message arrives, the process is woken up and is allowed to access the message.
A message arrives at a port, after a process sends a message to that port. The sending process creates the
data to be sent and packages the data in a packet. Then it requests the operating system to deliver this
message to the particular port, using the address of the port. The port can be on the same machine as the
sender, or a machine connected to the same network.
When a message is sent to a port that is not on the same machine as the sender (the most common case)
this message traverses a network. The actual transmission of the message uses a networking protocol that
provides routing, reliability, accuracy and safe delivery. The most common networking protocol is
TCP/IP. Other protocols include IPX/SPX, AppleTalk, NetBEUI, PPTP and so on. Network protocols use
techniques such as packetizing, checksums, acknowledgements, gatewaying, routing and flow control to
ensure messages that are sent are received correctly and in the order they were sent.
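The port-based send/receive flow described above can be sketched in miniature, with threads standing in for processes on different machines, a queue standing in for a port's buffer, and a dictionary standing in for the directory service. All names here are illustrative; a real system would route the message over the network.

```python
# A minimal sketch of port-based message passing: create_port registers a
# buffer under a system-wide address, send delivers raw bytes to it, and
# receive blocks until a message arrives.
import queue
import threading

directory = {}                   # port address -> port (the directory service)

def create_port(address):
    """Create a port (buffer) and register its address with the directory."""
    port = queue.Queue()
    directory[address] = port
    return port

def send(address, data):
    """Deliver a raw, uninterpreted message to the port at this address."""
    directory[address].put(data)

def receive(port):
    """Block until a message arrives at the port, then return it."""
    return port.get()            # blocks the caller while the port is empty

# Receiving process: looks up its port, then blocks on receive().
results = []
def receiver():
    port = directory["svc:42"]
    results.append(receive(port))

create_port("svc:42")
t = threading.Thread(target=receiver)
t.start()
send("svc:42", b"three raw bytes")   # the sender only needs the port address
t.join()
```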
Message passing is the basic building block of distributed systems. Network operating systems use
message passing for inter-kernel as well as inter-process communications. Inter-kernel communications
are necessary as the operating system on one machine needs to cooperate with operating systems on other
machines to authenticate users, manage files, handle replication and so on.
Programming using message passing is achieved by using the send/receive system calls and the port
creation and registering facilities. These facilities are part of the message passing API provided by the
operating system. However, programming using message passing is considered to be a low-level
technique that is error prone and best avoided. This is due to the unstructured nature of message passing.
Message passing is unstructured, as there are no structural restrictions on its usage. Any process can send
a message to any port. A process may send messages to a process that is not expecting any. A process
may wait for messages from another process, and no message may originate from the second process.
Such situations can lead to bugs that are very difficult to detect. Sometimes timeouts are used to get out
of the blocked receive calls when no messages arrive – but the message may actually arrive just after the
timeout fires.
Even worse, the messages contain raw data. Suppose a sender sends three integers to a receiver who is
expecting one floating-point value. This will cause very strange and often undetected behaviors in the
programs. Such errors occur frequently due to the complex nature of message passing programs and
hence better mechanisms have been developed for programs that need to cooperate.
Even so, a majority of the software developed for providing services and applications in networked
environments use message passing. Some minimization of errors is done by strictly adhering to a
programming style called the client-server programming paradigm. In this paradigm, some processes are
pre-designated as servers. A server process consists of an infinite loop. Inside the loop is a receive
statement which waits for messages to arrive at a port called the service port. When a message arrives,
the server performs some task requested by the message and then executes a send call to send back
results to the requestor and goes back to listening for new messages.
The other processes are clients. A client sends a message to a server and then waits for a response
using a receive. In other words, every send in a client process must be followed by a receive, and every
receive at a server process must be followed by a send. Following this scheme significantly reduces
timing-related bugs.
The performance of client-server based programs is, however, poorer than what can be achieved by
other, less disciplined coding techniques. To alleviate this, a multi-threaded server is often used. In a
multi-threaded server, several parallel threads listen on the same port for incoming messages and
perform requests in parallel, yielding quicker service response times.
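The client-server paradigm and the multi-threaded server described above can be sketched as follows. The echo service is an invented example, and the use of Python's `socketserver` module is an implementation choice for illustration, not something the text prescribes.

```python
# A minimal sketch of the client-server paradigm: the server loop waits for a
# message, performs the task, and sends results back; a threaded server
# handles several clients in parallel on the same service port.
import socket
import socketserver
import threading

class EchoHandler(socketserver.BaseRequestHandler):
    """Body of the server loop: receive a request, do the work, reply."""
    def handle(self):
        data = self.request.recv(1024)         # wait for the client's message
        self.request.sendall(b"echo:" + data)  # send results to the requestor

# ThreadingTCPServer spawns one thread per connection (the multi-threaded
# server); port 0 asks the OS for any free port.
server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), EchoHandler)
host, port = server.server_address
threading.Thread(target=server.serve_forever, daemon=True).start()

def call(payload):
    """Client: every send is followed by a receive, per the paradigm."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(payload)
        return sock.recv(1024)

reply = call(b"hello")
server.shutdown()
server.server_close()
```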
Two better inter-process communication techniques are RPC and DSM, described below.
2. Remote Procedure Calls (RPC)
Remote Procedure Calls, or RPC is a method of performing inter-process communication with a familiar,
procedure call like mechanism. In this scheme, to access remote services, a client makes a procedure call,
just like a regular procedure call, but the procedure executes within the context of a different process,
possibly on a different machine. The RPC mechanism is similar to the client-server programming style
used in message passing. However, unlike message passing where the programmer is responsible for
writing all the communication code, in RPC a compiler automates much of the intricate details of the
communication.
In concept, RPC works as follows: A client process wishes to get service from a server. It makes a remote
procedure call on a procedure defined in the server. In order to do this the client sends a message to the
RPC listening service on the machine where the remote procedure is stored. In the message, the client
sends all the parameters needed to perform the task. The RPC listener then activates the procedure in the
proper context, lets it run and returns the results generated by the procedure to the client program.
However, much of this task is automated and not under programmer control.
An RPC service is created by a programmer who (let us assume) writes the server program as well as the
client program. In order to do this, he or she first writes an interface description using a special language
called the Interface Description Language (IDL). All RPC systems provide an IDL definition and an IDL
compiler. The interface specification of a server documents all the procedures available in the server and
the types of arguments they take and the results they provide.
The IDL compiler compiles this specification into two files, one containing C code that is to be used for
writing the server program and the other containing code used to write the client program.
The part for the server contains the definitions (or prototypes) of the procedures supported by the server.
It also contains some code called the server loop. To this template, the programmer adds the global
variables, private functions and the implementation of the procedures supported by the interface. When
the resulting program is compiled, a server is generated. The server loop inserted by the IDL compiler
contains code to:
1. Register the service with a name server.
2. Listen for incoming requests (could be via the listening service provided by the operating system).
3. Parse the incoming request and call the appropriate procedure using the supplied parameters. This
step requires the extraction of the parameters from the message sent by the client. The extraction
process is called unmarshalling. During unmarshalling some type-checking can also be performed.
4. After the procedure returns, the server loop packages the return results into a message (marshalling)
and sends a reply message to the client.
Note that all the above functionality is automatically inserted into the RPC server by the IDL compiler
and the programmer does not have to write any of these.
Then the programmer writes the client. In the client program, the programmer #include’s the header file
for clients generated by the IDL compiler. This file has the definitions and pseudo-implementations (or
proxies) of the procedures that are actually in the server. The client program is written as if the calls to
the remote procedures are in fact local procedure calls. When the client program is run, the stubs inserted
via the header files play an important role in the execution of the RPCs.
When the client process makes a call to a remote procedure, it actually calls a local procedure, which is a
proxy for the remote procedure. This proxy procedure (or stub) gets all the arguments passed to it and
packages them in some predefined format. This packaging is called marshalling. After the arguments are
marshaled, they are sent to the RPC server that handles requests for this procedure. Of course, as
described above, the RPC server unmarshals arguments, runs the procedure and marshals results. The
results flow back to the client, and the proxy procedure gets them. It unmarshals the results and returns
control to the calling statement, just like a regular local procedure.
One problem remains: how does the client know the address of the server handling a particular
procedure call? This function is automated too. The IDL compiler, when compiling an interface
definition, obtains a unique number from the operating system and inserts it into both the client stub and
the server stub, as a constant. The server registers this number with its address on the name service. The
client uses this number to look up the server’s address from the name service.
The net effect is that a programmer can write a set of server routines that can be used from multiple
client processes running on a network of machines. Writing these routines takes minimal effort, and
calling them from remote processes is not difficult either. There is no need to write communications
routines, or routines to manage arguments and handle type checking. This automation greatly reduces the
chance of bugs, and has led to the acceptance of RPC as the preferred distributed programming tool.
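The stub, marshalling and dispatch machinery described above can be sketched in miniature. In this sketch JSON stands in for the IDL-defined wire format, an in-process call stands in for the network hop, and all procedure and function names are hypothetical.

```python
# A minimal sketch of the RPC flow: the client-side stub (proxy) marshals
# arguments into a message, the server loop unmarshals them, calls the real
# procedure, and marshals the result back.
import json

# --- Server side: the real procedure and a simplified server loop ---
def add(a, b):
    return a + b

PROCEDURES = {"add": add}        # what the server loop can dispatch to

def server_dispatch(request_bytes):
    """Unmarshal the request, call the procedure, marshal the reply."""
    request = json.loads(request_bytes)             # unmarshalling
    result = PROCEDURES[request["proc"]](*request["args"])
    return json.dumps({"result": result}).encode()  # marshalling the reply

# --- Client side: the stub the IDL compiler would generate ---
def add_stub(a, b):
    """Looks like a local call; actually marshals and sends a message."""
    message = json.dumps({"proc": "add", "args": [a, b]}).encode()
    reply = server_dispatch(message)  # in reality, sent over the network
    return json.loads(reply)["result"]
```

From the client program's point of view, `add_stub(2, 3)` reads exactly like a local procedure call, which is the point of RPC.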
3. Distributed Shared Memory (DSM)
While message passing and RPC are the mainstays of distributed programming, and are available on all
network operating systems, Distributed Shared Memory, or DSM, is not at all ubiquitous. On a distributed
system, DSM provides a logical equivalent to (real) shared memory, which is normally available only on
multiprocessor systems.
Multiprocessor systems have the ability of providing the same physical memory to multiple processors.
This is a very useful feature and has been utilized heavily for parallel processing and inter-process
communication in multiprocessor machines. While RPC and message passing are also possible on
multiprocessor systems, using shared memory for communication and data sharing is more natural and is
preferred by most programmers.
While shared memory is naturally available in multiprocessors, due to the physical design of the
computer, it is neither available nor was thought to be possible on a distributed system. However, the
DSM concept has proven that a logical version of shared memory, which works just like the physical
version, albeit at reduced performance, is both possible and is quite useful.
DSM is a feature by which two or more processes on two or more machines can map a single shared
memory segment to their address spaces. This shared segment behaves like real shared memory, that is,
any change made by any process to any byte in the shared segment is instantaneously seen by all the
processes that map the segment. Of course, this segment cannot be at all the machines at the same time,
and updates cannot be immediately propagated, due to the limitations of speed of the network.
DSM is implemented by having a DSM server that stores the shared segment, that is, it holds the data
contained in the shared segment. The segment is an integral number of pages. When a process maps the
segment to its address space, the operating system reserves the address range in memory and marks the
virtual addresses of the mapped pages as inaccessible (via the page table). If this process accesses any
page in the shared segment, a page fault is caused. The DSM client is the page fault handler of the
process.
The workings of DSM are rather complex due to the enormous number of cases the algorithm has to
handle. Modern DSM systems provide intricate optimizations that make the system run faster but are
hard to understand. In this section, we discuss a simple, un-optimized DSM system – which if
implemented would work, but would be rather inefficient.
DSM works with memory by organizing it as pages (similar to virtual memory systems). The mapped
segment is a set of pages. The protection attributes of these pages are set to inaccessible, read-only or
read-write:
1. Inaccessible: This denotes that the current version of the page is not available on this machine and
the server needs to be contacted before the page can be read or written.
2. Read-only: This denotes that the most recent version of the page is available on this machine, i.e. the
process on this machine holds the page in read mode. Other processes may also have the page in
read-only mode, but no process has it in write mode. This page can be freely read, but not updated
without informing the DSM server.
3. Read-write: This denotes that this machine has the sole, latest version of the page, i.e. the process on
this machine holds the page in write mode. No other process has a copy of this page. It can be freely
read or updated. However, if this page is needed anywhere else, the DSM server may yank the
privileges by invalidating the page.
The DSM client or page fault handler is activated whenever there is a page fault. When activated, the
DSM client first determines whether the page fault was due to a read access or a write access. The two
cases are different and are described separately, below:
Read Access Fault:
On a read access fault, the DSM client contacts the DSM server and asks for the page in read mode. If
there are no clients that have already requested the page in write mode, the server sends the page to the
DSM client. After getting the page, the DSM client copies it into the memory of the process, at the
correct address, and sets the protection of the page as read-only. It then restarts the process that caused the
page fault.
If there is one client already holding the page in write mode (there can be at most one client in write
mode) then the server first asks the client to relinquish the page. This is called invalidation. The client
relinquishes the page by sending it back to the server and marking the page as inaccessible. After the
invalidation is done, the server sends the page to the requesting client, as before.
Write Access Fault:
On a write access fault, the DSM client contacts the server and requests the page in write mode. If the
page is not currently used in read or write mode by any other process, the server provides a copy of the
page to the client. The client then copies the page to memory, sets the protection to read-write and
restarts the process.
If the page is currently held by some processes in read or write mode, the server invalidates all these
copies of the page. Then it sends the page to the requesting client, which installs it and sets the protection
to read-write.
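The read- and write-fault handling described above can be sketched as a small simulation of the un-optimized protocol, with the server tracking which clients hold each page and in what mode. The class and method names are hypothetical, and real page contents and network transfers are omitted.

```python
# A minimal sketch of the simple DSM protocol: at most one writer per page,
# any number of readers, and invalidation before granting conflicting access.

class DSMServer:
    def __init__(self, num_pages):
        self.readers = {p: set() for p in range(num_pages)}   # read-only holders
        self.writer = {p: None for p in range(num_pages)}     # at most one writer

    def read_fault(self, page, client):
        """Grant the page in read mode, invalidating any writer first."""
        if self.writer[page] is not None:
            self.invalidate(page, self.writer[page])
        self.readers[page].add(client)

    def write_fault(self, page, client):
        """Invalidate every copy, then grant the page in read-write mode."""
        for holder in list(self.readers[page]):
            self.invalidate(page, holder)
        if self.writer[page] is not None and self.writer[page] != client:
            self.invalidate(page, self.writer[page])
        self.writer[page] = client

    def invalidate(self, page, client):
        """Ask a client to relinquish its copy of the page."""
        self.readers[page].discard(client)
        if self.writer[page] == client:
            self.writer[page] = None

server = DSMServer(num_pages=4)
server.read_fault(0, "A")    # A holds page 0 in read-only mode
server.write_fault(0, "B")   # B's write fault invalidates A's copy
```

Note how a page that is read on one machine and written on another bounces between the two, which is exactly the page shuttling problem discussed below.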
The net effects of the above algorithm are as follows:
1. Only pages that are used by a process on a machine migrate to that machine.
2. Pages that are read by several processes migrate to the machines these processes are running on.
Each machine has a copy.
3. Pages that are being updated, migrate to the machines they are being updated on, however there is at
most one update copy of the page at any point in time. If the page is being simultaneously read and
updated by two or more machines, then the page shuttles back and forth between these machines.
Page shuttling is a serious problem in DSM systems, and many algorithms are used to prevent it.
Effective prevention comes from relaxed memory coherence requirements, such as release consistency.
Page shuttling can also be minimized by careful application design.
The first system to incorporate DSM was Ivy (5). Several DSM packages are available; these include
TreadMarks, Quarks, Avalanche and Calypso.
Kernel Architectures
Operating systems have always been constructed (and often still are) using the monolithic kernel
approach. The monolithic kernel is a large piece of protected software that implements all the services
the operating system has to offer via a system call interface (or API). This approach has some significant
disadvantages. The kernel, unlike application programs, is not a sequential program. A kernel is an
interrupt driven program. That is, different parts of the kernel are triggered and made to execute at
different (and unpredictable) points in time, due to interrupts. In fact, the entire kernel is interrupt driven.
The net effect of this structure is that:
1. The kernel is hard to program. The dependencies of the independently interrupt-triggerable parts are
hard to keep track of.
2. The kernel is hard to debug. There is no way of systematically running and testing the kernel. When a
kernel is deployed, random parts start executing quite unpredictably.
3. The kernel is crucial. A bug in the kernel causes applications to crash, often mysteriously.
4. The kernel is very timing dependent. Timing errors are hard-to-catch problems that are not
repeatable, and the kernel often contains many such glitches that go undetected.
The emergence of network operating systems saw the sudden drastic increase in the size of kernels. This
is due to the addition of a whole slew of facilities in the kernel, such as message passing, protocol
handling, network device handling, network file systems, naming systems, RPC handling, time
management and so on. Soon it was apparent that this bloat led to kernel implementations that are
unwieldy, buggy and doomed to fail.
This rise in complexity resulted in the development of an innovative kernel architecture, targeted at
network operating systems, called the microkernel architecture. A true microkernel places in the kernel
only those features that positively have to be there. This includes low-level services such as
CPU scheduling, memory management, device drivers and network drivers. A low-level
message passing interface is then placed in the kernel; the user-level API is essentially just the message
passing routines.
All other services are built outside the kernel, using server processes. It has been shown that almost every
API service and all networking services can be placed outside the kernel. This architecture has some
significant benefits, a few of which are listed below:
1. Services can be programmed and tested separately. Changes to a service do not require recompiling
the microkernel.
2. All services are insulated from each other – bugs in one service do not affect another service. This is
not only a good feature, but makes debugging significantly easier.
3. Adding, updating and reconfiguring services are trivial.
4. Many different implementations of the same service can co-exist.
Microkernel operating systems that proved successful include Amoeba (10), Mach (12) and the VSystem
(14). A commercial microkernel operating system called Chorus is marketed by Chorus Systems
(France).
The advantages of microkernels come at a price, namely performance. Performance of operating systems
is an all-important feature that can make or break the usage of the system, especially commercial
systems. Hence, commercial systems typically shun the microkernel approach but choose a compromise
called the hybrid kernel. A hybrid kernel is a microkernel in spirit, but a monolithic kernel in reality. The
Chorus operating system pioneered the hybrid kernel. Windows NT is also a hybrid system.
A hybrid system starts as a microkernel. Then as services are developed and debugged they are migrated
into the kernel. This retains some of the advantages of the microkernel, but the migration of services into
the kernel significantly improves the performance.
Network operating systems (NOS) are typically used to run computers that act as servers. They provide the
capabilities required for network operation. Network operating systems are also designed for client
computers, so the distinction between network operating systems and stand-alone
operating systems is not always obvious. Network operating systems provide the following functions:
• File and print sharing.
• Account administration for users.
• Security.
Installed components:
• Client functionality
• Server functionality
Functions provided:
• Account administration for users
• Security
• File and print sharing
• Network services
• File sharing
• Print sharing