A network is a group of connected devices. People who use computers to communicate with one another need a network. Simply stated, the purpose of a network is to move information from a source to a destination.
A network is an interconnection of computers. These computers can be linked together using a wide variety of cabling types, for a wide variety of purposes.
People use computers and networks for a wide variety of reasons. Three common reasons that people use
networks to send information from a source, such as a personal computer (PC), to a destination, such as a
printer, are:
1. Communicate and collaborate (e.g., e-mail and newsgroups)
2. Share information (e.g., document sharing)
3. Share resources (e.g., printers and servers)
Take for example a typical office scenario where a number of users in a small business require access to
common information. As long as all user computers are connected via a network, they can share their
files, exchange mail, schedule meetings, send faxes and print documents all from any point of the
network.
It would not be necessary for users to transfer files via electronic mail or floppy disk; rather, each user
could access all the information they require, thus leading to less wasted time and hence greater
productivity.
Imagine the benefits of a user being able to directly fax the Word document they are working on, rather
than print it out, then feed it into the fax machine, dial the number etc.
Small networks are often called Local Area Networks (LAN). A LAN is a network allowing easy access to
other computers or peripherals. The typical characteristics of a LAN are:
Physically limited (< 2 km)
High bandwidth (> 1 Mbps)
Inexpensive cable media (coax or twisted pair)
Data and hardware sharing between users
Owned by the user
Network Devices
A source or destination can be any device capable of transferring information electronically from one point
(source) to another point (destination). There are many examples of devices that communicate over a
network. They take many forms and vary widely in capabilities. These include:
PCs
Macintosh computers
Workstations
Printers
Servers
Generically speaking, these devices are referred to as nodes. Nodes are the endpoints of a network, connected together to form the network. The connection between nodes is made using some type of connection medium. Examples of connection media include:
Copper cables
Fiber optic cables
Radio waves/wireless
Networks are used in a wide variety of ways to tie computers together so they can communicate with one
another and provide services to the user of a network.
Computer Components
Computers come in all shapes and sizes, and are manufactured to serve different purposes. Some
computers are made for operation in a single-user environment. Other computers are made to support a
small number of users in a workgroup environment, while still others may support thousands of users in a
large corporation.
Computers attach to a network through a network interface card (NIC). Typically cables are attached to a
NIC to connect to other computers or networks.
Several aspects of computer technology to be considered are:
Video
Microprocessor
Memory
Storage
Input/Output
Application software
System software
Device driver
Whether the computer that attaches to the network is a small desktop computer or a powerful mainframe,
all computers contain the same basic structure and the components mentioned above.
Computers are the endpoints, or nodes, in a network and come in a variety of shapes and sizes. It is
important to understand common components found in most computer systems.
The CPU is the brain of any computer. The CPU executes instructions developed by a computer
programmer, and directs the flow of information within the computer. The terms microprocessor and CPU
are often used interchangeably. At the heart of a CPU is a microprocessor.
A CPU runs computer programs, and directs the flow of information between different components of a computer. Two common measures of a CPU are its word size (in bits) and its clock speed (in MHz). In both cases, the higher the value, the more powerful the CPU. For example, a 64-bit microprocessor
that runs at 450 MHz is more powerful than a 16-bit microprocessor that runs at 100 MHz. The vast
majority of all desktop PCs incorporate a single Intel architecture processor (such as Pentium). Although
Intel is the world’s largest microprocessor manufacturer, it does not have the entire PC processor market.
For example, Advanced Micro Devices (AMD) manufactures processors comparable to the Intel Pentium.
The I/O of a computer travels over a bus, which is a collection of wires that transmit data from one part of
a computer to another. When used in reference to desktop computers, the term bus usually refers to an
internal bus. An internal bus connects all internal computer components to the CPU and main memory.
Expansion slots connect plug-in expansion boards, such as internal modems or NICs, to the I/O bus. All
buses consist of two parts, an address bus and a data bus. The data bus transfers actual data, and the
address bus transfers information about where the data should go.
The size of a bus, known as its width, is important because it determines how much data can be
transmitted at one time. For example, a 16-bit bus can transmit 16 bits of data, and a 32-bit bus can
transmit 32 bits of data.
Every bus has a clock speed measured in MHz; the faster the bus clock, the faster the bus’ data transfer
rate. A fast bus allows data to be transferred faster, making applications run faster. On PCs, the ISA bus is
being replaced by faster buses, such as the PCI bus. PCs made today include a local bus for data that
requires especially fast transfer speeds, such as video data. The local bus is a high-speed pathway that
connects directly to the processor.
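As a rough illustration of how width and clock speed combine, the theoretical peak transfer rate is the bus width multiplied by the clock rate. This assumes one full-width transfer per clock cycle, which is a simplification; real buses often need multiple cycles per transfer. The figures below are illustrative:

```python
def peak_transfer_rate_mbps(bus_width_bits: int, clock_mhz: int) -> int:
    """Theoretical peak transfer rate in megabits per second,
    assuming one full-width transfer per clock cycle (a simplification)."""
    return bus_width_bits * clock_mhz

# A 16-bit bus at 8 MHz (ISA-class) vs. a 32-bit bus at 33 MHz (PCI-class):
print(peak_transfer_rate_mbps(16, 8))    # 128
print(peak_transfer_rate_mbps(32, 33))   # 1056
```

This is why widening the bus and raising its clock both make applications run faster: each multiplies the amount of data moved per second.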
A computer’s memory stores information currently being worked on by the CPU; it is a short-term storage
component of a computer. Memory differs from the long-term storage systems of a computer, such as a
hard drive or floppy, where information is stored for longer periods of time. When you run a program, it is
typically read from a disk drive (such as a hard drive or CD-ROM drive), and put in memory for execution.
After the program has completed or is no longer needed, it is typically removed from memory. Some programs remain in memory after execution and are called Terminate and Stay Resident (TSR) programs. Such a program waits for an event to occur that notifies it to begin processing. Memory is the system's internal, board-mounted storage area in the computer, also known as physical memory. The term memory
identifies data storage that comes in the form of chips or plug-in modules, such as SIMMs or DIMMs. Most
computers also use virtual memory, which is physical memory swapped to a hard disk. Virtual memory
expands the amount of memory for an application beyond the actual physical, installed memory by
moving old data to the hard drive. Every computer comes with a certain amount of physical memory,
usually referred to as main memory or RAM. A computer that has 1 megabyte (MB) of memory can hold
approximately 1 million bytes (or characters) of information.
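The "approximately" comes from the fact that memory sizes are powers of two. A quick check (the 64 MB figure is just an illustration):

```python
# Why "approximately" 1 million bytes: memory chips are sized in powers of two.
bytes_per_mb = 2 ** 20           # one binary megabyte
print(bytes_per_mb)              # 1048576 -- slightly more than 1 million
print(64 * bytes_per_mb)         # bytes in a hypothetical 64 MB of RAM
```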
There are several different types of memory, some of which are listed below:
RAM – RAM is the same as main memory. When used by itself, the term RAM refers to read and
write memory; that is, you can both write data into RAM and read data from RAM. This is in
contrast to read-only memory (ROM), which only permits you to read data. Most RAM is volatile,
which means it requires a steady flow of electricity to maintain its contents. As soon as the power
is turned off, whatever data was in RAM is lost.
ROM—Computers almost always contain a small amount of ROM that holds instructions for starting
up the computer. Unlike RAM, ROM cannot be written to after it is initially programmed.
Programmable read-only memory (PROM)—A PROM is a memory chip on which you can store a
program. After the PROM has been programmed, you cannot wipe it clean and reprogram it.
Erasable programmable read-only memory (EPROM)—An EPROM is a special type of PROM that can
be erased by exposing it to ultraviolet light. An EPROM can be reprogrammed and reused after it
has been erased.
The NIC fits into an expansion slot on the motherboard’s I/O bus. This bus connects adapter cards, such
as NICs, to the main CPU and RAM. The speed at which data may be transferred to and from the NIC is
determined by the I/O bus bandwidth, processor speed, NIC design and quality of components, operating
system, and the network topology used.
New cards are software configurable, using a software program to configure the resources used by the
card. Other cards are PNP (Plug and Play), which automatically configure their resources when installed in
the computer, simplifying installation. With an operating system like Windows 95, auto-detection of new
hardware makes network connections simple and quick.
On power-up, the computer detects the new network card, assigns the correct resources to it, and then
installs the networking software required for connection to the network. All the user needs to do is assign network details, such as the computer name.
For Ethernet, a 48-bit number identifies each card, and this number uniquely identifies the computer. These network card numbers are used at the Media Access Control (MAC) layer to identify the destination for the data.
When talking to another computer, the data you send to that computer is prefixed with the number of the
card you are sending the data to.
This allows intermediate devices in the network to decide in which direction the data should go, in order to
transport the data to its correct destination.
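These 48-bit card numbers are usually written as six colon-separated hexadecimal bytes. A small sketch converting between the two forms (the address shown is made up):

```python
def mac_to_int(mac: str) -> int:
    """Convert a colon-separated MAC address to its 48-bit integer value."""
    return int(mac.replace(":", ""), 16)

def int_to_mac(value: int) -> str:
    """Format a 48-bit integer as a colon-separated MAC address."""
    hex_str = format(value, "012x")          # 12 hex digits = 48 bits
    return ":".join(hex_str[i:i + 2] for i in range(0, 12, 2))

addr = mac_to_int("00:1a:2b:3c:4d:5e")       # illustrative address
print(addr)                  # 112394521950
print(int_to_mac(addr))      # 00:1a:2b:3c:4d:5e
```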
There are many ways to connect NICs to a network. For example, a NIC may attach to the network via a twisted pair cable through a wall outlet. On the other side of the wall, the twisted pair cable goes to a punchdown
block, a place where cables are often terminated to provide continuing connections to other devices. From
the punchdown block, the twisted pair cable is sometimes connected to a hub or Multi-station Access Unit
(MAU), that forms the central connecting point of the network. If the network contains a dedicated server
or other communicating device, it also contains a NIC.
Bus—A bus connects the central processor of a PC with the video controller, disk controller, hard
drives, and memory. There are many types of buses, including internal buses, external buses, and
LANs that operate on bus topologies. Internal buses are buses such as AT, ISA, EISA, and MCA that
are internal to a PC.
Byte—In most computer systems, a byte is a unit of information that is 8 bits long. A byte is the
unit most computers use to represent a character such as a letter, number, or typographic symbol
(e.g., “g,” “5,” or “?”). A byte can also hold a string of bits that need to be used in some larger unit
for application purposes (for example, the stream of bits that constitute a visual image within an
application program).
Central Processing Unit (CPU)—A CPU is the processor in a computer that processes the code
and associated data in a computer system.
Client/Server—Client server or client/server is a mode in computer networking where individual
computers can access data or services from a common high-performance computer. For instance,
when a PC needs data from a common database located on a computer attached to a LAN, the PC
is the client and the network computer where the database resides is the server.
Clustering—Clustering is a grouping of devices or other components, typically for the
enhancement of performance. Clustering computers to execute a single application speeds up the
operation of the application.
Device Driver—A device driver is a program that controls devices attached to a computer, such as
a printer or hard disk drive.
Digital Data—Digital data is electrical information that represents digits (i.e., 1s and 0s). 1s and
0s (bits) are combined to form bytes and characters, such as letters of the alphabet.
Dual Inline Memory Module (DIMM)—DIMM is a small PC circuit board that holds memory
chips. A DIMM has a 64-bit memory path to the CPU, which is compatible with the Intel Pentium
processor.
Electronic Mail (E-mail)—E-mail is a widely used application for transferring messages and files
from one computer system to another. If the two computers sending messages use different types
of e-mail packages, an e-mail gateway is required to convert from one format to another.
Extended Data Output (EDO)—EDO is a type of RAM memory chip with faster performance than
conventional memory. Unlike conventional RAM, EDO RAM retrieves a block of memory as it sends
the previous block to the CPU.
Extended Industry Standard Architecture (EISA)—EISA is a 32-bit bus technology for PCs
that supports multiprocessing. EISA was designed in response to IBM’s MCA; however, both EISA
and MCA were replaced by the PCI bus. See “PCI” and “bus.”
Hardware—Hardware is the physical part of a computer, which can include things such as hard
drives, circuit boards inside a computer, and other computer components.
Industry Standard Architecture (ISA)—ISA is an older, 8- or 16-bit PC bus technology used in
IBM XT and AT computers. See “bus.”
Input/Output (I/O)—An I/O channel is the path from the main processor or CPU of a computer
to its peripheral devices.
Internet Protocol (IP)—IP is the protocol responsible for getting packets of information across a
network.
Local Area Network (LAN)—A LAN is a grouping of computers via a network, typically confined
to a single building or floor of a building.
Mainframe—A mainframe is a large-scale computer system. Mainframe computers are powerful,
and attach to networks and high-speed peripheral devices, such as tape drives, disk drives, and
printers.
Megahertz (MHz)—One hertz is one cycle of a sine wave per second. One MHz is 1 million cycles per second.
Priyank Pashine – Networking Essentials Page 5
Micro Channel Architecture (MCA)—MCA is IBM’s 32-bit internal bus architecture for PCs. MCA
was never widely accepted by the PC industry, and was replaced by the PCI bus architecture.
Modulation—Modulation is the process of modifying the form of a carrier wave (electrical signal)
so that it can carry intelligent information on some sort of communications medium. Digital
computer signals (baseband) are converted to analog signals for transmission over analog facilities
(such as the local loop). The opposite process, converting analog signals back into their original
digital state, is referred to as demodulation.
Network Interface Card (NIC)—A NIC is any workstation or PC component (usually a hardware
card) that allows the workstation or PC to communicate with a network. A NIC address is another
term for hardware address or MAC address. The NIC address is built into the network interface card
of the destination node.
Peripheral Component Interconnect (PCI)—PCI is a newer 32-bit and 64-bit local bus
technology for PCs. See “bus.” (Servers use 64-bit PCI, and PCs use 32-bit.)
Peripherals—Peripherals are parts of a computer that are not on the primary board (mother
board) of a computer system. Peripherals include hard drives, floppy drives, and modems.
Personal Computer Memory Card International Association
(PCMCIA)—The PCMCIA slot in a laptop was designed for PC memory expansion. NICs and
modems can attach to a laptop through the PCMCIA slot.
Personal Digital Assistant (PDA)—PDA devices are very small, and provide a subset of the
operations of a typical computer (PC). They are used for scheduling, electronic notepads, and small
database applications.
Redirector—A redirector is a client software component in a client/server configuration. The
redirector is responsible for deciding if a request for a computer service (i.e., read a file) is for the
local computer or network server.
Random Access Memory (RAM)—RAM is a computer’s main working memory. Applications use
RAM to hold instructions and data during processing. Applications can repeatedly write new data to
the same RAM, but all data is erased from RAM when the computer loses power or is shut down.
RJ-45 Connector—An RJ-45 connector is a snap-in connector for UTP cable, similar to the
standard RJ-11 telephone cable connector.
Server—A server is a device attached to a network that provides one or more services to users of
the network.
Single Inline Memory Module (SIMM)—SIMM is a small PC circuit board that holds memory
chips. A SIMM has a 32-bit memory path to the CPU. SIMM capacities are measured in bytes.
Synchronous Dynamic Random Access Memory (SDRAM)—SDRAM is a type of RAM that is
often packaged on DIMMs. It is replacing EDO RAM because it is approximately twice as fast (up to
133 MHz).
Synchronous Graphic Random Access Memory (SGRAM)—SGRAM is a type of dynamic RAM
optimized for graphics-intensive operations. Like SDRAM, SGRAM can synchronize itself with the
CPU bus clock, up to speeds of 100 MHz.
Transmission Control Protocol (TCP)—TCP is normally used in conjunction with IP in a TCP/IP-
based network. The two protocols working together provide for connectivity between applications
of networked computers.
UNIX—UNIX is an operating system used in many workstations and mid-range computer systems.
It is an alternative to PC and Macintosh computer operating systems. Linux is a UNIX-like clone.
Workstations—Workstations are a type of computer, typically more powerful than a PC but still
used by a single user.
There are many different types of computers used in organizations, most of which are tied to a network.
Some of these computers are small and can only run a limited amount of applications.
Others are large and can run many applications and service many users at the same time. This lesson
looks at classifications of computers found in networks and the primary purpose of each type.
Computer classifications include:
Desktop computers
Mid-range computers and servers
Mainframe computers
Others
A desktop computer is a computer, possibly attached to a network, that is used by a single individual.
Desktop computers are sometimes divided into two broad categories, personal computers (PCs) and
workstations. The difference between PCs and workstations, although not always clear, is generally in the
operating system software used and the graphics capabilities. PCs typically run one of several types of
Microsoft Windows operating systems, or Macintosh Operating Systems in the case of Apple products,
while a workstation typically runs a version of the UNIX operating system. A workstation often features
high-end hardware such as large, fast disk drives, large amounts of Random Access Memory (RAM),
advanced graphics capabilities, and high-performance multiprocessors. There is a great deal of overlap in
features and functions of desktop computers, and the distinctions between PCs and workstations are
blurring.
The term “mid-range” covers a wide range of computer systems that support more than one user, and
may support many users. It covers an extensive range of computer systems, overlapping with desktop
computers at one end and mainframe computers at the other. Mid-range computers include:
High-end Reduced Instruction Set Computer (RISC) CPU-based
Servers (IBM AS/400)
Intel-based servers (Compaq, Dell, and Hewlett-Packard)
UNIX-based servers of all types
Mid-range and server systems are commonly used in small to medium organizations, such as
departmental information processing. Typical applications include:
Finance and accounting (AS/400)
Database (Intel-based or UNIX-based)
Printer servers (Intel or UNIX-based)
Communications servers (Intel-based)
Mainframe computers and associated client/server products can
manage huge organization-wide networks, store and ensure the integrity of massive amounts of crucial
data, and drive data across an organization. The unique and inherent capabilities of leading-edge
mainframe systems include:
Constant availability—mainframes are designed to be operated around the clock every day of the
year. This is sometimes referred to as a number of nines: a level of reliability desired and
measured in nines (e.g., 99.999% reliability).
Rigorous backup, recovery, and security—mainframes provide automatic and constant backup,
tracking, and safeguarding of data.
Huge economies of scale—the vast resources of mainframes reduce the hidden costs associated
with multiple LANs, such as administration and training, extra disk space, and printers.
High-bandwidth I/O facilities—the huge I/O bandwidth on mainframes allows rapid and effective
data transfer, so that thousands of clients can be serviced simultaneously, and caters to emerging
applications like multimedia, including digitized video on demand.
Laptops, palmtops or PDAs, and thin client terminals have become mainstream devices in the personal
computing world over the last five years. To satisfy a society that is increasingly mobile, laptops and
palmtops have become prevalent. Laptop performance is closing in on that of desktop computers as
miniaturization advances each year.
Palmtops are also catching up in performance. E-mail, calendaring, low-end spreadsheets, and
handwriting recognition compatible with Windows operating systems have become a reality with the
newest generation palmtops. In the last few years, thin client terminals have staged a comeback, in that
they are reminiscent of the terminal of mainframe days, but with graphics capabilities. Thin client
terminals allow for an even greater reduced cost at the desktop computer level by having a central server
handle processing otherwise handled by a PC. The terminal only displays the screen images, mouse
movements, and keystrokes of the application running on the server.
As a user, you normally interact with the operating system through a set of commands. The commands
are accepted and executed by a part of the operating system called the command processor or command
line interpreter.
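A command processor can be sketched as a dispatch table that maps command names to handler functions. The commands here (`echo`, `upper`) are invented for illustration, not taken from any real shell:

```python
# A toy command line interpreter: split the line, look up the command
# in a dispatch table, and run the matching handler.
def cmd_echo(args):
    return " ".join(args)

def cmd_upper(args):
    return " ".join(a.upper() for a in args)

COMMANDS = {"echo": cmd_echo, "upper": cmd_upper}

def interpret(line: str) -> str:
    parts = line.split()
    if not parts:
        return ""
    name, args = parts[0], parts[1:]
    handler = COMMANDS.get(name)
    return handler(args) if handler else f"unknown command: {name}"

print(interpret("echo hello world"))   # hello world
print(interpret("upper hello"))        # HELLO
```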
Graphical user interfaces (GUIs) allow you to enter commands by pointing and clicking at objects that
appear on the screen. Microsoft has generally dominated the PC operating system market since its
foundation, first with its command line-based disk operating system (DOS), and then with the Windows
user interface, a graphical overlay for DOS. The Macintosh operating system is a competitor, which runs
on Macintosh platforms and not PC-based platforms.
Popular OS
Microsoft Windows was the initial GUI that ran on top of DOS. It was first launched in response to the
need to make PCs easier to use. It is similar to the Apple Macintosh computer, known for its easy-to-use
operating system. Windows 3.1 was the most popular Windows software, and is still used in many
networks.
Windows for Workgroups 3.11 was Microsoft’s first peer-to-peer network operating system. It was a
combination of Windows 3.1 and the Microsoft Windows Network software (which provides the peer-to-
peer networking capability), with enhancements such as improved disk I/O performance, connectivity, and
remote access, and a range of features intended to appeal to the network manager.
Windows 95 is a true operating system, not just a GUI as found in standard Windows. Windows 95 is the
first operating system to support Plug and Play, which makes system setup, management, and upgrading
considerably easier. Other enhancements include improvements to the user interface, better support for
NetWare servers, long file names, and video playback; better fax and modem support; improved system
administration; and remote network access.
Windows 98 is another version of Microsoft Windows. It has many of the same features as Windows 95,
but includes a different user interface and Web-related features.
Windows NT Workstation requires only 16 MB of RAM, making it far more accessible. For users with a
strong requirement for security, C2 versions of Windows NT are also available. Windows NT release 4.0
replaced Windows NT 3.1, 3.5, and 3.51. Another product called Windows 2000 has also been released.
A device driver is special-purpose software used to control specific hardware devices in a computer
system. These specific pieces of hardware can be disk drives, floppy drives, or NICs. Device drivers for
NICs control the operation of the NIC and provide an interface for the computer’s operating system. The
operating system and associated applications on the computer can use the device driver to communicate
with the NIC and send and receive information on a network.
Application Software
Applications are computer programs that are used for productivity and automation of tasks. Networks are
used to move application information from source to destination.
Applications are software programs used to perform work. Applications can be divided into two basic
categories: single-user applications, and networked or multiuser applications. Some applications run
primarily as a single-user application, others run almost exclusively as a networked application. Some
applications can run in both modes. Commonly used applications are described below.
Application-to-Application Communication
Applications use the underlying operating system, such as Windows 98, to carry out the needed tasks of
the application. This includes accepting keystrokes from a keyboard and displaying the typed information
on a computer screen. If you are using a word processor and want to store a file on a hard drive, such as
a local hard drive, the application would rely on the operating system to store the information. The
operating system stores the information on the hard drive by communicating with the appropriate hard
drive device driver to physically place the word processor’s information on the drive. Perhaps you want to
store the information on a hard drive located on the other side of the building, in other words, over the
network. What must happen in this case? The following three items must be installed on the local machine
by the local operating system to provide for communication across a network:
NIC and NIC device driver
Client software
Communication software
The appropriate accessory card, such as a NIC, must first be installed in the computer, along with the
corresponding device driver. If you install a 3Com Ethernet NIC, Ethernet device driver software must also
be installed. Client software is also needed to provide an interface between the local operating system and
communication software. Some client software provides file and print sharing capabilities for computers,
while others provide the capability to connect to shared resources on other computers. You must also
have communication software loaded on the machine, such as a TCP/IP protocol stack.
Client software requests are placed inside protocol headers for delivery across a network. The following steps are typical:
1. The user of the application requests that a file be stored on a drive other than a local drive (a
network drive).
2. Computer software on the client machine (also known as a redirector) determines the file is not
destined for a local disk drive, but is destined for a remote disk drive.
3. The redirector takes the “store file” request from the application and requests the services of the
communication software.
4. The communication software adds the appropriate communication information on the “store file”
request.
5. The request is sent from the main CPU of the computer, across the local bus to the NIC.
6. The NIC transmits the information across the networking cables to the final destination, such as a
file server on the network.
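The steps above can be sketched in code. Everything here is illustrative: the drive letters, the "HDR" stand-in for real protocol headers, and the function names are assumptions, not a real operating-system redirector.

```python
LOCAL_DRIVES = {"C:", "D:"}   # hypothetical local drives

def redirect(path: str, data: bytes) -> str:
    """Step 2: decide whether a 'store file' request is local or remote."""
    drive = path.split("\\")[0]
    if drive in LOCAL_DRIVES:
        return "stored locally"
    # Steps 3-4: hand the request to the communication software,
    # which wraps it in protocol headers for delivery over the network.
    frame = b"HDR" + data            # stand-in for real protocol headers
    return send_via_nic(frame)

def send_via_nic(frame: bytes) -> str:
    """Steps 5-6: the NIC transmits the framed request onto the network."""
    return f"sent {len(frame)} bytes to file server"

print(redirect("C:\\report.doc", b"contents"))   # stored locally
print(redirect("N:\\report.doc", b"contents"))   # sent 11 bytes to file server
```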
From the user’s perspective, we have the ability to store files at multiple locations, including a hard drive
located in our computer, or other hard drives on the network. It is up to the local operating system to
make sure the file gets properly stored on the local computer hardware. When we want to store
information across a network, the request must be redirected out of the computer via a NIC to the
appropriate machine located on the network.
A circuit/channel is a transmission path for a single type of transmission service (voice or data) and is generally referred to as the smallest subdivision of the network. A carrier, on the other hand, is a transmission path in which one or more channels of information are processed, converted to a suitable format, and transported to the proper destination.
Digital Transmission
Digital signals are the single fastest growing family of signals. Discrete and well defined, they vary from dial tone pulses to complex computer and data signals.
Analog Signals
An analog signal, such as a sine wave, is a continuously varying signal.
Analog transmissions are continuous in both time and value. This makes them particularly susceptible to
errors. Digital transmissions are discrete in both time and value. The exact digital value can be received
at the destination with very few errors.
All types of media can be reduced to bits, which means a single digital network can carry integrated services (ISDN). Digital representation also allows compression and encryption that are not possible with analog signals.
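The analog/digital distinction can be illustrated by sampling a sine wave and quantizing it to a few discrete levels. The sample count and level count below are arbitrary choices for illustration:

```python
import math

def digitize(samples, levels=4):
    """Quantize continuous (analog) values in [-1, 1] to discrete levels."""
    step = 2.0 / (levels - 1)
    return [round((s + 1.0) / step) for s in samples]

# Sample one cycle of a sine wave (the analog signal) at 8 points,
# then quantize it to 4 discrete values: the digital representation.
analog = [math.sin(2 * math.pi * n / 8) for n in range(8)]
digital = digitize(analog)
print(digital)   # eight small integers between 0 and 3
```

Because the receiver only has to distinguish a few discrete levels, the exact digital values can be recovered at the destination with very few errors, which is the advantage described above.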
Local area networks can be defined as privately owned networks that generally span a single building or up to a few kilometers. LANs can be used effectively to connect personal computers and other resources, such as printers.
LANs often use a single cable to connect the resources. Typically, LANs operate at 10 to 100 Mbps, have low delays, and make few errors. Broadcast LANs can follow any of several topologies.
Campus Area Networks - A CAN covers a large campus that might include several city blocks. This level allows for more refinement in our definitions; it distinguishes between, say, the Microsoft HQ CAN and the MAN that covers the city the CAN is in.
Metropolitan Area Networks - A MAN is a group of multiple LANs and/or CANs that communicate over a city or several city blocks. MANs are limited to a single city.
A MAN uses technology similar to that of a LAN; the difference is that a MAN covers a larger area. MANs support both voice and data, a property that enables them to be used efficiently for cable television. Typically, a MAN has one or two cables and does not require a switching element.
Wide Area Networks - Groups of MANs that communicate over larger geographical distances, such as between cities or states. WANs span city to city and state to state but stay within a country.
Typically in a WAN, there are a number of transmission lines connected to a router. Not all routers are
connected to each other. Thus, unconnected routers interact with each other indirectly via other routers.
In such a situation, when a router receives a packet, it stores the packet until the required output line is
free and then forwards it. Subnets employing such a technique are called point-to-point or store-and-
forward subnets.
Global Area Networks - A GAN covers a country's geographical area or a larger area worldwide. GANs span countries and are the international level of networking. This level was added to differentiate a WAN from the larger global networks. The Internet is the most famous example of a GAN.
In networking, the term topology refers to the layout of connected devices on a network. This article
introduces the standard topologies of computer networking.
One can think of a topology as a network's "shape." This shape does not necessarily correspond to the
actual physical layout of the devices on the network. For example, the computers on a home LAN may be
arranged in a circle, but it would be highly unlikely to find an actual ring topology there.
Network topologies are categorized into the following basic types:
Bus
Ring
Star
Tree
Mesh
Star
Many home networks use the star topology. A star network features a central connection point called a "hub" that may be an actual hub or a switch. Devices typically connect to the hub with Unshielded Twisted Pair (UTP) Ethernet.
Tree
Tree topologies integrate multiple star topologies together onto a bus. In its simplest form, only hub
devices connect directly to the tree bus, and each hub functions as the "root" of a tree of devices. This
bus/star hybrid approach supports future expandability of the network much better than a bus (limited in
the number of devices due to the broadcast traffic it generates) or a star (limited by the number of hub
ports) alone.
Mesh
Mesh topologies involve the concept of routes. Unlike each of the previous topologies, messages sent on a
mesh network can take any of several possible paths from source to destination. (Recall that in a ring,
although two cable paths exist, messages can only travel in one direction.) Some WANs, like the Internet,
employ mesh routing.
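The idea that a mesh offers several possible routes between the same two stations can be illustrated with a small sketch. The four-node graph below is a made-up example, not any real network:

```python
def all_paths(graph, src, dst, path=None):
    """Enumerate every loop-free route from src to dst."""
    path = (path or []) + [src]
    if src == dst:
        return [path]
    routes = []
    for nxt in graph[src]:
        if nxt not in path:                  # avoid revisiting a node
            routes.extend(all_paths(graph, nxt, dst, path))
    return routes

# A small mesh: every node has at least two neighbors.
mesh = {"A": ["B", "C"], "B": ["A", "C", "D"],
        "C": ["A", "B", "D"], "D": ["B", "C"]}

routes = all_paths(mesh, "A", "D")
# Four distinct routes exist: A-B-D, A-B-C-D, A-C-D, A-C-B-D
assert len(routes) == 4
```

If one link fails, the message can still reach its destination over one of the remaining routes, which is exactly why mesh routing suits large WANs.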
Media
The actual transmission of information in a computer network takes place via a transmission medium.
Twisted Pair
The twisted pair transmission medium consists of two insulated copper wires, typically about 1mm thick.
These two wires are twisted together in a helical form. This twisting reduces the electrical interference
from similar cables close by. The telephone system is an excellent example of a twisted pair network.
Twisted pair can be used for both analog and digital transmission. The bandwidth that can be achieved
with twisted pair depends on the thickness and the distance traveled. Typically, a transmission rate of
several megabits per second can be achieved for a few kilometers.
Coaxial cable has better shielding than twisted pair, and thus has the advantage that it can span longer
distances at relatively higher speeds.
A coaxial cable consists of a stiff copper wire as the core, which is surrounded by an insulating material. A
cylindrical conductor in the form of a closely woven braided mesh surrounds the insulator. A plastic
coating then covers this entire setup. Two types of coaxial cable are widely used:
The baseband coaxial cable is a 50-ohm cable and is commonly used for digital transmission. Due
to the shielding structure, they give excellent noise immunity. The bandwidth depends on the
length of the cable. Typically, 1 to 2 Gbps is possible for a 1-km cable. Longer cables may also be
used. They, however, provide lower data rates unless used with amplifiers or repeaters.
The broadband coaxial cable is a 75-ohm cable mostly used for analog transmission. The standard
cable television network is an excellent example where broadband coaxial cables are used. The
broadband coaxial cables can give up to 450 MHz and can span nearly 100 km for analog
transmission. Broadband systems can be subdivided into a number of independent channels. Each
channel can transmit analog or digital data.
Optical fibers, as the name suggests, employ light to transmit information. Information can therefore be
transmitted at very high speed - the speed of light - and problems such as heat dissipation are eliminated.
Optical fibers are typically used to provide a bandwidth of 1 Gbps, although bandwidth in excess of
50,000 Gbps is possible. The practical limitation is the unavailability of technology that can convert
optical signals to electrical signals and vice versa at such high rates.
Wireless Transmission
The transmission media described above provide a physical connection between two computers. This is
often not feasible, especially when the geographical distance between the two computers is very large.
Communication in such setups is carried out using other media such as microwaves and radio waves.
Communication employing these media is called wireless communication.
Radio Transmission
The obvious advantage of radio waves is that they are easily generated, can travel long distances, can
penetrate buildings, and are omnidirectional. Their disadvantage is that their behavior is frequency
dependent. At low frequencies, the power of radio waves deteriorates as the distance from the source
increases. At high frequencies, radio waves tend to travel in straight lines and bounce off obstacles. They
are also absorbed by rain and are subject to interference from motors and other electrical equipment.
Microwave Transmission
Microwave transmission offers a high signal-to-noise ratio. However, it requires the transmitter and
the receiver to be aligned in a straight line without interference. In addition, because microwaves travel
in a straight line, repeaters must be provided for long distances, since the curvature of the earth becomes
an obstacle. Some waves may be refracted off low-lying atmospheric layers and thus take slightly longer
to arrive. They may also be out of phase with the direct wave, creating a situation called multipath fading
in which the delayed wave tends to cancel out the direct wave. Microwaves have the advantage that they
are relatively inexpensive and require less space to set up antennas. They can also be used for
long-distance transmission.
Cabling
Note that the TX (transmitter) pins are connected to the corresponding RX (receiver) pins, plus to plus and
minus to minus, and that you must use a crossover cable to connect units with identical interfaces. If
you use a straight-through cable, one of the two units must, in effect, perform the cross-over function.
Two wire color-code standards apply: EIA/TIA 568A and EIA/TIA 568B.
It makes no functional difference which standard you use for a straight-thru cable. You can start a
crossover cable with either standard as long as the other end is the other standard. It makes no
functional difference which end is which. Despite what you may have read elsewhere, a 568A patch cable
will work in a network with 568B wiring and 568B patch cable will work in a 568A network. The electrons
couldn't care less.
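The claim that terminating one end to 568A and the other to 568B yields a crossover cable can be checked programmatically. The wire-color orders below follow the two EIA/TIA standards; the helper function itself is just an illustration:

```python
# Pin 1..8 wire colors for the two EIA/TIA termination standards.
T568A = ["wh/grn", "grn", "wh/org", "blu", "wh/blu", "org", "wh/brn", "brn"]
T568B = ["wh/org", "org", "wh/grn", "blu", "wh/blu", "grn", "wh/brn", "brn"]

def crossed_pins(end1, end2):
    """Which pins carry a different wire on each end of the cable?"""
    return [pin for pin, (a, b) in enumerate(zip(end1, end2), 1) if a != b]

# One end 568A, the other 568B: exactly the TX/RX pins used by
# 10/100 Ethernet (1,2 <-> 3,6) end up crossed.
assert crossed_pins(T568A, T568B) == [1, 2, 3, 6]

# Same standard on both ends: a straight-through cable, nothing crossed.
assert crossed_pins(T568B, T568B) == []
```

This also makes concrete why it makes no functional difference which standard a straight-through cable uses: as long as both ends match, no pin is crossed.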
Twisted Pair (Shielded Twisted Pair and Unshielded Twisted Pair) is becoming the cable of choice for new
installations. Twisted pair cable is readily accepted as the preferred solution to cabling. It provides support
for a range of speeds and configurations, and is widely supported by different vendors. Shielded twisted
pair uses a braided shield surrounding all the other wires, which helps to reduce unwanted
interference.
Category 5 cable uses 8 wires and is terminated in modular jack connectors in the wiring closet.
Distance limitations exist when cabling. For Category 5 cabling at 100Mbps, the effective limits are 3
meters from workstation to wall outlet, and 90 meters from wall outlet to wiring closet.
All workstations are wired back to a central wiring closet, where they are then patched accordingly. Within
an organization, the IT department either performs this work or sub-contracts it to a third party.
In 10BaseT, each PC is wired back to a central hub using its own cable. There are limits imposed on the
length of drop cable from the PC network card to the wall outlet, the length of the horizontal wiring, and
from the wall outlet to the wiring closet.
Patch Cables
Patch cables come in two varieties, straight through or reversed. One application of patch cables is for
patching between modular patch panels in system centers. These are the straight through variety.
Another application is to connect workstation equipment to the wall jack, and these could be either
straight through or reversed depending upon the manufacturer. Reversed cables are normally used for
voice systems.
If the colors are in the same order on both plugs, the cable is straight through. If the colors appear in the
reverse order, the cable is reversed.
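The color-order test described above can be expressed as a small helper. The function name and the sample colors are illustrative only:

```python
def cable_type(plug1, plug2):
    """Classify a patch cable by comparing the wire colors on its two
    plugs, read left to right with both plugs held the same way."""
    if plug1 == plug2:
        return "straight-through"   # same color order on both plugs
    if plug1 == plug2[::-1]:
        return "reversed"           # colors appear in the opposite order
    return "other"                  # e.g. a crossover termination

colors = ["blu", "wh/blu", "grn", "wh/grn"]
assert cable_type(colors, colors) == "straight-through"
assert cable_type(colors, colors[::-1]) == "reversed"
```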
Thin coaxial cable [RG-58AU, rated at 50 ohms] is the type used in Ethernet LANs.
The connectors used in thin-net Ethernet LANs are T-connectors (used to join cables together and attach
to workstations) and terminators (one at each end of the cable).
The straight through and cross-over patch cables discussed in this article are terminated with CAT 5 RJ-45
modular plugs. RJ-45 plugs are similar to those you'll see on the end of your telephone cable, except that
they have eight contacts rather than four or six, and they are about twice as big. Make sure they are
rated for CAT 5 wiring. (RJ means "Registered Jack".) Also, some RJ-45 plugs are designed for both solid
core wire and stranded wire, while others are designed specifically for one kind of wire or the other.
Protocols
Computer networks use protocols to communicate. These protocols define the procedures to use for the
systems involved in the communication process. A data communication protocol is a set of rules that must
be followed for the two electronic devices to communicate.
Many protocols are used to provide and support data communications. Together they form a
communications architecture, sometimes referred to as a "protocol stack", such as the TCP/IP family of
protocols.
Protocols:
Define the procedures to be used by systems involved in the communication process
In data communications, are a set of rules that must be followed for devices to communicate
Are implemented in software/firmware
Each protocol provides for a function that is needed to make the data communication possible. Many
protocols are used so that the problem can be broken into manageable pieces. Each software module that
implements a protocol can be developed and updated independently of other modules, as long as the
interface between modules remains constant.
Recall that a protocol is a set of rules governing the exchange of data between two entities. These rules
cover:
Syntax – Data format and coding
Semantics – Control information and error handling
Timing – Speed matching and sequencing
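The syntax aspect (data format and coding) can be made concrete with a small sketch: a hypothetical 6-byte header packed in a fixed big-endian layout that both sides agree on. The field layout here is invented for illustration; it is not any real protocol's header:

```python
import struct

# Hypothetical header: version (1 byte), message type (1 byte),
# sequence number (2 bytes), payload length (2 bytes), big-endian.
HEADER = struct.Struct("!BBHH")

def encode(version, msg_type, seq, payload: bytes) -> bytes:
    """Agreed-upon 'syntax': a fixed header followed by the payload."""
    return HEADER.pack(version, msg_type, seq, len(payload)) + payload

def decode(frame: bytes):
    """The receiver applies the same format rules in reverse."""
    version, msg_type, seq, length = HEADER.unpack_from(frame)
    return version, msg_type, seq, frame[HEADER.size:HEADER.size + length]

frame = encode(1, 7, 42, b"hello")
assert decode(frame) == (1, 7, 42, b"hello")
```

Semantics (what message type 7 means, how errors are signalled) and timing (when a sequence number may be reused) would be specified alongside this format, but are not captured by the byte layout alone.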
Layered Approach
A networking model represents a common structure or protocol to accomplish communication between
systems. These models consist of layers. You can think of a layer as a step that must be completed to go
on to the next step and, ultimately, to communicate between systems.
Implementing a functional internetwork is no simple task. Many challenges must be faced, especially in
the areas of connectivity, reliability, network management, and flexibility. Each area is key in establishing
an efficient and effective internetwork.
The challenge when connecting various systems is to support communication among disparate
technologies. Different sites, for example, may use different types of media operating at varying speeds,
or may even include different types of systems that need to communicate.
Because companies rely heavily on data communication, internetworks must provide a certain level of
reliability. This is an unpredictable world, so many large internetworks include redundancy to allow for
communication even when problems occur.
The communication between the nodes in a packet data network must be precisely defined to ensure
correct interpretation of the packets by the receiving intermediate and the end systems. The packets
exchanged between nodes are defined by a protocol - or communications language.
There are many functions which may need to be performed by a protocol. These range from the
specification of connectors, addresses of the communications nodes, and identification of interfaces to
options, flow control, reliability, error reporting, synchronization, and so on. In practice there are so many
different functions that a set (also known as a suite or stack) of protocols is usually defined. Each protocol
in the suite handles one specific aspect of the communication.
The protocols are usually structured together to form a layered design (also known as a "protocol stack").
All major telecommunication network architectures currently used or being developed use layered protocol
architectures. The precise functions in each layer vary. In each case, however, there is a distinction
between the functions of the lower (network) layers, which are primarily designed to provide a connection
or path between users to hide details of the underlying communications facilities, and the upper (or
higher) layers, which ensure that the data exchanged are in a correct and understandable form. The upper
layers are sometimes known as "middleware" because they provide software in the computer that
converts data between what the application programs expect and what the network can transport. The
transport layer provides the connection between the upper (applications-oriented) layers and the lower
(or network-oriented) layers.
The basic idea of a layered architecture is to divide the design into small pieces. Each layer adds to the
services provided by the lower layers in such a manner that the highest layer is provided a full set of
services to manage communications and run distributed applications. A basic principle is to ensure
independence of layers by defining services provided by each layer to the next higher layer without
defining how the services are to be performed. This permits changes in a layer without affecting other
layers. Prior to the use of layered protocol architectures, simple changes such as adding one terminal type
to the list of those supported by an architecture often required changes to essentially all communications
software at a site.
Protocol Stacks
The protocol stacks were once defined using proprietary documentation - each manufacturer wrote a
comprehensive document describing the protocol. This approach was appropriate when the cost of
computers was very high and communications software was "cheap" in comparison. Once computers
Priyank Pashine – Networking Essentials Page 22
became readily available at economic prices, users saw the need to interconnect the computers from
different manufacturers using computer networks. It was costly to connect computers with different
proprietary protocols, since for each pair of protocols a separate "gateway" product had to be developed.
This process was made more complicated in some cases, since variants of the protocol existed and not all
variants were defined by published documents.
Network Communication
Network Architectures
Peer to Peer
Client Server
Console Terminal
Peer To Peer
A peer-to-peer type of network is one in which each workstation has equivalent capabilities and
responsibilities. This differs from client/server architectures, in which some computers are dedicated to
serving the others. Peer-to-peer networks are generally simpler, but they usually do not offer the same
performance under heavy loads.
This definition captures the traditional meaning of peer-to-peer networking. Computers in a workgroup, or
home computers, are configured for the sharing of resources such as files and printers. Although one
computer may act as the file server or FAX server at any given time, all computers on the network
generally could host those services on short notice. In particular, the computers will typically be situated
near each other physically and will run the same networking protocols.
In this updated view of peer-to-peer computing, devices can now join the network from anywhere with
little effort; instead of dedicated LANs, the Internet itself becomes the network of choice. Easier
configuration and control over the application allows non networking-savvy people to join the user
community. In effect, P2P signifies a shift in emphasis in peer networking from the hardware to the
applications.
Client Server
Ages ago (in Internet time), when mainframe dinosaurs roamed the Earth, a new approach to computer
networking called "client/server" emerged. Client/server proved to be a more cost-effective way to build
many types of networks, particularly PC-based LANs running end-user database applications. Many types
of client/server systems remain popular today.
What Is Client/Server?
The most basic definition of client/server is an architecture in which one program, the client, requests a
service or data from another program, the server.
In general, client/server maintains a distinction between processes and network devices. Usually a client
computer and a server computer are two separate devices, each customized for their designed purpose.
For example, a Web server will often contain large amounts of memory and disk space, whereas a Web
browser can run on an ordinary desktop PC.
Client/server networking, however, focuses primarily on the applications rather than the hardware. The
same device may function as both client and server; for example, Web server hardware functions as both
client and server when local browser sessions are run there. Likewise, a device that is a server at one
moment can reverse roles and become a client to a different server (either for the same application or for
a different application).
Client/Server Applications
Some of the most popular applications on the Internet follow the client/server design:
Email clients
FTP (File transfer) clients
Web browsers
Each of these programs presents a user interface (either graphic- or text-based) in a client process that
allows the user to connect to servers. In the case of email and FTP, the user enters a computer name (or
sometimes an IP address) into the interface to set up future connections to the server process.
When using a Web browser, the name or address of the server appears in the URL of each request.
Although a person may start a Web surfing session by entering a particular server name (such as
www.about.com), the name regularly changes as they click links on the pages. In the Web model, the
server information is encoded into the anchor tags by the HTML content developer.
Client/Server at Home
Many home networkers use client/server systems without even realizing it. Microsoft's Internet Connection
Sharing (ICS), for example, relies on DHCP server and client functionality built into the operating system.
Cable modem and DSL routers also include a DHCP server with the hardware unit. Many home LAN
gaming applications also use a single-server/multiple-client configuration.
Console Terminal
The console-terminal architecture is one in which the terminal has to access the console for every piece
of information. The terminals are also sometimes referred to as 'thin clients'.
Modern computer networks are designed in a highly structured way. To reduce their design complexity,
most networks are organized as a series of layers, each one built upon its predecessor.
The OSI Reference Model is based on a proposal developed by the International Organization for
Standardization (ISO). The model is called ISO OSI (Open Systems Interconnection) Reference Model
because it deals with connecting open systems - that is, systems that are open for communication with
other systems.
The OSI model has seven layers. The principles that were applied to arrive at the seven layers are as
follows:
A layer should be created where a different level of abstraction is needed.
Each layer should perform a well defined function.
The function of each layer should be chosen with an eye toward defining internationally
standardized protocols.
The layer boundaries should be chosen to minimize the information flow across the interfaces.
The number of layers should be large enough that distinct functions need not be thrown together in
the same layer out of necessity, and small enough that the architecture does not become unwieldy.
The application layer contains a variety of protocols that are commonly needed. For example, there are
hundreds of incompatible terminal types in the world. Consider the plight of a full screen editor that is
supposed to work over a network with many different terminal types, each with different screen layouts,
escape sequences for inserting and deleting text, moving the cursor, etc.
One way to solve this problem is to define an abstract network virtual terminal for which editors and other
programs can be written to deal with. To handle each terminal type, a piece of software must be written
to map the functions of the network virtual terminal onto the real terminal. For example, when the editor
moves the virtual terminal's cursor to the upper left-hand corner of the screen, this software must issue
the proper command sequence to the real terminal to get its cursor there too. All the virtual terminal
software is in the application layer.
Another application layer function is file transfer. Different file systems have different file naming
conventions, different ways of representing text lines, and so on. Transferring a file between two different
systems requires handling these and other incompatibilities. This work, too, belongs to the application
layer, as do electronic mail, remote job entry, directory lookup, and various other general-purpose and
special-purpose facilities.
A typical example of a presentation service is encoding data in a standard, agreed upon way. Most user
programs do not exchange random binary bit strings. They exchange things such as people's names,
dates, amounts of money, and invoices. These items are represented as character strings, integers,
floating point numbers, and data structures composed of several simpler items. Different computers have
different codes for representing character strings, integers and so on. In order to make it possible for
computers with different representation to communicate, the data structures to be exchanged can be
defined in an abstract way, along with a standard encoding to be used "on the wire". The job of managing
these abstract data structures and converting from the representation used inside the computer to the
network standard representation is handled by the presentation layer.
The presentation layer is also concerned with other aspects of information representation. For example,
data compression can be used here to reduce the number of bits that have to be transmitted and
cryptography is frequently required for privacy and authentication.
One of the services of the session layer is to manage dialogue control. Sessions can allow traffic to go in
both directions at the same time, or in only one direction at a time. If traffic can only go one way at a
time, the session layer can help keep track of whose turn it is.
A related session service is token management. For some protocols, it is essential that both sides do not
attempt the same operation at the same time. To manage these activities, the session layer provides
tokens that can be exchanged. Only the side holding the token may perform the critical operation.
Another session service is synchronization. Consider the problems that might occur when trying to do a
two-hour file transfer between two machines on a network with a 1 hour mean time between crashes.
After each transfer was aborted, the whole transfer would have to start over again, and would probably
fail again with the next network crash. To eliminate this problem, the session layer provides a way to
insert checkpoints into the data stream, so that after a crash, only the data after the last checkpoint has
to be repeated.
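The checkpointing idea can be sketched as follows. The transfer loop and the deliberately unreliable sender below are illustrative, not a real session-layer API:

```python
def transfer(blocks, send, checkpoint_every=3):
    """Send blocks, agreeing on a checkpoint after every few blocks.
    After a crash, resume from the last checkpoint, not from block 0."""
    last_checkpoint = 0
    while last_checkpoint < len(blocks):
        try:
            for i in range(last_checkpoint, len(blocks)):
                send(blocks[i])
                if (i + 1) % checkpoint_every == 0:
                    last_checkpoint = i + 1   # both sides agree: done up to here
        except ConnectionError:
            continue                          # crash: retry from the checkpoint
        last_checkpoint = len(blocks)
    return last_checkpoint

sent, failures = [], [True]                   # simulate one crash, on block 4
def flaky_send(block):
    if failures and block == 4:
        failures.pop()
        raise ConnectionError
    sent.append(block)

transfer(list(range(6)), flaky_send)
# Blocks 0-2 sent, checkpoint at 3; crash on 4; resume from 3, not from 0.
assert sent == [0, 1, 2, 3, 3, 4, 5]
```

Only block 3 is retransmitted after the crash; without the checkpoint, blocks 0 through 3 would all have been resent.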
Under normal conditions, the transport layer creates a distinct network connection for each transport
connection required by the session layer. If the transport connection requires a high throughput, however,
the transport layer might create multiple network connections, dividing the data among the network
connections to improve throughput. On the other hand, if creating or maintaining a network connection is
expensive, the transport layer might multiplex several transport connections onto the same network
connection to reduce the cost. In all cases, the transport layer is required to make the multiplexing
transparent to the session layer.
The transport layer also determines what type of service to provide to the session layer, and ultimately,
the users of the network. The most popular type of transport connection is an error-free point-to-point
channel that delivers messages in the order in which they were sent. However, other possible kinds of
transport service exist, such as the transport of isolated messages with no guarantee about the order of
delivery, and the broadcasting of messages to multiple destinations. The type of service is determined
when the connection is established.
The transport layer is a true source-to-destination or end-to-end layer. In other words, a program on the
source machine carries on a conversation with a similar program on the destination machine, using the
message headers and control messages.
Many hosts are multi-programmed, which implies that multiple connections will be entering and leaving
each host. There needs to be some way to tell which message belongs to which connection. The transport
header is one place this information could be put.
In addition to multiplexing several message streams onto one channel, the transport layer must take care
of establishing and deleting connections across the network. This requires some kind of naming
mechanism, so that a process on one machine has a way of describing with whom it wishes to converse.
There must also be a mechanism to regulate the flow of information, so that a fast host cannot overrun a
slow one. Flow control between hosts is distinct from flow control between switches, although similar
principles apply to both.
If too many packets are present in the subnet at the same time, they will get in each other's way, forming
bottlenecks. The control of such congestion also belongs to the network layer.
Since the operators of the subnet may well expect remuneration for their efforts, there is often some
accounting function built into the network layer. At the very least, the software must count how many
packets or characters or bits are sent by each customer, to produce billing information. When a packet
crosses a national border, with different rates on each side, the accounting can become complicated.
When a packet has to travel from one network to another to get to its destination, many problems can
arise. The addressing used by the second network may be different from the first one. The second one
may not accept the packet at all because it is too large. The protocols may differ, and so on. It is up to the
network layer to overcome all these problems to allow heterogeneous networks to be interconnected.
In broadcast networks, the routing problem is simple, so the network layer is often thin or even
nonexistent.
The data link layer should provide error control between adjacent nodes.
Another issue that arises in the data link layer (and most of the higher layers as well) is how to keep a
fast transmitter from drowning a slow receiver in data. Some traffic regulation mechanism must be
employed in order to let the transmitter know how much buffer space the receiver has at the moment.
Frequently, flow regulation and error handling are integrated, for convenience.
If the line can be used to transmit data in both directions, this introduces a new complication that the data
link layer software must deal with. The problem is that the acknowledgment frames for A to B traffic
compete for the use of the line with data frames for the B to A traffic.
Each layer acts as though it is communicating with its corresponding layer on the other end.
In reality, data is passed from one layer down to the next lower layer at the sending computer, till the
Physical Layer finally transmits the data onto the network cable. As the data is passed down to a lower
layer, it is encapsulated into a larger unit (in effect, each layer adds its own layer information to that
which it receives from a higher layer). At the receiving end, the message is passed upwards to the desired
layer, and as it passes upwards through each layer, the encapsulation information is stripped off.
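The encapsulation described above can be sketched with a toy three-layer stack. The header strings are purely illustrative; real headers are binary structures:

```python
LAYERS = ["TCP", "IP", "Ethernet"]       # a simplified three-layer stack

def encapsulate(payload):
    frame = payload
    for layer in LAYERS:                 # each layer prepends its own header
        frame = f"{layer}-hdr|{frame}"
    return frame

def decapsulate(frame):
    for layer in reversed(LAYERS):       # receiver strips headers in reverse
        header, frame = frame.split("|", 1)
        assert header == f"{layer}-hdr"  # each layer reads only its own header
    return frame

wire = encapsulate("GET /index.html")
assert wire == "Ethernet-hdr|IP-hdr|TCP-hdr|GET /index.html"
assert decapsulate(wire) == "GET /index.html"
```

Each layer sees only its own header plus an opaque payload, which is what lets the layers evolve independently.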
The Institute of Electrical and Electronics Engineers (IEEE) has subdivided the data link layer into two
sublayers: Logical Link Control (LLC) and Media Access Control (MAC).
The Logical Link Control (LLC) sublayer of the data link layer manages communications between devices
over a single link of a network. LLC is defined in the IEEE 802.2 specification and supports both
connectionless and connection-oriented services used by higher-layer protocols. IEEE 802.2 defines a
number of fields in data link layer frames that enable multiple higher-layer protocols to share a single
physical data link. The Media Access Control (MAC) sublayer of the data link layer manages protocol
access to the physical network medium. The IEEE MAC specification defines MAC addresses, which enable
multiple devices to uniquely identify one another at the data link layer.
MAC addresses are 48 bits in length and are expressed as 12 hexadecimal digits. The first 6 hexadecimal
digits, which are administered by the IEEE, identify the manufacturer or vendor and thus comprise the
Organizationally Unique Identifier (OUI). The last 6 hexadecimal digits comprise the interface serial
number, or another value administered by the specific vendor. MAC addresses sometimes are called
burned-in addresses (BIAs) because they are burned into read-only memory (ROM) and are copied into
random-access memory (RAM) when the interface card initializes.
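Splitting a MAC address into its IEEE-assigned OUI and its vendor-assigned interface part is straightforward. A small sketch (the function name and sample address are illustrative):

```python
def split_mac(mac: str):
    """Split a MAC address into its OUI (vendor) half and the
    vendor-assigned interface half."""
    digits = mac.replace(":", "").replace("-", "").upper()
    assert len(digits) == 12, "a MAC address is 12 hex digits (48 bits)"
    return digits[:6], digits[6:]

oui, serial = split_mac("00:1b:63:84:45:e6")
assert oui == "001B63"      # first 6 hex digits: IEEE-administered OUI
assert serial == "8445E6"   # last 6: administered by the vendor
```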
Ethernet 802.3: Carrier Sense Multiple Access with Collision Detection (CSMA/CD) - This protocol is
commonly used in bus (Ethernet) implementations. Multiple access refers to the fact that in bus systems,
each station has access to the common cable.
Carrier sense refers to the fact that each station listens to see if no other station is transmitting before
sending data.
Collision detection refers to the principle of listening to see if other stations are transmitting whilst we are
transmitting.
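The three CSMA/CD ideas, plus the binary exponential backoff commonly used with 802.3 after a collision, can be sketched as a single decision step for one station. The function and return strings are illustrative:

```python
import random

def backoff_slots(attempt):
    """Binary exponential backoff: after the n-th collision, wait a random
    number of slot times in 0 .. 2**n - 1 (exponent capped at 10)."""
    return random.randrange(2 ** min(attempt, 10))

def csma_cd_step(carrier, collision, attempt):
    """One decision step for a single station on the shared cable."""
    if carrier:        # carrier sense: another station is transmitting
        return "defer"
    if collision:      # collision detected while we were transmitting
        return f"jam, back off {backoff_slots(attempt)} slots"
    return "transmit"

assert csma_cd_step(carrier=True, collision=False, attempt=0) == "defer"
assert csma_cd_step(carrier=False, collision=False, attempt=0) == "transmit"
assert csma_cd_step(carrier=False, collision=True, attempt=1).startswith("jam")
```

The randomized backoff is what breaks the tie when two stations collide: each waits a different random number of slots before retrying.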
Fast Ethernet
The growing importance of LANs and the increasing demand for data in a variety of forms (text, digital
images, and voice) are fueling the need for high-bandwidth networks. To continue to support business
operations, information systems (IS) managers must consider extending or replacing traditional network
technologies with new, high-performance solutions. Managers with networks already running on
10 megabit per second (Mbps) Ethernet have an advantage. They can easily upgrade to Fast Ethernet for
100 Mbps performance with minimal cost and service disruption.
As a simple evolution of the familiar IEEE 802.3 Ethernet standard, 100Base-T Fast Ethernet preserves the
core structure of Ethernet and includes support for up to 100Mbps and the most widely used cabling
schemes. It can be easily integrated with existing 10 Mbps Ethernet networks to provide ten times more
network throughput to the desktop at a minimal incremental cost. Its interoperability with other low- and
high-bandwidth networking technologies offers network administrators more flexibility in structuring high-
speed LANs and WAN.
Fast Ethernet is a natural extension of the existing 10 Mbps Ethernet technologies. It is backward
compatible with existing networking protocols, media, and standards. It is cost-effective, easy to deploy,
and offers ten times the speed of 10Base-T Ethernet at a low cost of ownership. 100Base-T's backward
compatibility allows information resource departments to protect their investment in Ethernet expertise
while delivering the performance required by power users.
100Base-T Fast Ethernet consists of five component specifications: the Media Access Control (MAC) layer,
the Media Independent Interface (MII), and three physical layers supporting the most widely used cabling
types. Each of these components was designed to preserve compatibility with 10 Mbps Ethernet and
existing installations.
The Fast Ethernet architecture allows scalable performance up to 100 Mbps, meaning that higher
throughput of the workstation at the sending or receiving end directly translates into more performance
through the network. With a Fast Ethernet connection, users can finally tap into the network I/O
performance of their high-end workstations and servers.
The growing importance of LANs and the increasing complexity of desktop computing applications are
fueling the need for high performance networks. A number of high-speed LAN technologies are proposed
to provide greater bandwidth and improve client/server response times. Among them, Fast Ethernet, or
100BASE-T, provides a non-disruptive, smooth evolution from the current 10BASE-T technology.
Ethernet's dominant market position virtually guarantees cost-effective, high-performance Fast Ethernet
solutions in the years to come.
100Mbps Fast Ethernet is a standard specified by the IEEE 802.3 LAN committee. It is an extension of the
10Mbps Ethernet standard with the ability to transmit and receive data at 100Mbps, while maintaining the
Carrier Sense Multiple Access with Collision Detection (CSMA/CD) Ethernet protocol.
Gigabit Ethernet
Gigabit Ethernet is based on the same Ethernet standard that IT managers already know and use. One of
the original network architectures, defined during the 1970s, Ethernet is now widespread throughout the
world. The first implementation of the Ethernet specification was jointly developed by Digital, Intel and Xerox.
Gigabit Ethernet builds on these proven qualities, but is 100 times faster than regular Ethernet and 10
times faster than Fast Ethernet. The principal benefits of Gigabit Ethernet include:
Increased bandwidth for higher performance and elimination of bottlenecks
Broad deployment capabilities without re-wiring, using 1000BASE-T Gigabit over Category 5 copper
cabling
Aggregate bandwidth to 16Gbps through IEEE 802.3ad and Intel® Link Aggregation using Intel
server adapters and switches
Full-duplex capacity, allowing data to be transmitted and received at the same time so that the
effective bandwidth is virtually doubled
Quality of Service (QoS) features which can be used to help eliminate jittery video or distorted
audio
Low cost of acquisition and ownership
Standards Evolution
Gigabit Ethernet is a function of technological evolution in response to industry demand. It is an extension
of the 10Mbps Ethernet networking standard, 10Base-T, and the 100Mbps Fast Ethernet standards,
100Base-TX and 100Base-FX (Table 1). Two benefits leading to Ethernet's longevity and success are its
low cost and ease of implementation. Besides offering high-speed connectivity at an economical price and
support for a variety of transmission media, the Ethernet standard also offers a broad base of support for
a huge and ever-growing variety of LAN applications. It is also easily scalable from 10Mbps to systems
with higher-speed 100Mbps and 1000Mbps throughput.
In June of 1998, the IEEE approved the Gigabit Ethernet standard over fiber (LX and SX) and short-haul
copper (CX) as IEEE 802.3z. The fiber implementation was widely supported. With approval of 802.3z,
companies could rely on a well-known, standards-based approach to improve traffic flow in congested
areas without having to upgrade to an unproven or non-standardized technology.
Gigabit Ethernet was originally designed as a switched technology, using fiber for uplinks and for
connections between buildings. Since then, Gigabit Ethernet has also been used extensively in servers
with Gigabit Ethernet network adapters and along backbones to remove traffic bottlenecks in these areas
of aggregation.
In June of 1999, the IEEE further standardized IEEE 802.3ab Gigabit Ethernet over copper (1000BASE-T),
allowing 1Gb speeds to be transmitted over Category 5 cable. Since Category 5 makes up a large portion
of the installed cabling base, migrating to Gigabit Ethernet has never been easier. Organizations can now
deploy Gigabit Ethernet over much of their existing copper infrastructure.
This is especially important in areas where existing network wiring is difficult to access, such as the utility
risers typically located between floors in large office buildings. Without the new standard, future
deployment of Gigabit Ethernet might have required costly replacement of cabling in these risers.
However, even with the new standard, existing cabling must meet certain characteristics.
Gigabit Ethernet is fully compatible with the large installed base of Ethernet and Fast Ethernet nodes. It
employs all of the same specifications defined by the original Ethernet standard, including:
CSMA/CD protocol
Ethernet frame or "packet" format
Full duplex
Flow control
Management objects as defined by the IEEE 802.3 standard
Because it's part of the Ethernet suite of standards, Gigabit Ethernet also supports traffic management
techniques that deliver Quality of Service over Ethernet, such as:
IEEE 802.1p Layer 2 prioritization
ToS coding bits for Layer 3 prioritization
Differentiated Services
Resource Reservation Protocol (RSVP)
Gigabit Ethernet can also take advantage of 802.1Q VLAN support, Layer 4 filtering, and Layer 3 switching
at Gigabit speeds. In addition, bandwidth up to 16Gbps can be achieved by trunking either several Gigabit
switch ports or Gigabit server adapters together using IEEE 802.3ad or Intel Link Aggregation.
All of these popular Ethernet technologies, which are deployed in a variety of network infrastructure
devices, are applicable to Gigabit Ethernet.
Network Segments
A network segment:
Is a length of cable
Can have devices attached to it
Has a unique address
Has limits on its length and on the number of devices that can be attached to it
Large networks are built by combining individual network segments with appropriate devices such as
routers and/or bridges.
In the above diagram, a bridge is used to allow traffic from one network segment to the other. Each
network segment is considered unique and has its own limits of distance and the number of connections
possible.
When network segments are combined into a single large network, paths exist between the individual
network segments. These paths are called routes, and devices like routers and bridges keep tables, which
define how to get to a particular computer on the network. When a packet arrives, the router/bridge will
look at the destination address of the packet, and determine which network segment the packet is to be
transmitted on in order to get to its destination.
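The table lookup described above can be sketched in Python (the document itself contains no code, so this is purely an illustrative sketch; the addresses and segment names are hypothetical):

```python
# Hypothetical forwarding table: maps a destination address to the
# segment on which that destination resides.
forwarding_table = {
    "00:1A:2B:3C:4D:5E": "segment-A",
    "00:1A:2B:3C:4D:5F": "segment-B",
}

def forward(dest_address, arrival_segment):
    """Decide which segment a packet should be transmitted on."""
    out = forwarding_table.get(dest_address)
    if out is None:
        return None              # unknown destination
    if out == arrival_segment:
        return None              # already on the right segment; no forwarding needed
    return out

print(forward("00:1A:2B:3C:4D:5F", "segment-A"))  # → segment-B
```

A real router or bridge builds and ages this table dynamically rather than using a fixed dictionary, but the per-packet decision is essentially this lookup.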
Dial-up networking refers to the technology that enables you to connect your computer to a network via a
modem. If your computer is not connected to a LAN and you want to connect to the Internet, you need to
configure Dial-Up Networking (DUN) to dial a Point of Presence (POP) and log into your Internet Service
Provider (ISP). Your ISP will need to provide certain information, such as the gateway address and your
computer's IP address.
Repeaters also allow isolation of segments in the event of failures or fault conditions. Disconnecting one
side of a repeater effectively isolates the associated segments from the network.
Using repeaters simply allows you to extend your network distance limitations. It does not give you any
more bandwidth or allow you to transmit data faster.
A repeater works at the Physical Layer by simply repeating all data from one segment to another.
Repeater features
Increase traffic on segments
Have distance limitations
Limitations on the number that can be used
Propagate errors in the network
Cannot be administered or controlled via remote access
Cannot loop back to itself (must be unique single paths)
No traffic isolation or filtering
There are many types of hubs. Passive hubs are simple splitters or combiners that group workstations into
a single segment, whereas active hubs include a repeater function and are thus capable of supporting
many more connections.
Nowadays, with the advent of 10BaseT, hub concentrators have become very popular. These are very
sophisticated and offer significant features, which make them radically different from the older hubs
available during the 1980s.
These 10BaseT hubs provide each client with exclusive access to the full bandwidth, unlike bus networks
where the bandwidth is shared. Each workstation plugs into a separate port, which runs at 10 Mbps and is
for the exclusive use of that workstation, so there is no contention to worry about as there is on shared
Ethernet.
These 10BaseT hubs also include buffering of packets and filtering, so that unwanted packets (or packets
which contain errors) are discarded. SNMP management is also a common feature.
Ports can also be buffered, to allow packets to be held in case the hub or port is busy. And, because each
workstation has its own port, it does not contend with other workstations for access, having the entire
bandwidth available for its exclusive use.
The ports on a hub all appear as one Ethernet segment. In addition, hubs can be stacked or cascaded
(using master/slave configurations) together, to add more ports per segment. As hubs do not count as
repeaters, this is a better option for adding more workstations than the use of a repeater.
Hub options also include an SNMP (Simple Network Management Protocol) agent. This allows the use of
network management software to remotely administer and configure the hub. Detailed statistics related to
port usage and bandwidth are often available, allowing informed decisions to be made concerning the state
of the network.
Bridges are ideally used in environments where there are a number of well-defined workgroups, each
operating more or less independently of the others, with occasional access to servers outside of their
localized workgroup or network segment. Bridges do not offer performance improvements when used in
diverse or scattered workgroups, where the majority of access occurs outside of the local segment.
The diagram below shows two separate network segments connected via a bridge. Note that each
segment must have a unique network address number in order for the bridge to be able to forward
packets from one segment to the other.
Ideally, if workstations on network segment A needed access to a server, the best place to locate that
server is on the same segment as the workstations, as this minimizes traffic on the other segment, and
avoids the delay incurred by the bridge.
A bridge works at the MAC Layer by looking at the destination address and forwarding the frame to the
appropriate segment upon which the destination computer resides.
Bridge features
Operate at the MAC layer (layer 2 of the OSI model)
Can reduce traffic on other segments
Broadcasts are forwarded to every segment
Most allow remote access and configuration
Often SNMP (Simple Network Management Protocol) enabled
Loops can be used (redundant paths) if using spanning tree algorithm
Small delays introduced
Ethernet switches increase network performance by decreasing the amount of extraneous traffic on
individual network segments attached to the switch. They also filter packets, a bit as a router does. In
addition, Ethernet switches work and function like bridges at the MAC layer, but instead of reading the
entire incoming Ethernet frame before forwarding it to the destination segment, they usually read only the
destination address in the frame before re-transmitting it to the correct segment. In this way, switches
forward frames faster than bridges, offering fewer delays through the network and hence better performance.
When a packet arrives, the header is checked to determine which segment the packet is destined for, and
then it is forwarded to that segment. If the packet is destined for the same segment that it arrives on, the
packet is dropped and not retransmitted. This prevents the packet from being "broadcast" onto unnecessary
segments, reducing the traffic.
Nodes that communicate with each other frequently should be placed on the same segment. Switches work
at the MAC layer.
Switches divide the network into smaller collision domains [a collision domain is a group of workstations
that contend for the same bandwidth]. Each segment attached to the switch has its own collision domain
(where the bandwidth is competed for by workstations in that segment). As packets arrive at the switch, it looks
at the MAC address in the header, and decides which segment to forward the packet to. Higher protocols
like IPX and TCP/IP are buried deep inside the packet, so are invisible to the switch. Once the destination
segment has been determined, the packet is forwarded without delay.
Each segment attached to the switch is considered to be a separate collision domain. However, the
segments are still part of the same broadcast domain [a broadcast domain is a group of workstations
which share the same network subnet, in TCP/IP this is defined by the subnet mask]. Broadcast packets,
which originate on any segment, will be forwarded to all other segments (unlike a router). On some
switches, it is possible to disable this broadcast traffic.
Some vendors implement a broadcast throttle feature, whereby a limit is placed on the number of
broadcasts forwarded by the switch over a certain time period. Once a threshold level has been reached,
no additional broadcasts are forwarded until the time period has expired and a new time period begins.
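A minimal sketch of such a throttle, with hypothetical limit and window values (real implementations run in switch firmware, not Python):

```python
import time

class BroadcastThrottle:
    """Forward at most `limit` broadcasts per time window; drop the rest."""

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.count = 0
        self.window_start = time.monotonic()

    def allow(self):
        now = time.monotonic()
        if now - self.window_start >= self.window:
            # The time period has expired: a new period begins.
            self.window_start = now
            self.count = 0
        if self.count < self.limit:
            self.count += 1
            return True
        return False   # threshold reached; drop until the window expires

throttle = BroadcastThrottle(limit=3, window_seconds=1.0)
results = [throttle.allow() for _ in range(5)]
print(results)  # → [True, True, True, False, False]
```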
Routers were devised in order to separate networks logically. For instance, a TCP/IP router can segment
the network based on groups of TCP/IP addresses. Filtering at this level (on TCP/IP addresses, also known
as layer 3 switching) takes longer than the filtering done by a bridge or switch, which only looks at the MAC layer.
Most routers can also perform bridging functions. Because they can filter packets at a protocol level, a
major feature of routers is their ability to act as a firewall. This is essentially a barrier that prevents
unwanted packets from either entering or leaving designated areas of the network.
Typically, an organization that connects to the Internet will install a router as the main gateway link
between its network and the outside world. Configuring the router with access lists (which define what
protocols and what hosts have access) enforces security by restricting (or allowing) access to either
internal or external hosts.
For example, an internal WWW server can be allowed IP access from external networks, while other
company servers that contain sensitive data are protected, so that hosts outside the company are denied
access (you could even deny internal workstations access if required).
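An access-list check in this spirit might look like the following sketch (the host names and rule format are hypothetical, not a real router configuration syntax):

```python
# Hypothetical access list: (source, destination, action), first match wins.
ACCESS_LIST = [
    ("external", "www-server",     "permit"),
    ("external", "finance-server", "deny"),
    ("internal", "any",            "permit"),
]

def filter_packet(source, destination):
    """Return the action for a packet, walking the list top to bottom."""
    for src, dst, action in ACCESS_LIST:
        if src == source and dst in (destination, "any"):
            return action
    return "deny"   # implicit deny at the end, as on most real routers

print(filter_packet("external", "www-server"))      # → permit
print(filter_packet("external", "finance-server"))  # → deny
```

Note the ordering matters: the first matching rule decides, which is why real access lists put specific rules before general ones.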
A router works at the Network Layer or higher, by looking at information embedded within the data field,
like a TCP/IP address, then forwards the frame to the appropriate segment upon which the destination
computer resides.
Router features
Use dynamic routing
Operate at the protocol level
Remote administration and configuration via SNMP
Support complex networks
The more filtering done, the lower the performance
Provides security
Segment networks logically
Broadcast storms can be isolated
Often provide bridge functions also
More complex routing protocols used [such as RIP, IGRP and OSPF]
A gateway is a machine on a network that serves as an entrance to another network. For example, when
a user connects to the Internet, that person essentially connects to a server that issues the Web pages to
the user. These two devices are host nodes, not gateways. In enterprises, the gateway is the computer
that routes traffic between the internal network and outside networks, and the gateway node often acts
as a proxy server and a firewall. The gateway is also associated with both a router, which uses headers
and forwarding tables to determine where packets are sent, and a switch, which provides the actual path
for the packet in and out of the gateway.
Switched Ethernets
Local Area Network (LAN) technology has made a significant impact on almost every industry. The
operations of these industries depend on computers and networking. Data is stored on computers rather
than on paper, and the dependence on networking is so high that banks, airlines, insurance companies
and many government organizations would stop functioning if there were a network failure. Since the
reliance on networks is so high and network traffic is increasing, we have to address some of the
bandwidth problems this has caused and find ways to tackle them.
A LAN switch is a device that provides much higher port density at a lower cost than traditional bridges.
For this reason, LAN switches can accommodate network designs featuring fewer users per segment,
thereby increasing the average available bandwidth per user. This chapter provides a summary of general
LAN switch operation and maps LAN switching to the OSI reference model.
The trend toward fewer users per segment is known as microsegmentation. Microsegmentation allows
the creation of private or dedicated segments, that is, one user per segment. Each user receives instant
access to the full bandwidth and does not have to contend for available bandwidth with other users. As a
result, collisions (a normal phenomenon in shared-medium networks employing hubs) do not occur. A LAN
switch forwards frames based on either the frame's Layer 2 address (Layer 2 LAN switch), or in some
cases, the frame's Layer 3 address (multi-layer LAN switch). A LAN switch is also called a frame switch
because it forwards Layer 2 frames, whereas an ATM switch forwards cells. Although Ethernet LAN
switches are most common, Token Ring and FDDI LAN switches are becoming more prevalent as network
utilization increases.
The earliest LAN switches were developed in 1990. They were Layer 2 devices dedicated to solving
bandwidth issues. Recent LAN switches are evolving to multi-layer devices capable of handling protocol
issues involved in high-bandwidth applications that historically have been solved by routers. Today, LAN
switches are being used to replace hubs in the wiring closet because user applications are demanding
greater bandwidth.
A LAN switch is a device that typically consists of many ports that connect LAN segments (Ethernet and
Token Ring) and a high-speed port (such as 100-Mbps Ethernet, Fiber Distributed Data Interface [FDDI],
or 155-Mbps ATM). The high-speed port, in turn, connects the LAN switch to other devices in the network.
A LAN switch has dedicated bandwidth per port, and each port represents a different segment.
When a LAN switch first starts up and as the devices that are connected to it request services from other
devices, the switch builds a table that associates the MAC address of each local device with the port
number through which that device is reachable. That way, when Host A on Port 1 needs to transmit to
Host B on Port 2, the LAN switch forwards frames from Port 1 to Port 2, thus sparing other hosts on Port 3
from responding to frames destined for Host B. If Host C needs to send data to Host D at the same time
that Host A sends data to Host B, it can do so because the LAN switch can forward frames from Port 3 to
Port 4 at the same time it forwards frames from Port 1 to Port 2.
Whenever a device connected to the LAN switch sends a packet to an address that is not in the LAN
switch's table (for example, to a device that is beyond the LAN switch), or whenever the device sends a
broadcast or multicast packet, the LAN switch sends the packet out all ports (except for the port from
which the packet originated)---a technique known as flooding.
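The learn-then-flood behaviour described above can be sketched as follows (the port numbers and MAC addresses are hypothetical):

```python
class LanSwitch:
    """Toy learning switch: associates MAC addresses with ports."""

    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}   # MAC address -> port it was last seen on

    def receive(self, frame_src, frame_dst, in_port):
        """Return the list of ports the frame is sent out of."""
        # Learn: the source address is reachable via the arrival port.
        self.mac_table[frame_src] = in_port
        # Known unicast destination: forward out of exactly one port.
        if frame_dst in self.mac_table:
            return [self.mac_table[frame_dst]]
        # Unknown destination (or broadcast/multicast): flood out of
        # every port except the one the frame arrived on.
        return [p for p in self.ports if p != in_port]

sw = LanSwitch(ports=[1, 2, 3, 4])
print(sw.receive("A", "B", 1))  # B unknown → flood: [2, 3, 4]
print(sw.receive("B", "A", 2))  # A was learned on port 1 → [1]
```

After the first two frames the switch has learned both hosts, so subsequent traffic between A and B no longer disturbs ports 3 and 4, which is exactly the traffic-sparing effect described above.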
Because they work like traditional "transparent" bridges, LAN switches dissolve previously well-defined
workgroup or department boundaries. A network built and designed only with LAN switches appears as a
flat network topology consisting of a single broadcast domain. Consequently, these networks are liable to
suffer the problems inherent in flat (or bridged) networks---that is, they do not scale well. Note, however,
that LAN switches that support VLANs are more scalable than traditional bridges.
Local Area Networks in many organizations have to deal with increased bandwidth demands. More and
more users are being added to existing LANs. If this were the only problem, upgrading the backbone
Priyank Pashine – Networking Essentials Page 43
that connects various LANs could solve it. Bridges and routers can be used to keep the number of users
per LAN at an optimal number. However, with the increase in workstation speed, the bandwidth
requirement of each machine has grown more than five times in the last few years. Coupled with
bandwidth-hungry multimedia applications and unmanaged, bursty traffic, this problem is further
aggravated.
With the increasing use of client-server architecture in which most of the software is stored in the server,
the traffic from workstations to server has increased. Further, the use of a large number of GUI
applications means more pictures and graphics files need to be transferred to the workstations. This is
another cause of increased traffic per workstation. LAN switching is a fast-growing market, with virtually
every network vendor marketing its own products. Besides LAN switches, switching routers and switching
hubs are also sold.
The reason it works is simple. Ethernet, token ring and FDDI all use shared media. Conventional Ethernet
is bridged or routed. A 100 Mbps Ethernet will have to divide its bandwidth over a number of users
because of shared access. With a switched network, however, each port can be connected directly, so
bandwidth is shared only among the users in a workgroup (connected to the ports). Since there is less
media sharing, more bandwidth is available. Switches can also maintain multiple connections at
one point.
Switches normally have higher port counts than bridges and divide the network into several dedicated
channels parallel to each other. These multiple independent data paths increase the throughput capacity
of a switch. There is no contention to gain access and LAN switch architecture is scalable. Another
advantage of switches is that most of them are self-configuring, minimizing network downtime, although
ways for manual configuration are also available. If a segment is attached to a port of a switch then
CSMA/CD is used for media access in that segment. However, if the port has only one station attached
then there is no need for any media access protocol. The basic operation of a switch is like that of a
multiport bridge: the source and destination Medium Access Control (MAC) addresses of an incoming
frame are looked up, and if the frame is to be forwarded, it is sent to the destination port. Although this
is mostly what all switches do, there are a variety of features that distinguish them, such as the following.
Full-duplex Ethernet allows simultaneous flow of traffic from one station to another without collision.
So, Ethernet in full-duplex mode does not require collision detection when only one station is attached
to each port. There is no contention between stations to transmit over the medium, and a station can
transmit whenever a frame is queued in the adapter. The station can also receive at the same time.
This has the potential to double the performance of the server. The effective bandwidth is (number of
switched ports × bit rate of the medium) / 2 for half duplex, and (number of switched ports × bit rate of
the medium) for full duplex. One catch is that, while a client can send as well as receive frames at the
same time, at peak loads the server might be overburdened. This may lead to frame loss and eventual
loss of connection to the server. To avoid such a situation, flow control at the client level may be used.
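The effective-bandwidth rule of thumb above can be written out as a small helper; the 8-port, 100 Mbps switch used here is a hypothetical example:

```python
def effective_bandwidth(ports, bit_rate_mbps, full_duplex):
    """Aggregate switch bandwidth per the rule of thumb in the text:
    half duplex: ports * bit rate / 2; full duplex: ports * bit rate."""
    if full_duplex:
        return ports * bit_rate_mbps
    return ports * bit_rate_mbps / 2

# Hypothetical example: an 8-port 100 Mbps switch.
print(effective_bandwidth(8, 100, full_duplex=False))  # → 400.0
print(effective_bandwidth(8, 100, full_duplex=True))   # → 800
```

The factor of two is exactly the doubling that full duplex promises: every port can send and receive at media speed at the same time.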
Another big advantage of full duplex is that, since there cannot be a collision in full duplex, there is no
MAC-layer limitation on distance (e.g. 2500 m for half-duplex Ethernet). One can have a 100 km Ethernet
link using single-mode fiber; the limitation is now at the physical layer. Thus, media-speed rates can be
sustained depending on the station and the switch to which it is attached. The user is unaware of
full-duplex operation, and no new software applications are needed for this enhancement.
Flow control is necessary when the destination port is receiving more traffic than it can handle. Since the
buffers are only meant for absorbing peak traffic, frames may be dropped under excessive load. Dropping
is a costly operation, as the delay is of the order of seconds for each dropped frame. Traditional networks
do not have a layer 2 flow control mechanism, and rely mainly on higher layers for this. Switches come
with various flow control strategies depending on the vendor. Some switches, upon finding that the
destination port is overloaded, will send a jam message to the sender. Since the decoding of the MAC
address is fast and a switch can respond with a jam message in very little time, collision or packet loss
can be avoided. To the sender, a jam packet is like a virtual collision, so it will wait a random time before
retransmitting. This strategy works because only those frames that go to the overloaded destination port are jammed and not the
others.
Switching Methods
Cut-through switching
Marked by low latency, these switches begin transmission of the frame to the destination port even before
the whole frame is received. Thus frame latency is about 1/20th of that in store-and-forward switches
(explained later). Cut-through switches with runt (collision fragments) detection will store the frame in the
buffer and begin transmission as soon as the possibility of runt is eliminated and it can grab the outgoing
channel. Filtering of runts is important, as they seriously waste the bandwidth of the network. The delay
in these switches is about 60 microseconds. Compare this with store-and-forward switches, where every
frame is buffered (delay: 0.8 microseconds per byte); the delay for a 1500-byte frame is thus 1200
microseconds. No Cyclic Redundancy Check (CRC) verification is done in cut-through switches.
Store-and-forward switching
These switches receive the whole frame before forwarding it. While the frame is being received,
processing is done. Upon complete arrival of the frame, the CRC is verified and the frame is forwarded
directly to the output port. Even though store-and-forward switches have some disadvantages, in certain
cases they are essential, for example when a slow port is transmitting to a fast port. The frame
must be buffered and transmitted only when it is completely received. Another advantage would be in
high traffic conditions, when the frames have to be buffered since the output port may be busy. As traffic
increases the chances of a certain output port being busy obviously increase, so even cut-through
switches may need to buffer the frames. Thus, in some cases store-and-forward switching has its obvious
advantage.
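Using the latency figures quoted above (roughly 60 microseconds for cut-through, and 0.8 microseconds per buffered byte for store-and-forward), the trade-off can be checked with a few lines of arithmetic:

```python
# Assumed figures taken from the text, not from any particular product.
CUT_THROUGH_US = 60      # cut-through latency, roughly frame-size independent
PER_BYTE_US = 0.8        # store-and-forward buffering cost per byte

def store_and_forward_latency(frame_bytes):
    """Latency to fully buffer a frame before forwarding it."""
    return frame_bytes * PER_BYTE_US

latency_1500 = store_and_forward_latency(1500)
print(latency_1500)                    # → 1200.0 (microseconds)
print(latency_1500 / CUT_THROUGH_US)   # → 20.0 (the ~1/20th ratio in the text)
```

This reproduces the text's numbers: a 1500-byte frame takes 1200 microseconds to buffer, about 20 times the cut-through latency.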
In the network diagram above, a redundant link is planned between Switch A and B, but this creates the
possibility of having a bridging loop. This is because, for example, a broadcast or multicast packet
transmitted from Station M and destined for Station N would simply keep circulating again and again
between both switches.
However, with STP running on both switches, the network logically looks as follows:
With STP, the key is for all the switches in the network to elect a root bridge that becomes the focal point
in the network. All other decisions in the network, such as which port is blocked and which port is put in
forwarding mode, are made from the perspective of this root bridge. A switched environment, which is
different from that of a bridge, most likely deals with multiple VLANs. When implemented in a switching
network, the root bridge is usually referred to as the root switch. Each VLAN (because it is a separate
broadcast domain) must have its own root bridge. The root for the different VLANs can all reside in a
single switch, or it can reside in varying switches.
Note: The selection of the root switch for a particular VLAN is very important. You can choose it, or you
can let the switches decide on their own. The second option is risky because there may be sub-optimal
paths in your network if the root selection process is not controlled by you.
All the switches exchange information to use in the selection of the root switch, as well as for subsequent
configuration of the network. This information is carried in Bridge Protocol Data Units (BPDUs). The BPDU
contains parameters that the switches use in the selection process. Each switch compares the parameters
in the BPDU that they are sending to their neighbor with the one that they are receiving from their
neighbor.
The thing to remember in the STP root selection process is that smaller is better. If the Root ID that
Switch A is advertising is smaller than the Root ID that its neighbor (Switch B) is advertising, Switch A's
information is better. Switch B stops advertising its Root ID, and instead accepts that of Switch A.
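The "smaller is better" comparison can be illustrated with a sketch. Real STP compares a Bridge ID made of a configurable priority followed by the switch's MAC address; the switch names, priorities and addresses below are hypothetical:

```python
# Bridge ID = (priority, MAC address); the numerically lowest ID wins.
bridge_ids = {
    "SwitchA": (32768, "00:10:00:00:00:01"),
    "SwitchB": (32768, "00:10:00:00:00:02"),
    "SwitchC": (4096,  "00:10:00:00:00:03"),   # administratively lowered priority
}

# Python tuples compare element by element: priority first, then MAC,
# which matches the STP ordering exactly.
root = min(bridge_ids, key=bridge_ids.get)
print(root)  # → SwitchC
```

SwitchC wins despite having the highest MAC address, because the priority field is compared first. This is why setting a low bridge priority, as the text recommends, deterministically controls root selection.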
Before configuring STP, you need to select a switch to be the root of the spanning-tree. It does not
necessarily have to be the most powerful switch; it should be the most centralized switch on the network.
All dataflow across the network will be from the perspective of this switch. It is also important that this
switch be the least disturbed switch in the network. The backbone switches are often selected for this
function, because they typically do not have end stations connected to them. They are also less likely to
be disturbed during moves and changes within the network.
After you decide which switch should be the root switch, set the appropriate variables to designate it as
the root switch. The only variable you have to set is the bridge priority. If this switch has a bridge priority
that is lower than all other switches, it will be automatically selected by the other switches as the root
switch.
Clients (end stations) on switch ports: You can also issue the set spantree portfast command. This is done
on a per-port basis. The portfast variable, when enabled on a port, causes the port to immediately switch
from blocking mode to forwarding mode. This helps prevent time-outs on clients that use Novell Netware
or that use Dynamic Host Configuration Protocol (DHCP) to obtain an IP address. However, it is important
that you do not use this command on a switch-to-switch connection, as it could potentially result in a
loop. The 30-60 second delay that occurs when transitioning from blocking to forwarding mode prevents
a temporary loop condition in the network when connecting two switches.
Note: Remember, there will be one root switch identified per VLAN. After that root switch has been
identified, the switches follow the rules defined below.
STP Rule One: All ports of the root switch must be in forwarding mode (except for some corner
cases where self-looped ports are involved). Next, each switch determines the best path to get to
the root. They determine this path by comparing the information in all the BPDUs received on all
their ports. The port with the smallest information contained in its BPDU is used to get to the root
switch; that port is called the root port. After a switch figures out its root port, it proceeds to Rule
Two.
STP Rule Two: Once a switch determines its root port, that port must be set to forwarding mode.
In addition, for each LAN segment, the switches communicate with each other to determine which
switch on that LAN segment is best to use for moving data from that segment to the root bridge.
This switch is called the designated switch.
STP Rule Three: In a given LAN segment, the designated switch's port that connects to that LAN
segment must be placed in forwarding mode.
STP Rule Four: All other ports in all the switches (VLAN-specific) must be placed in blocking mode.
This applies only to ports that are connected to other bridges or switches. Ports connected to
workstations or PCs are not affected by STP; they remain in forwarding mode.
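Rule One's root-port choice can be sketched as picking the port whose received BPDU advertises the lowest cost to the root (real STP also breaks ties on sender bridge ID and port ID; the port numbers and costs below are hypothetical):

```python
# BPDUs received on each port of a non-root switch; the cost is the
# advertised path cost to the root bridge.
bpdus_by_port = {
    1: {"root_path_cost": 19},        # e.g. one 100 Mbps hop to the root
    2: {"root_path_cost": 38},
    3: {"root_path_cost": 19 + 19},   # two hops via another switch
}

# Smallest information wins: the lowest-cost port becomes the root port.
root_port = min(bpdus_by_port, key=lambda p: bpdus_by_port[p]["root_path_cost"])
print(root_port)  # → 1
```

Port 1 becomes the root port and is set to forwarding (Rule Two); redundant paths through the other ports end up blocked by Rule Four, which is what breaks the bridging loop.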
TCP/IP
An architectural model provides a common frame of reference for discussing Internet communications. It
is used not only to explain communication protocols but to develop them as well. It separates the
functions performed by communication protocols into manageable layers stacked on top of each other.
Each layer in the stack performs a specific function in the process of communicating over a network.
Layered Approach
Generally, TCP/IP is described using three to five functional layers. To describe TCP/IP-based firewalls
more precisely, we have chosen the common DoD reference model, which is also known as the Internet
reference model.
This model is based on the three layers defined for the DoD Protocol Model in the DDN Protocol Handbook,
Volume 1. These three layers are as follows:
• Network Access Layer
• Host-To-Host Transport Layer
• Application Layer
An additional layer, the internetwork layer, has been added to this model; the resulting four-layer model
is the one commonly used to describe TCP/IP.
Another standard architectural model that is often used to describe a network protocol stack is the OSI
reference model. This model consists of a seven-layer protocol stack.
A dependency, however, exists between the layers. Because every layer is involved in sending data from a
local application to an equivalent remote application, the layers must agree on how to pass data between
them. At the remote end, the data is passed up the stack to the receiving application. The individual layers
do not need to know how the layers above or below them function; they only need to know how to pass
data to them.
Each layer in the stack adds control information (such as destination address, routing controls, and
checksum) to ensure proper delivery. This control information is called a header and/or a trailer because it
is placed in front of or behind the data to be transmitted. Each layer treats all of the information that it
receives from the layer above it as data, and it places its own header and/or trailer around that
information.
These wrapped messages are then passed into the layer below along with additional control information,
some of which may be forwarded or derived from the higher layer. By the time a message exits the
system on a physical link (such as a wire), the original message is enveloped in multiple, nested wrappers
—one for each layer of protocol through which the data passed. When a protocol uses headers or trailers
to package the data from another protocol, the process is called encapsulation.
When data is received, the opposite happens. Each layer strips off its header and/or trailer before passing
the data up to the layer above. As information flows back up the stack, information received from a lower
layer is interpreted as both a header/trailer and data. The process of removing headers and trailers from
data is called decapsulation. This mechanism enables each layer in the transmitting computer to
communicate with its corresponding (peer) layer in the receiving computer, a process called peer-to-peer
communication.
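The wrapping and unwrapping described above can be sketched with strings standing in for headers and trailers (layer names are illustrative):

```python
# Sketch of encapsulation/decapsulation: each layer wraps the data from the
# layer above in its own header and trailer; the receiver strips them in
# reverse order. Layer names are placeholders, not real protocol headers.

def encapsulate(data, layers):
    for layer in layers:  # top of stack first, so the last layer is outermost
        data = f"[{layer}-hdr]{data}[{layer}-trl]"
    return data

def decapsulate(frame, layers):
    for layer in reversed(layers):  # strip the outermost wrapper first
        frame = frame.removeprefix(f"[{layer}-hdr]").removesuffix(f"[{layer}-trl]")
    return frame

stack = ["transport", "internet", "network-access"]
wire = encapsulate("hello", stack)
print(wire)
print(decapsulate(wire, stack))  # hello
```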
Each layer has specific responsibilities and specific rules for carrying out those responsibilities, and it
knows nothing about the procedures that the other layers follow. A layer carries out its tasks and delivers
the message to the next layer in the protocol stack. An address mechanism is the common element that
allows data to be routed through the various layers until it reaches its destination.
Each layer also has its own independent data structures. Conceptually, a layer is unaware of the data
structures used by the layers above and below it. In reality, the data structures of a layer are designed to
be compatible with the structures used by the surrounding layers for the sake of more efficient data
transmission. Still, each layer has its own data structures and its own terminology to describe those
structures.
In the following sections, we describe the function of each layer in more detail, starting with the network
access layer and working our way up to the application layer.
Network Access Layer
Unlike higher-level protocols, the network access layer protocols must understand the details of the
underlying physical network, such as the packet structure, maximum frame size, and the physical address
scheme that is used. Understanding the details and constraints of the physical network ensures that these
protocols can format the data correctly so that it can be transmitted across the network.
Internetwork Layer
In the Internet reference model, the layer above the network access layer is called the internetwork layer.
This layer is responsible for routing messages through internetworks. Two types of devices are responsible
for routing messages between networks. The first device is called a gateway, which is a computer that has
two network adapter cards. This computer accepts network packets from one network on one network
card and routes those packets to a different network via the second network adapter card. The second
device is a router, which is a dedicated hardware device that passes packets from one network to a
different network.
The internetwork layer protocols provide a datagram network service. Datagrams are packets of
information that comprise a header, data, and a trailer. The header contains information, such as the
destination address, that the network needs to route the datagram. A header can also contain other
information, such as the source address and security labels. Trailers typically contain a checksum value,
which is used to ensure that the data is not modified in transit.
The communicating entities—which can be computers, operating systems, programs, processes, or people
—that use the datagram services must specify the destination address (using control information) and the
data for each message to be transmitted. The internetwork layer protocols package the message in a
datagram and send it off.
A datagram service does not support any concept of a session or connection. Once a message is sent or
received, the service retains no memory of the entity with which it was communicating. If such a memory
is needed, the protocols in the host-to-host transport layer maintain it. The abilities to retransmit data and
check it for errors are minimal or nonexistent in the datagram services. If the receiving datagram service
detects a transmission error (using the checksum value of the datagram), it simply ignores (or drops) the
datagram without notifying the receiving higher-layer entity.
Host-to-Host Transport Layer
In addition to the usual transmit and receive functions, the host-to-host transport layer uses open and
close commands to initiate and terminate the connection. This layer accepts information to be transmitted
as a stream of characters, and it returns information to the recipient as a stream.
The service employs the concept of a connection (or virtual circuit). A connection is the state of the host-
to-host transport layer between the time that an open command is accepted by the receiving computer
and the time that the close command is issued by either computer.
Application Layer
The top layer in the Internet reference model is the application layer. This layer provides functions for
users or their programs, and it is highly specific to the application being performed. It provides the
services that user applications use to communicate over the network, and it is the layer in which user-
access network processes reside. These processes include all of those that users interact with directly, as
well as other processes of which the users are not aware.
This layer includes all applications protocols that use the host-to-host transport protocols to deliver data.
Other functions that process user data, such as data encryption and decryption and compression and
decompression, can also reside at the application layer.
The application layer also manages the sessions (connections) between cooperating applications. In the
TCP/IP protocol hierarchy, sessions are not identifiable as a separate layer, and these functions are
performed by the host-to-host transport layer. Instead of using the term "session," TCP/IP uses the terms
"socket" and "port" to describe the path (or virtual circuit) over which cooperating applications
communicate.
Most of the application protocols in this layer provide user services, and new user services are added
often. For cooperating applications to be able to exchange data, they must agree about how data is
represented. The application layer is responsible for standardizing the presentation of data.
The name TCP/IP refers to a suite of data communication protocols. The name is misleading because TCP
and IP are only two of dozens of protocols that compose the suite. Its name comes from two of the more
important protocols in the suite: the Transmission Control Protocol (TCP) and the Internet Protocol (IP).
TCP/IP originated in the networking research that the Department of Defense (DoD) initiated in the late
1960s. In 1968, the DoD Advanced Research Projects Agency (ARPA) began researching the network
technology that is now called packet switching.
The original focus of this research was to facilitate communication among the DoD community. However,
the network that was initially constructed as a result of this research, then called ARPANET, gradually
became known as the Internet. The TCP/IP protocols played an important role in the development of the
Internet. In the early 1980s, the TCP/IP protocols were developed. In 1983, they became standard
protocols for ARPANET.
Because of the history of the TCP/IP protocol suite, it is often referred to as the DoD protocol suite or the
Internet protocol suite.
Internet Protocol
IP is a connectionless protocol, which means that IP does not exchange control information (called a
handshake) to establish an end-to-end connection before transmitting data. In contrast, a connection-
oriented protocol exchanges control information with the remote computer to verify that it is ready to
receive data before sending it. When the handshaking is successful, the computers are said to have
established a connection. IP relies on protocols in other layers to establish the connection if connection-
oriented services are required.
Priyank Pashine – Networking Essentials
IP also relies on protocols in another layer to provide error detection and error recovery. Because it
contains no error detection or recovery code, IP is sometimes called an unreliable protocol.
Each type of network has a maximum transmission unit (MTU), which is the largest packet it can transfer.
If the datagram received from one network is longer than the other network's MTU, it is necessary to
divide the datagram into smaller fragments for transmission. This division process is called fragmentation.
The Internet Protocol (IP) is a network-layer (Layer 3) protocol that contains addressing information and
some control information that enables packets to be routed. IP is documented in RFC 791 and is the
primary network-layer protocol in the Internet protocol suite. Along with the Transmission Control Protocol
(TCP), IP represents the heart of the Internet protocols. IP has two primary responsibilities: providing
connectionless, best-effort delivery of datagrams through an internetwork; and providing fragmentation
and reassembly of datagrams to support data links with different maximum-transmission unit (MTU) sizes.
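The fragmentation step can be sketched as dividing a payload into MTU-sized pieces. Real IP also aligns fragments on 8-byte boundaries and records offsets and flags in the header, which this sketch omits:

```python
# Hedged sketch of IP-style fragmentation: split a datagram payload into
# fragments no larger than the outgoing link's MTU. Values are illustrative
# (1480 bytes is a typical payload limit on a 1500-byte Ethernet MTU after
# the 20-byte IP header).

def fragment(payload: bytes, mtu: int):
    return [payload[i:i + mtu] for i in range(0, len(payload), mtu)]

frags = fragment(b"x" * 3000, 1480)
print([len(f) for f in frags])  # [1480, 1480, 40]
```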
Address Resolution Protocol (ARP)
After receiving a MAC-layer address, IP devices create an ARP cache to store the recently acquired IP-to-
MAC address mapping, thus avoiding having to broadcast ARPS when they want to recontact a device. If
the device does not respond within a specified time frame, the cache entry is flushed.
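The cache-and-flush behavior described here can be sketched as a small table with per-entry timestamps (the timeout value and addresses are invented for the example):

```python
import time

# Illustrative ARP-cache sketch: entries map IP -> (MAC, timestamp) and are
# flushed when older than a timeout. Not a real ARP implementation.

class ArpCache:
    def __init__(self, timeout=300.0):
        self.timeout = timeout
        self.entries = {}

    def add(self, ip, mac, now=None):
        self.entries[ip] = (mac, now if now is not None else time.time())

    def lookup(self, ip, now=None):
        now = now if now is not None else time.time()
        entry = self.entries.get(ip)
        if entry is None:
            return None
        mac, stamp = entry
        if now - stamp > self.timeout:   # stale: flush and force a re-ARP
            del self.entries[ip]
            return None
        return mac

cache = ArpCache(timeout=300)
cache.add("192.168.1.10", "aa:bb:cc:dd:ee:ff", now=0)
print(cache.lookup("192.168.1.10", now=100))  # aa:bb:cc:dd:ee:ff
print(cache.lookup("192.168.1.10", now=500))  # None (entry expired)
```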
In addition, the Reverse Address Resolution Protocol (RARP) is used to map MAC-layer addresses to IP
addresses. RARP, which is the logical inverse of ARP, might be used by diskless workstations that do not
know their IP addresses when they boot.
Transmission Control Protocol
With stream data transfer, TCP delivers an unstructured stream of bytes identified by sequence numbers.
This service benefits applications because they do not have to chop data into blocks before handing it off
to TCP. Instead, TCP groups bytes into segments and passes them to IP for delivery.
TCP offers reliability by providing connection-oriented, end-to-end reliable packet delivery through an
internetwork. It does this by sequencing bytes with a forward acknowledgment number that indicates
to the destination the next byte the source expects to receive. Bytes not acknowledged within a specified
time period are retransmitted. The reliability mechanism of TCP allows devices to deal with lost, delayed,
duplicate, or misread packets. A time-out mechanism allows devices to detect lost packets and request
retransmission.
TCP offers efficient flow control, which means that, when sending acknowledgments back to the source,
the receiving TCP process indicates the highest sequence number it can receive without overflowing its
internal buffers.
Full-duplex operation means that TCP processes can both send and receive at the same time.
Finally, TCP's multiplexing means that numerous simultaneous upper-layer conversations can be
multiplexed over a single connection.
A three-way handshake synchronizes both ends of a connection by allowing both sides to agree upon
initial sequence numbers. This mechanism also guarantees that both sides are ready to transmit data and
know that the other side is ready to transmit as well. This is necessary so that packets are not transmitted
or retransmitted during session establishment or after session termination.
Each host randomly chooses a sequence number used to track bytes within the stream it is sending and
receiving. Then, the three-way handshake proceeds in the following manner:
The first host (Host A) initiates a connection by sending a packet with the initial sequence number (X) and
SYN bit set to indicate a connection request. The second host (Host B) receives the SYN, records the
sequence number X, and replies by acknowledging the SYN (with an ACK = X + 1). Host B includes its
own initial sequence number (SEQ = Y). An ACK = 20 means the host has received bytes 0 through 19
and expects byte 20 next. This technique is called forward acknowledgment. Host A then acknowledges all
bytes Host B sent with a forward acknowledgment indicating the next byte Host A expects to receive (ACK
= Y + 1). Data transfer then can begin.
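The sequence-number arithmetic of the handshake, with arbitrary values standing in for the initial sequence numbers X and Y:

```python
# Sketch of the three-way handshake's sequence/acknowledgment arithmetic.
# X and Y stand in for the randomly chosen initial sequence numbers.

X, Y = 100, 300  # each host picks its own ISN at random

syn     = {"flags": "SYN",     "seq": X}                          # A -> B
syn_ack = {"flags": "SYN+ACK", "seq": Y, "ack": syn["seq"] + 1}   # B -> A
ack     = {"flags": "ACK", "seq": X + 1, "ack": syn_ack["seq"] + 1}  # A -> B

print(syn_ack["ack"])  # 101: B acknowledges A's SYN (X + 1)
print(ack["ack"])      # 301: A acknowledges B's SYN (Y + 1)
```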
TCP provides reliability through positive acknowledgment and retransmission (PAR). By assigning each
packet a sequence number, PAR enables hosts to track lost or duplicate packets caused by network delays
that result in premature retransmission. The sequence numbers are sent back in the acknowledgments so
that the acknowledgments can be tracked.
In TCP, the receiver specifies the current window size in every packet. Because TCP provides a byte-
stream connection, window sizes are expressed in bytes. This means that a window is the number of data
bytes that the sender is allowed to send before waiting for an acknowledgment. Initial window sizes are
indicated at connection setup, but might vary throughout the data transfer to provide flow control. A
window size of zero, for instance, means "Send no data."
In a TCP sliding-window operation, for example, the sender might have a sequence of bytes to send
(numbered 1 to 10) to a receiver who has a window size of five. The sender then would place a window
around the first five bytes and transmit them together. It would then wait for an acknowledgment.
The receiver would respond with an ACK = 6, indicating that it has received bytes 1 to 5 and is expecting
byte 6 next. In the same packet, the receiver would indicate that its window size is 5. The sender then
would move the sliding window five bytes to the right and transmit bytes 6 to 10. The receiver would
respond with an ACK = 11, indicating that it is expecting sequenced byte 11 next. In this packet, the
receiver might indicate that its window size is 0 (because, for example, its internal buffers are full). At this
point, the sender cannot send any more bytes until the receiver sends another packet with a window size
greater than 0.
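The exchange above can be replayed step by step (the byte and window values are taken directly from the example):

```python
# Replaying the sliding-window example: 10 bytes, receiver window of 5.

data = list(range(1, 11))   # "bytes" numbered 1..10
window = 5
next_seq = 1

# Sender places a window around bytes 1-5 and transmits them together.
sent = data[next_seq - 1 : next_seq - 1 + window]
ack = sent[-1] + 1          # receiver: "got 1-5, expecting 6" -> ACK = 6
next_seq, window = ack, 5   # receiver still advertises a 5-byte window

# Window slides right; sender transmits bytes 6-10.
sent = data[next_seq - 1 : next_seq - 1 + window]
ack = sent[-1] + 1          # ACK = 11
window = 0                  # receiver's buffers are full: "send no data"

print(ack, window)          # 11 0 -> sender must pause until window > 0
```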
User Datagram Protocol
UDP is useful in situations where the reliability mechanisms of TCP are not necessary, such as in cases
where a higher-layer protocol might provide error and flow control.
UDP is the transport protocol for several well-known application-layer protocols, including Network File
System (NFS), Simple Network Management Protocol (SNMP), Domain Name System (DNS), and Trivial
File Transfer Protocol (TFTP).
The UDP packet format contains four fields. These include source and destination ports, length, and
checksum fields.
Source and destination ports contain the 16-bit UDP protocol port numbers used to demultiplex datagrams
for receiving application-layer processes. A length field specifies the length of the UDP header and data.
Checksum provides an (optional) integrity check on the UDP header and data.
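A sketch of building that four-field header with Python's struct module (the port numbers are illustrative; a checksum of 0 means "no checksum" for UDP over IPv4):

```python
import struct

# Sketch of the four-field UDP header: source port, destination port,
# length (header + data), and checksum. All four fields are 16 bits.

def udp_header(src_port, dst_port, payload, checksum=0):
    length = 8 + len(payload)  # the UDP header itself is always 8 bytes
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

hdr = udp_header(12345, 53, b"query")   # e.g. a DNS message to port 53
print(len(hdr), struct.unpack("!HHHH", hdr))  # 8 (12345, 53, 13, 0)
```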
IP Addressing
As with any other network-layer protocol, the IP addressing scheme is integral to the process of routing IP
datagrams through an internetwork. Each IP address has specific components and follows a basic format.
These IP addresses can be subdivided and used to create addresses for subnetworks, as discussed in more
detail later in this chapter.
Each host on a TCP/IP network is assigned a unique 32-bit logical address that is divided into two main
parts: the network number and the host number. The network number identifies a network and must be
assigned by the Internet Network Information Center (InterNIC) if the network is to be part of the
Internet. An Internet Service Provider (ISP) can obtain blocks of network addresses from the InterNIC and
can itself assign address space as necessary. The host number identifies a host on a network and is
assigned by the local network administrator.
IP Address Format
The 32-bit IP address is grouped eight bits at a time, separated by dots, and represented in decimal
format (known as dotted decimal notation). Each bit in the octet has a binary weight (128, 64, 32, 16, 8,
4, 2, 1). The minimum value for an octet is 0, and the maximum value for an octet is 255. Figure 30-3
illustrates the basic format of an IP address.
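The conversion from a 32-bit value to dotted decimal notation can be sketched as:

```python
# Dotted decimal notation: take the four 8-bit octets of a 32-bit address
# and print their decimal values separated by dots.

def to_dotted(addr32: int) -> str:
    octets = [(addr32 >> shift) & 0xFF for shift in (24, 16, 8, 0)]
    return ".".join(str(o) for o in octets)

print(to_dotted(0xAC1F0102))  # 172.31.1.2
```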
IP Address Classes
IP addressing supports five different address classes: A, B, C, D, and E. Only classes A, B, and C are
available for commercial use. The left-most (high-order) bits indicate the network class.
Note: N = Network number, H = Host number. Within each network, one address is reserved for the
broadcast address, and one address is reserved for the network itself.
The class of address can be determined easily by examining the first octet of the address and mapping
that value to a class range in the following table. In an IP address of 172.31.1.2, for example, the first
octet is 172. Because 172 falls between 128 and 191, 172.31.1.2 is a Class B address.
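A small sketch of that first-octet test (network 127 is reserved for loopback and belongs to no commercial class):

```python
# Determine the class of an IPv4 address from its first octet:
# 1-126 -> A, 128-191 -> B, 192-223 -> C, 224-239 -> D, 240-255 -> E.

def address_class(ip: str) -> str:
    first = int(ip.split(".")[0])
    if first == 127:
        return "loopback"        # 127.0.0.0/8 is reserved
    if 1 <= first <= 126:
        return "A"
    if 128 <= first <= 191:
        return "B"
    if 192 <= first <= 223:
        return "C"
    if 224 <= first <= 239:
        return "D"
    return "E"

print(address_class("172.31.1.2"))  # B (172 falls between 128 and 191)
```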
IP Subnet Addressing
IP networks can be divided into smaller networks called subnetworks (or subnets). Subnetting provides
the network administrator with several benefits, including extra flexibility, more efficient use of network
addresses, and the capability to contain broadcast traffic (a broadcast will not cross a router).
Subnets are under local administration. As such, the outside world sees an organization as a single
network and has no detailed knowledge of the organization's internal structure.
IP Subnet Mask
A subnet address is created by "borrowing" bits from the host field and designating them as the subnet
field. The number of borrowed bits varies and is specified by the subnet mask.
Subnet mask bits should come from the high-order (left-most) bits of the host field. Details of Class B and
C subnet mask types follow. Class A addresses are not discussed in this chapter because they generally
are subnetted on an 8-bit boundary.
The subnet mask for a Class C address 192.168.2.0 that specifies five bits of subnetting is
255.255.255.248. With five bits available for subnetting, 2^5 - 2 = 30 subnets are possible, with
2^3 - 2 = 6 hosts per subnet.
Class B subnetting reference chart:

Number of Bits   Subnet Mask       Number of Subnets   Number of Hosts
3                255.255.224.0     6                   8190
4                255.255.240.0     14                  4094
5                255.255.248.0     30                  2046
6                255.255.252.0     62                  1022
10               255.255.255.192   1022                62
11               255.255.255.224   2046                30
12               255.255.255.240   4094                14
13               255.255.255.248   8190                6
14               255.255.255.252   16382               2

Class C subnetting reference chart:

Number of Bits   Subnet Mask       Number of Subnets   Number of Hosts
2                255.255.255.192   2                   62
3                255.255.255.224   6                   30
4                255.255.255.240   14                  14
5                255.255.255.248   30                  6
6                255.255.255.252   62                  2
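The subnet and host counts above all follow 2^n - 2, since the all-zeros and all-ones patterns were historically reserved in each field; a quick check:

```python
# Subnet/host counts follow 2**n - 2: the all-zeros and all-ones bit
# patterns are subtracted from each field (per the classic subnetting rules).

def subnet_counts(borrowed_bits, total_host_bits):
    subnets = 2 ** borrowed_bits - 2
    hosts = 2 ** (total_host_bits - borrowed_bits) - 2
    return subnets, hosts

# Class C (8 host bits), 5 borrowed bits -> mask 255.255.255.248
print(subnet_counts(5, 8))   # (30, 6)
# Class B (16 host bits), 3 borrowed bits -> mask 255.255.224.0
print(subnet_counts(3, 16))  # (6, 8190)
```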
Three basic rules govern logically "ANDing" two binary numbers. First, 1 "ANDed" with 1 yields 1. Second,
1 "ANDed" with 0 yields 0. Finally, 0 "ANDed" with 0 yields 0. The truth table provided in table 30-4
illustrates the rules for logical AND operations.
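Applying these AND rules to an address and its subnet mask yields the network (subnet) address; Python's ipaddress module can stand in for the bitwise work (the address values are illustrative):

```python
import ipaddress

# ANDing an address with its subnet mask yields the network address.
# Host 192.168.2.77 with mask 255.255.255.248 (5 subnet bits, per the
# Class C example above) falls in subnet 192.168.2.72.

addr = int(ipaddress.IPv4Address("192.168.2.77"))
mask = int(ipaddress.IPv4Address("255.255.255.248"))
network = ipaddress.IPv4Address(addr & mask)
print(network)  # 192.168.2.72
```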
The topic of routing has been covered in computer science literature for more than two decades, but
routing achieved commercial popularity as late as the mid-1980s. The primary reason for this time lag is
that networks in the 1970s were simple, homogeneous environments. Only relatively recently has large-
scale internetworking become popular.
Routing involves two basic activities: determining optimal routing paths and transporting information
groups (typically called packets) through an internetwork. In the context of the routing process, the latter
of these is referred to as packet switching. Although packet switching is relatively straightforward, path
determination can be very complex.
Path Determination
Routing protocols use metrics to evaluate what path will be the best for a packet to travel. A metric is a
standard of measurement, such as path bandwidth, that is used by routing algorithms to determine the
optimal path to a destination. To aid the process of path determination, routing algorithms initialize and
maintain routing tables, which contain route information. Route information varies depending on the
routing algorithm used.
Routing algorithms fill routing tables with a variety of information. Destination/next hop associations tell a
router that a particular destination can be reached optimally by sending the packet to a particular router
representing the "next hop" on the way to the final destination. When a router receives an incoming
packet, it checks the destination address and attempts to associate this address with a next hop.
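A destination/next-hop table and the lookup it supports can be sketched as follows; when several prefixes match the destination, the most specific (longest) one wins. The prefixes and next-hop addresses are invented for the example:

```python
import ipaddress

# Sketch of a destination/next-hop routing table with longest-prefix match.
# Prefixes and next-hop addresses are illustrative only.

table = {
    ipaddress.ip_network("10.0.0.0/8"):  "192.168.0.1",
    ipaddress.ip_network("10.1.0.0/16"): "192.168.0.2",
    ipaddress.ip_network("0.0.0.0/0"):   "192.168.0.254",  # default route
}

def next_hop(dst):
    dst = ipaddress.ip_address(dst)
    matches = [net for net in table if dst in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return table[best]

print(next_hop("10.1.2.3"))  # 192.168.0.2 (the /16 beats the /8)
print(next_hop("8.8.8.8"))   # 192.168.0.254 (only the default matches)
```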
Routing tables also can contain other information, such as data about the desirability of a path. Routers
compare metrics to determine optimal routes, and these metrics differ depending on the design of the
routing algorithm used. A variety of common metrics will be introduced and described later in this chapter.
Routers communicate with one another and maintain their routing tables through the transmission of a
variety of messages. The routing update message is one such message that generally consists of all or a
portion of a routing table. By analyzing routing updates from all other routers, a router can build a
detailed picture of network topology. A link-state advertisement, another example of a message sent
between routers, informs other routers of the state of the sender's links. Link information also can be used
to build a complete picture of network topology to enable routers to determine optimal routes to network
destinations.
Switching
Switching algorithms are relatively simple and are basically the same for most routing protocols. In most
cases, a host
determines that it must send a packet to another host. Having acquired a router's address by some
means, the source host sends a packet addressed specifically to a router's physical (Media Access Control
[MAC]-layer) address, this time with the protocol (network layer) address of the destination host.
As it examines the packet's destination protocol address, the router determines that it either knows or
does not know how to forward the packet to the next hop. If the router does not know how to forward the
packet, it typically drops the packet. If the router knows how to forward the packet, however, it changes
the destination physical address to that of the next hop and transmits the packet.
The next hop may be the ultimate destination host. If not, the next hop is usually another router, which
executes the same switching decision process. As the packet moves through the internetwork, its physical
address changes, but its protocol address remains constant.
The preceding discussion describes switching between a source and a destination end system. The
International Organization for Standardization (ISO) has developed a hierarchical terminology that is
useful in describing this process. Using this terminology, network devices without the capability to forward
packets between subnetworks are called end systems (ESs), whereas network devices with these
capabilities are called intermediate systems (ISs). ISs are further divided into those that can communicate
within routing domains (intradomain ISs) and those that communicate both within and between routing
domains (interdomain ISs). A routing domain generally is considered a portion of an internetwork under
common administrative authority that is regulated by a particular set of administrative guidelines. Routing
domains are also called autonomous systems. With certain protocols, routing domains can be divided into
routing areas, but intradomain routing protocols are still used for switching both within and between
areas.
Numerous Routers May Come into Play During the Switching Process
Routing Algorithms
Routing algorithms can be differentiated based on several key characteristics. First, the particular goals of
the algorithm designer affect the operation of the resulting routing protocol. Second, various types of
routing algorithms exist, and each algorithm has a different impact on network and router resources.
Finally, routing algorithms use a variety of metrics that affect calculation of optimal routes.
Routing algorithms often have one or more of the following design goals:
Optimality
Simplicity and low overhead
Robustness and stability
Rapid convergence
Flexibility
Optimality refers to the capability of the routing algorithm to select the best route, which depends on the
metrics and metric weightings used to make the calculation. For example, one routing algorithm may use
a number of hops and delays, but it may weigh delay more heavily in the calculation. Naturally, routing
protocols must define their metric calculation algorithms strictly.
Routing algorithms also are designed to be as simple as possible. In other words, the routing algorithm
must offer its functionality efficiently, with a minimum of software and utilization overhead. Efficiency is
particularly important when the software implementing the routing algorithm must run on a computer with
limited physical resources.
Routing algorithms must be robust, which means that they should perform correctly in the face of unusual
or unforeseen circumstances, such as hardware failures, high load conditions, and incorrect
implementations. Because routers are located at network junction points, they can cause considerable
problems when they fail. The best routing algorithms are often those that have withstood the test of time
and that have proven stable under a variety of network conditions.
In addition, routing algorithms must converge rapidly. Convergence is the process of agreement, by all
routers, on optimal routes. When a network event causes routes to either go down or become available,
routers distribute routing update messages that permeate networks, stimulating recalculation of optimal
routes and eventually causing all routers to agree on these routes. Routing algorithms that converge
slowly can cause routing loops or network outages.
In the routing loop displayed in Figure 5-3, a packet arrives at Router 1 at time t1. Router 1 already has
been updated and thus knows that the optimal route to the destination calls for Router 2 to be the next
stop. Router 1 therefore forwards the packet to Router 2, but because this router has not yet been
updated, it believes that the optimal next hop is Router 1. Router 2 therefore forwards the packet back to
Router 1, and the packet continues to bounce back and forth between the two routers until Router 2
receives its routing update or until the packet has been switched the maximum number of times allowed.
Routing algorithms should also be flexible, which means that they should quickly and accurately adapt to a
variety of network circumstances. Assume, for example, that a network segment has gone down. As many
routing algorithms become aware of the problem, they will quickly select the next-best path for all routes
normally using that segment. Routing algorithms can be programmed to adapt to changes in network
bandwidth, router queue size, and network delay, among other variables.
Algorithm Types
Routing algorithms can be classified by type. Key differentiators include these:
Static versus dynamic
Link-state versus distance vector
Because static routing systems cannot react to network changes, they generally are considered unsuitable
for today's large, constantly changing networks. Most of the dominant routing algorithms today are
dynamic routing algorithms, which adjust to changing network circumstances by analyzing incoming
routing update messages. If the message indicates that a network change has occurred, the routing
software recalculates routes and sends out new routing update messages. These messages permeate the
network, stimulating routers to rerun their algorithms and change their routing tables accordingly.
Dynamic routing algorithms can be supplemented with static routes where appropriate. A router of last
resort (a router to which all unroutable packets are sent), for example, can be designated to act as a
repository for all unroutable packets, ensuring that all messages are at least handled in some way.
Because they converge more quickly, link-state algorithms are somewhat less prone to routing loops than
distance vector algorithms. On the other hand, link-state algorithms require more CPU power and memory
than distance vector algorithms. Link-state algorithms, therefore, can be more expensive to implement
and support. Link-state protocols are generally more scalable than distance vector protocols.
Routing Metrics
Routing tables contain information used by switching software to select the best route. But how,
specifically, are routing tables built? What is the specific nature of the information that they contain? How
do routing algorithms determine that one route is preferable to others? Routing algorithms have used
many different metrics to determine the best route. Sophisticated routing algorithms can base route
selection on multiple metrics, combining them in a single (hybrid) metric.
Routing Information Protocol (RIP)
Today's open standard version of RIP, sometimes referred to as IP RIP, is formally defined in two
documents: Request For Comments (RFC) 1058 and Internet Standard (STD) 56. As IP-based networks
became both more numerous and greater in size, it became apparent to the Internet Engineering Task
Force (IETF) that RIP needed to be updated. Consequently, the IETF released RFC 1388 in January 1993,
which was then superseded in November 1994 by RFC 1723, which describes RIP 2 (the second version of
RIP). These RFCs described an extension of RIP's capabilities but did not attempt to render the previous
version of RIP obsolete. RIP 2 enabled RIP messages to carry more information, which permitted the use of a
simple authentication mechanism to secure table updates. More importantly, RIP 2 supported subnet
masks, a critical feature that was not available in RIP.
ROUTING UPDATES
RIP sends routing-update messages at regular intervals and when the network topology changes. When a
router receives a routing update that includes changes to an entry, it updates its routing table to reflect
the new route. The metric value for the path is increased by 1, and the sender is indicated as the next
hop. RIP routers maintain only the best route (the route with the lowest metric value) to a destination.
After updating its routing table, the router immediately begins transmitting routing updates to inform
other network routers of the change. These updates are sent independently of the regularly scheduled
updates that RIP routers send.
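A hedged sketch of that update rule: increment each advertised metric by 1, keep only the lowest-metric route, and always accept updates from the current next hop (16 is RIP's unreachable "infinity"; the names and prefixes are invented):

```python
# Illustrative sketch of RIP update processing, not a full implementation.
INFINITY = 16  # RIP treats metric 16 as unreachable

def process_update(routing_table, neighbor, advertised):
    """Apply one neighbor's advertised {destination: metric} entries."""
    for dest, metric in advertised.items():
        new_metric = min(metric + 1, INFINITY)  # the sender is one hop away
        current = routing_table.get(dest)
        if (current is None
                or new_metric < current["metric"]
                or current["next_hop"] == neighbor):  # refresh existing route
            routing_table[dest] = {"next_hop": neighbor, "metric": new_metric}

table = {"10.2.0.0": {"next_hop": "B", "metric": 3}}
process_update(table, "A", {"10.1.0.0": 1, "10.2.0.0": 5})
print(table["10.1.0.0"])  # {'next_hop': 'A', 'metric': 2}
print(table["10.2.0.0"])  # unchanged: the metric-3 route via B is still best
```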
RIP includes a number of other stability features that are common to many routing protocols. These
features are designed to provide stability despite potentially rapid changes in a network's topology. For
example, RIP implements the split horizon and holddown mechanisms to prevent incorrect routing
information from being propagated.
RIP TIMERS
RIP uses numerous timers to regulate its performance. These include a routing-update timer, a route-
timeout timer, and a route-flush timer. The routing-update timer clocks the interval between periodic
routing updates. Generally, it is set to 30 seconds, with a small random amount of time added whenever
the timer is reset. This is done to help prevent congestion, which could result from all routers
simultaneously attempting to update their neighbors. Each routing table entry has a route-timeout timer
associated with it. When the route-timeout timer expires, the route is marked invalid but is retained in the
table until the route-flush timer expires.
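The timer behavior above can be modeled directly. The helper names and the 180-second timeout and 120-second flush delay are assumptions (they are common defaults, not values stated in this text); only the 30-second update interval with added jitter comes from the description above.

```python
import random

# Illustrative RIP timer bookkeeping (names and some defaults are assumed).
UPDATE_INTERVAL = 30      # seconds between periodic updates
ROUTE_TIMEOUT = 180       # no news for this long -> route marked invalid
FLUSH_DELAY = 120         # invalid route kept this much longer, then removed

def next_update_time(now, jitter=5):
    """Schedule the next periodic update with a small random offset so
    neighboring routers do not all fire simultaneously (avoids congestion)."""
    return now + UPDATE_INTERVAL + random.uniform(0, jitter)

def route_state(last_heard, now):
    """Classify a route by the age of its most recent update."""
    age = now - last_heard
    if age < ROUTE_TIMEOUT:
        return "valid"
    if age < ROUTE_TIMEOUT + FLUSH_DELAY:
        return "invalid"        # marked unreachable but retained in the table
    return "flushed"            # removed from the table
```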
Priyank Pashine – Networking Essentials Page 66
RIP 2 Packet Format
The RIP 2 specification (described in RFC 1723) allows more information to be included in RIP packets and
provides a simple authentication mechanism that is not supported by RIP.
Command—Indicates whether the packet is a request or a response. The request asks that a router
send all or a part of its routing table. The response can be an unsolicited regular routing update or
a reply to a request. Responses contain routing table entries. Multiple RIP packets are used to
convey information from large routing tables.
Version—Specifies the RIP version used. In a RIP packet implementing any of the RIP 2 fields or
using authentication, this value is set to 2.
Unused—Has a value set to zero.
Address-family identifier (AFI)—Specifies the address family used. RIPv2's AFI field functions
identically to RFC 1058 RIP's AFI field, with one exception: If the AFI for the first entry in the
message is 0xFFFF, the remainder of the entry contains authentication information. Currently, the
only authentication type is simple password.
Route tag—Provides a method for distinguishing between internal routes (learned by RIP) and
external routes (learned from other protocols).
IP address—Specifies the IP address for the entry.
Subnet mask—Contains the subnet mask for the entry. If this field is zero, no subnet mask has
been specified for the entry.
Next hop—Indicates the IP address of the next hop to which packets for the entry should be
forwarded.
Metric—Indicates how many internetwork hops (routers) have been traversed in the trip to the
destination. This value is between 1 and 15 for a valid route, or 16 for an unreachable route.
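The field layout above maps directly onto fixed-size binary records: a 4-byte header (command, version, unused) followed by 20-byte route entries. The following sketch packs one RIP 2 response with Python's `struct` module; the function names are invented for the example, but the field order and sizes follow the description above.

```python
import socket
import struct

def build_rip2_entry(afi, route_tag, ip, mask, next_hop, metric):
    """Pack one 20-byte RIP 2 route entry in network byte order:
    AFI (2 bytes), route tag (2), IP address (4), subnet mask (4),
    next hop (4), metric (4)."""
    return struct.pack(
        "!HH4s4s4sI",
        afi, route_tag,
        socket.inet_aton(ip),
        socket.inet_aton(mask),
        socket.inet_aton(next_hop),
        metric,
    )

def build_rip2_response(entries):
    """Header: command 2 (response), version 2, two unused zero bytes."""
    return struct.pack("!BBH", 2, 2, 0) + b"".join(entries)

# AFI 2 = IP; metric 1 = directly connected network, one hop away.
packet = build_rip2_response(
    [build_rip2_entry(2, 0, "192.168.1.0", "255.255.255.0", "0.0.0.0", 1)]
)
```

A next hop of 0.0.0.0 tells receivers to forward via the router that sent the update, which is the usual case.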
Distance vector routing protocols are often contrasted with link-state routing protocols, which send local
connection information to all nodes in the internetwork.
To provide additional flexibility, IGRP permits multipath routing. Dual equal-bandwidth lines can run a
single stream of traffic in round-robin fashion, with automatic switchover to the second line if one line
goes down. Multiple paths can have unequal metrics yet still be valid multipath routes. For example, if one
path is three times better than another path (its metric is three times lower), the better path will be used
three times as often. Only routes with metrics that are within a certain range or variance of the best route
are used as multiple paths. Variance is another value that can be established by the network
administrator.
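The variance rule above can be sketched as follows. This is an illustration of the load-sharing principle, not Cisco's actual algorithm: a path qualifies if its metric is within `variance` times the best metric, and qualifying paths then carry traffic in inverse proportion to their metrics, so a path with a metric three times lower is used three times as often.

```python
# Illustrative IGRP-style unequal-cost load sharing (names are assumed).
def multipath_shares(paths, variance=1):
    """paths: dict mapping path name -> metric (lower is better).
    Returns dict mapping each usable path -> its fraction of the traffic."""
    best = min(paths.values())
    # Only paths within `variance` times the best metric are usable.
    usable = {p: m for p, m in paths.items() if m <= best * variance}
    # Weight each usable path by 1/metric, then normalize to fractions.
    total = sum(1 / m for m in usable.values())
    return {p: (1 / m) / total for p, m in usable.items()}
```

With the default variance of 1, only equal-cost paths are used; raising the variance admits progressively worse paths into the mix.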
Stability Features
IGRP provides a number of features that are designed to enhance its stability. These include holddowns,
split horizons, and poison-reverse updates.
Holddowns are used to prevent regular update messages from inappropriately reinstating a route that
might have gone bad. When a router goes down, neighboring routers detect this via the lack of regularly
scheduled update messages. These routers then calculate new routes and send routing update messages
to inform their neighbors of the route change. This activity begins a wave of triggered updates that filter
through the network. These triggered updates do not instantly arrive at every network device. Thus, it is
possible for a device that has yet to be informed of a network failure to send a regular update message,
which advertises a failed route as being valid to a device that has just been notified of the network failure.
In this case, the latter device would contain (and potentially advertise) incorrect routing information.
Holddowns tell routers to hold down any changes that might affect routes for some period of time. The
holddown period usually is calculated to be just greater than the period of time necessary to update the
entire network with a routing change.
Split horizons derive from the premise that it is never useful to send information about a route back in the
direction from which it came.
The Split-Horizon Rule Helps Protect Against Routing Loops
Split horizons should prevent routing loops between adjacent routers, but poison-reverse updates are
necessary to defeat larger routing loops. Increases in routing metrics generally indicate routing loops.
Poison-reverse updates then are sent to remove the route and place it in holddown. In Cisco's
implementation of IGRP, poison-reverse updates are sent if a route metric has increased by a factor of 1.1
or greater.
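The two rules can be contrasted in a short sketch (illustrative table layout, not router code). With plain split horizon, a route is simply omitted from updates sent back toward the neighbor it was learned from; with poison reverse, it is advertised back to that neighbor with an unreachable metric instead, which actively breaks larger loops.

```python
INFINITY = 16  # advertised metric meaning "unreachable"

def build_update(table, neighbor, poison_reverse=False):
    """table: dict mapping destination -> (metric, learned_from).
    Returns dict mapping destination -> metric to advertise to `neighbor`."""
    update = {}
    for dest, (metric, learned_from) in table.items():
        if learned_from == neighbor:
            if poison_reverse:
                update[dest] = INFINITY  # poison the route back to its source
            # plain split horizon: omit the route entirely
        else:
            update[dest] = metric
    return update
```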
Timers
IGRP maintains a number of timers and variables containing time intervals. These include an update
timer, an invalid timer, a hold-time period, and a flush timer. The update timer specifies how frequently
routing update messages should be sent. The IGRP default for this variable is 90 seconds. The invalid
timer specifies how long a router should wait in the absence of routing-update messages about a specific
route before declaring that route invalid. The IGRP default for this variable is three times the update
period. The hold-time variable specifies the holddown period. The IGRP default for this variable is three
times the update timer period plus 10 seconds. Finally, the flush timer indicates how much time should
pass before a route should be flushed from the routing table. The IGRP default is seven times the routing
update period.
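The IGRP defaults above are all simple multiples of the update period, which the following fragment makes explicit:

```python
# IGRP timer defaults as described above, derived from the update period.
def igrp_timer_defaults(update=90):
    return {
        "update": update,             # interval between routing updates
        "invalid": 3 * update,        # route declared invalid after 270 s
        "holddown": 3 * update + 10,  # changes suppressed for 280 s
        "flush": 7 * update,          # route removed from the table at 630 s
    }
```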
OSPF was derived from several research efforts, including Bolt, Beranek, and Newman's (BBN's) SPF
algorithm developed in 1978 for the ARPANET (a landmark packet-switching network developed in the
early 1970s by BBN), Dr. Radia Perlman's research on fault-tolerant broadcasting of routing information
(1988), BBN's work on area routing (1986), and an early version of OSI's Intermediate System-to-
Intermediate System (IS-IS) routing protocol.
OSPF has two primary characteristics. The first is that the protocol is open, which means that its
specification is in the public domain. The OSPF specification is published as Request For Comments (RFC)
1247. The second principal characteristic is that OSPF is based on the SPF algorithm, which sometimes is
referred to as the Dijkstra algorithm, named for the person credited with its creation.
OSPF is a link-state routing protocol that calls for the sending of link-state advertisements (LSAs) to all
other routers within the same hierarchical area. Information on attached interfaces, metrics used, and
other variables is included in OSPF LSAs. As OSPF routers accumulate link-state information, they use the
SPF algorithm to calculate the shortest path to each node.
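The SPF computation itself is the classic Dijkstra algorithm. The following is a minimal sketch of what an OSPF router computes over its link-state database; the graph here is a hand-made example, not real LSA data.

```python
import heapq

def shortest_paths(graph, source):
    """graph: dict mapping node -> {neighbor: link_cost}.
    Returns dict mapping each reachable node -> cost of its shortest path."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > dist.get(node, float("inf")):
            continue  # stale queue entry; a better path was already found
        for neighbor, link_cost in graph[node].items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(heap, (new_cost, neighbor))
    return dist
```

Because every router in an area holds the same link-state database, each one runs this computation independently and arrives at a consistent view of the shortest paths.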
As a link-state routing protocol, OSPF contrasts with RIP and IGRP, which are distance-vector routing
protocols. Routers running the distance-vector algorithm send all or a portion of their routing tables in
routing-update messages to their neighbors.
Fundamentals of WAN
A WAN is a data communications network that covers a relatively broad geographic area and that often
uses transmission facilities provided by common carriers, such as telephone companies. WAN technologies
generally function at the lower three layers of the OSI reference model: the physical layer, the data link
layer, and the network layer.
A point-to-point link provides a single, pre-established WAN communications path from the customer
premises through a carrier network, such as a telephone company, to a remote network. Point-to-point
lines are usually leased from a carrier and thus are often called leased lines. For a point-to-point line, the
carrier allocates pairs of wire and facility hardware to your line only. These circuits are generally priced
based on bandwidth required and distance between the two connected points. Point-to-point links are
generally more expensive than shared services such as Frame Relay.
Switched circuits allow data connections that can be initiated when needed and terminated when
communication is complete. This works much like a normal telephone line works for voice communication.
Integrated Services Digital Network (ISDN) is a good example of circuit switching. When a router has data
for a remote site, the switched circuit is initiated with the circuit number of the remote network. In the
case of ISDN circuits, the device actually places a call to the telephone number of the remote ISDN circuit.
When the two networks are connected and authenticated, they can transfer data. When the data transmission is
complete, the call can be terminated.
Packet switching is a WAN technology in which users share common carrier resources. Because this allows
the carrier to make more efficient use of its infrastructure, the cost to the customer is generally much
lower than with point-to-point lines. In a packet switching setup, networks have connections into the
carrier's network, and many customers share the carrier's network. The carrier can then create virtual
circuits between customers' sites by which packets of data are delivered from one to the other through the
network. The section of the carrier's network that is shared is often referred to as a cloud.
Virtual Circuits
A virtual circuit is a logical circuit created within a shared network between two network devices. Two
types of virtual circuits exist: switched virtual circuits (SVCs) and permanent virtual circuits (PVCs).
SVCs are virtual circuits that are dynamically established on demand and terminated when transmission is
complete. Communication over an SVC consists of three phases: circuit establishment, data transfer, and
circuit termination. The establishment phase involves creating the virtual circuit between the source and
destination devices. Data transfer involves transmitting data between the devices over the virtual circuit,
and the circuit termination phase involves tearing down the virtual circuit between the source and
destination devices. SVCs are used in situations in which data transmission between devices is sporadic,
largely because SVCs increase bandwidth used due to the circuit establishment and termination phases,
but they decrease the cost associated with constant virtual circuit availability.
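The three SVC phases can be viewed as a tiny state machine, sketched below (an illustration of the life cycle, with invented class and method names, not a real signaling implementation):

```python
class SwitchedVirtualCircuit:
    """Models the three SVC phases: establishment, data transfer, termination."""

    def __init__(self):
        self.state = "idle"

    def establish(self):
        assert self.state == "idle"
        self.state = "established"   # circuit built between the endpoints

    def transfer(self, data):
        assert self.state == "established"  # data flows only on a live circuit
        return len(data)             # stand-in for actually moving the bytes

    def terminate(self):
        assert self.state == "established"
        self.state = "idle"          # circuit torn down when transfer is done
```

The establishment and termination overhead on every exchange is exactly why SVCs suit sporadic traffic: the circuit exists, and is billed for, only while it is needed.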
PVC is a permanently established virtual circuit that consists of one mode: data transfer. PVCs are used in
situations in which data transfer between devices is constant. PVCs decrease the bandwidth use
associated with the establishment and termination of virtual circuits, but they increase costs due to
constant virtual circuit availability. PVCs are generally configured by the service provider when an order is
placed for service.
Dialup services offer cost-effective methods for connectivity across WANs. Two popular dialup
implementations are dial-on-demand routing (DDR) and dial backup.
DDR is a technique whereby a router can dynamically initiate a call on a switched circuit when it needs to
send data. In a DDR setup, the router is configured to initiate the call when certain criteria are met, such
as a particular type of network traffic needing to be transmitted. When the connection is made, traffic
passes over the line. The router configuration specifies an idle timer that tells the router to drop the
connection when the circuit has remained idle for a certain period.
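The idle-timer behavior described above can be sketched as follows (class and method names are invented; real routers classify "interesting" traffic with access lists, which is omitted here):

```python
class DialOnDemandLink:
    """Illustrative DDR link: dials on traffic, drops after the idle period."""

    def __init__(self, idle_timeout=120):
        self.idle_timeout = idle_timeout
        self.connected = False
        self.last_traffic = None

    def send(self, now):
        """Interesting traffic arrives: dial if needed, refresh the timer."""
        self.connected = True
        self.last_traffic = now

    def tick(self, now):
        """Periodic check: drop the call once the circuit has been idle
        for the configured period. Returns the connection state."""
        if self.connected and now - self.last_traffic >= self.idle_timeout:
            self.connected = False
        return self.connected
```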
Dial backup is another way of configuring DDR. However, in dial backup, the switched circuit is used to
provide backup service for another type of circuit, such as point-to-point or packet switching. The router is
configured so that when a failure is detected on the primary circuit, the dial backup line is initiated. The
dial backup line then supports the WAN connection until the primary circuit is restored. When this occurs,
the dial backup connection is terminated.
WAN Devices
WANs use numerous types of devices that are specific to WAN environments. WAN switches, access
servers, modems, CSU/DSUs, and ISDN terminal adapters are discussed in the following sections. Other
devices found in WAN environments that are used in WAN implementations include routers, ATM switches,
and multiplexers.
A WAN switch is a multiport internetworking device used in carrier networks. These devices typically
switch such traffic as Frame Relay, X.25, and SMDS, and operate at the data link layer of the OSI
reference model.
An access server acts as a concentration point for dial-in and dial-out connections.
Modem
A modem is a device that interprets digital and analog signals, enabling data to be transmitted over voice-
grade telephone lines. At the source, digital signals are converted to a form suitable for transmission over
analog communication facilities. At the destination, these analog signals are returned to their digital form.
ADSL transmits more than 6 Mbps downstream to a subscriber, and as much as 640 kbps more in both directions. Such
rates expand existing access capacity by a factor of 50 or more without new cabling. ADSL can literally
transform the existing public information network from one limited to voice, text, and low-resolution
graphics to a powerful, ubiquitous system capable of bringing multimedia, including full-motion video, to
every home this century.
WAN PROTOCOLS
Integrated Services Digital Network
Integrated Services Digital Network (ISDN) comprises digital telephony and data-transport services
offered by regional telephone carriers. ISDN involves the digitization of the telephone network, which
permits voice, data, text, graphics, music, video, and other source material to be transmitted over
existing telephone wires. The emergence of ISDN represents an effort to standardize subscriber services,
user/network interfaces, and network and internetwork capabilities. ISDN applications include high-speed
image applications (such as Group IV facsimile), additional telephone lines in homes to serve the
telecommuting industry, high-speed file transfer, and videoconferencing. Voice service is also an
application for ISDN. This chapter summarizes the underlying technologies and services associated with
ISDN.
ISDN DEVICES
ISDN devices include terminals, terminal adapters (TAs), network-termination devices, line-termination
equipment, and exchange-termination equipment. ISDN terminals come in two types. Specialized ISDN
terminals are referred to as terminal equipment type 1 (TE1). Non-ISDN terminals, such as DTE, that
predate the ISDN standards are referred to as terminal equipment type 2 (TE2). TE1s connect to the ISDN
network through a four-wire, twisted-pair digital link. TE2s connect to the ISDN network through a TA.
The ISDN TA can be either a standalone device or a board inside the TE2. If the TE2 is implemented as a
standalone device, it connects to the TA via a standard physical-layer interface. Examples include EIA/TIA-
232-C (formerly RS-232-C), V.24, and V.35.
Beyond the TE1 and TE2 devices, the next connection point in the ISDN network is the network
termination type 1 (NT1) or network termination type 2 (NT2) device.
These are network-termination devices that connect the four-wire subscriber wiring to the conventional
two-wire local loop. In North America, the NT1 is a customer premises equipment (CPE) device. In most
other parts of the world, the NT1 is part of the network provided by the carrier. The NT2 is a more
complicated device that typically is found in digital private branch exchanges (PBXs) and that performs
Layer 2 and 3 protocol functions and concentration services. An NT1/2 device also exists as a single device
that combines the functions of an NT1 and an NT2.
ISDN specifies a number of reference points that define logical interfaces between functional groups, such
as TAs and NT1s. ISDN reference points include the following:
R—The reference point between non-ISDN equipment and a TA.
S—The reference point between user terminals and the NT2.
T—The reference point between NT1 and NT2 devices.
U—The reference point between NT1 devices and line-termination equipment in the carrier
network. The U reference point is relevant only in North America, where the NT1 function is not
provided by the carrier network.
Figure illustrates a sample ISDN configuration and shows three devices attached to an ISDN switch at the
central office. Two of these devices are ISDN-compatible, so they can be attached through an S reference
point to NT2 devices. The third device (a standard, non-ISDN telephone) attaches through the R reference
point to a TA. Any of these devices also could attach to an NT1/2 device, which would replace both the
NT1 and the NT2. In addition, although they are not shown, similar user stations are attached to the far-
right ISDN switch.
Frame Relay
Frame Relay is a high-performance WAN protocol that operates at the physical and data link layers of the
OSI reference model. Frame Relay originally was designed for use across Integrated Services Digital
Network (ISDN) interfaces. Today, it is used over a variety of other network interfaces as well. This
chapter focuses on Frame Relay's specifications and applications in the context of WAN services.
Statistical multiplexing techniques control network access in a packet-switched network. The advantage of
this technique is that it accommodates more flexibility and more efficient use of bandwidth. Most of
today's popular LANs, such as Ethernet and Token Ring, are packet-switched networks.
Frame Relay often is described as a streamlined version of X.25, offering fewer of the robust capabilities,
such as windowing and retransmission of lost data, that are offered in X.25. This is because Frame Relay
typically operates over WAN facilities that offer more reliable connection services and a higher degree of
reliability than the facilities available during the late 1970s and early 1980s that served as the common
platforms for X.25 WANs. As mentioned earlier, Frame Relay is strictly a Layer 2 protocol suite, whereas
X.25 provides services at Layer 3 (the network layer) as well. This enables Frame Relay to offer higher
performance and greater transmission efficiency than X.25, and makes Frame Relay suitable for current
WAN applications, such as LAN interconnection.
Devices attached to a Frame Relay WAN fall into the following two general categories:
Data terminal equipment (DTE)
Data circuit-terminating equipment (DCE)
DTEs generally are considered to be terminating equipment for a specific network and typically are located
on the premises of a customer. In fact, they may be owned by the customer. Examples of DTE devices are
terminals, personal computers, routers, and bridges.
DCEs are carrier-owned internetworking devices. The purpose of DCE equipment is to provide clocking and
switching services in a network; DCEs are the devices that actually transmit data through the WAN. In
most cases, these are packet switches.
The connection between a DTE device and a DCE device consists of both a physical layer component and a
link layer component. The physical component defines the mechanical, electrical, functional, and
procedural specifications for the connection between the devices. One of the most commonly used
physical layer interface specifications is the recommended standard (RS)-232 specification. The link layer
component defines the protocol that establishes the connection between the DTE device, such as a router,
and the DCE device, such as a switch. This chapter examines a commonly utilized protocol specification
used in WAN networking: the Frame Relay protocol.
Virtual circuits provide a bidirectional communication path from one DTE device to another and are
uniquely identified by a data-link connection identifier (DLCI). A number of virtual circuits can be
multiplexed into a single physical circuit for transmission across the network. This capability often can
reduce the equipment and network complexity required to connect multiple DTE devices.
A virtual circuit can pass through any number of intermediate DCE devices (switches) located within the
Frame Relay PSN.
Frame Relay virtual circuits fall into two categories: switched virtual circuits (SVCs) and permanent virtual
circuits (PVCs).
With PVCs, DTE devices can begin transferring data whenever they are ready because the circuit is
permanently established.
Frame Relay DLCIs have local significance, which means that their values are unique in the LAN, but not
necessarily in the Frame Relay WAN.
A Single Frame Relay Virtual Circuit Can Be Assigned Different DLCIs on Each End of a VC