Modern Benefits of Networking

For those of us who have grown accustomed to seeing and utilizing various
networks, it's hard to imagine what life would be like without them. The many
conveniences that we enjoy, such as easy sharing of data and sharing of
printers, would be hard to part with even for a day. Since the technology for
linking personal computers together as well as with shared peripherals is not that
old, many of us can remember the pains we had to go through to get a copy of a
file to someone, especially if that someone was some distance away.
Fortunately, those days are past.

Today networks link every part of the globe. As would be expected, they are
primarily found in the developed nations, but new networks are popping up daily
in developing nations. The influence of Hong Kong on Mainland China is spurring
the growth of networking there as well as in surrounding Asian countries. The
Middle East, especially Saudi Arabia and Israel, is investing in networks as
well. Though Eastern European countries were technologically starved under
Soviet control, many are now starting afresh, purchasing advanced technology,
taking a sizable leap in the upgrade path. Gradually a global linking is taking
place, and thousands more join in the benefits of networking daily.

1. Data Sharing

Sharing data today is easier than ever, thanks to networking. Perhaps nothing
else illustrates this better than the proliferation of electronic mail. E-mail has
become one of the leading motivators for companies to invest in networks. As a
means of sharing important information, E-mail is indispensable among
organizations from every industry imaginable. A large number of us have become
used to seeing a flashing icon or some other indicator signaling a letter waiting in
our electronic mailboxes. The letter itself may contain notes about a friendly
after-work game of golf, or last year's fiscal report. The ability to effortlessly and
quickly move data from one person to the next is an option too good to pass up
for many organizations.

Transmitting E-mail is one method of sharing data, but obviously there are
others. Shared files may exist in one location with multiple people accessing
them or updating parts of them. Database applications are found in virtually every
computerized organization. Networks offer the capabilities of multi-user access.
As you can imagine, there is inherent danger in two people accessing and
altering the same file at the same time. What happens if two people update the
same record at once? In times past this scenario would result in the "deadly
embrace", where both parties became locked up and had to reboot, resulting in
lost or corrupted data. More sophisticated database applications incorporate
record locking, a means by which a person updating a record has exclusive use
of the record while others who attempt to access it cannot do so. This certainly
eliminates the problems surrounding lock-ups but doesn't really eliminate the
frustration of waiting on a record that someone else is updating, especially if that
someone forgot what they were doing and headed off to lunch.
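
The idea behind record locking can be sketched in a few lines of Python. This is an illustrative model only, not Novell's actual implementation: each record carries its own lock, and a second updater must wait until the first releases it.

    import threading

    class RecordStore:
        """Toy record store with per-record locking (illustrative only)."""

        def __init__(self):
            self.records = {}                   # record id -> data
            self.locks = {}                     # record id -> its lock
            self.table_lock = threading.Lock()  # guards the lock table itself

        def _lock_for(self, record_id):
            with self.table_lock:
                return self.locks.setdefault(record_id, threading.Lock())

        def update(self, record_id, new_value, timeout=5.0):
            lock = self._lock_for(record_id)
            # A second updater blocks here instead of corrupting the record.
            if not lock.acquire(timeout=timeout):
                raise TimeoutError("record %r is in use by another user" % record_id)
            try:
                self.records[record_id] = new_value
            finally:
                lock.release()

The timeout guards against exactly the "off to lunch" scenario above: a waiting user eventually gets an error rather than hanging forever.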

Novell attempts to add to database functionality by providing BTRIEVE. This
package is NetWare's database manager and it allows the implementation of
features like record locking in the NetWare environment.

Not only may data files be shared; executable files may be shared as well.
When a user invokes an executable file on a network server, a copy of it is
transmitted over the network into the memory of the local user's workstation. That
is where the actual execution takes place, not on the file server. The fact that
execution takes place locally is what distinguishes PC networks from mainframe
networks where processing is done centrally on the host and the terminals
merely display the result. Once the executable file has been copied, it is then
available for copying by other users. In this manner, a single executable file on a
central file server can work for multiple users. Great care should be taken,
however, to ensure that sufficient licensure has been secured in a multi-user
environment so as to remain legal.
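
One common way to stay legal is concurrent-use metering. The sketch below is hypothetical bookkeeping, not any vendor's actual licensing API: the server simply counts simultaneous users of a shared executable against the number of seats purchased.

    class LicenseMeter:
        """Tracks concurrent users of a shared executable against purchased seats."""

        def __init__(self, seats):
            self.seats = seats
            self.in_use = 0

        def checkout(self):
            if self.in_use >= self.seats:
                raise RuntimeError("all %d licenses are in use" % self.seats)
            self.in_use += 1

        def release(self):
            self.in_use = max(0, self.in_use - 1)

    meter = LicenseMeter(seats=10)
    meter.checkout()   # an eleventh simultaneous user would raise RuntimeError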

Modern networks can contain several components for allowing data and resource
sharing.

2. Resource Sharing

One of the distinct benefits of modern networking is the ability to share
peripherals. Few companies have the available resources to place a printer on
every user's desk. Networks offer a logical and cost-effective solution. Since,
once again, the introduction of several users could cause conflict at the printer,
spooling is utilized so that print jobs can be arranged in an orderly manner.
NetWare provides such services in the form of print queues and print servers.
The ability to share printers and disk space has been the driving force behind
many companies installing PC-based networks. Networks are now found in
nearly every type of industry there is. From small companies to large multi-
national corporations, all benefit from sharing peripherals, including modems.
Shared modems are typically called modem servers. Today's incarnations
support multiple lines and are feature-laden.
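
Conceptually, a print queue is first-in, first-out buffering between many users and one printer. Here is a minimal sketch of that idea in Python; it is not NetWare's actual queue or print server code.

    import queue
    import threading

    print_queue = queue.Queue()          # jobs wait here in arrival order

    def print_server(printer_name):
        # The print server drains the queue one job at a time, so users
        # never contend for the printer directly.
        while True:
            job = print_queue.get()
            if job is None:              # shutdown sentinel
                break
            print("%s printing: %s" % (printer_name, job))
            print_queue.task_done()

    worker = threading.Thread(target=print_server, args=("LASER-1",))
    worker.start()
    print_queue.put("fiscal_report.txt") # users just enqueue and move on
    print_queue.put("golf_flyer.txt")
    print_queue.put(None)
    worker.join()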

The Development of Computer Networks


Computers have been around for several decades now. Forty years ago when
large organizations utilized them, they were neither inexpensive nor portable. It's
interesting to watch television documentaries of the computer industry's growth,
especially the old footage of gentlemen proudly standing next to a glittering
behemoth, full of flashing lights and whirring tapes. Those same film clips usually
show roomfuls of data entry personnel clicking away at card punch machines, a
sight you are not likely to see today.

Early computer systems had no provisions for networking. Data was shared via punched
card or tape.

1. Life Before Networking

The early computers were large in size because vacuum tubes were
used to facilitate their processing. It wasn't until the transistor was developed,
and then the integrated circuit, that hardware began to assume a more compact
size.
Memory in the early days of computing was extremely costly, so machines had
relatively little. The type of memory utilized was called "core memory", consisting
of metal rings and rods that were bulky at best.

Storing data involved transferring it to tape, to punched cards, or later, to large
hard disk systems. There were no floppy drives, and computers were not hooked
together, so there was no easy way of sharing data without first placing it on tape
or on punched card. As you can imagine, this placed a great deal of overhead on
data sharing, and time truly became scarce as the computer became useful to
more and more departments.

2. Early Connections

The first computers were not sophisticated enough to allow several users to
utilize resources at once. Early operating systems were designed to process one
job at a time. This type of processing was often called "batch" processing. Later,
multitasking operating systems were developed to allow several jobs to be
processed simultaneously. Up to this point, computers were not "interactive".
That is, they did not permit a user or operator to interact with the program while it
was running.

As soon as the operating systems became multitasking, the next trend was to
interactive systems. Operating systems had to be developed that could facilitate
this. Once developed, multiple users could interact with the CPU simultaneously
via a computer terminal. This alleviated the tremendous backlog of jobs waiting
to be done in the single-user, single-task environment. Early connections for
multiple users were the first fledgling steps for computer networking.

As systems grew, it became evident that the complete burden of processing
rested on the CPU. It had to withstand access and processing for many users
and had to oversee the routing of output to printers and terminals. Managing a
CPU's resources effectively meant offloading mundane tasks that ate up CPU
time. These tasks included communication processes. As the number of users
interacting with a machine increased, the need for a device to take over this type
of task became evident. When developed, the front-end processor experienced
widespread usage. Front-end processors are still in use today, freeing up
mainframe CPUs for more important tasks.

Once the attachment of several users to a mainframe at a local site had been
mastered, the next task was to offer connections at remote sites. This was
accomplished via telephone lines. Obviously connecting one user remotely didn't
seem such a chore, but connecting multiple users via a single telephone line
presented a greater challenge. Special devices were created to meet this need.
Concentrators allowed the blending of signals at various rates from terminal
devices. A controller could oversee the routing of these signals to the appropriate
host. The combination of these two devices into a single device, called a cluster
controller, allowed remote terminals to seamlessly interact with a host computer.
This technology opened the door of computing to many organizations that
couldn't afford to buy a mainframe of their own. Computer owners worked out
time-sharing deals with less fortunate companies. In short, computer resource
availability increased quite dramatically.

Remote access to computers via telephone lines greatly enhanced computer resource
availability.

In the midst of these new advances, however, there was a major drawback.
Purchasing a computer from a particular company locked you into the support
provided by that company and it also locked you into using the particular
communication technology employed by that company. If they shut down, so did
your support. This problem was exacerbated by the poor interoperability among
early computer vendors. As has always been the case, third-party companies
sprang up to meet the interoperability needs, but significant differences in
architecture and hardware implementation made their tasks difficult at best,
sometimes impossible.

The major players on the block in early networking included the International
Business Machines Corporation (IBM), which should be no surprise, and Digital
Equipment Corporation (DEC). IBM's early networking followed a specification
called SNA, or Systems Network Architecture. Several devices were developed
using SNA, allowing the combination of computer resources from several internal
groups within a large organization. This feat was important because for the first
time, companies could readily share data from one department with another as
well as balance processing loads between computer resources. DEC's DECnet
offered similar advantages.

3. Modern Networking
The ability to balance processing load and resources was the prime motivator for
launching us into the modern era of networking. There was one very large
organization that discovered the necessity of spreading out the loads on its
numerous computers. That organization was the United States Government.
Spearheaded by the Department of Defense, a move to create a network linking
the government's vast computer resources was undertaken. The end result
brought together just about any group that might be in some way involved in
defense and defense research, including many educational institutions. This
expansive network was called ARPANET (Advanced Research Projects Agency
Network).

What was so important about the development of ARPANET was the creation of
protocols for linking dissimilar computers together. The evolution of these
successes in interoperability led to the development of a very dominant set of
protocols (called a suite) called TCP/IP protocols (Transmission Control
Protocol / Internet Protocol). This unique group of specifications governs
and facilitates the linking of computers practically all over the world. The huge
internetwork that sprang from ARPANET is now called the Internet.

Development of networking on a more local level was also progressing,
especially among developers of minicomputers. In the late 70s, DEC, Intel and
Xerox developed a scheme for networking across multi-vendor platforms. This
new type of localized network, called Ethernet, served these purposes well.
Ethernet governs the physical aspects of interconnecting local computers such
as the cabling type, allowable distances, how data is placed onto the wire, how
the data is formatted, etc. Because of these characteristics, Ethernet is often
referred to as a "media" protocol. Ethernet is still in use today, in the PC network
era, offering data transfer speeds of up to 10 megabits per second (Mbps).
Current Ethernet standards are governed by the IEEE 802.3 committee.

About the same time, a company called DataPoint developed a new protocol
called ARCnet, short for Attached Resources Computer network. Like Ethernet,
ARCnet is a set of media protocols. Interestingly enough, ARCnet is still
marketed today at a price that is very budget oriented. Its speed, which is slow
compared to other PC network protocols, is only 2.5 Mbps. This rate was based
on the speed of early computer disk drive systems. ARCnet standards are
governed by an informal group comprised of ARCnet-related vendors, not by
IEEE. Yet, ARCnet is probably the most standardized network in terms of
interoperability because of the strong commitment amongst the
vendors.

The ability to link computers, often those created by different vendors, is made
feasible by the adoption of standards. Standards-setting organizations include
the International Organization for Standardization (ISO) and the Institute of
Electrical and Electronics Engineers (IEEE). The contributions of these entities
have pushed us into the
next logical step of networking which is internetworking -- the linking of networks,
which may differ significantly.

The technology of performing internetworking is still evolving as new feature-laden
products are introduced almost daily. Realizing the benefits and
importance of data and resource sharing, many companies are now connecting
their networks from various departments or subsidiaries to each other, and
implementing management tools that can govern the entire collection. These
departments or other organizational units might be geographically located on
opposite sides of the world or in the same building. Some may link with other
companies on different continents creating a truly global network. The extension
of networks across organizational, geographical and political boundaries will
serve to bring our information, resources, and consequently our world, closer
together.

Some enterprise networks or global networks span nearly the whole world.

From Novell's point of view, the movement toward global networking requires
appropriate technology. The latest incarnation of NetWare reflects this line of
thinking as it is specifically geared toward managing network resources beyond
the confines of a single office, building or campus. NetWare 4.0 now allows a
multiple file server environment to be administered with greater ease than with
previous versions. Also, many of the inner workings of the operating system itself
have been shielded from the user.

The growth of modern networking will continue on its rapid curve for quite some
time as technology continues to develop. Networks will continue to grow in both
size and complexity. From their humble beginnings to the colossal systems of
today, networks have evolved into an integral and necessary part of the
corporate world.

Local Area Networks
This chapter will introduce you to networking concepts, terminology, and
technology from the perspective of the local area network. Since most networking
personnel get their feet wet in local area networks (LANs) as opposed to larger
wide area networks, this seems the appropriate place to start. This chapter will
approach the technology of networking by migrating from a general view to one
of more specifics in order to fully cover the topic. Perhaps the best place to begin
is to look at a definition of a local area network.

Local Area Network - An interconnection of computers and peripheral devices
contained within a limited geographical area utilizing a communication link and
operating under some form of standard control.

Network Topologies
The interconnection mentioned above follows a physical and logical layout. This
layout, called a topology, governs many aspects of LANs including how they
function and how easy they are to troubleshoot.

1. Point-to-Point Topology

Point-to-point topology is the simplest of the physical layouts of network devices.
Point-to-point connections mean that two devices (nodes) have a single path for
data to travel between them and there is nothing that breaks up that path.

Point-to-Point connections can be established between many devices.


A prime example of how this topology is implemented in networking is the
manner in which terminals are now connected to mainframes or mini-computers.
Instead of having many cables from numerous terminals hooked into one of
these computers, a device known as a terminal server allows the data from
several terminals to be transmitted over a single cable. This single cable
connection between the computer's front-end processor and the terminal server
forms a point-to-point link. In addition, some terminal servers form point-to-point
links with the individual terminals.

The point-to-point topology can be seen as one of the basic building blocks of
larger, more complicated topologies. All major topologies include point-to-point
connections, even if there is no wire between two devices, but some other
medium instead. Satellite transmissions are considered to be point-to-point
communications. Similarly, laser transmissions can also be viewed in this
manner. A variant on point-to-point connections is a multipoint topology in which
a single cable may split into several segments in order to connect to several
devices.

Point-to-point topology is not just limited to networking use. You should be aware
that the direct connection of a PC to a printer follows a point-to-point topology. In
fact, any externally connected device, including modems or hard disk drives,
would also fall under this classification.

2. Bus Topology

If you have ever had the occasion to visit San Francisco, you might have noticed
that the world-famous streetcars in that scenic city utilize a common cable
running beneath the streets to propel them up the steep hills. Similarly, other
major cities have mass transit systems like busses that utilize common wires
above the streets for power. These shared cables might be called "bus wires", an
excellent description of one of the most popular topologies for LANs -- the bus
topology.
Devices all share a common cable for transferring data in a bus topology LAN. Signals are
eventually absorbed by the terminator.

3. Star Topology

Today if you decide to install a LAN, your local LAN dealer will probably suggest
you look seriously at star topology networks. Star topology networks are nothing
new; they just offer some benefits that are hard to overlook. Star topology derives
its name from the arrangement of devices so that they radiate from a central
point. At the central point we usually see a device generically called a hub.

Key to the benefits of the star topology is the hub unit, which may vary in function
from a simple signal splitter (called a passive hub) to one that amplifies and
keeps statistics on the data traveling through it (termed an active, intelligent
hub). In fact, hubs may be sophisticated enough to selectively
disconnect any machine connected to them that is misbehaving, as well as allow
network operators to dial in to them and monitor the performance of a single
workstation. It's these advantages that make the star topology a popular choice
in the networking marketplace. Hubs that amplify signals coming through are
called active hubs or multiport repeaters.

Star topologies do require more cable than a simple bus topology, but most use a
relatively inexpensive type of cable called twisted pair cabling, which helps
control wiring costs. The hubs themselves add expense, and the level of that
expense is directly attributable to how complex a hub is needed.

Troubleshooting a star topology network is a bit easier than bus topology. At the
very least, one may disconnect devices from a central hub to isolate a problem
as opposed to visiting each individual machine. Above this physical level of
troubleshooting, there is hub management software that can report problems
back to you. It's obvious how the central hub device offers advantages, but there
is one drawback. The hub itself represents a single point of failure. If you lose a
hub, you effectively lose all workstations attached to it. The quality and reliability
of the hub products you purchase cannot be overstressed.

The star topology involves one or more devices radiating out from a central point (i.e.
hub).

In summary, star topology systems offer better troubleshooting and management


capabilities, but require more physical resources than a comparable bus system.

4. Ring Topology

Ring Topology describes the logical layout of token ring and FDDI networks. In
this scheme, a ring is created to which each device (workstation, server, etc.)
attaches. A special signal, called a token, travels around this ring visiting each
machine, letting it know that it is that machine's turn to transmit. Since the token
visits every node, every one gets the chance to transmit, creating a very "fair"
LAN. This simplistic explanation belies the true complexity of ring topology
systems available today. Token ring LANs, and their FDDI cousins, are the most
sophisticated, fault-tolerant, and, consequently, the most expensive systems
available in the current marketplace.

The logical creation of a ring allows information on such a LAN to travel in one
direction. Since only one device is allowed to transmit at a time, collisions are not
a problem on ring systems. Of course there are always problems that can occur,
like bad network cards or hub units, that will bring a ring topology LAN to a
grinding halt, but these LANs are often very resilient. Typical ring system network
interface cards (NICs) contain the ability to perform what is known as signal
regeneration. This means information received by them is copied and
retransmitted at a higher amplification. Since every piece of data traveling around
a ring must visit each device, the signal gets regenerated numerous times. This
feature allows for greater distances between nodes and increased chances that
good data will completely traverse the ring. More details on ring topology
systems will be passed along in later sections of this coursebook.
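
The fairness of token passing is easy to see in a toy simulation. This sketch ignores the real protocol's framing and error handling (it is not IEEE 802.5); it only shows that the station holding the token is the only one allowed to transmit.

    stations = ["A", "B", "C", "D"]          # nodes attached to the logical ring
    pending = {"A": ["frame-1"], "B": [], "C": ["frame-2", "frame-3"], "D": []}

    token_at = 0
    for _ in range(8):                        # circulate the token twice
        station = stations[token_at]
        if pending[station]:
            frame = pending[station].pop(0)   # only the token holder transmits
            print("%s transmits %s" % (station, frame))
        token_at = (token_at + 1) % len(stations)  # pass token to the neighbor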

Even though token ring LANs utilize a star topology physically, this illustration shows that
a logical ring is created inside the MAU.

5. Mesh Topology
Every device has a direct path to every other device in the seldom used mesh topology.

Mesh topology is uncommon today because of its sheer impracticality. In a mesh
topology system, every node is connected to every other node. The pervading
thought behind this is to offer the maximum amount of reliability for data transit
and fault-tolerance.

The major problem is the amount of cabling necessary to create this topology,
plus each link from one device to another requires an individual NIC. Not only are
physical components wasted, but the overall capacity to carry data is grossly
under-utilized unless all nodes are transmitting to one another almost constantly.
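
The cabling burden grows quadratically: a full mesh of n nodes needs n(n-1)/2 links, and every node needs n-1 interfaces. A quick calculation makes the impracticality concrete:

    def mesh_links(n):
        # Every node connects to every other node exactly once.
        return n * (n - 1) // 2

    for n in (5, 10, 50):
        print("%2d nodes -> %4d links, %2d NICs per node"
              % (n, mesh_links(n), n - 1))
    # 5 nodes -> 10 links; 10 nodes -> 45 links; 50 nodes -> 1225 links
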
Components
A local area network can be composed of several components. This section
deals with what those components are, focusing on terminology and functionality.

1. Servers

Server is a generic term applied to any machine running a "service" application.
That service being performed might include access to shared files (file server) or
access to shared printers (print server).

Novell's file services are all governed by the portion of the Novell operating
system that resides on your file server. In addition, NetWare provides security
services that offer login/password protection.

Several different types of servers are utilized on LANs.

There are other types of servers besides file and print servers. Communication
servers offer access to remote devices outside of a network. That access might
be to a mainframe or minicomputer, or other networks, workstations or servers.
Typically, a machine that allows multiple users to share one or more modems for
external connections is called a modem server. Modem servers are becoming
increasingly popular today as more and more companies find the need to access
external information or E-mail services.
Another type of server is known as a database server. This unique device assists
users in interacting with databases by coordinating the data sent to the local
workstation. It takes a burden off the local PC by filtering out all but required
data, which also greatly reduces LAN traffic.

File servers sit at the heart of just about every network. Their responsibility is to
dole out files to users requesting them and to sometimes deny that access where
appropriate. File servers must know which directories and files certain users
are allowed to utilize in order to efficiently manage them. The responsibility of
providing security information to the machine is that of the supervisor,
administrator, or some other level of network management personnel.

When users request a file, its contents are copied across the network into the
memory of the user's local workstation. Once there, the user may use it however
they wish. Some files are not designed to be simultaneously shared on the
network. Many executable files, for instance, are only utilized by one person at a
time. Consequently, if one user attempts to use one of these non-shareable files
while another has it tied up, the file server will be responsible for letting the user
know there is a conflict. For those files that are shareable, the file server will
allow multiple copies of these to be sent to the workstations if the users only want
to view the contents of them. If users are allowed to simultaneously update a file,
the records being updated must be locked so that no two users can update the
same section of the file at once. Such a conflict would be serious and might
result in the "deadly embrace". The file server must be able to
distinguish whether a file is shareable or non-shareable. Often that
delineation is done by the network administrator.
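
The server's decision reduces to a shareable flag, set by the administrator, plus a table of who has each file open. A hypothetical sketch of that logic (not NetWare's actual file service):

    class FileServer:
        def __init__(self):
            self.shareable = {"REPORT.DAT": True, "WP.EXE": False}  # set by admin
            self.open_by = {}                                       # file -> users

        def open_file(self, user, name):
            users = self.open_by.setdefault(name, set())
            if users and not self.shareable.get(name, False):
                # Non-shareable file already in use: report the conflict.
                raise PermissionError("%s is in use by %s" % (name, sorted(users)))
            users.add(user)
            return "contents of " + name

    server = FileServer()
    server.open_file("sue", "WP.EXE")
    server.open_file("bob", "REPORT.DAT")   # fine: REPORT.DAT is shareable
    server.open_file("bob", "WP.EXE")       # raises PermissionError: in use by sue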

A print server's role is very important in the shared peripheral environment as it
carries out the crucial task of making sure data from an application successfully
reaches its temporary holding tank (queue) and subsequently the printer for
which it was destined.

2. Workstations

We should be careful to delineate that the term "workstation" may be a little
misleading depending on your particular involvement in the computer industry. In
PC-based local area networking, a workstation refers to a machine that will allow
users access to a LAN and its resources while providing intelligence on-board
allowing local execution of applications. This would pretty well cover the gamut of
all PCs.
Workstations may allow data to be stored locally or remotely on a file server.
Obviously, diskless workstations require all data to be stored remotely including
that data necessary for the diskless machine to boot up. Executable files may
reside locally or remotely as well, meaning a workstation can run its own
programs or those copied off the LAN. Though the source of data doesn't matter,
the destination for execution does. Processing is done on local machines in PC
LANs.

3. Network Interface Cards

The NIC is obviously a crucial component to networking. It allows a device to
participate on the network. Token ring LANs require token ring NICs, Ethernet
LANs require Ethernet NICs, etc.

Software is required to interface between a particular NIC and an operating
system (i.e. NetWare). This interface is called a driver. NetWare provides several
drivers for different vendors' cards. The vendors themselves will provide drivers
for their cards as well. Different drivers are needed for integrating a NIC on a
workstation as opposed to a file server. That's because the operating systems on
the two types of machines are different.

4. Hubs

Hubs are a crucial element to all star topology LANs. Hubs serve as a central
device through which data bound for a workstation travels. The data may be
distributed, amplified, regenerated, screened or cut off.

Hubs have different names depending on the type of LAN. In token ring LANs
they are referred to as Multistation Access Units or Controlled Access Units
(MAUs or CAUs). In 10BASE-T Ethernet, they are referred to as concentrators.
In ARCnet they are simply called hubs.

Hubs vary in their capabilities and sophistication. ARCnet passive hubs are very
inexpensive and only split signals among several devices. Other hub units cost
several thousands of dollars providing state-of-the-art remote management and
diagnostic capabilities.

Transmission Media
Transmission media is what actually carries a signal from one point to another.
This may include copper wiring in the case of twisted pair cable or coax cable, or
electronic waves in the case of microwave or satellite transmission. A medium
such as copper wiring is referred to as bounded media because it holds
electronic signals. Fiber optic cable is said to be bounded media as well because
it holds light waves. Other media that do not physically constrain signals are
considered to be unbounded media.

1. Twisted Pair Cabling

Twisted pair cabling is the current popular favorite for new LAN installations. The
marketplace popularity is primarily due to twisted pair's (TP's) low cost in
proportion to its functionality. Its usage has been justified through years of
implementation by phone companies as it is the medium used by them to
connect our world together. In many cases, TP cabling has already been
installed in a site by the phone company during telephone installation, removing
the need to put in any new cabling for a local area network.

The construction of TP is simple. Two insulated wires are twisted around one
another a set number of times per foot of distance. If properly
manufactured, the twists themselves fall in no consistent pattern. This is to help
offset electrical disturbances which can affect TP cable such as radio frequency
interference (RFI) and electromagnetic interference (EMI). These "pairs" of wires
are then bundled together and coated to form a cable.

Twisted pair cabling is exactly what its name implies - two wires twisted around one
another.

Twisted pair comes in two different varieties - shielded and unshielded. Shielded
twisted pair (STP) is often implemented with LocalTalk by Apple and by IBM's
token ring systems. STP is simply TP cabling with a foil or mesh wrap inside the
outer coating. This special layer is designed to help offset interference problems.
The shielding has to be properly grounded, however, or it may cause serious
problems for the LAN. Twisted pair cabling with no shielding is simply called
unshielded twisted pair (UTP).
2. Coaxial Cable

Coaxial cable or just "coax" enjoys a huge installed base among LAN sites in the
US. It has fit the bill perfectly for applications requiring stable transmission
characteristics over fairly long distances. It has been used in ARCnet systems,
Ethernet systems and is sometimes used to connect one hub device to another
in other systems. This is due to coax's superior distance allowances.

Construction-wise coax is a little more complex than TP. It is typically composed
of a copper conductor that serves as the "core" of the cable. This conductor is
covered by a piece of insulating plastic, which is covered by a wire mesh serving
as both a shield and second conductor. This second conductor is then coated by
PVC or other coating. The name of the cable derives from this design: a
conductor within a conductor sharing a single axis ("co-axial").

Coaxial cable's use of a second conductor doubling as shielding helps reduce effects of
outside interference.

Coaxial cable's construction and components make it superior to twisted pair for
carrying data. It can carry data farther and faster than TP can. These
characteristics improve as the size of the coax increases. There are several
different types of coax used in the network world. Each has its own RG
specification that governs size and impedance, the measure of a cable's
resistance to an alternating current. One must be cautious in acquiring coax to
make sure the right kind has been obtained. Different cable types can differ
widely in many important areas.

Twisted pair has one chief advantage, however, and it's an important one. TP is
less expensive than coax. In addition, as mentioned in our earlier section, TP is
often already available on-site due to phone installation. TP is also extremely
flexible and easy to work with, though it may not be as sturdy as coax. Because
of these factors, the current marketplace has migrated away from coax and it is
no longer the "chic" cable to buy. Plus, most development research is based on
improving performance on twisted pair systems. Coax still has specific purposes,
which means it won't go away, but its role as primary choice for cabling is no
longer accepted in the marketplace.

3. Fiber Optic Cable

Carrying data at dizzying speeds, fiber has come into its own as the premier
bounded media for high speed LAN use. Because of fiber's formidable expense,
however, you're not likely to see it at the local workstation any time soon.
Instead, fiber is used to link vital components (like file servers) in a LAN or multi-
LAN environment together. Consequently we often hear terms like "fiber
backbone" thrown around.

Fiber optic is unsophisticated in its structure, but expensive in its manufacture.
The crucial element for fiber is glass that makes up the core of the cabling. The
glass fibers may be only a few microns thick or bundled to produce something
more sizable. It is worth noting that there are two kinds of fiber optic cable
commercially available - single mode and multimode. Single mode is used in the
telecommunications industry by companies like AT&T or US Sprint to carry huge
volumes of voice data. Multimode is what we use in the LAN world.

The glass core of a fiber optic cable is surrounded by and bound to a glass tube
called "cladding". Cladding adds strength to the cable while disallowing any stray
light wave from leaving the central core. This cladding is then surrounded by a
plastic or PVC outer jacket with provides additional strength and protection for
the innards. Some fiber optic cables incorporate Kevlar fibers for added strength
and durability. Kevlar is the stuff of which bullet-proof vests are made, so it's
tough.

Fiber optic cable provides tremendous bandwidth for data transmissions. Its construction
makes it a very durable medium.

Fiber optic is lightweight and is utilized often with LEDs (Light-Emitting Diodes)
and ILDs (Injection Laser Diodes). Since it contains no metal, it is not susceptible
to problems that copper wiring encounters like RFI and EMI. Plus, fiber optic is
extremely difficult to tap, so security is not a real issue.

The biggest hindrance to fiber is the cost. Special tools and skills are needed to
work with fiber. These tools are expensive and hired skills are expensive too. The
cable itself is pricey, but demand will ease that burden as more people invest in
this medium. Attempts have been made to ease the cost of fiber. One solution
was to create synthetic cables from plastic as opposed to glass. While this cable
worked, it didn't possess anywhere near the capabilities of glass fiber optic, so its
acceptance has been somewhat limited. The plastic fiber cables are constructed
like glass fiber only with a plastic core and cladding.

4. Cabling Summary

Now that we've examined the major bounded media, let's take a quick look at
how they compare.

Twisted Pair Cable

Advantages:
1. Inexpensive
2. Often available in existing phone system
3. Well tested and easy to get

Disadvantages:
1. Susceptible to RFI and EMI
2. Not as durable as coax
3. Doesn't support as high a speed as other media

Coaxial Cable

Advantages:
1. Fairly resistant to RFI and EMI
2. Supports faster data rates than twisted pair
3. More durable than TP

Disadvantages:
1. Can be affected by strong interference
2. More costly than TP
3. Bulkier and more rigid than TP

Fiber Optic Cable

Advantages:
1. Highly secure
2. Not affected by RFI and EMI
3. Highest bandwidth available
4. Very durable

Disadvantages:
1. Extremely costly in product and service
2. Sophisticated tools and methods needed for installation
3. Complex to lay out and design

Wireless Media
The dream of being able to communicate data in networks without having to deal
with the constraints of physical cabling is very much realized today. Wide area
networks obviously make use of wireless technology to transmit data around our
globe. The acceptance of wireless networks on the local level has been
significantly hindered, however, for a number of reasons.

Perhaps the biggest drawback to the two major local wireless technologies -
radio and infrared - has been their speed. Neither could come close to matching
the 10 or 16 Mbps provided by conventional bounded media LANs. In fact, until
recently, these technologies were struggling within their confines to reach out of
the Kbps range. Today, however, wireless LANs are climbing out of the doldrums
with comparable speeds to token ring systems. The perception that they are slow
and limited is still fairly widespread, however, which will limit wireless' acceptance
on the desktop.

Additionally, the size of the installed base of physical wiring plays a part in
unbounded local media acceptance. The United States, for instance, has a very
large installed base of physical cabling. It's readily available and fast. Other
countries like Japan, surprisingly enough, do not have such a large installed
base. Consequently, their marketplaces are more open to the idea of wireless
LANs and emerging higher speed technologies may find better acceptance there.

Another major hurdle for wireless LANs will be the standardization process. This
is necessary if there is ever any hope for interoperability in the marketplace
between products from different vendors. The IEEE has created a committee that
will oversee this standardization. The standard will be called the 802.11 standard.

1. Radio
Radio offers superior characteristics as a wireless media but suffers from a major
hindering force known as the government. The government doesn't mean to
hinder radio LANs, but the Federal Communications Commission must bridle
radio for LAN use in order to responsibly manage our public airwaves, and that
is, after all, what we pay them to do. Fortunately, radio LAN product
manufacturers have isolated frequencies that are not licensed by the government
and made use of these, allowing them to scoot under the regulatory fence.

Radio-based LANs use portable transmitters and receivers at each LAN device.

Radio-based LANs do have to contend with the interference that occurs daily in
the workplace. That interference can come from a number of different electrical
sources and can significantly impact LAN performance. For radio systems
using only a small portion of the radio spectrum (narrowband systems), the
problem can be insurmountable. The vendors of spread-spectrum
products claim that their products can isolate interference problems and avoid
using those frequencies.

Though radio offers portability to any node within range, its unbounded nature
makes it somewhat less secure. A "non-friendly" could, in theory, listen in to your
radio broadcasts. The eavesdropper would have to, of course, know what
frequency or frequencies you were using. Once that hurdle was overcome, your
LAN would be laid bare.
Radio, though limited by its speed, may be the wireless transmission method of
choice for many desktops because of its low cost and capabilities. However,
regulatory delays have cost radio a few months on the road to standardization.
This has given infrared vendors at least a little time to create competing products.

2. Infrared

Infrared technology uses the invisible portion of the light spectrum with
wavelengths just a little longer than those of red light. These frequencies are very
high, offering good data transfer rates. Modern infrared LANs can achieve
throughput at 16 Mbps with potential for better. We are used to seeing infrared
technology utilized for our television or VCR remotes.

Infrared transmissions offer potential for high speed data transfer but are limited by
inability to penetrate walls and floors.

Infrared technology involves the use of an infrared transmitter like an LED or ILD
along with a receiver, typically a photodiode. These components operate in a
line-of-sight fashion. That is, nothing can obstruct the pathway between them.
Fortunately these signals can be bounced off walls and ceilings providing
transmission around obstacles. Line-of-sight means, however, that these signals
cannot be broadcast through walls, severely limiting infrared LANs.

Modern infrared systems use a repeater device simply to retransmit a signal from
one room into another. This device is generally mounted on the ceiling or high in
a corner to alleviate as many obstacles as possible. These systems also use a
process called "diffusion" to send the signal in a wide path across a room thus
reducing the chance of signals not getting past a single obstacle.
The good news about infrared technology is that it may not be very costly to
implement. Since infrared items have been around a while, significant resources
exist to mass produce infrared products. Advances in the technology will
probably lead to faster products without as many limitations. Infrared
transmissions are now limited to a relatively short distance and, used outdoors,
are extremely susceptible to atmospheric conditions.

3. Wireless LAN Media Summary

Radio

Advantages:
1. Transmission not line of sight
2. Inexpensive products
3. Direct point-to-point linking to receiving station
4. Ideal for portable devices

Disadvantages:
1. Limited bandwidth means less data throughput
2. Some frequencies subject to FCC regulation
3. Highly susceptible to interference

Infrared

Advantages:
1. Higher bandwidth means superior throughput to radio
2. Inexpensive to produce
3. No longer limited to tight interroom line-of-sight restrictions

Disadvantages:
1. Limited in distance
2. Cannot penetrate physical barriers like walls, ceilings, floors, etc.

Repeaters
As networks begin to grow and expand, physical limitations are reached. The
limitations may have nothing to do with running out of cable or components, but
rather running out of signal power, or worse yet - running into signal noise. In
technical terms this loss of power of a signal is referred to as attenuation while
the signal noise is called just that - noise.

In order to minimize these phenomena, special devices called repeaters are
incorporated into internetworks (combinations of individual networks into larger
ones). A repeater does what its name implies. It takes an incoming signal and
repeats it, but at a higher power and noise-free.

The repeater is not an amplifier only, as such a device would amplify the good
part of the signal as well as the bad. Instead repeaters employ what is known as
"signal regeneration". This simply means that the original signal is absorbed,
copied and retransmitted along another segment of cabling. This new signal has
been beefed up and cleaned up. When it leaves the repeater it is both renewed
and noise-free.
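
Signal regeneration can be shown numerically. Under the simplifying assumption of two-level (0 V / 5 V) signaling, the repeater decides what each incoming sample was meant to be, then re-emits a clean, full-power level rather than amplifying the noise along with the signal:

    def regenerate(samples, threshold=0.5, high=5.0, low=0.0):
        # Decide each sample's intended bit, then re-emit a clean full-power level.
        return [high if s > threshold else low for s in samples]

    weak_and_noisy = [0.9, 0.1, 0.7, -0.2, 1.1, 0.4]   # attenuated 5V/0V signal
    print(regenerate(weak_and_noisy))  # [5.0, 0.0, 5.0, 0.0, 5.0, 0.0]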

Repeaters allow us to extend beyond typical distance limitations by regenerating signals.

Bridges
A bridge is a device that is smarter than a repeater. A repeater knows nothing
about the data passing through it or the destination of that data. It only knows to
regenerate a signal. A bridge, on the other hand, is informed of where data is
going, and, based on that information, can make an intelligent decision whether
or not to allow the data to go to the destination.

Bridges are able to perform their decision-making because they operate on the
Data Link layer of the OSI model. It's at this layer that network systems assemble
packets from the data off the wire and determine where the data
goes. Each device on a network has a unique physical station address. This
identification is used by devices on a network to determine how to send data to one
another. A bridge allows two networks to be connected to one another, each
having its own group of devices with unique station addresses. The bridge acts
as a traffic cop, only allowing data to pass through that is specifically bound from
one network to the other. It screens out all data that is transmitted from one
device on a network to another device on the same network.

This function is extremely important because it can significantly lower the flow of
traffic across a large network. The idea here is to simply divide the network up
into smaller networks separated by a bridge, thus allowing traffic on one segment
to be virtually unaffected by traffic on the other newly created segment. Of course
accomplishing this requires a little forethought and planning.
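
A sketch of the filtering logic, assuming a simple two-port learning bridge: it learns which port each station address lives on, forwards frames bound for the other side, floods frames to unknown destinations, and drops local-to-local traffic.

    class LearningBridge:
        def __init__(self):
            self.port_of = {}                      # station address -> port

        def handle(self, frame_src, frame_dst, arrived_on):
            self.port_of[frame_src] = arrived_on   # learn the sender's location
            dst_port = self.port_of.get(frame_dst)
            if dst_port == arrived_on:
                return "filtered (local traffic)"  # both ends on same segment
            if dst_port is None:
                return "flooded to other port"     # unknown destination
            return "forwarded to port %d" % dst_port

    bridge = LearningBridge()
    bridge.handle("aa", "bb", arrived_on=1)        # bb unknown: flooded
    bridge.handle("bb", "aa", arrived_on=1)        # bb now learned on port 1
    print(bridge.handle("aa", "bb", arrived_on=1)) # filtered (local traffic)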

Bridges can help control network traffic.

Routers
Stepping on up the OSI model, we reach the Network layer next. The Network
layer allows us to group devices together regardless of whether they share the
same physical network or not. We might, for instance, have two distinct LANs in
our accounting department, but we might group all of those users as an
accounting group by assigning each device in this area a unique logical station
address. Then we could refer to the accounting department by way of its logical
addresses.

Routers use this type of logical information to perform a very useful task. They
are able to determine the best route from a source to a destination regardless of
what lies in between. An example would be sending information across the
Internet. This huge global network is laden with routers. As we begin sending
information over the Internet, each packet is individually directed to the
destination. Each time a packet goes through a router, this device attempts to
find the best path to send it on closer to its destination. The result is a very
dynamic network that can speed data along, identifying best paths based on
traffic loads and functioning pathways.

Routers may serve as boundaries to distinguish networks. Here the router at Network A
would choose Path A to send data to Network D because it requires the smallest number
of hops (trips through other routers). In fact there are no other routers between Networks
A and D.

The methods for determining the best route are many and varied. Modern routers
usually incorporate a number of factors in trying to determine this type of
information. This is necessary because basing a decision on only one factor may
prove inefficient. For instance, let's say we are basing our best path decision on
selecting the segments along the way with the fastest data throughput. We may
end up going through dozens of segments before we reach our destination, thus
eliminating our segment speed advantage. Plus, the routers may have selected
costly wide area network links, so our packets arrive slowly and our money
departs quickly. If we were to choose the best path according to the number of
routing devices a packet has to travel through (called hops), we might end up
choosing slow or, once again, costly pathways. For these reasons, many routers
make a best path decision based on a number of factors, some of which can be
weighted subjectively by an administrator.
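
Weighted best-path selection can be sketched with Dijkstra's shortest-path algorithm over a composite link cost. The weights and cost formula below are hypothetical, purely to illustrate how hops, speed and money might be blended:

    import heapq

    def link_cost(hops=1, mbps=10.0, dollars=0.0, w=(1.0, 10.0, 0.5)):
        # Composite metric: fewer hops, faster links, cheaper circuits win.
        return w[0] * hops + w[1] / mbps + w[2] * dollars

    def best_path(links, src, dst):
        # links: {node: [(neighbor, cost), ...]} with pre-blended costs.
        dist, prev = {src: 0}, {}
        heap = [(0, src)]
        while heap:
            d, node = heapq.heappop(heap)
            if node == dst:
                break
            if d > dist.get(node, float("inf")):
                continue
            for nbr, cost in links.get(node, []):
                nd = d + cost
                if nd < dist.get(nbr, float("inf")):
                    dist[nbr], prev[nbr] = nd, node
                    heapq.heappush(heap, (nd, nbr))
        path = [dst]
        while path[-1] != src:
            path.append(prev[path[-1]])
        return list(reversed(path)), dist[dst]

    # Direct link A-D is slow and costly; three fast hops win instead.
    net = {"A": [("B", link_cost(mbps=10)), ("D", link_cost(mbps=1.5, dollars=4))],
           "B": [("C", link_cost(mbps=10))],
           "C": [("D", link_cost(mbps=10))]}
    print(best_path(net, "A", "D"))   # (['A', 'B', 'C', 'D'], 6.0)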

Gateways
We have established that repeaters work on the Physical layer of the OSI model,
while bridges function on the Data Link layer and routers on the Network layer.
Devices that function at these layers and above to allow interconnection between
different network types require a fair amount of sophistication. The changes
necessary to create a mainframe-bound message from a PC-based NetWare
LAN are significant. The data that is used in the PC world is encoded in a format
known as ASCII. IBM host computers use data encoded into a format known as
EBCDIC. To switch from one format to another involves the complete
restructuring of data. Another thing to consider is that primarily keystroke and
screen data are often transmitted along mainframe or minicomputer networks.
PC networks can send whole programs and data files, not just terminal data.

Gateways enable such diverse systems as PC LANs and mainframe networks to
communicate.

The sophisticated device required to bridge these two very different
environments together is called a gateway. Gateways are unique in that they
have the capability of functioning on any level of the OSI model, whatever is
necessary to bring together the vastly dissimilar networks. When you purchase a
gateway, it is with a certain connection in mind. You might buy one for NetWare
and IBM's SNA connections, AppleTalk to DECnet, etc.

Gateways are available in both external and internal models much in the same
way that modems are available. External boxes containing the gateway's
components tend to be a bit more reliable than their internal plug-in card cousins.
Software usually accompanies a gateway, and these devices may be singular in
their operation (dedicated) or be multi-functional (non-dedicated).

Analog and Digital Signals

The term "analog" comes from the word "analogous" meaning something is
similar to something else. It is used to describe devices that turn the movement
or condition of a natural event into similar electronic or mechanical signals. There
are numerous examples, but let's look at a couple.

A non-digital watch contains a movement that is constantly active in order to
display time, which is also constantly active. Our time is measured in ranges of
hours, minutes, seconds, months, years, etc. The display of a watch constantly
tracks time within these ranges. In effect the data represented on a watch may
have any number of values within a fairly large range. The watch's movement is
analogous to the movement of time. In this respect the data produced is analog
data.

Another prime example of an analog device is a non-digital thermometer
measuring a constantly changing temperature. The action is continuous and the
range is not very limited, though sometimes we wish it were. The data produced
by a thermometer is analogous to the change in temperature. Therefore, it is an
analog signal.

Digital signals, on the other hand, are distinctively different. Digital signals don't
have large ranges, nor do they reflect constant activity. Digital signals have very
few values. Each signal is distinct from the previous digital value and distinct from
the one to come. In effect, a digital signal is a snapshot of a condition and does not
represent continual movement.

Of course the most obvious example of digital data is that communicated on-
board a computer. Since a computer's memory is simply a series of switches that
can either be on or off, digital data directly represents one of these two
conditions. We typically represent this on and off status with 1s and 0s where 1
represents an "on" bit and 0 represents "off".

Analog data, by its nature, more closely captures the essence of natural
phenomena, with their action and subtlety. Digital data can only attempt to capture
natural phenomena by "sampling" them at distinct intervals, creating a digital
representation composed of 1s and 0s. Obviously, if the interval between
samples is too large, the digital representation less accurately represents the
phenomenon. If the sampling occurs at too short of an interval, then an inordinate
amount of digital resources may be utilized to capture the phenomenon. The
changes involved may not be significant enough to warrant so frequent a
sampling for accuracy's sake. To digitally represent sound authentically, a
sample must be taken over 44,100 times per second.
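
The trade-off is visible by sampling a simple waveform at two rates. A minimal sketch using only the standard library (the 1 kHz tone and the rates are arbitrary choices):

    import math

    def sample(rate_hz, seconds=0.001, tone_hz=1000.0):
        # Take a digital snapshot of the analog tone every 1/rate_hz seconds.
        n = int(rate_hz * seconds)
        return [math.sin(2 * math.pi * tone_hz * i / rate_hz) for i in range(n)]

    coarse = sample(rate_hz=2000)    # 2 samples per cycle: barely recoverable
    fine = sample(rate_hz=44100)     # CD-quality rate: ~44 samples per cycle
    print(len(coarse), len(fine))    # 2 vs 44 samples for the same millisecond

At 16 bits per sample and two stereo channels, the 44,100 Hz rate works out to roughly 1.4 million bits for each second of sound, which is where figures like the one quoted in the next paragraph come from.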

A reference to digital resources would certainly include digital storage media. In
terms of storage, digital samples of natural phenomena, or encodings of analog
signals from such phenomena, generally require a significant amount of
recording media (i.e., disk space). To record a second of authentic sound, 1.5
million bits of storage is required. Analog signals don't require such great storage
capacity, but they do suffer in the area of duplication.

When copying an analog signal from one generation to another, deterioration of
the original signal occurs. A prime example is when we copy a videotape. Since
video recorders are analog machines, copying a tape several times results in the
accumulation of unwanted analog values called "noise". Eventually these signals
become so evident, that the original analog signal is compromised and the video
"dub" suffers from intense graininess and poor audio sound. Our technology is
limited in the transmission and duplication of analog signals because of the
infinite number of values that are allowable.

Digital signals, however, have basically two values. It is much easier to work with
two values rather than an infinite number. Consequently our current level of
technology allows us to maintain the original quality of a digital signal. With a
value of "on" or "off", it's pretty heard to miss.

Converting and Translating Data

Converting analog to digital data, or vice versa, requires special machinery.
These devices must be able to capture, through sampling, the continuous
movement of naturally occurring phenomena as well as reproduce an authentic
representation of natural events from digital snapshots. The latter involves the
conversion of digital data (1s and 0s) to analog data (like sound).

Converting Signal Types

Perhaps the most common device associated with signal conversion today is the
modem. A modem receives digital data and converts it to an analog form for
transmission over a medium, most typically a phone line. Modem is a shortened
form of MOdulator-DEModulator, which means that the device is involved in both
creating analog signals from digital data and changing analog data back to digital
data (demodulating). Here's how it works:

1. A modem receives its signal from a computer, also known as a DTE (Data
Terminal Equipment).

2. The digital signal is used to modulate an analog carrier signal by either
frequency-shift keying or phase-shift keying.

3. The analog signal travels over telephone lines or another medium. Remember,
analog signals can be broadcast farther without attenuation problems.

4. The analog data is detected by another modem which receives and decodes
the data on the analog signal.

5. A digital signal is generated by the modem and transmitted to the DTE. (A
toy sketch of this round trip appears below.)
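
Here is a toy round trip through those five steps, using frequency-shift keying. The tone frequencies are borrowed from the Bell 103 convention, but this is a simplified sketch, not a real modem implementation:

    import math

    MARK, SPACE = 1270.0, 1070.0        # Hz; Bell-103-style originate tones
    RATE = 8000                         # samples per second
    SAMPLES_PER_BIT = RATE // 300       # 300 bits per second

    def modulate(bits):
        wave = []
        for bit in bits:
            freq = MARK if bit else SPACE     # each bit selects a carrier tone
            for i in range(SAMPLES_PER_BIT):
                wave.append(math.sin(2 * math.pi * freq * i / RATE))
        return wave

    def demodulate(wave):
        bits = []
        for start in range(0, len(wave), SAMPLES_PER_BIT):
            chunk = wave[start:start + SAMPLES_PER_BIT]
            # Correlate against each tone and pick the stronger match.
            score = {}
            for name, freq in (("mark", MARK), ("space", SPACE)):
                score[name] = abs(sum(s * math.sin(2 * math.pi * freq * i / RATE)
                                      for i, s in enumerate(chunk)))
            bits.append(1 if score["mark"] > score["space"] else 0)
        return bits

    data = [1, 0, 1, 1, 0, 0, 1]
    print(demodulate(modulate(data)) == data)   # True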

Multiple Signal Transmission Schemes


Networks require us to jump through some hoops if we are going to
accommodate multiple signals utilizing a single piece of cabling. This need is
seen throughout networking whether we are talking about local area networks or
wide area ones.

Modern telephone systems must place a large number of calls over a limited
amount of bandwidth (i.e. a trunk). Broadband LANs must have several different
types of data on a single wire at once. These are examples where "multiplexing"
must take place. Multiplexing is the process of putting data from several different
sources on the same wire, or, in some cases, putting a large amount of data from
a single source on several smaller bandwidth wires. There are several different
ways that multiplexing can be accomplished. We'll look at a couple of them.

1. Time-Division Multiplexing (TDM)


TDM is used both in networking and phone systems. It is a process whereby
several slower speed signals are divided up and placed on a high speed
transmission channel. A multiplexer (also called a "MUX") actually selects which
source data will be sent at what amount and places that chunk of data on the
wire. It then selects a different source and takes a portion of its data and places it
on the wire next. In this manner several "samplings" from several sources can be
interleaved on the high-speed communications channel. This can be
accomplished because the individual sources are sending their data at a
relatively slow speed (e.g., 300 baud), while the outgoing channel has significant
speed to accommodate a sampling from each source (e.g., 1200 baud). When the
data reaches its destination, another multiplexer disassembles the combination
data and places each chunk of data on an appropriate channel to its destination,
once again at the slower speed at which it entered the original MUX. Figure 5.5
illustrates the concept of time-division multiplexing.
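
Byte-level interleaving can be sketched in a few lines. The sketch assumes three equally fast sources feeding one channel; notice that the receiving MUX demultiplexes purely by position, which is why the two ends must stay synchronized:

    def multiplex(sources):
        # Round-robin one byte from each source per time slot (byte interleaving).
        frames = []
        for slot in zip(*sources):
            frames.extend(slot)
        return frames

    def demultiplex(channel, n_sources):
        # The receiving MUX relies purely on position: slot i belongs to source i.
        return [channel[i::n_sources] for i in range(n_sources)]

    a, b, c = list(b"AAAA"), list(b"BBBB"), list(b"CCCC")
    line = multiplex([a, b, c])                      # interleaved: A B C A B C ...
    print(bytes(line))                               # b'ABCABCABCABC'
    print([bytes(s) for s in demultiplex(line, 3)])  # original streams restored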

This same technology is used by phone service providers who must grapple with
the task of getting a large number of conversations over limited numbers of wires
contained in trunks. If the conversations are broken up and put back together fast
enough, no one notices it. For this reason, high speed trunks use time-division
multiplexing to carry several conversations at once - and no one is the wiser.

Sampling of data from several sources may take place on the bit,
byte or block level. When only a bit from each source is placed on the wire, we
call it "bit interleaving". When a byte is sampled and then placed on a wire with
other sampled bytes from other sources, we call it "word interleaving".

MUXs, at both ends of a high-speed link, must synchronize with one another so
that the time required for each sampling matches. Otherwise, the demultiplexer
would not be able to determine which source signal goes with what destination
channel. Timing is obviously an extremely important element to a time-based
methodology like TDM.
Time-Division Multiplexing allows several devices to share a single medium via
interleaving.

One disadvantage of multiplexers that use TDM is that they allocate time for a
source's data even if the source is not currently sending any. This is a waste of
resources. Special MUXs have been created that only make slots for sources
when those sources need to send data. This type of multiplexer must tell the
MUX at the other end of the link whose data is being sent in each slot.

TDM can be used on baseband networks. If you recall, baseband networks only
carry one kind of data - digital. Digital data is susceptible to attenuation and
interference. Fortunately, digital signals can be used with repeaters that actually
regenerate the digital signal and rebroadcast it at a higher level.

Broadband systems may also use TDM for a particular frequency. The
frequencies on a broadband network are many and varied. They are the product
of another type of multiplexing called Frequency-Division Multiplexing (FDM).

2. Frequency-Division Multiplexing (FDM)

FDM allows us to take signals from various sources and place them on a single
wire by giving each signal its own frequency. The total bandwidth of the entire
cable can be divided up into several smaller bandwidths. These are analog
signals that carry data.

The information carried by the analog "carrier" may be encoded using any of the
analog encoding methods. Each individual signal source must be routed through
a modem. The modem takes the digital data and uses it to modulate an analog
signal at a unique frequency. A modem with a different frequency is required for
each signal source. A modem must be on the receiving end as well, listening for
a unique carrier frequency from the sender.
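
FDM can be illustrated numerically: each source keys its own carrier frequency, the carriers are summed onto one wire, and each receiver recovers its channel by correlating against its assigned frequency alone. A simplified amplitude-keyed sketch (the frequencies and rates are arbitrary):

    import math

    RATE = 8000                                  # samples per second
    CARRIERS = {"chan1": 500.0, "chan2": 1200.0} # one frequency per source

    def transmit(levels):                        # levels: {channel: 0 or 1}
        # Sum every channel's carrier onto the shared wire (composite signal).
        return [sum(levels[ch] * math.sin(2 * math.pi * f * i / RATE)
                    for ch, f in CARRIERS.items())
                for i in range(RATE // 100)]     # 10 ms of line signal

    def receive(wire, channel):
        # Correlate the composite signal against this channel's carrier only.
        f = CARRIERS[channel]
        energy = abs(sum(s * math.sin(2 * math.pi * f * i / RATE)
                         for i, s in enumerate(wire)))
        return 1 if energy > len(wire) / 4 else 0

    wire = transmit({"chan1": 1, "chan2": 0})
    print(receive(wire, "chan1"), receive(wire, "chan2"))   # 1 0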

Frequency-Division Multiplexing is used to allow multiple channels of data to
share a common wire (broadband networks).

FDM may also be utilized by phone companies who wish to maximize their usage
of a limited amount of cable. As mentioned in an earlier chapter, the phone
companies typically allow about 4 kHz of bandwidth for calls after filtering.

Broadband networks use technology similar to that of cable TV companies in
placing several channels of data on a cable at once. Broadband systems use the
different frequencies to separate directional traffic and provide special services.
Both analog devices and digital devices can use a broadband network, but only
analog signals are carried on the wire.
