
Q.1:- What is computer networking?

ANS: - Users and network administrators often have different
views of their networks. Often, users share printers and
some servers form a workgroup, which usually means they
are in the same geographic location and are on the same
LAN. A community of interest has less of a connotation of
being in a local area, and should be thought of as a set of
arbitrarily located users who share a set of servers, and
possibly also communicate via peer-to-peer technologies.

Network administrators see networks from both physical and
logical perspectives. The physical perspective involves
geographic locations, physical cabling, and the network
elements (e.g., routers, bridges and application layer
gateways) that interconnect the physical media. Logical
networks, called, in the TCP/IP architecture, subnets, map
onto one or more physical media. For example, a common
practice in a campus of buildings is to make a set of LAN
cables in each building appear to be a common subnet,
using virtual LAN (VLAN) technology.

Both users and administrators will be aware, to varying
extents, of the trust and scope characteristics of a network.
Again using TCP/IP architectural terminology, an intranet is
a community of interest under private administration usually
by an enterprise, and is only accessible by authorized users
(e.g. employees). Intranets do not have to be connected to
the Internet, but generally have a limited connection. An
extranet is an extension of an intranet that allows secure
communications to users outside of the intranet (e.g.
business partners, customers).

Informally, the Internet is the set of users, enterprises, and
content providers that are interconnected by Internet Service
Providers (ISP). From an engineering standpoint, the
Internet is the set of subnets, and aggregates of subnets,
which share the registered IP address space and exchange
information about the reachability of those IP addresses
using the Border Gateway Protocol. Typically, the human-
readable names of servers are translated to IP addresses,
transparently to users, via the directory function of the
Domain Name System (DNS).
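
This directory lookup can be seen with a single call from Python's
standard library; the hostname below is just an illustrative example:

import socket

hostname = "www.example.com"            # illustrative server name
print(hostname, "->", socket.gethostbyname(hostname))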

Over the Internet, there can be business-to-business (B2B),
business-to-consumer (B2C) and consumer-to-consumer
(C2C) communications. Especially when money or sensitive
information is exchanged, the communications are apt to be
secured by some form of communications security
mechanism. Intranets and extranets can be securely
superimposed onto the Internet, without any access by
general Internet users, using secure Virtual Private Network
(VPN) technology.

When used for gaming, one computer will have to be the
server while the others play through it.

History

Before the advent of computer networks that were based
upon some type of telecommunications system,
communication between calculation machines and early
computers was performed by human users by carrying
instructions between them. Much of the social behavior seen
in today's Internet was demonstrably present in nineteenth-
century telegraph networks, and arguably in even earlier
networks using visual signals.
In September 1940 George Stibitz used a teletype machine
to send instructions for a problem set from his Model K at
Dartmouth College in New Hampshire to his Complex
Number Calculator in New York and received results back by
the same means. Linking output systems like teletypes to
computers was an interest at the Advanced Research
Projects Agency (ARPA) when, in 1962, J.C.R. Licklider was
hired and developed a working group he called the
"Intergalactic Network", a precursor to the ARPANet.

In 1964, researchers at Dartmouth developed the Dartmouth
Time Sharing System for distributed users of large computer
systems. The same year, at MIT, a research group
supported by General Electric and Bell Labs used a
computer (DEC's PDP-8) to route and manage telephone
connections.

Throughout the 1960s Leonard Kleinrock, Paul Baran and
Donald Davies independently conceptualized and developed
network systems which used datagrams or packets that
could be used in a packet switched network between
computer systems.

In 1965, Thomas Merrill and Lawrence G. Roberts created the first wide area network (WAN).

The first widely used PSTN switch that used true computer
control was the Western Electric 1ESS switch, introduced in
1965.

In 1969 the University of California at Los Angeles, SRI (Stanford Research Institute), the University of California at Santa Barbara, and the
University of Utah were connected as the beginning of the
ARPANet network using 50 kbit/s circuits. Commercial
services using X.25, an alternative architecture to the
TCP/IP suite, were deployed in 1972.
Computer networks, and the technologies needed to connect
and communicate through and between them, continue to
drive computer hardware, software, and peripherals
industries. This expansion is mirrored by growth in the
numbers and types of users of networks from the researcher
to the home user.

Today, computer networks are the core of modern
communication. For example, all modern aspects of the
Public Switched Telephone Network (PSTN) are computer-
controlled, and telephony increasingly runs over the Internet
Protocol, although not necessarily the public Internet. The
scope of communication has increased significantly in the
past decade and this boom in communications would not
have been possible without the progressively advancing
computer network.

Networking methods

Networking is a complex part of computing that makes up
most of the IT Industry. Without networks, almost all
communication in the world would cease to happen. It is
because of networking that telephones, televisions, the
internet, etc. work.

One way to categorize computer networks is by their
geographic scope, although many real-world networks
interconnect Local Area Networks (LAN) via Wide Area
Networks (WAN) and wireless networks (WWAN). These
three (broad) types are:

Local area network (LAN)

A local area network is a network that spans a relatively
small space and provides services to a small number of
people.
A peer-to-peer or client-server method of networking may be
used. A peer-to-peer network is where each client shares
their resources with other workstations in the network.
Examples of peer-to-peer networks are: Small office
networks where resource use is minimal and a home
network. A client-server network is where every client is
connected to the server and each other. Client-server
networks use servers in different capacities. These can be
classified into two types:

1. Single-service servers, where the server performs one task such as file server or print server.
2. Multi-purpose servers, which not only perform in the capacity of file servers and print servers but also conduct calculations and use these to provide information to clients (Web/Intranet Server).

Computers may be connected in
many different ways, including Ethernet cables, Wireless
networks, or other types of wires such as power lines or
phone lines.

The ITU-T G.hn standard is an example of a technology that
provides high-speed (up to 1 Gbit/s) local area networking
over existing home wiring (power lines, phone lines and
coaxial cables).

Wide area network (WAN)

A wide area network is a network where a wide variety of
resources are deployed across a large domestic area or
internationally. An example of this is a multinational business
that uses a WAN to interconnect their offices in different
countries. The largest and best example of a WAN is the
Internet, which is a network composed of many smaller
networks. The Internet is considered the largest network in
the world. The PSTN (Public Switched Telephone Network)
also is an extremely large network that is converging to use
Internet technologies, although not necessarily through the
public Internet.

A Wide Area Network involves communication through the
use of a wide range of different technologies. These
technologies include Point-to-Point WANs such as Point-to-
Point Protocol (PPP) and High-Level Data Link Control
(HDLC), Frame Relay, ATM (Asynchronous Transfer Mode)
and SONET (Synchronous Optical Network). The differences
between WAN technologies lie in the switching capabilities
they provide and the speeds at which bits of information
(data) are sent and received.

Metropolitan Area Network (MAN)

A metropolitan area network is a network that is too large for
even the largest of LANs but is not on the scale of a WAN. It
integrates two or more LANs over a specific geographical area
(usually a city) so as to extend the network and the flow of
communications. The LANs in question would usually be
connected via "backbone" lines.

Wireless networks (WLAN, WWAN)

A wireless network is basically the same as a LAN or a WAN
but there are no wires between hosts and servers. The data
is transferred over sets of radio transceivers. These types of
networks are beneficial when it is too costly or inconvenient
to run the necessary cables. For more information, see
Wireless LAN and Wireless wide area network. The media
access protocols for LANs come from the IEEE.

The most common IEEE 802.11 WLANs cover, depending
on antennas, ranges from hundreds of meters to a few
kilometers. For larger areas, communications satellites of
various types, cellular radio, and wireless local loop (IEEE
802.16) each have advantages and disadvantages. Depending
on the type of mobility needed, the relevant standards may
come from the IETF or the ITU.

Network topology

The network topology defines the way in which computers,
printers, and other devices are connected, physically and
logically. A network topology describes the layout of the wire
and devices as well as the paths used by data
transmissions.

Network topology has two types:

Physical
Logical

Commonly used topologies include:

Bus
Star
Tree (hierarchical)
Linear
Ring
Mesh
o partially connected
o fully connected (sometimes known as fully redundant)

The network topologies mentioned above are only a general
representation of the kinds of topologies used in computer
networks and are considered basic topologies.
Q.2:- Describe client-server computing.

ANS: - To truly understand how much of the Internet
operates, including the Web, it is important to understand the
concept of client/server computing. The client/server model
is a form of distributed computing where one program (the
client) communicates with another program (the server) for
the purpose of exchanging information.

The client's responsibility is usually to:

1. Handle the user interface.
2. Translate the user's request into the desired protocol.
3. Send the request to the server.
4. Wait for the server's response.
5. Translate the response into "human-readable" results.
6. Present the results to the user.

The server's functions include:

1. Listen for a client's query.
2. Process that query.
3. Return the results back to the client.

A typical client/server interaction goes like this:

1. The user runs client software to create a query.
2. The client connects to the server.
3. The client sends the query to the server.
4. The server analyzes the query.
5. The server computes the results of the query.
6. The server sends the results to the client.
7. The client presents the results to the user.
8. Repeat as necessary.
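
The exchange above can be sketched in a few lines of Python using the
standard socket module. This is a minimal illustration, not a production
design: the local address, port, and query are placeholders, and the
server handles exactly one connection.

import socket
import threading

HOST, PORT = "127.0.0.1", 5000           # placeholder local address

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind((HOST, PORT))
srv.listen(1)                            # server listens for a client's query

def serve_one():
    conn, _ = srv.accept()
    with conn:
        query = conn.recv(1024).decode()       # server analyzes the query
        conn.sendall(query.upper().encode())   # computes and returns results

threading.Thread(target=serve_one).start()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))            # client connects to the server
    cli.sendall(b"hello server")         # client sends the query
    print("Result:", cli.recv(1024).decode())  # client presents the results
srv.close()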

This client/server interaction is a lot like going to a French
restaurant. At the restaurant, you (the user) are presented
with a menu of choices by the waiter (the client). After
making your selections, the waiter takes note of your
choices, translates them into French, and presents them to
the French chef (the server) in the kitchen. After the chef
prepares your meal, the waiter returns with your dinner (the
results). Hopefully, the waiter returns with the items you
selected, but not always; sometimes things get "lost in the
translation."

Flexible user interface development is the most obvious
advantage of client/server computing. It is possible to create
an interface that is independent of the server hosting the
data. Therefore, the user interface of a client/server
application can be written on a Macintosh and the server can
be written on a mainframe. Clients could also be written for
DOS- or UNIX-based computers. This allows information to
be stored in a central server and disseminated to different
types of remote computers. Since the user interface is the
responsibility of the client, the server has more computing
resources to spend on analyzing queries and disseminating
information. This is another major advantage of client/server
computing; it tends to use the strengths of divergent
computing platforms to create more powerful applications.
Although its computing and storage capabilities are dwarfed
by those of the mainframe, there is no reason why a
Macintosh could not be used as a server for less demanding
applications.

In short, client/server computing provides a mechanism for
disparate computers to cooperate on a single computing
task.

Description

Client-server describes the relationship between two
computer programs in which one program, the client
program, makes a service request to another, the server
program. Standard networked functions such as email
exchange, web access and database access, are based on
the client-server model. For example, a web browser is a
client program at the user computer that may access
information at any web server in the world. To check your
bank account from your computer, a web browser client
program in your computer forwards your request to a web
server program at the bank. That program may in turn
forward the request to its own database client program that
sends a request to a database server at another bank
computer to retrieve your account balance. The balance is
returned to the bank database client, which in turn serves it
back to the web browser client in your personal computer,
which displays the information for you.

The client-server model has become one of the central ideas
of network computing. Most business applications being
written today use the client-server model. So do the
Internet's main application protocols, such as HTTP, SMTP,
Telnet, DNS, etc. In marketing, the term has been used to
distinguish distributed computing by smaller dispersed
computers from the "monolithic" centralized computing of
mainframe computers. But this distinction has largely
disappeared as mainframes and their applications have also
turned to the client-server model and become part of
network computing.

Each instance of the client software can send data requests
to one or more connected servers. In turn, the servers can
accept these requests, process them, and return the
requested information to the client. Although this concept
can be applied for a variety of reasons to many different
kinds of applications, the architecture remains fundamentally
the same.

The most basic type of client-server architecture employs
only two types of hosts: clients and servers. This type of
architecture is sometimes referred to as two-tier. It allows
devices to share files and resources. Two-tier means that the
client acts as one tier, and the application in combination
with the server acts as the other tier.

These days, clients are most often web browsers, although
that has not always been the case. Servers typically include
web servers, database servers and mail servers. Online
gaming is usually client-server too. In the specific case of
MMORPG, the servers are typically operated by the
company selling the game; for other games one of the
players will act as the host by setting his game in server
mode.

The interaction between client and server is often described
using sequence diagrams. Sequence diagrams are
standardized in the Unified Modeling Language.

When both the client and server software are running on the
same computer, this is called a single-seat setup.
Specific types of clients include web browsers, email clients,
and online chat clients.

Specific types of servers include web servers, ftp servers,
application servers, database servers, mail servers, file
servers, print servers, and terminal servers. Most web
services are also types of servers.

Comparison to Peer-to-Peer architecture

Another type of network architecture is known as peer-to-peer, because each host or instance of the program can
simultaneously act as both a client and a server, and
because each has equivalent responsibilities and status.
Peer-to-peer architectures are often abbreviated using the
acronym P2P.

Both client-server and P2P architectures are in wide usage today. You can find more details in Comparison of
Centralized (Client-Server) and Decentralized (Peer-to-Peer)
Networking. Both client-server and P2P architectures work on
Windows and Linux.

Comparison to Client-Queue-Client architecture

While classic Client-Server architecture requires one of the communication endpoints to act as a server, which is much harder to implement, Client-Queue-Client allows all
endpoints to be simple clients, while the server consists of
some external software, which also acts as passive queue
(one software instance passes its query to the queue, e.g. a
database; another instance then pulls it from the database,
makes a response, and passes it back via the database,
etc.). This architecture allows greatly simplified software
implementation. Peer-to-Peer architecture was originally
based on Client-Queue-Client concept.
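
A minimal sketch of the Client-Queue-Client idea in Python, with an
in-memory queue standing in for the external software (a database or
message broker); all names are illustrative:

import queue
import threading

# The "server" is just a passive pair of queues; both endpoints are clients.
requests, responses = queue.Queue(), queue.Queue()

def client_a():
    requests.put("what time is it?")            # post a query to the queue
    print("A received:", responses.get(timeout=5))

def client_b():
    q = requests.get(timeout=5)                 # pull the pending query
    responses.put(f"reply to {q!r}")            # post a response back

ta = threading.Thread(target=client_a)
tb = threading.Thread(target=client_b)
ta.start(); tb.start(); ta.join(); tb.join()
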
Advantages

In most cases, a client-server architecture enables the
roles and responsibilities of a computing system to be
distributed among several independent computers that
are known to each other only through a network. This
creates an additional advantage to this architecture:
greater ease of maintenance. For example, it is
possible to replace, repair, upgrade, or even relocate a
server while its clients remain both unaware and
unaffected by that change. This independence from
change is also referred to as encapsulation.
All the data is stored on the servers, which generally
have far greater security controls than most clients.
Servers can better control access and resources, to
guarantee that only those clients with the appropriate
permissions may access and change data.
Since data storage is centralized, updates to that data
are far easier to administer than what would be possible
under a P2P paradigm. Under a P2P architecture, data
updates may need to be distributed and applied to each
"peer" in the network, which is both time-consuming
and error-prone, as there can be thousands or even
millions of peers.
Many mature client-server technologies are already
available which were designed to ensure security,
'friendliness' of the user interface, and ease of use.
It functions with multiple clients of different capabilities.
Reduces the total cost of ownership.
Increases productivity, for both end users and developers.

Disadvantages
Traffic congestion on the network has been an issue
since the inception of the client-server paradigm. As the
number of simultaneous client requests to a given
server increases, the server can become severely
overloaded. Contrast that to a P2P network, where its
bandwidth actually increases as more nodes are added,
since the P2P network's overall bandwidth can be
roughly computed as the sum of the bandwidths of
every node in that network.
The client-server paradigm lacks the robustness of a
good P2P network. Under client-server, should a critical
server fail, clients' requests cannot be fulfilled. In P2P
networks, resources are usually distributed among
many nodes. Even if one or more nodes depart and
abandon a downloading file, for example, the remaining
nodes should still have the data needed to complete the
download.

Q.3. What is the Internet? Are search engines very useful on the Internet?

ANS: - The Internet is a global network of interconnected
computers, enabling users to share information along
multiple channels. Typically, a computer that connects to the
Internet can access information from a vast array of
available servers and other computers by moving
information from them to the computer's local memory. The
same connection allows that computer to send information to
servers on the network; that information is in turn accessed
and potentially modified by a variety of other interconnected
computers. A majority of widely accessible information on
the Internet consists of inter-linked hypertext documents and
other resources of the World Wide Web (WWW). Computer
users typically manage sent and received information with
web browsers; other software for users' interface with
computer networks includes specialized programs for
electronic mail, online chat, file transfer and file sharing.

The movement of information in the Internet is achieved via
a system of interconnected computer networks that share
data by packet switching using the standardized Internet
Protocol Suite (TCP/IP). It is a "network of networks" that
consists of millions of private and public, academic,
business, and government networks of local to global scope
that are linked by copper wires, fiber-optic cables, wireless
connections, and other technologies.

The terms Internet and World Wide Web are often used in
every-day speech without much distinction. However, the
Internet and the World Wide Web are not one and the same.
The Internet is a global data communications system. It is a
hardware and software infrastructure that provides
connectivity between computers. In contrast, the Web is one
of the services communicated via the Internet. It is a
collection of interconnected documents and other resources,
linked by hyperlinks and URLs.[1]

The term internet is written both with and without a capital
letter, and is used both with and without the definite article.
Growth

Figure: Internet users per 100 inhabitants between 1997 and 2007 (International Telecommunication Union)

Although the basic applications and guidelines that make the
Internet possible had existed for almost two decades, the
network did not gain a public face until the 1990s. On 6
August 1991, CERN, a pan-European organisation for
particle research, publicized the new World Wide Web
project. The Web was invented by English scientist Tim
Berners-Lee in 1989.

An early popular web browser was ViolaWWW, patterned
after HyperCard and built using the X Window System. It
was eventually replaced in popularity by the Mosaic web
browser. In 1993, the National Center for Supercomputing
Applications at the University of Illinois released version 1.0
of Mosaic, and by late 1994 there was growing public
interest in the previously academic, technical Internet. By
1996 usage of the word Internet had become commonplace,
and consequently, so had its use as a synecdoche in
reference to the World Wide Web.

Meanwhile, over the course of the decade, the Internet
successfully accommodated the majority of previously
existing public computer networks (although some networks,
such as FidoNet, have remained separate). During the
1990s, it was estimated that the Internet grew by 100% per
year, with a brief period of explosive growth in 1996 and
1997.[5] This growth is often attributed to the lack of central
administration, which allows organic growth of the network,
as well as the non-proprietary open nature of the Internet
protocols, which encourages vendor interoperability and
prevents any one company from exerting too much control
over the network.[6]

Using various statistics, AMD estimated the population of
internet users to be 1.5 billion as of January 2009.[7]

Today's Internet

Figure: The My Opera Community server rack. From the top: user file storage (content of files.myopera.com), "bigma" (the master MySQL database server), and two IBM blade centers containing multi-purpose machines (Apache front ends, Apache back ends, slave MySQL database servers, load balancers, file servers, cache servers and sync masters).

Aside from the complex physical connections that make up
its infrastructure, the Internet is facilitated by bi- or multi-
lateral commercial contracts (e.g., peering agreements), and
by technical specifications or protocols that describe how to
exchange data over the network. Indeed, the Internet is
defined by its interconnections and routing policies.

By December 31, 2008, 1.574 billion people were using the Internet, according to Internet World Statistics.

Internet protocols

The complex communications infrastructure of the Internet
consists of its hardware components and a system of
software layers that control various aspects of the
architecture. While the hardware can often be used to
support other software systems, it is the design and the
rigorous standardization process of the software architecture
that characterizes the Internet.

The responsibility for the architectural design of the Internet
software systems has been delegated to the Internet
Engineering Task Force (IETF). The IETF conducts
standard-setting work groups, open to any individual, about
the various aspects of Internet architecture. Resulting
discussions and final standards are published in Requests
for Comments (RFCs), freely available on the IETF web site.

The principal methods of networking that enable the Internet
are contained in a series of RFCs that constitute the Internet
Standards. These standards describe a system known as
the Internet Protocol Suite. This is a model architecture that
divides methods into a layered system of protocols (RFC
1122, RFC 1123). The layers correspond to the environment
or scope in which their services operate. At the top is the
space (Application Layer) of the software application, e.g., a
web browser application, and just below it is the Transport
Layer which connects applications on different hosts via the
network (e.g., client-server model). The underlying network
consists of two layers: the Internet Layer which enables
computers to connect to one-another via intermediate
(transit) networks and thus is the layer that establishes
internetworking and the Internet, and lastly, at the bottom, is
a software layer that provides connectivity between hosts on
the same local link (therefor called Link Layer), e.g., a local
area network (LAN) or a dial-up connection. This model is
also known as the TCP/IP model of networking. While other
models have been developed, such as the Open Systems
Interconnection (OSI) model, they are not compatible in the
details of description or implementation.

The most prominent component of the Internet model is the
Internet Protocol (IP) which provides addressing systems for
computers on the Internet and facilitates the internetworking
of networks. IP Version 4 (IPv4) is the initial version used on
the first generation of the today's Internet and is still in
dominant use. It was designed to address up to ~4.3 billion
(4.3×10^9) Internet hosts. However, the explosive growth of the
Internet has led to IPv4 address exhaustion. A new protocol
version, IPv6, was developed which provides vastly larger
addressing capabilities and more efficient routing of data
traffic. IPv6 is currently in commercial deployment phase
around the world.

IPv6 is not interoperable with IPv4. It essentially establishes
a "parallel" version of the Internet not accessible with IPv4
software. This means software upgrades are necessary for
every networking device that needs to communicate on the
IPv6 Internet. Most modern computer operating systems are
already converted to operate with both versions of the
Internet Protocol. Network infrastructures, however, are still
lagging in this development.
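
The difference in address space is easy to illustrate with Python's
standard ipaddress module; the two addresses below come from the
reserved documentation ranges:

import ipaddress

v4 = ipaddress.ip_address("192.0.2.1")      # documentation-range IPv4 address
v6 = ipaddress.ip_address("2001:db8::1")    # documentation-range IPv6 address
print(v4.version, v6.version)               # -> 4 6
print(f"IPv4 space: {2**32:,} addresses")   # ~4.3 billion
print(f"IPv6 space: {2**128:,} addresses")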

Internet structure

There have been many analyses of the Internet and its
structure. For example, it has been determined that both the
Internet IP routing structure and hypertext links of the World
Wide Web are examples of scale-free networks.

Similar to the way the commercial Internet providers connect
via Internet exchange points, research networks tend to
interconnect into large subnetworks such as the following:

GEANT
GLORIAD
The Internet2 Network (formerly known as the Abilene Network)
JANET (the UK's national research and education
network)

These in turn are built around relatively smaller networks.
See also the list of academic computer network
organizations.

Computer network diagrams often represent the Internet
using a cloud symbol from which network communications
pass in and out.

E-mail

The concept of sending electronic text messages between
parties in a way analogous to mailing letters or memos
predates the creation of the Internet. Even today it can be
important to distinguish between Internet and internal e-mail
systems. Internet e-mail may travel and be stored
unencrypted on many other networks and machines out of
both the sender's and the recipient's control. During this time
it is quite possible for the content to be read and even
tampered with by third parties, if anyone considers it
important enough. Purely internal or intranet mail systems,
where the information never leaves the corporate or
organization's network, are much more secure, although in
any organization there will be IT and other personnel whose
job may involve monitoring, and occasionally accessing, the
e-mail of other employees not addressed to them. Today you
can send pictures and attach files on e-mail. Most e-mail
servers today also feature the ability to send e-mail to
multiple e-mail addresses.
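
As a hedged sketch, sending a picture as an attachment takes only a few
lines with Python's standard smtplib and email modules; the addresses,
mail server, and file name are all placeholders:

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"           # placeholder sender
msg["To"] = "bob@example.com"               # placeholder recipient
msg["Subject"] = "Holiday photo"
msg.set_content("Picture attached.")
with open("photo.jpg", "rb") as f:          # placeholder attachment
    msg.add_attachment(f.read(), maintype="image",
                       subtype="jpeg", filename="photo.jpg")
with smtplib.SMTP("mail.example.com") as smtp:  # placeholder mail server
    smtp.send_message(msg)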

The World Wide Web

Figure: Graphic representation of a minute fraction of the WWW, demonstrating hyperlinks

Many people use the terms Internet and World Wide Web (or
just the Web) interchangeably, but, as discussed above, the
two terms are not synonymous.
The World Wide Web is a huge set of interlinked documents,
images and other resources, linked by hyperlinks and URLs.
These hyperlinks and URLs allow the web servers and other
machines that store originals, and cached copies of, these
resources to deliver them as required using HTTP (Hypertext
Transfer Protocol). HTTP is only one of the communication
protocols used on the Internet.
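
A minimal sketch of an HTTP request with Python's standard library
(the URL is illustrative; a real client would also handle errors and
redirects):

from urllib.request import urlopen

# Fetch a web resource over HTTP, just as a browser does.
with urlopen("http://www.example.com/") as response:
    html = response.read().decode("utf-8", errors="replace")
    print(response.status, "-", len(html), "characters of HTML")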

Web services also use HTTP to allow software systems to
communicate in order to share and exchange business logic
and data.

Software products that can access the resources of the Web
are correctly termed user agents. In normal use, web
browsers, such as Internet Explorer, Firefox and Apple
Safari, access web pages and allow users to navigate from
one to another via hyperlinks. Web documents may contain
almost any combination of computer data including graphics,
sounds, text, video, multimedia and interactive content
including games, office applications and scientific
demonstrations.

Through keyword-driven Internet research using search
engines like Yahoo! and Google, millions of people
worldwide have easy, instant access to a vast and diverse
amount of online information. Compared to encyclopedias
and traditional libraries, the World Wide Web has enabled a
sudden and extreme decentralization of information and
data.

Using the Web, it is also easier than ever before for
individuals and organizations to publish ideas and
information to an extremely large audience. Anyone can find
ways to publish a web page, a blog or build a website for
very little initial cost. Publishing and maintaining large,
professional websites full of attractive, diverse and up-to-
date information is still a difficult and expensive proposition,
however.

Many individuals and some companies and groups use "web
logs" or blogs, which are largely used as easily updatable
online diaries. Some commercial organizations encourage
staff to fill them with advice on their areas of specialization in
the hope that visitors will be impressed by the expert
knowledge and free information, and be attracted to the
corporation as a result. One example of this practice is
Microsoft, whose product developers publish their personal
blogs in order to pique the public's interest in their work.

Collections of personal web pages published by large
service providers remain popular, and have become
increasingly sophisticated. Whereas operations such as
Angelfire and GeoCities have existed since the early days of
the Web, newer offerings from, for example, Facebook and
MySpace currently have large followings. These operations
often brand themselves as social network services rather
than simply as web page hosts.

Advertising on popular web pages can be lucrative, and e-
commerce or the sale of products and services directly via
the Web continues to grow.

In the early days, web pages were usually created as sets of
complete and isolated HTML text files stored on a web
server. More recently, websites are more often created using
content management or wiki software with, initially, very little
content. Contributors to these systems, who may be paid
staff, members of a club or other organisation or members of
the public, fill underlying databases with content using
editing pages designed for that purpose, while casual visitors
view and read this content in its final HTML form. There may
or may not be editorial, approval and security systems built
into the process of taking newly entered content and making
it available to the target visitors.

Remote access

The Internet allows computer users to connect to other
computers and information stores easily, wherever they may
be across the world. They may do this with or without the
use of security, authentication and encryption technologies,
depending on the requirements.

This is encouraging new ways of working from home,
collaboration and information sharing in many industries. An
accountant sitting at home can audit the books of a company
based in another country, on a server situated in a third
country that is remotely maintained by IT specialists in a
fourth. These accounts could have been created by home-
working bookkeepers, in other remote locations, based on
information e-mailed to them from offices all over the world.
Some of these things were possible before the widespread
use of the Internet, but the cost of private leased lines would
have made many of them infeasible in practice.

An office worker away from his desk, perhaps on the other
side of the world on a business trip or a holiday, can open a
remote desktop session into his normal office PC using a
secure Virtual Private Network (VPN) connection via the
Internet. This gives the worker complete access to all of his
or her normal files and data, including e-mail and other
applications, while away from the office.

This concept is also referred to by some network security
people as the Virtual Private Nightmare, because it extends
the secure perimeter of a corporate network into its
employees' homes.
Collaboration

The low cost and nearly instantaneous sharing of ideas,
knowledge, and skills has made collaborative work
dramatically easier. Not only can a group cheaply
communicate and share ideas, but the wide reach of the
Internet allows such groups to easily form in the first place.
An example of this is the free software movement, which has
produced Linux, Mozilla Firefox, OpenOffice.org etc.

Internet "chat", whether in the form of IRC chat rooms or


channels, or via instant messaging systems, allow
colleagues to stay in touch in a very convenient way when
working at their computers during the day. Messages can be
exchanged even more quickly and conveniently than via e-
mail. Extensions to these systems may allow files to be
exchanged, "whiteboard" drawings to be shared or voice and
video contact between team members.

Version control systems allow collaborating teams to work
on shared sets of documents without either accidentally
overwriting each other's work or having members wait until
they get "sent" documents to be able to make their
contributions.

Business and project teams can share calendars as well as
documents and other information. Such collaboration occurs
in a wide variety of areas including scientific research,
software development, conference planning, political
activism and creative writing.

File sharing

A computer file can be e-mailed to customers, colleagues
and friends as an attachment. It can be uploaded to a
website or FTP server for easy download by others. It can be
put into a "shared location" or onto a file server for instant
use by colleagues. The load of bulk downloads to many
users can be eased by the use of "mirror" servers or peer-to-
peer networks.

In any of these cases, access to the file may be controlled by
user authentication, the transit of the file over the Internet
may be obscured by encryption, and money may change
hands for access to the file. The price can be paid by the
remote charging of funds from, for example, a credit card
whose details are also passed, hopefully fully encrypted,
across the Internet. The origin and authenticity of the file
received may be checked by digital signatures or by MD5 or
other message digests.
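
Computing such a digest is straightforward with Python's standard
hashlib module; the file name and expected digest below are
placeholders:

import hashlib

expected = "9e107d9d372bb6826bd81d3542a419d6"   # placeholder published digest
md5 = hashlib.md5()
with open("download.bin", "rb") as f:           # placeholder downloaded file
    for chunk in iter(lambda: f.read(8192), b""):
        md5.update(chunk)
print("OK" if md5.hexdigest() == expected else "digest mismatch!")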

These simple features of the Internet, on a worldwide basis,
are changing the production, sale, and distribution of
anything that can be reduced to a computer file for
transmission. This includes all manner of print publications,
software products, news, music, film, video, photography,
graphics and the other arts. This in turn has caused seismic
shifts in each of the existing industries that previously
controlled the production and distribution of these products.

Streaming media

Many existing radio and television broadcasters provide
Internet "feeds" of their live audio and video streams (for
example, the BBC). They may also allow time-shift viewing
or listening such as Preview, Classic Clips and Listen Again
features. These providers have been joined by a range of
pure Internet "broadcasters" who never had on-air licenses.
This means that an Internet-connected device, such as a
computer or something more specific, can be used to access
on-line media in much the same way as was previously
possible only with a television or radio receiver. The range of
material is much wider, from pornography to highly
specialized, technical webcasts. Podcasting is a variation on
this theme, where, usually, audio material is downloaded
and played back on a computer or shifted to a portable
media player to be listened to on the move. These
techniques using simple equipment allow anybody, with little
censorship or licensing control, to broadcast audio-visual
material on a worldwide basis.

Webcams can be seen as an even lower-budget extension
of this phenomenon. While some webcams can give full-
frame-rate video, the picture is usually either small or
updates slowly. Internet users can watch animals around an
African waterhole, ships in the Panama Canal, traffic at a
local roundabout or monitor their own premises, live and in
real time. Video chat rooms and video conferencing are also
popular with many uses being found for personal webcams,
with and without two-way sound.

YouTube was founded on 15 February 2005 and is now the
leading website for free streaming video with a vast number
of users. It uses a Flash-based web player to stream and
show the video files. Users are able to watch videos without
signing up; however, if they do sign up, they are able to
upload an unlimited amount of videos and build their own
personal profile. YouTube claims that its users watch
hundreds of millions, and upload hundreds of thousands, of
videos daily.

Internet Telephony (VoIP)

VoIP stands for Voice-over-Internet Protocol, referring to the
protocol that underlies all Internet communication. The idea
began in the early 1990s with walkie-talkie-like voice
applications for personal computers. In recent years many
VoIP systems have become as easy to use and as
convenient as a normal telephone. The benefit is that, as the
Internet carries the voice traffic, VoIP can be free or cost
much less than a traditional telephone call, especially over
long distances and especially for those with always-on
Internet connections such as cable or ADSL.

VoIP is maturing into a competitive alternative to traditional
telephone service. Interoperability between different
providers has improved and the ability to call or receive a
call from a traditional telephone is available. Simple,
inexpensive VoIP network adapters are available that
eliminate the need for a personal computer.

Voice quality can still vary from call to call but is often equal
to and can even exceed that of traditional calls.

Remaining problems for VoIP include emergency telephone
number dialling and reliability. Currently, a few VoIP
providers provide an emergency service, but it is not
universally available. Traditional phones are line-powered
and operate during a power failure; VoIP does not do so
without a backup power source for the phone equipment and
the Internet access devices.

VoIP has also become increasingly popular for gaming
applications, as a form of communication between players.
Popular VoIP clients for gaming include Ventrilo and
Teamspeak, among others. PlayStation 3 and Xbox 360 also
offer VoIP chat features.

Internet access

Common methods of home access include dial-up, landline
broadband (over coaxial cable, fiber optic or copper wires),
Wi-Fi, satellite and 3G technology cell phones.
Public places to use the Internet include libraries and
Internet cafes, where computers with Internet connections
are available. There are also Internet access points in many
public places such as airport halls and coffee shops, in some
cases just for brief use while standing. Various terms are
used, such as "public Internet kiosk", "public access
terminal", and "Web payphone". Many hotels now also have
public terminals, though these are usually fee-based. These
terminals are widely used for purposes such as ticket
booking, bank deposits, and online payments. Wi-Fi provides
wireless access to computer networks, and therefore can do
so to the Internet itself. Hotspots providing such access
include Wi-Fi cafes, where would-be users need to bring
their own wireless-enabled devices such as a laptop or PDA.
These services may be free to all, free to customers only, or
fee-based. A hotspot need not be limited to a confined
location. A whole campus or park, or even an entire city can
be enabled. Grassroots efforts have led to wireless
community networks. Commercial Wi-Fi services covering
large city areas are in place in London, Vienna, Toronto, San
Francisco, Philadelphia, Chicago and Pittsburgh. The
Internet can then be accessed from such places as a park
bench.[14]

Apart from Wi-Fi, there have been experiments with
proprietary mobile wireless networks like Ricochet, various
high-speed data services over cellular phone networks, and
fixed wireless services.

High-end mobile phones such as smartphones generally
come with Internet access through the phone network. Web
browsers such as Opera are available on these advanced
handsets, which can also run a wide variety of other Internet
software. More mobile phones have Internet access than
PCs, though this is not as widely used.

Market

The Internet has also become a large market for companies;
some of the biggest companies today have grown by taking
advantage of the efficient nature of low-cost advertising and
commerce through the Internet, also known as e-commerce.
It is the fastest way to spread information to a vast number
of people simultaneously. The Internet has also
revolutionized shopping; for example, a
person can order a CD online and receive it in the mail within
a couple of days, or download it directly in some cases. The
Internet has also greatly facilitated personalized marketing
which allows a company to market a product to a specific
person or a specific group of people more so than any other
advertising medium.

Examples of personalized marketing include online
communities such as MySpace, Friendster, Orkut, Facebook
and others which thousands of Internet users join to
advertise themselves and make friends online. Many of
these users are young teens and adolescents ranging from
13 to 25 years old. In turn, when they advertise themselves
they advertise interests and hobbies, which online marketing
companies can use as information as to what those users
will purchase online, and advertise their own companies'
products to those users.

Q.4. Explain in brief Transmission Media.

ANS: - Transmission media comprise the different types of
cables and wireless techniques that are used to connect
network devices in a Local Area Network (LAN), Wireless
Local Area Network (WLAN) or Wide Area Network (WAN).
Choosing the correct type of transmission media is very
important for the implementation of any network. It can make
a major impact on the performance, speed, cost and
reliability of the network.

Copper Wires

Conventional computer networks use copper wire because it
is inexpensive, easy to install, and has low resistance to
electrical current. Unfortunately, copper wire is prone to
interference in the form of electromagnetic energy emitted by
neighbouring wires, especially those running in parallel.

To minimise interference, twisted pair wiring, as used in
telephone systems, can be used as illustrated in Figure 1.

Figure 1: Twisted pair wiring

A plastic coating on each wire prevents the copper in one
wire from touching the copper in another. The twist helps
reduce interference by preventing electrical signals on the
wire radiating energy (causing interference) and by
preventing signals on other wires interfering with the pair.

A second type of copper wire is coaxial cable, similar to that
used for TV aerials. The coaxial cable provides better
protection from interference by providing a metal shield as
illustrated in Figure 2.
Figure 2: Cross-section of a coaxial cable

The metal shield forms a flexible cylinder around the inner
wire providing a barrier to electromagnetic radiation, both
incoming and outgoing. The cable can run parallel to other
cables and can be bent round corners.

Optical Fibres

Optical fibres use light to transmit data. A thin glass fibre is
encased in a plastic jacket which allows the fibre to bend
without breaking. A transmitter at one end uses a light
emitting diode (LED) or laser to send pulses of light down
the fibre which are detected at the other end by a light
sensitive transistor.

Figure 3 illustrates a single fibre (a) and a sheath of three
fibres (b). Other configurations are possible.

Figure 3: Single fibre and a sheath of three fibres

Optical fibres have four main advantages over copper wires.

They use light, which neither causes electrical interference
nor is susceptible to it
They are manufactured to reflect the light inwards, so a
fibre can carry a pulse of light further than a copper wire
can carry a signal
Light can encode more information than electrical signals,
so they carry more information than a wire
Light can carry a signal over a single fibre, unlike
electricity, which requires a pair of wires

Figure 4 illustrates the hybrid nature of neighbourhood
wiring. Optical fibres carry cable TV to each street with the
houses fed by coaxial cable (a). Optical fibres also carry the
Plain Old Telephone Service (POTS) to the nearest
exchange, with the local loop to the house consisting of
twisted pairs (b).
Figure 4: Cable television and POTS

Radio

A network that uses electromagnetic radio waves operates
at radio frequency and its transmissions are called RF
transmissions. Each host on the network attaches to an
antenna, which can both send and receive RF.

Satellites

Radio transmissions do not bend round the surface of the
earth, but RF technology combined with satellites can
provide long-distance connections. Figure 5 illustrates a
satellite link across an ocean.
Figure 5: Satellite and ground stations

The satellite contains a transponder consisting of a radio
receiver and transmitter. A ground station on one side of the
ocean sends a signal to the satellite, which amplifies it and
transmits the amplified signal at a different angle than it
arrived at to another ground station on the other side of the
ocean.

A single satellite contains multiple transponders (usually six
to twelve) each using a different radio frequency, making it
possible for multiple communications to proceed
simultaneously. These satellites are often geostationary, i.e.
they appear stationary in the sky. To achieve this, their orbit
must be 22,236 miles (35,785 kilometres) high.
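
That altitude can be checked from Kepler's third law with a few lines
of Python, using standard values for Earth's gravitational parameter
and the sidereal day:

import math

GM = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
T = 86164.1              # sidereal day (one Earth rotation), seconds
R_EARTH = 6378.137e3     # Earth's equatorial radius, metres

# Orbital radius from Kepler's third law: r^3 = GM * T^2 / (4 * pi^2)
r = (GM * T**2 / (4 * math.pi**2)) ** (1 / 3)
print(f"altitude ~ {(r - R_EARTH) / 1000:,.0f} km")   # ~35,786 km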

Microwave

Electromagnetic radiation beyond the frequency range of
radio and television can be used to transport information.
Microwave transmission is usually point-to-point using
directional antennae with a clear path between transmitter
and receiver.

Infrared
Infrared transmission is usually limited to a small area,
e.g. one room, with the transmitter pointed towards the
receiver. The hardware is inexpensive and does not
require an antenna.

Q.5. What is the functionality of a modem? Describe in detail.

ANS:- Modem (from modulator-demodulator) is a device that
modulates an analog carrier signal to encode digital
information, and also demodulates such a carrier signal to
decode the transmitted information. The goal is to produce a
signal that can be transmitted easily and decoded to
reproduce the original digital data. Modems can be used
over any means of transmitting analog signals, from driven
diodes to radio.

The most familiar example is a voiceband modem that turns
the digital 1s and 0s of a personal computer into sounds that
can be transmitted over the telephone lines of Plain Old
Telephone Systems (POTS), and once received on the other
side, converts those 1s and 0s back into a form used by a
USB, Ethernet, serial, or network connection. Modems are
generally classified by the amount of data they can send in a
given time, normally measured in bits per second, or "bps".
They can also be classified by Baud, the number of times
the modem changes its signal state per second.

Baud is not the modem's speed in bit/s, but in symbols/s.
The baud rate varies, depending on the modulation
technique used. Original Bell 103 modems used a
modulation technique that saw a change in state 300 times
per second. They transmitted 1 bit for every baud, and so a
300 bit/s modem was also a 300-baud modem. However,
casual computerists often confused the two. A 300 bit/s
modem is one whose bit rate matches its baud rate. A 2400
bit/s modem, by contrast, changes state only 600 times per
second; because it transmits 4 bits with each symbol, 600
baud (state changes per second) carries 2400 bit/s.
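
The arithmetic is simple multiplication, as this small Python sketch of
the figures above shows:

def bit_rate(baud, bits_per_symbol):
    # bit rate = symbols per second x bits carried per symbol
    return baud * bits_per_symbol

print(bit_rate(300, 1))   # Bell 103: 300 baud x 1 bit -> 300 bit/s
print(bit_rate(600, 4))   # 600 baud x 4 bits          -> 2400 bit/s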

Faster modems are used by Internet users every day,
notably cable modems and ADSL modems. In
telecommunications, "wide band radio modems" transmit
repeating frames of data at very high data rates over
microwave radio links. Narrow band radio modem is used for
low data rate up to 19.2k mainly for private radio networks.
Some microwave modems transmit more than a hundred
million bits per second. Optical modems transmit data over
optical fibers. Most intercontinental data links now use
optical modems transmitting over undersea optical fibers.
Optical modems routinely have data rates in excess of a
billion (1×10^9) bits per second. One kilobit per second (kbit/s
or kb/s or kbps) as used in this article means 1000 bits per
second and not 1024 bits per second. For example, a 56k
modem can transfer data at up to 56,000 bits (7 kB) per
second over the phone line.

Narrowband/phone-line dialup modems

Figure: A 28.8 kbit/s serial-port modem from Motorola

A standard modem of today contains two functional parts: an
analog section for generating the signals and operating the
phone, and a digital section for setup and control. This
functionality is actually incorporated into a single chip, but
the division remains in theory. In operation the modem can
be in one of two "modes", data mode in which data is sent to
and from the computer over the phone lines, and command
mode in which the modem listens to the data from the
computer for commands, and carries them out. A typical
session consists of powering up the modem (often inside the
computer itself) which automatically assumes command
mode, then sending it the command for dialing a number.
After the connection is established to the remote modem, the
modem automatically goes into data mode, and the user can
send and receive data. When the user is finished, the
escape sequence, "+++" followed by a pause of about a
second, is sent to the modem to return it to command mode,
and the command ATH to hang up the phone is sent.
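
As a hedged sketch, such a session might look like the following, using
the third-party pyserial package; the serial port, speed, and phone
number are placeholders, and a real script would read the modem's
responses instead of sleeping:

import time
import serial  # pyserial; assumes a real modem on the given port

modem = serial.Serial("/dev/ttyS0", 57600, timeout=2)  # placeholder port
modem.write(b"ATDT5551234\r")   # command mode: dial the remote modem
time.sleep(30)                  # ...connected; modem is now in data mode
modem.write(b"+++")             # escape sequence, surrounded by pauses
time.sleep(1)                   # guard time; modem returns to command mode
modem.write(b"ATH\r")           # hang up the phone
modem.close()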

The commands themselves are typically from the Hayes
command set, although that term is somewhat misleading.
The original Hayes commands were useful for 300 bit/s
operation only, and then extended for their 1200 bit/s
modems. Faster speeds required new commands, leading to
a proliferation of command sets in the early 1990s. Things
became considerably more standardized in the second half
of the 1990s, when most modems were built from one of a
very small number of "chip sets". We call this the Hayes
command set even today, although it has three or four times
the number of commands of the actual standard.

Increasing speeds (V.21, V.22, V.22bis)

Figure: A 2400 bit/s modem for a laptop

The 300 bit/s modems used frequency-shift keying to send
data. In this system the stream of 1s and 0s in computer
data is translated into sounds which can be easily sent on
the phone lines. In the Bell 103 system the originating
modem sends 0s by playing a 1070 Hz tone, and 1s at
1270 Hz, with the answering modem putting its 0s on
2025 Hz and 1s on 2225 Hz. These frequencies were
chosen carefully; they are in the range that suffers minimum
distortion on the phone system, and also are not harmonics
of each other.
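
A naive Python sketch of this originating-side FSK scheme, generating
raw tone samples per bit (the sample rate is illustrative, and a real
modem would keep the phase continuous across bit boundaries):

import math

SAMPLE_RATE = 8000                 # samples per second (illustrative)
BAUD = 300
FREQ = {0: 1070.0, 1: 1270.0}      # originating modem's 0/1 tones, in Hz

def fsk_samples(bits):
    per_bit = SAMPLE_RATE // BAUD  # samples spent on each bit
    return [math.sin(2 * math.pi * FREQ[b] * n / SAMPLE_RATE)
            for b in bits for n in range(per_bit)]

print(len(fsk_samples([1, 0, 1, 1])), "samples for 4 bits")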

In the 1200 bit/s and faster systems, phase-shift keying was
used. In this system the two tones for any one side of the
connection are sent at similar frequencies to those in the 300
bit/s systems, but slightly out of phase. By comparing the
phase of the two signals, 1s and 0s could be pulled back out;
for instance, if the signals were 90 degrees out of phase, this
represented two bits, "1,0", and at 180 degrees it was "1,1". In
this way each symbol of the signal represents two bits
instead of one. 1200 bit/s modems were, in effect, 600
symbols per second modems (600 baud modems) with 2 bits
per symbol.
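
The phase-to-bits mapping can be tabulated directly; this small Python
sketch (the exact assignment of phases to bit pairs is illustrative)
shows why 600 symbols per second yield 1200 bit/s:

# Each symbol's phase offset encodes two bits, so
# bit rate = 600 baud x 2 bits/symbol = 1200 bit/s.
PHASE_TO_BITS = {0: (0, 0), 90: (1, 0), 180: (1, 1), 270: (0, 1)}

for phase, bits in sorted(PHASE_TO_BITS.items()):
    print(f"{phase:3d} degrees -> bits {bits}")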

Voiceband modems generally remained at 300 and 1200
bit/s (V.21 and V.22) into the mid 1980s. A V.22bis 2400-
bit/s system similar in concept to the 1200-bit/s Bell 212
signalling was introduced in the U.S., and a slightly different
one in Europe. By the late 1980s, most modems could
support all of these standards and 2400-bit/s operation was
becoming common.

For more information on baud rates versus bit rates, see the
companion article List of device bandwidths.

Using digital lines and PCM (V.90/92)

In the late 1990s Rockwell and U.S. Robotics introduced
new technology based upon the digital transmission used in
modern telephony networks. The standard digital
transmission in modern networks is 64 kbit/s but some
networks use a part of the bandwidth for remote office
signaling (e.g. to hang up the phone), limiting the effective
rate to 56 kbit/s DS0. This new technology was adopted into
ITU standards V.90 and is common in modern computers.
The 56 kbit/s rate is only possible from the central office to
the user site (downlink) and in the United States,
government regulation limits the maximum power output,
capping the effective rate at 53.3 kbit/s. The uplink (from the user to the central
office) still uses V.34 technology at 33.6k.

Later in V.92, the digital PCM technique was applied to
increase the upload speed to a maximum of 48 kbit/s, but at
the expense of download rates. For example a 48 kbit/s
upstream rate would reduce the downstream as low as 40
kbit/s, due to echo on the telephone line. To avoid this
problem, V.92 modems offer the option to turn off the digital
upstream and instead use a 33.6 kbit/s analog connection, in
order to maintain a high digital downstream of 50 kbit/s or
higher. (See November and October 2000 update at
http://www.modemsite.com/56k/v92s.asp ) V.92 also adds
two other features. The first is the ability for users who have
call waiting to put their dial-up Internet connection on hold for
extended periods of time while they answer a call. The
second feature is the ability to "quick connect" to one's ISP.
This is achieved by remembering the analog and digital
characteristics of the telephone line, and using this saved
information to reconnect at a fast pace.

List of dialup speeds

Note that the values given are maximum values, and actual
values may be slower under certain conditions (for example,
noisy phone lines). For a complete list see the companion
article List of device bandwidths.

Connection                                              Bitrate
Modem 110 baud                                          0.1 kbit/s
Modem 300 (300 baud) (Bell 103 or V.21)                 0.3 kbit/s
Modem 1200 (600 baud) (Bell 212A or V.22)               1.2 kbit/s
Modem 2400 (600 baud) (V.22bis)                         2.4 kbit/s
Modem 2400 (1200 baud) (V.26bis)                        2.4 kbit/s
Modem 4800 (1600 baud) (V.27ter)                        4.8 kbit/s
Modem 9600 (2400 baud) (V.32)                           9.6 kbit/s
Modem 14.4 (2400 baud) (V.32bis)                        14.4 kbit/s
Modem 28.8 (3200 baud) (V.34)                           28.8 kbit/s
Modem 33.6 (3429 baud) (V.34)                           33.6 kbit/s
Modem 56k (8000/3429 baud) (V.90)                       56.0/33.6 kbit/s
Modem 56k (8000/8000 baud) (V.92)                       56.0/48.0 kbit/s
Bonding modem (two 56k modems) (V.92)                   112.0/96.0 kbit/s [5]
Hardware compression (variable) (V.90/V.42bis)          56.0-220.0 kbit/s
Hardware compression (variable) (V.92/V.44)             56.0-320.0 kbit/s
Server-side web compression (variable) (Netscape ISP)   100.0-1000.0 kbit/s

Radio modems

Direct broadcast satellite, WiFi, and mobile phones all use
modems to communicate, as do most other wireless
services today. Modern telecommunications and data
networks also make extensive use of radio modems where
long distance data links are required. Such systems are an
important part of the PSTN, and are also in common use for
high-speed computer network links to outlying areas where
fibre is not economical.

Even where a cable is installed, it is often possible to get
better performance or make other parts of the system
simpler by using radio frequencies and modulation
techniques through a cable. Coaxial cable has a very large
bandwidth, however signal attenuation becomes a major
problem at high data rates if a digital signal is used. By using
a modem, a much larger amount of digital data can be
transmitted through a single piece of wire. Digital cable
television and cable Internet services use radio frequency
modems to provide the increasing bandwidth needs of
modern households. Using a modem also allows for
frequency-division multiple access to be used, making full-
duplex digital communication with many users possible using
a single wire.

Wireless modems come in a variety of types, bandwidths,
and speeds. Wireless modems are often referred to as
transparent or smart. They transmit information that is
modulated onto a carrier frequency to allow many wireless
communication links to work simultaneously on different
frequencies.

Transparent modems operate in a manner similar to their
phone line modem cousins. Typically, they were half duplex,
meaning that they could not send and receive data at the
same time. Typically transparent modems are polled in a
round robin manner to collect small amounts of data from
scattered locations that do not have easy access to wired
infrastructure. Transparent modems are most commonly
used by utility companies for data collection.

Smart modems come with a media access controller inside
which prevents random data from colliding and resends data
that is not correctly received. Smart modems typically
require more bandwidth than transparent modems, and
typically achieve higher data rates. The IEEE 802.11
standard defines a short range modulation scheme that is
used on a large scale throughout the world.

WiFi and WiMax

Wireless data modems are used in the WiFi and WiMax
standards, operating at microwave frequencies.

WiFi (Wireless Fidelity) is principally used in laptops for
Internet connections (wireless access points) and wireless
application protocol (WAP).

Broadband

DSL modem

ADSL modems, a more recent development, are not limited
to the telephone's "voiceband" audio frequencies. Some
ADSL modems use discrete multitone modulation (DMT), a
form of coded orthogonal frequency-division modulation.

Cable modems use a range of frequencies originally
intended to carry RF television channels. Multiple cable
modems attached to a single cable can use the same
frequency band, using a low-level media access protocol to
allow them to work together within the same channel.
Typically, 'up' and 'down' signals are kept separate using
frequency division multiple access.

New types of broadband modems are beginning to appear,
such as two-way satellite and power-line modems.

Broadband modems should still be classed as modems,
since they use complex waveforms to carry digital data.
They are more advanced devices than traditional dial-up
modems as they are capable of modulating/demodulating
hundreds of channels simultaneously.

Many broadband modems include the functions of a router
(with Ethernet and WiFi ports) and other features such as
DHCP, NAT and firewall features.

When broadband technology was introduced, networking
and routers were unfamiliar to consumers. However, many
people knew what a modem was as most internet access
was through dial-up. Due to this familiarity, companies
started selling broadband modems using the familiar term
"modem" rather than vaguer ones like "adapter" or
"transceiver".

Many broadband modems must be configured in bridge
mode before they can use a router.

Deep-space telecommunications

Many modern modems have their origin in deep space
telecommunications systems of the 1960s.

Differences between deep-space telecom modems and landline modems:

digital modulation formats that have high Doppler immunity are typically used
waveform complexity tends to be low, typically binary
phase shift keying
error correction varies mission to mission, but is
typically much stronger than most landline modems

Voice modem

Voice modems are regular modems that are capable of
recording or playing audio over the telephone line. They are
used for telephony applications. See Voice modem
command set for more details on voice modems. This type
of modem can be used as FXO card for Private branch
exchange systems (compare V.92).

Popularity

A CEA study in 2006 found that dial-up Internet access is on
a notable decline in the U.S. In 2000, dial-up Internet
connections accounted for 74% of all U.S. residential
Internet connections. The US demographic pattern for dial-up
modem users per capita has been more or less mirrored
in Canada and Australia for the past 20 years.

Dial-up modem use in the US had dropped to 60% by 2003,
and in 2006 stood at 36%. Voiceband modems were once
the most popular means of Internet access in the U.S., but
with the advent of new ways of accessing the Internet, the
traditional 56K modem is losing popularity.
