
Chapter 3: Computer Networks and the Internet

The concepts and ideas with which you should be familiar are contained in the chapter 3 checklist. Print out the checklist to use
as a study/notes guide while working through the chapter.

"The network is the computer."

Bill Joy, Sun Microsystems

As we have seen, for much of the history of information technology, computers were big, expensive, and few and far between.
This had several implications:

 Computers usually had to be shared by many users in order to justify the cost and space these systems required.
Thus, unlike today's personal computers, in which one person uses a computer at a time, for much of their history
computers have been multi-user (aka time-sharing) systems.

 However, in order to use a computer, a user had to be local—in close physical proximity to the machine. Typically, this
meant working at a terminal, a combination of a keyboard and a screen that is designed to work with a particular
brand of computer system. (Today, we often call these devices dumb terminals because, unlike personal computers,
they do not contain a CPU chip.) Because there are limits to the length of the wires that can connect a terminal to a
"port" on the computer system itself, truly remote access was not a possibility for most users.

In time, it was discovered that by using a modem—so called because it is a device that "modulates" (turns binary digits
into sounds) and "demodulates" (turns sounds back into binary digits)—an ordinary telephone line could provide a link
between a terminal and a computer system. Even so, modems were initially expensive devices, and this fact, along
with the cost of long-distance telephone calls, confined most users to local computing.

 Computers were stand-alone machines. They were not connected to other computers—i.e., networked. If users of one
computer system wished to share data with another computer system, this typically meant that the data had to be
saved onto magnetic tape, and the tape had to be transported to the other computer system (which might be
thousands of miles away), where the second computer could read the data on the tape.
Lesson 1: Origins
 The origin of computer networking is in the Cold War era, a stand-off between the two nuclear superpowers: the United
States and the Soviet Union. This arms race pitted the technological
capabilities of each country against those of the other.
 In 1957, the USSR launched Sputnik, the first orbiting satellite. With this
launch, the supposedly low-tech Soviets had leap-frogged past the
Americans into space, setting off great concern in the U.S., since such satellite communications could be used to guide
nuclear missiles over long distances.
 U.S. President Dwight D. Eisenhower urgently established several
government agencies to meet the new need for technology development. Most notably,
the National Aeronautics and Space Administration (NASA) was immediately created,
and in January of 1958, the U.S. launched its own orbiting satellite, the Explorer I.
 That same year, Eisenhower also established the Advanced Research
Projects Agency (ARPA) to rapidly develop a variety of technologies for the U.S.
Department of Defense.
 In 1962, ARPA created the Information Processing Techniques Office (IPTO) to
explore information technology issues.
A New Kind of Network
 One project the IPTO was asked to investigate was the feasibility of a new kind of network.
 At the time, the dominant model of a network was that of the nation's only telephone company, AT&T. This network
was a centralized network, meaning that all of the nodes (points on the network) communicated through a central
switch point.
 This was a circuit-switching network: in order for one person to talk to
another via telephone, an electrical connection had to be created between the
telephone lines of each person. In old-time movies, this was done by a human operator
who "patched through the call," thereby creating a temporary, physical electrical circuit
over which sounds could travel. When the call was completed, the circuit was broken so
that it could be given to two other callers. Later, this connecting and breaking of circuits
was handled by machines.
 However, such a system was
extremely vulnerable to catastrophe; if the
central switching point failed—e.g., if it were
struck by a single Soviet missile—the entire
network would be unable to function. How would the nation be able to
defend itself if it could not communicate?
 The researchers at IPTO concluded that the Cold War called for a new kind
of network.

Distributed Networks
In consideration of this dilemma, the IPTO researchers turned to recent research done by Paul Baran at RAND corporation on
the concept of "distributed" networks.

In a centralized network, there is only one path from a given node to another, namely the one through the central circuit-switching point.

A distributed network is one that has no single, central connection point. Rather, each of the nodes is of equal importance: each
node is able to send and receive information and is also able to serve as a router of information to other nodes. Thus, there are
multiple possible paths from any one node on the network to any other node.
In some sense, this redundancy makes a distributed network less efficient than a
centralized network. However, it also makes the network very robust (resistant to
failure). If one node on the network is not operating, the routing computer simply
directs the information along a different route to the destination. Thus, even if a large
portion of a distributed network fails, the rest of the nodes on the network can still
communicate.

In some respects this is similar to the way long-distance truckers use CB
radios: when they hear of a traffic jam ahead on the highway, they reroute.
It may mean having to travel extra miles, but all that matters is getting to
the destination.

Packets
In Baran's network scheme, transmissions would be broken up into packets, each of
which might travel a completely different route to the destination, where they would then
be reassembled into the original transmission. Thus, this vision of a distributed network
was also called a packet-switching network.
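Baran's scheme can be sketched in a few lines of Python. This is an illustrative toy, not a real protocol implementation: the message text, the packet size, and the use of a simple sequence number are all invented for the example.

```python
# Illustrative sketch of packet switching: a message is broken into small
# packets, each labeled with its position; the packets may arrive in any
# order and are reassembled at the destination.
import random

def to_packets(message, size=4):
    """Split a message into (sequence_number, chunk) packets."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    """Reconstruct the original message, whatever order the packets arrive in."""
    return "".join(chunk for _, chunk in sorted(packets))

packets = to_packets("LOGIN REQUEST FROM UCLA")
random.shuffle(packets)      # packets may take completely different routes
print(reassemble(packets))   # prints "LOGIN REQUEST FROM UCLA"
```

The sequence number is what makes out-of-order delivery harmless: however the packets are scrambled in transit, sorting on it restores the original transmission.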

However, no one had ever built a distributed, packet-switching network before. Network
experts for the telephone company at Bell Laboratories predicted such a network would
never work. Still, the researchers at IPTO were going to try to realize Paul Baran's idea...

Growing Needs
It was hoped that this network would also serve another growing need.

The Department of Defense bought computers for the various agencies around the
country that were doing Department of Defense research. However, there was no way
for the different sites to access a computer at another site. The ability to do so would
be advantageous in two major respects:

1. Each research site tended to have a different make and model of
computer. This meant, then, that each site was likely to have certain
computer capabilities that the computer at another site might not have. The
IPTO researchers wished that they could occasionally make use of the capabilities of another research site's
computer.

2. The researchers at the various Department of Defense research sites collaborated on various research projects.
However, there was no easy way for the computers at their respective sites to share data. Distance was not the only
problem: different makes of computers were incompatible with each other (not unlike the incompatibility of PCs and
Macintosh computers today).

Thus, the networking of computers was not merely a hardware problem—i.e., a matter of constructing a physical connection
between computers. Rather, the project also presented a significant software problem: namely, how to make it possible for
incompatible computers to exchange information.
ARPANET
IPTO's solution to the incompatibility problem was to design special computers that would serve as
interfaces through which other computers could be connected. They called this special kind of connecting
computer an Interface Message Processor, or IMP, and awarded the contract for realizing the IPTO designs
to BBN, a Boston-area computer company.

(Massachusetts Senator Ted Kennedy sent a letter to BBN, congratulating them on their "interfaith message
processor.")

Most of the IPTO research sites were universities. Accordingly, the first two
computers that IPTO attempted to connect using this new network design
scheme were one at Stanford University in northern California and another at
UCLA, in southern California.

The computer system at UCLA was connected to an IMP there, which was in
turn connected by "high-speed" (though slow by today's standards) phone
lines to an IMP at Stanford, which was in turn connected to the computer at
the Stanford Research Institute (SRI).

On a red-letter day in September 1969, the UCLA team attempted to log in to
the computer system at Stanford. As one of the UCLA team members began typing in the "login" command,
researchers 350 miles away at Stanford were delighted to see the letters appearing at their end. As the "g" in
"login" was typed, the software in the IMPs crashed. But it had worked nonetheless!

This humble beginning, dubbed the ARPANET, would become the Internet.

A Tower of Babel?
The plan was to have the ARPANET serve as a backbone, a high-speed network into which other smaller, slower networks
could connect. Thus, the ARPANET was to be an internetwork—a network made up of networks. In time the ARPANET came
to be referred to simply as the Internet.

After the successful connection between UCLA and Stanford, the ARPANET grew rapidly.

 By the end of 1969, computers at the research centers at the University of California at Santa Barbara and the
University of Utah had been added, bringing the size of the ARPANET to four. The term host was introduced to refer
to any computer on the ARPANET.

 By 1972, there were 19 hosts.

 In 1973, there were 25 hosts.

 In 1978, there were 111 hosts.

 By 1981, there were 223 hosts.

This growth of the ARPANET, however, also meant that the ARPANET had to contend
with a growing number of computer makes and models. Again, as we see today with
Macs and PCs, the hardware and software of the many different kinds of computers on
the ARPANET were not compatible. Each computer "spoke a different language," if you will, and a kind of "Tower of Babel"
problem was beginning to occur.

The IMP computers handled the incompatibilities between computers, but it was becoming increasingly difficult to
update their software to accommodate the new kinds of computers. In fact, at this rate, the ARPANET would soon be
down for upgrades far too often.

A new solution was needed.

TCP/IP
IPTO's solution to the "Tower of Babel" problem was to introduce a new protocol—i.e., a set of communication rules—called
TCP/IP.

TCP/IP is actually a suite of protocols, the two principal ones being:

 Transmission Control Protocol (TCP), and
 Internet Protocol (IP).

TCP is responsible for breaking a transmission into packets (datagrams) and for labeling each packet with:

 a "To:" address,
 a "From:" address,
 the ID number of the transmission of which it is a part,
 its place or location within that transmission.

At the other end of the transmission, TCP also is at work, reconstructing the message by reassembling the
packets. Each packet may take a different path to the destination, depending upon which way the router sends it;
thus, some packets of a message might encounter network congestion or outages and fail to reach the
destination while the other packets succeed in making it there. Thus, TCP also handles the
resending of copies of packets that were lost or damaged in transit.

In this way, TCP ultimately creates a kind of connection between two hosts, enabling them to send transmissions to each
other.

IP - Internet Protocol
IP is the other principal protocol of TCP/IP.

IP defines the format of the packets (datagrams). However, it is best known for providing the address scheme according to
which the Internet operates.

Consider this: global telephone calls are possible because of a prescribed format for
telephone numbers that ensures there are no duplicate phone numbers. Also, a postal
system works because it defines an address format and also depends upon unique
mail addresses.

Similarly, the IP address scheme specifies an Internet address format and also ensures that there are no
duplicate addresses on the Internet.

The basic format for an IP address is four numbers between 0 and 255, separated by periods referred to as "dots." For
example, the IP address of the Web server at Calvin is 153.106.4.23.

This is sometimes called dotted decimal format because of the "dots" involved and also because each of the four numbers is
written as a decimal, even though it is actually a binary number.

Why do the numbers fall between 0 and 255? Because each of these numbers is actually an 8-bit binary number— i.e., one byte.
And what is the largest number that can be stored in one byte? Answer:

11111111 (binary) = 255 (decimal)
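The connection between the dotted-decimal form and the underlying bytes can be checked with a short Python sketch (the helper function `to_binary` is our own, written just for this illustration):

```python
# Each of the four numbers in a dotted-decimal IP address is one byte
# (8 bits), so each must fall between 0 and 255 (binary 11111111).
def to_binary(dotted):
    """Show each part of an IP address as an 8-bit binary number."""
    parts = [int(p) for p in dotted.split(".")]
    assert all(0 <= p <= 255 for p in parts), "each part must fit in one byte"
    return ".".join(format(p, "08b") for p in parts)

print(int("11111111", 2))         # the largest one-byte value: 255
print(to_binary("153.106.4.23"))  # Calvin's Web server, byte by byte
```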

IP - Internet Protocol (continued)


What's the IP address of your current host? (Recall, the term host simply refers to a computer that is
connected to the Internet.)

Using a "what is my IP address" Web page, you can find out your IP address—the location your computer currently occupies on the Internet.

What makes Calvin College a place on the Internet?

Actually, it is not the fact that Calvin's computers all reside on Calvin's campus. The Internet is not structured in
terms of geographic location. Rather, it is defined in terms of IP addresses.

A defined range of IP addresses is called a domain. Calvin College is an Internet domain (calvin.edu)
because the IP addresses of all of its computers fall within the range of IP addresses that Calvin has
registered.

There are several different classes of domains. Recall that each IP address is comprised of four numbers
between 0 and 255; including 0, this amounts to 256 different possible values for each number.

In a Class A registration, the first of the four numbers that make up an IP address would be fixed, while the
other three would be allowed to vary between 0 and 255. Thus, given a Class A license, the number of IP
addresses available to you to "hand out" to the computers in your domain would be: 256 x 256 x 256 =
16,777,216—over 16 million possible addresses! Needless to say, Class A registrations are very rare and very
expensive. Typically, they are used only by Internet Service Providers and companies in need of such a large
number of IP addresses.

In a Class B registration, the first two numbers would be fixed and the other two would be allowed to vary
between 0 and 255. Given a Class B license, the number of IP addresses available to assign to the computers
in your domain would be: 256 x 256 = 65,536. Class B registrations are much more common, although still
fairly expensive.

In a Class C registration, the first three numbers would be fixed and the last one would be allowed to vary
between 0 and 255. Given a Class C license, only 256 IP addresses are available to assign to the computers
in your domain. Class C registrations are very, very common and quite inexpensive. However, 256 IP
addresses does not stretch very far: a network printer, for instance, may require its own IP address.
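The address counts for the three classes follow directly from how many of the four bytes are free to vary, as a few lines of Python confirm:

```python
# Number of assignable addresses under each class of registration:
# fixing more of the four bytes leaves fewer bytes free to vary 0-255.
free_bytes = {"A": 3, "B": 2, "C": 1}  # bytes that may vary in each class
for cls, n in free_bytes.items():
    print(f"Class {cls}: 256**{n} = {256 ** n:,} addresses")
```

Running this prints 16,777,216 addresses for Class A, 65,536 for Class B, and 256 for Class C—the figures given above.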

Note: the first number of an IP address is the most significant in terms of the Internet's IP address scheme. Each
following number reflects a narrower and narrower scope of the Internet.
Calvin College has a Class B registration. Thus,
the first two numbers in Calvin's IP addresses
are fixed at 153.106, and so the IP address of
every computer within the Calvin domain begins
with 153.106. Also, this means that no other
computer anywhere on the Internet has an IP
address that begins with 153.106.

If we wish, we can restrict certain Web materials, allowing only computers within our domain to
access them. For example, a page on Calvin's Web server can be restricted so that if you are at a
computer whose IP address begins with 153.106., you are able to see the page; if not, the Web
server forbids you from viewing it.

How does this work? Every time you type a Web address into a Web browser or click on a link, your browser
sends a request to a Web server somewhere. The browser must provide the IP address of the computer on
which it is running so that the server knows where to send the page!

Thus, do not be lulled into believing in Internet anonymity! Practically every site you contact knows the IP
address of the computer you are using.

IP addresses: In short supply!


There is a problem with all of this: Even though the traditional 32-bit (four-byte) IP address scheme consisting of four numbers
(each between 0 and 255) offers 256 x 256 x 256 x 256 = 4,294,967,296 possible IP addresses (more, really, since there are
certain "tricks" possible), this is not enough for today's Internet!

Thus, a new scheme for IP addresses is being introduced: IPv6. This is a 128-bit (16-byte) address scheme. At face value, this
would seem to provide 256^16 possibilities: 340,282,366,920,938,463,463,374,607,431,768,211,456 combinations. However,
again, given the clever addressing "tricks" that are part of this scheme, many more computers than this can be assigned IP
addresses.

How big of a number is this? One person has noted that IPv6 would make it possible to
assign an IP address to every molecule on the planet!
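The two address-space sizes are easy to compute directly:

```python
# IPv4 addresses are 32 bits (four bytes); IPv6 addresses are 128 bits
# (sixteen bytes). Each byte can take 256 values.
ipv4 = 256 ** 4    # = 2 ** 32
ipv6 = 256 ** 16   # = 2 ** 128
print(f"IPv4: {ipv4:,}")   # 4,294,967,296
print(f"IPv6: {ipv6:,}")   # 340,282,366,920,938,463,463,374,607,431,768,211,456
```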

ARPANET and the NSFNet


Thus, in response to the increasingly diverse, Tower of Babel–like ARPANET, a kind of
lingua franca (universal language) was developed: TCP/IP.

The ARPANET grew even more rapidly following the introduction of TCP/IP in 1983. Also around this time, the ARPANET made
several key steps away from its military origins.

First, key military research sites were split off from the ARPANET to form a separate network called MILNET.
Secondly, in 1984, the National Science Foundation (NSF) created a new network called the NSFNet.
Originally designed to allow remote connection to the new supercomputer centers NSF had created at
several universities, it was instead decided that the NSFNet would serve the broader purpose of connecting
smaller computer networks at each of the supercomputing sites. And because the NSFNet used ARPANET's
TCP/IP as its protocol, the NSFNet and the ARPANET could be combined. The result was a network that
truly deserved the name Internet, since it was now, on a very large scale, an "internetwork"—a network
comprised of multiple networks. The NSFNet became, along with the ARPANET, another "backbone" for the
Internet.

Recall that a backbone is a high-speed portion of a network. The original NSFNet backbone was built at the National Center for
Supercomputing Applications (NCSA) at the University of Illinois. It was fast for its day: 56000 bits per second—about the speed
of today's regular phone-line modems (the ones we complain are "so slow"!).

Key Changes in the Internet


Thus, several important changes to the fundamental nature of the Internet occurred by 1985.

First, with the implementation of TCP/IP, the grounds were laid for both the growth and the diversification of the Internet. In our
present era, where the only two personal computer platforms are "Mac" or "PC," it is hard to fully appreciate an era in which there
were literally dozens and dozens of heterogeneous, incompatible computer makes and models. But in the 1970s and early
1980s, it was often impossible to standardize on a single make of computer: in such an expensive era of information technology,
choices of computers often had to be made based on price rather than compatibility. As a result, the computers at a single
university might be unable to connect to each other, let alone to computers at another university! Thus, the arrival of TCP/IP, a
powerful means for completely incompatible computers systems to interface with each other, seemed nothing short of
miraculous.

The Unix operating system had been developed at Bell Labs in the 1970s. Unix was (and still is) a
very stable and powerful multi-user operating system. In 1984, the University of California at
Berkeley released a new version of Unix called "Berkeley" Unix, or BSD. (The chosen mascot for
BSD Unix was a demon—a play on the term daemon, a software program that runs in the
background). This version of Unix became an instant hit, especially in college and university
computing. Not only could this very affordable (often free) version of Unix run on a great variety of
computer platforms, enabling colleges and universities to provide top-notch time-sharing of
individual computers by many users, but it also included a complete version of TCP/IP. Thus,
colleges and universities could connect their computer systems together to create their own,
smaller TCP/IP local area networks. But this meant that they could also connect these small, academic research networks into
the NSFNet.

Thus, the Internet had truly become an internetwork, a network made up of networks.

Moreover, the Internet was no longer restricted to defense-related research. Rather, any college or university that had legitimate
research-related needs that could justify connection to the NSFNet was allowed to do so.

Thus, with the introduction of the NSFNet backbone, the Internet entered the next phase of its growth.

Internet Growth: the 1980s


 1983: 500+ hosts
 1984: 1000+ hosts
 1984: original NSFNet backbone—56,000 bits (56 kilobits) per second.
 1986: 2000+ hosts
 1987: somewhere between 10,000 and 25,000 hosts
The Internet was growing so quickly that in 1987, only three years later, the NSF had to upgrade the NSFNet backbone to a
"T-1" line—1.5 million bits (1.5 megabits) per second.

Here, the Internet was running into the issue of "bandwidth."

Bandwidth refers to the amount of information that can travel over a network connection in
a given amount of time. One of the common analogies invoked to describe bandwidth is that
of a pipe: a pipe with a larger diameter can carry more water in one minute than a narrower
pipe can. Similarly, the many network connections that make up the Internet come in a great
variety of bandwidths—that is, some can carry more bits per second (bps) than others. By
definition, a backbone must have a very high bandwidth relative to other kinds of
connections.

The original NSFNet backbone could not handle the packet traffic generated by the growing
number of hosts on the Internet. Thus, it had to be upgraded to a higher bandwidth.

Domain Name System (DNS)


The 1980s also introduced the domain name system (DNS) into the Internet. In 1984, this new scheme for referring to Internet
sites was deployed.

The fundamental addressing scheme of the Internet is in terms of IP addresses. However, they are difficult to remember!
Imagine having to send e-mail to a Calvin friend at jdoe@153.106.4.1.

It was for this reason that DNS was developed.

A domain is a registered range of IP addresses. However, these registered addresses are also associated with a domain name
(e.g., calvin.edu).

Thus, the DNS system is rather like an Internet "phonebook." A phonebook allows you
to look up numbers in reference to a name. Similarly, if you supply the DNS with a
domain name, it will convert it into a number: an IP address. This enables us to send
e-mail and browse the Web using names instead of numbers.

When a DNS server translates a name into an IP address, this is called resolving
that name.
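Most programming languages let you ask the DNS to resolve a name directly. For example, Python's standard `socket` module includes a lookup function; the sketch below resolves the special name `localhost` (what other names resolve to depends on the network you run it from):

```python
# Ask the system's resolver to translate a name into an IP address.
import socket

def resolve(domain):
    """Return the IPv4 address associated with a domain name."""
    return socket.gethostbyname(domain)

print(resolve("localhost"))  # the loopback name; conventionally 127.0.0.1
```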

Domains
The domain name system dictates a very specific format for domain names.

Interestingly, unlike IP addresses, in which the portions of the address become less significant as you move from left to right,
with domain names it is the opposite: as you move from left to right, the portions of the address become more significant.

calvin.edu
The most significant portion of a domain name is the last, which is called a top-level domain. In the case of U.S. domain names,
this suffix puts the domain into one of a number of categories, some of which are more tightly regulated than others.

Examples:
.org non-profit organization

.edu four-year college or university

.gov U.S. federal government

.mil U.S. military

.net a network organization

.com originally, a commercial business; now, almost anything
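The right-to-left significance of domain-name labels can be illustrated with a small helper (a function of our own invention, not part of DNS itself):

```python
# Reading a domain name from right to left, the labels go from most
# significant (the top-level domain) to least significant.
def labels_by_significance(domain):
    """Return a domain's labels ordered from most to least significant."""
    return list(reversed(domain.split(".")))

print(labels_by_significance("calvin.edu"))      # ['edu', 'calvin']
print(labels_by_significance("www.calvin.edu"))  # ['edu', 'calvin', 'www']
```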

The 1990s
The last decade of the 20th century was a surprising one in the arena of information technology. It was over the course of
this decade that the Internet changed from an academic research network into a truly global communications system, connecting
people from all walks of life.

The decade began with the dismantling of ARPANET. The ARPANET's bandwidth, 50,000 bits (50 kilobits) per second,
cutting-edge technology in 1969, now paled next to the T-1 NSFNet backbone. Thus, in 1990, the computer network that had
started it all, ARPANET, the original backbone of what had become the Internet, was quietly disconnected from the Internet.

The 1990s continued the explosive growth of the Internet that had begun in the '80s.

 1991: 600,000 hosts around the world.

As a result, in 1991 NSF had to upgrade the backbone once again, this time to a T-3 connection, which transmits data at the rate
of 44 million bits (44 megabits) per second.

And the decade was only beginning.

The World Wide Web


Back in 1945, Vannevar Bush, a member of U.S. President Franklin D. Roosevelt's administration, wrote a now-famous article
entitled "As We May Think." Bush was frustrated by the inability of traditional library systems to keep up with the enormous
increase in both printed documents and such new media as recordings and movies. Moreover, Bush observed that traditional
cataloging schemes also enforced very limiting mental concepts of how items of information relate to each other. Bush
called upon the scientific community that had fought WWII to direct its energies toward more humane tasks. Specifically, Bush
believed that a machine he called the Memex could be built that would be able to store information from a variety of media and
would also be able to link these items to each other in every way imaginable. Thus, a Memex user would find information not by
following the rigid rules of a cataloging system, but rather by following links corresponding to the kind of free associations that the
human mind makes.

Bush's text was stunningly prescient: in many ways, he described the World Wide Web nearly 50 years before it came into
existence.

In 1991, Tim Berners-Lee of the European Organization for Nuclear Research (CERN) developed and made public the World
Wide Web (WWW), making the transfer of files over the Internet much simpler than was previously possible. The World Wide
Web realized Bush's dream of what would come to be known as hypertext—text containing hyperlinks to other documents and
media.

Still, the World Wide Web did not explode onto the scene until 1993, when students at the University of Illinois
at Urbana-Champaign developed Mosaic, a freeware graphical user interface (GUI) browser for the Web. One of
these University of Illinois students was Marc Andreessen, who later joined with Jim Clark to develop Netscape,
an even more popular and powerful Web browser.

The Web Goes Commercial

In 1995, the National Science Foundation (NSF) decided that the Internet had become much more than a
network supporting scientific research. Thus, NSF decided that it would not continue to upgrade and
support the Internet backbone.

Instead, NSF turned the responsibility over to a number of commercial companies. The plan: to have
multiple, commercially owned backbones that would all converge at key network access points (NAPs)—
initially in San Francisco, Chicago, New Jersey, and Washington, D.C.

This not only changed how the Internet backbone was administered: it also changed the entire culture of
the Internet.

Until 1995, the Internet backbone was a project of the U.S. government (first ARPA, then NSF), which
meant that no commercial traffic was allowed on the Internet. You could not buy or sell over the
Internet. You could not advertise your company.

Secondly, until 1995, the only way to access the Internet was via a college, university, or other non-profit
institution that met the NSF criteria. Thus, although networks such as Compuserve and America Online
existed, these were not connected to the Internet: you could have an account on America Online, but you
could access only AOL online material and send e-mail only to other AOL customers.

The culture of the pre-1995 Internet was largely an academic one; most users had some ties to a college or
university. This had some advantages (e.g., there was no such thing as anonymity; you could always find
out who the "real person" was behind an e-mail address). But overall, it would be fair to say that the
pre-1995 Internet culture was somewhat elitist in comparison to today. It was for this reason that Bill Gates had
little interest in the Internet: he considered it largely a plaything of only the geekiest of college and
university students and professors—hardly a potentially lucrative market.

But everything changed in 1995. When the NSF privatized the Internet backbone system, there was no
longer any valid prohibition of commercial traffic on the Internet. Suddenly, businesses were popping up
online, and professors on e-mail discussion lists started seeing e-mail messages sent from "newbies" with
e-mail addresses that ended in "aol.com."

Was the party over? Or had it just begun?


To the Present
 1996: 15 million hosts
 1997: 20 million hosts
 1998: 30 million hosts
 1999: 50 million hosts
 2000: 70 million hosts
 2001: 110 million hosts
 2002: 150 million hosts
 2003: 175 million hosts
 2004: 233 million hosts
 2005: 318 million hosts

The Internet continues to grow at a phenomenal rate.

The Internet backbone now consists of dozens of backbones worldwide.

The registration of domain names and IP addresses was formerly done by a single, U.S. organization:
InterNIC. However, this task is now handled by commercial registrars worldwide such as Verisign.

But who manages the Internet? Who keeps it running?

Internet Administration
Today, the Internet is a truly global and cooperative venture. The Internet Society
(ISOC) is an international non-profit organization charged with overseeing the
Internet. The ISOC is comprised of representatives from a great variety of
companies, institutions, and agencies that contribute to some piece of the Internet
and thus have an interest in the viability of the Internet.

It's truly impressive, isn't it? The entire world can come together and cooperate to support this truly global
infrastructure. In some ways, it should be surprising that the Internet works at all, let alone works so well!

The Future: Internet 2


The Internet2 ("I2") is a very high-speed section of the Internet, designed for researching
new possibilities for the Internet of tomorrow.

Access to I2 is restricted to participants in the project, which currently includes over 200
U.S. universities. Thus, in several respects, the Internet2 project marks a return to the
original purpose of the Internet as a high-speed, private research network (ARPANET and NSFNet).

At the heart of the Internet2 project are very high-speed backbones, one of
which is the Abilene backbone, which supports transfer rates that are
approximately 100 times the speed of the standard Internet. I2 backbones are
measured in billions of bits—gigabits—per second, and the access points to
these backbones are called gigapops.

The I2 project is exploring such high-bandwidth projects as high-quality
streaming video and audio, including telephony and HDTV. For example,
through the Internet2 network, astronomers from around the world are able
to control and peek through the Keck Telescopes on the Mauna Kea summit
on the island of Hawaii.

Just how fast is the I2? Here's how long it would take for various
levels of bandwidth to download the entire DVD of the motion picture
"The Matrix":

56K modem: 171 hours

Broadband (DSL or cable modem): 25 hours

T-1 connection: 6.4 hours

Internet2: 30 seconds
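Figures like these can be reproduced, roughly, with a back-of-the-envelope calculation. The DVD size assumed below (about 4.3 GB) is our estimate, not a figure from the chapter, and protocol overhead is ignored:

```python
# Rough download times for a feature-film DVD at various bandwidths.
# Assumption: a DVD image of about 4.3 gigabytes.
DVD_BITS = 4.3e9 * 8  # 4.3 GB expressed in bits

def hours_to_download(bits_per_second):
    """Hours needed to move DVD_BITS at the given bandwidth."""
    return DVD_BITS / bits_per_second / 3600

for name, bps in [("56K modem", 56_000),
                  ("T-1", 1_544_000),
                  ("Internet2 (1 Gbps)", 1_000_000_000)]:
    print(f"{name}: {hours_to_download(bps):.2f} hours")
```

At 56,000 bits per second this works out to about 171 hours, matching the table; the other entries come out in the same ballpark, with the exact values depending on the assumed DVD size.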
