
1.

The Brief History of the Internet

Today we all take for granted that we have instant access to the Internet, whether we are at home or at work. We can even browse the web on the go thanks to cell phones. It obviously hasn't always been like this, and to help clarify the progress of the Internet we have written a timeline of the most important events. Although the Internet has only a brief history, it is a very interesting one, especially because things have happened so fast. What seemed impossible some 20 years ago is now a reality for all web users.

1950's

1957 - It was this year that the USSR launched 'Sputnik', the first artificial earth satellite. In reply, the United States formed the Advanced Research Projects Agency (ARPA) within the Department of Defense (DoD) to establish a US lead in science and technology applicable to the military. Backbones: None - Hosts: None

1960's

1962 - The U.S. Air Force commissioned Paul Baran of the RAND Corporation (a nonprofit research organization) to study how it could maintain command and control over its missiles and bombers after a nuclear attack. The goal was a military research network that could survive a nuclear strike. It was to be decentralized, so that if any locations in the U.S. were attacked, the military could still control its nuclear arms for a counter-attack. Baran's completed document described several ways to achieve this. His final proposal was a packet-switched network: "Packet switching is the breaking down of data into datagrams or packets that are labeled to indicate the origin and the destination of the information and the forwarding of these packets from one computer to another computer until the information arrives at its final destination computer. This was crucial to the realization of a computer network. If packets are lost at any given point, the message can be resent by the originator."
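Baran's packet-switching idea can be sketched in a few lines of Python. This is purely a hypothetical illustration (the function and field names are invented, and real networks work on bits, not strings): a message is broken into labeled packets, lost packets are resent by the originator, and the destination reassembles the message.

```python
import random

def packetize(message, size, src, dst):
    """Split a message into packets labeled with origin, destination
    and a sequence number so the receiver can reorder them."""
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return [{"src": src, "dst": dst, "seq": n, "data": chunk}
            for n, chunk in enumerate(chunks)]

def send(packets, loss_rate=0.3):
    """Simulate an unreliable network that drops some packets."""
    return [p for p in packets if random.random() > loss_rate]

def transfer(message, size=4):
    """Keep resending until the destination has every packet."""
    packets = packetize(message, size, src="A", dst="B")
    received = {}
    while len(received) < len(packets):
        for p in send(packets):
            received[p["seq"]] = p["data"]
    # Reassemble the message in sequence order at the destination.
    return "".join(received[n] for n in sorted(received))

print(transfer("survivable military network"))
```

Because every packet carries its own origin, destination and sequence label, no single link or node is indispensable, which is exactly the survivability property Baran was after.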
Backbones: None - Hosts: None

1968 - ARPA awarded the ARPANET contract to BBN. BBN chose a Honeywell minicomputer as the base on which to build the switch. In 1969 the actual, physical network was constructed, linking four nodes: the University of California at Santa Barbara, the University of Utah, the University of California at Los Angeles, and the Stanford Research Institute (SRI). The network was wired together via 50 Kbps circuits. Backbones: 50Kbps ARPANET - Hosts: 4

1970's

1972 - Ray Tomlinson of BBN created the first e-mail program. The Advanced Research Projects Agency (ARPA) was renamed the Defense Advanced Research Projects Agency (DARPA). ARPANET was then using the Network Control Protocol (NCP) to transfer data, allowing communications between hosts running on the same network. Backbones: 50Kbps ARPANET - Hosts: 23

1973 - DARPA began development on the protocol later called TCP/IP. It was developed by a group headed by Vinton Cerf from Stanford and Bob Kahn from DARPA. This new protocol was to allow varied computer networks to interconnect and communicate with each other. Backbones: 50Kbps ARPANET - Hosts: 23+

1974 - The term Internet was first used by Vint Cerf and Bob Kahn in a paper on the Transmission Control Protocol. Backbones: 50Kbps ARPANET - Hosts: 23

1976 - Ethernet was developed by Dr. Robert M. Metcalfe, allowing coaxial cable to move data extremely fast. This was a crucial component in the development of LANs. The packet satellite project was put to practical use, and the Atlantic packet satellite network, SATNET, was born. It was this network that linked the United States with Europe. Surprisingly, it used INTELSAT satellites that were owned by a consortium of countries, not exclusively by the United States government. UUCP (Unix-to-Unix CoPy) was developed at AT&T Bell Labs and distributed with UNIX one year later. The Department of Defense began to experiment with the TCP/IP protocol and soon decided to require it for use on ARPANET. Backbones: 50Kbps ARPANET, plus satellite and radio connections - Hosts: 111+

1979 - USENET (the decentralized news group network) was developed by Steve Bellovin, a graduate student at the University of North Carolina, along with programmers Tom Truscott and Jim Ellis. It was based on UUCP. IBM's creation of BITNET ("Because It's Time Network") introduced the "store and forward" network, used for e-mail and listservs. Backbones: 50Kbps ARPANET, plus satellite and radio connections - Hosts: 111+

1980's

1981 - The National Science Foundation created a 56 Kbps backbone network called CSNET for institutions without access to ARPANET. Vinton Cerf proposed a plan for an inter-network connection between CSNET and ARPANET. Backbones: 50Kbps ARPANET, 56Kbps CSNET, plus satellite and radio connections - Hosts: 213

1983 - This year saw the creation of the Internet Activities Board (IAB). On January 1st, every machine connected to ARPANET had to use TCP/IP; NCP was replaced entirely, and TCP/IP became the core Internet protocol. The Domain Name System (DNS) was created, with the University of Wisconsin developing the first name server.
This allowed packets to be directed to a domain name, which would be translated by the server database into the corresponding IP number. This made it much easier for people to access other servers, as there was no need to remember numbers. Backbones: 50Kbps ARPANET, 56Kbps CSNET, plus satellite and radio connections - Hosts: 562

1984 - The ARPANET was divided into two networks, ARPANET and MILNET, and the Department of Defense continued to support both. ARPANET was to support the advanced research component, while MILNET was to serve the needs of the military. MCI was given the contract to upgrade CSNET. The new circuits would be T1 lines running at 1.5 Mbps, twenty-five times faster than the old 56 Kbps lines. IBM was to provide advanced routers, and Merit was to manage the network. The new network was to be called NSFNET (National Science Foundation Network); the old lines would continue to be called CSNET. Backbones: 50Kbps ARPANET, 56Kbps CSNET, plus satellite and radio connections - Hosts: 1024

1985 - The National Science Foundation began deploying its new T1 lines, which were finished by 1988. Backbones: 50Kbps ARPANET, 56Kbps CSNET, 1.544Mbps (T1) NSFNET, plus satellite and radio connections - Hosts: 1961

1986 - The Internet Engineering Task Force (IETF) was established to serve as a forum for technical coordination by contractors for DARPA working on ARPANET, the US Defense Data Network (DDN), and the Internet core gateway system. Backbones: 50Kbps ARPANET, 56Kbps CSNET, 1.544Mbps (T1) NSFNET, plus satellite and radio connections - Hosts: 2308

1987 - BITNET and CSNET merged to form the Corporation for Research and Educational Networking (CREN), another work of the National Science Foundation. Backbones: 50Kbps ARPANET, 56Kbps CSNET, 1.544Mbps (T1) NSFNET, plus satellite and radio connections - Hosts: 28,174

1988 - Soon after the completion of the T1 NSFNET backbone, traffic increased so quickly that the network had to be upgraded again almost immediately. Backbones: 50Kbps ARPANET, 56Kbps CSNET, 1.544Mbps (T1) NSFNET, plus satellite and radio connections - Hosts: 56,000

1990's

1990 - IBM, Merit and MCI formed a non-profit corporation called ANS (Advanced Network & Services) to conduct research into high-speed networking. It soon came up with the idea of the T3, a 45 Mbps line. NSF quickly adopted the new network, and by the end of 1991 all of its sites were connected by this new backbone. While the T3 lines were being constructed, the Department of Defense disbanded the ARPANET and replaced it with the NSFNET backbone. The original 50Kbps lines of ARPANET were taken out of service. Tim Berners-Lee at CERN in Geneva implemented a hypertext system to provide efficient information access to the members of the international high-energy physics community. Backbones: 56Kbps CSNET, 1.544Mbps (T1) NSFNET, plus satellite and radio connections - Hosts: 313,000

1991 - CSNET (which consisted of 56Kbps lines) was discontinued after fulfilling its important early role in providing academic networking service. A key feature of CREN is that its operational costs are fully met through dues paid by its member organizations. The NSF established a new network, NREN (the National Research and Education Network), whose objective was to conduct high-speed networking research. It was not to be used as a commercial network, nor to carry the bulk of the data that the Internet now transfers.
Backbones: Partial 45Mbps (T3) NSFNET, a few private backbones, plus satellite and radio connections - Hosts: 617,000

1992 - The Internet Society was chartered, and the World-Wide Web was released by CERN. The NSFNET backbone was upgraded to T3 (44.736Mbps). Backbones: 45Mbps (T3) NSFNET, private interconnected backbones consisting mainly of 56Kbps and 1.544Mbps lines, plus satellite and radio connections - Hosts: 1,136,000

1993 - The InterNIC was created by NSF to provide specific Internet services: directory and database services (by AT&T), registration services (by Network Solutions Inc.), and information services (by General Atomics/CERFnet). Marc Andreessen at NCSA, University of Illinois, developed a graphical user interface to the WWW called "Mosaic for X". The search engine Lycos was created as a university project. Backbones: 45Mbps (T3) NSFNET, private interconnected backbones consisting mainly of 56Kbps, 1.544Mbps, and 45Mbps lines, plus satellite and radio connections - Hosts: 2,056,000

1994 - This year saw no major changes to the physical network; growth was the most important development. Many new networks were added to the NSF backbone, and hundreds of thousands of new hosts were added to the Internet during this period. Pizza Hut offered pizza ordering on its Web page. First Virtual, the first cyberbank, opened. An ATM (Asynchronous Transfer Mode, 145Mbps) backbone was installed on NSFNET. WebCrawler, the first full-text search engine, was created. Backbones: 145Mbps (ATM) NSFNET, private interconnected backbones consisting mainly of 56Kbps, 1.544Mbps, and 45Mbps lines, plus satellite and radio connections - Hosts: 3,864,000

1995 - The National Science Foundation discontinued direct access to the NSF backbone on April 30, 1995. The National Science Foundation contracted with four companies that would be

providers of access to the NSF backbone (Merit). These companies would then sell connections to groups, organizations, and companies. A $50 annual fee was imposed on domains, excluding .edu and .gov domains, which were still funded by the National Science Foundation. Industry leaders, at least at the time, Yahoo! and AltaVista were founded. Backbones: 145Mbps (ATM) NSFNET (now private), private interconnected backbones consisting mainly of 56Kbps, 1.544Mbps, and 45Mbps lines, with 155Mbps lines under construction, plus satellite and radio connections - Hosts: 6,642,000

1996 - Most Internet traffic was carried by backbones of independent ISPs, including MCI, AT&T, Sprint, UUNet, BBN Planet, ANS, and more. The Internet Society, the group that oversees the Internet, was trying to work out a new version of TCP/IP able to support billions of addresses, rather than the limited system of the day. The problem was that it was not known how the old and the new addressing systems would be able to work at the same time during a transition period. Backbones: 145Mbps (ATM) NSFNET (now private), private interconnected backbones consisting mainly of 56Kbps, 1.544Mbps, 45Mbps, and 155Mbps lines, plus satellite and radio connections - Hosts: over 15,000,000, and growing rapidly

2000's

The early 2000s are closely associated with the dot-com bubble, which created a real stir in the whole web industry. Reaching an audience of millions was suddenly possible at low cost, and this made opportunists and venture capitalists go crazy. Many of these people were truly talented, but the majority were simply people with ideas and not much else who thought they could make some quick and easy cash. When the big companies with already strong brand names started launching their own sites, however, many people's hopes were shattered: they simply lacked the ability to compete with these businesses. The bubble burst in March 2000, and by 2001 the deflation was at full speed.
Many lost all of their capital without ever having made any profit. A report by JupiterResearch shows that 1.1 billion people currently have regular access to the Internet. The same report anticipates that the number of people with online access will increase by 38 percent between 2006 and 2011. One can only say that the future of the web is looking bright.

2. World Wide Web

From Wikipedia, the free encyclopedia

Not to be confused with the Internet.

The World Wide Web (abbreviated as WWW or W3 and commonly known as the Web) is a system of interlinked hypertext documents accessed via the Internet. With a web browser, one can view web pages that may contain text, images, videos, and other multimedia, and navigate between them via hyperlinks. Using concepts from earlier hypertext systems, British engineer and computer scientist Sir Tim Berners-Lee, now the Director of the World Wide Web Consortium, wrote a proposal in March 1989 for what would eventually become the World Wide Web.[1] At CERN in Geneva, Switzerland, Berners-Lee and Belgian computer scientist Robert Cailliau proposed in 1990 to use "HyperText ... to link and access information of various kinds as a web of nodes in which the user can browse at will",[2] and publicly introduced the project in December.[3] "The World-Wide Web was developed to be a pool of human knowledge, and human culture, which would allow collaborators in remote sites to share their ideas and all aspects of a common project."[4]

3. Uploading and downloading

From Wikipedia, the free encyclopedia

In computer networks, to download means to receive data to a local system from a remote system, or to initiate such a data transfer. Examples of remote systems from which a download might be performed include a webserver, FTP server, e-mail server, or other similar systems. A download can mean either any file that is offered for downloading or that has been downloaded, or the process of receiving such a file. It has become common to confuse downloading with installing, or to combine the two incorrectly. The inverse operation, uploading, can refer to the sending of data from a local system to a remote system, such as a server or another client, with the intent that the remote system should store a copy of the data being transferred, or to the initiation of such a process. The terms first came into popular usage among computer users with the increased popularity of Bulletin Board Systems (BBSs), facilitated by the widespread distribution and implementation of dial-up access in the 1970s.

Download

A symbol for downloading to a hard drive.

The terms uploading and downloading often imply that the data sent or received is to be stored permanently, or at least more than temporarily. Downloading is distinguished from the related concept of streaming, which denotes the receiving of data that is used nearly immediately as it is received, while the transmission is still in progress, and which may not be stored long-term; a download, by contrast, is generally usable only once it has been received in its entirety. Increasingly, websites that offer streaming media or media displayed in-browser, such as YouTube, and which restrict the ability of users to save these materials to their computers after they have been received, say that downloading is not permitted.[1] In this context, "download" implies specifically "receive and save" instead of simply "receive". Note also that "downloading" is not the same as "transferring": sending/receiving data between two storage devices is a transferral of data, whereas receiving data from the Internet is considered a download of data.

Sideload

When applied to local transfers (sending data from one local system to another local system), it is often difficult to decide whether it is an upload or a download, as both source and destination are under the local control of the user. Technically, if the user uses the receiving device to initiate the transfer, it is a download, and if they use the sending device to initiate it, it is an upload. However, as most non-technical users use the term download to refer to any data transfer, the term "sideload" is sometimes used to cover all local-to-local transfers and end this confusion.

Remote upload

When data is transferred from one remote system to another remote system, the process is called "remote uploading".
This is used by some online file hosting services.

Remote uploading is also used in situations where the computers that need to share data are located on a distant high-speed local area network, and the remote control is being performed using a comparatively slow dial-up modem connection.

4. Web page

From Wikipedia, the free encyclopedia

A web page or webpage is a document or information resource that is suitable for the World Wide Web and can be accessed through a web browser and displayed on a monitor or mobile device. This information is usually in HTML or XHTML format, and may provide navigation to other web pages via hypertext links. Web pages frequently subsume other resources, such as style sheets, scripts and images, into their final presentation. Web pages may be retrieved from a local computer or from a remote web server. The web server may restrict access to a private network, e.g. a corporate intranet, or it may publish pages on the World Wide Web. Web pages are requested and served from web servers using the Hypertext Transfer Protocol (HTTP). Web pages may consist of files of static text and other content stored within the web server's file system (static web pages), or may be constructed by server-side software when they are requested (dynamic web pages). Client-side scripting can make web pages more responsive to user input once in the client browser.

5. Web site

A Web site is a related collection of World Wide Web (WWW) files that includes a beginning file called a home page. A company or an individual tells you how to get to their Web site by giving you the address of their home page. From the home page, you can get to all the other pages on their site. For example, the Web site for IBM has the home page address of http://www.ibm.com. (The home page address actually includes a specific file name like index.html but, as in IBM's case, when a standard default name is set up, users don't have to enter the file name.) IBM's home page address leads to thousands of pages, but a Web site can also be just a few pages. Since "site" implies a geographic place, a Web site can be confused with a Web server. A server is a computer that holds the files for one or more sites. A very large Web site may be spread over a number of servers in different geographic locations. IBM is a good example; its Web site consists of thousands of files spread out over many servers in locations world-wide. A more typical example is probably the site you are looking at, whatis.com, which resides on a commercial space provider's server along with a number of other sites that have nothing to do with Internet glossaries. A synonym, and less frequently used term, for Web site is "Web presence." That term seems to better express the idea that a site is not tied to a specific geographic location but is "somewhere in cyberspace." However, "Web site" is used much more frequently.

You can have multiple Web sites that cross-link to files on each other's sites or even share the same files.

6. Web server

Web servers are computers on the Internet that host websites, serving pages to viewers upon request. This service is referred to as web hosting. Every web server has a unique address so that other computers connected to the Internet know where to find it on the vast network. The Internet Protocol (IP) address looks something like this: 69.93.141.146. This address maps to a more human-friendly address, such as http://www.wisegeek.com. Web hosts rent out space on their web servers to people or businesses to set up their own websites. The web server allocates a unique website address to each website it hosts. When someone connects to the Internet, his personal computer also receives a unique IP address assigned by his Internet service provider (ISP). This address identifies the computer's location on the network. When he clicks on a link to visit a website, like www.wisegeek.com, his browser sends out a request to wiseGEEK's IP address. This request includes return information and functions like a postal letter sent across town, but in this case the information is transferred across a network. The communiqué passes through several computers on the way to wiseGEEK, each routing it closer to its ultimate destination.

7. Internet access

People connect to the Internet by various means; consumer access is mostly by broadband connections in the 21st century, whereas dial-up connections were more common in the 20th century.

8. What are the different activities we can do through the Internet?

Making money online is a hot topic today. Many people want to learn how to make money on the Internet. We have ourselves been creating income online for a long time.
This is what we want to show you how to do. There are a lot of different activities you can do on the Internet.

9. Internet Protocol (IP)

The Internet Protocol (IP) is the method or protocol by which data is sent from one computer to another on the Internet. Each computer (known as a host) on the Internet has at least one IP address that uniquely identifies it among all other computers on the Internet. When you send or receive data (for example, an e-mail note or a Web page), the message gets divided into little chunks called packets. Each of these packets contains both the sender's Internet address and the receiver's address. Each packet is sent first to a gateway computer that understands a small part of the Internet. The gateway computer reads the destination address and forwards the packet to an adjacent gateway, which in turn reads the destination address, and so forth across the Internet until one gateway recognizes the packet as
belonging to a computer within its immediate neighborhood, or domain. That gateway then forwards the packet directly to the computer whose address is specified. Because a message is divided into a number of packets, each packet can, if necessary, be sent by a different route across the Internet. Packets can arrive in a different order than the order they were sent in. The Internet Protocol just delivers them; it is up to another protocol, the Transmission Control Protocol (TCP), to put them back in the right order. IP is a connectionless protocol, which means that there is no continuing connection between the end points that are communicating. Each packet that travels through the Internet is treated as an independent unit of data, without any relation to any other unit of data. (The reason the packets do get put in the right order is TCP, the connection-oriented protocol that keeps track of the packet sequence in a message.) In the Open Systems Interconnection (OSI) communication model, IP is in layer 3, the Network Layer. The most widely used version of IP today is Internet Protocol Version 4 (IPv4). However, IP Version 6 (IPv6) is also beginning to be supported. IPv6 provides for much longer addresses and therefore for the possibility of many more Internet users. IPv6 includes the capabilities of IPv4, and any server that can support IPv6 packets can also support IPv4 packets.

10. TCP/IP (Transmission Control Protocol/Internet Protocol)

TCP/IP (Transmission Control Protocol/Internet Protocol) is the basic communication language or protocol of the Internet. It can also be used as a communications protocol in a private network (either an intranet or an extranet). When you are set up with direct access to the Internet, your computer is provided with a copy of the TCP/IP program, just as every other computer that you may send messages to or get information from also has a copy of TCP/IP. TCP/IP is a two-layer program.
The higher layer, Transmission Control Protocol, manages the assembling of a message or file into smaller packets that are transmitted over the Internet and received by a TCP layer that reassembles the packets into the original message. The lower layer, Internet Protocol, handles the address part of each packet so that it gets to the right destination. Each gateway computer on the network checks this address to see where to forward the message. Even though some packets from the same message are routed differently than others, they'll be reassembled at the destination.
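The reassembly the TCP layer performs can be sketched as follows (a simplified illustration with invented field names; real TCP tracks byte offsets, acknowledgements and retransmissions):

```python
def reassemble(packets):
    """Restore the original order of packets that arrived out of order,
    keyed by the sequence number each packet carries."""
    ordered = sorted(packets, key=lambda p: p["seq"])
    return b"".join(p["data"] for p in ordered)

# Packets may take different routes and arrive in any order.
arrived = [
    {"seq": 2, "data": b" the "},
    {"seq": 0, "data": b"packets"},
    {"seq": 3, "data": b"message"},
    {"seq": 1, "data": b" form"},
]
print(reassemble(arrived))  # b'packets form the message'
```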

TCP/IP uses the client/server model of communication, in which a computer user (a client) requests and is provided a service (such as sending a Web page) by another computer (a server) in the network. TCP/IP communication is primarily point-to-point, meaning each communication is from one point (or host computer) in the network to another point or host computer. TCP/IP and the higher-level applications that use it are collectively said to be "stateless" because each client request is considered a new request unrelated to any previous one (unlike ordinary phone conversations, which require a dedicated connection for the call duration). Being stateless frees network paths so that everyone can use them continuously. (Note that the TCP layer itself is not stateless as far as any one message is concerned: its connection remains in place until all packets in a message have been received.) Many Internet users are familiar with the even higher-layer application protocols that use TCP/IP to get to the Internet. These include the World Wide Web's Hypertext Transfer Protocol (HTTP), the File Transfer Protocol (FTP), Telnet, which lets you log on to remote computers, and the Simple Mail Transfer Protocol (SMTP). These and other protocols are often packaged together with TCP/IP as a "suite."
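The client/server model described above can be demonstrated with Python's standard library: a minimal HTTP server answers a client's request over the loopback interface. This is only a sketch; production web servers and clients are far more elaborate.

```python
import http.server
import threading
import urllib.request

class HelloHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # The server's response to a client's request.
        body = b"Hello from the server"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example quiet

# Bind to port 0 so the operating system picks a free port.
server = http.server.HTTPServer(("127.0.0.1", 0), HelloHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client requests a service and is served a response.
url = f"http://127.0.0.1:{server.server_port}/"
with urllib.request.urlopen(url) as reply:
    response = reply.read()
server.shutdown()
print(response)  # b'Hello from the server'
```

Note that the exchange is stateless in exactly the sense described above: each GET request is handled on its own, with no memory of earlier requests.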

Personal computer users with an analog phone modem connection to the Internet usually get to the Internet through the Serial Line Internet Protocol (SLIP) or the Point-to-Point Protocol (PPP). These protocols encapsulate the IP packets so that they can be sent over the dial-up phone connection to an access provider's modem. Protocols related to TCP/IP include the User Datagram Protocol (UDP), which is used instead of TCP for special purposes. Other protocols are used by network host computers for exchanging router information. These include the Internet Control Message Protocol (ICMP), the Interior Gateway Protocol (IGP), the Exterior Gateway Protocol (EGP), and the Border Gateway Protocol (BGP).

11. File Transfer Protocol

File Transfer Protocol (FTP) is a standard network protocol used to transfer files from one host to another over a TCP-based network, such as the Internet. FTP is built on a client-server architecture and utilizes separate control and data connections between the client and server.[1] FTP users may authenticate themselves using a clear-text sign-in protocol, but can connect anonymously if the server is configured to allow it. The first FTP client applications were interactive command-line tools, implementing standard commands and syntax. Graphical user interface clients have since been developed for many of the popular desktop operating systems in use today.[2][3]

12. HTTP

Short for HyperText Transfer Protocol, HTTP is the underlying protocol used by the World Wide Web. HTTP defines how messages are formatted and transmitted, and what actions Web servers and browsers should take in response to various commands. For example, when you enter a URL in your browser, this actually sends an HTTP command to the Web server directing it to fetch and transmit the requested Web page. The other main standard that controls how the World Wide Web works is HTML, which covers how Web pages are formatted and displayed.
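The message format HTTP defines can be made concrete by composing a request by hand. The sketch below builds a minimal HTTP/1.1 GET request (the host and path are placeholders); a browser sends essentially this text to the server when you enter a URL.

```python
def build_get_request(host, path):
    """Compose a minimal HTTP/1.1 GET request message.
    An HTTP request is lines of text separated by CRLF,
    with a blank line ending the header section."""
    lines = [
        f"GET {path} HTTP/1.1",  # request line: method, resource, version
        f"Host: {host}",         # the one header HTTP/1.1 requires
        "Connection: close",
        "",                      # blank line terminates the headers
        "",
    ]
    return "\r\n".join(lines)

request = build_get_request("www.example.com", "/index.html")
print(request)
```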
HTTP is called a stateless protocol because each command is executed independently, without any knowledge of the commands that came before it. This is the main reason it is difficult to implement Web sites that react intelligently to user input. This shortcoming of HTTP is being addressed in a number of technologies, including ActiveX, Java, JavaScript and cookies.

13. What is an IP Address?

An Internet Protocol address (IP address) is a unique address that a computing device uses to identify itself and communicate with other devices on an Internet Protocol network. Any device connected to the IP network must have a unique IP address within its network. An IP address is analogous to a street address or telephone number in that it is used to uniquely identify a network device in order to deliver a mail message, or to call ("view") a website.

14. IP characteristics

The IP protocol resides in the Internet layer. It is the protocol in the TCP/IP stack that is responsible for letting your machine, routers, switches, and so on know where a specific packet is going. This protocol is the very heart of the whole TCP/IP stack and makes up the very foundation of everything in the Internet.
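An IP address such as 69.93.141.146 is, underneath, a single 32-bit number. The conversion between dotted-quad notation and the raw bit field can be sketched in plain Python (for illustration only; real code would use the standard socket or ipaddress modules):

```python
def to_bits(dotted):
    """Render a dotted-quad IPv4 address as its 32-bit form."""
    octets = [int(part) for part in dotted.split(".")]
    return "".join(f"{o:08b}" for o in octets)  # 8 bits per octet

def to_dotted(bits):
    """Turn a 32-bit string back into dotted-quad notation."""
    return ".".join(str(int(bits[i:i + 8], 2)) for i in range(0, 32, 8))

print(to_bits("127.0.0.1"))    # 01111111000000000000000000000001
print(to_dotted(to_bits("69.93.141.146")))  # 69.93.141.146
```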

The IP protocol encapsulates the Transport layer packet with information about which Transport layer protocol it came from, what host it is going to, and where it came from, and a little bit of other useful information. All of this is, of course, extremely precisely standardized, down to every single bit. The same applies to every single protocol that we will discuss in this chapter. The IP protocol has a couple of basic functionalities that it must be able to handle. It must be able to define the datagram, which is the next building block created by the transport layer (this may in other words be TCP, UDP or ICMP for example). The IP protocol also defines the Internet addressing system that we use today. This means that the IP protocol is what defines how to reach between hosts, and this also affects how we are able to route packets, of course. The addresses we are talking about are what we generally call an IP address. Usually when we talk about IP addresses, we talk about dotted quad numbers (e.g., 127.0.0.1). This is mostly to make the IP addresses more readable for the human eye, since the IP address is actually just a 32 bit field of 1's and 0's (127.0.0.1 would hence be read as 01111111000000000000000000000001 within the actual IP header). 15. Domain name A domain name is an identification label that defines a realm of administrative autonomy, authority, or control in the Internet. Domain names are hostnames that identify Internet Protocol (IP) resources such as web sites. Domain names are formed by the rules and procedures of the Domain Name System (DNS). Domain names are used in various networking contexts and application-specific naming and addressing purposes. They are organized in subordinate levels (subdomains) of the DNS root domain, which is nameless. 
The first-level set of domain names are the top-level domains (TLDs), including the generic top-level domains (gTLDs), such as the prominent domains com, net and org, and the country code top-level domains (ccTLDs). Below these top-level domains in the DNS hierarchy are the second-level and third-level domain names, which are typically open for reservation by end-users who wish to connect local area networks to the Internet, create other publicly accessible Internet resources or run web sites. The registration of these domain names is usually administered by domain name registrars, who sell their services to the public.

Individual Internet host computers use domain names as host identifiers, or hostnames. Hostnames are the leaf labels in the domain name system, usually without further subordinate domain name space. Hostnames appear as a component in Uniform Resource Locators (URLs) for Internet resources such as web sites (e.g., en.wikipedia.org). Domain names are also used as simple identification labels to indicate ownership or control of a resource. Examples are the realm identifiers used in the Session Initiation Protocol (SIP), the DomainKeys used to verify DNS domains in e-mail systems, and many other Uniform Resource Identifiers (URIs).

An important purpose of domain names is to provide easily recognizable and memorizable names for numerically addressed Internet resources. This abstraction allows any resource (e.g., a website) to be moved to a different physical location in the address topology of the network, globally or locally in an intranet. Such a move usually requires changing the IP address of the resource and the corresponding translation between this IP address and its domain name. Domain names are often referred to simply as domains, and domain name registrants are frequently referred to as domain owners, although domain name registration with a registrar does not confer any legal ownership of the domain name, only an exclusive right of use.
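The hierarchy just described, from the top-level domain down to the leaf hostname label, can be made concrete with a short sketch (the `dns_hierarchy` helper is illustrative, not a standard library function):

```python
def dns_hierarchy(hostname):
    """List the domains a hostname belongs to, from the TLD down to the full name.

    The DNS root itself is nameless, so it contributes no label here.
    """
    labels = hostname.split(".")
    # Build ever-longer suffixes: org -> wikipedia.org -> en.wikipedia.org
    return [".".join(labels[i:]) for i in range(len(labels) - 1, -1, -1)]

print(dns_hierarchy("en.wikipedia.org"))
# ['org', 'wikipedia.org', 'en.wikipedia.org']
```

Here "org" is the top-level domain, "wikipedia.org" the second-level domain registered with a registrar, and "en" the leaf hostname label.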
The Internet Corporation for Assigned Names and Numbers (ICANN) manages the top-level development and architecture of the Internet domain name space. It authorizes domain name registrars, through which domain names may be registered and reassigned. The use of domain names in commerce may subject strings in them to trademark law. In 2010, the number of active domains reached 196 million.[1]

16. What Are the Different Parts of an Email Address?

By Drew Somers, eHow Contributor

The @ symbol is the central part of an email address. Email is so commonly used in business, education and personal life that it's easy to take it for granted and not think about just how your messages get where they're supposed to go. Every email address is unique and made up of three parts designed to tell the Internet how to route the mail so that it reaches your inbox.

User Name

The first part of an email address is the user name, which identifies you personally on the mail server that you use. Each user name on a server must be different and consists of letters, numbers or special characters such as underscores or periods. Your user name might be your first initial and last name, a business name or anything else you want to use to identify yourself on the Internet.

@ Symbol

The symbol "@", called the "at" symbol, connects the user name of an email address to the mail server, or domain. It tells the Internet that your user name can be found at that domain.

Domain Name

The domain name in an email address appears after the @ symbol and identifies the Internet domain that handles your email. It can be further broken down into two parts: the name of the computer or server that handles the mail and the top-level domain, often "com," "gov" or "edu," which stand for commercial business, government agency and educational institution, respectively, according to St. Edward's University. An example of a domain name is "ehow.com."
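The three parts described above can be pulled apart mechanically. This is a minimal sketch (the `split_email` function and the sample address are illustrative, and real address validation is considerably more involved):

```python
def split_email(address):
    """Split an email address into its three parts: user name, '@', domain."""
    user, at, domain = address.partition("@")
    if not user or at != "@" or not domain:
        raise ValueError(f"not a valid email address: {address!r}")
    return user, at, domain

user, at, domain = split_email("j.smith@ehow.com")
print(user)                       # j.smith   -- the user name
print(domain)                     # ehow.com  -- the domain name
print(domain.rsplit(".", 1)[1])   # com       -- the top-level domain
```

`str.partition` splits on the first "@", so user names containing dots or underscores are handled naturally.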

JELLIE VAINE CASELA
