
Networking Basics:

Part 1 - Networking Hardware


Published: Aug 10, 2006 Author: Brien M. Posey

In this article series, I will start with the absolute basics, and work toward building a
functional network. In this article I will begin by discussing some of the various
networking components and what they do.

In the past, all of the articles that I have written for this Web site have been intended
for use by administrators with at least some level of experience. Recently though,
there have been requests for articles targeted toward those who are just getting started
with networking and that have absolutely no experience at all. This article will be the
first in a series targeted toward novices. In this article series, I will start with the
absolute basics, and work toward building a functional network. In this article I will
begin by discussing some of the various networking components and what they do.

Network Adapters
The first piece of hardware that I want to discuss is a network adapter. There are
many different names for network adapters, including network card, Network
Interface Card, and NIC. These are all generic terms for the same piece of hardware. A
network card’s job is to physically attach a computer to a network, so that the
computer can participate in network communications.

The first thing that you need to know about network cards is that the network card has
to match the network medium. The network medium refers to the type of cabling that
is being used on the network. Wireless networks are a science all their own, and I will
talk about them in a separate article.

At one time making sure that a network card matched the network medium was a
really big deal, because there were a large number of competing standards in
existence. For example, before you built a network and started buying network cards
and cabling, you had to decide if you were going to use Ethernet, coaxial Ethernet,
Token Ring, ARCNET, or one of the other networking standards of the time. Each
networking technology had its strengths and weaknesses, and it was important to
figure out which one was the most appropriate for your organization.

Today, most of the networking technologies that I mentioned above are quickly
becoming extinct. Pretty much the only type of wired network used by small and
medium sized businesses is Ethernet. You can see an example of an Ethernet network
card, shown in Figure A.
Figure A: This is what an Ethernet card looks like

Modern Ethernet networks use twisted pair cabling containing eight wires. These
wires are arranged in a special order, and an RJ-45 connector is crimped onto the end
of the cable. An RJ-45 connector looks like the connector on the end of a phone cord, but
it’s bigger. Phone cords use RJ-11 connectors as opposed to the RJ-45 connectors
used by Ethernet cable. You can see an example of an Ethernet cable with an RJ-45
connector, shown in Figure B.
Figure B: This is an Ethernet cable with an RJ-45 connector installed

Hubs and Switches


As you can see, computers use network cards to send and receive data. The data is
transmitted over Ethernet cables. However, you normally can’t just run an Ethernet
cable between two PCs and call it a network.

In this day and age of high speed Internet access being almost universally available,
you tend to hear the term broadband thrown around a lot. Broadband signaling carries
multiple signals over the same wire at different frequencies. Ethernet, in contrast,
uses baseband signaling, in which a single signal gets the cable to itself, and on
twisted pair Ethernet separate wire pairs are used for sending and for receiving data.
What this means is that if one PC is sending data over its transmit wires, those wires
need to be connected to the receive wires of the PC on the other end.

You can actually network two PCs together in this way by creating what is
known as a crossover cable. A crossover cable is simply a network cable that has the
sending and receiving wires reversed at one end, so that two PCs can be linked
directly together.

The problem with using a crossover cable to build a network is that the network is
limited to exactly two PCs. Rather than using a crossover
cable, most networks use normal Ethernet cables that do not have the sending and
receiving wires reversed at one end.

Of course the sending and receiving wires have to be reversed at some point in order
for communications to succeed. This is the job of a hub or a switch. Hubs are starting
to become extinct, but I want to talk about them anyway because it will make it easier
to explain switches later on.

There are different types of hubs, but generally speaking a hub is nothing more than a
box with a bunch of RJ-45 ports. Each computer on a network would be connected to
a hub via an Ethernet cable. You can see a picture of a hub, shown in Figure C.

Figure C: A hub is a device that acts as a central connection point for computers on a
network

A hub has two different jobs. Its first job is to provide a central point of connection
for all of the computers on the network. Every computer plugs into the hub (multiple
hubs can be daisy chained together if necessary in order to accommodate more
computers).

The hub's other job is to arrange the ports in such a way that if a PC transmits data,
the data is sent over the other computers' receive wires.

Right now you might be wondering how data gets to the correct destination if more
than two PCs are connected to a hub. The secret lies in the network card. Each
Ethernet card is programmed at the factory with a unique Media Access Control
(MAC) address. When a computer transmits data across an
Ethernet network containing PCs connected to a hub, the data is actually sent to every
computer on the network. As each computer receives the data, it compares the
destination address to its own MAC address. If the addresses match then the computer
knows that it is the intended recipient, otherwise it ignores the data.
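
If it helps to picture this in code, here is a rough Python sketch of the decision that each network card makes. The MAC address shown is made up, and a real card performs this filtering in hardware rather than in software, but the logic is the same idea:

```python
# A made-up MAC address, standing in for the one burned into the card at the factory.
MY_MAC = "00:1A:4B:16:01:59"
BROADCAST = "FF:FF:FF:FF:FF:FF"

def handle_frame(destination_mac, payload):
    """Keep a frame only if it is addressed to this card (or to everyone)."""
    if destination_mac in (MY_MAC, BROADCAST):
        return payload      # we are the intended recipient
    return None             # addressed to another computer, so ignore it
```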

As you can see, when computers are connected via a hub, every packet gets sent to
every computer on the network. The problem is that any computer can send a
transmission at any given time. Have you ever been on a conference call and
accidentally started to talk at the same time as someone else? This is the same thing
that happens on this type of network.

When a PC needs to transmit data, it checks to make sure that no other computers are
sending data at the moment. If the line is clear, it transmits the necessary data. If
another computer tries to communicate at the same time though, then the packets of
data that are traveling across the wire collide and are destroyed (this is why this type
of network is sometimes referred to as a collision domain). Both PCs then have to
wait for a random amount of time and attempt to retransmit the packet that was
destroyed.
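
Here is a simplified Python sketch of that retry behavior. The transmit function is hypothetical, and real Ethernet hardware handles all of this on the card itself, but it shows the basic "wait a random amount of time and try again" idea (the 51.2 microsecond slot time comes from classic 10 Mbps Ethernet):

```python
import random
import time

SLOT_TIME = 0.0000512   # 51.2 microseconds, the slot time on classic 10 Mbps Ethernet

def send_with_backoff(transmit, max_attempts=16):
    """Keep retrying after collisions, waiting a random number of slot times each try."""
    for attempt in range(1, max_attempts + 1):
        if transmit():                   # hypothetical callback; False means a collision occurred
            return True
        slots = random.randint(0, 2 ** min(attempt, 10) - 1)
        time.sleep(slots * SLOT_TIME)    # random wait before retransmitting
    return False                         # give up after too many collisions
```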

As the number of PCs on a collision domain increases, so does the number of
collisions. As the number of collisions increases, network efficiency decreases. This
is why switches have almost completely replaced hubs.

A switch, such as the one shown in Figure D, performs all of the same basic tasks as a
hub. The difference is that when a PC on the network needs to communicate with
another PC, the switch uses a set of internal logic circuits to establish a dedicated,
logical path between the two PCs. What this means is that the two PCs are free to
communicate with each other, without having to worry about collisions.

Figure D: A switch looks a lot like a hub, but performs very differently

Switches greatly improve a network’s efficiency. Yes, they eliminate collisions, but
there is more to it than that. Because of the way that switches work, they can establish
parallel communications paths. For example, just because computer A is
communicating with computer B, there is no reason why computer C can’t
simultaneously communicate with computer D. In a collision domain, these types of
parallel communications would be impossible because they would result in collisions.
Conclusion
In this article, I have discussed some of the basic components that make up a simple
network. In Part 2, I will continue the discussion of basic networking hardware.

Part 2 - Routers
Published: Oct 04, 2006

This article continues the discussion of networking hardware by talking about one of
the most important networking components: routers.

In the first part of this article series, I talked about some basic networking hardware
such as hubs and switches. In this article, I want to continue the discussion of
networking hardware by talking about one of the most important networking
components: routers.

Even if you are new to networking, you have probably heard of routers. Broadband
Internet connections, such as those utilizing a cable modem or a DSL modem, almost
always require a router. A router's job isn't to provide Internet connectivity though. A
router's job is to move packets of data from one network to another. There are actually
many different types of routers ranging from simple, inexpensive routers used for
home Internet connectivity to the insanely expensive routers used by giant
corporations. Regardless of a router’s cost or complexity, routers all work on the same
basic principles.

That being the case, I'm going to focus my discussion around simple, low budget
routers that are typically used to connect a PC to a broadband Internet connection. My
reason for doing so is that this article series is intended for beginners. In my opinion,
it will be a lot easier to teach you the basics if I am referencing something that is at
least somewhat familiar to most people, and that is not as complicated as many of the
routers used within huge corporations. Besides, the routers used in corporations work
on the same basic principles as the routers that I will be discussing in this article. If
you want a greater level of knowledge though, don't worry. I will talk about
the science of routing in a whole lot more detail later in this article series.

As I explained earlier, a router's job is to move packets of data from one network to
another. This definition might seem strange in the context of a PC that's connected to
a broadband Internet connection. If you stop and think about it, the Internet is a
network (actually it's a collection of networks, but that's beside the point).

So if a router's job is to move traffic between two networks, and the Internet is one of
those networks, where is the other one? In this particular case, the PC that is
connected to the router is actually configured as a very simple network.
To get a better idea of what I am talking about, take a look at the pictures shown in
Figures A and B. Figure A shows the front of a 3COM broadband router, while Figure
B shows the back view of the same router.

Figure A: This is the front view of a 3COM broadband router

Figure B: A broadband Internet router contains a set of RJ-45 ports just like a hub or switch

As you can see in the figures, there is nothing especially remarkable about the front
view of the router. I wanted to include this view anyway though, so that those of you
who are unfamiliar with routers can see what a router looks like. Figure B is much
more interesting.

If you look at Figure B, you’ll see that there are three sets of ports on the back of the
router. The port on the far left is where the power supply connects to the router. The
middle port is an RJ-45 port used to connect to the remote network. In this particular
case, this router is intended to provide Internet connectivity. As such, this middle port
would typically be used to connect the router to a cable modem or to a DSL
modem. The modem in turn would provide the actual connectivity to the Internet.

If you look at the set of ports on the far right, you’ll see that there are four RJ-45
ports. If you think back to the first part of this article series, you’ll recall that hubs and
switches also contained large groups of RJ-45 ports. In the case of a hub or switch, the
RJ-45 ports are used to provide connectivity to the computers on the network.

These ports work the exact same way on this router. This particular router has a four
port switch built in. Remember earlier when I said that a router’s job was to move
packets between one network and another? I explained that in the case of a broadband
router, the Internet represents one network, and the PC represents the second
network. The reason a single computer can represent an entire network is
that the router does not treat the PC as a standalone device. Routers treat the PC
as a node on a network. As you can see from the photo in Figure B, this particular
router could actually accommodate a network of four PCs. It's just that most home
users who use this type of configuration only plug one PC into the router. Therefore a
more precise explanation would be that this type of router routes packets of data
between a small network (even if that network only consists of a single computer) and
the Internet (which it treats as a second network).

The Routing Process


Now that I've talked a little bit about what a router is and what it does, I want to talk
about the routing process. In order to understand how routing works, you have to
understand a little bit about how the TCP/IP protocol works.

Every device connected to a TCP/IP network has a unique IP address bound to its
network interface. The IP address consists of a series of four numbers separated by
periods. For example, a typical IP address looks something like this: 192.168.0.1

The best analogy I can think of to describe an IP address is to compare it to a street
address. A street address consists of a number and a street name. The number
identifies the specific building on the street. An IP address works kind of the same
way. The address is broken into a network number and a device number. If you
were to compare an IP address to a street address, then think of the network number
as being like a street name, and the device number as being like a house
number. The network number identifies which network the device is on, and the
device number gives the device an identity on that network.
So how do you know where the network number ends and the device number begins?
This is the job of the subnet mask. A subnet mask tells the computer where the
network number portion of an IP address stops, and where the device number starts.
Subnetting can be complicated, and I will cover it in detail in a separate article. For
now, let's keep it simple and look at a very basic subnet mask.

A subnet mask looks a lot like an IP address in that it follows the format of having
four numbers separated by periods. A typical subnet mask looks like this:
255.255.255.0

In this particular example, the first three numbers (called octets) are each 255, and the
last number is 0. The number 255 indicates that all of the bits in the corresponding
position in the IP address are a part of the network number. The number zero indicates
that none of the bits in the corresponding position in the IP address are a part of the
network number, and therefore they all belong to the device number.

I know this probably sounds a little bit confusing, so consider this example. Imagine
that you had a PC with an IP address of 192.168.1.1 and a subnet mask of
255.255.255.0. In this particular case, the first three octets of the subnet mask are all
255. This means that the first three octets of the IP address all belong to the network
number. Therefore, the network number portion of this IP address is 192.168.1.x.
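
If you want to experiment with this yourself, Python's standard ipaddress module will do the math for you. This is just a quick illustration using the same example address and subnet mask:

```python
import ipaddress

# The example from above: IP address 192.168.1.1 with a subnet mask of 255.255.255.0.
iface = ipaddress.ip_interface("192.168.1.1/255.255.255.0")

print(iface.network)                                          # 192.168.1.0/24 -> the network number portion
print(ipaddress.ip_address("192.168.1.3") in iface.network)   # True: same network number
print(ipaddress.ip_address("192.168.2.7") in iface.network)   # False: a different network
```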

The reason this is important to know is that a router's job is to move packets
of data from one network to another. All of the devices on a network (or on a network
segment to be more precise) share a common network number. For example, if
192.168.1.x was the network number associated with computers attached to the router
shown in Figure B, then the IP addresses for four individual computers might be:

• 192.168.1.1
• 192.168.1.2
• 192.168.1.3
• 192.168.1.4

As you can see, each computer on the local network shares the same network number,
but has a different device number. As you may know, whenever a computer needs to
communicate with another computer on a network, it does so by referring to the other
computer’s IP address. For example, in this particular case the computer with the
address of 192.168.1.1 could easily send a packet of data to the computer with the
address of 192.168.1.3, because both computers are a part of the same physical
network.

Things work a bit differently if a computer needs to access a computer on another
network. Since I am focusing this particular discussion on small broadband routers
that are designed to provide Internet connectivity, let’s pretend that one of the users
on the local network wanted to visit the www.brienposey.com Web site. A Web site is
hosted by a server. Like any other computer, a Web server has a unique IP address.
The IP address for this particular Web site is 24.235.10.4.

You can easily look at this IP address and tell that it does not belong to the
192.168.1.x network. That being the case, the computer that’s trying to reach the Web
site can’t just send the packet out along the local network, because the Web server
isn’t a part of the local network. Instead, the computer that needs to send the packet
looks at its default gateway address.

The default gateway is a part of a computer's TCP/IP configuration. It is basically a
way of telling a computer that if it does not know where to send a packet, it should send it
to the specified default gateway address. The default gateway's address would be the
router's IP address. In this case, the router's IP address might be something like
192.168.1.254 (it could not be 192.168.1.0, because that address refers to the network itself).
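
To make the decision a little more concrete, here is a rough Python sketch of the choice a computer makes for each outbound packet. The addresses are just the ones from this example (the gateway address is an assumption), and a real TCP/IP stack consults a full routing table rather than a single if statement:

```python
import ipaddress

LOCAL_NETWORK = ipaddress.ip_network("192.168.1.0/24")
DEFAULT_GATEWAY = ipaddress.ip_address("192.168.1.254")   # the router's assumed address

def next_hop(destination):
    """Decide whether to deliver a packet locally or hand it to the default gateway."""
    dest = ipaddress.ip_address(destination)
    if dest in LOCAL_NETWORK:
        return dest               # same network number: deliver directly
    return DEFAULT_GATEWAY        # different network: send it to the router

print(next_hop("192.168.1.3"))    # 192.168.1.3 (local delivery)
print(next_hop("24.235.10.4"))    # 192.168.1.254 (hand it to the router)
```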

Notice that the router’s IP address shares the same network number as the other
computers on the local network. It has to so that it can be accessible to those
computers. Actually, a router has at least two IP addresses. One of those addresses
uses the same network number as your local network. The router's other IP address is
assigned by your ISP. This IP address uses the same network number as the ISP's
network. The router's job is therefore to move packets from your local network onto
the ISP's network. Your ISP has routers of its own that work in exactly the same way,
but that route packets to other parts of the Internet.

Conclusion
As you can see, a router is a vital network component. Without routers, connectivity
between networks (such as the Internet) would be impossible. In Part 3 of this article
series, I will discuss the TCP/IP protocol in more detail.

Part 3 - DNS Servers


Published: Oct 18, 2006

This article continues the Networking for Beginners series by talking about how DNS
servers work.

In the last part of this article series, I talked about how all of the computers on a
network segment share a common IP address range. I also explained that when a
computer needs to access information from a computer on another network or
network segment, it’s a router’s job to move the necessary packets of data from the
local network to another network (such as the Internet).

If you read that article, you probably noticed that in one of my examples, I made a
reference to the IP address that’s associated with my Web site. To be able to access a
Web site, your Web browser has to know the Web site’s IP address. Only then can it
give that address to the router, which in turn routes the outbound request packets to
the appropriate destination. Even though every Web site has an IP address, you
probably visit Web sites every day without ever having to know an IP address. In this
article, I will show you why this is possible.
I have already explained that IP addresses are similar to street addresses. The network
portion of the address defines which network segment the computer exists on, and the
computer portion of the address designates a specific computer on that network.
Knowing an IP address is a requirement for TCP/IP based communications between
two computers.

When you open a Web browser and enter the name of a Web site (which is known as
the site's domain name, URL, or Uniform Resource Locator), the Web browser goes
straight to the Web site without you ever having to enter an IP address. With that in
mind, consider my comparison of IP addresses to postal addresses. You can’t just
write someone’s name on an envelope, drop the envelope in the mail, and expect it to
be delivered. The post office can’t deliver the letter unless it has an address. The same
basic concept applies to visiting Web sites. Your computer cannot communicate with
a Web site unless it knows the site’s IP address.

So if your computer needs to know a Web site’s IP address before it can access the
site, and you aren’t entering the IP address, where does the IP address come from?
Translating domain names into IP addresses is the job of a DNS server.

In the two articles leading up to this one, I talked about several aspects of a
computer’s TCP/IP configuration, such as the IP address, subnet mask, and default
gateway. If you look at Figure A, you will notice that there is one more configuration
option that has been filled in; the Preferred DNS server.
Figure A: The Preferred DNS Server is defined as a part of a computer’s TCP/IP
configuration

As you can see in the figure, the preferred DNS server is defined as a part of a
computer’s TCP/IP configuration. What this means is that the computer will always
know the IP address of a DNS server. This is important because a computer cannot
communicate with another computer using the TCP/IP protocol unless an IP address
is known.

With that in mind, let’s take a look at what happens when you attempt to visit a Web
site. The process begins when you open a Web browser and enter a URL. When you
do, the Web browser knows that it cannot locate the Web site based on the URL
alone. It therefore retrieves the DNS server's IP address from the computer's TCP/IP
configuration and passes the requested name on to the DNS server. The DNS server then looks
up the name in a table that also lists the site's IP address. The DNS server then
returns the IP address to the Web browser, and the browser is then able to
communicate with the requested Web site.
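
You can watch this name resolution happen from a short Python script. The call below simply asks the operating system's resolver, which in turn queries the DNS server from your TCP/IP configuration (the address you get back for any given site may of course differ from the one quoted in this article):

```python
import socket

# Ask the operating system to resolve a host name to an IP address.
# Behind the scenes, this query goes to the preferred DNS server.
ip = socket.gethostbyname("www.brienposey.com")
print(ip)
```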

Actually, that explanation is a little bit oversimplified. DNS name resolution can only
work in the way that I just described if the DNS server contains a record that
corresponds to the site that's being requested. If you were to visit a random Web site,
there is a really good chance that your DNS server does not contain a record for the
site. The reason for this is that the Internet is so big. There are millions of Web
sites, and new sites are created every day. There is no way that a single DNS server
could possibly keep up with all of those sites and service requests from everyone who
is connected to the Internet.

Let’s pretend for a moment that it was possible for a single DNS server to store
records for every Web site in existence. Even if the server’s capacity were not an
issue, the server would be overwhelmed by the sheer volume of name resolution
requests that it would receive from people using the Internet. A centralized DNS
server would also be a very popular target for attacks.

Instead, DNS servers are distributed so that a single DNS server does not have to
provide name resolutions for the entire Internet. There is an organization named the
Internet Corporation for Assigned Names and Numbers, or ICANN for short, that is
responsible for all of the registered domain names on the Internet. Because managing
all of those domain names is such a huge job, ICANN delegates portions of the
domain naming responsibility to various other firms. For example, Network Solutions
is responsible for all of the .com domain names. Even so, Network Solutions does not
maintain a list of the IP addresses associated with all of the .com domains. In most
cases, Network Solutions' DNS servers contain records that point to the DNS server
that is considered to be authoritative for each domain.

To see how all this works, imagine that you wanted to visit the www.brienposey.com
website. When you enter the request into your Web browser, your Web browser
forwards the name to the DNS server specified by your computer's TCP/IP
configuration. More than likely, your DNS server is not going to know the IP address
of this website. Therefore, it will send the request to one of the root DNS servers. The
root DNS server wouldn't know the IP address for the website that you are trying to
visit. It would however know the IP address of the DNS server that is responsible for
domain names ending in .COM. It would return this address to your DNS server,
which in turn would submit the request to that .COM server.

The top level DNS server for domains ending in .COM would not know the IP
address of the requested Web site either, but it would know the IP address of a DNS
server that is authoritative for the brienposey.com domain. It would send this address
back to your DNS server, which would then send the query to the DNS server that is
authoritative for the requested domain. That DNS server would return the website's IP
address to your DNS server, which passes it back to your computer, thus allowing the
machine to communicate with the requested website.

As you can see, there are a lot of steps that must be completed in order for a computer
to find the IP address of a website. To help reduce the number of DNS queries that
must be made, the results of DNS queries are usually cached for either a few hours or
a few days, depending on how the machine is configured. Caching IP addresses
greatly improves performance and minimizes the amount of bandwidth consumed by
DNS queries. Imagine how inefficient Web browsing would be if your computer had
to do a full set of DNS queries every time you visited a new page.
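
The caching idea itself is simple enough to sketch in a few lines of Python. This is only a toy illustration; real resolvers honor the TTL value that comes back with each DNS record rather than a fixed number chosen by the caller:

```python
import socket
import time

_cache = {}   # name -> (ip address, time at which the cached answer expires)

def resolve(name, ttl=3600):
    """Return a cached answer if it is still fresh; otherwise query DNS and cache the result."""
    ip, expires_at = _cache.get(name, (None, 0.0))
    if time.time() < expires_at:
        return ip                                 # cache hit: no DNS traffic at all
    ip = socket.gethostbyname(name)               # cache miss: ask the DNS server
    _cache[name] = (ip, time.time() + ttl)
    return ip
```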

Conclusion
In this article, I explained how DNS servers are used to resolve domain names to IP
addresses. Although the process that I’ve described sounds fairly simple, it is
important to remember that ICANN and top level DNS registrars, such as Network
Solutions, use a load balancing technique to distribute requests across many different
DNS servers. This prevents any one server from becoming overwhelmed, and
eliminates the chances of having a single point of failure.

Part 4 - Workstations and Servers


Published: Nov 09, 2006

So far in this article series, I have talked a lot about networking hardware and about
the TCP/IP protocol. The networking hardware is used to establish a physical
connection between devices, while the TCP/IP protocol is essentially the language
that the various devices use to communicate with each other. In this article, I will
continue the discussion by talking a little bit about the computers that are connected to
a network.

Even if you are new to networking, you have no doubt heard terms such as server and
workstation. These terms are generally used to refer to a computer’s role on the
network rather than the computer’s hardware. For example, just because a computer is
acting as a server, it doesn’t necessarily mean that it has to be running server
hardware. It is possible to install a server operating system onto a PC, and have that
PC act as a network server. Of course in most real life networks, servers are running
specialized hardware to help them to be able to handle the heavy workload that
servers are typically subjected to.

What might make the concept of network servers a little bit more confusing is that
technically speaking a server is any computer that hosts resources over a network.
This means that even a computer that’s running Windows XP could be considered to
be a server if it is configured to share some kind of resource, such as files or a printer.

Computers on a network typically fall into one of three roles. Usually a computer is
considered to be either a workstation (sometimes referred to as a client), a server, or a
peer.

Workstations are computers that use network resources, but that do not host resources
of their own. For example, a computer that is running Windows XP would be
considered a workstation so long as it is connected to a network and is not sharing
files or printers.

Servers are computers that are dedicated to the task of hosting network resources.
Typically, nobody is going to be sitting down at a server to do their work. Windows
servers (that is, computers running Windows Server 2003, Windows 2000 Server, or
Windows NT Server) have a user interface that is very similar to what you would find
on a Windows workstation. It is possible that someone with an appropriate set of
permissions could sit down at the server and run Microsoft Office or some other
application. Even so, such behavior is strongly discouraged because it undermines the
server’s security, decreases the server’s performance, and has the potential to affect
the server’s stability.

The last type of computer that is commonly found on a network is a peer. A peer
machine is a computer that acts as both a workstation and a server. Such machines
typically run workstation operating systems (such as Windows XP), but are used to
both access and host network resources.

In the past, peers were found primarily on very small networks. The idea was that if a
small company lacked the resources to purchase true servers, then the workstations
could be configured to perform double duty. For example, each user could make their
own files accessible to every other user on the network. If a user happens to have a
printer attached to their PC, they can also share the printer so that others on the
network can print to it.

Peer networks have traditionally been discouraged in larger companies because of
their inherent lack of security, and because they cannot be centrally managed. That's
why peer networks are primarily found in extremely small companies or in homes
with multiple PCs. Windows Vista (the successor to Windows XP) is attempting to
change that. Windows Vista will allow users on traditional client/server networks to
form peer groups that will allow the users in those groups to share resources
amongst themselves in a secure manner, without breaking their connection to network
servers. This new feature is being marketed as a collaboration tool.

Earlier I mentioned that peer networks are discouraged in favor of client/server
networks because they lack security and centralized manageability. However, just
because a network is made up of workstations and servers, it doesn’t necessarily
guarantee security and centralized management. Remember, a server is only a
machine that is dedicated to the task of hosting resources over a network. Having said
that, there are countless varieties of servers and some types of servers are dedicated to
providing security and manageability.

For example, Windows servers fall into two primary categories: member servers and
domain controllers. There is really nothing special about a member server. A member
server is simply a computer that is connected to a network, and is running a Windows
Server operating system. A member server might be used as a file repository (known
as a file server), or to host one or more network printers (known as a print server).
Member servers are also frequently used to host network applications. For example,
Microsoft offers a product called Exchange Server 2003 that, when installed on a
member server, allows that member server to function as a mail server. The point is
that a member server can be used for just about anything.

Domain controllers are much more specialized. A domain controller's job is to
provide security and manageability to the network. I am assuming that you're
probably familiar with the idea of logging on to a network by entering a username and
password. On a Windows network, it is the domain controller that is responsible for
keeping track of usernames and passwords.
The person who is responsible for managing the network is known as the network
administrator. Whenever a user needs to gain access to resources on a Windows
network, the administrator uses a utility provided by a domain controller to create a
user account and password for the new user. When the new user (or any user for that
matter) attempts to log onto the network, the user's credentials (their username and
password) are transmitted to the domain controller. The domain controller validates
the user’s credentials by comparing them against the copy stored in the domain
controller’s database. Assuming that the password that the user entered matches the
password that the domain controller has on file, the user is granted access to the
network. This process is called authentication.
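
Conceptually, the domain controller is doing something like the following Python sketch. This is only an illustration of the "compare the credentials against what is on file" idea; the account database and helper names are made up, and real Windows authentication uses protocols such as Kerberos rather than anything this simple:

```python
import hashlib
import os

_accounts = {}   # hypothetical account database: username -> (salt, password hash)

def create_account(username, password):
    """What the administrator's account-creation tool does, in miniature."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    _accounts[username] = (salt, digest)

def authenticate(username, password):
    """Validate credentials against the stored copy; return a token if they match."""
    if username not in _accounts:
        return None
    salt, stored = _accounts[username]
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    if digest == stored:
        return {"user": username, "issued_by": "domain controller"}   # a stand-in for the access token
    return None
```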

On a Windows network, only the domain controllers perform authentication services.
Of course users will probably need to access resources stored on member servers.
This is not a problem because resources on member servers are protected by a set of
permissions that are related to the security information stored on domain controllers.

For example, suppose that my user name was Brien. I enter my username and
password, which is sent to a domain controller for authentication. When the domain
controller authenticates me, it has not actually given me access to any resources.
Instead, it validates that I am who I claim to be. When I go to access resources on a
member server, my computer presents a special access token to the member server
that basically says that I have been authenticated by a domain controller. The member
server does not trust me, but it does trust the domain controller. Therefore, since the
domain controller has validated my identity, the member server accepts that I am who
I claim to be and gives me access to any resources that I have permission to
access.

Conclusion
As you’ve probably guessed, the process of being authenticated by a domain
controller and gaining access to network resources is a little more complicated than
what I have discussed here. I will be discussing authentication and resource access in
much greater detail later in the series. For right now, I wanted to keep things simple
so that I could gradually introduce you to these concepts. In the next part of this
article series, I will be discussing domain controllers in much more detail. As I do, I
will also discuss the role that domain controllers play within the Active Directory.

Part 5 - Domain Controllers


Published: Dec 05, 2006

What domain controllers are and how they fit into your network infrastructure.

In the previous article in this series, I talked about the roles of various computers on a
network. As you may recall, one of the roles that I talked a little bit about was that of
a domain controller. In this article, I will talk more about what domain controllers are
and how they fit into your network infrastructure.
One of the most important concepts in Windows networking is that of a domain. A
domain is basically a collection of user accounts and computer accounts that are
grouped together so that they can be centrally managed. It is the job of the domain
controller to facilitate this central management of domain resources.

To see why this is important, consider that any workstation that’s running Windows
XP contains a handful of built in user accounts. Windows XP even allows you to
create additional user accounts on the workstation. Unless the workstation is
functioning as a standalone system or is a part of a peer network, these workstation
level user accounts (called local user accounts) are not used for controlling access to
network resources. Instead, local user accounts are used to regulate access to the local
computer. They act primarily as a mechanism which ensures that administrators can
perform workstation maintenance without end users having the ability to tamper
with workstation settings.

The reason local user accounts are not used to control access to resources outside
of the workstation they reside on is that doing so would create an extreme
management burden. Think about it for a minute. Local user accounts reside on each
individual workstation. This means that if local user accounts were a network's
primary security mechanism, then an administrator would have to physically travel to
the computer containing an account any time a change needed to be made to the
account's permissions. This might not be a big deal on smaller networks, but making
security changes would be extremely cumbersome on larger networks or in situations
in which a change needs to be applied globally to all accounts.

Another reason local user accounts are not used to control access to network
resources is that they don't travel with the user from one computer to another. For
instance, if a user’s computer crashed, the user couldn’t just log on to another
computer and work while their computer was being fixed, because the user’s account
is specific to the computer that crashed. In order for the user to be able to do any
work, a new account would have to be created on the computer that the user is now
working with.

These are just a few of the reasons why using local user accounts to secure access to
network resources is impractical. Even if you wanted to implement this type of
security, Windows does not allow it. Local user accounts can only be used to secure
local resources.

A domain solves these and other problems by centralizing user accounts (and other
configuration and security related objects that I will talk about later in the series). This
allows for easier administration, and allows users to log onto the network from any
PC on the network (unless you restrict which machines a user can log in from).

With the information that I have given you so far regarding domains, it may seem that
the philosophy behind domains is that, since the resources which users need access to
reside on a server, you should use server level user accounts to control access to those
resources. In a way this idea is true, but there is a little more to it than that.

Back in the early 1990s I was working for a large insurance company that was
running a network with servers running Novell NetWare. Windows networking hadn’t
been invented yet, and Novell NetWare was the server operating system of choice at
the time. At the time when I was hired, the company only had one network server,
which contained all of the user accounts and all of the resources that the users needed
access to. A few months later, someone decided that the users at the company needed
to run a brand new application. Because of the size of the application and the volume
of data that the application produced, the application was placed onto a dedicated
server.

The version of Novell NetWare that the company was running at the time used the
idea that I presented earlier in which resources residing on a server were protected by
user accounts which also resided on that server. The problem with this architecture
was that each server had its own, completely independent set of user accounts. When
the new server was added to the network, users logged in using the normal method,
but they had to enter another username and password to access resources on the new
server.

At first things ran smoothly, but about a month after the new server was installed
things started to get ugly. It became time for users to change their password. Users
didn’t realize that they now had to change their password in two different places. This
meant that passwords fell out of sync, and the help desk was flooded with calls related
to password resets. As the company continued to grow and added more servers, the
problem was further compounded.

Eventually, Novell released version 4.0 of NetWare. NetWare version 4 introduced a
technology called NetWare Directory Services (NDS). The idea was that users should not have a
separate account for each server. Instead, a single user account could be used to
authenticate a user regardless of how many servers there were on the network.

The interesting thing about this little history lesson is that although domains are
unique to Microsoft networks (Novell networks do not use domains), domains work
on the same basic principle. In fact, when Windows 2000 was released, Microsoft
included a feature which is still in use today called the Active Directory. The Active
Directory is very similar to the directory service that Novell networks use.

So what does all of this have to do with domains? Well, on Windows servers running
Windows 2000 Server, Windows Server 2003, or the forthcoming Longhorn Server, it
is the domain controller’s job to run the Active Directory service. The Active
Directory acts as a repository for directory objects. Among these objects are user
accounts. As such, one of a domain controller’s primary jobs is to provide
authentication services.

One very important concept to keep in mind is that domain controllers provide
authentication, not authorization. What this means is that when a user logs on to a
network, a domain controller validates the user’s username and password and
essentially confirms that the user is who they claim to be. The domain controller does
not however tell the user what resources they have rights to.

Resources on Windows networks are secured by access control lists (ACLs). An ACL
is basically just a list that tells who has rights to what. When a user attempts to access
a resource, they present their identity to the server containing the resource. That
server makes sure that the user’s identity has been authenticated and then cross
references the user’s identity with an ACL to see what it is that the user has rights to.
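
Stripped down to its essence, the check that the server performs looks something like this Python sketch. The resource names and user names are made up, and the token is assumed to be the proof of identity handed over after authentication:

```python
# A made-up access control list: resource -> the set of users with rights to it.
acl = {
    r"\\fileserver\reports": {"Brien", "Alice"},
    r"\\printserver\laser1": {"Alice"},
}

def can_access(token, resource):
    """Trust the authenticated identity in the token, then cross reference it with the ACL."""
    if token is None:                     # the user was never authenticated
        return False
    return token["user"] in acl.get(resource, set())

print(can_access({"user": "Brien"}, r"\\fileserver\reports"))   # True
print(can_access({"user": "Brien"}, r"\\printserver\laser1"))   # False
```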

Conclusion
As you can see, a domain controller performs a very important role within a Windows
network. In the next part of this article series, I will talk more about domain
controllers and about the Active Directory.

Part 6 - Windows Domain


Published: Jan 23, 2007

Discusses the anatomy of a Windows domain.

In the previous article in this series, I introduced you to the concept of domains and
domain controllers. In this article, I want to continue the discussion by talking about
the anatomy of a Windows domain.

As I explained in Part 5 of this article series, domains are not something new.
Microsoft originally introduced them in Windows NT Server. Originally, domains
were completely self contained. A single domain often housed all of the user accounts
for an entire company, and the domain’s administrator had complete control over the
domain and anything in it.

Occasionally though, having a single domain just wasn’t practical. For example, if a
company had offices in several different cities, then each office might have its own
domain. Another common scenario is when one company buys another company. In
such situations, it is not at all uncommon for both companies to already have domains.

In situations like these, it is sometimes necessary for users from one domain to access
resources located in another domain. Microsoft created trusts as a way of facilitating
such access. The best way that I can think of to describe trusts is to compare them to
the way that security works at an airport.

In the United States, passengers are required to show their driver's license to airport
security staff before boarding a domestic flight. Suppose for a moment that I were
going to fly somewhere. The security staff at the airport does not know who I am, and
they certainly don't trust me. They do however trust the state of South Carolina. They
assume that the state of South Carolina has exercised due diligence in verifying my
identity before issuing me a driver's license. Therefore, I can show them a South
Carolina driver's license and they will let me on the plane, even though they don't
necessarily trust me as an individual.

Domain trusts work the same way. Suppose that I am a domain administrator and my
domain contains resources that users in another domain need to access. If I am not an
administrator in the foreign domain then I have no control over who is given user
accounts in that domain. If I trust the administrator of that domain not to do anything
stupid, then I can establish a trust so that my domain trusts members of the other
domain. In a situation like this, my domain would be referred to as the trusting
domain, and the foreign domain would be known as the trusted domain.

In the previous article, I mentioned that domain controllers provide authentication, not
authorization. This holds true even when trust relationships are involved. Simply
choosing to trust a foreign domain does not give the users in that domain rights to
access any of the resources in your domain. You must still assign permissions just as
you would for users in your own domain.
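
Here is a tiny Python sketch of the trusting domain's point of view. The domain names are invented, and real trust relationships are negotiated between domain controllers rather than kept in a simple list, but the logic is the same: accept an identity only if it was authenticated by a domain you trust.

```python
MY_DOMAIN = "PRODUCTION"                      # the trusting domain (made-up name)
TRUSTED_DOMAINS = {"SALES", "MARKETING"}      # foreign domains we have chosen to trust

def accept_identity(token):
    """Accept an authenticated identity from our own domain or from a trusted domain."""
    return token["domain"] == MY_DOMAIN or token["domain"] in TRUSTED_DOMAINS

print(accept_identity({"user": "Brien", "domain": "SALES"}))    # True  (trusted domain)
print(accept_identity({"user": "Eve", "domain": "UNKNOWN"}))    # False (no trust exists)
```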

At the beginning of this article, I mentioned that in Windows NT a domain was a
completely self contained environment, and that trusts were created as a way of
allowing users in one domain to access resources in another domain. These concepts
still hold partially true today, but the domain model changed dramatically when
Microsoft created the Active Directory. As you may recall, the Active Directory was
first introduced in Windows 2000, but is still in use today in Windows Server 2003
and the soon to be released Longhorn Server.

One of the primary differences between Windows NT style domains and Active
Directory domains is that domains are no longer completely isolated from each other.
In Windows NT, there was really no organizational structure for domains. Each
domain was completely independent of any other domain. In an Active Directory
environment, the primary organizational structure is known as a forest. A forest can
contain multiple domain trees.

The best way that I can think of to describe a domain tree is to compare it to a family
tree. A family tree consists of great grandparents, grandparents, parents, children, etc.
Each member of a family tree has some relation to the members above and below
them. A domain tree works in a similar manner, and you can tell a domain’s position
within a tree just by looking at its name.

Active Directory domains use DNS style names, similar to the names used by Web
sites. In Part 3 of this article series, I explained how DNS servers resolve URLs for
Web browsers. The same technique is used internally in an Active Directory
environment. Think about it for a moment. DNS stands for Domain Name System. In
fact, a DNS server is a required component for any Active Directory deployment.

To see how domain naming works, let’s take a look at how my own network is set up.
My network’s primary domain is named production.com. I don’t actually own the
production.com Internet domain name, but it doesn’t matter because this domain is
private and is only accessible from inside my network.

The production.com domain is considered to be a top level domain. If this were an
Internet domain, it would not be a top level domain, because .com would be the top
level domain and production.com would be a child domain of the .com domain. In
spite of this minor difference, the same basic principle holds true. I could easily create
a child domain by creating another domain whose name builds on production.com.
For example, sales.production.com would be considered to be a child domain of the
production.com domain. You can even create grandchild domains. An example of a
grandchild domain of production.com would be widgets.sales.production.com. As you
can see, you can easily tell a domain's position within a domain tree just by looking at
the number of periods in the domain's name.
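
Because the position is encoded in the name itself, you can work it out with a few lines of Python. This sketch treats the tree root (production.com in my case) as depth zero, which is a simplification but matches the naming pattern described above:

```python
def tree_position(domain):
    """Derive a domain's parent and depth within its tree from its DNS-style name."""
    labels = domain.split(".")
    parent = ".".join(labels[1:]) if len(labels) > 2 else None
    return {"domain": domain, "parent": parent, "depth": len(labels) - 2}

print(tree_position("production.com"))                # depth 0, no parent (the tree root)
print(tree_position("sales.production.com"))          # child of production.com
print(tree_position("widgets.sales.production.com"))  # grandchild of production.com
```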

Earlier I mentioned that an Active Directory forest can contain domain trees. You are
not limited to creating a single domain tree. In fact, my own network uses two domain
trees: production.com and test.com. The test.com domain contains all of the servers
that I monkey around with while experimenting with the various techniques that I
write articles about. The production.com domain contains the servers that I actually
use to run my business. This domain contains my mail server and some file servers.

The point is that having the ability to create multiple domain trees allows you to
segregate your network in a way that makes the most sense from a management
perspective. For example, suppose that a company has offices in five different cities.
The company could easily create an Active Directory forest that contains five
different domain trees: one for each city. There would most likely be a different
administrator in each city, and that administrator would be free to create child
domains off of their domain tree on an as needed basis.

The beauty of this type of structure is that all of these domains fall within a common
forest. This means that while administrative control over individual domains or
domain trees might be delegated to an administrator in another city, the forest
administrator ultimately maintains control over all of the domains in the forest.
Furthermore, trust relationships are greatly simplified because every domain in the
forest automatically trusts every other domain in the forest. It is still possible to
establish trusts with external forests or domains.

Conclusion
In this article, I have talked about the organizational structure used in creating Active
Directory domains. In the next part of this article series, I will talk about how network
communications work in an Active Directory environment.

Part 7 - Introduction to FSMO Roles


Published: Mar 01, 2007

The necessity of FSMO roles.


So far in this article series, I have explained that the Active Directory consists of a
forest filled with domain trees, and that the names of each domain indicate its position
within the forest. Given the hierarchical nature of the Active Directory, it might be
easy to assume that domains near the top of the hierarchy (or rather the domain
controllers within those domains) are the most important. This isn't necessarily the
case though. In this article, I will discuss the roles that individual domain controllers
play within the Active Directory forest.

Earlier in this series, I talked about how domains in Windows NT were all
encompassing. Like Active Directory domains, Windows NT domains supported the
use of multiple domain controllers. Remember that domain controllers are responsible
for authenticating user logons. Therefore, if a domain controller is not available then
no one will be able to log on to the network. Microsoft realized this early on and
designed Windows to allow multiple domain controllers so that if a domain controller
failed, another domain controller would be available to authenticate logons. Having
multiple domain controllers also allows the domain related work load to be shared by
multiple computers rather than the full burden falling on a single server.

Although Windows NT supported multiple domain controllers within a domain, one
of these domain controllers was considered to be more important than the others. This
was known as the Primary Domain Controller or PDC. As you may recall, a domain
controller contains a database of all of the user accounts within the domain (among
other things). This database was called the Security Accounts Manager, or SAM
database.

In Windows NT, the PDC stored the master copy of the database. Other domain
controllers within a Windows NT domain were known as Backup Domain Controllers
or BDCs. Any time that a change needed to be made to the domain controller’s
database, the change would be written to the PDC. The PDC would then replicate the
change out to all of the BDCs in the domain. Under normal circumstances, the PDC
was the only domain controller in a Windows NT domain to which domain related
updates could be applied. If the PDC were to fail, there was a way to promote a BDC
to PDC, thus enabling that domain controller to act as the domain’s one and only
PDC.

Active Directory domains do things a little bit differently. The Active Directory uses a
multimaster replication model. What this means is that every domain controller
within a domain is writable. There is no longer the concept of PDCs and BDCs. If an
administrator needs to make a change to the Active Directory database, the change
can be applied to any domain controller in the domain, and then replicated to the
remaining domain controllers.

Although the multimaster replication model probably sounds like a good idea, it
opens the door for contradictory changes. For example, what happens if two different
administrators apply contradictory changes to two different domain controllers at the
same time?

In most cases, the Active Directory assumes that the most recent change takes
precedence. In some situations, though, the consequences of a conflict are too serious to rely
on this type of conflict resolution. In those cases, Microsoft takes the position that it
is better to prevent a conflict from occurring in the first place than to try to resolve the
conflict after it happens.
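
The "most recent change wins" rule is easy to picture in code. The sketch below is only the general idea; real Active Directory replication also compares version numbers and the identity of the originating domain controller, not just timestamps:

```python
from datetime import datetime, timezone

def resolve_conflict(update_a, update_b):
    """Pick whichever conflicting update carries the later timestamp."""
    return max(update_a, update_b, key=lambda u: u["timestamp"])

a = {"attribute": "telephoneNumber", "value": "555-0100",
     "timestamp": datetime(2006, 3, 1, 9, 0, tzinfo=timezone.utc)}
b = {"attribute": "telephoneNumber", "value": "555-0199",
     "timestamp": datetime(2006, 3, 1, 9, 5, tzinfo=timezone.utc)}

print(resolve_conflict(a, b)["value"])   # 555-0199 -> the later change takes precedence
```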

To handle these types of situations, Windows is designed to designate certain domain
controllers to perform Flexible Single Master Operation (FSMO) roles. Essentially
this means that Active Directory domains fully support multimaster replication except
in certain circumstances in which the domain reverts to using a single master
replication model. There are three different FSMO roles that are assigned at the
domain level, and two additional roles that are assigned at the forest level.

Where are the FSMO Roles Located?


For the most part, the FSMO roles pretty much take care of themselves. It is important
however for you to know which domain controllers host these roles. By default, the
first domain controller in the forest hosts all five roles. As additional domains are
created, the first domain controller brought online in each domain holds all three of
the domain level FSMO roles.

The reason it is so important to know which domain controllers hold these roles
is that hardware eventually gets old and is decommissioned. I once saw a situation
in which a network administrator was preparing to deploy an Active Directory
network for his company. While waiting for the newly ordered servers to arrive, the
administrator installed Windows onto a junk PC so that he could begin playing around
with the various Active Directory management tools.

When the new servers finally arrived, the administrator configured them as domain
controllers in the already created domain rather than creating a new forest. Of course
this meant that the junk PC was still holding the FSMO roles for both the domain and the forest.
Everything worked fine until the administrator decided to remove the “junk” PC from
the network. Had he properly decommissioned this server, there would not have been
a problem. Being inexperienced though, he simply reformatted the machine’s hard
drive. All of a sudden the Active Directory began to experience numerous
problems. If this administrator had realized that the machine that he had removed
from the domain was hosting the domain and forest’s FSMO roles, the problems
could have been avoided. Incidentally, in a situation like this there is a way of seizing
the FSMO roles from the deceased server so that your network can resume normal
operations.

What are the FSMO Roles?


I will talk more about the specific functions of the FSMO roles in the next article in
this series. I do however want to quickly mention what these roles are. As you may
recall, I mentioned that there are three domain specific roles, and two forest specific
roles.

The domain specific roles are the Relative Identifier Master, the Primary Domain
Controller Emulator, and the Infrastructure Master. The forest level roles are the
Schema Master and the Domain Naming Master. Below is a brief description of what
these roles do:

Schema Master: maintains the authoritative copy of the Active Directory database
schema.

Domain Naming Master: maintains the list of domains within the forest.

Relative Identifier Master: responsible for ensuring that every Active Directory
object in a domain receives a unique security identifier.

Primary Domain Controller Emulator: acts as the Primary Domain Controller in
domains containing domain controllers running Windows NT.

Infrastructure Master: responsible for updating an object's security identifier and
distinguished name in a cross domain object reference.

Conclusion
Hopefully by now, you understand the importance of the FSMO roles even if you
don't understand what the roles themselves actually do. In the next article in this
series, I will discuss the FSMO roles in much greater detail and help you to
understand what it is that they actually do. I will also show you how to definitively
determine which server is hosting the various roles.

