circuitry
noun
COMPUT. circuit design in a computer: the circuit design and principles used by a computer in its operation

logic circuit
(plural logic circuits)
noun
computer operation circuit: a computer switching circuit that performs operations on input signals
VII. SWITCHING AND TIMING CIRCUITS

Digital Logic and NOR Gate Circuitry
Computers use digital logic to perform operations. Digital logic involves making
successive true or false decisions, which may also be represented by 1 and 0,
respectively. Logic circuits, which are at the heart of computer chips, are designed to
make a series of these decisions via junctures called gates. Gates are designed and
arranged to make different kinds of decisions about the input they receive.
Individual input and output values are always either true or false and are relayed
through the circuit in the form of different voltages. This circuit uses 4 NOR gates,
each of which makes the decision neither A nor B. The NOR operation yields an
output of 0 whenever one or more of the input values is 1. The table shows input
values (A, B) and output value (F) for the NOR gate. A circuit map (bottom) shows the
layout of a NOR gate and its components, indicating voltage values when the inputs
are 0,0 and the output is 1.
Encarta Encyclopedia
© Microsoft Corporation. All Rights Reserved.
Switching and timing circuits, or logic circuits, form the heart of any device where
signals must be selected or combined in a controlled manner. Applications of these
circuits include telephone switching, satellite transmissions, and digital computer
operations.
Digital logic is a rational process for making simple true or false decisions based on
the rules of Boolean algebra. True can be represented by a 1 and false by a 0, and in
logic circuits the numerals appear as signals of two different voltages. Logic circuits are
used to make specific true-false decisions based on the presence of multiple true-false
signals at the inputs. The signals may be generated by mechanical switches or by solid-state transducers. Once the input signal has been accepted and conditioned (to remove
unwanted electrical signals, or noise), it is processed by the digital logic circuits. The
various families of digital logic devices, usually integrated circuits, perform a variety of
logic functions through logic gates, including OR, AND, and NOT, and
combinations of these (such as NOR, which includes both OR and NOT). One widely
used logic family is the transistor-transistor logic (TTL). Another family is the
complementary metal oxide semiconductor logic (CMOS), which performs similar
functions at very low power levels but at slightly lower operating speeds. Several other,
less popular families of logic circuits exist, including the currently obsolete resistor-transistor logic (RTL) and emitter-coupled logic (ECL), the latter used for very-high-speed systems.

Digital Circuits and Boolean Truth Tables


Digital circuits operate in the binary number system, which means that all circuit
variables must be either 1 or 0. The algebra used to solve problems and process
information in digital systems is called Boolean algebra; it deals with logic, rather than
calculating actual numeric values. Boolean algebra is based on the idea that logical
propositions are either true or false, depending on the type of operation they describe
and whether the variables are true or false. True corresponds to the digital value of 1,
while false corresponds to 0. These diagrams show various electronic switches, called
gates, each of which performs a specific Boolean operation. There are three basic
Boolean operations, which may be used alone or in combination: logical multiplication
(AND gate), logical addition (OR gate), and logical inversion (NOT gate). The
accompanying tables, called truth tables, map all of the potential input combinations
against yielded outputs.
Encarta Encyclopedia
© Microsoft Corporation. All Rights Reserved.
The elemental blocks in a logic device are called digital logic gates. An AND gate has
two or more inputs and a single output. The output of an AND gate is true only if all the
inputs are true. An OR gate has two or more inputs and a single output. The output of an
OR gate is true if any one of the inputs is true and is false if all of the inputs are false. An
INVERTER has a single input and a single output terminal and can change a true signal
to a false signal, thus performing the NOT function. More complicated logic circuits are
built up from elementary gates. They include flip-flops (binary switches), counters,
comparators, adders, and more complex combinations.
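These elementary gates can be modeled in a few lines of code. The sketch below is illustrative only (Python, with invented helper names), not part of any logic-design tool; it reproduces the NOR truth table given in the figure caption above.

```python
# Minimal models of the elementary logic gates described above.
# All names are illustrative; inputs and outputs are 0 or 1.

def AND(*inputs):      # true only if all the inputs are true
    return int(all(inputs))

def OR(*inputs):       # true if any one of the inputs is true
    return int(any(inputs))

def NOT(a):            # the inverter: changes a true signal to a false signal
    return 1 - a

def NOR(*inputs):      # OR followed by NOT, as in the figure above
    return NOT(OR(*inputs))

# Truth table for the two-input NOR gate (A, B -> F):
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", NOR(a, b))
# Output: 0 0 -> 1, 0 1 -> 0, 1 0 -> 0, 1 1 -> 0
```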
To perform a desired overall function, large numbers of logic elements may be connected
in complex circuits. In some cases microprocessors are utilized to perform many of the
switching and timing functions of the individual logic elements (see Microprocessor).
The processors are specifically programmed with individual instructions to perform a
given task or tasks. An advantage of microprocessors is that they make possible the
performance of different logic functions, depending on the program instructions that are
stored. A disadvantage of microprocessors is that normally they operate in a sequential
mode, which may be too slow for some applications. In these cases specifically designed
logic circuits are used.
VIII. RECENT DEVELOPMENTS

The development of integrated circuits has revolutionized the fields of communications, information handling, and computing. Integrated circuits reduce the size of devices and
lower manufacturing and system costs, while at the same time providing high speed and
increased reliability. Digital watches, hand-held computers, and electronic games are
systems based on microprocessors. Other developments include the digitalization of
audio signals, where the frequency and amplitude of an audio signal are coded digitally
by appropriate sampling techniques, that is, techniques for measuring the amplitude of
the signal at very short intervals. Digitally recorded music shows a fidelity that is not
possible using direct-recording methods. Digital playback devices of this nature have
already entered the home market. Digital storage could also form the basis of home video
systems and may significantly alter library storage systems, because much more
information can be stored on a disk for replay on a television screen than can be
contained in a book.
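The sampling technique mentioned above can be sketched as follows. This is an illustrative Python fragment; the 44,100-samples-per-second rate and 16-bit quantization are the parameters used by compact-disc audio, and the 440 Hz tone is an arbitrary example signal.

```python
import math

# Sample a 440 Hz tone at 44,100 samples per second, then quantize
# each measured amplitude to a 16-bit integer, as compact-disc
# audio does.
SAMPLE_RATE = 44_100      # measurements per second
FREQ = 440.0              # example signal frequency in hertz

def sample(n):
    """Amplitude of the signal at the n-th sampling instant."""
    t = n / SAMPLE_RATE
    return math.sin(2 * math.pi * FREQ * t)

def quantize(x, bits=16):
    """Code an amplitude in [-1, 1] as a signed integer."""
    levels = 2 ** (bits - 1) - 1
    return round(x * levels)

digital = [quantize(sample(n)) for n in range(10)]
print(digital)   # the first ten digital codes of the tone
```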
Medical electronics has progressed from computerized axial tomography, or the use of
CAT or CT scanners (see X Ray), to systems that can discriminate more and more of the
organs of the human body. Devices that can view blood vessels and the respiratory
system have been developed as well. Ultrahigh definition television also promises to
substitute for many photographic processes, because it eliminates the need for silver.

Today's research to increase the speed and capacity of computers concentrates mainly on
the improvement of integrated circuit technology and the development of even faster
switching components. Very-large-scale integrated (VLSI) circuits that contain several
hundred thousand components on a single chip have been developed. Very-high-speed
computers are being developed in which semiconductors may be replaced by
superconducting circuits using Josephson junctions (see Josephson Effect) and operating
at temperatures near absolute zero.
Microsoft Encarta 2009. © 1993-2008 Microsoft Corporation. All rights reserved.
Microprocessor, electronic circuit that functions as the central processing unit (CPU) of a
computer, providing computational control. Microprocessors are also used in other
advanced electronic systems, such as computer printers, automobiles, and jet airliners.
The microprocessor is one type of ultra-large-scale integrated circuit. Integrated circuits,
also known as microchips or chips, are complex electronic circuits consisting of
extremely tiny components formed on a single, thin, flat piece of material known as a
semiconductor. Modern microprocessors incorporate transistors (which act as electronic
amplifiers, oscillators, or, most commonly, switches), in addition to other components
such as resistors, diodes, capacitors, and wires, all packed into an area about the size of a
postage stamp.
A microprocessor consists of several different sections: The arithmetic/logic unit (ALU)
performs calculations on numbers and makes logical decisions; the registers are special
memory locations for storing temporary information much as a scratch pad does; the
control unit deciphers programs; buses carry digital information throughout the chip and
computer; and local memory supports on-chip computation. More complex
microprocessors often contain other sections, such as sections of specialized memory,
called cache memory, to speed up access to external data-storage devices. Modern
microprocessors operate with bus widths of 64 bits (binary digits, or units of information
represented as 1s and 0s), meaning that 64 bits of data can be transferred at the same
time.
A crystal oscillator in the computer provides a clock signal to coordinate all activities of
the microprocessor. The clock speed of the most advanced microprocessors allows
billions of computer instructions to be executed every second.
II. COMPUTER MEMORY

Because the microprocessor alone cannot accommodate the large amount of memory
required to store program instructions and data, such as the text in a word-processing
program, transistors can be used as memory elements in combination with the
microprocessor. Separate integrated circuits, called random-access memory (RAM) chips,
which contain large numbers of transistors, are used in conjunction with the
microprocessor to provide the needed memory. There are different kinds of random-access memory. Static RAM (SRAM) holds information as long as power is turned on
and is usually used as cache memory because it operates very quickly. Another type of
memory, dynamic RAM (DRAM), is slower than SRAM and must be periodically
refreshed with electricity or the information it holds is lost. DRAM is more economical
than SRAM and serves as the main memory element in most computers.
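The refresh requirement that separates DRAM from SRAM can be illustrated with a toy model. The leak rate, threshold, and time steps below are invented purely for illustration; real DRAM refresh is handled by dedicated circuitry on millisecond timescales.

```python
# Toy model of a DRAM cell: the stored charge leaks away over time,
# and the bit is lost unless the cell is refreshed (rewritten).
# All numbers here are arbitrary illustrative values.

class DRAMCell:
    THRESHOLD = 0.5                 # below this, a stored 1 reads back as 0

    def __init__(self, bit):
        self.bit = bit
        self.charge = 1.0 if bit else 0.0

    def tick(self):                 # charge leaks a little each time step
        self.charge *= 0.8

    def refresh(self):              # read the bit and rewrite it at full charge
        self.charge = 1.0 if self.read() else 0.0

    def read(self):
        return 1 if self.charge > self.THRESHOLD else 0

cell = DRAMCell(1)
for step in range(6):
    cell.tick()
    if step % 2 == 1:
        cell.refresh()              # refreshed in time: the stored 1 survives
print(cell.read())                  # -> 1; without refresh() it would read 0
```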

SEMICONDUCTORS

WORLD OF SCIENCE
Manufacturing an Integrated Circuit
Beginning in the late 20th century, integrated circuits based on silicon chips shrank
rapidly in price and size while expanding in capacity. These advances in chip technology
contributed to a boom in the computer industry. The creation of a single silicon chip
requires hundreds of manufacturing steps. In this Scientific American article, Intel
Corporation president and chief operating officer Craig R. Barrett describes the chip
manufacturing process from design through completion.
All integrated circuits are fabricated from semiconductors, substances whose ability to
conduct electricity ranks between that of a conductor and that of a nonconductor, or
insulator. Silicon is the most common semiconductor material. Because the electrical
conductivity of a semiconductor can change according to the voltage applied to it,
transistors made from semiconductors act like tiny switches that turn electrical current on
and off in just a few nanoseconds (billionths of a second). This capability enables a
computer to perform many billions of simple instructions each second and to complete
complex tasks quickly.
The basic building block of most semiconductor devices is the diode, a junction, or
union, of negative-type (n-type) and positive-type (p-type) materials. The terms n-type
and p-type refer to semiconducting materials that have been doped, that is, have had
their electrical properties altered by the controlled addition of very small quantities of
impurities such as boron or phosphorus. In a diode, current flows in only one direction:
across the junction from the p- to n-type material, and then only when the p-type material
is at a higher voltage than the n-type. The voltage applied to the diode to create this
condition is called the forward bias. The opposite voltage, for which current will not flow,
is called the reverse bias. An integrated circuit contains millions of p-n junctions, each
serving a specific purpose within the millions of electronic circuit elements. Proper
placement and biasing of p- and n-type regions restrict the electrical current to the correct
paths and ensure the proper operation of the entire chip.
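The one-way behavior of the junction is described quantitatively by the ideal (Shockley) diode equation, a standard result not spelled out in the article. A small numerical sketch, assuming a typical room-temperature thermal voltage of about 25 mV and an arbitrary saturation current:

```python
import math

# Ideal (Shockley) diode equation: I = Is * (exp(V / Vt) - 1).
# Is (saturation current) and Vt (thermal voltage) are typical
# illustrative values, not taken from the article.
IS = 1e-12      # saturation current, amperes
VT = 0.025      # thermal voltage, volts (~room temperature)

def diode_current(v):
    return IS * (math.exp(v / VT) - 1)

print(diode_current(0.7))    # forward bias: appreciable current flows
print(diode_current(-0.7))   # reverse bias: essentially no current (about -Is)
```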
From Sand to Silicon: Manufacturing an Integrated Circuit
By Craig R. Barrett
The fundamental device of the digital world is the integrated circuit, a small square of
silicon containing millions of transistors. It is probably the most complex of man-made
products. Although it looks flat, it is in fact a three-dimensional structure made by
painstakingly building up on the silicon base several microscopically thin layers of
materials that both insulate and conduct electricity. Assembled according to a pattern
carefully worked out in advance, these layers form the transistors, which function as
switches controlling the flow of electricity through the circuit, which is also known as a
chip. 'On' and 'off' switches manipulate the binary code that is at the core of what a
computer does.
Building a chip typically requires several hundred manufacturing steps that take weeks to
complete. Each step must be executed perfectly if the chip is to work. The conditions are
demanding. For example, because a speck of dust can ruin a chip, the manufacturing has
to be done in a 'clean room' containing less than one submicron particle of dust per cubic
foot of air (in contrast, the average living room has between 100,000 and one million
particles per cubic foot of air). Much of the equipment needed for making chips embodies
the highest of high technology, with the result that chip factories, which cost between $1 billion and $2 billion for a state-of-the-art facility, are among the costliest of
manufacturing plants.

A basic technology of chipmaking is the 'planar' process devised in 1957 by Jean Hoerni
of Fairchild Semiconductor. It provided a means of creating a layered structure on the
silicon base of a chip. This technology was pivotal in Robert N. Noyce's development of the integrated circuit in 1958. (Noyce later became co-founder with Gordon E. Moore of
Intel Corporation, the company that invented the microprocessor and has become the
world's leading supplier of semiconductor chips.) Bridging the gap between the
transistor and the integrated circuit, the planar technology opened the way to the
manufacturing process that now produces chips. The hundreds of individual steps in that
process can be grouped into a few basic operations.
Chip Design
The first operation is the design of the chip. When tens of millions of transistors are to be
built on a square of silicon about the size of a child's fingernail, the placing and
interconnections of the transistors must be meticulously worked out. Each transistor must
be designed for its intended function, and groups of transistors are combined to create
circuit elements such as inverters, adders and decoders. The designer must also take into
account the intended purpose of the chip. A processor chip carries out instructions in a
computer, and a memory chip stores data. The two types of chips differ somewhat in
structure. Because of the complexity of today's chips, the design work is done by
computer, although engineers often print out an enlarged diagram of a chip's structure to
examine it in detail.
The Silicon Crystal
The base material for building an integrated circuit is a silicon crystal. Silicon, the most
abundant element on the earth except for oxygen, is the principal ingredient of beach
sand. Silicon is a natural semiconductor, which means that it can be altered to be either an
insulator or a conductor. Insulators, such as glass, block the passage of electricity;
conductors, such as copper, let electricity pass through. To make a silicon crystal, raw
silicon obtained from quartz rock is treated with chemicals that remove contaminants
until what remains is almost 100 percent silicon. This purified silicon is melted and then
formed into cylindrical single crystals called ingots. The ingots are sliced into wafers
about 0.725 millimeter (0.03 inch) thick. In a step called planarization they are polished
with a slurry until they have a flawless, mirror-smooth surface. At present, most of the
wafers are 200 millimeters (eight inches) in diameter, but the industry is moving toward
achieving a standard diameter of 300 millimeters (12 inches) by 1999. Because a single
wafer yields hundreds of chips, bigger wafers mean that more chips can be made at one
time, holding down the cost per chip.
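The economics here follow from simple geometry: chip output per wafer scales roughly with wafer area, and a 300-millimeter wafer has 2.25 times the area of a 200-millimeter one. A back-of-the-envelope sketch (the 100 mm² chip size is an assumed example; real yield estimates also subtract edge loss and defective dice):

```python
import math

# Rough chips-per-wafer estimate: wafer area divided by chip area.
def chips_per_wafer(wafer_diameter_mm, chip_area_mm2):
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area / chip_area_mm2)

# Moving from 200 mm to 300 mm wafers gives (150/100)^2 = 2.25x the
# area, and so roughly 2.25x as many chips per wafer.
print(chips_per_wafer(200, 100))   # ~314 chips of 100 mm^2 each
print(chips_per_wafer(300, 100))   # ~706 chips
```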
The First Layers
With the wafer prepared, the process of building the chip's circuitry begins. Making the
transistors and their interconnections entails several different basic steps that are repeated
many times. The most complex chips made today consist of 20 or more layers and may
require several hundred separate processing steps to build them up one by one.
The first layer is silicon dioxide, which does not conduct electricity and therefore serves
as an insulator. It is created by putting the wafers into a diffusion furnace, essentially a high-temperature oven, where a thin layer of oxide is grown on the wafer surface.
Removed from the furnace, the wafer is now ready for its first patterning, or
photolithographic, step. A coating of a fairly viscous polymeric liquid called photoresist,
which becomes soluble when it is exposed to ultraviolet light, is applied to the surface. A
spigot deposits a precise amount of photoresist on the wafer surface. Then the wafer is
spun so that centrifugal force spreads the liquid over the surface at an even thickness.
This operation takes place on every layer that is modified by a photolithographic
procedure called masking, described in the next step.

Masking

A mask is the device through which ultraviolet light shines to define the circuit pattern on
each layer of a chip. Because the pattern is intricate and must be positioned precisely on
the chip, the arrangement of opaque and transparent spaces on a mask must be done
carefully during a chip's design stage.
The mask image is transferred to the wafer using a computer-controlled machine known
as a stepper. It has a sophisticated lens system to reduce the pattern on the mask to the
microscopic dimensions of the chip's circuitry, requiring resolution as small as 0.25
micron. The wafer is held in place on a positioning table below the lens system.
Ultraviolet light from an arc lamp or a laser shines through the clear spaces of the mask's
intricate pattern onto the photoresist layer of a single chip. The stepper table then moves
the wafer the precise distance required to position another chip under the light. On each
chip, the parts of the photoresist layer that were struck by the light become soluble and
can be developed, much like photographic film, using organic solvents. Once the
photoresist is patterned, the wafer is ready for etching.
Etching
During this step, photoresist remaining on the surface protects parts of the underlying
layer from being removed by the acids or reactive gases used to etch the pattern on the
surface of the wafer. After etching is complete, the protective layer of photoresist is
removed to reveal electrically conducting or electrically insulating segments in the
pattern determined by the mask. Each additional layer put on the chip has a distinctive
pattern of this kind.
Adding Layers
Further masking and etching steps deposit patterns of additional materials on the chip.
These materials include polysilicon as well as various oxides and metal conductors such
as aluminum and tungsten. To prevent the formation of undesired compounds during
subsequent steps, other materials known as diffusion barriers can also be added. On each
layer of material, masking and etching create a unique pattern of conducting and
nonconducting areas. Together these patterns aligned on top of one another form the
chip's circuitry in a three-dimensional structure. But the circuitry needs fine-tuning to
work properly. The tuning is provided by doping.
Doping
Doping deliberately adds chemical impurities, such as boron or arsenic, to parts of the
silicon wafer to alter the way the silicon in each doped area conducts electricity.
Machines called ion implanters are often used to inject these impurities into the chip.
In electrical terms, silicon can be either n-type or p-type, depending on the impurity
added. The atoms in the doping material in n-type silicon have an extra electron that is
free to move. Some of the doping atoms in p-type silicon are short an electron and so
constitute what is called a hole. Where the two types adjoin, the extra electrons can flow
from the n-type to the p-type to fill the holes.
This flow of electrons does not continue indefinitely. Eventually the positively charged
ions left behind on the n-type side and the negatively charged ions on the p-type side
together create an electrical force that prevents any further net flow of electrons from the
n-type to the p-type region.
The material at the base of the chip is p-type silicon. One of the etching steps in the
manufacture of a chip removes parts of the polysilicon and silicon dioxide layers put on
the pure silicon base earlier, thus laying bare two strips of p-type silicon. Separating them
is a strip that still bears its layer of conducting polysilicon; it is the transistor's 'gate.' The
doping material now applied to the two strips of p-type silicon transforms them into n-type silicon. A positive charge applied to the gate attracts electrons below the gate in the
transistor's silicon base. These electrons create a channel between one n-type strip (the
source) and the other (the drain). If a positive voltage is applied to the drain, current will
flow from source to drain. In this mode, the transistor is 'on.' A negative charge at the gate
depletes the channel of electrons, thereby preventing the flow of current between source
and drain. Now the transistor is 'off.' It is by means of switching on and off that a
transistor represents the arrays of 1 and 0 that constitute the binary code, the language of
computers.
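The switching action just described can be reduced to a toy model. The zero-volt threshold below is a simplification; a real transistor switches at a device-specific threshold voltage.

```python
# Toy model of the n-channel transistor described above: a positive
# charge on the gate opens a channel between source and drain ("on");
# a negative charge depletes the channel ("off").

def transistor_conducts(gate_voltage):
    return gate_voltage > 0    # channel forms only under positive gate bias

for vg in (+1.0, -1.0):
    state = "on" if transistor_conducts(vg) else "off"
    print(f"gate at {vg:+.1f} V -> transistor {state}")
```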

Done many times in many layers, these operations provide the chip with its multitude of
transistors. But just as provision must be made to run electrical wires and plumbing pipes
between floors of a building, provision must be made in chips for interconnecting the
transistors so they form an integrated circuit.
Interconnections
This final step begins with further masking and etching operations that open a thin layer
of electrical contacts between layers of the chip. Then aluminum is deposited and
patterned using photolithography to create a form of wiring that links all the chip's
transistors. Aluminum is chosen for this application because it makes good electrical
contact with silicon and also bonds well to silicon dioxide.
This step completes the processing of the wafer. Now the individual chips are tested, using tiny electrical probes, to ensure that all their electrical connections work. Next, a
machine called a dicer cuts up the wafer into individual chips, and the good chips are
separated from the bad. The good chips, usually most of the wafer's crop, are mounted
onto packaging units with metal leads. Wire bonders then attach these metal leads to the
chips. The electrical contacts between the chip's surface and the leads are made with tiny
gold or aluminum wires about 0.025 millimeter (0.001 inch) in diameter. Once the
packaging process is complete, the finished chips are sent to do their digital work.
Source: Reprinted with permission. Copyright 1997 by Scientific American, Inc. All
Rights Reserved.

Microsoft Encarta 2009. © 1993-2008 Microsoft Corporation. All rights reserved.
Magnistors.
Magnistors, developed by the Potter Instrument Co. of Great Neck, N.Y., are basic circuit
elements with all the advantages of transistors and of magnetic cores without their
disadvantages. The devices are of two general types: transient and permanent. The first
has no memory; the second will remember its set or reset condition indefinitely, even if
the power is disconnected.
Magnistors have either two or three windings wound on special ferroceramic material.
One, the signal coil, is used to carry a sine-wave signal in the range of 100 kilocycles to
15 megacycles or pulses having a repetition rate of zero to 10 megacycles. By varying the
direct current applied to the second winding (control coil), the impedance of the signal
winding to the carrier frequency or pulser can be varied over a ratio of 500 to one, if
desired. Power levels in the range of microwatts to tens of watts can be controlled.
Magnistors are used to gate, switch, amplify, and record. They also form logical arrays
for adding, subtracting, shifting, and other computer functions. Their principal
applications will be in high-speed computers, data handling, automatic control and
magnetic tape systems.
Tradic.
A digital computer, which eliminates problems caused by vacuum-tube failure and heat,
has been developed by Bell Telephone Laboratories for the U.S. Air Force. Containing
nearly 800 transistors and called TRADIC (Transistor-Digital Computer), it is believed to
be the first all-transistor computer designed for aircraft. Transistors, tiny solid devices
invented at Bell Laboratories, are highly efficient amplifiers which need no warm-up
period and use very little power. TRADIC requires less than 100 watts to operate. This is
one twentieth of the power needed by comparable vacuum-tube computers.
In addition to transistors, TRADIC contains nearly 11,000 germanium diodes which serve
as the equivalent of tiny one-way switches. They are capable of operating thousands of
times faster than their mechanical counterparts.
TRADIC can do 60,000 additions or subtractions and 3,000 multiplications or divisions a
second. A typical problem fed into the machine requires it to go through about 250
different steps of computation. It can run through an entire problem of that complexity
and provide an answer in about 15 thousandths of a second. Mathematical instructions are
given TRADIC by means of a 'plug-in' unit resembling a small breadboard. Plug-ins are
set up in advance with interconnecting wires to represent the problems at hand. Numbers
to be processed are put into the machine by means of simple switches. To handle the
successive steps of complex computation, a machine must have a means of storing
information until it is needed. (When a man works on an involved mathematical problem,
he usually jots down the answer to each part as it is solved, then refers to this frequently
as he proceeds.) TRADIC automatically transfers each subanswer to 'built-in' memory
packages while it continues to solve the problem.
When TRADIC has been completely designed, it will probably occupy less than three
cubic feet of the critical space in a modern military aircraft. This small volume is possible
because transistors and germanium diodes occupy much less space than their
conventional counterparts. TRADIC development at Bell Telephone Laboratories was
under the direction of J. H. Felker.

Object Code

Object Code, in computer science, translated version of source code, the statements of a particular computer program that can either be read by the computer directly, or read by
the computer after it is further translated. Object code may also be called target code or
the object program.
Software engineers write source code in high-level programming languages that people
can read and understand. A computer's central processing unit (CPU) cannot recognize and execute the instructions in source code until it is translated into machine code, a low-level language written in binary digits, 1s and 0s, called bits. These bits activate specific
logic gates or circuits in the computer.
A computer uses one of two translation programs, known as language processors, to
convert source code into object code: an assembler or a compiler. Assemblers produce a
strict one-for-one translation of source code into machine code. Compilers, on the other
hand, can produce either machine code directly or create intermediate versions of the
original source code in assembly language, which can then be translated to machine code
in another step. Object code is the translation of source code produced by the language
processor, so it may either be in machine code or in a language that can be reduced to
machine code.
Object code should not be confused with objects in computer science. Objects are self-contained, modular instruction sets that are used as programming units in object-oriented
programming (OOP) languages, such as Smalltalk, C++, and Java.
Machine Code, or machine language, in computer science, low-level programming
language that can be understood directly by a computer's central processing unit (CPU).
Machine code consists of sequences of binary numbers, or bits, which are usually
represented by 1s and 0s, and which form the basic instructions that guide the operation
of a computer. The specific set of instructions that constitutes a machine code depends on
the make and model of the computer's CPU. For instance, the machine code for the
Motorola 68000 microprocessor differs from that used in the Intel Pentium
microprocessor.
Writing programs in machine code is tedious and time-consuming since the programmer
must keep track of each specific bit in an instruction. Another difficulty with
programming directly in machine code is that errors are very hard to detect because the
program is represented by rows and columns of 1s and 0s. To overcome these problems,
American mathematician Grace Murray Hopper developed assembly language in 1952.
Assembly language uses easy-to-remember terms, such as SUB for a subtraction operation and MPY for a multiplication operation, to represent specific instructions in
machine code.
Assembly language makes programming much easier, but an assembly language program
must be translated into machine code before it can be understood and run by the
computer. Special utility programs called assemblers perform the function of translating
assembly language code into machine code. Like machine code, the specific set of
instructions that make up an assembly language depends on the make and model of the computer's CPU. Other programming languages, such as Fortran, BASIC, and C++, make
programming even easier than with assembly language and are used to write the majority
of programs. These languages, called high-level languages, are closer in form to natural
languages and allow very complicated operations to be written in compact notation.
Like assembly languages, all high-level languages must first be translated into machine
code before they can be run by a computer. To accomplish this, programmers use various
types of utility programs, such as compilers, assemblers, linkers, and debuggers, which
help them translate high-level language into machine code. The computer code used to
write a program is called source code before being translated into machine code, and
object code after it has been translated.
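The translation chain described above can be illustrated with a toy assembler. The SUB and MPY mnemonics echo the examples given earlier, but the opcode numbers and instruction format are invented for illustration and correspond to no real CPU.

```python
# A toy one-for-one assembler: each mnemonic maps to one invented
# opcode, and each assembly statement becomes one machine word.
# These opcodes are illustrative only, not any real CPU's encoding.
OPCODES = {"LOAD": 0b0001, "SUB": 0b0010, "MPY": 0b0011, "STORE": 0b0100}

def assemble(source):
    machine_code = []
    for line in source.strip().splitlines():
        mnemonic, operand = line.split()
        # 4-bit opcode in the high half, 4-bit operand in the low half
        word = (OPCODES[mnemonic] << 4) | int(operand)
        machine_code.append(word)
    return machine_code

program = """
LOAD 7
SUB 3
STORE 2
"""
for word in assemble(program):
    print(f"{word:08b}")    # the program as rows of 1s and 0s
```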
Microsoft Encarta 2009. © 1993-2008 Microsoft Corporation. All rights reserved.

Assembly Language, in computer science, a type of low-level computer programming language in which each statement corresponds directly to a single machine instruction.
Assembly languages are thus specific to a given processor. After writing an assembly
language program, the programmer must use the assembler specific to the microprocessor
to translate the assembly language into machine code. Assembly language provides
precise control of the computer, but assembly language programs written for one type of
computer must be rewritten to operate on another type. Assembly language might be used
instead of a high-level language for any of three major reasons: speed, control, and
preference. Programs written in assembly language usually run faster than those
generated by a compiler; use of assembly language lets a programmer interact directly
with the hardware (processor, memory, display, and input/output ports). See also
Compiler; High-Level Language; Low-Level Language; Machine Code
Microsoft Encarta 2009. © 1993-2008 Microsoft Corporation. All rights reserved.
Source Code, in computer science, human-readable program statements written in a high-level or assembly language, as opposed to object code, which is derived from the source
code and designed to be machine-readable. See also High-Level Language; Low-Level
Language; Programming Language.
Microsoft Encarta 2009. © 1993-2008 Microsoft Corporation. All rights reserved.
Microcomputer, desktop- or notebook-size computing device that uses a microprocessor
as its central processing unit, or CPU. In common usage, the term microcomputer is
equivalent to personal computer. When they first appeared, microcomputers were
considered single-user devices, and they were capable of handling only 4, 8, or 16 bits of
information at one time. Over time the distinction between microcomputers and large,
mainframe computers (as well as the smaller mainframe-type systems called
minicomputers) has become blurred, as newer microcomputer models have increased the
speed and data-handling capabilities of their CPUs into the 32-bit and 64-bit, multiuser
range.
Microsoft Encarta 2009.
BIT (COMPUTER)
Bit (computer), abbreviation for binary digit, the smallest unit of information in a
computer. A bit is represented by the numbers 1 and 0, which correspond to the states on
and off, true and false, or yes and no.
Bits are the building blocks for all information processing that goes on in digital
electronics and computers. Bits actually represent the state of a transistor in the logic
circuits of a computer. The number 1 (meaning on, yes, or true) is used to represent a
transistor with current flowing through it, essentially a closed switch. The number 0 (meaning off, no, or false) is used to represent a transistor with no current flowing through it, an open switch. All computer information processing can be understood in
terms of vast arrays of transistors (3.1 million transistors on the Pentium chip) switching
on and off, depending on the bit value they have been assigned.
Bits are usually combined into larger units called bytes. A byte is composed of eight bits.
The values that a byte can take on range between 00000000 (0 in decimal notation) and
11111111 (255 in decimal notation). This means that a byte can represent 2⁸ (2 raised to the eighth power) or 256 possible states (0-255). Bytes are combined into groups of 1 to 8 bytes called words. The size of the words used by a computer's central processing unit
(CPU) depends on the bit-processing ability of the CPU. A 32-bit processor, for example,
can use words that are up to four bytes long (32 bits).

Computers are often classified by the number of bits they can process at one time, as well
as by the number of bits used to represent addresses in their main memory (RAM).
Computer graphics are described by the number of bits used to represent pixels (short for
picture elements), the smallest identifiable parts of an image. In monochrome images,
each pixel is made up of one bit. In 256-color and gray-scale images, each pixel is made
up of one byte (eight bits). In true color images, each pixel is made up of at least 24 bits.
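These bit depths translate directly into storage requirements. A quick calculation for an assumed 640 × 480 image:

```python
# Storage needed for a 640 x 480 image at the bit depths named above.
WIDTH, HEIGHT = 640, 480
for name, bits_per_pixel in [("monochrome", 1), ("256-color", 8), ("true color", 24)]:
    total_bytes = WIDTH * HEIGHT * bits_per_pixel // 8
    print(f"{name}: {total_bytes:,} bytes")
# monochrome: 38,400 bytes; 256-color: 307,200 bytes; true color: 921,600 bytes
```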
Microsoft Encarta 2009. © 1993-2008 Microsoft Corporation. All rights reserved.
Byte
Byte, in computer science, a unit of information built from bits, the smallest units of
information used in computers. Bits have one of two absolute values, either 0 or 1. These
bit values physically correspond to whether transistors and other electronic circuitry in a
computer are on or off. A byte is usually composed of 8 bits, although bytes composed of
16 bits are also used. See Number Systems.
The particular sequence of bits in the byte encodes a unit of information such as a
keyboard character. One byte typically represents a single character such as a number,
letter, or symbol. Most computers operate by manipulating groups of 2, 4, or 8 bytes
called words.
Software designers use computers and software to combine bytes in complex ways and
create meaningful data in the form of text files or binary files (files that contain data to be
processed and interpreted by a computer). Bits and bytes are the basis for creating all
meaningful information and programs on computers. For example, bits form bytes, which
represent characters and can be combined to form words, sentences, paragraphs, and
ultimately entire documents.
Bytes are the key unit for measuring quantity of data. Data quantity is commonly
measured in kilobytes (1024 bytes), megabytes (1,048,576 bytes), or gigabytes (about 1
billion bytes). A regular floppy disk normally holds 1.44 megabytes of data, which
equates to approximately 1,400,000 keyboard characters, among other types of data. At
this storage capacity, a single disk can hold a document approximately 700 pages long,
with 2000 characters per page.
The term byte was first used in 1956 by German-born American computer scientist
Werner Buchholz to prevent confusion with the word bit. He described a byte as a group
of bits used to encode a character. The eight-bit byte was created that year and was soon
adopted by the computer industry as a standard.
The number of bits used by a computer's central processing unit (CPU) for addressing information represents one measure of a computer's speed and power. Computers today
often use 16, 32, or 64 bits in groups of 2, 4, and 8 bytes in their addressing. See Address
(Computer).
Microsoft Encarta 2009. © 1993-2008 Microsoft Corporation. All rights reserved.

Dynamic RAM

Dynamic RAM (DRAM), in computer science, a form of semiconductor random access memory (RAM). Dynamic RAMs store information in integrated circuits that contain
capacitors. Because capacitors lose their charge over time, dynamic RAM boards must
include logic to refresh (recharge) the RAM chips continuously. While a dynamic
RAM is being refreshed, it cannot be read by the processor; if the processor must read the
RAM while it is being refreshed, one or more wait states occur. Because their internal
circuitry is simple, dynamic RAMs are more commonly used than static RAMs, even
though they are slower. A dynamic RAM can hold approximately four times as much data
as a static RAM chip of the same complexity. See also Computer.
Microsoft Encarta 2009. © 1993-2008 Microsoft Corporation. All rights reserved.
Logic circuits in computers (see Computer; Electronics) carry out the different arithmetic
operations of binary numbers; the conversion of decimal numbers to binary numbers for
processing, and of binary numbers to decimal numbers for the readout, is done
electronically.
Microsoft Encarta 2009. © 1993-2008 Microsoft Corporation. All rights reserved.
Personal Computer
I. INTRODUCTION
Personal Computers at Home
Personal computers (PCs) have changed the way families track finances, write reports,
and play games. PCs also help students enhance math, spelling, and reading skills.

Personal Computer (PC), computer in the form of a desktop or laptop device designed for
use by a single person. PCs function using a display monitor and a keyboard. Since their
introduction in the 1980s, PCs have become powerful and extremely versatile tools that
have revolutionized how people work, learn, communicate, and find entertainment. Many
households in the United States now have PCs, thanks to affordable prices and software
that has made PCs easy to use without special computer expertise. Personal computers
are also a crucial component of information technology (IT) and play a key role in
modern economies worldwide.
The usefulness and capabilities of personal computers can be greatly enhanced by
connection to the Internet and World Wide Web, as well as to smaller networks that link
to local computers or databases. Personal computers can also be used to access content
stored on compact discs (CDs) or digital versatile discs (DVDs), and to transfer files to
personal media devices and video players.
Personal computers are sometimes called microcomputers or micros. Powerful PCs
designed for professional or technical use are known as workstations. Other names that
reflect different roles for PCs include home computers and small-business computers.
The PC is generally larger and more powerful than handheld computers, including
personal digital assistants (PDAs) and gaming devices.

II. PARTS OF A PERSONAL COMPUTER

The different types of equipment that make a computer function are known as hardware;
the coded instructions that make a computer work are known as software.
A. Types of Hardware

Personal Computer Components
A typical personal computer has components to display and print information (monitor
and laser printer); input commands and data (keyboard and mouse); retrieve and store
information (CD-ROM and disk drives); and communicate with other computers
(modem).
PCs consist of electronic circuitry called a microprocessor, such as the central processing
unit (CPU), that directs logical and arithmetical functions and executes computer
programs. The CPU is located on a motherboard with other chips. A PC also has
electronic memory known as random access memory (RAM) to temporarily store
programs and data. A basic component of most PCs is a disk drive, commonly in the form
of a hard disk or hard drive. A hard disk is a magnetic storage device in the form of a disk
or disks that rotate. The magnetically stored information is read or modified using a drive
head that scans the surface of the disk.
Removable storage devices, such as floppy drives, compact disc (CD-ROM) and digital versatile disc (DVD) drives, and additional hard drives, can be used to permanently
store as well as access programs and data. PCs may have CD or DVD burners that
allow users to write or rewrite data onto recordable discs. Other external devices to
transfer and store files include memory sticks and flash drives, small solid-state devices
that do not have internal moving parts.
Computer Docking Station
A computer docking station enables a notebook, or laptop, computer to operate the hard
drive and peripheral devices of a desktop computer. When removed from the docking
station, the smaller computer is portable and functions as a notebook.
Cards are printed circuit boards that can be plugged into a PC to provide additional
functions such as recording or playing video or audio, or enhancing graphics (see
Graphics Card).
A PC user enters information and commands with a keyboard or with a pointing device
such as a mouse. A joystick may be used for computer games or other tasks. Information
from the PC is displayed on a video monitor or on a liquid crystal display (LCD) video
screen. Accessories such as speakers or headphones allow the user to listen to audio. Files,
photographs, or documents can be printed on laser, dot-matrix, or inkjet printers. The
various components of the computer system are physically attached to the PC through the
bus. Some PCs have wireless systems that use infrared or radio waves to link to the
mouse, the keyboard, or other components.
PC connections to the Internet or local networks may be through a cable attachment or a
phone line and a modem (a device that permits transmission of digital signals). Wireless
links to the Internet and networks operate through a radio modem. Modems also are used
to link other devices to communication systems.

B. Types of Software
Computer Software
PCs are run by software called the operating system. Widely used operating systems
include Microsoft's Windows, Apple's Mac OS, and Linux. Other types of software
called applications allow the user to perform a wide variety of tasks such as word
processing; using spreadsheets; manipulating or accessing data; or editing video,
photographs, or audio files.
Drivers are special software programs that operate specific devices that can be either
crucial or optional to the functioning of the computer. Drivers help operate keyboards,
printers, and DVD drives, for example.
Most PCs use software to run a screen display called a graphical user interface (GUI). A
GUI allows a user to open and move files, work with applications, and perform other
tasks by clicking on graphic icons with a mouse or other pointing device.
In addition to text files, PCs can store digital multimedia files such as photographs, audio
recordings, and video. These media files are usually in compressed digital formats such
as JPEG for photographs, MP3 for audio, and MPEG for video.
III. USES FOR PERSONAL COMPUTERS

Personal Computer
A personal computer (PC) enables people to carry out an array of tasks, such as word
processing and slide presentations. With a connection to the Internet, users can tap into a
vast amount of information on the World Wide Web, send e-mail, and download music
and videos. As a family tool, the PC may be used for school, research, communication,
record keeping, work, and entertainment.
Encarta Encyclopedia
Jose Luis Pelaez, Inc./Corbis
The wide variety of tasks that PCs can perform in conjunction with the PC's role as a
portal to the Internet and World Wide Web have had profound effects on how people
conduct their lives and work, and pursue education.
In the home, PCs can help with balancing the family checkbook, keeping track of
finances and investments, and filing taxes, as well as preserving family documents for
easy access or indexing recipes. PCs are also a recreational device for playing computer
games, watching videos with webcasting, downloading music, saving photographs, or
cataloging records and books. Together with the Internet, PCs are a link to social contacts
through electronic mail (e-mail), text-messaging, personal Web pages, blogs, and chat
groups. PCs can also allow quick and convenient access to news and sports information
on the World Wide Web, as well as consumer information. Shopping from home over the
Internet with a PC generates billions of dollars in the economy.

Computers in Schools
Students work on their classroom computers as a teacher supervises. Nearly every
school in the United States has desktop computers that can be used by students.
Computers aid education by providing students with access to learning tools and
research information.

PCs can greatly improve productivity in the workplace, allowing people to collaborate on
tasks from different locations and easily share documents and information. Many people
with a PC at home are able to telecommute, working from home over the Internet. Laptop
PCs with wireless connections to the Internet allow people to work in virtually any
environment when away from the office. PCs can help people to be self-employed.
Special software can make running a small business from home much easier. PCs can
also assist artists, writers, and musicians with their creative work, or allow anyone to
make their own musical mixes at home. Medical care has been improved and costs have
been reduced by transferring medical records into electronic form that can be accessed
through PC terminals.
PCs have become an essential tool in education at all levels, from grammar school to
university. Many school children are given laptop computers to help with schoolwork and
homework. Classrooms of all kinds commonly use PCs. Many public libraries make PCs
available to members of the public. The Internet and World Wide Web provide access to
enormous amounts of information, some of it free and some of it available through
subscription or fee. Online education as a form of distance education or correspondence
education is a growing service, allowing people to take classes and work on degrees at
their convenience using PCs and the Internet.
PCs can also be adapted to help people with disabilities, using special devices and
software. Special keyboards, cursors that translate head movements, or accessories such
as foot mice can allow people with limited physical movement to use a PC. PCs can also
allow people with speech or auditory disabilities to understand or generate speech. Visual
disabilities can be aided by speech-recognition software that allows spoken commands to
work a PC or for e-mail and text to be read aloud. Text display can also be magnified for
individuals with low vision.
Apple Macintosh Computer
The Apple Macintosh, released in 1984, was among the first personal computers to use
a graphical user interface. A graphical user interface enables computer users to easily
execute commands by clicking on pictures, words, or icons with a pointing device
called a mouse.
The first true modern computers were developed during World War II (1939-1945) and
used vacuum tubes. These early computers were the size of houses and as expensive as
battleships, but they had none of the computational power or ease of use that are common
in modern PCs. More powerful mainframe computers were developed in the 1950s and
1960s, but needed entire rooms and large amounts of electrical power to operate.

Technology and the Media

In this essay, British historian and broadcaster Asa Briggs looks at how technological
advances made in recent decades have created a revolution in the media, allowing people
to communicate in ways they had never dreamed of. Briggs notes that although these new
modes of communication, including the television, the personal computer, the Internet, and other digital technologies, are available throughout many parts of the world, these
media may be used in different ways depending upon the prevailing political and social
circumstances. Briggs also raises questions about the future of the media and how the
unfolding media revolution will affect people's lives.
A major step toward the modern PC came in the 1960s when a group of researchers at the
Stanford Research Institute (SRI) in California began to explore ways for people to
interact more easily with computers. The SRI team developed the first computer mouse
and other innovations that would be refined and improved in the 1970s by researchers at
the Xerox PARC (Palo Alto Research Center, Inc.). The PARC team developed an
experimental PC design in 1973 called Alto, which was the first computer to have a
graphical user interface (GUI).
Two crucial hardware developments would help make the SRI vision of computers
practical. The miniaturization of electronic circuitry as microelectronics and the invention
of integrated circuits and microprocessors enabled computer makers to combine the
essential elements of a computer onto tiny silicon computer chips, thereby increasing
computer performance and decreasing cost.
The integrated circuit, or IC, was developed in 1959 and permitted the miniaturization of
computer-memory circuits. The microprocessor first appeared in 1971 with the Intel
4004, created by Intel Corporation, and was originally designed to be the computing and
logical processor of calculators and watches. The microprocessor reduced the size of a
computer's CPU to the size of a single silicon chip.
Because a CPU calculates, performs logical operations, contains operating instructions,
and manages data flows, the potential existed for developing a separate system that could
function as a complete microcomputer. The first such desktop-size system specifically
designed for personal use appeared in 1974; it was offered by Micro Instrumentation
Telemetry Systems (MITS). The owners of the system were then encouraged by the
editor of Popular Electronics magazine to create and sell a mail-order computer kit
through the magazine.
The Altair 8800 is considered to be the first commercial PC. The Altair was built from a
kit and programmed by using switches. Information from the computer was displayed by
light-emitting diodes on the front panel of the machine. The Altair appeared on the cover
of Popular Electronics magazine in January 1975 and inspired many computer
enthusiasts who would later establish companies to produce computer hardware and
software. The computer retailed for slightly less than $400.

Computer Circuit Board
Integrated circuits (ICs) make the microcomputer possible; without them, individual
circuits and their components would take up far too much space for a compact computer
design. Also called a chip, the typical IC consists of elements such as resistors,
capacitors, and transistors packed on a single piece of silicon. In smaller, more densely packed ICs, circuit elements may be only a few atoms in size, which makes it possible
to create sophisticated computers the size of notebooks. A typical computer circuit board
features many integrated circuits connected together.

The demand for the microcomputer kit was immediate, unexpected, and totally
overwhelming. Scores of small entrepreneurial companies responded to this demand by
producing computers for the new market. The first major electronics firm to manufacture
and sell personal computers, Tandy Corporation (Radio Shack), introduced its model in
1977. It quickly dominated the field, because of the combination of two attractive
features: a keyboard and a display terminal using a cathode-ray tube (CRT). It was also
popular because it could be programmed and the user was able to store information by
means of cassette tape.
American computer designers Steven Jobs and Stephen Wozniak created the Apple II in
1977. The Apple II was one of the first PCs to incorporate a color video display and a
keyboard that made the computer easy to use. Jobs and Wozniak incorporated Apple
Computer Inc. the same year. Some of the new features they introduced into their own
microcomputers were expanded memory, inexpensive disk-drive programs and data
storage, and color graphics. Apple Computer went on to become the fastest-growing
company in U.S. business history. Its rapid growth inspired a large number of similar
microcomputer manufacturers to enter the field. Before the end of the decade, the market
for personal computers had become clearly defined.
In 1981 IBM introduced its own microcomputer model, the IBM PC. Although it did not
make use of the most recent computer technology, the IBM PC was a milestone in this
burgeoning field. It proved that the PC industry was more than a current fad, and that the
PC was in fact a necessary tool for the business community. The PC's use of a 16-bit
microprocessor initiated the development of faster and more powerful microcomputers,
and its use of an operating system that was available to all other computer makers led to
what was effectively a standardization of the industry. The design of the IBM PC and its
clones soon became the PC standard, and an operating system developed by Microsoft
Corporation became the dominant software running PCs.
A graphical user interface (GUI), a visually appealing way to represent computer commands and data on the screen, was first developed in 1983 when Apple introduced the Lisa, but the new user interface did not gain widespread notice until 1984 with the
introduction of the Apple Macintosh. The Macintosh GUI combined icons (pictures that
represent files or programs) with windows (boxes that each contain an open file or
program). A pointing device known as a mouse controlled information on the screen.
Inspired by earlier work of computer scientists at Xerox Corporation, the Macintosh user
interface made computers easy and fun to use and eliminated the need to type in complex
commands (see User Interface).
Beginning in the early 1970s, computing power doubled about every 18 months due to
the creation of faster microprocessors, the incorporation of multiple microprocessor
designs, and the development of new storage technologies. A powerful 32-bit computer
capable of running advanced multiuser operating systems at high speeds appeared in the
mid-1980s. This type of PC blurred the distinction between microcomputers and
minicomputers, placing enough computing power on an office desktop to serve all small
businesses and most medium-size businesses.
Static RAM
Static RAM (SRAM), in computer science, a form of semiconductor memory (RAM).
Static RAM storage is based on the logic circuit known as a flip-flop, which retains the
information stored in it as long as there is enough power to run the device. A static RAM
chip can store only about one-fourth as much data as a dynamic RAM chip of the same
complexity, but static RAM does not require refreshing and is usually much faster than
dynamic RAM. It is also more expensive. Static RAMs are usually reserved for use in
caches. See also Dynamic RAM.
Microsoft Encarta 2009. © 1993-2008 Microsoft Corporation. All rights reserved.

Number Systems
I. INTRODUCTION
Ancient Mathematical Characters

Number Systems, in mathematics, various notational systems that have been or are being
used to represent the abstract quantities called numbers. A number system is defined by
the base it uses, the base being the number of different symbols required by the system to
represent any of the infinite series of numbers. Thus, the decimal system in universal use today (except in computer applications) requires ten different symbols, or digits, to
represent numbers and is therefore a base-10 system.

Throughout history, many different number systems have been used; in fact, any whole
number greater than 1 can be used as a base. Some cultures have used systems based on
the numbers 3, 4, or 5. The Babylonians used the sexagesimal system, based on the
number 60, and the Romans used (for some purposes) the duodecimal system, based on
the number 12. The Mayas used the vigesimal system, based on the number 20. The
binary system, based on the number 2, was used by some tribes and, together with the
system based on 8, is used today in computer systems. For historical background, see
Numerals.
II. PLACE VALUES

Except for computer work, the universally adopted system of mathematical notation
today is the decimal system, which, as stated, is a base-10 system. As in other number
systems, the position of a symbol in a base-10 number denotes the value of that symbol in
terms of exponential values of the base. That is, in the decimal system, the quantity
represented by any of the ten symbols used0, 1, 2, 3, 4, 5, 6, 7, 8, and 9depends on
its position in the number. Thus, the number 3,098,323 is an abbreviation for (3 106) +
(0 105) + (9 104) + (8 103) + (3 102) + (2 101) + (3 100, or 3 1). The first 3
(reading from right to left) represents 3 units; the second 3, 300 units; and the third 3,
3 million units. In this system the zero plays a double role; it represents naught, and it
also serves to indicate the multiples of the base 10: 100, 1000, 10,000, and so on. It is
also used to indicate fractions of integers: 1/10 is written as 0.1, 1/100 as 0.01, 1/1000 as
0.001, and so on.
Two digits (0 and 1) suffice to represent a number in the binary system; 6 digits (0, 1, 2, 3, 4, 5) are needed to represent a number in the base-6, or senary, system; and 12 digits (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, t for ten, and e for eleven) are needed to represent a number in the duodecimal system. The number 30155 in the base-6 system is the number (3 × 6⁴) + (0 × 6³) + (1 × 6²) + (5 × 6¹) + (5 × 6⁰) = 3959 in the decimal system; the number 2et in the duodecimal system is the number (2 × 12²) + (11 × 12¹) + (10 × 12⁰) = 430 in the decimal system.
To write a given base-10 number n as a base-b number, divide (in the decimal system) n
by b, divide the quotient by b, the new quotient by b, and so on until the quotient 0 is
obtained. The successive remainders are the digits in the base-b expression for n. For
example, to express 3959 (base 10) in the base 6, one writes

3959 ÷ 6 = 659, remainder 5
659 ÷ 6 = 109, remainder 5
109 ÷ 6 = 18, remainder 1
18 ÷ 6 = 3, remainder 0
3 ÷ 6 = 0, remainder 3

from which, as above, 3959₁₀ = 30155₆. (The base is frequently written in this way as a
subscript of the number.) The larger the base, the more symbols are required, but fewer
digits are needed to express a given number. The number 12 is convenient as a base
because it is exactly divisible by 2, 3, 4, and 6; for this reason, some mathematicians have advocated adoption of base 12 in place of base 10.
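The repeated-division procedure described above translates directly into a short program. Below is a minimal sketch in Python; the function name to_base and its digit alphabet are illustrative choices, not part of the original article.

def to_base(n, b):
    # Convert a non-negative integer n (given in decimal) to a base-b string
    # by repeated division: the successive remainders, read from last to
    # first, are the base-b digits.
    if n == 0:
        return "0"
    digits = "0123456789te"  # 't' for ten, 'e' for eleven, as in the duodecimal example
    result = ""
    while n > 0:
        n, r = divmod(n, b)  # quotient and remainder
        result = digits[r] + result
    return result

print(to_base(3959, 6))   # -> '30155', matching the worked example above
print(to_base(430, 12))   # -> '2et', the duodecimal example from the text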
III. BINARY SYSTEM
Comparison of Decimal (base-10) and Binary (base-2)
Computer systems frequently process decimal numbers in binary terms. For instance, in a
system called binary-coded decimal (BCD), each of the decimal digits 0 to 9 is coded in 4
bits. The boxes on this chart are similar to the 4-bit groupings in BCD.
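To make the caption concrete, a few lines of Python can produce the BCD encoding of a decimal number; the helper to_bcd is hypothetical, written here purely for illustration.

def to_bcd(n):
    # Binary-coded decimal: each decimal digit becomes its own 4-bit group.
    return " ".join(format(int(digit), "04b") for digit in str(n))

print(to_bcd(173))   # -> '0001 0111 0011'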

The binary system plays an important role in computer technology. The first 20 numbers
in the binary notation are 1, 10, 11, 100, 101, 110, 111, 1000, 1001, 1010, 1011, 1100,
1101, 1110, 1111, 10000, 10001, 10010, 10011, 10100. The zero here also has the role of
place marker, as in the decimal system. Any decimal number can be expressed in the
binary system by the sum of different powers of two. For example, starting from the
right, 10101101 represents (1 × 2⁰) + (0 × 2¹) + (1 × 2²) + (1 × 2³) + (0 × 2⁴) + (1 × 2⁵) + (0 × 2⁶) + (1 × 2⁷) = 173. This example can be used for the conversion of binary numbers into decimal numbers. For the conversion of decimal numbers to binary numbers, the same principle can be used, but the other way around. Thus, to convert, the highest power of two that does not exceed the given number is sought first, and a 1 is placed in the corresponding position in the binary number. For example, the highest power of two in the decimal number 519 is 2⁹ = 512. Thus, a 1 can be inserted as the 10th digit, counted from the right: 1000000000. In the remainder, 519 - 512 = 7, the highest power of 2 is 2² = 4, so the third zero from the right can be replaced by a 1: 1000000100. The next remainder, 3, consists of the sum of two powers of 2: 2¹ + 2⁰, so the first and second zeros from the right are replaced by 1: 519₁₀ = 1000000111₂.
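The conversion method just described, placing a 1 at each power of two that fits into what remains of the number, can be sketched as follows; the function name is an illustrative choice.

def decimal_to_binary(n):
    # Place a 1 at the highest power of two that does not exceed what
    # remains of n, and a 0 at every skipped power.
    power = 1
    while power * 2 <= n:   # find the highest power of two <= n
        power *= 2
    bits = ""
    while power >= 1:
        if n >= power:
            bits += "1"
            n -= power
        else:
            bits += "0"
        power //= 2
    return bits

print(decimal_to_binary(519))   # -> '1000000111', as derived above
print(int("1000000111", 2))     # -> 519, checked with Python's built-in parser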
Arithmetic operations in the binary system are extremely simple. The basic rules are: 1 + 1 = 10, and 1 × 1 = 1. Zero plays its usual role: 1 × 0 = 0, and 1 + 0 = 1. Addition, subtraction, and multiplication are done in a fashion similar to that of the decimal system, as in these examples (decimal equivalents in parentheses):

101 + 011 = 1000 (5 + 3 = 8)
110 - 011 = 011 (6 - 3 = 3)
101 × 011 = 1111 (5 × 3 = 15)
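These rules are easy to verify mechanically. A small Python check follows; the particular sums and products are the illustrative examples above, and Python's 0b prefix writes an integer in binary.

print(bin(0b1 + 0b1))      # -> '0b10'   (1 + 1 = 10)
print(bin(0b101 + 0b11))   # -> '0b1000' (101 + 11 = 1000, i.e. 5 + 3 = 8)
print(bin(0b110 - 0b11))   # -> '0b11'   (110 - 11 = 11, i.e. 6 - 3 = 3)
print(bin(0b101 * 0b11))   # -> '0b1111' (101 x 11 = 1111, i.e. 5 x 3 = 15)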

Because only two digits (or bits) are involved, the binary system is used in computers,
since any binary number can be represented by, for example, the positions of a series of
on-off switches. The on position corresponds to a 1, and the off position to a 0.
Instead of switches, magnetized dots on a magnetic tape or disk also can be used to
represent binary numbers: a magnetized dot stands for the digit 1, and the absence of a
magnetized dot is the digit 0. Flip-flops, electronic devices that can carry only two distinct voltages at their outputs and that can be switched from one state to the other by an impulse, can also be used to represent binary numbers; the two voltages
correspond to the two digits. Logic circuits in computers (see Computer; Electronics)
carry out the different arithmetic operations of binary numbers; the conversion of decimal
numbers to binary numbers for processing, and of binary numbers to decimal numbers for
the readout, is done electronically.
Microsoft Encarta 2009. © 1993-2008 Microsoft Corporation. All rights reserved.

Address (computer)

Address (computer), in computer science, a code that indicates the location of data in a computer's internal memory or on a storage device such as a hard disk or a floppy disk (see Computer Memory). This code is used in computer programs to help the computer access information in its memory. The term address may also refer to a computer user's electronic mail (e-mail) address, or to a computer's or a computer file's network address.
A computer's main memory, also called random-access memory (RAM), is made up of a
series of circuits able to store information. These circuits are organized into groups called
memory locations, each with a unique address. The address is a binary number, meaning
that it is a sequence of 1s and 0s (bits). RAM addresses are typically 8 to 32 bits long
depending on the type of central processing unit (CPU) in the computer. Bits are usually
combined into 8-bit groups called bytes.
A computer can only address a limited amount of data in its main memory. The limit
depends on the number of bits the computers CPU can process. A 16-bit processor, for
example, can handle a maximum of 16 bits (2 bytes) of data at a time. The data that the
CPU processes comes from RAM, which limits the size of individual pieces of data in
RAM to binary numbers 16 bits long. Since each digit in a binary number can have one
of two values, either a 0 or a 1, the total number of memory locations that a 16-bit CPU can address is 2¹⁶, or 65,536 (64K of memory). This is a small amount for most
applications, so various software techniques are used by the operating system to expand
RAM in 16-bit computers.
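The arithmetic behind the 64K limit is a single power of two; a quick check in Python (the 32-bit line is added for comparison and is not part of the original text):

address_bits = 16
print(2 ** address_bits)   # -> 65536 memory locations (64K)
print(2 ** 32)             # -> 4294967296 locations for a 32-bit address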
Other memory storage devices such as hard disks, floppy disks, and CD-ROMs also store
data to specific locations designated by unique addresses.
Another use of addresses in computer science refers to the mailboxes of origin and
destination for e-mail messages. E-mail travels from a user's computer, which has a unique e-mail address, to a recipient's computer, which also has a unique e-mail address.
E-mail addresses have three parts. The first part is the username, the second is the name
of the host computer, and the third is a designation for the type of organization or
institution where the host resides. For example, in the e-mail address
sally@cs.university.edu, the username is sally, the host name is cs.university, and the institution suffix is edu, indicating an educational institution. The username is separated
from the host name by the @ symbol, while the host name is separated from the
institution designation by a period (.). The host name can have multiple parts separated
by periods (.) to specifically indicate the appropriate host at a large institution.
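The three-part structure just described can be pulled apart mechanically. The sketch below is a simplified, illustrative parser (real address syntax has more special cases), applied to the example address from the text.

def parse_email(address):
    # Split off the username at '@', then split the institution suffix
    # off the host name at the final period.
    username, host = address.split("@")
    host_name, _, suffix = host.rpartition(".")
    return username, host_name, suffix

print(parse_email("sally@cs.university.edu"))
# -> ('sally', 'cs.university', 'edu')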
Addresses are also used to label files and documents on internets, intranets and the
Internet, as well as to label Web pages on the World Wide Web. These addresses are
similar to e-mail addresses in form and are called Uniform Resource Locators (URLs).
An example of a URL is: http://sally.cs.university.edu/myfiles.html. A URL also has three
parts: the type of protocol (the specific method by which information is transferred over
the network), the host name, and the directory and file name. A computer uses a URL to
locate a specific network site, which it can then download.
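Python's standard library can split a URL into the three parts listed above, which makes the structure easy to see:

from urllib.parse import urlparse

url = urlparse("http://sally.cs.university.edu/myfiles.html")
print(url.scheme)   # -> 'http', the protocol
print(url.netloc)   # -> 'sally.cs.university.edu', the host name
print(url.path)     # -> '/myfiles.html', the directory and file name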
Microsoft Encarta 2009. © 1993-2008 Microsoft Corporation. All rights reserved.
Base (mathematics)


Base (mathematics), the number of different single-digit symbols used in a particular number system. In the usual counting system of numbers, the decimal system (with
symbols 0, 1, 2, 3, 4, 5, 6, 7, 8, 9), the base is 10. In the binary number system, which has
only the symbols 1 and 0, the base is 2. A base is also a number that, when raised to a particular power (that is, when multiplied by itself a particular number of times, as in 10² = 10 × 10 = 100), has a logarithm equal to the power. For example, the logarithm of 100
to the base 10 is 2. In geometry, the term base is used to denote the line or area on which
a polygon or solid stands.
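The relationship between a base, a power, and a logarithm can be verified in a line each; a quick check in Python:

import math

print(10 ** 2)             # -> 100
print(math.log(100, 10))   # -> 2.0, the logarithm of 100 to the base 10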
Microsoft Encarta 2009. © 1993-2008 Microsoft Corporation. All rights reserved.

16-Bit Machine

16-Bit Machine, in computer science, a computer that works with information
in groups of 16 bits (binary digits) at a time. A description of a computer as a 16-bit
machine can either refer to the word size (basic working unit) of its microprocessor or,
more commonly, refer to the number of bits transferred along the computer's data bus
(data path along which information travels to and from the microprocessor) at a single
time. A 16-bit microprocessor thus has a word size of 16 bits, or 2 bytes; a 16-bit data bus
has 16 data lines, so it ferries information through the system in sets of 16 bits at a time.
The IBM PC/AT and similar models based on the Intel 80286 microprocessor are 16-bit
machines, in terms of both the word size of the microprocessor and the size of the data
bus. The Apple Macintosh Plus and Macintosh SE have a 32-bit microprocessor (the
Motorola MC68000) but a 16-bit data bus and are generally considered 16-bit machines.
See also 80286; 8-Bit Machine; 32-Bit Machine; Bit; Microcomputer.
Microsoft Encarta 2009. © 1993-2008 Microsoft Corporation. All rights reserved.
Computer

Computer, machine that performs tasks, such as calculations or electronic
communication, under the control of a set of instructions called a program. Programs
usually reside within the computer and are retrieved and processed by the computer's
electronics. The program results are stored or routed to output devices, such as video
display monitors or printers. Computers perform a wide variety of activities reliably,
accurately, and quickly.
II. USES OF COMPUTERS
Computers in Spaceflight

When a software program on a home computer crashes, it can be frustrating, but the
problem would be far more grave if that crash occurred in an onboard computer assisting
in the launch or reentry of a spacecraft. Software engineer James E. Tomayko provides an
introduction to the challenges posed by spaceflight computing over the past several
decades and details how the National Aeronautics and Space Administration (NASA)
approached those challenges. This excerpt is from a 1988 publication of the NASA
History Office.

People use computers in many ways. In business, computers track inventories with bar
codes and scanners, check the credit status of customers, and transfer funds
electronically. In homes, tiny computers embedded in the electronic circuitry of most
appliances control the indoor temperature, operate home security systems, tell the time,
and turn videocassette recorders (VCRs) on and off. Computers in automobiles regulate
the flow of fuel, thereby increasing gas mileage, and are used in anti-theft systems.
Computers also entertain, creating digitized sound on stereo systems or computer-animated features from a digitally encoded laser disc. Computer programs, or
applications, exist to aid every level of education, from programs that teach simple
addition or sentence construction to programs that teach advanced calculus. Educators
use computers to track grades and communicate with students; with computer-controlled
projection units, they can add graphics, sound, and animation to their communications
(see Computer-Aided Instruction). Computers are used extensively in scientific research
to solve mathematical problems, investigate complicated data, or model systems that are
too costly or impractical to build, such as testing the air flow around the next generation
of aircraft. The military employs computers in sophisticated communications to encode
and unscramble messages, and to keep track of personnel and supplies.
III. HOW COMPUTERS WORK

Computer System
A typical computer system consists of a central processing unit (CPU), input
devices, storage devices, and output devices. The CPU consists of an
arithmetic/logic unit, registers, control unit, and internal bus. The
arithmetic/logic unit carries out arithmetical and logical operations. The registers
store data and keep track of operations. The control unit regulates and controls
various operations. The internal bus connects the units of the CPU with each other
and with external components of the system. For most computers, the principal
input devices are a keyboard and a mouse. Storage devices include hard disks, CD-ROM drives, and random access memory (RAM) chips. Output devices that display
data include monitors and printers.

The physical computer and its components are known as hardware. Computer hardware
includes the memory that stores data and program instructions; the central processing unit (CPU) that carries out program instructions; the input devices, such as a keyboard or
mouse, that allow the user to communicate with the computer; the output devices, such as
printers and video display monitors, that enable the computer to present information to
the user; and buses (hardware lines or wires) that connect these and other computer
components. The programs that run the computer are called software. Software generally
is designed to perform a particular type of task, for example, to control the arm of a robot to weld a car's body, to write a letter, to display and modify a photograph, or to
direct the general operation of the computer.
A. The Operating System

When a computer is turned on, it searches for instructions in its memory. These instructions tell the computer how to start up. Usually, one of the first sets of these
instructions is a special program called the operating system, which is the software that
makes the computer work. It prompts the user (or other machines) for input and
commands, reports the results of these commands and other operations, stores and
manages data, and controls the sequence of the software and hardware actions. When the
user requests that a program run, the operating system loads the program in the computer's memory and runs the program. Popular operating systems, such as Microsoft
Windows and the Macintosh system (Mac OS), have graphical user interfaces (GUIs)
that use tiny pictures, or icons, to represent various files and commands. To access these
files or commands, the user clicks the mouse on the icon or presses a combination of keys
on the keyboard. Some operating systems allow the user to carry out these tasks via
voice, touch, or other input methods.
B. Computer Memory

Inside a Computer Hard Drive
The inside of a computer hard disk drive consists of four main components. The round
disk platter is usually made of aluminum, glass, or ceramic and is coated with a magnetic medium that contains all the data stored on the hard drive. The yellow armlike
device that extends over the disk platter is known as the head arm and is the device
that reads the information off of the disk platter. The head arm is attached to the head
actuator, which controls the head arm. Not shown is the chassis, which encases and
holds all the hard disk drive components.

To process information electronically, data are stored in a computer in the form of binary
digits, or bits, each having two possible representations (0 or 1). If a second bit is added
to a single bit of information, the number of representations is doubled, resulting in four
possible combinations: 00, 01, 10, or 11. A third bit added to this two-bit representation
again doubles the number of combinations, resulting in eight possibilities: 000, 001, 010,
011, 100, 101, 110, or 111. Each time a bit is added, the number of possible patterns is
doubled. Eight bits is called a byte; a byte has 256 possible combinations of 0s and 1s.
See also Expanded Memory; Extended Memory.
A byte is a useful quantity in which to store information because it provides enough
possible patterns to represent the entire alphabet, in lower and upper cases, as well as
numeric digits, punctuation marks, and several character-sized graphics symbols, including some non-English characters. A byte also can be interpreted as a pattern that represents a number between 0 and 255. A kilobyte (1,024 bytes) can store about
1,000 characters; a megabyte can store about 1 million characters; a gigabyte can store
about 1 billion characters; and a terabyte can store about 1 trillion characters. Computer
programmers usually decide how a given byte should be interpreted, that is, as a single
character, a character within a string of text, a single number, or part of a larger number.

Numbers can represent anything from chemical bonds to dollar figures to colors to
sounds.
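The doubling rule, and the double life of a byte as a number or a character, are easy to demonstrate. A small Python sketch (the choice of the byte pattern 01000001 is illustrative):

print(2 ** 8)              # -> 256 possible patterns in one 8-bit byte
byte_value = 0b01000001    # one particular 8-bit pattern
print(byte_value)          # -> 65, the pattern read as a number
print(chr(byte_value))     # -> 'A', the same pattern read as a character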
The physical memory of a computer is either random access memory (RAM), which can
be read or changed by the user or computer, or read-only memory (ROM), which can be
read by the computer but not altered in any way. One way to store memory is within the
circuitry of the computer, usually in tiny computer chips that hold millions of bytes of
information. The memory within these computer chips is RAM. Memory also can be
stored outside the circuitry of the computer on external storage devices, such as magnetic
floppy disks, which can store about 2 megabytes of information; hard drives, which can
store gigabytes of information; compact discs (CDs), which can store up to 680
megabytes of information; and digital video discs (DVDs), which can store 8.5 gigabytes
of information. A single CD can store nearly as much information as several hundred
floppy disks, and some DVDs can hold more than 12 times as much data as a CD.
C. The Bus

The bus enables the components in a computer, such as the CPU and the memory circuits,
to communicate as program instructions are being carried out. The bus is usually a flat
cable with numerous parallel wires. Each wire can carry one bit, so the bus can transmit
many bits along the cable at the same time. For example, a 16-bit bus, with 16 parallel
wires, allows the simultaneous transmission of 16 bits (2 bytes) of information from one
component to another. Early computer designs utilized a single or very few buses.
Modern designs typically use many buses, some of them specialized to carry particular
forms of data, such as graphics.
D. Input Devices

Light Pen
Light pens are electronic pointers that allow users to modify designs on-screen. The
hand-held pointer contains sensors that send signals to the computer whenever light
is recorded. The computer's screen is not lit up all at once, but traced row-by-row by an electron beam sixty times every second. Because of this, the computer is able to determine the pen's position by noting exactly when the pen detects the electron
beam passing its tip. Light pens are often used in computer-aided design and
computer-aided manufacture (CAD and CAM) technology because of the flexibility
they provide. Here, an engineer uses a light pen to modify a technical drawing on a
computer display screen.
Input devices, such as a keyboard or mouse, permit the computer user to communicate
with the computer. Other input devices include a joystick, a rodlike device often used by
people who play computer games; a scanner, which converts images such as photographs
into digital images that the computer can manipulate; a touch panel, which senses the
placement of a users finger and can be used to execute commands or access files; and a
microphone, used to input sounds such as the human voice, which can activate computer
commands in conjunction with voice recognition software. Tablet computers are being
developed that will allow users to interact with their screens using a penlike device.
E. The Central Processing Unit

Information from an input device or from the computer's memory is communicated via the bus to the central processing unit (CPU), which is the part of the computer that translates commands and runs programs. The CPU is a microprocessor chip, that is, a single piece of silicon containing millions of tiny, microscopically wired electrical components. Information is stored in a CPU memory location called a register. Registers can be thought of as the CPU's tiny scratchpad, temporarily storing instructions or data.
When a program is running, one special register called the program counter keeps track
of which program instruction comes next by maintaining the memory location of the next
program instruction to be executed. The CPU's control unit coordinates and times the CPU's functions, and it uses the program counter to locate and retrieve the next
instruction from memory.
In a typical sequence, the CPU locates the next instruction in the appropriate memory
device. The instruction then travels along the bus from the computer's memory to the CPU, where it is stored in a special instruction register. Meanwhile, the program counter changes, usually increasing by a small amount, so that it contains the location of the instruction that will be executed next. The current instruction is analyzed by a decoder, which determines what the instruction will do. Any data the instruction needs are retrieved via the bus and placed in the CPU's registers. The CPU executes the instruction,
and the results are stored in another register or copied to specific memory locations via a
bus. This entire sequence of steps is called an instruction cycle. Frequently, several
instructions may be in process simultaneously, each at a different stage in its instruction
cycle. This is called pipeline processing.
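The instruction cycle can be sketched as a toy simulation. Everything in the snippet below, the miniature instruction set, the single register, and the three-line program, is hypothetical, invented purely to illustrate the fetch-decode-execute sequence.

# A toy machine: a program in memory, one register, and a program counter.
memory = [
    ("LOAD", 7),     # put the value 7 in the register
    ("ADD", 5),      # add 5 to the register
    ("PRINT", None), # output the register's contents
]
register = 0
program_counter = 0

while program_counter < len(memory):
    instruction = memory[program_counter]   # fetch the next instruction
    program_counter += 1                    # advance the program counter
    opcode, operand = instruction           # decode the instruction
    if opcode == "LOAD":                    # execute it
        register = operand
    elif opcode == "ADD":
        register += operand
    elif opcode == "PRINT":
        print(register)                     # -> 12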
F. Output Devices

Once the CPU has executed the program instruction, the program may request that the
information be communicated to an output device, such as a video display monitor or a
flat liquid crystal display. Other output devices are printers, overhead projectors,
videocassette recorders (VCRs), and speakers. See also Input/Output Devices.
IV. PROGRAMMING LANGUAGES

Application of Programming Languages
Programming languages allow people to communicate with computers. Once a job
has been identified, the programmer must translate, or code, it into a list of
instructions that the computer will understand. A computer program for a given task
may be written in several different languages. Depending on the task, a
programmer will generally pick the language that will involve the least complicated
program. It may also be important to the programmer to pick a language that is
flexible and widely compatible if the program will have a range of applications.
These examples are programs written to average a list of numbers. Both C and
BASIC are commonly used programming languages. The machine interpretation
shows how a computer would process and execute the commands from the
programs.
Programming languages contain the series of commands that create software. A CPU has
a limited set of instructions known as machine code that it is capable of understanding.
The CPU can understand only this language. All other programming languages must be
converted to machine code for them to be understood. Computer programmers, however,
prefer to use other computer languages that use words or other commands because they
are easier to use. These other languages are slower because the language must be translated first so that the computer can understand it. The translation can lead to code that may be less efficient to run than code written directly in the machine's language.
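Python offers a convenient way to watch this kind of translation happen: its standard dis module prints the lower-level bytecode that a function is compiled into before it runs. (Python compiles to bytecode for a virtual machine rather than to native machine code, so this is an analogy to the process described above, not the process itself; the averaging task echoes the figure.)

import dis

def average(numbers):
    # The same task as in the figure above: average a list of numbers.
    return sum(numbers) / len(numbers)

dis.dis(average)            # prints the bytecode instructions for the function
print(average([2, 4, 6]))   # -> 4.0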
How do companies and other organizations assess the value of the computer software
programs they own, or estimate the amount of time and resources required to complete a
planned software project? Software experts have developed techniques to perform
meaningful evaluations of these questions and are now working to standardize their
methods. In this December 1998 Scientific American article, computer scientist Capers
Jones explains how these assessment techniques work and describes the ongoing efforts
for standardization.
Computer programs that can be run by a computer's operating system are called executables. An executable program is a sequence of extremely simple instructions known as machine code.
Microsoft Encarta 2009. © 1993-2008 Microsoft Corporation. All rights reserved.


Data Processing
I. INTRODUCTION

Data Processing, in computer science, the analysis and organization of data by the
repeated use of one or more computer programs. Data processing is used extensively in
business, engineering, and science and to an increasing extent in nearly all areas in which
computers are used. Businesses use data processing for such tasks as payroll preparation,
accounting, record keeping, inventory control, sales analysis, and the processing of bank
and credit card account statements. Engineers and scientists use data processing for a
wide variety of applications, including the processing of seismic data for oil and mineral
exploration, the analysis of new product designs, the processing of satellite imagery, and
the analysis of data from scientific experiments.

Data processing is divided into two kinds of processing: database processing and
transaction processing. A database is a collection of common records that can be
searched, accessed, and modified, such as bank account records, school transcripts, and
income tax data. In database processing, a computerized database is used as the central
source of reference data for the computations. Transaction processing refers to interaction
between two computers in which one computer initiates a transaction and another
computer provides the first with the data or computation required for that function.
Most modern data processing uses one or more databases at one or more central sites.
Transaction processing is used to access and update the databases when users need to
immediately view or add information; other data processing programs are used at regular
intervals to provide summary reports of activity and database status. Examples of systems
that involve all of these functions are automated teller machines, credit sales terminals,
and airline reservation systems.
II. THE DATA-PROCESSING CYCLE

The data-processing cycle represents the chain of processing events in most data-processing applications. It consists of data recording, transmission, reporting, storage,
and retrieval. The original data is first recorded in a form readable by a computer. This
can be accomplished in several ways: by manually entering information into some form
of computer memory using a keyboard, by using a sensor to transfer data onto a magnetic
tape or floppy disk, by filling in ovals on a computer-readable paper form, or by swiping
a credit card through a reader. The data are then transmitted to a computer that performs
the data-processing functions. This step may involve physically moving the recorded data
to the computer or transmitting it electronically over telephone lines or the Internet. See
Information Storage and Retrieval.
Once the data reach the computer, the computer processes them. The operations the computer
performs can include accessing and updating a database and creating or modifying
statistical information. After processing the data, the computer reports summary results to the program's operator.
As the computer processes the data, it stores both the modifications and the original data.
This storage can be both in the original data-entry form and in carefully controlled
computer data forms such as magnetic tape. Data are often stored in more than one place
for both legal and practical reasons. Computer systems can malfunction and lose all
stored data, and the original data may be needed to re-create the database as it existed
before the crash.
The final step in the data-processing cycle is the retrieval of stored information at a later
time. This is usually done to access records contained in a database, to apply new data-processing functions to the data, or, in the event that some part of the data has been lost, to re-create portions of a database. Examples of data retrieval in the data-processing
cycle include the analysis of store sales receipts to reveal new customer spending patterns
and the application of new processing techniques to seismic data to locate oil or mineral
fields that were previously overlooked.
III. HISTORY

To a large extent, data processing has been the driving force behind the creation and
growth of the computer industry. In fact, it predates electronic computers by almost 60
years. The need to collect and analyze census data became such an overwhelming task for
the United States government that in 1890 the U.S. Census Bureau contracted American
engineer and inventor Herman Hollerith to build a special purpose data-processing
system. With this system, census takers recorded data by punching holes in a paper card
the size of a dollar bill. These cards were then forwarded to a census office, where mechanical card readers were used to read the holes in each card and mechanical adding
machines were used to tabulate the results. In 1896 Hollerith founded the Tabulating
Machine Company, which later merged with several other companies and eventually
became International Business Machines Corporation (IBM).
During World War II (1939-1945) scientists developed a variety of computers designed
for specific data-processing functions. The Harvard Mark I computer was built from a
combination of mechanical and electrical devices and was used to perform calculations
for the U.S. Navy. Another computer, the British-built Colossus, was an all-electronic
computing machine designed to break German coded messages. It enabled the British to
crack German codes quickly and efficiently.
The role of the electronic computer in data processing began in 1946 with the
introduction of the ENIAC, the first all-electronic computer. The U.S. armed services
used the ENIAC to tabulate the paths of artillery shells and missiles. In 1950 Remington
Rand Corporation introduced the first nonmilitary electronic programmable computer for
data processing. This computer, called the UNIVAC, was initially sold to the U.S. Census
Bureau in 1951; several others were eventually sold to other government agencies.
With the purchase of a UNIVAC computer in 1954, General Electric Company became
the first private firm to own a computer, soon followed by Du Pont Company,
Metropolitan Life, and United States Steel Corporation. All of these companies used the
UNIVAC for commercial data-processing applications. The primary advantages of this
machine were its programmability, its high-speed arithmetic capabilities, and its ability to
store and process large business files on multiple magnetic tapes. The UNIVAC gained
national attention in 1952, when the American Broadcasting Company (ABC) used a
UNIVAC during a live television broadcast to predict the outcome of the presidential
election. Based upon less than 10 percent of the election returns, the computer correctly
predicted a landslide victory for Dwight D. Eisenhower over his challenger, Adlai E.
Stevenson.
In 1953, IBM produced the first of its computers, the IBM 701, a machine designed to be mass-produced and easily installed in a customer's building. The success of the 701
led IBM to manufacture many other machines for commercial data processing. The sales
of IBM's 650 computer were a particularly good indicator of how rapidly the business
world accepted electronic data processing. Initial sales forecasts were extremely low
because the machine was thought to be too expensive, but over 1800 were eventually
made and sold.
In the 1950s and early 1960s data processing was essentially split into two distinct areas,
business data processing and scientific data processing, with different computers
designed for each. In an attempt to keep data processing as similar to standard accounting
as possible, business computers had arithmetic circuits that did computations on strings
of decimal digits (numbers with digits that range from 0 to 9). Computers used for
scientific data processing sacrificed the easy-to-use decimal number system for the more
efficient binary number system in their arithmetic circuitry.
The need for separate business and scientific computing systems changed with the
introduction of the IBM System/360 family of machines in 1964. These machines could
all run the same data-processing programs, but at different speeds. They could also
perform either the digit-by-digit math favored by business or the binary notation favored
for scientific applications. Several models had special modes in which they could execute
programs from earlier IBM computers, especially the popular IBM 1401. From that time
on, almost all commercial computers were general-purpose.

One notable exception to the trend of general-purpose computers and programming languages is the supercomputer. Supercomputers are computers designed for high-speed
precision scientific computations. However, supercomputers are sometimes used for data
processing that is not scientific. In these cases, they must be built so that they are flexible
enough to allow other types of computations.
The division between business and scientific data processing also influenced the
development of programming languages in which application programs were written.
Two such languages that are still popular today are COBOL (COmmon Business Oriented
Language) and FORTRAN (FORmula TRANslation). Both of these programming languages
were developed in the late 1950s and early 1960s, with COBOL becoming the
programming language of choice for business data processing and FORTRAN for
scientific processing. In the 1970s other languages such as C were developed. These
languages reflected the general-purpose nature of modern computers and allowed
extremely efficient programs to be developed for almost any data-processing application.
One of the most popular languages currently used in data-processing applications is an
extension of C called C++. C++ was developed in the 1980s and is an object-oriented
language, a type of language that gives programmers more flexibility in developing
sophisticated applications than other types of programming languages.
Microsoft Encarta 2009. © 1993-2008 Microsoft Corporation. All rights reserved.
Digital

Digital, related to digits or the way they are represented. In computing, digital is virtually
synonymous with binary because the computers familiar to most people process
information coded as combinations of binary digits (bits). One bit can represent at most
two values; 2 bits, four values; 8 bits, 256 values; and so on. Values that fall between two
numbers are represented as either the lower or the higher of the two. Because digital
representation represents a value as a coded number, the range of values represented can
be very wide, although the number of possible values is limited by the number of bits
used. See also Computer; Digital-To-Analog Converter.
Microsoft Encarta 2009. © 1993-2008 Microsoft Corporation. All rights reserved.

Digital Logic
Digital Logic, also called binary logic, in computer science, a strict set of rules for
representing the relationships and interactions among numbers, words, symbols, and
other data stored or entered in the memory of a computer. Digital logic is the heart of the
operational function for all modern digital computers. The system uses binary arithmetic, in which a sequence of 1s and 0s (called bits) is used to represent a number. These bits
are combined in meaningful ways through the operation of digital logic and physically
describe electrical voltage states in a computers circuitry. Digital logic uses the bit value
1 to represent a transistor with electric current flowing through it and the bit value 0 to
represent a transistor with no current flowing through it.
The instructions that direct a computers operation are known as machine code, and they
are written as a sequence of binary digits. These binary digits, or bits, switch specific
groups of transistors, called gates, on or off (see Transistor). There are three basic logic
states, or functions, for logic gates: AND, OR, and NOT. An AND gate takes the value of
two input bits and tests them to see if they are both equal to 1. If they are, the output from
the AND gate is a 1, or true. If they are not, the AND gate will output a 0, or false. An OR
gate tests two input bits to see if either of the bits is equal to 1. If either input bit is equal to 1, the gate outputs a 1; if both input bits are 0, it outputs a 0. A NOT gate negates the
input bit, so an input of 1 results in an output of 0 and vice versa.
Combinations of logic gates in open or closed positions can be used to represent and
execute operations on data. A series of logic gates together form a logic circuit. The
output of a logic circuit can provide input to another circuit or produce the result of an
operation. Extremely complex operations can be performed using combinations of the
AND, OR, and NOT functions.
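The gates and their combinations can be modeled directly in a few lines. The sketch below expresses the three basic gates as Python functions and combines them into a half-adder, a standard circuit that adds two bits; the function names are illustrative choices, not from the original article.

def AND(a, b): return a & b   # 1 only when both inputs are 1
def OR(a, b):  return a | b   # 1 when at least one input is 1
def NOT(a):    return 1 - a   # inverts the input bit

def half_adder(a, b):
    # Adds two bits using only the basic gates; returns (sum bit, carry bit).
    # The sum bit is an XOR built from AND, OR, and NOT:
    total = AND(OR(a, b), NOT(AND(a, b)))
    carry = AND(a, b)
    return total, carry

for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))
# 0 0 (0, 0)
# 0 1 (1, 0)
# 1 0 (1, 0)
# 1 1 (0, 1)   that is, 1 + 1 = 10 in binary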
Binary logic was first proposed by 19th-century British logician and mathematician
George Boole, who in 1847 invented a two-valued system of algebra that represented
logical relationships and operations. This system of algebra, called Boolean Algebra, was
used by German engineer Konrad Zuse in the 1930s for his Z1 calculating machine. It
was also used in the design of the first digital computer in the late 1930s by American
physicist John Atanasoff and his graduate student Clifford Berry. During 1944 and 1945
Hungarian-born American mathematician John von Neumann suggested using the binary
arithmetic system for storing programs in computers. In the 1930s and 1940s British
mathematician Alan Turing and American mathematician Claude Shannon also
recognized how binary logic was well suited to the development of digital computers.
Microsoft Encarta 2009. © 1993-2008 Microsoft Corporation. All rights reserved.

Digitize

Digitize, in computer science, to convert any continuously varying source of input, such
as the lines in a drawing or a sound signal, into a series of discrete units represented (in a
computer) by the binary digits 0 and 1. A drawing or photograph, for example, can be
digitized by a scanner that converts lines and shading into combinations of 0s and 1s by
sensing different intensities of light and dark. Analog-to-digital converters are commonly
used to perform this translation. See also Aliasing; Analog-To-Digital Converter.
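Digitizing can be sketched in a few lines: sample the continuous source at regular intervals, then round each sample to the nearest of a fixed set of discrete levels. The sine-wave signal and the 3-bit resolution below are illustrative choices, not part of the original entry.

import math

levels = 8   # 3 bits per sample allow 8 discrete levels
for i in range(8):
    t = i / 8.0
    sample = math.sin(2 * math.pi * t)                   # continuous value in [-1, 1]
    quantized = round((sample + 1) / 2 * (levels - 1))   # nearest of the 8 levels
    print(format(quantized, "03b"))                      # the sample as 3 binary digits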
Microsoft Encarta 2009. © 1993-2008 Microsoft Corporation. All rights reserved.
Information Theory
Information Theory, application of mathematical principles to the problems of
transmitting and storing information. Information theory stems originally from the work
of American mathematician and electrical engineer Claude E. Shannon and, in particular,
his classic paper "A Mathematical Theory of Communication," published in 1948 in the
Bell System Technical Journal. Shannon rewrote the paper and, with Warren Weaver,
published it in book form as The Mathematical Theory of Communication in 1949.
Shannon was interested in the problems of communicating information accurately from
one place to another. He decided to approach communication from a mathematical point
of view.
Information theory focuses on the problems inherent in sending and receiving messages
and information. The theory is based on the idea that communication involves uncertain
processes, both in the selection of the message to be transmitted and in the transmission
of the message itself. Information theory provides a way to measure this uncertainty
precisely.
In information theory, the actual meaning of the message is unimportant. Instead, the
important qualities of communication are the amount of information that the message
contains, the accuracy of the transmission, and the quality of the reception. All of these
values are represented mathematically, so different messages and different
communication systems can be compared, studied, and improved.

Information theory measures the amount of information in a message by using units called bits, short for binary digits, which use only the numbers 0 and 1 (see Number
Systems). Information theory is useful because it provides a way to find the minimum
number of bits required to communicate a given message. Information theory can also
determine the maximum rate, in bits per second, at which a given communication channel
can transmit reliable information.
Information theory is primarily a theoretical study. However, it has had a profound
impact on the design of practical data communication and storage systems, such as
telephones and computers. The theory can be applied to both the transmission and the
storage of messages, because storage is nothing more than transmission in time. For
example, both making a telephone call to a friend in another city and tape recording a
message for a friend to play later in the day involve the same issues of sending and
receiving messages. In information theory, no fundamental distinction is made between
these two types of problems.
II. PARTS OF A COMMUNICATION SYSTEM

Elements of a Communication System
American electrical engineer Claude Shannon identified several fundamental
elements present in any communication system. In his model, a message originates
at a source; the message is sent by a transmitter along a channel to a receiver; and
then the message finally arrives at a destination. Information theory studies how
messages are transmitted and received. It also studies how unwanted noise in a
system can interfere with communication.
Any time a message is sent from a sender to a receiver, the different parts of the
communication system can be represented by the accompanying schematic figure titled
'Elements of a Communication System,' adapted from Shannon's work on information
theory. The model he devised to represent a communication system always consists of
five major parts: the information source, the transmitter, the channel, the receiver, and the
destination.
The information source produces (or selects) the message or the sequence of messages to
be transmitted to the destination. For example, the information source could be a distant
spacecraft and the message could be an image of a planet, or the information source could
be a rock-and-roll band and the message could be a new song.

The transmitter converts the message into a signal suitable for transmission over the
channel. For example, the transmitter could be the spacecraft telecommunication system
that converts a photograph of Jupiter into a television signal. Another example would be
the recording studio's audio equipment, which converts the rock-and-roll song into a
sequence of tiny pits on the mirrorlike surface of a compact disc (CD).
The channel is the medium that is used to transmit the signal. The channel is often noisy,
in the sense that when the signal arrives at the receiver, it may contain noise or static, or it
may be slightly garbled. For example, the channel could be the millions of kilometers of
empty space between Jupiter and Earth, with noise arising because the received signal is
so weak. Or it could be the surface of a CD, with noise occurring because of fingerprints,
dust, or scratches on the surface.

The receiver is a device that reconstructs (either exactly or approximately) the message
from the received signal. It could be a large dish antenna or the electronics in a CD
player.
The destination is the person (or thing) for which the message is intended. For example,
the destination could be a teenager interested in planetary science or an astronomer
interested in rock and roll.
Information theory is the mathematical study of these five components, individually and
in combination. Existing communication systems can be studied this way, and new
systems can be designed based on the knowledge gained. For example, information
theory can provide a way to measure the amount of information produced by a source or
to measure the ability of a noisy channel to transmit information reliably. In addition, the
theory provides the theoretical basis for data compression, which is a way to squeeze
more information into a message by eliminating redundancy, or parts of the message that
do not contain any important information. Information theory also offers guidelines for
the engineering design of transmitters and receivers.
III. MEASURING INFORMATION: THE BIT

In any communication system the message produced by the source is one of several
possible messages. The receiver will know what these possibilities are but will not know
which one has been selected. Shannon observed that any such message can be
represented by a sequence of fundamental units called bits, consisting of either 0s or 1s.
The number of bits required for a message depends on the number of possible messages:
the more possible messages (and hence the more initial uncertainty at the receiver), the
more bits required.
As a simple example, suppose a coin is flipped and the outcome (heads or tails) is to be
communicated to a person in the next room. The outcome of the flip of a coin can be
represented using one bit of information: 0 for heads and 1 for tails. Similarly, the
outcome of a football game might also be represented with one bit: 0 if the home team
loses and 1 if the home team wins. These examples emphasize one of the limitations of
information theoryit cannot measure (and does not attempt to measure) the meaning or
the importance of a message. It requires the same amount of information to distinguish
heads from tails as it does to distinguish a win from a loss: one bit.
For an example with more than two outcomes, more bits are required. Suppose a playing
card is chosen at random from a 52-card deck, and the suit chosen (hearts, spades, clubs,
or diamonds) is to be communicated. Communicating the chosen suit (one of four
possible messages) requires two bits of information, using the following simple scheme (the particular assignment of codes to suits is arbitrary):

00 = hearts
01 = spades
10 = clubs
11 = diamonds
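A short Python check of the rule that n equally likely messages require log2(n) bits, using the card-suit example; the code assignment mirrors the arbitrary scheme above.

import math

suits = {"hearts": "00", "spades": "01", "clubs": "10", "diamonds": "11"}
print(math.log2(len(suits)))   # -> 2.0 bits needed for four possible messages
print(suits["clubs"])          # -> '10', the two-bit message actually transmitted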

Binary digit signals enable language, numbers, images, patterns, and music to be
communicated through a common technology. The possibilities seem almost limitless.
They would have seemed in the past to have belonged not to science but to science
fiction. The word "information" itself seems to be inadequate. It covers entertainment, as it did in the McLuhanesque period, raising different issues, and it encompasses ways
of learning as well as of communicating. It is difficult to keep a sense of perspective
given the rate and scale of change.


