
Introduction to Computer Science [CSC 111] Lecture Note [2010] Olowonisi Victor O.

INTRODUCTION TO COMPUTER SCIENCE


COURSE OUTLINE
1. Historical Development of Computer
2. Basic Computer Concepts (Definition and Components of the Computer
System)
3. Types of Computers (Analogue, Digital and Hybrid)
4. Hardware Components
5. Software Components
6. Peripheral Devices

1.0 HISTORY OF COMPUTERS


1.1 INTRODUCTION
A computer is basically a processor of information. Its power and versatility derive
from its high speed of operation and from the fact that the information it holds may
be processed in many different forms: text, sound, video, computer-generated
graphics, etc.

Since civilizations began, many of the advances made by science and technology
have depended upon the ability to process large amounts of data and perform
complex mathematical calculations. For thousands of years, mathematicians,
scientists and businessmen have searched for computing machines that could
perform calculations and analyze data quickly and efficiently. One such device was
the abacus invented around 500 BC.
The abacus was an important counting machine in ancient Babylon, China, and
throughout Europe where it was used until the late middle ages.

In 1833, Prof. Charles Babbage, the father of the computer, developed a machine
called the Analytical Engine, which was the basis for the modern digital computer.
It was followed by a series of improvements in mechanical counting machines that
led up to the development of accurate mechanical adding machines in the 1930’s.
These machines used a complicated assortment of gears and levers to perform the
calculations but they were far too slow to be of much use to scientists. Also, a
machine capable of making simple decisions such as which number is larger was
needed. A machine capable of making decisions is called a computer.
The first computer-like machine was the Mark I, developed by a team from IBM and
Harvard University. It used mechanical telephone relays to store information and it
processed data entered on punch cards. This machine was not a true computer
since it could not make decisions.
In June 1943, work began on the world's first electronic computer. It was built at
the University of Pennsylvania as a secret military project during World War II and
was to be used to calculate the trajectory of artillery shells. It covered 1500 square
feet and weighed 30 tons. The project was not completed until 1946 but the effort
was not wasted. In one of its first demonstrations, the computer solved a problem
in 20 seconds that took a team of mathematicians three days. This machine was a
vast improvement over the mechanical calculating machines of the past because it
used vacuum tubes instead of relay switches. It contained over 17,000 of these
tubes, which were the same type of tube used in radios at that time.
The invention of the transistor made smaller and less expensive computers
possible. Although computers shrank in size, they were still huge by today’s
standards. Another innovation to computers in the 60’s was storing data on tape
instead of punch cards. This gave computers the ability to store and retrieve data
quickly and reliably.

1.2 GENERATION OF COMPUTERS


It is usual to associate each stage of computer development commonly referred to
as generation, with one sort of technological innovation or another. Each generation
usually makes possible certain things which were not possible earlier. The
characteristics of the present-day computer have been arrived at through a process of
development, most of which has occurred since the mid-1940s.

The electronic digital computers introduced in the 1950s used vacuum tubes.
Subsequent developments in electronic components drove the development of
digital computers as well: the second-generation computers used transistors.
The introduction of Integrated Circuits (ICs), also known as chips opened the
door for the development of third generation computers. A very large number of
circuit elements (transistors, diodes, resistors, etc.,) could be integrated into a very
small (less than 5 mm square) surface of silicon and hence the name IC. The third
generation computers used Small-Scale Integrated Circuits (SSI) which contain
about 10-20 components. When Large-Scale Integrated Circuits (LSI) (around
30,000 components) was developed, the fourth generation computers were
produced.

1.2.1 FIRST GENERATION COMPUTER (1945-1957)


The characteristic technology of the first generation computers was the use of
vacuum tubes as the basic blocks for building the logic parts of the computer. The
technological base was therefore circuitry consisting of wires and thermionic
valves. The valves were hollow tubes (not solid-state devices) through which
electrical pulses had to flow. These computers used magnetic drums and delay
lines for their internal storage.
Examples of first generation computers were Mark I (1944), ENIAC (1946), EDVAC
(1947), EDSAC (1949) and UNIVAC (1951).

1.2.2 SECOND GENERATION COMPUTER (1958-1963)


The characteristic technology of this generation was the transistor, which
revolutionized not only the computer industry but the whole of electronic
engineering. Although the transistor itself was developed by a team led by William
Shockley in the late 1940s at Bell Labs, it was not until the late 1950s that it began to
replace the valves of the first generation. The components of this generation's
computers (printed circuits, diodes and transistors) were based on solid-state
technology, since electricity did not have to flow through a vacuum as in the
thermionic valve. Transistors could do all that the thermionic valves could do at a
much faster rate, consumed far less electricity (power) and were physically smaller
and cheaper.
Another important technology of this age was the use of magnetic core storage
instead of magnetic drum of the first generation.
The second generation also saw the advent of magnetic tape which replaced, to
some extent punch card systems (input/output operations).
Examples of computers of this age were the IBM 7090 and IBM 7094. Their
applications included payroll and inventory processing.

1.2.3 THIRD GENERATION COMPUTERS (1964-1969)


A very important technological innovation of the 3rd generation was the Integrated
Circuit (IC). This innovation involves the manufacture of complex electronic
circuits attached to a small piece of silicon chip less than two millimeters long. The
introduction of the IC made computers faster, cheaper and smaller than the
circuits they replaced. The first set of ICs produced contained about ten or twenty
interconnected transistors and diodes, giving three or four basic circuits on a single
module. Modules of this type are referred to as Small Scale Integration (SSI).
Later, the number of transistors per chip rose to a hundred or more, so that counters
and storage registers could be fabricated on a single chip. Such modules are called
Medium Scale Integration (MSI). Later still, Large Scale Integration (LSI) came onto
the market, containing tens of thousands of transistors and diodes on one chip.
Another important feature of this generation was the replacement of the magnetic
core and magnetic drum memories of the first and second generations by cheap
Metal Oxide Semiconductor (MOS) memory, which provided fast memory access.
Typical commercial computers of this age were the IBM S/360 – S/370 series (1964-
1965), the Burroughs B5000, the CDC 6600 (1964), the CDC 7600 (1969) and the
PDP-11 series.
New applications were credit card billing, airline reservation systems and market
forecasting.

1.2.4 FOURTH GENERATION COMPUTERS (1970-1990s)


One of the results of Large Scale Integration was the production of the
microprocessor in the 4th generation. A microprocessor is a central processing
unit of a microcomputer fabricated on a small chip. We also began to see the
development of Very Large Scale Integration (VLSI) during this generation.
This generation witnessed the flooding of the market with a wide variety of
software tools like data-base management system, word processing packages,
spreadsheet packages, graphical packages and games packages, etc. It also
witnessed the enhancement of networking capabilities.
Commercial machines of this generation were the IBM 3033 and Burroughs B7700
mainframes, the HP 3000 minicomputers and the Apple II microcomputers.

New applications were mathematical modeling and simulation, electronic funds
transfer, computer-aided instruction and home applications.

1.2.5 FIFTH GENERATION COMPUTERS (Now)


The characteristic technologies of this generation are VLSI, Parallel Processing
and Artificial Intelligence (AI), whose main attractions over previous computers
are speed and power.
AI is a way of making computers appear to reason, as exemplified by the use of
robots in factories.
Applications now include financial planning management, database management,
word processing/office automation and desktop publishing.
Examples are the VAX 6000 computers, the NCR Tower 386, the Macintosh (Mac II) and
the IBM PC/AT 286.

2.0 BASIC COMPUTER CONCEPTS


To an uninformed mind, the mention of a computer gives the impression of a
machine with a giant brain flashing different colors of light. Such people feel that
these machine brains think for themselves and provide solutions to problems that
no human being has ever solved. This feeling often leads to the question 'Can a
computer think?', and the answer to this is 'No'.
The truth about the computer is that it is a combination of different electronic
machines coupled together to form what is known as the Computer System.
It stores and processes information whether numeric or non-numeric (i.e. it works
with both numbers and words). It can be given a list of instructions which, unlike
human beings, it can always remember (unless you instruct it to forget the
instructions).
Although the computer systems do the work of human beings at fantastically high
speeds, a lot of thinking is done by the humans who feed them with information
and program them to perform particular operations with the information they are
given.

2.1 DEFINITION OF A COMPUTER


A computer is an electronic device capable of executing instructions,
developed from algorithms stored in its memory, to process data fed to it and
produce the required results faster than human beings can.

A computer is an electronic machine that:


• Accepts (reads) information or data;
• Stores accepted data/information in memory until it is needed,
• Processes (manipulates) the information according to the instructions
provided by the user, and finally
• Returns the results (intelligent reports) to the user from the processed
(manipulated) data.

The computer can store and manipulate large quantities of data at very high speed,
but a computer cannot think. A computer makes decisions based on simple
comparisons such as one number being larger than another. Although the
computer can help solve a tremendous variety of problems, it is simply a machine;
it cannot solve problems on its own.

2.2 BASIC UNITS


A computer is designed using four basic units. They are

1. INPUT UNIT: Computers need to receive data and instructions in order to


solve any problem. Therefore we need to put the data and instructions into the
computer. The input unit consists of one or more input devices. The keyboard and
mouse of a computer are the most commonly used input devices.

2. CENTRAL PROCESSING UNIT (CPU): It is the main part of a computer


system; it is the electronic brain of the computer. The CPU in a personal
computer is usually a single chip. It organizes and carries out instructions that
come from either the user or from the software. It interprets the instructions in
the program and executes them one by one. It consists of three major units.

a. CONTROL UNIT: It controls and directs the transfer of program


instructions and data between various units; in other words, it controls the
electronic flow of information around the computer.

b. ARITHMETIC AND LOGIC UNIT (ALU): Arithmetic operations like


(+, -, *, ^, /), logical operations like (AND, OR, NOT) and relational operations
like (<, >, <=, >=) are carried out in this unit. It is responsible for
mathematical calculations and logical comparisons (a short illustration of these
operation types follows the list below).

c. REGISTERS: They are used to store instructions and data for further
use.

3. MEMORY UNIT: It is used to store programs and data.

4. OUTPUT UNIT: It is used to print/display the results, which are stored in the
memory unit.
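As a simple illustration of the three kinds of operations the ALU performs, the short Python sketch below (an illustrative example, not part of the original note) evaluates an arithmetic, a logical and a relational expression:

# Illustrative sketch: the ALU carries out arithmetic, logical and relational operations.
a, b = 12, 5

arithmetic = a * b + a - b        # arithmetic operation: 12*5 + 12 - 5 = 67
logical    = (a > 0) and (b > 0)  # logical operation (AND): True
relational = a >= b               # relational operation: True

print(arithmetic, logical, relational)   # 67 True True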

Secondary Storage Devices refer to floppy disks, magnetic disks, magnetic tapes,
hard disks, compact disks, etc., which are used to store large amounts of
information for future use.

2.3 BLOCK DIAGRAM OF A COMPUTER


The block diagram of a computer is shown below:

[Block diagram: Input Unit → CPU (Control Unit, ALU and Registers) ↔ Memory Unit → Output Unit]

The components of a computer are connected by using buses.
A bus is a collection of wires that carries electronic signals from one component
to another. There are standard buses such as Industry Standard Architecture (ISA),
Extended Industry Standard Architecture (EISA), Micro-Channel Architecture (MCA),
and so on. The standard bus permits the user to purchase the components from
different vendors and connect them together easily.

The processor is plugged into the computer's motherboard. The motherboard is a
rigid rectangular card containing the circuitry that connects the processor and all
the other components that make up your personal computer. In most personal
computers, some of the components are attached directly to the motherboard and
some are housed on their own small circuit boards that plug into the expansion
slots built into the motherboard.
The various input and output devices have a standard way of connecting to the CPU
and Memory. These are called interface standards. Some popular interface
standards are the RS-232C and Small Computer System Interconnect (SCSI). The
places where the standard interfaces are provided are called ports.

2.4 INPUT DEVICES


A computer would be useless without some way for you to interact with it because
the machine must be able to receive your instructions and deliver the results of
these instructions to you. Input devices accept instructions and data from you
the user. Some popular input devices are listed below.

Examples of Input Devices:


TEXT INPUT DEVICES
• Keyboard - The most common input device is the keyboard. It is used to input
letters, numbers, and commands from the user by depressing buttons (referred
to as keys), much like a typewriter. The most common English-language key
layout is the QWERTY layout.

POINTING DEVICES
• Mouse – The mouse is a small device held in the hand and pushed along a flat
surface. It can move the cursor in any direction. Inside the mouse a small ball is
kept, and the ball touches the pad through a hole at the bottom of the mouse.
When the mouse is moved, the ball rolls. This movement of the ball is converted
into electronic signals and sent to the computer. The mouse is very popular in
modern computers that use Windows and other Graphical User Interface (GUI)
applications.
• Trackball - a pointing device consisting of an exposed protruding ball housed
in a socket that detects rotation about two axes.

GAMING DEVICES
• Joystick - a general control device that consists of a handheld stick that pivots
around one end, to detect angles in two or three dimensions.
• Gamepad - a general game controller held in the hand that relies on the digits
(especially thumbs) to provide input.
• Game controller - a specific type of controller specialized for certain gaming
purposes.

IMAGE, VIDEO INPUT DEVICES


• Image scanner - a device that provides input by analyzing images, printed
text, handwriting, or an object.
• Webcam - a low resolution video camera used to provide visual input that can
be easily transferred over the internet.

AUDIO INPUT DEVICES


• Microphone - an acoustic sensor that provides input by converting sound into
an electrical signal

2.5 OUTPUT DEVICES

Examples of Output Devices:

AUDIO OUTPUT DEVICES


• Speakers - a device that converts analog audio signals into the equivalent air
vibrations in order to make audible sound.
• Headset - a device similar in functionality to computer speakers, used mainly
so as not to disturb others nearby.

VIDEO OUTPUT DEVICES

• Monitor or Video Display Unit (VDU)


Monitors provide a visual display of data. A monitor looks like a television set.
Monitors are of different types and have different display capabilities.

Other output devices are given below:


• Printer
• Drum Plotter
• Flat Bed Plotter
• Microfilm and Microfiche
• Graphic Display device (Digitizing Tablet)
• Speech Output Unit

3.0 CLASSIFICATION (TYPES) OF COMPUTERS


The need to gather, collate and process data and retrieve the processed
information easily, accurately and speedily for effective decision making has given
rise to the manufacture of different types of computers.

3.1 CLASSIFICATION ACCORDING TO SIZE


The four major types of computer, based on features such as size, cost and
simplicity, are:-
1. Super Computers
2. Mainframe Computer
3. Minicomputers
4. Microcomputers

1. Super Computers, otherwise known as monsters, are the largest and fastest
computers. They require efficient cooling systems and only a few have
been installed. They are used in research and forecasting.

2. Mainframe computers are very large, often filling an entire room. They can
store an enormous amount of information, can perform many tasks at the same
time, can communicate with many users at the same time (multi-user
environment), and are very expensive. The price of a mainframe computer
frequently runs into the millions of dollars. Mainframe computers usually have
many terminals connected to them. These terminals look like small computers
but they are only devices used to send and receive information from the actual
computer using wires. Terminals can be located in the same room with the
mainframe computer, but they can also be in different rooms, buildings, or cities.
Large businesses, government agencies, and universities usually use this type of
computer. Examples of mainframe computers include IBM360, IBM 370, ICL
1900, ICL 2900 and DEC 10.

3. Minicomputers are similar to, but much smaller than, mainframe computers and
they are also much less expensive. The cost of these computers can vary. They
possess most of the features found on mainframe computers, but on a more
limited scale. They can still have many terminals, but not as many as the
mainframes. They can store a tremendous amount of information, but again
usually not as much as the mainframe. Data is usually input by means of a
keyboard. Because they emit less heat, minicomputers can be adapted to a number
of environments into which large mainframes cannot fit. Medium and small
businesses typically use these computers.
Examples are Digital PDP11 and VAX Range, the Data General Range, HP1000
and 3000.

4. Microcomputers are the type of computer you are using in your classes.


These computers are usually divided into desktop models and laptop models.
They are terribly limited in what they can do when compared to the larger
models discussed above because they can only be used by one person at a time,
they are much slower than the larger computers, and they cannot store nearly
as much information, but they are excellent when used in small businesses,
homes, and school classrooms. These computers are inexpensive and easy to
use. They have become an indispensable part of modern life. They can be
broken into:
a. Desktops: they are sizeable and are placed on top of tables. They are
found in offices and computer centers.
b. Laptops: they are portable and can be carried about. They can be
placed on the lap while working.
c. Palmtops: they are held on the palm when used. They are pocket-
sized.

3.2 CLASSIFICATION ACCORDING TO APPLICATION


Computers can still be further classified on the basis of application into digital,
analog and hybrid.

1. Digital Computer converts all input, whether numbers, alphanumeric or


other symbols into binary form (computer understandable form). The input data
is processed in the binary form but the processed information is converted to
decimal (the original input form). This is necessary for the ease with which the
input, the processing and the interpretation of the output are handled.
Most of today's business applications use digital computers.

2. Analogue Computers do not hold data in discrete digital form. They
measure physical quantities and give output in the form of electric signals or
calibrated moving parts. They work on continuous data, e.g. volume control. An
analogue computer simulates the system in question by representing data as a
proportional physical quantity such as voltage. This means that an analogue
computer holds data as quantities and volumes. The analogue machine, because it
has only a limited memory facility and is restricted in the type of calculation it can
perform, can only be used for certain specialized engineering and scientific
applications. Analogue devices are used for time, volume, pressure or temperature
measurement, weapon guidance, etc.

3. Hybrid Computer systems bridge the gap between digital and analogue
computers. They combine the features of both digital and analogue computers.
This means that a hybrid computer needs a conversion element which accepts
analogue inputs and outputs digital values. The device responsible for this
conversion is known as a digitizer. These computers can be employed in
process control applications and in specialized engineering and scientific applications.

4.0 HARDWARE
The electronic circuits used in building the computer that execute the software are
known as the hardware of the computer. For example, a TV bought from a shop is
hardware; the various entertainment programs transmitted from the TV station are
software. An important point to note is that hardware is a one-time expense and is
necessary, whereas software is a continuing expense and is vital. For the purposes of
this course, we will study both computer hardware and software.

Hardware is best described as a device that is physically connected to your


computer or something that can be physically touched. A CD-ROM, monitor, and
printer are all examples of computer hardware.
Almost all computer hardware requires some type of software or drivers to be
installed before it can properly communicate with the computer. If these drivers
contain problems or do not properly work with the computer, this can cause issues
with the hardware device you are attempting to install in the computer.
Therefore, a device driver is a kind of software installed on the computer system to
enable the connected device to communicate properly and effectively with the
computer. Drivers usually come on CD-ROMs or floppy diskettes.

5.0 SOFTWARE
A set of programs associated with the operation of a computer is called
software.
Software is a term used for all sorts of programs (sets of instructions) that
activate the hardware, that is, control the computer system and its
operation.

Computer software may be classified into two broad categories:


1. System Software
2. Application Software

5.1 SYSTEM SOFTWARE


System software will come provided with each computer and is necessary for the
computer's operation. This software acts as an interpreter between the
computer and user. It interprets your instructions into binary code and likewise
interprets binary code into language the user can understand. In the past you may
have used MS-DOS or Microsoft Disk Operating System which was a command line
interface. This form of system software required specific commands to be typed.
Windows XP Professional is a more recent version of system software and is known
as a graphical interface. This means that it uses graphics or "icons" to represent
various operations. You no longer have to memorize commands; you simply point
to an icon and click.
System software is written by computer manufacturers to facilitate the
optimal (maximum) use of the hardware systems and to provide a suitable environment
for writing, editing, debugging, testing and running users' programs. It is an
essential (important) part of any computer system.

Examples of system software include:-


a. Operating system
b. Language translators
c. Service programs

a. Operating System
An operating system is a collection of programs acting as an interface between the user of
the computer on one hand and the hardware on the other. It provides the user with
features that make it easier to code, test, execute, debug and maintain
programs while efficiently managing the hardware resources.
An operating system can be said to be a complex piece of software needed to
harness (control) the power of a computer system and make it easier to use.
Operating systems are programs written by the manufacturer to help the computer
user control all the various devices.

Functions of the Operating System


i. Resource Sharing
ii. Input/Output Handling
iii. Memory Management

iv. Filing System
v. Protection and Error Handling
vi. Program Interaction
vii. Program Control
viii. Accounting of Computing Resources

b. Language Translators
These are programs that can convert source code written in high-level languages
such as BASIC, FORTRAN, COBOL and C into Object Code (machine language).

BASIC - Beginners' All-Purpose Symbolic Instruction Code


FORTRAN – FORmula TRANslator
COBOL – Common Business Oriented Language

At the initial stage of computer development, programs were written directly in


machine language but that is not the case now.
There is therefore the need to translate programs written in these other languages
such as BASIC or COBOL to machine language.
The initial program written in a language different from machine language is called
the Source Program (Code) and its equivalent in machine language is called Object
Program (Code).
Examples of language translators are compilers, assemblers, interpreters and
loaders.

i. Compilers: a compiler is a translator (computer program) that accepts a source
program in one high-level language and reads and translates the entire user's
program into an equivalent program in machine language. For each high-level
language there is a different compiler, e.g. a COBOL compiler or a FORTRAN
compiler. A compiler also detects errors that arise from the use of the
language. Compilers are 'portable', i.e. a COBOL compiler on one machine is
similar to that on another, with minimal changes.

ii. Assemblers: this is a computer program that accepts a source program in


assembly language and produces an equivalent machine language program. Each
machine has its own assembly language, as a result, assembly language is
not ‘portable’ i.e. assembly language of one machine cannot run on another
machine.

iii. Interpreters: this is a translator that accepts a high-level source program and reads,
translates and executes the program one line at a time. It also reports the
errors (if any) at the end of every line. An example of this is the BASIC
interpreter.

iv. Loader: It is a system program used to store the machine language program
into the memory of the computer.

c. Service Programs
They are specialized programs that perform routine functions and are always
available to any computer user. They perform the following operation: -

i. File conversion
ii. File Copy
iii. File Reorganization
iv. File Maintenance
v. File Sorting
vi. Dumping routines
vii. Housekeeping operations
viii. Tracing routines
ix. Library program
x. Editing

5.2 APPLICATION SOFTWARE


It is the set of programs necessary to carry out operations for a specified
application. They are written by the computer programmer in order to perform
specific jobs for the computer user.
For many applications, it is necessary to produce sets of programs which are used
in conjunction with various service programs.
Technology has come a long way in the last hundred years, but the ideas behind
using it are still much the same: we should make use of computers and associated
software if it saves time (and hence money) or is able to do something which
would be quite impossible by manual means. This would include working with great
precision and speed, doing highly complex calculations, or doing boring,
repetitive or dangerous tasks that are unacceptable when carried out by
humans.

Application Software is classified into two areas:


a. Application packages
b. User application programs

a. Application packages
An application package is a complete suite of programs, with its documentation,
covering a business routine. It is supplied by a software house or a manufacturer,
either on lease or for purchase from dealers in computer hardware.
Application software is any software used for specified applications such as Word
Processing, Spreadsheet, Database, Presentation Graphics, Communication,
Tutorials, Entertainment and Games.

Examples are:
i. Accounting Package e.g. SAGE ACCOUNTANT, PEACHTREE ACCOUNTING etc.
ii. Word Processing Packages e.g. WordStar, Word Perfect, Microsoft Word, Notepad
iii. Spreadsheet Packages. E.g. Lotus 1-2-3, Microsoft Excel
iv. Utilities e.g. Norton Utilities
v. Integrated Packages e.g. Microsoft Works.
vi. Graphics Packages e.g. PM (Print Master) and Harvard graphic.
vii. Desktop Publishing Packages e.g. Ventura Publisher, Pagemaker
viii. Games Packages e.g. Chess, Scrabble
ix. Web Design Packages e.g. Macromedia Dreamweaver MX, Microsoft FrontPage.

b. Users Applications Programs


This is a suite of programs written by a programmer or computer user, required for
the operation of their individual business/tasks or customized for corporate companies,
institutions and government agencies such as JAMB, WAEC, etc.

User application programs are used to do a lot of things; a few are mentioned below:
• To solve a set of equations
• To process examination results
• To prepare a Pay-Bill for an organization
• To prepare Electricity - Bill for each month.

6.0 PERIPHERAL DEVICES


In computer hardware, a peripheral device is any device attached to a computer
in order to expand its functionality. Some of the more common peripheral devices
are printers, scanners, disk drives, tape drives, microphones, speakers, and
cameras. A device can also refer to a non-physical item, such as a pseudo terminal,
a RAM drive, or a network adapter.
Before the advent of the personal computer, any connected device added to the
three base components — the motherboard, CPU and working memory (RAM, ROM,
or core) — was considered to be a peripheral device.
The personal computer has expanded the sense of what devices are needed on a
base system, and keyboards, monitors, and mice are no longer generally
considered to be peripheral devices.
More specifically, the term is used to describe those devices that are optional in
nature, as opposed to hardware that is always required in principle.
The term also tends to be applied to devices that are hooked up externally,
typically through some form of computer bus like USB. Typical examples include
joysticks, printers and scanners. Devices such as monitors and disk drives are not
considered peripherals when they are not truly optional.
Some people do not consider internal devices such as video capture cards to be
peripherals because they are added inside the computer case; for them, the term
peripheral is reserved exclusively for devices that are hooked up externally to
the computer. It is debatable, however, whether PCMCIA cards qualify as peripherals
under this restrictive definition, because some of them go fully inside the laptop,
while some, like WiFi cards, have external appendages.

6.1 List of Common Peripherals


Storage Peripherals
• CD
• DVD
• USB flash drive
• Tape drive
• Floppy disk
• Punch card

Input Peripherals
• Joystick
• Touch screen
• Gamepad
• Microphone
• Image scanner
• Computer speech recognition
• Webcam
• Barcode reader

Output Peripherals
• Plotter
• Printer
• Braille embosser
• Computer speech synthesis
• Sound card
• Speakers
• Digital Camera
• Graphics card
• Monitor

USES OF COMPUTERS
The computer is used to assist man in business organization, in research and in
many aspects of life. Computers affect our daily lives more and more and hopefully
can be used to improve the quality of our lives by releasing us from dull, repetitive
tasks and allowing us to expand our minds.

AREAS OF APPLICATION
Scientific Research – Medicine, Space Technology, Weather Forecast.
Business Application – Payroll, Office Automation, Stock Control and Sales, Banking.
Industrial Application – Quality Control, Oil refineries
Communication – Transportation, Libraries,
Education – Administrative and Guidance, Computer Assisted Instruction (CAI),
Computer Managed Instruction (CMI)

ADVANTAGES, DISADVANTAGES & LIMITATIONS OF COMPUTERS


Advantages of Computers
Computers can facilitate self-paced learning. In the CAI mode, for example,
computers individualize learning, while giving immediate reinforcement and
feedback.
Computers are a multimedia tool. With integrated graphic, print, audio, and video
capabilities, computers can effectively link various technologies. Interactive video
and CD-ROM technologies can be incorporated into computer-based instructional
units, lessons, and learning environments.
Computers are interactive. Microcomputer systems incorporating various software
packages are extremely flexible and maximize learner control.
Computer technology is rapidly advancing. Innovations are constantly emerging,
while related costs drop. By understanding their present needs and future technical
requirements, the cost-conscious educator can effectively navigate the volatile
computer hardware and software market.
Computers increase access. Local, regional, and national networks link resources
and individuals, wherever they might be. In fact, many institutions now offer
complete undergraduate and graduate programs relying almost exclusively on
computer-based resources.
Computers can be used for a variety of other things. For example, you can:
Write a letter and keep in touch
Use the word processing package (MS Word) to type it up.
If you are sending it abroad it can be cheaper and easier to send it by e-mail
Scanned photographs can be sent either in a letter or attached to an e-mail.
Create your own CV
Use the word processing package to type it up. If you save it to disk you can make
changes as and when you need to.

Plan your holiday
Use the Internet to search for flights, accommodation or up-to-date information
about your destination.
Do accounts
Using a spreadsheet (e.g. MS Excel) can help you do calculations and keep up-to-
date accounts.
Go shopping
The Internet offers you lots of chances to plan your purchases of everything from
books to weddings! However please be careful when revealing your Credit Card
details. Only do so if you are certain you have a secure connection.
Research your family tree
Use Internet resources to locate information.
Get tips and advice from the Council's Intranet
Keep your records with a database (MS Access) and write up your findings (MS
Word).
Learn a new skill
There are many learning opportunities available to you including workbooks, CD-
ROMs and on-line.
Do your homework/college assignments
You can type up your essay & report.
Carry out research from the Internet or CD-ROM
Add images and graphics.

Disadvantages of Computers
Discouraging people with fewer technological advantages.
Limited Internet availability.
Connection tariffs.
The speed of technological advance outstrips users' ability to keep up.
Technical difficulties require more acquired knowledge.
Centers on one specialization at a time.
Learning a lot in a particular field of study is not necessarily useful.
Non-availability of on-line reference material.
Many institutions do not recognize a specific qualification.
Knowledge of English must be satisfactory.
Overestimation of the time available.
Learners experience frustration going through all their mail messages.

Limitations of Computers
Computer networks are costly to develop.
Technology is changing rapidly.
Widespread computer illiteracy still exists.
Students must be highly motivated and proficient in computer operation.

COMPUTER MEMORY
Main Memory

A flip-flop made of electronic semiconductor devices is used to fabricate a memory
cell. These memory cells are organized as a Random Access Memory (RAM). Each
cell has a capability to store one bit of information. A main memory or store of a
computer is organized using a large number of cells. Each cell stores a binary digit.
A memory cell which does not lose the bit stored in it when no power is supplied
to the cell is known as a non-volatile cell.
A word is a group of bits, which are stored and retrieved as a unit. A memory
system is organized to store a number of words. A Byte consists of 8 bits. A word
may store one or more bytes. The storage capacity of a memory is the number of
bytes it can store. The address of the location from where a word is to be retrieved
or to be stored is entered in a Memory Address Register (MAR). The data
retrieved from memory or to be stored in memory are placed in a Memory Data
Register (MDR). The time taken to write a word is known as the Write time. The
time to retrieve information is called the Access time of the memory.
The time taken to access a word in a memory is independent of the address of the
word and hence it is known as a Random Access Memory (RAM). The main
memory used to store programs and data in a computer is a RAM. A RAM may be
fabricated with permanently stored information, which cannot be erased. Such a
memory is called a Read Only Memory (ROM). For more specialized uses, a user
can store his own special functions or programs in a ROM. Such ROMs are called
Programmable ROMs (PROM). A serial access memory is organized by arranging
memory cells in a linear sequence. Information is retrieved or stored in such a
memory by using a read/write head. Data is presented serially for writing and is
retrieved serially during read.
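The idea of addressing memory through a Memory Address Register (MAR) and a Memory Data Register (MDR) can be illustrated with a minimal Python sketch (an illustrative model only; the class name and sizes are assumptions, not part of the original note):

# Minimal model of a byte-addressable main memory with MAR and MDR.
class MainMemory:
    def __init__(self, size_in_bytes=256):
        self.cells = [0] * size_in_bytes   # each cell stores one byte (8 bits)
        self.mar = 0                       # Memory Address Register
        self.mdr = 0                       # Memory Data Register

    def write(self, address, value):
        self.mar = address                 # address of the location to store into
        self.mdr = value & 0xFF            # data to be stored (one byte)
        self.cells[self.mar] = self.mdr

    def read(self, address):
        self.mar = address                 # address of the location to retrieve
        self.mdr = self.cells[self.mar]    # data retrieved from memory
        return self.mdr

ram = MainMemory()
ram.write(10, 65)          # store the byte 65 (the ASCII code of 'A') at address 10
print(ram.read(10))        # 65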
Secondary or Auxiliary storage devices
Magnetic surface recording devices commonly used in computers are hard disks,
floppy disks and magnetic tapes; CD-ROMs are optical devices. These devices are
known as secondary or auxiliary storage devices. We will see some of these devices below.
Floppy Disk Drive (FDD)
In this device, the medium used to record the data is called a floppy disk. It is a
flexible circular disk of diameter 3.5 inches made of plastic coated with a magnetic
material. This is housed in a square plastic jacket. Each floppy disk can store
approximately one million characters. Data recorded on a floppy disk is read and
stored in a computer's memory by a device called a floppy disk drive (FDD). A floppy disk
is inserted in a slot of the FDD. The disk is rotated normally at 300 revolutions per
minute. A reading head is positioned touching a track. A voltage is induced in a coil
wound on the head when a magnetized spot moves below the head. The polarity of
the induced voltage when a 1 is read is opposite to that when a 0 is read. The
voltage sensed by the head coil is
amplified, converted to an appropriate signal and stored in computer's memory.
Floppy disks come with various capacities, as mentioned below:
5¼-inch drive: 360 KB, 1.2 MB (1 KB = 2¹⁰ = 1024 bytes)
3½-inch drive: 1.44 MB, 2.88 MB (1 MB = 2²⁰ bytes)
Compact Disk Drive (CDD)
A CD-ROM (Compact Disk Read Only Memory) uses a laser beam to record and read
data along spiral tracks on a 12 cm (about 4.7-inch) disk. A disk can store around 650 MB of
information. CD-ROMs are normally used to store massive text data (such as
encyclopedias) which is permanently recorded and read many times. Recently, CD
writers have come onto the market. Using a CD writer, a lot of information can be
written on a CD-ROM and stored for future reference.
Hard Disk Drive (HDD)
Unlike a floppy disk, which is flexible and removable, the hard disk used in the PC is
permanently fixed. The hard disk used in a higher-end PC can have a maximum
storage capacity of 17 GB (Gigabyte; 1 GB = 1024 MB = 2³⁰ bytes). Nowadays,
hard disk capacities of 540 MB, 1 GB, 2 GB, 4 GB and 8 GB are quite common. The
data transfer rate between the CPU and the hard disk is much higher than that
between the CPU and the floppy disk drive. The CPU can use the hard disk to
load programs and data as well as to store data. The hard disk is a very important
Input/Output (I/O) device. The hard disk drive doesn't require any special care other
than the requirement that one should operate the PC in a dust-free and cool
room (preferably air-conditioned).
In summary, a computer system is organized with a balanced configuration of
different types of memories. The main memory (RAM) is used to store the program
currently being executed by the computer. Disks are used to store large data files
and program files. Tapes are serial access memories and are used to back up the files
from the disk. CD-ROMs are used to store user manuals, large text, audio and video
data.
Central Processing Unit
A processing unit in a computer interprets instructions in a program and carries
them out. An instruction, in general, consists of a part which specifies the operation
to be performed and other parts which specify the address(es) of the operand(s). In a
processor, a string of bits is used to code the operations: a string of x bits can code
n different operations in binary, where 2ˣ = n. For example, to code 16 operations we
need 4 bits, since 2⁴ = 16.
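As a quick check of the 2ˣ = n relationship, this small Python snippet (illustrative only) computes how many bits are needed to code a given number of operations:

import math

def bits_needed(n_operations):
    # smallest x such that 2**x >= n_operations
    return math.ceil(math.log2(n_operations))

print(bits_needed(16))   # 4, since 2**4 = 16
print(bits_needed(100))  # 7, since 2**7 = 128 >= 100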
An instruction consisting of an operation code and an operand address or addresses,
designed for a specific computer, is known as a machine language instruction of that
computer. Machine language instructions for input/output, data movement,
arithmetic, logic and controlling the sequence of operations are available in all
computers. A computer's processor has storage registers to store operands and
results. It also has a register to store the instruction being executed, called the
"Instruction Register", and a register which stores the address of the next instruction
to be executed, called the "Program Counter Register".
A sequence of machine language instructions to solve a problem is known as a
machine language program. A computer executes a machine language program in
two phases. In the first phase, it reads and stores the program in its memory. After
storing the program, it initiates program execution. In this phase, instructions are
retrieved from memory one after another and are decoded and executed.
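The two phases described above (load the program, then fetch, decode and execute instructions one after another) can be sketched with a toy machine in Python. The instruction set, register names and program below are invented for illustration and are not part of the original note:

# Toy machine: each instruction is (operation, operand).
program = [("LOAD", 5), ("ADD", 3), ("STORE", 0), ("HALT", None)]

memory = [0] * 16          # main memory cells
pc = 0                     # Program Counter Register
acc = 0                    # accumulator register for operands/results

running = True
while running:
    op, operand = program[pc]   # fetch the instruction at the address held in PC
    pc += 1                     # PC now points to the next instruction
    if op == "LOAD":            # decode and execute
        acc = operand
    elif op == "ADD":
        acc += operand
    elif op == "STORE":
        memory[operand] = acc
    elif op == "HALT":
        running = False

print(memory[0])   # 8, the result of 5 + 3 stored at address 0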

DATA REPRESENTATION
Introduction
In any modern numbering system, the numbers are represented by unique patterns
of unique symbols; individual symbols are usually called digits. The common
system is the decimal system with ten different symbols 0,1,2,3,4,5,6,7,8 and 9. In
a computer, however, the patterns of symbols which represent the numbers are
created by some physical condition (e.g. a transistor or valve passing current) which
is either in one state (e.g. passing current) or in only one other possible state (not
passing current).

Bits and Bytes


Digital computers therefore use a binary system of number representation with
only two different digit symbols, usually represented by 0 and 1. These are called
"Binary digITS", or "BITS" for short. The reason computers use the base-2 system is
that it makes them a lot easier to implement with current electronic
technology. You could wire up and build computers that operate in base-10, but
they would be fiendishly expensive right now. On the other hand, base-2 computers
are relatively cheap.

Bits are rarely seen alone in computers; they are bundled together into 8-bit
collections.
A set of 8 bits is called a byte, and each byte stores one character.

ASCII (American Standard Code for Information Interchange) codes are used to
represent each character. The ASCII code includes codes for English Letters (Both
Capital & Small), decimal digits, 32 special characters and codes for a number of
symbols used to control the operation of a computer which are non-printable.
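The relationship between characters, their ASCII codes and the 8-bit bytes that store them can be seen with a short Python snippet (an illustrative example, not part of the original note):

for ch in "CSC":
    code = ord(ch)                     # ASCII code of the character
    bits = format(code, "08b")         # the same code written as an 8-bit byte
    print(ch, code, bits)

# Output:
# C 67 01000011
# S 83 01010011
# C 67 01000011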

NUMERATION
Man has over the years developed symbols to represent numbers.
A group of symbols that can be used according to some rules to express numbers is
called numeration system.
The symbols of a numeration system are called numerals.
The number of unique digits in a numeration system is called number base or base
of the numeration system.
Hindu-Arabic Notation uses base ten probably because humans have ten fingers.
Thus ten and its powers are the basic numbers of the system.
In this topic, we shall examine four number bases used very frequently in
computing.
They are:
Decimal System
Binary System
Octal System
Hexadecimal System

Decimal Numbers

The easiest way to understand bits is to compare them to something you know:
digits. A digit is a single place that can hold numerical values between 0 and 9.
Digits are normally combined together in groups to create larger numbers.
The decimal system is a number system to the base ten where counting is done in
groups of ten. For example, 6,357 has four digits. It is understood that in the
number 6,357, the 7 is filling the 1s place, the 5 is filling the 10s place, the 3
is filling the 100s place and the 6 is filling the 1,000s place. So, if you wanted to be
explicit, you could express it as:
(6*1000) + (3*100) + (5*10) + (7*1) = 6000 + 300 + 50 + 7 = 6357

It can be expressed using the powers of ten as:


(6*10³) + (3*10²) + (5*10¹) + (7*10⁰) = 6000 + 300 + 50 + 7 = 6357
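This place-value expansion can be checked with a couple of lines of Python (illustrative only):

digits = [6, 3, 5, 7]     # the digits of 6357, most significant first
value = sum(d * 10**p for p, d in zip(range(len(digits) - 1, -1, -1), digits))
print(value)              # 6357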

Binary numbers
The binary system (also called base two) has just two states, usually called 'on' and
'off', or '1' and '0'. The reason why this system is so important is that it is
the simplest system to implement in practice using the electronic technology
available today. It is easy to detect very quickly if a circuit is switched on or off. It
would be a much more difficult task to detect levels in between these two
extremes. Hence binary is ideal for use in modern electronic digital computers.

Binary numbers are formed using positional notation. Powers of 2 are used as
weights in the binary number system. The binary number 10111 has a decimal value
equal to 1*2⁴ + 0*2³ + 1*2² + 1*2¹ + 1*2⁰ = 16 + 0 + 4 + 2 + 1 = 23.
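Python can confirm this positional expansion directly (illustrative only):

print(int("10111", 2))                              # 23: the string parsed as a base-2 number
print(1*2**4 + 0*2**3 + 1*2**2 + 1*2**1 + 1*2**0)   # 23, the same expansion by hand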

To understand the binary system, it is useful to think more carefully about a system
with which most people will be more familiar, i.e. the decimal system or base ten
also known as denary. When counting in base ten, the symbols that are used are
0,1,2,3,4,5,6,7,8 and 9. i.e. ten different symbols in base ten. Decimal numbers are
arranged into units, tens, hundreds and thousands etc.
Th H T U
2 0 6 7
The above represents two thousands, no hundreds, six tens and seven units
making two thousand and sixty seven. In base ten each column from right to
left is obtained by multiplying the previous column by ten.
Likewise, in the binary system there are just two symbols, 0 and 1. Therefore any
number must be represented using 0s and 1s only. This time the column headings
(from right to left) are 1s, 2s, 4s, 8s, 16s, etc.; to obtain the next column
heading, the previous one is multiplied by two, i.e. 1x2 = 2s, 2x2 = 4s, 4x2 = 8s, etc.
Therefore, in the binary system, the number:
16 8 4 2 1
 1 0 1 1 1
would represent one lot of 16, no lots of 8, one lot of 4, one lot of 2 and one unit
of 1.

In order to avoid confusion about the base being used, a useful subscript
notation was developed. Therefore, 101₂ means 101 in base two and 101₁₆ means
101 in base sixteen.
If no subscript is used, then it is usual to assume that base ten is the base number
being used.

Decimal    5-bit binary        Decimal    5-bit binary
number     16 8 4 2 1          number     16 8 4 2 1
  0         0 0 0 0 0            16        1 0 0 0 0
  1         0 0 0 0 1            17        1 0 0 0 1
  2         0 0 0 1 0            18        1 0 0 1 0
  3         0 0 0 1 1            19        1 0 0 1 1
  4         0 0 1 0 0            20        1 0 1 0 0
  5         0 0 1 0 1            21        1 0 1 0 1
  6         0 0 1 1 0            22        1 0 1 1 0
  7         0 0 1 1 1            23        1 0 1 1 1
  8         0 1 0 0 0            24        1 1 0 0 0
  9         0 1 0 0 1            25        1 1 0 0 1
 10         0 1 0 1 0            26        1 1 0 1 0
 11         0 1 0 1 1            27        1 1 0 1 1
 12         0 1 1 0 0            28        1 1 1 0 0
 13         0 1 1 0 1            29        1 1 1 0 1
 14         0 1 1 1 0            30        1 1 1 1 0
 15         0 1 1 1 1            31        1 1 1 1 1

To count in binary, we simply start with 00000, using the required number of digits.
The units column changes 0, 1, 0, 1, etc.; the twos column has two 0s followed by two 1s,
then two 0s, etc.; the fours column has four 0s followed by four 1s, then four 0s, etc.;
the eights column has eight 0s followed by eight 1s, then eight 0s, etc.
For large numbers the method above would be cumbersome;
therefore, mathematical methods are used to convert decimal numbers to
binary.

A decimal number is converted into an equivalent binary number by repeatedly
dividing the number by 2 and recording the remainders; the first remainder is the
least significant bit of the binary number. For example, consider the decimal
numbers 183 and 23. Their binary equivalents are obtained as shown below.

CONVERSION OF DECIMAL TO BINARY – (using repeated division)
EXAMPLE: 183₁₀ = 10110111₂ and 23₁₀ = 10111₂

2 | 183    remainder 1
2 |  91    remainder 1
2 |  45    remainder 1
2 |  22    remainder 0
2 |  11    remainder 1
2 |   5    remainder 1
2 |   2    remainder 0
2 |   1    remainder 1
       0

Reading the remainders from bottom to top gives 10110111₂.

TO CONVERT BACK TO DECIMAL – 10110111₂

 2⁷   2⁶   2⁵   2⁴   2³   2²   2¹   2⁰
128   64   32   16    8    4    2    1
  1    0    1    1    0    1    1    1

(1x2⁷) + (0x2⁶) + (1x2⁵) + (1x2⁴) + (0x2³) + (1x2²) + (1x2¹) + (1x2⁰)
= (1x128) + (0x64) + (1x32) + (1x16) + (0x8) + (1x4) + (1x2) + (1x1) = 183₁₀
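The same two conversions can be written as short Python functions (an illustrative sketch, not part of the original note): one uses repeated division by 2, the other sums the weighted powers of 2.

def decimal_to_binary(n):
    # repeated division by 2; remainders are collected least significant bit first
    bits = ""
    while n > 0:
        bits = str(n % 2) + bits
        n //= 2
    return bits or "0"

def binary_to_decimal(bits):
    # each bit is multiplied by its weight (a power of 2) and the products are summed
    return sum(int(b) * 2**p for p, b in enumerate(reversed(bits)))

print(decimal_to_binary(183))        # 10110111
print(decimal_to_binary(23))         # 10111
print(binary_to_decimal("10110111")) # 183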

ADDITION AND SUBTRACTION OF BINARY NUMBERS


It is useful to be able to add and subtract binary numbers in the way the computer
handles them. The rules for adding single bits are:
1 + 0 = 1
0 + 1 = 1
0 + 0 = 0
1 + 1 = 0 with a carry of 1 to the next position (the 2s column), i.e. 10.

Examples of additions
    1 1 0 1₂          1 0 0 1₂          1 0 1 0₂
 +  1 0 1 0₂       +      1 1 1₂     +   1 1 0 0₂
  1 0 1 1 1₂        1 0 0 0 0₂        1 0 1 1 0₂

Example of subtraction
In subtraction, 0 - 1 needs a borrow from the next column to the left, and the
borrowed 1 is worth 2 in the current column.
    1 0 0 1 1₂
 -    1 1 1 1₂
        1 0 0₂
therefore 10011₂ – 1111₂ = 100₂
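These results can be checked quickly in Python (illustrative only) by converting the binary strings to integers, doing the arithmetic and converting back:

def bin_add(a, b):
    return bin(int(a, 2) + int(b, 2))[2:]   # strip the '0b' prefix

def bin_sub(a, b):
    return bin(int(a, 2) - int(b, 2))[2:]

print(bin_add("1101", "1010"))   # 10111
print(bin_add("1001", "111"))    # 10000
print(bin_sub("10011", "1111"))  # 100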

OCTAL NUMBERS
If the principles of the last few sections have been well understood then it is
relatively easy to extend the concepts of the basic rules to any other number base.
However, in computing, only two other number bases are of any importance. These
are octal (base eight) and, more importantly, hexadecimal (base sixteen). In octal we
have eight different characters: (0,1,2,3,4,5,6 and 7). The column headings in octal
are units, 8s, 64s, 512s etc.
We have seen that a computer uses binary numbers rather than decimal numbers
to perform arithmetic operations; however, pure binary representation has one
disadvantage: more space is taken up in writing out data than in any other numbering
system. To conserve space, binary digits can be grouped in bunches of three.
Such grouping is possible with the octal numbering system (base 8); here, a
single octal digit is used to represent three binary digits.

CONVERTING OCTAL TO DECIMAL

To find the decimal equivalent of any octal number, we express the number in
expanded notation and add the results.
Example:
a. 237₈ = (2*8²) + (3*8¹) + (7*8⁰)
        = (2*64) + (3*8) + (7*1)
        = 128 + 24 + 7 = 159₁₀

b. 1234₈ = (1*8³) + (2*8²) + (3*8¹) + (4*8⁰)
         = (1*512) + (2*64) + (3*8) + (4*1)
         = 512 + 128 + 24 + 4 = 668₁₀

CONVERTING DECIMAL TO OCTAL (using the remainder method)

To obtain the octal equivalent of a decimal number, we again use the remainder
method, this time dividing by the base 8.
Example:
a. Convert 159₁₀ into base 8:
8 | 159   remainder 7
8 |  19   remainder 3
8 |   2   remainder 2
       0
Reading the remainders from bottom to top: 159₁₀ = 237₈

b. Convert 668₁₀ into base 8:
8 | 668   remainder 4
8 |  83   remainder 3
8 |  10   remainder 2
8 |   1   remainder 1
       0
Reading the remainders from bottom to top: 668₁₀ = 1234₈
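Python's built-in conversions can be used to verify these octal examples (illustrative only):

print(oct(159))        # 0o237  -> 159 decimal is 237 in octal
print(oct(668))        # 0o1234
print(int("237", 8))   # 159   -> back from octal to decimal
print(int("1234", 8))  # 668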
CONVERTING BINARY TO OCTAL
As mentioned above, the octal numbering system can be represented as a group of
three binary digits or bits. In other words, any three binary digits can be
represented as a single octal number. These are used as special codes for digits of
octal numbers (base 8) called three-bit equivalent forms as shown below: -

Digits of base 8:             0    1    2    3    4    5    6    7
Equivalent number in base 2:  000  001  010  011  100  101  110  111

Converting binary to octal requires the subdivision of the binary number into groups
of three bits, starting from the right.
Example
Binary to Octal
a) 101011011₂                        b) 11000110111₂

a)  101 011 011                      b)  011 000 110 111
     5   3   3                            3   0   6   7

Therefore 101011011₂ = 533₈ and 11000110111₂ = 3067₈

CONVERTING OCTAL TO BINARY


In like manner, converting an octal number to its binary equivalent requires
representing each octal digit by three binary digits.

Example
Octal to Binary
a) 152₈                              b) 4732₈

a)    1    5    2                    b)    4    7    3    2
     001  101  010                        100  111  011  010

Therefore 152₈ = 001101010₂ (= 1101010₂) and 4732₈ = 100111011010₂
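A small Python sketch (illustrative only) makes the three-bit grouping explicit:

def binary_to_octal(bits):
    # pad on the left so the length is a multiple of 3, then map each group of 3 bits
    bits = bits.zfill((len(bits) + 2) // 3 * 3)
    groups = [bits[i:i+3] for i in range(0, len(bits), 3)]
    return "".join(str(int(g, 2)) for g in groups)

def octal_to_binary(octal):
    # each octal digit expands to its three-bit equivalent form
    return "".join(format(int(d), "03b") for d in octal)

print(binary_to_octal("101011011"))    # 533
print(binary_to_octal("11000110111"))  # 3067
print(octal_to_binary("152"))          # 001101010
print(octal_to_binary("4732"))         # 100111011010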

ADDITION AND SUBTRACTION OF OCTAL NUMBERS

To add two octal numbers, we proceed as we do in the decimal system, keeping in
mind, however, that when the sum in a column reaches eight we carry: for example
7 + 1 = 8, which is octal 10, so 0 is written down under the column while the 1 is
carried to the next column on the left.
In subtraction, any borrow from the column to the left is worth eight, which is added
to the digit being subtracted from. In example 3 below, 0 – 1 needs a borrow from the
7 on the left; the 0 then becomes 0 + 8 = 8, and the subtraction can be finished.

Examples
    2 3 4₈        3 4 3₈        7 0 5₈        2 3 6₈
 +    1 7₈     +  7 4 5₈     -  5 1 3₈     -  1 2 7₈
    2 5 3₈      1 3 1 0₈        1 7 2₈        1 0 7₈

HEXADECIMAL NUMBERS
As seen above with the octal numbering system, the computer saves some storage space
by grouping together three binary digits to produce a single digit. A further step is
taken in some computers, to group four binary digits to produce a digit in the base
16 or hexadecimal numbering system.
In other words, this number base comes into its own when it is necessary to deal
with large groups of binary digits. The base of the hexadecimal system is 16 and
the symbols used in this system are 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, F.
Strings of 4 bits have an equivalent hexadecimal value (four-bit equivalent forms).

Decimal:      0  1  2  3  4  5  6  7  8  9  10  11  12  13  14  15
Hexadecimal:  0  1  2  3  4  5  6  7  8  9   A   B   C   D   E   F

For example,
6B is represented by 0110 1011 or 110 1011,
3E1 is represented by 0011 1110 0001 or 11 1110 0001 and
5DBE34 is represented by 101 1101 1011 1110 0011 0100. Decimal fractions can
also be converted to binary fractions.

We use the method as previously discussed to convert from any number system to
the decimal system; multiply each digit by its place value and then obtain the total.
Care should be taken when using hexadecimal digits A to F.

CONVERSION OF HEXADECIMAL NUMBERS TO DECIMAL


C3BD₁₆ = (C × 16³) + (3 × 16²) + (B × 16¹) + (D × 16⁰)
       = (12 × 4096) + (3 × 256) + (11 × 16) + (13 × 1)
       = 50,109₁₀

F6A₁₆  = (F × 16²) + (6 × 16¹) + (A × 16⁰)
       = (F × 256) + (6 × 16) + (A × 1)
       = (15 × 256) + (6 × 16) + (10 × 1)
       = 3,946₁₀
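The same expand-by-place-value calculation can be sketched as a small program (Python, illustrative only; hex_to_decimal is our own name):

    # Hexadecimal to decimal: multiply each digit by its place value (a power of 16).
    HEX_DIGITS = "0123456789ABCDEF"

    def hex_to_decimal(hex_string):
        total = 0
        for digit in hex_string:
            total = total * 16 + HEX_DIGITS.index(digit)   # shift one place left, then add the digit
        return total

    print(hex_to_decimal("C3BD"))   # 50109
    print(hex_to_decimal("F6A"))    # 3946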


CONVERSION OF DECIMAL NUMBERS TO HEXADECIMAL (using remainder method)

a. 249₁₀ = F9₁₆                                    b. 1583₁₀ = 62F₁₆

   16 | 249                                           16 | 1583
   16 |  15   remainder 9                             16 |   98   remainder F
          0   remainder F                             16 |    6   remainder 2
                                                             0   remainder 6
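As a quick check (not part of the original note), the built-in formatting in Python performs the same conversion:

    # Decimal to hexadecimal using built-in formatting; compare with the hand working above.
    print(format(249, "X"))     # F9
    print(format(1583, "X"))    # 62F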

CONVERSION OF BINARY TO HEXADECIMAL


Any binary number can be converted to a hexadecimal number using the special
codes (four-bit equivalent forms). The binary number is divided into groups of four
digits, starting from the right, and each group is represented by a single hexadecimal digit.

a. 1111001101110110₂                       b. 1101101010₂

   1111   0011   0111   0110                  0011   0110   1010
    F      3      7      6                     3      6      A

Therefore 1111001101110110₂ = F376₁₆ and 1101101010₂ = 36A₁₆
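This grouping mirrors the binary-to-octal procedure, only with groups of four bits. A brief Python sketch (illustrative only; binary_to_hex is our own name):

    # Binary to hexadecimal: split the bit string into groups of four, from the right.
    def binary_to_hex(bits):
        padding = (4 - len(bits) % 4) % 4
        bits = "0" * padding + bits
        groups = [bits[i:i+4] for i in range(0, len(bits), 4)]
        return "".join("0123456789ABCDEF"[int(group, 2)] for group in groups)

    print(binary_to_hex("1111001101110110"))   # F376
    print(binary_to_hex("1101101010"))         # 36A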

CONVERSION OF HEXADECIMAL TO BINARY


a. BDC₁₆ = 1011 1101 1100₂                 b. A57F₁₆ = 1010 0101 0111 1111₂

    B      D      C                            A      5      7      F
   1011   1101   1100                         1010   0101   0111   1111
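The opposite direction is again a table lookup on the four-bit equivalent forms. A minimal sketch (Python; the dictionary FOUR_BIT is our own):

    # Hexadecimal to binary: replace each hex digit by its four-bit equivalent form.
    FOUR_BIT = {"0123456789ABCDEF"[value]: format(value, "04b") for value in range(16)}

    def hex_to_binary(hex_string):
        return " ".join(FOUR_BIT[digit] for digit in hex_string)

    print(hex_to_binary("BDC"))     # 1011 1101 1100
    print(hex_to_binary("A57F"))    # 1010 0101 0111 1111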

ADDITION AND SUBTRACTION OF HEXADECIMAL NUMBERS


Hexadecimal arithmetic operations are similar to those of other number systems.
The carrying of a hexadecimal digit to the next position is done in exactly the same
manner as in the decimal number system. A sum of 16 results in a carry of 1
(10₁₆ = 16₁₀), and in subtraction each borrow equals 16.
Examples
    B A C₁₆        D B A₁₆        A 1 5₁₆        1 B 6₁₆
  + 4 4 1₁₆      + 6 2 7₁₆      - 5 2 3₁₆      - 1 2 7₁₆
    F E D₁₆      1 3 E 1₁₆        4 F 2₁₆          8 F₁₆
Hint:
C+1 = 12+1 = 13 (D)            5-3 = 2
A+4 = 10+4 = 14 (E)            1-2: borrow 16 from the A, so 16+1 = 17 and 17-2 = 15 (F)
B+4 = 11+4 = 15 (F)            after the borrow, A-1 = 10-1 = 9 and 9-5 = 4
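Again the results can be verified with base-16 literals in Python (a quick check, not from the note):

    # Verify the hexadecimal sums and differences above.
    print(format(0xBAC + 0x441, "X"))   # FED
    print(format(0xDBA + 0x627, "X"))   # 13E1
    print(format(0xA15 - 0x523, "X"))   # 4F2
    print(format(0x1B6 - 0x127, "X"))   # 8F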
ASCII CHARACTER ENCODING
The name ASCII is an acronym for: American Standard Code for Information
Interchange. It is a character encoding standard developed several decades ago to


provide a standard way for digital machines to encode characters. The ASCII code
provides a mechanism for encoding alphabetic characters, numeric digits, and
punctuation marks for use in representing text and numbers written using the
Roman alphabet. As originally designed, it was a seven bit code. The seven bits
allow the representation of 128 unique characters. All of the alphabet, numeric
digits, and standard English punctuation marks are encoded.

The ASCII standard was later extended to an eight bit code (which allows 256
unique code patterns) and various additional symbols were added, including
characters with diacritical marks (such as accents) used in European languages,
which do not appear in English. There are also numerous non-standard extensions to
ASCII that assign different encodings to the upper 128 character codes. For example,
the character set encoded into the display card of the original IBM PC used a
non-standard encoding for the upper character set; this extension is in very
widespread use and could be considered a standard in itself.

Standard ASCII Character Sets


 0 NUL
 1 SOH   21 NAK   41 )   61 =   81 Q   101 e   121 y
 2 STX   22 SYN   42 *   62 >   82 R   102 f   122 z
 3 ETX   23 ETB   43 +   63 ?   83 S   103 g   123 {
 4 EOT   24 CAN   44 ,   64 @   84 T   104 h   124 |
 5 ENQ   25 EM    45 -   65 A   85 U   105 i   125 }
 6 ACK   26 SUB   46 .   66 B   86 V   106 j   126 ~
 7 BEL   27 ESC   47 /   67 C   87 W   107 k   127 DEL
 8 BS    28 FS    48 0   68 D   88 X   108 l
 9 TAB   29 GS    49 1   69 E   89 Y   109 m
10 LF    30 RS    50 2   70 F   90 Z   110 n
11 VT    31 US    51 3   71 G   91 [   111 o
12 FF    32 SP    52 4   72 H   92 \   112 p
13 CR    33 !     53 5   73 I   93 ]   113 q
14 SO    34 "     54 6   74 J   94 ^   114 r
15 SI    35 #     55 7   75 K   95 _   115 s
16 DLE   36 $     56 8   76 L   96 `   116 t
17 DC1   37 %     57 9   77 M   97 a   117 u
18 DC2   38 &     58 :   78 N   98 b   118 v
19 DC3   39 '     59 ;   79 O   99 c   119 w
20 DC4   40 (     60 <   80 P   100 d  120 x

Bytes are frequently used to hold individual characters in a text document. In the
ASCII character set, each binary value between 0 and 127 is given a specific
character. Most computers extend the ASCII character set to use the full range of
256 characters available in a byte; the upper 128 handle special things like
accented characters from common foreign languages. The table above shows the
128 standard ASCII codes which computers use to store text documents both on
disk and in memory.
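The mapping in the table can be inspected directly from most programming languages. A small Python illustration (not part of the original note):

    # Inspect the ASCII codes of individual characters.
    for character in "Ab9!":
        print(character, ord(character))    # A 65, b 98, 9 57, ! 33

    print(chr(72), chr(101), chr(108), chr(108), chr(111))   # H e l l o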
Parity Check Bit


In early computer systems, the ASCII code consisted of just 128 characters, as
explained above, because the 8th bit was used for parity checking. Errors may occur
while recording and reading data, and when data is transmitted from one unit to
another within a computer. Detection of a single error in the code for a character is
possible by introducing an extra bit into its code. This bit, known as the parity check
bit, is appended to the code. The user can set the parity as either even or odd: the
bit is chosen so that the total number of ones ('1') in the new code is even or odd,
depending on the selection. If a single bit in a byte is flipped while being read,
written or transmitted, the error can be detected using the parity check bit.

However, when data was used inside the computer, parity was not really necessary
and the top bit of the byte was wasted. It therefore made sense to make use of this
top (8th) bit, releasing a further 128 characters and giving a total of 256 overall. This
has led to what is now known as the Extended ASCII character set.
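A minimal sketch of even-parity checking (Python; the function names are our own and the parity bit is placed in the top position purely for illustration):

    # Even parity: the parity bit is chosen so that the total number of 1s
    # in the resulting 8-bit code is even.
    def add_even_parity(seven_bit_code):
        ones = seven_bit_code.count("1")
        parity_bit = "0" if ones % 2 == 0 else "1"
        return parity_bit + seven_bit_code

    def is_valid_even_parity(eight_bit_code):
        return eight_bit_code.count("1") % 2 == 0

    code = add_even_parity("1000001")        # ASCII 'A' = 65
    print(code)                              # 01000001
    print(is_valid_even_parity(code))        # True
    print(is_valid_even_parity("11000001"))  # False - a single flipped bit is detected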


Binary and Hexadecimal Number Tables


Powers of 2:                        Hexadecimal Digits    Binary Equivalent
2⁰  ...................      1              0                   0000
2¹  ...................      2              1                   0001
2²  ...................      4              2                   0010
2³  ...................      8              3                   0011
2⁴  ...................     16              4                   0100
2⁵  ...................     32              5                   0101
2⁶  ...................     64              6                   0110
2⁷  ...................    128              7                   0111
2⁸  ...................    256              8                   1000
2⁹  ...................    512              9                   1001
2¹⁰ ...................   1024              A                   1010
2¹¹ ...................   2048              B                   1011
2¹² ...................   4096              C                   1100
2¹³ ...................   8192              D                   1101
2¹⁴ ...................  16384              E                   1110
2¹⁵ ...................  32768              F                   1111
2¹⁶ ...................  65536

Equivalent Numbers in Decimal, Binary and Hexadecimal Notation:


Decimal     Binary               Hexadecimal
0 00000000 00
1 00000001 01
2 00000010 02
3 00000011 03

4 00000100 04
5 00000101 05
6 00000110 06
7 00000111 07
8 00001000 08
9 00001001 09
10 00001010 0A
11 00001011 0B
12 00001100 0C
13 00001101 0D
14 00001110 0E
15 00001111 0F
16 00010000 10
17 00010001 11
31 00011111 1F
32 00100000 20
63 00111111 3F
64 01000000 40
65 01000001 41
127 01111111 7F
128 10000000 80
129 10000001 81
255 11111111 FF
256         0000000100000000     0100
32767       0111111111111111     7FFF
32768       1000000000000000     8000
65535       1111111111111111     FFFF


EXERCISES ON DATA REPRESENTATION


Express these binary numbers as decimal numbers: a) 10101₂  b) 1011₂  c) 11101₂
Change the following decimal numbers into binary numbers: a) 463₁₀  b) 32₁₀
Evaluate the following: a) 11011₂ + 10011₂  b) 101101₂ + 1010₂ + 10110₂
c) 11000₂ - 11101₂  d) 111011₂ - 110001₂

Express these octal numbers as decimal numbers: a) 6547₈  b) 345₈  c) 2339₈
Change the following decimal numbers into octal numbers: a) 3461₁₀  b) 1932₁₀
Using the three-bit equivalent forms:
Change from octal to binary: a) 6773₈  b) 2415₈  c) 345₈
Change from binary to octal: a) 1010111001001₂  b) 10101110100100₂
Calculate the following in base 8: a) 3763₈ + 3531₈  b) 136₈ + 732₈  c) 5321₈ - 677₈

Express these hexadecimal numbers as decimal numbers: a) 8D5₁₆  b) 7ED₁₆  c) B6AC₁₆
Change the following decimal numbers into hexadecimals: a) 3456₁₀  b) 1421₁₀
Using the four-bit equivalent forms:
Change from hexadecimal to binary: a) 4DE₁₆  b) 5C3A₁₆  c) B6F8₁₆
Change from binary to hexadecimal: a) 1010111001001₂  b) 100111110100100₂
Calculate the following in base 16: a) B5D + 45C  b) F1D - 1E3  c) 6789 + 4321


COMPUTER LANGUAGES
The term computer language includes a wide variety of languages used to
communicate with computers. It is broader than the more commonly used term
programming language. Programming languages are a subset of computer
languages. For example, HTML is a markup language and a computer language, but
it is not traditionally considered a programming language.
Computer languages can be divided into two groups: high-level languages and low-
level languages. High-level languages are designed to be easier to use, more
abstract, and more portable than low-level languages. Syntactically correct programs
in a high-level language are translated to a low-level language before they can be
executed: most modern software is written in a high-level language, compiled into
object code, and then translated into machine instructions.
Computer languages could also be grouped based on other criteria. Another
distinction could be made between human-readable and non-human-readable
languages. Human-readable languages are designed to be used directly by humans
to communicate with the computer. Non-human-readable languages, though they
can often be partially understandable, are designed to be more compact and easily
processed, sacrificing readability to meet these ends.

Machine language
A computer can directly execute only programs written using binary digits; such
programs are called machine language programs. Since these programs use only '0's
and '1's, it is very difficult to develop them for complex problem solving, and it is
also very difficult for one person to understand a machine language program written
by another. At present, computer users do not write programs in machine language.
In addition, programs written for execution on one computer cannot be used on
another type of computer; that is, the programs are machine dependent.

Assembly Language
In assembly language, mnemonic codes are used to develop programs for problem
solving. The program given below is an assembly language program to add two
numbers A and B.
Program code        Description
READ A              Read the value of A.
ADD B               Add the value of B to A.
STORE C             Store the result in C.
PRINT C             Print the result in C.
HALT                Stop execution.

Assembly language is designed mainly to replace each machine code with an
understandable mnemonic code. To execute an assembly language program, it must
first be translated into an equivalent machine language program. Writing and
understanding programs in assembly language is easier than in machine language,
but programs written in assembly language are still machine dependent.


High Level Languages


High-level languages were developed to allow application programs that are machine
independent. A high-level language permits the user to write understandable code
using the language's structure. In order to execute a high-level language program, it
must be translated into machine language using either a compiler or an interpreter.
The high-level languages commonly used include FORTRAN (FORmula TRANslation),
BASIC (Beginner's All-purpose Symbolic Instruction Code) and COBOL (COmmon
Business Oriented Language). More recently developed languages such as Visual
FoxPro, Visual Basic (VB) and Visual C++ (VC++) are popular among software
developers. The following program, written in the BASIC language, adds two given
numbers.
Program Code        Description
10 INPUT A,B        Read the values of A and B.
20 LET C=A+B        Add A and B and store the result in C.
30 PRINT C          Print the value of C.
40 END              Stop execution.
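For comparison, the same program written in another modern high-level language, Python (our own illustration, not part of the lecture text):

    # Read two numbers, add them and print the result - the same program as the BASIC example.
    a = int(input("Enter A: "))
    b = int(input("Enter B: "))
    c = a + b
    print(c)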


COMPUTERS AND COMMUNICATIONS


Local Area Network (LAN) & Wide Area Network (WAN)
Computers available in remote locations can communicate with each other using a
telecommunication line. One way of connecting the computers is by using devices
called modems. A modem is used to transfer data from one computer to another
using the telephone lines. A modem converts the strings of 0s and 1s into electrical
signals which can be transferred over the telephone lines. Both the receiving and
the transmitting computer have a telephone connection and a modem. An external
modem is connected to the computer like a typical input or an output device. An
internal modem is fitted into the circuitry related to the CPU and Memory.
Interconnection of computers which are within the same building or nearby
locations forms a network of computers and this network is called a Local Area
Network (LAN). A LAN permits sharing of data files, computing resources and
peripherals. Interconnection of computers located in faraway locations using
telecommunication systems is known as a Wide Area Network (WAN).
COMPUTER COMMUNICATION USING TELEPHONE LINES

Internet
Intercommunication between computer networks is now possible. Computer
networks located in different organizations can communicate with each other
through a facility known as the Internet. The Internet is a worldwide computer
network which interconnects computer networks across countries. It facilitates
electronic mail (e-mail), file transfer between any two computers, and remote access
to any computer connected to the Internet. This intercommunication facility has
changed the style of functioning of business organizations and has made the world a
global village.
