
ALAGAPPA UNIVERSITY

[Accredited with ‘A+’ Grade by NAAC (CGPA: 3.64) in the Third Cycle and Graded as Category–I University by MHRD-UGC]
(A State University Established by the Government of Tamil Nadu)
KARAIKUDI – 630 003

Directorate of Distance Education

B.Com. [Computer Application]


I - Semester
123 13

FUNDAMENTALS
OF
INFORMATION TECHNOLOGY
Authors:
Vinay Ahlawat, Lecturer, KIET, Ghaziabad
Units: (1.0-1.3, 1.5-1.9, 2.0-2.3)
Sanjay Saxena, Managing Director, Total Synergy Consulting Pvt.Ltd.
Units: (1.4, 2.4, 14.0-14.4)
B Basavaraj, Former Principal and HOD, Department of Electronics and Communication Engineering, SJR College of Science,
Arts & Commerce
Units: (3, 4, 7, 8)
Deepti Mehrotra, Professor, Amity School of Engineering and Technology, Amity University, Noida
Unit: (5)
Vivek Kesari, Asst. Professor, Galgotia’s GIMT Institute of Management & Technology, Greater Noida
Unit: (6)
VK Govindan, Professor, Computer Engineering, Deptt. of Computer Science and Engineering, NIT, Calicut
Units: (9, 12.0-12.6)
Rajneesh Agrawal, Senior Scientist, Department of Information Technology, Government of India
Units: (10, 13.0-13.3)
Rohit Khurana, CEO, ITL Education Solutions Ltd.
Units: (11, 12.7-12.13)
Deepak Gupta, Assistant Professor, G.L. Bajaj Group of Institutions, Greater Noida
Units: (14.6-14.13)
Vikas Publishing House
Units: (2.5-2.10, 13.4-13.10, 14.5)

"The copyright shall be vested with Alagappa University"

All rights reserved. No part of this publication which is material protected by this copyright notice
may be reproduced or transmitted or utilized or stored in any form or by any means now known or
hereinafter invented, electronic, digital or mechanical, including photocopying, scanning, recording
or by any information storage or retrieval system, without prior written permission from the Alagappa
University, Karaikudi, Tamil Nadu.

Information contained in this book has been published by VIKAS® Publishing House Pvt. Ltd. and has
been obtained by its Authors from sources believed to be reliable and is correct to the best of their
knowledge. However, the Alagappa University, Publisher and its Authors shall in no event be liable for
any errors, omissions or damages arising out of use of this information and specifically disclaim any
implied warranties of merchantability or fitness for any particular use.

Vikas® is the registered trademark of Vikas® Publishing House Pvt. Ltd.


VIKAS® PUBLISHING HOUSE PVT. LTD.
E-28, Sector-8, Noida - 201301 (UP)
Phone: 0120-4078900  Fax: 0120-4078999
Regd. Office: 7361, Ravindra Mansion, Ram Nagar, New Delhi 110 055
 Website: www.vikaspublishing.com  Email: helpline@vikaspublishing.com

Work Order No. AU/DDE/DE1-238/Preparation and Printing of Course Materials/2018 Dated 30.08.2018 Copies - 500
SYLLABI-BOOK MAPPING TABLE
Fundamentals of Information Technology

BLOCK I: Fundamentals of Computer and Circuit
UNIT-I: Computers - Basics of Computer - Characteristics of Computers - Limitations of Computers - System Components - Input Devices - Output Devices - Computer Memory - Central Processing Unit - Mother Board.
    Unit-1: Computers (Pages 1-9)
UNIT-II: Computer Generations and Classifications - Evolution of Computers - Classification of Computers - Types of Microcomputers - Distributed Computer.
    Unit-2: Computer Generations and Classification (Pages 10-25)
UNIT-III: Number Systems and Boolean Algebra - Decimal - Binary - Octal - Hexadecimal - Converting Techniques in Number Systems - 1’s Complements, 2’s Complements - Computer Codes - Rules and Laws of Boolean Algebra - Basic Gates (NOT, AND & OR).
    Unit-3: Number Systems and Boolean Algebra (Pages 26-54)
UNIT-IV: Logical Circuits - Combinational Circuits - Sequential Circuits - Flip Flops - Shift Registers - Types of Shift Registers - Counters.
    Unit-4: Logical Circuits (Pages 55-96)

BLOCK II: Basics of CPU and Buses
UNIT-V: CPU Essentials - Modern CPU Concepts - CISC vs. RISC CPUs - Circuit Size and Die Size - Processor Speed - Processor Cooling - System Clocks - CPU Over Clocking.
    Unit-5: CPU Essentials (Pages 97-107)
UNIT-VI: Computer Memory - Memory System - Memory Cells - Memory Arrays - Random Access Memory (RAM) - Read Only Memory (ROM) - Physical Devices Used to Construct Memories.
    Unit-6: Computer Memory (Pages 108-117)
UNIT-VII: Bus - Bus Interface - Industry Standard Architecture (ISA) - Micro Channel Architecture (MCA) - VESA (Video Electronics Standards Association) - Peripheral Component Interconnect - Accelerated Graphics Port - FSB - USB - Dual Independent Bus - Troubleshooting.
    Unit-7: Bus (Pages 118-130)
UNIT-VIII: Storage Devices - Hard Disk - Construction - IDE Drive Standard and Features - Troubleshooting - DVD - Blu-ray Disc - Flash Memory.
    Unit-8: Storage Devices (Pages 131-144)

BLOCK III: Storage Devices and Computer Software
UNIT-IX: Input Output Devices, Wired and Wireless Connectivity - Wired and Wireless Devices - Input Devices - Touch Screen - Visual Display Terminal - Troubleshooting.
    Unit-9: Input Output Devices, Wired and Wireless Connectivity (Pages 145-159)
UNIT-X: Computer Software - Overview of Different Operating Systems - Overview of Different Application Software - Overview of Proprietary Software - Overview of Open Source Technology.
    Unit-10: Computer Software (Pages 160-175)
UNIT-XI: Software Development, Design and Testing - Requirement Analysis - Design Process - Models for System Development - Software Testing Life Cycle - Software Testing - Software Paradigms - Programming Methods - Software Applications.
    Unit-11: Software Development (Pages 176-221)

BLOCK IV: Fundamentals of OS and Workings of Internet
UNIT-XII: Operating System Concepts - Functions of Operating System - Development of Operating System - Operating System Virtual Memory - Operating System Components - Operating System Services - Operating System Security.
    Unit-12: Operating System (Pages 222-240)
UNIT-XIII: Internet and Its Working - History of Internet - Web Browsers - Web Servers - Hypertext Transfer Protocol - Internet Protocols Addressing - Internet Connection Types - How Internet Works.
    Unit-13: Internet and Its Working (Pages 241-260)
UNIT-XIV: Internet and Its Uses - Internet Security - Uses of Internet - Virus - Antivirus - Cloud System - Cloud Technologies - Cloud Architecture - Cloud Infrastructure - Cloud Deployment Models.
    Unit-14: Internet and Its Uses (Pages 261-290)
CONTENTS
INTRODUCTION
BLOCK I: FUNDAMENTALS OF COMPUTER AND CIRCUIT
UNIT 1 COMPUTERS 1-9
1.0 Introduction
1.1 Objectives
1.2 Basics of Computer
1.3 Characteristics and Limitations of Computers
1.4 System Components
1.5 Answers to Check Your Progress Questions
1.6 Summary
1.7 Key Words
1.8 Self Assessment Questions and Exercises
1.9 Further Readings
UNIT 2 COMPUTER GENERATIONS AND CLASSIFICATION 10-25
2.0 Introduction
2.1 Objectives
2.2 Evolution of Computers
2.3 Generations of Computer
2.4 Classification of Computers
2.5 Distributed Computer Systems
2.6 Answers to Check Your Progress Questions
2.7 Summary
2.8 Key Words
2.9 Self Assessment Questions and Exercises
2.10 Further Readings
UNIT 3 NUMBER SYSTEMS AND BOOLEAN ALGEBRA 26-54
3.0 Introduction
3.1 Objectives
3.2 Number Systems
3.2.1 Decimal Number System
3.2.2 Binary Number System
3.2.3 Octal Number System
3.2.4 Hexadecimal Number System
3.2.5 Conversion from One Number System to the Other
3.3 Complements
3.4 Numeric and Character Codes
3.5 Basic Gates
3.5.1 AND Gate
3.5.2 OR Gate
3.6 Boolean Algebra
3.6.1 Laws and Rules of Boolean Algebra
3.7 Answers to Check Your Progress Questions
3.8 Summary
3.9 Key Words
3.10 Self Assessment Questions and Exercises
3.11 Further Readings
UNIT 4 LOGICAL CIRCUITS 55-96
4.0 Introduction
4.1 Objectives
4.2 Combinational Logic
4.3 Adders and Subtractors
4.3.1 Full-Adder
4.3.2 Half-Subtractor
4.3.3 Full-Subtractor
4.4 Decoders
4.4.1 3-Line-to-8-Line Decoder
4.5 Encoders
4.5.1 Octal-to-Binary Encoder
4.6 Multiplexer
4.7 Demultiplexer
4.7.1 Basic Two-Input Multiplexer
4.7.2 Four-Input Multiplexer
4.8 Flip-flops
4.8.1 S-R Flip-Flop
4.8.2 D Flip-Flop
4.8.3 J-K Flip-Flop
4.8.4 T Flip-Flop
4.8.5 Master–Slave Flip-Flops
4.9 Registers
4.9.1 Shift Registers Basics
4.9.2 Serial In/Serial Out Shift Registers
4.9.3 Serial In/Parallel Out Shift Registers
4.9.4 Parallel In/Serial Out Shift Registers
4.9.5 Parallel In/Parallel Out Registers
4.10 Counters
4.10.1 Asynchronous Counter Operations
4.10.2 Synchronous Counter Operations
4.11 Answers to Check Your Progress Questions
4.12 Summary
4.13 Key Words
4.14 Self Assessment Questions and Exercises
4.15 Further Readings
BLOCK II: BASICS OF CPU AND BUSES
UNIT 5 CPU ESSENTIALS 97-107
5.0 Introduction
5.1 Objectives
5.2 Modern CPU Concepts
5.3 CPU: Circuit Size and Die Size
5.4 Answers to Check Your Progress Questions
5.5 Summary
5.6 Key Words
5.7 Self Assessment Questions and Exercises
5.8 Further Readings
UNIT 6 COMPUTER MEMORY 108-117
6.0 Introduction
6.1 Objectives
6.2 Memory System
6.3 Physical Devices Used to Construct Memories
6.4 Answers to Check Your Progress Questions
6.5 Summary
6.6 Key Words
6.7 Self Assessment Questions and Exercises
6.8 Further Readings
UNIT 7 BUS 118-130
7.0 Introduction
7.1 Objectives
7.2 Bus Interface and Expansion Slots
7.2.1 Industry Standard Architecture
7.2.2 Extended Industry Standard Architecture
7.2.3 Micro Channel Architecture
7.2.4 Video Electronics Standards Association
7.2.5 Peripheral Component Interconnect or Personal Computer Bus
7.2.6 Accelerated Graphics Port
7.3 FSB
7.4 USB
7.5 Dual Independent Bus
7.6 Answers to Check Your Progress Questions
7.7 Summary
7.8 Key Words
7.9 Self Assessment Questions and Exercises
7.10 Further Readings
UNIT 8 STORAGE DEVICES 131-144
8.0 Introduction
8.1 Objectives
8.2 Storage Devices
8.3 Hard Disk Construction
8.4 IDE Drive Standard and Features
8.5 Answers to Check Your Progress Questions
8.6 Summary
8.7 Key Words
8.8 Self Assessment Questions and Exercises
8.9 Further Readings
BLOCK III: STORAGE DEVICES AND COMPUTER SOFTWARE
UNIT 9 INPUT OUTPUT DEVICES, WIRED AND
WIRELESS CONNECTIVITY 145-159
9.0 Introduction
9.1 Objectives
9.2 Wired and Wireless Devices
9.3 Input and Output Devices
9.4 Answers to Check Your Progress Questions
9.5 Summary
9.6 Key Words
9.7 Self Assessment Questions and Exercises
9.8 Further Readings
UNIT 10 COMPUTER SOFTWARE 160-175
10.0 Introduction
10.1 Objectives
10.2 Overview of Different Software
10.2.1 System Software
10.2.2 Application Software
10.3 Overview of Different Operating System
10.4 Answers to Check Your Progress Questions
10.5 Summary
10.6 Key Words
10.7 Self Assessment Questions and Exercises
10.8 Further Readings
UNIT 11 SOFTWARE DEVELOPMENT 176-221
11.0 Introduction
11.1 Objectives
11.2 Design and Testing Requirement Analysis
11.3 Design Process
11.4 Models for System Development
11.5 Software Testing Life Cycle
11.6 Software Testing
11.7 Software Paradigms and Programming Methods
11.8 Answers to Check Your Progress Questions
11.9 Summary
11.10 Key Words
11.11 Self Assessment Questions and Exercises
11.12 Further Readings
BLOCK IV: FUNDAMENTALS OF OS AND WORKINGS OF INTERNET
UNIT 12 OPERATING SYSTEM 222-240
12.0 Introduction
12.1 Objectives
12.2 Operating System Concepts
12.3 Functions of OS
12.4 Development of Operating System
12.5 Operating System Virtual Memory
12.6 Operating System Components
12.7 Operating System Services
12.8 Operating System Security
12.9 Answers to Check Your Progress Questions
12.10 Summary
12.11 Key Words
12.12 Self Assessment Questions and Exercises
12.13 Further Readings
UNIT 13 INTERNET AND ITS WORKING 241-260
13.0 Introduction
13.1 Objectives
13.2 Definition
13.3 History of the Internet
13.4 Web Browser
13.5 Internet Protocols
13.6 Answers to Check Your Progress Questions
13.7 Summary
13.8 Key Words
13.9 Self Assessment Questions and Exercises
13.10 Further Readings
UNIT 14 INTERNET AND ITS USES 261-290
14.0 Introduction
14.1 Objectives
14.2 Internet Security
14.3 Uses of Internet
14.4 Virus
14.5 Cloud System
14.6 Computing Platforms and Technologies
14.7 Cloud Computing Architecture and Infrastructure
14.8 Types of Cloud and Deployment Models
14.9 Answers to Check Your Progress Questions
14.10 Summary
14.11 Key Words
14.12 Self Assessment Questions and Exercises
14.13 Further Readings
INTRODUCTION

Information Technology (IT) needs no introduction today. Its impact is widespread. No matter which field we work in, we use IT directly or indirectly. Characteristically,
IT is the application of computers and telecommunications equipment to store,
retrieve, transmit and manipulate data. The term is commonly used as a synonym
for computers and computer networks, but it also encompasses other information
distribution technologies, such as television and telephones.
Computers have brought about major changes in all spheres of life. Today it
is extremely difficult to imagine the world without computers. Computers help us
communicate using modems, telephone lines and Wi-Fi, making it seem as if the
people conversing are sitting side by side and talking directly to each other. This
modern way of communicating has been made possible by computers. Through
the Internet and e-mail, we now have the ability to communicate with anybody in
any part of the world in a matter of minutes. The Internet links computer networks
across the world so that users can share resources and also communicate with
each other. Some computers have direct access to all the facilities of the Internet,
and its e-mail facilities have been a boon to society, especially in terms of time saved.
Today, the fact that computers have made a big impact on many aspects of
our lives can hardly be questioned. They have opened up an entire world of
knowledge and information that is readily accessible. Thus, knowing about
computers, right from the first mechanical adding machine to the latest
microprocessors, has become imperative for students as well as for anybody who
has anything to do with a computer system. Today, we are using the fifth generation
of computers. The term ‘generation’ is used to distinguish between varying hardware
and software technologies. Hardware by itself cannot do any calculation or
manipulation of data without being instructed what to do and how to do it; thus, a
computer system also needs software. The software used in a computer system is
grouped into application software, system software and utility software. Today,
organizations use computer systems to automate work with the help of specific
software.
This book, Fundamentals of Information Technology, has been written
in the SIM format or the self-instructional mode wherein each unit begins with an
‘Introduction’ to the topic followed by an outline of the ‘Objectives’. The detailed
content is then presented in a simple and an organized manner, interspersed with
‘Check Your Progress’ questions to test the understanding of the students. A
‘Summary’ along with a list of ‘Key Words’ and a set of ‘Self Assessment Questions
and Exercises’ is also provided at the end of each unit for effective recapitulation.
BLOCK - I
FUNDAMENTALS OF COMPUTER AND CIRCUIT
UNIT 1 COMPUTERS
Structure
1.0 Introduction
1.1 Objectives
1.2 Basics of Computer
1.3 Characteristics and Limitations of Computers
1.4 System Components
1.5 Answers to Check Your Progress Questions
1.6 Summary
1.7 Key Words
1.8 Self Assessment Questions and Exercises
1.9 Further Readings

1.0 INTRODUCTION

In this unit, you will learn about the basics of computers, input/output devices and
computer memory. A computer is an electronic device that is used to carry out
sequences of arithmetic or logical operations automatically. An input device is a
hardware component used to enter data into the computer, while an output device
presents the results of arithmetic and logical operations. Memory is used for
storing data and information, either temporarily or permanently.

1.1 OBJECTIVES

After going through this unit, you will be able to:


 Understand the basics of computer
 Discuss the characteristics and limitations of computers
 Explain the different types of input and output devices
 Explain the various types of memory

1.2 BASICS OF COMPUTER

A computer is an electronic device that can implement a programmed list of
instructions and also respond to new instructions that it is given to perform some
task.

A computer performs four functions:
1. Accepts data
2. Processes data
3. Produces output
4. Stores results
In the following units you will learn about the parts of a computer and each
of the four steps of the information processing cycle.
Modern computers are both electronic and digital. The instructions
and data are called software; whereas wires, transistors and circuits together form
the hardware.
Generally, all computers have the following hardware components:
 Memory: Enables a computer to store data and programs
 Mass storage devices: Allow a computer to keep large amounts of data.
Disk drives and tape drives are the common mass storage devices.
 Input device: The input device is the medium through which data and
instructions enter a computer, usually a keyboard and a mouse.
 Output device: A display screen, printer or some other device that lets
you see what the computer has achieved.
 Central Processing Unit (CPU): The heart of the computer that actually
executes instructions and processes data.
In addition to these, many other components make it possible for the main
components to work together efficiently. For example, the bus helps every computer
to transmit data from one part of the computer to another.
Computers can mainly be classified by power and size. These are as follows:
 Personal computer: A small and single-user computer based on
a microprocessor is popularly known as Personal Computer (PC). Besides
the microprocessor, a personal computer has a keyboard (helps to enter
data), a monitor (helps to display information) and also a storage device to
store data.
 Workstation: It is also a single-user and a powerful computer. A workstation
is like a personal computer but has a higher-quality monitor and a more
powerful microprocessor.
 Minicomputer: A multi-user computer that can support ten to hundreds
of users at the same time.
 Mainframe: A powerful multi-user computer which can support hundreds
and thousands of users simultaneously.
 Supercomputer: An extremely fast computer capable of implementing
millions of instructions every second.
Some Basic Computer Terminologies

Hardware: This refers to the physical parts of the computer.


Software: The instructions or programs that instruct the computer what to do.
Data: The individual facts like name (first name, middle name and surname), price,
quantity ordered, etc.
Information: Data which has been processed into a useful form, e.g., a complete
mailing address.
Default: The original settings if instructions for any further change are not given.
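As a tiny, purely illustrative Python sketch (the names and address below are invented, not taken from this unit), the distinction between data and information can be shown by processing individual facts into a complete mailing address:

```python
# Data: individual facts (hypothetical values).
first_name, surname = "A.", "Kumar"
street, city, pincode = "12 North Street", "Karaikudi", "630003"

# Information: the same facts processed into a useful form - a complete mailing address.
mailing_address = f"{first_name} {surname}, {street}, {city} - {pincode}"
print(mailing_address)   # A. Kumar, 12 North Street, Karaikudi - 630003
```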

1.3 CHARACTERISTICS AND LIMITATIONS OF COMPUTERS

If any computer can carry out a large number of calculations in a short period of
time, it can be called a ‘powerful’ computer. Ignoring all the limiting factors of the
other hardware, according to the above definition, the CPU makes a computer
powerful. Computers are powerful for different reasons. They operate with amazing
reliability, speed, and accuracy. Huge amounts of data and information can be
stored in computers. Computers also permit their users to communicate with other
users.
Speed
A computer can perform billions of actions in a second. The important factors that
determine the speed of a computer are:
 The amount of data that the CPU can execute in a given period of time
 CPU’s clock speed
Clock speed
Clock speed is the rate at which a CPU implements instructions. An internal clock
that regulates the rate at which instructions are executed and coordinates all the
various computer components exists in every system. The CPU needs a
predetermined number of clock ticks to execute each instruction it receives.
If this clock runs faster, the CPU will execute more instructions per second.
Clock speeds are expressed in megahertz (MHz; mega means million and hertz
means times per second) or gigahertz (GHz). For example, 200 MHz means 200
million times per second whereas 200 GHz means 200 billion times per second.
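As a rough illustration of these figures, the relationship between clock speed and instruction throughput can be sketched in a few lines of Python; the "ticks per instruction" value below is an assumption made only for this example, since real CPUs differ (and modern ones overlap instructions):

```python
# Relate clock speed to instruction throughput (illustrative figures only).
def instructions_per_second(clock_hz, ticks_per_instruction):
    """Approximate number of instructions executed per second."""
    return clock_hz / ticks_per_instruction

clock_hz = 200e6                 # 200 MHz = 200 million clock ticks per second
ticks = 4                        # assume each instruction needs 4 ticks (hypothetical)
print(instructions_per_second(clock_hz, ticks))   # 50,000,000 instructions per second
print(1 / clock_hz)              # one tick lasts 5e-09 s, i.e. 5 nanoseconds
```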
Reliability
‘Failures are usually due to human error, one way or another.’ We can depend on
the modern computer because the electronic components in these have a very low
failure rate. The high reliability of the components of a computer ensures that the
computer consistently gives good results.
Accuracy
Computers process large amounts of data and produce error-free results if
(and only if) the input data is correct and the instructions it receives are input
properly. If the input data is inaccurate, the resulting output will certainly be incorrect.
So, the accuracy of a computer’s output depends mainly on the accuracy of the
data input.
Storage
A computer can store huge amounts of data. With modern storage devices, the
computer can transfer data quickly from storage to memory, process it, and then
store it back for further use.
Diligence
Computers are highly consistent compared to human beings. They never suffer
from human traits like boredom, tiredness and lack of concentration. Therefore,
computers score over human beings in performing repetitive and voluminous tasks.
Versatility
Computers are capable of performing any task following a series of logical steps.
They are versatile machines, but their capability is limited only by human intelligence.
In today’s fast developing, technology-savvy world, it is almost impossible to find
an area where computers are not being used. Banks, railway/air reservation, hotels,
weather forecasting and many more—computers are essential in every sector.
Power of Remembering
In human memory, information that is less important is relegated to the back of the
mind and forgotten with the progression of time, whereas a piece of information
once stored (or recorded) in the computer can never be forgotten and can be
retrieved at any time! Therefore, information can be retained as long as desired.
We can use secondary storage (a type of detachable memory) for this purpose.
No Intelligence Quotient (IQ)
Computers have no real intellect or common sense unlike the human brain.
Computers are still not complex enough to understand and analyze and act
accordingly like the brain does. These can only follow rules and instructions preset
by the programmer. Traditional machines manage to replicate and bear a
resemblance to human intellect because programmers try to make the computer
react just like a human brain by programming a set of rules and instructions for the
computer to follow. Unlike the human brain, the computer can hardly innovate and
invent new ideas. The computer is faster than the human brain at doing logical
things and computations, but it can produce creative output only by using preset
algorithms (or programs) to combine existing ideas, which shows only a limited
range of creativity. However, the brain is capable of imagination. It is better at
interpreting the outside world and can create new ideas.


No Feelings
The human brain always acts under the influence of emotions, but computers act
only on logic. Many of our actions are based on our emotions. It has been already
proved by researchers that brains act on emotions. However, computers act
completely on logical bits following the programmers’ instructions that are absolute.
No other factors have any effect on them, because all actions performed by
computers are based on the coding.

1.4 SYSTEM COMPONENTS

As seen in earlier sections, the size, shape, cost, and performance of computers
have changed over the years, but the basic logical structure has not changed. Any
computer system essentially consists of three important parts, namely, input device,
central processing unit (CPU) and output device. The CPU itself consists of the
main memory, the arithmetic logic unit, and the control unit.
In addition to the five basic parts mentioned above, computers also employ
secondary storage devices (also referred to as auxiliary storage or backing storage),
which are used for storing data and instructions on a long-term basis.
Figure 1.1 shows the basic anatomy of computer system.

Secondary Storage

Main Memory

Input Unit Control Unit

(Data and Instructions) (Information-Results)

Arithmetic Logic Unit

Central Processing Unit (CPU)


Fig. 1.1 Schematic Representation of a Computer System

All computer systems perform the following five basic operations for converting
raw data into relevant information:
1. Inputting: The process of entering data and instructions into the computer
system.
2. Storing: The process of saving data and instructions so that they are available
for use as and when required.
3. Processing: Performing arithmetic or logical operations on data to convert
them into useful information. Arithmetic operations include add, subtract,
multiply, divide, etc., and logical operations include comparisons such as
equal to, less than and greater than.
4. Outputting: This is the process of providing the results to the user. These
could be in the form of visual display and/or printed reports.
5. Controlling: Refers to directing the sequence and manner in which all the
above operations are performed.
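To make the cycle concrete, here is a minimal, hypothetical Python sketch of the five operations; all names are illustrative, and a real computer of course performs these steps in hardware rather than in a script:

```python
# A toy illustration of the five basic operations described above.
def accept_input():
    """Inputting: in a real system the data would come from a keyboard, scanner, etc."""
    return [4, 8, 15]                      # hypothetical raw data

def process(data):
    """Processing: an arithmetic operation (sum) and a logical comparison."""
    total = sum(data)
    all_positive = all(n > 0 for n in data)
    return total, all_positive

def main():
    data = accept_input()                  # 1. Inputting
    stored = list(data)                    # 2. Storing: keep the data for later use
    total, all_positive = process(stored)  # 3. Processing
    print("Sum:", total, "| all positive:", all_positive)   # 4. Outputting

if __name__ == "__main__":
    main()                                 # 5. Controlling: the program fixes the sequence
```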
Let us now familiarise ourselves with the various computer units that perform
these functions:
Input Unit
Both program and data need to be in the computer system before any kind of
operation can be performed. Program refers to the set of instructions which the
computer is to carry out, and data is the information on which these instructions
are to operate. For example, if the task is to rearrange a list of telephone subscribers
in alphabetical order, the sequence of instructions that guide the computer through
this operation is the program, whilst the list of names to be sorted is the data.
The Input Unit performs the process of transferring data and instructions
from the external environment into the computer system. Instructions and data
enter the input unit depending upon the particular input device used (keyboard,
scanner, card reader etc.). Regardless of the form in which the input unit receives
data, it converts these instructions and data into computer acceptable form (Binary
Codes). It then supplies the converted data and instructions to the computer system
for further processing.
Main Memory (Primary Storage)
Data and instructions are stored in the primary storage before processing and are
transferred as and when needed to the Arithmetic Logic Unit (ALU) where the
actual processing takes place. Once the processing is completed, the final results
are again stored in the primary storage till they are released to an output device.
Also, any intermediate results generated by the ALU are temporarily transferred
back to the primary storage until needed at a later time. Thus data and instructions
may move many times back and forth between the primary storage and the ALU
before the processing is completed.
It may be worth remembering that no processing is done in the primary
storage.
Arithmetic Logic Unit (ALU)

After the input unit transfers the information into the memory unit, the information
can then be further transferred to the ALU where comparisons or calculations are
done and the results are sent back to the memory unit.
Since all data and instructions are represented in numeric form (bit patterns),
ALUs are designed to perform the four basic arithmetic operations: add, subtract,
multiply, divide, and logic/comparison operations such as equal to, less than, greater
than.
Output Unit
Since computers work with binary code, the results produced are also in binary
form. The basic function of the output unit therefore is to convert these results into
human readable form before providing the output through various output devices
like terminals, printers etc.
Control Unit
It is the function of the control unit to ensure that according to the stored instructions,
the right operation is done on the right data at the right time. It is the control unit
that obtains instructions from the program stored in the main memory, interprets
them, and ensures that other units of the system execute them in the desired order.
In effect, the control unit is comparable to the central nervous system in the human
body.
Central Processing Unit
The control unit, Arithmetic Logic Unit along with the main memory are together
known as the Central Processing Unit (CPU). It is the brain of any computer
system.
Secondary Storage
The storage capacity of the primary memory of the computer is limited. Often, it is
necessary to store large amounts of data. So, additional memory called secondary
storage or auxiliary memory is used in most computer systems.
Secondary storage is storage other than the primary storage. These are
peripheral devices connected to and controlled by the computer to enable
permanent storage of user data and programs. Typically, hardware devices like
magnetic tapes and magnetic disks fall under this category.

Check Your Progress


1. Define a computer.
2. What is clock speed in computers?
3. What is central processing unit?
1.5 ANSWERS TO CHECK YOUR PROGRESS
QUESTIONS

1. A computer is an electronic device that can implement a programmed list of


instructions and also respond to new instructions that it is given to perform
some task.
2. Clock speed is the rate at which a CPU implements instructions.
3. The control unit, Arithmetic Logic Unit along with the main memory are
together known as the Central Processing Unit (CPU).

1.6 SUMMARY

 A computer is an electronic device that can implement a programmed list of
instructions and also respond to new instructions that it is given to perform
some task.
 Modern computers are both electronic and digital. The instructions and
data are called software; whereas wires, transistors and circuits together
form the hardware.
 If any computer can carry out a large number of calculations in a short
period of time, it can be called a ‘powerful’ computer.
 Clock speed is the rate at which a CPU implements instructions. An internal
clock that regulates the rate at which instructions are executed and
coordinates all the various computer components exists in every system.
 A computer can store huge amounts of data. With modern storage devices,
the computer can transfer data quickly from storage to memory, process it,
and then store it back for further use.
 The Input Unit performs the process of transferring data and instructions
from the external environment into the computer system.
 After the input unit transfers the information into the memory unit the
information can then be further transferred to the ALU where comparisons
or calculations are done and results sent back to the memory unit.
 The control unit, Arithmetic Logic Unit along with the main memory are
together known as the Central Processing Unit (CPU).

1.7 KEY WORDS

 Hardware: It includes the physical, tangible parts or components of a
computer, such as the central processing unit, monitor, keyboard, computer
data storage, graphic card, sound card, speakers and motherboard.
 Software: It is a program that enables a computer to perform a specific
task.

1.8 SELF ASSESSMENT QUESTIONS AND EXERCISES

Short Answer Questions


1. What are the functions of a computer system?
2. What are the different hardware components of a computer?
Long Answer Questions
1. Explain the characteristics and limitations of computers.
2. Describe the basic anatomy of computer system.
3. What are the different units of computer? Explain.

1.9 FURTHER READINGS

Bhatt, Pramod Chandra P. 2003. An Introduction to Operating Systems—Concepts and Practice. New Delhi: PHI.
Bhattacharjee, Satyapriya. 2001. A Textbook of Client/Server Computing. New Delhi: Dominant Publishers and Distributers.
Hamacher, V.C., Z.G. Vranesic and S.G. Zaky. 2002. Computer Organization, 5th edition. New York: McGraw-Hill International Edition.
Mano, M. Morris. 1993. Computer System Architecture, 3rd edition. New Jersey: Prentice-Hall Inc.
Nutt, Gary. 2006. Operating Systems. New Delhi: Pearson Education.

UNIT 2 COMPUTER GENERATIONS AND CLASSIFICATION
Structure
2.0 Introduction
2.1 Objectives
2.2 Evolution of Computers
2.3 Generations of Computer
2.4 Classification of Computers
2.5 Distributed Computer Systems
2.6 Answers to Check Your Progress Questions
2.7 Summary
2.8 Key Words
2.9 Self Assessment Questions and Exercises
2.10 Further Readings

2.0 INTRODUCTION

In this unit, you will learn about the evolution and the various types of computers.
The term ‘computer generation’ refers to a change in the technology on which a
computer is built; it is used to distinguish between varying hardware technologies.
Computers can be classified on the basis of their size, processing speed and cost,
and also as digital, analog and hybrid machines. You will also learn about distributed
computer systems.

2.1 OBJECTIVES

After going through this unit, you will be able to:


 Discuss the various generations of computers
 Explain the different types of computer
 Understand the concept of distributed computer system

2.2 EVOLUTION OF COMPUTERS

The history of computers began almost 2000 years ago with the advent of the
abacus. An abacus is a wooden rack which holds two horizontal wires with beads
strung on them. Numbers are represented using the respective position of beads
on the rack. Simple calculations can be carried out by appropriately placing the
beads.

Fig. 2.1 An Abacus

Types of Calculating Machines


This section attempts to classify the various types of calculating machines. This
classification reflects mainly the items which can still be found, and is from a
collector’s point of view.
The items are divided mainly into two major groups:
 Adding Machines
 Calculating Machines
Adding machines
These machines were mainly designed to execute addition and subtraction
(see Figure 2.2). They are very small, cheap and easy to use.

Fig. 2.2 Work Process for an Adding Machine

Calculating machines
These machines were mainly designed to execute all basic four operations: addition,
subtraction, multiplication and division (see Figure 2.3). They are large machines
and need skillful operation. They are also very expensive and are usually owned
by companies.

Fig. 2.3 Work Process for a Calculating Machine

Napier’s Bones
A clever multiplication tool, Napier’s bones, is another interesting invention, devised
in 1617 by the Scottish mathematician John Napier (1550–1617). Napier, a
clergyman and philosopher, played an important role in the history of computing.
He was a gifted mathematician and published his great work on logarithms in
1614, not long before his death; the bones themselves were described in his 1617
book Rabdologia. The logarithm was a brilliant invention since it enabled the
simplification of a very complicated task: people could now transform multiplication
and division into simple addition and subtraction. His logarithm tables were used
by many people and soon became very popular. Napier is also remembered for
his important invention of ‘Napier’s bones’, a small instrument consisting of ten
rods on which the multiplication table is engraved. This simple device is capable
of quickly carrying out a multiplication when one of the numbers has only one
digit (example: 6 × 6742). Napier’s bones were very successful, and people in
Europe used them until the mid-1960s (see Figure 2.4).

Fig. 2.4 Napier’s Bones
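The principle Napier exploited, that logarithms turn multiplication into addition, is easy to verify with Python's standard math module (used here in place of a printed table):

```python
# log10(a*b) = log10(a) + log10(b), so a*b can be recovered from an addition.
import math

a, b = 6, 6742
log_sum = math.log10(a) + math.log10(b)   # addition replaces multiplication
print(10 ** log_sum)                      # ~40452.0 (tiny floating-point error expected)
print(a * b)                              # 40452, for comparison
```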

Slide Rule
In 1620, a calculating device called ‘Slide Rule’ based on the principle of logarithms
was invented by an English mathematician, William Oughtred. This slide rule became
the first analog computer of the modern age. In 1850, a French artillery officer
named Amedee Mannheim improved upon it by adding the movable double-sided
cursor. The two inbuilt graduated scales were placed in such a way that suitable
alignment of one scale against the other made it possible to do addition and
multiplication just by inspection.

Pascal’s Adding and Subtracting Machine


A French mathematician, Blaise Pascal, is usually credited for building the world’s
first digital computer (Pascaline) in 1642. Blaise Pascal (1623–1662) conceived
the Pascaline, in 1642, when he was just 18 years old. At the tender age of 12,
Pascal proved that the sum of the angles in a triangle is always equal to 180
degrees.
Later, he discovered the fundamentals of the probability theory and made
noteworthy contributions to the science of hydraulics. The Pascaline was probably
the first mechanical adding device, which was used for practical applications.
Pascal’s father, Etienne Pascal had to face the tiresome activity of adding and
subtracting large sequences of numbers as a tax collector. Pascal had built the
Pascaline to help his father. However, the machine was not very useful and was
also difficult to use. The major reason behind this was that the French currency
system was not based on ten. (A livre had twenty sols whereas a sol had
twelve deniers).
In 1623, Wilhelm Schickard had built one of the first calculating machines.
Pascal’s solution was rather crude and was not as efficient as Schickard’s machine.
As per Paul E. Dune, ‘…had Schickard’s ideas found a wide audience then Pascal’s
machine would not have been invented.’

Fig. 2.5 Pascal’s Adding and Subtracting Machine

The Pascaline was built on a brass rectangular box (see Figure 2.5). A set
of jagged dials moved internal wheels in such a way that a complete rotation of a
wheel caused the wheel on the left to advance one tenth. The first prototype
consisted of only five wheels, but later units consisted of six and eight wheels.
A pin was used in order to rotate the dials. Compared to Schickard’s machine, the
wheels were designed only to add numbers and moved clockwise only. Subtraction
was performed by applying a tedious technique which was based on the addition
of the nine’s complement. Though the machine did not get wide acceptance as it
was expensive and unreliable as well as difficult to manufacture and use, it did
attract plenty of attention. Several people built calculating machines based on the
same design during a period of thirty years after Pascal invented his machine.
Among them the most significant was the adding machine invented by Sir Samuel
Morland (1625–1695) of England in 1666. It had a duodecimal scale based on
the English currency, and human intervention was required to enter the carry
displayed in an auxiliary dial.
It is interesting to note that many companies introduced models based on
Pascal’s design even at the beginning of the 20th century. For example, the Lightning
Portable Adder introduced in 1908 by the Lightning Adding Machine Co. of Los
Angeles, and the Addometer introduced by the Reliable Typewriter and Adding
Machine Co. of Chicago in 1920. However, none of these machines achieved
commercial success.
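The nine's-complement subtraction used on the Pascaline, mentioned above, can be checked with ordinary arithmetic; the short Python sketch below is purely illustrative of the technique:

```python
# Subtraction by adding the nine's complement (the Pascaline's trick for a - b, with a > b):
# replace each digit d of b by 9 - d, add the result to a, then carry the overflow back in.
def nines_complement(n, digits):
    return int("".join(str(9 - int(d)) for d in str(n).zfill(digits)))

def subtract_by_addition(a, b, digits=5):
    total = a + nines_complement(b, digits)
    carry, rest = divmod(total, 10 ** digits)   # split off the overflow digit
    return rest + carry                         # end-around carry

print(subtract_by_addition(6742, 1234))   # 5508, the same as 6742 - 1234
```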
Leibniz’s Multiplication and Division Machine
In 1672, the famous German polymath, mathematician and philosopher, Gottfried
Wilhelm Von Leibniz (1646–1716), who was also the co-inventor of differential
calculus, planned to build a machine to implement the four basic arithmetical
operations. He was inspired by a steps-counting device known as a pedometer.
Leibniz was a child prodigy and he had earned his second doctorate when he
was only nineteen years old. He had gone through Pascal’s design and improved
on it in order to perform multiplication and division, finalizing his design by 1674.
Leibniz called his machine the Stepped Reckoner. It used a special type of gear
named the Leibniz Wheel or Stepped Drum, which consisted of a cylinder with
nine bar-shaped teeth of increasing length placed parallel to the cylinder’s axis.
When the drum was rotated with the help of a crank, a regular ten-tooth wheel
which was fixed over a sliding axis was rotated between the zero to nine positions.
This rotation depended on its relative position on the drum. Like the Pascal device,
there was one set of wheels for each digit which allowed the user to slide the mobile
axis. As a result, when the drum was rotated it generated a movement proportional
to its relative position in the regular wheels. This movement was then transformed by
Leibniz Wheel into multiplication or division, but this movement depended on which
direction the stepped drum was rotated.
Babbage’s Analytical Engine

Fig. 2.6 Babbage’s Analytical Engine


Babbage first attempted to invent a mechanical computing machine which had a
special purpose (see Figure 2.6). It was a calculator intended to tabulate
trigonometric and logarithmic functions by evaluating approximate polynomials.
However, this project was unfinished due to some personal and political problems.
Babbage realized that a simpler design was possible and he started working on
the analytical engine.
The input data and input programs had to be fed into the machine
through punched cards, which was a method being used to instruct
mechanical looms (example: the Jacquard loom). The machine had a printer, a
curve plotter and a bell to exhibit the output result. The machine was able to punch
numbers onto cards to be read in the future. It used ordinary base-10 fixed-point
arithmetic.
There was a store (i.e., a memory) which was capable of holding
1,000 numbers of 50 decimal digits each (ca. 20.7 kB). The ‘mill’, an arithmetical
unit, was able to carry out all four arithmetic operations, plus comparisons
and square roots. The mill relied on its own internal procedures (as in
the Central Processing Unit (CPU) of a modern computer) to be stored in the
form of pegs inserted into rotating drums known as ‘barrels,’ to implement more
complex instructions specified by the program.
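The "ca. 20.7 kB" figure for the store can be checked with a back-of-the-envelope calculation, since one decimal digit carries about log2(10) ≈ 3.32 bits of information (assuming 1 kB = 1,000 bytes):

```python
# Rough capacity check for Babbage's store: 1,000 numbers of 50 decimal digits each.
import math

bits = 1_000 * 50 * math.log2(10)   # information content in bits
print(bits / 8 / 1_000)             # ~20.76 kB, matching the "ca. 20.7 kB" quoted above
```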
Mechanical Calculator
In the first half of the 20th century, people saw the slow but steady development of
the mechanical calculator. The adding-listing machine was pioneered by
Dalton in 1902. It was the first of its kind to use only ten keys. This model became
the first of various types of models of ‘10-key add-listers’ manufactured by many
companies.

Fig. 2.7 An Addiator


The miniature Curta calculator (see Figure 2.7), which could be held in
one hand for operation, was introduced after being developed by Curt Herzstark in
1938. This model was the most advanced model using the stepped-gear calculating
mechanism.
Addition and subtraction could be carried out in a single operation in these
machines, like in a conventional adding machine. Actually, multiplication and
division were carried out by repeated mechanical additions and subtractions in
this device. Another calculator prepared by Friden also provided square roots. It
was based on division, but used an updated mechanism that automatically
incremented the number in the keyboard in a systematic way. Another type of
calculator was made by Friden and Marchant (Model SKA) by using square
root. Curta and other handheld mechanical calculators were in use till the 1970s,
and were later displaced by electronic calculators.
Popular European machines were Facit, Triumphator and Walther
calculators. Examples of the same type of machines are Odhner and Brunsviga.
These machines were operated by handcranks, but there were motor-driven
versions also. Generally, most of the machines that look like these use the Odhner
mechanism, or variations of it. Another model Olivetti Divisumma executed all
four basic operations of arithmetic, and had a printer too. In Europe, for many
decades, full-keyboard machines, including motor-driven ones, were used.
Here, some machines had as many as 20 columns in their full keyboards.

2.3 GENERATIONS OF COMPUTER

Computers can be categorized into different generations according to their
development and modernization since their advent.
1. First Generation (1939–1954): Vacuum Tube
2. Second Generation Computers (1954–1959): Transistor
3. Third Generation Computers (1959–1971): IC
4. Fourth Generation (1971–1991): Microprocessor
5. Fifth Generation Computers (1991 and beyond)
First Generation Computers (1939–1954): Vacuum Tube
The computers of this generation utilized vacuum tubes for circuitry and magnetic
drums for memory. Most of these computers were huge, often occupying an entire
room, and were very expensive to operate as well. Additionally, apart from using
a great deal of electricity, they also produced a lot of heat which often resulted in
malfunctioning.
The first generation computers relied on machine language. This was the
lowest-level programming language utilized by computers to execute operations.
These programming languages could only resolve one problem at a time. Input
data was dependent on punched cards and paper tape, and output data was
shown on printouts.
Second Generation Computers (1954–1959): Transistor
The transistor was invented in 1947 but did not see widespread use in computers
until the late 1950s. Transistors ushered in the second generation of computers
and replaced vacuum tubes. The transistor allowed computers to become smaller,
quicker, cheaper, more energy-efficient and more dependable as compared to
their first generation predecessors. These qualities made the transistor far superior
to the vacuum tube.
Instead of using cryptic binary machine language, second generation
computers started to use symbolic or assembly languages. These languages
facilitated the specification of instructions in words. High-level programming
languages (such as early versions of COBOL and FORTRAN) were also
developed at this time.
Third Generation Computers (1959–1971): IC
The third generation computers saw the development of the integrated circuit.
Transistors became much smaller and were placed on silicon chips, called
semiconductors. These transistors augmented the speed and efficiency of computers
significantly.
Fourth Generation (1971–1991): Microprocessor
During 1971–1991, the microprocessor created the fourth generation of computers,
in which thousands of integrated circuits were built onto a single silicon chip. What
in the first generation filled an entire room could now fit in the palm of the hand.
Developed in 1971, the Intel 4004 chip combined all the components of the
computer onto a single chip, from the central processing unit and memory to
input/output controls.
IBM introduced its first computer for the home user in 1981, and Apple developed
the Macintosh in 1984. The microprocessor could now be used for varied purposes,
in various fields of life, and in various everyday products.
Fifth Generation Computers (1991 and Beyond)
Computing devices of the fifth generation are still in the process of being developed,
but some applications, such as voice recognition are already in use today. Given
below is a brief description of the progression of the fifth generation computers.
 1991: Tim Berners Lee developed World Wide Web (WWW) and it was
released by CERN.
 1993: The first web browser, known as Mosaic, was developed by the
student Marc Andreessen and programmer Eric Bina at NCSA. The beta
version 0.5 of X Mosaic for UNIX, which was a huge success, was also
launched in 1993.
Computer Generations and  1994: Netscape Navigator 1.0 was released and it soon gained 75 per
Classification
cent of world browser market.
 1996: Microsoft released the much improved browser, Internet Explorer 3.0.
NOTES
2.4 CLASSIFICATION OF COMPUTERS

A computer is a general purpose device which can be programmed to carry out a
finite set of arithmetic and logical operations. Computers can be classified on the
basis of their size, processing speed and cost.
According to data processing capabilities, computers are classified as analog, digital
and hybrid.
Analog
Analog computers are generally used in industrial process controls and to measure
physical quantities, such as pressure, temperature, etc. An analog computer does
not operate on binary digits to compute. It works on continuous electrical signal
inputs and the output is displayed continuously. Its memory capacity is low and it
can perform only certain types of calculations. However, its operating speed is
faster than the digital computer as it works in a totally different mode.
Digital
Digital computers are commonly used for data processing and problem solving
using specific programs. A digital computer is designed to process data in numerical
form, handled discretely from one state to the next. These processing states
involve binary digits, which take the form of the presence or absence of
magnetic markers in standard storage devices, ON/OFF switches or relays. In
a digital computer, letters, words, symbols and complete texts are digitally
represented, i.e., using only two digits 0 and 1. It processes data in discrete form
and has a large memory to store huge quantity of data.
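To illustrate the "only two digits 0 and 1" point, the short Python snippet below shows how a digital computer can represent a piece of text as bit patterns; it uses the ASCII character code, one of the computer codes discussed in Unit 3:

```python
# Every character is stored as a pattern of 0s and 1s (here, its 8-bit ASCII code).
text = "IT"
for ch in text:
    code = ord(ch)                        # numeric character code
    print(ch, code, format(code, "08b"))  # I 73 01001001 / T 84 01010100
```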
Hybrid
Hybrid computers are the combination of digital and analog computers. A hybrid
computer uses the best features of digital and analog computers. It helps the user
to process both continuous and discrete data. Hybrid computers are generally
used for weather forecasting and industrial process control.
General Purpose
Workstations are high end, general purpose computers designed to meet the
computing needs of engineers, architects and other professionals who need
computers with greater processing power, larger storage and better graphic display
facilities. These are commonly used for Computer Aided Design (CAD) and for
multimedia applications, such as creating special audiovisual effects for television
programmes and movies. A workstation looks like a PC and can be used by only
one person at a time.
Special Purpose
A special purpose computer is a digital or an analog computer specifically designed
to perform a desired specific task. These are high performance computing systems
with special hardware architecture which is dedicated to solve a specific problem.
This is performed with the help of specially programmed Field Programmable
Gate Array (FPGA) chips or custom Very Large Scale Integration (VLSI) chips.
They are used for special applications, for example, astrophysics computations,
GRAvity PipE (GRAPE) 6 (for astrophysics and molecular dynamics), Hydra
(for playing chess), MDGRAPE 3 (for protein structure computations), etc.
Micro, Mini, Mainframe and Supercomputers
On the basis of the size, computers are classified as micro, mini, mainframe and
supercomputers.
Microcomputers
Microcomputers are developed from advanced computer technology. They are
commonly used at home, classroom and in the workplace. Microcomputers are
called home computers, personal computers, laptops, personal digital assistants,
etc. They are powerful and easy to operate. In recent years, computers were
made portable and affordable. The major characteristics of a microcomputer are
as follows:
 Microcomputers are capable of performing data processing jobs and solving
numerical programs. Microcomputers work rapidly like minicomputers.
 Microcomputers have reasonable memory capacity which can be measured
in megabytes.
 Microcomputers are reasonably priced. Varieties of microcomputers are
available in the market as per the requirement of smaller business companies
and educational institutions.
 Processing speed of microcomputers is measured in MHz. A microcomputer
running at 90MHz works approximately at 90 MIPS (Million Instructions
Per Second).
 Microcomputers have drives for floppy disk, compact disk and hard disks.
 Only one user can operate a microcomputer at a time.
 Microcomputers are usually dedicated to one job. Millions of people use
microcomputers to increase their personal productivity.
 Useful accessory tools, such as clock, calendar, calculator, daily schedule
reminders, scratch pads, etc., are available in a microcomputer.
Computer Generations and  Laptop computers, also called notebook computers, are microcomputers.
Classification
They use the battery power source. Laptop computers have a keyboard,
mouse, floppy disc drive, CD drive, hard disk drive and monitor. Laptop
computers are expensive in comparison to personal computers.
Personal Computers
A PC is a small single user microprocessor based computer that sits on your
desktop and is generally used at homes, offices and schools. As the name implies,
PCs were mainly designed to meet the personal computing needs of individuals.
Personal computers are used for preparing normal text documents, spreadsheets
with predefined calculations and business analysis charts, database management
systems, accounting systems and also for designing office stationary, banners, bills
and handouts. Children and youth love to play games and surf the Internet,
communicate with friends via e-mail and the Internet telephony, and do many
other entertaining and useful tasks.
The configuration varies from one PC to another depending on its usage.
However, it consists of a CPU or system unit, a monitor, a keyboard and a mouse.
It has a main circuit board or motherboard (consisting of the CPU and the memory),
hard disk storage, floppy disk drive, CD-ROM drive and some special add-on
cards like Network Interface Card or NIC and ports for connecting peripheral
devices like printers.
PCs are available in two models—desktop and tower. In the desktop model,
the monitor is positioned on top of the system unit whereas in the tower model the
system unit is designed to stand by the side of the monitor or even on the floor to
save desktop space. Due to this feature, the tower model is very popular.
Some popular operating systems for PCs are MS DOS, Microsoft
Windows, Windows NT, Linux and UNIX. Most of these operating systems have
the capability of multitasking which eases operation and saves time when a user
has to switch between two or more applications while performing a job. Some
leading PC manufacturers are IBM, Apple, Compaq, Dell, Toshiba and Siemens.
Types of Personal Computers

Notebook/Laptop Computers
Notebook computers are battery operated personal computers. Smaller than the
size of a briefcase, these are portable computers and can be used in places like
libraries, in meetings or even while travelling. Popularly known as laptop computers,
or simply laptops, they weigh less than 2.5 kg and can be only 3 inches thick (refer
Figure 2.8). Notebook computers are usually more expensive as compared to
desktop computers though they have almost the same functions, but since they are
sleeker and portable, they have a complex design and are more difficult to
manufacture. These computers have large storage space and other peripherals,
such as serial port, PC card, modem or network interface card, CD-ROM drive
and printer. They can also be connected to a network to download data from
other computers or to the Internet. A notebook computer has a keyboard, a flat
screen with Liquid Crystal Display (LCD) display and can also have a trackball
and a pointing stick.


Fig. 2.8 Laptop Computer

A notebook computer uses the MS DOS or Windows operating system. It
is used for making presentations as it can be plugged into an LCD projection
system. The data processing capability of a notebook computer is as good as an
ordinary PC because both use the same type of processor, such as an Intel Pentium
processor. However, a notebook computer generally has lesser hard disk storage
than a PC.
Tablet PC
Tablet PC is a mobile computer that looks like a notebook or a small writing slate
but uses a stylus pen or your finger tip to write on the touch screen. It saves
whatever you scribble on the screen with the pen in the same way as you have
written it. The scribbled input can then be converted to text with the help of
handwriting recognition software.
PDA
A Personal Digital Assistant (PDA) is a small palm sized hand-held computer
which has a small color touch screen with audio and video features. They are
nowadays used as smart phones, Web enabled palmtop computers, portable media
players or gaming devices.
Most PDAs today typically have a touch screen for data entry, a data storage/
memory card, Bluetooth, Wireless Fidelity (Wi-Fi) or an infrared connectivity and
can be used to access the Internet and other networks.
Minicomputers
Minicomputers are a scaled down version of mainframe computers. The processing
power and cost of a minicomputer are less than that of the mainframe. The
minicomputers have big memory sizes and faster processing speed compared to
the microcomputer. Minicomputers are also called workgroup systems because
they are well suited to the requirements of the minor workgroups within an
organization. The major characteristics of a minicomputer are as follows:
 Minicomputers have extensive problem solving capabilities.
 Minicomputers have reasonable memory capacity which can be measured
in MB or GB.
 Minicomputers have quick processing speeds and operating systems
facilitated with multitasking and network capabilities.
 Minicomputers have drives for floppy disk, magnetic tape, compact disk,
hard disks, etc.
 Minicomputers can serve as network servers.
 Minicomputers are used as a substitute of one mainframe by big
organizations.
Mainframe Computers
Mainframe computers are generally used for handling the needs of information
processing of organizations like banks, insurance companies, hospitals and railways.
This type of system is placed in a central location with several user terminals
connected to it. The user terminals act as access stations and may be located in
the same building as shown in Figure 2.9.
Fig. 2.9 Mainframe Computer

Mainframe computers are bigger and more expensive than workstations.
They look like a row of large file cabinets and need a large room with closely
monitored humidity and temperature levels. A mainframe system of lower
configuration is often referred to as a minicomputer system.
Supercomputers
Supercomputers are the most powerful and expensive computers available today.
They are primarily used for processing complex scientific applications that involve
tasks with highly complex calculations and solving problems with mechanical
physics, such as weather forecasting and climate research systems, nuclear weapon
simulation and simulation of automated aircrafts. Military organizations, major
research and development centres, universities and chemical laboratories are major
users of supercomputers.
Supercomputers use multiprocessing and parallel processing technologies
to solve complex problems promptly. They use multiprocessors which enable the
user to divide a complex problem into smaller problems. A parallel program is
written in a manner that can break up the original problem into smaller computational
modules. Supercomputers also support multiprogramming, which allows
simultaneous access to the computer by multiple users. Some of the manufacturers
of supercomputers are IBM, Silicon Graphics, Fujitsu and Intel.
2.5 DISTRIBUTED COMPUTER SYSTEMS

A distributed system is a collection of hardware and software components which
are connected through a network and distributed system layer (middleware) which
allows these components to communicate and coordinate with each other and
share the resources in such a way that it appears to its user as a single computing
facility.
Distributed system is a collection of autonomous computers. These computers
are linked via networks and they do not share primary memory. These computers
communicate and cooperate with each other only by passing the messages over a
communication network. To the users, this collection of computers appears to be
a single coherent system. Users can communicate easily with this system without
knowing the physical location of the system.
There is a layer which is logically placed between a higher level layer
consisting of user applications and a lower level layer consisting of operating systems
and other communication services. This layer is known as middleware and it offers
a single coherent view of heterogeneous computers.
Sometimes, words like distributed, concurrent, parallel, networks and
decentralized are used interchangeably. Distributed, concurrent and parallel
computations are used for collective and coordinated activities on multiple
processing units. In parallel computation similar operations are performed on
multiple data streams, i.e., SIMD (Single Instruction Multiple Data). Concurrent
means that operations can be performed in any order on multiple data, i.e., MIMD
(Multiple Instruction Multiple Data). In decentralized systems the components are
located at different sites with limited, closed or no coordination, while in centralized
systems all the components are located at the same site. When a decentralized
system has close coordination, it is known as a distributed system; otherwise,
it is referred to as a network system. Thus, a computer network by itself cannot be
considered a distributed system. Distributed Shared Memory (DSM) systems are generally
considered special distributed systems because in such systems the failure of a
single component does not affect the others. Distributed and multiprocessor systems
present the view of a virtual uniprocessor system, but network systems do not. A distributed
system runs multiple copies of the same operating system and a multiprocessor
system runs one copy of the operating system; hence, distributed systems
and multiprocessor systems effectively work with one operating system, while a network system works
with multiple operating systems.
Check Your Progress

1. What do you understand by computer generation?
2. What are microcomputers?
3. What is the basis for classification of computers?
2.6 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS

1. A computer generation refers to a change in the technology with which a computer
is built. The term generation is used to distinguish between varying hardware technologies.
2. Microcomputers are small computers built around a microprocessor and based on
advanced computer technology. They are used in homes, classrooms and workplaces.
3. Computers can be classified on the basis of their size, processing speed
and cost. On the basis of the type of data they handle, they can be divided into
analog, digital and hybrid computers.
2.7 SUMMARY

 A computer generation refers to a change in the technology with which a computer is built.
Computers can be categorized into different generations according to their
development and modernization.
 A computer is a general-purpose device which can be programmed to carry
out arithmetic and logical operations. Computers can be classified by their size,
processing speed and cost.
 Analog computers are generally used in industrial process controls and to
measure physical quantities, such as pressure, temperature.
 Digital computers are commonly used for data processing and problem
solving using specific programs.
 Hybrid computers are the combination of digital and analog computers. A
hybrid computer uses the best features of digital and analog computers.
 Microcomputers are developed from advanced computer technology.
Microcomputers are called home computers, personal computers, laptops,
personal digital assistants.
 Mainframe computers are generally used for handling the needs of information
processing of organizations like banks, insurance companies, hospitals and
railways.
2.8 KEY WORDS
 Analog Computer: It is a type of computer that uses the continuously
changeable aspects of physical phenomena such as electrical, mechanical,
or hydraulic quantities to model the problem being solved.
 PDA: It is a small palm sized hand-held computer which has a small colour
touch screen with audio and video features.
2.9 SELF ASSESSMENT QUESTIONS AND EXERCISES

Short Answer Questions
1. Discuss the evolution of computers.
2. What are the different types of computer based on processing capabilities?
3. What is a distributed computer?
Long Answer Questions
1. What are the different generations of computer?
2. What are the different types of computer?
3. What are microcomputers? Explain its types.
2.10 FURTHER READINGS

Bhatt, Pramod Chandra P. 2003. An Introduction to Operating Systems—
Concepts and Practice. New Delhi: PHI.
Bhattacharjee, Satyapriya. 2001. A Textbook of Client/Server Computing. New
Delhi: Dominant Publishers and Distributers.
Hamacher, V.C., Z.G. Vranesic and S.G. Zaky. 2002. Computer Organization,
5th edition. New York: McGraw-Hill International Edition.
Mano, M. Morris. 1993. Computer System Architecture, 3rd edition. New Jersey:
Prentice-Hall Inc.
Nutt, Gary. 2006. Operating Systems. New Delhi: Pearson Education.
UNIT 3 NUMBER SYSTEMS AND BOOLEAN ALGEBRA
Structure
3.0 Introduction
3.1 Objectives
3.2 Number Systems
3.2.1 Decimal Number System
3.2.2 Binary Number System
3.2.3 Octal Number System
3.2.4 Hexadecimal Number System
3.2.5 Conversion from One Number System to the Other
3.3 Complements
3.4 Numeric and Character Codes
3.5 Basic Gates
3.5.1 AND Gate
3.5.2 OR Gate
3.6 Boolean Algebra
3.6.1 Laws and Rules of Boolean Algebra
3.7 Answers to Check Your Progress Questions
3.8 Summary
3.9 Key Words
3.10 Self Assessment Questions and Exercises
3.11 Further Readings

3.0 INTRODUCTION

In this unit, you will learn about number systems and binary codes. In mathematics,
a 'number system' is a set of numbers together with one or more operations, such
as addition or multiplication. The number systems are represented as natural
numbers, integers, rational numbers, algebraic numbers, real numbers, complex
numbers, etc. A number symbol is called a numeral. A numeral system or system
of numeration is a writing system for expressing numbers. For example, the standard
decimal representation of whole numbers gives every whole number a unique
representation as a finite sequence of digits. You will learn about the binary numeral
system or base-2 number system that represents numeric values using two symbols,
0 and 1. This base-2 system is specifically a positional notation with a radix of 2.
It is implemented in digital electronic circuitry using logic gates and the binary
system used by all modern computers. Since binary is a base-2 system, hence
each digit represents an increasing power of 2 with the rightmost digit representing
20, the next representing 21, then 22, and so on. To determine the decimal
representation of a binary number simply take the sum of the products of the
binary digits and the powers of 2 which they represent. You will also learn about
basic gates and Boolean algebra.
3.1 OBJECTIVES
After going through this unit, you will be able to:
 Describe number systems
 Understand the decimal, binary, octal and hexadecimal number systems
 Convert a number from one number system into another number system
 Understand numeric and character codes
 Understand basic gates
 Discuss the rules and laws of Boolean algebra

3.2 NUMBER SYSTEMS

A number is an idea that is used to refer amount of things. People use number
words, number gestures and number symbols. Number words are said out loud.
Number gestures are made with some part of the body, usually the hands. Number
symbols are marked or written down. A number symbol is called a numeral. The
number is the idea we think of when we see the numeral, or when we see or hear
the word.
On hearing the word number, we immediately think of the familiar decimal
number system with its 10 digits, i.e., 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9. These numerals
are called Arabic numerals. Our present number system provides modern
mathematicians and scientists with great advantages over those of previous
civilizations and is an important factor in our advancement. Since fingers are the
most convenient tools nature has provided, human beings use them in counting.
So, the decimal number system followed naturally from this usage.
A number system of base, or radix, r is a system that uses r distinct digit symbols.
Numbers are represented by a string of digit symbols. To determine the
quantity that the number represents, it is necessary to multiply each digit by an
integer power of r and then form the sum of all the weighted digits. It is possible to
use any whole number greater than one as a base in building a numeration system.
The number of digits used is always equal to the base.
There are four systems of arithmetic which are often used in digital systems.
These systems are as follows:
1. Decimal
2. Binary
3. Octal
4. Hexadecimal
In any number system, there is an ordered set of symbols known as digits.
Collection of these digits makes a number which in general has two parts, integer
and fractional, set apart by a radix point (.). Hence, a number system can be
represented as,
    N_b = a_{n-1} a_{n-2} a_{n-3} ... a_1 a_0 . a_{-1} a_{-2} a_{-3} ... a_{-m}
          |------ Integer Portion ------|   |---- Fractional Portion ----|

where, N = A number.
b = Radix or base of the number system.
n = Number of digits in the integer portion.
m = Number of digits in the fractional portion.
a_{n-1} = Most Significant Digit (MSD).
a_{-m} = Least Significant Digit (LSD).
and 0 <= a_i <= (b - 1) for every digit a_i.
Base or Radix: The base or radix of a number is defined as the number of
different digits which can occur in each position in the number system.
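To make the positional rule above concrete, here is a short Python sketch; it is our own illustration, not part of the original text, and the function name positional_value is an assumption chosen for clarity. It evaluates a string of digits in any base b by multiplying each digit by the appropriate power of the base and summing the results.

```python
# Illustrative sketch of the positional rule: N_b = sum of digit * b^position.
# Function name and structure are our own additions.

def positional_value(int_digits, base, frac_digits=()):
    """Evaluate an integer part and an optional fractional part in the given base."""
    value = 0
    for d in int_digits:              # integer portion: weights b^(n-1), ..., b^1, b^0
        value = value * base + d
    weight = 1 / base
    for d in frac_digits:             # fractional portion: weights b^-1, b^-2, ...
        value += d * weight
        weight /= base
    return value

print(positional_value([3, 7, 9], 10))              # 379
print(positional_value([1, 0, 1], 2, (0, 1, 1)))    # 101.011 in binary = 5.375
print(positional_value([5, 6, 7], 8, (3,)))         # 567.3 in octal = 375.375
```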
3.2.1 Decimal Number System
The number system which utilizes ten distinct digits, i.e., 0, 1, 2, 3, 4, 5, 6, 7, 8
and 9 is known as decimal number system. It represents numbers in terms of
groups of ten, as shown in Figure 3.1.
We would be forced to stop at 9 or to invent more symbols if it were not for
the use of positional notation. It is necessary to learn only 10 basic numbers and
positional notational system in order to count any desired figure.

Fig. 3.1 Decimal Position Values as Powers of 10

The decimal number system has a base or radix of 10. Each of the ten
decimal digits 0 through 9, has a place value or weight depending on its position.
The weights are units, tens, hundreds, and so on. The same can be written as
powers of the base as 10^0, 10^1, 10^2, 10^3, ..., etc. Thus, the number 1993 represents a
quantity equal to 1000 + 900 + 90 + 3. Actually, this should be written as {1 ×
10^3 + 9 × 10^2 + 9 × 10^1 + 3 × 10^0}. Hence, 1993 is the sum of all digits
multiplied by their weights. Each position has a value 10 times greater than the
position to its right.
For example, the number 379 actually stands for the following representation:

    100    10     1
    10^2   10^1   10^0
      3     7      9
    3 × 100 + 7 × 10 + 9 × 1

∴ [379]_10 = 3 × 100 + 7 × 10 + 9 × 1
In this example, 9 is the Least Significant Digit (LSD) and 3 is the Most
Significant Digit (MSD).
Example 3.1: Write the number 1936.469 using decimal representation.
Solution: [1936.469]_10 = 1 × 10^3 + 9 × 10^2 + 3 × 10^1 + 6 × 10^0 + 4 × 10^-1
+ 6 × 10^-2 + 9 × 10^-3
= 1000 + 900 + 30 + 6 + 0.4 + 0.06 + 0.009
It is seen that powers are numbered to the left of the decimal point starting
with 0 and to the right of the decimal point starting with –1.
The general rule for representing numbers in the decimal system by using
positional notation is as follows:
a_n a_{n-1} ... a_2 a_1 a_0 = a_n × 10^n + a_{n-1} × 10^(n-1) + ... + a_2 × 10^2 + a_1 × 10^1 + a_0 × 10^0
Where n is the number of digits to the left of the decimal point.
3.2.2 Binary Number System
A number system that uses only two digits, 0 and 1 is called the binary number
system. The binary number system is also called a base two system. The two
symbols 0 and 1 are known as bits (binary digits).
The binary system groups numbers by two’s and by powers of two as
shown in Figure 3.2. The word binary comes from a Latin word meaning two at a
time.
Fig. 3.2 Binary Position Values as a Power of 2

The weight or place value of each position can be expressed in terms of
2 and is represented as 2^0, 2^1, 2^2, etc. The least significant digit has a weight of
2^0 (= 1). The second position to the left of the least significant digit is multiplied by
2^1 (= 2). The third position has a weight equal to 2^2 (= 4). Thus, the weights are in
the ascending powers of 2 or 1, 2, 4, 8, 16, 32, 64, 128, etc.
The numeral 10_2 (one, zero, base two) stands for two, the base of the
system.
In binary counting, single digits are used for none and one. Two-digit
numbers are used for 10_2 and 11_2 [2 and 3 in decimal numerals]. For the next
counting number, 100_2 (4 in decimal numerals) three digits are necessary. After
111_2 (7 in decimal numerals), four-digit numerals are used until 1111_2 (15 in
decimal numerals) is reached, and so on. In a binary numeral, every position
has a value 2 times the value of the position to its right.
A binary number with 4 bits is called a nibble and a binary number with 8
bits is known as a byte.
For example, the number 1011_2 actually stands for the following
representation:
    [1011]_2 = 1 × 2^3 + 0 × 2^2 + 1 × 2^1 + 1 × 2^0
             = 1 × 8 + 0 × 4 + 1 × 2 + 1 × 1
    ∴ [1011]_2 = 8 + 0 + 2 + 1 = [11]_10
In general,
    [b_n b_{n-1} ... b_2 b_1 b_0]_2 = b_n × 2^n + b_{n-1} × 2^(n-1) + ... + b_2 × 2^2 + b_1 × 2^1 + b_0 × 2^0
Similarly, the binary number 10101.011 can be written as follows:
     1     0     1     0     1   .   0      1      1
    2^4   2^3   2^2   2^1   2^0  .  2^-1   2^-2   2^-3
    (MSD)                                         (LSD)
    ∴ [10101.011]_2 = 1 × 2^4 + 0 × 2^3 + 1 × 2^2 + 0 × 2^1 + 1 × 2^0
                    + 0 × 2^-1 + 1 × 2^-2 + 1 × 2^-3
                    = 16 + 0 + 4 + 0 + 1 + 0 + 0.25 + 0.125 = [21.375]_10
In each binary digit, the value increases in powers of two starting with 0 to
the left of the binary point and decreases to the right of the binary point starting
with power –1.
Why is the Binary Number System Used in Digital Computers?
Binary number system is used in digital computers because all electrical and
electronic circuits can be made to respond to the two-state concept. A switch, for
instance, can be either open or closed; only two possible states exist. A transistor
can be made to operate either in cutoff or saturation, a magnetic tape can be either
magnetized or non-magnetized, a signal can be either HIGH or LOW, a punched
tape can have a hole or no hole. In all of the above illustrations, each device is
operated in any one of the two possible states and the intermediate condition does
not exist. Thus, 0 can represent one of the states and 1 can represent the other.
Hence, binary numbers are convenient to use in analysing or designing digital circuits.
3.2.3 Octal Number System
The octal number system was used extensively by early minicomputers. However,
for both large and small systems, it has largely been supplanted by the hexadecimal
system. Sets of 3-bit binary numbers can be represented by octal numbers and
this can be conveniently used for the entire data in the computer.
A number system that uses eight digits, 0, 1, 2, 3, 4, 5, 6 and 7, is called an octal
number system. It has a base of eight. The digits, 0 through 7 have exactly the
same physical meaning as decimal symbols. In this system, each digit has a weight
corresponding to its position as shown below:
a_n × 8^n + ... + a_3 × 8^3 + a_2 × 8^2 + a_1 × 8^1 + a_0 × 8^0 + a_{-1} × 8^-1 + a_{-2} × 8^-2 + ... + a_{-n} × 8^-n
Octal Odometer
Octal odometer is a hypothetical device similar to the odometer of a car. Each
display wheel of this odometer contains only eight digits (teeth), numbered 0 to 7.
When a wheel turns from 7 back to 0 after one rotation, it sends a carry to the next
higher wheel. Table 3.1 shows equivalent numbers in decimal, binary and octal
systems.
Table 3.1 Equivalent Numbers in Decimal, Binary and Octal Systems

Decimal (Radix 10) Binary (Radix 2) Octal (Radix 8)

0 000 000 0
1 000 001 1
2 000 010 2
3 000 011 3
4 000 100 4
5 000 101 5
6 000 110 6
7 000 111 7
8 001 000 10
9 001 001 11
10 001 010 12
11 001 011 13
12 001 100 14
13 001 101 15
14 001 110 16
15 001 111 17
16 010 000 20
Consider an octal number [567.3]_8. It is pronounced as five, six, seven octal
point three and not five hundred sixty-seven point three. The coefficients of the
integer part are a_0 = 7, a_1 = 6, a_2 = 5 and the coefficient of the fractional part is
a_{-1} = 3.
3.2.4 Hexadecimal Number System
The hexadecimal system groups numbers by sixteen and powers of sixteen.
Hexadecimal numbers are used extensively in microprocessor work. Most
minicomputers and microcomputers have their memories organized into sets of
bytes, each consisting of eight binary digits. Each byte either is used as a single
entity to represent a single alphanumeric character or broken into two 4-bit pieces.
When the bytes are handled in two 4-bit pieces, the programmer is given the
option of declaring each 4-bit character as a piece of a binary number or as two
BCD numbers.
The hexadecimal number is formed from a binary number by grouping bits
in groups of 4 bits each, starting at the binary point. This is a logical way of grouping,
since computer words come in 8 bits, 16 bits, 32 bits, and so on. In a group of 4
bits, the decimal numbers 0 to 15 can be represented as shown in Table 3.2.
The hexadecimal number system has a base of 16. Thus, it has 16 distinct
digit symbols. It uses the digits 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9 plus the letters A, B,
C, D, E and F as 16 digit symbols. The relationship among octal, hexadecimal and
binary is shown in Table 3.2. Each hexadecimal number represents a group of
four binary digits.
Table 3.2 Equivalent Numbers in Decimal, Binary, Octal and Hexadecimal Number Systems
Decimal (Radix 10)    Binary (Radix 2)    Octal (Radix 8)    Hexadecimal (Radix 16)
0 0000 0 0
1 0001 1 1
2 0010 2 2
3 0011 3 3
4 0100 4 4
5 0101 5 5
6 0110 6 6
7 0111 7 7
8 1000 10 8
9 1001 11 9
10 1010 12 A
11 1011 13 B
12 1100 14 C
13 1101 15 D
14 1110 16 E
15 1111 17 F
16 0001 0000 20 10
17 0001 0001 21 11
18 0001 0010 22 12
19 0001 0011 23 13
20 0001 0100 24 14
Counting in Hexadecimal
When counting in hex, each digit can be incremented from 0 to F. Once it reaches
F, the next count causes it to recycle to 0 and the next higher digit is incremented.
This is illustrated in the following counting sequences: 0038, 0039, 003A, 003B,
003C, 003D, 003E, 003F, 0040; 06B8, 06B9, 06BA, 06BB, 06BC, 06BD,
06BE, 06BF, 06C0, 06C1.
3.2.5 Conversion from One Number System to the Other
Binary to Decimal Conversion
A binary number can be converted into decimal number by multiplying the binary
1 or 0 by the weight corresponding to its position and adding all the values.
Example 3.2: Convert the binary number 110111 to decimal number.
Solution: [110111]_2 = 1 × 2^5 + 1 × 2^4 + 0 × 2^3 + 1 × 2^2 + 1 × 2^1 + 1 × 2^0
= 1 × 32 + 1 × 16 + 0 × 8 + 1 × 4 + 1 × 2 + 1 × 1
= 32 + 16 + 0 + 4 + 2 + 1
= [55]_10
We can streamline binary to decimal conversion by the following procedure:
Step 1: Write the binary, i.e., all its bits in a row.
Step 2: Write 1, 2, 4, 8, 16, 32, ..., directly under the binary number working
from right to left.
Step 3: Omit the decimal weight which lies under zero bits.
Step 4: Add the remaining weights to obtain the decimal equivalent.
The same method is used for binary fractional number.
Example 3.3: Convert the binary number 11101.1011 into its decimal
equivalent.
Solution:
Step 1:  1    1    1    0    1    .    1      0      1      1      (the dot is the binary point)
Step 2:  16   8    4    2    1    .    0.5    0.25   0.125  0.0625
Step 3:  16   8    4    0    1    .    0.5    0      0.125  0.0625
Step 4:  16 + 8 + 4 + 1 + 0.5 + 0.125 + 0.0625 = [29.6875]_10
Hence, [11101.1011]_2 = [29.6875]_10
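As a quick cross-check of the four steps above, here is a small Python sketch; it is our own illustration rather than part of the text, and the function name binary_to_decimal is an assumption. It converts a binary string, including a fractional part, to its decimal value.

```python
# Binary-to-decimal conversion for a string such as '11101.1011'.

def binary_to_decimal(bits):
    int_part, _, frac_part = bits.partition('.')
    value = int(int_part, 2) if int_part else 0        # weights 1, 2, 4, 8, ...
    for i, bit in enumerate(frac_part, start=1):        # weights 0.5, 0.25, ...
        value += int(bit) * 2 ** -i
    return value

print(binary_to_decimal('110111'))       # 55
print(binary_to_decimal('11101.1011'))   # 29.6875
```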
Table 3.3 lists the binary numbers from 0000 to 10000. Table 3.4 lists
powers of 2 and their decimal equivalents and the number of K. The abbreviation
K stands for 2^10 = 1024. Therefore, 1K = 1024, 2K = 2048, 3K = 3072, 4K =
4096, and so on. Many personal computers have 64K memory; this means that such
computers can store up to 65,536 bytes in the memory section.
Table 3.3 Binary Numbers                      Table 3.4 Powers of 2

Decimal   Binary    Powers of 2   Equivalent   Abbreviation
0         0         2^0           1
1         01        2^1           2
2         10        2^2           4
3         11        2^3           8
4         100       2^4           16
5         101       2^5           32
6         110       2^6           64
7         111       2^7           128
8         1000      2^8           256
9         1001      2^9           512
10        1010      2^10          1024         1K
11        1011      2^11          2048         2K
12        1100      2^12          4096         4K
13        1101      2^13          8192         8K
14        1110      2^14          16384        16K
15        1111      2^15          32768        32K
16        10000     2^16          65536        64K
Decimal to Binary Conversion
There are several methods for converting a decimal number to a binary number.
The first method is simply to subtract values of powers of 2 which can be subtracted
from the decimal number until nothing remains. The value of the highest power of
2 is subtracted first, then the second highest, and so on.
Example 3.4: Convert the decimal integer 29 to the binary number system.
Solution: First the value of the highest power of 2 which can be subtracted from
29 is found. This is 2^4 = 16.
Then, 29 – 16 = 13
The value of the highest power of 2 which can be subtracted from 13 is 2^3;
then 13 – 2^3 = 13 – 8 = 5. The value of the highest power of 2 which can be
subtracted from 5 is 2^2. Then 5 – 2^2 = 5 – 4 = 1. The remainder after subtraction
is 1_10 or 2^0. Therefore, the binary representation for 29 is given by,
[29]_10 = 2^4 + 2^3 + 2^2 + 2^0 = 16 + 8 + 4 + 0 × 2 + 1
= 1 1 1 0 1
[29]_10 = [11101]_2
Similarly, [25.375]_10 = 16 + 8 + 1 + 0.25 + 0.125
= 2^4 + 2^3 + 0 + 0 + 2^0 + 0 + 2^-2 + 2^-3
[25.375]_10 = [11001.011]_2
This is a laborious method for converting numbers. It is convenient for small numbers
and can be performed mentally, but is less used for larger numbers.
Double Dabble Method
A popular method known as double dabble method, also known as divide-by-
two method, is used to convert a large decimal number into its binary equivalent.
In this method, the decimal number is repeatedly divided by 2 and the remainder
after each division is used to indicate the coefficient of the binary number to be
formed. Notice that the binary number derived is written from the bottom up.
Example 3.5: Convert 19910 into its binary equivalent.
Solution: 199  2 = 99 + remainder 1 (LSB)
99  2 = 49 + remainder 1
49  2 = 24 + remainder 1
24  2 = 12 + remainder 0
12  2 = 6 + remainder 0
62 = 3 + remainder 0
32 = 1 + remainder 1
12 = 0 + remainder 1 (MSB)
The binary representation of 199 is, therefore, 11000111. Checking the
result we have,
[11000111]_2 = 1 × 2^7 + 1 × 2^6 + 0 × 2^5 + 0 × 2^4 + 0 × 2^3 + 1 × 2^2 + 1 × 2^1
+ 1 × 2^0
= 128 + 64 + 0 + 0 + 0 + 4 + 2 + 1
∴ [11000111]_2 = [199]_10
Notice that the first remainder is the LSB and last remainder is the MSB.
This method will not work for mixed numbers.
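The double dabble procedure can be sketched in a few lines of Python; this is our own illustration (the function name is an assumption), mirroring the repeated division by 2 and reading the remainders from bottom to top.

```python
# Double dabble (divide-by-two) method for decimal integers.

def double_dabble(n):
    if n == 0:
        return '0'
    remainders = []
    while n > 0:
        n, r = divmod(n, 2)                   # quotient and remainder after dividing by 2
        remainders.append(str(r))             # the first remainder is the LSB
    return ''.join(reversed(remainders))      # the last remainder is the MSB

print(double_dabble(199))   # 11000111
print(double_dabble(29))    # 11101
```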
Decimal Fraction to Binary
The conversion of decimal fraction to binary fractions may be accomplished by
using several techniques. Again, the most obvious method is to subtract the highest
value of the negative power of 2, which may be subtracted from the decimal
fraction. Then, the next highest value of the negative power of 2 is subtracted from
the remainder of the first subtraction and this process is continued until there is no
remainder or to the desired precision.
Example 3.6: Convert decimal 0.875 to a binary number.
Solution: 0.875 – 1 × 2^-1 = 0.875 – 0.5 = 0.375
0.375 – 1 × 2^-2 = 0.375 – 0.25 = 0.125
0.125 – 1 × 2^-3 = 0.125 – 0.125 = 0
∴ [0.875]_10 = [0.111]_2
A much simpler method of converting longer decimal fractions to binary
consists of repeatedly multiplying by 2 and recording any carries in the integer
position.
Example 3.7: Convert 0.694010 to a binary number.
Solution: 0.6940 × 2 = 1.3880 = 0.3880 with a carry of 1
0.3880 × 2 = 0.7760 = 0.7760 with a carry of 0
0.7760 × 2 = 1.5520 = 0.5520 with a carry of 1
0.5520 × 2 = 1.1040 = 0.1040 with a carry of 1
0.1040 × 2 = 0.2080 = 0.2080 with a carry of 0
0.2080 × 2 = 0.4160 = 0.4160 with a carry of 0
0.4160 × 2 = 0.8320 = 0.8320 with a carry of 0
0.8320 × 2 = 1.6640 = 0.6640 with a carry of 1
0.6640 × 2 = 1.3280 = 0.3280 with a carry of 1
We may stop here as the answer would be approximate.
 [0.6940]10 = [0.101100011]2
If more accuracy is needed, continue multiplying by 2 until you have as
many digits as necessary for your application.
Example 3.8: Convert 14.62510 to binary number.
Solution: First the integer part 14 is converted into binary and then, the fractional
part 0.625 is converted into binary as shown below:
Integer part                     Fractional part
14 ÷ 2 = 7 + 0                   0.625 × 2 = 1.250 with a carry of 1
 7 ÷ 2 = 3 + 1                   0.250 × 2 = 0.500 with a carry of 0
 3 ÷ 2 = 1 + 1                   0.500 × 2 = 1.000 with a carry of 1
 1 ÷ 2 = 0 + 1
∴ The binary equivalent is [1110.101]_2
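The repeated multiplication-by-2 for the fraction and division-by-2 for the integer part can be combined as in the hedged Python sketch below; the function name and the precision cut-off are our own choices. Note that some decimal fractions, such as 0.6940, have no exact binary form, so the loop stops after a chosen number of bits.

```python
# Integer part: repeated division by 2; fractional part: repeated multiplication
# by 2, recording the carries. The 12-bit cut-off is an arbitrary choice.

def decimal_to_binary(value, max_frac_bits=12):
    integer, fraction = divmod(value, 1)
    int_bits = bin(int(integer))[2:]            # same digits as repeated division by 2
    frac_bits = ''
    while fraction and len(frac_bits) < max_frac_bits:
        fraction *= 2
        carry, fraction = divmod(fraction, 1)   # the integer carry is the next bit
        frac_bits += str(int(carry))
    return int_bits + ('.' + frac_bits if frac_bits else '')

print(decimal_to_binary(14.625))   # 1110.101
print(decimal_to_binary(0.875))    # 0.111
```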
Octal to Decimal Conversion
An octal number can be easily converted to its decimal equivalent by multiplying
each octal digit by its positional weight.

Example 3.9: Convert (376)_8 to a decimal number.
Solution: The process is similar to binary to decimal conversion except that the
base here is 8.
[376]_8 = 3 × 8^2 + 7 × 8^1 + 6 × 8^0
= 3 × 64 + 7 × 8 + 6 × 1 = 192 + 56 + 6 = [254]_10
The fractional part can be converted into decimal by multiplying it by the
negative powers of 8.
Example 3.10: Convert (0.4051)8 to decimal number.
Solution: [0.4051]_8 = 4 × 8^-1 + 0 × 8^-2 + 5 × 8^-3 + 1 × 8^-4
= 4 × (1/8) + 0 × (1/64) + 5 × (1/512) + 1 × (1/4096)
∴ [0.4051]_8 = [0.5100098]_10
Example 3.11: Convert (6327.45)_8 to its decimal equivalent.
Solution: [6327.45]_8 = 6 × 8^3 + 3 × 8^2 + 2 × 8^1 + 7 × 8^0 + 4 × 8^-1 + 5 × 8^-2
= 3072 + 192 + 16 + 7 + 0.5 + 0.078125
[6327.45]_8 = [3287.578125]_10
Decimal to Octal Conversion
The methods used for converting a decimal number to its octal equivalent are the
same as those used to convert from decimal to binary. To convert a decimal number
to octal, we progressively divide the decimal number by 8, writing down the
remainders after each division. This process is continued until zero is obtained as
the quotient, the first remainder being the LSD.
The fractional part is multiplied by 8 to get a carry and a fraction. The new
fraction obtained is again multiplied by 8 to get a new carry and a new fraction.
This process is continued until the desired number of digits is obtained.
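The same divide-and-multiply procedure works for any base r; the generic Python sketch below is our own illustration (helper names and the precision cut-off are assumptions), with base 8 for octal and base 16 for hexadecimal.

```python
# Generic divide/multiply conversion from decimal to base r (8, 16, ...).

DIGITS = '0123456789ABCDEF'

def decimal_to_base(value, base, max_frac_digits=8):
    integer = int(value)
    fraction = value - integer
    int_digits = ''
    while integer:
        integer, r = divmod(integer, base)     # remainders are read from bottom to top
        int_digits = DIGITS[r] + int_digits
    frac_digits = ''
    while fraction and len(frac_digits) < max_frac_digits:
        fraction *= base                       # the integer carry is the next digit
        carry = int(fraction)
        fraction -= carry
        frac_digits += DIGITS[carry]
    return (int_digits or '0') + ('.' + frac_digits if frac_digits else '')

print(decimal_to_base(3964, 8))    # 7574
print(decimal_to_base(854, 16))    # 356
```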
Example 3.12: Convert [416.12]10 to octal number.
Solution: Integer part 416 ÷ 8 = 52 + remainder 0 (LSD)
52 ÷ 8 = 6 + remainder 4
6 ÷ 8 = 0 + remainder 6 (MSD)
Fractional part 0.12 × 8 = 0.96 = 0.96 with a carry of 0
0.96 × 8 = 7.68 = 0.68 with a carry of 7
0.68 × 8 = 5.44 = 0.44 with a carry of 5
0.44 × 8 = 3.52 = 0.52 with a carry of 3
0.52 × 8 = 4.16 = 0.16 with a carry of 4
0.16 × 8 = 1.28 = 0.28 with a carry of 1
0.28 × 8 = 2.24 = 0.24 with a carry of 2
0.24 × 8 = 1.92 = 0.92 with a carry of 1
 [416.12]10 = [640.07534121]8
Example 3.13: Convert [3964.63]_10 to an octal number.
Solution: Integer part 3964 ÷ 8 = 495 with a remainder of 4 (LSD)
495 ÷ 8 = 61 with a remainder of 7
61 ÷ 8 = 7 with a remainder of 5
7 ÷ 8 = 0 with a remainder of 7 (MSD)
∴ [3964]_10 = [7574]_8
Fractional part 0.63 × 8 = 5.04 = 0.04 with a carry of 5
0.04 × 8 = 0.32 = 0.32 with a carry of 0
0.32 × 8 = 2.56 = 0.56 with a carry of 2
0.56 × 8 = 4.48 = 0.48 with a carry of 4
0.48 × 8 = 3.84 = 0.84 with a carry of 3 [LSD]
 [3964.63]10 = [7574.50243]8
Note that the first carry is the MSD of the fraction. More accuracy can be
obtained by continuing the process to obtain octal digits.
Octal to Binary Conversion
Since 8 is the third power of 2, we can convert each octal digit into its 3-bit binary
form and, conversely, convert from binary to octal form. All 3-bit binary numbers are required to
represent the eight octal digits of the octal form. The octal number system is often
used in digital systems, especially for input/output applications. Each octal digit
that is represented by 3 bits is shown in Table 3.5.
Table 3.5 Octal to Binary Conversion
Octal digit    Binary equivalent
0 000
1 001
2 010
3 011
4 100
5 101
6 110
7 111
10 001 000
11 001 001
12 001 010
13 001 011
14 001 100
15 001 101
16 001 110
17 001 111
Example 3.14: Convert [675]_8 to a binary number.
Solution: Octal digit      6    7    5
          Binary         110  111  101
∴ [675]_8 = [110 111 101]_2
Example 3.15: Convert [246.71]8 to binary number.
Solution: Octal digit      2    4    6   .   7    1
          Binary         010  100  110  .  111  001
∴ [246.71]_8 = [010 100 110 . 111 001]_2
Binary to Octal Conversion
The simplest procedure is to use the binary triplet method. The binary digits are
grouped into groups of three on each side of the binary point with zeros added on
either side if needed to complete a group of three. Then, each group of 3 bits is
converted to its octal equivalent. Note that the highest digit in the octal system is 7.
Example 3.16: Convert [11001.101011]2 to octal number.
Solution: Binary 11001.101011
Divide into groups of 3 bits:   011  001  .  101  011
                                  3    1  .    5    3
Note that a zero is added to the left-most group of the integer part. Thus, the
desired octal conversion is [31.53]8.
Example 3.17: Convert [11101.101101]2 to octal number.
Solution: Binary [11101.101101]2
Divide into groups of 3 bits:   011  101  .  101  101
                                  3    5  .    5    5
∴ [11101.101101]_2 = [35.55]_8
Hexadecimal to Binary Conversion
Hexadecimal numbers can be converted into binary numbers by converting each
hexadecimal digit to 4-bit binary equivalent using the code given in Table 3.2. If
the hexadecimal digit is 3, it should not be represented by 2 bits [11]2, but it
should be represented by 4 bits as [0011]2.
Example 3.18: Convert [EC2]_16 to a binary number.
Solution: Hexadecimal Number      E     C     2
          Binary Equivalent     1110  1100  0010
∴ [EC2]_16 = [1110 1100 0010]_2
Example 3.19: Convert [2AB.81]16 to binary number.
Solution: Hexadecimal Number      2     A     B   .   8     1
          Binary Equivalent     0010  1010  1011  .  1000  0001
∴ [2AB.81]_16 = [0010 1010 1011 . 1000 0001]_2
Binary to Hexadecimal Conversion
Conversion from binary to hexadecimal is easily accomplished by partitioning the
binary number into groups of four binary digits, starting from the binary point to
the left and to the right. It may be necessary to add zero to the last group, if it does
not end in exactly 4 bits. Each group of 4 bits binary must be represented by its
hexadecimal equivalent.
Example 3.20: Convert [10011100110]2 to hexadecimal number.
Solution: Binary Number [10011100110]2
Grouping the above binary number into 4-bits, we have
                          0100  1110  0110
Hexadecimal Equivalent      4     E     6
∴ [10011100110]_2 = [4E6]_16
Example 3.21: Convert [111101110111.111011]2 to hexadecimal number.
Solution: Binary number [111101110111.111011]2
By grouping into 4 bits we have,   1111  0111  0111  .  1110  1100
Hexadecimal equivalent,               F     7     7  .     E     C
∴ [111101110111.111011]_2 = [F77.EC]_16
The conversion between hexadecimal and binary is done in exactly the same
manner as octal and binary, except that groups of 4 bits are used.
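The grouping rule for both octal (3-bit groups) and hexadecimal (4-bit groups) can be sketched as follows in Python for the integer part of a binary number; this is our own illustration, and the fractional part would be handled the same way but padded on the right.

```python
# Convert the integer part of a binary string by grouping bits:
# groups of 3 give octal digits, groups of 4 give hexadecimal digits.

def group_bits(bits, size):
    padded = bits.zfill((len(bits) + size - 1) // size * size)   # pad on the left with zeros
    groups = [padded[i:i + size] for i in range(0, len(padded), size)]
    return ''.join(format(int(g, 2), 'X' if size == 4 else 'o') for g in groups)

print(group_bits('11001', 3))         # 31  (octal, as in Example 3.16)
print(group_bits('10011100110', 4))   # 4E6 (hexadecimal, as in Example 3.20)
```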
Hexadecimal to Decimal Conversion
As in octal, each hexadecimal number is multiplied by the powers of 16, which
represents the weight according to its position and finally adding all the values.
Another way of converting a hexadecimal number into its decimal equivalent
is to first convert the hexadecimal number to binary and then convert from binary
to decimal.
Example 3.22: Convert [B6A]_16 to a decimal number.
Solution: Hexadecimal number [B6A]_16
[B6A]_16 = B × 16^2 + 6 × 16^1 + A × 16^0
= 11 × 256 + 6 × 16 + 10 × 1 = 2816 + 96 + 10 = [2922]_10
Example 3.23: Convert [2AB.8]16 to decimal number.
Solution: Hexadecimal number,
[2AB.8]_16 = 2 × 16^2 + A × 16^1 + B × 16^0 + 8 × 16^-1
= 2 × 256 + 10 × 16 + 11 × 1 + 8 × 0.0625
∴ [2AB.8]_16 = [683.5]_10
Example 3.24: Convert [A85]16 to decimal number.
Solution: Converting the given hexadecimal number into binary, we have
                       A     8     5
[A85]_16 =           1010  1000  0101
[1010 1000 0101]_2 = 2^11 + 2^9 + 2^7 + 2^2 + 2^0 = 2048 + 512 + 128 + 4 + 1
∴ [A85]_16 = [2693]_10
Example 3.25: Convert [269]16 to decimal number.
Solution: Hexadecimal number,
                       2     6     9
[269]_16 =           0010  0110  1001
[0010 0110 1001]_2 = 2^9 + 2^6 + 2^5 + 2^3 + 2^0 = 512 + 64 + 32 + 8 + 1
∴ [269]_16 = [617]_10
or, [269]_16 = 2 × 16^2 + 6 × 16^1 + 9 × 16^0 = 512 + 96 + 9 = [617]_10
Example 3.26: Convert [AF.2F]16 to decimal number.
Solution: Hexadecimal number,
[AF.2F]_16 = A × 16^1 + F × 16^0 + 2 × 16^-1 + F × 16^-2
= 10 × 16 + 15 × 1 + 2 × 16^-1 + 15 × 16^-2
= 160 + 15 + 0.125 + 0.0586
∴ [AF.2F]_16 = [175.1836]_10
Decimal to Hexadecimal Conversion
One way to convert from decimal to hexadecimal is the hex dabble method. The
conversion is done in a similar fashion, as in the case of binary and octal, taking the
factor for division and multiplication as 16.
Any decimal integer number can be converted to hex by successively dividing
by 16 until zero is obtained in the quotient. The remainders can then be written
from bottom to top to obtain the hexadecimal result.
The fractional part of the decimal number is converted to a hexadecimal
number by multiplying it by 16, and writing down the carry and the fraction
separately. This process is continued until the fraction is reduced to zero or the
required number of significant bits is obtained.
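These results can also be spot-checked with Python's built-in base conversions; the snippet below simply verifies the worked examples that follow rather than re-implementing the hex dabble method.

```python
# Verifying hex-dabble results with Python's built-in conversions.

print(format(854, 'X'))       # 356
print(format(65535, 'X'))     # FFFF
print(int('FFFF', 16))        # 65535
print(format(65535, 'b'))     # 1111111111111111 (sixteen 1s)
```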
Example 3.27: Convert [854]10 to hexadecimal number.
Solution: 854  16 = 53 + with a remainder of 6
53  16 = 3 + with a remainder of 5
3  16 = 0 + with a remainder of 3
 [854]10 = [356]16
Example 3.28: Convert [106.0664]10 to hexadecimal number.
Solution: Integer part
106  16 = 6 + with a remainder of 10
6  16 = 0 + with a remainder of 6
Fractional part
0.0664 × 16 = 1.0624 = 0.0624 + with a carry of 1
0.0624 × 16 = 0.9984 = 0.9984 + with a carry of 0
0.9984 × 16 = 15.9744 = 0.9744 + with a carry of 15
0.9744 × 16 = 15.5904 = 0.5904 + with a carry of 15
Fractional part [0.0664]10 = [0.10FF]16
Thus, the answer is [106.0664]10 = [6A.10FF]16
Example 3.29: Convert [65, 535]10 to hexadecimal and binary equivalents.
Solution: (i) Conversion of decimal to hexadecimal number
65,535  16 = 4095 + with a remainder of F
4095  16 = 255 + with a remainder of F
255  16 = 15 + with a remainder of F
15  16 = 0 + with a remainder of F
 [65535]10 = [FFFF]16
(ii) Conversion of hexadecimal to binary number
   F     F     F     F
 1111  1111  1111  1111
∴ [65535]_10 = [FFFF]_16 = [1111 1111 1111 1111]_2
A typical microcomputer can store up to 65,536 bytes. The decimal
addresses of these bytes are from 0 to 65,535. The equivalent binary addresses
are from
0000 0000 0000 0000 to 1111 1111 1111 1111
The first 8 bits are called the upper byte and the second 8 bits the lower byte.
When the decimal is greater than 255, we have to use both the upper byte
and the lower byte.
Hexadecimal to Octal Conversion
This can be accomplished by first writing down the 4-bit binary equivalent of each
hexadecimal digit and then partitioning the result into groups of 3 bits each. Finally, the 3-
bit octal equivalent is written down.
Example 3.30: Convert [2AB.9]16 to octal number.
Solution: Hexadecimal number 2 A B . 9
4-bit numbers    0010 1010 1011 . 1001
3-bit pattern    001 010 101 011 . 100 100
Octal number       1   2   5   3 .   4   4
∴ [2AB.9]_16 = [1253.44]_8
Example 3.31: Convert [3FC.82]16 to octal number.
Solution: Hexadecimal number 3 F C . 8 2
4 bit binary numbers 0011 1111 1100 . 1000 0010
3-bit pattern    001 111 111 100 . 100 000 100
Octal number       1   7   7   4 .   4   0   4
[3FC.82]_16 = [1774.404]_8
Notice that zeros are added to the rightmost bit in the above two examples
to make them group of 3 bits.
Octal to Hexadecimal Conversion
It is the reverse of the above procedure. First the 3-bit equivalent of the octal digit
is written down and partitioned into groups of 4 bits, then the hexadecimal equivalent
of that group is written down.
Example 3.32: Convert [16.2]_8 to a hexadecimal number.
Solution: Octal number     1    6 .   2
3-bit binary             001  110 . 010
4-bit pattern               1110  . 0100
Hexadecimal                    E  .  4
∴ [16.2]_8 = [E.4]_16
Example 3.33: Convert [764.352]8 to hexadecimal number.
Solution: Octal number 7 6 4 . 3 5 2
3 bit binary 111 110 100 . 011 101 010
4-bit pattern        0001 1111 0100 . 0111 0101 0000
Hexadecimal number      1    F    4 .    7    5    0
∴ [764.352]_8 = [1F4.75]_16
Integers and Fractions

Binary Fractions
A binary fraction can be represented by a series of 1s and 0s to the right of a binary
point. The weights of digit positions to the right of the binary point are given by
2^-1, 2^-2, 2^-3 and so on.
For example, the binary fraction 0.1011 can be written as,
0.1011 = 1 × 2^-1 + 0 × 2^-2 + 1 × 2^-3 + 1 × 2^-4
= 1 × 0.5 + 0 × 0.25 + 1 × 0.125 + 1 × 0.0625
(0.1011)_2 = (0.6875)_10
Mixed Numbers
Mixed numbers contain both integer and fractional parts. The weights of mixed
numbers are,
2^3  2^2  2^1  2^0 . 2^-1  2^-2  2^-3  etc.
(the dot is the binary point)
For example, a mixed binary number 1011.101 can be written as,
(1011.101)_2 = 1 × 2^3 + 0 × 2^2 + 1 × 2^1 + 1 × 2^0 + 1 × 2^-1 + 0 × 2^-2 + 1 × 2^-3
= 1 × 8 + 0 × 4 + 1 × 2 + 1 × 1 + 1 × 0.5 + 0 × 0.25 + 1 × 0.125
∴ [1011.101]_2 = [11.625]_10
When different number systems are used, it is customary to enclose the
number within square brackets and to indicate the base of the number
system as a subscript.
3.3 COMPLEMENTS

Complements are used in digital computers to simplify the subtraction operation,
that is, to easily represent negative numbers, and for logical manipulation.
There are two types of complement for each base system:
1. The radix (i.e. r’s) complement
2. The diminished radix [i.e. (r-1)’s] complement.
The (r-1)'s complement is obtained by subtracting the given number from the largest
number that can be written in the given base with the same number of digits. The r's
complement is obtained by adding 1 to the (r-1)'s complement.
To determine the r's complement, first write the (r-1)'s complement and then add 1
to the least significant bit (LSB), that is, the rightmost bit.
Some examples of the complements are mentioned hereunder:

Number system     (r-1)'s complement     r's complement
Binary            1's complement         2's complement
Octal             7's complement         8's complement
Decimal           9's complement         10's complement
Hexadecimal       15's complement        16's complement

Binary Number in Complement Form

The 1's complement of a binary number is obtained by complementing all its bits,
that is, by replacing 0s with 1s and 1s with 0s. The 2’s complement of a binary
number is obtained by adding ‘1’ to its 1’s complement.
Example 3.34
1. The 1's complement of the binary number (101101)_2 can be obtained either by
   subtracting it from 111111 or by changing 1's to 0's and 0's to 1's:
        111111
      – 101101
        010010
   ∴ the 1's complement of (101101)_2 is (010010)_2.
2. The 2's complement of the binary number (10100)_2 is
        1's complement    01011
        add 1                +1
        2's complement    01100
3. The 1's complement of (10010110)_2 is (01101001)_2.
4. The 2's complement of (10010110)_2 is (01101010)_2.
5. The 2's complement of the binary number (11001.11)_2 is
        1's complement    00110.00
        add 1 to the LSB     +0.01
        2's complement    00110.01
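A hedged Python sketch of the two complements for a plain string of 0s and 1s is given below; the function names and the fixed-width handling are our own choices, not from the text.

```python
# 1's complement: flip every bit; 2's complement: add 1 to the 1's complement.
# Assumes a plain string of 0s and 1s (no binary point).

def ones_complement(bits):
    return ''.join('1' if b == '0' else '0' for b in bits)

def twos_complement(bits):
    width = len(bits)
    ones = int(ones_complement(bits), 2)
    return format((ones + 1) % (1 << width), '0{}b'.format(width))

print(ones_complement('101101'))     # 010010
print(ones_complement('10010110'))   # 01101001
print(twos_complement('10010110'))   # 01101010
```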

3.4 NUMERIC AND CHARACTER CODES

Representing numbers within the computer circuits, registers and the memory unit
by means of electrical signals or magnetism is called numeric coding. In
the computer system, numbers are stored in binary form, since any number
can be represented by the use of 1's and 0's only. Numeric codes are divided into
two categories, i.e., weighted codes and non-weighted codes. The different types of
weighted codes are:
(i) BCD Code,
(ii) 2-4-2-1 Code,
(iii) 4-2-2-1 Code,
(iv) 5-2-1-1 Code,
(v) 7 – 4 – 2 – 1 Code, and
(vi) 8-4-2-1 Code.
The non-weighted codes are of two types, i.e.,
(i) Non-Error Detecting Codes and
(ii) Error Detecting Codes.
Character codes
Alphanumeric codes are also called character codes. These are binary codes
which are used to represent alphanumeric data. The codes write alphanumeric
data including letters of the alphabet, numbers, mathematical symbols and
punctuation marks in a form that is understandable and processable by a computer.
All these codes are discussed in detail in unit 6.
3.5 BASIC GATES

A logic gate is an electronic circuit, which makes logical decisions. To arrive at
these decisions, the most common logic gates used are OR, AND, NOT, NAND
and NOR gates. The NAND and NOR gates are called the universal gates.
The exclusive-OR gate is another logic gate, which can be constructed using basic
gates, such as AND, OR and NOT gates.
Logic gates have two or more inputs and only one output except for the
NOT gate, which has only one input. The output signal appears only for certain
combinations of the input signals. The manipulation of binary information is done
by the gates. The logic gates are the building blocks of hardware which are available
in the form of various IC families. Each gate has a distinct logic symbol and its
operation can be described by means of an algebraic function. The relationship
between input and output variables of each gate can be represented in a tabular
form called a truth table.
An inverter performs the function of negation on signals and negates, the
Boolean expression of the input signals. Boolean algebra is a system of mathematical
logic, using the function AND, NOT and OR.
3.5.1 AND Gate
An AND gate has two or more inputs and a single output, and it operates in
accordance with the following definition: The AND gate is defined as an electronic
circuit in which all the inputs must be HIGH in order to have a HIGH output.
The truth table for the 2-input AND gate is shown in Table 3.6. It is seen
that the AND gate has a HIGH output only when both A and B are HIGH. When
there are more inputs, all inputs must be HIGH for a HIGH output. For this reason,
the AND gate is also called ALL GATE. The truth table for the 3-input AND gate
is shown in Table 3.7.
Table 3.6 2-Input AND Gate

Inputs        Output
A    B        Y
0    0        0
0    1        0
1    0        0
1    1        1

Table 3.7 3-Input AND Gate

Inputs             Output
A    B    C        Y
0    0    0        0
0    0    1        0
0    1    0        0
0    1    1        0
1    0    0        0
1    0    1        0
1    1    0        0
1    1    1        1
Logic Symbol: The schematic symbols of 2-input, 3-input and 4-input
AND gates are shown in Figure 3.3, with outputs (a) Y = AB, (b) Y = ABC and (c) Y = ABCD.

Fig. 3.3 Schematic Symbols of the AND Gate
3.5.2 OR Gate
The OR gate is a digital logic gate that implements logical disjunction. A basic
circuit has two or more inputs and a single output and it operates in accordance with
the following definition: The output of an OR gate assumes state 1 if one or more
(all) inputs assume state 1.
From the truth table it can be seen that all inputs must be 0 for the output to be 0.
This type of circuit is called an OR gate. Table 3.8 shows the truth table of a
three-input OR gate.
Table 3.8 Truth Table of Three-Input OR Gates

Inputs Outputs
A B C Y=A+B+C
0 0 0 0
0 0 1 1
0 1 0 1
0 1 1 1
1 0 0 1
1 0 1 1
1 1 0 1
1 1 1 1
Table 3.9 is the truth table for two-input OR gate. The OR gate is an ANY
OR ALL gate; an output occurs when any or all of the inputs are high. Table 3.10
shows the binary equivalent, in which A and B are the inputs and Y = A + B is the output.
Table 3.9 Two-Input OR Gate            Table 3.10 Binary Equivalent

Inputs           Output         Inputs       Output
A      B         Y = A + B      A    B       Y = A + B
Low    Low       Low            0    0       0
Low    High      High           0    1       1
High   Low       High           1    0       1
High   High      High           1    1       1
In general, if n is the number of input variables, then there will be 2^n possible
cases, since each variable can take on either of two values.

Fig. 3.4 Schematic Symbols of the OR Gate, with outputs (a) Y = A + B, (b) Y = A + B + C and (c) Y = A + B + C + D

Logic Symbol: The schematic symbols of two-input, three-input and four-input OR gates are shown in Figure 3.4.
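The behaviour of the AND and OR gates described above can be reproduced with a tiny truth-table generator; the Python sketch below is our own illustration and prints the same rows as Tables 3.6 and 3.10.

```python
# Tiny truth-table generator for the basic gates; itertools.product enumerates
# every combination of input values (0 or 1), as in the tables above.

from itertools import product

def truth_table(gate, n_inputs):
    for inputs in product((0, 1), repeat=n_inputs):
        print(*inputs, '->', gate(inputs))

def AND(inputs):
    return int(all(inputs))   # HIGH only when every input is HIGH

def OR(inputs):
    return int(any(inputs))   # HIGH when any input is HIGH

truth_table(AND, 2)   # reproduces Table 3.6
truth_table(OR, 2)    # reproduces Table 3.10; use n_inputs=3 for Table 3.8
```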
3.6 BOOLEAN ALGEBRA

Boolean algebra or Boolean logic was developed by English mathematician George
Boole. It is considered as a logical calculus of truth values and resembles the algebra
of real numbers along with the numeric operations of multiplication xy, addition
x + y, and negation −x substituted by the respective logical operations of conjunction
x ∧ y, disjunction x ∨ y and complement ¬x. This set of rules explains specific
propositions whose result would be either true (1) or false (0). In digital logic, these
rules are used to define digital circuits whose state can be either 1 or 0.
Boolean logic forms the basis for computation in contemporary binary computer
systems. Using Boolean equations, any algorithm or any electronic computer circuit
can be represented. Even one Boolean expression can be transformed into an
equivalent expression by applying the theorems of Boolean algebra. This helps in
converting a given expression to a canonical or standardized form and minimizing
the number of terms in an expression. By minimizing terms and expressions the
designer can use less number of electric components while creating electrical circuits
so that the cost of system can be reduced. Boolean logical operations are performed
to simplify a Boolean expression using the following basic and derived operations.
Basic Operations: Boolean algebra is specifically based on logical
counterparts to the numeric operations of multiplication xy, addition x + y, and negation −x,
namely conjunction x ∧ y (AND), disjunction x ∨ y (OR) and complement or negation
¬x (NOT). In digital electronics, the AND is represented as a multiplication, the OR
is represented as an addition and the NOT is denoted with a postfix prime, for
example A′, which means NOT A. Conjunction is the closest of these three operations to its numeric counterpart.
As a logical operation the conjunction of two propositions is true when both
propositions are true and false otherwise. Disjunction works almost like addition
with one exception, i.e., the disjunction of 1 and 1 is neither 2 nor 0 but 1. Hence, the
disjunction of two propositions is false when both propositions are false and true
otherwise. The disjunction is also termed as the dual of conjunction. Logical negation,
however, does not work like numerical negation. It corresponds to incrementation,
i.e., ¬x = x+1 mod 2. An operation with this property is termed an involution. Using
negation we can formalize the notion that conjunction is dual to disjunction as per
De Morgan's laws, ¬(x ∧ y) = ¬x ∨ ¬y and ¬(x ∨ y) = ¬x ∧ ¬y. These can also be
construed as definitions of conjunction in terms of disjunction and vice versa: x ∧ y
= ¬(¬x ∨ ¬y) and x ∨ y = ¬(¬x ∧ ¬y).
Derived operations: Other Boolean operations can be derived from
these by composition. For example, implication xy of is a binary operation,
which is false when x is true and y is false, and true otherwise. It can also be
expressed as xy = ¬x y or equivalently ¬(x ¬y). In Boolean logic this
operation is termed as material implication, which distinguishes it from related
but non-Boolean logical concepts. The basic concept is that an implication xy
is by default true.
Boolean algebra, however, does have an exact counterpart called
eXclusive-OR (XOR) or parity, represented as x ⊕ y. The XOR of two
propositions is true only when exactly one of the propositions is true. Further,
the XOR of any value with itself vanishes, for example x ⊕ x = 0. Its digital
electronics symbol is a hybrid of the disjunction symbol and the equality symbol.
XOR is the only binary Boolean operation that is commutative and whose truth
table has equally many 0s and 1s.
Another example is x|y, the NAND gate in digital electronics, which is false
when both arguments are true and true otherwise. NAND can be defined by
composition of negation with conjunction because x|y = ¬(x ∧ y). It does not have
its own schematic symbol and is represented using an AND gate with an inverted
output. Unlike conjunction and disjunction, NAND is a binary operation that can be
used to obtain negation using the notation ¬x = x|x. Using negation one can define
conjunction in terms of NAND through x ∧ y = ¬(x|y), from which all other Boolean
operations of nonzero parity can be obtained. NOR, ¬(x ∨ y), is termed the
evident dual of NAND and is equally used for this purpose. This universal character
of NAND and NOR has been widely used for gate arrays and also for integrated
circuits with multiple general-purpose gates.
In logical circuits, a simple adder can be made using an XOR gate to add the
numbers and a series of AND, OR and NOT gates to create the carry output. XOR
is also used for detecting an overflow in the result of a signed binary arithmetic
operation, which occurs when the leftmost retained bit of the result is not the same
as the infinite number of digits to the left.
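As an illustration of the adder idea mentioned above, the following Python sketch (ours; the text itself only outlines the idea) builds a half adder from XOR and AND, and a full adder from two half adders plus an OR for the carry.

```python
# Half adder: XOR gives the sum bit, AND gives the carry bit.
# Full adder: two half adders plus an OR of the two partial carries.

def half_adder(a, b):
    return a ^ b, a & b                 # (sum, carry)

def full_adder(a, b, carry_in):
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2                  # carry out

for a, b in ((0, 0), (0, 1), (1, 0), (1, 1)):
    print(a, b, '-> sum, carry =', half_adder(a, b))
```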
3.6.1 Laws and Rules of Boolean Algebra
Boolean algebra is a system of mathematical logic. Many properties of ordinary algebra
are also valid for Boolean algebra. In Boolean algebra, every value is either 0 or 1.
There are no negative or fractional numbers. Though many of these laws have
already been discussed, they provide the tools necessary for simplifying Boolean expressions.
The following are the basic laws of Boolean algebra:
Laws of Complementation
The term complement means to invert, to change 1s to 0s and 0s to 1s. The
following are the laws of complementation:
Law 1 0 1
Law 2 10
Law 3 A A
OR Laws AND Laws
Law 4 0+0=0 Law 12 0.0 = 0
Law 5 0+1=1 Law 13 1.0 = 0
Law 6 1+0=1 Law 14 0.1 = 0
Law 7 1+1=1 Law 15 1.1 = 1
Law 8 A+0=A Law 16 A.0 = 0
Law 9 A+1=1 Law 17 A.1 = A
Law 10 A+A=A Law 18 A.A = 0
Law 11 A A 1 Law 19 A. A  0
Laws of ordinary algebra that are also valid for Boolean algebra are:
Commutative Laws
Law 20 A+B=B+A
Law 21 A.B= B.A
Associative Laws
Law 22 A  ( B  C )  ( A  B)  C
Law 23 A.( BC )  ( AB).C

Distributive Laws
Law 24 A.( B  C )  A.B  A.C
Law 25 A  BC  ( A  B).( A  C )
Law 26 A  ( A.B)  A  B
Example 3.35: Prove A + BC = (A + B)(A + C).
Solution: A + BC = A . 1 + BC                Law A . 1 = A
                 = A(1 + B) + BC             Law 1 + B = 1
                 = A . 1 + AB + BC           Law A(B + C) = AB + AC
                 = A(1 + C) + AB + BC        Law 1 + C = 1
                 = A . 1 + AC + AB + BC
                 = A . A + AC + AB + BC      Law A . A = A
                 = A(A + C) + B(A + C)
∴ A + BC = (A + C)(A + B)
Alternative proof:
(A + C)(A + B) = AA + AB + AC + BC
               = A + AB + AC + BC
               = A(1 + B) + AC + BC
               = A . 1 + AC + BC
               = A(1 + C) + BC
               = A + BC

Example 3.36: Prove A + A′B = A + B.
Solution:
A + A′B = A . 1 + A′B              Law A . 1 = A
        = A(1 + B) + A′B           Law 1 + B = 1
        = A . 1 + AB + A′B         Law A(B + C) = AB + AC
        = (A + B)(A + A′)          Law A . 1 = A (factorizing, with A . A′ = 0)
        = (A + B) . 1              Law A + A′ = 1
∴ A + A′B = A + B                  Law A . 1 = A
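Because every Boolean variable takes only the values 0 and 1, identities such as the two proved above can also be checked exhaustively; the short Python sketch below is our own addition and does exactly that.

```python
# Exhaustive check of A + A'B = A + B and A + BC = (A + B)(A + C)
# over all 0/1 values; | is OR, & is AND, and (1 - A) plays the role of A'.

from itertools import product

for A, B, C in product((0, 1), repeat=3):
    assert A | ((1 - A) & B) == A | B                 # Example 3.36 / Law 26
    assert A | (B & C) == (A | B) & (A | C)           # Example 3.35 / Law 25

print('Both identities hold for every combination of A, B and C.')
```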
Check Your Progress

1. What is a number system?
2. What is the binary number system?
3. Define logic gates.

3.7 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS
1. A number system is a technique for representing numbers. Computer
architecture supports the decimal, binary, octal and hexadecimal number systems.
2. A number system that uses only two digits, 0 and 1 is called the binary
number system.
3. A logic gate is an electronic circuit, which makes logical decisions. The
most common logic gates used are OR, AND, NOT, NAND and NOR
gates.
3.8 SUMMARY
 A number system of base, or radix, r is a system that uses r distinct digit symbols.
 The number system which utilizes ten distinct digits, i.e., 0, 1, 2, 3, 4, 5, 6,
7, 8 and 9 is known as decimal number system.
 A number system that uses only two digits, 0 and 1 is called the binary
number system. The binary number system is also called a base two system.
 A number system that uses eight digits, 0, 1, 2, 3, 4, 5, 6 and 7, is called an
octal number system.
 A popular method known as the double dabble method, also known as the divide-
by-two method, is used to convert a large decimal number into its binary
equivalent. In this method, the decimal number is repeatedly divided by 2
and the remainder after each division is used to indicate the coefficient of
the binary number to be formed.
 Complements are used in digital computer to simplify the subtraction
operation, that is, to easily represent the negative number, and for logical
manipulation.
 A logic gate is an electronic circuit, which makes logical decisions. To arrive
at these decisions, the most common logic gates used are OR, AND, NOT,
NAND and NOR gates.

3.10 KEY WORDS

• Radix: The base or radix of a number is defined as the number of different digits which can occur in each position in the number system.
• Logic gate: An electronic circuit which makes logical decisions.
3.11 SELF ASSESSMENT QUESTIONS AND EXERCISES

Short Answer Questions
1. What is a number system?
2. What is the significance of a binary number system?
3. What is the double-dabble method?
4. What are complements?
Long Answer Questions
1. How will you convert a number from binary to decimal and vice versa?
2. What are the two types of complements? Explain.
3. Explain the different types of basic gates.
4. Explain the laws and rules of Boolean algebra.
3.12 FURTHER READINGS

Bhatt, Pramod Chandra P. 2003. An Introduction to Operating Systems—Concepts and Practice. New Delhi: PHI.
Bhattacharjee, Satyapriya. 2001. A Textbook of Client/Server Computing. New Delhi: Dominant Publishers and Distributers.
Hamacher, V.C., Z.G. Vranesic and S.G. Zaky. 2002. Computer Organization, 5th edition. New York: McGraw-Hill International Edition.
Mano, M. Morris. 1993. Computer System Architecture, 3rd edition. New Jersey: Prentice-Hall Inc.
Nutt, Gary. 2006. Operating Systems. New Delhi: Pearson Education.
UNIT 4 LOGICAL CIRCUITS

Structure
4.0 Introduction
4.1 Objectives
4.2 Combinational Logic
4.3 Adders and Subtractors
4.3.1 Full-Adder
4.3.2 Half-Subtractor
4.3.3 Full-Subtractor
4.4 Decoders
4.4.1 3-Line-to-8-Line Decoder
4.5 Encoders
4.5.1 Octal-to-Binary Encoder
4.6 Decimal-to-BCD Encoder
4.7 Multiplexer and Demultiplexer
4.7.1 Basic Two-Input Multiplexer
4.7.2 Four-Input Multiplexer
4.8 Flip-flops
4.8.1 S-R Flip-Flop
4.8.2 D Flip-Flop
4.8.3 J-K Flip-Flop
4.8.4 T Flip-Flop
4.8.5 Master–Slave Flip-Flops
4.9 Registers
4.9.1 Shift Registers Basics
4.9.2 Serial In/Serial out Shift Registers
4.9.3 Serial In/Parallel Out Shift Registers
4.9.4 Parallel In/Serial Out Shift Registers
4.9.5 Parallel In/Parallel out Registers
4.10 Counters
4.10.1 Asynchronous Counter Operations
4.10.2 Synchronous Counter Operations
4.11 Answers to Check Your Progress Questions
4.12 Summary
4.13 Key Words
4.14 Self Assessment Questions and Exercises
4.15 Further Readings

4.0 INTRODUCTION
Logic circuits whose outputs at any instant of time are entirely dependent on the
input signals present at that time are known as combinational circuits. A
combinational circuit has no memory characteristic as its output does not depend
upon any past inputs. A combinational logic circuit consists of input variables, logic
gates and output variables. The design of a combinational circuit starts from the
verbal outline of the problem and ends in a logic circuit diagram or a set of Boolean
functions from which the logic diagram can be easily obtained.
You will also learn about the registers and counters. A register is a group of
flip-flops suitable for storing binary information. Each flip-flop is a binary cell capable
of storing one bit of information. An n-bit register has a group of n flip-flops and is
capable of storing any binary information containing n bits. The register is mainly
used for storing and shifting binary data entered into it from an external source. A
counter, by function, is a sequential circuit consisting of a set of flip-flops connected
in a special manner to count the sequence of the input pulses received in digital
form. Counters are fundamental components of digital systems. Digital counters
find wide applications such as pulse counting, frequency division, time measurement,
and control and timing operations.

4.1 OBJECTIVES
After going through this unit, you will be able to:
• Describe the basic operation of a half-adder and a full-adder
• Learn about adders and subtractors
• Understand the operation of a half-subtractor and a full-subtractor
• Learn about multiplexers and demultiplexers
• Understand the concept of flip-flops
• Define counters
• Understand the concept of shift registers
• Describe the types of shift registers

4.2 COMBINATIONAL LOGIC
The outputs of combinational logic circuits are determined only by their current
input state; they have no feedback, so any change in the signals applied to their
inputs immediately has an effect at the output. In other words, in a combinational
logic circuit, when the input condition changes state, so too does the output, because
combinational circuits have no memory. Combinational logic circuits are made up
from basic AND, OR and NOT gates that are combined or connected together to
produce more complicated switching circuits. Since combinational logic circuits are
built from individual logic gates, they can also be considered as decision-making
circuits; combinational logic is about combining logic gates to process two or more
signals in order to produce at least one output signal according to the logical function
of each gate. Common combinational circuits made up from individual logic gates
include multiplexers, decoders and demultiplexers, and full and half adders. One of
the most common uses of combinational logic is in multiplexer and demultiplexer
type circuits. Here, multiple inputs or outputs are connected to a common signal
line and logic gates are used to decode an address to select a single data input or
output switch. A multiplexer consists of two separate components, a logic decoder
and some solid-state switches. Figure 4.1 shows the hierarchy of combinational
logic circuits.

Fig. 4.1 Hierarchy of Combinational Logic
A sequential circuit uses flip-flops. Unlike combinational logic, sequential
circuits have state, which means, basically, that sequential circuits have memory. The
main difference between sequential circuits and combinational circuits is that
sequential circuits compute their output based on input and state, and that the
state is updated based on a clock. Combinational logic circuits implement Boolean
functions, so they are functions only of their inputs, and are not based on clocks.
Combinational logic is considered as the easiest circuitry to design. The outputs
from a combinational logic circuit depend only on the current inputs. The circuit
has no remembrance of what it did at any time in the past. Much of logic design
involves connecting simple, easily understood circuits to construct a larger circuit
that performs a much more complicated function.
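As a rough Python sketch (not from the text) of this distinction, a combinational block can be modelled as a pure function of its present inputs, while a sequential block also carries state that a clock updates:

def combinational(a, b, c):
    # Output depends only on the present inputs, e.g. Y = (A AND B) OR (NOT C).
    return (a & b) | (1 - c)

class SequentialToggle:
    # Output depends on the inputs AND on a stored state updated by a clock.
    def __init__(self):
        self.q = 0            # one bit of memory (state)
    def clock(self, t):
        if t:                 # toggle the stored bit when the input is 1
            self.q ^= 1
        return self.q

print(combinational(1, 1, 0))                # always 1 for these inputs
s = SequentialToggle()
print(s.clock(1), s.clock(1), s.clock(0))    # 1 0 0 -- depends on the history of inputs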

4.3 ADDERS AND SUBTRACTORS
An electronic device (combinational circuit) which performs arithmetic addition of
two bits is called a half-adder. It is an electronic device which can receive two
digital inputs representing the AUGEND and the ADDEND (or CARRY) and produces
SUM and CARRY outputs.
A half-adder has two inputs and two outputs. The two inputs are the two bit
members A and B, and the two outputs are the sum (S) of A and B and the carry
bit, denoted by C. The symbol for a half-adder is shown in Figure 4.2(a). The
logic diagram of a half-adder is shown in Figure 4.2(b). Figure 4.2(c) gives the
realization of the half-adder using five NAND gates.
(a) Symbol of Half-Adder  (b) Logic Diagram  (c) Half-Adder using NAND Gates
Fig. 4.2 Half-Adder (SUM = A ⊕ B = ĀB + AB̄; CARRY = A·B)
A half-adder functions according to the truth table. We know that an AND
gate produces a high output only when both inputs are high, and the exclusive OR
gate produces a high output if either input, but not both is high. From the truth
table, the sum output corresponds to a logic XOR function, while the carry output
corresponds to AND function.
Let us examine each entry in Table 4.1. The half-adder does electronically what
we do mentally when we add two bits.
Table 4.1 Truth Table for a Half-Adder

Inputs Outputs
Addend Augend Sum Carry
0 0 0 0
0 1 1 0
1 0 1 0
1 1 0 1
First entry : Inputs : A = 0 and B = 0
Human response : 0 plus 0 is 0 with a carry of 0.
Half-adder response : SUM = 0 and CARRY = 0
Second entry: Inputs : A = 1 and B = 0
Human response : 1 plus 0 is 1 with a carry of 0.
Half-adder response : SUM = 1 and CARRY = 0
Third entry: Inputs : A = 0 and B = 1
Human response : 0 plus 1 is 1 with a carry of 0.
Half-adder response : SUM = 1 and CARRY = 0
Fourth entry: Inputs : A = 1 and B = 1
Human response : 1 plus 1 is 0 with a carry of 1.
Half-adder response : SUM = 0 and CARRY = 1
The SUM output represents the least significant bit (LSB) of the sum. The Boolean
expression for the two outputs can be obtained directly from the truth table.
S (sum) = ĀB + AB̄ = A ⊕ B
C (carry) = A·B
The implementation of the half-adder circuit using basic gates is shown in Figure
4.3.
Fig. 4.3 Half-Adder using Basic Gates (S = ĀB + AB̄, C = A·B)
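A minimal Python sketch of the half-adder follows (illustrative, not the book's material); the sum is the XOR of the inputs and the carry is the AND, reproducing Table 4.1:

def half_adder(a, b):
    s = a ^ b          # SUM  = A XOR B
    c = a & b          # CARRY = A AND B
    return s, c

# Reproduce Table 4.1
for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))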

4.3.1 Full-Adder
A half-adder has only two inputs and there is no provision to add a carry coming
from the lower order bits when multi-bit addition is performed. For this purpose,
a third input terminal is added and this circuit is used to add A, B and Cin.
A full-adder is a combinational circuit that performs the arithmetic sum of
three input bits and produces a SUM and a CARRY.
It consists of three inputs and two outputs. Two of the input variables, denoted by
A and B, represent the two significant bits to be added; the third input, Cin,
represents the carry from the previous lower significant position. Two
outputs are necessary because the arithmetic sum of three binary digits ranges
from 0 to 3, and binary 2 or 3 needs two digits. The outputs are designated by the
symbol S (for SUM) and Cout (for CARRY). The binary variable S gives the value
of the LSB (least significant bit) of the SUM. The binary variable Cout gives the
output CARRY.
(a) Logic Symbol of Full-Adder  (b) Full-Adder using Two Half-Adders  (c) Logic Circuit of Full-Adder  (d) Full-Adder using Two Half-Adders and an OR Gate
Fig. 4.4 Full-Adder Circuits
The symbolic diagram for a full-adder is shown in Figure 4.4(a). A full-adder is
formed by using two half-adder circuits and an OR gate as shown in Figure 4.4(b).
Note the symbol Σ (sigma) for the sum. The full-adder circuit, which consists of
three AND gates, an OR gate and a 3-input exclusive OR gate, is shown in Figure 4.4(c).
Table 4.2 shows the truth table of a full-adder. There are several possible
cases for the three inputs and for each case the desired output values are listed.
For example, consider the case A = 1, B = 0 and Cin = 1. The full-adder must add
these bits to produce a sum (S) of 0 and carry (Cout) of 1. The reader should
check the other cases to understand them. The full-adder can do more than a
million additions per second.
Table 4.2 Truth Table for a Full-Adder
Inputs Outputs
Augend bit Addend bit Carry bit Sum bit Carry bit
A  B  Cin  Σ (Sum)  Cout
0 0 0 0 0
0 0 1 1 0
0 1 0 1 0
0 1 1 0 1
1 0 0 1 0
1 0 1 0 1
1 1 0 0 1
1 1 1 1 1
The logic expression for the exclusive OR of three variables A, B and Cin is,
A ⊕ B ⊕ Cin = (AB̄ + ĀB) ⊕ Cin
            = (AB̄ + ĀB)C̄in + (AB̄ + ĀB)′Cin
            = (AB̄ + ĀB)C̄in + (ĀB̄ + AB)Cin
SUM = A ⊕ B ⊕ Cin = AB̄C̄in + ĀBC̄in + ĀB̄Cin + ABCin
For A = 1, B = 0 and Cin = 1,
Σ = 1·1·0 + 0·0·0 + 0·1·1 + 1·0·1 = 0
The sum of products for the carry is
Cout = ĀBCin + AB̄Cin + ABC̄in + ABCin
     = ĀBCin + ABCin + AB̄Cin + ABCin + ABC̄in + ABCin     (since ABCin + ABCin = ABCin)
     = BCin(Ā + A) + ACin(B̄ + B) + AB(C̄in + Cin)
Cout = BCin + ACin + AB = AB + BCin + CinA
For A = 1, B = 0 and Cin = 1, Cout = 1·0 + 0·1 + 1·1 = 1.
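The following Python sketch (illustrative) builds a full-adder from two half-adders and an OR gate, as in Figure 4.4(d), and reproduces Table 4.2:

def half_adder(a, b):
    return a ^ b, a & b                 # (sum, carry)

def full_adder(a, b, cin):
    s1, c1 = half_adder(a, b)           # first half-adder adds A and B
    s, c2 = half_adder(s1, cin)         # second half-adder adds the carry-in
    cout = c1 | c2                      # OR gate combines the two carries
    return s, cout

# Reproduce Table 4.2
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            print(a, b, cin, full_adder(a, b, cin))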
4.3.2 Half-Subtractor
A combinational circuit which is used to perform subtraction of two binary bits is
known as a half-subtractor.
The logic symbol of a half-subtractor is shown in Figure 4.5(a). It has two
inputs, A (minuend) and B (subtrahend) and two outputs D (difference) and C
(borrow out). It is made up of an XOR gate, a NOT gate and an AND gate.
[Figure 4.5(b)]. Subtraction of two binary numbers may be accomplished by taking
the complement of the subtrahend and adding it to the minuend; that is, the
subtraction becomes an addition operation. The truth table for half-subtraction is
given in Table 4.3. From the truth table, it is clear that the difference output is 0 if
A = B and 1 if A ≠ B; the borrow output C is 1 whenever A < B. If A is less than
B, then subtraction is done by borrowing from the next higher order bit.
The Boolean expressions for the difference (D) and borrow (C) are given by,
D = ĀB + AB̄ = A ⊕ B
C = ĀB
(a) Logic Symbol  (b) Logic Diagram
Fig. 4.5 Logic Symbol and Diagram of a Half-Subtractor
Table 4.3 shows the truth table for a half-subtractor:
Table 4.3 Truth Table for a Half-Subtractor

Inputs Outputs
Minuend Subtrahend Difference Borrow
A B D C
0 0 0 0
0 1 1 1
1 0 1 0
1 1 0 0

4.3.3 Full-Subtractor
A full-subtractor is a combinational circuit that performs 3-bit subtraction.
The logic symbol of a full-subtractor is shown in Figure. 4.6(a). It has three
inputs, An (minuend), Bn (Subtrahend) and Cn–1 (borrow from previous state) and
two outputs D (difference) and Cn (borrow). The truth table for a full-subtractor is
given in Table 4.4.

(a) Logic Symbol  (b) Logic Diagram using Two Half-Subtractors
Fig. 4.6 Formulation of a Full-Subtractor using Two Half-Subtractors
The full-subtractor can be accomplished by using two half-subtractors and
an OR gate as shown in Figure 4.6(b).
Fig. 4.7 Realization of a Full-Subtractor
Table 4.4 shows the truth table for a full-subtractor.
Table 4.4 Truth Table for a Full-Subtractor

Inputs                                  Outputs
Minuend   Subtrahend   Borrow-in        Difference   Borrow-out
An        Bn           Cn–1             D            Cn
0 0 0 0 0
0 0 1 1 1
0 1 0 1 1
0 1 1 0 1
1 0 0 1 0
1 0 1 0 0
1 1 0 0 0
1 1 1 1 1

The minterms taken from the truth table give the Boolean expression (SOP) for the
difference D:
D = ĀnB̄nCn–1 + ĀnBnC̄n–1 + AnB̄nC̄n–1 + AnBnCn–1
Simplifying, D = (ĀnB̄n + AnBn)Cn–1 + (ĀnBn + AnB̄n)C̄n–1
             = (An ⊕ Bn)′Cn–1 + (An ⊕ Bn)C̄n–1
or, D = An ⊕ Bn ⊕ Cn–1
Similarly, the sum of products expression for Cn can be written from the truth table as
Cn = ĀnB̄nCn–1 + ĀnBnC̄n–1 + ĀnBnCn–1 + AnBnCn–1
The equation for the borrow after simplification by Karnaugh map is,
Cn = ĀnBn + ĀnCn–1 + BnCn–1

We notice that the equation for D is the same as the sum output for a full-
adder and the output Cn resembles the carry out for full-adder, except that An is
complemented. From these similarities, we understand that it is possible to convert
a full-adder into a full-subtractor by merely complementing An prior to its application
to the input of gates that form the borrow output as shown in Figure 4.8.
            AnBn:  00   01   11   10
Cn–1 = 0:           0    1    0    0
Cn–1 = 1:           1    1    1    0
Fig. 4.8 Karnaugh Map
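A minimal Python sketch of the full-subtractor, using the expressions derived above for D and Cn (illustrative only):

def full_subtractor(a, b, bin_):
    d = a ^ b ^ bin_                                        # difference = A xor B xor borrow-in
    bout = ((1 - a) & b) | ((1 - a) & bin_) | (b & bin_)    # borrow-out = A'B + A'Bin + B.Bin
    return d, bout

# Reproduce Table 4.4
for a in (0, 1):
    for b in (0, 1):
        for bin_ in (0, 1):
            print(a, b, bin_, full_subtractor(a, b, bin_))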

Check Your Progress
1. What is combinational logic?
2. What is a half-adder?
3. Define full-adder.

4.4 DECODERS
Many digital systems require the decoding of data. Decoding is necessary in such
applications as data multiplexing, rate multiplying, digital display, digital-to-analog
converters and memory addressing. It is accomplished by matrix systems that can
be constructed from such devices as magnetic cores, diodes, resistors, transistors
and FETs.

A decoder is a combinational logic circuit which converts binary information
from n input lines to a maximum of 2^n unique output lines, such that each output line
will be activated for only one of the possible combinations of inputs. If the n-bit
decoded information has unused or don't care combinations, the decoder output
will have fewer than 2^n outputs.
A decoder is similar to a demultiplexer, with one exception: there is no data input.
A single binary word n digits in length can represent 2^n different elements of information.
An AND gate can be used as the basic decoding element because its output
is HIGH only when all of its inputs are HIGH. For example, suppose the input binary
number is 1011. In order to make sure that all of the inputs to the AND gate are HIGH when
the binary number 1011 occurs, the third bit (the 0) must be inverted.
If a NAND gate is used in place of the AND gate, a LOW output will
indicate the presence of the proper binary code.
4.4.1 3-Line-to-8-Line Decoder
Figure 4.9 shows the reference matrix for decoding a binary word of 3 bits. In this
case, 3 inputs are decoded into eight outputs. Each output represents one of
the minterms of the 3 input variables. The control equations of this 3-bit binary decoder
are implemented in Figure 4.9. The operation of this circuit is listed in
Table 4.5.
Table 4.5 Truth Table for 3-to-8 Line Decoder

Inputs Outputs
A B C D0 D1 D2 D3 D4 D5 D6 D7
0 0 0 1 0 0 0 0 0 0 0
0 0 1 0 1 0 0 0 0 0 0
0 1 0 0 0 1 0 0 0 0 0
0 1 1 0 0 0 1 0 0 0 0
1 0 0 0 0 0 0 1 0 0 0
1 0 1 0 0 0 0 0 1 0 0
1 1 0 0 0 0 0 0 0 1 0
1 1 1 0 0 0 0 0 0 0 1

Figure 4.9 shows the diagram of the 3-line-to-8-line decoder. Each output is the AND (minterm) of the inputs or their complements:
D0 = ĀB̄C̄, D1 = ĀB̄C, D2 = ĀBC̄, D3 = ĀBC, D4 = AB̄C̄, D5 = AB̄C, D6 = ABC̄, D7 = ABC
Fig. 4.9 A 3-Line-to-8-Line Decoder
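A minimal Python sketch of the 3-to-8 decoder (illustrative): exactly one of the eight output lines goes HIGH for each input combination, matching Table 4.5:

def decoder_3to8(a, b, c):
    index = (a << 2) | (b << 1) | c                        # binary value of inputs ABC
    return [1 if i == index else 0 for i in range(8)]      # outputs D0 ... D7

print(decoder_3to8(1, 0, 1))   # D5 active: [0, 0, 0, 0, 0, 1, 0, 0]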

4.5 ENCODERS

An encoder is a digital circuit that performs the inverse operation of a decoder.
Hence, the opposite of the decoding process is called encoding. An encoder is
also a combinational logic circuit that converts an active input signal into a coded
output signal.
Fig. 4.10 Block Diagram of Encoder (n input lines with only one HIGH at a time; m-bit output code)

An encoder has n input lines, only one of which is active at any time, and m output
lines. It encodes the active input, such as a decimal or octal digit, to a coded
output such as binary or BCD. Encoders can also be used to encode various
symbols and alphabetic characters. The process of converting from familiar symbols
or numbers to a coded format is called encoding. In an encoder, the number of
outputs is always less than the number of inputs. The block diagram of an encoder
is shown in Figure 4.10.

4.5.1 Octal-to-Binary Encoder


We know that a binary-to-octal decoder (3-line-to-8-line decoder) accepts a 3-bit
input code and activates one of eight output lines corresponding to that code. An
octal-to-binary encoder (8-line-to-3-line encoder) performs the opposite function:
it accepts eight input lines and produces a 3-bit output code corresponding to the
activated input. The logic diagram and the truth table for an octal-to-binary encoder
are shown in Figure 4.11. It is implemented with three 4-input OR gates. The circuit
is designed so that when D0 is HIGH, the binary code 000 is generated; when D1
is HIGH, the binary code 001 is generated, and so on.
Y0 = D1 + D3 + D5 + D7
Y1 = D2 + D3 + D6 + D7
Y2 = D4 + D5 + D6 + D7
Fig. 4.11 Logic Diagram of Octal-to-Binary Encoder

The design is made simple by the fact that only eight out of the total 2^n possible
input conditions are used. Table 4.6 shows the truth table for the octal-to-binary encoder.
Table 4.6 Truth Table Octal-to-Binary Encoder

Inputs Outputs
D0 D1 D2 D3 D4 D5 D6 D7 Y2 Y1 Y0
1 0 0 0 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0 0 0 1
0 0 1 0 0 0 0 0 0 1 0
0 0 0 1 0 0 0 0 0 1 1
0 0 0 0 1 0 0 0 1 0 0
0 0 0 0 0 1 0 0 1 0 1
0 0 0 0 0 0 1 0 1 1 0
0 0 0 0 0 0 0 1 1 1 1
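A minimal Python sketch of the octal-to-binary encoder (illustrative), implementing the three OR expressions of Figure 4.11 under the assumption that exactly one input is HIGH:

def octal_to_binary(d):                  # d = [D0, D1, ..., D7], exactly one element is 1
    y0 = d[1] | d[3] | d[5] | d[7]
    y1 = d[2] | d[3] | d[6] | d[7]
    y2 = d[4] | d[5] | d[6] | d[7]
    return y2, y1, y0                    # MSB first

inputs = [0] * 8
inputs[5] = 1                            # D5 active
print(octal_to_binary(inputs))           # -> (1, 0, 1)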

4.6 DECIMAL-TO-BCD ENCODER
A decimal-to-BCD encoder has ten inputs, one for each decimal digit, and four outputs
corresponding to the BCD code, as shown in Figure 4.12. The truth table for a
decimal-to-BCD encoder is given in Table 4.7. From the truth table, we can
determine the relationship between each BCD output and the decimal digits. For
example, the most significant bit of the BCD code, D, is a 1 for decimal digit 8 or
9. The OR expression for bit D in terms of decimal digits can therefore be written
D = 8 + 9
The output C is HIGH for decimal digits 4, 5, 6 and 7 and can be written as,
C =4 + 5 + 6 + 7
Fig. 4.12 Logic Symbol for a Decimal-to-BCD Converter (decimal inputs 0–9; BCD outputs 1, 2, 4, 8)

Similarly, B = 2 + 3 + 6 + 7 and A = 1 + 3 + 5 + 7 + 9.
The above expressions for the BCD outputs can be implemented using OR gates as
shown in Figure 4.13. The basic operation is as follows: when a HIGH appears
on one of the decimal digit input lines, the appropriate levels occur on the four
BCD output lines.
Table 4.7 Truth Table for Decimal-to-BCD Converter

Decimal Digit      BCD Code
                   D  C  B  A
0 0 0 0 0
1 0 0 0 1
2 0 0 1 0
3 0 0 1 1
4 0 1 0 0
5 0 1 0 1
6 0 1 1 0
7 0 1 1 1
8 1 0 0 0
9 1 0 0 1

Fig. 4.13 Logic Diagram for Decimal-to-BCD Converter (OR gates generate the BCD outputs A, B, C and D from the decimal inputs 0–9)
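A minimal Python sketch of the decimal-to-BCD encoder using the four OR expressions derived above (illustrative only):

def decimal_to_bcd(d):                # d = [d0, d1, ..., d9], exactly one element is 1
    D = d[8] | d[9]
    C = d[4] | d[5] | d[6] | d[7]
    B = d[2] | d[3] | d[6] | d[7]
    A = d[1] | d[3] | d[5] | d[7] | d[9]
    return D, C, B, A

digit = [0] * 10
digit[6] = 1                          # decimal 6
print(decimal_to_bcd(digit))          # -> (0, 1, 1, 0)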

4.7 MULTIPLEXER AND DEMULTIPLEXER
Multiplexer means ‘many into one’. Multiplexing is the process of transmitting a
large number of information units over a small number of channels or lines.
A digital multiplexer or a data selector (MUX) is a combinational circuit that
accepts several digital data inputs and selects one of them and transmits information
on a single output line.
Control lines are used to make the selection. The basic multiplexer has
several data input lines and a single output line. The selection of a particular line is
controlled by a set of selection lines. The block diagram of a multiplexer with n
input lines, m control signals and one output line is shown in Figure 4.14. A
multiplexer is also called a data selector since it selects one of many inputs and
steers the data to the output line.
The multiplexer acts like a digitally controlled multi-position switch, where the
digital code applied to the SELECT input controls which data input will be
switched to the output. A digital multiplexer has N inputs and only one output.
Fig. 4.14 Block Diagram of Multiplexer (n input signals, m control signals, one output)


4.7.1 Basic Two-Input Multiplexer
Figure 4.15 shows the basic 2 × 1 MUX. This MUX has two input lines, A and B,
and one output line, Y. There is one select input line. When the select input S = 0,
data from A is selected to the output line Y. If S = 1, data from B will be selected
to the output Y. The logic circuitry for a two-input MUX with data inputs A and B
and select input S is shown in Figure 4.15. It consists of two AND gates G1 and
G2, a NOT gate G3 and an OR gate G4. The Boolean expression for the output
is given by
Y = A·S̄ + B·S
When the select line input S = 0, the expression becomes
Y = A·1 + B·0 = A (Gate G1 is enabled)
which indicates that output Y will be identical to input signal A.
Similarly, when S = 1, the expression becomes
Y = A·0 + B·1 = B (Gate G2 is enabled)
showing that output Y will be identical to input signal B.
In many situations a strobe or enable input E is added to the select line S, as
shown in Figure 4.16. The multiplexer becomes operative only when the strobe
line E = 0.
(a) Block Diagram of 2 × 1 MUX  (b) Logic Diagram (Y = A·S̄ + B·S)
Fig. 4.15 Basic 2-Input Multiplexer

Figure 4.16 shows the logic diagram of 2-input multiplexer with strobe input.
Fig. 4.16 Logic Diagram of 2-Input Multiplexer with Strobe Input
When the strobe input E is at logic 0, the output of NOT gate G5 is 1 and AND gates G1
and G2 are enabled. Accordingly, when S = 0 or 1, input A or B respectively is selected
as before. When the strobe input E = 1, all lines are disabled and the circuit will
not function.
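A minimal Python sketch of the 2-to-1 multiplexer with an active-LOW strobe, following Y = Ē·(A·S̄ + B·S) as described above (illustrative; the function name is only for this sketch):

def mux2(a, b, s, e=0):
    if e == 1:                 # strobe HIGH disables the multiplexer
        return 0
    return a if s == 0 else b  # S = 0 selects A, S = 1 selects B

print(mux2(a=1, b=0, s=0))     # -> 1 (A selected)
print(mux2(a=1, b=0, s=1))     # -> 0 (B selected)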
4.7.2 Four-Input Multiplexer
A logic symbol and diagram of a 4-input multiplexer are shown in Figure 4.17. It
has two data select lines S0 and S1 and four data input lines. Each of the four data
input lines is applied to one input of an AND gate.
Depending on S1 and S0 being 00, 01, 10 or 11, data from input lines A to D
are selected in that order. The function table is given in Table 4.8.
Table 4.8 Truth Table for Function Table

Select Lines        Output
S1    S0            Y
0 0 A
0 1 B
1 0 C
1 1 D

(a) Block Diagram of 4 × 1 Multiplexer  (b) Logic Diagram
Fig. 4.17 Four-Input Multiplexer

Y = A·S̄1·S̄0 + B·S̄1·S0 + C·S1·S̄0 + D·S1·S0
If S1S0 = 00 (binary 0) is applied to the data select lines, the data on input A appears
on the data output line:
Y = A·1·1 + B·1·0 + C·0·1 + D·0·0 = A (Gate G1 is enabled)
Similarly, Y = B·S̄1·S0 = B·1·1 = B when S1S0 = 01 (Gate G2 is enabled)
Y = C·S1·S̄0 = C·1·1 = C when S1S0 = 10 (Gate G3 is enabled)
Y = D·S1·S0 = D·1·1 = D when S1S0 = 11 (Gate G4 is enabled)
In a similar style, we can construct 8 × 1 MUXes, 16 × 1 MUXes, etc. Nowadays
two-, four-, eight- and sixteen-input multiplexers are readily available in the TTL and
CMOS logic families. These basic ICs can be combined for multiplexing a larger
number of inputs.
Multiplexer Applications: Multiplexer circuits find numerous applications in digital
systems. These applications include data selection, data routing, operation
sequencing, parallel-to-serial conversion, waveform generation and logic function
generation.

Check Your Progress
4. Why is decoding used in a digital circuit?
5. Define encoder.
6. What is a digital multiplexer?
7. Define data distributor or demultiplexer.

4.8 FLIP-FLOPS
Synchronous circuits change their states only when clock pulses are present. The
operation of the basic latch can be modified by providing an additional control
input that determines when the state of the circuit is to be changed. The latch with
the additional control input is called the flip-flop. The additional control input is
either the clock or enable input.
Flip-flops are of different types depending on how their inputs and clock
pulses cause transitions between two states. There are four basic types, namely,
S-R, J-K, D and T flip-flops.
4.8.1 S-R Flip-Flop
The S-R flip-flop consists of two additional AND gates at the S and R inputs of
S-R latch as shown in Figure 4.18.

Fig. 4.18 Block Diagram of S-R Flip-Flop
In this circuit, when the clock input is LOW, the outputs of both the AND
gates are LOW and changes in the S and R inputs will not affect the output (Q) of
the flip-flop. When the clock input becomes HIGH, the values at the S and R inputs will
be passed to the outputs of the AND gates and the output (Q) of the flip-flop will
change according to the changes in the S and R inputs as long as the clock input is
HIGH. In this manner, one can strobe or clock the flip-flop so as to store either a
1 by applying S = 1, R = 0 (to set) or a 0 by applying S = 0, R = 1 (to reset) at any
time and then hold that bit of information for any desired period of time by applying
a LOW at the clock input. This flip-flop is called clocked S-R flip-flop.
The S-R flip-flop which consists of the basic NOR latch and two AND
gates is shown in Figure 4.19.

Fig. 4.19 Clocked NOR-Based S-R Flip-Flop

The S-R flip-flop which consists of the basic NAND latch and two other
NAND gates is shown in Figure 4.20. The S and R inputs control the state of the
flip-flop in the same manner as described earlier for the basic or unclocked S-R
latch. However, the flip-flop does not respond to these inputs until the rising edge
of the clock signal occurs. The clock pulse input acts as an enable signal for the
other two inputs. The outputs of NAND gates 1 and 2 stay at the logic 1 level as
long as the clock input remains at 0. This 1 level at the inputs of NAND-based
basic S-R latch retains the present state, i.e., no change occurs. The characteristic
table of the S-R flip-flop is shown in truth table of Table 4.9 which shows the
operation of the flip-flop in tabular form.

(a) NAND-based S-R Flip-Flop (b) Graphic Symbol

Fig. 4.20 NAND Based S-R Flip-Flop

Table 4.9 Characteristic Truth Table of S-R Flip-Flop
Present State   Clock Pulse   Data Inputs   Next State   Action
Qn              CLK           S   R         Qn+1
0 0 0 0 0 No change
1 0 0 0 1 No change
0 1 0 0 0 No change
1 1 0 0 1 No change
0 0 0 1 0 No change
1 0 0 1 1 No change
0 1 0 1 0 Reset
1 1 0 1 0 Reset
0 0 1 0 0 No change
1 0 1 0 1 No change
0 1 1 0 1 Set
1 1 1 0 1 Set
0 0 1 1 0 No change
1 0 1 1 1 No change
0 1 1 1 ? Forbidden
1 1 1 1 ? Forbidden

4.8.2 D Flip-Flop
The D (delay) flip-flop has only one input, called the Delay (D) input, and two
outputs, Q and Q̄. It can be constructed from an S-R flip-flop by inserting an
inverter between S and R and assigning the symbol D to the S input. The structure
of D flip-flop is shown in Figure 4.21(a). Basically, it consists of a NAND flip-flop
with a gating arrangement on its inputs. It operates as follows:
1. When the CLK input is LOW, the D input has no effect, since the set and
reset inputs of the NAND flip-flop are kept HIGH.
2. When the CLK goes HIGH, the Q output will take on the value of the D
input. If CLK =1 and D =1, the NAND gate-1 output goes 0 which is the
S input of the basic NAND-based S-R flip-flop and NAND gate-2 output
goes 1 which is the R input of the basic NAND-based S-R flip-flop.
Therefore, for S = 0 and R = 1, the flip-flop output will be 1, i.e., it follows
D input. Similarly, for CLK=1 and D = 0, the flip-flop output will be 0. If D
changes while the CLK is HIGH, Q will follow and change quickly.
The logic symbol for the D flip-flop is shown in Figure 4.21(b). A simple
way of building a delay D flip-flop is shown in Figure 4.21(c). The truth table of D
flip-flop is given in Table 4.10 from which it is clear that the next state of the flip-
flop, Qn+1, follows the value of the input D when the clock pulse is applied.
As transfer of data from the input to the output is delayed, it is known as Delay
(D ) flip-flop. The D-type flip-flop is either used as a delay device or as a latch to
store 1 bit of binary information.

(a) Using NAND Gates (b) Logic Symbol (c) Using S-R Flip-Flop

Fig. 4.21 D Flip-Flop

Table 4.10 Truth Table of D Flip-Flop

CLK    Input D    Output Qn+1
1 0 0
1 1 1
0 X No change

State Diagram and Characteristic Equation of D Flip-Flop
The state transition diagram for the delay flip-flop is shown in Figure 4.22.

Fig. 4.22 State Diagram of Delay Flip-Flop

From the above state diagram, it is clear that when D =1, the next state will
be 1; when D = 0, the next state will be 0, irrespective of its previous state. From
the state diagram, one can draw the Present state–Next state table and the
application or excitation table for the Delay flip-flop as shown in Table 4.11 and
Table 4.12 respectively.
Table 4.11 Present State–Next State Table for D Flip-Flop

Present State   Delay Input   Next State
Qn              D             Qn+1

0 0 0
0 1 1
1 0 0
1 1 1

Table 4.12 Application or Excitation Table for D Flip-Flop

Qn    Qn+1    Excitation Input D
0 0 0
0 1 1
1 0 0
1 1 1

Using the Present state–Next state table, the K-map for the next state (Qn+1)
of the Delay flip-flop can be drawn as shown in Figure 4.23 and the simplified
expression for Qn+1 can be obtained as described below.

Fig. 4.23 Next State (Qn+1) Map for D Flip-Flop

From the above K-map, the characteristic equation for the Delay flip-flop is,
Qn+1 = D
Hence, in a Delay flip-flop, the next state follows the Delay input, as
represented by the characteristic equation.
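A minimal Python sketch of the Delay flip-flop's behaviour, Qn+1 = D (illustrative only):

class DFlipFlop:
    def __init__(self):
        self.q = 0                # stored bit

    def clock(self, d):
        self.q = d                # next state equals the D input
        return self.q

ff = DFlipFlop()
print(ff.clock(1), ff.clock(0), ff.clock(1))   # -> 1 0 1, the output follows D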
4.8.3 J-K Flip-Flop
A J-K flip-flop has a characteristic similar to that of an S-R flip-flop. In addition,
the indeterminate condition of the S-R flip-flop is permitted in it. Inputs J and K
behave like inputs S and R to set and reset the flip-flop, respectively. When J = K =
1, the flip-flop output toggles, i.e., switches to its complement state; if Q = 0, it
switches to Q =1 and vice versa.
A J-K flip-flop can be obtained from the clocked S-R flip-flop by augmenting
two AND gates as shown in Figure 4.24(a). The data input J and the output Q̄ are
applied to the first AND gate, and its output (J·Q̄) is applied to the S input of the S-R
flip-flop. Similarly, the data input K and the output Q are connected to the second
AND gate, and its output (K·Q) is applied to the R input of the S-R flip-flop. The graphic
symbol of J-K flip-flop is shown in Figure 4.24(b) and the truth table is shown in
Table 4.13. The output for the four possible input sequences are as follows.

(a) J-K Flip-Flop using S-R Flip-Flop (b) Graphic Symbol of J-K Flip-Flop

Fig. 4.24 J-K Flip-Flop

Table 4.13 Truth Table of J-K Flip-Flop

CLK    J  K    Qn+1    Action
X      0  0    Qn      No change
1      0  1    0       Reset
1      1  0    1       Set
1      1  1    Q̄n      Toggle

State Diagram and Characteristic Equation of J-K Flip-Flop


The state transition diagram for J-K flip-flop can be drawn as shown in Figure
4.25.

Fig. 4.25 State Diagram of J-K Flip-Flop

From the above state diagram, one can easily understand that the state
transition from 0 to 1 takes place whenever J is asserted (i.e., J =1 ) irrespective
of K value. Similarly, state transition from 1 to 0 takes place whenever K is asserted
(i.e., K = 1) irrespective of the value of J. Also, the state transition from 0 to 0
occurs whenever J = 0, irrespective of the value of K and the state transition from
1 to 1 occurs whenever K = 0, irrespective of J value.
From the above state diagram and truth table (Table 4.13) of J-K flip-flop,
the Present state–Next state table and application table or excitation table for J-K
flip-flop are shown in Table 4.14 and Table 4.15, respectively.

Table 4.14 Present State–Next State Table for J-K Flip-Flop
Present State   Inputs   Next State
Qn              J  K     Qn+1
0 0 0 0
0 0 1 0
0 1 0 1
0 1 1 1
1 0 0 1
1 0 1 0
1 1 0 1
1 1 1 0

Table 4.15 Application or Excitation Table for J-K Flip-Flop

Qn    Qn+1    Excitation Inputs J K
0 0 0 d
0 1 1 d
1 0 d 1
1 1 d 0

From Table 4.14, a Karnaugh map (K-map) for the next state (Qn+1) can be drawn
as shown in Figure 4.26, and the simplified logic expression which represents the
characteristic equation of the J-K flip-flop can be obtained as follows.
From the K-map shown in Figure 4.26, the characteristic equation of the J-K
flip-flop can be written as,
Qn+1 = J·Q̄n + K̄·Qn

Fig. 4.26 Next-State (Qn+1) K-Map for J-K Flip-Flop
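A minimal Python check (illustrative) that the characteristic equation Qn+1 = J·Q̄n + K̄·Qn reproduces the behaviour of Table 4.14:

def jk_next(q, j, k):
    # J ANDed with the complement of Q, OR the complement of K ANDed with Q
    return (j & (1 - q)) | ((1 - k) & q)

for q in (0, 1):
    for j in (0, 1):
        for k in (0, 1):
            print(q, j, k, "->", jk_next(q, j, k))
# J = K = 1 toggles the state; J = K = 0 holds it; otherwise J sets and K resets.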

4.8.4 T Flip-Flop
Another basic flip-flop, called the T (Trigger or Toggle) flip-flop, has only a
single data (T) input, a clock input and two outputs, Q and Q̄. The T-type flip-flop
is obtained from a J-K flip-flop by connecting its J and K inputs together. The
designation T comes from the ability of the flip-flop to ‘toggle’ or complement its
state.

The block diagram of a T flip-flop and its circuit implementation using a J-K
flip-flop are shown in Figure 4.27. The J and K inputs are wired together. The
truth table for the T flip-flop is shown in Table 4.16.
(a) Block Diagram of T Flip-Flop (b) T Flip-Flop using a J-K Flip Flop

Fig. 4.27 T Flip-flop

When the T input is in the 0 state (i.e., J = K = 0) prior to a clock pulse, the
Q output will not change with clocking. When the T input is at the 1 level (i.e., J = K = 1)
prior to clocking, the output will be in the Q̄ state after clocking. In other
words, if the T input is a logical 1 and the device is clocked, then the output will
change state regardless of what the output was prior to clocking. This is called
toggling, hence the name T flip-flop.
Table 4.16 Truth Table of T Flip-Flop

Qn T Qn+1
0 0 0
0 1 1
1 0 1
1 1 0

The above truth table shows that when T = 0, then Qn+1 = Qn, i.e., the next
state is the same as the present state and no change occurs. When T = 1, then
Qn+1 = Q̄n, i.e., the state of the flip-flop is complemented.

Application of T flip-flop: The T-type flip-flop is most often seen in counters and
sequential counting networks because of its inherent divide-by-2 capability. When
a clock is applied, the output changes state once every input cycle, thus
repeating one cycle for every two input cycles. This is the action required in many
cases for binary counters.
State Diagram and Characteristic Equation of T Flip-Flop
The state transition diagram for the Trigger flip-flop is shown in Figure 4.28.

Fig. 4.28 State Diagram of Trigger Flip-Flop

From the above state diagram, it is clear that when T = 1, the flip-flop
changes or toggles its state irrespective of its previous state. When T = 1 and Qn =
0, the next state will be 1, and when T = 1 and Qn = 1, the next state will be 0.
Similarly, one can understand that when T = 0, the flip-flop retains its previous
state. From the above state diagram, one can draw the Present state–Next state
table and application or excitation table for the Trigger flip-flop as shown in Table
4.17 and Table 4.18, respectively.
Table 4.17 Present State–Next State Table for T Flip-Flop

Qn T Qn+1
0 0 0
0 1 1
1 0 1
1 1 0

Table 4.18 Application or Excitation Table for T Flip-Flop

Qn    Qn+1    Excitation Input T
0 0 0
0 1 1
1 0 1
1 1 0

From Table 4.17, the K-map for the next state (Qn+1) of the Trigger flip-flop
can be drawn as shown in Figure 4.29 and the simplified expression for Qn+1
can be obtained as follows.

Fig. 4.29 Next State (Qn+1) Map for T Flip-Flop

From the K-map shown in Figure 4.29, the characteristic equation for the Trigger
flip-flop is,
Qn+1 = T·Q̄n + T̄·Qn = T ⊕ Qn
So, in a Trigger flip-flop, the next state will be the complement of the previous
state when T = 1.
4.8.5 Master–Slave Flip-Flops
A Master–Slave flip-flop can be constructed using two J-K flip-flops as shown in
Figure 4.30. The first flip-flop, called the Master, is driven by the positive edge of
the clock pulse; the second flip-flop, called the Slave, is driven by the negative
edge of the clock pulse. Therefore, when the clock input has a positive edge, the
master acts according to its J-K inputs, but the slave does not respond, since it
requires a negative edge at the clock input. When the clock input has a negative
edge, the slave flip-flop copies the master outputs. But the master does not respond
to the feedback from Q and Q̄, since it requires a positive edge at its clock input.
Thus, the Master–Slave flip-flop does not have the race-around problem.
Fig. 4.30 Master–Slave J-K Flip-Flop

A Master–Slave J-K flip-flop constructed using NAND gates is shown in


Figure 4.31. It consists of two flip-flops connected in series. NAND gates 1
through 4 form the master flip-flop and NAND gates 5 through 8 form the slave
flip-flop. When the clock is positive, a change in J and K inputs cause a change of
state in the master flip-flop. During this period, the slave retains its previous state
and serves as a buffer between the master and the output. When the clock goes
negative, the master flip-flop does not respond, i.e., it maintains its previous state,
while the slave flip-flop is enabled and changes its state to that of the master flip-
flop. The new state of the slave then becomes the state of the entire Master–Slave
flip-flop. The operation of Master–Slave J-K flip-flop for different J-K input
combinations can be explained as follows:

Fig. 4.31 Clocked Master–Slave J-K Flip-Flop using NAND Gates

If J = 1 and K = 0, the master flip-flop sets on the positive clock edge. The
HIGH Q (1) output of the master drives the input ( J ) of the slave. So, when the
negative clock edge hits, the slave also sets. The slave flip-flop copies the action
of the master flip-flop.
If J = 0 and K = 1, the master resets on the leading edge of the CLK pulse.
The HIGH Q̄ output of the master drives the K input of the slave flip-flop. Then,
the slave flip-flop resets at the arrival of the trailing edge of the CLK pulse. Once
again, the slave flip-flop copies the action of the master flip-flop.
If J = K = 1, the master flip-flop toggles on the positive clock edge and the
slave toggles on the negative clock edge. The condition J = K = 0 input does not
NOTES
produce any change.
Master–Slave flip-flops operate from a complete clock pulse and the outputs
change on the negative transition.

Check Your Progress
8. Define a flip-flop.
9. What are the different types of flip-flops?
10. How will you obtain the T-type flip-flop from J-K flip-flop?

4.9 REGISTERS

A register is a group of flip-flops used to store or manipulate data or both. Each
flip-flop is capable of storing one bit of information. An n-bit register has n flip-flops
and is capable of storing any binary information containing n bits.
The register is a type of sequential circuit and an important building block
used in digital systems like multipliers, dividers, memories, microprocessors, etc.
A register stores a sequence of 0s and 1s. Registers that are used to store
information are known as memory registers. If they are used to process
information, they are called shift registers.
4.9.1 Shift Registers Basics
A shift register is a group of FFs arranged so that the binary numbers stored in the
FFs are shifted from one FF to the next for every clock pulse.
Shift registers often are used to store data momentarily. Figure 4.32 shows
a typical example of where shift registers might be used in a digital system
(calculator). Notice the use of shift registers to hold information from the encoder
for the processing unit. A shift register is also being used for temporary storage
between the processing unit and the decoder. Shift registers are also used at other
locations within a digital system.

Fig. 4.32 Block Diagram of a Digital System (Calculator) using Shift Registers (keypad, encoder, shift register, processing unit, shift register, decoder)

There are two modes of operation for registers. The first operation is series
or serial operation. The second type of operation is parallel shifting. Input
and output functions associated with registers include (1) serial input/serial output
(2) serial input/parallel output (3) parallel input/parallel output (4) parallel input/
serial output. NOTES
Hence input data are presented to registers in either a parallel or a serial
format.
To input parallel data to a register requires that all the flip-flops be affected
(set or reset) at the same time. To output parallel data requires that the flip-flop Q
outputs be accessible. Serial input data loading requires that one data bit at a time
is presented to either the most or least significant flip-flop. Data are shifted from
the flip-flop initially loaded to the neat one in series. Serial output data are taken
from a single flip-flop, one bit at a time.
Serial data input or output operations require multiple clock pulses. Parallel
data operations only take one clock pulse. Data can be loaded in one format and
removed in another. Two functional parts are required by all shift registers: (1)
data storage flip-flops and (2) logic to load, unload and shift the stored information.
The block diagrams of the four basic register types are shown in Figure 4.33.
Registers can be designed using discrete flip-flops (S-R, J-K and D-type). Registers
are also available as MSI.

(a) Serial In/Serial Out  (b) Serial In/Parallel Out  (c) Parallel In/Serial Out  (d) Parallel In/Parallel Out
Fig. 4.33 Register Types

4.9.2 Serial In/Serial Out Shift Registers


This type of shift register accepts data serially, that is, one bit at a time on a single
line. It produces the stored information on its output also in serial form. Data may
be shifted left (from low- to high-order bits) using a shift-left register, or shifted
right (from high- to low-order bits) using a shift-right register.
Shift Left Register
A shift-left register can be built using D FFs or J-K FFs as shown in Figure 4.34.
A J-K FF register requires connection of both the J and K inputs; input data are
connected to the rightmost (lowest order) stage, with data being shifted bit by bit to
the left.
(a) J-K Type  (b) D Type
Fig. 4.34 Shift-Left Registers, J-K and D Types

For the register of Figure 4.34(b) using D FFs, a single data line is connected
between stages; again, four shift pulses are required to shift a 4-bit word into the
4-stage register.
The shift pulse is applied to each stage, operating all of them simultaneously. When
the shift pulse occurs, the data input is shifted into that stage. Each stage is set or
reset corresponding to the input data at the time the shift pulse occurs. Thus the
input data bit is shifted into stage A by the first shift pulse. At the same time the
data of stage A is shifted into stage B, and so on for the following stages. For each
shift pulse, data stored in the register stages shift left by one stage. New data are
shifted into stage A, whereas the data present in stage D are shifted out (to the
left) for use by some other shift register or computer unit.
For example, consider starting with all stages reset and applying a steady
logical-1 as the data input to stage A. The data in each stage after each of four
shift pulses are shown in Table 4.19. Notice in Table 4.19 that the logical-1 input
shifts into stage A and then shifts left to stage D after four shift pulses.
As another example, consider shifting alternate 0 and 1 data into stage A,
starting with all stages at logical 1. Table 4.20 shows the data in each stage after each of
four shift pulses.

Table 4.19 Operation of Shift-Left Register
Shift Pulse D C B A
0 0 0 0 0
1 0 0 0 1
2 0 0 1 1
3 0 1 1 1
4 1 1 1 1

As a third example of shift register operation, consider starting with the count
in step 4 of Table 4.20 and applying four more shift pulses while placing a steady
logical-0 as the data input to stage A. This is shown in Table 4.21.
Table 4.20 Shift-Register Operation            Table 4.21 Final Stage
Shift Pulse  D C B A                           Shift Pulse  D C B A
0 1 1 1 1 0 0 1 0 1
1 1 1 1 0 1 1 0 1 0
2 1 1 0 1 2 0 1 0 0
3 1 0 1 0 3 1 0 0 0
4 0 1 0 1 4 0 0 0 0
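A minimal Python sketch of the shift-left register's action (illustrative): each pulse moves every bit one stage to the left and loads the serial input into stage A, reproducing Table 4.19:

def shift_left(stages, serial_in):
    d, c, b, a = stages                  # stages listed as [D, C, B, A]
    return [c, b, a, serial_in]          # D<-C, C<-B, B<-A, A<-serial input

stages = [0, 0, 0, 0]                    # all stages initially reset
for pulse in range(1, 5):
    stages = shift_left(stages, 1)       # steady logical-1 data input
    print(pulse, stages)                 # 0001, 0011, 0111, 1111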

Shift Right Register


A shift-right register can also be built using D FFs or J-K FFs as shown in
Figure 4.35. Let us illustrate the entry of the 4-bit binary number 1101 into the
register, beginning with the rightmost bit. The 1 is put onto the data input line,
making D = 1 for FF A. When the first clock pulse is applied, FF A is SET, thus
storing the 1. Next the 0 is applied to the data input, making D = 0 for FF A,
while the D input of FF B is 1 because it is connected to the QA output.
(a) Using D Flip-Flops  (b) Using J-K Flip-Flops
Fig. 4.35 Shift-Right Registers

When the second clock pulse occurs, the 0 on the data input is “shifted” into
FF A because FF A RESETs, and the 1 that was in FF A is “shifted” into FF B.
The next 1 in the binary number is now put onto the data-input line, and a clock
pulse is applied. The 1 is entered into FF A, the 0 stored in FF A is shifted into FF
B, and the 1 stored in FF B is shifted into FF C. The last bit in the binary number,
a 1, is now applied to the data input, and a clock pulse is applied. This time the 1 is
entered into FF A, the 1 stored in FF A is shifted into FF B, the 0 stored in FF B is
shifted into FF C, and the 1 stored in FF C is shifted into FF D. This completes the
serial entry of the 4-bit binary number into the shift register, where it can be stored
for any amount of time. Table 4.23 shows the action of shifting all logical-1 inputs
into an initially reset shift register. Table 4.22 shows the register operation for the
entry of 1101.
Table 4.22 Register Operation Table 4.23 Shifting Logical Inputs
Shift Pulse QA QB QC QD Shift Pulse QA QB QC QD
0 0 0 0 0 0 0 0 0 0
1 1 0 0 0 1 1 0 0 0
2 0 1 0 0 2 1 1 0 0
3 1 0 1 0 3 1 1 1 0
4 1 1 0 1 4 1 1 1 1
The waveforms shown in Figure 4.36 illustrate the entry of 4-bit number 0100.
For a J-K FF, the data bit to be shifted into the FF must be present at the J and
K inputs when the clock transitions (low or high). Since the data bit is either a 1 or
a 0, there are two cases:
1. To shift a 0 into the FF, J = 0 and K = 1.
2. To shift a 1 into the FF, J = 1 and K = 0.
At time A : All the FFs are reset. The FF output just after time A are
QRST = 0000.
At time B : The FFs all contain 0s, the FF outputs are QRST = 0000.
Fig. 4.36 Waveforms of 4-Bit Serial Input Shift Register
At time C : The FFs still all contained 0s; the FF outputs just after time C are QRST = 1000.
At time D : The FF outputs are QRST = 0100.

4.9.3 Serial In/Parallel Out Shift Registers


The logic diagram of a 4-bit serial-in-parallel-out shift register is shown in
Figure 4.37. It has one input and the number of output pins is equal to the number
of FFs in the register. In this register data is entered serially but shifted out in
parallel. In order to shift the data out in parallel, it is necessary to have all the data
available at the outputs at the same time. Once the data are stored, each bit appears
on its respective output and all bits are available simultaneously, rather than on a
bit-by-bit basis as with the serial output.
(a) Logic Diagram  (b) Logic Symbol
Fig. 4.37 A Serial-In-Parallel-Out Shift Register

4.9.4 Parallel In/Serial Out Shift Registers
For a register with parallel data inputs, the bits are entered simultaneoulsy into
their respective stages on parallel lines rather than on a bit-by-bit basis on one line.
A 4-bit parallel-in-serial-out shift register is illustrated in Figure 4.38. It has
four data-input lines A, B, C and D and a SHIFT/ LOAD input. SHIFT/ LOAD is
a control input that allows four bits of data to enter the register in parallel or shift
the data in serial.
When SHIFT/LOAD is LOW, AND gates G1 through G3 are enabled,
allowing each data bit to be applied to the D input of its respective FF. When a
clock pulse is applied, the FFs with D = 1 will SET and those with D = 0 will
RESET, thereby storing all four bits simultaneously.

When SHIFT/LOAD is HIGH, AND gates G1 through G3 are
disabled and AND gates G4 through G6 are enabled, allowing the data bits to shift
right from one stage to the next. The OR gates allow either the normal shifting
operation or the parallel data-entry operation, depending on which AND gates
are enabled by the level on the SHIFT/ LOAD input.
(a) Logic Diagram  (b) Logic Symbol
Fig. 4.38 A 4-Bit Parallel-In-Serial-Out Shift Register

4.9.5 Parallel In/Parallel out Registers


In this type of register, data inputs can be shifted either in or out of the register in
parallel. It has four inputs and four outputs. In this register, there is no
interconnection between successive FFs since no serial shifting is required. Therefore,
the moment the parallel entry of the input data is accomplished, the respective bits
will appear at the parallel outputs.
The logic diagram of a 4-bit parallel-in-parallel-out shift register is shown in
Figure 4.39. Let A, B, C and D be the inputs applied directly to delay (D) inputs
of respective FFs. Now on applying a clock pulse, these inputs are entered into
the register and are immediately available at the outputs QA, QB, QC and QD.

Fig. 4.39 Logic Diagram of a 4-Bit Parallel-In-Parallel-Out Shift Register

4.10 COUNTERS
In addition to functioning as a frequency divider, the circuit of Figure 4.40(a)
operates as a binary counter. Here the J-K flip-flops are negative edge-triggered.
The flip-flops are initially RESET. Let Q2Q1Q0 be a binary number where Q2 is the
2^2 position, Q1 is the 2^1 position and Q0 is the 2^0 position. The first eight states
of Q2Q1Q0 in the timing diagram should be recognised as the binary counting
sequence from 000 to 111. After the first clock pulse, the flip-flops are in the 001
state, i.e., Q2 = 0, Q1 = 0 and Q0 = 1, which represents 001₂ (= 1₁₀); after the
second CLK pulse, the flip-flops represent 010₂ (= 2₁₀); after the third pulse,
011₂ (= 3₁₀), and so on until after seven CLK pulses, 111₂ (= 7₁₀). On the eighth
pulse, the flip-flops return to the 000 state, and the binary sequence repeats itself
after every eight clock pulses as shown in the timing diagram of Figure 4.40(b).
Thus, the flip-flops count in sequence from 0 to 7 and then recycle back to 0 to
begin the sequence again.

Fig. 4.40 J-K Flip-Flops Wired as 3-Bit Binary Counter


4.10.1 Asynchronous Counter Operations
Figure 4.41 shows a 4-bit binary ripple counter using J-K flip-flops. The clock
signal is connected to the clock input of only the first-stage flip-flop A, i.e., the least
significant bit stage of the counter; the output of A drives B, the output of B
drives flip-flop C, and the output of C drives flip-flop D. The triggers move through
the flip-flops like a ripple. Hence this counter is known as a ripple counter. All the
J and K inputs are tied to VCC (1) which means that each flip-flop toggles on the
negative edge of its clock input. With four binary places (QD, QC, QB and QA), we
can count from 0000 to 1111 (0 to 15 in decimal).
Consider, initially, all flip-flops to be in the logical 0 state (i.e., QA = QB = QC
= QD = 0) in Figure 4.41(a). As clock pulse 1 arrives at the clock (CLK) input of
flip-flop A, it toggles (on the negative edge) and the display shows 0001. With the
arrival of the second clock pulse, flip-flop A toggles again and QA goes from 1 to 0.
This causes flip-flop B to toggle to 1. The count on the display now reads 0010.
The counting continues, with each flip-flop output triggering the next flip-flop on
its negative-going edge. Before the arrival of the sixteenth clock pulse, all flip-flops are
in the logical 1 state and the display reads 1111. Clock pulse 16 causes QA, QB,
QC, QD to go to logical 0 state in turn.
Table 4.24 shows the sequence of binary states that the flip-flops will follow
as clock pulses are applied continuously. The counting mode of the mod-16 counter
is shown by waveforms in Figure 4.41(b). The clock input is shown on the top
line. The state of each flip-flop is shown on the waveforms. The binary count is
shown across the bottom of the diagram.
The delay between the responses of successive flip-flops is typically 5–20
nanoseconds.
[1]
+ VCC
J QA J QB J QC J QD

A B C C
Clock
K QA K QB K QC K QD

QA QB QC QD

Output
(a) Logic Diagram
Clock
input 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16

QA

QB

QC

QD
(b) Waveform Diagram

Self-Instructional Fig. 4.41 4-Bit Binary Ripple Counter


Table 4.24 State Table of 4-Bit Binary Ripple Counter

Number of Clock Pulses   QD QC QB QA
0   0 0 0 0
1 0 0 0 1
2 0 0 1 0
3 0 0 1 1
4 0 1 0 0
5 0 1 0 1
6 0 1 1 0
7 0 1 1 1
8 1 0 0 0
9 1 0 0 1
10 1 0 1 0
11 1 0 1 1
12 1 1 0 0
13 1 1 0 1
14 1 1 1 0
15 1 1 1 1
0 0 0 0 0

MOD–Number or Modulus
The MOD-number (or the modulus) of a counter is the total number of states
which the counter goes through in each complete cycle.
MOD number = 2^N
where N = number of flip-flops.
The maximum binary count of the counter is 2^N – 1. Thus, a 4 flip-flop counter
can count as high as 1111₂ = 2⁴ – 1 = 16 – 1 = 15₁₀. The MOD number can be
increased by adding more FFs to the counter.
4.10.2 Synchronous Counter Operations
A synchronous, parallel, or clocked counter is one in which all stages are triggered
simultaneously.
When the carry has to propagate through a chain of n flip-flops, the overall
propagation delay time is n × tpd. For this reason, ripple counters are too slow for some
applications. To get around the ripple-delay problem, we can use a synchronous counter.
A 4-bit (MOD-16) synchronous counter, with parallel carry is shown in
Figure 4.42. The clock is connected directly to the CLK input of each flip-flop,
i.e., the clock pulses drive all flip-flops in parallel. In this counter only the LSB flip-
flop A has its J and K inputs connected permanently to VCC, i.e., at the high level.
The J, K inputs of the other flip-flops are driven by some combination of flip-flop
outputs. The J and K inputs of flip-flop B are connected to the QA output of flip-flop A.
The J and K inputs of FF C are driven by the AND of QA and QB. Similarly,
the J and K inputs of FF D are driven by the AND of QA, QB and QC.
Fig. 4.42 4-Stage Synchronous Counter

For this circuit to count properly, on a given negative transition of the clock, only
those FFs that are supposed to toggle on that transition should have J = K = 1 when
the negative transition occurs. According to the state Table 4.25, FF A is
required to change state with the occurrence of each clock pulse. FF B changes its
state when QA = 1. Flip-flop C toggles only when QA = QB = 1, and flip-flop D
changes state only when QA = QB = QC = 1. In other words, a flip-flop toggles
on the next negative clock transition if all lower bits are 1s.
The counting action of the counter is as follows:
1. The first negative clock edge sets QA to get Q = 0001.
2. Since QA is 1, FF B is conditioned to toggle on the next negative clock edge.
3. When the second negative clock edge arrives, QB and QA simultaneously
toggle and the output word becomes Q = 0010. This process continues.
4. By adding more flip-flops and gates we can build a synchronous counter of
any length. The advantage of a synchronous counter is its speed: it takes only
one propagation delay time for the correct binary count to appear after the
clock edge arrives.
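The toggle rule described above — a stage changes state only when all lower stages are 1 — can be sketched informally in Python (behaviour only, no gate delays):

```python
# Behavioural sketch of the 4-bit synchronous counter of Figure 4.42.
# Stage A always toggles; a higher stage toggles only if every lower
# stage was 1 before the clock edge (the AND-gated J = K = 1 condition).

def step(q):
    """Advance the counter state q (bits listed LSB first: QA, QB, QC, QD)."""
    toggle = True                       # QA has J = K = 1 permanently
    new_q = []
    for bit in q:
        new_q.append(bit ^ 1 if toggle else bit)
        toggle = toggle and (bit == 1)  # AND of all lower (old) bits
    return new_q

q = [0, 0, 0, 0]
for pulse in range(4):
    q = step(q)
    print(pulse + 1, "".join(str(b) for b in reversed(q)))  # prints QD QC QB QA
# 1 0001, 2 0010, 3 0011, 4 0100
```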
Table 4.25 State Table of 4-Bit Synchronous Counter
State QD QC QB QA

0 0 0 0 0
1 0 0 0 1
2 0 0 1 0
3 0 0 1 1
4 0 1 0 0
5 0 1 0 1
6 0 1 1 0
7 0 1 1 1
8 1 0 0 0
9 1 0 0 1
10 1 0 1 0
11 1 0 1 1
12 1 1 0 0
13 1 1 0 1
14 1 1 1 0
15 1 1 1 1
0 0 0 0 0

Advantage of Synchronous Counters Over Asynchronous


In asynchronous counters, the propagation delays of the FFs add together to
produce the overall delay. In synchronous counters, the total response time is the sum
of the time it takes one FF to toggle and the time for the new logic levels to propagate
through a single AND gate to the J, K inputs. The total delay is the same no matter
how many FFs are in the counter. Thus, a synchronous counter can operate at a
much higher input frequency. However, the circuitry of the synchronous counter is more
complex than that of the asynchronous counter.

Check Your Progress


11. Define registers.
12. What are memory and shift registers?
13. What do you understand by MOD-number of a counter?

4.11 ANSWERS TO CHECK YOUR PROGRESS


QUESTIONS

1. Combinational logic determines the logical outputs. These outputs are


determined by the logical function being performed and the logical input
states at that particular moment.
2. An electronic device (combinational circuit) which performs arithmetic
addition of two bits is called a half-adder.
3. A full-adder is a combinational circuit that performs the arithmetic sum of
three input bits and produces a sum and a carry.
4. Decoding is necessary in such applications as data multiplexing, rate
multiplying, digital display, digital-to-analog converters and memory
addressing.
5. An encoder is a digital circuit that performs the inverse operation of a decoder
and the opposite of the decoding process is called encoding. An encoder is
also a combinational logic circuit that converts an active input signal into a
coded output signal.
6. A digital multiplexer or a data selector (MUX) is a combinational circuit
that accepts several digital data inputs and selects one of them and transmits
information on a single output line.
7. A demultiplexer is a combinational logic circuit that receives information on
a single line and transmits this information on one of the many output lines.
8. The latch with the additional control input is called the flip-flop.
9. Flip-flops are of different types depending on how their inputs and clock
pulses cause transition between two states. There are four basic types,
namely, S-R, J-K, D and T flip-flops.
10. The T-type flip-flop is obtained from a J-K flip-flop by connecting its J and
K inputs together.
11. A register is a group of flip-flops used to store or manipulate data or both.
Each flip-flop is capable of storing one bit of information.
12. A register stores a sequence of 0s and 1s. Registers that are used to store
information are known as memory registers. If they are used to process
information, they are called shift registers.
13. The MOD-number (or the modulus) of a counter is the total number of
states which the counter goes through in each complete cycle.

4.12 SUMMARY

 The outputs of combinational logic circuits are only determined by their


current input state as they have no feedback, and any changes to the signals
being applied to their inputs will immediately have an effect at the output.
 Common combinational circuits made up from individual logic gates include
multiplexers, decoders and demultiplexers, full and half adders etc. One of
the most common uses of combination logic is in multiplexer and demultiplexer
type circuits.
 A sequential circuit uses flip flops. Unlike combinational logic, sequential
circuits have state, which means basically, sequential circuits have memory.
The main difference between sequential circuits and combinational circuits
is that sequential circuits compute their output based on input and state, and
that the state is updated based on a clock.
 An electronic device (combinational circuit), which performs arithmetic
addition of two bits is called a half-adder.
 Decoding is necessary in such applications as data multiplexing, rate
multiplying, digital display, digital-to-analog converters and memory
addressing.
 An encoder is a digital circuit that performs the inverse operation of a
decoder. Hence, the opposite of the decoding process is called encoding.
 Multiplexer means ‘many into one’. Multiplexing is the process of transmitting

a large number of information units over a small number of channels or lines.


 The latch with the additional control input is called the flip-flop. The additional
control input is either the clock or enable input.
 The D (delay) flip-flop has only one input called the Delay (D ) input and
two outputs Q and Q’.
 A J-K flip-flop has a characteristic similar to that of an S-R flip-flop. In
addition, the indeterminate condition of the S-R flip-flop is permitted in it.
Inputs J and K behave like inputs S and R to set and reset the flip-flop,
respectively.
 T or Trigger or Toggle flip-flop, has only a single data (T) input, a clock
input and two outputs Q and Q’. The T-type flip-flop is obtained from a J-
K flip-flop by connecting its J and K inputs together.
 A Master–Slave flip-flop can be constructed using two J-K flip-flops. The
first flip-flop, called the Master, is driven by the positive edge of the clock
pulse; the second flip-flop, called the Slave, is driven by the negative edge
of the clock pulse.
 Registers that are used to store information are known as memory registers.
If they are used to process information, they are called shift registers.
 A register which is capable of shifting data either left or right is called a
bidirectional shift register. A register that can shift in only one direction is
called a uni-directional shift register.
 The MOD-number (or the modulus) of a counter is the total number of
states which the counter goes through in each complete cycle.

4.13 KEY WORDS

 Full-adder: A combinational circuit that performs the arithmetic sum of


three input bits and produces a SUM and a CARRY.
 Half-subtractor: A combinational circuit which is used to perform
subtraction of two binary bits.
 Full-subtractor: A combinational circuit that performs 3-bit subtraction.
 Multiplexer: It acts like a digitally controlled multiposition switch where
the digital code applied to the SELECT input controls which data inputs
will be switched to the output.
 Flip-flop: It is the latch with the additional control input.
 Counter: It is a sequential circuit consisting of a set of flip-flops connected in
a specific manner to count the sequence of input pulses presented to it in
digital form.
4.14 SELF ASSESSMENT QUESTIONS AND
EXERCISES

Short Answer Questions


1. Write briefly on the operation of a full adder.
2. What is a basic two-input multiplexer?
3. Describe the operation of a D flip-flop.
4. What are the applications of T flip-flop?
5. What is register/shift register?
Long Answer Questions
1. Draw the circuit design for full-adder.
2. Why is 3-line-to-8-line decoder used in circuit design?
3. Draw the block diagram of an encoder.
4. Draw the logic diagram of a 1:4 demultiplexer.
5. Explain with a logic diagram how a J–K master-slave FF is triggered.
6. Describe synchronous counter operations.
7. Explain how an S-R FF can be converted into a D FF.

4.15 FURTHER READINGS

Basavaraj, B. and H.N. Shivashankar. 2004. Basic Electronics. New Delhi: Vikas
Publishing House Pvt. Ltd.
Kumar, A. Anand. 2003. Fundamentals of Digital Circuits. New Delhi: Prentice-
Hall of India.
Mano, Morris. 1979. Digital Logic and Computer Design. New Delhi: Prentice-
Hall of India.
Roth, Charles. 2001. Fundamentals of Logic Design. Thomson Learning.
Yarbrough, John M. 1996. Digital Logic Applications and Design. Thomson
Learning.

BLOCK - II
BASICS OF CPU AND BUSES
UNIT 5 CPU ESSENTIALS
Structure
5.0 Introduction
5.1 Objectives
5.2 Modern CPU Concepts
5.3 CPU: Circuit Size and Die Size
5.4 Answers to Check Your Progress Questions
5.5 Summary
5.6 Key Words
5.7 Self Assessment Questions and Exercises
5.8 Further Readings

5.0 INTRODUCTION

In this unit, you will learn about the Central Processing Unit (CPU), circuit size,
Die size and Processor Cooling. A central processing unit (CPU) is the electronic
circuitry within a computer that carries out the instructions of a computer program
by performing the basic arithmetic, logical, control and input/output (I/O) operations.
It is the most important component of the computer. A die, in the context
of integrated circuits, is a small block of semiconducting material on which a given
functional circuit is fabricated. The die size of the processor refers to its physical
surface area size on the wafer. The circuit size or feature size refers to the level of
miniaturization of the processor. To make more powerful processors,
more transistors are needed. A CPU cooler is a device designed to draw heat away
from the system CPU and other components in the enclosure.

5.1 OBJECTIVES

After going through this unit, you will be able to:


 Explain the CISC and RISC processors
 Understand the CPU circuit size and die size
 Discuss the processor clock
 Discuss processor cooling

5.2 MODERN CPU CONCEPTS

The Central Processing Unit (CPU) is the most important component of the
NOTES computer. The CPU itself is an internal part of the computer system and is usually
a microprocessor-based chip housed on single or at times multiple printed circuit
boards. The CPU is directly inserted on the motherboard and each motherboard
is compatible with a specific series of CPUs only. The CPU generates a lot of heat
and has a heat sink and a cooling fan attached on the top which helps it to disperse
heat.
The market of microprocessors is dominated primarily by Intel and AMD,
both of which manufacture IBM-compatible CPUs. Motorola also manufactures
CPUs for Macintosh-based PCs. Cyrix, another IBM-compatible CPU
manufacturer is next in line after Motorola in the market in terms of global sales.
Types of Processors
The brands of CPUs listed above are not the only differentiating factors between
processors. There are various technical aspects to these processors, which allow
us to differentiate between CPUs of different power, speed and processing
capability. Accordingly, each of these manufacturers sells numerous product lines
offering CPUs of different architecture, speed, price range, etc. The following are
the most common aspects of modern CPUs that enable us to judge their quality or
performance:
1. 32 or 64-Bit Architecture: A bit is the smallest unit of data that a computer
processes. 32 or 64-bit architecture refers to the number of bits that the
CPU can process at a time.
2. Clock Rate: The speed at which the CPU performs basic operations,
measured in Hertz (Hz) or in modern computers Megahertz – MHz or
Gigahertz – GHz.
3. Number of Cores: CPUs with more than one core are essentially multiple
CPUs running in parallel to enable more than one operation to be performed
simultaneously. Current ranges of CPUs offer up to eight cores. Currently,
the dual core, i.e., two cores CPU is most commonly used for standard
desktops and laptops and Quad core, i.e., four cores, is popular for entry
level servers.
4. Additional Technology or Instruction Sets: These refer to unique features
that a particular CPU or range of CPUs offer to provide additional processing
power or reduced running temperature. These range from Intel’s MMX,
SSE3 and HT to AMD’s 3DNOW and Cool n Quiet.
These technical factors are the basic way to judge how a CPU will perform.
It is important to consider multiple factors when looking at a CPU rather than just

the clock speed or any one specification on its own. For example, a 64-bit 3GHz

processor with one core may perform poorly in comparison to a 32-bit 2GHz
processor with two cores. Similarly, different processors are better suited for
different tasks. For instance, Motorola processors are always rated higher for
graphic applications than Intel or AMD processors, which perhaps explains why
Macintosh uses them for their computers. It is very easy for a single-core processor
to run music videos, the Internet applications or games individually, but when multiple
applications are run together, it starts to slow down. A system running on a dual-
core processor would be able to multitask better than a single-core processor,
while it is very easy for an 8-core processor to run all these applications plus a lot
more without showing any signs of slowing down. However, Intel’s 4-core
processors are actually two dual-core processors combined in a single processor,
whereas AMD’s 4-core processors are actually four processors built in a single
chip.
It is not true that the greater the number of processors, the faster it gets; but it is
true that the greater the number of processors, the higher the processing capability.
Therefore, a combination of the above-mentioned specifications, along with
the operating systems that the processor supports and the specific purpose for
which the computer is to be used, are the factors to be considered when deciding
which CPU is the most suitable for your needs.
Processor Clock
The speed at which the processor executes commands is called the processor
speed or clock speed. Every computer contains an internal clock (known as the
system clock) that regulates the rate at which the instructions are executed and
synchronizes the various computer components. The processor requires fixed
number of clock cycles (electric pulses per second) to execute each instruction.
Clock cycles are required to fetch, decode, and execute a single program
instruction. Thus, the shorter the clock cycle, the faster the processor.
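As a rough, made-up illustration of this relationship (the instruction count, cycles per instruction and clock rate below are assumed values, not figures from this unit), the execution time of a program can be estimated as instructions × cycles-per-instruction ÷ clock rate:

```python
# Hypothetical example: estimating execution time from the clock rate.

clock_rate_hz = 2.0e9            # an assumed 2 GHz processor
instruction_count = 1.0e9        # one billion instructions in the program
cycles_per_instruction = 1.5     # assumed average clock cycles per instruction

execution_time = instruction_count * cycles_per_instruction / clock_rate_hz
print(f"Estimated execution time: {execution_time:.2f} s")   # 0.75 s
```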
In a computer, clock speed, therefore refers to the number of pulses per
second generated by an oscillator that sets the tempo for the processor. It is usually
measured in MHz (Megahertz - Million of pulses per second) or GHz (Gigahertz
- Billions of pulses per second).
Computer clock speed has been roughly doubling every year. The Intel
8088, common in computers around the year 1990, ran at 4.77 MHz. Today’s
personal computers run at clock speeds of 100–1000 MHz and some even
exceed one gigahertz.
Although the processing speed in personal computers is measured in terms
of megahertz, the processing speed of mini computers and mainframe systems is
measured in terms of Millions of Instructions Per Second (MIPS) or Billions of
Instructions Per Second (BIPS). This is because personal computers generally

employ a single microprocessor chip as their CPU while other classes of computers
employ multiple processors to speed up their overall performance. Thus, a
minicomputer having a speed of 500 MIPS is capable of executing 500 million
instructions per second.
Clock speed is a measure of computer ‘power,’ but it is not always directly
proportional to the performance level. If you double the speed of the clock,
leaving all other hardware unchanged, you will not necessarily double the
processing speed. The type of microprocessor, the bus architecture, and the
nature of the instruction set all make a difference. In some applications the amount
of RAM is important too.
Processor/Computer Cooling
Computer cooling is required to remove the waste heat produced by computer
components, to keep components within permissible operating temperature limits.
Components that are susceptible to temporary malfunction or permanent failure if
overheated include integrated circuits such as central processing units (CPUs),
chipset, graphics cards, and hard disk drives.
Components are often designed to generate as little heat as possible, and
computers and operating systems may be designed to reduce power consumption
and consequent heating according to workload, but more heat may still be
produced than can be removed without attention to cooling. Use of heatsinks
cooled by airflow reduces the temperature rise produced by a given amount of
heat. Attention to patterns of airflow can prevent the development of hotspots.
Computer fans are widely used along with heatsink fans to reduce temperature
by actively exhausting hot air.
Pipelining
Pipelining is a technique that decomposes any sequential process into smaller
subprocesses, which are independent of each other so that each subprocess can
be executed in a special dedicated segment and all these segments operate
concurrently. Thus, the whole task is partitioned into independent tasks and these
subtasks are executed by a segment. The result obtained as an output of a segment
(after performing all computations in it) is transferred to the next segment in the pipeline,
and the final result is obtained after the data have passed through all segments. It
follows that each segment consists of an input register followed by a
combinational circuit. The combinational circuit performs the required sub-
operation and the register holds the intermediate result. The output of one combinational
circuit is given as input to the next segment.
The concept of pipelining in computer organization is analogous to an industrial
assembly line: just as an assembly line has divisions such as manufacturing,
packing and delivery all working at the same time, a pipeline speeds up the overall process.

Pipelining can be effectively implemented for systems having the following
characteristics:
 The system should repeatedly execute a basic function.
 The basic function must be divisible into independent stages such that each NOTES
stage has minimal overlap.
 The complexity of the stages should be roughly similar.
Pipelining in computer organization involves a steady flow of information through the segments. To
understand how it works for computer systems, let us consider a process which
involves four steps/segments and is to be repeated six times. If a single
step takes t nsec, then the time required to complete one process is 4t nsec, and
to repeat it six times without pipelining we require 24t nsec. With a four-segment
pipeline, however, the first result appears after 4t nsec and a new result emerges
every t nsec thereafter, so all six processes complete in (4 + 6 – 1)t = 9t nsec.
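A small sketch of the timing comparison above, assuming the ideal case of equal stage times and no stalls:

```python
# Ideal pipeline timing: k segments, n repetitions, each segment taking t.

def sequential_time(k, n, t=1):
    return k * n * t                 # every task runs all k segments back to back

def pipelined_time(k, n, t=1):
    return (k + n - 1) * t           # fill the pipe (k), then one result per t

k, n = 4, 6                          # four segments, six repetitions
print(sequential_time(k, n), "t  vs ", pipelined_time(k, n), "t")   # 24 t vs 9 t
```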
Instruction Sets: CISC and RISC
Along with the reduction in hardware prices and the increase in the number of
computer instructions, the complexity of the computer system has also increased.
New models can provide more customer-based applications. It is easier to add
more instructions to facilitate the translation of high-level language programs into
machine language. A computer with a large number of instructions is classified as a
Complex Instruction Set Computer (CISC).
Complex Instruction Set Computer (CISC)
While designing the CISC computers, the main concern was to develop a program
using high-level language. For this, it was required to translate high-level language
into machine-level language with the help of a compiler. Thus, the basic purpose of
developing a CISC was to provide simplification in compilation. The goal of CISC
architecture was to provide a single instruction for each statement written in a
high-level language. The CISC processor was so designed that it was easy to
program and make an efficient use of memory. This technology was commonly
implemented in such large computers as the PDP-11 and the DEC systems. The
CISC philosophy became popular with the advent of the Intel family. Most
common microprocessor designs, such as the Intel 80x86 and Motorola 68K
series, are designed on the CISC philosophy. However, current trends are
hybrids of CISC and RISC technologies. Thus, CISC and RISC share many
principles.
Initially, CISC machines used available technologies to optimize computer
performance. Later, microprogrammed control units were designed. Written
in assembly language and easy to implement, a microprogrammed control unit is much
less expensive than a hardwired control unit. Because of the ease of microprogramming,
new instructions can easily be introduced, allowing the designers to make CISC
machines compatible with previous ones. Now a computer designed with new
microprogramming code could run the same programs which were executed in
earlier computers as it contains a superset of the instructions of the earlier
computers. As these microprogrammed instruction sets are written with an aim
to match the programming done by the high-level languages, the design of the
compiler should not be complicated.

Characteristics of CISC


The characteristics of CISC may be summarized as follows:
 A large number of instructions, typically about 100 to 250 instructions.
 Complex operations resulting in multiple clock cycles to execute an instruction
and long execution time.
 A large variety of addressing modes, typically from 5 to 20 different
modes, e.g., multiple addressing modes for memory, including specialized
modes for indexing through arrays.
 Variable-length instruction format, where the length varies according to the
addressing mode, and a 2-operand format, where instructions have a source and
a destination; possible operations involve data transfer from register to
register, register to memory, and memory to register.
 Requirement of complex hardware for instruction decoding, especially as
there are cases of single instructions supporting multiple addressing modes.
 Minimum number of instructions needed for a given task.
 Easy to program.
 Lesser load on compiler.
 Direct manipulation of operands residing in memory requiring instructions
to be designed such that they manipulate operands in memory.
 As more instructions and addressing modes are incorporated, the complexity
of the hardware logic, its implementation and the support required for them may
result in reduced speed of computation.
 Requirement of a small number of general-purpose registers. As direct
manipulation on memory can be taken, no registers (dedicated space) are
required for instruction decoding, execution and microcode storage.
 Deployment of several special-purpose registers, such as for the stack
pointer, interrupt handling, and so on. Using these makes the instruction set
more complex but will result in simplified hardware design.
 Use of a ‘condition code’ register in almost all instructions, which holds the
information, such as the result of the last operation which is less than, equal
to, or greater than zero and records if certain error conditions occur.
 Based on microprogrammed control unit, leading to microencoding of the
machine instructions.
 Use in chips such as the newer x86 and VAX models; e.g., Pentium systems
are based on CISC technology.
Reduced Instruction Set Computer (RISC)

Pronounced as ‘risk’, RISC is a type of microprocessor that is designed with


a limited number of instructions. With the advent of the X86 series, computer
manufacturers tended to design CPUs with complex and large instruction sets,
resulting in complex hardware. But in research by U.C. Berkeley and IBM in
the early 1980s, researchers found that most computer language compilers and
interpreters used only a small subset of the instructions of a CISC. In his research
in 1972, John Cocke of IBM Research found that a computer used only twenty
per cent of the instructions, i.e., the remaining eighty per cent were superfluous.
Hence, a demand arose to design processors with a simpler and less orthogonal
instruction set, so that their execution would be fast as well as less expensive. CPUs
became faster as calculations involved fewer memory accesses.
To implement this concept, computer designers experimented to design a
processor using large sets of internal registers. A processor based upon this concept
has a small instruction set, requiring fewer transistors. This makes its manufacturing
cheaper. Also, as the use of transistors and instructions is restricted to only those
that are most frequently used, the computer does more work in less time.
The term ‘RISC’ was later coined by David Patterson, a teacher at the
University of California in Berkeley. After the emergence of RISC computers, it
became common practice to refer to conventional computers as CISCs. RISCs generally
had larger numbers of registers, accessed by a simple instruction set with load and
store operations for transferring data to and from memory. The result was a very simple core
CPU running at very high speed and supporting all the types of operations that compilers
were using.
The RISC architecture is basically designed on the Harvard architecture, in
contrast to the Von Neumann (stored program) architecture on which most pre-
RISC processors were designed. In the Harvard architecture, machine program
and data are two different entities. So, separate memory devices are used to store
the program and data, but these are accessed simultaneously. The Von Neumann
architecture, on the other hand, considers the data and programs as the same
entity and stores them in a single memory device. As both of them are accessed
sequentially, it produces the so-called ‘Von Neumann bottleneck’.
Characteristics of RISC
The characteristics of RISC are as follows:
 Small instruction set; less than 150 machine instructions.
 Simple instructions, usually register-based instructions to allow fast execution.
 Simple addressing modes (less than 2) to allow fast address computation,
as instructions are register based so complex addressing modes are not
used.

CPU Essentials  Fixed-length instructions, i.e., all instructions have the same length of 32
bits (or 64 bits).
 Small number of the instruction format; less than 2.
NOTES  Fields aligned in instruction to allow fast instruction decoding.
 Presence of both operands in registers to allow short fetch time.
 Large number of General Purpose Registers (GPRs); more than 32.
 Only one main memory access per instruction.
 Only read/write (load/store) instructions to access the main memory.
 Translation of the complex tasks into simple operations by the compiler,
increasing the compiler complexity and compiling time.
 Compiler in RISC processors not developed for a specific chip; instead, it
is developed in conjunction with the chip to produce one unit.
 Simpler and faster hardware implementation.
 Very suitable for pipelined architecture.
 Single cycle execution.
 Design dominated by hardwired control unit.
 Supportive to the High-Level Language (HLL).
 All operations other than memory access are carried out on registers.
 Registers managed in the form of a variable window in some RISC
processors, allowing a ‘look’ at certain register files instead of using registers
as AX, BX, etc.
Advantage of RISC Machines
RISC machine has the following advantages over CISC machine:
 Smaller instruction set.
 Single-cycle execution resulting in faster execution.
 Fast instruction decoding because of fixed format.
 Easy implementation of pipelining in instruction through interleaving many
instructions.
 Memory access done only by load/store instructions; execution of all other
instructions using internal registers only.
 Simple design and short design time.
 Best target for the state-of-the-art optimizing compilation techniques.
 Simplified interrupt service logic.

5.3 CPU: CIRCUIT SIZE AND DIE SIZE

The circuit size or feature size refers to the level of miniaturization of the processor.
To make more powerful processors, more transistors are needed. In order to
pack more transistors into the same space, they must be continually made smaller
and smaller.
Die size
The die or processor die is a rectangular pattern on a wafer that contains circuitry
to perform a specific function; a single silicon wafer typically carries hundreds
of such dies. A die, in the context of integrated circuits, is a small
block of semiconducting material on which a given functional circuit is fabricated.
Typically, integrated circuits are produced in large batches on a single wafer of
electronic-grade silicon (EGS) or other semiconductor (such as GaAs) through
processes such as photolithography. The wafer is cut (“diced”) into many pieces,
each containing one copy of the circuit. Each of these pieces is called a die.

Check Your Progress


1. Define central processing unit.
2. What is processor clock?
3. What is the requirement for processor cooling?

5.4 ANSWERS TO CHECK YOUR PROGRESS


QUESTIONS

1. A central processing unit (CPU) is the electronic circuitry within a computer


that carries out the instructions of a computer program by performing the
basic arithmetic, logical, control and input/output (I/O) operations specified
by the instructions.
2. The speed at which the processor executes commands is called the processor
speed or clock speed.
3. Computer cooling is required to remove the waste heat produced by
computer components.

5.5 SUMMARY

 A central processing unit (CPU) is the electronic circuitry within a computer


that carries out the instructions of a computer program by performing the
basic arithmetic, logical, control and input/output (I/O) operations.

CPU Essentials  The speed at which the processor executes commands is called the processor
speed or clock speed.
 Pipelining is a technique that decomposes any sequential process into smaller
subprocesses.
 Computer cooling is required to remove the waste heat produced by
computer components.
 The system clock switches between zero and one at a rate of millions of
times per second.
 Overclocking a CPU is the process of increasing the clock speed that the
CPU operates.
 RISC is a type of microprocessor that is designed with limited number of
instructions.
 In CISC computers, the main concern was to develop programs using high-
level languages.

5.6 KEY WORDS

 Pipelining: Pipelining is a technique that decomposes any sequential process


into smaller subprocesses.
 Processor Clock: The speed at which the processor executes commands
is called the processor speed or clock speed.

5.7 SELF ASSESSMENT QUESTIONS AND


EXERCISES

Short Answer Questions


1. Define central processing unit.
2. What is the significance of processor clock?
3. What is system clock?
Long Answer Questions
1. What do you understand by circuit and die size? Explain.
2. Describe the characteristics of CISC and RISC?

5.8 FURTHER READINGS

Bhatt, Pramod Chandra P. 2003. An Introduction to Operating Systems—


Concepts and Practice. New Delhi: PHI.
Bhattacharjee, Satyapriya. 2001. A Textbook of Client/Server Computing. New

Delhi: Dominant Publishers and Distributers.


Hamacher, V.C., Z.G. Vranesic and S.G. Zaky. 2002. Computer Organization,
5th edition. New York: McGraw-Hill International Edition.
Mano, M. Morris. 1993. Computer System Architecture, 3rd edition. New Jersey:
Prentice-Hall Inc.
Nutt, Gary. 2006. Operating Systems. New Delhi: Pearson Education.


UNIT 6 COMPUTER MEMORY


Structure
6.0 Introduction
6.1 Objectives
6.2 Memory System
6.3 Physical Devices Used to Construct Memories
6.4 Answers to Check Your Progress Questions
6.5 Summary
6.6 Key Words
6.7 Self Assessment Questions and Exercises
6.8 Further Readings

6.0 INTRODUCTION

The computer memory is an essential part of a computer system. Memory can be


divided into two types, primary memory and secondary memory. The main memory
communicates directly with the CPU. The secondary memory communicates with
the main memory through the I/O processor. The main memory is of two types—
RAM and ROM. DRAM is the main memory in a computer and SRAM is used
for high-speed caches and buffers.

6.1 OBJECTIVES

After going through this unit, you will be able to:


 Understand the memory hierarchy
 Discuss dynamic RAM and static RAM
 Explain the physical devices used to construct memories

6.2 MEMORY SYSTEM

The memory hierarchy consists of the total memory system of any computer. The
memory components range from higher capacity slow auxiliary memory to a
relatively fast main memory to cache memory that can be accessible to the high
speed processing logic. A five-level memory hierarchy is shown in Figure 6.1.
At the top of this hierarchy, there is a Central Processing Unit (CPU) register
which is accessed at full CPU speed. This is local memory to the CPU as the CPU
requires it. Next comes cache memory, which is currently on the order of 32 KB
to a few megabytes. After that is the main memory, with sizes currently ranging from
16 MB for an entry-level system to a few gigabytes at the other end. Next are
magnetic disks, and finally we have magnetic tape and optical disks.
The memory, as we move down the hierarchy, mainly depends on the

following three key parameters:


 Access Time
 Storage Capacity NOTES
 Cost
 Access Time: CPU registers are the CPU’s local memory and are
accessed in a few nanoseconds. Cache memory access takes a small multiple of
the register access time. Main memory access time is typically a few tens of
nanoseconds.
Now comes a big gap, as disk access times are at least 10 milliseconds
(ms), and tape and optical disk access may be measured in seconds if
the media have to be fetched and inserted into a drive.
 Storage Capacity: The storage capacity increases as we go down the
hierarchy. CPU registers hold on the order of 128 bytes. Cache memories are a
few megabytes (MB). Main memory is tens to thousands of MB. Magnetic
disk capacities range from a few gigabytes (GB) to tens of GB. Tapes and
optical disks are usually kept offline, so their total capacity is limited only by
the number of media available.

(Registers → Cache → Main Memory → Magnetic Disk → Magnetic Tape / Optical Disk)

Fig. 6.1 Five-level Memory Hierarchy

Another way of viewing the memory hierarchy in any computer system is


illustrated in Figure 6.2. The main memory is at the central place as it can
communicate directly with the CPU and through the Input/Output or I/O processor
with the auxiliary devices. Cache memory is placed in between the CPU and the
main memory.
Cache usually stores the program segments currently being executed in the
CPU and temporary data frequently asked by the CPU in the present calculations.
The I/O processor manages the data transfer between the auxiliary memory and
the main memory. The auxiliary memory has usually a large storing capacity but
has low access rate as compared to the main memory and hence, is relatively
inexpensive. Cache is very small but has very high access speed and is relatively
expensive. Thus, we can say that
Access speed ∝ Cost

Thus, the overall goal of using a memory hierarchy is to obtain the highest
possible average speed while minimizing the total cost of the entire memory system.
[Figure 6.2 shows the CPU connected to cache memory and main memory, with the I/O processor linking main memory to the auxiliary devices (magnetic disks and magnetic tapes).]

Fig. 6.2 Memory Hierarchy System

Main Memory: Basics


The memory unit that communicates directly with the CPU is called main memory.
It is relatively large and fast and is basically used to store programs and data
during computer operation. The main memory can be classified into the following
two categories:
RAM
The term, Random Access Memory (RAM), is basically applied to the memory
system that is easily read from and written to by the processor. For a memory to
be random access means that any address can be accessed at any time, i.e., any
memory, location can be accessed in a random manner without going through any
other memory location. The access time for each memory location is the same.
In general, RAM is the main memory of a computer system. Its purpose is
to store data and applications that are currently in use by the processor. The
operating system controls the use of RAM memory by taking different decisions,
such as when items are to be loaded into RAM, at what memory location items
are to be loaded in RAM, and when they need to be removed from RAM. RAM
is very fast memory, both for reading and writing data. Hence, a user can write
information into RAM and can also read information from it. Information written
in it is retained as long as the power supply is on. All stored information in RAM is
lost as soon as the power is off.
The two main classifications of RAM are Static RAM (SRAM) and Dynamic
RAM (DRAM).
Static RAM or SRAM
Static RAM is made from an array of flip-flops where each flip-flop maintains a
single bit of data within a single memory address or location.
SRAM is a type of RAM that holds its data without external refresh as long
as power is supplied to the circuit. The word ‘static’ indicates that the memory
retains its content as long as power is applied to the circuit. The following are the

characteristics of SRAM:
 It is a type of semiconductor memory.
 It does not require any external refresh circuitry in order to keep data intact.
 SRAM is used for high speed registers, caches and small memory banks
such as router buffers.
 It has access time in the range of 10 to 30 nanoseconds and hence allows
for very fast access.
 It is very expensive.
Dynamic RAM or DRAM
Dynamic RAM is a type of RAM that only holds its data if it is continuously
accessed by special logic called a refresh circuit. This circuitry reads the contents of
each memory cell many hundreds of times per second, whether or not the
memory cell is currently being used by the computer. Due to the way in which
the memory cells are constructed, the reading action itself refreshes the contents
of the memory. If this is not done regularly, then DRAM will lose its contents even
if it continues to have power supplied to it. Because of this refreshing action, the
memory is called dynamic. The following are the characteristics of DRAM:
 It is the most common type of memory in use. All Personal Computers or
PCs use DRAM for their main memory system instead of SRAM.
 DRAM has much higher capacity.
 It is much cheaper than SRAM.
 It is slower than SRAM because of the overhead of the refresh circuitry.
ROM
In every computer system, there is a portion of memory that is stable and impervious
to power loss. This type of memory is called Read Only Memory or in short
ROM. It is non-volatile memory, i.e., information stored in it is not lost even if the
power supply goes off. It is used for permanent storage of information and it
possesses random access property.
The most common application of ROM is to store the computer’s Basic
Input-Output System (BIOS), since the BIOS is the code that tells the processor
how to access its resources on powering up the system. Another application is
storing the code for embedded systems.
There are different types of ROMs. They are as follows:
 PROM or Programmable Read Only Memory: In an ordinary ROM, data is written
at the time of manufacture; in a PROM, however, the contents can be programmed
by the user with a special PROM programmer. PROM provides flexible and
economical storage for fixed programs and data.
Computer Memory  EPROM or Erasable Programmable Read Only Memory: This allows
the programmer to erase the contents of the ROM and reprogram it. The
contents of EPROM cells can be erased using ultraviolet light and an
EPROM programmer. This type of ROM provides more flexibility than
ROM during the development of digital systems. Since they are able to
retain the stored information for a longer duration, any change can be easily
made.
 EEPROM or Electrically Erasable Programmable Read Only
Memory: In this type of ROM, the contents of the cell can be erased
electrically by applying a high voltage. EEPROM need not be removed
physically for reprogramming.
Organization of RAM and ROM Chips
A RAM chip is best suited for communication with the CPU if it has one or more
control lines to select the chip only when needed. It has a bidirectional data bus
that allows the transfer of data either from the memory to the CPU during a read
operation or from the CPU to the memory during a write operation. This
bidirectional bus can be constructed using a three-state buffer. A three-state buffer
has the following three possible states:
 Logic 1
 Logic 0
 High impedance
The high impedance state behaves like an open circuit which means that the
output does not carry any signal. It leads to very high resistance and hence no
current flows.
[The 128 × 8 RAM chip has two chip select inputs (CS1 and CS2), read (RD) and write (WR) inputs, a 7-bit address input (AD1–AD7) and an 8-bit bidirectional data bus.]

Fig. 6.3 Block Diagram of RAM Chip

Figure 6.3 shows the block diagram of a RAM chip. The capacity of memory
is 128 words of 8 bits per word. This requires a 7-bit address and an 8-bit
bidirectional data bus.
RD and WR are read and write inputs that specify the memory operations
during read and write, respectively. Two Chip Select (CS) control inputs are for
enabling the particular chip only when it is selected by the microprocessor. The
operation of the RAM chip will be according to the function table as shown in Table
6.1. The unit is in operation only when CS1 = 1 and CS2 = 0. The bar on top of the
second chip select indicates that this input is enabled when it is equal to 0.
Table 6.1 Function Table

CS1 CS2 RD WR Memory Function State of Data Bus


0 0 × × Inhibit High-impedance
0 1 × × Inhibit High-impedance
1 0 0 0 Inhibit High-impedance
1 0 0 1 Write Input data to RAM
1 0 1 × Read Output data from RAM
1 1 × × Inhibit High-impedance

Thus, if the chip select inputs are not enabled, or if they are enabled but
read and write lines are not enabled, the memory is inhibited and its data bus is in
high impedance. When chip select inputs are enabled, i.e., CS1 = 1 and CS2 = 0,
the memory can be in the read or write mode. When the write WR input is enabled,
a byte is transferred from the data bus to the memory location specified by the
address lines. When the read RD input is enabled, a byte from the memory
location specified by the address lines is placed on the data bus.
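The chip-select and read/write behaviour summarized in Table 6.1 can be sketched informally in Python; the class below is only an illustration of the control logic, not a model of any real device:

```python
# Informal model of the 128 x 8 RAM chip of Figure 6.3 and Table 6.1.
# The chip responds only when CS1 = 1 and CS2 = 0; otherwise the data bus
# stays in the high-impedance state (represented here by None).

class RamChip:
    def __init__(self, words=128):
        self.cells = [0] * words

    def access(self, cs1, cs2, rd, wr, address, data_in=None):
        if not (cs1 == 1 and cs2 == 0):
            return None                            # not selected: high impedance
        if wr == 1:
            self.cells[address] = data_in & 0xFF   # write a byte from the data bus
            return None
        if rd == 1:
            return self.cells[address]             # read a byte onto the data bus
        return None                                # neither RD nor WR: inhibited

chip = RamChip()
chip.access(1, 0, 0, 1, address=5, data_in=0xA7)   # write
print(hex(chip.access(1, 0, 1, 0, address=5)))     # read -> 0xa7
```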
A ROM chip is organized in the same way as a RAM chip. The block
diagram of a ROM chip is shown in Figure 6.4.

[The 512 × 8 ROM chip has two chip select inputs (CS1 and CS2), a 9-bit address input (AD1–AD9) and an 8-bit data bus.]

Fig. 6.4 Block Diagram of a ROM Chip

The two chip select lines must be CS1 = 1 and CS2 = 0 for the unit to be
operational. Otherwise, the data bus is in high impedance. There is no need for the
read and write input control because the unit can only read. Thus, when the chip is
selected, the byte selected by the address lines appears on the data bus.
Memory Address Map
A table called a memory address map is a pictorial representation of the assigned
address space for each chip in the system.
The interconnection between the memory and the processor is established
from the knowledge of the size of memory required and the types of RAM and
ROM chips available. RAM and ROM chips are available in a variety of sizes. If
a memory needed for the computer is larger than the capacity of one chip, it is
necessary to combine a number of chips to get the required memory size. If the
required size of the memory is M × N and if the chip capacity is m × n, then the
number of chips required can be calculated as

M N
k =
mn
Suppose, a computer system needs 512 bytes of RAM and 512 bytes of
ROM. The capacity of a RAM chip is 128 × 8 and that of the ROM chip is 512 × 8. Hence,
the number of RAM chips required will be:

k = (512 × 8) / (128 × 8) = 4 RAM chips
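The chip-count calculation above can be written as a one-line function (illustrative only, using the same 512-byte example):

```python
# Number of m x n chips needed for an M x N memory: k = (M * N) / (m * n)

def chips_required(total_words, total_bits, chip_words, chip_bits):
    return (total_words * total_bits) // (chip_words * chip_bits)

print(chips_required(512, 8, 128, 8))   # 4 RAM chips of 128 x 8
print(chips_required(512, 8, 512, 8))   # 1 ROM chip of 512 x 8
```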

One ROM chip will be required by the computer system. The memory
address map for the system is illustrated in Table 6.2, which consists of three
columns. The first column specifies whether a RAM or a ROM chip is used. The
next column specifies a range of hexadecimal addresses for each chip. The third
column lists the address bus lines.
Table 6.2 Memory Address Map

Component    Hexadecimal Address    Address Bus Lines: 10 9 8 7 6 5 4 3 2 1
RAM 1 0000–007F 0 0 0 × × × × × × ×
RAM 2 0080–00FF 0 0 1 × × × × × × ×
RAM 3 0100–017F 0 1 0 × × × × × × ×
RAM 4 0180–01FF 0 1 1 × × × × × × ×
ROM 0200–03FF 1 × × × × × × × × ×

Table 6.2 shows only 10 lines for the address bus although the address bus
consists of 16 lines; the remaining six high-order lines are assumed to be zero. The RAM chip has
128 bytes and needs seven address lines, which are common to all four RAM
chips. The ROM chip has 512 bytes and needs nine address lines. Thus, ×’s are
assigned to the low-order bus lines, line 1 to 7 for the RAM chip and lines 1
through 9 for the ROM chip, where these ×’s represent a binary number, which is
a combination of all possible 0’s and 1’s values. Also, there must be a way to
distinguish between the four RAM chips. Lines 8 and 9 are used for this purpose.
If lines 9 and 8 are 00, it is a RAM1 chip; if it is 01, it is a RAM2 chip; if it is 10,
it is RAM3 chip and if the value of lines 9 and 8 is 11, it is RAM4 chip. Also, the
distinction between RAM and ROM is required. This distinction is done with the
help of line 10. When line 10 is 0, the CPU selects one of the RAM chips, and
when line 10 is 1, the CPU selects the ROM chip.
Memory Connection to CPU
The CPU connects the RAM and ROM chips through the data and address buses.
Low-order lines within the address bus select the byte within the chip and other
lines select a particular chip through its chip select lines. The memory chip

connection to the CPU is shown in Figure 6.5. Seven low-order bits of the address
bus are connected directly to each RAM chip to select one of 128 possible bytes.
[Figure 6.5 shows the CPU address bus (lines 1–16): a 2 × 4 decoder driven by address lines 8 and 9 feeds the CS1 inputs of the four 128 × 8 RAM chips, line 10 selects between RAM (CS2) and the 512 × 8 ROM, and the RD/WR control lines and the 8-bit data bus are common to all chips.]
Fig. 6.5 Memory Connection to the CPU

Lines 8 and 9 are connected as inputs to a 2 × 4 decoder, whose outputs


are connected to the CS1 input of each RAM chip. Thus, when lines 8 and 9 are
00, the first RAM chip is selected. The RD and WR outputs from the CPU are
connected to the inputs of each RAM chip. To distinguish between RAM and
ROM, line 10 is used. When line 10 is 0, RAM is selected and when line 10 is 1,
ROM is selected. Line 10 is connected to CS2 of each RAM chip and with the
help of an inverter to CS2 of the ROM chip. The other chip select input (CS1) of the ROM is
connected to the RD control line. Address bus lines 1 to 9 are applied to the input
address of ROM. The data bus lines of RAM and ROM are connected to the
data bus of the CPU where RAM can transfer information in both directions
whereas the ROM has only an output capability.
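The address decoding described above (decoder on lines 8 and 9, RAM/ROM selection on line 10) can be sketched as a small Python function; the chip names and return values are purely illustrative:

```python
# Illustrative decode of a 10-bit address into (chip, offset), following the
# scheme of Figure 6.5: line 10 selects RAM vs ROM, lines 9 and 8 pick one of
# the four 128-byte RAM chips, and the low-order lines give the byte offset.

def decode(address):
    line10 = (address >> 9) & 1              # bus line 10 is address bit 9
    if line10 == 0:                          # RAM area: 0000-01FF
        chip = (address >> 7) & 0b11         # bus lines 9 and 8 -> RAM1..RAM4
        return f"RAM{chip + 1}", address & 0x7F    # 7-bit offset within the chip
    return "ROM", address & 0x1FF            # 9-bit offset within the 512-byte ROM

print(decode(0x0005))   # ('RAM1', 5)
print(decode(0x0185))   # ('RAM4', 5)
print(decode(0x0205))   # ('ROM', 5)
```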
6.3 PHYSICAL DEVICES USED TO CONSTRUCT
MEMORIES
Semiconductors are used to construct memory elements such as flip-flops and registers.
Most semiconductor memory is organized into memory cells or bi-stable flip-flops,
each storing one bit (0 or 1). Flash memory organization includes both one bit per
memory cell and multiple bits per cell (called MLC, Multiple Level Cell). The
memory cells are grouped into words of fixed word length, for example 1, 2, 4, 8,
16, 32, 64 or 128 bits. Each word can be accessed by a binary address of N bits,
making it possible to store 2^N words in the memory. This implies
that processor registers normally are not considered as memory, since they only
store one word and do not include an addressing mechanism. You have already
learnt about flip-flops and registers in Unit 4.

Check Your Progress


1. What is Memory?
2. What is memory address map?

6.4 ANSWERS TO CHECK YOUR PROGRESS


QUESTIONS
1. Memory is the part of a computer system in which information (programs and
data) is encoded, stored, and retrieved when needed.
2. A table called a memory address map is a pictorial representation of the
assigned address space for each chip in the system.

6.5 SUMMARY
 Memory is the part of a computer system in which information is encoded,
stored, and retrieved when needed.
 RAM is the main memory of a computer system. Its purpose is to store
data and applications that are currently in use by the processor.
 SRAM is a type of RAM that holds its data without external refresh as long
as power is supplied to the circuit.
 Dynamic RAM that only holds its data if it is continuously accessed by
special logic called refresh circuit.
 ROM (Read Only Memory) is memory which we can only read but cannot write to.
This type of memory is non-volatile. The information is stored permanently
in such memories during manufacture.
 A table called a memory address map is a pictorial representation of the

assigned address space for each chip in the system.


 The memory cell is the fundamental building block of computer memory.
 Given the address of the first element of an array, the computer can easily
calculate the memory address of any other element of the array.

6.6 KEY WORDS


 Main memory: Communicates directly with the CPU and with the auxiliary
devices through the I/O processor
 RAM: Main memory of a computer system
 DRAM: A type of RAM that only holds its data if it is continuously accessed
by special logic called refresh circuit

6.7 SELF ASSESSMENT QUESTIONS AND


EXERCISES

Short Answer Questions


1. What is RAM? Discuss its types.
2. What is Memory Cell?
Long Answer Questions
1. Describe memory address map.
2. Explain Random Access Memory (RAM) and Read Only Memory (ROM).
3. What are the devices used to construct memories?

6.8 FURTHER READINGS


Bhatt, Pramod Chandra P. 2003. An Introduction to Operating Systems—
Concepts and Practice. New Delhi: PHI.
Bhattacharjee, Satyapriya. 2001. A Textbook of Client/Server Computing. New
Delhi: Dominant Publishers and Distributers.
Hamacher, V.C., Z.G. Vranesic and S.G. Zaky. 2002. Computer Organization,
5th edition. New York: McGraw-Hill International Edition.
Mano, M. Morris. 1993. Computer System Architecture, 3rd edition. New Jersey:
Prentice-Hall Inc.
Nutt, Gary. 2006. Operating Systems. New Delhi: Pearson Education.


UNIT 7 BUS
Structure
7.0 Introduction
7.1 Objectives
7.2 Bus Interface and Expansion Slots
7.2.1 Industry Standard Architecture
7.2.2 Extended Industry Standard Architecture
7.2.3 Micro Channel Architecture
7.2.4 Video Electronics Standards Association
7.2.5 Peripheral Component Interconnect or Personal Computer Bus
7.2.6 Accelerated Graphics Port
7.3 FSB
7.4 USB
7.5 Dual Independent Bus
7.6 Answers to Check Your Progress Questions
7.7 Summary
7.8 Key Words
7.9 Self Assessment Questions and Exercises
7.10 Further Readings

7.0 INTRODUCTION

In this unit, you will learn about expansion slots, USB and dual independent bus.
An expansion slot is a socket on the motherboard that is used to insert an expansion
card (or circuit board), which provides additional features to a computer such as
video, sound, advanced graphics, Ethernet or memory. The expansion card has
an edge connector that fits precisely into the expansion slot as well as a row of
contacts that is designed to establish an electrical connection between the
motherboard and the electronics on the card, which are mostly integrated circuits.
Depending on the form factor of the case and motherboard, a computer system
generally can have anywhere from one to seven expansion slots. There are several
types of expansion slots, including AGP (Accelerated Graphics Port), PCIe (also
known as PCI express), PCI (Peripheral Component Interconnect) and ISA
(Industry Standard Architecture).

7.1 OBJECTIVES

After going through this unit, you will be able to:


 Explain the various types of expansion slots
 Understand the universal serial bus
 Explain FSB and dual independent bus
7.2 BUS INTERFACE AND EXPANSION SLOTS
Expansion slots are located on the motherboard towards the back of the computer, so that
the ports on the inserted cards line up with the slot openings in the case. There are several types of
expansion slots, including AGP, PCIe (also known as PCI Express), PCI and ISA.
The top card of the SoundBlaster Live sound card plugs into a PCI expansion
slot, while the bottom card sends and receives its data to and from the larger card
through an IDE cable. The smaller card simply needs an empty spot in the case to
be mounted to. It does not need to be placed into an expansion slot on the
motherboard. A slot is located inside a computer on the motherboard or riser
board that allows additional boards are to be connected to it. Below is a listing of
some of the expansion slots commonly found in IBM compatible computers as
well as other brands of computers and a graphic illustration of a motherboard and
its expansion slots. Under the control of the Northbridge chipset, the memory
slots represent where the memory modules are inserted in a modern PC. For a
modern motherboard the memory will undoubtedly be DDR or DDR2 with the
type determined by the clock-speed of the CPU and ultimately on the clock-
speed of the Northbridge chip. The types of expansion slots are AGP, EISA, ISA,
PCI and Video Electronics Standards Association (VESA). An expansion slot usually refers
to any of the slots available on a motherboard for PCI, AGP, ISA, or other format
expansion cards. Sometimes the openings on the rear of the case are also referred
to as expansion slots. Table 7.1 summarizes the basic characteristics of the bus
structures.
Table 7.1 Bus Architecture Characteristics

Bus       Bus Width (bits)   Bus Speed (MHz)                   How Configured
8-bit     8                  8                                 Jumpers and DIP switches
ISA       16                 8                                 Jumpers and DIP switches
MCA       32                 10                                Software
EISA      32                 8                                 Software
VL-Bus    32                 Processor speed (up to 40 MHz)    Jumpers and DIP switches
PCI       32/64              Processor speed (up to 33 MHz)    PnP
USB       Serial             Serial                            PnP
AGP       32                 66                                PnP

Figure 7.1 shows how the PCI expansion slots and the AGP slot are laid out on a motherboard. Developed to support 3D graphic applications, AGP has a 32-bit wide channel that runs at 66 MHz quad pumped, which translates into a total bandwidth of 1.06 GB/sec (1.056 GB/sec), four times the bandwidth of the PCI bus (133 MB/sec). The motherboard has various types of expansion slots, Peripheral Component Interconnect (PCI) and Accelerated Graphics Port (AGP), as shown in Figure 7.1.
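The bandwidth figures quoted above follow directly from the bus width, the clock speed and the number of transfers per clock cycle. The short Python sketch below, offered only as an illustration (the function name is not from the text), reproduces them:

# Peak bus bandwidth = (width in bytes) x (clock in MHz) x (transfers per clock).
def peak_bandwidth_mbps(width_bits, clock_mhz, transfers_per_clock=1):
    return (width_bits / 8) * clock_mhz * transfers_per_clock

print(peak_bandwidth_mbps(32, 33))      # PCI: 132.0 MB/s (roughly the 133 MB/sec quoted)
print(peak_bandwidth_mbps(32, 66, 4))   # AGP, 66 MHz quad pumped: 1056.0 MB/s, about 1.06 GB/sec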
Fig. 7.1 Allotment of Expansion Slots in Motherboards

AGP also accesses the main memory directly, allowing 3D textures to be stored in
main memory as well as in video memory. Each node has an AGP Pro 50 slot that
enables users to install both AGP and AGP Pro cards in the system. In the SGI
graphics cluster nodes, the slot is occupied by the graphics card. The motherboard
has five PCI slots that support 32-bit 33-MHz PCI devices. Expansion cards in
the SGI graphics cluster reside in specific slots, as outlined in the Table 7.2, which
are based on Figure 7.2.
Table 7.2 Expansion Slots and Cards in the SGI Graphics Cluster

Slot                                        Card
AGP                                         Graphics card.
PCI slot 1 (closest to the AGP slot)        Optional gigabit Ethernet card.
PCI slot 2                                  SGI ImageSync card (SGI graphics cluster series 12 only).
PCI slot 3                                  Network Interface Card (NIC): secondary Ethernet, master node only, included with SGI graphics cluster series 12.
PCI slot 4                                  Empty slot, available for customer option.
PCI slot 5 (closest to the chassis wall)    Commercial audio card, included with the master node of SGI graphics cluster series 12.
7.2.1 Industry Standard Architecture

The expansion ports are generally sited at the back of the motherboard and aligned with the back-plate of the case. They are used by several kinds of cards, such as sound cards, network cards, wireless cards and FireWire cards, and each type of card has a different function for its allotted expansion slot. This standard allows add-on cards to extend the capabilities of a PC. The original bus was 8-bit, but it was replaced by a 16-bit architecture in the mid 1980s. The Industry Standard Architecture (ISA) bus survived until it was finally displaced by the PCI bus. When it appeared on the first PC, the 8-bit ISA bus ran at a modest 4.77MHz, the same speed as the processor. It was improved over the years, eventually becoming the 16-bit ISA bus in 1984 with the advent of the IBM PC/AT, which used the Intel 80286 processor and a 16-bit data bus. At this stage it kept up with the speed of the system bus, running first at 6MHz and later at 8MHz. The ISA
bus specifies a 16-bit connection driven by an 8MHz clock, which seems primitive compared with the speed of the latest processors. It has a theoretical data transfer rate of up to 16 MBps. Functionally, this rate reduces by half to 8 MBps, since one bus cycle is required for addressing and a further bus cycle for the 16 bits of data. The ISA bus thus provides a 16-bit data bus. As shown in Figure 7.2, the 16-bit ISA connector is created by adding an additional short slot alongside the 8-bit slot. The 16-bit ISA bus also added eight additional IRQs and doubled the number of DMA channels. ISA expansion cards are assigned to the appropriate IRQ or DMA numbers through jumpers and DIP switches. The ISA architecture also separated the bus clock from the CPU clock to allow the slower data bus to operate at its own speed. ISA slots are found on 286, 386, 486 and some Pentium PCs. Figure 7.2 shows the ISA 16-bit card and slot.

Fig. 7.2 ISA 16-Bit Card and Slot

7.2.2 Extended Industry Standard Architecture


Extended Industry Standard Architecture (EISA) is similar to Micro-Channel
Architecture (MCA) bus both in terms of technology and marketing strategy. It
had significant technical advantages over ISA. EISA buses are sometimes found in
network fileservers too. The EISA bus is virtually non-existent on desktop systems
for several reasons. First, EISA based systems tend to be much more expensive
than other types of systems. Second, there are few EISA based cards available.
Finally, the performance of this bus is quite low compared to the popular local
buses like the VESA local bus and PCI. EISA takes the best parts of MCA and
builds on them. It has a 32-bit data bus, uses software setup, has more I/O
addresses available and ignores Interrupt ReQuest (IRQ) and Direct Memory
Access (DMA) channels. EISA uses only an 8 MHz bus clock to be backward
compatible to ISA boards. The adoption of LED video display boards can become
even more widespread if the overall system price can be significantly reduced and
the operational procedure of such display boards can be simplified. Some of the
key features of the EISA bus are as follows:
 ISA Compatibility: This feature refers to ISA cards that work with EISA
slots too.
 32-Bit Bus Width: This feature works like MCA in which the bus is
expanded to 32-bits.
 Bus Mastering: This feature of the EISA bus supports bus mastering adapters
for greater efficiency and proper bus arbitration.
 Plug-and-Play (PnP): This feature of EISA automatically configures adapter
cards, similar to the PnP standards of modern systems.
EISA introduced the following improvements and benefits over ISA:
 It supports intelligent bus master expansion cards.
 Improved bus arbitration and transfer rates are used in EISA.
 It facilitates 8-bit, 16-bit or 32-bit data transfer by the main CPU, DMA
and bus master devices.
 It supports an efficient synchronous data transfer mechanism, permitting
single transfers as well as high-speed burst transfers.
 It allows 32-bit memory addressing for the main CPU, DMA devices and
bus master cards.
 EISA supports shareable and ISA-compatible handling of interrupt requests.
 Automatic steering of data during bus cycles between EISA and ISA masters
and slaves.
 EISA provides a 33MB/second data transfer rate for bus masters and DMA
devices.
 It supports automatic configuration of the system board and EISA expansion
cards, a facility not available in ISA.
7.2.3 Micro Channel Architecture
MCA or Micro Channel Architecture was introduced by IBM in 1987 and possesses
the same types of signals and accomplishes the same functions as the EISA. MCA
boards are smaller and use different edge connectors. The MCA bus is designed
to work with peripheral boards that transfer data in 8-bit, 16-bit or 32-bit words.
For 16-bit peripheral boards, an additional 12-bit connector is used. One of the
16-bit slots also has a 20-bit video extension connector. This slot can be used for
graphics cards. The 32-bit slots also have an 8-pin matched memory extension
connector. The MCA bus offered several additional features over the ISA, such
as a 32-bit bus running at 10MHz, automatic card configuration, and bus mastering
for greater efficiency. The MCA bus architecture was developed by IBM to replace
the original PC bus. The MCA bus connector is unique. The PC bus uses 62-pin
connectors (0.1-inch spacing). MCA uses 90-pin, 22-pin and 84-pin connectors
(0.05-inch spacing). There are also auxiliary connectors designed for special
purposes. MCA is technically superior to the PC bus. The bus supported multiple
processors using a system of priorities. Also, eight DMA channels were supported.
The PC bus uses edge-triggered interrupts, whereas MCA uses level-triggered
interrupts. The level-triggered interrupts made it easy for multiple devices to share
interrupt lines. Additional memory address lines are also included in the MCA
specification. MCA also supports 16-bit I/O and increases the number of devices
that could be addressed. MCA also introduced an effective PnP system with its
programmable option select feature. Jumpers and switches are replaced with a
programmable register that is configured automatically when the computer starts. The various operations of hardware devices can be controlled by jumpers. A jumper is placed over two jumper pins on the motherboard and can be removed to break the connection. It is also known as a shunt. It consists of a small piece of metal enclosed in plastic that makes the connection between a set of pins on the motherboard, and it can easily be modified manually. It can be thought of as closing a circuit in which a pin is placed at each end of a broken connection. The main
advantages of jumpers are as follows:
 They are easy to use and hence known as controlling hardware.
 They are very small and used in hardware settings as plug-and-play technique.
 They can easily be numbered as ‘JP1’, ‘JP2’, etc.
 They are used to set the BIOS setting and CPU speed.
Figure 7.3 shows the set-up for a 172-pin MCA bus connector. MCA configuration was accomplished through a manufacturer-supplied adapter description file, provided on disk with the adapter card and keyed to the card by a unique identification number.

Fig. 7.3 172 Pin MCA BUS Connector

7.2.4 Video Electronics Standards Association


The first local bus to gain popularity, the Video Electronics Standards Association
(VESA) local bus, also called VL-Bus or VLB, was introduced in 1992. VESA
refers to a standards group that was formed in the late eighties to address video-
related issues in personal computers. Indeed, the major reason for the development
of VLB was to improve video performance in PCs. The VLB is a 32-bit bus
which is in a way a direct extension of the 486 processor/memory bus. A VLB
slot is a 16-bit ISA slot with third and fourth slot connectors added on the end.
The VLB normally runs at 33 MHz, although higher speeds are possible on some
systems. Since it is an extension of the ISA bus, an ISA card can be used in a VLB
slot, although it makes sense to use the regular ISA slots first and leave the small
number of VLB slots open for VLB cards, which will not work in an ISA slot. Use
of a VLB video card and I/O controller greatly increases system performance
over an ISA. While VLB was extremely popular during the reign of the 486, with
the introduction of the Pentium and its PCI local bus in 1994, wholesale
abandonment of the VLB began in earnest. While Intel pushing PCI was one
reason why this happened, there were also several key problems with the VLB
implementation. First, the design was strongly based on the 486 processor, and
adapting it to the Pentium caused a host of compatibility and other problems.
Second, the bus itself was tricky electrically; for example, the number of cards
that could be used on the bus was low and occasionally there could be timing
problems on the bus when more than one card was used. Finally, the bus did not
support bus mastering properly since there was no good arbitration scheme, and
did not support PnP.
7.2.5 Peripheral Component Interconnect or Personal Computer Bus
Peripheral Component Interconnect (PCI) standard specifies a computer bus for
attaching peripheral devices to a computer motherboard. Modern motherboards
tend to come with a few PCI connectors and a number of PCIe connectors and,
as well as graphics cards, most new gigabit Ethernet cards and even some wireless
cards now use PCIe. PCI originally operated at 33 MHz using a 32-bit-wide
path. Revisions to the standard include increasing the speed from 33 MHz to 66
MHz and doubling the bit count to 64. Currently, PCI-X provides for 64-bit
transfers at a speed of 133 MHz for an impressive 1 GBps (gigabyte per second)
transfer rate.

Fig. 7.4 PCI Cards use 47 Pins

Figure 7.4 illustrates that PCI cards use 47 pins to connect (49 pins on a bus
mastering card, which can control the PCI bus without CPU intervention).
The PCI bus is able to work with so few pins because of hardware multiplexing,
which means that the device sends more than one signal over a single pin. Also,

PCI supports devices that use either 5 volts or 3.3 volts. Intel proposed the PCI standard in 1991, but it did not achieve popularity until the arrival of Windows 95, which supported a feature known as PnP. The PCI-express (PCIe) bus standard was published in 2003 and was initially adopted for video
cards. PCI Express is a more recent technology that is slowly replacing AGP. PCI
Express x16 slots can transfer data at 4GBs per second, which is about double
that of an AGP 8x slot. PCI Express slots come in PCIe x1, PCIe x2, PCIe x4,
PCIe x8, and PCIe x16. PCIe x16 slots are used for video cards, and PCIe is the latest specification for graphics card technologies. It is an implementation of the PCI computer bus that uses existing PCI programming concepts, but bases them on a completely different and much faster serial physical-layer communications protocol. Multiple serial lanes can be combined in a single interface, which allows large data throughput; because of this the size of the PCIe socket grows between the x1, x4, x8 and x16 versions of the interface. The PCIe link is built around a
bidirectional, serial (1-bit) and point-to-point connection known as a lane. This is
in sharp contrast to the PCI connection, which is a bus-based system where all
the devices share the same bidirectional, 32-bit, parallel bus. A connection between
any two PCIe devices is known as a link, and is built up from a collection of 1 or
more lanes. All devices must minimally support single-lane (x1) links. Devices
may optionally support wider links composed of 2, 4, 8, 12, 16, or 32 lanes.
PCIe is useful in applications other than graphics cards and hence became a new
backplane standard in personal computers. PCI-X, by contrast, is an enhanced parallel PCI bus technology originally developed by IBM, HP and Compaq that is backward compatible with existing PCI cards. PCI and 32-bit PCI-X slots are physically
the same, and PCI cards can plug into PCI-X slots. PCI-X cards will run in PCI
slots, but at the slower PCI rates. The 64-bit PCI-X slots are longer.
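As a rough sketch of how link width scales throughput, the snippet below multiplies an assumed per-lane rate of 250 MB per second in each direction (the usual figure for first-generation PCIe, not stated in the text) by the number of lanes:

# Approximate one-direction throughput of a first-generation PCIe link,
# assuming about 250 MB/s of usable bandwidth per lane.
PER_LANE_MBPS = 250

for lanes in (1, 2, 4, 8, 16):
    print(f"PCIe x{lanes}: about {lanes * PER_LANE_MBPS} MB/s per direction")
# PCIe x16 gives about 4000 MB/s (~4 GB/s), the figure quoted above for video cards.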
PCI-X Slots: The two long green slots on a Gigabyte motherboard are 64-
bit PCI-X slots, which will accept all 32-bit and 64-bit PCI-X and PCI cards.
Table 7.3 shows the types of PCI buses and their speed.
Table 7.3 PCI Bus

Bus Type Bus Width Bus Speed MB/sec


ISA 16 bits 8 MHz 16 MBps
EISA 32 bits 8 MHz 32 MBps
VL-bus 32 bits 25 MHz 100 MBps
VL-bus 32 bits 33 MHz 132 MBps
PCI 32 bits 33 MHz 132 MBps
PCI 64 bits 33 MHz 264 MBps
PCI 64 bits 66 MHz 512 MBps
PCI 64 bits 133 MHz 1 GBps

PCI can connect more devices, for example up to five external components. Each
of the five connectors for an external component can be replaced with two fixed
devices on the motherboard. The PCI bridge chip regulates the speed of the PCI
bus independently of the speed of the CPU. This provides a higher degree of
reliability and ensures that PCI-hardware manufacturers know exactly what to
design for.
7.2.6 Accelerated Graphics Port
The Accelerated Graphics Port (AGP) expansion slot connects AGP video cards
to a motherboard. An example of an AGP card is the AGP GeForce FX 5500. Video expansion cards are also known as graphic expansion cards. AGP video cards are capable of a higher data transfer rate than PCI video cards. A video card simply plugs into an AGP slot and connects the monitor or other video display device to the computer. The Digital Video Interface (DVI) out connector is also used to connect a digital video display. Video cards with a TV output connection are capable of displaying a computer’s video on a television. Video cards with a TV input connection are capable of displaying a television’s video on a computer. The AGP card and the monitor
are what determine the quality of a computer’s video display. AGP also supports
two optional faster modes, with throughputs of 533 MBps and 1.07 GBps
respectively. It also allows 3-D textures to be stored in main memory rather than
video memory. Each computer with AGP support will either have one AGP slot
or onboard AGP video. If the user wishes to have multiple video cards in the
computer they would have one AGP video card as well as one or more PCI video
cards. Figure 7.5 shows an AGP slot in which you can find that AGP slots and
cards come in different modes.

Fig. 7.5 AGP Slot

You must be careful to match the card and slot with the correct mode otherwise
it will not work. Some cards and slots are capable of running in more than one
mode. AGP 1x mode is the oldest; it transfers data at 266 MB per second. AGP 2x mode transfers data at 533 MB per second. AGP 4x mode transfers data at 1.07 GB per second. The latest AGP mode is AGP 8x, which transfers data at 2.14 GB per second. GPU stands for Graphics Processing Unit. The video
card is in charge of controlling the video display. Much like the CPU’s relationship
with the motherboard, the brain of the video card is the GPU. It is responsible
for making the decisions for processing the video card’s graphical input and
output data.
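The AGP mode figures quoted above are simple multiples of the 266 MB per second base rate; the short Python check below is only an arithmetic illustration of those numbers:

BASE_AGP_MBPS = 266   # AGP 1x transfer rate
for mode in (1, 2, 4, 8):
    print(f"AGP {mode}x: about {mode * BASE_AGP_MBPS} MB/s")
# 1x 266, 2x 532 (~533), 4x 1064 (~1.07 GB/s), 8x 2128 (~2.14 GB/s)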

7.3 FSB

Front-side bus (FSB) is a bus interface which was often used in Intel-chip-based
computers during the 1990s and 2000s. The competing EV6 bus served the same NOTES
function for AMD CPUs. Both typically carry data between the central processing
unit (CPU) and a memory controller hub, known as the Northbridge. Depending
on the implementation, some computers may also have a back-side bus that
connects the CPU to the cache. This bus and the cache connected to it are faster
than accessing the system memory (or RAM) via the front-side bus. The speed of
the front side bus is often used as an important measure of the performance of a
computer.

7.4 USB

Universal Serial Bus (USB) is considered a high-speed serial bus. Its data
transfer rate is higher than that of a serial port. It supports interfaces, such as
monitors, keyboard, mouse, speaker, microphones, scanner, printer and modems.
It allows interfacing several devices to a single port in a daisy-chain. USB
provides power lines along with data-lines. USB cable contains four wires
collectively. Two of them are used to supply electrical power to peripherals,
eliminating bulky power supply. The other two wires are used to send data and
commands. USB uses three types of data transfer: isochronous (real-time), interrupt-driven and bulk data transfer. USB is a connectivity standard developed by Intel. Peripherals are easily connected to the system unit, and configuration takes place automatically once a device is plugged into a USB socket. USB is considered one of the most successful interconnection technologies for the system unit, and it has easily migrated to mobile gadgets and other consumer electronics too. It avoids the need for special interface cards and works equally well with laptops. The original USB specification was released in the mid-1990s; the later USB 2.0 standard operates at speeds of up to 480 Mbps and is highly portable. Various types of portable devices, such as handhelds, digital
cameras, mobiles are connected to the system unit. For example, the images
and pictures, music files, multimedia files are transferred from digital camera to
a printer with the help of USB or wireless USB. The technology also supports a mobile lifestyle, connecting devices for telephonic conversation and videoconferencing. A USB cable carries four wires. Two of the wires form a differential pair whose function is to transmit and receive information/data; the other two wires carry the power supply. The power comes from the host, while a hub can be self-powered. Two different connectors are used on a USB cable: the connector at one end is attached for upstream communication, whereas the connector at the other end is used for downstream communication. USB cables are available in lengths of up to 5 meters.
Types of Communication Transfer Modes in USB
Some of the communication transfer modes available for USB are as follows:
 Control Mode: The host uses this mode to send commands and transfer small amounts of data in both directions.
 Interrupt Mode: In this mode the host periodically queries (polls) devices and transfers the data they have ready.
 Bulk Mode: Bulk mode is used where data accuracy matters more than timing, for example disk drive storage.
 Isochronous Mode: This mode guarantees the timing of data delivery, for example for USB audio speakers.

7.5 DUAL INDEPENDENT BUS

As CPU design evolved into using faster local buses and slower peripheral buses, Intel adopted the dual independent bus (DIB) terminology. The DIB architecture consists of two buses: the external front-side bus to the main system memory, and the internal back-side bus between one or more CPUs and the CPU caches. It was introduced in the Pentium Pro and Pentium II products in the mid to late 1990s. The primary bus, used for communicating data between the CPU, the main memory and the input and output devices, is called the front-side bus. The back-side bus accesses the level 2 cache.

Check Your Progress


1. What is Expansion Slots?
2. What is USB?
3. Write a note on the Accelerated Graphics Port.

7.6 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS

1. Expansion slots are located on the motherboard, towards the back of the computer, so that the ports on the cards line up with the openings in the case. There are several types of expansion slots, including AGP, PCIe (also known as PCI Express), PCI and ISA.
2. Universal Serial Bus (USB) is considered a high-speed serial bus. Its data
transfer rate is higher than that of a serial port. It supports interfaces, such
as monitors, keyboard, mouse, speaker, microphones, scanner, printer and
modems.

3. The Accelerated Graphics Port (AGP) expansion slot connects AGP video

cards to a motherboard. An example of an AGP card is the AGP GeForce FX 5500. Video expansion cards are also known as graphic expansion cards.

7.7 SUMMARY

 Expansion slots are located on the motherboard, towards the back of the computer, so that the ports on the cards line up with the openings in the case. There are several types of expansion slots, including AGP, PCIe (also known as PCI Express), PCI and ISA.
 ISA expansion cards are designated to the appropriate IRQ or DMA
numbers through jumpers and DIP switches. The ISA architecture also
separated the bus clock from the CPU clock to allow the slower data bus
to operate at its own speeds.
 Extended Industry Standard Architecture (EISA) is similar to Micro-Channel
Architecture (MCA) bus both in terms of technology and marketing strategy.
 The MCA bus is designed to work with peripheral boards that transfer
data in 8-bit, 16-bit or 32-bit words.
 VESA refers to a standards group that was formed in the late eighties to
address video-related issues in personal computers.
 Peripheral Component Interconnect (PCI) standard specifies a computer
bus for attaching peripheral devices to a computer motherboard.
 The Accelerated Graphics Port (AGP) expansion slot connects AGP video
cards to a motherboard.
 Universal Serial Bus (USB) is considered a high-speed serial bus. Its data
transfer rate is higher than that of a serial port. It supports interfaces, such
as monitors, keyboard, mouse, speaker, microphones, scanner, printer and
modems.

7.8 KEY WORDS

 Motherboard: A motherboard is one of the most essential parts of a
computer system. It holds together many of the crucial components of a
computer, including the central processing unit (CPU), memory and
connectors for input and output devices.
 USB: Universal Serial Bus is an industry standard that establishes
specifications for cables, connectors and protocols for connection,
communication and power supply between personal computers and their
peripheral devices.

7.9 SELF ASSESSMENT QUESTIONS AND
EXERCISES

Short Answer Questions


1. Discuss the various types of transfer modes in USB.
2. Define FSB.
3. What are the features of EISA?
Long Answer Questions
1. Describe Micro Channel Architecture.
2. Explain Extended Industry Standard Architecture.

7.10 FURTHER READINGS

Bhatt, Pramod Chandra P. 2003. An Introduction to Operating Systems—Concepts and Practice. New Delhi: PHI.
Bhattacharjee, Satyapriya. 2001. A Textbook of Client/Server Computing. New
Delhi: Dominant Publishers and Distributers.
Hamacher, V.C., Z.G. Vranesic and S.G. Zaky. 2002. Computer Organization, 5th edition. New York: McGraw-Hill International Edition.
Mano, M. Morris. 1993. Computer System Architecture, 3rd edition. New Jersey:
Prentice-Hall Inc.
Nutt, Gary. 2006. Operating Systems. New Delhi: Pearson Education.


UNIT 8 STORAGE DEVICES


Structure
8.0 Introduction
8.1 Objectives
8.2 Storage Devices
8.3 Hard Disk Construction
8.4 IDE Drive Standard and Features
8.5 Answers to Check Your Progress Questions
8.6 Summary
8.7 Key Words
8.8 Self Assessment Questions and Exercises
8.9 Further Readings

8.0 INTRODUCTION

In this unit, you will learn about the various types of storage devices. Storage
devices which help in backup storage are called auxiliary memory. RAM is a
volatile memory and, thus, a permanent storage medium is required in a computer
system. Auxiliary memory devices are used in a computer system for permanent
storage of information and hence these are the devices that provide backup
storage.

8.1 OBJECTIVES

After going through this unit, you will be able to:


 Discuss the various types of storage devices
 Understand the construction of hard disk
 Explain the IDE drive standard and features

8.2 STORAGE DEVICES

Storage devices which help in backup storage are called auxiliary memory. RAM
is a volatile memory and, thus, a permanent storage medium is required in a
computer system. Auxiliary memory devices are used in a computer system for
permanent storage of information and hence these are the devices that provide
backup storage. They are used for storing system programs, large data files and
other backup information. The auxiliary memory has a large storage capacity
and is relatively inexpensive, but has low access speed as compared to the main
memory. The most common auxiliary memory devices used in computer systems

are magnetic disks and tapes. Now, optical disks are also used as auxiliary
memory.
Magnetic Disk
Magnetic disks are circular metal plates coated with a magnetized material on
both sides. Several disks are stacked on a spindle one below the other with read/
write heads to make a disk pack. The disk drive consists of a motor and all the
disks rotate together at very high speed. Information is stored on the surface of a
disk along a concentric set of rings called tracks. These tracks are divided into
sections called sectors. A cylinder is a pair of corresponding tracks in every surface
of a disk pack. Disk drives are connected to a disk controller.
Thus, if a disk pack has n plates, there will be 2n surfaces; hence, the
number of tracks per cylinder is 2n. The minimum quantity of information which
can be stored, is a sector. If the number of bytes to be stored in a sector is less
than the capacity of the sector, the rest of the sector is padded with the last type
recorded. Figure 8.1 shows a magnetic disk memory.
Fig. 8.1 Magnetic Disk

The subdivision of a disk surface into tracks and sectors is shown in Figure 8.2.

Fig. 8.2 Surface of a Disk

Let s bytes be stored per sector, p sectors exist per track, t tracks per surface and m surfaces. Then, the capacity of the disk will be defined as:
Capacity = m × t × p × s bytes
If d is the diameter of the disk, the density of recording would be:
Density = (p × s) / (π × d) bytes/inch
A set of disk drives is linked to a disk controller; the controller accepts the instructions and readies the read/write heads for reading or writing. When a read/write instruction is accepted by the disk controller, the controller first positions the arm so that the read/write head reaches the appropriate cylinder. The seek time (Ts) is the time needed to reach the proper cylinder. The
maximum Ts is the time needed by the head to reach the innermost cylinder from
the outermost cylinder or vice versa. The minimum Ts will be 0 if the head is
already positioned on the appropriate cylinder. Once the head is positioned on the
cylinder, there is further delay because the read/write head has to be positioned
on the appropriate sector. This is rotational delay, also known as latency time
(Tl). The average rotational delay equals half the time taken by the disk to complete
one rotation.
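The capacity and latency formulas above can be checked with a few lines of Python. The sketch below uses the 3.5" high-density floppy parameters given in the next subsection (2 surfaces, 80 tracks, 18 sectors of 512 bytes); the 300 RPM rotation speed used for the latency figure is an assumed example value, not taken from the text.

# Capacity = m x t x p x s, where m = surfaces, t = tracks per surface,
# p = sectors per track and s = bytes per sector.
def disk_capacity_bytes(m, t, p, s):
    return m * t * p * s

def average_latency_ms(rpm):
    # Average rotational delay = half the time taken for one full rotation.
    return 0.5 * (60_000 / rpm)

cap = disk_capacity_bytes(m=2, t=80, p=18, s=512)
print(cap, "bytes =", round(cap / (1024 * 1024), 2), "MB")   # 1474560 bytes, about 1.41 MB (marketed as 1.44 MB)
print(average_latency_ms(300), "ms average rotational delay at an assumed 300 RPM")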
Floppy Disk
A floppy disk, also known as a diskette, is a very convenient bulk storage device
and can be taken out of the computer. It is of 5.25" or 3.5" size, the latter size
being more common. It is contained in a rigid plastic case. The read/write heads
of the disk drive can write or read information from both sides of the disk. The
storage of data is in magnetic form, similar to that of the hard disk. The 3.5" floppy
disk has a capacity of storage up to 1.44 Mbytes. It has a hole in the centre for
mounting it on the drive. Data on the floppy disk is organized during the formatting
process. The disk is also organized into sectors and tracks. The 3.5" high density
disk has 80 concentric circles called tracks and each track is divided into 18
sectors. Tracks and sectors exist on both sides of the disk. Each sector can hold
512 bytes of data plus other information like address, etc. It is a cheap read/write
bulk storage device.

Magnetic Tape


Magnetic tape is used by almost all computer systems as a permanent storage
device. It is still the accepted low-cost magnetic storage medium, and is primarily
used for backup storage purposes. The digital audio tape (DAT) is the normal
backup magnetic tape tool used these days. A standard cartridge-size cassette
tape offers about 1.2 GB of storage. These magnetic tape memories are similar to
that of audio tape recorders.
A magnetic tape drive consists of two spools on which the tape is wound.
Between the two spools, there is a set of nine magnetic heads to write and read
information on the tape. The nine heads operate independently and record
information on nine parallel tracks, all parallel to the edge of the tape. Eight tracks
are used to record a byte of data and the ninth track is used to record a parity bit
for each byte. The standard width of the tape is half an inch. The number of bits
per inch (bpi) is known as recording density.
Normally, when data is recorded on to a tape, a block of data is recorded
and then a gap is left and then another block is recorded, and so on. This gap is
known as Inter-Block Gap (IBG). The blocks are normally 10 times as long as
IBG. The Beginning Of the Tape (BOT) is indicated through a metal foil known as
marker, and the end of the tape (EOT) is also indicated through a metal foil known
as end of tape marker.
The data on the tape is arranged as blocks and cannot be addressed. It can
only be retrieved sequentially in the same order in which it is written. Thus, if a
desired record is at the end of the tape, earlier records have to be read before it is
reached and hence the access time is very high as compared to magnetic disks.
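Because each block is followed by an inter-block gap about one-tenth of its length, only roughly 10/11 of the tape actually carries data. The sketch below estimates usable capacity from a recording density and tape length; all numeric values here are illustrative assumptions, not figures taken from the text.

# Usable tape capacity allowing for inter-block gaps (IBGs).
# Blocks are about 10 times as long as an IBG, so utilisation is roughly 10/11.
def tape_capacity_bytes(density_bpi, length_feet, utilisation=10 / 11):
    # Each frame written across the eight data tracks stores one byte, so the
    # per-track recording density in bits per inch equals bytes per inch of tape.
    return density_bpi * length_feet * 12 * utilisation   # 12 inches per foot

# Assumed example: a 6250 bpi tape on a 2400-foot reel.
print(round(tape_capacity_bytes(6250, 2400) / 1_000_000, 1), "MB usable (approx.)")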
Optical Disk
This (optical disk) storage technology has the benefit of high-volume economical
storage coupled with slower access times than magnetic disk storage.
Compact Disk
The Compact Disk (CD) was invented by James Russell. A CD is a small, portable
and easy to use device made of molded polymer. It is used to record, store and play back audio, video, text, graphics, etc. in a digital form. It comes only in a circular shape; no other shape of CD is available in the market. The main feature of the CD is that it is portable and hence has tended to replace the tape cartridge for use in cars and other portable playback devices. Other types of CD are popularly used, such as CD-ROM, CD-I, CD-RW, CD-ROM XA, photo CD, video CD, etc. Figure 8.3 shows a compact disk, which is frequently used to store data.
Fig. 8.3 Compact Disk

A CD is an optical storage medium in which the recorded data is read by optical (laser) beams. For example, the process of storing audio data, which involves large amounts of data, in digital form is known as audio encoding; one second of audio information can require about one million bits of data. The CD is quite capable of storing millions of bits of data in an area as tiny as a pinhead. A CD is 4.75" in diameter and is made of a polycarbonate plastic disc. It can contain approximately 74 minutes of audio information. The information is encoded onto the CD in terms of lands and pits, which represent binary highs and lows that a laser ‘reads’. The CD was expected to become a common medium for exchanging audio, video and other application data. Figure 8.4 shows the structure of a compact disk, which is read by an optical pick-up connected to a circuit board.
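As a quick check on the order of magnitude mentioned above, one second of standard CD audio (two 16-bit channels sampled 44,100 times per second, the usual audio-CD parameters, which are assumed here rather than stated in the text) works out as follows:

SAMPLE_RATE = 44_100   # samples per second per channel
BITS_PER_SAMPLE = 16
CHANNELS = 2

bits_per_second = SAMPLE_RATE * BITS_PER_SAMPLE * CHANNELS
print(bits_per_second, "bits per second")   # 1411200, i.e. on the order of a million bits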

Compact Compact disc


disc Pit
Land Objective
lens
Optical Pick-up
sub-assembly Prism
Photo
Gear assembly diode
Laser

Lens Carries electric


Optical pick-up signal
Motor Roil

Circuit board

Fig. 8.4 Structure of Compact Disk

Compact Disk Interactive
Compact Disk Interactive (CD-I) is the name of the interactive multimedia CD
player developed by Royal Philips Electronics N.V. It is mainly useful for creating movies and game videos. A CD-I movie disk, also known as a video CD, holds approximately 70 minutes of Video Home System (VHS) quality video. In 1990, Sony launched a portable CD-I device with a 4" LCD monitor. Figure 8.5 shows a compact disk interactive player, which is designed for real-time animation, video and sound.

Fig. 8.5 Compact Disk Interactive

CD-ROM
This is an optical medium of data storage. The current maximum capacity of a
CD-ROM is 900MB with a maximum read/write access speed of 52X, which corresponds to about 10,350 Rotations Per Minute (RPM) and a transfer rate of 7.62 MegaBytes Per Second (MBps). The data is written with the help of an infrared laser beam focused through an optical lens, and the same laser at lower intensity is used to read data from the CD-ROM.
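The 52X transfer rate quoted above follows from the base 1X CD-ROM rate of roughly 150 KB per second (a standard figure assumed here); the lines below are only a check of that arithmetic:

BASE_RATE_KBPS = 150   # 1X CD-ROM transfer rate, about 150 KB/s
speed = 52
rate_kbps = speed * BASE_RATE_KBPS
print(rate_kbps, "KB/s =", round(rate_kbps / 1024, 2), "MB/s")   # 7800 KB/s = 7.62 MB/s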
CD-Recordable
Write Once Read Many (WORM) storage has been around since the 1980s and is a type of optical storage that can be written to once and read from many times. When data is written to a WORM drive, physical marks are made on the media surface by a low-powered laser; these marks are permanent and cannot be erased, hence 'write once'. Compact Disk-Recordable (CD-R) media manufacturers use tests and mathematical modelling techniques to estimate media longevity. The colour of a CD-R disc is related to the colour of the specific dye that was used in the recording layer. This base dye colour is modified by the reflective coating, which is either gold or silver.

Fig. 8.6 Protective and Recording Layer in CD-R

Figure 8.6 shows the protective and recording layer in CD-R. Pre-groove section
lies between recording layer and polycarbonate disc substrate. The CD-R has a
spiral track, which is preformed during manufacture, onto which data is written
during the recording process. This ensures that the recorder follows the same
spiral pattern as a conventional CD. It also has the same track width of 0.6 microns and
pitch of 1.6 microns as a conventional disc. Discs are written from the inside of the
disc outward. The spiral track makes 22,188 revolutions around the CD, with
roughly 600 track revolutions per millimetre. CD-R writes data to a disc by using
its laser to physically burn pits into the organic dye. The CD-R is not strictly
WORM. By mid-1998, drives were capable of writing at quad-speed and reading
at twelve-speed, which is denoted as 4X/12X and were bundled with much
improved CD mastering software. The CD-R format has not been free of
compatibility issues. The surface of a CD-R is made to exactly match the 780nm
laser of an ordinary CD-ROM drive. However, CD-R’s real disadvantage is that
the writing process is permanent: the media cannot be erased and written to again.
CD-Rewritable
A CD-Rewritable (CD-RW) disk looks like a CD-ROM and is distinguishable from CD-R discs by its metallic gray color. It acts as a CD-ROM when reading data, and it allows data to be recorded thousands of times. Recording on a CD-RW is accomplished by locally melting the recording material, which is then cooled quickly enough to quench it in its amorphous phase. The cooling rate is a strong function of the thermal properties of the recording layer and the surrounding layers, in particular the thermal diffusivity, which is calculated by Equation (8.1):
Thermal diffusivity, α = k / (ρ × c_p)                  (8.1)
Where,
 k = the thermal conductivity (W/(m·K)).
 ρ = the density (kg/m³).
 c_p = the specific heat capacity (J/(kg·K)).
 ρ × c_p = the volumetric heat capacity (J/(m³·K)).
In heat transfer analysis, thermal diffusivity is usually denoted by α. It is the thermal conductivity divided by the density and the specific heat capacity at constant pressure, and it has the SI unit of m²/s.
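Equation (8.1) is straightforward to evaluate; the snippet below does so for one set of illustrative material properties (the numbers are assumptions chosen only to show the units, not data for an actual recording alloy):

def thermal_diffusivity(k, rho, cp):
    # alpha = k / (rho * cp), with the result in m^2/s.
    return k / (rho * cp)

# Illustrative values: k in W/(m.K), rho in kg/m^3, cp in J/(kg.K).
alpha = thermal_diffusivity(k=0.5, rho=6000.0, cp=210.0)
print(f"{alpha:.2e} m^2/s")   # about 4.0e-07 m^2/s for these assumed values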
A process of annealing is used, when material is heated slightly below its
melting temperature. Fast data erasure could be achieved if the annealing rate is
very high and the temperature is slightly below its melting temperature. The following are the basic requirements for an effective erasable phase-change material:
 Different refractive indices for the crystalline and amorphous phases, to maintain optical contrast,
 A low melting point (low laser power) and moderate thermal conductivity, so that fast cooling and quenching are possible, and
 Rapid annealing below the melting temperature, so that a single-pass erasure process can be used.
The structure of the CD-RW disk is similar to CD-R. It has the similar polycarbonate
substrate layer, protective layer and reflective metal layer. It has two dielectric
layers and a layer of phase-changing metal alloy. The dielectric layers prevent
overheating of the phase-changing layer during data recording process. The data
marks called pits are formed inside the light-adsorbing phase-changing film and
have different optical properties and different light reflectance. A typical structure
of the CD-RW disk and the data reading process is shown in the Figure 8.7.

Fig. 8.7 Structure of CD-RW Disk and Data Reading Process

To simplify the head positioning mechanism, a blank CD-RW has a pre-formed groove that the laser beam of the servo mechanism can follow during both data reading and writing. The CD-RW drive is different from the regular CD-ROM drive since its
laser can operate on the different power levels. The highest level causes phase

transitions in the recording material and is used for data recording. The medium level is used for annealing, or erasure, and the lowest level of laser power is used for data reading, scanning the pits and lands without damaging the disk surface. CD-RW uses a Direct Over-Write (DOW) method in which new data are simply written on top of the old data. The design of CD-RW makes it a practical rewritable storage medium, which is inexpensive and mobile. On the other hand, the long-term future of CD-RW technology is uncertain, since the newer DVD-RAM technology is gaining momentum in the market.
Concepts of Latest Storage Devices
Various storage devices, such as input storage devices and output storage devices
are used in computer peripherals. The input storage devices allow information on
a computer to be retrieved anytime. Depending on the computer manufacturer,
different internal storage devices are made with computers. Magnetic disks use a read/write head that gives direct access to storage, so the information can be skipped over to get to the correct data. Redundant Array of Independent/Inexpensive Disks (RAID) uses a striping method where data is stored across individual physical disks, and lost information can be retrieved from the individual disks. Magneto-optical disks use a laser beam to record information. Magnetic tape can be used on a computer internally or externally. Information on a magnetic tape is saved sequentially, so time is lost while accessing certain files or records. External output storage devices are hardware devices used to save information from a computer. Optical disks use laser beams to record information on a CD or DVD. For example, Iomega zip drives compress data onto a disk. Virtual tape stores information on a tape cartridge. PCMCIA (Personal Computer Memory Card International Association) cards are used in digital cameras or cellular phones. These cards can also be used to save or upload files to a computer. Songs and music files can also be stored on an iPod or MP3 music device. The latest storage devices, such as DVD-RW, zip disk, Blu-Ray disk, HVD, USB, external HDD, pen drives and memory sticks, iPod, MPEG audio layer III, Set-Top-Box, etc. are frequently used in the networking era, as described below:
Digital Versatile Disk-Rewriteable
Digital Versatile Disk-Rewritable (DVD-RW) is like a DVD-R but can be erased
and written to again. It can be erased so that new data can be added. DVD-RWs
can hold 4.7GB of data and do not come in double-layered or double-sided
versions like DVD-R does. Because of their large capacity and ability to be used
multiple times, DVD-RW discs are a great solution for frequent backups. To record
data onto a DVD-RW disc, you will need a DVD burner that supports the DVD-
RW format. DVD-RW disc brings increased functionality to the DVD-R format.
These discs are rewritable up to 1,000 times. With this built-in versatility, you can
store a combination of both digital video and digital audio files on the same disc.
Some of the examples of rewritable media are 17344 DVD-RW 4×1pk w/
Standard Jewel Cases, 17345 DVD-RW 4x5pk w/Standard Jewel Cases and
17346 DVD-RW 4x25pk Spindle. The features of DVD-RW are as follows:
 It has 4.7GB capacity that is equal to 2 hours of video.
NOTES  It has high-storage density, which is compatible with existing DVD video
players and DVD-ROM drives.
 It holds seven times more data than a full size CD-R.
 It has outstanding picture quality and long archival life.
 It is capable in 2x and 4x recording speeds.
 It transfers data easily and is useful for video recording or authoring.
Figure 8.8 shows the DVD-RW, in which data can be erased and recorded over
numerous times without damaging the medium.

Fig. 8.8 DVD-RW

DVD-RW and DVD+RW formats are known as re-writable formats. The sister
format of DVD-RW is known as DVD-R, which is essentially a record-once
version of DVD-RW. DVD+RW’s sister format is called DVD+R. DVD-RW
discs can be read with virtually any PC DVD-ROM drive and with most of the
regular, stand-alone DVD players.
ZIP Drives
These are similar to disk drives but with thicker magnetic disks and a larger number
of heads in the drive to read/write. The Zip drive was introduced mainly to overcome
the limitations of the floppy drive and replace it with a higher capacity and faster
medium. They are better than floppy disks but still slow in performance and with a
high cost-to-storage ratio. The disk size ranges from 100MB to 750MB. Zip
drives were popular for several years until the introduction of CD-ROMs and
CD-Writers, which have now come to be widely accepted due to their cost,
convenience and speed.
Blu-Ray Disc
Another high-density optical storage media format is gaining popularity these days.
It is mainly used for high-definition video and storing data. The storage capacity of
a dual-layer Blu-ray disc is 50 GB, almost equal to storing data in six dual-layer DVDs or more than 10 single-layer DVDs. With high storage capacity, Blu-ray discs can hold and play back large quantities of high-definition video and audio, as well as photos, data and other digital content.
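A quick arithmetic check of the comparison just made, using the standard DVD capacities of 8.5 GB (dual layer) and 4.7 GB (single layer), which are assumed here rather than quoted in the text:

DUAL_LAYER_BLURAY_GB = 50
DUAL_LAYER_DVD_GB = 8.5
SINGLE_LAYER_DVD_GB = 4.7

print(round(DUAL_LAYER_BLURAY_GB / DUAL_LAYER_DVD_GB, 1), "dual-layer DVDs")     # about 5.9, i.e. roughly six
print(round(DUAL_LAYER_BLURAY_GB / SINGLE_LAYER_DVD_GB, 1), "single-layer DVDs") # about 10.6, i.e. more than 10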


Blu-Ray is considered as the next-generation DVD or Digital Video Disc.
It facilitates the recording, storing and play back of HD (High-Definition) video,
digital audio and computer data. The main advantage of Blu-Ray disc is its capacity
of storing sheer amount of information. The following are the types of Blu-Ray
disc:
 Single-Layer Blu-Ray Disc: It has approximately similar size as of DVD.
It can store nearly 27 GB of data which is equivalent to more than two
hours of high-definition video or approximately 13 hours of standard video.
 Double-Layer Blu-Ray Disc: It can store up to 50 GB, enough to hold
about 4.5 hours of high-definition video or more than 20 hours of standard
video.
Figure 8.9 illustrates the structure of double-layer blu-ray disc.

Fig. 8.9 Structure of Double-Layer Blu-Ray Disc

The following are some of the features of Blu-Ray disc:


 Without any loss of quality the High-Definition Television (HDTV) can be
recorded easily.
 Any particular spot can be instantly skipped on the disc.
 Recording and watching of certain program can be done simultaneously on
the disc.
 One can create their own playlists.
 Programs can be edited and reordered that are recorded on the disc.
 Blu-Ray disc automatically searches empty space on the disc to avoid
recording over a program.
 It can access the Web for downloading subtitles and other additional
features.

Unlike DVDs and CDs, which started with read-only formats and only later
added recordable and re-writable formats, Blu-Ray is initially designed in several
different formats:
 BD-ROM (Read-Only): For pre-recorded content.
 BD-R (Recordable): For PC data storage.
 BD-RW (Rewritable): For PC data storage.
 BD-RE (Rewritable): For HDTV recording.

8.3 HARD DISK CONSTRUCTION

A hard disk drive (HDD), hard disk, hard drive, or fixed disk is an
electromechanical data storage device that uses magnetic storage to store and
retrieve digital information using one or more rigid rapidly rotating disks (platters)
coated with magnetic material. The platters are paired with magnetic heads, usually
arranged on a moving actuator arm, which read and write data to the platter surfaces.
Data is accessed in a random-access manner, meaning that individual blocks of data
can be stored or retrieved in any order and not only sequentially. Hard disks are
rigid platters, composed of a substrate and a magnetic medium. The substrate – the
platter's base material – must be non-magnetic and capable of being machined to a
smooth finish. It is made either of aluminium alloy or a mixture of glass and ceramic.

8.4 IDE DRIVE STANDARD AND FEATURES

Integrated Drive Electronics (IDE) is a standard interface for connecting a


motherboard to storage devices such as hard drives and CD-ROM/DVD drives.
The original IDE is a 16-bit interface which connects two devices with a single
ribbon cable. It has its own circuitry and an integrated disk drive controller. Prior to
IDE, controllers were separate external devices.
IDE’s development increased data transfer rate (DTR) speed and reduced
storage device and controller issues. IDE is also known as Advanced Technology
Attachment (ATA) or intelligent drive electronics (IDE). The ATA interface is
controlled by an independent group of representatives from major PC, drive, and
component manufacturers called Technical Committee T13. They are responsible
for all interface standards relating to the parallel AT Attachment storage interface.
A second group called the Serial ATA Workgroup has formed to create the Serial
ATA Standards.
Following are the features of IDE:
1. It provides an enhanced command set.
2. It provides support for drive passwords.
3. It also provides a host protected area.

Check Your Progress


1. Define ZIP drives.
2. What is a magnetic disk?
3. What is a compact disk?

8.5 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS

1. The Zip drive was introduced mainly to overcome the limitations of the floppy drive and replace it with a higher capacity and faster medium. A Zip drive uses thicker magnetic disks and a larger number of read/write heads in the drive.
2. Magnetic disks are circular metal plates coated with a magnetized material
on both sides. Several disks are stacked on a spindle one below the other
with read/ write heads to make a disk pack.
3. The Compact Disk (CD) is a small, portable and easy to use optical storage device made of molded polymer. It is used to record, store and play back audio, video, text and graphics in a digital form.

8.6 SUMMARY

 Auxiliary memory devices are used in a computer system for permanent storage of information and hence these are the devices that provide backup storage.
 Magnetic disks are circular metal plates coated with a magnetized material
on both sides.
 A floppy disk, also known as a diskette, is a very convenient bulk storage
device and can be taken out of the computer.
 Magnetic tape is used by almost all computer systems as a permanent storage device. It is still the accepted low-cost magnetic storage medium, and is primarily used for backup storage purposes.
 This (optical disk) storage technology has the benefit of high-volume economical storage coupled with slower access times than magnetic disk storage.
 A CD-Rewritable (CD-RW) disk looks like a CD-ROM and is distinguishable from CD-R discs by its metallic gray color. It acts as a CD-ROM when reading data.
 Digital Versatile Disk-Rewritable (DVD-RW) is like a DVD-R but can be
erased and written to again. It can be erased so that new data can be
added.
 Blu-Ray disc is used for high-definition video and storing data. The storage capacity of a dual-layer Blu-ray disc is 50 GB, almost equal to storing data in six dual-layer DVDs or more than 10 single-layer DVDs.

8.7 KEY WORDS

 Hard Disk: A hard disk drive (HDD), hard disk, hard drive, or fixed disk is
an electromechanical data storage device that uses magnetic storage to store
and retrieve digital information using one or more rigid rapidly rotating disks
(platters) coated with magnetic material.
 Optical Disk: Optical disk storage technology has the benefit of high-volume economical storage coupled with slower access times than magnetic disk storage.

8.8 SELF ASSESSMENT QUESTIONS AND EXERCISES

Short Answer Questions


1. What is magnetic tape?
2. Discuss the features of IDE.
3. What are the different types of Blu-Ray disc?
Long Answer Questions
1. Explain the Blu-Ray disc with structure.
2. What are the various types of storage devices? Explain.
3. Explain the structure of magnetic disk.

8.9 FURTHER READINGS

Bhatt, Pramod Chandra P. 2003. An Introduction to Operating Systems—Concepts and Practice. New Delhi: PHI.
Bhattacharjee, Satyapriya. 2001. A Textbook of Client/Server Computing. New
Delhi: Dominant Publishers and Distributers.
Hamacher, V.C., Z.G. Vranesic and S.G. Zaky. 2002. Computer Organization, 5th edition. New York: McGraw-Hill International Edition.
Mano, M. Morris. 1993. Computer System Architecture, 3rd edition. New Jersey:
Prentice-Hall Inc.
Nutt, Gary. 2006. Operating Systems. New Delhi: Pearson Education.

BLOCK - III
STORAGE DEVICES AND COMPUTER SOFTWARE
UNIT 9 INPUT OUTPUT DEVICES,
WIRED AND WIRELESS
CONNECTIVITY
Structure
9.0 Introduction
9.1 Objectives
9.2 Wired and Wireless Devices
9.3 Input and Output Devices
9.4 Answers to Check Your Progress Questions
9.5 Summary
9.6 Key Words
9.7 Self Assessment Questions and Exercises
9.8 Further Readings

9.0 INTRODUCTION

In this unit, you will learn about the I/O devices, wired and wireless devices.
Computers have an input/output subsystem which provides an efficient mode of
communication between the central system and the outside world. The I/O devices
refers to input devices, such as keyboard, point and draw devices, such as mouse,
touch pads, light pens, trackball, joystick, etc., and various types of output devices,
such as multimedia projectors, printers, etc., which collectively provide a means
of communication between the computer and the outside world are known as
peripheral devices. This is because they surround the CPU and the memory of a
computer system. You will also study about various external as well as internal I/O
devices, such as keyboard, mouse, scanner, printer, monitor, universal serial bus,
projectors, etc. A multimedia projector is an output device which is used to project
information from the computer onto a large screen so that it can be viewed by a
large group of people.

9.1 OBJECTIVES

After going through this unit, you will be able to:


 Explain wired and wireless devices
 Discuss various types of input/output devices
9.2 WIRED AND WIRELESS DEVICES

Wired communication refers to the transmission of data over a wire-based communication technology. Examples include telephone networks, cable
television or internet access, and fiber-optic communication. Also waveguide
(electromagnetism), used for high-power applications, is considered a wired line.
Local telephone networks often form the basis for wired communications that are
used by both residential and business customers in the area. Most of the networks
today rely on the use of fiber-optic communication technology as a means of
providing clear signalling for both inbound and outbound transmissions. Fiber optics
are capable of accommodating far more signals than the older copper wiring used
in generations past, while still maintaining the integrity of the signal over longer
distances.
Wireless communication, or sometimes simply wireless, is the transfer of
information or power between two or more points that are not connected by
an electrical conductor. The most common wireless technologies use radio waves.
With radio waves distances can be short, such as a few meters for Bluetooth or as
far as millions of kilometres for deep-space radio communications. It encompasses
various types of fixed, mobile, and portable applications, including two-way
radios, cellular telephones, personal digital assistants (PDAs), and wireless
networking. Other examples of applications of radio wireless
technology include GPS units, garage door openers, wireless computer
mice, keyboards and headsets, headphones, radio receivers, satellite
television, broadcast television and cordless telephones. Somewhat less common
methods of achieving wireless communications include the use of
other electromagnetic wireless technologies, such as light, magnetic, or electric
fields or the use of sound.

9.3 INPUT AND OUTPUT DEVICES

The computer system is a dumb and a useless machine if it is not capable of


communicating with the outside world. It is very important for a computer system
to have the ability to communicate with the outside world, i.e., receive and send
data and information.
Computers have an Input/Output (I/O) subsystem referred to as I/O
subsystem which provides an efficient mode of communication between the central
system and the outside world. Programs and data must be entered into the computer
memory for processing, and results obtained from computations must be displayed
or recorded for the user’s benefit. This can be explained with a very common
scenario where the average marks of a student need to be calculated based on the
marks obtained in various subjects. The marks would typically be available in the

form of a document containing the student’s name, roll number and marks scored
in each subject. This data must first be stored in the computer’s memory after
converting it into machine readable form. The data will then be processed (average
marks calculated) and sent from the memory to the output unit which will present
the data in a form that can be read by users.
The I/O devices that provide a means of communication between the
computer and the outside world are known as peripheral devices. This is because
they surround the CPU and the memory of a computer system. While input devices
are used to enter data from the outside world into the primary storage, output
devices are used to provide the processed results from primary storage to users.
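To make this input-process-output flow concrete, here is a minimal Python sketch (the student record and field names are hypothetical illustration data, not taken from the unit):

# Minimal sketch of the input-process-output cycle described above.
def average_marks(record):
    """Process step: compute the average of the marks held in memory."""
    marks = record["marks"]
    return sum(marks) / len(marks)

if __name__ == "__main__":
    # Input step: data entered (e.g., via keyboard) and stored in memory.
    student = {"name": "A. Student", "roll_no": 101, "marks": [78, 85, 62, 90, 74]}

    # Output step: the processed result is sent to an output device (the monitor).
    print(f"{student['name']} (Roll No. {student['roll_no']}): "
          f"average = {average_marks(student):.2f}")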
Input Devices
Input devices are used to transfer user data and instructions to the computer. The
most commonly used input devices can be classified into the following five categories:
 Keyboard devices (general and special purpose, key to tape, key to disk,
key to diskette).
 Point and draw devices (mouse, touch screen, touch pads, light pen, trackball,
joystick).
 Scanning devices (optical mark recognition, magnetic ink character
recognition, optical bar code reader, digitizer, electronic card reader).
 Voice recognition devices.
 Vision input devices (webcam, video camera).
Keyboard Devices
Keyboard devices allow input into the computer system by pressing a set of keys
mounted on a board, connected to the computer system. Keyboard devices are
typically classified as general purpose keyboards and special purpose keyboards.
General Purpose Keyboard
The most familiar means of entering information into a computer is through a
typewriter like keyboard that allows a person to enter alphanumeric information
directly.
The most popular keyboard used today is the 101-key keyboard with a traditional QWERTY layout, comprising an alphanumeric keypad, 12 function keys, a variety of special function keys, a numeric keypad, and dedicated cursor control keys. It is so called because of the arrangement of the first six letter keys (Q, W, E, R, T, Y) in the upper left row.
 Alphanumeric Keypad: This contains keys for the English alphabets, 0 to
9 numbers, special characters like * + – / [ ], etc.
 12 Function Keys: These are keys labelled F1, F2 ... F12 and are a set of
user programmable function keys. The actual function assigned to a function

key differs from one software package to another. These keys are also
called soft keys since their functionality can be defined by the software.
 Special Function Keys: Special functions are assigned to each of these
keys. The Enter key, for example, is used to send the keyed-in data into the memory. Other special keys include:
o Shift (used to enter capital letters or special characters defined above
the number keys).
o Spacebar (used to enter a space at the cursor location).
o Ctrl (used in conjunction with other keys to provide added functionality
on the keyboard).
o Alt (like CTRL, used to expand the functionality of the keyboard).
o Tab (used to move the cursor to the next tab position defined).
o Backspace (used to move the cursor a position to the left and also
delete the character in that position).
o Caps Lock (to toggle between the capital letter lock feature – when
‘on’, it locks the keypad for capital letters input).
o Num Lock (to toggle the number lock feature – when ‘on’, it inputs
numbers when you press the numbers on the numeric keypad).
o Insert (used to toggle between the insert and overwrite mode during
data entry when ‘on’, entered text is inserted at the cursor location).
o Delete (used to delete the character at the cursor location).
o Home (used to move the cursor to the beginning of the work area which
could be the line, screen or document depending on the software being
used).
o End (used to move the cursor to the end of the work area).
o Page Up (used to display the previous page of the document being
currently viewed on screen).
o Page Down (used to view the next page of the document being currently
viewed on screen).
o Escape (usually used to negate the current command).
o Print Screen (used to print what is being currently displayed on the
screen).
 Numeric Keypad: This consists of keys with numbers (0 to 9) and
mathematical operators (+ – * /) defined on them. It is usually located on
the right side of the keyboard and supports quick entry of numerical data.
 Cursor Control Keys: They are defined by the arrow keys used to move
the cursor in the direction indicated by the arrow (top, down, left, right).
Another popular key arrangement, called the Dvorak system, was designed for easy learning and use. It was designed with the most common consonants in one part and all the vowels in the other part of the middle row of the keyboard. This arrangement makes the user alternate keystrokes back and forth between both hands. This keyboard has never been commonly used.
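As a small illustration of how a program can tell the special keys described above apart from ordinary alphanumeric keys, the following minimal Python/Tkinter sketch (an assumption made for illustration, not part of the original text) prints the key symbol of each key pressed; symbols such as Home, End, Prior (Page Up), Next (Page Down) and F1 correspond to the special and function keys listed earlier:

# Minimal sketch: observing keyboard events with Tkinter (Python standard library).
import tkinter as tk

def on_key(event):
    # keysym names special keys (Home, End, Prior, Next, F1...), char holds the typed character.
    label.config(text=f"keysym: {event.keysym}   char: {event.char!r}")

root = tk.Tk()
root.title("Keyboard demo")
label = tk.Label(root, text="Press any key...", width=40)
label.pack(padx=20, pady=20)
root.bind("<Key>", on_key)   # called for every key press
root.mainloop()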
Special Purpose Keyboard
These are standalone data entry systems used for computers deployed for specific
applications. These typically have special purpose keyboards to enable faster
data entry. A very typical example of such keyboards can be seen at the Automatic
Teller Machines (ATMs) where the keyboard is required for limited functionality
(support for some financial transactions) by the customers. Point Of Sale (POS)
terminals at fast food joints and Air/Railway reservation counters are some other
examples of special purpose keyboards. These keyboards are specifically designed
for special types of applications only.
Key to Tape, Key to Disk, Key to Diskette
These are individual standalone workstations used for data entry only. These
processor-based workstations normally have a keyboard and a small monitor.
The function of the processor is to check the accuracy of the data when it is being
entered.
The screen displays data as it is being entered. These facilities are very
useful and desirable during mass data entry and are therefore becoming very
popular in data processing centres.
Point and Draw Devices
The keyboard facilitates input of data in text form only. While working with display-
based packages, we usually point to a display area and select an option from the
screen [fundamentals of Graphical User Interface (GUI) applications]. For such
cases, the sheer user friendliness of input devices that can rapidly point to a particular
option displayed on screen and support its selection resulted in the advent of
various point and draw devices.
Mouse
A mouse is a small input device used to move the cursor on a computer screen to
give instructions to the computer and to run programs and applications. It can be
used to select menu commands, move icons, size windows, start programs, close
windows, etc. Initially, the mouse was a widely used input device for the Apple
computer and was a regular device of the Apple Macintosh. Nowadays, the mouse
is the most important device in the functioning of a GUI of almost all computer
systems.
You can click a mouse button, i.e., press and release the left mouse button
to select an item. You can right click, i.e., press and release the right mouse button
to display a list of commands. You can double click, i.e., quickly press the left
mouse button twice without any time gap between the press of the buttons to open a program or a document. You can also drag and drop, i.e., place the cursor over an item on the screen and then press and hold down the left mouse button. Holding down the button, move the cursor to where you want to place the item and then release the button.
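The click, right click, double click and drag-and-drop actions described above reach programs as events. The following minimal Python/Tkinter sketch (illustrative only, not from the original unit) binds handlers to those mouse events:

# Minimal sketch: handling mouse events with Tkinter.
import tkinter as tk

def on_click(event):        # left button press and release
    print(f"Click at ({event.x}, {event.y})")

def on_right_click(event):  # right button: typically shows a list of commands
    print(f"Right click at ({event.x}, {event.y})")

def on_double_click(event): # two quick presses of the left button
    print("Double click: open the item")

def on_drag(event):         # left button held down while the pointer moves
    print(f"Dragging to ({event.x}, {event.y})")

root = tk.Tk()
canvas = tk.Canvas(root, width=300, height=200, bg="white")
canvas.pack()
canvas.bind("<Button-1>", on_click)
canvas.bind("<Button-3>", on_right_click)       # right button on most platforms
canvas.bind("<Double-Button-1>", on_double_click)
canvas.bind("<B1-Motion>", on_drag)             # motion with button 1 held down
root.mainloop()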
Touch Screen
A touch screen is probably one of the simplest and most intuitive of all input devices. It uses optical sensors in or near the computer screen that can detect the touch of a finger on the screen. Once the user touches a particular screen position, the sensors communicate the position to the computer. This is then
interpreted by the computer to understand the user’s choice for input. The most
common usage of touch screens is in information kiosks where users can receive
information at the touch of a screen. These devices are becoming increasingly
popular today.
Touch Pads
A touch pad is a touch sensitive input device which takes user input to control the
onscreen pointer and perform other functions similar to that of a mouse. Instead of
having an external peripheral device, such as a mouse, the touch pad enables the
user to interact with the device through the use of a single or multiple fingers being
dragged across relative positions on a sensitive pad. They are mostly found in
notebooks and laptops where convenience, portability and space are the prime
design concerns.
Light Pen
The light pen is a small input device used to select and display objects on a screen.
It functions with a light sensor and has a lens on the tip of a pen-shaped device. The light receptor is activated by pointing the light pen towards the display screen, and the system then locates the position of the pen with the help of a scanning beam, allowing the user to draw directly on the screen.
Trackball
The trackball is a pointing device that is much like an inverted mouse. It consists of
a ball inset in a small external box or adjacent to and in the same unit as the
keyboard of some portable computers.
It is more convenient and requires much less space than the mouse since
here the whole device is not moved (as in the case of a mouse). Trackballs come in various shapes but support the same functionality. Typical shapes used are a
ball, a square and a button (typically seen in laptops).
Joystick
The joystick is a vertical stick that moves the graphic cursor in the direction the
stick is moved. It consists of a spherical ball which moves within a socket and has
a stick mounted on it. The user moves the ball with the help of the stick that can be moved left or right, forward or backward, to move and position the cursor in the desired location. Joysticks typically have a button on top that is used to select the option pointed by the cursor.
Video games, training simulators and control panels of robots are some
common uses of a joystick.
Digitizer
Digitizers are used to convert drawings or pictures and maps into a digital format
for storage into the computer. A digitizer consists of a digitizing or graphics tablet
which is a pressure sensitive tablet, and a pen with the same X and Y coordinates
as on the screen. Some digitizing tablets also use a crosshair device instead of a
pen. The movement of the pen or crosshair is reproduced simultaneously on the
display screen. When the pen is moved on the tablet, the cursor on the computer
screen moves simultaneously to the corresponding position on the screen (X and Y
coordinates). This allows the user to draw sketches directly or input existing
sketched drawings easily. Digitizers are commonly used by architects and engineers
as a tool for Computer Aided Designing (CAD).
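Because the tablet and the screen use corresponding X and Y coordinates, the digitizer driver essentially scales pen positions to screen positions. A minimal Python sketch of that mapping, with hypothetical tablet and screen resolutions:

# Minimal sketch: mapping digitizer (tablet) coordinates to screen coordinates.
TABLET_WIDTH, TABLET_HEIGHT = 10_000, 7_500    # tablet units (hypothetical)
SCREEN_WIDTH, SCREEN_HEIGHT = 1_920, 1_080     # screen pixels (hypothetical)

def tablet_to_screen(tx, ty):
    """Scale a pen position on the tablet to the corresponding screen pixel."""
    sx = round(tx * (SCREEN_WIDTH - 1) / (TABLET_WIDTH - 1))
    sy = round(ty * (SCREEN_HEIGHT - 1) / (TABLET_HEIGHT - 1))
    return sx, sy

print(tablet_to_screen(0, 0))           # (0, 0) - top-left corner
print(tablet_to_screen(9_999, 7_499))   # (1919, 1079) - bottom-right corner
print(tablet_to_screen(5_000, 3_750))   # roughly the centre of the screen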
Electronic Card Reader
Card readers are devices that also allow direct data input into a computer system.
The electronic card reader is connected to a computer system and reads the data
encoded on an electronic card and transfers it to the computer system for further
processing. Electronic cards are plastic cards with data encoded on them and
meant for a specific application. Typical examples of electronic cards are the plastic
cards issued by banks to their customers for use in ATMs. Electronic cards are
also used by many organizations for controlling access of various types of
employees to physically secured areas.
Depending on the manner in which the data is encoded, electronic cards
may be either magnetic strip cards or smart cards. Magnetic strip cards have a
magnetic strip on the back of the card. Data stored on magnetic strips cannot be
read with the naked eye, a useful way to maintain confidential data. Smart cards,
going a stage further, have a built-in microprocessor chip where data can be
permanently stored. They also possess some processing capability making them
suitable for a variety of applications. To gain access, for example, an employee
inserts a card or badge in the reader. This device reads and checks the authorization
code before permitting the individual to enter a secured area. Since smart cards
can hold more information as compared to magnetic strip cards, they are gaining
popularity.
Voice Recognition Devices
One of the most exciting areas of research involves the recognizing of an individual
human voice as the basis of input to the computer system. Eliminating the keying in
of data, basic commands can very easily be given, facilitating quick operation.
Voice recognition devices consist of a microphone attached to the computer system.
A user speaks into the microphone to input data. The spoken words are then converted into electrical signals (in analog form). An analog-to-digital converter then converts the analog form to digital form (0s and 1s) that can be interpreted by the computer. The digitized version is then matched with the existing pre-created dictionary to perform the necessary action. Voice recognition devices
have limited usage today because they have several problems. Not only do they
require the ability to recognize who is speaking but also what is being said (the
message). This difficulty arises primarily because people speak with different accents
and different tones and pitches. The computer requires a large vocabulary to be
able to interpret what is being said. Today’s voice recognition systems are therefore
successful in a limited domain. They are limited to accepting words and tasks
within a limited scope of operation and can handle only small quantities of data.
Most speech recognition systems are speaker dependent, i.e., they respond
to the unique speech of a particular individual, a feature not necessarily inconvenient
but limiting generalized applications. It therefore requires creating a database of
words for each person using the system.
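A highly simplified Python sketch of the matching step described above: the digitized utterance is compared with a pre-created dictionary of reference patterns and the closest entry wins. The feature vectors below are made-up illustration data; real recognizers use far more elaborate acoustic models:

# Conceptual sketch: matching a digitized utterance against a pre-created
# dictionary of reference patterns (nearest-neighbour on feature vectors).
import math

DICTIONARY = {
    "open":  [0.9, 0.1, 0.3],
    "close": [0.2, 0.8, 0.4],
    "save":  [0.5, 0.5, 0.9],
}

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recognize(utterance_features):
    """Return the dictionary word whose reference pattern is closest."""
    return min(DICTIONARY, key=lambda word: distance(DICTIONARY[word], utterance_features))

print(recognize([0.85, 0.15, 0.35]))   # -> 'open'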
Vision Input Devices
Vision input devices allow data input in the form of images. It usually consists of a
digital camera which focuses on the object whose picture is to be taken. The
camera creates the image of the object in digital format which can then be stored
within the computer.
Just as a speech recognition system digitizes voice input, this system compares the digitized images to be interpreted with the pre-recorded digitized images in the database of your computer system. Once it finds the right match, it sends it for further processing or a pre-defined action.
Video input or capture is the recording and storing of full motion video
recordings on your computer system’s storage device. High end video accelerator
and capture cards are required to capture and view good quality video recordings
on your computer. File compression of the recorded video file is very important
for file storage, since the video files captured are of high quality and may take up
one gigabyte of disk space. The most popular standard used for compression is
the Motion Pictures Expert Group (MPEG). Webcams and video cameras are
most commonly used to input visual data.
Web Camera
A Web camera is a video capturing device attached to the computer system, usually through a USB port, and is used for video conferencing, video security, as a control input device and also in gaming.
Output Devices
An output device is an electromechanical device that accepts data from the
computer and translates it into a form that can be understood by the outside world.
The processed data, stored in the memory of the computer, is sent to an output
unit which then transforms the internal representation of data into a form that can
be read by the users.
Normally, the output is produced on a display unit like a computer monitor
or can be printed through a printer on paper. At times speech outputs and mechanical
outputs are also used for some specific applications.
Output produced on display units or speech output that cannot be touched
is referred to as softcopy output while output produced on paper or material that
can be touched is known as hardcopy output. A wide range of output devices are
available today and can be broadly classified into the following four types:
 Display devices (monitors, multimedia projectors)
 Speakers
 Printers (dot matrix, inkjet, laser)
 Plotters (flatbed, drum)
Display Devices
It is almost impossible to even think of using a computer system without a display
device. A display device is the most essential peripheral of a computer system.
Initially, alphanumeric display terminals were used that formed a 7×5 or 9×7 array
of dots to display text characters only. As a result of the increasing demand and
use of graphics and GUIs, graphic display units were introduced. These graphic
display units are based on a series of dots known as pixels, which are used to display images.
Every single dot displayed on the screen can be addressed uniquely and directly.
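Since every dot can be addressed individually, a graphic display can be modelled in memory as a two-dimensional grid (a frame buffer). The tiny Python sketch below, with a hypothetical 8 x 4 resolution, sets pixels by their (x, y) address:

# Minimal sketch: a frame buffer where each pixel (dot) has its own address.
WIDTH, HEIGHT = 8, 4                                  # hypothetical tiny resolution
framebuffer = [[0] * WIDTH for _ in range(HEIGHT)]    # 0 = off, 1 = on

def set_pixel(x, y, value=1):
    """Address a single dot directly by its column (x) and row (y)."""
    framebuffer[y][x] = value

# Draw a short horizontal line by addressing each pixel in turn.
for x in range(2, 6):
    set_pixel(x, 1)

for row in framebuffer:
    print("".join("#" if dot else "." for dot in row))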
Display Screen Technologies
Owing to the fact that each dot can be addressed as a separate unit, it provides
greater flexibility for drawing pictures. Display screen technology may be of the
following categories:
Cathode Ray Tube: The CRT consists of an electron gun producing an electron beam controlled by electromagnetic fields, and a phosphor-coated glass display screen structured into a grid of small dots known as pixels. The image is created by the electron beam produced by the electron gun, which is directed onto the phosphor coating by the electromagnetic field.
Liquid Crystal Display: Liquid Crystal Display (LCD) was first introduced
in the 1970s in digital clocks and watches, and is now widely being used in computer
display units. Replacing the Cathode Ray Tube (CRT) with the LCD made display units slimmer and more compact, although the image quality and colour capability were comparatively poorer.
The main advantage of LCD is its low energy consumption. It finds its most
common usage in portable devices where size and energy consumption are of
main importance.
Projection Display: Projection display technology is characterized by replacing the personal size screen with large screens upon which the images are projected. It is attached to the computer system and the magnified display of the computer system is projected on a large screen.
Monitors: Monitors use a CRT to display information. It resembles a
television screen and is similar to it in other respects.
The monitor is typically associated with a keyboard for manual input of
characters. The screen displays information as it is keyed in enabling a visual
check of the input before it is transferred to the computer. It is also used to display
the output from the computer and hence serves as both an input and an output
device.
This is the most commonly used (I/O) device today and is also known as a
soft copy terminal. A printing device is usually required to provide a hard copy of
the output.
Multimedia Projectors
A multimedia projector is an output device which is used to project information
from the computer onto a large screen so that it can be viewed by a large group of
people. Prior to this, the standard mode of making presentations was to make
transparencies and project them using an overhead projector. This was a tedious
and time consuming activity since for every change in the subject matter a new
transparency had to be prepared. And of course, since electronic cut, copy and
paste was not possible, this meant additional work.
A multimedia projector can directly be plugged into a computer system and
the information projected on a large screen thereby making it possible to present
information to a large audience. The presenter can also use a pointer to emphasize
specific areas of interest in the presentation.
LCD flat screens connected to the computer systems for projecting the
LCD image using an overhead multimedia projector are widely used today.
Owing to its convenience and applicability, multimedia projectors are
increasingly becoming popular in seminars, presentations and classrooms.
Speakers
Used to produce music or speech from programs, a speaker port (a port is a
connector in your computer wherein you can connect an external device) allows
connection of a speaker to a computer. Speakers can be built into the computer
or can be attached separately.
Printers
Printers are used for creating paper output. There is a huge range of commercially
available printers today (estimated to be 1500 different types). These printers can
be classified into the following three categories based on:
Printing Technology
Printers can be classified as impact or non-impact printers, based on the technology they use for producing output. Impact printers work on a mechanism similar to a manual typewriter where the printer head strikes on the paper and leaves the
impression through an inked ribbon. Dot matrix printers and character printers fall
under this category. Non-impact printers use chemicals, inks, toners, heat or electric
signals to print on the paper and they do not physically touch the paper while printing.
Printing Speed
This refers to the number of characters printed in a unit of time. Based on speed,
these may be classified as character printers (print one character at a time), line printers (print one line at a time), and page printers (print an entire page at a time).
Printer speeds are therefore measured in terms of characters per second (cps) for
a character printer, lines per minute (lpm) for a line printer, and pages per minute
(ppm) for a page printer.
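These ratings translate directly into printing time. A small, purely illustrative Python calculation comparing the three classes on the same hypothetical job:

# Rough printing-time comparison for a hypothetical 50-page document
# (60 lines per page, 80 characters per line). Speeds are illustrative only.
PAGES, LINES_PER_PAGE, CHARS_PER_LINE = 50, 60, 80

chars = PAGES * LINES_PER_PAGE * CHARS_PER_LINE
lines = PAGES * LINES_PER_PAGE

print(f"Character printer at 300 cps : {chars / 300 / 60:6.1f} minutes")
print(f"Line printer at 1200 lpm     : {lines / 1200:6.1f} minutes")
print(f"Page printer at 20 ppm       : {PAGES / 20:6.1f} minutes")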
Printing Quality
It is determined by the resolution of printing and is characterized by the number of
dots that can be printed per linear inch, horizontally or vertically. It is measured in
terms of dots per inch or dpi. Printers can be classified as near letter quality or
NLQ, letter quality or LQ, near typeset quality or NTQ and typeset quality or TQ,
based on their printing quality. NLQ printers have resolutions of about 300 dpi, LQ
of about 600 dpi, NTQ of about 1200 dpi and TQ of about 2000 dpi. NLQ and LQ
printers are used for ordinary printing in day to day activities while NTQ and TQ
printers are used to produce top quality printing typically required in the publishing
industry.
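Resolution also determines how many individually printable dots make up a page. A tiny illustrative Python calculation for an 8 x 10 inch print area:

# Dots needed to cover an 8 x 10 inch print area at different resolutions.
for dpi in (300, 600, 1200, 2000):
    dots = (8 * dpi) * (10 * dpi)
    print(f"{dpi:>5} dpi -> {dots:,} dots")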
Types of Printers
Dot Matrix: Dot matrix printers are the most widely used impact printers in personal
computing. These printers use a print head consisting of a series of small metal pins
that strike on a paper through an inked ribbon leaving an impression on the paper
through the ink transferred. Characters thus produced are in a matrix format. The
shape of each character, i.e., the dot pattern, is obtained from information held
electronically.
The speed, versatility and ruggedness, combined with low cost, tend to
make such printers particularly attractive in the personal computer market. Typical
printing speeds in case of dot matrix printers range between 40–1000 characters
per second (cps). In spite of all these features in dot matrix printer technology,
the low print quality gives it a major disadvantage.
Inkjet: Inkjet printers are based on the use of a series of nozzles for propelling
droplets of printing ink directly on almost any size of paper. They, therefore, fall
under the category of non-impact printers. The print head of an inkjet printer consists
of a number of tiny nozzles that can be selectively heated up in a few microseconds
by an Integrated Circuit (IC) resistor. When this happens, the ink near it vaporizes
and is ejected through the nozzle to make a dot on the paper placed in front of the
print head. The character is printed by selectively heating the appropriate set of
nozzles as the print head moves horizontally.
Laser: Laser printers work on the same printing technology as photocopiers, using
static electricity and heat to print with a high quality powder substance known as
toner.
Laser printers are capable of converting computer output into print, page
by page. Since characters are formed by very tiny ink particles they can produce
very high quality images (text and graphics). They generally offer a wide variety of
character fonts, and are silent and fast in use. Laser printers are faster in printing
speed than other printers. Laser printers can print from 10 pages to 100 pages per
minute, depending upon the model. Laser printing is a high quality, high speed, high volume and non-impact technology that works on almost any kind of paper. Even though
this technology is more expensive than inkjet printers it is preferred because of its
unmatched features, such as high quality, high-speed printing and noiseless and
easy to use operations.
Plotters
Plotters are used to make line illustrations on paper. They are capable of producing
charts, drawings, graphics, maps, and so on. A plotter is much like a printer but is
designed to print graphs instead of alphanumeric characters. Based on the
technology used, there are mainly two types of plotters: pen plotters and electrostatic
plotters. Pen plotters, also known as flatbed plotters, draw images with multicolored
pens attached to a mechanical arm. Electrostatic plotters, also known as drum
plotters, work on the same technology as laser printers.
Flatbed and drum plotters are the most commonly used plotters.
Flatbed Plotters
Flatbed plotters have a flat base like a drawing board on which the paper is laid.
One or more arms, each of them carrying an ink pen, moves across the paper to
draw. The arm movement is controlled by a microprocessor (chip). The arm can
move in two directions, one parallel to the plotter and the other perpendicular to it
(called the x and y directions). With this kind of movement, it can move very
precisely to any point on the paper placed below.
The computer sends the commands to the plotter which are translated into
x and y movements. The arm moves in very small steps to produce continuous
and smooth graphics.
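The command stream sent to a pen plotter typically consists of simple pen-up, pen-down and move instructions (the HPGL plotter language, for example, uses PU, PD and PA). The Python sketch below generates an HPGL-style command sequence for a rectangle; the coordinates are illustrative only:

# Minimal sketch: generating HPGL-style pen plotter commands for a rectangle.
# PU = pen up, PD = pen down, PA x,y = move the arm to absolute position (x, y).
def rectangle_commands(x, y, width, height):
    corners = [(x, y), (x + width, y), (x + width, y + height), (x, y + height), (x, y)]
    cmds = ["IN;"]                                          # initialize the plotter
    cmds.append(f"PU;PA{corners[0][0]},{corners[0][1]};")   # move with the pen raised
    cmds.append("PD;")                                      # lower the pen to start drawing
    cmds += [f"PA{cx},{cy};" for cx, cy in corners[1:]]
    cmds.append("PU;")                                      # raise the pen when finished
    return cmds

for command in rectangle_commands(1000, 1000, 4000, 2500):
    print(command)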
The size of the plot in a flatbed plotter is limited only by the size of the
plotter’s bed.
The advantage of flatbed plotters is that the user can easily control the
graphics. He can manually pick up the arm anytime during the production of graphics
and place it on any position on the paper to alter the position of graphics to his
choice.
The disadvantage here is that flatbed plotters occupy a large amount of
space.
Drum Plotters
Drum plotters move the paper with the help of a drum revolver during printing.
The arm carrying a pen moves only in one direction, perpendicular to the direction
of the motion of the paper. It means that while printing, the plotter pens print on
one axis of the paper and the cylindrical drum moves the paper on the other
axis. With this printing technology, the plotter has an advantage to print on
unlimited length of paper but on a limited width. Drum plotters are compact and
lightweight compared to flatbed plotters. This is one of the advantages of such
plotters.
The disadvantage of drum plotters, however, is that the user cannot freely
control the graphics when they are being created. Plotters are more expensive
when compared to printers. Typical application areas for plotters include Computer
Aided Engineering (CAE) applications, such as Computer Aided Design (CAD),
Computer Aided Manufacturing (CAM) and architectural drawing and map
drawing.

Check Your Progress


1. What is an input device?
2. What are the different types of printers?
3. What is a web camera?

9.4 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS

1. Input devices are used to transfer user data and instructions to the computer.
2. There are three types of printers:
 Dot matrix
 Ink Jet
 Laser
3. A Web camera is a video capturing device attached to the computer system, usually through a USB port, and is used for video conferencing, video security, as a control input device and in gaming.

9.5 SUMMARY

 Keyboard devices allow input into the computer system by pressing a set
of keys mounted on a board, connected to the computer system. Keyboard
devices are typically classified as general purpose keyboards and special
purpose keyboards.
 A mouse is a small input device used to move the cursor on a computer
screen to give instructions to the computer and to run programs and
applications.
 The light pen is a small input device used to select and display objects on a
screen. It functions with a light sensor and has a lens on the tip of a pen
shaped device.
 An output device is an electromechanical device that accepts data from the
computer and translates it into a form that can be understood by the outside
world. The processed data, stored in the memory of the computer, is sent
to an output unit which then transforms the internal representation of data
into a form that can be read by the users.
 Printers are used for creating paper output.
 Plotters are used to make line illustrations on paper. They are capable of
producing charts, drawings, graphics, maps, and so on.
 Flatbed plotters have a flat base like a drawing board on which the paper is laid. One or more arms, each carrying an ink pen, move across the paper to draw.

9.6 KEY WORDS

 Input devices: These devices are used to transfer user data and instructions
to the computer.
 Digitizers: They are used to convert drawings or pictures and maps into a
digital format for storage into the computer.

9.7 SELF ASSESSMENT QUESTIONS AND EXERCISES

Short Answer Questions


1. What are wireless devices? Give some examples.
2. What are point and draw devices?
3. What are the different types of keyboard?

Long Answer Questions
1. Explain the different types of input devices.
2. Explain the different types of output devices.
9.8 FURTHER READINGS

Bhatt, Pramod Chandra P. 2003. An Introduction to Operating Systems—Concepts and Practice. New Delhi: PHI.
Bhattacharjee, Satyapriya. 2001. A Textbook of Client/Server Computing. New
Delhi: Dominant Publishers and Distributers.
Hamacher, V.C., Vranesic, Z.G. and Zaky, S.G. 2002. Computer Organization, 5th edition. New York: McGraw-Hill International Edition.
Mano, M. Morris. 1993. Computer System Architecture, 3rd edition. New Jersey:
Prentice-Hall Inc.
Nutt, Gary. 2006. Operating Systems. New Delhi: Pearson Education.


UNIT 10 COMPUTER SOFTWARE


Structure
10.0 Introduction
10.1 Objectives
10.2 Overview of Different Software
10.2.1 System Software
10.2.2 Application Software
10.3 Overview of Different Operating System
10.4 Answers to Check Your Progress Questions
10.5 Summary
10.6 Key Words
10.7 Self Assessment Questions and Exercises
10.8 Further Readings

10.0 INTRODUCTION

A computer cannot operate without any instructions and is based on a logical


sequence of instructions in order to perform a function. These instructions are
known as a ‘computer program’, and constitute the computer software. The
sequences of instructions are based on algorithms that provide the computer with
instructions on how to perform a function. Thus, it is impossible for a computer to
process without software, a term attributed to John W. Tukey in 1958.
Different kinds of software designs have been developed for particular
functions. Popular computer software include interpreter, assembler, compiler,
operating systems, networking, word processing, accounting, presentation, graphics,
computer games, etc. The computer software is responsible for converting the
instructions in a program into a machine language facilitating their execution.

10.1 OBJECTIVES

After going through this unit you will be able to:


 Define computer software
 Explain the different types of computer software
 Discuss about the various types of operating system

10.2 OVERVIEW OF DIFFERENT SOFTWARE

Software engineers develop computer software depending on basic mathematical


analysis and logical reasoning. Before implementation, the software undergoes a
number of tests. Thus, the programming software allows you to develop the desired
instruction sequences, whereas in the application software the instruction sequences

are predefined. Computer software can function from only a few instructions to
millions of instructions; for example, a word processor or a Web browser. Figure
10.1 shows how software interacts between user and computer system.
Users
Application Software
Operating System Software
Hardware System

Fig. 10.1 Interaction of Software between User and a Computer System

On the functional basis, software is categorized as follows:


 System Software: It helps in the proper functioning of computer hardware.
It includes device drivers, operating systems, servers and utilities.
 Programming Software: It provides tools to help a programmer in writing
computer programs and software using various programming languages. It
includes compilers, debuggers, interpreters, linkers, text editors and an
Integrated Development Environment (IDE).
 Application Software: It helps the end users to complete one or more
specific tasks. The specific applications include industrial automation,
business software, computer games, telecommunications, databases,
educational software, medical software and military software.
Types of Computer Software
Today, software is a significant aspect of almost all fields including business,
education, medicine, etc. The basic requirement for software is a distinct set of
procedures. Thus, software can be used in any domain that can be described in
logical and related steps and every software is developed with the aim of catering
to a particular objective, such as data processing, information sharing,
communication, etc. Software is based on the type of applications that are as
follows:
 System Software: This type of software is involved in managing and
controlling the operations of a computer system. System software is a group
of programs rather than one program and is responsible for using computer
resources efficiently and effectively. Operating system, for example, is system
software, which controls the hardware, manages memory and multitasking
functions and acts as an interface between applications programs and the
computer.
 Real-Time Software: This is based on observing, analysing and controlling
real life events as they occur. Normally, a real-time system guarantees a
response to an external event within a specified period of time. The real-
time software, for instance, is used for navigation in which the computer
must react to a steady flow of new information without interruption. Most
defence organizations all over the world use real time software to control
their military hardware.
 Business Software: This kind of software is functional in the domain of
management and finance. The basic aspect of a business system comprises
payroll, inventory, accounting and software that permits users to access
relevant data from the database. These activities are usually performed with
the help of specialized business software that facilitates efficient framework
in the business operation and in management decisions.
 Engineering and Scientific Software: This software has developed as a
significant tool used in the research and development of next generation
technology. Applications, such as study of celestial bodies, study of
undersurface activities and programming of orbital path for space shuttle,
are heavily dependent on engineering and scientific software. This software
is designed to perform precise calculations on complex numerical data that
are obtained during real-time environment.
 Artificial Intelligence (AI) Software: Certain problem solving techniques
are non-algorithmic in nature and primarily require this type of software.
The solutions to such problems normally cannot be arrived at using
computation or straightforward analysis. Such problems need particular
problem solving techniques including expert system, pattern recognition and
game playing. Also, it constitutes various kinds of searching techniques,
such as the application of heuristics. The function of AI is to add certain
degree of intelligence into the mechanical hardware to have the desired
work done in an agile manner.
 Web Based Software: This category of software performs the function of
an interface between the user and the Internet. There are various forms in
which data is available online, such as text, audio or video format, linked
with hyperlinks. For the retrieval of Web pages from the Internet a Web
browser is used, which is a Web based software. The software incorporates
executable instructions written in special scripting languages, such as Common
Gateway Interface (CGI) or Active Server Page (ASP). Apart from providing
navigation on the Web, this software also supports additional features that
are useful while surfing the Internet.
 Personal Computer (PC) Software: This software is primarily

designed for personal use on a daily basis. The past few years have
seen a marked increase in the personal computer software market from
normal text editor to word processor and from simple paintbrush to
advanced image editing software. This software is used mostly in almost
every field, whether it is database management system, financial
accounting package or a multimedia based software. It has emerged as
a versatile tool for daily life applications.
Software can also be classified in terms of the relationship between software users
or software purchasers and software development.
 Commercial Off-The-Shelf (COTS): This comprises the software without
any committed user before it is put up for sale. The software users have less
or no contact with the vendor during development. It is sold through retail
stores or distributed electronically. This software includes commonly used
programs, such as word processors, spreadsheets, games, income tax
programs, as well as software development tools, such as software testing
tools and object modelling tools.
 Customized or Bespoke: This software is designed for a specific user,
who is bound by some kind of formal contract. Software developed for
an aircraft, for example, is usually done for a particular aircraft making
company. They are not purchased ‘off-the-shelf’ like any word processing
software.
 Customized COTS: In this classification, a user can enter into a contract
with the software vendor to develop a COTS product for a special purpose,
that is, software can be customized according to the needs of the user.
Another growing trend is the development of COTS software components—
the components that are purchased and used to develop new applications.
The COTS software component vendors are essentially parts stores which
are classified according to their application types. These types are listed as
follows:
o Stand-Alone Software: A software that resides on a single computer
and does not interact with any other software installed in a different
computer.
o Embedded Software: A software that pertains to the part of unique
application involving hardware like automobile controller.
o Real-Time Software: In this type of software, the operations are
executed within very short time limits, often microseconds, e.g., radar
software in air traffic control system.
o Network Software: In this type of software, software and its
components interact across a network.

10.2.1 System Software
System software constitutes all the programs, languages and documentation
provided by the manufacturer in the computer. These programs provide the user
with access to the system so that he can communicate with the computer and
write or develop his own programs. The software makes the machine user-friendly
and makes an efficient use of the resources of the hardware. Systems software are
permanent programs on a system and reduce the burden of the programmer as
well as aid in maximum resource utilization. MS DOS (Microsoft Disk Operating
System) was one of the most widely used systems software for IBM compatible
microcomputers. Windows and its different versions are popular examples of
systems software. Systems software is installed permanently on a computer system and used on a daily basis.
Operating System
An Operating System (OS) is the main control program for handling all other
programs in a computer. The other programs, usually known as ‘application
programs’, use the services provided by the OS through a well-defined
Application Program Interface (API). Every computer necessarily requires some type of operating system that instructs the computer on how to operate and how to use the other programs installed on it. The role of an OS in a computer is similar to the role of a manager in an office who is responsible for its overall management.
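As a small illustration of an application program using operating system services through an API, the Python sketch below calls standard library wrappers around OS facilities (process, directory and file services). It is a generic example and is not tied to any particular operating system discussed later in this unit:

# Minimal sketch: an application program using operating system services
# through an API (here, Python's standard wrappers around OS calls).
import os
import platform

print("Operating system :", platform.system())   # e.g. Linux, Windows, Darwin
print("Process ID       :", os.getpid())         # process management service
print("Working directory:", os.getcwd())         # file system service

# File service: the OS creates, writes and removes the file on our behalf.
with open("demo.txt", "w") as f:
    f.write("written via OS file services\n")
print("File size        :", os.path.getsize("demo.txt"), "bytes")
os.remove("demo.txt")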
10.2.2 Application Software
Users install specific software programs based on their requirements; for instance,
accounting software (like Tally) used in business organizations and designing
software used by architects. All programs, languages and utility programs constitute
software. With the help of these programs, users can design their own software
based on individual preferences. Software programs aid in achieving efficient
application of computer hardware and other resources.
1. Licensed Software
Although there is a large availability of open source or free software online, not all
software available in the market is free for use. Some software falls under the
category of Commercial Off-The-Shelf (COTS). COTS is a term used for
software and hardware technology which is available to the general public for
sale, license or lease. In other words, to use COTS software, you must pay its
developer in one way or another.
Most of the application software available in the market need a software
license for use.
Software is licensed in different categories. Some of these licenses are based
on the number of unique users of the software while other licenses are based on
the number of computers on which the software can be installed. A specific distinction
between licenses would be an Organizational Software License, which grants an
organization the right to distribute the software or application to a certain number
of users or computers within the organization, and a Personal Software License
which allows the purchaser of the application to use the software on his or her
computer only.
2. Free Domain Software
To understand this, let us distinguish between the commonly used terms Freeware
and Free Domain software. The term ‘freeware’ has no clear accepted definition,
but is commonly used for packages that permit redistribution but not modification.
This means that their source code is not available. Free domain software is software
that comes with permission for anyone to use, copy, and distribute, either verbatim
or with modifications, either gratis or for a fee. In particular, this means that the
source code must be available. Free domain software can be freely used, modified,
and redistributed but with one restriction: the redistributed software must be
distributed with the original terms of free use, modification and distribution. This is
known as ‘copyleft’. Free software is a matter of freedom, not price. Free software
may be packaged and distributed for a fee. The ‘Free’ here refers to the ability of
reusing it — modified or unmodified, as a part of another software package. The
concept of free software is the brainchild of Richard Stallman, head of the GNU
project. The best known example of free software is Linux, an operating system
that is proposed as an alternative to Windows or other proprietary operating
systems. Debian is an example of a distributor of a Linux package.
Free software should, therefore, not be confused with freeware, which is a
term used for describing software that can be freely downloaded and used but
which may contain restrictions for modification and reuse.
A few types of application programs that are widely accepted these days,
are:
1. Word Processing
A word processor is an application program used for the production of any type
of printable text document including composition, editing, formatting and printing.
It takes the advantage of a Graphical User Interface (GUI) to present data in a
required format. It can produce any arbitrary combination of images, graphics
and text. Microsoft Word is the most widely used word processing system.
Microsoft Word can be used for the simplest to the most complex word
processing applications. Using Word, you can write letters and reports, prepare
bills and invoices, prepare office stationery, such as letterheads, envelopes and
forms, design brochures, pamphlets, newsletters and magazines, etc.
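Such documents can also be produced programmatically. The sketch below assumes the third-party python-docx package is installed (an assumption made purely for illustration; the unit itself only discusses Microsoft Word):

# Minimal sketch: creating a word-processing document programmatically.
# Assumes the third-party 'python-docx' package (pip install python-docx).
from docx import Document

doc = Document()
doc.add_heading("Quarterly Report", level=1)
doc.add_paragraph("Dear Sir/Madam,")
doc.add_paragraph("Please find below the summary of this quarter's activities.")
doc.add_paragraph("Prepared with a word-processing library.", style="Intense Quote")
doc.save("report.docx")
print("report.docx created")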
2. Spreadsheet
Excel is ideal for a task that needs a number of lists, tables, financial calculations,
analysis and graphs. Excel is good for organizing different kinds of data; however,
it is best suited to numerical data. Thus, Excel can be used when you not only
need a tool for storing and managing data, but also analysing and querying it. In
addition to providing simple database capabilities, Excel also allows you to create
documents for the World Wide Web (WWW).
The menus, toolbars and icons of MS Excel are very similar (though not the
same) to MS Word. This is in keeping with Microsoft’s much hyped philosophy
and strategy of offering users a totally integrated office suite pack. From the user’s
point of view, this means less time spent in learning the second package once you
know the first, and almost effortless and seamless exchange of data between various
components.
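Spreadsheet files can likewise be created from code. The sketch below assumes the third-party openpyxl package is installed (again an assumption for illustration) and writes a small table with one formula:

# Minimal sketch: building a small spreadsheet with the third-party 'openpyxl'
# package (pip install openpyxl). Data below is illustrative only.
from openpyxl import Workbook

wb = Workbook()
ws = wb.active
ws.title = "Sales"
ws.append(["Month", "Units", "Price", "Revenue"])
for month, units, price in [("Jan", 120, 9.5), ("Feb", 150, 9.5), ("Mar", 90, 10.0)]:
    ws.append([month, units, price, units * price])
ws["D5"] = "=SUM(D2:D4)"     # a formula, evaluated when the file is opened in Excel
wb.save("sales.xlsx")
print("sales.xlsx created")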
3. Presentation Graphics
PowerPoint is a presentation tool that helps create eye-catching and effective
presentations in a matter of minutes. A presentation comprises individual slides
arranged in a sequential manner. Normally, each slide covers a brief topic. For example, you can use PowerPoint software for preparing
presentations and adding notes to the specific slides. Similarly, you have the option
of either printing the slides—in case you want to use an overhead projector—or
simply attach your computer to an LCD display panel that enlarges the picture
several times and shows the output on a screen.
You have three options for creating a new presentation:
(i) Begin by working with a wizard (called the AutoContent Wizard) that
helps you determine the theme, contents and organization of your presentation
by using a predefined outline, or
(ii) Start by picking out a PowerPoint Design Template which determines
the presentation’s colour scheme, fonts and other design features, or
(iii) Begin with a completely blank presentation with the colour scheme, fonts
and other design features set to default values.
If you decide to choose the third option, PowerPoint designers have
provided a wide assortment of predefined slide formats and Clip Art graphics
libraries. Through these predefined slide formats, you can quickly create slides
based on standard layouts and attributes.
PowerPoint shares a common look and feel with other MS Office
components, and having once mastered Word and Excel, learning PowerPoint is
almost like playing a game. And it is also easy to pick up data from Word and
Excel directly into a PowerPoint presentation and vice versa.
Database Management Software

Nowadays, all large businesses require database management. When managing a


large customer base, it is important to examine vital information like the buying pattern, cheap suppliers and the number of orders being received. In order to
efficiently manage all these functions, MS Access is required.
As a first step, plan and create your database structure, identifying the
required fields based on the type of data (numbers, alphanumeric, date, etc.), and
the maximum width of each field. After determining the structure, you can create a
table either in the design mode (which is customized) or you can use the table
wizard and any of the predefined tables, with the required modifications.
Creating the tables through the table wizard is much faster and easier than
through the design mode. However, if you use wizards you are somewhat restricted
with the predefined settings already available.
Once you have created the table you can then use the form’s wizard to
create user friendly and aesthetically pleasing layouts for data entry. Creating forms
for data entry also ensures that the user inputs only the right kind of information
and both data entry errors and typing work are minimized.
Once the forms have been created and relevant data has been entered,
using these you can then use the report wizard to generate any kind of report.
Using reports, you can not only organize and present your data in a more meaningful
manner, but you can also use various standard functions like subtotals, totals,
sorting to summarize your data.
Now to really fine-tune this Access application, you can create data access
pages to enable people spread over a large geographical area to share and compile
information using the Internet.
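The same ideas, defining a table structure, entering rows and producing a summarized report, can be seen in miniature with Python's built-in sqlite3 module. The order table below is purely illustrative and does not mirror any particular Access database:

# Minimal sketch: table creation, data entry and a summary query using the
# sqlite3 module from the Python standard library. Data is illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")          # temporary, in-memory database
cur = conn.cursor()
cur.execute("""CREATE TABLE orders (
                   customer TEXT,
                   item     TEXT,
                   amount   REAL)""")
cur.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [("Asha", "Printer", 5500.0),
                 ("Ravi", "Mouse", 450.0),
                 ("Asha", "Keyboard", 900.0)])
conn.commit()

# A report-style query: total purchases per customer, like a grouped report.
for customer, total in cur.execute(
        "SELECT customer, SUM(amount) FROM orders GROUP BY customer ORDER BY customer"):
    print(f"{customer}: {total:.2f}")
conn.close()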

10.3 OVERVIEW OF DIFFERENT OPERATING SYSTEM

The following are the various types of operating systems in PC:


UNIX
UNIX is an Operating System (OS) that was originally developed in 1969 by
employees of AT&T. The most significant stage in the early development of UNIX
was in 1973 when it was rewritten in the C programming language (also an AT&T
development). This was significant because C is known as a ‘high level’ programming
language, which means that it is written in a form that is closer to human language
than machine code. The philosophy of the IT community at the time was that since
operating systems dealt primarily with low level and basic computer instructions,
they should be written in low level languages that were hardware specific, such as
assembly language. The advantage of developing software in C gave UNIX its
portability; very few changes needed to be made for the operating system to run
on other computing platforms. This portability made UNIX widely used within the
IT community which consisted predominantly of higher education institutions,
government agencies and the IT and telecommunication industries.
Currently, the main use of UNIX systems is for Internet or network servers.
Commercial organizations also use this OS for workstations and data servers.
UNIX has been used as the base for other operating systems; for example, Mac
OSX is based on a UNIX kernel. An operating system that conforms to industry
standards of specifications can be called a UNIX system; operating systems that
are modelled on UNIX but do not conform strictly to these standards by fault or
design are known as UNIX like systems. Linux for example is a UNIX like operating
system, whereas Solaris (developed by Sun Microsystems) is a UNIX system
that conforms to standards. Initially, UNIX systems used Command Line Interface
(CLI) for user interaction but now many distributions come with a Graphical User
Interface (GUI).
Linux
Linux is a UNIX like operating system originally developed by Linus Torvalds, a
student at the University of Helsinki. Since the complete source code for Linux is
open and available to everyone, it is referred to as Open Source. The user has the
freedom to copy and change the program or distribute it among friends and
colleagues.
Technically, Linux is strictly an OS kernel (a kernel is the core of an OS).
The first Linux kernel was released to the public in 1991. It had no networking,
ran on limited PC hardware and had little device driver support. Later versions of
Linux come with a bundle of software, including GUI, server programs, networking
suites and other utilities to make it a more complete OS. Typically, an organization
would integrate software with the Linux kernel and release what is called a Linux
Distribution. Examples of popular Linux distributions are Red Hat, Mandrake,
SuSE. These organizations are commercial ventures, selling their distributions,
and developing software for profit.
Linux is primarily used as an OS for network and Internet servers. Lately, it has
been gaining popularity as a desktop OS for general use since the wider inclusion
of GUIs and office suite software in distributions. The general features of Linux
are explained below:
 Multi-Tasking/Multi-User: Linux allows multiple users to run multiple
programs on the same system at the same time.
 Reliability: A highly reliable and stable OS, it can run for months, even
years, without having to be rebooted.
 TCP/IP Networking Support: Linux supports most Internet protocols.
TCP/IP is built into the kernel itself. TCP/IP is the communication protocol
that binds the Internet.
 High-Level Security: It has many built-in security features to protect your

system from unauthorised access. It stores passwords in an encrypted form


which cannot be decrypted.
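Because TCP/IP support is built into the kernel, any program can use it through the standard socket interface. A minimal illustrative Python example (the host name is an assumption; any reachable web server would do):

# Minimal sketch: using the kernel's TCP/IP stack through the socket API.
import socket

with socket.create_connection(("example.com", 80), timeout=5) as sock:
    sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    reply = sock.recv(1024)

print(reply.decode(errors="replace").splitlines()[0])   # e.g. 'HTTP/1.0 200 OK'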
Mac OS NOTES
Mac OS is the operating system designed for the Apple range of personal
computers, the Macintosh. It was first released in 1984 with the original Macintosh
computer and was the first OS to incorporate GUI. In fact, in contrast to the other
operating systems available at the time which used a CLI, the Mac OS was a pure
GUI, meaning it had no CLI at all. The philosophy behind this approach towards
OS design was to make a system that was user friendly and intuitive, and was
unlike MS DOS and UNIX that appeared complicated and challenging in
comparison.
The Mac OS was originally very hardware specific, only running on Apple
computers using Motorola 68000 processors. When Apple started building
computers using PowerPC processors and hardware, Mac OS was updated to
run on these machines. This was the case from the original Mac OS until Mac OS
version 9 was released in 2000. All these versions of Mac OS were pure GUIs.
The release of OSX (or Mac OS 10) brought about a significant change in the
development of Apple operating systems. OSX was built on UNIX technology
and brought better memory management and multitasking capabilities to the OS.
It also introduced a CLI for the first time. Previous versions of the Mac OS had
problems with multiple applications, causing each other to crash while running
simultaneously. OSX was originally developed to only run on PowerPC hardware
but since 2006 it has been able to run on Intel or x86 processors. Some distinctive
features of the Mac OS are:
 It is the first GUI with a focus on usability and simplicity in an operating
system.
 The intuitive interface and development of publishing and creative software
since the first release of Mac OS has made Macintosh computers a favourite
in the design and publishing industries.
MS DOS
The Microsoft Disk Operating System (MS DOS) is a single-user, single-tasking
operating system offered by Microsoft. It was the most widely used operating
system for Personal Computers (PC) in the 1980s and Microsoft’s first
commercialized operating system offering. It was the same operating system that
Microsoft developed for IBM’s personal computer as a Personal Computer Disk
Operating System (PC-DOS) and was based on the Intel 8086 family of
microprocessors. MS DOS uses CLI and requires knowledge of a large number
of commands. With the GUI based operating system gaining popularity, MS DOS
has lost its appeal quickly though it was the basic OS on which early versions of
the GUI-based Windows Operating Systems ran. Even today you will find that
the Windows OS continues to use and support MS DOS within a Windows
environment. MS DOS was initially released in 1981 and till now eight versions of
it have been released. Today, Microsoft has stopped paying much attention to it
and is focusing primarily on the GUI-based Windows Operating Systems.
IBM OS/2
Operating System 2 or OS/2 was a joint effort by IBM and Microsoft for developing
a successor to MS DOS and early versions of Microsoft Windows. However, after
the huge success of Windows 3.1, Microsoft decided to part ways with IBM,
which then developed the OS/2 operating system on its own. Introduced in 1987,
this OS for personal computers was intended to provide an alternative to Microsoft
Windows for both enterprise and personal users. Though OS/2 looks like Windows
3.1, it has features that are similar to UNIX, particularly the multitasking feature
and the ability to support multiple users. IBM released OS/2 version 3.0 in 1994
and named it OS/2 WARP in order to highlight its new features and also to strengthen
the brand value which was lost due to IBM and Microsoft’s rivalry. OS/2 was the
preferred operating system in various banks for their Automated Teller Machines
(ATM) and railways for their Automated Ticket Vending Machines (ATVM).
Listed below are some features of OS/2 WARP:
 Support for multitasking and for a multi-user environment.
 Internet compatible networking.
 Peer-to-peer networking.
 Receive and send fax.
 Enhanced hardware support.
 Enhanced multimedia capabilities for viewing and editing applications.
 Office Application Suite known as IBM Works, that includes word
processing, spreadsheets, database and other office tools.
 Security and reliability.
 Object-oriented GUI that can manage programs, files and devices by
manipulating objects on the screen.
Windows XP
Windows XP was first released on 25th October 2001 and since then over 600
million copies have been sold worldwide. It is a successor to both Windows 2000
and Windows ME and the first OS aimed at home users built on Windows NT
kernel and architecture. Due to the integration of multiple technologies from various
operating systems it has gained wide popularity among home and business desktop,
notebooks and media centre users. As acknowledged by most Windows XP users
as well as the Microsoft Corporation, this version of Windows is the most stable
and efficient OS released by Microsoft yet.
The key features of all the different versions of Windows XP are given below:
 Enhanced support to drivers for hardware devices connected to the computer
using an improved version of Windows Image Acquisition (WIA). In addition,
an option to roll back driver changes has been added to the Control Panel.
 CD-burning capabilities are integrated into the Windows XP OS. This
technology is adopted from Roxio and does not require third-party software
to be installed in order to burn CDs and DVDs.
 Simultaneous user login into the operating system enables multi-user
switching between tasks and does not require one user to close all
applications before letting another user log in.
 Remote assistance helps a Windows XP user to take control of another
Windows XP machine by using a network or the Internet. This is very helpful
in remotely fixing problems without being physically present around the
machine.
 Improvement in Fonts using ClearType technology has made it easier and
more attractive to work in Windows XP.
 Power Management functions are drastically improved and the default
standard changed to ACPI (Advanced Configuration and Power Interface)
from APM (Advanced Power Management). With ACPI, according to
battery status, processor speeds can be reduced or increased instantly,
USB devices can be individually isolated and suspended to save power
and screen brightness can be adjusted to increase battery life.
 The first OS from Microsoft which required product activation to fight piracy.
Windows Vista
The most recent in the line of Microsoft Windows personal computer operating
system, Windows Vista, codenamed Longhorn, was developed to succeed
Windows XP and improve upon the security aspect. Microsoft started the
development of Windows Vista five months after releasing Windows XP and the
work continued till November 2006 when Microsoft announced its completion,
ending the longest development cycle of a Microsoft operating system. Since the
original idea of building Longhorn on Windows XP’s code base was scrapped and it
was instead built on the Windows Server 2003 SP1 code base, several developments
were made, including an all-new graphical interface named Windows Aero, refined
and faster search capabilities, an array of new tools such as Windows DVD Maker,
Windows Media Center integrated into the Vista Home Premium and Vista Ultimate
editions, and redesigned networking, audio, print and display sub-systems.
The key features of Windows Vista are stated below:
 There is an increased level of communication between machines on a home
network. The peer-to-peer technology is used for simplifying sharable files
and digital media between computers and devices.
 Windows Vista includes version 3.0 of the .NET Framework, which aims
at making it significantly easier for software developers to write applications
than with the traditional Windows API.
 Windows Aero, the new graphical interface of Windows Vista OS is an
aesthetically driven GUI with transparencies, live thumbnails and icons. The
overall look and feel of the GUI is pleasing to the eye and convenient to
work with.
 Instant Search is a new feature of Windows Vista which is significantly faster
and returns better in-depth results for files and folders on the desktop.
 Windows Sidebar is a panel where selected Windows gadgets are located.
These gadgets update the user on various topics such as stock indexes,
sports score, currency exchange rate, etc. and can be customized according
to user requirements.
 Windows Internet Explorer 7 incorporates tabbed browsing and Anti-
Phishing filtering, and works in isolation from other applications using a
protected mode.
 The Backup and Restore Center application provides the user with the ability
to back up and restore, at periodic intervals, the files and folders present
on their computers. Backups are stored on the basis of changes made to
the data and incremented automatically to the existing backup. The option
to completely back up all data on the PC is also available in selected editions
of Windows Vista, wherein an image can be created on hard drives or
DVDs. In case of a hardware or software failure, complete PC Restore
can be easily performed and data loss can be prevented.
 Windows DVD Maker brings native support to Windows Movie Maker
for creating custom DVDs based on the user’s content. Operations like
designing a Title, menu, video, soundtrack, pan and zoom motion effects on
pictures or slides, can be easily performed.
 Windows Media Center, which used to be a separate edition of Windows
XP, known as Windows XP Media Center Edition, now comes integrated
with Windows Vista in the Home Premium and Ultimate editions.
 Windows Mobility Center refers to a control panel which centralizes the
most relevant information related to mobile computing, such as brightness,
sound, battery level / power scheme selection, wireless network, screen
orientation, presentation settings, etc.
 NetMeeting has been replaced by Windows Meeting Space. Windows
Meeting Space may be used for sharing data or the entire desktop with
other users of this application who are connected over the Intranet, LAN,
or Internet, using the peer-to-peer technology.
Windows CE
The Windows Embedded Compact (CE) is an operating system optimized for
devices with minimal hardware resources, such as embedded devices and
handhelds. It integrates advanced and reliable real-time capabilities with Windows
technology. The kernel of this OS isn’t just a trimmed down version of desktop
Windows, but is in fact, a brand new kernel which can run on less than a megabyte
of memory. Besides the advantage of performing on a minimum specification, it is
also an OS which satisfies the prerequisites of a real time operating system. Another
distinct feature of Windows CE is that it was made available in a source code form
to several hardware manufacturers so that they could modify the OS to suit
their hardware and also to the general public. Since Windows CE was developed
as a component based and embedded operating system, it has been used as the
basis for the development of several mobile operating systems, such as AutoPC,
PocketPC, Windows Mobile, Smartphone, etc. and also embedded into games
consoles, such as Microsoft Xbox.
Check Your Progress
1. What are the different types of software based on function?
2. Define the term COTS.
10.4 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS
1. System software, application software and programming software are the types of software based on function.
2. COTS is a term used for software and hardware technology which is
available to the general public for sale, license or lease. In other words, to
use COTS software, you must pay its developer in one way or another.
10.5 SUMMARY
 A computer cannot operate without any instructions and is based on a logical
sequence of instructions in order to perform a function. These instructions
are known as a ‘computer program’, and constitute the computer software.
 System Software is involved in managing and controlling the operations of a
computer system. System software is a group of programs rather than one
program and is responsible for using computer resources efficiently and
effectively.
 System software constitutes all the programs, languages and documentation
provided by the manufacturer in the computer. These programs provide the
user with access to the system so that he can communicate with the
computer and write or develop his own programs.
 An Operating System (OS) is the main control program for handling all
other programs in a computer.
 COTS is a term used for software and hardware technology which is
available to the general public for sale, license or lease.
 Currently, the main use of UNIX systems is for Internet or network servers.
Commercial organizations also use this OS for workstations and data servers.
UNIX has been used as the base for other operating systems.
 The Microsoft Disk Operating System (MS DOS) is a single-user, single-
tasking operating system offered by Microsoft. It was the most widely used
operating system for Personal Computers (PC) in the 1980s and Microsoft’s
first commercialized operating system offering.
10.6 KEY WORDS
 Software: It is a general term for the various kinds of programs used to
operate computers and related devices.
 Operating system: It is a system software that manages computer
hardware and software resources and provides common services for
computer programs.
10.7 SELF ASSESSMENT QUESTIONS AND EXERCISES
Short Answer Questions
1. Discuss the significance of system and application software.
2. What are licensed and free domain software?
3. What are the most widely used application software?
Long Answer Questions
1. Explain the different types of computer software.
2. Write a detailed note on UNIX and Linux OS.
3. What are the key features of Windows XP?
10.8 FURTHER READINGS
Bhatt, Pramod Chandra P. 2003. An Introduction to Operating Systems—
Concepts and Practice. New Delhi: PHI.
Bhattacharjee, Satyapriya. 2001. A Textbook of Client/Server Computing. New
Delhi: Dominant Publishers and Distributors.
Hamacher, V.C., Vranesic, Z.G. and Zaky, S.G. 2002. Computer Organization,
5th edition. New York: McGraw-Hill International Edition.
Mano, M. Morris. 1993. Computer System Architecture, 3rd edition. New Jersey:
Prentice-Hall Inc.
Nutt, Gary. 2006. Operating Systems. New Delhi: Pearson Education.
UNIT 11 SOFTWARE DEVELOPMENT
Structure
11.0 Introduction
11.1 Objectives
11.2 Design and Testing Requirement Analysis
11.3 Design Process
11.4 Models for System Development
11.5 Software Testing Life Cycle
11.6 Software Testing
11.7 Software Paradigms and Programming Methods
11.8 Answers to Check Your Progress Questions
11.9 Summary
11.10 Key Words
11.11 Self Assessment Questions and Exercises
11.12 Further Readings
11.0 INTRODUCTION
In this unit, you will learn about the process of software development and
testing. Software development process is the process of dividing software
development work into distinct phases to improve design, product management,
and project management. It is also known as a software development life cycle.
You will also learn about the various levels of software testing.
11.1 OBJECTIVES
After going through this unit, you will be able to:
 Understand the design and testing requirement in software development
 Explain the principles of software design
 Explain the different models for system development
 Understand the software testing life cycle
 Explain the various levels of software testing
 Discuss the various programming methods
11.2 DESIGN AND TESTING REQUIREMENT ANALYSIS
Requirement is a condition or capability possessed by the software or system
component in order to solve a real-world problem. The problems can be to
automate a part of system, to correct the shortcomings of an existing system, to
control a device, and so on. IEEE defines requirement as ‘(1) A condition or
capability needed by a user to solve a problem or achieve an objective. (2) A
condition or capability that must be met or possessed by a system or system
component to satisfy a contract, standard, specification, or other formally
imposed documents. (3) A documented representation of a condition or
capability as in (1) or (2).’
Requirements describe how a system should act, appear or perform. For
this, when users request for software, they provide an approximation of what the
new system should be capable of doing. Requirements differ from one user to
another and from one business process to another.
Types of Requirements
Requirements help to understand the behaviour of a system, which is described by
various tasks of the system. For example, some of the tasks of a system are to
provide a response to input values, to determine the state of data objects, and so
on. Note that requirements are considered prior to the development of the software.
The requirements that are commonly considered are classified into three categories,
namely, functional requirements, non-functional requirements and domain
requirements (see Figure 11.1).
Fig. 11.1 Types of Requirements
Functional Requirements
IEEE defines functional requirements as ‘a function that a system or component
must be able to perform.’ These requirements describe the interaction of software
with its environment and specify the inputs, outputs, external interfaces and the
functions that should be included in the software. Also, the services provided by
functional requirements specify the procedure by which the software should react
to particular inputs or behave in particular situations.
To understand functional requirements properly, let us consider the following
example of an online banking system:
 The user of the bank should be able to search the desired services from the
available ones.
 There should be appropriate documents for the user to read. This implies
that when a user wants to open an account in the bank, the forms must be
available so that the user can open an account.
 After registration, the user should be provided with a unique
acknowledgement number so that he can later be given an account
number.
The aforementioned functional requirements describe the specific services
provided by the online banking system. These requirements indicate user
requirements and specify that functional requirements may be described at different
levels of detail in an online banking system. With the help of these functional
requirements, users can easily view, search and download registration forms and
other information about the bank. On the other hand, if requirements are not stated
properly, they are misinterpreted by software engineers and user requirements are
not met.
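Purely as an illustration (the class and operation names below are hypothetical and are not taken from any standard or from the text above), the banking requirements just discussed could eventually be reflected in a C++ interface, with each stated service mapping onto one verifiable operation:

    #include <string>
    #include <vector>

    // Hypothetical interface sketch: each operation corresponds to one of the
    // functional requirements stated for the online banking example.
    class OnlineBankingService {
    public:
        virtual ~OnlineBankingService() = default;
        // The user should be able to search the desired services from the available ones.
        virtual std::vector<std::string> searchServices(const std::string& keyword) = 0;
        // Appropriate documents (such as account-opening forms) should be available to the user.
        virtual std::string downloadAccountOpeningForm() const = 0;
        // After registration, the user should be provided with a unique acknowledgement number.
        virtual std::string registerUser(const std::string& filledForm) = 0;
    };

Expressing each requirement as a distinct operation in this way makes it easier, later during testing, to check that every stated service has actually been provided.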
Non-Functional Requirements
The non-functional requirements (also known as quality requirements) relate to
system attributes such as reliability and response time. Non-functional requirements
arise due to user requirements, budget constraints, organizational policies, and so
on. These requirements are not related directly to any particular function provided
by the system.
Non-functional requirements should be accomplished in software to make
it perform efficiently. For example, if an aeroplane is unable to fulfill reliability
requirements, it is not approved for safe operation. Similarly, if a real-time control
system is ineffective in accomplishing non-functional requirements, the control
functions cannot operate correctly. Different types of non-functional requirements
are shown in Figure 11.2.
Fig. 11.2 Types of Non-functional Requirements
 Product requirements: These requirements specify how the software product
performs. Product requirements comprise the following:
o Efficiency requirements: Describe the extent to which the software
makes optimal use of resources, the speed with which the system
executes and the memory it consumes for its operation. For example,
the system should be able to operate at least three times faster than the
existing system.
o Reliability requirements: Describe the acceptable failure rate of the
software. For example, the software should be able to operate even if a
hazard occurs.
o Portability requirements: Describe the ease with which the software
can be transferred from one platform to another. For example, it should
be easy to port the software to a different operating system without the
need to redesign the entire software.
o Usability requirements: Describe the ease with which users are able
to operate the software. For example, the software should be able to
provide access to functionality with fewer keystrokes and mouse clicks.
 Organizational requirements: These requirements are derived from the
policies and procedures of an organization. Organizational requirements
comprise the following:
o Delivery requirements: Specify when the software and its
documentation are to be delivered to the user.
o Implementation requirements: Describe requirements such as
programming language and design method.
o Standards requirements: Describe the process standards to be used
during software development. For example, the software should be
developed using standards specified by the ISO (International
Organization for Standardization) and IEEE standards.
 External requirements: These requirements include all the requirements
that affect the software or its development process externally. External
requirements comprise the following:
o Interoperability requirements: Define the way in which different
computer-based systems interact with each other in one or more
organizations.
o Ethical requirements: Specify the rules and regulations of the software
so that they are acceptable to users.
o Legislative requirements: Ensure that the software operates within
the legal jurisdiction. For example, pirated software should not be sold.
Non-functional requirements are difficult to verify. Hence, it is essential to
write non-functional requirements quantitatively so that they can be tested. For
this, non-functional requirements metrics are used.
Domain Requirements
Requirements that are derived from the application domain of a system, instead of
from the needs of the users, are known as domain requirements. These
requirements may be new functional requirements or specify a method to perform
some particular computations. In addition, these requirements include any constraint
that may be present in the existing functional requirements. As domain requirements
reflect the fundamentals of the application domain, it is important to understand
these requirements. Also, if these requirements are not fulfilled, it may be difficult
to make the system work as desired.
A system can include a number of domain requirements. For example, it may
comprise a design constraint that describes a user interface which is capable of
accessing all the databases used in a system. It is important for a development
team to create databases and interface designs as per established standards.
Similarly, the requirements of the user such as copyright restrictions and security
mechanism for the files and documents used in the system are also domain
requirements. When domain requirements are not expressed clearly, it can result
in the following problems:
 Problem of understandability: When domain requirements are specified
in the language of application domain (such as mathematical expressions), it
becomes difficult for software engineers to understand them.
 Problem of implicitness: When domain experts understand the domain
requirements but do not express these requirements clearly, it may create a
problem (due to incomplete information) for the development team to
understand and implement the requirements in the system.
Note: Information about requirements is stored in a database, which helps the software
development team to understand user requirements and develop the software according to
those requirements.
Requirements Analysis
IEEE defines requirements analysis as ‘(1) The process of studying user needs
to arrive at a definition of a system, hardware, or software requirements. (2)
The process of studying and refining system, hardware, or software
requirements.’ Requirements analysis helps to understand, interpret, classify, and
organize the software requirements in order to assess the feasibility, completeness
and consistency of the requirements. The other tasks performed using requirements
analysis are
 To detect and resolve conflicts that arise due to unclear and unspecified
requirements
 To determine operational characteristics of the software and how they
interact with the environment
 To understand the problem for which the software is to be developed
 To develop an analysis model to analyse the requirements in the software.
Software engineers perform analysis modelling and create an analysis model
to provide information of ‘what’ software should do instead of ‘how’ to fulfil the
requirements of the software. This model focuses on the functions that the software
should perform, the behaviour it should exhibit and the constraints that are applied
on the software. This model also determines the relationship of one component
with the other components. Clear and complete requirements specified in the analysis
model help the software development team to develop the software according to
those requirements. An analysis model is created to help the development team to
assess the quality of the software when it is developed. An analysis model helps to
define a set of requirements that can be validated when the software is developed.
Let us consider an example of constructing a study room where the user
knows the dimensions of the room, the location of doors and windows, and the
available wall space. Before constructing the study room, he provides information
about flooring, wallpaper, and so on to the constructor. This information helps the
constructor to analyse the requirements and prepare an analysis model that describes
the requirements. This model also describes what needs to be done to fulfil those
requirements. Similarly, an analysis model created for the software facilitates the
software development team to understand what is required in the software and
then develop it.
In Figure 11.3, the analysis model connects the system description and the
design model. System description provides information about the entire functionality
of the system, which is achieved by implementing the software, hardware and
data. In addition, the analysis model specifies the software design in the form of a
design model which provides information about the software’s architecture, user
interface and component-level structure.
Fig. 11.3 Analysis Model as Connector
The guidelines followed while creating an analysis model are:
 The model should concentrate on the requirements in the problem domain
that need to be fulfilled. However, it should not describe the procedure to
fulfil these requirements in the system.
 Every element of the analysis model should help in understanding the software
requirements. This model should also describe the information domain,
function and behaviour of the system.
 The analysis model should be useful to all stakeholders because every
stakeholder uses this model in his own manner. For example, business
stakeholders use this model to validate requirements, whereas software
designers view this model as a basis for design.
 The analysis model should be as simple as possible. For this, additional
diagrams that depict no new or unnecessary information should be avoided.
Also, abbreviations and acronyms should be used instead of complete
notations.
11.3 DESIGN PROCESS
Software design is a phase in software engineering, in which a blueprint is developed
to serve as a base for constructing the software system. IEEE defines software
design as ‘both a process of defining the architecture, components, interfaces,
and other characteristics of a system or component and the result of that
process.’
In the design phase, many critical and strategic decisions are made to achieve
the desired functionality and quality of the system. These decisions are taken into
account to successfully develop the software and carry out its maintenance in a
way that the quality of the end product is improved.
Principles of Software Design
Developing a design is a cumbersome process as the most expensive errors are often
introduced in this phase. Moreover, if these errors get unnoticed till later phases, it
becomes more difficult to correct them. Therefore, a number of principles are
followed while designing the software. These principles act as a framework for
the designers to follow a good design practice (see Figure 11.4).
Fig. 11.4 Principles of Software Design
Some of the commonly followed design principles are as follows.
 Software design should correspond to the analysis model: Often a
design element corresponds to many requirements, therefore, we must know
how the design model satisfies all the requirements represented by the analysis
model.
 Choose the right programming paradigm: A programming paradigm
describes the structure of the software system. Depending on the nature
and type of application, different programming paradigms such as procedure
oriented, object-oriented, and prototyping paradigms can be used. The
paradigm should be chosen keeping constraints in mind such as time,
availability of resources and nature of user’s requirements.
 Software design should be uniform and integrated: Software design is
considered uniform and integrated, if the interfaces are properly defined
among the design components. For this, rules, format, and styles are
established before the design team starts designing the software.
 Software design should be flexible: Software design should be flexible
enough to adapt changes easily. To achieve the flexibility, the basic design
concepts such as abstraction, refinement, and modularity should be applied
effectively.
 Software design should ensure minimal conceptual (semantic) errors:
The design team must ensure that major conceptual errors of design such as
ambiguousness and inconsistency are addressed in advance before dealing
with the syntactical errors present in the design model.
 Software design should be structured to degrade gently: Software
should be designed to handle unusual changes and circumstances, and if the
need arises for termination, it must do so in a proper manner so that
functionality of the software is not affected.
 Software design should represent correspondence between the
software and real-world problem: The software design should be
structured in such a way that it always relates with the real-world problem.
 Software reuse: Software engineers believe in the phrase ‘do not
reinvent the wheel’. Therefore, software components should be designed
in such a way that they can be effectively reused to increase the productivity.
 Designing for testability: A common practice that has been followed is
to keep the testing phase separate from the design and implementation
phases. That is, first the software is developed (designed and implemented)
and then handed over to the testers who subsequently determine whether
the software is fit for distribution and subsequent use by the customer.
However, it has become apparent that the practice of separating testing is
seriously flawed, since if design or implementation errors are found
after implementation, the entire software or a substantial part of it
needs to be redone. Thus, the test engineers should be involved from the
initial stages. For example, they should be involved with analysts to prepare
tests for determining whether the user requirements are being met.
 Prototyping: Prototyping should be used when the requirements are not
completely defined in the beginning. The user interacts with the developer
to expand and refine the requirements as the development proceeds. Using
prototyping, a quick ‘mock-up’ of the system can be developed. This mock-
up can be used as an effective means to give the users a feel of what the
system will look like and demonstrate functions that will be included in the
developed system. Prototyping also helps in reducing risks of designing
software that is not in accordance with the customer’s requirements.
Note that design principles are often constrained by the existing hardware
configuration, the implementation language, the existing file and data structures,
and the existing organizational practices. Also, the evolution of each software design
should be meticulously documented for future evaluations, references and maintenance.
Software Design Concepts
Every software process is characterized by basic concepts along with certain
practices or methods. Methods represent the manner through which the concepts
are applied. As new technology replaces older technology, many changes occur in
the methods that are used to apply the concepts for the development of software.
However, the fundamental concepts underlining the software design process remain
the same, some of which are described here.
Abstraction
Abstraction refers to a powerful design tool, which allows software designers to
consider components at an abstract level, while neglecting the implementation
details of the components. IEEE defines abstraction as ‘a view of a problem
that extracts the essential information relevant to a particular purpose and
ignores the remainder of the information.’ The concept of abstraction can be
used in two ways: as a process and as an entity. As a process, it refers to a
mechanism of hiding irrelevant details and representing only the essential features
of an item so that one can focus on important things at a time. As an entity, it
refers to a model or view of an item.
Each step in the software process is accomplished through various levels of
abstraction. At the highest level, an outline of the solution to the problem is presented
whereas at the lower levels, the solution to the problem is presented in detail. For
example, in the requirements analysis phase, a solution to the problem is presented
using the language of problem environment and as we proceed through the software
process, the abstraction level reduces and at the lowest level, source code of the
software is produced.
There are three commonly used abstraction mechanisms in software design,
namely, functional abstraction, data abstraction and control abstraction. All
these mechanisms allow us to control the complexity of the design process by
proceeding from the abstract design model to concrete design model in a systematic
manner.
 Functional abstraction: This involves the use of parameterized
subprograms. Functional abstraction can be generalized as collections of
subprograms referred to as ‘groups’. Within these groups there exist routines
which may be visible or hidden. Visible routines can be used within the
containing groups as well as within other groups, whereas hidden routines
are hidden from other groups and can be used within the containing group
only.
 Data abstraction: This involves specifying data that describes a data object.
For example, the data object window encompasses a set of attributes
(window type, window dimension) that describe the window object clearly.
In this abstraction mechanism, representation and manipulation details are
ignored (see the sketch after this list).
 Control abstraction: This states the desired effect, without stating the exact
mechanism of control. For example, if and while statements in programming
languages (like C and C++) are abstractions of machine code
implementations, which involve conditional instructions. In the architectural
design level, this abstraction mechanism permits specifications of sequential
subprogram and exception handlers without the concern for exact details of
implementation.
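A minimal C++ sketch of these mechanisms, using illustrative names only: the Window class shows data abstraction (the attributes of the window object are described while the representation details stay hidden), area() shows functional abstraction (a named, parameterized operation), and the if statement shows control abstraction (the desired effect is stated without reference to the underlying conditional machine instructions).

    #include <iostream>
    #include <string>

    // Data abstraction: the essential attributes of the window object are described;
    // representation and manipulation details are hidden inside the class.
    class Window {
    public:
        Window(const std::string& type, int width, int height)
            : type_(type), width_(width), height_(height) {}
        // Functional abstraction: a named operation hides how the value is computed.
        int area() const { return width_ * height_; }
    private:
        std::string type_;    // window type
        int width_, height_;  // window dimensions
    };

    int main() {
        Window w("dialog", 400, 300);
        // Control abstraction: 'if' states the desired effect, not the machine-level
        // conditional instructions that eventually implement it.
        if (w.area() > 100000) {
            std::cout << "Large window\n";
        } else {
            std::cout << "Small window\n";
        }
        return 0;
    }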
Architecture
Software architecture refers to the structure of the system, which is composed of
various components of a program/system, the attributes (properties) of those
components and the relationship amongst them. The software architecture enables
the software engineers to analyze the software design efficiently. In addition, it also
helps them in decision-making and handling risks. The software architecture does
the following.
 Provides an insight to all the interested stakeholders that enables them to
communicate with each other
 Highlights early design decisions, which have great impact on the software
engineering activities (like coding and testing) that follow the design phase
 Creates intellectual models of how the system is organized into components
and how these components interact with each other.
Currently, software architecture is represented in an informal and unplanned
manner. Though the architectural concepts are often represented in the infrastructure
(for supporting particular architectural styles) and the initial stages of a system
configuration, the lack of an explicit independent characterization of architecture
restricts the advantages of this design concept in the present scenario.
Note that software architecture comprises two elements of design model, namely,
data design and architectural design. Both these elements have been discussed
later in this chapter.
Patterns
A pattern provides a description of the solution to a recurring design problem of
some specific domain in such a way that the solution can be used again and again.
The objective of each pattern is to provide an insight to a designer who can
determine the following.
 Whether the pattern can be reused
 Whether the pattern is applicable to the current project
 Whether the pattern can be used to develop a similar but functionally or
structurally different design pattern.
Types of Design Patterns
Software engineer can use the design pattern during the entire software design
process. When the analysis model is developed, the designer can examine the
problem description at different levels of abstraction to determine whether it
complies with one or more of the following types of design patterns.
 Architectural patterns: These patterns are high-level strategies that refer
to the overall structure and organization of a software system. That is, they
define the elements of a software system such as subsystems, components,
classes, etc. In addition, they also indicate the relationship between the
elements along with the rules and guidelines for specifying these relationships.
Note that architectural patterns are often considered equivalent to software
architecture.
 Design patterns: These patterns are medium-level strategies that are used
to solve design problems. They provide a means for the refinement of the
elements (as defined by architectural pattern) of a software system or the
relationship among them. Specific design elements such as relationship among
components or mechanisms that affect component-to-component interaction
are addressed by design patterns. Note that design patterns are often
considered equivalent to software components (see the sketch after this list).
 Idioms: These patterns are low-level patterns, which are programming-
language specific. They describe the implementation of a software
component, the method used for interaction among software components,
etc., in a specific programming language. Note that idioms are often termed
as coding patterns.
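As one concrete illustration, consider the widely known Singleton design pattern (chosen here only because it is familiar; it is not singled out by this unit). It is a reusable, medium-level solution to the recurring problem of ensuring that a class has exactly one instance:

    #include <iostream>

    // Singleton: a recurring design problem (exactly one instance, globally accessible)
    // solved in a form that can be reused across many different projects.
    class Logger {
    public:
        static Logger& instance() {
            static Logger single;   // created once, on first use
            return single;
        }
        void log(const char* message) { std::cout << "LOG: " << message << '\n'; }
    private:
        Logger() = default;                        // construction restricted
        Logger(const Logger&) = delete;            // copying disallowed
        Logger& operator=(const Logger&) = delete;
    };

    int main() {
        Logger::instance().log("application started");
        return 0;
    }

The same structure can be applied, essentially unchanged, wherever the ‘exactly one instance’ problem recurs, which is what distinguishes a pattern from a one-off design decision.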
Modularity
Modularity is achieved by dividing the software into uniquely named and
addressable components, which are also known as modules. A complex system
(large program) is partitioned into a set of discrete modules in such a way that
each module can be developed independent of other modules (see Figure 11.5).
After developing the modules, they are integrated together to meet the software
requirements. Note that the larger the number of modules a system is divided
into, the greater will be the effort required to integrate the modules.
Fig. 11.5 Modules in Software Programs
Modularizing a design helps to plan the development in a more effective
manner, accommodate changes easily, conduct testing and debugging effectively
and efficiently, and conduct maintenance work without adversely affecting the
functioning of the software.
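The idea can be sketched in C++ as follows (the module and function names are illustrative; in a real system each module would typically live in its own source file): the program is partitioned into uniquely named modules that can be developed and tested independently and then integrated.

    #include <iostream>

    // Module 1: input handling, developed independently of the other modules.
    namespace input_module {
        int read_value() { return 42; }   // stands in for real input logic
    }

    // Module 2: processing, developed independently of the input module.
    namespace processing_module {
        int doubled(int x) { return 2 * x; }
    }

    // Integration: the independently developed modules are combined to meet
    // the overall software requirement.
    int main() {
        int value = input_module::read_value();
        std::cout << processing_module::doubled(value) << '\n';
        return 0;
    }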
Information Hiding
Modules should be specified and designed in such a way that the data structures
and processing details of one module are not accessible to other modules (see
Figure 11.6). They pass only that much information to each other, which is required
to accomplish the software functions. The way of hiding unnecessary details is
referred to as information hiding. IEEE defines information hiding as ‘the
technique of encapsulating software design decisions in modules in such a
way that the module’s interfaces reveal as little as possible about the module’s
inner workings; thus each module is a ‘black box’ to the other modules in
the system.’
Fig. 11.6 Information Hiding
Information hiding is of immense use when modifications are required during the
testing and maintenance phase. Some of the advantages associated with information
hiding are listed below.
 Leads to low coupling
 Emphasizes communication through controlled interfaces
 Decreases the probability of adverse effects
 Restricts the effects of changes in one component on others
 Results in higher quality software.
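A brief C++ sketch of information hiding (the names are illustrative): the module’s interface reveals only the operations other modules need, while the internal data structure remains a ‘black box’ that can be changed without affecting callers.

    #include <iostream>
    #include <string>

    // The public interface exposes only what other modules need (deposit, balance);
    // the internal representation is hidden and can change without affecting callers.
    class Account {
    public:
        explicit Account(const std::string& owner) : owner_(owner), balance_(0) {}
        void deposit(long amount) { balance_ += amount; }   // controlled access only
        long balance() const { return balance_; }
    private:
        std::string owner_;   // hidden: other modules cannot touch these members directly
        long balance_;        // hidden: the stored representation could change freely
    };

    int main() {
        Account a("student");
        a.deposit(500);
        std::cout << a.balance() << '\n';
        return 0;
    }

Because other modules communicate with Account only through this controlled interface, coupling stays low and a change to the hidden representation does not ripple through the rest of the system.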
11.4 MODELS FOR SYSTEM DEVELOPMENT
A process model can be defined as a strategy (also known as software
engineering paradigm), comprising process, methods, and tools layers as well
as the general phases (as described in Chapter 1) for developing the software. It
provides a basis for controlling various activities required to develop and maintain
the software. In addition, it helps the software development team in facilitating and
understanding the activities involved in the project.
A process model for software engineering depends on the nature and
application of the software project. Thus, it is essential to define process models
for each software project. IEEE defines a process model as ‘a framework
containing the processes, activities, and tasks involved in the development,
operation, and maintenance of a software product, spanning the life of the
system from the definition of its requirements to the termination of its use.’ A
process model reflects the goals of software development such as developing a
high quality product and meeting the schedule on time. In addition, it provides a
flexible framework for enhancing the processes. Other advantages of the software
process model are listed below.
 Enables effective communication: It enhances understanding and
provides a specific basis for process execution.
 Facilitates process reuse: Process development is a time consuming and
expensive activity. Thus, the software development team utilizes the existing
processes for different projects.
 Effective: Since process models can be used again and again; reusable
processes provide an effective means for implementing processes for
software development.
 Facilitates process management: Process models provide a framework
for defining process status criteria and measures for software development.
Thus, effective management is essential to provide a clear description of the
plans for the software project.
Every software development process model takes requirements as input
and delivers products as output. However, a process should detect defects in the
phases in which they occur. This requires Verification and Validation (V&V) of the
products after each and every phase of the software development life cycle (see
Figure 11.7).

Fig. 11.7 Development Process
Verification is the process of evaluating a system or its components for
development. IEEE defines verification as ‘a process for determining whether
the software products of an activity fulfill the requirements or conditions
imposed on them in the previous activities.’ Thus, verification confirms that the
product is transformed from one form to another as intended and with sufficient
accuracy.
Validation is the process of evaluating the product at the end of each phase
to check whether the requirements are fulfilled. In addition, it is the process of
establishing a procedure and method, which provides the intended outputs. IEEE
defines validation as ‘a process for determining whether the requirements and
the final, as-built system or software product fulfils its specific intended use.’
Thus, validation substantiates the software functions with sufficient accuracy with
respect to its requirements specification.
Various kinds of process models are waterfall model, prototyping model,
spiral model, incremental model, time-boxing model, RAD model, V model,
build and fix model, and formal method model.
Waterfall Model
In the waterfall model (also known as the classical life cycle model), the
development of software proceeds linearly and sequentially from requirement
analysis to design, coding, testing, integration, implementation, and maintenance.
Thus, this model is also known as the linear sequential model.
Fig. 11.8 The Waterfall Model
This model is simple to understand and represents processes which are
easy to manage and measure. The waterfall model comprises different phases and
each phase has its distinct goal. Figure 11.8 shows that after the completion of one
phase, the development of software moves to the next phase. Each phase modifies
the intermediate product to develop a new product as an output. The new product
becomes the input of the next process. Table 11.1 lists the inputs and outputs of
each phase of waterfall model.
Table 11.1 Inputs and Outputs of each Phase of Waterfall Model
Input to the Phase | Process/Phase | Output of the Phase
Requirements defined through communication | Requirements analysis | Software requirements specification document
Software requirements specification document | Design | Design specification document
Design specification document | Coding | Executable software modules
Executable software modules | Testing | Integrated product
Integrated product | Implementation | Delivered software
Delivered software | Maintenance | Changed requirements

below.
 System/information engineering modeling: This phase establishes the
requirements for all parts of the system. Software being a part of the larger
NOTES
system, a subset of these requirements is allocated to it. This system view is
necessary when software interacts with other parts of the system including
hardware, databases, and people. System engineering includes collecting
requirements at the system level while information engineering includes
collecting requirements at a level where all decisions regarding business
strategies are taken.
 Requirements analysis: This phase focuses on the requirements of the
software to be developed. It determines the processes that are to be
incorporated during the development of the software. To specify the
requirements, users’ specifications should be clearly understood and their
requirements be analyzed. This phase involves interaction between the users
and the software engineers and produces a document known as Software
Requirements Specification (SRS).
 Design: This phase determines the detailed process of developing the
software after the requirements have been analyzed. It utilizes software
requirements defined by the user and translates them into software
representation. In this phase, the emphasis is on finding solutions to the
problems defined in the requirements analysis phase. The software engineer
is mainly concerned with the data structure, algorithmic detail and interface
representations.
 Coding: This phase emphasizes translation of design into a programming
language using the coding style and guidelines. The programs created should
be easy to read and understand. All the programs written are documented
according to the specification.
 Testing: This phase ensures that the software is developed as per the user’s
requirements. Testing is done to check that the software is running efficiently
and with minimum errors. It focuses on the internal logic and external functions
of the software and ensures that all the statements have been exercised
(tested). Note that testing is a multistage activity, which emphasizes
verification and validation of the software.
 Implementation and maintenance: This phase delivers fully functioning
operational software to the user. Once the software is accepted and deployed
at the user’s end, various changes occur due to changes in the external
environment (these include upgrading a new operating system or addition
of a new peripheral device). The changes also occur due to changing
requirements of the user and changes occurring in the field of technology.
This phase focuses on modifying software, correcting errors, and improving
the performance of the software.
Various advantages and disadvantages associated with the waterfall model
are listed in Table 11.2.
Table 11.2 Advantages and Disadvantages of Waterfall Model
Advantages:
 Relatively simple to understand.
 Each phase of development proceeds sequentially.
 Allows managerial control where a schedule with deadlines is set for each stage of development.
 Helps in controlling schedules, budgets, and documentation.
Disadvantages:
 Requirements need to be specified before the development proceeds.
 Changes of requirements in later phases of the waterfall model cannot be accommodated. This implies that once the software enters the testing phase, it becomes difficult to incorporate changes at such a late phase.
 There is no user involvement, and no working version of the software is available while the software is being developed.
 Does not involve risk management.
 Assumes that requirements are stable and are frozen across the project span.
Prototyping Model
The prototyping model is applied when detailed information related to input and
output requirements of the system is not available. In this model, it is assumed that
all the requirements may not be known at the start of the development of the
system. It is usually used when a system does not exist or in case of a large and
complex system where there is no manual process to determine the requirements.
This model allows the users to interact and experiment with a working model of
the system known as prototype. The prototype gives the user an actual feel of the
system.
At any stage, if the user is not satisfied with the prototype, it can be discarded and
an entirely new system can be developed. Generally, a prototype can be prepared
by the approaches listed below.
 By creating main user interfaces without any substantial coding so that users
can get a feel of how the actual system will appear
 By abbreviating a version of the system that will perform limited subsets of
functions
 By using system components to illustrate the functions that will be included
in the system to be developed.
Fig. 11.9 The Prototyping Model
Figure 11.9 illustrates the steps carried out in the prototyping model. These steps
are listed below.
1. Requirements gathering and analysis: A prototyping model begins with
requirements analysis and the requirements of the system are defined in
detail. The user is interviewed in order to know the requirements of the
system.
2. Quick design: When requirements are known, a preliminary design or
quick design for the system is created. It is not a detailed design and includes
only the important aspects of the system, which gives an idea of the system
to the user. A quick design helps in developing the prototype.
3. Build prototype: Information gathered from quick design is modified to
form the first prototype, which represents the working model of the required
system.
4. User evaluation: Next, the proposed system is presented to the user for
thorough evaluation of the prototype to recognize its strengths and
weaknesses such as what is to be added or removed. Comments and
suggestions are collected from the users and provided to the developer.
5. Refining prototype: Once the user evaluates the prototype and if he is not
satisfied, the current prototype is refined according to the requirements.
That is, a new prototype is developed with the additional information
provided by the user. The new prototype is evaluated just like the previous
prototype. This process continues until all the requirements specified by the
user are met. Once the user is satisfied with the developed prototype, a
final system is developed on the basis of the final prototype.
6. Engineer product: Once the requirements are completely met, the user
accepts the final prototype. The final system is evaluated thoroughly followed
by the routine maintenance on regular basis for preventing large-scale failures
and minimizing downtime.
Various advantages and disadvantages associated with the prototyping model
are listed in Table 11.3.
Spiral Model
In the 1980s, Boehm introduced a process model known as the spiral model. The
spiral model comprises activities organized in a spiral, and has many cycles. This
model combines the features of the prototyping model and waterfall model and is
advantageous for large, complex, and expensive projects. It determines
requirements problems in developing the prototypes. In addition, it guides and
measures the need of risk management in each cycle of the spiral model. IEEE
defines the spiral model as ‘a model of the software development process in
which the constituent activities, typically requirements analysis, preliminary
and detailed design, coding, integration, and testing, are performed iteratively
until the software is complete.’
The objective of the spiral model is to emphasize management to evaluate
and resolve risks in the software project. Different areas of risks in the software
project are project overruns, changed requirements, loss of key project personnel,
delay of necessary hardware, competition with other software developers and
technological breakthroughs, which make the project obsolete. Figure 11.10 shows
the spiral model.
Table 11.3 Advantages and Disadvantages of Prototyping Model
Fig. 11.10 The Spiral Model
The steps involved in the spiral model are listed below.
1. Each cycle of the first quadrant commences with identifying the goals for
that cycle. In addition, it determines other alternatives, which are possible in
accomplishing those goals.
2. The next step in the cycle evaluates alternatives based on objectives
and constraints. This process identifies the areas of uncertainty and focuses
on significant sources of the project risks. Risk signifies that there is a
possibility that the objectives of the project cannot be accomplished. If so,
the formulation of a cost-effective strategy for resolving risks is followed.
Figure 11.10 shows the strategy, which includes prototyping, simulation,
benchmarking, administrating user, questionnaires, and risk resolution
technique.
3. The development of the software depends on remaining risks. The third
quadrant develops the final software while considering the risks that can
occur. Risk management considers the time and effort to be devoted to
each project activity such as planning, configuration management, quality
assurance, verification, and testing.
4. The last quadrant plans the next step and includes planning for the next
prototype and thus, comprises the requirements plan, development plan,
integration plan, and test plan.
One of the key features of the spiral model is that each cycle is completed
by a review conducted by the individuals or users. This includes the review of all
the intermediate products, which are developed during the cycles. In addition, it
includes the plan for the next cycle and the resources required for that cycle.
The spiral model is similar to the waterfall model as software requirements
are understood at the early stages in both the models. However, the major risks
involved with developing the final software are resolved in the spiral model. When
these issues are resolved, a detailed design of the software is developed. Note
that processes in the waterfall model are followed by different cycles in the spiral
model as shown in Figure 11.11.
Fig. 11.11 Spiral and Waterfall Models
The spiral model is also similar to the prototyping model as one of the key
features of prototyping is to develop a prototype until the user requirements are
accomplished. The second step of the spiral model functions similarly. The prototype
is developed to clearly understand and achieve the user requirements. If the user
is not satisfied with the prototype, a new prototype known as operational
prototype is developed.
Various advantages and disadvantages associated with the spiral model are
listed in Table 11.4.
Table 11.4 Advantages and Disadvantages of Spiral Model
Advantages:
 Avoids many problems through its risk-driven approach to software development.
 Specifies a mechanism for software quality assurance activities.
 Is utilized by complex and dynamic projects.
 Re-evaluation after each step allows changes in user perspectives, technology advances, or financial perspectives.
 Estimation of budget and schedule gets realistic as the work progresses.
Disadvantages:
 Assessment of project risks and their resolution is not an easy task.
 Difficult to estimate budget and schedule in the beginning as some of the analysis is not done until the design of the software is developed.

11.5 SOFTWARE TESTING LIFE CYCLE

As already mentioned, software testing determines the correctness, completeness
and quality of software being developed. IEEE defines testing as ‘the process of
exercising or evaluating a system or system component by manual or
automated means to verify that it satisfies specified requirements or to identify
differences between expected and actual results.’
The activities involved in the testing phase basically evaluate the capability
of the developed system and ensure that the system meets the desired requirements.
It should be noted that testing is fruitful only if it is performed in the correct manner.
Through effective software testing, a software can be examined for correctness,
comprehensiveness, consistency and adherence to standards. This helps in delivering
high-quality software products and lowering maintenance costs, thus leading to
more contented users.
Validation and Verification
Software testing is closely related to the terms verification and validation.
Verification refers to the process of ensuring that the software is being developed
according to its specifications. For verification, techniques such as reviews, analyses,
inspections and walkthroughs are commonly used. Validation, on the other hand, refers
to the process of checking that the developed software meets the requirements specified
by the user. Verification and validation can thus be summarized as:
Verification: Is the software being developed in the right way?
Validation: Is the right software being developed?

Fig. 11.12 Advantages of Software Testing


Software testing is performed either manually or by using automated tools to make
sure that the software is functioning in accordance with user requirements. Various
advantages associated with testing are (see Figure 11.12):
 It removes errors, which prevent the software from producing output
according to user requirements.
 It removes errors that lead to software failure.
 It ensures that the software conforms to business as well as user needs.
 It ensures that the software is being developed according to user
requirements.
 It improves the quality of the software by removing maximum possible errors
from it.
Testing in Software Development Life Cycle
Software testing comprises a set of activities which are planned before testing
begins. These activities are carried out for detecting errors that occur during various
phases of the software development life cycle (SDLC). The role of testing in the
SDLC is listed in Table 11.5.
Table 11.5 Role of Testing in Various Phases of SDLC
Software Development Phase: Testing Activities

Requirements specification
 To identify the test strategy
 To check the sufficiency of requirements
 To create functional test conditions

Design
 To check the consistency of design with the requirements
 To check the sufficiency of design
 To create structural and functional test conditions

Coding
 To check the consistency of implementation with the design
 To check the sufficiency of implementation
 To create structural and functional test conditions for programs/units

Testing
 To check the sufficiency of the test plan
 To test the application programs

Installation and maintenance
 To put the tested system under operation
 To make changes in the system and retest the modified system

Software testing is aimed at identifying any bugs, errors, faults or failures (if any)
present in the software. Bug is defined as a logical mistake which is caused by a
software developer while writing the software code. Error is defined as the measure
of deviation of the output given by the software from the outputs expected by the
user. Fault is defined as the condition that leads to malfunctioning of the software.
Malfunctioning of a software is caused due to several reasons such as change in
the design, architecture or software code. The defect that causes an error in
operation or has a negative impact is called a failure. Failure is defined as that state of
software under which it is unable to perform functions according to user
requirements. Bugs, errors, faults and failures prevent software from performing
efficiently and hence cause the software to produce unexpected output. Errors
can be present in the software due to the following reasons:
 Programming errors: Programmers can make mistakes while developing
the source code.
 Unclear requirements: The user is not clear about the desired requirements
or the developers are unable to understand user requirements in a clear and
concise manner.
 Software complexity: The greater the complexity of the software, the
more the scope of committing an error (especially by an inexperienced
developer).
 Changing requirements: Users usually keep on changing their
requirements, and it becomes difficult to handle such changes in the later
stage of the development process. Therefore, there are chances of making
mistakes while incorporating these changes in the software.
 Time pressures: Maintaining schedule of software projects is difficult.
When deadlines are not met, the attempt to speed up the work causes
errors.
 Poorly documented code: If the code is not well documented or well
written, then maintaining and modifying it becomes difficult. This causes
errors.
Note: In this unit, ‘error’ is used as a general term for ‘bugs’, ‘errors’, ‘faults’ and ‘failures’.

Check Your Progress


1. What are the different types of requirements?
2. Define software design.

11.6 SOFTWARE TESTING

A test plan describes how testing is accomplished. It is a document that specifies
the purpose, scope and method of software testing. It determines the testing tasks
and the persons involved in executing those tasks, the test items and the features
to be tested. It also describes the environment for testing, and the test design and
measurement techniques to be used. Note that a properly defined test plan is an
agreement between testers and users describing the role of testing in software.
A complete test plan helps the people who are not involved in the test group
to understand why product validation is needed and how it is to be performed.
However, if the test plan is not complete, it might not be possible to check how the
software operates when installed on different operating systems or when used
with other software. To avoid this problem, IEEE states some components that
should be covered in a test plan. These components are listed in Table 11.6.
Table 11.6 Components of a Test Plan
Component: Purpose

Responsibilities: Assigns responsibilities to different people and keeps them focused.
Assumptions: Avoids any misinterpretation of schedules.
Test: Provides an abstract of the entire process and outlines specific tests; the testing scope, schedule and duration are also outlined.
Communication: A communication plan (who, what, when and how, about the people involved) is developed.
Risk analysis: Identifies the areas that are critical for success.
Defect reporting: Specifies the way in which a defect should be documented so that it can be reproduced, retested and fixed.
Environment: Describes the data, interfaces, work area and the technical environment used in testing; all this is specified to reduce or eliminate misunderstandings and sources of potential delay.

Steps in Development of a Test Plan


A carefully developed test plan facilitates effective test execution, proper analysis
of errors and preparation of the error report. To develop a test plan, a number of
steps are followed (see Figure 11.13):
 Set objectives of test plan: Before developing a test plan, it is necessary
to understand its purpose. But, before determining the objectives of a test
plan, it is necessary to determine the objectives of the software. This is
because the objectives of a test plan are highly dependent on the objectives
of the software. For example, if the objective of the software is to accomplish
all user requirements, then a test plan is generated to meet this objective.
 Develop a test matrix: A test matrix indicates the components of the
software that are to be tested. It also specifies the tests required to check
these components. Test matrix is also used as a test proof to show that a
test exists for all components of the software that require testing. In addition,
test matrix is used to indicate the testing method, which is used to test the
entire software.
 Develop test administrative component: A test plan must be prepared
within a fixed time so that software testing can begin as soon as possible.
The purpose of an administrative component of a test plan is to specify the
time schedule and resources (administrative people involved while developing
the test plan) required to execute the test plan. However, if the implementation
plan (plan that describes how the processes in the software are carried out)
of software changes, the test plan also changes. In this case, the schedule to
execute the test plan also gets affected.
 Write the test plan: The components of a test plan such as its objectives,
test matrix and administrative component are documented. All these
documents are then collected together to form a complete test plan. These
documents are organized either in an informal or in a formal manner.

Fig. 11.13 Steps Involved in a Test Plan

In the informal manner, all the documents are collected and kept together.
The testers read all the documents to extract information required for testing
the software. On the other hand, in the formal manner, the important points
are extracted from the documents and kept together. This makes it easy for
testers to analyse only important information, which they require during software
testing.

Fig. 11.14 Test Plan


A test plan is shown in Figure 11.14. This plan has many sections:
 Overview: Describes the objectives and functions of the software that are
to be performed. It also describes the objectives of the test plan, such as
defining responsibilities, identifying test environment and giving a complete
detail of the sources from where the information is gathered to develop the
test plan.
 Test scope: Specifies features and combination of features which are to
be tested. These features may include user manuals or system documents.
It also specifies the features and their combinations that are not to be tested.
 Test methodologies: Specifies the types of tests required for testing features
and combination of these features, such as regression tests and stress tests.
It also provides description of sources of test data along with how test data
is useful to ensure that testing is adequate, such as selection of boundary or
null values. In addition, it describes the procedure for identifying and
recording test results.
 Test phases: Identifies different types of tests, such as unit testing and
integration testing, and provides a brief description of the process used to
perform these tests. Moreover, it identifies the testers who are responsible
for performing testing and provides a detailed description of the source and
type of data to be used. It also describes the procedure of evaluating test
results and describes the work products that are initiated or completed in
this phase.
 Test environment: Identifies the hardware, software, automated testing
tools, operating system, compilers and sites required to perform testing, as
well as the staffing and training needs.
 Schedule: Provides a detailed schedule of testing activities and assigns
responsibilities to the respective people. In addition, it indicates dependencies
of testing activities and the time frames for them.
 Approvals and distribution: Identifies the individuals who approve a test
plan and its results. It also identifies the people to whom the test plan
document(s) is distributed.
Test Case Design
A test case provides the description of inputs and their expected outputs to observe
whether the software or a part of the software is working correctly. IEEE defines
test case as ‘a set of input values, execution preconditions, expected results
and execution post conditions, developed for a particular objective or test
condition such as to exercise a particular program path or to verify
compliance with a specific requirement.’ Generally, a test case is associated
with details of identifier, name, purpose, required inputs, test conditions and
expected outputs.
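These details can be captured in a simple record. The following Python sketch is an assumption made purely for illustration (the field names and the withdraw example are not a prescribed IEEE format); it shows how a test case might bundle its identifier, purpose, inputs, conditions and expected output:

from dataclasses import dataclass, field

@dataclass
class TestCase:
    # Illustrative test case record; the field names are assumptions
    identifier: str                # unique ID, e.g., 'TC-101'
    name: str                      # short descriptive name
    purpose: str                   # objective or requirement being verified
    inputs: dict                   # input values fed to the unit under test
    preconditions: list = field(default_factory=list)   # execution preconditions
    expected_output: object = None                       # expected result
    postconditions: list = field(default_factory=list)  # execution post conditions

# Example: a test case for a hypothetical withdraw operation
tc = TestCase(
    identifier='TC-101',
    name='Withdraw within balance',
    purpose='Verify that withdrawal succeeds when amount <= balance',
    inputs={'balance': 1000, 'amount': 400},
    preconditions=['account is active'],
    expected_output=600,
)
print(tc.identifier, tc.expected_output)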
Incomplete and incorrect test cases lead to incorrect and erroneous test outputs.
To avoid this, the test cases must be prepared in such a way that they check the
software with all possible inputs. This process is known as exhaustive testing,
and the test case which is able to perform exhaustive testing is known as ideal test
case. Generally, a test case is unable to perform exhaustive testing; therefore, a
test case that gives satisfactory results is selected. In order to select a test case,
certain questions should be addressed:
 How to select a test case?
 On what basis are certain elements of program included or excluded from a
test case?
To provide an answer to these questions, test selection criterion is used,
which specifies the conditions to be met by a set of test cases designed for a given
program. For example, if the criterion is to exercise all the control statements of a
program at least once, then a set of test cases which meets the specified condition
should be selected.
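As a minimal illustration of such a criterion (the function below is a hypothetical example, not taken from the text), the Python fragment contains one control statement; a set of test cases satisfying the criterion ‘exercise all control statements at least once’ must therefore include at least one input that takes the true branch and one that takes the false branch:

def classify_marks(marks):
    # Unit under test: a single 'if' control statement (hypothetical example)
    if marks >= 50:
        return 'pass'
    return 'fail'

# Test set chosen so that both branches of the 'if' are exercised at least once
selected_test_cases = [
    {'input': 75, 'expected': 'pass'},   # takes the true branch
    {'input': 30, 'expected': 'fail'},   # takes the false branch
]

for case in selected_test_cases:
    assert classify_marks(case['input']) == case['expected']
print('Selected test cases satisfy the chosen coverage criterion')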
Test Case Generation
The process of generating test cases helps in identifying the problems that exist in
the software requirements and design. For generating a test case, first the criterion
to evaluate a set of test cases is specified and then the set of test cases satisfying
that criterion is generated. There are two methods used to generate test cases:
 Code-based test case generation: This approach, also known as
structure-based test case generation, is used to assess the entire software
code to generate test cases. It considers only the actual software code to
generate test cases and is not concerned with user requirements. Test cases
developed using this approach are generally used for performing unit testing.
These test cases can easily test statements, branches, special values and
symbols present in the unit being tested.
 Specification-based test case generation: This approach uses
specifications that indicate the functions that are produced by the software
to generate test cases. In other words, it considers only the external view of
the software to generate test cases. It is generally used for integration testing
and system testing to ensure that the software is performing the required
task. Since this approach considers only the external view of the software,
it does not test the design decisions and may not cover all statements of a
program. Moreover, as test cases are derived from specifications, the errors
present in these specifications may remain uncovered.
Several tools, known as test case generators, are used for generating test
cases. In addition to test case generation, these tools specify the components of
the software that are to be tested. An example of a test case generator is ‘Astra
QuickTest’, which captures business processes in a visual map and generates data-
driven tests automatically.
Test Case Specifications
A test plan neither deals with the details of testing individual units nor specifies the
test cases to be used for testing them. Thus, test case specification is done in order to
test each unit separately. Depending on the testing method specified in a test plan,
the features of the unit to be tested are determined. The overall approach stated in
the test plan is refined into two parts: specific test methods and the evaluation
criteria. Based on these test methods and the criteria, the test cases to test the unit
are specified.
For each unit being tested, these test case specifications describe the test
cases, required inputs for test cases, test conditions and the expected outputs
from the test cases. Generally, it is required to specify the test cases before using
them for testing. This is because the effectiveness of testing depends to a great
extent on the nature of test cases.
Test case specifications are written in the form of a document. This is because
the quality of test cases is evaluated by performing a test case review, which
requires a formal document. The review of test case document ensures that the
test cases satisfy the chosen criteria and conform to the policy specified in the test
plan. Another benefit of specifying test cases in a formal document is that it helps
testers to select an effective set of test cases.
Levels of Software Testing
As mentioned earlier, a software is tested at different levels. Initially, individual
units are tested, and once they are tested, they are integrated and checked for
interfaces established between them. After this, the entire software is tested to
ensure that the output produced is according to user requirements. As shown in
Figure 11.15, there are four levels of software testing, namely, unit testing,
integration testing, system testing and acceptance testing.

Fig. 11.15 Levels of Software Testing

Unit Testing
Unit testing is performed to test individual units of a software. Since the software
comprises various units/modules, detecting errors in these units is simple and
consumes less time, as they are small in size. However, it is possible that the output
produced by one unit becomes the input for another unit. Hence, if incorrect output
produced by one unit is provided as input to the second unit, then it also produces
wrong output. If this process is not corrected, the entire software may produce
unexpected outputs. To avoid this, all the units in the software are tested
independently using unit testing (see Figure 11.16).

Fig. 11.16 Unit Testing

Unit testing is not just performed once during the software development but is
repeated whenever the software is modified or used in a new environment. Some
other points that must be kept in mind about unit testing are:
 Each unit is tested separately regardless of other units of the software
 The developers themselves perform this testing.
 The methods of white-box testing are used in this testing.
Unit testing is used to verify the code produced during software coding and is
responsible for assessing the correctness of a particular unit of source code.
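The sketch below shows what testing a single, isolated unit can look like in practice. It uses Python’s standard unittest module purely as an illustration; the function being tested and its test data are assumptions, not part of the text:

import unittest

def compute_discount(amount):
    # Unit under test (hypothetical): 10% discount on purchases of 1000 or more
    if amount < 0:
        raise ValueError('amount cannot be negative')
    return amount * 0.9 if amount >= 1000 else amount

class TestComputeDiscount(unittest.TestCase):
    # The unit is tested separately, regardless of the other units of the software
    def test_discount_applied(self):
        self.assertEqual(compute_discount(2000), 1800)

    def test_no_discount_below_threshold(self):
        self.assertEqual(compute_discount(500), 500)

    def test_negative_amount_rejected(self):
        with self.assertRaises(ValueError):
            compute_discount(-1)

if __name__ == '__main__':
    unittest.main()

Such a test file is re-run whenever the unit is modified or used in a new environment.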
Integration Testing
Once unit testing is complete, integration testing begins. In integration testing, the
units validated during unit testing are combined to form a subsystem. The integration
testing is aimed at ensuring that all the modules work properly as per user
requirements when they are put together (i.e., integrated).
The objective of integration testing is to take all the tested individual modules,
integrate them, test them again, and develop the software according to design
specifications (see Figure 11.17). Some other points that must be kept in mind
about integration testing are:
 It ensures that all modules work together properly and transfer accurate
data across their interfaces.
 It is performed with an intention to uncover errors that lie in the interfaces
among the integrated components.
 It tests those components that are new or have been modified or affected
due to a change.

Fig. 11.17 Integration of Individual Modules

The big bang approach and the incremental integration approach are used
to integrate modules of a program. In the big bang approach, initially all modules
are integrated and then the entire program is tested. However, when the entire
program is tested, it is possible that a set of errors is detected. It is difficult to
correct these errors since it is difficult to isolate the exact cause of the errors when
the program is very large. In addition, when one set of errors is corrected, new
sets of errors arise and this process continues indefinitely.
To overcome this problem, incremental integration is followed. The
incremental integration approach tests program in small increments. It is easier
to detect errors in this approach because only a small segment of software code is
tested at a given instance of time. Moreover, interfaces can be tested completely
if this approach is used. Various kinds of approaches are used for performing
incremental integration testing, namely, top-down integration testing, bottom-
up integration testing, regression testing and smoke testing.
Top-Down Integration Testing
In this testing, the software is developed and tested by integrating the individual
modules, moving downwards in the control hierarchy. In top-down integration
testing, initially only one module, known as the main control module, is tested.
After this, all the modules called by it are combined with it and tested. This process
continues till all the modules in the software are integrated and tested.
It is also possible that a module being tested calls some of its subordinate
modules. To simulate the activity of these subordinate modules, a stub is written.
Stub replaces modules that are subordinate to the module being tested. Once the
control is passed to the stub, it manipulates the data as little as possible, verifies
the entry and passes the control back to the module under test (see Figure 11.18).

To perform top-down integration testing, the following steps are undertaken:


1. The main control module is used as a test driver and all the modules that are
directly subordinate to the main control module are replaced with stubs.
2. The subordinate stubs are then replaced with actual modules, one stub at a
time. The way of replacing stubs with modules depends on the approach
(depth first or breadth first) used for integration.
3. As each new module is integrated, tests are conducted.
4. After each set of tests is complete, it is time to replace another stub with the
actual module.
5. In order to ensure no new errors have been introduced, regression testing
may be performed.

Fig. 11.18 Top-Down Integration
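A stub can be as simple as a placeholder function that does minimal work and returns a fixed, plausible value, so that the module above it can be tested before its real subordinates exist. The Python sketch below is a hypothetical illustration of the idea (the billing and tax modules are assumptions):

def tax_module_stub(amount):
    # Stub replacing the subordinate 'tax' module: manipulates the data as
    # little as possible, verifies the entry and returns control with a fixed value.
    print('stub called with', amount)
    return 0.0

def billing_module(amount, tax_module=tax_module_stub):
    # Main control module under test; its subordinate module is passed in,
    # so the stub can later be replaced with the actual tax module.
    return amount + tax_module(amount)

# Top-down test: exercise the main module while its subordinate is still a stub
assert billing_module(100.0) == 100.0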

Top-down integration testing uses either depth-first integration or breadth-
first integration for integrating the modules. In depth-first integration, the modules
are integrated starting from left and then move down in the control hierarchy. As
shown in Figure 11.19(a), initially, modules A1, A2, A5 and A7 are integrated.
Then, module A6 integrates with module A2. After this, control moves to the
modules present at the centre of control hierarchy, that is, module A3 integrates
with module A1 and then module A8 integrates with module A3. Finally, the control
moves towards the right, integrating module A4 with module A1.

(a) Depth-First Integration (b) Breadth-First Integration

Fig. 11.19 Top-Down Integration

In breadth-first integration, initially all modules at the first level are integrated
moving downwards, integrating all modules at the next lower levels. As shown in
Figure 11.19(b), initially modules A2, A3 and A4 are integrated with module A1
and then it moves down, integrating modules A5 and A6 with module A2, and
module A8 with module A3. Finally, module A7 is integrated with module A5.
Bottom-Up Integration Testing
In this testing, individual modules are integrated starting from the bottom and then
NOTES moving upwards in the hierarchy. That is, bottom-up integration testing combines
and tests the modules present at the lower levels proceeding towards the modules
present at higher levels of the control hierarchy.
Some of the low-level modules present in the software are integrated to
form clusters or builds (collection of modules). A test driver that coordinates the
test case input and output is written and the clusters are tested. After the clusters
have been tested, the test driver is removed and the clusters are integrated, moving
upwards in the control hierarchy.

Fig. 11.20 Bottom-Up Integration

Figure 11.20 shows modules, drivers and clusters in bottom-up integration.


The low-level modules A4, A5, A6 and A7 are combined to form cluster C1.
Similarly, modules A8, A9, A10, A11 and A12 are combined to form cluster C2.
Finally, modules A13 and A14 are combined to form cluster C3. After clusters are
formed, drivers are developed to test these clusters. Drivers D1, D2 and D3 test
clusters C1, C2 and C3, respectively. Once these clusters are tested, the drivers
are removed and the clusters are integrated with the modules. Cluster C1 and
cluster C2 are integrated with module A2. Similarly, cluster C3 is integrated with
module A3. Finally, both the modules A2 and A3 are integrated with module A1.
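In bottom-up integration the situation is reversed: the low-level cluster exists first, and a temporary test driver supplies it with test case inputs and checks its outputs until the real higher-level module is ready. The following minimal Python sketch is a hypothetical illustration (the record-handling modules are assumptions):

# Low-level modules forming a cluster (hypothetical examples)
def parse_record(line):
    return line.strip().split(',')

def validate_record(fields):
    return len(fields) == 3 and all(fields)

def cluster_process(line):
    # Cluster C1: combines the low-level modules
    return validate_record(parse_record(line))

def test_driver():
    # Temporary driver coordinating test case input and output for the cluster;
    # it is removed once the cluster is integrated with the higher-level module.
    assert cluster_process('a,b,c') is True
    assert cluster_process('a,,c') is False
    print('cluster C1 passed the driver tests')

test_driver()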
Regression Testing
Software undergoes changes every time a new module is integrated with the existing
subsystem (Figure 11.21). Changes can occur in the control logic or input/output
media, and so on. It is possible that new data flow paths are established as a result
of these changes, which may cause problems in the functioning of some parts of
the software that were previously working perfectly. In addition, it is also possible
that new errors may surface during the process of correcting existing errors. To
avoid these problems, regression testing is used.


Fig. 11.21 Addition of Module in Integration Testing

Regression testing ‘re-tests’ the software or part of it to ensure that the
components, features and functions, which were previously working properly, do
not fail as a result of the error correction process and integration of modules. It is
regarded as an important activity as it helps in ensuring that changes (due to error
correction or any other reason) do not result in additional errors or unexpected
output from other system components.
To understand the need of regression testing, suppose an existing module
has been modified or a new function is added to the software. These changes may
result in errors in other modules of the software that were previously working
properly. To illustrate this, consider the following code that is working properly:
x := b + 1;
proc(z);
b := x + 2;
x := 3;
Now suppose that in an attempt to optimize the code, it is transformed into the
following:
proc(z);
b := b + 3;
x := 3;
This may result in an error if procedure ‘proc’ accesses variable ‘x.’ Thus,
testing should be organized with the purpose of verifying possible degradations of
correctness or other qualities due to later modifications. During regression testing,
a subset of already defined test cases is re-executed on the modified software so
that errors can be detected. Test cases for regression testing consist of three different
types of tests:
 Tests that check all functions of the software.
 Tests that check the functions that can be affected due to changes.
 Tests that check the modified software modules.
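The broken ‘optimization’ shown above can be caught by re-executing an already defined test case. The Python sketch below recasts that idea under stated assumptions (the shared variable, the audit log and the chosen values are only for illustration): a test that passed before the change fails after it, which is exactly what regression testing is meant to detect.

state = {'x': 0}
audit_log = []

def proc():
    # Subordinate procedure: records the current value of the shared variable x
    audit_log.append(state['x'])

def original(b):
    state['x'] = b + 1            # x := b + 1
    proc()                        # proc observes x == b + 1
    b = state['x'] + 2            # b := x + 2
    state['x'] = 3                # x := 3
    return b

def optimized(b):
    proc()                        # proc now observes whatever x happened to be
    b = b + 3                     # b := b + 3
    state['x'] = 3                # x := 3
    return b

def regression_test(version):
    # Previously defined test case, re-executed on the modified software
    audit_log.clear()
    state['x'] = 0
    result = version(5)
    return result, audit_log[0]

expected = (8, 6)
print('before optimization:', 'PASS' if regression_test(original) == expected else 'FAIL')
print('after optimization :', 'PASS' if regression_test(optimized) == expected
      else 'FAIL (proc read a stale value of x)')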
Smoke Testing
Smoke testing is defined as an approach of integration testing in which a subset of
test cases designed to check the main functionality of software are used to test
whether the vital functions of the software work correctly. This testing is best
suitable for testing a time-critical software as it permits the testers to evaluate the
software frequently.
Smoke testing is performed when the software is under development. As the
modules of the software are developed, they are integrated to form a ‘cluster.’
After the cluster is formed, certain tests are designed to detect errors that prevent
the cluster to perform its function. Next, the cluster is integrated with other clusters,
thereby leading to the development of the entire software, which is smoke tested
frequently. A smoke test should possess the following characteristics:
 It should run quickly.
 It should try to cover a large part of the software and, if possible, the entire
software.
 It should be easy for testers to perform smoke testing on the software.
 It should be able to detect all errors present in the cluster being tested.
 It should try to find showstopper errors.
Generally, smoke testing is conducted every time a new cluster is developed and
integrated with the existing cluster. Smoke testing takes minimum time to detect
errors that occur due to integration of clusters. This reduces the risk associated
with the occurrence of problems such as introduction of new errors in the software.
A cluster cannot be sent for further testing unless smoke testing is performed on it.
Thus, smoke testing determines whether the cluster is suitable to be sent for further
testing.
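A smoke test is typically a very small, fast script that exercises only the vital functions of the current cluster. The following Python sketch is a hypothetical illustration (the cluster and its checked functions are assumptions); if any vital check fails, the cluster is not passed on for further testing:

def smoke_test(cluster):
    # Quick checks of the cluster's vital functions only; runs in seconds
    vital_checks = [
        ('starts up', lambda: cluster['start']()),
        ('accepts a record', lambda: cluster['add_record']({'id': 1})),
        ('reads the record back', lambda: cluster['get_record'](1) is not None),
    ]
    for name, check in vital_checks:
        try:
            if check() is False:
                return False, name        # showstopper: a vital function failed
        except Exception:
            return False, name
    return True, None

# Hypothetical cluster built from the modules developed so far
store = {}
cluster = {
    'start': lambda: True,
    'add_record': lambda rec: store.update({rec['id']: rec}) or True,
    'get_record': lambda key: store.get(key),
}
ok, failed_check = smoke_test(cluster)
print('cluster suitable for further testing' if ok else 'showstopper in: ' + failed_check)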
System Testing
Software is integrated with other elements such as hardware, people, and database
to form a computer-based system. This system is then checked for errors using
system testing. IEEE defines system testing as ‘a testing conducted on a complete,
integrated system to evaluate the system’s compliance with its specified
requirement.’
In system testing, the system is tested against non-functional requirements
such as accuracy, reliability and speed. The main purpose is to validate and verify
the functional design specifications and to check how integrated modules work
together. System testing also evaluates the system’s interfaces to other applications
and utilities as well as the operating environment.
During system testing, associations between objects (such as fields), control and
infrastructure, and the compatibility of the earlier released software versions with

new versions are tested. System testing also tests some properties of the developed
software that are essential for users. These properties are:


 Usable: Verifies that the developed software is easy to use and is
understandable
 Secure: Verifies that access to important or sensitive data is restricted even
for those individuals who have authority to use the software
 Compatible: Verifies that the developed software works correctly in
conjunction with existing data, software and procedures
 Documented: Verifies that manuals that give information about the
developed software are complete, accurate and understandable
 Recoverable: Verifies that there are adequate methods for recovery in
case of failure
System testing requires a series of tests to be conducted because software
is only a component of a computer-based system, and finally, it has to be integrated
with other components such as information, people and hardware. The test plan
plays an important role in system testing as it describes the set of test cases to be
executed, the order of performing different tests and the required documentation
for each test. During any test, if a defect or error is found, all the system tests that
have already been executed must be re-executed after the repair has been made.
This is required to ensure that the changes made during error correction do not
lead to other problems.
While performing system testing, conformance tests and reviews can also
be conducted to check the conformance of the application (in terms of
interoperability, compliance and portability) with corporate or industry standards.
System testing is considered to be complete when the outputs produced by
the software and the outputs expected by the user are either in line or the difference
between the two is within permissible range specified by the user. Various kinds of
testing performed as part of system testing (see Figure 11.22) are recovery testing,
security testing, stress testing and performance testing.

Fig. 11.22 Types of System Testing


Recovery Testing
Recovery testing is a type of system testing in which the system is forced to fail in
different ways to check whether the software recovers from the failures without
any data loss. The events that lead to failure include system crashes, hardware
failures, unexpected loss of communication and other catastrophic problems.
To recover from any type of failure, a system should be fault-tolerant. A
fault-tolerant system can be defined as a system which continues to perform the
intended functions even when errors are present in it. In case the system is not
fault-tolerant, it needs to be corrected within a specified time limit after failure has
occurred so that the software performs its functions in a desired manner.
Test cases generated for recovery testing not only show the presence of errors in
a system but also provide information about the data lost due to problems such as
power failure and improper shutting down of a computer system. Recovery testing
also ensures that appropriate methods are used to restore the lost data. Other
advantages of recovery testing are:
 It checks whether the backup data is saved properly.
 It ensures that the backup data is stored in a secure location.
 It ensures that proper detail of recovery procedures is maintained.
Security Testing
Systems with sensitive information are generally the target of improper or illegal
use. Therefore, protection mechanisms are required to restrict unauthorized access
to the system. To avoid any improper usage, security testing is performed which
identifies and removes the flaws from software (if any) that can be exploited by
the intruders and, thus, result in security violations. To find such kind of flaws, the
tester, like an intruder, tries to penetrate the system by performing tasks such as
cracking the password, attacking the system with custom software, intentionally
producing errors in the system, etc. Security testing focuses on the following areas
of security:
 Application security: To check whether the user can access only those
data and functions for which the system developer or user of system has
given permission. This security is referred to as authorization.
 System security: To check whether only the users who have permission
to access the system are accessing it. This security is referred to as
authentication.
Generally, the disgruntled/dishonest employees or other individuals outside
the organization make an attempt to gain unauthorized access to the system. If
such people succeed in gaining access to the system, there is a possibility that a
large amount of important data can be lost, resulting in huge loss to the organization
or individuals.
Security testing verifies that the system accomplishes all the security requirements
and validates the effectiveness of these security measures. Other advantages
associated with security testing are:
 It determines whether proper techniques are used to identify security risks.
 It verifies that appropriate protection techniques are followed to secure the
system.
 It ensures that the system is able to protect its data and maintain its
functionality.
 It conducts tests to ensure that the implemented security measures are
working properly.
Stress Testing
Stress testing is designed to determine the behaviour of the software under abnormal
situations. In this testing, test cases are designed to execute the system in such a
way that abnormal conditions arise. Some examples of test cases that may be
designed for stress testing are:
 Test cases that generate interrupts at a much higher rate than the average
rate.
 Test cases that demand excessive use of memory as well as other resources.
 Test cases that cause ‘thrashing’ by causing excessive disk accessing.
IEEE defines stress testing as ‘testing conducted to evaluate a system or
component at or beyond the limits of its specified requirements.’ For example,
if a software system is developed to execute 100 statements at a time, then stress
testing may generate 110 statements to be executed. This load may increase until
the software fails. Thus, stress testing specifies the way in which a system reacts
when it is made to operate beyond its performance and capacity limits. Some
other advantages associated with stress testing are:
 It indicates the expected behaviour of a system when it reaches the extreme
level of its capacity.
 It executes a system till it fails. This enables the tester to determine the
difference between the expected operating conditions and the failure
conditions.
 It identifies the part of a system that leads to errors.
 It assesses the amount of load that causes a system to fail.
 It evaluates a system at or beyond its specified limits of performance.
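The 100-versus-110-statement example above can be mimicked with a small script that keeps increasing the load until the system fails, which is essentially what a stress test does. The function and its capacity limit below are assumptions made only for illustration:

def execute_statements(statements):
    # System under test (hypothetical): specified to handle at most 100 statements
    if len(statements) > 100:
        raise RuntimeError('capacity exceeded')
    return len(statements)

def stress_test(start_load=100, step=10, max_load=200):
    # Increase the load at or beyond the specified limit until the system fails
    load = start_load
    while load <= max_load:
        try:
            execute_statements(['stmt'] * load)
        except RuntimeError:
            return load              # the load at which the system failed
        load += step
    return None                      # no failure observed within the tested range

breaking_point = stress_test()
print('system failed at a load of', breaking_point, 'statements')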
Performance Testing
Performance testing is designed to determine the performance of software
(especially real-time and embedded systems) at run-time in the context of the
entire computer-based system. It takes various performance factors, such as load,
volume and response time of the system, into consideration and ensures that they
are in accordance with the specifications. It also determines and informs the
software developer about the current performance of the software under various
parameters (such as the condition to complete the work within a specified time
limit).
Often performance tests and stress tests are used together and require both software
and hardware instrumentation of the system. By instrumenting a system, the tester
can reveal conditions that may result in performance degradation or even failure of
a system. While performance tests are designed to assess the throughput, memory
usage, response time, execution time and device utilization of a system, stress
tests are designed to assess its robustness and error handling capabilities.
Performance testing is used to test several factors that play an important role in
improving the overall performance of the system. Some of these factors are:
 Speed: Refers to how quickly a system is able to respond to its users.
Performance testing verifies whether the response is quick enough.
 Scalability: Refers to the extent to which a system is able to handle the
load given to it. Performance testing verifies whether the system is able to
handle the load expected by users.
 Stability: Refers to how long a system is able to prevent itself from failure.
Performance testing verifies whether the system remains stable under
expected and unexpected loads.
The output produced during performance testing is provided to the system
developer. Based on this output, the developer makes changes to the system in
order to remove the errors. This testing also checks system characteristics such as
its reliability. Other advantages associated with performance testing are:
 It assesses whether a component or system complies with specified
performance requirements.
 It compares different systems to determine which system performs the best.
Acceptance Testing
Acceptance testing is performed to ensure that the functional, behavioural and
performance requirements of software are met. IEEE defines acceptance testing
as a ‘formal testing with respect to user needs, requirements and business
processes conducted to determine whether or not a system satisfies the
acceptance criteria and to enable the user, customers or other authorized
entity to determine whether or not to accept the system.’
During acceptance testing, software is tested and evaluated by a group of
users either at the developer’s site or user’s site. This enables the users to test the
software themselves and analyze whether it is meeting their requirements. To perform

acceptance testing, a predetermined set of data is given to the software as input. It
is important to know the expected output before performing acceptance testing so
that the output produced by the software as a result of testing can be compared
with it. Based on the results of these tests, users decide whether to accept or
reject the software. That is, if both the outputs (expected and produced) match,
the software is considered to be correct and is accepted; otherwise, it is rejected.
The various advantages and disadvantages associated with acceptance
testing are listed in Table 11.6.
Since the software is intended for large number of users, it is not possible to
perform acceptance testing with all the users. Therefore, organizations engaged in
software development use alpha and beta testing as a process to detect errors by
allowing a limited number of users to test the software.
Alpha Testing
Alpha testing is considered as a form of internal acceptance testing in which the
users test the software at the developer’s site (Figure 11.23). In other words, this
testing assesses the performance of the software in the environment in which it is
developed. On completion of alpha testing, users report the errors to software
developers so that they can correct them.

Fig. 11.23 Alpha Testing

Some advantages of alpha testing are:


 It identifies all the errors present in the software.
 It checks whether all the functions mentioned in the requirements are
implemented properly in the software.
Beta Testing
Beta testing assesses the performance of the software at user’s site. This testing is
‘live’ testing and is conducted in an environment, which is not controlled by the
developer. That is, this testing is performed without any interference from the
developer (see Figure 11.24). Beta testing is performed to know whether the
developed software satisfies user requirements and fits within the business
processes.


Fig. 11.24 Beta Testing

Note that beta testing is considered as external acceptance testing which
aims to get feedback from the potential users. For this, versions of the system meant
for limited public testing (known as beta versions) are made available to groups of
people or to the public (for getting more feedback). These people test the software to detect
any faults or bugs that may not have been detected by the developers and report
their feedback. After acquiring the feedback, the system is modified and released
either for sale or for further beta testing.
Some advantages of beta testing are:
 It evaluates the entire documentation of the software. For example, it
examines the detailed description of the software code, which forms a part
of documentation of the software.
 It checks whether the software is operating successfully in the user
environment.

11.7 SOFTWARE PARADIGMS AND PROGRAMMING METHODS

Programming refers to the method of creating a sequence of instructions to
enable the computer to perform a task. It is done by developing logic and then
writing instructions in a programming language. A program can be written using
various programming practices available. A programming practice refers to
the way of writing a program and is used along with coding style guidelines.
Some of the commonly used programming practices include top-down
programming, bottom-up programming, structured programming, and
information hiding (see Figure 11.25).


Fig. 11.25 Programming Practices

Top-down Programming
Top-down programming focuses on the use of modules. It is therefore also known
as modular programming. The program is broken up into small modules so that
it is easy to trace a particular segment of code in the software program. The
modules at the top level are those that perform general tasks and proceed to other
modules to perform a particular task. Each module is based on the functionality of
its functions and procedures. In this approach, programming begins from the top
level of hierarchy and progresses towards the lower levels. The implementation of
modules starts with the main module. After the implementation of the main module,
the subordinate modules are implemented and the process follows in this way. In
top-down programming, there is a risk of implementing data structures as the
modules are dependent on each other and they have to share one or more functions
and procedures. In this way, the functions and procedures are globally visible. In
addition to modules, the top-down programming uses sequences and the nested
levels of commands.
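A short Python sketch of the top-down style follows (the payroll example is an assumption, not taken from the text): programming starts with the main module, which is written first in terms of lower-level modules that are implemented afterwards.

# Top-down: the main module is written first and drives the design downwards
def run_payroll(employees):
    for emp in employees:
        gross = compute_gross(emp)          # subordinate module
        tax = compute_tax(gross)            # subordinate module
        print_payslip(emp, gross - tax)     # subordinate module

# Subordinate modules implemented after the main module
def compute_gross(emp):
    return emp['hours'] * emp['rate']

def compute_tax(gross):
    return gross * 0.1

def print_payslip(emp, net):
    print(emp['name'], 'net pay:', net)

run_payroll([{'name': 'Asha', 'hours': 160, 'rate': 50}])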
Bottom-up Programming
Bottom-up programming refers to the style of programming where an application
is constructed with the description of modules. The description begins at the bottom
of the hierarchy of modules and progresses through higher levels until it reaches
the top. Bottom-up programming is just the opposite of top-down programming.
Here, the program modules are more general and reusable than top-down
programming.
It is easier to construct functions in bottom-up manner. This is because
bottom-up programming requires a way of passing complicated arguments between
functions. It takes the form of constructing abstract data types in languages
such as C++ or Java, which can be used to implement an entire class of applications
and not only the one that is to be written. It therefore becomes easier to add
new features in a bottom-up approach than in a top-down programming
approach.
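In contrast, a bottom-up sketch builds general, reusable pieces first, here an abstract data type, and only then composes the application from them. The stack example below is an assumption used purely for illustration:

# Bottom-up: a general, reusable abstract data type is built and tested first
class Stack:
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()
    def is_empty(self):
        return not self._items

# The application is then assembled on top of the reusable lower-level piece
def reverse_word(word):
    stack = Stack()
    for ch in word:
        stack.push(ch)
    out = ''
    while not stack.is_empty():
        out += stack.pop()
    return out

print(reverse_word('stream'))    # prints 'maerts'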
Structured Programming
Structured programming is concerned with the structures used in a computer
program. Generally, structures of computer program comprise decisions,
sequences, and loops. The decision structures are used for conditional execution
of statements (for example, ‘if’ statement). The sequence structures are used
for the sequentially executed statements. The loop structures are used for
performing some repetitive tasks in the program.
Structured programming forces a logical structure in the program to be written in
an efficient and understandable manner. The purpose of structured programming
is to make the software code easy to modify when required. Some languages such
as Ada, Pascal, and dBase are designed with features that implement the logical
program structure in the software code. Primarily, the structured programming
focuses on reducing the following statements from the program.
 ‘GOTO’ statements.
 ‘Break’ or ‘Continue’ outside the loops.
 Multiple exit points to a function, procedure, or subroutine. For example,
multiple ‘Return’ statements should not be used.
 Multiple entry points to a function, procedure, or a subroutine.
Structured programming generally makes use of top-down design because
program structure is divided into separate subsections. A defined function or set
of similar functions is kept separately. Due to this separation of functions, they are
easily loaded in the memory. In addition, these functions can be reused in one or
more programs. Each module is tested individually. After testing, they are integrated
with other modules to achieve an overall program structure. Note
that a key characteristic of a structured statement is the presence of single entry
and single exit point. This characteristic implies that during execution, a structured
statement starts from one defined point and terminates at another defined point.
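The following pair of Python fragments (hypothetical, for illustration only) contrasts a function with multiple exit points against a structured version built from a sequence, a decision and a loop, with a single entry and a single exit point:

# Unstructured style: multiple return statements (multiple exit points)
def find_negative_unstructured(values):
    for v in values:
        if v < 0:
            return v
    return None

# Structured style: sequence, decision and loop, with a single exit point
def find_negative_structured(values):
    result = None
    index = 0
    while index < len(values) and result is None:    # loop structure
        if values[index] < 0:                         # decision structure
            result = values[index]
        index += 1
    return result                                     # single exit point

print(find_negative_structured([3, 7, -2, 5]))        # prints -2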
Information Hiding
Information hiding focuses on hiding the non-essential details of functions and
code in a program so that they are inaccessible to other components of the
software. A software developer applies information hiding in software design and
coding to hide unnecessary details from the rest of the program. The objective of
information hiding is to minimize complexities among different modules of the
software. Note that complexities arise when one program or module in software
is dependent on several other programs and modules.
Information hiding is implemented with the help of interfaces. An interface is
a medium of interaction for software components that are using the properties of
the software modules containing data. The implementation of interfaces depends
on the syntax and process. Examples of interface include constants, data types,
types of procedures, and so on. Interfaces protect other parts of programs when
a software design is changed.
Generally, the interfaces act as a foundation to modular programming (top-
down programming) and object-oriented programming. In object-oriented
programming, interface of an object comprises a set of methods, which are used
to interact with the objects of the software programs. Using information hiding, a
single program is divided into several modules. These modules are independent of
each other and can be used interchangeably in other software programs.
To understand the concept of information hiding, let us consider an example
of a program written for ‘car’. The program can be organized in several ways.
One is to arrange modules without using information hiding. In this case, the modules
can be created as ‘front part’, ‘middle part’, and ‘rear part’. On the other hand,
creating modules using information hiding includes specifying names of modules
such as ‘engine’ and ‘steering’.
On comparison, it is found that modules created without using information
hiding affect other modules. This is because when a module is modified, it affects
the data, which does not require modification. However, if modules are created
using information hiding, then modules are concerned only with specific segments
of the program and not the whole program or other parts of the program. In our
example, this statement means that the module ‘engine’ does not have any effect
on the module ‘steering’.
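The ‘engine’ and ‘steering’ idea can be sketched in Python as follows (a hypothetical illustration): each module exposes a small interface and hides its internal details, so a change inside Engine does not affect Steering or the code that uses them.

class Engine:
    def __init__(self):
        self._rpm = 0              # internal detail, hidden from other modules

    def start(self):               # interface method
        self._rpm = 800
        return True

class Steering:
    def __init__(self):
        self._angle = 0            # internal detail, hidden from other modules

    def turn(self, degrees):       # interface method
        self._angle += degrees
        return self._angle

class Car:
    # Uses only the interfaces; unaffected by changes inside Engine or Steering
    def __init__(self):
        self.engine = Engine()
        self.steering = Steering()

    def drive(self):
        return self.engine.start() and self.steering.turn(10) == 10

print(Car().drive())               # prints True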

Check Your Progress


3. What is a test plan?
4. What are the different levels of software testing?

11.8 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS

1. The requirements that are commonly considered are classified into three
categories, namely, functional requirements, non-functional requirements and
domain requirements.
2. IEEE defines software design as ‘both a process of defining the architecture,
components, interfaces, and other characteristics of a system or component
and the result of that process.’
3. A test plan describes how testing is accomplished. It is a document that
specifies the purpose, scope and method of software testing.
4. There are four levels of software testing, namely, unit testing, integration
testing, system testing and acceptance testing.
11.9 SUMMARY

 Requirement is a condition or capability possessed by the software or system
component in order to solve a real-world problem.
 The requirements that are commonly considered are classified into three
categories, namely, functional requirements, non-functional requirements and
domain requirements.
 Software design is a phase in software engineering, in which a blueprint is
developed to serve as a base for constructing the software system.
 IEEE defines a process model as ‘a framework containing the processes,
activities, and tasks involved in the development, operation, and maintenance
of a software product, spanning the life of the system from the definition of
its requirements to the termination of its use.’
 IEEE defines testing as ‘the process of exercising or evaluating a system or
system component by manual or automated means to verify that it satisfies
specified requirements or to identify differences between expected and actual
results.’
 A test plan describes how testing is accomplished. It is a document that
specifies the purpose, scope and method of software testing.
 There are four levels of software testing, namely, unit testing, integration
testing, system testing and acceptance testing.
 Programming refers to the method of creating a sequence of instructions to
enable the computer to perform a task.

11.10 KEY WORDS

 Abstraction: It refers to a powerful design tool, which allows software
designers to consider components at an abstract level, while neglecting the
implementation details of the components.
 Test plan: It is a document that specifies the purpose, scope and method
of software testing.

11.11 SELF ASSESSMENT QUESTIONS AND EXERCISES

Short Answer Questions


1. What are the different models of system development?
2. Discuss the various phases in SDLC.
3. What are the different steps in development of a test plan?
Long Answer Questions

1. Explain the different types of requirements.


2. Describe the levels of software testing.
3. Explain the different types of programming methods.

4. What are the principles of software design?

11.12 FURTHER READINGS

Bhatt, Pramod Chandra P. 2003. An Introduction to Operating Systems—Concepts
and Practice. New Delhi: PHI.
Bhattacharjee, Satyapriya. 2001. A Textbook of Client/Server Computing. New
Delhi: Dominant Publishers and Distributers.
Hamacher, V.C., Vranesic, Z.G. and Zaky, S.G. 2002. Computer Organization,
5th edition. New York: McGraw-Hill International Edition.
Mano, M. Morris. 1993. Computer System Architecture, 3rd edition. New Jersey:
Prentice-Hall Inc.
Nutt, Gary. 2006. Operating Systems. New Delhi: Pearson Education.

BLOCK - IV
FUNDAMENTALS OF OS AND
WORKINGS OF INTERNET

UNIT 12 OPERATING SYSTEM


Structure
12.0 Introduction
12.1 Objectives
12.2 Operating System Concepts
12.3 Functions of OS
12.4 Development of Operating System
12.5 Operating System Virtual Memory
12.6 Operating System Components
12.7 Operating System Services
12.8 Operating System Security
12.9 Answers to Check Your Progress Questions
12.10 Summary
12.11 Key Words
12.12 Self Assessment Questions and Exercises
12.13 Further Readings

12.0 INTRODUCTION

An operating system is software, consisting of programs and data, that runs on
computers, manages computer hardware resources and provides common services
for execution of various application software. In this unit, you will learn about
various operating system classifications, such as batch system, single user system,
multiprogramming, time sharing system, distributed system and real time systems.
This unit also explains the operating system services with which users run
their programs, store programs or data permanently on secondary storage devices,
determine malfunctioning programs and locate the information
needed to identify the reasons for errors.

12.1 OBJECTIVES

After going through this unit, you will be able to:


 Discuss the various functions of OS
 Explain the development of OS
 Understand the services offered by the operating system
 Explain the security threats and protection measures
12.2 OPERATING SYSTEM CONCEPTS

Why do we need an Operating System (OS)? Or, what are the objectives of
having an OS for a computer? An operating system is a set of programs that
manages computer hardware resources to provide common services for software
and application programs. Without an operating system, a user cannot run an
application program on their computer unless the application program is executed
with a self-booting process. For hardware functions, such as input and output, and
memory allocation, the operating system acts as an intermediary between application
programs and the computer hardware although the application code is usually
executed directly by the hardware and will call the operating system or be
interrupted by it. An operating system manages the computer's memory, processes
and all of its software and hardware. There are many objectives to be met for
achieving the ultimate goal of an easy-to-use and human-friendly operating system that
makes the computer useful, user friendly, acceptable and affordable to
everyone. We may achieve this ultimate goal through the realization
of the following three major objectives:
Convenience or Ease of Use
Operating systems hide the idiosyncrasies of hardware by providing abstractions
for ease of use. Abstraction hides the low level details of the hardware and provides
high level user friendly functions to use a hardware piece. For example, consider
using a hard disk which records information in terms of magnetic polarities. A hard
disk consists of many cylinders and tracks and sectors within a track and the read/
write heads for writing and reading bits to a sector. A user of a computer system
cannot convert the data to be recorded to a format needed for the disk and issue
low level commands to address the appropriate track read/write head and write
the data on to the right sector. There are software programs called device drivers
for every device type to handle this task. The programmer just issues read/write
commands through the OS (Operating System) and the OS just passes it to the
driver which translates this command to the low level commands that a hard disk
can understand. The OS may issue the same command for reading/writing to a
tape drive or to a printer. However, the device drivers translate it into the proper
low level commands understood by the addressed device. The same is true for
any other electromechanical and electronic devices connected to a computer.
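As a rough illustration of this abstraction, the following Python sketch shows how the same high-level write request can be handed to different device drivers, each of which translates it into its own low-level operations. The DiskDriver and TapeDriver classes and their behaviour are purely hypothetical and only stand in for real driver code inside an OS.

    # Hypothetical drivers that expose one common operation to the OS.
    class DiskDriver:
        def write(self, data):
            # A real disk driver would translate this into cylinder/track/sector commands.
            print("disk driver: writing", len(data), "bytes to a sector")

    class TapeDriver:
        def write(self, data):
            # A real tape driver would translate this into sequential tape commands.
            print("tape driver: appending", len(data), "bytes to the tape")

    def os_write(driver, data):
        # The OS (and the programmer) issue the same call regardless of the device.
        driver.write(data)

    os_write(DiskDriver(), b"payroll records")
    os_write(TapeDriver(), b"payroll records")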
Efficient Allocation and Utilization of Resources
A computer system has various resources like CPU (Central Processing Unit),
memory and I/O devices. Every use of the resources is controlled by the OS.
Executing programs or processes may request for the use of resources and the
OS sanctions the request and allocates the requested resources, if available. The
allocated resources will be taken back from the processes normally after the use
of the same is completed. That is, the OS is considered as the manager of the
resources of a computer system. The allocation of resources must be done looking
for overall efficiency and utilization. That is, the allocation must be done to improve
the system throughput (number of jobs executed per unit time) and the use of
resources in a fair manner.
Ability to Evolve
The OS provides many services to the user community. As the time passes, users
may need new and improved services for accomplishing some task with the
computer. In some situations, as we use the system for doing new task, some of
the existing services may introduce undesirable side effects causing inefficient
utilization of resources or may even compromise the security of the system
itself. So, it may also be desirable to remove the existing implementations of such
services and introduce new implementations. Also, there can be bugs in the code
that may surface over time as we use the services for various purposes. The design
of the OS must include provisions for easy introduction of new services and
removing or improving/ replacing the existing services. Also, the design must provide
easy interfacing facility to connect and communicate with new types of hardware
devices and upgraded versions of the existing hardware devices.

12.3 FUNCTIONS OF OS

The various functions of the operating system include:


 Processor Management: The management of the allocation of the
processors between the various programs through the use of a scheduling
algorithm is the responsibility of the operating system. The kind of scheduler
used depends entirely on the operating system and the objective it is required to meet.
 RAM Management: The management of the allocated memory space
provided to every application as well as user is the responsibility of the
operating system. In case the physical memory is not sufficient, a memory
zone can also be created by the OS on the hard drive; this is called
'virtual memory'. Virtual memory enables the user to run applications
that need more memory than the available RAM present in the system,
although such memory is much slower than RAM.
 Input/Output Management: Unification as well as control over the access
of programs to material resources through drivers is permitted by the
operating system.
 Managing the Implementation of Applications: The uninterrupted
execution of applications through the allocation of resources needed for
them to operate is also the responsibility of the operating system. This implies
that the user can terminate an application that is not properly functioning.
 Authorizations Management: The security regarding the implementation
of the programs through guaranteeing that the resources will be consumed
only by the programs and users that have relevant authorizations is also the

responsibility of the OS.


 File Management: The reading as well as writing of a file in a file system
and the user and application file access authorizations is managed by the
operating system.
 Information Management: A specific number of indicators which may
be used for the purpose of diagnosing the proper operation of the machine
are provided by the operating system.
In the top-down view, the operating system basically provides an easy
interface to the users. The contrary, bottom-up view states that the operating
system exists to manage every piece of a multifaceted system.
Modern computers contain various components, such as processors,
memories, timers, disks, mice, network interfaces, printers and a wide variety
of other devices. In this bottom-up view, the objective of the operating system is
to allocate the processors, memories and I/O devices in an orderly
and controlled way among the different programs competing for them.

12.4 DEVELOPMENT OF OPERATING SYSTEM

An operating system might process its tasks sequentially (one after another) or in
parallel (several tasks simultaneously). This means that the resources of the computer
system may be devoted to a single program until its completion, or they may be
allocated among several programs in different stages of execution. The ability of an
operating system to execute multiple programs in an interleaved fashion or in
different time cycles is called multiprogramming.
Types of Operating Systems
Operating systems can be classified on the basis of the purposes for which they
are developed and the functions they support into the eight classes listed as follows:
1. Early system
2. Batch processing system
3. Multiprogramming system
4. Time-shared system
5. Single user system
6. Multitasking system
7. Distributed system
8. Real-time system
1. Early System
Operating systems have evolved through a number of phases during the last half a
century. The earliest electronic computer was developed in the mid-1940s and
had no operating system at all. Machines (computers) were equipped with a console
consisting of toggle switches and display lights. Eight or sixteen mechanical toggle
switches arranged on the console were employed to enter 8 or 16 bits of instruction
and data to the computer and other switches for reset, start and debug operations.
NOTES There were no programming languages or translators for developing high-level
language programs and converting them to machine understandable instructions.
Programs were written in machine language instructions which the hardware or
the CPU could decode and execute to do the work. Mainly the programmers
were the users of the then computer systems.
2. Batch Processing System
To drastically reduce the setup time for executing a job and improving the system
utilization, a batch processing program was developed in the mid-1950s. This
helped to make the setup that was required for executing a job automatically and
executing the programs (submitted for execution at the computer centre in the
form of card decks) of different users one after another without user intervention.
This batch processing system was called the first rudimentary operating system
then known as resident monitor. A set of commands called Job Control Language
(JCL) for reading the programs using card readers/magnetic tapes and loading,
executing and printing the output were developed. The commands of JCL were
used to express the sequence of actions that the resident monitor must do to select
and execute a job from the batch of jobs submitted by the different users.
With the introduction of resident monitors, the users were not permitted to
directly execute their job. The system worked in two modes:
(i) Privileged Mode: Where all types of instructions could be executed.
(ii) Non-Privileged User Mode: Where privileged instructions could not be
executed.
3. Multiprogramming System
Jobs in a computer system consist of alternate cycles of I/O-bound and CPU-
bound operations. In a computer system, I/O bound operations are done by special
processors, called I/O processors. The CPU-bound operations are performed
by the main processor that is termed the CPU of the system. The CPU, in
privileged monitor mode and on behalf of a user program's (process's) request,
only needs to start the I/O operations on the I/O processors. The main
CPU then continues executing the user programs in case there is some available
process ready with all the required resources. When the I/O is completed, the
interrupt mechanism informs the same to the monitor (OS). The interrupt mechanism
switches the execution mode from the user to the monitor and begins executing
from the interrupt handler routine that is part of the monitor (OS) code. The monitor
then informs the completion of I/O to the concerned waiting user process and
selects the next ready process (or the job or process whose I/O is just finished)
for execution and transfers control to it. That is, the main CPU waits for the
completion of I/O when there are no other jobs to be executed in the system.

Therefore, in a uniprogramming system, at a time only one program is ready for


execution, and if that program requests I/O during execution, then the CPU has to
wait until that I/O is completed. Thus, as the I/O operation time is many times
larger (100 or 1000 times) than the CPU instruction execution time, the CPU
utilization is very low, normally less than five per cent. In general, in a
uniprogramming environment other resources like memory and I/O devices are
underutilized.
Now you can improve resource utilization by making ready more than one
program or job ready for execution. This requires the loading of more than one program
in the memory. This needs partitioning the memory into two or more regions:
one for the OS and as many user memory partitions as there are user programs. By
providing sufficient memory and by loading two or more jobs in the user memory
regions there will be processes that are ready for execution. Therefore, you can
allocate CPU to another process when one process is waiting for I/O completion.
This is called multiprogramming or multitasking. That is, the CPU executes more
than one program concurrently in a time-interleaved fashion. This also improves
the memory and device utilization of the system as the idle time of memory regions
and I/O devices are now less than as compared to the uniprogramming case.
With the introduction of multiprogramming, the OS needs to manage the
memory and protect each user memory from other users in addition to protecting
the operating system memory from users.
4. Time-Shared System
A multiprogrammed batch system does not permit real-time interaction between
users and computer as the user commands that are needed for executing jobs are
prepared as scripts of Job Control Language (JCL) and submitted to the batch
system. As users are not permitted to submit the job script input and observe or
take output directly, it took many days to debug and correct the mistakes in program
development. The solution to this problem was the introduction of Interactive
Time-shared Multiprogramming techniques. This enabled many users to interact
with the computer system simultaneously, each one using a separate terminal
keyboard and monitor connected to the system. Actually, each user is given a
small time quantum (say, 100 milliseconds) to apply commands and receive
responses from the computer system in a round robin fashion. So, if there are 10
users, each will be served for 100 milliseconds in every second. Because of this
fast switching of execution among users, each one felt that the entire computer
system time is available for his own use. This drastically improved the ease of use
of computers and reduced the job processing time and program development
time. With this interactive timesharing technology, a single computer system was
made available for many people simultaneously for doing many different types of
tasks.
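The round robin switching described above can be pictured with a short Python sketch. The user names, the 100 millisecond quantum and the remaining CPU times are made-up values; a real time-shared OS switches between processes on hardware timer interrupts rather than in a loop like this.

    QUANTUM = 100  # time slice in milliseconds

    # Remaining CPU time (in ms) still needed by each user's job -- example values.
    jobs = {"user1": 250, "user2": 120, "user3": 300}

    while jobs:
        for user in list(jobs):
            run = min(QUANTUM, jobs[user])
            jobs[user] -= run
            print(f"{user} runs for {run} ms, {jobs[user]} ms remaining")
            if jobs[user] == 0:
                del jobs[user]          # job finished, drop it from the round robin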

The hardware of a computer system is normally very costly. In a multiuser
system, as many users are sharing this costly hardware, the cost is shared among
these users and accordingly, the resource utilization is also high. However, as the
operating system has to switch between many users in a short time, there are
some unproductive computations, called overhead computations, that are done
for job switching and associated work.
5. Single User System
A computer system in which only one user can work at a time is called a single
user system. The familiar Intel processor based Windows OS Personal Computer
(PC) is an example of single user system. Such a typical system has a single
keyboard, mouse and monitor as I/O devices for the user to interact with the
system. Users apply commands through the keyboard and mouse and the computer
displays its responses (output) on the monitor. A user can do various tasks like
preparation of a document or editing a program or writing a letter and printing it
(using a printer). A user can simultaneously or concurrently execute many tasks or
jobs in currently available personal computers. An important design issue of such
interactive systems is the response time, which is the time taken by the computer
to start producing output after a command has been entered. The response time
must be within a predictable limit for user satisfaction and acceptability. When one
user is working on the computer, the other users have to wait till the current user
finishes his work and leaves. Examples of single user operating systems are MS
DOS, Windows 95, Windows NT, Windows 2000 and Windows Vista, each of
which runs on Intel Processor based computers. Other examples are Macintosh
OS X.x and single-user Linux, which are actually derivatives of the UNIX operating
system.
6. Multitasking System
Multitasking is concerned with a single user executing more than one program
simultaneously. It may also mean a user executing his application as many concurrent
processes. An operating system with multitasking ability runs many applications—
for example, Microsoft Office package, Gaming applications, Internet explorer,
etc., simultaneously/concurrently. The operating system executes each for a small
time slice in a round robin fashion so that the user cannot distinguish the switching
of CPU among different applications. It appears to the user that they are running
simultaneously.
The advantages of this system are as follows:
(i) The ability of a multitasking system to let the user run more than
one task simultaneously leads to increased productivity.
(ii) Improves the system resource utilization, throughput and overall efficiency
of the system.

The disadvantages are as follows:

(i) Increased overhead processing time.


(ii) Needs more resources like memory, CPU time and I/O devices for
achieving user satisfaction.
7. Distributed System
A distributed system is composed of a large number of computers connected by
high-speed networks. The computers in the distributed system are independent,
but they appear to the users as a large, single system. This is in contrast to the old
mainframe computers which are centralized, single processor systems with large
memory, I/O devices and connected to many terminals (with no processing power),
through which many people can work or run applications concurrently.
In distributed systems, users can work from any of the independent computers
as terminals. The applications or programs run on any of the computers, distributed
even at geographically distant places, and possibly on the computer (intelligent terminal)
from which the user entered a command to run the program. That is, people can
work from any computers and the applications can run on any of the member
computers in the system depending on the availability and workload. There is a
mechanism built into the distributed OS to balance the load (CPU workload) on
different independent computers of the system. A distributed system will have only
a single file system, with all the files of a user accessible (as dictated by the
permissions associated with each file) from any computer of the system with the
same access permissions and using the same path name. Automated Teller Machines
(ATMs) of a bank are an example of a distributed system, which are useful for
banking applications.
The advantages of a distributed system are as follows:
(i) Data Sharing: Allows data sharing access to common databases.
(ii) Device Sharing: Allows many users to share expensive peripherals like
high resolution printers, scanners and digital cameras.
(iii) Communication: Makes communication easier between users.
(iv) Even Workload Distribution: Spreads the available workload over
independent computers of the system in a cost effective way.
The disadvantages of a distributed system are as follows:
(i) Delayed Communication: Network communication is delayed due to
saturation or other problems.
(ii) Security: Needs tools and techniques for providing controlled access to
sensitive data.
8. Real-Time System
A real-time system (operating system) is the one which responds to real-time
events or inputs within a specified time limit. The importance of real-time systems
has increased, especially with today's increased use of embedded applications. Real-
time systems include satellite control systems, process control systems, control of
robots, air-traffic control systems and autonomous land rover control systems. A
real-time system must work with a guaranteed response time depending on the
task; otherwise, the application might fail.
In a real-time operating system, some of the tasks are real-time and others
are ordinary computing tasks. Ordinary computing tasks, like commands from
interactive users, generally demand small response time as compared to the user
interaction time. Faster response time keeps the users satisfied. Whereas, with a
real-time task there always exists a deadline to start responding after an event and
another deadline for the completion of the response. Real-time tasks are classified
as hard and soft. A hard real-time task must meet its deadline to avoid undesirable
damage to the system and other catastrophic events. A soft real-time task also has
a deadline, but failures to meet it will not cause a catastrophe or great losses.
Most real-time operating systems are designed with the objective of starting
real-time tasks as rapidly as possible. This is possible only if the interrupt handling
and task dispatching are fast. There should be some hard limit on the amount of
execution time spent on an interrupt handler code, as the event may occur while a
high priority interrupt handler is mid-way in its execution. The latency of response
to events can be predictably controlled by carefully choosing the interrupt priority
and limiting the amount of time that is needed by the CPU on interrupt handlers.

12.5 OPERATING SYSTEM VIRTUAL MEMORY

Programs must be loaded into the memory for execution. During this period, the
program is called a process. In a multiprogramming environment, several processes
are executed in the system. Memory must be allocated for all these processes.
If the memory size is 1000 KB and we have 5 processes each of size 200 KB,
then the memory for all the processes can be allocated without any difficulty. If the
total size of the processes is greater than the total memory size, then we cannot execute
those processes.
Using the virtual memory technique, we can execute a process which is
larger than the physical memory. Using virtual memory, the physical memory can
be assumed to be an extremely large memory. When the virtual memory technique
is used, the programmer need not worry about the physical memory, which means
you can execute programs that are larger than the size of the physical memory.
Virtual memory decreases the performance of the system and is difficult to
implement. The main advantage of virtual memory is that it uses the existing physical
memory effectively.
If we closely inspect the program execution, the entire program is not needed
inside the memory. If the function SQRT is to be executed, then it must be present
in the memory. Sometimes, programs contain error-handling code that is executed
when the error occurs. If the error does not occur during program execution, then

the error handling code does not need to be loaded in the memory. Arrays may be
declared to be of size 500 while only some of the elements are actually used. Some
of the functions in the program are never called, so they are not required to be in
the memory.
Even in cases where the entire program is needed, the CPU executes
only one instruction at a time, and only that instruction has to be present in the memory.
The virtual memory separates the logical memory from the physical memory.
This separation results in large virtual memory for the users. Virtual memory allows
sharing of files and memory by different processes. Here the pages are shared,
which improves the performance at the time of the process creation.
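The separation of logical (virtual) addresses from physical addresses is normally implemented through paging. The toy Python sketch below assumes a 4 KB page size and a tiny, made-up page table; in a real system the translation is done by the hardware memory management unit using tables that the operating system maintains.

    PAGE_SIZE = 4096                     # assumed 4 KB pages
    page_table = {0: 7, 1: 3, 2: None}   # virtual page -> physical frame (None = not in memory)

    def translate(virtual_address):
        page = virtual_address // PAGE_SIZE
        offset = virtual_address % PAGE_SIZE
        frame = page_table.get(page)
        if frame is None:
            # Here the OS would service a page fault by loading the page from disk.
            raise RuntimeError("page fault: virtual page %d is not in physical memory" % page)
        return frame * PAGE_SIZE + offset

    print(hex(translate(0x1234)))        # virtual page 1 maps to frame 3 -> 0x3234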

12.6 OPERATING SYSTEM COMPONENTS

An operating system is a complex and normally huge software used to control and
coordinate the hardware resources like a CPU, memory and I/O devices to enable
easy interaction of the computer with human and other applications. The objects
or entities that an operating system manages or deals with include processes,
memory space, files, I/O devices and networks. Let us first briefly describe each
of these entities.
Process: A process is simply the program in execution. For every program
to execute, the operating system creates a process. A process needs resources
like a CPU, memory and I/O devices to execute a program. These resources are
under the control of the operating system. In a computer, there will be many
programs in the state of execution, hence a large number of processes demanding
various resources are also needed to be maintained and managed in an operating
system. When the execution is finished, the resources held by that process will be
returned back to the operating system.
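From an application's point of view, process creation can be observed with Python's standard subprocess module, which asks the operating system to create a child process, waits for it to finish and collects its exit status. This is only a small sketch of the idea, not a description of how the OS itself implements processes.

    import subprocess
    import sys

    # Ask the OS to create a child process (another Python interpreter printing one line).
    result = subprocess.run(
        [sys.executable, "-c", "print('hello from a child process')"],
        capture_output=True, text=True)

    print("child output:", result.stdout.strip())
    print("child exit status:", result.returncode)   # resources are released when the child ends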
Memory Space: The execution of a program needs memory. The available
memory is divided among various programs that are needed to execute
simultaneously (concurrently) in a time multiplexed way. In normal type of memory,
at a time only one memory location can be accessed. So, in a uni-processor
environment, because the secondary storage devices are much slower than main
memory, the programs to be executed concurrently are loaded into memory and
kept ready awaiting the CPU for fast overall execution speed. The memory is
space multiplexed to load and execute more than one program in a time interleaved
way. Even if more than one CPU is available, at a time only one program memory
area can be accessed. However, instructions that do not need main memory
access (access can be from local memory or from CPU cache) can be executed
simultaneously in a multiprocessor scenario.
Files: Files are used to store sequence of bits, bytes, words or records.
Files are an abstract concept used to represent storage of logically similar entities
or objects. A file may be a data file or a program file. A file with no format for
interpreting its content is called a plain unformatted file. Formatted file content can
be interpreted and is more portable as one knows in what way the data inside is
structured and what it represents. Example formats are Joint Photographic Experts
Group (JPEG), Moving Picture Experts Group (MPEG), Graphics Interchange
Format (GIF), Executable (Exe), MPEG Audio Layer III (MP3), etc.
I/O Devices: Primary memory like RAM is volatile and the data or program
stored there will be lost when we switch off the power to the computer system or
when it is shutdown. Secondary storage devices are needed to permanently
preserve program and data. Also, as the amount of primary storage that can be
provided will be limited due to reasons of cost, a secondary storage is needed to
store the huge amount of data needed for various applications.
Network: Network is the interconnection system between one computer
and the other computers located in the same desk/ room, same building, adjacent
building or in any geographical locations over the world. We can have wired or
wireless network connections to other computers located anywhere in the world.

12.7 OPERATING SYSTEM SERVICES

One of the major responsibilities of operating system is to provide an environment


for an efficient execution of user programs. For this, it provides certain services to
the programs and the users. These services provide an abstract view of the system
to the users so that they can only concentrate on the functional part of the programs
without bothering about the internal details of the system. This reduces the
programming burden and makes the task of programming easier. For instance,
programmers need not bother about how memory is allocated to their programs,
where their programs are loaded in memory during execution, how multiple
programs are managed and executed, how their programs are organized in files to
reside on disk, etc., while writing programs.
In spite of some specific services that may differ from one operating system
to another, all the operating systems provide some common services for the
convenience of users. These services include the following.
 User Interface: Providing a User Interface (UI) to interact with users is
essential for an operating system. This interface can be in one of the several
forms. One is command-line interface, in which users interact with the
operating system by typing commands. Another is batch interface, in which
several commands and directives to control those commands are collected
into files which are then executed. The most commonly used interface is
Graphical User Interface (GUI), in which users interact with the system
with a pointing device, such as a mouse.
 Program Execution: The system must allocate memory to the user
programs, and then load these programs into memory so that they can be
executed. The programs must be able to terminate either normally or

abnormally.
 I/O Operations: Almost all the programs require I/O involving a file or an
I/O device. For efficiency and protection, the operating system must provide
a means to perform I/O instead of leaving it for users to handle I/O devices
directly.
 File-System Manipulation: Often, programs need to manipulate files and
directories, such as creating a new file, writing contents to a file, deleting or
searching a file by providing its name, etc. Some programs may also need
to manage permissions for files or directories to allow or deny other programs'
requests to access these files or directories (a small sketch follows this list).
 Communication: A process executing in one computer may need to
exchange information with the processes executing on the same computer
or on a different computer connected via a computer network. The
information is moved between processes by the operating system.
 Error Detection: There is always a possibility of occurrence of error in the
computer system. Error may occur in the CPU, memory, I/O devices, or in
user program. Examples of errors include an attempt to access an illegal
memory location, power failure, link failure on a network, too long use of
CPU by a user program, etc. The operating system must be constantly
aware of possible errors, and should take appropriate action in the event of
occurrence of error to ensure correct and consistent computing.
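As a small illustration of the file-system manipulation service mentioned above, the Python sketch below asks the operating system to create, write, inspect, rename and finally delete a file. The file name is only an example.

    import os

    with open("report.txt", "w") as f:            # ask the OS to create/open a file
        f.write("quarterly figures\n")            # write contents to it

    print(os.path.getsize("report.txt"))          # query the file's size in bytes
    os.rename("report.txt", "report_old.txt")     # rename the file
    os.remove("report_old.txt")                   # delete it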
In addition to these services, operating system also provides another set of
services that ensures the efficient operations of the system in multiprogramming
environment. These services exist not for helping the users, rather for ensuring the
efficient and secure execution of programs.
 Resource Allocation: In case of multiprogramming, many programs execute
concurrently, each of which require many different types of resources, such
as CPU cycles, memory, I/O devices, etc. Therefore, in such environment,
operating system must allocate resources to programs in a manner such
that resources are utilized efficiently, and no program should wait forever
for other programs to complete their execution.
 Protection and Security: Protection involves ensuring controlled access
to the system resources. In a multiuser or a networked computer system,
the owner of information may want to protect information. When several
processes execute concurrently, a process should not be allowed to interfere
with other processes or with the operating system itself. Security involves
protecting the system from unauthorized users. To provide security, each
user should authenticate him or her to the system before accessing system
resources. A common means of authenticating users is user-name/password
mechanism.
 Accounting: We may want to keep track of usage of system resources by
each individual user. This information may be used for accounting so that
users can be billed or for accumulating usage statistics, which is valuable for
researchers.
12.8 OPERATING SYSTEM SECURITY

Though the terms security and protection are usually used interchangeably, they
have different meanings in computer terminology. Security deals with the threats
to information caused by the outsiders (non-users), whereas protection deals
with the threats caused by other users (those who are not authorized to do what
they are doing) of the system.
Security Threats
Security threats around us continue to evolve by finding new ways. Some of them
are caused by humans, some by nature, such as floods, earthquakes and fire,
and some by the use of the Internet, such as viruses, Trojan horses, spyware, and so
on.
The person who tries to breach the security of a system and cause harm
is known as an intruder or hacker in the computer world. Once the intruder gains access
to the systems in an organization, he or she may steal the confidential data, modify
it in place, or delete it. The stolen data or information can then be used for illegal
activities like blackmailing the organization, selling the information to competitors,
etc. This attack proves to be even more destructive if the data deleted by the intruder
cannot be recovered by the organization.
Security can also be affected by natural calamities, such as earthquakes,
floods, wars, storms, etc. Such disasters are beyond the control of humans and
can result in huge loss of data. The only way to deal with these threats is to maintain
timely and proper backups of data at geographically apart locations.
The use of the Internet is another possible cause that can affect the security.
With the evolving use of the Internet across the world, the system connected to it
is always prone to get affected by the following threats:
 Virus: Virus is a program, which is designed to replicate, attach to other
programs and perform unsolicited and malicious actions. It executes when
an infected program is executed. On MS DOS systems, these files usually
have the extensions .exe, .com or .bat. Virus enters computer systems
from an external software source and easily hides in software. It may become
destructive as soon as it enters a system or can be programmed to lie dormant
until activated by a trigger.
 Trojan Horse: Trojan Horse is a program which is designed to damage
files by allowing hacker to access your system. It enters into a computer
system through an e-mail or free programs that have been downloaded
from the Internet. Once it safely gets into the computer, it usually opens the

way for other malicious software (like viruses) to enter into the computer
system. In addition, it may also allow unauthorized users to access the
information stored in the computer.
 Spyware: Spyware refers to small programs that install themselves on
computers to gather data secretly about the computer user without his/her
consent and report the collected data to interested users or parties. The
information gathered by the spyware may include e-mail addresses and
passwords, net surfing activities, credit card information, etc. The spyware
often gets automatically installed on your computer when you download a
program from the Internet or click any option from the pop-up window in
the browser.
 Hackers and Phishing: Hackers are the programmers who break into
others' computer systems in order to steal, damage or change the information
as they want. On the other hand, phishing is a form of threat that attempts to
steal the sensitive data (financial or personal) with the help of fraudulent
e-mails and messages.
These security threats may result in loss of data confidentiality and data
integrity, and/or may result in system unavailability. Hence, system designers should
adopt some strict mechanisms to prevent these attacks, such as applying firewalls
to prevent the unauthorized use of data, enabling the use of anti-virus programs,
and keeping a strict eye on new programs loaded into the computer system.
Design Principles for Security
Designing a secure operating system is a crucial task. While designing the operating
system, the major concern of designers is on the internal security mechanisms that
lay the foundation for implementing security policies. Researchers have identified
certain principles that can be followed to design a secure system. Some design
principles presented by Saltzer and Schroeder (1975) are as follows:
 Least Privilege: This principle states that a process should be allowed the
minimal privileges that are required to accomplish its task.
 Fail-Safe Default: This principle states that access rights should be provided
to a process on its explicit requests only and the default should be no access.
 Complete Mediation: This principle states that each access request for
every object should be checked by an efficient checking mechanism in order
to verify the legality of access.
 User Acceptability: This principle states that the mechanism used for
protection should be acceptable to the users and should be easy to use.
Otherwise, the users may feel a burden in following the protection mechanism.
 Economy of Mechanism: This principle states that the protection
mechanism should be kept simple as it helps in verification and correct
implementations.
 Least Common Mechanism: This principle states that the amount of
mechanism common to and depended upon by multiple users should be
kept as minimum as possible.
 Open Design: This principle states that the design of the security mechanism
should be open to all and should not depend on ignorance of intruders. This
entails the use of cryptographic systems where the algorithms are made
public while the keys are kept secret.
 Separation of Privileges: This principle states that the access to an object
should not depend only on fulfilling a single condition; rather more than one
condition should be fulfilled before granting an access to the object.
User Authentication and Authorization
One way using which operating system provides security is by controlling the
access to the system. This approach often raises a few questions, such as:
 Who is allowed to login into the system?
 How can a user prove that he is a true user of the system?
Some process is required that lets users present their identity to the
system and confirm its correctness. This process of verifying the identity of a
user is termed authentication. User authentication can be based on user
knowledge (such as, a username and password), user possession (such as, a card
or key) and/or user attribute (such as, fingerprint, retina pattern or iris design).
Authentication is simply proving that a user’s identity claim is valid and
authentic. Authentication requires some form of ‘proof of identity.’ In network
technologies, physical proof (such as a driver’s license or photo ID) cannot be
employed, so you have to get something else from a user. That typically means
having the user respond to a challenge to provide genuine credentials at the time
access is requested. For our purposes, credentials can be something the user
knows, something the user has, or something they are. Once they provide
authentication, there also has to be authorization, or permission to enter. Finally,
you want to have some record of users’ entry into your network—username, time
of entry and resources. That is the accounting side of the process.
Authorization is independent of authentication. A user can be permitted entry
into the network but not be authorized to access a resource. You do not want an
employee having access to HR information or a corporate partner getting access
to confidential or proprietary information. Authorization requires a set of rules that
dictate the resources to which a user will have access. These permissions are
established in your security policy.
The security regarding the implementation of the programs through
guaranteeing that the resources will be consumed only by the programs and users
that have relevant authorizations is also the responsibility of the OS.

Authentication and authorization are the two important factors that decide

security features. In the authentication process, the identity of the user is determined;
it verifies that users are exactly who they registered themselves to be. Two prime
credentials, a username and a password, are typically required to log in to a specific
Web page or application. The credentials are first checked against the list of users
maintained by the administrator in a backend database server. Once the user is
authenticated by username and password, the process moves on to authorization.
In this step, the user is permitted to perform only those actions for which permission
has been granted, such as viewing files or retrieving information from a specific
database. Typical examples of authorized actions include opening a file in Windows,
submitting or purchasing an online order, enrolling users in a promotion scheme and
assigning a project.
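A minimal sketch of authorization as a table of rules is shown below; the users, resources and actions are made-up examples. An authenticated user is still refused any action that the rules do not explicitly grant.

    # Rules mapping (user, resource) pairs to the set of permitted actions.
    permissions = {
        ("alice", "sales.db"): {"read", "write"},
        ("bob",   "sales.db"): {"read"},
    }

    def is_authorized(user, resource, action):
        return action in permissions.get((user, resource), set())

    print(is_authorized("bob", "sales.db", "read"))    # True
    print(is_authorized("bob", "sales.db", "write"))   # False: authenticated, but not authorized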
Passwords
Password is the simplest and most commonly used authentication scheme. In this
scheme, each user is asked to enter a username and password at the time of
logging in into the system. The combination of username and password is then
matched against the stored list of usernames and passwords. If a match is found,
the system assumes that the user is legitimate and allows him/her the access to the
system; otherwise access is denied. Generally, the password is asked for once when the
user logs in to the system; however, this process can be repeated for each
operation when the user tries to access sensitive data.
Though the password scheme is widely used, this scheme has some
limitations. In this method, the security of system completely relies on the password.
Thus, password itself needs to be secured from unauthorized access. One simple
way to secure the password is to store it in an encrypted form. The system
employs a function (say, f(x)) to encode (encrypt) all the passwords. Whenever
a user attempts to log in to the system, the password entered by him/her is first
encrypted using the same function f(x) and then matched against the stored list
of encrypted passwords.
The main advantage of encrypted passwords is that even if the stored
encrypted password is seen, the original password cannot be determined from it. Thus, there is no need to
keep the password file secret. However, care should be taken to ensure that the
password would never be displayed on the screen in its decrypted form.
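A minimal Python sketch of the encoding function f(x) described above is given below, assuming a salted SHA-256 hash and a made-up username and password. The system stores only the encoded value and re-applies f to whatever the user types at login.

    import hashlib, hmac, os

    def f(password, salt):
        # One-way encoding: the stored value does not reveal the original password.
        return hashlib.sha256(salt + password.encode()).hexdigest()

    salt = os.urandom(16)
    stored = {"alice": (salt, f("secret123", salt))}   # what the password file keeps

    def login(username, password):
        if username not in stored:
            return False
        user_salt, digest = stored[username]
        return hmac.compare_digest(digest, f(password, user_salt))

    print(login("alice", "secret123"))   # True
    print(login("alice", "wrong"))       # False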
Smart Card
In this method, each user is provided with a smart card that is used for identification.
The smart card has a key stored on an embedded chip, and the operating system
of smart card ensures that the key can never be read. Instead, it allows data to be
sent to the card for encryption or decryption using that private key. The smart
card is programmed in such a way that it is extremely difficult to extract values
from smart card, thus, it is considered as a secure device.

Biometric Techniques
Biometric authentication technologies (see Figure 12.1) use the unique
characteristics (or attributes) of an individual to authenticate the person’s identity.
These include physiological attributes (such as, fingerprints, hand geometry, or
retina patterns) or behavioural attributes (such as, voice patterns and hand-written
signatures). Biometric authentication technologies based on these attributes have
been developed for computer log in applications. Biometric authentication is
technically complex and expensive, and user acceptance can be difficult.
Biometric systems provide an increased level of security for computer
systems, but the technology is still new as compared to memory tokens or
smart tokens. Biometric authentication devices are not perfect; imperfections
may result from technical difficulties in measuring and profiling physical attributes
as well as from the somewhat variable nature of physical attributes. These
attributes may change, depending on various conditions. For example, a
person’s speech pattern may change under stressful conditions or when suffering
from a sore throat or cold. Due to their relatively high cost, biometric systems
are typically used with other authentication means in environments where high
security is required.

Fig. 12.1 Biometric Techniques

Check Your Progress


1. What is the significance of virtual memory?
2. Define process.
3. What is virus?

12.9 ANSWERS TO CHECK YOUR PROGRESS


QUESTIONS

1. Using the virtual memory technique, we can execute a process which is


larger than the physical memory. Using virtual memory, the physical memory
can be assumed to be an extremely large memory.
2. A process is simply the program in execution.

3. Virus is a program, which is designed to replicate, attach to other programs


and perform unsolicited and malicious actions.
12.10 SUMMARY

 An operating system is a set of programs that manages computer hardware


resources to provide common services for software and application
programs.
 The management of the allocation of the processors between the various
programs through the use of a scheduling algorithm is the responsibility of
the operating system.
 The reading as well as writing of a file in a file system and the user and
application file access authorizations is managed by the operating system.
 A computer system in which only one user can work at a time is called a
single user system.
 Multitasking is concerned with a single user executing more than one program
simultaneously.
 A distributed system is composed of a large number of computers connected
by high-speed networks.
 A real-time system (operating system) is the one which responds to real-
time events or inputs within a specified time limit.
 Virtual memory decreases the performance of the system and is difficult to
implement. The main advantage of virtual memory is that it uses the existing
physical memory effectively.
 Protection involves ensuring controlled access to the system resources.
 Security involves protecting the system from unauthorized users.
 Biometric authentication technologies use the unique characteristics (or
attributes) of an individual to authenticate the person’s identity.

12.11 KEY WORDS

 Process: It is a program under execution.


 Trojan Horse: It is a program which is designed to damage files by allowing
hacker to access your system.
 Intruder: The person who tries to breach the security of a system and
cause the harm.

12.12 SELF ASSESSMENT QUESTIONS AND
EXERCISES

Short Answer Questions


1. What are the three major objectives of operating system?
2. What is a real time system?
3. Define virtual memory.
4. Discuss the significance of biometric techniques.
Long Answer Questions
1. Explain the various functions of OS.
2. What are the different types of OS that result in development of operating
system?
3. What are the different components of OS? Explain.
4. Explain the various services of an operating system.

12.13 FURTHER READINGS

Bhatt, Pramod Chandra P. 2003. An Introduction to Operating Systems—


Concepts and Practice. New Delhi: PHI.
Bhattacharjee, Satyapriya. 2001. A Textbook of Client/Server Computing. New
Delhi: Dominant Publishers and Distributers.
Hamacher, V.C., Vranesic, Z.G. and Zaky. 2002. S.G. Computer Organization,
5th edition. New York: McGraw-Hill International Edition.
Mano, M. Morris. 1993. Computer System Architecture, 3rd edition. New Jersey:
Prentice-Hall Inc.
Nutt, Gary. 2006. Operating Systems. New Delhi: Pearson Education.


UNIT 13 INTERNET AND ITS


WORKING
Structure
13.0 Introduction
13.1 Objectives
13.2 Definition
13.3 History of the Internet
13.4 Web Browser
13.5 Internet Protocols
13.6 Answers to Check Your Progress Questions
13.7 Summary
13.8 Key Words
13.9 Self Assessment Questions and Exercises
13.10 Further Readings

13.0 INTRODUCTION

The Internet, World Wide Web (WWW) and information super highway have
penetrated into the lives of millions of people all over the world. The Internet is a
network made up of thousands of networks worldwide. Obviously, these networks
are composed of computers and other intelligent and active devices. In fact, the
Internet is an example of self-regulating mechanism and there is no one in charge
of the Internet. There are organizations which are entrusted to develop technical
aspects of this network, but no governing body is in control. Private companies
own the Internet backbone, through which the Internet traffic or data flows in the
form of text, video, graphics, sound, image, etc.

13.1 OBJECTIVES

After going through this unit, you will be able to:


 Discuss the history of the Internet
 Define web browser
 Explain the various protocols of internet and IP address
 Explain how to connect and use internet

13.2 DEFINITION

The word Internet is the short form of internetwork or interconnected network.


Therefore, it can be said that the Internet is not a single network, but a collection
of networks. These networks have one thing in common—to communicate with
each other. The Internet consists of the following groups of networks:
 Backbones: These are large networks that exist primarily to interconnect
other networks. Some examples of backbones are NSFNET in the USA,
EBONE in Europe and large commercial backbones.
 Regional networks: These connect, for example, universities and colleges.
ERNET (Education and Research Network) is an example in the Indian
context.
 Commercial networks: These provide subscribers with access to the
backbones; they also include networks owned by commercial organizations
for internal use that have connections to the Internet. Internet Service
Providers mainly come in this category.
 Local networks: These are campus-wide university networks.
The networks connect users to the Internet using special devices called
gateways or routers. These devices provide connection and protocol conversion
of dissimilar networks to the Internet.
Gateways or routers are responsible for routing data around the global
network until they reach their ultimate destination as shown in Figure 13.1. The
delivery of data to its final destination takes place based on some routing table
maintained by router or gateways.
Over time, TCP/IP defined several protocol sets for the exchange of routing
information. Each set pertains to a different historic phase in the evolution of the
architecture of the Internet backbone.

[Figure: Ethernet (10 Mbps) and token-ring (4/10 Mbps) LANs linked to the Internet through routers]
Fig. 13.1 Local Area Networks Connected to the Internet via Gateways or Routers

13.3 HISTORY OF THE INTERNET

The Internet, www and Information Super Highway are terms which have deep
impact on the lives of millions of people all over the world. This global nature of
the Internet would not have been possible without the development of Transmission Control
Protocol/Internet Protocol (TCP/IP). This is the protocol suite developed
specifically for the Internet. The information technology revolution of today cannot
be achieved without this vast network of networks.
During the late 1960s and 1970s, organizations were inundated with many
different LAN and WAN technologies such as packet switching technology, collision-
detection local area networks, hierarchical enterprise networks, and many other
excellent technologies. The major drawbacks of all these technologies were that
they could not communicate to each other without expensive deployment of
communication devices. These were not only expensive, but also put users at the
mercy of the monopoly of the vendor they would be dealing with. Consequently,
multiple networking models were available as a result of the research and development
efforts made by many interest groups. This paved the way for development of another
aspect of networking known as protocol layering. This allows applications to
communicate with each other. A complete range of architectural models was proposed
and implemented by various research teams and computer manufacturers. The result
of all this great know-how is that today, any group of users can find a physical
network and an architectural model suitable for their specific needs. This includes
cheap asynchronous lines with no other error recovery than a bit-per-bit parity
function, through full-function wide area networks (public or private) with reliable
protocols such as public packet switching networks or private SNA networks, to
high-speed but limited-distance local area networks.
It is now evident that organizations or users are using different network
technologies to connect computers over the network. The desire to share more and
more information among homogeneous or heterogeneous interest groups motivated
researchers to devise technology so that one group of users could extend its
information system to another group of users who happen to have a different
network technology and different network protocols. This necessity was recognized
in the beginning of the 1970s by a group of researchers in the United States of
America who hit upon a new principle popularly known as internetworking. Other
organizations also became involved in this area of interconnecting networks, such
as ITU-T (formerly CCITT) and ISO. All were trying to define a set of protocols,
layered in a well-defined suite, so that applications would be able to communicate
to other applications, regardless of the underlying network technology and the
operating systems where those applications run.
ARPanet
ARPAnet was built by DARPA. This initiated the packet switching technology in
the world of networking and therefore, it is sometimes referred to as the 'grand-
daddy of packet networks'. The ARPAnet was established in the late 1960s for
the Department of Defense to accommodate research equipment on packet
switching technology, besides allowing resource sharing for the Department of
Defense’s contractors. This network includes research centres, some military bases
and government locations. It soon became popular with researchers for collaboration
through electronic mail and other services. ARPAnet formed the beginning of the
Internet.
ARPAnet provided interconnection of various packet-switching nodes
(PSN) located across the continental USA and western Europe using 56 Kbps
leased lines. ARPAnet provided connection to minicomputers running a protocol
known as 1822 (after the number of a report describing it) and dedicated to the
packet-switching task. Each PSN had at least two connections to other PSNs (to
allow alternate routing in case of circuit failure) and up to twenty-two ports for
user computer connections. Later on, DARPA replaced the 1822 packet switching
technology with the CCITT X.25 standard. Subsequently, the excessive increase
in the data traffic made the capacity of the existing lines (56 Kbps) insufficient.
ARPAnet has now been replaced with new technologies.
Internet 2
The success of the Internet and the subsequent frequent congestion of
the existing backbones led the research community to look for alternatives. The
university community, therefore, together with government and industry partners,
and encouraged by the funding agencies, formed the Internet2 project. Internet2
has the following principle objectives:
 To create a high bandwidth, leading-edge network capability for the research
community in the US
 To enable a new generation of applications and communication technologies
to fully exploit the capabilities of broadband networks
 To rapidly transfer newly developed technologies to all levels of education
and to the broader Internet community, both in the US and abroad.

13.4 WEB BROWSER

A Web browser, commonly known as browser, is a computer application that


creates requests for HTML pages or Web pages and displays the processed HTML
page. Web browsers use the HTTP (Hypertext Transfer Protocol) to request for
information from Web servers. The two most commonly used Web browsers are
as follows:
(i) Mozilla Firefox
(ii) Microsoft Internet Explorer
Other examples of Web browsers include Opera, Mosaic, Cello and Lynx.
13.5 INTERNET PROTOCOLS

The Internet protocols enable the transfer of data over networks and/or the Internet
in an efficient manner. When various computers are connected through a computer
network, it becomes necessary to use a protocol to efficiently use network
bandwidth and avoid collisions.
A network protocol defines a language that contains the rules and conventions
which are necessary for reliable communication between different devices over a
network. For example, it includes rules that specify how to package data into
messages, how to acknowledge messages and how to compress data.
There are a number of Internet protocols used. The most commonly used
protocols are as follows:
 Transmission Control Protocol/Internet Protocol (TCP/IP)
 HyperText Transfer Protocol (HTTP)
 File Transfer Protocol (FTP)
 Telnet
(a) Transmission Control Protocol/Internet Protocol or TCP/IP
TCP/IP is a protocol suite that is used to transfer data over the Internet. The two
main protocols in this protocol suite are:
TCP: It forms the higher layer of TCP/IP and divides a file or message into
smaller packets, which are then transmitted over the Internet. Following this, a
TCP layer receives these packets on the other side and reassembles them into the
original message.
When two computers seek a reliable communication between each other,
they establish a connection. This is analogous to making a telephone call. If you
want to speak to your uncle in New York, a connection is established when you
dial his phone number and he answers. The TCP guarantees that data sent from
one end of the connection actually reaches the other end in the same order it was
sent. Otherwise, an error is reported.
IP: It forms the lower layer of the protocol suite. The address part of all the
packets is handled by it in such a manner that they reach the desirable destination.
Usually, this address is checked by each gateway computer on the network to
identify where the message is to be forwarded. This implies that all the packets of
a message are delivered to the destination regardless of the route used for delivering
the packets.
Therefore, IP is responsible for routing datagrams (packets) from one host
to another one over a network. In a network, information is passed in the form of
‘packets’. The IP does not verify that the packet really reaches its destination, nor
is it the task of the IP to make sure that it reaches error free and in the correct
order.
The working of TCP/IP can be compared to shifting your residence to a
new location. It involves packing your belongings in smaller boxes for easy
transportation, with the new address and a number written on each of them. You
then load them on multiple vehicles. These vehicles may take different routes to
NOTES reach the same destination. The delivery time of vehicles depends on the amount
of traffic and the length of the route. Once the boxes are delivered to the destination,
you check them to make sure that all of them have been delivered in good shape.
After that, you unpack the boxes and reassemble your house.
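To make the connection-oriented behaviour of TCP concrete, the following is a minimal sketch in Python using only the standard socket module. The local address 127.0.0.1 and port 9090 are arbitrary values chosen for the demonstration and are not part of the protocol itself.

import socket
import threading

HOST, PORT = "127.0.0.1", 9090   # illustrative local address and port

# Server side: bind to the address, listen, and echo back what one client sends.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind((HOST, PORT))
srv.listen(1)

def handle_one_client():
    conn, _ = srv.accept()           # blocks until a client connects
    with conn:
        data = conn.recv(1024)       # TCP hands the bytes over in order
        conn.sendall(b"ACK: " + data)
    srv.close()

threading.Thread(target=handle_one_client, daemon=True).start()

# Client side: the 'telephone call' -- connect, send, and receive the reply.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"hello over TCP")
    print(cli.recv(1024))            # prints b'ACK: hello over TCP'

If the connection cannot be established, the calls above raise an error instead of silently delivering incomplete data, which reflects the reliability guarantee described above.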
(b) HyperText Transfer Protocol or HTTP
HTTP is a protocol that transfers files (image, text, video, sound and other multimedia
files) using the Internet (see Figure 13.2). It runs on top of the TCP/IP protocol
suite and is an application protocol that forms the foundation protocol of the Internet.
It helps define how messages are formatted and transmitted, and specifies the actions that Web browsers and Web servers must take in response to the
issued commands. HTTP is based on a client-server architecture where the Web
browser acts as an HTTP client making requests to the Web server machines. In
addition to Web pages, a server machine contains an HTTP daemon that handles
the Web page requests. Typically, when a user clicks on a hypertext link or types
a URL (Uniform Resource Locator), an HTTP request is built by the browser and
is sent to the IP address specified in the URL. This request is then received by the
HTTP daemon on the destination server, which, in response, sends back the Web
page that is requested.
HTTP is a stateless protocol, which means each request is processed
independently without any knowledge of the previous request. This is why
server side programming languages, such as JSP, PHP and ASP.NET have
gained popularity.
Fig. 13.2 HTTP (the HTTP client, i.e., the browser, sends a request to a Web server such as www.yahoo.com and receives a response)
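As a hedged illustration of the request-response cycle just described, the short Python sketch below issues an HTTP GET request with the standard urllib module, much as a browser does when a URL is typed in. The URL example.com is a placeholder used only for demonstration.

from urllib.request import urlopen

# Send an HTTP GET request and read the server's response.
with urlopen("http://example.com/") as response:
    print(response.status)                    # e.g. 200 when the page is found
    print(response.headers["Content-Type"])   # the type of document returned
    html = response.read()                    # the raw HTML page sent back
    print(html[:80])                          # first few bytes of the document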

(c) File Transfer Protocol or FTP
File Transfer Protocol or FTP is an application protocol that allows files to be exchanged between computers through the Internet. It is the simplest protocol for downloading or uploading a file from/to a server, and is therefore also the most commonly used one, for example, for downloading a document or an article from a Website. Like other protocols, FTP also uses the TCP/IP protocol suite for data transfer.

Fig. 13.3 FTP (an FTP client exchanges files with an FTP server)

FTP also works on a client-server architecture where an FTP client program is used to make a request to an FTP server (FTP servers are the software applications that support file transfers; see Figure 13.3). Basic FTP support is
usually offered along with the TCP/IP suite of programs. FTP can be used with a
simple command line interface, such as MS DOS prompt or with a commercial
program that comes with a graphical user interface. Using FTP, you can update
files on a server. FTP requests for downloading programs or documents can also
be made through a Web browser. Typically, a login to an FTP server is needed
for this purpose. However, anonymous FTP can be used to access files that are
publicly available.
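The following is a minimal, hedged sketch of an anonymous FTP download using Python's standard ftplib module. The server name, directory and file name are illustrative placeholders, not real resources.

from ftplib import FTP

# Connect, log in anonymously and download one file in binary mode.
with FTP("ftp.example.com") as ftp:
    ftp.login()                              # anonymous login (no credentials)
    ftp.cwd("/pub")                          # move to a publicly readable directory
    with open("article.pdf", "wb") as fh:
        # RETR is the FTP command for retrieving the named file
        ftp.retrbinary("RETR article.pdf", fh.write)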
(d) Telnet
Telnet is a protocol that allows you to access a remote computer provided you
have been given the permission to do so. It is typically referred to as ‘remote login’. Telnet is based on a different concept from HTTP and FTP, which only enable requests for specific files to be made from remote computers. With Telnet, on the other hand, you can log in as a regular user on a remote machine with the privileges that have been granted to you on that computer with regard to accessing
specific data and applications. A request for a connection to a remote host, which
may be a computer lying physically in a neighbouring room or miles away, results
in an invitation to log on with a user ID and password. If the request is accepted,
the user can enter commands through the Telnet program and these commands
will be executed as if they were being entered directly from the host machine.
Once connected, the user’s computer emulates the remote computer. Telnet is
typically used by program developers or by those who need to access data and/
or applications located at a particular host computer.
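A scripted Telnet session can be sketched with the telnetlib module, which ships with older Python versions (it has been removed from the newest releases). The host name, user name, password and commands below are purely illustrative; on a real host you would substitute the credentials you have actually been granted.

from telnetlib import Telnet

HOST = "telnet.example.com"      # placeholder remote host

with Telnet(HOST, 23, timeout=10) as tn:
    tn.read_until(b"login: ")    # wait for the login prompt
    tn.write(b"guest\n")
    tn.read_until(b"Password: ")
    tn.write(b"guest\n")
    tn.write(b"ls\n")            # commands run as if typed on the remote host
    tn.write(b"exit\n")
    print(tn.read_all().decode("ascii", errors="replace"))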
IP Address
Since the Internet consists of a large number of computers connected to each
other, it requires a proper addressing system to uniquely identify each computer
on the network. Each computer connected to the Internet is associated with a
unique number and/or a name called the computer address. Before you can access
any Web page on a computer, you require the computer address.
An IP address is a unique number associated with each computer, making it uniquely identifiable among all the computers connected to the Internet. This is a 32-bit number and is divided into four octets, such as 00001010 00000000 00000000 00000110. For human readability, it is represented in decimal notation, separating each octet with a period. The above number would therefore be represented as 10.0.0.6.
Each octet can range from 0 to 255; thus all IP addresses lie between 0.0.0.0 and 255.255.255.255, resulting in a total of 4,294,967,296 possible IP addresses. Note that no two machines (or hosts) can have the same IP address.
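The conversion between the binary form and the dotted-decimal form of the example address above can be sketched with Python's standard ipaddress module:

import ipaddress

# The four example octets written out in binary, joined into one 32-bit number.
octets = "00001010 00000000 00000000 00000110".split()
value = int("".join(octets), 2)
print(ipaddress.IPv4Address(value))                        # 10.0.0.6

# The whole address space runs from 0.0.0.0 up to 255.255.255.255.
print(int(ipaddress.IPv4Address("255.255.255.255")) + 1)   # 4294967296 addresses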
Domain
The term ‘domain’ refers either to a local subnetwork or to descriptors for sites on
the Internet. The computers in a domain can share physical proximity on a small
LAN or they can be located in different parts of the world. Following are the
prime types of domain:
 Local Subnetwork Domains: On LAN, a domain is a subnetwork made
up of a group of clients and servers under the control of one central security
database. Within a domain, users authenticate to a centralized server known
as a domain controller rather than repeatedly authenticating to individual
servers and services. Individual servers and services accept the user, based
on the approval of the domain controller.
 The Internet Domains: On the Internet, a domain is part of every network
address, including Website addresses, e-mail addresses, and addresses for
other Internet protocols, such as FTP, IRC and Secure Shell or SSH. All
devices sharing a common part of an address or URL are said to be in the
same domain. To obtain a domain, you must purchase it from a domain
registrar. You can choose a registrar from the list of accredited registrars.
The governing body for domain names is called ICANN or the Internet Corporation for Assigned Names and Numbers, a non-profit corporation charged with overseeing the creation and distribution of TLDs or Top Level Domains on the Internet.
Domain Name Space
DNS is based on a hierarchical structure that enables same host names to be used
unambiguously within different domains to simplify name space management. The
name space describes the architecture of the names including rules for name
creation, interpretation and the form of names. The Domain Name System (DNS)
describes an architecture depending on domains or nodes. The domains are
structured hierarchically according to their control of authority. The Internet is
divided into more than 200 TLDs. These domains are further partitioned into
subdomains. The subdomains can be further partitioned, and so on. Examples of country-code TLDs are in, jp, us, ae, eg, etc. Certain TLDs come under the category of generic TLDs, such as com, net, org, edu and int.
The DNS hierarchical name architecture follows a directory structure and organizes names from the most general to the most specific. In this manner, the DNS name space allows names to be arranged into a hierarchy of domains that looks like an inverted tree. In computer terminology, it resembles the directory structure of a file system. Every standalone internetwork will define its own name space with a unique hierarchical structure.
The domain name components are represented as English words separated by dots, for example, www.hotmail.com, www.yahoo.co.in, etc. Each dot-separated name is a subdomain and is managed by a separate authorized server; for example, the server or servers authorized for ‘.com’ manage all the domains matching ‘*.com’.
The DNS name space defines an inverted tree-type structure, i.e., the DNS tree grows from the top down. Certain terms relating to the DNS tree are defined below:
Root: The DNS tree grows from top to down, therefore root occupies the top of
the DNS name structure. However, it does not define any name and is considered
null. The root domain is the parent of all the domains in the hierarchy.
Branch: It refers to any next closest part of DNS hierarchy and describes a
domain with subdomains and objects within it. Like a real tree, all branches connect
themselves to the root.
Leaf: Beneath a leaf, no object is defined; it is therefore an ‘end object’ in the structure. Nodes that are neither the root nor leaves are referred to as interior nodes, indicating that they occupy a position in the middle of the structure.
Top Level Domains: They come directly under the root of the tree and are therefore referred to as the highest level domains, also called first level domains. Similarly, the domains placed directly beneath the top level domains are called second level domains, and so on. The TLDs are children of the root domain. Peers at the same level in the hierarchy are known as siblings, so all the TLDs are siblings with the root domain as their parent.
Subdomains: They are located directly below the second level domains.
In conclusion, a domain is either a collection of objects representing a branch of the tree or a specific leaf. Thus, a DNS name space is organized as a true topological tree, in which every node has exactly one parent and there are no loops. The DNS name space is a logical structure and bears no relation to the physical locations of devices.
Naming in DNS
It involves DNS labels and label syntax rules in which each domain or node is described with a text label so that the domain may be identified within the structure. The syntax rules are:
Length: A label may be 0 to 63 characters long. However, lengths of 1 to 20 characters are most widely used.
Symbols: A name may contain letters, numbers and the dash symbol (‘-’) only.
Case: Labels are not case sensitive; the lower and upper case forms of the same label are equivalent. Every label needs to be unique within its parent domain but need not be unique across domains.
Creating Domain Names
The individual domain within the domain name structure is uniquely created using
the sequence of labels beginning from the root of the tree to the target domain
from right to left separated by dots to provide a formal name to the domain. The
root of the name space is defined with a zero length or ‘null’ name by default. The
DNS name length is limited to 255 characters to describe a complete domain
name.
The Domain Name may be either a Fully Qualified Domain Name (FQDN) or a Partially Qualified Domain Name (PQDN). An FQDN gives the full path of labels from the root of the tree down to the target node, uniquely identifying that node in the DNS name space. Unlike an FQDN, a PQDN describes only a part of a domain name, providing a relative name for a particular context.
It is essential to have an authority structure to manage unique TLDs. The central DNS authority for the Internet, formerly the Network Information Center and now known as the Internet Assigned Numbers Authority (IANA), creates the TLD names. In some cases, IANA delegates its power over some of the TLDs to other organizations; thus multiple authorities work on assigning and registering domain names. The authority for a lower level domain is entrusted to the organization that owns the corresponding second level domain. In conclusion, the DNS name space of the Internet is managed by several authorities arranged hierarchically, in the same manner as the DNS name space itself.
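Resolving an FQDN to its IP address, which is the everyday job of DNS, can be sketched with the resolver built into Python's standard socket module. The host names used here are placeholders for demonstration.

import socket

fqdn = "www.example.com"
print(socket.gethostbyname(fqdn))        # one IPv4 address registered for the name
print(socket.getfqdn("example.com"))     # the fully qualified name the resolver reports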
Connecting to the Internet
There are various ways to connect to the Internet. Some of the common options
are described here:
Direct Connection
Through a direct connection, a machine is directly connected to the Internet
backbone and acts as a gateway. Though a direct connection provides complete
access to all Internet services, it is very expensive to implement and maintain.
Direct connections are suitable only for very large organizations or companies.
Through the Internet Service Provider
You can also connect to the Internet through the gateways provided by the Internet Service Providers or ISPs. The range of the Internet services varies depending on the ISP. Therefore, you should use the Internet services of the ISP that is best suited to your requirements. You can connect to your ISP using the following two methods:
Remote Dial-up Connection
A dial-up connection, illustrated in Figure 13.4, enables you to connect to your ISP using a modem. A modem converts the computer bits or digital signals to modulated or analog signals that the phone lines can transmit, and vice versa. Dial-up connections use either SLIP (Serial Line Internet Protocol) or PPP (Point-to-Point Protocol) for transferring information over the Internet.

Fig. 13.4 Remote Dial-up Connection (the user’s computer connects through a modem over the telephone line to the ISP’s modem and the Internet backbone)

For dial-up connections, regular telephone lines are used. Therefore, the quality
of the connection may not always be very good.
Until the end of 1995, the conventional wisdom was that 28.8 Kbps was
about the fastest speed you could squeeze out of a regular copper telephone line.
Today, data transmission for a dial-up connection is typically 56 Kbps. The key
information here is to know which speed modem is supported by your ISP. If your
ISP supports only a 28.8 Kbps modem on its end of the line, you would be able
to connect to the Net only at 28.8 Kbps, even if you had the faster modem.
Permanent Dedicated Connection
You can also have a dedicated Internet connection that connects you to ISP through
a dedicated phone line. A dedicated Internet connection is a permanent telephone
connection between two points. Computer networks that are physically separated
are often connected using leased or dedicated lines. These lines are preferred
because they are always open for communication traffic unlike the regular telephone
lines that require a dialling sequence to be activated. Often, this line is an ISDN
line that allows transmission of data, voice, video and graphics at very high speeds.
ISDN or Integrated Services Digital Network applications have revolutionized
the way businesses communicate. ISDN lines support upward scalability, which
means that you can transparently add more lines to get faster speeds, going up to
1.28 Mbps (Million bits per second).
T1 and T3 are the two other types of commonly used dedicated line types
for the Internet connections. Dedicated lines are becoming popular because of
their faster data transfer rates. Dedicated lines are cost effective for businesses
that use the Internet services extensively.
Digital Subscriber Line or DSL
DSL is a high speed technology that has recently gained popularity. It can carry
both data and voice over telephone lines. It is possible for a DSL line to stay connected to the Internet, which means that you do not have to dial up every time
you wish to go online. Usually with DSL, data can be downloaded at rates that
can go up to 1.544 Mbps and data can be sent at 128 Kbps. Because DSL lines
carry both data and voice, a separate phone line does not have to be installed.
DSL services can be established using your existing lines, as long as the service is
offered in your locality and your system lies within the appropriate distance from
the central switching office of the telephone company.
DSL services require special modems and network cards to be installed on
your computer. The cost of equipment, the monthly service and the DSL installation charges may vary considerably, so it is advisable to check with the ISP. It may be noted here that prices are now declining due to increasing competition.
Cable Modems
You can also connect to the Internet at high speeds through cable TV. Since their
speeds go up to 36 Mbps, cable modems make it possible for data to be
downloaded in a matter of seconds, where they might take fifty times longer with
dial-up connections. Since they work over TV cables, they do not tie up telephone
lines and also do not require you to specifically connect as in the case of dial-up
connections.
Integrated Services Digital Network or ISDN
ISDN services are an older, but nevertheless viable technology, provided by
telephone companies. ISDN lines transfer data at rates that range between 64 Kbps and 128 Kbps. These leased lines can have two configurations, namely, T1
and T3. T1 (the commonly used connection option) is a dedicated connection that
allows data to be transferred at a speed of 1.54 Mbps or Million bits per second.
This proves to be beneficial for computers and Web servers that remain connected
to the Net at all times. Portions of a T1 line can also be leased using either the
Fractional T1 or the Frame Relay systems. They can be leased in blocks that
range between 128 Kbps and 1.5 Mbps.
Since leased lines cost more, they are usually used only by those companies
whose businesses revolve around the Internet or who engage extensively in large
data transfers.
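To put the connection speeds quoted in this section into perspective, the hedged sketch below computes rough transfer times for a 10 MB file; the file size is an arbitrary example and protocol overhead is ignored.

# Speeds are expressed in bits per second.
FILE_SIZE_BITS = 10 * 8 * 10**6          # a 10 megabyte file, in bits

speeds_bps = {
    "56 Kbps dial-up":    56_000,
    "128 Kbps ISDN":      128_000,
    "1.544 Mbps DSL/T1":  1_544_000,
    "36 Mbps cable":      36_000_000,
}

for name, bps in speeds_bps.items():
    seconds = FILE_SIZE_BITS / bps        # time = size / speed
    print(f"{name:<20} ~{seconds:8.1f} seconds")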
Prerequisites for the Internet
Following are the prerequisites for the Internet:
Hardware
The hardware requirement varies from case to case.
 In the case of a dial-up connection, a computer with a serial port for connecting an external modem, or a spare expansion slot for connecting an internal modem card, is needed. In the case of broadband or DSL connections, a spare USB port and a LAN card are needed.
 A modem (internal or external), ideally a faster one (with a speed of 56 Kbps or more), is required. A modem converts electronic signals from your computer into analog signals (sound), which can then be sent over telephone lines.
 Cables are required with jacks and sockets to connect your modem with
the computer and telephone.
Software
Following are the software requirements for the Internet:
 Windows: Although you can use earlier versions, Windows 98 or a higher
version is preferable because it has inbuilt components to support the Internet
connectivity.
 Web Browser: A Web browser, such as the Internet Explorer or Netscape
Navigator. A browser is a client software program that allows the user to
navigate the Web.

Others
 A telephone connection in case of a dial-up modem.
 An Internet account. If you want to have Internet access at your home, you
will need to sign up with an Internet Service Provider (ISP) and have an
Internet account. Some common ISPs are Mahanagar Telephone Nigam
Limited or MTNL, Videsh Sanchar Nigam Limited or VSNL, Airtel, Sify,
etc.
Factors Affecting the Internet Connectivity
 Speed of the Modem: A modem with a minimum speed of 56 Kbps or
higher is recommended.
NOTES  Quality of Phone Line: In the case of dial-up networks, the Internet
connections with modems can get disrupted by the level of noise on the
phone line that runs into your home.
 Internet Traffic: The traffic on the Web generally builds up throughout the day, reaching its peak during the early evening. Therefore, it is advisable to schedule your downloading activity at off-peak hours, when surfing is also faster.
The following are the factors associated with your computer affecting Internet
Speed:
1. A faster processor (2-3 GHz or higher) allows faster surfing on the
Web.
2. A computer system with a better memory (RAM) enables faster surfing.
You can prevent slowdowns by refraining from working with other
software applications while you are surfing.
3. Web surfing is also adversely affected by an exceedingly fragmented
hard disk. So it is advisable to defragment the hard drive at reasonably
frequent intervals and keep it optimized.
4. The Web browser’s Cache refers to a storage area on the computer’s
hard disk. While you are surfing, the Web pages you visit are stored in
the cache up to the disk space limit set by you. As a result, the Web
browser displays the cached pages faster, since they may be retrieved
from the hard disk and not from the Internet. You can increase the cache
limit of your browser.
5. The surfing speed can be further enhanced by using two or more browser
windows simultaneously. This facilitates reading the contents on one page
even as another page gets loaded in the other window. It is also advisable
to turn off Java and image loading in your browser. To do so, go to the
Advanced tab of Internet Options in the Tools menu. This does not
affect the content on a Web page.
Common Terminology

World Wide Web


Commonly known as ‘WWW’, ‘Web’ or ‘W3’, it consists of a number of distributed
servers that are connected through hypermedia documents. These documents are
created using HyperText Markup Language (HTML). Related text organized into
units can be accessed by using a link called hyperlink. These links or hyperlinks
allow the user to navigate from one document to another without having to worry
about the actual physical location of the documents. WWW makes it possible to
share information between disparate users, computers and operating systems. It
is therefore the fastest growing application of the Internet.
Website
The Web can be understood as a collection of thousands of information locations connected to each other. Each such location is called a Website and comprises
multiple Web pages. A Web page is created using HTML and is like any other
computer document. It consists of text, pictures, sound, video and hyperlinks.
You can navigate from one Web page to another using hyperlinks. A Website can
be created by an individual or a company. Websites are hosted on the Web servers
which are accessible on the Internet. A URL defines the address of a Website and
is used to point to the homepage of the Website. A homepage is the first page that
is displayed when you access a Website. It serves as a reference point and contains
links to additional HTML pages or links to other Websites. For example, typing the URL ‘http://www.britannica.com’ displays the homepage of the Website britannica.com.
Uniform Resource Locator or URL


A URL defines the address of a site on the Internet. It defines the global addresses
of documents and other resources on the WWW.
Typically, the first part of a URL indicates the protocol to be used, while the
second part specifies the domain name or IP address where the resource is located.
Some examples of URLs are shown below:

URL Description
http://mysite.com/index.html Fetch a Web page (index.html) using the HTTP protocol
ftp://www.sharware/myzip.exe Fetch an executable file (myzip.exe) using FTP
protocol
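A hedged sketch of how such a URL splits into its parts, using Python's standard urllib.parse module and the first example from the table above:

from urllib.parse import urlparse

parts = urlparse("http://mysite.com/index.html")
print(parts.scheme)    # 'http'        -> the protocol to be used
print(parts.netloc)    # 'mysite.com'  -> the domain name of the server
print(parts.path)      # '/index.html' -> the resource requested from that server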

Wi-Fi Technology
Wireless Fidelity or Wi-Fi serves a role comparable to that of the public switched network, which was originally created by American Telephone & Telegraph or AT&T using Bell Laboratories standards to ensure that all central office switches and lines that carried calls met preset standards. The standards set by AT&T enabled everyone to communicate with anyone else regardless of the service provider, since the dialing, ringing, routing and telephone numbering were all uniform.
Wi-Fi Security for Public Networks
From the point of view of both fast connection and security, Wi-Fi is used for public networks and for wireless devices. The Wi-Fi Alliance, essentially a non-profit organization, represents the wireless standard protocol and supports interoperability features for wireless devices. Wi-Fi connects networked systems without cables, but both Wi-Fi equipment and a regular ISP service are needed. Manufacturers belonging to the Wi-Fi Alliance build various devices for the 802.11 standards. Approximately 205 companies have joined the Wi-Fi Alliance and almost 900 products have been certified as interoperable. These companies give an assurance that Wi-Fi devices interoperate at the physical layer of the reference models. The Wi-Fi Protected Access (WPA) solution has recently been added to the Wi-Fi standard; the physical and access control layers implement extra enhanced features, such as Internet security. Wi-Fi has grown by leaps and bounds because it operates in unlicensed spectrum, using the 2.4 GHz and 5 GHz bands, and it provides adequate data throughput for most uses. The prime piece of equipment required for a Wi-Fi connection is the Wi-Fi PC card, a common way to connect a computer to the Internet without wires. This card is technically known as a Personal Computer Memory Card International Association or PCMCIA card. The two prime benefits offered by this device are as follows:
 You can work almost anywhere with a mobile Wi-Fi device connected to the Internet without wires when you are away from your home or office.
 You can free yourself from the need to drill holes and run wires by creating a network at home or in the office using Wi-Fi devices.

Fig. 13.5 Wi-Fi Zone


A hotspot in a Wi-Fi connection means an area in which Wi-Fi users can connect to the Internet (see Figure 13.5). The three approaches used to search for hotspot information are as follows:
 You can use the search tool provided by an organization whose branches host Wi-Fi hotspots, such as the Starbucks chain across the world.
 If you have signed up with a Wi-Fi provider, you can search via the Internet Service Provider or ISP.
 You can search the many cross-provider directories that are available on the Web.
Wi-Fi hotspots are created around antennas that radiate the radio waves of the wireless network. They are largely confined to crowded areas, such as airport lounges, cafes, etc., with almost 10,000 hotspots in operation, and series of antennas are set up to form city-wide zones. The Internet connection is facilitated by Wi-Fi chips. Long-distance calls over Wi-Fi bypass the telephone network using VoIP (Voice over Internet Protocol) technologies. Wi-Fi enabled mobiles and laptops connect to these hotspots frequently, and the usage charge is paid by credit card on the login page provided by the Web browser. Users can also hold accounts provided by service providers, such as BT Openzone, SkypeZones, Nintendo Wi-Fi, T-Mobile, O2, etc.

Fig. 13.6 Wi-Fi Network Setup (a modem links the Internet DSL or cable line to a router; the router serves both the Wi-Fi access point with its wireless Wi-Fi devices and the wired network clients)

In Figure 13.6, the Wi-Fi access point is interconnected with the Wi-Fi devices and is configured with a router. A modem is used to make the connection between the router and the Internet DSL or cable line. The main role of the router is to connect the Wi-Fi access point and the wired network clients. Voice over IP or VoIP software enables data, fax calls and voice across IP networks and represents Internet telephony, allowing communication between two PCs over the packet-switching Internet. It works by encoding voice information and converting it to digital format, and it provides cost benefits by converging data and voice over the IP network onto mobile phones. Many of the latest mobiles connect to Wi-Fi via VoIP technology. Between the Internet connection and the Wi-Fi access point, there needs to be hardware designed to connect to the Internet and share the internetworking connectivity. The following guidelines are required for connecting Wi-Fi to public networks:
 The Wi-Fi Protected Access 2 or WPA2 encryption is required for the
communications.
 The shared keys and certificates are encrypted.
 The Service Set IDentification (SSID) should be used for broadcasting.
 The Media Access Control or MAC address authentication must be used for each specific system that accesses the Wi-Fi link.
 The infrastructure mode must be used for the Internet connectivity.
After getting the Wi-Fi connection, you will know your shared WPA2 key, MAC address and SSID. The infrastructure mode is used to set up the wireless network card and firewall. Connecting to the Internet via Wi-Fi involves the following equipment:
 Wi-Fi device (the client).
 A Wi-Fi broadcast unit (the access point).
 Network connectivity hardware (router and modem).
 Fast Internet connection (usually via cable or DSL).
The Wi-Fi technology is best known for its fast connectivity and speed.
Figure 13.7 illustrates the various wireless standards along with their speed.
Fig. 13.7 Speed of Wireless Standards (comparing the throughput, in Mbps, of Bluetooth, 10Base-T wired Ethernet and the 802.11 family of Wi-Fi standards)

The methods used to guard against Wi-Fi vulnerabilities are as follows:
 The Wi-Fi connection must be locked if the connection is not needed.
 The Wi-Fi connection must be secured with wireless firewall security.
 The Wi-Fi connection must be restricted so that only preconfigured users can access the Internet.

Check Your Progress


1. Define internet.
2. What is a web browser?
3. What is the significance of internet protocols?

13.6 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS

1. The word Internet is the short form of internetwork or interconnected network.
2. A Web browser, commonly known as browser, is a computer application
that creates requests for HTML pages or Web pages and displays the
processed HTML page.
3. The Internet protocols enable the transfer of data over networks and/or the
Internet in an efficient manner.

13.7 SUMMARY

 The word Internet is the short form of internetwork or interconnected network.
 The global nature of the internet would not have been possible without the
development of Transmission Control Protocol/Internet Protocol (TCP/IP).
 A Web browser, commonly known as browser, is a computer application
that creates requests for HTML pages or Web pages and displays the
processed HTML page.
 The Internet protocols enable the transfer of data over networks and/or the
Internet in an efficient manner.
 TCP/IP is a protocol suite that is used to transfer data over the Internet.
 HTTP is a protocol that transfers files (image, text, video, sound and other
multimedia files) using the Internet.
 File Transfer Protocol or FTP is an application protocol that allows files to
be exchanged between computers through the Internet.
 Telnet is a protocol that allows you to access a remote computer provided
you have been given the permission to do so.
 An IP address is a unique number associated with each computer, making
it uniquely identifiable among all the computers connected to the Internet.
Internet and Its Working  The term ‘domain’ refers either to a local sub network or to descriptors for
sites on the Internet.
 A dial-up connection enables you to connect to your ISP using a modem.
NOTES  A URL defines the address of a site on the Internet. It defines the global
addresses of documents and other resources on WWW.

13.8 KEY WORDS

 TCP/IP: It is a protocol suite that is used to transfer data over the Internet.
 File Transfer Protocol or FTP: It is an application protocol that allows
files to be exchanged between computers through the Internet.
 IP address: It is a unique number associated with each computer, making
it uniquely identifiable among all the computers connected to the Internet.

13.9 SELF ASSESSMENT QUESTIONS AND EXERCISES

Short Answer Questions


1. What are the different groups of networks that constitute the internet?
2. What is ARPANET?
3. What is DNS?
Long Answer Questions
1. Explain the various types of internet protocols.
2. What are the different ways of connecting to the internet? Explain.
3. What are the different factors affecting the internet connectivity? Explain.

13.10 FURTHER READINGS

Bhatt, Pramod Chandra P. 2003. An Introduction to Operating Systems—Concepts and Practice. New Delhi: PHI.
Bhattacharjee, Satyapriya. 2001. A Textbook of Client/Server Computing. New
Delhi: Dominant Publishers and Distributers.
Hamacher, V.C., Vranesic, Z.G. and Zaky, S.G. 2002. Computer Organization, 5th edition. New York: McGraw-Hill International Edition.
Mano, M. Morris. 1993. Computer System Architecture, 3rd edition. New Jersey:
Prentice-Hall Inc.
Nutt, Gary. 2006. Operating Systems. New Delhi: Pearson Education.
UNIT 14 INTERNET AND ITS USES


Structure
14.0 Introduction
14.1 Objectives
14.2 Internet Security
14.3 Uses of Internet
14.4 Virus
14.5 Cloud System
14.6 Computing Platforms and Technologies
14.7 Cloud Computing Architecture and Infrastructure
14.8 Types of Cloud and Deployment Models
14.9 Answers to Check Your Progress Questions
14.10 Summary
14.11 Key Words
14.12 Self Assessment Questions and Exercises
14.13 Further Readings

14.0 INTRODUCTION

In this unit, you will learn about firewalls, encryption and the uses of the internet. All computers on the Internet communicate with one another using the Transmission Control Protocol/Internet Protocol architecture, abbreviated to TCP/IP, based on a client/server architecture. This means that the remote server machine provides
files and services to the user’s local client machine. So, there is a possibility of
threats to information. Firewall acts as a hurdle to protect the system. Encryption
is a way to secure information and ensure privacy on the internet. You will also
learn about the cloud system architecture and deployment models.

14.1 OBJECTIVES

After going through this unit, you will be able to:


 Explain the significance of firewall and encryption techniques
 Discuss the uses of internet
 Explain the computing platforms and technologies
 Understand the cloud architecture
 Explain different types of deployment models

14.2 INTERNET SECURITY

In 1990, the WWW was established to share distributed information amongst individuals. The main technologies of the Web consisted of URLs, a method to identify the location of information; HTML, the language with which information on the Web is represented; and HTTP, the language used for communication between Web servers and Web browsers.
Different technologies like Java and ActiveX were added to make Web
pages dynamic. All these different technologies have made WWW attractive for
providing many e-services. Therefore, WWW is currently a commonly used
platform for e-commerce, e-banking, e-auctioning, e-government, e-voting, e-
healthcare and e-insurance.
Security was not always considered during the development of Web technologies. As a result, it is possible for an attacker to eavesdrop on the communication between a user’s browser and a Web server; sensitive information
such as a credit card number or any other confidential data can thus be obtained.
An attacker could try to impersonate entities in order to get information, which is
normally not disclosed without authorization. For example, an attacker can spoof
a Web banking application, thereby gathering users’ PIN codes. A substantial
amount of confidential information is available on the WWW, which can be
accessed without authorization. An attacker could attract a user to surf its Web
page, which contains a program that installs the Trojan horse virus. The Web
provides an excellent means for exchanging information, which can include illegally
distributed, copyright protected or explicit material.
Firewalls
A firewall can be described as a barrier that protects your system from being accessed by offensive Websites and potential hackers. A firewall simply filters information coming from other sources, such as suspicious sites or networks, to your network, and it can prevent the computers in the network from being accessible to anybody on the Internet. A firewall can and should be put at every Internet connection. A firewall can be of the following types:
 Windows: When someone on the Internet tries to access your computer
without proper authorization, Windows firewall will not allow that connection.
If you are using a program such as network game, which needs constant
information transfer from the Web, the firewall will pop up a window asking
if you want to block or unblock the Internet connection. If you decide to
continue receiving the information from the Website, then this firewall will
not trouble you further. Windows firewall can do the following tasks:
o It can help block the worm and viruses.
o It will ask for permission from the user every time a computer attempts
to connect any suspicious Website.
o It maintains a security log, which contains information related to all the attempts, either successful or unsuccessful, that a user makes to connect to a Website.
Following are the shortcomings of Windows firewall:
– If viruses and worms are already present on the system, then Windows firewall cannot detect them.
– It cannot block unwanted mails or spam.
 Packet filtering: Small blocks of information are called packets. When you install a packet filtering firewall, it checks the incoming data against a set of filters. Each packet is either forwarded or blocked based on a set of rules defined by the firewall administrator. If a packet matches the set of allowed characteristics, then it is allowed to enter the network. This firewall focuses only on inspecting packets and blocking unwanted ones from entering the network traffic. Packet filtering only helps in filtering and does not provide information about the source time and file size of the packets. A common configuration for network-level packet filtering firewalls is to allow all connections initiated by computers inside the firewall, and to restrict or prohibit all connections made by machines outside the firewall (a minimal sketch of such a rule check appears after this list).
 Stateful inspection: This firewall does not examine all the contents of the
packet; instead it compares certain parts of the packet with a set of trusted
Website sources. The outgoing information from inside the firewall is checked
for specific characteristics and then incoming information is compared to
these characteristics. If the comparison yields an acceptable match, then
the information is allowed to pass through the firewall, otherwise it is
discarded.
 Proxy servers: Whenever some data is transferred through the Internet,
the information about the data is stored in the firewall and is then sent to the
requesting system and vice versa. The proxy servers allow indirect access
to and from the Internet, where you will require two connections, one from
the client to the firewall, which is called proxy for the desired server, and
another from the firewall to the desired server. This firewall can provide
detailed information such as source and size of the data, but this makes the
data transfer slow.
 Application gateways: It provides the highest level of security without
using a proxy server. It works by collecting the information required for
security decisions from all the communication channels and storing this
information in a state that can be used to check further attempts to connect
those Websites again in future. This helps in achieving high security, high
performance and efficiency. The application gateway firewalls first determine
if the requested connection between a computer on the internal network
and with the external network is authorized. If the connection is authorized,
then the firewall sets up the necessary communication links between the
two computers. This firewall can observe the communication between the
two networks and curb any unauthorized activity.
 Circuit level gateways: This firewall directs all the requests to a single computer acting as the firewall. The requests are then made from the network and they appear as though they were coming directly from the firewall. The whole network can access the Internet with a single IP number allocated from the Internet IP space. The circuit level firewall implementation requires you to restructure the whole network, and it can turn out to be very hard or impossible to implement.
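The packet filtering rule described earlier in this list can be sketched, under simplifying assumptions, as a small Python function; the network range, the permitted ports and the sample addresses are all illustrative.

import ipaddress

INTERNAL_NET = ipaddress.ip_network("192.168.1.0/24")   # the protected LAN
ALLOWED_INBOUND_PORTS = {80, 443}                       # e.g. a public web server

def allow_packet(src_ip: str, dst_port: int) -> bool:
    """Return True if the packet should be forwarded, False if it is blocked."""
    src = ipaddress.ip_address(src_ip)
    if src in INTERNAL_NET:
        return True                          # traffic initiated inside is allowed
    return dst_port in ALLOWED_INBOUND_PORTS # outside traffic only on permitted ports

print(allow_packet("192.168.1.20", 25))      # True  - internal host
print(allow_packet("203.0.113.9", 443))      # True  - outside host, permitted port
print(allow_packet("203.0.113.9", 23))       # False - outside host, blocked port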
Encryption
Encryption is a way to secure personal information and ensure privacy on the
Internet. All the personal information on the Internet can be made available to
anyone provided that the person has the required tools and approaches to get that
information. Many studies, over the last few years have suggested that a majority
of consumers are concerned about when, what and how their personal information
is being collected, and how it is being used and whether it is being protected or
not. They want to know whether the information is being sold or shared with
others, and with whom and for what purpose. The advent of commerce on the
Internet exposed the lack of security over any public network. The encryption
software evolved to ensure that the computers and data on the Internet are safe
and secure.
Large-scale and continuous handling of financial and personal information
can lead to the Internet being associated with fraud and privacy breaches, instead
of being a practical commerce medium. The translation of data into a secret code
is called encryption and it is the most effective way to ensure data security. To
read an encrypted file, you must have access to a secret key or password that
enables you to decrypt it. Unencrypted data is called plain text and encrypted
data is referred to as cipher text. The following are the types of encryption:
 Asymmetric: It is also called public-key encryption. The system uses
two keys, a public key known to everyone and a private or secret key
known only to the recipient of the message. When user A wants to send
a secure message to user B, then A can use B’s public key to encrypt
and send the message. User B then uses his private key to decrypt the
message sent by user A. In the public key system, the public and private
keys are related in such a way that only the public key can be used to
encrypt messages and only the corresponding private key can be used
to decrypt them. Moreover, it is virtually impossible to deduce the private
key even if you know the public key. Whitfield Diffie and Martin Hellman
invented the public key cryptography in 1976, so it is sometimes called
Diffie—Hellman encryption. It is also called asymmetric encryption
because it uses two keys to encrypt and decrypt, instead of one key.
 Symmetric: In this type of encryption, the same key is used to encrypt and decrypt the message. It requires all parties that are communicating to share a common key. This encryption differs from asymmetric encryption, which uses one key to encrypt a message and another to decrypt it (a minimal sketch of symmetric encryption appears after this list).
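The following hedged sketch shows symmetric encryption in practice using the Fernet recipe from the third-party Python package cryptography (installed separately with pip); the message is an illustrative placeholder. The same shared key both encrypts and decrypts, which is exactly the property that distinguishes symmetric from asymmetric encryption.

from cryptography.fernet import Fernet

key = Fernet.generate_key()              # the shared secret key (keep it private)
cipher = Fernet(key)

token = cipher.encrypt(b"PIN: 4321")     # cipher text, safe to transmit
print(token)
print(cipher.decrypt(token))             # b'PIN: 4321' -- the plain text recovered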
User Authentication

Users need to be authenticated before they use any legal e-service. When a
connection is made between the browser and server, the user can be
authenticated using a digital signature. Following are the various ways of
authenticating a user:
 Host name or IP address: This technique authenticates users through the
host name or IP address of their machine. It does not provide a user the
opportunity of mobility. This authentication mechanism is useful when applied
on a small scale. It is impractical to apply this on a large scale because when
the number of computers increases, it becomes difficult to manage the IP
address of each computer.
 Fixed passwords: User authentication is very often performed with fixed
passwords. This can be done based on the basic authentication standard,
i.e. the password is provided via a particular pop-up browser window, and
is included in an exclusive HTTP header, which is present in each subsequent
user request. Fixed passwords are widely used, as they provide transparent
mobility, and also because they are very easy to implement and use. Most
Websites do not rely on the basic authentication standard and implement
their exclusive and individual fixed password scheme, where the user must
provide the password through an HTML form and on receiving the valid
password, the server authenticates the user. The client should include this password in subsequent requests within the session with the server. This happens automatically when the authenticator is included as a cookie in the initial reply of the server (a minimal sketch of checking a fixed password against a stored hash appears at the end of this list).
 Dynamic passwords: These passwords are only used once and they are
more secure, but less convenient. A list of independent random one-time
passwords can be issued to users as a dynamic password. As this is very
difficult to learn by heart, users will be forced to keep the list somewhere, either on paper or, worse, in a file on their machine, which makes it inconvenient to use and prone to threats.
 Hardware tokens: These can be used with several of the previously
discussed user authentication mechanisms. The users can easily carry these
hardware tokens with them and can use them on different machines.
Hardware tokens offer more mobility than software tokens that are installed
on a particular machine. Hardware tokens display the current time interval
or a response to an error. The site should issue a new token to all users and
all tokens should contain a different cryptographic key as using the same
token is not at all safe. There are two types of cryptographic keys, public-
key encryption and symmetric keys. Using public-key encryption is safer
than using symmetric keys. The symmetric keys require maintenance of a
secure database containing all the secret keys, which the site shares with its
users. Therefore, the symmetric keys are often cryptographically derived
from a unique serial number of the token and a master key, which is the
same for all the tokens. In this way, each user shares a different key with the
site, without having the problem of the secure database.
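Fixed password checks of the kind described above are normally performed against stored salted hashes rather than the raw password. The sketch below is an illustrative assumption rather than the method of any particular site; it uses Python's standard hashlib, and the iteration count and salt size are arbitrary example parameters.

import hashlib
import hmac
import os

def make_record(password: str) -> tuple[bytes, bytes]:
    """What the server stores for a user: a random salt and a PBKDF2 hash."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)   # constant-time comparison

salt, digest = make_record("s3cret-pass")
print(check_password("s3cret-pass", salt, digest))  # True
print(check_password("wrong-guess", salt, digest))  # False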

14.3 USES OF INTERNET

Let us analyse the use of Internet in different industries.


Railways
The railroad industry is an important driver of a country’s economic growth. An
efficient rail network means transportation of goods and people at low cost and in
time and, thereby, it facilitates economic growth. However, the size and complexity
of problems which the railways face are also unique. Let us, for instance, consider
Indian Railways, which is one of the largest rail networks in the world. It runs
around 11,000 trains every day, out of which 7000 are passenger trains. It has
over 7500 locomotives, 37,000 coaches and over 2 lakh freight wagons. It operates
from over 6800 railway stations and employs over 1.5 million people. The sheer
scale of the operations poses numerous management and operational problems.
Fortunately, the key decision makers in railways saw the tremendous potential of
IT in solving some of these problems and embarked upon a major computerization
initiative. Some of these are:
 All India centralized reservations system: One of the most successful
examples of computerization in the country; the computerized reservations
mean that anybody, even in a small town, can book tickets for any destination.
 Internet booking: An online ticketing facility has been launched by IRCTC
which can be accessed through the Website irctc.co.in. Currently, people
can avail these facilities from 758 locations in the country. Computerized
enquiries related to reservation like train schedule, passenger status and
trains between pairs of stations are also provided on this site. Anybody
with a credit card can book a ticket on any train through this Website. The
site levies a small service fee and delivers the ticket to the passenger’s
home through courier within 24 hours. Timetables, network maps, freight
information, fares and tariff are also available on the Indian Railways
homepage.
 Computerized unreserved ticketing: Nearly 12 million unreserved
passengers travel everyday on Indian Railways. For catering to this large
segment, a computerized system of ticketing has been launched recently.
Unreserved tickets can now be bought even from other locations, not only
from the boarding station, reducing long queues at booking offices and stations.
 Season tickets: A pilot project to issue monthly and quarterly season
tickets through ATM has been successfully launched in Mumbai. Another
pilot project to purchase tickets (including monthly and quarterly season tickets) using Smart Cards has also been launched.


 National train enquiry system: This system has been introduced for
providing better passenger information and enquiries. This system provides
real-time position of running trains using several output devices like interactive
voice response system (IVRS) at major railway stations. This project has
been put into action at 98 stations so far.
 Railnet: Railways have established their own intranet called ‘Railnet’. It
provides networking between railway board, zonal headquarters, divisional
headquarters, training centres, production units, etc. to facilitate inter and
intra-departmental communication and coordination.
Airlines
The air travel industry is one of the biggest users of information technology. There
is hardly any aspect of the airline business in which computer systems have not
been deployed for increasing revenues, reducing costs and enhancing customer
satisfaction.
It is now almost inconceivable to book a ticket or get a seat confirmed
across multiple sales counters (airline offices, travel agents, etc.) spread over
numerous cities, without using computerized databases and e-networking. Like
most other industries, the use of computerized systems in the air travel industry
started with the front office and sales desk with back office operations playing a
oracial role in delivering a quality experience to consumers. What typically started
as airlines Intranet systems have now blossomed into a vast Web-based online
systems which can be accessed by anybody from anywhere in the world.
Some of the interesting areas where IT has been used successfully are:
 Online ticket reservation through Internet: Today, most leading
airlines like United Airlines, Delta, British Airways, etc. sell tickets through
their Websites. You can book the ticket through the Internet, pay online
by giving your international credit card details and then collect the ticket
(on the day of journey) and boarding pass from e-ticket kiosks at the
airport by simply furnishing your booking reference details.
 Flight and seats availability: If you wish to travel from New Delhi to
New York and do not know what your flight options are, simply log
onto the airline site (or better still a travel site like ‘msn’, which offers
information and tickets from many airlines and can, therefore, give you
more options than a single airline’s Website), specify the cities of travel
origin and destination along with preferred journey dates and the database
would yield all the possible options. Once you have selected the flights,
you could even go a step further (possible in the case of a few airlines)
and book a specific seat number in that flight along with the choice of
meal.
 Last minute deals and auctions: A seat is a perishable commodity.
An unsold seat means a revenue opportunity lost forever. Therefore,
most airlines, including Indian Airlines (and some specialized ticket
auction sites like Razorfinish.com) have now started a facility on their
Website where potential customers can bid for last minute tickets in
online auctions. Cases of people buying a ticket worth $1000 for as
low as $100 are not uncommon. This is a win-win case by effective use
of IT—the passenger is happy with getting the ticket at a fraction of its
normal cost, and the airline is able to recover something from what
might otherwise have been an unsold seat.
All these facilities/opportunities would have been impossible without
integrated online computer systems.
Banking
In the 1960s and 1970s, the banking industry was losing the battle of providing
good customer services because of impossibly heavy workloads. All major banks
already had branches in most major locations and they simply had to recruit more
and more staff to cope with the increasing number of customers. The accepted
wisdom was that cost was the main basis for competition and so the banks made
strenuous efforts to reduce operational costs, kicking off the process by
computerizing customer accounts. Computerization did lead to cost reductions by
saving a lot of back office work, but banks still needed to employ a large number
of front office staff to deal with customers. To overcome this problem, one of the
UK banks adopted a radical solution. Why not get the customers to do the clerical
work? This idea—not unlike that behind the airline reservation systems—led to
the development of ATMs which allowed customers to take advantage of specific
banking services 24 × 7, without entering the bank. ATMs made it easy to deposit
and withdraw money, check balances, request statements, etc. and coupled with
the added advantage of round the clock availability, they not only reduced staff
workloads but also gave their customers a new experience of hassle-free banking.
The banking sector has come a long way since then. It is now one of the
largest users of information technology. Some of the areas where banks typically
use IT are:
 Back office computerization: Nowadays, almost all Indian and
international banks run on fully integrated and online systems where all
back office operations like accounts posting, reconciliation, clearing
house operations, etc. are completely automated.
 Front office computerization: All banks provide facilities like instant
account statement, making fixed deposits, electronic funds transfer, direct
debit facility, etc. to their customers. None of these would be possible
without the low transaction costs and efficiency offered by computerized
systems.
 Automated teller machines (ATMs): These computerized machines
enable customers to do their regular bank transactions (like depositing
and withdrawing money, ascertaining current account balance, etc.)
without visiting a bank branch. ATMs considerably reduce costs for
banks (employee cost, space cost, etc.) and provide better level of
service to customers (by enabling 24 hour banking access at numerous
locations).
 Internet banking: Most banks like HSBC, Standard Chartered,
HDFC, ICICI etc. have extremely user-friendly Websites where the
typical banking transactions (like making request for cash and cheque
pickup, cash delivery, generating account statements, requests for cheque
books and drafts, etc.) can be carried out online without visiting the
bank. This innovative use of IT means that, effectively, customers have
no need to physically visit the bank for most routine banking transactions,
which is an enormous convenience.
 Credit card operations: In a typical credit card operation, you purchase
an article or a service and give your credit card to the vendor/service
provider at the time of clearing the bill. The vendor (called ‘Merchant’ in
banking language) swipes your credit card on a point of sale (POS)
machine that instantly dials into the bank database to verify the authenticity
and credit worthiness of the card. If both are satisfactory (in other words
the transaction is covered by your credit card limit agreed between you
and the bank), the POS prints an authentication receipt that authorizes
the merchant to collect the transaction amount from the bank instead
from the customers. Credit cards obviate the necessity of having to carry
huge amounts of cash and an option of spending more than one’s current
cash status. On the other hand, banks earn money by charging a
transaction fee from the merchant and interest on the credit facility. This
entire operation is critically dependent on IT and would not have been
possible otherwise.
Insurance
Like banking, the insurance sector has also to contend with a lot of routine
paperwork—insurance policies, filed claims, survey or investigation reports,
payment receipts, etc. It is a perfect opportunity to use IT to reduce costs and
processing times.
According to Insurance Journal:
Eighty-eight per cent of insurers think that IT will become more important in
driving efficiencies and cost-reductions in future, according to new research
released by RebusIS, an insurance technology solutions provider. A further 55
per cent of respondents to the survey argued that IT is currently playing an
‘important’ role in driving efficiencies and cost-reductions, with 43 per cent
contesting that IT is ‘essential’ to business efficiency.
Typically, insurance companies use computerized databases to keep track
of all insurance policies, generating premium due statements, premium received
receipts, lodging claims for insurance recovery, etc. Basically, all kinds of transactions
are recorded and processed through computerized systems. This not only enables
insurance companies to provide quicker and more efficient service to their clients,
but it also allows them to minimize their risks and maximize profits by enabling
complex financial, economic and demographic analyses of their customers. Using
sophisticated computer programs, an insurance company can determine which
customer segments are growing the fastest, which are most profitable and which
are more risky than others.
Financial Accounting
Financial accounting was one of the first business functions for which software
applications were developed. The importance of financial accounting and
management for any business cannot be overemphasized, but the scale of
transactions, the repetitive and structured nature of the data and the sheer volumes
involved in the case of large corporates make for an ideal case for computerization.
Computerizing accounts also takes the drudgery out of bookkeeping, which means
that accountants can now concentrate more on analysing information rather than
devoting countless hours merely in filling out vouchers and updating registers and
ledgers.
Inventory Control
For any manufacturing firm, managing inventory is crucial. High inventory results
in money being locked up unnecessarily, thereby reducing liquidity and indirectly
profitability (if you offer immediate payment, most suppliers would be willing to
offer you better rates). On the other hand, lower inventory of finished goods may
lead to lost sales, or lower inventory of raw material may lead to disruption in
production line. Maintaining optimum stock levels is therefore key to operational efficiency.
Most large manufacturing units typically need hundreds (if not thousands)
of raw material components and produce many products. Managing optimal
inventory of such a large number of items is a difficult task. It is here that information
technology again plays a very useful role. Inventory management software provides
facility for specifying (and determining) the maximum, minimum and reorder levels
for each item, so that appropriate levels of inventory can be maintained keeping in
mind lead times and just-in-time (JIT) systems (if any) for component suppliers.
Basically, this is how a typical computerized inventory system works. A list
of all the inventory items is prepared along with the maximum, minimum, reorder
and current levels (quantity in hand as on a fixed date) for each item. This list is fed
into the inventory software. Thereafter, all incomings (materials purchased or
produced) and outgoings (sales or issues to production floor) are recorded through
the inventory package. Since the computer knows all the ins and outs for each
item, it can track the exact quantity in hand for each. The package also generates
reports for all the fresh stocks that need to be procured (based upon the levels
specified). A variety of other useful MIS reports like aging analysis, goods movement
analysis, slow and fast moving stock report, valuation report, etc. can also be
generated, which assist the storekeeper and accountants.
Some of the more sophisticated inventory packages (or inventory modules
of ERP packages like Oracle financials, Baan, SAP, etc.) automatically generate
purchase orders (as soon as the minimum level of any item is reached), provide
automatic posting of accounting entries (as soon as any purchase or sale is carried
out) and generate analytical reports which show the previous and future trends in
inventory consumption.
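A minimal sketch of the reorder logic described above is given below in Python; the item names, quantities and levels are invented, and real inventory or ERP packages use far richer data models.

# Sketch of reorder-level checking in a computerized inventory system.
# Item names, quantities and levels are illustrative only.
items = {
    "M8 bolt": {"on_hand": 120, "minimum": 200, "reorder_qty": 1000},
    "Gearbox": {"on_hand": 35, "minimum": 10, "reorder_qty": 20},
}

def receive(name, qty):      # incoming: purchase or production
    items[name]["on_hand"] += qty

def issue(name, qty):        # outgoing: sale or issue to the production floor
    items[name]["on_hand"] -= qty

def purchase_orders():
    """Items that have fallen to or below their minimum (reorder) level."""
    return [(name, item["reorder_qty"])
            for name, item in items.items()
            if item["on_hand"] <= item["minimum"]]

issue("M8 bolt", 50)
print(purchase_orders())     # [('M8 bolt', 1000)]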
Some interesting innovations in usage of IT for better inventory management
are as follows:
 Bar coding system: Bar coding is a technique which allows data to be
encoded in the form of a series of parallel and adjacent bars and spaces
which represent a string of characters. A bar code printer encodes any
data into these spaces and bars and then a bar code reader is used to
decode the bar codes by scanning a source of light across the bar code
and measuring the light’s intensity that is reflected back by the white
spaces. Bar coding provides an excellent and fast method for identifying
items, their batch numbers, expiry dates, etc. without having to manually
type or read the data (a simple illustrative encoding sketch appears after this list).

 Hand-held terminals (HHTs): HHTs are simple devices used to
communicate with any type of microprocessor-based device. The
standard input device is a keyboard (typically more akin to a calculator’s
than a computer’s), paired with a small LCD display for
output. HHTs are compact, simple and rugged devices designed for
outdoor applications like collecting the information about inventory from
large warehouses, recording movement of goods in and out, etc.
 Internet and Intranets: Many organizations (specially those following
‘just-in-time’ techniques) now have a system whereby the moment they
receive an order or a request for an item (which is not in stock or whose
stock is low), the inventory package automatically generates a purchase
or supply order electronically and mails it to the preferred supplier—all
this without any human intervention!
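The encoding sketch promised in the bar coding item above is given here. It is a toy mapping invented purely for illustration; it does not follow Code 39, EAN-13 or any real barcode symbology.

# Toy illustration of encoding characters as bars and spaces.
# '1' stands for a dark bar module, '0' for a white space module.
# The pattern table is invented and is not a real barcode standard.
PATTERNS = {"0": "11011", "1": "10101", "2": "11100", "3": "10011"}

def encode(data):
    # Join the 5-module pattern of each character with a 1-module space.
    return "0".join(PATTERNS[ch] for ch in data)

def decode(stripes):
    reverse = {v: k for k, v in PATTERNS.items()}
    # Each symbol occupies 5 modules followed by a 1-module separator.
    return "".join(reverse[stripes[i:i + 5]] for i in range(0, len(stripes), 6))

label = encode("312")
print(label)            # 10011010101011100
print(decode(label))    # '312'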
Hotel Management
The hotel industry is an integral part of the tourism industry, which is a vital source
of revenue and foreign exchange for a country’s economy. A vibrant hotel industry
means greater employment generation. However, since this industry relies on easy
and quick availability of information, the role of IT in its development and growth
cannot be overstressed. In fact, IT has revolutionized the hotel and tourism industry.
This is because of the instant availability of information about tourist spots,
hotel infrastructure, room availability, tariff details, online booking, etc. at the click
of a button. IT is playing a critical role in improving performance because of its
potential for creating customer relationships and improving the flow of information
between hotels and their customers.
There are numerous instances of the use of IT in the hotel industry. Some of these
include the following:
 Today’s hotel management software ensures that from the moment a guest
expresses interest in staying at the hotel until the time he or she checks out, all
transactions with the guest (room charges, food and laundry bills, business
centre and health centre bills, hiring, etc.) are recorded electronically,
making information available at the click of a button.
 Many leading hotels offer online booking facility for tourists and guests.
This makes it very easy for the tourist as he or she has beforehand
knowledge of room availability and charges. There are several Websites
wholly devoted to this. Microsoft’s MSN has a traveller’s section where
one can search for hotel accommodation on the basis of criteria like
city, location, budget, etc. A tourist, for example, can specify the city
and his budget. On the basis of this information, the search facility
throws up a complete list of hotels available. Moreover, the tourist can
even specify his or her preferred location. Once the hotel is identified,
booking can be made online using an internationally valid credit card.
 Most of the hotels have computerized their records. It is very easy to
know the details of room availability at a particular time. The information
about the occupant is also available instantly. This computerized system
typically integrates all the MIS functions of the hotel into a single system.
Cendant Corporation has successfully implemented this practice in its
chain of hotels. The Barbizon Hotel and Empire Hotel, New York, have
eliminated logbooks and standardized record keeping by the use of
customized software. Carlson Hospitality Worldwide has the most
efficient reservation system in the US. IMPAC Hotel Group has touch-screen
lobby kiosks for guest tracking. InterContinental Hotels & Resorts
use a global strategic marketing database. All these are examples of use
of IT in the hotel industry, which have significantly transformed
operations and profitability.


 Hotel information systems help users in accessing information on the
guest database and using the information for creating attractive one-to-
one confirmations of reservations, sales messages and e-mail marketing,
custom reports and e-mail comment cards for reinforcing guest
relationships. The Balsams Grand Resort Hotel has a comprehensive
guest history programme that it has used successfully for productive
purposes. Courtyard by Marriott has an intranet system by which it has
replaced manuals and printed material.
 Information technology is being increasingly used by international hotel
chains to formulate and align their corporate strategies. Marriott
International is a successful example of alignment of information
technology with corporate strategy.
Education
Teaching has traditionally been associated with classroom instructions on a
blackboard with the instructor dependent almost entirely on his or her oratory and
presentation skills for holding the attention of the class. From a student’s perspective,
she had to keep up with the instructor’s pace, which meant that the slower
(though not necessarily less intelligent) student was at a natural disadvantage.
Similarly, some students were more interested in a more in-depth study than the
others. Since access to information was neither easy nor inexpensive, these variables
had always posed a major barrier to learning.
Ever since the advent of information technology, the scenario has changed
dramatically. Today, the instructor has a repertoire of information technologies. To
make the lecture not only more interesting but also more informative, there are
advanced electronic teaching tools available. These vary from simple slide
presentations to full multimedia presentations which have video clippings, sound
effects, animation and graphics to explain even the most abstruse subjects in a
simple and easy-to-understand manner. As an example, a medical student does
not have to pore over boring textbooks to understand, say, the human
anatomy. Simple computer packages like ‘Body Works’ are available which explain
the same using photographs, images and graphics that make in-depth learning fun
rather than a chore. Moreover, learning is not only faster but is also retained longer
when text is supported by visuals and sound clips. Multimedia has transformed
both classroom as well as online (distance) and packaged (CDs, VCDs, DVDs,
etc.) education, in terms of both content as well as interactivity.
Some of the interesting developments in IT for the education sector
are as follows:
 Computer-based training (CBT): In most of the progressive institutes
today, classroom sessions are complemented by CBTs. CBT typically
comprises user-friendly software in which the course syllabus is broken
up into a series of interactive sessions. These sessions involve imparting
a slice of knowledge to the student and then quizzing him to reinforce his
understanding. Students have the option of going through these sessions
at a time most convenient to them and a pace best suited to them. CBTs
also provide an excellent medium for the student to learn by exploration
and discovery rather than by rote.
 Internet: Thanks to the Internet, any and every type of information is
available at the click of a mouse. No longer do students have to trudge long
distances to visit a library and spend valuable time plodding through
library catalogues to find the right information. Using a search engine,
one can easily access the desired information. Also, knowledge is no
longer restricted to the academic fraternity alone. Thanks to our
networked world (Intranet/Internet), information dissemination is faster
and more widespread.
 Distance learning: Information technology has also made distance
learning a reality. You need not be physically present in a business school
to do a management course from there. By innovative use of information
technology, educational institutes have reached out to students who would
otherwise never have been able to enroll with them.
 Computerization of administrative tasks: Most academic institutes
use computerized systems for student enrolment, fee management,
examination, administration, etc. Enrolment forms, for instance, are now
available on institutional Websites, and examination results are usually
available on the Internet. Some schools have also started collecting fees
through the Internet by using credit cards.
Information Kiosks
Traditionally, getting information from any large organization or government
department meant standing in a long queue and then having to deal with the changing
moods of the person sitting at the information desk. Not only did it take a lot of
time, but one also rarely got complete and authentic information in
one go, since even when the person responsible for giving information was available
and willing, the information provided was seldom complete.
Proactive organizations decided to use information technology to solve this
problem and provide a better level of service to customers and citizens. Information
kiosks are computer-based terminals that provide information of any kind. Typically,
these kiosks use a touch screen technology where the user does not have to type
through the keyboard or click using a mouse but simply touch hot spots on the
computer monitor to select the desired option. These kiosks also use sound
recorded in vernacular languages to make the content more user-friendly and to
reach out to the illiterate and literate alike.
The most popular applications of information kiosks can be seen at the
following places:
 Public access areas: Shopping malls, holiday resorts, cinema halls,
etc. use information kiosks with graphics and audio prompts that assist
customers in accessing information about the desired products, services
and frequently asked questions (FAQs) about availability, price,
attributes, etc.
 Public utilities: One of the early users of information kiosks were public
utility organizations (in the US and Europe). Most public utility companies
receive an enormous number of requests for information about their services,
lodging complaints, application status, etc. Instead of employing an army
of front office staff (and taking on the additional hassles of their constant
training and ensuring that their motivation is at high levels), most
organizations opted for information kiosks to provide hassle-free,
round-the-clock service to their customers. Information kiosks reduce
personnel cost as well as the need for vast office space and costly support
equipment.
 Web kiosks: Although the early usage of information kiosks was limited
largely to static information (brochures, technical information and
collaterals), information kiosks are being increasingly used to provide
database driven, online information. For instance, information kiosks at
the New York airport are linked to all the major hotels in the city and
any traveller can do an online booking after confirming room availability.

14.4 VIRUS

A virus is a system threat that typically attacks the boot sector and attempts to modify
the interrupt vectors and system files; an infected system unit, for example, often shows
unusually frequent I/O activity. A virus is detected by searching files and records for
known virus patterns (signatures). Viruses spread from machine to machine and across
networks in a number of ways, always trying to trigger and execute malicious code that
further spreads through the computer system. A boot-sector virus, for example, spreads
when the system is booted from an infected disk, while a macro virus spreads through
infected documents; other viruses travel over shared network drives and other media,
or through files and attachments downloaded from the Internet. The transmission of
viruses is possible in the following ways:
 If the system unit is booted from an infected disk.
 If an infected program is executed.
 If the virus arrives through common routes such as floppy disks and e-mail
attachments.
 If pirated software or untrusted shareware is installed on the system.
Viruses can be so stealthy that users may not even realize whether
their system unit is infected. Viruses typically hide themselves among regular
system files or e-mail attachments and camouflage themselves as standard files.
The following steps should preferably be taken if the system gets infected with viruses:
 The golden rule is to prevent further data destruction; staying calm
helps users avoid unnecessary stress and mistakes.
 The Internet facility and Local Area Network (LAN) utilities must be
disconnected for the time being.
 If the system does not boot properly, boot from a second (clean) operating
system or a Windows rescue disk. If the system does not recognize the hard
drive, the partition table may have been damaged by the virus; it can be
checked and recovered using the standard Windows ScanDisk program.
 A backup of all important and critical data must be taken at regular intervals
on external devices, such as Compact Discs (CDs), Universal Serial Bus
(USB) drives, floppy disks or flash memory.
 Antivirus software must be installed on the system and run at regular
intervals (for example, every week or at month end). A good antivirus program
disinfects or quarantines infected objects and is able to delete Trojans and
worms.
 The antivirus databases must be kept up to date with the latest updates;
preferably, the updates should not be downloaded on the infected computer.
 The mail client’s databases must also be scanned and disinfected, to ensure
that malicious programs are not reactivated when messages are sent from one
user to another across the network.
Firewall security features must also be installed on the system to keep out
malicious and foreign programs. Corrupted applications and files must be
cleaned or deleted; the user should then try to reinstall the required applications,
making sure that the corresponding software is not pirated.
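The pattern (signature) searching mentioned at the start of this section can be sketched as follows. The signature strings and the file name below are invented placeholders; real antivirus engines use far more sophisticated techniques (heuristics, emulation, behaviour monitoring).

# Naive illustration of signature-based scanning: look for known byte
# patterns inside a file. The signatures and file name are invented.
SIGNATURES = {
    "Example.BootSector.A": b"\x55\xaa\x90\xeb",
    "Example.Macro.B": b"AutoOpen Shell(",
}

def scan_file(path):
    """Return the names of any known signatures found in the file."""
    with open(path, "rb") as f:
        data = f.read()
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

hits = scan_file("suspect.doc")   # placeholder path
if hits:
    print("Infected:", hits)
else:
    print("No known signature found")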

14.5 CLOUD SYSTEM

Many experts from industry and academia have expressed their views on the
Cloud and its computing features. Some of these are given below:
Buyya et al. have given the opinion about the cloud as follows: “Cloud is a
parallel and distributed computing system consisting of a collection of inter-
connected and virtualized computers that are dynamically provisioned and
presented as one or more unified computing resources based on service-level
agreements (SLA) established through negotiation between the service
provider and consumers.”
Self-Instructional
276 Material
McKinsey and Co. have stated that “Clouds are hardware-based services
offering compute, network, and storage capacity where: Hardware
management is highly abstracted from the buyer, buyers incur infrastructure
costs as variable OPEX, and infrastructure capacity is highly elastic.”
According to Vaquero et al. “clouds are a large pool of easily usable and
accessible virtualized resources (such as hardware, development platforms
and/or services). These resources can be dynamically reconfigured to adjust
to a variable load (scale), allowing also for an optimum resource utilization.
This pool of resources is typically exploited by a pay-per-use model in which
guarantees are offered by the Infrastructure Provider by means of customized
Service Level Agreements.”
The National Institute of Standards and Technology (NIST) defines cloud
computing as “ . . . a pay-per-use model for enabling available, convenient,
on-demand network access to a shared pool of configurable computing
resources (e.g. networks, servers, storage, applications, services) that can
be rapidly provisioned and released with minimal management effort or
service provider interaction.”
Armbrust et al. have termed cloud as the “data center hardware and
software that provide services”. The basic aim of Cloud Computing is to combine
the physical and computing resources and to distribute the computing tasks to
multiple distributed computers.
In this advanced era, a user is not only able to use a particular Web-based
application but may also participate actively in its computational procedure
by adopting, demanding or paying for resources on a per-use basis.

14.6 COMPUTING PLATFORMS AND TECHNOLOGIES

There are various platforms and technologies available for cloud computing. They
come under one of three major models: Infrastructure-as-a-Service, Platform-
as-a-Service, and Software-as-a-Service. The development of a cloud computing
application happens by properly identifying and using platforms and frameworks
that provide different types of services, from the infrastructure to customizable
applications serving specific purposes. Some of the cloud computing platforms,
and the technologies they provide for seamless development, deployment and
maintenance of cloud applications, are discussed below.
Amazon Web Services (AWS)
AWS provides cloud IaaS services for an entire computing stack right from virtual
computational base, data and application storage, to networking. AWS is a solution
for a complete computing package as a service. AWS is popularly known for its
compute and storage-on-demand services, namely Elastic Compute Cloud (EC2)
and Simple Storage Service (S3).
EC2 is an IaaS offering: it provides consumers with customizable virtual hardware
that they can utilise as a base infrastructure for deploying computing systems of
their choice on the cloud by making selections from a large variety of virtual
hardware configurations, Graphics Processing Units and cluster instances. EC2
instances can be deployed in two ways: either by using the AWS console, a
comprehensive Web portal for accessing AWS services, or by using the Web
services API available for several programming languages. EC2 can be used to
save a specific running instance as an image, thereby permitting users to create
their own templates for deploying systems. S3 can be used to store these templates,
which delivers on-demand persistent storage. S3 is organized in terms of
buckets; buckets are containers that hold objects stored in binary form, along with
attributes that further describe them. Objects of any size can be stored –
anything from simple files to complete disk images – and these objects
are accessible from anywhere. Besides EC2 and S3, AWS provides a wide
range of services that can be used by a client to build a complete virtual computing
system, with additional services like robust networking support, database support
(relational and non-relational), etc.
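As an illustration of driving EC2 and S3 through a Web services API, the sketch below uses the boto3 Python SDK (an assumption of this sketch; the unit does not prescribe a particular client library). The AMI ID, bucket name and object key are placeholders, and properly configured AWS credentials are assumed.

# Sketch: launch a virtual machine on EC2 and store an object in S3
# using the boto3 SDK. The identifiers below are placeholders, and valid
# AWS credentials are assumed to be configured in the environment.
import boto3

ec2 = boto3.client("ec2")
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder machine image (template)
    InstanceType="t2.micro",           # one of many virtual hardware configurations
    MinCount=1,
    MaxCount=1,
)

s3 = boto3.client("s3")
s3.put_object(
    Bucket="example-bucket",            # a bucket is a container for objects
    Key="templates/disk-image.img",     # objects can be files or complete disk images
    Body=b"...binary contents...",
)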
Google AppEngine
Google AppEngine, a proprietary cloud implementation of Google, is basically a
scalable runtime environment that is used for executing Web applications. It
dynamically utilises the global computing infrastructure of Google to cope with the
varying needs of its customers. AppEngine provides both a secure execution
environment and a collection of services (like in-memory caching, scalable data
store, job queues, messaging, and cron tasks, etc.) that ease the development of
scalable Web applications. Developers are provided with the AppEngine software
development kit (SDK) to build and test applications on their own machines.
Once an application is developed completely, developers can then easily migrate
their application to AppEngine, set quotas to contain the costs generated, and
make the application available to the world. The languages currently supported
by the AppEngine SDK are Python, Java, and Go.
Microsoft Azure
Microsoft’s proprietary cloud implementation is MS Azure. It is a cloud operating
system and a platform that provides a scalable runtime environment for developing
and hosting Web applications. MS Azure supports different roles to identify
distribution units for applications and embody the application’s logic. As of now, there
are three types of roles: the Web role, the worker role, and the virtual machine role. The Web role
is meant to host a Web application. The worker role is to act as a container of
applications and perform workload processing as well. The virtual machine role is
to provide a virtual environment which may even incorporate an operating system.
Besides role implementations, MS Azure also provides some extra services like
data storage, networking, caching, etc. to support application execution.
Apache Hadoop
Apache Hadoop is an open-source framework whose early development was largely
funded and supported by Yahoo. Hadoop is basically used to process large data sets
and is regarded as an implementation of the MapReduce programming model.
MapReduce is a programming model developed by Google that performs two basic
operations for data processing: map and reduce. The map function transforms and
synthesizes the input data provided by the user, while reduce aggregates the output
obtained from the map operations performed over the large data sets. Hadoop eases
the task by providing the runtime environment, wherein developers just have to
provide the input data and specify the map and reduce functions to be executed,
without worrying about the internal composition of the algorithms implemented there.
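The map and reduce operations can be illustrated with a word-count example written in plain Python. This mimics the programming model only; Hadoop's own APIs (Java and streaming) differ in detail.

# Word count written in the MapReduce style, in plain Python.
from collections import defaultdict

def map_phase(document):
    # map: emit an intermediate (key, value) pair for every word
    for word in document.split():
        yield (word.lower(), 1)

def reduce_phase(pairs):
    # reduce: aggregate all the values emitted for the same key
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

documents = ["the cloud stores data", "the cloud scales on demand"]
intermediate = [pair for doc in documents for pair in map_phase(doc)]
print(reduce_phase(intermediate))
# {'the': 2, 'cloud': 2, 'stores': 1, 'data': 1, 'scales': 1, 'on': 1, 'demand': 1}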
Force.com and Salesforce.com
Force.com is a cloud computing platform for developing social enterprise
applications. The platform is the basis for SalesForce.com, a Software-as-a-Service
solution for customer relationship management. Force.com allows developers to create
applications by composing ready-to-use blocks; a complete set of components
supporting all the activities of an enterprise is available. It is also possible to
develop your own components or integrate those available in AppExchange into
your applications. The platform provides complete support for developing
applications, from the design of the data layout to the definition of business rules
and workflows and the definition of the user interface. The Force.com platform is
completely hosted on the cloud and provides complete access to its functionalities
and those implemented in the hosted applications through Web services technologies.
Manjrasoft Aneka
Manjrasoft Aneka is a cloud application platform meant for rapid development
and deployment of scalable applications on various types of clouds. It supports a
number of programming abstractions and runtime environment for developing
applications that can be deployed on heterogeneous hardware be it clusters,
networked desktop computers, and/or other cloud resources. Developers can
choose from different abstractions to design their applications: tasks, distributed threads,
and map-reduce. These applications are then executed on the distributed service-
oriented runtime environment, which can dynamically integrate additional resource
on demand. The service-oriented architecture of the runtime provides a great degree
of flexibility and also simplifies the integration of new features, such as abstraction
of a new programming model and associated execution management environment.
Apart from these, there are services that manage almost all the activities happening
during runtime, from scheduling, execution, accounting, billing and storage to checking
the quality of service.

Principles of Parallel and Distributed Computing


The fundamental computing models that have been popular so far are the
sequential and parallel (and/or distributed) computing models. The
sequential computing era commenced in the early 1940s, followed about a decade
later by today’s parallel and distributed computing era, which has
transformed technological implementations entirely. During these eras, the entire
computing world has witnessed the evolution of architectures, compilers,
applications, and problem-solving environments. In its nascent stage, the computing
era emphasised the development of hardware architectures and hence led to
the evolution of system software, especially operating systems and
compilers. This software supported the management of other software and the
development of other useful applications.
The computing era commenced with modular development in hardware
architectures, which created the need for system software, especially compilers
and operating systems, and this in turn led to the development of business and
management applications. Since application development and system development
are our main concern, the design and development of problem-solving
environments was needed to facilitate and empower engineers. This is when the
paradigm characterizing computing achieved maturity and became mainstream.
Moreover, every aspect of this era underwent a three-phase process: research
and development (R&D), commercialization, and commoditization.
Parallel vs. Distributed Computing
In computer jargon, parallel and distributed computing may appear to mean the same
thing, but they differ in the way they are defined. Though the two terms
are technically different, they are often used interchangeably. The term
parallel here refers to a tightly coupled system, whereas distributed refers to a
broader definition and a wider class of systems, including tightly
coupled systems that are not necessarily parallel.
Specifically, the term parallel computing can be defined as a model in which
the entire computational task is divided among several processors sharing the
same memory in such a way that no two processors interfere with each other during
the execution of the task. The architecture of a parallel computing system is
homogeneous by composition of components: each processor used is of the same
kind and even has the same capability as the other processors being coupled with
shared memory having a single address space that is accessible to all the processors
in the system. Parallel programs are split into several small units of execution that
can be finally allocated to different processors in such a way that they can
communicate with each other by means of the shared memory, without hampering
each other, and hence perform the work assigned.
The term distributed computing can be defined as the composition of
an architecture or system that allows the splitting of the entire computation into multiple
small units that are executed concurrently on different computing elements, whether
these are the cores within a processor, processors on the same computer, or
processors on different nodes in a system. The term distributed often implies that
the locations of the computing elements are not the same and such elements might
be heterogeneous in terms of hardware and software features. Classic examples
of distributed computing systems are computing grids or Internet computing
systems, which combine together the biggest variety of architectures, systems,
and applications in the world.
Elements of Parallel Computing
The core element for parallel processing is the CPU. Depending upon the number
of instruction and data streams that can be processed simultaneously,
computing systems are classified into the following four categories:
• Single-instruction, single-data (SISD) systems
• Single-instruction, multiple-data (SIMD) systems
• Multiple-instruction, single-data (MISD) systems
• Multiple-instruction, multiple-data (MIMD) systems
An SISD computing system is a single processor machine that executes a
single instruction, which operates on a single data stream. SISD machines are also
popularly known as sequential computers, as they process a single instruction at
a time and require primary storage for storing the instructions and data.
An SIMD computing system is a multiprocessor machine and can execute
the same instruction on all the CPUs, with each operating on a different data stream.
SIMD machines are well suited to scientific computing since they involve lots of
vector and matrix operations. Representative SIMD systems are Cray’s vector
processing machines and Thinking Machines’ Connection Machine (CM) series.
An MISD machine is a multiprocessor machine that executes different
instructions on different PEs (processing elements) but all on the same data set. These
machines are of little practical use; hence, only a few of them have been built, and
they are not commercially available.
An MIMD system is a multiprocessor machine that is capable of executing
multiple instructions on multiple data sets where each PE in the MIMD model has
separate instruction and data streams. PEs in MIMD machines work
asynchronously, which is not the case with SIMD and MISD machines.

Approaches to Parallel Computing
There are many parallel programming approaches available. Some of them are
discussed below:
 Data parallelism: In the case of data parallelism, the divide-and-conquer
technique is implemented to divide the data set into multiple sets, and each
data set is processed on different PEs using the same instruction. This
approach is best suited to processing on SIMD machines.
 Process parallelism: In the case of process parallelism, a given operation
has multiple but unique/distinct activities that can be processed on multiple
processors.
 Farmer-and-worker model: In the farmer-and-worker model, jobs are
distributed in a balanced manner. One processor acts as the master and all the
remaining PEs are designated as slaves. The master
assigns jobs to the slave PEs and, on completion, the slave PEs inform
the master, which then collects the final results.
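A minimal sketch of the data-parallel (and, loosely, farmer-and-worker) idea using Python's multiprocessing module is shown below; the data and the work function are invented for illustration.

# Data parallelism: the same function is applied to different portions of
# the data on different worker processes. The Pool object plays the role
# of the farmer (master); its worker processes are the slaves.
from multiprocessing import Pool

def square(n):
    return n * n            # the "same instruction" applied to each data item

if __name__ == "__main__":
    data = list(range(16))
    with Pool(processes=4) as pool:       # four worker processes
        results = pool.map(square, data)  # work is divided, results collected
    print(results)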
Levels of Parallelism
Levels of parallelism are defined on the basis of lumps of code (grain size). A
grain, or lump of code, is a possible candidate for parallelism. The categories of
code granularity for parallelism can be large (task level), medium (control level),
fine (data level), and very fine grain (multiple-instruction issue). But whatever
the size of the grain, all have a common goal: to increase processor efficiency
by hiding latency.
Elements of Distributed Computing
Distributed computing deals with the study and application of the models,
architectures, and algorithms used for building and managing distributed systems.
Tanenbaum et al. define distributed computing as follows: ‘A distributed system is a
collection of independent computers that appears to its users as a single coherent
system.’ Since distributed systems are composed of more than one computer that
collaborate together, it is necessary to provide some sort of data and information
exchange between them, which generally occurs through the network.
Coulouris et al. define a distributed system as: ‘A distributed system is
one in which components located at networked computers communicate and
coordinate their actions only by passing messages.’ As specified in this definition,
the components of a distributed system communicate with each other with some
sort of message passing mechanism. This is a term that comprehends several
communication models.
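Message passing between independent processes can be sketched with Python's multiprocessing queues; here the queues stand in for the network channel that real distributed components would use, and the request text is invented.

# Two independent processes that coordinate only by passing messages.
# The queues stand in for a network channel between distributed nodes.
from multiprocessing import Process, Queue

def worker(inbox, outbox):
    request = inbox.get()                  # receive a message
    outbox.put("processed: " + request)    # reply with another message

if __name__ == "__main__":
    to_worker, from_worker = Queue(), Queue()
    p = Process(target=worker, args=(to_worker, from_worker))
    p.start()
    to_worker.put("balance query for account 1234")   # invented request
    print(from_worker.get())               # 'processed: balance query ...'
    p.join()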

Fig. 14.1 Components of a Distributed System

14.7 CLOUD COMPUTING ARCHITECTURE AND INFRASTRUCTURE

Cloud computing can be broadly defined as a utility-oriented and Internet-centric
way of delivering IT services on demand. These services cover the entire computing
stack, from hardware infrastructure packaged as a set of virtual machines to
software services such as development platforms and distributed applications.
This definition captures the most important and fundamental aspects of cloud
computing. We shall now discuss a reference model that helps in the categorization
of cloud technologies, applications, and services.

Fig. 14.2 Cloud Computing

Cloud computing architecture comprises various components and
subcomponents that in one way or the other participate in providing cloud computing
services. These components can be a front-end platform, back-end platforms, a
cloud-based delivery, or simply a network (Internet, Intranet, Intercloud, etc.).
The front end is always the user interface, i.e., what the client (user) sees, and can be
any of the following: a fat client, thin client, mobile device, etc., whereas the back
end is the cloud itself, the main component of the system, which can be a server or
a data storage facility. The front end consists of the client’s computer and the application
required to access the cloud, and the back end generally possesses the basic cloud
computing services like software, systems, platforms, servers and data storage,
etc. Intermediate tasks like monitoring of traffic, system administration and client
demands are administered by a central server, which acts according to certain rules,
i.e., protocols, and uses special software called middleware. The main task
of the middleware application is to take care of communication between the
networked computers. Combined, these components make up the cloud computing
architecture.

Fig. 14.3 Cloud Computing Architecture

A cloud client generally consists of computer hardware and/or computer
software which depends on cloud computing for application delivery, or one that
has been purposely designed for delivery of various cloud services.
A cloud application can deliver Software as a Service (SaaS) over the
network, thereby eliminating any necessity to install or run the application on the
client’s system. Examples of the key providers are SalesForce.com (SFDC),
NetSuite, Oracle, IBM, Microsoft, and Google. Amongst these, Google Apps is
the most popular and most widely used SaaS.
A cloud application can also deliver Platform as a Service (PaaS). It provides
a computing platform using the cloud infrastructure and frees a client from the
worry of buying and installing the software and hardware required for it. Through
this service, the clients are provided with all the systems and run time environments
required for developing, testing, deploying and hosting of web applications. Key
examples are Google AppEngine (GAE) and Microsoft’s Azure.
A cloud application can also deliver Infrastructure as a Service (IaaS). It
provides the client with the required infrastructure as a service. The client does not
have to bother about purchasing any servers, data centre or network resources
that are specifically required in development or deployment. Moreover, the client
needs to pay only for what is used and only for the time it is used. This is very
economical for the client, who also does not have to take care of infrastructure
maintenance beyond paying for its use. As a result, customers can achieve a
much faster service delivery with less cost. Examples are GoGrid, Flexiscale,
Layered Technologies, Joyent and Rackspace.
Cloud Reference Model
Cloud computing provides hardware and software infrastructure, development
platforms and applications as services that are consumed as a utility and are delivered
to the consumer over the Internet. In computation, all of the above are
interrelated, with the application being the basic entity. An application is developed
over certain platforms utilising the hardware and software infrastructure. A stack
can be seen if all these are arranged according to their utilization. So, there should
be some model that gives a glimpse of such an organisation as a layered view
covering the entire stack, as given in Figure 14.4, from hardware to software
systems.

Fig. 14.4 Hardware to software systems

Cloud resources are harnessed to provide on-demand services. Often, this
layer is implemented using a data centre which contains hundreds or thousands
of nodes that are stacked together. Cloud infrastructure can be heterogeneous in
nature as it might consist of a variety of resources, such as clusters of PCs, database
systems and other storage services etc. The physical infrastructure is managed by
the core middleware and its objective is to provide an appropriate runtime
environment for applications exploiting the resources. At the bottom of the stack,
virtualization technologies can be seen to come into play to provide runtime
environment customization, application isolation, sandboxing, and quality of service.
At this level, the commonest implementation is of hardware virtualization.
Hypervisors manage the pool of resources and expose the distributed infrastructure
as a collection of virtual machines. By using virtual machine technology, it is possible
to finely partition the hardware resources such as CPU and memory and to virtualize
specific devices, thus meeting the requirements of users and applications. This
solution is generally paired with storage and network virtualization strategies, which
allow the infrastructure to be completely virtualized and controlled. According to
the specific service offered to end users, other virtualization techniques can be
used; for example, programming-level virtualization helps in creating a portable
runtime environment where applications can be run and controlled. This scenario
generally implies that applications hosted in the cloud be developed with a specific
technology or a programming language, such as Java, .NET, or Python.
Infrastructure management is the key function of core middleware, which supports
capabilities such as negotiation of the quality of service, admission control, execution
management and monitoring, accounting, and billing. The combination of cloud
hosting platforms and resources is generally classified as Infrastructure-as-a-Service
(IaaS) solution. IaaS solutions are suitable for designing the system infrastructure
but provide limited services to build applications. Such service is provided by
cloud programming environments and tools, which form a new layer for offering
users a development platform for applications. The range of tools includes Web-
based interfaces, command-line tools, and frameworks for concurrent and
distributed programming. In this scenario, users develop their applications
specifically for the cloud by using the API exposed at the user-level middleware.
For this reason, this approach is also known as Platform-as-a-Service (PaaS)
because the service offered to the user is a development platform rather than an
infrastructure. PaaS solutions generally include the infrastructure as well, which is
bundled as part of the service provided to users. In the case of Pure PaaS, only
the user-level middleware is offered, and it has to be complemented with a virtual
or physical infrastructure. The top layer of the reference model depicted in the
figure above contains services delivered at the application level. These are mostly
referred to as Software-as-a-Service (SaaS). In most cases these are Web-based
applications that rely on the cloud to provide service to end users. As a reference
model, it is then expected to have an adaptive management layer in charge of
elastically scaling on demand. SaaS implementations should feature such behaviour
automatically, whereas PaaS and IaaS generally provide this functionality as a
part of the API exposed to users. The reference model described in the figure above
also introduces the concept of everything as a Service (XaaS, which has been
separately discussed under the types of cloud section below). This is one of the
most important elements of cloud computing: cloud services from different providers
can be combined to provide a completely integrated solution covering all the
computing stack of a system.

14.8 TYPES OF CLOUD AND DEPLOYMENT MODELS

Knowing all about cloud computing is a somewhat difficult task, since with new
technological evolution one can never predict what will be provided as a service
and what the cloud base will be. But, to start with, the realm of
cloud computing can be distinguished into two distinct sets of models:
Deployment Models
This is a set of cloud models based on the location and management of the
cloud’s infrastructure. There are four deployment models, as discussed below:
1. Public clouds: Public clouds are the very first cloud service offering. They
are boundary-less clouds, as the services offered over a public cloud are
easily accessible and available to anyone, from anywhere, and at any time
via the Internet. Structurally, public clouds can be referred to as distributed
systems composed of one or more connected datacentres, on top of which
certain computational services are offered by the cloud. To access the
services offered, any customer can easily sign up with the cloud provider,
enter login credentials and billing details, and use the services offered. Public
clouds are meant to support a large number of users. They can scale
horizontally or vertically as demand requires and can sustain peak loads.
They offer solutions for minimizing IT infrastructure costs and serve as the
best possible option for handling peak loads on the local infrastructure.
2. Private Cloud: Private clouds are virtual distributed systems that rely on a
private infrastructure and provide internal users with dynamic provisioning
of computing resources. Instead of a pay-as-you-go model as in public
clouds, there could be other schemes in place, taking into account the usage
of the cloud and proportionally billing the different departments or sections
of an enterprise. Private clouds have the advantage of keeping the core
business operations in-house by relying on the existing IT infrastructure and
reducing the burden of maintaining it once the cloud has been set up. In this
scenario, security concerns are less critical, since sensitive information does
not flow out of the private infrastructure. Moreover, existing IT resources
can be better utilized because the private cloud can provide services to a
different range of users.
3. Hybrid Clouds: Public clouds are typically meant for serving the needs
of a large pool of customers, as these accommodate large software and
hardware infrastructures. But these clouds have a severe drawback –
security threats and administrative problems. If you want a large infrastructure
to use and are cost conscious but not bothered about security issues,
then public clouds are for you.
Private clouds are primarily meant for those who wish to securely keep the
processing of information within an enterprise’s premises or want to utilize
the existing hardware and software infrastructure. But here there is also a
problem: you can secure the information and utilize your resources, but when
you have to address an increase in demand for resources during
peak hours, the private cloud alone will not suffice. You will require the
public cloud. Hence, you will have to create something with a little of both the
public and the private clouds, taking advantage of the best of the private
and public worlds. This led to the development of hybrid clouds.
Hybrid clouds contain attributes of both the public and the private clouds.
Like private clouds, these hybrid clouds allow enterprises to exploit existing
IT infrastructures, maintain sensitive information within the premises, and
like public clouds sustain the peak load by growing and shrinking the external
resources and releasing them when they’re no longer required. So, a hybrid
cloud can be defined as a heterogeneous distributed system resulting from
a private cloud that integrates additional services or resources from one or
more public clouds. Due to this, they are also called heterogeneous clouds.
4. Community Clouds: The IT industry is too dynamic for its requirements to
be met by a single cloud implementation. So, community clouds are
required to sustain the needs of an industry, a community, etc. As a community
is created with a large number of people with diverse skills, interest, etc., a
community cloud can be created by the integration of varied services of
different clouds to address the needs of an industry, a community, and the
business sector. The National Institute of Standards and Technologies
(NIST) characterizes community clouds as follows: The infrastructure is
shared by several organizations and supports a specific community that has
shared concerns (e.g., mission, security requirements, policy, and compliance
considerations). It may be managed by the organizations or a third party
and may exist on premise or off premise.
Figure 14.5 provides a general view of the usage scenario of community
clouds, together with reference architecture. A community cloud is
different from the public cloud in the sense that it serves just the
community of users (government bodies, industries, or even simple users)
with more or less similar needs, unlike public clouds that serve a
multitude of users with varying needs. Community clouds are also
different from private clouds, as a community cloud is implemented over
multiple administrative domains (government bodies, private enterprises,
research organizations, and even public virtual infrastructure providers)
unlike the private cloud where the services are generally delivered within
the institution that owns the cloud.

Fig. 14.5 Community Cloud



Check Your Progress


1. Define firewall.
2. What are the different types of deployment models? NOTES

14.9 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS

1. A firewall can be described as a barrier that protects your system from being
accessed by offensive Websites and potential hackers. A firewall simply filters
information coming from some other sources, such as suspicious sites or
networks, to your network.
2. Public, private, hybrid and community clouds are the types of deployment
models.

14.10 SUMMARY

 A firewall can be described as a barrier that protects your system from being
accessed by offensive Websites and potential hackers.
 Each machine on the Internet has a unique address called an IP address.
 Encryption is a way to secure personal information and ensure privacy on
the Internet. All the personal information on the Internet can be made available
to anyone provided that the person has the required tools and approaches
to get that information.
 Asymmetric encryption is also called public-key encryption. The system uses two keys,
a public key known to everyone and a private or secret key known only to
the recipient of the message.
 In the symmetric type of encryption, the same key is used to encrypt and decrypt the
message. It requires all the communicating parties to share a common key.
 Viruses can be so stealthy that users may not even realize whether their
system unit is infected. Viruses typically hide themselves among regular
system files or e-mail attachments and camouflage themselves as standard files.
 Cloud is a parallel and distributed computing system consisting of a collection
of interconnected and virtualized computers that are dynamically provisioned
and presented as one or more unified computing resources based on service-
level agreements (SLA) established through negotiation between the service
provider and consumers.

 Public, private, hybrid and community clouds are the types of deployment
models.

14.11 KEY WORDS

 Firewall: It is a network security system that monitors and controls incoming
and outgoing network traffic based on predetermined security rules.
 Encryption: It is the process of encoding a message or information in such
a way that only authorized parties can access it and those who are not
authorized cannot.

14.12 SELF ASSESSMENT QUESTIONS AND EXERCISES

Short Answer Questions


1. What are the different types of firewall?
2. Discuss the types of encryption.
3. Define cloud.
Long Answer Questions
1. Explain the uses of the Internet.
2. Describe the technologies of computing.
3. Explain the different types of deployment models.

14.13 FURTHER READINGS

Bhatt, Pramod Chandra P. 2003. An Introduction to Operating Systems—Concepts and Practice. New Delhi: PHI.
Bhattacharjee, Satyapriya. 2001. A Textbook of Client/Server Computing. New
Delhi: Dominant Publishers and Distributers.
Hamacher, V.C., Vranesic, Z.G. and Zaky, S.G. 2002. Computer Organization,
5th edition. New York: McGraw-Hill International Edition.
Mano, M. Morris. 1993. Computer System Architecture, 3rd edition. New Jersey:
Prentice-Hall Inc.
Nutt, Gary. 2006. Operating Systems. New Delhi: Pearson Education.
