
1. Components of computer

The internal architectural design of computers differs from one system model to another. However, the
basic organization remains the same for all computer systems. The following five units (also called “The
functional units”) correspond to the five basic operations performed by all computer systems.

Input Unit: Data and instructions must enter the computer system before any computation can be
performed on the supplied data. The input unit that links the external environment with the computer
system performs this task. Data and instructions enter input units in forms that depend upon the particular
device used. For example, data is entered from a keyboard in a manner similar to typing, and this differs
from the way in which data is entered through a mouse, which is another type of input device. However,
regardless of the form in which they receive their inputs, all input devices must provide a computer with
data that are transformed into the binary codes that the primary memory of the computer is designed to
accept. This transformation is accomplished by units called input interfaces. Input interfaces are
designed to match the unique physical or electrical characteristics of input devices to the requirements of
the computer system.
In short, an input unit performs the following functions:
1. It accepts (or reads) the list of instructions and data from the outside world.
2. It converts these instructions and data into a computer-acceptable format.
3. It supplies the converted instructions and data to the computer system for further processing.

Output Unit: The job of an output unit is just the reverse of that of an input unit. It supplies information
and results of computation to the outside world. Thus it links the computer with the external environment.
As computers work with binary code, the results produced are also in the binary form. Hence, before
supplying the results to the outside world, they must be converted to a human-acceptable (readable) form. This
task is accomplished by units called output interfaces.
In short, the following functions are performed by an output unit.
1. It accepts the results produced by the computer which are in coded form and hence cannot be
easily understood by us.
2. It converts these coded results to human acceptable (readable) form.
3. It supplies the converted results to the outside world.

Storage Unit: The data and instructions that are entered into the computer system through input units
have to be stored inside the computer before the actual processing starts. Similarly, the results produced
by the computer after processing must also be kept somewhere inside the computer system before being
passed on to the output units. Moreover, the intermediate results produced by the computer must also be
preserved for ongoing processing. The Storage Unit or the primary / main storage of a computer
system is designed to do all these things. It provides space for storing data and instructions, space for
intermediate results and also space for the final results.
In short, the specific functions of the storage unit are to store:
1. All the data to be processed and the instructions required for processing (received from input
devices).
2. Intermediate results of processing.
3. Final results of processing before these results are released to an output device.

Central Processing Unit (CPU): The main unit inside the computer is the CPU. This unit is responsible
for all events inside the computer. It controls all internal and external devices and performs arithmetic and
logical operations. The control unit and the arithmetic and logic unit of a computer system are jointly
known as the Central Processing Unit (CPU). The CPU is the brain of any computer system. In a human
body, all major decisions are taken by the brain and the other parts of the body function as directed by the
brain. Similarly, in a computer system, all major calculations and comparisons are made inside the CPU
and the CPU is also responsible for activating and controlling the operations of other units of a computer
system.

Arithmetic and Logic Unit (ALU): The arithmetic and logic unit (ALU) of a computer system is the
place where the actual execution of the instructions takes place during processing operations. All
calculations are performed and all comparisons (decisions) are made in the ALU. The data and
instructions, stored in the primary storage prior to processing are transferred as and when needed to the
ALU where processing takes place. No processing is done in the primary storage unit. Intermediate
results generated in the ALU are temporarily transferred back to the primary storage until needed at a
later time. Data may thus move from primary storage to the ALU and back again many times
before the processing is over. After the completion of processing, the final results which are stored in the
storage unit are released to an output device.
The arithmetic and logic unit (ALU) is the part where actual computations take place. It consists of
circuits that perform arithmetic operations (e.g. addition, subtraction, multiplication, division) on data
received from memory, and circuits capable of comparing numbers (less than, equal to, or greater than).

Control Unit: How does the input device know when it is time to feed data into the storage unit? How
does the ALU know what should be done with the data once they are received? And how is it that only the
final results are sent to the output devices and not the intermediate results? All this is possible because of
the control unit of the computer system. By selecting, interpreting, and seeing to the execution of the
program instructions, the control unit is able to maintain order and direct the operation of the entire
system. Although it does not perform any actual processing on the data, the control unit acts as a central
nervous system for the other components of the computer. It manages and coordinates the entire computer
system. It obtains instructions from the program stored in main memory, interprets the instructions, and
issues signals that cause other units of the system to execute them.
The control unit directs and controls the activities of the internal and external devices. It interprets the
instructions fetched into the computer, determines what data, if any, are needed and where they are stored,
decides where to store the results of the operation, and sends control signals to the devices involved in the execution
of the instructions.

2. Operating system

An Operating System acts as a communication bridge (interface) between the user and computer
hardware. The purpose of an operating system is to provide a platform on which a user can execute
programs in a convenient and efficient manner.
An operating system is a piece of software that manages the allocation of computer hardware. The
coordination of the hardware must be appropriate to ensure the correct working of the computer system
and to prevent user programs from interfering with the proper working of the system.
The main task an operating system carries out is the allocation of resources and services, such as
allocation of: memory, devices, processors and information. The operating system also includes programs
to manage these resources, such as a traffic controller, a scheduler, memory management module, I/O
programs, and a file system.

Important functions of an operating System:

a) Security – The operating system uses password protection and similar techniques to protect user data.
It also prevents unauthorized access to programs and user data.

b) Control over system performance – The operating system monitors overall system health to help
improve performance. It records the response time between service requests and system responses to
obtain a complete view of system health, which provides important information needed to troubleshoot
problems.

c) Job accounting – The operating system keeps track of the time and resources used by various tasks and
users. This information can be used to track resource usage for a particular user or group of users.

d) Error detecting aids – The operating system constantly monitors the system to detect errors and
avoid malfunctioning of the computer system.

e) Coordination between other software and users – Operating systems also coordinate and
assign interpreters, compilers, assemblers and other software to the various users of the computer
systems.

f) Memory Management – The operating system manages the primary (main) memory. Main memory is
made up of a large array of bytes or words, where each byte or word is assigned a certain address. Main
memory is fast storage that can be accessed directly by the CPU. For a program to be executed, it must
first be loaded into main memory. An operating system performs the following activities for memory
management:
It keeps track of primary memory, i.e., which bytes of memory are used by which user program, which
memory addresses have already been allocated and which have not yet been used. In multiprogramming,
the OS decides the order in which processes are granted access to memory, and for how long. It allocates
memory to a process when the process requests it and deallocates the memory when the process has
terminated or is performing an I/O operation.

g) Processor Management – In a multiprogramming environment, the OS decides the order in which
processes have access to the processor, and how much processing time each process has. This function of
the OS is called process scheduling. An operating system performs the following activities for processor
management:
It keeps track of the status of processes; the program that performs this task is known as the traffic
controller. It allocates the CPU (processor) to a process and deallocates the processor when it is no
longer required.

h) Device Management – An OS manages device communication via the devices' respective drivers. It
performs the following activities for device management: it keeps track of all devices connected to the
system; it designates a program, known as the Input/Output controller, to be responsible for every device;
it decides which process gets access to a certain device and for how long; it allocates devices in an
effective and efficient way; and it deallocates devices when they are no longer required.
i) File Management – A file system is organized into directories for efficient or easy navigation
and usage. These directories may contain other directories and other files. An Operating System
carries out the following file management activities. It keeps track of where information is stored,
user access settings and status of every file and more. These facilities are collectively known as
the file system.

3. Compiler and Interpreter

We generally write a computer program using a high-level language. A high-level language is one which
is understandable by us humans. It contains words and phrases from the English (or other) language. But
a computer does not understand a high-level language. It only understands programs written in 0s and 1s,
i.e. binary, called machine code. A program written in a high-level language is called source code. We
need to convert the source code into machine code and this is accomplished by compilers and interpreters.
Hence, a compiler or an interpreter is a program that converts program written in high-level language into
machine code understood by the computer.

The difference between an interpreter and a compiler is given below:

1. An interpreter translates the program one statement at a time; a compiler scans the entire program and
translates it as a whole into machine code.
2. An interpreter takes less time to analyze the source code, but the overall execution time is slower; a
compiler takes more time to analyze the source code, but the overall execution time is comparatively faster.
3. An interpreter generates no intermediate object code and is therefore memory efficient; a compiler
generates intermediate object code which further requires linking, and hence requires more memory.
4. An interpreter keeps translating the program until the first error is met, in which case it stops, so
debugging is easy; a compiler generates error messages only after scanning the whole program, so
debugging is comparatively hard.
5. Programming languages like Python and Ruby use interpreters; languages like C and C++ use compilers.

4. Types of Computer Language

Machine Language: The lowest and most elementary language, and the first type of programming
language to be developed. Machine language is basically the only language which a computer can
understand. In fact, a manufacturer designs a computer to obey just one language, its machine code,
which is represented inside the computer by a string of binary digits (bits) 0 and 1. The symbol 0 stands
for the absence of an electric pulse and 1 for the presence of an electric pulse. Since a computer is capable
of recognizing electric signals, therefore, it understands machine language.
Advantages
1. It makes fast and efficient use of the computer
2. It requires no translator to translate the code i.e. directly understood by the computer.
Disadvantages
1. All operation codes have to be remembered
2. All memory addresses have to be remembered
3. It is hard to amend or find errors in a program written in the machine language
4. These languages are machine dependent i.e. a particular machine language can be used on only
one type of computer.
Assembly Languages: Assembly language was developed to overcome some of the many inconveniences of
machine language. This is another low-level but very important language, in which operation codes and
operands are given in the form of alphanumeric symbols instead of 0's and 1's. These alphanumeric symbols
are known as mnemonic codes and can have up to 5-letter combinations, e.g. ADD for addition,
SUB for subtraction, START, LABEL, etc. Because of this feature it is also known as a "Symbolic
Programming Language". This language is still difficult and needs a lot of practice to master, because
only very limited English-like support is provided. This symbolic language helps in compiler orientation.
The instructions of the assembly language are converted to machine code by a language translator (an
assembler) before being executed by the computer.
Advantages
1. It is easier to understand and use as compared to machine language
2. It is easy to locate and correct errors
3. It is modified easily
Disadvantages
1. Like machine language it is also machine dependent
2. Since it is machine dependent, the programmer must also have knowledge of the hardware.

High level languages: High level computer languages give formats close to English language and the
purpose of developing high level languages is to enable people to write programs easily and in their own
native language environment (English). High-level languages are basically symbolic languages that use
English words and/or mathematical symbols rather than mnemonic codes. Each instruction in the high
level language is translated into many machine language instructions thus showing one-to-many
translation.
• Problem-Oriented Language: These are languages used for handling specialized types of data
processing problems, where the programmer only specifies the input/output requirements and other
relevant information about the problem to be solved. The programmer does not have to
specify the procedure to be followed in solving that particular problem.
• Procedural Language: These are general-purpose languages that are designed to express the
logic of a data processing problem.
• Non-procedural Language: Computer programming languages that allow users and
professional programmers to specify the results they want without specifying how to solve the
problem.
Advantages: Following are the advantages of a high level language:
1. User-friendly (people based)
2. Similar to English, with a vocabulary of words and symbols; therefore it is easier to learn.
3. They require less time to write
4. They are easier to maintain
5. Problem-oriented rather than machine-based.
6. Shorter than their low-level equivalents. One statement translates into many machine code
instructions.
7. A program written in a high-level language can be translated into many machine languages and
therefore can run on any computer for which there exists an appropriate translator.
8. It is independent of the machine on which it is used, i.e. programs developed in a high-level language
can be run on any computer.
Disadvantages: There are certain disadvantages also. In spite of these disadvantages, high-level languages
have proved their worth; the advantages outweigh the disadvantages by far for most applications. These
are:
1. A high-level language has to be translated into machine language by a translator, and thus a
price in computer time is paid.
2. The object code generated by a translator might be inefficient compared to an equivalent
assembly language program.

5. Base conversion

Decimal, the Base 10 Numbering System: The decimal, denary or base 10 numbering system is what
we use in everyday life for counting. The fact that there are ten symbols is more than likely because we
have 10 fingers. We use ten different symbols or numerals to represent the numbers from zero to nine.

Those numerals are 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9


When we get to the number ten, we have no numeral to represent this value, so it is written as:
10
The idea is to use a new place holder for each power of 10 to make up any number we want. So 134
means one hundred, 3 tens and a 4 although we just interpret and read it as one hundred and thirty four.
Placeholder values in the decimal numbering system are powers of 10: ones, tens, hundreds, thousands, and so on.

Binary, the Base 2 Numbering System: In the decimal number system, we saw that ten numerals were
used to represent numbers from zero to nine.
Binary only uses two numerals, 0 and 1. Place holders in binary each have a value of a power of 2. So the
first place has a value 2^0 = 1, the second place 2^1 = 2, the third place 2^2 = 4, the fourth place 2^3 = 8 and so
on.
In binary we count 0, 1 and then since there's no numeral for two we move onto the next place holder so
three is written as 10 binary. This is exactly the same as when we get to ten decimal and have to write it
as 10 because there's no numeral for ten.

Most Significant Bit (MSB) and Least Significant Bit (LSB): For a binary number, the most significant
bit (MSB) is the digit furthermost to the left of the number and the least significant bit (LSB) is the
rightmost digit.
Steps to Convert Decimal to Binary
If you don't have a calculator to hand, you can easily convert a decimal number to binary using the
remainder method. This involves repeatedly dividing the number by 2 until you're left with 0, while
taking note of each remainder.
1. Write down the decimal number.
2. Divide the number by 2.
3. Write the result underneath.
4. Write the remainder on the right hand side. This will be 0 or 1.
5. Divide the result of the division by 2 and again write down the remainder.
6. Continue dividing and writing down remainders until the result of the division is 0.
7. The most significant bit (MSB) is at the bottom of the column of remainders and the least
significant bit (LSB) is at the top.
8. Read the series of 1s and 0s on the right from the bottom up. This is the binary equivalent of the
decimal number.
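
The steps above can be expressed as a short program. Below is a minimal sketch in Python; the function name decimal_to_binary is illustrative and not part of any standard library.

def decimal_to_binary(n):
    """Convert a non-negative decimal integer to a binary string (remainder method)."""
    if n == 0:
        return "0"
    remainders = []
    while n > 0:
        remainders.append(str(n % 2))  # write down the remainder (0 or 1)
        n = n // 2                     # divide the result by 2 and repeat
    # The last remainder written is the MSB, so read the column from the bottom up.
    return "".join(reversed(remainders))

print(decimal_to_binary(134))  # prints 10000110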

Steps to Convert Binary to Decimal


Converting from binary to decimal involves multiplying the value of each digit (i.e. 1 or 0) by the value
of the place holder in the number:
1. Write down the number.
2. Starting with the LSB, multiply the digit by the value of the place holder.
3. Continue doing this until you reach the MSB.
4. Add the results together.
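
These steps can likewise be sketched in a few lines of Python (the function name binary_to_decimal is illustrative):

def binary_to_decimal(bits):
    """Convert a binary string to its decimal value (place-value method)."""
    total = 0
    place_value = 1                # value of the LSB's place holder (2^0)
    for digit in reversed(bits):   # start with the LSB and work towards the MSB
        total += int(digit) * place_value
        place_value *= 2           # the next place holder is the next power of 2
    return total

print(binary_to_decimal("1011011"))  # prints 91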
Indicating the Base of a Number: The binary number 1011011 can be written as 1011011₂ to explicitly
indicate the base. Similarly, 54 base 10 can be written as 54₁₀. Often, however, the subscript is omitted to
avoid excessive detail when the context is known. Usually subscripts are only included in explanatory
text or notes in code to avoid confusion if several numbers with different bases are used together.

6. Signed number representation

In computing, signed number representations are required to encode negative numbers in binary
number systems.

In mathematics, negative numbers in any base are represented by prefixing them with a minus ("−") sign.
However, in computer hardware, numbers are represented only as sequences of bits, without extra
symbols. The four best-known methods of extending the binary numeral system to represent signed
numbers are: sign-and-magnitude, ones' complement, two's complement, and offset binary. Some of the
alternative methods use implicit instead of explicit signs, such as negative binary, using the base −2.
Corresponding methods can be devised for other bases, whether positive, negative, fractional, or other
elaborations on such themes.

Signed magnitude representation (SMR)

This representation is also called "sign–magnitude" or "sign and magnitude" representation. In this
approach, a number's sign is represented with a sign bit: setting that bit (often the most significant bit) to
0 for a positive number or positive zero, and setting it to 1 for a negative number or negative zero. The
remaining bits in the number indicate the magnitude (or absolute value). For example, in an eight-bit byte,
only seven bits represent the magnitude, which can range from 0000000 (0) to 1111111 (127). Thus
numbers ranging from −127₁₀ to +127₁₀ can be represented once the sign bit (the eighth bit) is added. For
example, −43₁₀ encoded in an eight-bit byte is 10101011 while 43₁₀ is 00101011. A consequence of using
signed magnitude representation is that there are two ways to represent zero, 00000000 (0) and 10000000
(−0).
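
The ±43₁₀ encoding above can be reproduced with a short sketch (Python; the helper name sign_magnitude_8bit and the range check are illustrative assumptions, not part of the representation itself):

def sign_magnitude_8bit(value):
    """Encode an integer in the range -127..+127 as an 8-bit sign-and-magnitude string."""
    if not -127 <= value <= 127:
        raise ValueError("out of range for 8-bit sign-and-magnitude")
    sign_bit = "1" if value < 0 else "0"   # the most significant bit holds the sign
    magnitude = format(abs(value), "07b")  # the remaining 7 bits hold the magnitude
    return sign_bit + magnitude

print(sign_magnitude_8bit(43))   # 00101011
print(sign_magnitude_8bit(-43))  # 10101011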

Ones' complement

Alternatively, a system known as ones' complement can be used to represent negative numbers. The ones'
complement form of a negative binary number is the bitwise NOT applied to it, i.e. the "complement" of
its positive counterpart. Like sign-and-magnitude representation, ones' complement has two
representations of 0: 00000000 (+0) and 11111111 (−0).

As an example, the ones' complement form of 00101011 (43₁₀) becomes 11010100 (−43₁₀). The range of
signed numbers using ones' complement is −(2^(N−1) − 1) to +(2^(N−1) − 1) and ±0. A conventional
eight-bit byte ranges from −127₁₀ to +127₁₀, with zero being either 00000000 (+0) or 11111111 (−0).

Two's complement

In two's complement, negative numbers are represented by the bit pattern which is one greater (in an
unsigned sense) than the ones' complement of the positive value.

In two's-complement, there is only one zero, represented as 00000000. Negating a number (whether
negative or positive) is done by inverting all the bits and then adding one to that result.

An easier method to get the negation of a number in two's complement is as follows:


                                                        Example 1    Example 2
1. Starting from the right, find the first "1"          00101001     00101100
2. Invert all of the bits to the left of that "1"       11010111     11010100

Method two:

1. Invert all the bits through the number
2. Add one

Example: for +2, which is 00000010 in binary (~X means "invert all the bits in X"):

1. ~00000010 → 11111101
2. 11111101 + 1 → 11111110 (−2 in two's complement)
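
The invert-and-add-one rule is easy to mimic in software. The sketch below (Python; the function name is illustrative) masks the result to a fixed width so that, as in hardware, only 8 bits are kept:

def twos_complement_negate(bits, width=8):
    """Negate a two's-complement value given as a bit string of the given width."""
    value = int(bits, 2)
    negated = (~value + 1) & ((1 << width) - 1)  # invert all bits, add one, keep 'width' bits
    return format(negated, f"0{width}b")

print(twos_complement_negate("00000010"))  # 11111110, i.e. -2
print(twos_complement_negate("00101011"))  # 11010101, i.e. -43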

7. Addition/Subtraction in 2’s complement method

Addition is performed by doing the simple binary addition of the two numbers. Subtraction is
accomplished by first performing the 2's complement operation on the number being subtracted then
adding the two numbers.

Examples: 5 bits

8 01000
+4 00100
-- ----------
12 01100

-8 11000
+ -4 11100
-- -----
-12 10100

8 01000
+ -4 11100
-- -----
4 00100
Example: 8 bits

25 0001 1001
+ -29 1110 0011
--- ---------
-4 1111 1100

Overflow
Since we are working with numbers contained in a fixed number of bits, we must be able to detect
overflow following an operation.

1. No overflow occurs when the value of the bit carried into the most significant bit is the same
as the value carried out of the most significant bit.
2. Overflow occurs when the value of the bit carried into the most significant bit is not the same
as the bit carried out of the most significant bit.
Examples: 4 bits

6 0110
+1 0001
----- ------
7 0111

7 0111
+1 0001
----- ------
8 1000 (-8)
But in the second case, is −8 the correct answer? No: +8 cannot be represented in 4-bit two's complement
(the range is −8 to +7). The carry into the most significant bit (1) differs from the carry out (0), so overflow
has occurred and the 4-bit result is invalid.
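
The overflow rule can also be checked mechanically. The following sketch (Python; the names and the 4-bit default are illustrative) adds two two's-complement bit strings and compares the carry into the most significant bit with the carry out of it:

def add_with_overflow(a, b, width=4):
    """Add two two's-complement bit strings and report whether overflow occurred."""
    mask = (1 << width) - 1
    low_mask = mask >> 1                          # all bit positions except the MSB
    result = (int(a, 2) + int(b, 2)) & mask
    carry_in = ((int(a, 2) & low_mask) + (int(b, 2) & low_mask)) >> (width - 1)
    carry_out = (int(a, 2) + int(b, 2)) >> width
    overflow = (carry_in & 1) != (carry_out & 1)  # carries into and out of the MSB differ
    return format(result, f"0{width}b"), overflow

print(add_with_overflow("0110", "0001"))  # ('0111', False): 6 + 1 = 7, no overflow
print(add_with_overflow("0111", "0001"))  # ('1000', True): 7 + 1 overflows in 4 bits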

8. IEEE Standard 754 Floating Point Numbers

The IEEE Standard for Floating-Point Arithmetic (IEEE 754) is a technical standard for floating-point
computation which was established in 1985 by the Institute of Electrical and Electronics Engineers
(IEEE). The standard addressed many problems found in the diverse floating point implementations that
made them difficult to use reliably and reduced their portability. IEEE Standard 754 floating point is the
most common representation today for real numbers on computers, including Intel-based PCs, Macs, and
most Unix platforms.

There are several ways to represent floating point number but IEEE 754 is the most efficient in most
cases. IEEE 754 has 3 basic components:

1. The Sign of Mantissa – This is as simple as the name. 0 represents a positive number while 1
represents a negative number.
2. The Biased exponent – The exponent field needs to represent both positive and negative
exponents. A bias is added to the actual exponent in order to get the stored exponent.
3. The Normalized Mantissa – The mantissa is the part of a number in scientific notation or a floating-
point number consisting of its significant digits. In binary we have only two digits, 0 and 1, so a
normalized mantissa is one with only a single 1 to the left of the binary point.

IEEE 754 numbers are divided into two based on the above three components: single precision and
double precision.
TYPE                SIGN          BIASED EXPONENT     NORMALIZED MANTISSA    BIAS

Single precision    1 (bit 31)    8 (bits 30-23)      23 (bits 22-0)         127

Double precision    1 (bit 63)    11 (bits 62-52)     52 (bits 51-0)         1023

Example –

85.125
85 = 1010101
0.125 = 001
85.125 = 1010101.001
=1.010101001 x 2^6
sign = 0

1. Single precision:
Biased exponent 127+6=133
133 = 10000101
Normalized mantissa = 010101001
We will add 0's to complete the 23 bits

The IEEE 754 single-precision representation is:

= 0 10000101 01010100100000000000000
This can be written in hexadecimal form as 42AA4000

2. Double precision:
Biased exponent 1023+6=1029
1029 = 10000000101
Normalized mantissa = 010101001
We will add 0's to complete the 52 bits

The IEEE 754 double-precision representation is:

= 0 10000000101 0101010010000000000000000000000000000000000000000000
This can be written in hexadecimal form as 4055480000000000
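
The two worked results can be checked quickly with Python's struct module (this is only a verification aid, not part of the standard): pack 85.125 as single and double precision and print the bit patterns in hexadecimal.

import struct

single_bits = struct.unpack(">I", struct.pack(">f", 85.125))[0]  # reinterpret the 4 bytes as an unsigned int
double_bits = struct.unpack(">Q", struct.pack(">d", 85.125))[0]  # reinterpret the 8 bytes as an unsigned int

print(f"{single_bits:08X}")   # 42AA4000
print(f"{double_bits:016X}")  # 4055480000000000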

9. Algorithm

A step-by-step procedure for solving a problem is known as an algorithm. An algorithm is a finite
set of instructions that, if followed, accomplishes a particular task. It is a sequence of
computational steps that transform the input into the required output.

All algorithms must satisfy the following criteria:

1) Input: There are zero or more quantities that are externally supplied.

2) Output: At least one quantity is produced.

3) Definiteness: Each instruction of the algorithm should be clear and unambiguous.

4) Finiteness: The process should be terminated after a finite number of steps.

5) Effectiveness: Every instruction must be basic enough to be carried out, at least in principle, by a
person using paper and pencil.

Advantages of algorithm

1. It is a step-wise representation of a solution to a given problem, which makes it easy to
understand.
2. An algorithm uses a definite procedure.
3. It is not dependent on any programming language, so it is easy to understand for anyone even
without programming knowledge.
4. Every step in an algorithm has its own logical sequence so it is easy to debug.
5. By using an algorithm, the problem is broken down into smaller pieces or steps; hence, it is easier for
the programmer to convert it into an actual program.

Disadvantages of algorithm

1. Writing an algorithm takes a long time.
2. An algorithm is not a computer program; it is rather a concept of how a program should be.

10. Flowchart

A flowchart is the graphical or pictorial representation of an algorithm with the help of different symbols,
shapes and arrows in order to demonstrate a process or a program. With algorithms, we can easily
understand a program. The main purpose of a flowchart is to analyze different processes. Several standard
symbols are used in a flowchart:

• Terminal Box - Start / End

• Input / Output

• Process / Instruction

• Decision

• Connector / Arrow

The symbols above represent different parts of a flowchart. The process in a flowchart can be expressed
through boxes and arrows with different sizes and colors. In a flowchart, we can easily highlight a certain
element and the relationships between each part.
Example: Convert Temperature from Fahrenheit (℉) to Celsius (℃)
Algorithm:
Step 1: Read the temperature in Fahrenheit.
Step 2: Calculate the temperature in Celsius with the formula C = 5/9 * (F - 32).
Step 3: Print C.
Flowchart: (the corresponding flowchart diagram is not reproduced here)
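
The same three steps can also be written as a tiny Python program (a sketch; the function name fahrenheit_to_celsius is illustrative):

def fahrenheit_to_celsius():
    f = float(input("Temperature in Fahrenheit: "))  # Step 1: read the temperature in Fahrenheit
    c = 5 / 9 * (f - 32)                             # Step 2: apply C = 5/9 * (F - 32)
    print("Temperature in Celsius:", c)              # Step 3: print C

fahrenheit_to_celsius()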

Advantages of flowchart:

1. The flowchart is an excellent way of communicating the logic of a program.
2. It is easy and efficient to analyze a problem using a flowchart.
3. During program development cycle, the flowchart plays the role of a guide or a blueprint. This
makes program development process easier.
4. After successful development of a program, it needs continuous timely maintenance during the
course of its operation. The flowchart makes program or system maintenance easier.
5. It helps the programmer to write the program code.
6. It is easy to convert the flowchart into any programming language code as it does not use any
specific programming language concept.

Disadvantage of flowchart

1. The flowchart can be complex when the logic of a program is quite complicated.
2. Drawing flowchart is a time-consuming task.
3. It is difficult to alter a flowchart. Sometimes, the designer needs to redraw the complete flowchart
to change its logic.
4. Since it uses special sets of symbols for every action, it is quite a tedious task to develop a
flowchart as it requires special tools to draw the necessary symbols.
5. In the case of a complex flowchart, other programmers might have a difficult time understanding
the logic and process of the flowchart.
6. It is just a visualization of a program; it cannot function like an actual program.
