
Classification of Computers

Unit Structure
3.1 Introduction
3.2 Generation of Computers
3.2.1 First Generation Computers
3.2.2 Second Generation Computers
3.2.3 Third Generation Computers
3.2.4 Fourth Generation Computers
3.3 Types of Computer
3.3.1 Micro Computer
3.3.2 Mini Computer
3.3.3 Mainframe
3.3.4 Super Computer
3.3.5 Digital & Analog Computer
3.3.6 Hybrid Computer

3.1 Introduction

One question which we have still not answered is: is there any classification of computers? For quite some time computers have been classified under three main classes:
- Microcomputers
- Minicomputers
- Mainframes
Although with developments in technology the distinction between these classes is becoming blurred, it is still useful to classify them, as the classification highlights the key elements and architectural differences among the classes.

The very first attempt towards automatic computing was made by Blaise Pascal. He invented a device consisting of gears and chains which could perform repeated additions and subtractions. This device was called the Pascaline. Many later attempts were made in this direction; we will not go into the details of these mechanical calculating devices, but we must discuss the innovations of Charles Babbage, the grandfather of the modern computer. He designed two computers:

The Difference Engine: It was based on the mathematical principle of finite differences and was used to solve calculations on large numbers using a formula. It was also used for evaluating polynomial and trigonometric functions.

The Analytical Engine: It was a general purpose computing device which could be used for performing any mathematical operation automatically. It consisted of the following components:
- The Store: a mechanical memory unit consisting of sets of counter wheels.
- The Mill: an arithmetic unit capable of performing the four basic arithmetic operations.
- Cards: there were basically two types of cards:
  a. Operation Cards: select one of the four arithmetic operations by activating the mill to perform the selected function.
  b. Variable Cards: select the memory locations to be used by the mill for a particular operation (i.e. the source of the operands and the destination of the results).
- Output: could be directed to a printer or a card punch device.

The pairing of an operation card with variable cards closely parallels the opcode and operand fields of a modern machine instruction, as the sketch below illustrates.
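To make the analogy concrete, here is a minimal, purely illustrative sketch: the names, the instruction format and the sample program are assumptions made for teaching purposes, not Babbage's actual notation. It models an operation card plus its variable cards as an opcode with operand and result addresses, executed against a small "store":

```python
import operator

# Illustrative model of the Analytical Engine's card-driven execution.
# Names, instruction format and program are assumptions for teaching only.

OPERATIONS = {"ADD": operator.add, "SUB": operator.sub,
              "MUL": operator.mul, "DIV": operator.truediv}

store = [0] * 10            # the Store: a small bank of "counter wheels"
store[1], store[2] = 7, 5   # preload two operands

# Each "instruction" pairs an operation card with variable cards:
# (operation, source address 1, source address 2, destination address)
program = [
    ("ADD", 1, 2, 3),   # store[3] = store[1] + store[2]
    ("MUL", 3, 2, 4),   # store[4] = store[3] * store[2]
]

for op, src1, src2, dest in program:
    # The Mill performs the arithmetic selected by the operation card,
    # on the locations selected by the variable cards.
    store[dest] = OPERATIONS[op](store[src1], store[src2])

print(store[3], store[4])   # 12 60
```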

The basic features of this Analytical Engine were:
- It was a general purpose programmable machine.
- It had the provision of automatic sequence control, enabling programs to alter their sequence of operations.
- It provided for sign checking of results.
- A mechanism for advancing or reversing the control cards was provided, enabling execution of any desired instruction. In other words, Babbage had devised conditional and branching instructions.

The Babbage machine is fundamentally the same as a modern computer. Unfortunately, Babbage's work could not be completed. But as a tribute to Charles Babbage, his Analytical Engine was completed in the last decade and is now on display at the Science Museum in London.

The next notable attempts towards the computer were electromechanical. Konrad Zuse used electromechanical relays that could either open or close automatically. Thus began the use of binary digits rather than decimal numbers.

Harvard Mark I and the Bug
The next significant electromechanical computer was built at Harvard University, jointly sponsored by IBM and the US Navy. Howard Aiken of Harvard University developed this system, called the Mark I; it was a decimal machine. Some of you must have heard the term bug. It is mainly used to indicate errors in computer programs. This term was coined when, one day, a program on the Mark I did not run properly because a moth had short-circuited the computer. Since then, the moth or the bug has been linked with errors or problems in computer programming. The process of eliminating errors in a program is thus known as debugging.

The basic drawbacks of these mechanical and electromechanical computers were:
- Friction and inertia of moving components limited their speed.
- Data movement using gears and levers was quite difficult and unreliable.

The answer was to have a switching and storing mechanism with no moving parts. The electronic switching technique of triode vacuum tubes was then used, and hence the first electronic computer was born.

3.2.1 First Generation Computers

It is indeed ironic that scientific inventions of great significance have often been linked with supporting a very sad and undesirable aspect of civilization, i.e. fighting wars. Nuclear energy would not have been developed as fast if colossal efforts had not been spent on devising nuclear bombs. Similarly, the first truly general purpose computer was designed to meet the requirements of World War II. The ENIAC (the Electronic Numerical Integrator And Calculator) was designed in 1945 at the University of Pennsylvania to calculate figures for the thousands of gunnery tables required by the US Army for accuracy in artillery fire. The ENIAC ushered in the era of what is known as first generation computers. It could perform 5000 additions or 500 multiplications per minute. It was, however, a monstrous installation: it used about 18,000 vacuum tubes, weighed 30 tons, occupied several rooms, needed a great amount of electricity and emitted excessive heat. The main features of ENIAC can be summarized as:
- ENIAC was a general purpose computing machine in which vacuum tube technology was used.
- ENIAC was based on decimal arithmetic rather than binary arithmetic.

- ENIAC needed to be programmed manually by setting switches and plugging or unplugging cables. Thus, passing a set of instructions to the computer was cumbersome and time-consuming. This was considered to be the major deficiency of ENIAC.

The trends encountered during the era of first generation computers were:
- Control was centralized in a single CPU, and all operations required a direct intervention of the CPU.
- Use of ferrite-core main memory started during this time.
- Concepts such as the use of virtual memory and index registers (you will know more about these terms later) started.
- Punched cards were used as the input device.
- Magnetic tapes and magnetic drums were used as secondary memory.
- Binary code, or machine language, was used for programming.
- Towards the end, due to difficulties encountered in using machine language as the programming language, the use of symbolic language, now called assembly language, started. The assembler, a program which translates assembly language programs into machine language, was developed (as illustrated in the sketch after this list).
- The computer was accessible to only one programmer at a time (single user environment).
- Advent of the von Neumann architecture.
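To illustrate what an assembler does, here is a minimal, hypothetical sketch. The mnemonics, opcodes and word format are invented for illustration and are not taken from any real first generation machine; the point is only the translation from symbolic instructions to numeric machine code:

```python
# A toy assembler: translates symbolic (assembly) instructions into
# numeric machine code. Mnemonics and opcodes are invented for illustration.

OPCODES = {"LOAD": 0x1, "ADD": 0x2, "STORE": 0x3, "HALT": 0xF}

def assemble(lines):
    """Turn lines like 'ADD 7' into 16-bit words: 4-bit opcode, 12-bit address."""
    machine_code = []
    for line in lines:
        parts = line.split()
        opcode = OPCODES[parts[0]]
        address = int(parts[1]) if len(parts) > 1 else 0
        machine_code.append((opcode << 12) | address)
    return machine_code

program = ["LOAD 5", "ADD 7", "STORE 9", "HALT"]
for word in assemble(program):
    print(f"{word:016b}")   # the binary a first generation programmer keyed in by hand
```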

3.2.2 Second Generation Computers

The second generation of computers started with the advent of the transistor. A transistor is a two-state device made from silicon. It is cheaper, smaller and dissipates less heat than a vacuum tube, but can be used in a similar way. Unlike a vacuum tube, a transistor does not require a filament, a metal or glass capsule, or a vacuum; it is therefore called a solid state device. The transistor was invented in 1947 and launched the electronic revolution of the 1950s.

The generations of computers are basically differentiated by their fundamental hardware technology. Each new generation of computer is characterized by greater speed, larger memory capacity and smaller size than the previous generation. Thus, second generation computers were more advanced in terms of the arithmetic and logic unit and the control unit than their first generation counterparts. Another feature of the second generation was that by this time high level languages were beginning to be used and provision for system software was starting.

One of the main computer series during this time was the IBM 700 series. Each successive member of this series showed increased performance and capacity and reduced cost. Two main concepts were used in this series: I/O channels, an independent processor for input/output, and the multiplexor, a useful routing device.

3.2.3 Third Generation Computers

A single self-contained transistor is called a discrete component. In the 1960s, electronic equipment was made from discrete components such as transistors, capacitors, resistors and so on. These components were manufactured separately and soldered onto circuit boards, which could then be used to build computers out of these electronic components.

Since a computer could contain around 10,000 of these transistors, the entire mechanism was cumbersome. Then started the era of micro-electronics (small electronics) with the invention of Integrated Circuits (ICs). The use of ICs in computers defines the third generation of computers. In an integrated circuit, components such as transistors, resistors and conductors are fabricated on a semiconductor material such as silicon. Thus, a desired circuit can be fabricated in a tiny piece of silicon rather than by assembling several discrete components into the same circuit. Hundreds or even thousands of transistors could be fabricated on a single wafer of silicon. In addition, these fabricated transistors can be connected by a process of metalisation to form logic circuits on the same chip on which they were produced.

An integrated circuit is constructed on a thin wafer of silicon which is divided into a matrix of small areas (of the order of a few square millimetres each). An identical circuit pattern is fabricated on each of these areas and the wafer is then broken into chips (refer figure 3). Each of these chips consists of several gates, which are made using transistors only, and a number of input and output connection points. Each chip can then be packaged separately in a housing to protect it. In addition, this housing provides a number of pins for connecting the chip with other devices or circuits. The pins on these packages can be provided in two ways:
- In two parallel rows with 0.1 inch spacing between two adjacent pins in each row. This package is called a dual in-line package (DIP).
- In case more than about a hundred pins are required, a pin grid array (PGA) is used, where the pins are arranged in an array of rows and columns, with a spacing of 0.1 inch between two adjacent pins.
A small calculation after this list shows why high pin counts force the move from DIP to PGA.
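As a rough, illustrative calculation (the formulas ignore package body margins and are assumptions, not datasheet values), compare how long a DIP grows with pin count against the footprint of a PGA at the same 0.1 inch pitch:

```python
import math

# Rough comparison of a DIP's length with a PGA's footprint at a 0.1 inch
# pin pitch. Body margins and real datasheet dimensions are ignored.

PITCH = 0.1  # inches between adjacent pins

def dip_length(pins):
    """A DIP splits its pins into two parallel rows of pins/2 each."""
    pins_per_row = pins // 2
    return (pins_per_row - 1) * PITCH

def pga_side(pins):
    """A PGA arranges pins in a near-square grid of rows and columns."""
    per_side = math.ceil(math.sqrt(pins))
    return (per_side - 1) * PITCH

for pins in (16, 40, 100, 200):
    side = pga_side(pins)
    print(f"{pins:3d} pins: DIP ~{dip_length(pins):4.1f} in long, "
          f"PGA ~{side:3.1f} x {side:3.1f} in")
```

At 200 pins the two DIP rows alone would be nearly 10 inches long, while a PGA grid stays under 1.5 inches on a side, which is why high pin counts use the grid arrangement.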

Different circuits can be constructed on different wafers. All these packaged circuit chips can then be interconnected on a printed-circuit board to produce complex electronic circuits such as computers.

Initially, only a few gates could be integrated reliably on a chip and then packaged. This initial integration was referred to as small-scale integration (SSI). Later, with advances in microelectronics technology, SSI gave way to Medium Scale Integration (MSI), where hundreds of gates were fabricated on a chip. Then came Large Scale Integration (LSI, about 1,000 gates) and Very Large Scale Integration (VLSI, around 1,000,000 gates on a single chip). At present we are moving into the era of Ultra Large Scale Integration (ULSI), where around 100,000,000 components are expected on a chip; the projection is that in the near future almost 10,000,000,000 components will be fabricated on a single chip.

What are the advantages of having densely packed integrated circuits? These are:
- Low cost: the cost of a chip has remained almost constant while the chip density (number of gates per chip) keeps increasing. This implies that the cost of computer logic and memory circuitry has been falling rapidly (see the cost-per-gate sketch below).
- Greater operating speed: the greater the density, the closer the logic or memory elements are to each other, which means shorter electrical paths and hence a higher operating speed.
- Smaller computers, and hence better portability.
- Reduction in power and cooling requirements.
- Reliability: integrated circuit interconnections are much more reliable than soldered connections. In addition, densely packed integrated circuits need fewer inter-chip connections. Thus, the computers are more reliable.

Some examples of third generation computers are the IBM System/360 family and the DEC PDP-8 systems. The third generation computers mainly used SSI chips.
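The following sketch makes the low-cost argument concrete. The chip price and gate counts are illustrative assumptions, not historical data; the point is simply that a roughly constant chip cost divided by a rapidly growing gate count drives the cost per gate down:

```python
# Cost per gate when chip price stays roughly constant but density grows.
# The $10 chip price and the gate counts are illustrative assumptions only.

CHIP_COST = 10.0  # assumed roughly constant cost per packaged chip, in dollars

gates_per_chip = {
    "SSI":  10,
    "MSI":  100,
    "LSI":  1_000,
    "VLSI": 1_000_000,
}

for level, gates in gates_per_chip.items():
    cost_per_gate = CHIP_COST / gates
    print(f"{level:4s}: {gates:>9,} gates/chip -> ${cost_per_gate:.6f} per gate")
```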

One of the key concepts brought forward during this time was the family of compatible computers. This concept was mainly started by IBM with its System/360 family. A family of computers consists of several models. Each model is assigned a model number; for example, the IBM System/360 family has Models 30, 40, 50, 65 and 75. As we go from a lower model number to a higher model number in this family, the memory capacity, processing speed and cost increase. But all these models are compatible in nature, that is, a program written on a lower model can be run on a higher model without any change; only the execution time is reduced as we move towards the higher models.

The biggest advantage of this family system was the flexibility in selecting a model. For example, if you had a limited budget and modest processing requirements, you could start with a relatively moderate model such as a Model 30. As your business grows, you could move to higher models such as the 40, 50, 65 or 75, depending on your needs. Here, you do not sacrifice your investment in already developed software, as it can be used on these machines as well.

Let us summarize the main characteristics of the family. These are:
- The instructions in a family are of a similar type. Normally, the instruction set of a lower end member is a subset of that of a higher end member; therefore, a program written on a lower end member can be executed on a higher end member, but a program written on a higher end member may or may not execute on a lower end member (as the sketch at the end of this subsection illustrates).
- The operating system used on family members is the same. In certain cases some features can be added to the operating system for the higher end members.

- The speed of execution of instructions increases from the lower end family members to the higher end members.
- The number of I/O ports or interfaces increases as we move to higher members.
- Memory size increases as we move towards higher members.
- The cost increases from lower to higher members.

But how was the family concept implemented? There were three main features of implementation:
- increased complexity of the arithmetic logic unit;
- an increase in the memory-CPU data paths;
- simultaneous access of data in higher end members.

The PDP-8 was a contemporary of the System/360 family and was a compact, cheap system from DEC. This computer established the concept of the minicomputer. We will explain more about the term minicomputer later.

The major developments which took place in the third generation can be summarized as:
- IC circuits started finding their application in computer hardware, replacing the discrete transistor component circuits. This resulted in a reduction in the cost and the physical size of the computer.
- Semiconductor (integrated circuit) memories were made more flexible using a technique called microprogramming.
- Certain new techniques were introduced to increase the effective speed of program execution. These techniques were pipelining and multiprocessing.
- The operating systems of computers incorporated efficient methods of sharing facilities or resources, such as the processor and memory space, automatically.
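Here is a minimal sketch of the upward-compatibility idea described above. The model numbers echo the System/360 family, but the instruction sets are invented for illustration; the check simply asks whether every instruction a program uses is available on the target model:

```python
# Upward compatibility within a computer family: a lower model's instruction
# set is a subset of a higher model's. The instruction names are invented.

FAMILY = {  # model number -> instruction set (each a superset of the last)
    30: {"LOAD", "STORE", "ADD", "SUB"},
    40: {"LOAD", "STORE", "ADD", "SUB", "MUL", "DIV"},
    65: {"LOAD", "STORE", "ADD", "SUB", "MUL", "DIV", "FLOAT_ADD", "FLOAT_MUL"},
}

def runs_on(program_instructions, model):
    """A program runs on a model if it uses only instructions that model has."""
    return set(program_instructions) <= FAMILY[model]

payroll = ["LOAD", "MUL", "STORE"]   # a program written for a Model 40
print(runs_on(payroll, 65))   # True  - upward compatible with a higher model
print(runs_on(payroll, 30))   # False - Model 30 lacks MUL
```

The same subset relation is why a customer's software investment carries forward when moving up the family.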

3.2.4 Fourth Generation Computers

As discussed earlier, with the growth of micro-electronics the IC technology evolved rapidly. One of the major milestones in this technology was very large scale integration (VLSI), where thousands of transistors can be integrated on a single chip. The main impact of VLSI was that it became possible to produce a complete CPU, main memory or other similar device on a single IC chip. This implied that mass production of CPUs, memory and similar components could be done at a very low cost. The VLSI-based computer architecture is sometimes referred to as the fourth generation of computers. Let us discuss some of the important breakthroughs of VLSI technologies.

Semiconductor Memories: Initially the IC technology was used for constructing processors, but it was soon realised that the same technology could be used for constructing memory. The first memory chip was built in 1970 and could hold 256 bits. Although the cost of this chip was high, the cost of semiconductor memory has been falling steadily ever since. The memory capacity per chip has increased as: 1K, 4K, 16K, 64K, 256K and 1M bits.
