Foundations of Computing: Essential for Computing Studies, Profession And Entrance Examinations - 5th Edition

Ebook · 1,984 pages · 46 hours

About this ebook

If you wish to have a bright future in any profession today, you cannot ignore having a sound foundation in Information Technology (IT). Hence, you cannot ignore this book, because it provides comprehensive coverage of all important topics in IT. Foundations of Computing is designed to introduce, through a single book, the important concepts of the foundation courses in Computer Science (CS), Computer Applications (CA), and Information Technology (IT) programs taught at undergraduate and postgraduate levels.
Language: English
Release date: Dec 12, 2022
ISBN: 9789355512550


    Book preview

    Foundations of Computing - Pradeep K. Sinha

    Chapter 1

    Characteristics, Evolution, and Classification of Computers

    WHAT IS A COMPUTER?

    The original objective of inventing a computer was to create a fast calculating device. Today, we define a computer as a device that operates upon data, because more than 80% of the work done by today's computers is data processing. Data can be almost anything: bio-data of applicants, including their photographs, when a computer is used for shortlisting candidates for recruitment; marks obtained by students in various subjects, when it is used for preparing results; details (name, age, sex, etc.) of passengers, when it is used for making airline or railway reservations; or values of different parameters, when it is used for solving scientific research problems. Notice from these examples that data can be numeric (only digits), alphabetic (only letters), alphanumeric (a mixture of digits, letters, and special characters), text, image, voice, or video.

    A computer is often referred to as a data processor because it can store, process, and retrieve data whenever desired. The activity of processing data using a computer is called data processing. Data processing consists of three sub-activities: capturing input data, manipulating the data, and managing output results. As used in data processing, information is data arranged in an order and form that is useful to people receiving it. Hence, data is raw material used as input to data processing and information is processed data obtained as output of data processing (see Figure 1.1).
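
    As an illustration, the following Python sketch walks through the three sub-activities for the student-results example above (the names, marks, and pass criterion are assumptions made only for demonstration):

        def capture_input():
            # Capturing input data: raw marks obtained by students.
            return {"Asha": [72, 65, 80], "Ravi": [40, 38, 55]}

        def manipulate(marks):
            # Manipulating the data: compute totals and pass/fail status
            # (an assumed pass mark of 40 in every subject).
            results = {}
            for name, scores in marks.items():
                total = sum(scores)
                results[name] = (total, "PASS" if min(scores) >= 40 else "FAIL")
            return results

        def manage_output(results):
            # Managing output results: present processed data as information.
            for name, (total, status) in results.items():
                print(f"{name}: total={total}, result={status}")

        manage_output(manipulate(capture_input()))

    The raw marks (data) become useful only after processing arranges them into totals and results (information) for the person receiving them.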

    Figure 1.1. A computer (also known as data processor) converts data into information.

    CHARACTERISTICS OF COMPUTERS

    A computer's power and usefulness are mainly due to its characteristics listed in Figure 1.2.

    Figure 1.2. Characteristics of computers.

    EVOLUTION OF COMPUTERS

    Researchers invented computers because of man's search for fast and accurate calculating devices. Blaise Pascal invented the first mechanical adding machine in 1642. Later, in 1671, Baron Gottfried Wilhelm von Leibniz of Germany invented the first calculator for multiplication. Keyboard machines originated in the United States around 1880, and we use them even today. Around the same period, Herman Hollerith came up with the concept of punched cards, which computers used extensively as an input medium even into the late 1970s. Business machines and calculators made their appearance in Europe and America towards the end of the nineteenth century.

    Charles Babbage, a nineteenth-century professor at Cambridge University, is considered the father of modern digital programmable computers. In 1822, he designed a Difference Engine that could produce reliable mathematical tables. In 1842, Babbage came out with his new idea of a completely automatic Analytical Engine for performing basic arithmetic functions for any mathematical problem at an average speed of 60 additions per minute. Unfortunately, he was unable to produce a working model of this machine because the precision engineering required to manufacture it was not available during that period. However, his efforts established a number of principles that are fundamental to the design of any digital programmable computer.

    A major drawback of the early automatic calculating machines was that their programs were wired on boards, which made it difficult to change programs. In the 1940s, Dr. John von Neumann introduced the stored program concept, which helped overcome the hard-wired program problem. The basic idea behind this concept is that a sequence of instructions and data can be stored in the memory of a computer for automatically directing the flow of operations. This feature considerably influenced the development of modern digital computers because of the ease with which different programs can be loaded and executed on a single computer. Due to this feature, we often refer to modern digital computers as stored program digital computers.

    Figure 1.3 provides basic information about some of the well-known early computers.

    COMPUTER GENERATIONS

    Generation in computer talk provides a framework for the growth of the computer industry based on key technologies developed. Originally, the term was used to distinguish between hardware technologies, but it was later extended to include both hardware and software technologies.

    The custom of referring to the computer era in terms of generations came into wide use only after 1964. In all, five computer generations are known to date. Below, we describe each generation along with its identifying characteristics. In the description, you will come across several new terms, which subsequent chapters deal with in detail. The idea here is to provide an overview of the major developments and technologies during the five generations of computers, not to explain them in detail.

    First Generation (1942-1955)

    The early computers of Figure 1.3, and others of their time, were manufactured using vacuum tubes as the electronic switching device. A vacuum tube [see Figure 1.4(a)] was a fragile glass device that used filaments as a source of electrons and could control and amplify electronic signals. It was the only high-speed electronic switching device available in those days. These vacuum tube computers could perform computations in milliseconds and were known as first-generation computers.

    Most of the first-generation computers worked on the principle of storing program instructions along with data in the memory of the computer (stored program concept) so that they could automatically execute a program without human intervention. Memory of these computers used electromagnetic relays, and users fed all data and instructions into the system using punched cards. Programmers wrote instructions in machine and assembly languages because of the lack of high-level programming languages in those days. Since machine and assembly languages are difficult to work with, only a few specialists understood how to program these early computers.

    Second Generation (1955-1964)

    John Bardeen, William Shockley, and Walter Brattain invented a new electronic switching device called the transistor [see Figure 1.4(b)] at Bell Laboratories in 1947. Transistors soon proved to be a better electronic switching device than vacuum tubes.

    Second-generation computers were manufactured using transistors. They were more powerful, more reliable, less expensive, smaller, and cooler to operate than the first-generation computers.

    The second generation also experienced a change in storage technology. Memory of second-generation computers was composed of magnetic cores. Magnetic cores are small rings made of ferrite that can be magnetized in either the clockwise or the anti-clockwise direction. A large random access memory (with a storage capacity of a few tens of kilobytes) had many magnetic cores strung on a mesh of wires.

    In 1957, researchers introduced magnetic tape as a faster and more convenient secondary storage medium. Later magnetic disk storage was also developed, and magnetic disk and magnetic tape were the main secondary storage media used in second-generation computers. Users still used punched cards widely for preparing and feeding programs and data to a computer.

    On the software front, high-level programming languages (like FORTRAN, COBOL, ALGOL, and SNOBOL) and batch operating systems emerged during the second generation. High-level programming languages made second-generation computers easier to program and use than first-generation computers. Introduction of batch operating systems helped reduce human intervention while processing multiple jobs, resulting in faster processing, enhanced throughput, and easier operation of second-generation computers.

    In addition to scientific computations, business and industry users used second-generation computers increasingly for commercial data processing applications like payroll, inventory control, marketing, and production planning.

    Ease of use of second-generation computers gave birth to the new professions of programmer and systems analyst, oriented more towards the usage than the design of computers. This triggered the introduction of computer science related courses in several colleges and universities.

    Figure 1.4. Electronics devices used for manufacturing computers of different generations.

    Third Generation (1964-1975)

    In 1958, Jack St. Clair Kilby and Robert Noyce invented the first integrated circuit. Integrated circuits (called ICs) are circuits consisting of several electronic components like transistors, resistors, and capacitors grown on a single chip of silicon, eliminating wired interconnections between components. IC technology was also known as microelectronics technology because it made it possible to integrate a large number of circuit components into a very small (less than 5 mm square) surface of silicon, known as a chip [see Figure 1.4(c)]. Initially, integrated circuits contained only about ten to twenty components. This technology was named small-scale integration (SSI). Later, with advancements in IC manufacturing technology, it became possible to integrate up to about a hundred components on a single chip. This technology was known as medium-scale integration (MSI).

    Third-generation computers were manufactured using ICs. Earlier ones used SSI technology and later ones used MSI technology. ICs were smaller, less expensive to produce, more rugged and reliable, faster in operation, dissipated less heat, and consumed less power than circuits built by wiring electronic components manually. Hence, third-generation computers were more powerful, more reliable, less expensive, smaller, and cooler to operate than second-generation computers.

    Parallel advancements in storage technologies allowed construction of larger magnetic core based random access memories as well as larger capacity magnetic disks and tapes. Hence, third-generation computers typically had a few megabytes (less than 5 megabytes) of main memory and magnetic disks capable of storing a few tens of megabytes of data per disk drive.

    On the software front, standardization of high-level programming languages, timesharing operating systems, unbundling of software from hardware, and the creation of an independent software industry happened during the third generation. FORTRAN and COBOL were the most popular high-level programming languages in those days. The American National Standards Institute (ANSI) standardized them in 1966 and 1968 respectively, and the standardized versions were called ANSI FORTRAN and ANSI COBOL. The idea was that as long as a programmer followed these standards in program writing, he/she could run his/her program on any computer with an ANSI FORTRAN or ANSI COBOL compiler (see Chapter 9 for details). Some more high-level programming languages were introduced during the third generation. Notable among these were PL/I, PASCAL, and BASIC.

    Second-generation computers used batch operating systems. In those systems, users had to prepare their data and programs and then submit them to a computer centre for processing. The operator at the computer centre collected these user jobs and fed them to a computer in batches at scheduled intervals. The respective users then collected their job's output from the computer centre. The inevitable delay resulting from this batch processing approach was frustrating to some users, especially programmers, because they often had to wait for days to locate and correct a few program errors. To rectify this, John Kemeny and Thomas Kurtz of Dartmouth College introduced the concept of the timesharing operating system, which enables multiple users to directly access and share a computer's resources simultaneously, in such a manner that each user feels that no one else is using the computer. This is accomplished by connecting a large number of independent, relatively low-speed, on-line terminals to the computer simultaneously; a separate user uses each terminal to gain direct access to the computer. A timesharing operating system allocates CPU time in such a way that all user programs get a brief share (known as a time slice) of CPU time in turn. The processing speed of the CPU allows it to switch from one user job to another in rapid succession, executing a small portion of each job in the allocated time slice until the job is completed. Each user gets the illusion that he/she alone is using the computer. Introduction of the timesharing concept helped substantially improve the productivity of programmers and made on-line systems feasible, resulting in new on-line applications like airline reservation systems, interactive query systems, etc.
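
    The time slice idea can be illustrated with a small round-robin simulation. In the Python sketch below, the user names, job sizes, and slice length are arbitrary assumptions made for demonstration:

        from collections import deque

        def timeshare(jobs, time_slice=3):
            # Each queue entry is (user, remaining work units).
            queue = deque(jobs.items())
            while queue:
                user, remaining = queue.popleft()
                run = min(time_slice, remaining)
                print(f"CPU runs {user}'s job for {run} unit(s)")
                remaining -= run
                if remaining > 0:
                    queue.append((user, remaining))  # rejoin at back of queue
                else:
                    print(f"{user}'s job completed")

        timeshare({"user1": 5, "user2": 8, "user3": 2})

    Because the CPU cycles through the queue so quickly, each user sees steady progress on his/her job, as if no one else were using the computer.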

    Until 1965, computer manufacturers sold their hardware along with all associated software without separately charging for software. For example, buyers received language translators for all languages supported on a computer they purchased. From the user's standpoint, software was free. However, the situation changed in 1969 when IBM and other computer manufacturers began to price their hardware and software products separately. This unbundling of software from hardware gave users an opportunity to invest only in software of their need and value. For example, buyers could now purchase only the language translators they needed, rather than all the language translators supported on the purchased computer. This led to the creation of many new software houses and the beginning of an independent software industry.

    Another important concept introduced during the third generation was that of a backward compatible family of computers. During this period, IBM introduced its System/360 as a family of computers with backward compatibility: they were different sizes of mainframe systems based on the same machine language. This enabled businesses to upgrade their computers without incurring the costs of replacing peripheral equipment and modifying programs to run on new systems.

    Development and introduction of minicomputers also took place during the third generation. Computers built until the early 1960s were mainframe systems that only large companies could afford to purchase and use. Clearly, a need existed for low-cost smaller computers to fill the gaps left by the bigger, faster, and costlier mainframe systems. Several innovators recognized this need and formed new firms in the 1960s to produce smaller computers. Digital Equipment Corporation (DEC) introduced the first commercially available minicomputer, the PDP-8 (Programmed Data Processor), in 1965. It could easily fit in the corner of a room and did not require the attention of a full-time computer operator. It used a timesharing operating system, and a number of users could access it simultaneously from different locations in the same building. Its cost was about one-fourth that of a traditional mainframe system, making it possible for smaller companies to afford computers. It confirmed the demand for small computers for business and scientific applications, and by 1971, there were more than 25 computer manufacturers in the minicomputer market.

    Fourth Generation (1975-1989)

    The average number of electronic components packed on a silicon chip doubled each year after 1965. This progress soon led to the era of large-scale integration (LSI), when it was possible to integrate over 30,000 electronic components on a single chip, followed by very-large-scale integration (VLSI), when it was possible to integrate about one million electronic components on a single chip. This progress led to a dramatic development - the creation of the microprocessor. A microprocessor contains, on a single chip, all the circuits needed to perform arithmetic, logic, and control functions, the core activities of all computers. Hence, it became possible to build a complete computer with a microprocessor, a few additional primary storage chips, and other support circuitry. It started a new social revolution - the personal computer (PC) revolution. Overnight, computers became incredibly compact. They became inexpensive to make, and it became possible for many people to own a computer.

    By 1978, Apple II from Apple Computer Inc. and the TRS-80 model from the Radio Shack Division of Tandy Corporation were dominant personal computers. By 1980, IBM realized that the personal computer market was too promising to ignore and came out with its own PC in 1981, popularly known as IBM PC. Several other manufacturers used IBM's specification and designed their own PCs, popularly known as IBM compatible PCs or clones. The IBM PC and its clones became a popular standard for the PC industry during the fourth generation.

    During fourth generation, semiconductor memories replaced magnetic core memories resulting in large random access memories with fast access time. Hard disks became cheaper, smaller, and larger in capacity. In addition to magnetic tapes, floppy disks became popular as a portable medium for porting programs and data from one computer to another.

    Significant advancements also took place during the fourth generation in the area of large-scale computer systems. In addition to improved processing and storage capabilities of mainframe systems, the fourth generation saw the advent of supercomputers based on parallel vector processing and symmetric multiprocessing technologies. A supercomputer based on parallel vector processing technology contains a small number of custom-designed vector processors connected to a number of high-speed shared memory modules through a custom-designed, high-bandwidth crossbar switch network. On the other hand, a supercomputer based on symmetric multiprocessing technology uses commodity microprocessors connected to a shared memory through a high-speed bus or a crossbar switch network. Primary builders of supercomputers of the former category included Cray Research and ETA Systems, whereas those of the latter category included IBM, Silicon Graphics, and Digital Equipment Corporation.

    High-speed computer networking also developed during the fourth generation. This enabled interconnection of multiple computers for communication and sharing of data among them. Local area networks (LANs) became popular for connecting computers within an organization or a campus. Similarly, wide area networks (WANs) became popular for connecting computers located at larger distances. This gave rise to networks of computers and distributed systems.

    On the software front, several new developments emerged to match the new technologies of the fourth generation. For example, vendors developed several new operating systems for PCs. Notable among these were MS-DOS, MS-Windows, and Apple's proprietary Mac OS. Since PCs were for individuals who were not computer professionals, companies developed graphical user interfaces to make computers user friendly (easier to use). A graphical user interface (GUI) provides icons (pictures) and menus (lists of choices) that users can select with a mouse. PC manufacturers and application software developers developed several new PC-based applications to make PCs a powerful tool. Notable among these were powerful word processing packages that allowed easy development of documents, spreadsheet packages that allowed easy manipulation and analysis of data organized in columns and rows, and graphics packages that allowed easy drawing of pictures and diagrams. Another useful concept that became popular during the fourth generation was multiple windows on a single terminal screen, which allowed users to see the status of several applications simultaneously in separate windows on the same terminal screen.

    In the area of software for large-scale computers, key technologies that became popular included multiprocessing operating systems and concurrent programming languages. With multiprocessing operating systems, a mainframe system could use multiple processors (a main processor and several subordinate processors) in such a manner that the subordinate processors managed the user terminals and peripheral devices, allowing the main processor to concentrate on processing the main program, improving overall performance. Supercomputers also used multiprocessing operating systems to extract the best performance from the large number of processors used in these systems. Concurrent programming languages further helped in using the multiprocessing capabilities of these systems by allowing programmers to write their applications in such a way that different processors could execute parts of the application in parallel. The most ambitious language of this type was Ada.

    During fourth-generation, the UNIX operating system also became popular for use on large-scale systems. Additionally, due to proliferation of computer networks, several new features were included in existing operating systems to allow multiple computers on the same network to communicate with each other and share resources.

    Some other software technologies that became popular during the fourth generation are the C programming language, object-oriented software design, and object-oriented programming. The C language combines the features of high-level programming languages with the efficiency of an assembly language. The primary objectives of object-oriented software design are to make programs generalized and to build software systems by combining reusable pieces of program code called objects. To facilitate object-oriented software design, several object-oriented programming languages were introduced, of which C++ emerged as the most popular one.
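
    A minimal sketch of the reusable-object idea is given below. Python is used here for illustration (the text names C++ as the most popular object-oriented language of the period), and the class and its usage are assumptions made for demonstration:

        class Stack:
            # A reusable piece of program code: a general-purpose stack object.
            def __init__(self):
                self._items = []

            def push(self, item):
                self._items.append(item)

            def pop(self):
                return self._items.pop()

            def is_empty(self):
                return not self._items

        # The same object type can be reused in different programs unchanged.
        undo_history = Stack()
        undo_history.push("typed 'hello'")
        undo_history.push("deleted a word")
        print(undo_history.pop())  # the most recent action is undone first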

    Fifth Generation (1989-Present)

    The trend of further miniaturization of electronic components, dramatic increase in the power of microprocessor chips, and increase in the capacity of main memory and hard disks continued during the fifth generation. VLSI technology became ULSI (Ultra-Large-Scale Integration) technology in the fifth generation, resulting in the production of microprocessor chips having ten million electronic components. In fact, the speed of microprocessors and the size of main memory and hard disks doubled almost every eighteen months. As a result, many features found in the CPUs of large third- and fourth-generation mainframe systems became part of microprocessor architecture in the fifth generation. This resulted in powerful and compact computers becoming available at cheaper rates, and in the decline of traditional large mainframe systems. Subsequently, processor manufacturers started building multicore processor chips instead of increasingly powerful (faster) single-core processor chips. Multicore chips improve overall performance by handling more work in parallel.

    Due to this fast pace of advancement in computer technology, we see more compact and more powerful computers being introduced almost every year at more or less the same price or even cheaper. Notable among these are portable notebook computers that give the power of a PC to their users even while traveling, powerful desktop PCs and workstations, powerful servers, powerful supercomputers, and handheld computers.

    Storage technology also advanced, making larger main memory and disk storage available in newly introduced systems. Currently, PCs having a few gigabytes (GB) of main memory and a few terabytes (TB) of hard disk capacity are common. Similarly, workstations having 4 to 64 gigabytes of main memory and a few tens of terabytes of hard disk capacity are common. RAID (Redundant Array of Inexpensive Disks) technology enables configuring a bunch of disks as a single large disk. It thus supports larger hard disk space with better built-in reliability. During the fifth generation, optical disks (popularly known as Compact Disks or CDs) and Solid State Disks (SSDs) emerged as popular portable mass storage media.

    In the area of large-scale systems, the fifth generation saw the emergence of more powerful supercomputers based on parallel processing technology. They used multiple processors and were of two types - shared memory and distributed memory parallel computers. In a shared memory parallel computer, a high-speed bus or communication network interconnects a number of processors to a common main memory, whereas in a distributed memory parallel computer, a communication network interconnects a number of processors, each with its own memory. These systems use parallel programming techniques to break a problem into smaller problems and execute them in parallel on the multiple processors of the system. Processors of a shared memory parallel computer communicate through memory access, whereas those of a distributed memory parallel computer communicate through message passing. Distributed memory parallel computers have better scalability (can grow larger in capability) than shared memory parallel computers, and are now built by clustering together powerful commodity workstations using a high-speed commodity switched network. This is known as clustering technology.

    During fifth generation, the Internet emerged with associated technologies and applications. It made it possible for computer users sitting across the globe to communicate with each other within minutes by use of electronic mail (known as e-mail) facility. A vast ocean of information became readily available to computer users through the World Wide Web (known as WWW). Moreover, several new types of exciting applications like electronic commerce, virtual libraries, virtual classrooms, distance education, etc. emerged during the period.

    The tremendous processing power and massive storage capacity of fifth-generation computers also made them a useful and popular tool for a wide range of multimedia applications dealing with information containing text, graphics, animation, audio, and video data. In general, the data size of multimedia information is much larger than that of plain text, because representing graphics, animation, audio, or video in digital form requires a much larger number of bits than representing plain text. Because of this, multimedia computer systems require a faster processor, larger storage devices, larger main memory, a good graphics terminal, and the input/output devices required to play any audio or video associated with a multimedia application program. The availability of multimedia computer systems resulted in a tremendous growth of multimedia applications during the fifth generation.

    In the area of operating systems, some new concepts that gained popularity during the fifth generation include microkernels, multithreading, multicore operating systems, and operating systems for hand-held mobile devices. Microkernel technology enabled designers to model and design operating systems in a modular fashion. This makes operating systems easier to design and implement, easier to modify or extend with new services, and allows users to implement and use their own services. Multithreading technology is a popular way to improve application performance through parallelism. In traditional operating systems, the basic unit of CPU scheduling is a process, but in multithreading operating systems, the basic unit of CPU scheduling is a thread. In such operating systems, a process consists of an address space containing its instructions and data, and one or more threads sharing the same address space. Hence, these systems can create a new thread, switch the CPU between threads, and share resources between threads of the same process more efficiently than between processes, resulting in faster execution and better overall system performance. A multicore operating system can run multiple programs at the same time on a multicore chip, with each core handling a separate program. With the advent of smart phones, tablets, smart watches, and other hand-held smart electronic devices, special operating systems for these devices were designed.
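
    The following Python sketch illustrates the thread idea: several threads of one process update data in the same address space, which separate processes cannot do directly. The worker function and the counts used are assumptions made for demonstration:

        import threading

        counter = [0]            # shared data in the process's address space
        lock = threading.Lock()

        def worker(n):
            for _ in range(n):
                with lock:       # coordinate access to the shared data
                    counter[0] += 1

        threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

        print(counter[0])        # 4000: all four threads updated shared memory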

    In the area of programming languages, concepts that gained popularity during the fifth generation are the Java programming language and parallel programming libraries like MPI (Message Passing Interface) and PVM (Parallel Virtual Machine). Java is used primarily on the World Wide Web. It supports Java-based applets, allowing web pages to have dynamic information and more interactivity with users of web information. MPI and PVM libraries enable development of standardized parallel programs, so that a programmer can easily port a parallel program developed for one parallel computer to other parallel computers and execute it there. Both provide message passing, used primarily for programming distributed memory parallel computers, including clusters of workstations. Recently, programming languages for programming Data Analytics and Artificial Intelligence (AI) applications were developed. Python and R are two such languages.
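
    As an illustration of the message-passing style, here is a minimal sketch using mpi4py, a Python binding for MPI (the binding choice and the data exchanged are assumptions; the text discusses the MPI standard itself). It would be launched with a command such as mpiexec -n 2 python example.py:

        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()   # each process learns its own identity

        if rank == 0:
            # Process 0 sends a partial result to process 1.
            comm.send({"partial_sum": 42}, dest=1, tag=0)
        elif rank == 1:
            # Process 1 receives the message over the interconnect.
            data = comm.recv(source=0, tag=0)
            print("received:", data)

    The same program runs on every processor; each process decides from its rank which part of the work to do and with whom to exchange messages.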

    Figure 1.5 summarizes the key technologies and features of the five computer generations. Although there is a certain amount of overlap between different generations, the approximate period shown against each generation is normally accepted.

    This description of the history of computing, divided into five generations, shows how quickly things have changed in the last few decades. Technological progress in this area is continuing. In fact, the fastest-growth period in the history of computing may still be ahead.

    CLASSIFICATION OF COMPUTERS

    Traditionally, computers were classified as microcomputers, minicomputers, mainframes, and supercomputers based on their size, processing speed, and cost. However, this classification is no longer relevant because, with rapidly changing technology, new models of computers are introduced every few months having much higher performance and costing less than their preceding models. A recently introduced small system can outperform large models of a few years ago in both cost and performance. Hence, today computers are classified based on their mode of use. According to this classification, computers are classified as notebook computers, personal computers, workstations, mainframe systems, supercomputers, client and server computers, and handheld computers.

    Notebook Computers (Laptops)

    Notebook computers are portable computers mainly meant for use by people who need computing resources wherever they go. Their size is about the same as that of an 8½ x 11 inch notebook or smaller, and their weight is around 2 kg or less. They are also known as laptop PCs (laptop personal computers, or simply laptops) because they are as powerful as a PC, and their size and weight are such that they can be used by placing them on one's lap.

    A notebook computer uses an almost full-size keyboard, a flat-screen liquid crystal display, and a trackball, stick, or touchpad (instead of a mouse, because notebook computers are often used without a desk) (see Figure 1.6). It also has an inbuilt hard disk and ports for connecting peripheral devices such as a printer, pen drive, portable disk, etc. The lid with the display screen is foldable in such a manner that, when not in use, a user can fold it flush with the keyboard to convert the system into notebook form. When in use, the lid with the display screen is folded open as shown in the figure. Many models of laptops can be docked on a docking station (a device with additional battery, hard disk, I/O ports, etc.) or a port replicator (a device with additional I/O ports) to take advantage of a big monitor, storage space, and other peripherals such as a printer. Users can connect their laptop to a network and download data (read files) from other computers on the network, or access the Internet. The design of most laptops enables their users to establish wireless connectivity with other stationary computers using WiFi and Bluetooth. Bluetooth is an industrial specification for short-range wireless connectivity using globally unlicensed short-range radio frequencies. It provides a way to establish wireless connectivity between devices such as PCs, laptops, printers, mobile phones, and digital cameras for exchange of information between them.

    Figure 1.6. A notebook computer (Laptop).

    We can use laptops even at places where there is no external power source available (for example while traveling in a train or airplane) because they can operate with chargeable batteries. With a fully charged battery, a laptop can operate for a few hours.

    Laptops normally run the MS-DOS, MS-Windows, Mac OS, or Linux operating system. Normally, users use them for word processing, spreadsheet computing, data entry, Internet browsing, accessing and sending e-mails, and preparing presentation materials while traveling. People also often use them to make presentations at locations away from their working places by plugging them into an LCD (liquid crystal display) projection system.

    The processing capability of a laptop is normally as good as that of an ordinary PC (personal computer) because both use the same type of processor. However, microprocessors for laptops are designed to dynamically adjust their performance to the available power to provide maximum power saving. In fact, a laptop is designed so that each of its devices uses little power and remains suspended when not in use. A laptop generally has less hard disk storage than a PC to keep its total weight to around 2 kg. A laptop is more expensive (2 to 3 times) than a normal PC with a similar configuration.

    Personal Computers (PCs)

    A PC is a non-portable, general-purpose computer that fits on a normal-size office table (leaving some space to keep writing pads and other office stationery), and is used by one person at a time (single-user-oriented). As the name implies, users use PCs for their personal computing needs (such as professional work, personal work, education, and entertainment) either at their work places or at their homes. Hence, PCs can be found in offices, classrooms, homes, hospitals, shops, clinics, etc.

    A PC has several chips (CPU chip, RAM chips, ROM chips, I/O handling chips, etc.) neatly assembled on a main circuit board called the system board or motherboard. The motherboard is what distinguishes one PC from another. Sometimes, we differentiate between two PCs based on the microprocessor chip used as the CPU. Most modern PCs use microprocessor chips manufactured by Intel, AMD (Advanced Micro Devices), or Apple.

    The system configuration of PCs varies depending on their usage. However, the most commonly used configuration consists of a system unit, a monitor (display screen), a keyboard, and a mouse (see Figure 1.7). The system unit, in the form of a box, consists of the main circuit board (with CPU, memory, etc.), hard disk storage, any special add-on cards (such as a network interface card), and ports for connecting peripheral devices such as a printer, pen drive, portable disk, etc.

    Figure 1.7. A personal computer (PC).

    System unit of a PC has expansion slots where we can plug in a wide variety of special-function add-on boards (also called add-on cards). These add-on boards contain electronic circuitry for a wide variety of computer-related functions. Number of available expansion slots varies from computer to computer. Some popular add-on boards are:

    Network Interface Card (NIC). It enables connecting a PC to a network. For example, a user needs an Ethernet card to connect his/her PC to an Ethernet LAN.

    Fax modem card. It enables transferring of images of hard-copy documents from a PC to a remote computer via telephone lines just as we do using a fax (facsimile) machine.

    Color and graphics adapter card. It enables interfacing a PC with graphics and/or color video monitor. For example, a user needs a VGA (video graphics array) card to interface his/her PC with a high-resolution monitor.

    Motion video card. It enables integration of full-motion color video/audio with other output on monitor's screen. Since it accepts multiple inputs, videos can be shown full-screen (one video input) or in windows (multiple video inputs).

    System unit of a PC also has I/O ports (both serial and parallel ports) for connecting external I/O or storage devices (such as a printer and other devices external to the system unit box) to the PC in much the same way as you plug electric equipment into an electrical socket.

    Popular operating systems for PCs are MS-DOS, MS-Windows (comprising Windows XP, Vista), Linux, and UNIX. Apple's Macintosh PCs run Apple's proprietary OS called Macintosh OS or Mac OS. These operating systems support multitasking, enabling a user to switch between tasks.

    PCs generally cost from a few tens of thousands to about a lakh of rupees, depending on configuration. Some major PC manufacturers are Lenovo, Apple, Dell, HCL, Siemens, Toshiba, and Hewlett-Packard.

    Workstations

    A workstation is a powerful desktop computer designed to meet computing needs of engineers, architects, and other professionals who need greater processing power, larger storage, and better graphics display facility than what normal PCs provide. For example, users use workstations commonly for computer-aided design (CAD), simulation of complex scientific and engineering problems, visualization of results of simulation, and multimedia applications such as for creating special audio-visual effects in movies or television programs.

    A workstation is similar to a high-end PC, and typically, only one person uses it at a time (like a PC). Characteristics used to differentiate between the two are:

    Processing power. Processing power of a workstation is more than that of an average PC.

    Storage capacity. Workstations have larger main memory (typically few tens of GB) as compared to PCs (typically few GB). Hard disk capacity of a workstation is also much more (typically few TB) as compared to that of PCs (typically few hundreds of GB).

    Display facility. Most workstations have a large-screen monitor (21 inch or more) capable of displaying high-resolution graphics. PCs normally use monitors having smaller screen (19 inch or less).

    Processor design. PCs normally use CPUs based on CISC technology, whereas workstations use CPUs based on RISC technology.

    Operating system. Unlike PCs, which can run any desktop OS, workstations generally run a server version of the Windows, Mac OS, Linux, or UNIX operating system. Unlike most operating systems for PCs, which are single-user oriented, operating systems for workstations support a multi-user environment, making them suitable for use as servers in a network environment.

    Expansion slots. Workstations usually have more expansion slots than PCs providing greater flexibility of supporting additional functionality. For example, a workstation can have two or more NICs plugged into it to enable it to communicate on two different networks (say, Ethernet and ATM).

    Workstations generally cost from few lakhs to few tens of lakhs of rupees depending on configuration. Some manufacturers of workstations are Apple, IBM, Dell, and Hewlett-Packard (HP).

    Mainframe Systems

    Several organizations such as banks, insurance companies, hospitals, railways, etc., need on-line processing of a large number of transactions, and require computer systems having massive data storage and processing capabilities. Moreover, often a large number of users need to share a common computing facility, such as in research groups, educational institutions, engineering firms, etc. Mainframe systems are used for meeting the computing needs of such organizations.

    Figure 1.8 shows a configuration of a mainframe system. It consists of the following components:

    Host, front-end, and back-end computers. A mainframe system usually consists of several subordinate computers in addition to the main or host computer. Host computer carries out most computations, and has direct control of all other computers. Other computers relieve the host computer of certain routine processing requirements. For example, a front-end computer handles all communications to/from all user terminals, thus relieving host computer of communications-related processing requirements. Similarly, a back-end computer handles all data I/O operations, thus relieving host computer of locating an I/O device and transferring data to/from it. Host and other computers are located in a system room to which entry is restricted to system administrators and maintenance staff only (see Figure 1.8).

    Console(s). One or more console terminals (located in system room) connect directly to the host computer. System administrators use them to monitor the system's health, or perform some system administration activities such as changing system configuration, installing new software, taking system backup, etc.

    Storage devices. For large volume data storage, users connect several magnetic disk drives (located in system room) directly to the back-end computer. Host computer accesses data on these disks via the back-end computer. In addition, there are few tape drives and a magnetic tape library for backup/restoration of data to/from magnetic tapes. System administrators use the tape library (located in system room) to backup data from magnetic disks to magnetic tapes. Tape drives (located in users' room) enable users to bring their input data on tape for processing, or to take their output data on tape after processing.

    Figure 1.8. A mainframe system. System room is shaded.

    User terminals. User terminals serve as access stations. Users use them to work on the system. Although the figure shows all user terminals in the users' room, some of them may be located at geographically distributed locations. Since mainframe systems allow multiple users to use the system simultaneously (through user terminals), their operating systems support multiprogramming with timesharing. This enables all users to get good response time, and the illusion that the system is attending to their jobs.

    Output devices. User terminals also serve the purpose of soft copy output devices. However, for hard copy outputs, there are one or more printers and one or more plotters connected to back-end computer. These output devices are also located in users' room so that users can easily access them to collect their outputs.

    Configuration of a mainframe system depends a lot on the type of its usage and the kind of its users. The example of Figure 1.8 is just one possible configuration.

    A mainframe system looks like a row of large file cabinets and needs a large room with closely monitored humidity and temperature. It may cost anywhere from a few tens of lakhs to a few crores of rupees, depending on configuration. Mainframe systems having a smaller configuration (slower host and subordinate computers, less storage space, and fewer user terminals) are known as minicomputers, but there is no well-defined boundary for differentiating between the two. Two major vendors of mainframe systems are IBM and Fujitsu.

    Supercomputers

    Supercomputers are the most powerful and expensive computers available at any given time. Users use them primarily for processing complex scientific applications requiring enormous processing power. Scientists often build models of complex processes and simulate them on a supercomputer. For example, in nuclear fission, when a fissionable material approaches a critical mass, scientists may want to know exactly what will happen during every millisecond of a nuclear chain reaction. Such scientists use a supercomputer to model actions and reactions of literally millions of atoms as they react. A scientific application like this involves manipulation of a complex mathematical model, often requiring processing of trillions of operations in a reasonable time, necessitating use of a supercomputer. Some supercomputing applications (applications that need supercomputers for processing) are:

    Petroleum industry uses supercomputers to analyze volumes of seismic data gathered during oil-seeking explorations to identify areas where there is possibility of getting petroleum products inside the earth. This helps in more detailed imaging of underground geological structures so that expensive resources for drilling and extraction of petroleum products from oil wells can be effectively channelized to areas where analysis results show better possibility of getting petroleum deposits.

    Aerospace industry uses supercomputers to simulate airflow around an aircraft at different speeds and altitudes. This helps in producing effective aerodynamic designs to develop aircraft with superior performance.

    Automobile industry uses supercomputers to do crash simulation of an automobile design before releasing it for manufacturing. Doing so is less expensive, more revealing, and safer than crashing a real model. This helps in producing better automobiles that are safer to ride.

    Structural mechanics industry uses supercomputers to solve complex structural engineering problems that designers of various types of civil and mechanical structures need to deal with to ensure safety, reliability, and cost effectiveness. For example, the designer of a large bridge has to ensure that the bridge will work under various atmospheric conditions, pressures from wind velocity, etc., and under different load conditions. Actual construction and testing of such expensive structures is prohibitive in most cases. Using mathematical modeling techniques, various combinations can be tried out on a supercomputer without actually constructing or manufacturing such structures, and the optimum design can be picked for final implementation.

    Meteorological centers use supercomputers for weather forecasting. They feed weather data, supplied by a worldwide network of space satellites, airplanes, and ground stations, to a supercomputer. The supercomputer analyzes the data using a series of computer programs to arrive at forecasts. This analysis involves solving complex mathematical equations modeling atmosphere and climate processes. For example, in India, the National Center for Medium Range Weather Forecasting (NCMRWF), located in New Delhi, uses a supercomputer for medium-range weather forecasts, which are crucial for Indian agriculture.

    Material scientists and physicists use supercomputers to design new materials. Nanostructure technology requires microscopic investigation of mechanical, electrical, optical, and structural properties of materials having dimensions of the order of nanometer. For this, scientists need to simulate nano-scale materials like clusters of atoms, quantum dots, quantum wires, etc. using principles of molecular dynamics technique. Such simulations are compute-intensive as they combine complexity of molecular dynamics and electronic structure. Hence, they need supercomputers to complete them in a reasonable time.

    Film and TV industries use supercomputers to create special effects for movies and TV programs. Supercomputers help in producing computer-generated images using advanced graphics features in a short time. Movies or TV programs incorporate the produced images in them to create special effects. Thus, supercomputers substantially reduce the time needed to produce a full-length feature film having several special effects.

    Supercomputers use multiprocessing and parallel processing technologies to solve complex problems faster. That is, they use multiple processors, and parallel processing enables dividing a complex problem into smaller problems that different processors can process in parallel. A parallel program breaks up a problem into smaller computational modules so that different processors can work independently and cooperate to solve it faster. Hence, if a problem takes 100 hours to process on a single-processor system and it can be broken into 100 smaller computational modules, a supercomputer having 100 processors can theoretically solve it in about one hour. Since modern supercomputers use parallel processing technology, they are also known as parallel computers or parallel processing systems. They are also known as massively parallel processors because they use thousands of processors.
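
    The decomposition idea can be sketched in a few lines of Python using a pool of worker processes (the module count, module size, worker function, and pool size below are assumptions made for demonstration):

        from multiprocessing import Pool

        def module(work_item):
            # One smaller computational module of the larger problem.
            return sum(i * i for i in range(work_item))

        if __name__ == "__main__":
            work = [100_000] * 100           # 100 independent modules
            with Pool(processes=4) as pool:  # 4 processors work in parallel
                results = pool.map(module, work)
            print(len(results), "modules completed")

    With enough processors and fully independent modules, the ideal running time approaches (time on one processor) / (number of processors), as in the 100-hour example above.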

    We measure speed of modern supercomputers in teraflops and petaflops. A teraflop is 10¹² floating-point arithmetic operations per second and a petaflop is 10¹⁵ floating-point arithmetic operations per second.

    A supercomputer, due to its specialized nature and huge cost, is often treated as a national resource that is shared by several academic and research organizations. For developing countries, it is not economically viable to have many such systems because of the cost involved in setting up a supercomputing facility, maintaining the facility, and fast obsolescence of technology.

    Major vendors of supercomputers include Cray Research Company, IBM, Silicon Graphics, Fujitsu, NEC, Hitachi, Hewlett-Packard, and Intel. Realizing the importance of supercomputers in a country's overall progress, national effort to build indigenous supercomputers has been underway in India. Development of PARAM series of supercomputers by the Centre for Development of Advanced Computing (C-DAC), Anupam series of supercomputers by the Bhabha Atomic Research Centre (BARC), and PACE series of supercomputers by the Defence Research and Development Organization (DRDO), Hyderabad are results of this effort. Figure 1.9 shows the PARAM Padma supercomputer (one of the PARAM systems) developed by C-DAC.

    Figure 1.9. C-DAC's PARAM Padma supercomputer [reproduced with permission from C-DAC].

    Cost of a supercomputer may range from few tens of lakhs to few hundreds of crores of rupees depending on configuration.

    Client and Server Computers

    With increased popularity of computer networks, it became possible to interconnect several computers that can communicate with each other over a network. In such a computing environment, multiple users can share several resources/services for cost-effective usage. Moreover, the shared resources/services can be best managed/offered centrally. A few examples of such resources/services are:

    File server. It provides a central storage facility to store files of several users on a network.

    Database server. It manages a centralized database, and enables several users on a network to have shared access to the same database.

    Print server. It manages one or more printers, and accepts and processes print requests from any user in a network.

    Name server. It translates names into network addresses enabling different computers on a network to communicate with each other.

    In these cases, it is usual to have one process that owns a resource or service and is in charge of managing it. This process accepts requests from other processes that want to use the resource or service. The process that owns the resource and does this management is called a server process, and the computer on which the server process runs is called a server computer because it services requests for use of the resource. Other processes that send service requests to the server are called client processes, and the computers on which client processes run are called client computers. Note that there may be multiple client computers sending service requests to the same server computer. Figure 1.10 shows a generic client-server computing environment. It comprises the following entities:

    Client. A client is generally a single-user system (laptop, PC, workstation, or hand-held computer) that provides user-friendly interface to end users. It runs client processes that send service requests to a server. Generally, there are multiple clients in a client-server computing environment and each client has its own system-specific user interface for allowing users to interface with the system.

    Server. A server is a relatively powerful system (workstation, mainframe, or supercomputer) that manages a shared resource and/or provides a set of shared user services to clients. It runs a server process that services client requests for use of the shared resource managed by the server. Hence, a server's operating system has features to handle multiple client requests simultaneously. Servers are headless machines, without a monitor, keyboard, or mouse attached to them, because users do not interact with them directly but use them over the network.

    Network. A network interconnects all clients and servers of a client-server computing environment. It may be a LAN, WAN, or an Internet of networks.

    A set of computers interconnected together to form a client-server computing environment is collectively known as a distributed computing system or distributed system. Client-server computing involves splitting an application into tasks and putting each task on a computer, which can handle it most efficiently. This usually means putting processing for presentation on user's machine (the client) and data management and storage on a server. Depending on the application and software used, all data processing may occur on client or be split between client and server. Server software accepts a request for service from a client and returns the results of processing that request to the client. The client then manipulates the received results and presents it in proper format to the end user.

    Notice that in client-server computing, the central idea is allocation of application-level tasks between clients and servers. Computers and operating systems of a client and server may be different. For example, a client may be a PC running MS-Windows OS, whereas a server may be a mainframe system running UNIX OS. In fact, a client-server computing environment may have several different types of client computers with different operating systems and several different types of server computers with different operating systems. As long as a client and server share a common communication protocol and support the same applications, these lower-level differences are irrelevant. The communication protocol enables a client and server to inter-operate even when they are different types of computers running different operating systems. One such standard communication protocol is TCP/IP.
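
    A minimal sketch of one client and one server communicating over TCP/IP, using Python's standard socket library, is given below (the host, port, and messages are assumptions made for demonstration; a real server would loop to serve many clients):

        import socket
        import threading

        HOST, PORT = "127.0.0.1", 50007
        ready = threading.Event()

        def server():
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
                srv.bind((HOST, PORT))
                srv.listen()
                ready.set()                  # signal that the server is listening
                conn, _ = srv.accept()       # accept a client's service request
                with conn:
                    request = conn.recv(1024)               # service the request...
                    conn.sendall(b"result for " + request)  # ...and return results

        threading.Thread(target=server, daemon=True).start()
        ready.wait()                         # wait until the server is ready

        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
            cli.connect((HOST, PORT))        # the client sends a service request
            cli.sendall(b"query-1")
            print(cli.recv(1024).decode())   # prints: result for query-1

    The client and server here happen to run on one machine, but the same code works across a network, and the two ends may be different types of computers running different operating systems, as long as both speak TCP/IP.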

    In a client-server computing environment, it is common for a server to use the services of another server, and hence to be both a client and a server at the same time. For example, let us assume that a client-server computing environment has clients, a file server, and a disk block server. Any client can send a file access request to the file server. On receiving such a request, the file server checks the access rights, etc. of the user making the request. However, instead of reading/writing the file blocks itself, it sends a request to the disk block server for the requested data blocks. The disk block server returns the requested data blocks to the file server, which then extracts the desired data from them and returns it to the client. In this scenario, the file server is both a server and a client. It is a server for the clients, but a client of the disk block server. Hence, the concept of client and server computers is purely role-based, and the roles may change dynamically as the role of a computer changes.

    Figure 1.10. A generic client-server computing environment.
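    The file-server scenario above can be sketched in a few lines of Python. All names here (DiskBlockServer, FileServer, read_file, and the sample file table) are hypothetical, invented only to show how one component can serve requests while issuing requests of its own; a real system would run the two servers on separate machines and communicate over the network.

    # Illustrative sketch (hypothetical names): a file server that is a server
    # to its clients and, at the same time, a client of a disk block server.

    class DiskBlockServer:
        """Serves raw data blocks; knows nothing about files or users."""
        def __init__(self, blocks):
            self.blocks = blocks                     # block number -> bytes

        def read_blocks(self, block_numbers):
            return [self.blocks[n] for n in block_numbers]

    class FileServer:
        """A server to its clients, and a client of the disk block server."""
        def __init__(self, block_server, file_table, acl):
            self.block_server = block_server
            self.file_table = file_table             # file name -> block numbers
            self.acl = acl                           # file name -> allowed users

        def read_file(self, user, name):
            if user not in self.acl.get(name, ()):   # check access rights first
                raise PermissionError(user)
            # Acting as a client: request the raw blocks from the block server.
            blocks = self.block_server.read_blocks(self.file_table[name])
            return b"".join(blocks)                  # extract and assemble file data

    # Usage: the client/server labels are fixed only by who requests and who serves.
    bs = DiskBlockServer({0: b"Hello, ", 1: b"world"})
    fs = FileServer(bs, {"greeting.txt": [0, 1]}, {"greeting.txt": {"alice"}})
    print(fs.read_file("alice", "greeting.txt"))     # b'Hello, world'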

    Handheld Computers

    A handheld computer, or simply handheld, is a small computing device whose size, weight, and design allow a user to operate it while holding it in hand (it need not be kept on a table or lap to be operated). It is also known as a palmtop because it can be kept on a palm and operated.

    Handheld computers are computing devices of choice for people requiring computing power anytime, anywhere. There are many types of handheld computers available today. Some popular ones are described here.

    Tablet PC

    A tablet PC is a miniaturized laptop. It usually provides all the features of a laptop, with the following differences or enhancements:

    Light weight. A tablet PC is much lighter than a laptop. To reduce weight, designers remove a few infrequently used devices from its basic structure and provide them as separate add-ons.

    Screen flip. A user can rotate, turn around, and flip the screen of a tablet PC over the keyboard area. Flipping the screen over the keyboard hides the keyboard and leaves only the screen visible and usable. In this configuration, the user holds the tablet PC in hand and uses it as a digital tablet, and the display switches to portrait mode (from the normal landscape mode used with the keyboard).

    Handwriting recognition. Usually, a tablet PC comes with a specially designed pen that a user can use to write directly on the screen. The underlying OS translates the pen movements into smooth plot lines that the running application can interpret as handwritten input.

    Voice recognition. Usually, tablet PCs have a voice recognition feature that allows voice-input-capable applications to accept audio/voice commands. This feature also enables input of voice data.

    Special design for tablet use. When a user flips the screen into tablet mode, he/she cannot use the keyboard because the screen hides it. Hence, apart from voice and handwriting recognition, some quick-access hardware keys and some on-screen programmable software keys are provided. A user can use these keys to invoke predefined actions that are impossible or difficult to perform with the pen.

    PDA/Pocket PC

    Personal Digital Assistant (PDA) was originally introduced as a Personal Information Manager (PIM) device. PIM features include a contact list, calendar, task list, e-mail, pocket word-processing application, pocket spreadsheet application, presentation viewer, and a host of other applications that come in handy while on the move. A PDA has a decent-size LCD touch screen with a pen for handwriting recognition. It usually has a PC-based synchronization utility that a user can use to transfer and synchronize identified data between a PC and the PDA. With add-on memory cards, a user can increase its storage as required. Some PDAs also provide a USB extension port, which a user can use to connect external devices for extended features like an external monitor, LCD projector, etc. Almost all PDAs have some digital camera capability as well.

    Newer PDAs also pack networking capability using WiFi, Bluetooth, etc. Due to this network connectivity, PDAs have applications in many domains such as medicine, teaching/training, data collection, sports, and Global Positioning System (GPS) based location and route finding. Several PDAs also provide GSM/GPRS service connectivity, and thus, their users can use them as phones for making and receiving cell phone calls, sending SMS, etc.

    PDAs come with several operating system options, including MS-Windows Mobile, iOS, PalmOS, SymbianOS, Linux, and Blackberry OS.

    Smartphone

    A smartphone is a fully functional mobile phone with computing power. The major distinction between a PDA and a smartphone is that while a PDA is mostly a computing platform like a PC with optional phone capability, a smartphone is a cell phone with PDA-like capability. In essence, a smartphone is voice-centric, whereas a PDA is data-centric.

    Normally, smartphones are smaller than PDAs. A user can use a smartphone with one hand, whereas PDAs usually require both hands for operation.

    Figure 1.11 shows samples of some handheld computers.

    Figure 1.11. Samples of handheld computers.

    Points to Remember

    A computer is a fast calculating device. It is also known as a data processor because it can store, process, and retrieve data whenever desired.

    The activity of processing data using a computer is called data processing. Data is raw material used as input to data processing and information is processed data obtained as output of data processing.

    Key characteristics of computers are: automatic, speed, accuracy, diligence, versatility, memory, lack of intelligence, and lack of feelings.

    Charles Babbage is considered the father of modern digital programmable computers.

    Some of the well-known early computers are: MARK I (1937-44), ABC (1939-42), ENIAC (1943-46), EDVAC (1946-52), EDSAC (1947-49), UNIVAC I (1951), IBM-701 (1952), and IBM-650 (1953).

    Dr. John von Neumann introduced the stored program concept that considerably influenced the development of modern digital computers. Due to this feature, we often refer to modern digital computers as stored program digital computers.

    Generation in computer talk provides a framework for the growth of the computer industry based on key technologies developed. Originally, it was used to distinguish between hardware technologies but was later extended to include both hardware and software technologies.

    To date, there are five computer generations: first, second, third, fourth, and fifth.

    Figure 1.5 summarizes the key hardware and software technologies and key characteristics of computers of five computer generations.

    Traditionally, computers were classified as microcomputers, minicomputers, mainframes, and supercomputers based on their size, processing speed, and cost. However, with rapidly changing technology, this classification is no longer relevant.

    Today, computers are classified based on their mode of use. According to this classification, computers are classified as notebook computers, personal computers, workstations, mainframe systems, supercomputers, client and server computers, and handheld computers.

    Notebook computers (or laptops) are portable computers that are small enough to fit inside a briefcase, light enough to carry around easily, and operate on rechargeable batteries so that users can use them even at places with no external power source available. They normally run the MS-DOS, MS-Windows, Mac OS, or Linux operating system. Users commonly use them for word processing, spreadsheet computing, data entry, Internet browsing, accessing and sending e-mails, and preparing presentation materials while travelling.

    A personal computer is a non-portable, general-purpose computer that fits on a normal-size office table, and only one person uses it at a time. PCs meet the personal computing needs of individuals either at their workplaces or at their homes. They normally run the MS-DOS, MS-Windows (such as Windows XP or Vista), Mac OS, Linux, or UNIX operating system, and support multitasking, enabling a user to switch between tasks.

    A workstation is a powerful desktop computer designed to meet the computing needs of engineers, architects, and other professionals who need greater processing power, larger storage, and better graphics display facility than normal PCs provide. Users commonly use workstations for computer-aided design, multimedia applications, simulation of complex scientific and engineering problems, and visualization of simulation results. Workstations generally run a server version of the Windows, Mac OS, Linux, or UNIX operating system and support a multi-user environment.

    Mainframe systems are used to meet the computing needs of mid- to large-size organizations such as banks, insurance companies, hospitals, railways, etc. They are also used in environments where a large number of users need to share a common computing facility, such as research groups, educational institutions, and engineering firms. A typical configuration of a mainframe system consists of a host computer, a front-end computer, a back-end computer, one or more console terminals, several magnetic disk drives, a few tape drives, a magnetic tape library, several user terminals, several printers, and one or more plotters. Mainframe systems having a smaller configuration (slower host and subordinate computers, less storage space, and fewer user terminals) are known as minicomputers.

    Supercomputers are the most powerful and expensive computers available at any given time. Users use them primarily for processing complex scientific applications requiring enormous processing power. Some supercomputing applications are analysis of large volumes of seismic data, simulation of airflow around an aircraft, crash simulation of an automobile design, solving complex structural engineering problems, genome sequence analysis in bioinformatics, design of new materials, creation of special effects in movies, and weather forecasting.

    Modern supercomputers use multiprocessing and parallel processing technologies to solve complex problems faster, and hence, they are also known as parallel computers or parallel processing systems. They are also known as massively parallel processors because they use thousands of processors.

    A handheld computer is a device whose size, weight, and design are such that a user can use it by holding it in hand. It is also known as a palmtop because it can be kept on a palm and operated. Some popular handheld computers are the tablet PC, PDA/Pocket PC, and smartphone.

    In a client-server computing environment, a client is generally a single-user system that provides a user-friendly interface to end users. It runs client processes that send service requests to a server. A server is a relatively powerful system that manages a shared resource and/or provides a set of shared user services to clients. It runs a server process that services client requests for use of the shared resource managed by it.

    A set of computers interconnected to form a client-server computing environment is collectively known as a distributed computing system or distributed system.

    Client-server computing involves splitting an application into tasks and assigning each task to the computer that can handle it most efficiently. The computers and operating systems of a client and a server may be different. As long as a client and server share a common communication protocol and support the same applications, these lower-level differences are irrelevant.

    In a client-server computing environment, it is common for one server to use the services of another server, and hence, be both a client and a server at the same time. Thus, the concept of client and server computers is purely role-based and may change dynamically as a computer's role changes.

    Questions

    What is a computer? Why is it also known as a data processor?

    What is data processing? Differentiate between data and information. Which is more useful to people and why?

    List and explain some important characteristics of a computer.

    Who is known as the father of modern digital programmable computers and why?

    Who invented the concept of stored program? Why is this concept so important?

    What is generation in computer talk?
