
Operating System Notes:-

Spooling:-
Acronym for simultaneous peripheral operations on-line, spooling refers to putting jobs in a buffer, a
special area in memory or on a disk where a device can access them when it is ready. Spooling is useful
because devices access data at different rates. The buffer provides a waiting station where data can rest
while the slower device catches up.

The most common spooling application is print spooling. In print spooling, documents are loaded into a
buffer (usually an area on a disk), and then the printer pulls them off the buffer at its own rate. Because the
documents are in a buffer where they can be accessed by the printer, you can perform other operations on
the computer while the printing takes place in the background. Spooling also lets you place a number of
print jobs on a queue instead of waiting for each one to finish before specifying the next one.
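The queueing behaviour described above can be sketched in Python. This is a toy model, not a real spooler; the class and document names are invented for illustration:

```python
from collections import deque

class PrintSpooler:
    """Toy print spooler: jobs wait in a buffer and the (slower)
    printer drains them at its own pace."""

    def __init__(self):
        self.queue = deque()       # the spool buffer

    def submit(self, document):
        # The application returns immediately; the document waits here.
        self.queue.append(document)

    def printer_step(self):
        # Called whenever the printer is ready for its next job.
        if self.queue:
            return self.queue.popleft()
        return None

spool = PrintSpooler()
for doc in ["report.txt", "letter.txt", "invoice.txt"]:
    spool.submit(doc)              # all queued without waiting

printed = []
while (job := spool.printer_step()) is not None:
    printed.append(job)            # printer pulls jobs off at its own rate
```

Note that the submitting program never waits for the printer: it only appends to the buffer, which is the whole point of spooling.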

JOB:- A task performed by a computer system. For example, printing a file is a job. Jobs can be performed by a
single program or by a collection of programs.

BUFFER:- A temporary storage area, usually in RAM. The purpose of most buffers is to act as a holding
area, enabling the CPU to manipulate data before transferring it to a device.

Because the processes of reading and writing data to a disk are relatively slow, many programs keep track
of data changes in a buffer and then copy the buffer to a disk. For example, word processors employ a
buffer to keep track of changes to files. Then when you save the file, the word processor updates the
disk file with the contents of the buffer. This is much more efficient than accessing the file on the disk each
time you make a change to the file.
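The word-processor behaviour described above can be sketched in Python. This is an illustrative model, not a real editor's implementation; the class name and file contents are invented:

```python
import os
import tempfile

class BufferedFile:
    """Keep edits in an in-memory buffer and touch the disk file
    only when save() is called."""

    def __init__(self, path):
        self.path = path
        self.buffer = []              # pending changes live in RAM

    def edit(self, text):
        self.buffer.append(text)      # cheap: no disk access per edit

    def save(self):
        # One (relatively slow) disk write covers many edits.
        with open(self.path, "a") as f:
            f.writelines(self.buffer)
        self.buffer.clear()

path = os.path.join(tempfile.mkdtemp(), "notes.txt")
doc = BufferedFile(path)
doc.edit("first change\n")
doc.edit("second change\n")
assert doc.buffer                     # changes are only in RAM so far
doc.save()                            # now they reach the disk
with open(path) as f:
    contents = f.read()
```

If the program crashed before `save()`, everything in `doc.buffer` would be lost, which is exactly why editors save periodically.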

Note that because your changes are initially stored in a buffer, not on the disk, all of them will be lost if the
computer fails during an editing session. For this reason, it is a good idea to save your file periodically.
Most word processors automatically save files at regular intervals.

Buffers are commonly used when burning data onto a compact disc, where the data is transferred to the
buffer before being written to the disc.

Another common use of buffers is for printing documents. When you enter a PRINT command, the
operating system copies your document to a print buffer (a free area in memory or on a disk) from which
the printer can draw characters at its own pace. This frees the computer to perform other tasks while the
printer is running in the background. Print buffering is called spooling.

Most keyboard drivers also contain a buffer so that you can edit typing mistakes before sending your
command to a program. Many operating systems, including DOS, also use a disk buffer to temporarily hold
data that they have read from a disk. The disk buffer is really a cache.

(v.) To move data into a temporary storage area.

I/O Interrupts:-

General Purpose of Interrupts

The temporary stopping of the current program routine, in order to execute some higher
priority I/O subroutine, is called an interrupt. The interrupt mechanism in the CPU forces
a branch out of the current program routine to one of several subroutines, depending upon
which level of interrupt occurs.

I/O operations are started as a result of the execution of a program instruction. Once 
started, the I/O device continues its operation at the same time that the job program is 
being executed. Eventually the I/O operation reaches a point at which a program routine 
that is related to the I/O operation must be executed. At that point an interrupt is 
requested by the I/O device involved. The interrupt action results in a forced branch to the 
required subroutine.

In addition to the routine needed to start an I/O operation, subroutines are required to:

1. Transfer a data word between an I/O device and main storage (for write or read 
operations) 
2. Handle unusual (or check) conditions related to the I/O device 
3. Handle the ending of the I/O device operation 

Note: Some I/O devices do not require program-instruction handling of data transfers to
or from core storage. These devices are described in subsequent sections of this book.
Their method of transferring data is called cycle steal and is not related to the interrupt
program-subroutine method of handling data described in this section.

In order to understand the interrupt scheme, first examine the start of an I/O operation. 
Then contrast this operation with an interrupt occurrence, which in many respects is 
similar to the beginning of the I/O operation.

Starting an I/O Operation

Shown in the following diagram are:

1. The job program routine (which, for this discussion, includes those program steps 
not used for I/O operation handling)

2. The I/O program routine that starts an I/O device 
3. The I/O device operation (such as moving a punched card through the read feed of 
a card reader) 

Branching from the job routine to the I/O routine occurs at point A in the diagram. This
branch is a program-controlled operation that is started because the job program is at a
point at which the I/O operation is required (such as the reading of a card in the card
reader). Similarly, when the end of the I/O routine is reached, a program-controlled
branch is made back to the job routine (point B). (Program-controlled means that the
logical point at which the branch occurs is determined by the program; no forcing is
performed by the CPU.)

Because the CPU can execute only one instruction at a time, the I/O routine and the job 
routine are not overlapped.

But the I/O device operation is overlapped with program­instruction execution because 
the I/O device can perform its functions without using the CPU mechanism required for 
execution of program instructions. In other words, the I/O device operation, once started, 
continues while program­instruction execution takes place in the CPU.

In summary, the important points to notice about starting the I/O operation are:

1. Branching operations to the I/O routine and then back to the job routine are under 
program control. 
2. Once started, the I/O device operates at the same time as the current program 
routine (I/O or job) is being executed. 

Interrupt Action

Assume that the I/O device in operation is the card reader (1442). The I/O device 
operation that has been started, then, is the moving of a card past the read station.

As soon as the card is moved far enough for one card column to be read, the card reader 
signals the CPU. This signal to the CPU is an interrupt request. The interrupt, however, 
does not occur until execution of the current instruction is completed. At that time, a 
forced branch occurs to an interrupt-handling subroutine.

The interrupt action can be shown pictorially in the following way:

Logical requirements of the interrupt subroutine are carried out before a return to the 
interrupted program.

To return to the program routine that was in progress before the interrupt occurred, 
another branch must be provided. This, however, is a program­controlled branch and, 
therefore, is not forced by the CPU.
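The forced branch to "one of several subroutines, depending upon which level of interrupt occurs" behaves like a table lookup. A schematic Python sketch follows; the levels and handler names are invented for illustration, and real hardware does this with an interrupt vector held in memory:

```python
def data_transfer_handler(device):
    return f"transferred word for {device}"

def check_condition_handler(device):
    return f"handled check condition on {device}"

def end_of_operation_handler(device):
    return f"ended operation on {device}"

# Interrupt vector: level -> subroutine (levels are illustrative).
interrupt_vector = {
    0: data_transfer_handler,
    1: check_condition_handler,
    2: end_of_operation_handler,
}

def interrupt(level, device):
    # The "forced branch": the current instruction finishes (not
    # modelled here), then control jumps to the subroutine for this
    # interrupt level.
    handler = interrupt_vector[level]
    result = handler(device)
    # Returning stands in for the program-controlled branch back
    # to the interrupted routine.
    return result

log = interrupt(0, "card reader")
```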

STORAGE HIERARCHY

The range of memory and storage devices within the computer system. The following list
starts with the slowest devices and ends with the fastest.

VERY SLOW

Punch cards (obsolete)


Punched paper tape (obsolete)

FASTER

Bubble memory
Floppy disks

MUCH FASTER

Magnetic tape
Optical disks (CD-ROM, DVD-ROM, MO, etc.)
Magnetic disks with movable heads
Magnetic disks with fixed heads (obsolete)
Low-speed bulk memory

FASTEST

Flash memory
Main memory
Cache memory
Microcode
Registers

Disk scheduling
In multiprogramming systems several different processes may want to use the system's
resources simultaneously. For example, processes will contend to access an auxiliary
storage device such as a disk. The disk drive needs some mechanism to resolve this
contention, sharing the resource between the processes fairly and efficiently.

A magnetic disk consists of a collection of platters which rotate about a central
spindle. These platters are metal disks covered with magnetic recording material on both
sides. Each disk surface is divided into concentric circles called tracks. Each track is
divided into sectors where information is stored. The reading and writing device, called
the head, moves over the surface of the platters until it finds the track and sector it
requires. This is like finding someone's home by first finding the street (track) and then
the particular house number (sector). There is one head for each surface on which
information is stored each on its own arm. In most systems the arms are connected
together so that the heads move in unison, so that each head is over the same track on
each surface. The term cylinder refers to the collection of all tracks which are under the
heads at any time.
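The track/sector "street and house number" addressing above maps onto a single linear sector number via the classic CHS-to-LBA formula. A small Python sketch; the geometry numbers are made up for illustration:

```python
def chs_to_lba(cylinder, head, sector, heads_per_cylinder, sectors_per_track):
    """Classic CHS -> LBA mapping: count all sectors in earlier
    cylinders, plus earlier surfaces (tracks) in this cylinder,
    plus this sector. Sectors are conventionally numbered from 1."""
    return ((cylinder * heads_per_cylinder + head) * sectors_per_track
            + (sector - 1))

# Illustrative geometry: 4 surfaces, 16 sectors per track.
lba = chs_to_lba(cylinder=2, head=1, sector=5,
                 heads_per_cylinder=4, sectors_per_track=16)
```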

In order to satisfy an I/O request the disk controller must first move the head to the
correct track and sector. Moving the head between cylinders takes a relatively long time
so in order to maximise the number of I/O requests which can be satisfied the scheduling
policy should try to minimise the movement of the head. On the other hand, minimising
head movement by always satisfying the request of the closest location may mean that
some requests have to wait a long time. Thus, there is a trade-off between throughput (the
average number of requests satisfied in unit time) and response time (the average time
between a request arriving and it being satisfied). Various different disk scheduling
policies are used:

First Come First Served (FCFS)


The disk controller processes the I/O requests in the order in which they arrive, thus
moving backwards and forwards across the surface of the disk to get to the next requested
location each time. Since no reordering of requests takes place the head may move almost
randomly across the surface of the disk. This policy is fair to all requests, but it makes
no attempt to optimise either head movement or throughput.

Shortest Seek Time First (SSTF)


Each time an I/O request has been completed the disk controller selects the waiting
request whose sector location is closest to the current position of the head. The movement
across the surface of the disk is still apparently random but the time spent in movement is
minimised. This policy will have better throughput than FCFS but a request may be
delayed for a long period if many closely located requests arrive just after it.
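The throughput difference between FCFS and SSTF can be measured as total head movement. A minimal Python sketch, using an invented request queue and start cylinder:

```python
def fcfs(start, requests):
    """Total head movement servicing requests in arrival order."""
    movement, pos = 0, start
    for r in requests:
        movement += abs(r - pos)
        pos = r
    return movement

def sstf(start, requests):
    """Total head movement always picking the closest pending request."""
    movement, pos, pending = 0, start, list(requests)
    while pending:
        nxt = min(pending, key=lambda r: abs(r - pos))
        pending.remove(nxt)
        movement += abs(nxt - pos)
        pos = nxt
    return movement

# Illustrative cylinder numbers; the head starts at cylinder 50.
requests = [95, 180, 34, 119, 11, 123, 62, 64]
moves_fcfs = fcfs(50, requests)
moves_sstf = sstf(50, requests)
```

On this queue SSTF moves the head far less than FCFS, illustrating its better throughput; the starvation risk mentioned above does not show up in a single small batch like this.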

SCAN
The drive head sweeps across the entire surface of the disk, visiting the outermost
cylinders before changing direction and sweeping back to the innermost cylinders. It
selects the next waiting request whose location it will reach on its path backwards and
forwards across the disk. Thus, the movement time should be less than FCFS but the
policy is clearly fairer than SSTF.

Circular SCAN (C-SCAN)


C-SCAN is similar to SCAN but I/O requests are only satisfied when the drive head is
travelling in one direction across the surface of the disk. The head sweeps from the
innermost cylinder to the outermost cylinder satisfying the waiting requests in order of
their locations. When it reaches the outermost cylinder it sweeps back to the innermost
cylinder without satisfying any requests and then starts again.

LOOK
Similarly to SCAN, the drive sweeps across the surface of the disk, satisfying requests, in
alternating directions. However the drive now makes use of the information it has about
the locations requested by the waiting requests. For example, a sweep out towards the
outer edge of the disk will be reversed when there are no waiting requests for locations
beyond the current cylinder.

Circular LOOK (C-LOOK)

Based on C-SCAN, C-LOOK involves the drive head sweeping across the disk satisfying
requests in one direction only. As in LOOK the drive makes use of the location of waiting
requests in order to determine how far to continue a sweep, and where to commence the
next sweep. Thus it may curtail a sweep towards the outer edge when there are no locations
requested in cylinders beyond the current position, and commence its next sweep at a
cylinder which is not the innermost one, if that is the most central one for which a sector
is currently requested.
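SCAN and C-LOOK can be sketched in the same style; the request queue and cylinder range below are invented for illustration, and the return-jump in C-LOOK is counted as head movement:

```python
def scan(start, requests, max_cylinder):
    """SCAN: sweep towards higher cylinders first, go to the edge,
    then reverse and service the remaining requests."""
    movement, pos = 0, start
    up = sorted(r for r in requests if r >= start)
    down = sorted((r for r in requests if r < start), reverse=True)
    for r in up:
        movement += abs(r - pos); pos = r
    if down:                          # reach the outer edge, then reverse
        movement += abs(max_cylinder - pos); pos = max_cylinder
    for r in down:
        movement += abs(r - pos); pos = r
    return movement

def c_look(start, requests):
    """C-LOOK: service upwards only, then jump to the lowest pending
    request and continue upwards again."""
    movement, pos = 0, start
    up = sorted(r for r in requests if r >= start)
    wrap = sorted(r for r in requests if r < start)
    for r in up:
        movement += abs(r - pos); pos = r
    if wrap:
        movement += abs(pos - wrap[0]); pos = wrap[0]
        for r in wrap[1:]:
            movement += abs(r - pos); pos = r
    return movement

requests = [95, 180, 34, 119, 11, 123, 62, 64]
moves_scan = scan(50, requests, max_cylinder=199)
moves_clook = c_look(50, requests)
```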

system call:- The invocation of an operating system routine. Operating systems contain sets of routines for
performing various low-level operations. For example, all operating systems have a routine for creating a
directory. If you want to execute an operating system routine from a program, you must make a system call.

program:- An organized list of instructions that, when executed, causes the computer to behave in a
predetermined manner. Without programs, computers are useless.
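Making a system call from a program, as described above, usually goes through a library wrapper. In Python, for instance, `os.mkdir` invokes the operating system's directory-creation routine (`mkdir` on Unix, `CreateDirectory` on Windows):

```python
import os
import tempfile

# os.mkdir is a thin wrapper: the runtime traps into the kernel's
# directory-creation routine on our behalf.
parent = tempfile.mkdtemp()                 # a scratch directory
new_dir = os.path.join(parent, "example")
os.mkdir(new_dir)                           # the system call happens here
created = os.path.isdir(new_dir)
```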

Ex:-A program is like a recipe. It contains a list of ingredients (called variables) and a list of directions
(called statements) that tell the computer what to do with the variables. The variables can represent numeric
data, text, or graphical images.

There are many programming languages -- C, C++, Pascal, BASIC, FORTRAN, COBOL, and LISP are
just a few. These are all high-level languages. One can also write programs in low-level languages called
assembly languages, although this is more difficult. Low-level languages are closer to the language used by
a computer, while high-level languages are closer to human languages.

Eventually, every program must be translated into a machine language that the computer can understand.
This translation is performed by compilers, interpreters, and assemblers.

When you buy software, you normally buy an executable version of a program. This means that the
program is already in machine language -- it has already been compiled and assembled and is ready to
execute.

operating system

The most important program that runs on a computer. Every general-purpose computer must have an
operating system to run other programs. Operating systems perform basic tasks, such as recognizing
input from the keyboard, sending output to the display screen, keeping track of files and directories on the
disk, and controlling peripheral devices such as disk drives and printers.

For large systems, the operating system has even greater responsibilities and powers. It is like a traffic cop
-- it makes sure that different programs and users running at the same time do not interfere with each other.
The operating system is also responsible for security, ensuring that unauthorized users do not access the
system.

Operating systems can be classified as follows:

multi-user : Allows two or more users to run programs at the same time. Some operating
systems permit hundreds or even thousands of concurrent users.
multiprocessing : Supports running a program on more than one CPU.
multitasking : Allows more than one program to run concurrently.
multithreading : Allows different parts of a single program to run concurrently.
real time: Responds to input within guaranteed time constraints. General-purpose operating
systems, such as DOS and UNIX, are not real-time.
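The multithreading entry above, with different parts of a single program running concurrently, can be sketched in Python; the worker function and its inputs are invented for illustration:

```python
import threading

results = {}

def worker(name, n):
    # Each thread performs its own part of the program's work.
    results[name] = sum(range(n))

# Two parts of the same program run concurrently.
t1 = threading.Thread(target=worker, args=("a", 10))
t2 = threading.Thread(target=worker, args=("b", 100))
t1.start(); t2.start()
t1.join(); t2.join()              # wait for both parts to finish
```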

Operating systems provide a software platform on top of which other programs, called application
programs, can run. The application programs must be written to run on top of a particular operating
system. Your choice of operating system, therefore, determines to a great extent the applications you can
run. For PCs, the most popular operating systems are DOS, OS/2, and Windows, but others are available,
such as Linux.

As a user, you normally interact with the operating system through a set of commands. For example, the
DOS operating system contains commands such as COPY and RENAME for copying files and changing
the names of files, respectively. The commands are accepted and executed by a part of the operating system
called the command processor or command line interpreter. Graphical user interfaces allow you to enter
commands by pointing and clicking at objects that appear on the screen.

directory

(1) An organizational unit, or container, used to organize folders and files into a hierarchical structure.
Directories contain bookkeeping information about files that are, figuratively speaking, beneath them in
the hierarchy. You can think of a directory as a file cabinet that contains folders that contain files. Many
graphical user interfaces use the term folder instead of directory.

Computer manuals often describe directories and file structures in terms of an inverted tree. The files and
directories at any level are contained in the directory above them. To access a file, you may need to specify
the names of all the directories above it. You do this by specifying a path.

The topmost directory in any file system is called the root directory. A directory that is below another directory is
called a subdirectory. A directory above a subdirectory is called the parent directory. Under DOS and
Windows, the root directory is denoted by a backslash (\).
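The path idea described above can be explored with Python's `pathlib`; the path itself is hypothetical:

```python
from pathlib import PurePosixPath

# A hypothetical path: every name before the file is a directory
# above it in the hierarchy.
p = PurePosixPath("/home/alice/docs/report.txt")

root = p.root        # the topmost (root) directory
parent = p.parent    # the directory immediately above the file
parts = p.parts      # every component along the path
```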

To read information from, or write information into, a directory, you must use an operating system
command. You cannot directly edit directory files. For example, the DIR command in DOS reads a
directory file and displays its contents.

(2) In networks, a database of network resources, such as e-mail addresses. See under directory service.

network

A group of two or more computer systems linked together. There are many types of computer networks,
including:
local-area networks (LANs) : The computers are geographically close together (that is, in the
same building).
wide-area networks (WANs) : The computers are farther apart and are connected by
telephone lines or radio waves.
campus-area networks (CANs): The computers are within a limited geographic area, such as
a campus or military base.
metropolitan-area networks (MANs): A data network designed for a town or city.
home-area networks (HANs): A network contained within a user's home that connects a
person's digital devices.

In addition to these types, the following characteristics are also used to categorize different types of
networks:

topology : The geometric arrangement of a computer system. Common topologies include a
bus, star, and ring. See the Network topology diagrams in the Quick Reference section of
Webopedia.
protocol : The protocol defines a common set of rules and signals that computers on the
network use to communicate. One of the most popular protocols for LANs is called Ethernet.
Another popular LAN protocol for PCs is the IBM token-ring network .
architecture : Networks can be broadly classified as using either a peer-to-peer or
client/server architecture.

Computers on a network are sometimes called nodes. Computers and devices that allocate resources for a
network are called servers.

(v.) To connect two or more computers together with the ability to communicate with each other.

Batch processing:- Executing a series of noninteractive jobs all at one time.


The term originated in the days when users entered programs on punch cards. They would give a batch of
these programmed cards to the system operator, who would feed them into the computer.

Batch jobs can be stored up during working hours and then executed during the evening or whenever the
computer is idle. Batch processing is particularly useful for operations that require the computer or a
peripheral device for an extended period of time. Once a batch job begins, it continues until it is done or
until an error occurs. Note that batch processing implies that there is no interaction with the user while the
program is being executed.

An example of batch processing is the way that credit card companies process billing. The customer does
not receive a bill for each separate credit card purchase but one monthly bill for all of that month’s
purchases. The bill is created through batch processing, where all of the data are collected and held until the
bill is processed as a batch at the end of the billing cycle.
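The billing example above can be sketched as a batch run in Python. The customers and amounts are invented, and amounts are kept in integer cents to avoid floating-point rounding:

```python
from collections import defaultdict

# Purchases are collected (held) during the month...
purchases = [  # (customer, amount in cents)
    ("alice", 1999), ("bob", 500), ("alice", 4250), ("bob", 1225),
]

def run_billing_batch(purchases):
    """...and then processed in one non-interactive run at the end
    of the billing cycle, producing a single bill per customer."""
    bills = defaultdict(int)
    for customer, amount in purchases:
        bills[customer] += amount
    return dict(bills)

bills = run_billing_batch(purchases)
```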

The opposite of batch processing is transaction processing or interactive processing. In interactive
processing, the application responds to commands as soon as you enter them.

Fundamental Concept: Computer Operating System

The fundamental concept of a computer operating system starts with the structure of the OS.
Conceptually, an operating system can be broken down into the kernel, shell and system utilities.
The kernel is the core of the operating system, and in some cases, the distinction between the
kernel and the shell is clear; in others, the distinction is only conceptual. A process is a program in
execution. New processes are spawned when needed and terminated to free memory and other
resources. Files are a method of organizing the information on disk. Files are usually organized in an
inverted tree-like structure. A system call is the mechanism by which a program requests a service from
the operating system.

The roles of an operating system vary with the hardware and user programs it was designed to work
with. The earliest operating systems were designed to help applications interact with hardware. This
has grown to the point that the operating system now defines the machine. The operating system
provides an interface to the underlying hardware, the application programs, and the user.

The operating system manages

• Hardware

• CPU
• Memory
• Device Drivers
• I/O
• Applications

There are four basic types of operating systems:

• Real time operating system


• Single user, single task operating system
• Single user, multi task operating system
• Multi user operating system

History of Operating System

The history of operating systems starts with the first-generation computers. They operated on a
purely mechanical basis. There was no concept of an operating system. The user fed data and the
program into the computer using punched cards. The programs were debugged using front panel
switches and lights. The second-generation computers were batch systems where the jobs and
instructions were put together so that they could be processed without interruptions. The third
generation computers used the concepts of multiprogramming and time-sharing. Spooling batch
systems were the first and simplest of multiprogramming systems. The major advantage was that
the output from the jobs was available as soon as the job was completed. The fourth generation saw
the advent of modern operating systems for PCs, workstations and servers and also includes
networking and distributed operating systems.

The early operating systems were diverse, with each vendor producing their own operating
systems specific to their hardware. This continued until the 1960s when IBM developed the S/360
series of machines, where all machines ran the same operating system, OS/360. CP/M (Control
Program for Microcomputers) was the first generic operating system for microcomputers. Before
CP/M, every brand of computer had its own unique operating system. Because of this, programs
could not be shared between computers and software had to be rewritten for every type of
computer. The nearest to the operating system was the "job control language" (JCL) required to
submit a program for execution on the mainframe. CP/M was a complete software development
environment that enabled programmers to write machine-independent software, and to move CP/M itself to
other computers. OS/2 was the first operating system to provide intrinsic multitasking based on
hardware support. It was text-mode only and allowed only one program to be on the screen at a
time, even though other programs could be running in the background.

Types of Operating systems :-

Within the broad family of operating systems, there are generally four types, categorised based on the types of
computers they control and the sort of applications they support. The broad categories are:

Real-time operating systems:


They are used to control machinery, scientific instruments and industrial systems. An RTOS typically has
very little user-interface capability, and no end-user utilities, since the system will be a sealed box when
delivered for use. A very important part of an RTOS is managing the resources of the computer so that a
particular operation executes in precisely the same amount of time every time it occurs. In a complex machine,
having a part move more quickly just because system resources are available may be just as catastrophic as
having it not move at all because the system is busy.

Single-user, single-tasking operating system:


As the name implies, this operating system is designed to manage the computer so that one user can effectively
do one thing at a time. The Palm OS for Palm handheld computers is a good example of a modern single-user,
single-task operating system.

Single-user, multi-tasking operating system:
This is the type of operating system most people use on their desktop and laptop computers today. Windows
98 and the Mac OS are both examples of an operating system that will let a single user have several programs in
operation at the same time. For example, it's entirely possible for a Windows user to be writing a note in a word
processor while downloading a file from the Internet while printing the text of an e-mail message.

Multi-user operating systems:


A multi-user operating system allows many different users to take advantage of the computer's resources
simultaneously. The operating system must make sure that the requirements of the various users are balanced,
and that each of the programs they are using has sufficient and separate resources so that a problem with
one user doesn't affect the entire community of users. Unix, VMS, and mainframe operating systems, such
as MVS, are examples of multi-user operating systems. It's important to differentiate here between multi-user
operating systems and single-user operating systems that support networking. Windows 2000 and Novell
Netware can each support hundreds or thousands of networked users, but the operating systems themselves
aren't true multi-user operating systems. The system administrator is the only user for Windows 2000 or
Netware. The network support and all the remote user logins the network enables are, in the overall plan of
the operating system, a program being run by the administrative user.

Direct memory access


Direct memory access (DMA) is a feature of modern computers that allows certain
hardware subsystems within the computer to access system memory for reading and/or
writing independently of the central processing unit. Many hardware systems use DMA
including disk drive controllers, graphics cards, network cards, and sound cards.
Computers that have DMA channels can transfer data to and from devices with much less
CPU overhead than computers without a DMA channel.

Without DMA, using programmed input/output (PIO) mode, the CPU typically has to be
occupied for the entire time it's performing a transfer. With DMA, the CPU would initiate
the transfer, do other operations while the transfer is in progress, and receive an interrupt
from the DMA controller once the operation is complete. This is especially useful in
real-time computing applications where not stalling behind concurrent operations is
critical.


Principle

DMA is an essential feature of all modern computers, as it allows devices to transfer data
without subjecting the CPU to a heavy overhead. Otherwise, the CPU would have to copy
each piece of data from the source to the destination. This is typically slower than

copying normal blocks of memory since access to I/O devices over a peripheral bus is
generally slower than normal system RAM. During this time the CPU would be
unavailable for any other tasks involving CPU bus access, although it could continue
doing any work which did not require bus access.

A DMA transfer essentially copies a block of memory from one device to another. While
the CPU initiates the transfer, it does not execute it. For so-called "third party" DMA, as
is normally used with the ISA bus, the transfer is performed by a DMA controller which
is typically part of the motherboard chipset. More advanced bus designs such as PCI
typically use bus mastering DMA, where the device takes control of the bus and performs
the transfer itself.

A typical usage of DMA is copying a block of memory from system RAM to or from a
buffer on the device. Such an operation does not stall the processor, which as a result can
be scheduled to perform other tasks. DMA transfers are essential to high performance
embedded systems. It is also essential in providing so-called zero-copy implementations
of peripheral device drivers as well as functionalities such as network packet routing,
audio playback and streaming video.

Cache coherency problem

DMA can lead to cache coherency problems. Imagine a CPU equipped with a cache and
an external memory, which can be accessed directly by devices using DMA. When the
CPU accesses location X in the memory, the current value will be stored in the cache.
Subsequent operations on X will update the cached copy of X, but not the external
memory version of X. If the cache is not flushed to the memory before the next time a
device tries to access X, the device will receive a stale value of X.

Similarly, if the cached copy of X is not invalidated when a device writes a new value to
the memory, then the CPU will operate on a stale value of X.

DMA engines

In addition to hardware interaction, DMA can also be used to offload expensive memory
operations, such as large copies or scatter-gather operations, from the CPU to a dedicated
DMA engine. While normal memory copies are typically too small to be worthwhile to
offload on today's desktop computers, they are frequently offloaded on embedded devices
due to more limited resources.[1]

Newer Intel Xeon processors also include a DMA engine technology called I/OAT, meant
to improve network performance on high-throughput network interfaces, in particular
gigabit Ethernet and faster.[2] However, various benchmarks with this approach by Intel's
Linux kernel developer Andrew Grover indicate no more than 10% improvement in CPU
utilization with receiving workloads, and no improvement when transmitting data.[3]

Reconfigurable DMA circuits, for instance those based on GAG (Generic Address Generator) units,
provide the enabling technology of auto-sequencing memory, programmable by
Flowware to generate the data streams for system architectures based on the anti-machine
paradigm; such a circuit could also be called a DMA engine.

Examples:- ISA

For example, a PC's ISA DMA controller has 8 DMA channels, of which 7 are available
for use by the PC's CPU. Each DMA channel has associated with it a 16-bit address
register and a 16-bit count register. To initiate a data transfer the device driver sets up the
DMA channel's address and count registers together with the direction of the data
transfer, read or write. It then instructs the DMA hardware to begin the transfer. When the
transfer is complete, the device interrupts the CPU.

"Scatter-gather" DMA allows the transfer of data to and from multiple memory areas in a
single DMA transaction. It is equivalent to the chaining together of multiple simple DMA
requests. Again, the motivation is to off-load multiple input/output interrupt and data
copy tasks from the CPU.
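Scatter-gather as described, chaining several simple transfers into one transaction, can be sketched in Python; the offsets and data below are purely illustrative:

```python
def scatter_gather_copy(regions, destination):
    """One 'transaction' that gathers data from several (offset, data)
    source regions into a single destination buffer, equivalent to
    chaining multiple simple DMA requests."""
    for offset, data in regions:
        destination[offset:offset + len(data)] = data
    return destination

dest = bytearray(12)                       # destination buffer
regions = [(0, b"abc"), (5, b"de"), (9, b"xyz")]
out = scatter_gather_copy(regions, dest)   # one call, many regions
```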

DRQ stands for DMA request; DACK for DMA acknowledge. These symbols are
generally seen on hardware schematics of computer systems with DMA functionality.
They represent electronic signaling lines between the CPU and DMA controller.

PCI

As mentioned above, a PCI architecture has no central DMA controller, unlike ISA.
Instead, any PCI component can request control of the bus ("become the bus master") and
request to read and write from the system memory. More precisely, a PCI component
requests bus ownership from the PCI bus controller (usually the southbridge in a modern
PC design), which will arbitrate if several devices request bus ownership simultaneously,
since there can only be one bus master at one time. When the component is granted
ownership, it will issue normal read and write commands on the PCI bus, which will be
claimed by the bus controller and forwarded to the memory controller using a scheme
which is specific to every chipset.

As an example, on a modern AMD Socket AM2-based PC, the southbridge will forward
the transactions to the northbridge (which is integrated on the CPU die) using
HyperTransport, which will in turn convert them to DDR2 operations and send them out
on the DDR2 memory bus. As can be seen, there are quite a number of steps involved in a
PCI DMA transfer; however, since the components outside the PCI bus are faster than the
PCI bus itself by almost an order of magnitude or more (see List of device bandwidths),
that poses little problem.

