
Operating Systems
(Notes to prepare in 1 night before Exam)
Based on Galvin Book

By:- Shashank Shekhar

If you like these notes, don't forget to send your feedback to:


shashankrajshekhar@yahoo.com

Jai Mata Di
Papa & Mummy
Jai Saraswati Maa
Om Gan Ganapataye Namah

Operating System
Operating System:
It is a software program that enables the computer user and application software to communicate and operate with the computer hardware.

It acts as an intermediate layer between application software and the computer hardware.


Application Software → OS → Hardware

It is a platform on which application programs execute and interact with the hardware.


It performs tasks such as recognizing input from the keyboard, sending output to the screen, keeping track of files and directories on the disk, and controlling peripheral devices such as disk drives and printers.
Functions of OS:
1. Implementing the user interface.
2. Sharing hardware among users.
3. Facilitating input/output.
4. Recovering from errors.
5. Facilitating parallel operations.
Common OS:
1. Windows (Win 98, XP, Vista, Windows 7)
2. Macintosh OS X
3. LINUX and UNIX
4. i5/OS (IBM iSeries)
5. z/OS (IBM z series mainframes)
Goals/Objective of OS:
1. To hide details of hardware by creating abstraction (used to reduce complexity, to enforce
security).
2. To allocate resources to processes.
3. To provide a pleasant and effective user interface.

Job Management:- It controls the order and time in which programs are run.
Task Management:- Multitasking, which is the ability to simultaneously execute multiple programs.
Data Management:- It keeps track of the data on disk, tape and optical storage devices. The application program deals with data by file name; the OS's file system knows where that data is physically stored.
Device Management:- It controls peripheral devices by sending them commands in their own proprietary language. The software routine that knows how to deal with each device is called a driver, and the OS requires drivers for the peripherals attached to the computer.
Classification of OS:
1. Multiuser:- It allows multiple users to use the same computer at the same time and/or at different times. Example: Linux, UNIX, Win 2000.
2. Multiprocessing:- It is the type of OS that is capable of supporting and utilizing more than one computer processor. Example: UNIX, Win 2000.
3. Multitasking:- This OS is capable of allowing multiple software processes to run at the same time, i.e., it allows more than one program to run concurrently. Example: UNIX, Win 2000.
4. Multithreading:- It allows different parts of a single piece of software to run concurrently. Example: Linux, UNIX, Win 2000.

History/ Generation:
1. 1940s (First Generation):- Programming languages were unknown; the earliest electronic digital computers had no OS.
2. 1950s (Second Generation):- Introduction of punch cards; the first OS, built for the IBM 701, ran one job at a time.
3. 1960s (Third Generation):- The concept of multiprogramming, in which several jobs are in memory at once and the processor is switched from job to job as needed; the time-sharing technique, in which each user has an on-line terminal and the computer system responds quickly to user requests.
4. From 1970s (Fourth Generation):- Development of LSI (large-scale integration) circuits and chips; the personal computer and workstation age; multiprocessor technology evolved; DOS and UNIX were the dominant OSs.

Types of OS:-
There are four types of OS categorized based on the types of computers they control and the sort of
applications they support.
1. Real-time OS (RTOS):- They are used to control machinery, scientific instruments and industrial systems. An RTOS has little user-interface capability. The most important property of an RTOS is managing the resources of the computer so that a particular operation executes in precisely the same amount of time every time it occurs.
2. Single-User, Single Task:- This OS is designed to manage the computer so that one user can effectively do one thing at a time. Example: Palm OS for Palm handheld computers.
3. Single-User, Multi-tasking:- It lets a single user have several programs in operation at the same time. Windows and Apple's Mac OS are examples of such OSs.
4. Multi-User:- It allows multiple users to take advantage of the computer's resources simultaneously. It must make sure that the requirements of the various users are balanced, and that each of the programs they are using has sufficient and separate resources, so that a problem with one user doesn't affect the entire community of users.

OS Operations:-
OS has two modes of Operation:-
1. User mode
2. Kernel mode (Supervisor mode, system mode or privileged mode)

A bit called the mode bit is added to the hardware of the computer to indicate the current mode: kernel (0) or user (1). With the mode bit, we are able to distinguish between a task that is executed on behalf of the OS and one that is executed on behalf of the user. When the computer system is executing on behalf of a user application, the system is in user mode. However, when a user application requests a service from the OS (via a system call), it must transition from user to kernel mode to fulfill the request.
At boot time, the hardware starts in kernel mode. The OS is then loaded and starts user applications in user mode.

Timer:-
To prevent a user program from getting stuck in an infinite loop or not calling system services and never
returning control to the OS, a timer is used. A timer can be set to interrupt the computer after a specified
period. The OS sets the counter and this counter is decremented every time the clock ticks. When the
counter reaches 0, an interrupt occurs which transfers control automatically to the OS.
Tasks of OS:-
1. Process Management:-

The operating system manages many kinds of activities, ranging from user programs to system programs. Each of these activities is encapsulated in a process, and many processes can be running the same program. The five major activities of an operating system in regard to process management are:

Creation and deletion of user and system processes.
Suspension and resumption of processes.
A mechanism for process synchronization.
A mechanism for process communication.
A mechanism for deadlock handling.
2. Memory Management:-
Keeping track of which parts of memory are currently being used and by whom.
Deciding which processes (or parts thereof) and data to move into and out of memory.
Allocating and deallocating memory space as needed.

3. Device Management:-
The attaching and detaching of I/O peripheral devices is managed by the OS.

4. Storage Management:-
The three major activities of an operating system in regard to secondary storage management are:
1. Managing the free space available on the secondary-storage device.
2. Allocation of storage space when new files have to be written.
3. Scheduling the requests for disk access.
5. Application Interface:-
APIs let application programmers use functions of the computer and OS without having to directly keep track of all the details of the CPU's operation.

6. User Interface:-
A user interface (UI) brings structure to the interaction between a user and the computer.

Open Source:- The distribution of original source material that can be studied, altered and built upon, with the results once again freely distributed, is called open source.

Distributed Systems:-
A distributed system is a collection of physically separate, possibly heterogeneous computer systems that are networked to provide users with access to the various resources that the system maintains. A network is a communication path between two or more systems. Distributed systems depend on networking for their functionality.

System Calls and System Programs

System calls provide an interface between a process and the operating system. System calls allow user-level processes to request services from the operating system that the process itself is not allowed to perform. For example, for I/O a process invokes a system call telling the operating system to read from or write to a particular area, and this request is satisfied by the operating system.

Types of System Calls are:-

1. Process Control
end, abort
load, execute
create process, terminate process
allocate and free memory
get process attributes, set process attributes
2. File management
Create file, delete file
Open, close
Read, write, reposition
Get file attributes, set file attributes
3. Device management
Request device, release device
Logically attach and detach devices
Read, write, reposition
Get device attributes, set device attributes
4. Communication
Create, delete communication connections
Send, receive messages
Transfer status information
Attach or detach remote devices

System programs provide basic functionality to users so that they do not need to write their own environment for program development (e.g. editors, compilers) and program execution (shells). In some sense, they are bundles of useful system calls.
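
As an illustration, here is a minimal sketch (assuming a UNIX-like system; not from the Galvin text) that copies one file to another using only the open, read, write and close calls from the file-management group above:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Copy src to dst using raw system calls (no stdio buffering). */
int main(int argc, char *argv[])
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s src dst\n", argv[0]);
        return 1;
    }
    int in = open(argv[1], O_RDONLY);                        /* system call: open */
    int out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (in < 0 || out < 0) { perror("open"); return 1; }

    char buf[4096];
    ssize_t n;
    while ((n = read(in, buf, sizeof buf)) > 0)              /* system call: read */
        if (write(out, buf, n) != n) {                       /* system call: write */
            perror("write");
            return 1;
        }
    close(in);                                               /* system call: close */
    close(out);
    return 0;
}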


File System
File System:-
1. A file system is a method for storing and organizing computer files and the data they contain, to make it easy to find and access them.
2. It may use a data storage device such as a hard disk or CD-ROM and involves maintaining the physical location of the files.
3. Most file systems make use of an underlying data storage device that offers access to an array of fixed-size blocks called sectors, generally of 512 bytes each.
4. They typically have directories which associate filenames with files.
5. Examples are FAT (File Allocation Table), NTFS, etc.

Types of File Systems:-


1. Disk File Systems: A file system designed for the storage of files on a data storage device, most commonly a disk drive, which might be directly or indirectly connected to the computer. Example: FAT, FAT32, NTFS, HFS, etc.
2. Flash File Systems: A file system designed for storing files on flash memory devices. Example: JFFS2, YAFFS.
3. Database File Systems: A new concept of file management in which, instead of hierarchically structured management, files are identified by their characteristics, like type of file, topic, author or similar metadata.
4. Transactional File Systems: A special kind of file system in that it logs events or transactions to files. Each operation may involve changes to a number of different files and disk structures, and it is important that they all be executed at the same time. For example, the file systems used by banks that send money to other banks electronically. This type of file system is synchronized, fault tolerant, and has a high degree of overhead.
5. Network File Systems: This type of file system acts as a client for a remote file access protocol, providing access to files on a server. Example: NFS, FTP, etc.
6. Special Purpose File Systems: This type of file system includes systems where the files are arranged dynamically by software. These are most commonly used by file-centric OSs such as UNIX.

File Attributes:-
1. File name
2. File Type
3. Location
4. Size
5. Current position: This is a pointer to the current read or write position in the file.
6. Protection: Access-control information that controls who can do reading, writing, executing, and so on.
7. Usage Count: This value indicates the number of processes that are currently using (have opened)
this file.
8. Time, date and process identification: This information may be kept for creation, last
modification and last use.

File Operation:-
1. Creating a file
2. Writing a file
3. Reading a file
4. Deleting a file


File Access Methods:-


Sequential Access:- Sequential access is based on the tape model of a file. Information in the file is processed in order, one record after the other. A read operation reads the next portion of the file and automatically advances the file pointer. A write operation appends to the end of the file and advances the file pointer.
Direct Access:- Direct access is based on a disk model of a file. For direct access, the file is
viewed as a numbered sequence of block or records. There is no restriction on the order of reading
and writing for a direct access file.
Other Access Methods:- These methods are built on top of the direct-access method. They involve the construction of an index for a file. The index contains pointers to the various blocks. To find an entry in the file, the index is searched first, and the pointer is then used to access the file directly to find the desired entry.

File Allocation Methods:-


1. Contiguous Allocation:- This method requires each file to occupy a set of contiguous addresses on the disk. Disk addresses define a linear ordering on the disk. The difficulty with contiguous allocation is finding space for a new file. It also suffers from external fragmentation.
2. Linked Allocation:- In linked allocation, each file is a linked list of disk blocks. The directory contains a pointer to the first (and optionally the last) block of the file. There is no external fragmentation with linked allocation; any free block can be used to satisfy a request. The problem is that it is inefficient for direct access; it is effective only for sequential-access files.
3. Indexed Allocation:- This allocation method is the solution to the problems of both contiguous and linked allocation. This is done by bringing all the pointers together into one location called the index block. Each file has its own index block, which is an array of disk-block addresses. The directory contains the address of the index block of a file.
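
As a sketch of how indexed allocation supports direct access, the following toy lookup (all names and block numbers are illustrative, not from any real file system) maps a file's logical block number to a disk block through its index block:

#include <stdint.h>
#include <stdio.h>

#define BLOCK_SIZE 512
#define PTRS_PER_INDEX (BLOCK_SIZE / sizeof(uint32_t))

/* One index block: an array of disk-block addresses. */
struct index_block {
    uint32_t block_addr[PTRS_PER_INDEX];
};

/* Translate a logical block number within a file to a disk address
   by looking it up in the file's index block. */
uint32_t logical_to_disk(const struct index_block *ib, uint32_t logical)
{
    if (logical >= PTRS_PER_INDEX)
        return 0;                       /* beyond what one index block maps */
    return ib->block_addr[logical];
}

int main(void)
{
    struct index_block ib = { .block_addr = { 907, 312, 44, 581 } };
    /* Direct access: jump straight to logical block 2 of the file. */
    printf("logical block 2 lives in disk block %u\n", logical_to_disk(&ib, 2));
    return 0;
}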

Disk/Free Space Management:-


Since there is only a limited amount of disk space, it is necessary to reuse the space from deleted files for
new files. To keep track of free disk space, the system maintains a free-space list which records all disk
blocks that are free.
Free-space list can be implemented as:
Bit-Vector: Each block is represented by one bit. If the block is free, the bit is 0; if the block is allocated, the bit is 1. (A minimal sketch of this scheme appears after this list.)

Linked List: This approach links all the free disk blocks together, keeping a pointer to the first free block. This block contains a pointer to the next free disk block, and so on.

Grouping: This approach stores the addresses of n free blocks in the first free block. The first n-1 of these are actually free. The last one is the disk address of another block containing the addresses of another n free blocks. The importance of this implementation is that the addresses of a large number of free blocks can be found quickly.

Counting: This approach takes advantage of the fact that several contiguous blocks may be allocated or freed simultaneously. Thus, rather than keeping a list of free disk addresses, the address of the first free block is kept, together with the number n of free contiguous blocks that follow it.

Directory System/Structures:
The most common directory structures used by multi-user systems are:

1. Single-Level Directory:- In a single-level directory system, all the files are placed in one directory. This is very common on single-user OSs. It has significant limitations when the number of files grows or when there is more than one user. Since all the files are in the same folder, they must have unique names.
2. Two-Level Directory:- In the two-level directory system, the system maintains a master block that has one entry for each user. This master block contains the addresses of the directories of the users. The problem with this structure is that it effectively isolates one user from another.
3. Tree-Structured Directories:- In this structure, directories are themselves files. This leads to the possibility of having sub-directories that can contain files and further sub-directories.
4. Acyclic-Graph Directories:- It is an extension of the tree-structured directory structure. In the tree-structured directory, files and directories starting from some fixed directory are owned by one particular user. In the acyclic-graph structure, this prohibition is removed, and thus a directory or file under a directory can be shared by several users.


Deadlock
A set of processes is in a deadlock state if each process in the set is waiting for an event that can be caused only by another process in the set. In other words, each member of the set of deadlocked processes is waiting for a resource that can be released only by a deadlocked process. None of the processes can run, none of them can release any resources, and none of them can be awakened. It is important to note that the number of processes and the number and kind of resources possessed and requested are unimportant.

The resources may be either physical or logical. Examples of physical resources are printers, tape drives, memory space, and CPU cycles. Examples of logical resources are files, semaphores, and monitors.

The simplest example of deadlock is where process 1 has been allocated a non-shareable resource A, say, a tape drive, and process 2 has been allocated a non-shareable resource B, say, a printer. Now, if it turns out that process 1 needs resource B (the printer) to proceed and process 2 needs resource A (the tape drive) to proceed, and these are the only two processes in the system, each blocks the other and all useful work in the system stops. This situation is termed deadlock. The system is in a deadlock state because each process holds a resource being requested by the other process, and neither process is willing to release the resource it holds.

Preemptable and Nonpreemptable Resources:-

Resources come in two flavors:

A preemptable resource is one that can be taken away from the process with no ill effects. Memory is an example of a preemptable resource.

A non-preemptable resource is one that cannot be taken away from a process without causing ill effects. For example, a CD recorder is not preemptable at an arbitrary moment.
Reallocating resources can resolve deadlocks that involve preemptable resources.

Necessary and Sufficient Deadlock Conditions:-

There are four (Coffman) conditions that must hold simultaneously for there to be a deadlock.

1. Mutual Exclusion Condition


The resources involved are non-shareable.
Explanation: At least one resource must be held in a non-shareable mode, that is, only one process at a time claims exclusive control of the resource. If another process requests that resource, the requesting process must be delayed until the resource has been released.

2. Hold and Wait Condition


A requesting process already holds resources while waiting for the requested resources.
Explanation: There must exist a process that is holding a resource already allocated to it while waiting for additional resources that are currently being held by other processes.

3. No-Preemption Condition
Resources already allocated to a process cannot be preempted.
Explanation: Resources cannot be forcibly removed from a process; they are released only after the process has used them to completion, or voluntarily by the process holding them.

4. Circular Wait Condition


The processes in the system form a circular list or chain where each process in the list is waiting
for a resource held by the next process in the list.

As an example, consider the traffic deadlock in the following figure

Consider each section of the street as a resource.

1. Mutual exclusion condition applies, since only one vehicle can be on a section of the street at a
time.
2. Hold-and-wait condition applies, since each vehicle is occupying a section of the street, and
waiting to move on to the next section of the street.
3. No-preemption condition applies, since a section of the street that is occupied by a vehicle cannot be taken away from it.
4. Circular wait condition applies, since each vehicle is waiting on the next vehicle to move. That is,
each vehicle in the traffic is waiting for a section of street held by the next vehicle in the traffic.

The simple rule to avoid traffic deadlock is that a vehicle should only enter an intersection if it is assured
that it will not have to stop inside the intersection.

It is not possible to have a deadlock involving only one single process. Deadlock involves a circular hold-and-wait condition between two or more processes, and a single process cannot be waiting for a resource that it itself holds. In addition, under this resource model, deadlock is not possible between two threads in a process, because it is the process, not the thread, that holds resources; that is, each thread has access to the resources held by the process.

Dealing with Deadlock Problem


In general, there are four strategies for dealing with the deadlock problem:

1. The Ostrich Approach


Just ignore the deadlock problem altogether.
2. Deadlock Detection and Recovery
Detect deadlock and, when it occurs, take steps to recover.
3. Deadlock Avoidance
Avoid deadlock by careful resource scheduling.
4. Deadlock Prevention
Prevent deadlock by resource scheduling so as to negate at least one of the four conditions.

Deadlock Detection:-

Deadlock detection is the process of actually determining that a deadlock exists and identifying the processes and resources involved in it.
The basic idea is to check allocation against resource availability for all possible allocation sequences to determine if the system is in a deadlocked state. Of course, the deadlock detection algorithm is only half of this strategy: once a deadlock is detected, there needs to be a way to recover. Several alternatives exist:

Temporarily preempt resources from deadlocked processes.
Back off a process to some checkpoint, allowing preemption of a needed resource and restarting the process at the checkpoint later.
Successively kill processes until the system is deadlock-free.

These methods are expensive in the sense that each iteration calls the detection algorithm until the system proves to be deadlock-free. The complexity of the algorithm is O(N^2), where N is the number of processes. Another potential problem is starvation: the same process may be killed repeatedly.

Deadlock Avoidance:-

This approach to the deadlock problem anticipates a deadlock before it actually occurs. It employs an algorithm to assess the possibility that deadlock could occur, and acts accordingly. This method differs from deadlock prevention, which guarantees that deadlock cannot occur by denying one of the necessary conditions of deadlock.

If the necessary conditions for a deadlock are in place, it is still possible to avoid deadlock by being careful when resources are allocated. Perhaps the most famous deadlock avoidance algorithm, due to Dijkstra [1965], is the Banker's algorithm, so named because the process is analogous to that used by a banker in deciding if a loan can be safely made.

Banker's Algorithm

In this analogy

Customers  ~  processes
Units      ~  resources (say, tape drives)
Banker     ~  operating system

Customers   Used   Max
    A        0      6
    B        0      5
    C        0      4
    D        0      7
Available Units = 10

Fig. 1

In the above figure, we see four customers, each of whom has been granted a number of credit units. The banker reserved only 10 units, rather than 22, to service them. At a certain moment, the situation becomes:

Customers   Used   Max
    A        1      6
    B        1      5
    C        2      4
    D        4      7
Available Units = 2

Fig. 2

Safe State: The key to a state being safe is that there is at least one way for all users to finish. In the analogy, the state of Fig. 2 is safe because, with 2 units left, the banker can delay any request except C's, thus letting C finish and release all four of its resources. With four units in hand, the banker can let either D or B have the necessary units, and so on.

Unsafe State: Consider what would happen if a request from B for one more unit were granted in Fig. 2 above.

We would then have the following situation:

Customers   Used   Max
    A        1      6
    B        2      5
    C        2      4
    D        4      7
Available Units = 1

Fig. 3

This is an unsafe state.

If all the customers, namely A, B, C and D, asked for their maximum loans, the banker could not satisfy any of them, and we would have a deadlock.

Important Note: It is important to note that an unsafe state does not imply the existence, or even the eventual existence, of a deadlock. What an unsafe state does imply is simply that some unfortunate sequence of events might lead to a deadlock.

The Banker's algorithm is thus to consider each request as it occurs, and see if granting it leads to a safe state. If it does, the request is granted; otherwise, it is postponed until later.

The algorithm has complexity proportional to N^2, where N is the number of processes, and since the algorithm is executed each time a resource request occurs, the overhead is significant.
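
A minimal sketch of the safety check for the single-resource example above (a simplification of the full multi-resource Banker's algorithm; names are illustrative):

#include <stdbool.h>
#include <stdio.h>

#define N 4  /* number of customers/processes */

/* Returns true if the state is safe: some order exists in which
   every process can obtain its maximum claim and finish. */
bool is_safe(int used[N], int max[N], int available)
{
    bool finished[N] = { false };
    int free_units = available;

    for (int done = 0; done < N; ) {
        bool progress = false;
        for (int i = 0; i < N; i++) {
            /* Process i can finish if its remaining need fits in free units. */
            if (!finished[i] && max[i] - used[i] <= free_units) {
                free_units += used[i];   /* i finishes and releases its units */
                finished[i] = true;
                progress = true;
                done++;
            }
        }
        if (!progress)
            return false;                /* nobody can finish: unsafe */
    }
    return true;
}

int main(void)
{
    int used[N] = { 1, 1, 2, 4 }, max[N] = { 6, 5, 4, 7 };
    printf("Fig. 2 state: %s\n", is_safe(used, max, 2) ? "safe" : "unsafe");
    used[1] = 2;                         /* grant B one more unit (Fig. 3) */
    printf("Fig. 3 state: %s\n", is_safe(used, max, 1) ? "safe" : "unsafe");
    return 0;
}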
Deadlock Prevention:-

Since all four of the conditions are necessary for deadlock to occur, it follows that deadlock can be prevented by denying at least one of them.
Elimination of Mutual Exclusion Condition


The mutual exclusion condition must hold only for non-shareable resources. That is, several processes cannot simultaneously share a single resource. This condition is difficult to eliminate because some resources, such as the tape drive and printer, are inherently non-shareable. Note that shareable resources, like a read-only file, do not require mutually exclusive access and thus cannot be involved in a deadlock.

Elimination of Hold and Wait Condition


There are two possibilities for elimination of the second condition. The first alternative is that a process be granted all of the resources it needs at once, prior to execution. The second alternative is to disallow a process from requesting resources whenever it has previously allocated resources. This strategy requires that all of the resources a process will need be requested at once. The system must grant resources on an all-or-none basis: if the complete set of resources needed by a process is not currently available, then the process must wait until the complete set is available. While the process waits, however, it may not hold any resources; thus the wait-for condition is denied and deadlocks simply cannot occur. This strategy can lead to serious waste of resources. For example, a program requiring ten tape drives must request and receive all ten drives before it begins executing. If the program needs only one tape drive to begin execution, and does not need the remaining tape drives for several hours, then substantial computer resources (9 tape drives) will sit idle for several hours. This strategy can also cause indefinite postponement (starvation), since not all the required resources may become available at once.

Elimination of No-preemption Condition


The no-preemption condition can be alleviated by forcing a process waiting for a resource that cannot immediately be allocated to relinquish all of its currently held resources, so that other processes may use them to finish. Suppose a system does allow processes to hold resources while requesting additional resources. Consider what happens when a request cannot be satisfied: one process holds resources a second process may need in order to proceed, while the second process may hold the resources needed by the first process. This is a deadlock. This strategy requires that when a process holding some resources is denied a request for additional resources, it must release its held resources and, if necessary, request them again together with the additional resources. Implementation of this strategy denies the no-preemption condition effectively.
High Cost: When a process releases resources, it may lose all its work to that point. One serious consequence of this strategy is the possibility of indefinite postponement (starvation): a process might be held off indefinitely as it repeatedly requests and releases the same resources.

Elimination of Circular Wait Condition


The last condition, the circular wait, can be denied by imposing a total ordering on all of the resource types and then forcing all processes to request resources in order (increasing or decreasing). This strategy imposes a total ordering of all resource types, and requires that each process requests resources in numerical order (increasing or decreasing) of enumeration. With this rule, the resource-allocation graph can never have a cycle.
For example, provide a global numbering of all the resources, as shown

1 Card reader
2 Printer
3 Plotter
4 Tape drive
5 Card punch

Now the rule is this: processes can request resources whenever they want to, but all requests must be made in numerical order. A process may request first a printer and then a tape drive (order: 2, 4), but it may not request first a plotter and then a printer (order: 3, 2). The problem with this strategy is that it may be impossible to find an ordering that satisfies everyone.
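
The same idea is routinely applied to locks. A sketch assuming POSIX threads (the device numbering comes from the table above; everything else is illustrative): every thread acquires mutexes in the same global order, so a cycle of waiting can never form.

#include <pthread.h>

/* Global numbering from the table above: printer = 2, tape drive = 4. */
static pthread_mutex_t printer_lock = PTHREAD_MUTEX_INITIALIZER;  /* resource #2 */
static pthread_mutex_t tape_lock    = PTHREAD_MUTEX_INITIALIZER;  /* resource #4 */

void use_printer_and_tape(void)
{
    /* Always lock in increasing numerical order: #2 before #4.
       If every thread obeys this rule, a circular wait is impossible. */
    pthread_mutex_lock(&printer_lock);
    pthread_mutex_lock(&tape_lock);

    /* ... use both devices ... */

    pthread_mutex_unlock(&tape_lock);
    pthread_mutex_unlock(&printer_lock);
}

int main(void)
{
    use_printer_and_tape();
    return 0;
}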


Process
Definition

A program in Execution is called process.

What is the relation between a process and a program? Loosely speaking, they are the same beast with different names: when the beast is sleeping (not executing) it is called a program, and when it is executing it is called a process. To be precise, however, a process is not the same as a program. A process is more than the program code; it is an 'active' entity, as opposed to a program, which is considered a 'passive' entity. As we all know, a program is an algorithm expressed in some suitable notation (e.g., a programming language). Being passive, a program is only a part of a process.

In the process model, all software on the computer is organized into a number of sequential processes. A process includes the PC, registers, and variables. Conceptually, each process has its own virtual CPU. In reality, the CPU switches back and forth among processes. (The rapid switching back and forth is called multiprogramming.)

Process State

The process state consists of everything necessary to resume the process's execution if it is somehow put aside temporarily. It consists of at least the following:

Code for the program.
Program's static data.
Program's dynamic data.
Program's procedure call stack.
Contents of general-purpose registers.
Contents of the program counter (PC).
Contents of the program status word (PSW).
Operating system resources in use.

Process Operations:-

Process Creation

In general-purpose systems, some way is needed to create processes as needed during operation. There are four principal events that lead to process creation:

System initialization.
Execution of a process-creation system call by a running process.
A user request to create a new process.
Initiation of a batch job.

Foreground processes interact with users. Background processes stay in the background, sleeping, but suddenly spring to life to handle activity such as email, web pages, printing, and so on. Background processes are called daemons. In UNIX, the fork system call creates an exact clone of the calling process.

A process may create a new process by means of a create-process system call. When it does so, the creating process is called the parent process and the created one is called the child process. After a process is created, both the parent and the child have their own distinct address spaces. If either process changes a word in its address space, the change is not visible to the other process.
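
A minimal sketch of parent and child creation on a UNIX-like system, using the fork call mentioned above:

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();          /* clone the calling process */

    if (pid < 0) {
        perror("fork");          /* creation failed */
        return 1;
    } else if (pid == 0) {
        /* Child: has its own copy of the address space. */
        printf("child: pid=%d\n", getpid());
    } else {
        /* Parent: fork returned the child's pid. */
        wait(NULL);              /* wait for the child to terminate */
        printf("parent: created child %d\n", pid);
    }
    return 0;
}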

Following are some reasons for creation of a process

User logs on.
User starts a program.
The operating system creates a process to provide a service, e.g., to manage a printer.
Some program starts another process, e.g., Netscape calls xv to display a picture.

Process Termination

A process terminates when it finishes executing its last statement. Its resources are returned to the system, it is purged from any system lists or tables, and its process control block (PCB) is erased, i.e., the PCB's memory space is returned to a free memory pool. A process usually terminates for one of the following reasons:

Normal Exit: Most processes terminate because they have done their job. In UNIX, this call is exit.
Error Exit: The process discovers a fatal error. For example, a user tries to compile a program that does not exist.
Fatal Error: An error caused by the process due to a bug in the program, for example, executing an illegal instruction, referencing non-existent memory, or dividing by zero.
Killed by another Process: A process executes a system call telling the operating system to terminate some other process. In UNIX, this call is kill.

Process States:-

A process goes through a series of discrete process states.

New State The process being created.


Terminated State The process has finished execution.
Blocked (waiting) State When a process blocks, it does so because logically it cannot continue,
typically because it is waiting for input that is not yet available. Formally, a process is said to be
blocked if it is waiting for some event to happen (such as an I/O completion) before it can proceed.
In this state a process is unable to run until some external event happens.
Running State A process is said to be running if it currently has the CPU, that is, actually using
the CPU at that particular instant.
Ready State: A process is said to be ready if it could use a CPU if one were available. It is runnable but temporarily stopped to let another process run.

Logically, the 'Running' and 'Ready' states are similar. In both cases the process is willing to run, only in
the case of 'Ready' state, there is temporarily no CPU available for it. The 'Blocked' state is different
from the 'Running' and 'Ready' states in that the process cannot run, even if the CPU is available.

Process State Transitions:-

Following are Six(6) possible transitions among above mentioned five (5) states:-

Transition 1 occurs when a process discovers that it cannot continue. If a running process initiates an I/O operation before its allotted time expires, it voluntarily relinquishes the CPU.

This state transition is:

Block (process-name): Running → Blocked.

Transition 2 occurs when the scheduler decides that the running process has run long enough and
it is time to let another process have CPU time.

This state transition is:

Time-Run-Out (process-name): Running → Ready.

Transition 3 occurs when all other processes have had their share and it is time for the first
process to run again

This state transition is:

Dispatch (process-name): Ready → Running.

Transition 4 occurs when the external event for which a process was waiting (such as arrival of
input) happens.

This state transition is:

Wakeup (process-name): Blocked → Ready.

Transition 5 occurs when the process is created.

This state transition is:

Admitted (process-name): New → Ready.

Transition 6 occurs when the process has finished execution.

This state transition is:

Exit (process-name): Running → Terminated.

Process Control Block:-

A process in an operating system is represented by a data structure known as a process control block
(PCB) or process descriptor. The PCB contains important information about the specific process
including

The current state of the process i.e., whether it is ready, running, waiting, or whatever.
Unique identification of the process in order to track "which is which" information.
A pointer to parent process.
Similarly, a pointer to child process (if it exists).
The priority of process (a part of CPU scheduling information).
Pointers to locate memory of processes.
A register save area.
The processor it is running on.

The PCB is a central store of information that allows the operating system to locate all the key information about a process. Thus, the PCB is the data structure that defines a process to the operating system.

Threads

A thread is a single sequential stream of execution within a process. Because threads have some of the properties of processes, they are sometimes called lightweight processes. Within a process, threads allow multiple streams of execution. In many respects, threads are a popular way to improve applications through parallelism. The CPU switches rapidly back and forth among the threads, giving the illusion that the threads are running in parallel. Like a traditional process (i.e., a process with one thread), a thread can be in any of several states (Running, Blocked, Ready or Terminated). Each thread has its own stack: since threads generally call different procedures, they have different execution histories, and this is why each thread needs its own stack. In an operating system that has a thread facility, the basic unit of CPU utilization is a thread. A thread has, or consists of, a program counter (PC), a register set, and a stack space. Threads are not independent of one another the way processes are; as a result, threads share with other threads their code section, data section and OS resources.
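
A minimal sketch assuming POSIX threads: two threads share the process's data section, but each gets its own stack.

#include <pthread.h>
#include <stdio.h>

static int shared = 42;                 /* data section: visible to all threads */

static void *worker(void *arg)
{
    int id = *(int *)arg;               /* id lives on this thread's own stack */
    printf("thread %d sees shared = %d\n", id, shared);
    return NULL;
}

int main(void)
{
    pthread_t t[2];
    int id[2] = { 0, 1 };
    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, worker, &id[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);
    return 0;
}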

Processes Vs Threads:-

Similarities

Like processes, threads share the CPU, and only one thread is active (running) at a time.
Like processes, threads within a process execute sequentially.
Like processes, a thread can create children.
And like a process, if one thread is blocked, another thread can run.

Differences

Unlike processes, threads are not independent of one another.
Unlike processes, all threads can access every address in the task.
Unlike processes, threads are designed to assist one another. Note that processes might or might not assist one another, because processes may originate from different users.

Why Threads?

Following are some reasons why we use threads in designing operating systems.

1. They only need a stack and storage for registers; therefore, threads are cheap to create.
2. Threads use very few resources of the operating system in which they are working. That is, threads do not need a new address space, global data, program code or operating system resources.
3. Context switching is fast when working with threads. The reason is that we only have to save and/or restore the PC, the SP and the registers.

Types of threads:-

1. User-Level Threads
2. Kernel-Level Threads

Advantages of Threads over Multiple Processes:-

Context Switching: Threads are very inexpensive to create and destroy, and they are inexpensive to represent. For example, they require space to store the PC, the SP, and the general-purpose registers, but they do not require space for memory-management information, information about open files, I/O devices in use, etc. With so little context, it is much faster to switch between threads; in other words, a context switch using threads is relatively easy.
Sharing: Threads allow the sharing of many resources that cannot be shared between processes, for example, the code section, the data section, and operating system resources like open files.

Disadvantages of Threads over Multiple processes:-

Blocking: The major disadvantage is that if the kernel is single-threaded, a system call by one thread will block the whole process, and the CPU may be idle during the blocking period.
Security: Since there is extensive sharing among threads, there is a potential security problem. It is quite possible that one thread overwrites the stack of another thread (or damages shared data), although this is very unlikely, since threads are meant to cooperate on a single task.


Interprocess Communication
Since processes frequently need to communicate with other processes, there is a need for well-structured communication, without using interrupts, among processes.

There are two models of Interprocess communication:-

1. Shared Memory: A region of memory that is shared by the cooperating processes is established. Processes can then exchange information by reading and writing data to the shared region.
2. Message Passing: Communication takes place by means of messages exchanged between the cooperating processes.
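
A minimal sketch of the message-passing model on a UNIX-like system, using a pipe between a parent and a child process (the message text is illustrative):

#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    pipe(fd);                    /* fd[0] = read end, fd[1] = write end */

    if (fork() == 0) {
        /* Child: receives the message. */
        char buf[64];
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        buf[n > 0 ? n : 0] = '\0';
        printf("child received: %s\n", buf);
        return 0;
    }
    /* Parent: sends the message. */
    close(fd[0]);
    const char *msg = "hello from parent";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    wait(NULL);
    return 0;
}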

Race Conditions

In operating systems, processes that work together often share some common storage (main memory, a file, etc.) that each can read and write. When two or more processes are reading or writing some shared data and the final result depends on who runs precisely when, the situation is called a race condition. Concurrently executing threads that share data need to synchronize their operations and processing in order to avoid race conditions on shared data: only one thread at a time should be allowed to examine and update the shared variable.

Critical Section

How to avoid race conditions?

The key to preventing trouble involving shared storage is to find some way to prohibit more than one process from reading and writing the shared data simultaneously. That part of the program where the shared memory is accessed is called the critical section.

To avoid race conditions and flawed results, one must identify the code in the critical section of each thread. The important point is that when one process is executing shared modifiable data in its critical section, no other process is allowed to execute in its own critical section. Thus, the execution of critical sections by the processes is mutually exclusive in time.

Mutual Exclusion

While one process is executing (accessing) the shared variable, all other processes desiring to do so at the same moment should be kept waiting; when that process has finished with the shared variable, one of the waiting processes should be allowed to proceed. In this fashion, each process executing the shared data (variables) excludes all others from doing so simultaneously. This is called Mutual Exclusion.

Mutual Exclusion Conditions

If we could arrange matters such that no two processes were ever in their critical sections simultaneously,
we could avoid race conditions. We need four conditions to hold to have a good solution for the critical
section problem (mutual exclusion).

No two processes may be simultaneously inside their critical sections.
No assumptions are made about the relative speeds of processes or the number of CPUs.
No process outside its critical section should block other processes.
No process should have to wait arbitrarily long to enter its critical section.

Proposals for Achieving Mutual Exclusion:-


Proposal 1 -Disabling Interrupts (Hardware Solution)

Each process disables all interrupts just after entering its critical section and re-enables them just before leaving it. With interrupts turned off, the CPU cannot be switched to another process. Hence, no other process will enter its critical section, and mutual exclusion is achieved.

Conclusion
Disabling interrupts is a useful technique within the kernel of an operating system, but it is not appropriate as a general mutual exclusion mechanism for user processes.

Proposal 2 - Lock Variable (Software Solution)

In this solution, we consider a single, shared lock variable, initially 0. When a process wants to enter its critical section, it first tests the lock. If the lock is 0, the process sets it to 1 and then enters the critical section. If the lock is already 1, the process just waits until the lock variable becomes 0. Thus, a 0 means that no process is in its critical section, and a 1 means to wait: some process is in its critical section.

Conclusion
The flaw in this proposal can be best explained by example. Suppose process A sees that the lock is 0.
Before it can set the lock to 1 another process B is scheduled, runs, and sets the lock to 1. When the
process A runs again, it will also set the lock to 1, and two processes will be in their critical section
simultaneously.
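
The flaw is that the test and the set are two separate steps. As a sketch of how hardware support repairs this (using C11 atomics, which are not part of the original proposal), an atomic exchange makes test-and-set a single indivisible step:

#include <stdatomic.h>

static atomic_int lock = 0;      /* 0 = free, 1 = held */

void enter_critical(void)
{
    /* Atomically set the lock to 1 and get its previous value.
       Test and set are now one indivisible step, so the race
       described above cannot occur. */
    while (atomic_exchange(&lock, 1) == 1)
        ;                        /* busy-wait until the lock was free */
}

void leave_critical(void)
{
    atomic_store(&lock, 0);
}

int main(void)
{
    enter_critical();
    /* ... critical section ... */
    leave_critical();
    return 0;
}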

Proposal 3 - Strict Alternation

In this proposed solution, the integer variable 'turn' keeps track of whose turn it is to enter the critical section. Initially, process A inspects turn, finds it to be 0, and enters its critical section. Process B also finds it to be 0, and so sits in a loop continually testing 'turn' to see when it becomes 1. Continuously testing a variable while waiting for some value to appear is called busy-waiting.

Conclusion
Taking turns is not a good idea when one of the processes is much slower than the other. Suppose process 0 finishes its critical section quickly, so both processes are now in their noncritical sections. This situation violates condition 3 mentioned above.

Using Systems calls 'sleep' and 'wakeup'

Basically, what the above-mentioned solutions do is this: when a process wants to enter its critical section, it checks to see if entry is allowed. If it is not, the process goes into a tight loop and waits (i.e., busy-waits) until it is allowed to enter. This approach wastes CPU time.

Now consider a pair of interprocess communication primitives: sleep and wakeup.

Sleep:- It is a system call that causes the caller to block, that is, be suspended until some other
process wakes it up.

Wakeup:- It is a system call that wakes up the process.

Conclusion
This approach also leads to the same race conditions we have seen in earlier approaches. The essence of the problem is that a wakeup call sent to a process that is not yet sleeping is lost.

Semaphores
E.W. Dijkstra abstracted the key notion of mutual exclusion in his concepts of semaphores.

Definition
A semaphore is a protected variable whose value can be accessed and altered only by the operations P and V and an initialization operation called 'semaphore initialize'.

Binary semaphores can assume only the value 0 or the value 1. Counting semaphores, also called general semaphores, can assume any nonnegative value.

The P (or wait or sleep or down) operation on a semaphore S, written as P(S) or wait(S), operates as follows:

P(S): IF S > 0
THEN S := S - 1
ELSE (wait on S)

The V (or signal or wakeup or up) operation on semaphore S, written as V(S) or signal (S), operates as
follows:

V(S): IF (one or more processes are waiting on S)
THEN (let one of these processes proceed)
ELSE S := S + 1

It is guaranteed that once a semaphore operation has started, no other process can access the semaphore until the operation has completed. Mutual exclusion on the semaphore, S, is enforced within P(S) and V(S).

If several processes attempt a P(S) simultaneously, only one process will be allowed to proceed. The other processes will be kept waiting, but the implementation of P and V guarantees that processes will not suffer indefinite postponement.

Semaphores solve the lost-wakeup problem.
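
A sketch of P and V using POSIX semaphores, where sem_wait plays the role of P and sem_post the role of V (the shared counter is illustrative):

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t mutex;              /* binary semaphore guarding the shared data */
static int shared = 0;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);        /* P(S): decrement, or block until S > 0 */
        shared++;                /* critical section */
        sem_post(&mutex);        /* V(S): increment, waking a waiter if any */
    }
    return NULL;
}

int main(void)
{
    sem_init(&mutex, 0, 1);      /* initial value 1 = binary semaphore */
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);   /* 200000: no updates were lost */
    return 0;
}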

CPU/Process Scheduling
The assignment of physical processors to processes allows processors to accomplish work. The problem of
determining when processors should be assigned and to which processes is called processor scheduling
or CPU scheduling.

When more than one process is runnable, the operating system must decide which one to run first. The part of the operating system concerned with this decision is called the scheduler, and the algorithm it uses is called the scheduling algorithm.

Preemptive Vs Nonpreemptive Scheduling

The Scheduling algorithms can be divided into two categories with respect to how they deal with clock
interrupts.

Nonpreemptive Scheduling

A scheduling discipline is nonpreemptive if, once a process has been given the CPU, the CPU cannot be
taken away from that process.

Following are some characteristics of nonpreemptive scheduling

1. In a nonpreemptive system, short jobs are made to wait by longer jobs, but the overall treatment of all processes is fair.
2. In a nonpreemptive system, response times are more predictable, because incoming high-priority jobs cannot displace waiting jobs.

Preemptive Scheduling

A scheduling discipline is preemptive if, once a process has been given the CPU, the CPU can be taken away from it. The strategy of allowing processes that are logically runnable to be temporarily suspended is called preemptive scheduling.

Following are some scheduling algorithms:-

1. FCFS Scheduling.
2. Round Robin Scheduling.
3. SJF Scheduling.
4. SRT Scheduling.
5. Priority Scheduling.
6. Multilevel Queue Scheduling.
7. Multilevel Feedback Queue Scheduling.

First-Come-First-Served (FCFS) Scheduling

Other names of this algorithm are:

First-In-First-Out (FIFO)
Run-to-Completion
Run-Until-Done

It is the simplest scheduling algorithm. Processes are dispatched according to their arrival time in the ready queue. Being a nonpreemptive discipline, once a process has the CPU, it runs to completion. FCFS scheduling is fair in the formal or human sense of fairness, but it is unfair in the sense that long jobs make short jobs wait and unimportant jobs make important jobs wait.

FCFS is more predictable than most other schemes. The FCFS scheme is not useful in scheduling interactive users because it cannot guarantee good response time.

Round Robin Scheduling

One of the oldest, simplest, fairest and most widely used algorithm is round robin (RR).

In the round robin scheduling, processes are dispatched in a FIFO manner but are given a limited amount
of CPU time called a time-slice or a quantum.

If a process does not complete before its CPU-time expires, the CPU is preempted and given to the next
process waiting in a queue. The preempted process is then placed at the back of the ready list.

Round Robin Scheduling is preemptive (at the end of time-slice) therefore it is effective in time-sharing
environments in which the system needs to guarantee reasonable response times for interactive users.

In any event, the average waiting time under round robin scheduling is often quite long.
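
A small sketch that simulates round robin over a handful of CPU bursts, all arriving at time 0, and reports the average waiting time (the burst lengths and the quantum are made-up values for illustration):

#include <stdio.h>

#define N 3
#define QUANTUM 4

int main(void)
{
    int burst[N] = { 10, 5, 8 };         /* illustrative CPU bursts */
    int remaining[N], waiting[N] = { 0 };
    for (int i = 0; i < N; i++)
        remaining[i] = burst[i];

    int left = N;
    while (left > 0) {
        for (int i = 0; i < N; i++) {    /* cycle through runnable processes */
            if (remaining[i] == 0)
                continue;
            int slice = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
            /* Every other unfinished process waits for this slice. */
            for (int j = 0; j < N; j++)
                if (j != i && remaining[j] > 0)
                    waiting[j] += slice;
            remaining[i] -= slice;
            if (remaining[i] == 0)
                left--;
        }
    }
    int total = 0;
    for (int i = 0; i < N; i++)
        total += waiting[i];
    printf("average waiting time = %.2f\n", (double)total / N);
    return 0;
}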

Shortest-Job-First (SJF) Scheduling


Shortest-Job-First (SJF) is a non-preemptive discipline in which the waiting job (or process) with the smallest estimated run-time-to-completion is run next. In other words, when the CPU is available, it is assigned to the process that has the smallest next CPU burst.

The SJF scheduling is especially appropriate for batch jobs for which the run times are known in advance. Since the SJF scheduling algorithm gives the minimum average waiting time for a given set of processes, it is provably optimal.

The obvious problem with SJF scheme is that it requires precise knowledge of how long a job or process
will run, and this information is not usually available.

Like FCFS, SJF is non-preemptive; therefore, it is not useful in timesharing environment in which
reasonable response time must be guaranteed.

Shortest-Remaining-Time (SRT) Scheduling

The SRT is the preemptive counterpart of SJF and is useful in time-sharing environments.
In SRT scheduling, the process with the smallest estimated run-time-to-completion is run next, including new arrivals.
In the SJF scheme, once a job begins executing, it runs to completion.
In the SRT scheme, a running process may be preempted by a new arrival with a shorter estimated run-time.
The SRT algorithm has higher overhead than its counterpart, SJF.
In this scheme, small processes will run almost immediately upon arrival. However, longer jobs have an even longer mean waiting time.

Priority Scheduling

The basic idea is straightforward: each process is assigned a priority, and the highest-priority runnable process is allowed to run. Equal-priority processes are scheduled in FCFS order. The Shortest-Job-First (SJF) algorithm is a special case of the general priority scheduling algorithm.

Priority can be defined either internally or externally.

Internally defined priorities use some measurable quantities or qualities to compute priority of a
process, Example: time limits, memory requirements, etc.

Externally defined priorities are set by criteria that are external to operating system such as the
importance of process, etc.

Priority scheduling can be either preemptive or non-preemptive

A preemptive priority algorithm will preempt the CPU if the priority of the newly arrived process is higher than the priority of the currently running process.
A non-preemptive priority algorithm will simply put the new process at the head of the ready queue.

A major problem with priority scheduling is indefinite blocking or starvation. A solution to the problem
of indefinite blockage of the low-priority process is aging. Aging is a technique of gradually increasing
the priority of processes that wait in the system for a long period of time.

Multilevel Queue Scheduling

A multilevel queue scheduling algorithm partitions the ready queue into several separate queues. In multilevel queue scheduling, processes are permanently assigned to one queue, based on some property of the process, such as:

Memory size
Process priority
Process type

This algorithm chooses the process from the occupied queue that has the highest priority, and runs that process either preemptively or non-preemptively.

Each queue has its own scheduling algorithm or policy.

Multilevel Feedback Queue Scheduling

The multilevel feedback queue scheduling algorithm allows a process to move between queues. It uses many ready queues and associates a different priority with each queue.

The algorithm chooses the process with the highest priority from the occupied queues and runs that process either preemptively or non-preemptively. If the process uses too much CPU time, it is moved to a lower-priority queue. Similarly, a process that waits too long in a lower-priority queue may be moved to a higher-priority queue. Note that this form of aging prevents starvation.

Remote Procedure Call (RPC):-

It is a procedure-call mechanism for use between systems with network connections. A message-based communication scheme is used to provide the remote service. Each message is addressed to an RPC daemon listening at a port on the remote system, and each contains an identifier of the function to execute and the parameters to pass to that function. The function is then executed as requested, and any output is sent back to the requester in a separate message.

A port is simply a number included at the start of a message packet. Whereas a system normally has one network address, it can have many ports within that address to differentiate the many network services it supports.


Input Output Management


Magnetic Disk Structures:

The two surfaces of a disk platter are covered with a magnetic material; information is recorded magnetically on the platters.

A read-write head flies just above each surface of every platter. The heads are attached to a disk arm that moves all the heads as a unit. The surface of a platter is logically divided into circular tracks, which are subdivided into sectors. The set of tracks that are at one arm position makes up a cylinder. The size of a logical block is usually 512 bytes.
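
Given that geometry, the conventional mapping from a (cylinder, head, sector) address to a logical block number can be sketched as follows (the disk parameters are illustrative, not from any particular drive):

#include <stdio.h>

/* Illustrative geometry. */
#define HEADS             16   /* read-write heads = platter surfaces */
#define SECTORS_PER_TRACK 63   /* sectors in one track */

/* Logical block addressing: number blocks cylinder by cylinder,
   track by track within a cylinder, sector by sector within a track.
   Sectors are conventionally numbered starting at 1. */
long chs_to_lba(long cylinder, long head, long sector)
{
    return (cylinder * HEADS + head) * SECTORS_PER_TRACK + (sector - 1);
}

int main(void)
{
    printf("LBA of (cyl 2, head 3, sector 4) = %ld\n", chs_to_lba(2, 3, 4));
    return 0;
}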
Device Drivers:
The device drivers present a uniform device access interface to the I/O subsystem.
Categories of I/O devices:
1. Character-stream or block: A character-stream device transfers bytes one by one, whereas a block device transfers a block of bytes as a unit.
2. Sequential or random-access: A sequential device transfers data in a fixed order determined by
the device, whereas the user of a random-access device can instruct the device to seek to any of the
available data storage locations.
3. Synchronous or Asynchronous: A synchronous device performs data transfers with predictable
response times. An Asynchronous device exhibits irregular or unpredictable response times.
4. Sharable or dedicated: A sharable device can be used concurrently by several processes or
threads; a dedicated device cannot.

Application I/O Interface:


The detailed differences among I/O devices are encapsulated in kernel modules called device drivers, which internally are custom-tailored to each device but export one of the standard interfaces.
The purpose of the device-driver layer is to hide the differences among device controllers from the I/O subsystem of the kernel.


Disk Arm Scheduling:


Seek Time is the time for the disk arm to move the heads to the cylinder containing the desired
sector.
Rotational Latency is the additional time for the disk to rotate the desired sector to the disk head.
The disk bandwidth is the total number of bytes transferred, divided by the total time between the
first request for service and the completion of the last transfer.

Seek time and bandwidth can be improved by scheduling the servicing of disk I/O requests in a good order.

Types of Disk Arm Scheduling:


1. FCFS Scheduling
2. SSTF Scheduling
3. SCAN Scheduling
4. C-SCAN Scheduling
5. LOOK Scheduling

1. FCFS Scheduling (First-Come, First-Served):


This type of scheduling services requests in the order in which they arrive in the queue.
Consider, for example, a disk queue with requests for I/O to blocks on cylinders:

Queue = 98, 183, 37, 122, 14, 124, 65, 67
Head starts at 53

(Head-movement figure omitted.)
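
For the queue above, the total head movement under FCFS can be computed directly; a small sketch:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int queue[] = { 98, 183, 37, 122, 14, 124, 65, 67 };
    int n = sizeof queue / sizeof queue[0];
    int head = 53, total = 0;

    /* FCFS: service requests in arrival order, summing seek distances. */
    for (int i = 0; i < n; i++) {
        total += abs(queue[i] - head);
        head = queue[i];
    }
    printf("total head movement = %d cylinders\n", total);  /* 640 */
    return 0;
}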


2. SSTF Scheduling (Shortest-Seek-Time-First):
This algorithm services all the requests close to the current head position before moving the head far away to service other requests. The SSTF algorithm selects the request with the minimum seek time from the current head position. Since seek time increases with the number of cylinders traversed by the head, SSTF chooses the pending request closest to the current head position.
But it may cause starvation of some requests.
(Head-movement figure omitted.)

3. SCAN Scheduling:
In this algorithm, the disk arm starts at one end of the disk and moves towards the other end, servicing requests as it reaches each cylinder, until it gets to the other end of the disk. At the other end, the direction of head movement is reversed and servicing continues. This algorithm is also known as the elevator algorithm.
(Head-movement figure omitted.)


4. C-SCAN Scheduling:
It is a variant of SCAN designed to provide a more uniform wait time. Like SCAN, C-SCAN moves the head from one end of the disk to the other, servicing requests along the way. But when the head reaches the other end, it immediately returns to the beginning of the disk without servicing any requests on the return trip. This algorithm treats the cylinders as a circular list that wraps around from the final cylinder to the first one.
5. LOOK Scheduling:
Both SCAN and C-SCAN move the disk arm across the full width of the disk, which wastes access time. In LOOK scheduling, the arm goes only as far as the final request in each direction, then reverses direction immediately without going all the way to the end of the disk.
(Head-movement figure omitted.)

DMA (Direct Memory Access):-

It is a procedure for large transfers which avoids burdening the main CPU with watching and monitoring status bits and feeding data into a controller register one byte at a time (a process termed programmed I/O).
To initiate a DMA transfer, the host writes a DMA command block into memory. This block contains a pointer to the source of the transfer, a pointer to the destination of the transfer, and a count of the number of bytes to be transferred.
The CPU writes the address of this command to the DMA controller, and then goes on with
other work.
The DMA controller proceeds to operate the memory bus directly, placing addresses on the
bus to perform transfers without the help of the main CPU.
Handshaking between the DMA controller and the device controller is performed via a pair
of wires called DMA-request and DMA-acknowledge.
The device controller places a signal on the DMA-request wire when a word of data is available for transfer. This signal causes the DMA controller to seize the memory bus, place the desired address on the memory-address wires, and place a signal on the DMA-acknowledge wire. When the device controller receives the DMA-acknowledge signal, it transfers the word of data to memory and removes the DMA-request signal.
When the entire transfer is finished, the DMA controller interrupts the CPU.

