
Types of Operating System

Operating systems have existed since the very first computer generation, and they keep
evolving with time. There are many operating systems; this evolution took place because of
customer demand and advances in technology.
Types of Operating Systems: Some of the widely used operating systems are as follows:

1. Batch Operating System –


 The batch operating system was the very first and most basic type of operating
system to be introduced.
 It performs a single task in a given interval of time.
 According to the concept of batch operating systems, until a job (or process) has been
completely processed and successfully executed, no other job or process can be
performed at the same time.
 The users of a batch operating system do not interact with the computer directly.
 Each user prepares a job on an off-line device such as punch cards and submits it to the
computer operator.
 Jobs with similar needs are batched together and run as a group in order to increase the
processing speed.
ADVANTAGES:
The following are the advantages of using a batch operating system:
 Reduction of manual work.
 Fast and well managed execution.
 Reduction in the repetitive usage of the punch cards and magnetic tapes.
 Reduction in CPU’s idle time.
DISADVANTAGES:
The following are the disadvantages of the batch operating systems:
 SEQUENTIAL EXECUTION: Jobs in a batch system are always executed
sequentially.
 STARVATION: Different jobs may take different amounts of time to execute, which
leads to the starvation of some jobs.
 NO INTERACTION BETWEEN THE USER AND THE JOB: The user can no
longer access the job once it is submitted to the computer.
2. Multiprogramming operating System

 Multiprogramming is the ability of an operating system to execute more than one
program on a single-processor machine.
 More than one task/program/job/process can reside in main memory at one point in
time.
 The operating system picks and begins to execute one of the jobs in memory.
 When that job needs to wait (for example, for I/O), the operating system simply switches
to, and executes, another job; when that job in turn needs to wait, the CPU switches to yet
another job, and so on.

Example : A computer running Excel and a Firefox browser simultaneously is an example of
multiprogramming. Multiprogramming is the interleaved execution of multiple jobs by the
same computer. In a multiprogramming system, when one program is waiting for an I/O transfer,
another program is ready to utilize the CPU, so it is possible for several jobs to share the
CPU's time.

Figure. A simple process of multiprogramming OS


As shown in the figure, at a particular moment job 'A' is not utilizing the CPU because it is busy
with I/O operations. Hence the CPU executes job 'B' instead. Another job, 'C', is waiting
for its turn on the CPU. In this way the CPU is never idle and its time is
utilized to the maximum.
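The switching behaviour described above can be sketched in a few lines of Python. This is only a toy model (the job names and burst lengths are invented for illustration): whenever the running job issues an I/O request, the CPU is handed to the next ready job, and the I/O is assumed to finish before the job's next turn.

```python
from collections import deque

# Each job is a queue of bursts: ("cpu", ticks) or ("io", ticks).
jobs = {
    "A": deque([("cpu", 2), ("io", 3), ("cpu", 1)]),
    "B": deque([("cpu", 3), ("io", 2)]),
    "C": deque([("cpu", 2)]),
}

ready = deque(["A", "B", "C"])   # jobs waiting for the CPU
trace = []                       # (job, ticks) for every CPU burst, in order

while ready:
    name = ready.popleft()
    kind, ticks = jobs[name].popleft()
    trace.append((name, ticks))          # job runs until it must wait
    if jobs[name] and jobs[name][0][0] == "io":
        jobs[name].popleft()             # job blocks; I/O happens off the CPU
    if jobs[name]:
        ready.append(name)               # more CPU work remains for later

print(trace)
```

When job A blocks for I/O after its first burst, the CPU immediately moves to job B, matching the situation in the figure.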
3. Time-Sharing Operating Systems

 Time sharing (or multitasking) is a logical extension of multiprogramming.


 In time-sharing systems, the CPU executes multiple jobs by switching among them, but the
switches occur so frequently that the users can interact with each program while it is running.
 Each job is given some time to execute, so that all the jobs work smoothly.
 The jobs can come from a single user or from different users.
 The time that each job gets to execute is called the quantum. After this time interval is over,
the OS switches over to the next job.
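The quantum mechanism described above can be sketched as a simple round-robin loop. The quantum and the burst lengths below are made-up values for illustration:

```python
from collections import deque

QUANTUM = 2                                   # time slice given to each job
jobs = deque([("A", 5), ("B", 3), ("C", 1)])  # (name, remaining CPU ticks)
order = []                                    # which job ran, and for how long

while jobs:
    name, remaining = jobs.popleft()
    run = min(QUANTUM, remaining)             # run for at most one quantum
    order.append((name, run))
    remaining -= run
    if remaining:
        jobs.append((name, remaining))        # unfinished: back of the queue

print(order)
```

Every job gets the CPU regularly, so a user interacting with any of them sees progress even while the others are running.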
ADVANTAGES OF TIME SHARING OPERATING SYSTEMS:

The following are the advantages of using the time sharing operating systems:
 Each task gets an equal opportunity
 Less chances of duplication of software
 CPU idle time can be reduced
Disadvantages of Time-Sharing OS:
 Reliability problem
 One must have to take care of security and integrity of user programs and data
 Data communication problem
Examples of Time-Sharing OSs are: Multics, Unix etc.

In modern operating systems, we are able to play MP3 music, edit documents in Microsoft
Word, and surf the web in Google Chrome all simultaneously; this is accomplished by means of
multitasking.
4. Distributed Operating System –

 A distributed operating system is a kind of system which uses MULTIPLE CENTRAL
PROCESSORS.
 Various autonomous interconnected computers communicate with each other over a shared
communication network.
 The independent systems possess their own memory units and CPUs. They are referred to as
loosely coupled systems or distributed systems. These systems' processors may differ in size
and function.
 The major benefit of working with these types of operating systems is that a user can
access files or software which are not actually present on his own system but on some other
system connected within this network, i.e., remote access is enabled among the devices
connected in that network.
Advantages:

1. Since all systems are independent, the failure of any one of them will not affect the
communication among the others.
2. Since resources are shared, computation is fast and durable.
3. Distributed systems are easily scalable, as many systems can easily be added to the network.
4. Data exchange within the network is very fast and reliable.

Disadvantages:

1. Since the entire communication relies on a single network, failure of this network will stop
the entire communication.
2. The languages used to build distributed systems are not yet well defined.
3. These systems are expensive and not readily available.

Examples of Distributed Operating System are- LOCUS etc.

5. Network Operating System –

 These systems run on a server and provide the capability to manage data, users, groups,
security, applications, and other networking functions.
 These types of operating systems allow shared access to files, printers, security, applications,
and other networking functions over a small private network.
 One more important aspect of network operating systems is that all the users are well aware
of the underlying configuration, of all other users within the network, their individual
connections, etc., and that is why these computers are popularly known as tightly coupled
systems.
Examples of Network Operating System are: Microsoft Windows Server 2003, Microsoft
Windows Server 2008, UNIX, Linux, Mac OS X, Novell NetWare, and BSD etc.

6. Real-Time Operating System –

A real-time operating system is used for real-time applications, that is, for applications
where data processing must be done within a fixed and small quantum of time. These types of
OSs serve real-time systems. The time interval required to process and respond to inputs is
very small; this time interval is called the response time. "Real time" means that the system is
subjected to real-time constraints, i.e., the response should be guaranteed within a specified
timing constraint or the system should meet the specified deadline. For example: flight control
systems, real-time monitors, etc.
APPLICATIONS:
The following are the places where the real time operating systems are used:
 They are used in the scientific experiments.
 Medical imaging systems
 Industrial control systems
 Weapon systems
 Air traffic control systems
 Robots
 Thermal power plants etc…
The applications listed above make it easy to see why real-time operating systems cannot
tolerate any buffering delays.

There are two types of Real-Time Operating System, which are as follows:


 Hard Real-Time Systems:
These OSs are meant for applications where the time constraints are very strict and even
the shortest possible delay is not acceptable. Such systems strictly follow the definition of a
real-time system: the time constraints are strictly enforced, so no delays are accepted.
 Soft Real-Time Systems:
These OSs are for applications where the time constraints are less strict.
Examples of real-time applications are: scientific experiments, medical imaging
systems, industrial control systems, weapon systems, robots, air traffic control systems, etc.

EXAMPLES:

The following are places where the time constraints are strictly followed:

 Submarine signaling

 RADAR

 Air traffic control systems

Delays in the signaling of such systems can lead to hazardous accidents.

Operating System Structure


1. Monolithic architecture : In the monolithic systems, each component of the operating
system is contained within the kernel.
2. Layered architecture : This is an important architecture of operating system which is
meant to overcome the disadvantages of early monolithic systems
3. Microkernel architecture : In microkernel architecture, only the most important services
are put inside the kernel and rest of the OS service are present in the system application
program.
4. Hybrid architecture : Combines the best functionalities of all these approaches, and hence
this design is termed the hybrid structured operating system.

Monolithic architecture - operating system


It is the oldest architecture of the operating system. All the core software components of
the operating system are collectively known as the kernel.
The kernel is the core part of an operating system; it manages all the system resources and
acts as a bridge between the applications and the hardware of the computer. It is one of the first
programs loaded on start-up (after the bootloader).
In the monolithic systems, each component of the operating system is contained within
the kernel. All the basic services of OS like process management, file management, memory
management, exception handling, process communication etc. are all present inside the kernel
only.
Linux is a good example of a monolithic kernel.
Fig. Monolithic architecture of operating system

Advantages of Monolithic Architecture:

1. Simple and easy to implement structure

2. Faster execution due to direct access to all the services

Disadvantages of Monolithic Architecture:


1. Adding new features or removing obsolete ones is very difficult.
2. Security issues are always present because there is no isolation among the various services
present in the kernel.

Layered Architecture of Operating System


This is an important architecture of operating system which is meant to overcome the
disadvantages of early monolithic systems. In this approach, OS is split into various layers such
that all the layers perform different functionalities. The bottom layer (layer 0) is the hardware;
the highest (layer N) is the user interface.

Each layer can interact with the one just above it and the one just below it. Lowermost
layer which directly deals with the hardware is mainly meant to perform the functionality of I/O
communication and the uppermost layer which is directly connected with the application
program acts as an interface between user and operating system.

This is highly advantageous structure because all the functionalities are on different
layers and hence each layer can be tested and debugged separately.

The Microsoft Windows NT Operating System is a good example of the layered structure.
Fig. Layered Architecture of Operating System

Advantages of Layered architecture:


1. Dysfunction of one layer will not affect the entire operating system
2. Easier testing and debugging due to isolation among the layers.
3. Adding new functionalities or removing the obsolete ones is very easy.

Disadvantages of Layered architecture:


1. It is not always possible to divide the functionalities; many times they are inter-related
and cannot be separated.
2. Sometimes there is a large number of functionalities and the number of layers increases
greatly. This can degrade the performance of the system.
3. There is no communication between non-adjacent layers.

Microkernel Architecture of operating system


The basic idea of this architecture is to keep the kernel as small as possible. The
kernel is the core part of the operating system, and hence it should handle only the
most important services.
In the microkernel architecture, only the most important services are put inside the kernel and
the rest of the OS services are placed in system application programs. The user interacts with
those less important services through the system applications; the microkernel itself is solely
responsible for the three most important services of the operating system, namely:
1. Inter-process communication
2. Memory management
3. CPU scheduling
Fig. Microkernel Architecture of Operating System
The microkernel and the system applications can interact with each other by message passing as
and when required. This is an extremely advantageous architecture, since the burden on the
kernel is reduced and the less crucial services are accessible to the user, so security is improved
as well. It is widely adopted in present-day systems. Eclipse IDE is a good example of
microkernel architecture.
Advantages:
1. Kernel is small and isolated and can hence function better
2. Expanding the system is easier; a new service is simply added to the system applications
without disturbing the kernel.

3. System Call
In computing, a system call is the programmatic way in which a computer program
requests a service from the kernel of the operating system it is executed on. System
calls provide the services of the operating system to the user programs via the Application
Program Interface (API). They provide an interface between a process and the operating
system, allowing user-level processes to request services of the operating system. System calls
are the only entry points into the kernel; all programs needing resources must use system calls.

 A figure representing the execution of the system call is given as follows:


Processes execute normally in user mode until a system call interrupts this. The
system call is then executed, on a priority basis, in kernel mode. After the execution of the
system call, control returns to user mode and execution of the user process can be resumed.
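On a Unix-like system this user-mode/kernel-mode round trip can be observed from Python, whose `os` module exposes thin wrappers over the underlying system calls. Each call below traps into the kernel and returns to user mode with a result:

```python
import os

# Each of these calls crosses from user mode into kernel mode and back:
pid = os.getpid()        # getpid() system call: ask the kernel for our process ID
msg = b"written via the write() system call\n"
n = os.write(1, msg)     # write() system call on file descriptor 1 (stdout)
```

The return values (`pid`, the byte count `n`) are produced by the kernel and handed back to the user-mode process when the system call completes.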

Types of System Calls


System calls can be grouped roughly into six major categories:
1. Process control
2. File manipulation
3. Device manipulation
4. Information maintenance
5. Communication
6. Protection
Process Control:
These kinds of system calls are used to direct the processes. These system calls deal with
processes such as process creation, process termination etc. Following are functions of process
control:
i. end, abort : A running program needs to be able to halt its execution either normally (end())
or abnormally (abort()).
ii. load, execute : These system calls are used to load and execute another job.
iii. create process, terminate process
A system call for creating a new job or process is create process() or submit job() and terminate
process() to terminate a job or process that we created.
iv. get process attributes, set process attributes : These system calls provide the ability to
determine and reset the attributes of a job or process, including the job's priority, its maximum
allowable execution time, and so on.
v. wait for time : A newly created job or process may need to wait for a certain amount of time
to pass (wait time()).
vi. wait event, signal event : A process can wait for a specific event to occur (wait event()); the
jobs or processes should then signal when that event has occurred (signal event()).
vii. allocate and free memory
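On UNIX these process-control calls appear concretely as fork(), exec(), wait() and exit(). The following sketch (Unix-only, since it relies on os.fork) creates a child process, replaces its image with another program, and waits for it to terminate; the exit code 7 is an arbitrary value chosen for illustration:

```python
import os
import sys

pid = os.fork()                       # create process: duplicate ourselves
if pid == 0:
    # Child: load and execute another program (a trivial Python one-liner).
    os.execvp(sys.executable, [sys.executable, "-c", "raise SystemExit(7)"])
else:
    _, status = os.waitpid(pid, 0)    # parent waits for the child to end
    exit_code = os.WEXITSTATUS(status) if os.WIFEXITED(status) else -1
    print("child ended with exit code", exit_code)
```

This is exactly the create/terminate/wait pattern the list above describes: fork() creates the process, execvp() loads and executes a new program in it, and waitpid() lets the parent collect its termination status.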

File Manipulation
These types of system calls are used to handle files. These system calls are responsible for file
manipulation such as creating a file, reading a file, writing into a file etc.

Functions:
 create file, delete file
 open, close file
 read, write, reposition
 get and set file attributes
1. Able to create and delete files. Either system call requires the name of the file and perhaps
some of the file's attributes.
2. Once the file is created, we need to open it in order to use it.
3. We may also do read, write, or reposition.
4. Finally, we need to close the file, indicating that we are no longer using it.
5. We need to be able to determine the values of various attributes and perhaps to reset them
if necessary. File attributes include the file name, a file type, protection codes, accounting
information, and so on
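These file-manipulation calls map directly onto the low-level functions in Python's `os` module, which wrap open(), read(), write(), close(), stat() and unlink(). A minimal sketch of the whole life cycle (the file name is invented for illustration):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")

fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)  # create + open the file
os.write(fd, b"hello")                               # write into it
os.close(fd)                                         # close it

fd = os.open(path, os.O_RDONLY)                      # reopen for reading
data = os.read(fd, 100)                              # read its contents
os.close(fd)

size = os.stat(path).st_size                         # get file attributes
os.unlink(path)                                      # delete the file
```

Note that every step, from creation to deletion, is a request to the operating system; the program never touches the disk directly.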

Device Management
These types of system calls are used to deal with devices. These system calls are responsible for
device manipulation such as reading from device buffers, writing into device buffers etc.
Functions:
 request device, release device
 read, write, reposition
 get device attributes, set device attributes
 logically attach or detach devices

 A process may need several resources to execute - main memory, disk drives, access to files,
and so on. If the resources are available, they can be granted, and control can be returned to
the user process. Otherwise, the process will have to wait until sufficient resources are
available.

 The various resources controlled by the OS can be thought of as devices. Some of these
devices are physical devices (for example, tapes), while others can be thought of as abstract
or virtual devices (for example, files).

Information Maintenance
These types of system calls are used to maintain information. These system calls handle
information and its transfer between the operating system and the user program.
Functions:
 get time or date, set time or date
 get system data, set system data
 get and set process, file, or device attributes
 Many system calls exist simply for the purpose of transferring information between the user
program and the OS. For example, most systems have a system call to return the current time
and date.
 Other system calls may return information about the system, such as the number of current
users, the version number of the OS, the amount of free memory or disk space, and so on.
 In addition, the OS keeps information about all its processes, and system calls are used to
access this information. Generally, calls are also used to reset the process information.
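Python's standard library exposes several of these information-maintenance calls directly; for example (os.uname is Unix-only):

```python
import os
import time

now = time.time()          # get time: seconds since the epoch, via time()
pid = os.getpid()          # a process attribute: our own process ID
info = os.uname()          # system data: OS name, release, machine type, ...
print(info.sysname, pid, int(now))
```

Each of these simply transfers a piece of information from the operating system to the user program, which is all this category of system calls does.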
Communication
These types of system calls are used for communication. These system calls are useful for
interprocess communication. They also deal with creating and deleting a communication
connection.
Functions:
 create, delete communication connection
 send, receive messages
 transfer status information
 Attach and Detach remote devices
There are two common models of interprocess communication:
o the message-passing model and
o the shared-memory model.
 In the message-passing model, the communicating processes exchange messages with
one another to transfer information.
 In the shared-memory model, processes use shared memory creates and shared memory
attaches system calls to create and gain access to regions of memory owned by other
processes.
 Message passing is useful for exchanging smaller amounts of data, because no conflicts
need be avoided.
 Shared memory allows maximum speed and convenience of communication, since it can
be done at memory speeds when it takes place within a computer.
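Both models can be sketched with Python's `multiprocessing` module, which sits on top of the real OS primitives (pipes and shared memory regions). The worker below sends a message over a pipe (message passing) and also increments a counter that lives in shared memory; this sketch assumes a Linux-style fork start method:

```python
import multiprocessing as mp

def worker(conn, counter):
    conn.send("hello from the child")   # message passing: data copied via the kernel
    with counter.get_lock():
        counter.value += 1              # shared memory: written in place, no copy
    conn.close()

parent_end, child_end = mp.Pipe()
counter = mp.Value("i", 0)              # an integer living in shared memory
p = mp.Process(target=worker, args=(child_end, counter))
p.start()
msg = parent_end.recv()                 # blocks until the child's message arrives
p.join()
print(msg, counter.value)
```

The pipe send involves the kernel on every transfer, while the shared counter is updated at memory speed, which mirrors the trade-off described above.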

Protection
Protection provides a mechanism for controlling access to the resources provided by a
computer system. The system calls providing protection include set permission() and get
permission(), which manipulate the permission settings of resources such as files and disks. The
allow user() and deny user() system calls specify whether particular users can — or cannot — be
allowed access to certain resources.
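The set permission()/get permission() names above are generic; on UNIX the closest real calls are chmod() and stat(). A sketch using their Python wrappers:

```python
import os
import stat
import tempfile

fd, path = tempfile.mkstemp()                 # a throwaway file to protect
os.close(fd)

os.chmod(path, 0o600)                         # set permission: owner read/write only
mode = stat.S_IMODE(os.stat(path).st_mode)    # get permission bits back
print(oct(mode))
os.unlink(path)
```

After the chmod() call, the kernel will refuse access to this file by any user other than its owner, which is exactly the kind of control this category of system calls provides.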

Some examples of the above types of system calls in Windows and UNIX are as follows:
 Process control: Windows CreateProcess(), ExitProcess(), WaitForSingleObject(); UNIX
fork(), exit(), wait()
 File manipulation: Windows CreateFile(), ReadFile(), WriteFile(), CloseHandle(); UNIX
open(), read(), write(), close()
 Device manipulation: Windows SetConsoleMode(), ReadConsole(), WriteConsole(); UNIX
ioctl(), read(), write()
 Information maintenance: Windows GetCurrentProcessID(), SetTimer(), Sleep(); UNIX
getpid(), alarm(), sleep()
 Communication: Windows CreatePipe(), CreateFileMapping(), MapViewOfFile(); UNIX
pipe(), shmget(), mmap()
 Protection: Windows SetFileSecurity(), InitializeSecurityDescriptor(); UNIX chmod(),
umask(), chown()

4. Operating-System Services
An operating system provides an environment for the execution of programs. It provides
certain services to programs and to the users of those programs.

One set of operating system services provides functions that are helpful to the user.

 User interface. Almost all operating systems have a user interface (UI). This interface can
take several forms. One is a command-line interface (CLI), which uses text commands and
a method for entering them. Most commonly, a graphical user interface (GUI) is used.
Here,the interface is a window system with a pointing device to direct I/O, choose from
menus, and make selections and a keyboard to enter text.
 Program execution. The system must be able to load a program into memory and to run that
program. The program must be able to end its execution, either normally or abnormally
(indicating error).
 I/O operations. A running program may require I/O, which may involve a file or an I/O
device.
 File-system manipulation. Programs need to read and write files and directories. They also
need to create and delete them by name, search for a given file, and list file information.
Finally, some operating systems include permissions management to allow or deny access to
files or directories based on file ownership.
 Communications. There are many circumstances in which one process needs to exchange
information with another process. Such communication may occur between processes that
are executing on the same computer or between processes that are executing on different
computer systems tied together by a computer network.
 Error detection. The operating system needs to detect and correct errors
constantly. Errors may occur in the CPU and memory hardware (such as a memory error or a
power failure), in I/O devices (such as a parity error on disk, a connection failure on a
network, or lack of paper in the printer), and in the user program (such as an arithmetic
overflow, an attempt to access an illegal memory location, or a too-great use of CPU time).
 Resource allocation. When there are multiple users or multiple jobs running at the same
time, resources must be allocated to each of them. The operating system manages many
different types of resources.
 Accounting. The OS keeps track of which users use how much and what kinds of computer
resources. This record keeping may be used for accounting (so that users can be billed) or
simply for accumulating usage statistics.

5. Operating system components


An operating system provides the environment within which programs are executed. To
construct such an environment, the system is partitioned into small modules with a well-defined
interface.

A system as large and complex as an operating system can only be created by partitioning
it into smaller pieces. Each of these pieces should be a well defined portion of the system with
carefully defined inputs, outputs, and function. Many modern operating systems share the system
components outlined below.
5.1 Process Management

The CPU executes a large number of programs. A program in execution is called
a process.

A word-processing program being run by an individual user on a PC is a process. A


system task, such as sending output to a printer, can also be a process.

A process needs certain resources — including CPU time, memory, files, and I/O devices
— to accomplish its task. These resources are either given to the process when it is created or
allocated to it while it is running.

A process is the unit of work in a system. Such a system consists of a collection of


processes, some of which are operating system processes, those that execute system code, and
the rest being user processes, those that execute user code. All of those processes can potentially
execute concurrently.

The operating system is responsible for the following activities in connection with processes
managed.

o The creation and deletion of both user and system processes


o The suspension and resumption of processes.
o The provision of mechanisms for process synchronization
o The provision of mechanisms for deadlock handling.

5.2 Memory Management

Memory is central to the operation of a modern computer system. Memory is a large


array of words or bytes, each with its own address. Interaction is achieved through a sequence of
reads or writes to specific memory addresses. The CPU fetches from and stores into memory.

In order for a program to be executed, it must be mapped to absolute addresses and loaded
into memory. As the program executes, it accesses program instructions and data from memory
by generating these absolute addresses. Eventually the program terminates, its memory space
is declared available, and the next program may be loaded and executed.

In order to improve both the utilization of the CPU and the speed of the computer's response
to its users, several processes must be kept in memory. There are many different memory
management algorithms, and the choice depends on the particular situation. Selection of a
memory management scheme for a specific system depends upon many factors, but especially
upon the hardware design of the system. Each algorithm requires its own hardware support.

The operating system is responsible for the following activities in connection with memory
management.

o Keep track of which parts of memory are currently being used and by whom.
o Decide which processes are to be loaded into memory when memory space
becomes available.
o Allocate and deallocate memory space as needed.
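The allocate/deallocate responsibility can be illustrated with a toy first-fit allocator over a free list. Real memory managers are far more elaborate, but the bookkeeping idea (track the holes, carve allocations out of them, return freed blocks) is the same; the sizes below are arbitrary:

```python
# Toy first-fit allocator: memory is a list of (start, size) free holes.
free_list = [(0, 100)]            # one 100-unit hole starting at address 0

def allocate(size):
    """Carve a block out of the first hole large enough; return its start address."""
    for i, (start, hole) in enumerate(free_list):
        if hole >= size:
            if hole == size:
                free_list.pop(i)                          # hole used up exactly
            else:
                free_list[i] = (start + size, hole - size)  # shrink the hole
            return start
    return None                   # no hole large enough

def deallocate(start, size):
    """Return a block to the free list (no coalescing of adjacent holes, for brevity)."""
    free_list.append((start, size))

a = allocate(30)       # block at address 0
b = allocate(50)       # block at address 30
deallocate(a, 30)      # the hole at 0 returns to the list
c = allocate(25)       # first fit reuses the freed hole at 0
print(a, b, c, free_list)
```

The free list is exactly the "keep track of which parts of memory are in use" bookkeeping listed above, reduced to its simplest form.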
5.3 Secondary Storage Management

The main purpose of a computer system is to execute programs. These programs,


together with the data they access, must be in main memory during execution. Since the main
memory is too small to permanently accommodate all data and program, the computer system
must provide secondary storage to backup main memory.

Most modern computer systems use disks as the primary on-line storage of information,
of both programs and data. Most programs, like compilers, assemblers, sort routines, editors,
formatters, and so on, are stored on the disk until loaded into memory, and then use the disk as
both the source and destination of their processing. Hence the proper management of disk
storage is of central importance to a computer system.

There are few alternatives. Magnetic tape systems are generally too slow. In addition,
they are limited to sequential access. Thus tapes are more suited for storing infrequently used
files, where speed is not a primary concern.

The operating system is responsible for the following activities in connection with
disk management

o Free space management


o Storage allocation
o Disk scheduling.

5.4 I/O System

One of the purposes of an operating system is to hide the peculiarities of specific hardware
devices from the user. For example, in Unix, the peculiarities of I/O devices are hidden from the
bulk of the operating system itself by the I/O system. The I/O system consists of:

o A buffer caching system


o A general device driver code
o Drivers for specific hardware devices.

Only the device driver knows the peculiarities of a specific device.

5.5 File Management

File management is one of the most visible services of an operating system. Computers
can store information in several different physical forms; magnetic tape, disk, and drum are the
most common forms. Each of these devices has its own characteristics and physical organization.

For convenient use of the computer system, the operating system provides a uniform
logical view of information storage. The operating system abstracts from the physical properties
of its storage devices to define a logical storage unit, the file. Files are mapped, by the operating
system, onto physical devices.
A file is a collection of related information defined by its creator. Commonly, files
represent programs (both source and object forms) and data. Data files may be numeric,
alphabetic or alphanumeric. Files may be free-form, such as text files, or may be rigidly
formatted. In general, a file is a sequence of bits, bytes, lines or records whose meaning is
defined by its creator and user. It is a very general concept.

Also files are normally organized into directories to ease their use. Finally, when multiple
users have access to files, it may be desirable to control by whom and in what ways files may be
accessed.

The operating system is responsible for the following activities in connection with file
management:

o The creation and deletion of files


o The creation and deletion of directories
o The support of primitives for manipulating files and directories
o The mapping of files onto disk storage.
o Backup of files on stable (non volatile) storage.

5.6 Protection System

The various processes in an operating system must be protected from one another's activities.
For this purpose, the operating system provides various mechanisms to ensure that the files,
memory segments, CPU and other resources can be operated on only by those processes that
have gained proper authorization from the operating system.

For example, the memory-addressing hardware ensures that a process can execute only within
its own address space. The timer ensures that no process can gain control of the CPU without
eventually relinquishing it. Finally, no process is allowed to do its own I/O, in order to protect
the integrity of the various peripheral devices.

Protection refers to a mechanism for controlling the access of programs, processes, or users to
the resources defined by a computer system. This mechanism must provide a means of
specifying the controls to be imposed, together with some means of enforcement.

Protection can improve reliability by detecting latent errors at the interfaces between component
subsystems.

Networking

A distributed system is a collection of processors that do not share memory or a clock.


Instead, each processor has its own local memory, and the processors communicate with each
other through various communication lines, such as high speed buses or telephone lines.
Distributed systems vary in size and function. They may involve microprocessors, workstations,
minicomputers, and large general purpose computer systems.

The processors in the system are connected through a communication network, which can
be configured in a number of different ways. The network may be fully or partially connected.
The communication network design must consider routing and connection strategies, and the
problems of connection and security.
A distributed system provides the user with access to the various resources the system
maintains. Access to a shared resource allows computation speed-up, data availability, and
reliability.

PROCESS MANAGEMENT

Process

An Operating System (OS) executes numerous tasks and application programs. A


program is stored on the hard disk or in some other form of secondary storage. For the program
to be executed, it must be loaded into the system's primary memory.

A process can be viewed as a program in execution. Each process is assigned a unique ID
when it is created, and it will be referenced via this unique ID until the process completes
execution and is terminated. A process needs certain resources such as CPU time, memory, files
and I/O devices to accomplish its task. These resources are allocated to the process either when
it is created or while it is executing.

The system consists of a collection of processes: operating system processes execute
system code, and user processes execute user code. A process is the unit of work in a modern
time-sharing system. A process is an 'active' entity, as opposed to a program, which is
considered a 'passive' entity.

We write our computer programs in a text file and when we execute this
program, it becomes a process which performs all the tasks mentioned in the
program.

Process memory

When a program is loaded into the memory and it becomes a process, it can be divided
into four sections ─ stack, heap, text and data. The following image shows a simplified layout of
a process inside main memory −
Figure : Process in memory

Sno Component & Description

1 Stack : The process stack contains temporary data such as method/function
parameters, return addresses and local variables.

2 Heap : This is memory that is dynamically allocated to the process during its run time.

3 Text : This is the compiled program code, read in from non-volatile storage when the
program is launched.

4 Data : This section contains the global and static variables.

Process State

As a process executes, it changes state. The state of a process is defined in part by the
current activity of that process. A process may be in one of the following states:

• New. The process is being created.

• Running. Instructions are being executed.

• Waiting. The process is waiting for some event to occur (such as an I/O completion or
reception of a signal).

• Ready. The process is waiting to be assigned to a processor.

• Terminated. The process has finished execution.

It is important to realize that only one process can be running on any processor at any
instant. Many processes may be ready and waiting, however. The state diagram corresponding to
these states is presented in Figure.

Figure. Diagram of process state


Process Control Block

Each process is represented in the operating system by a process control block (PCB) —
also called a task control block. A PCB is shown in Figure.

Figure Process Control Block

It contains many pieces of information associated with a specific process, including these:

Process state. The state may be new, ready, running, waiting, and so on.

Program counter. The counter indicates the address of the next instruction to be executed for
this process.

CPU registers. The registers vary in number and type, depending on the computer architecture.
They include accumulators, index registers, stack pointers, and general-purpose registers, plus
any condition-code information.

CPU-scheduling information. This information includes a process priority, pointers to scheduling queues, and any other scheduling parameters.

Memory-management information. This information may include such items as the value of
the base and limit registers and the page tables, or the segment tables, depending on the memory
system used by the operating system.

I/O status information. This information includes the list of I/O devices allocated to the
process, a list of open files, and so on.
Process Scheduling
Process scheduling is the task by which the operating system schedules processes. It is an essential part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.
Scheduling Queues
The OS employs a process scheduler. The process scheduler assigns each process the
necessary resources and its turn for execution on the CPU. The decision to schedule a process is
made by an underlying scheduling algorithm. The scheduler maintains three queues, shown in
Figure, to schedule the processes.

Figure : Flow of a process through the Scheduling Queues

As processes enter the system, they are put into a job queue, which consists of all processes in the system. The processes that are residing in main memory and are ready and waiting to execute are kept on a list called the ready queue. A process that issues an I/O request may have to wait for the device, for example the disk, to become available. The list of processes waiting for a particular I/O device is called a device queue. Each device has its own device queue.

 Job queue - The job queue is the set of all processes on the system
 Ready queue - The ready queue has all the processes that are loaded in main memory.
These processes are ready and waiting for their turn to execute as soon as the CPU
becomes available.
 Device queue - The set of processes waiting for an I/O device to become available, such as a printer. This queue is also known as the blocked queue.

Processes from the job queue are moved to the ready queue when they are ready to be executed. When an executing process blocks waiting for an I/O device to become available, it is moved to the device queue, where it remains until the requested I/O resource becomes available. The process is then moved back to the ready queue, where it waits for its turn to execute.
Figure Queueing-diagram representation of process scheduling

A common representation of process scheduling is a queueing diagram, such as that in the above Figure. Each rectangular box represents a queue. Two types of queues are present: the ready queue and a set of device queues. The circles represent the resources that serve the queues, and the arrows indicate the flow of processes in the system.

A new process is initially put in the ready queue. It waits there until it is selected for execution,
or dispatched. Once the process is allocated the CPU and is executing, one of several events
could occur:

• The process could issue an I/O request and then be placed in an I/O queue.

• The process could create a new child process and wait for the child’s termination.

• The process could be removed forcibly from the CPU, as a result of an interrupt, and be put back in the ready queue.

Schedulers

An operating system uses a scheduler to schedule the processes of the computer system. Schedulers are of the following types:

1. Long term scheduler
2. Mid-term scheduler
3. Short term scheduler

1) Long Term Scheduler


It selects the processes that are to be admitted into main memory and placed in the ready queue; it basically decides which processes, and in what order, enter the system for execution. Processes selected by the long term scheduler are placed in the ready state, where they wait to be dispatched to the CPU. Because the admission of new processes happens relatively infrequently, this scheduler is known as the long term scheduler.

2) Mid – Term Scheduler

It places the blocked and suspended processes in the secondary memory of a computer system.
The task of moving from main memory to secondary memory is called swapping out. The task
of moving back a swapped out process from secondary memory to main memory is known
as swapping in. The swapping of processes is performed to ensure the best utilization of main
memory.

3) Short Term Scheduler

It decides the order in which the processes in the ready queue are allocated central processing unit (CPU) time for their execution. The short term scheduler is also referred to as the central processing unit (CPU) scheduler.

Context switch

A context switch occurs when a computer’s CPU switches from one process or thread to a
different process or thread.

 Context switching allows a single CPU to handle numerous processes or threads without the need for additional processors.
 A context switch is the mechanism to store and restore the state or context of a CPU in the Process Control Block, so that a process execution can be resumed from the same point at a later time.
 Any operating system that allows multitasking relies heavily on context switching to allow different processes to run concurrently.

Typically, there are three situations in which a context switch is necessary, as shown below.
 Multitasking – When the CPU needs to switch processes in and out of memory, so that more than one process can be running.
 Kernel/User Switch – When switching between user mode and kernel mode, a context switch may be used (but isn't always necessary).
 Interrupts – When the CPU is interrupted, for example to return data from a disk read.

The steps in a full process switch are:

1. Save the context of the processor, including program counter and other registers.

2. Update the process control block of the process that is currently in the Running state. This
includes changing the state of the process to one of the other states (Ready; Blocked;
Ready/Suspend; or Exit). Other relevant fields must also be updated, including the reason for
leaving the Running state and accounting information.

3. Move the process control block of this process to the appropriate queue (Ready; Blocked on Event i; Ready/Suspend).

4. Select another process for execution.


5. Update the process control block of the process selected. This includes changing the state of this
process to Running.

6. Update memory management data structures. This may be required, depending on how address
translation is managed.

7. Restore the context of the processor to that which existed at the time the selected process was
last switched out of the Running state, by loading in the previous values of the program counter
and other registers.

Operations on Processes
The processes in most systems can execute concurrently, and they may be created and deleted
dynamically. Thus, these systems must provide a mechanism for process creation and
termination.

Process Creation

During the course of execution, a process may create several new processes. As
mentioned earlier, the creating process is called a parent process, and the new processes are
called the children of that process. Each of these new processes may in turn create other
processes, forming a tree of processes.

There are four principal events that cause processes to be created:

1. System initialization.

2. Execution of a process creation system call by a running process.

3. A user request to create a new process.

4. Initiation of a batch job.


When an operating system is booted, typically several processes are created. Some of
these are foreground processes, that is, processes that interact with (human) users and perform
work for them. Others are background processes, which are not associated with particular users,
but instead have some specific function. For example, one background process may be designed
to accept incoming e-mail, sleeping most of the day but suddenly springing to life when
incoming e-mail arrives.

In addition to the processes created at boot time, new processes can be created afterward
as well. Often a running process will issue system calls to create one or more new processes to
help it do its job. Creating new processes is particularly useful when the work to be done can
easily be formulated in terms of several related, but otherwise independent interacting processes.

For example, if a large amount of data is being fetched over a network for subsequent processing, it may be convenient to create one process to fetch the data and put them in a shared buffer while a second process removes the data items and processes them.

In Microsoft Windows, when a process is started it does not have a window, but it can create one (or more), and most do. Users may have multiple windows open at once, each running some process. Using the mouse, the user can select a window and interact with the process, for example, providing input when needed.

Process Termination

After a process has been created, it starts running and does whatever its job is. Sooner or
later the new process will terminate, usually due to one of the following conditions:

1. Normal exit (voluntary).

2. Error exit (voluntary).

3. Fatal error (involuntary).

4. Killed by another process

Most processes terminate because they have done their work. When a compiler has compiled the program given to it, the compiler executes a system call to tell the operating system that it is finished. This call is exit in UNIX and ExitProcess in Windows.

Screen-oriented programs also support voluntary termination. Word processors, Internet browsers and similar programs always have an icon or menu item that the user can click to tell the process to remove any temporary files it has open and then terminate.

The second reason for termination is that the process discovers a fatal error. For example,
if a user types the command

cc foo.c
to compile the program foo.c and no such file exists, the compiler simply exits. Screen-oriented interactive processes generally do not exit when given bad parameters. Instead they pop up a dialog box and ask the user to try again.

The third reason for termination is an error caused by the process, often due to a program
bug. Examples include executing an illegal instruction, referencing nonexistent memory, or
dividing by zero.

The fourth reason a process might terminate is that the process executes a system call telling the operating system to kill some other process. In UNIX this call is kill. The corresponding Win32 function is TerminateProcess. In both cases, the killer must have the necessary authorization to kill the killee.

Cooperating Process
Processes executing concurrently in the operating system may be either independent
processes or cooperating processes. A process is independent if it cannot affect or be affected by
the other processes executing in the system. Any process that does not share data with any other
process is independent. A process is cooperating if it can affect or be affected by the other
processes executing in the system. Clearly, any process that shares data with other processes is a
cooperating process.

Interprocess Communication

Cooperating processes require an interprocess communication (IPC) mechanism that will allow them to exchange data and information. There are two fundamental models of interprocess communication:

1. shared memory and
2. message passing.

In the shared-memory model, a region of memory that is shared by cooperating processes is established. Processes can then exchange information by reading and writing data to the shared region.

In the message-passing model, communication takes place by means of messages exchanged between the cooperating processes. The two communication models are contrasted in Figure.
Figure. Communication models (a) message passing (b) shared memory

The concept of cooperating processes is illustrated by the PRODUCER-CONSUMER problem. A producer process produces information that is consumed by a consumer process.

For example, a print program produces characters that are consumed by the printer driver.

There is a buffer of n slots and each slot is capable of storing one unit of data. There are two
processes running, namely, producer and consumer, which are operating on the buffer.

A producer tries to insert data into an empty slot of the buffer. A consumer tries to
remove data from a filled slot in the buffer. As you might have guessed by now, those two
processes won't produce the expected output if they are being executed concurrently.
There needs to be a way to make the producer and consumer work in an independent manner.
 The Unbounded buffer producer-consumer problem places no limit on the size of the
buffer. The consumer may have to wait for new items, but the producer can always
produce new items.
 The bounded buffer producer-consumer problem assumes a fixed buffer size. The consumer must wait if the buffer is empty; the producer must wait if the buffer is full.
 The solution for the producer is to either go to sleep or discard data if the buffer is full.
The next time the consumer removes an item from the buffer, it notifies the producer,
who starts to fill the buffer again.
 In the same way, the consumer can go to sleep if it finds the buffer to be empty.
 The next time the producer puts data into the buffer, it wakes up the sleeping consumer.
 The solution can be reached by means of inter-process communication, typically using
semaphores.
 An inadequate solution could result in a deadlock where both processes are waiting to be
awakened. The problem can also be generalized to have multiple producers and
consumers.
Example
The following variables reside in a region of memory shared by the producer and consumer
processes:

#define BUFFER_SIZE 10

typedef struct
{
...
} item;

item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;

 The shared buffer is implemented as a circular array with two logical pointers: in and out.
 The variable in points to the next free position in the buffer;
 out points to the first full position in the buffer.
 The buffer is empty when in == out;
 The buffer is full when ((in + 1) % BUFFER_SIZE) == out.

The code for the producer process is shown below

The code for the consumer process is shown below.


Thread

Thread is an execution unit. It is the smallest unit of processing that can be performed in an OS. In most modern operating systems, a thread exists within a process - that is, a single process may contain multiple threads. Threads are a way for a program to divide (termed "split") itself into two or more simultaneously running tasks.

All modern operating systems, however, provide features enabling a process to contain multiple
threads of control.

Thread is an execution unit which consists of its own program counter, a stack, and a set of registers. Threads are also known as lightweight processes. Threads are a popular way to improve application performance through parallelism. The CPU switches rapidly back and forth among the threads, giving the illusion that the threads are running in parallel.
As each thread has its own independent resources for execution, multiple tasks within a process can be executed in parallel by increasing the number of threads.
Types of Thread
There are two types of threads:
1. User Threads
2. Kernel Threads

User threads are supported above the kernel and are managed without kernel support. These are the threads that application programmers use in their programs.
Kernel threads are supported within the kernel of the OS itself. All modern OSs support kernel
level threads, allowing the kernel to perform multiple simultaneous tasks and/or to service
multiple kernel system calls simultaneously.

Multithreading Models
The user threads must be mapped to kernel threads, by one of the following strategies:

 Many to One Model
 One to One Model
 Many to Many Model

Figure: (a) Many to one (b) One to one (c) Many to many

Many to One Model

 In this model, many user-level threads are all mapped onto a single kernel thread.
 Thread management is handled by the thread library in user space, which makes it efficient.
 Very few systems continue to use the model because of its inability to take advantage of
multiple processing cores.
One to One Model

 The one to one model creates a separate kernel thread to handle each and every user thread.
 Most implementations of this model place a limit on how many threads can be created.
 It also allows multiple threads to run in parallel on multiprocessors.
 Linux and Windows from 95 to XP implement the one-to-one model for threads.

Many to Many Model

 The many to many model multiplexes any number of user threads onto an equal or smaller
number of kernel threads, combining the best features of the one-to-one and many-to-one
models.
 Users can create any number of threads.
 A blocking kernel system call does not block the entire process.
 Processes can be split across multiple processors.
