www.bankersguru.org

Operating System
An operating system is a program that acts as an intermediary between a user of
a computer and the computer hardware.
An operating system has the following parts:
 Kernel: a computer program that manages input/output requests
from software and translates them into data-processing
instructions for the CPU.
 Shell: a command-line interpreter (CLI). It interprets the
commands the user types in and arranges for them to be carried
out.

TYPES OF OPERATING SYSTEM

A single-user operating system only has to deal with the requests of the
single person using the computer at that time.
Eg. MS-DOS, Windows, Android etc.
A multi-user operating system allows many people to access the resources
of a mainframe computer. It ‘slices up’ the mainframe’s resources and
divides them out to the different users.
Eg. UNIX, LINUX etc.


FEATURES OF OPERATING SYSTEM

Batch Processing: Batch processing is a technique in which the Operating
System collects programs and data together in a batch before
processing starts. Jobs are processed in the order of submission, i.e. in
first come, first served fashion.
Multi-tasking: It means to run more than one program at once. It is the
operating system which manages this process.
Multi-processing: refers to the use of two or more central processing
units (CPUs) within a single computer system.

PROCESS MANAGEMENT
A process is a program in execution. It is a unit of work within the system.
A program is a passive entity; a process is an active entity.

Process Stages
A process can be in one of the following five states at a time.


New: The process is being created.
Ready: The process is waiting to be assigned to a processor. Ready
processes are waiting to have the processor allocated to them by the
operating system so that they can run.
Running: Process instructions are being executed (i.e. the process is
currently being executed).
Waiting: The process is waiting for some event to occur (such as the
completion of an I/O operation).
Terminated: The process has finished execution.
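The five states above can be sketched as a small transition table. The transition set below is an illustrative assumption drawn from the descriptions in this section, not any real OS API:

```python
# Five-state process model. Allowed direct transitions per the text above.
TRANSITIONS = {
    "new": {"ready"},                               # admitted to the ready queue
    "ready": {"running"},                           # dispatched to the CPU
    "running": {"ready", "waiting", "terminated"},  # preempted / I/O wait / exit
    "waiting": {"ready"},                           # awaited event occurs
    "terminated": set(),                            # no further transitions
}

def can_move(src, dst):
    """Return True if a process may move directly from state src to dst."""
    return dst in TRANSITIONS.get(src, set())

print(can_move("running", "waiting"))  # True: process starts an I/O wait
print(can_move("waiting", "running"))  # False: it must pass through ready first
```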

Process Scheduling
The problem of determining when processors should be assigned, and to
which processes, is called processor scheduling or CPU scheduling.

Scheduling Queues

Scheduling queues refers to queues of processes or devices. When a
process enters the system, it is put into a job queue.
This queue consists of all processes in the system. The operating system
also maintains other queues, such as device queues.


In a queuing diagram of process scheduling:

 A queue is represented by a rectangular box.
 The circles represent the resources that serve the queues.
 The arrows indicate the process flow in the system.

Queues are of two types:

 Ready queue: A newly arrived process is put in the ready queue.
 Device queue: a queue in which multiple processes wait for a
particular I/O device. Each device has its own device queue.

Schedulers
Schedulers are special system software which handle process
scheduling in various ways. Their main task is to select the jobs to be
submitted into the system and to decide which process to run. Schedulers
are of two types:

 Long Term Scheduler: Also called the job scheduler. The long term
scheduler determines which programs are admitted to the system
for processing.
 Short Term Scheduler: Also called the CPU scheduler. The short term
scheduler, also known as the dispatcher, executes most frequently and
makes the fine-grained decision of which process to execute next.
The short term scheduler is faster than the long term scheduler.

Context Switch
A context switch is the mechanism to store and restore the state or
context of a CPU in the Process Control Block so that process execution can
be resumed from the same point at a later time.


Process Control Block (PCB)

The Process Control Block is the data structure in which the operating
system stores the collection of information about a process.
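As a rough sketch, a PCB can be modelled as a record. The fields below are a common textbook subset (real kernels keep many more, and the field names here are our assumption):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """Illustrative Process Control Block: a per-process record.
    Field names are a textbook subset, not any particular kernel's."""
    pid: int                                        # process identifier
    state: str = "new"                              # one of the five states
    program_counter: int = 0                        # next instruction address
    registers: dict = field(default_factory=dict)   # saved CPU registers
    open_files: list = field(default_factory=list)  # I/O status information

p = PCB(pid=42)
print(p.state)  # a newly created process starts in the "new" state
```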

Scheduling Criteria
 CPU utilization – keep the CPU as busy as possible
 Throughput – number of processes that complete their execution
per time unit
 Turnaround time – amount of time to execute a particular process
 Waiting time – amount of time a process has been waiting in the
ready queue
 Response time – amount of time it takes from when a request was
submitted until the first response is produced.


Process Scheduling Algorithms


First Come First Serve (FCFS)

 Jobs are executed on a first come, first served basis.
 Easy to understand and implement.
 Poor in performance, as average wait time is high.
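For example, FCFS waiting times can be computed directly, assuming all jobs arrive at time 0 (the function name is ours):

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each job under FCFS: each job waits for the total
    burst time of every job submitted before it (all arrive at time 0)."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)   # this job waits while earlier jobs run
        elapsed += burst
    return waits

# Why average wait is high: one long job arriving first delays everyone.
waits = fcfs_waiting_times([24, 3, 3])
print(waits, sum(waits) / len(waits))  # [0, 24, 27] 17.0
```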
Shortest Job First (SJF)

 Best approach to minimize waiting time.
 Impossible to implement exactly, because the processor would need to
know in advance how much time each process will take.
Priority Based Scheduling

 Each process is assigned a priority. The process with the highest
priority is executed first, and so on.
 Processes with the same priority are executed on a first come, first
served basis.
 Starvation: a problem in which low priority processes may
never execute.
 The solution is Aging: the technique of gradually increasing the priority
of processes that wait in the system for a long period of time.
Round Robin Scheduling

 Each process is provided a fixed time to execute, called a quantum.
 Once a process has executed for the given time period, it is
preempted and another process executes for its time period.
 Context switching is used to save the states of preempted processes.
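A minimal Round Robin simulation, assuming all processes arrive at time 0 and ignoring context-switch overhead:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Return the completion time of each process under Round Robin.
    All processes are assumed to arrive at time 0; switch cost is ignored."""
    ready = deque(enumerate(bursts))          # (process id, remaining burst)
    time, finish = 0, [0] * len(bursts)
    while ready:
        pid, remaining = ready.popleft()
        run = min(quantum, remaining)         # run for at most one quantum
        time += run
        if remaining > run:
            ready.append((pid, remaining - run))  # preempted: back of queue
        else:
            finish[pid] = time                    # process terminates
    return finish

print(round_robin([24, 3, 3], quantum=4))  # [30, 7, 10]
```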


THREADS
A thread is a lightweight process that uses the same address space for
execution. Because each thread has its own independent execution state
while sharing the process's resources, a process can perform several
tasks in parallel by increasing the number of threads.

Advantages of Thread
 Threads minimize context switching time.
 Use of threads provides concurrency within a process.
 Efficient communication.

Types of Thread
Threads are implemented in the following two ways:

 User Level Threads -- threads managed by the user, without kernel
support.
 Kernel Level Threads -- threads managed directly by the operating
system kernel, the operating system core.


PROCESS SYNCHRONIZATION
Process synchronization means sharing system resources among processes
in such a way that concurrent access to shared data is handled, thereby
minimizing the chance of inconsistent data.

Critical Section Problem


A Critical Section is a code segment that accesses shared variables and
has to be executed as an atomic action. It means that in a group of
cooperating processes, at a given point of time, only one process must be
executing its critical section. If any other process also wants to execute its
critical section, it must wait until the first one finishes.

Mutex Locks
A mutex lock provides support for critical section code. It is a strict
software approach: in the entry section of the code, a LOCK is acquired over
the critical resources modified and used inside the critical section, and in
the exit section that LOCK is released.
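In Python this entry/exit pattern maps onto `threading.Lock`; a sketch in which four threads update a shared counter:

```python
import threading

counter = 0
lock = threading.Lock()            # the LOCK guarding the critical section

def increment(times):
    global counter
    for _ in range(times):
        with lock:                 # entry section: acquire the lock
            counter += 1           # critical section: touch shared data
        # exit section: lock released automatically on leaving `with`

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: no lost updates
```

Without the lock, concurrent `counter += 1` operations can interleave and lose updates; the lock ensures only one thread executes the critical section at a time.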


Semaphores
A semaphore is a technique for managing concurrent processes by using the
value of a simple integer variable, called the semaphore.
Semaphores are mainly of two types:
 Binary Semaphore: A binary semaphore is initialized to 1 and only
takes the values 0 and 1 during execution of a program.
 Counting Semaphore: A synchronizing tool that is accessed only
through two standard atomic operations, wait and signal,
designated by P() and V() respectively.
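Python's `threading.Semaphore` exposes exactly these two operations: `acquire()` plays the role of wait/P() and `release()` of signal/V(). A single-threaded sketch of the counter semantics:

```python
import threading

sem = threading.Semaphore(2)           # counting semaphore initialized to 2

got_1 = sem.acquire(blocking=False)    # P(): count 2 -> 1
got_2 = sem.acquire(blocking=False)    # P(): count 1 -> 0
got_3 = sem.acquire(blocking=False)    # count is 0: a blocking wait() would sleep
sem.release()                          # V(): count 0 -> 1, would wake a waiter
got_4 = sem.acquire(blocking=False)    # P(): succeeds again

print(got_1, got_2, got_3, got_4)  # True True False True
```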

Limitations of Semaphores
1. Priority inversion is a big limitation of semaphores.
2. With improper use, a process may block indefinitely. Such a situation is
called a Deadlock.

THE DEADLOCK PROBLEM

A deadlock is a set of blocked processes, each holding a resource and
waiting to acquire a resource held by another process in the set.


CONDITIONS FOR DEADLOCK

There are four conditions that are necessary for deadlock:
 Mutual Exclusion - At least one resource must be held in a non-
sharable mode; if any other process requests this resource, then
that process must wait for the resource to be released.
 Hold and Wait - A process must be simultaneously holding at least
one resource and waiting for at least one resource that is currently
being held by some other process.
 No Preemption - Once a process is holding a resource (i.e. once
its request has been granted), that resource cannot be taken
away from the process until the process voluntarily releases it.
 Circular Wait - A set of processes { P0, P1, P2, . . ., PN } must exist
such that every P[ i ] is waiting for a resource held by the next
process in the cycle.
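Circular wait can be checked by looking for a cycle in a wait-for graph, where an edge a → b means process a waits for a resource held by b. A sketch (the dictionary representation of the graph is our assumption):

```python
def has_circular_wait(wait_for):
    """Detect a cycle in a wait-for graph given as
    {process: [processes it is waiting on]}."""
    def visit(node, on_path):
        if node in on_path:
            return True                      # revisited a node: cycle found
        return any(visit(nxt, on_path | {node})
                   for nxt in wait_for.get(node, ()))
    return any(visit(p, set()) for p in wait_for)

print(has_circular_wait({0: [1], 1: [2], 2: [0]}))  # True: P0 -> P1 -> P2 -> P0
print(has_circular_wait({0: [1], 1: [2]}))          # False: the chain ends
```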

HANDLING DEADLOCK
Deadlocks can be prevented by avoiding at least one of the four
conditions.
 Mutual Exclusion
 Hold and Wait
 No Preemption
 Circular Wait
Deadlock can be avoided by using banker’s algorithm.


Memory management
Memory management is the functionality of an operating system which
handles or manages primary memory. Memory management keeps track
of each and every memory location, whether it is allocated to some process
or free. It decides how much memory is to be allocated to each process and
which process will get memory at what time. It tracks whenever some
memory gets freed or unallocated and updates the status accordingly.
Dynamic Loading

In dynamic loading, a routine of a program is not loaded until it is called
by the program. All routines are kept on disk in a relocatable load
format. The main program is loaded into memory and executed. Other
routines, methods or modules are loaded on request.

Dynamic Linking

Linking is the process of collecting and combining various modules of
code and data into an executable file that can be loaded into memory and
executed.

Swapping

Swapping is a mechanism in which a process can be swapped temporarily


out of main memory to a backing store, and then brought back into
memory for continued execution.


CONTIGUOUS ALLOCATION
In contiguous memory allocation each process is contained in a single
contiguous block of memory. The free blocks of memory are known as
holes. The set of holes is searched to determine which hole is best to
allocate

(Figure: four successive snapshots of memory, each with the OS at one end;
processes 2 and 5 stay resident while processes 9, 8 and 10 arrive or leave,
occupying contiguous blocks and leaving holes behind.)

Dynamic Storage-Allocation Problem


First-fit: Allocate the first hole that is big enough
Best-fit: Allocate the smallest hole that is big enough; must search entire
list, unless ordered by size
Worst-fit: Allocate the largest hole; must also search entire list
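The three strategies can be sketched over a list of hole sizes. Indices returned are into that list, and the function names are ours:

```python
def first_fit(holes, size):
    """Index of the first hole big enough, or None."""
    for i, hole in enumerate(holes):
        if hole >= size:
            return i
    return None

def best_fit(holes, size):
    """Index of the smallest hole that is big enough, or None."""
    candidates = [(hole, i) for i, hole in enumerate(holes) if hole >= size]
    return min(candidates)[1] if candidates else None

def worst_fit(holes, size):
    """Index of the largest hole, provided it is big enough."""
    hole, i = max((hole, i) for i, hole in enumerate(holes))
    return i if hole >= size else None

holes = [100, 500, 200, 300, 600]
# A request of size 212 lands in hole 1 (500), 3 (300) or 4 (600) respectively.
print(first_fit(holes, 212), best_fit(holes, 212), worst_fit(holes, 212))
```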

Fragmentation
As processes are loaded and removed from memory, the free memory
space is broken into little pieces. After some time it happens that
processes cannot be allocated to memory blocks because the blocks are
too small, and the memory blocks remain unused. This problem is known
as Fragmentation.


Fragmentation is of two types

External fragmentation: Total memory space is enough to satisfy a
request or to hold a process, but it is not contiguous, so it cannot
be used. This can be controlled by the Compaction technique.
Internal fragmentation: The memory block assigned to a process is bigger
than requested. Some portion of the memory is left unused, as it cannot
be used by another process.

Paging
External fragmentation is avoided by using the paging technique. Paging is
a technique in which physical memory is broken into fixed-size blocks
called frames, and logical memory is broken into blocks of the same size
called pages. When a process is to be executed, its corresponding pages
are loaded into any available memory frames.

The logical address space of a process can be non-contiguous, and a
process is allocated physical memory whenever a free memory frame is
available. The operating system keeps track of all free frames. The
operating system needs n free frames to run a program of size n pages.


The address generated by the CPU is divided into:

 Page number (p) -- the page number is used as an index into a page
table which contains the base address of each page in physical memory.
 Page offset (d) -- the page offset is combined with the base address to
define the physical memory address.
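With a power-of-two page size the split is just a divide and a remainder. A sketch, where the 4 KB page size and the page table contents are assumptions:

```python
PAGE_SIZE = 4096  # assumed page size (a power of two)

def translate(logical_address, page_table):
    """Split a logical address into page number p and offset d, then
    combine the frame's base address with d to get the physical address."""
    p = logical_address // PAGE_SIZE      # page number: index into page table
    d = logical_address % PAGE_SIZE       # page offset
    frame = page_table[p]                 # frame that holds page p
    return frame * PAGE_SIZE + d

page_table = {0: 5, 1: 2}                 # page -> frame (illustrative)
print(translate(4100, page_table))        # page 1, offset 4 -> 2*4096 + 4 = 8196
```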

Segmentation
Segmentation is a technique to break memory into logical pieces, where
each piece represents a group of related information. For example, a data
segment or code segment for each process, a data segment for the
operating system, and so on. Segmentation can be implemented with or
without paging.

Unlike paging, segments have varying sizes, which eliminates internal
fragmentation. External fragmentation still exists, but to a lesser extent.


The address generated by the CPU is divided into:

 Segment number (s) -- the segment number is used as an index into a
segment table which contains the base address of each segment in
physical memory and the limit of the segment.
 Segment offset (o) -- the segment offset is first checked against the
limit and then combined with the base address to define the physical
memory address.
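The limit check and base addition can be sketched in a few lines, with a segment table of (base, limit) pairs (an assumed layout):

```python
def translate_segment(segment_table, s, offset):
    """Check the offset against the segment's limit, then add the base."""
    base, limit = segment_table[s]
    if offset >= limit:                    # offset falls outside the segment
        raise MemoryError("segmentation fault")
    return base + offset

segments = [(1000, 400), (5000, 100)]      # (base address, limit) per segment
print(translate_segment(segments, 0, 100))  # 1000 + 100 = 1100
```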

Virtual memory
Virtual memory is a technique that allows the execution of processes
which are not completely available in memory. This separation allows an
extremely large virtual memory to be provided for programmers when
only a smaller physical memory is available.


Virtual memory is commonly implemented by demand paging. It can also
be implemented in a segmentation system. Demand segmentation can
also be used to provide virtual memory.
Demand Paging
Demand paging is a type of swapping in which pages of data are not
copied from disk to RAM until they are needed.
A demand paging system is quite similar to a paging system with
swapping. When we want to execute a process, we swap it into memory.
Rather than swapping the entire process into memory, however, we use a
lazy swapper called a pager.

(Figure: a page table with a frame number and a valid-invalid bit per entry;
pages present in memory are marked ‘v’, pages not in memory are marked ‘i’.)

PAGE FAULT
A page fault is an interrupt that occurs when a program requests data that
is not in memory. The interrupt triggers the operating system to fetch the
data from the backing store and load it into RAM.


PAGE REPLACEMENT
Page replacement algorithms are the techniques by which the Operating
System decides which memory pages to swap out (write to disk) when a
page of memory needs to be allocated.
First In First Out (FIFO) algorithm

 The oldest page in main memory is the one selected for
replacement.
 Easy to implement: keep a list, replace pages from the tail and add
new pages at the head.
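A FIFO fault counter over a reference string; the reference string in the example is the classic textbook one:

```python
from collections import deque

def fifo_faults(references, frame_count):
    """Count page faults under FIFO replacement with frame_count frames."""
    resident, order, faults = set(), deque(), 0
    for page in references:
        if page in resident:
            continue                          # hit: page already in a frame
        faults += 1                           # miss: page fault
        if len(resident) == frame_count:
            resident.discard(order.popleft()) # evict the oldest page
        resident.add(page)
        order.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3))  # 15
```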

Optimal Page algorithm

 An optimal page-replacement algorithm has the lowest page-fault
rate of all algorithms. An optimal page-replacement algorithm
exists, and has been called OPT or MIN.
 Replace the page that will not be used for the longest period of
time, based on when each page will next be used.


Least Recently Used (LRU) algorithm

 The page which has not been used for the longest time in main
memory is the one selected for replacement.
 Easy to implement: keep a list and replace pages by looking back in
time.
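The same counter for LRU, evicting the page whose last use lies furthest in the past (same textbook reference string as above):

```python
def lru_faults(references, frame_count):
    """Count page faults under LRU replacement with frame_count frames."""
    resident, last_used, faults = set(), {}, 0
    for now, page in enumerate(references):
        if page not in resident:
            faults += 1                       # miss: page fault
            if len(resident) == frame_count:
                victim = min(resident, key=lambda p: last_used[p])
                resident.discard(victim)      # evict least recently used page
            resident.add(page)
        last_used[page] = now                 # record this use
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))  # 12
```

On this string LRU produces 12 faults against FIFO's 15, which is why looking back at recent use tends to beat pure arrival order.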

THRASHING
Thrashing is a condition in which excessive paging operations are taking
place. A system that is thrashing can be perceived as either a very slow
system or one that has come to a halt.
This leads to low CPU utilization.

