
Operating Systems

Few Questions
1. Do all computers have the same configuration?
CPU (make, cores, speed), Memory (make, model, capacity), Disk (make, capacity, rotational speed, etc.), Devices (mouse, keyboard, pen drive, LCD, etc.)

2. Do all applications run on all the systems?

Few Questions
Name a few operating systems on which you have worked:
Windows, Linux, Unix, Mac, BSD, etc.

Why so many? Where do they differ?

Few Questions
Does the programmer have to alter every program for every piece of hardware? Does a faulty program crash everything? Does every program have access to all the hardware?

Operating Systems

Software -------Instruction Set------Hardware

Operating Systems
Applications ----------------Operating System ----------------Hardware

(Logical Interface)

(Physical Interface)

Broadly
What are the components of the System?
Hardware Operating System Applications Users

Few Questions
Why study operating systems?
Build, modify, or administer an operating system
Understand system performance: the behavior of the OS impacts the entire machine; tune workload performance
Apply knowledge across many areas: computer architecture, programming languages, data structures and algorithms, and performance modeling

Role..
Role #1: Provide a standard library (i.e., abstract resources)
What is a resource? Anything that has identity (e.g., CPU, memory, disk)

Advantages of a standard library:
Allow applications to reuse common facilities
Make different devices look the same

Challenges
What are the correct abstractions? How much of hardware should be exposed?

Role..
Role #2: Resource coordinator (i.e., manager)
Advantages of a resource coordinator:
Virtualize resources so multiple users or applications can share them
Protect applications from one another
Provide efficient and fair access to resources

Challenges
What are the correct mechanisms? What are the correct policies?

OS Components
Kernel: Core components of the OS
Program execution (process management), interrupts, modes, memory management, virtual memory, multitasking, disk access and file systems, device drivers

Networking
Enables processes to communicate with one another

Security
Authentication, Authorization (Access Controls)

User Interface
Shell (CLI or GUI)

Functions
Depends on:
User expectations
Technology changes

Should adapt to both:
Change the abstractions provided to users
Change the algorithms that implement those abstractions
Change the low-level implementation to deal with hardware

This leads to the evolution of many operating systems

Evolution of the OS
Two distinct phases of history
Phase 1: Computers are expensive
Goal: use the computer's time efficiently
Maximize throughput (i.e., jobs per second)
Maximize utilization (i.e., percentage busy)

Phase 2: Computers are inexpensive
Goal: use people's time efficiently
Minimize response time
Trade-off between usability (ease of use) and resource utilization

First commercial systems


1950s Hardware
Enormous, expensive, and slow Input/Output: Punch cards and line printers

Goal of OS
Get the hardware working Single operator/programmer/user runs and debugs interactively

OS Functionality
Standard library only (no sharing or coordination of resources) Monitor that is always resident; transfer control to programs

Problems
Inefficient use of hardware (throughput and utilization)

Batch Processing
Goal of OS: Better throughput and utilization Batch: Group of jobs submitted together
Operator collects jobs; orders efficiently; runs one at a time

Advantages
Reduce setup costs over many jobs Keep machine busy Improves throughput and utilization

Problems
User must wait until batch is done for results Machine idle when job is reading from cards and writing to printers

Spooling

Hardware
Mechanical I/O devices are much slower than the CPU: reading 17 cards/sec vs. executing thousands of instructions/sec

Goal of OS
Improve performance by overlapping I/O with CPU execution
Spooling: Simultaneous Peripheral Operations On-Line
1. Read card punches to disk
2. Compute (while reading and writing to disk)
3. Write output from disk to printer

OS Functionality
Buffering and interrupt handling

Problem
Machine idle when a job waits for I/O to or from disk
The mismatch in speed between devices (producer and consumer) must be addressed

Multiprogrammed Batch Systems


Observation: Spooling provides pool of ready jobs Goal of OS
Improve performance by always running a job Keep multiple jobs resident in memory When job waits for disk I/O, OS switches to another job

OS Functionality
Job scheduling policies Memory management and protection

Advantage: Improves throughput and utilization Disadvantage: Machine not interactive

Inexpensive Peripherals
1960s Hardware
Expensive mainframes, but inexpensive keyboards and monitors Enables text editors and interactive debuggers

Goal of OS
Improve users' response time

OS Functionality
Time-sharing: switch between jobs to give appearance of dedicated machine More complex job scheduling Concurrency control and synchronization

Advantage
Users easily submit jobs and get immediate feedback

Inexpensive Personal Computers


1980s Hardware
Entire machine is inexpensive One dedicated machine per user

Goal of OS
Give user control over machine

Advantages
Simplicity
Works with little main memory
Performance is predictable

Disadvantages
No time-sharing or protection between jobs

Inexpensive, Powerful Computers


1990s+ Hardware
PCs with increasing computation and storage Users connected to the web

Goal of OS
Allow single user to run several applications simultaneously Provide security from malicious attacks Efficiently support web servers

Current Systems
Summary: OS changes due to both hardware and users Current trends
Multiprocessors Networked systems Virtual machines

Classification of Operating System


Classification is based on:
Interface Number of users Response time and program / job execution

Classification - Interface
Character-based: commands are entered using the keyboard only to perform an operation
Ex: UNIX, MS-DOS

Graphical User Interface (GUI): either the keyboard or the mouse is used to perform an operation
Ex: Windows, Macintosh, Linux

User Interface

Example
Creation of a file in MS-DOS mode:
C:\> COPY CON filename.txt
Press the Enter key
Type the data
Press Ctrl+Z to save the file

Creation of a file in Windows mode:
Click Start, then Programs, Accessories, Notepad
The Notepad window opens; type the data and save the file

Based on Number of Users


Single-User
Single user, single task
Single user, multi-task
Ex: MS-DOS, Windows

Multi-User
Ex: UNIX, XENIX, Linux, etc.

Based on Response Time and Mode of Program Execution


Batch Processing Multi Programming Time Sharing

Real time
Distributed

Hybrid

Learning Assignment
Mainframe Systems Desktop Systems Multiprocessor Systems Distributed Systems Clustered Systems Real-Time Systems Handheld Systems
Aim: understand the various types of operating systems and identify sufficient examples of each

Note
All the assignments / projects / tutorials etc. will be discussed during the tutorial sessions

OS Design
User goals
The operating system should be convenient to use, easy to learn, reliable, safe, and fast.

System goals
The operating system should be easy to design, implement, and maintain, as well as flexible, reliable, error-free, and efficient.

OS Implementation
Traditionally written in assembly language, operating systems can now be written in higher-level languages. Code written in a high-level language:
can be written faster
is more compact
is easier to understand and debug

An operating system is far easier to port (move to other hardware) if it is written in a high-level language.

Design & Implementation

Design & Implementation


Layering also makes it easier to enhance the operating system: one entire layer can be replaced without affecting other parts of the system. This structure also allows the operating system to be debugged starting at the lowest layer, adding one layer at a time until the whole system works correctly.

Design & Implementation


The operating system is divided into a number of layers (levels), each built on top of lower layers. The bottom layer (layer 0) is the hardware; the highest (layer N) is the user interface.

Example
Application program | Application program      (user mode)
----------------------------------------------  (kernel mode)
System services
File system
Memory and I/O device management
Processor scheduling
Hardware

Learning Assignment
What is the layered architecture for the following Operating Systems:
Windows UNIX LINUX

Program Execution
Simple program (test.c):

main() {
    printf("%d", fn());
}

int fn() {
    int d = 5 / 0;  /* error: division by zero */
    malloc();       /* incorrect call */
}

Even this simple program involves many OS services: open (file), create process, request device, release device, call subroutine, write, terminate process, wait event, signal event (interrupt), memory allocation and deallocation, close (file).

Operating System Services

Services
Program execution I/O operations File-system manipulation Communications
exchange of information between processes executing either on the same computer or on different systems tied together by a network. Implemented via shared memory or message passing.

Error detection Resource allocation Accounting Protection

System Calls
The applications that a user writes normally run on top of an operating system. In most operating systems, a program cannot access resources directly. The programs run by the user request the operating system to do a specific task on their behalf.

System Calls
For example, if a program wants to open a file, it requests the operating system to open the file with the file name /usr/read.txt. The kernel replies to the request suitably: it opens the file if the calling application has the required permissions. If the application does not have permissions, the operating system refuses the request, giving back an error.

System Calls
The process of calling the kernel is referred to as system call invocation. The call to the kernel ("please open the file") is known as a system call. The kernel calls the appropriate device drivers to access the hardware.

System Calls
User program -> system call -> kernel -> device driver -> device
(The kernel has control over the hardware resources.)

Types of System Calls


Process control end, abort load, execute create process, terminate process get process attributes, set process attributes wait (for time) wait event, signal event allocate and free memory

File Management
create file, delete file

open, close
read, write, reposition get file attributes, set file attributes

Device Management
request device, release device read, write, reposition get device attributes, set device attributes logically attach or detach devices

Information Maintenance
get time or date, set time or date get system data, set system data get process, file or device attributes set process, file or device attributes

Communications
create, delete communication connection send, receive messages transfer status information attach or detach remote devices

Learning Assignment
What are the various system calls provided in the following operating systems:
Windows Unix Linux

Process
Process = a program under execution
Program: passive entity
Process: active entity

Process States
Process changes its state while it executes

The state of a process is defined in part by the current activity of a process


Each process may be in one of the following states New Ready Running Waiting Terminated

Process States
NEW -> (admitted) -> READY -> (scheduler dispatch) -> RUNNING -> (exit) -> TERMINATED
RUNNING -> (interrupt) -> READY
RUNNING -> (I/O or event wait) -> WAITING -> (I/O or event completion) -> READY

The long-term scheduler (or job scheduler) selects which processes should be brought into the ready queue.

Job queue: set of all processes in the system
Ready queue: set of all processes ready and waiting to execute
Device queue: set of processes waiting for an I/O device

The short-term scheduler (or CPU scheduler) selects which process should be executed next and allocates the CPU. A process can be I/O bound or CPU bound.

Queueing Diagram

Context Switching
When CPU switches to another process, the system must save the state of the old process and load the saved state for the new process. Context-switch time is overhead; the system does NO useful work while switching.

Operating System

Process Control Block


Pointer: address of another PCB (queue link)
Process state: new, ready, running, waiting, halted
Process number: identification
Program counter: address of the next instruction
Registers: saved register values
Memory limits: base register, limit register; page tables, segment tables
List of open files: opened files and their current status
List of devices opened

Context Switching Example


Process 0: executing -> (interrupt or system call) -> save state into PCB0 -> idle
Kernel: reload state from PCB1
Process 1: executing -> (interrupt or system call) -> save state into PCB1 -> idle
Kernel: reload state from PCB0
Process 0: executing

Various Queues - Implementation

Learning Assignment
Identify various possibilities of
creating a process terminating a process

List various commands used for creating and terminating a process in


Windows Linux Unix

Inter-Process Communication (IPC)


An independent process cannot affect or be affected by the execution of another process. A cooperating process can affect or be affected by the execution of another process. Advantages of process cooperation:
Information sharing (several users access same info) Computation speed-up (Multiple Processors) Modularity (Separate functions using Threads) Convenience (editing, printing, compiling in parallel)

Inter-Process Communication (IPC)


Two Models:
Shared Memory
Producer-Consumer Problem

Message Passing
Mail Box

Client Server Model - Socket Programming - Remote Procedure Calls (RPC) - Remote Method Invocation (RMI)

Needs Proper Synchronization

Producer-Consumer Problem
A paradigm for cooperating processes: a producer process produces information that is consumed by a consumer process.

Buffering
A queue of messages is attached to the link; it can be implemented in one of three ways:
1. Zero capacity: 0 messages; the sender must wait for the receiver
2. Bounded capacity: finite length of n messages; the sender must wait if the link is full
3. Unbounded capacity: infinite length; the sender never waits

Bounded-Buffer Shared-Memory Producer-Consumer Problem

Shared data:

#define BUFFER_SIZE 10
typedef struct { ... } item;
item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;

Producer:

item nextProduced;
while (1) {
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing: buffer full */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
}

Consumer:

item nextConsumed;
while (1) {
    while (in == out)
        ; /* do nothing: buffer empty */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
}

Ready Queue
FIFO? Can be implemented as:
a FIFO queue, a priority queue, a tree, or simply a linked list

Multi-programmed Environment
Multi-programmed OS objective: maximize utilization
Simple concept: utilize the I/O time
Process time (burst time) consists of CPU execution cycles (CPU bursts) and I/O time (I/O bursts)

CPU Scheduler (Short-term Scheduler)


Schedule Processes in a proper order to get Maximum Utilization

Process Scheduling
Criteria:
CPU Utilization :
Make CPU as busy as possible

Throughput :
Measure of work done Number of processes completed / Unit time

Turnaround time:
Measure of time: from submission to completion

Waiting time:
Time spent waiting in the ready queue

Response time:
For interactive (time-sharing) systems: time from submission to first response

Process Scheduling
Preemptive Scheduling Non-Preemptive Scheduling
(Process state diagram: NEW -> READY -> RUNNING -> TERMINATED; RUNNING -> WAITING on I/O or event wait; WAITING -> READY on I/O or event completion; RUNNING -> READY on interrupt)

Scheduling algorithms
The CPU scheduler selects a process in the ready queue for execution using a scheduling algorithm. Scheduling algorithms can be classified into:
First come, first served scheduling Shortest job first scheduling Priority scheduling Round robin scheduling Multilevel queue scheduling Multilevel feedback queue scheduling

FCFS Scheduling Algorithm


It is the simplest scheduling algorithm

It is implemented by First in First out (FIFO) Queue


Whenever the process enters into the ready queue PCB is linked onto the tail of the queue

FCFS
Consider an example of processes P1, P2, P3 arriving at time 0, with the CPU burst times shown below:

Process   Burst time (ms)
P1        24
P2        3
P3        3

The Gantt chart below shows the result:

| P1             | P2 | P3 |
0               24   27   30

FCFS
Average waiting time and average turnaround time are calculated using the example on the previous slide.
Waiting time = starting time - arrival time
Waiting time for P1 = 0 - 0 = 0 ms
Waiting time for P2 = 24 - 0 = 24 ms
Waiting time for P3 = 27 - 0 = 27 ms

Average waiting time = (0 + 24 + 27) / 3 = 17 ms

FCFS - Disadvantages
Average turnaround time = (24 + 27 + 30) / 3 = 27 ms
The average waiting time measured under the FCFS policy is long (i.e., 17 ms).
For example, if the processes from the previous slide arrive in the order P2, P3, P1, the result is the following Gantt chart:

| P2 | P3 | P1             |
0    3    6               30

The average waiting time is (6 + 0 + 3) / 3 = 3 ms

FCFS - Disadvantages
The FCFS scheduling algorithm is non-preemptive, so it is troublesome for time-sharing systems, where each user should get a share of the CPU at regular intervals.

Shortest Job First Scheduling


A different approach to CPU scheduling is the Shortest Job First (SJF) scheduling algorithm. This algorithm associates with each process the length of the process's next CPU burst. When the CPU is available, it is given to the process that has the smallest next CPU burst.

Shortest Job First Scheduling


If the next CPU bursts of two processes are the same, FCFS scheduling is used to break the tie. A more appropriate term for this scheduling algorithm is the shortest-next-CPU-burst algorithm.

Shortest Job First Scheduling


Example: A set of processes with the length of the CPU burst time given in milliseconds is shown below

Process   Burst time
P1        6
P2        8
P3        7
P4        3

Shortest Job First Scheduling


The processes above are scheduled using the SJF algorithm, shown as a Gantt chart:

| P4 | P1     | P3      | P2      |
0    3       9        16        24

Shortest Job First Scheduling


Waiting time for P1 = 3 milliseconds
Waiting time for P2 = 16 milliseconds
Waiting time for P3 = 9 milliseconds
Waiting time for P4 = 0 milliseconds

Average waiting time = (3 + 16 + 9 + 0) / 4 = 7 milliseconds

SJF - Advantages
The SJF scheduling algorithm is provably optimal: it gives the minimum average waiting time for a given set of processes. It decreases the waiting time of short processes more than it increases the waiting time of long processes, so the average waiting time decreases. The SJF algorithm is used frequently in long-term CPU scheduling.

SJF - Disadvantages
The difficulty is in knowing the length of the next CPU request (CPU burst), so the SJF algorithm cannot be used directly for short-term CPU scheduling.

SJF - Preemptive
The SJF algorithm can be preemptive: if the next CPU burst of a newly arriving process is shorter than what remains of the currently executing process, the SJF algorithm preempts the currently executing process.

Preemptive SJF scheduling is also called Shortest Remaining Time First scheduling.

SJF - Preemptive
Process   Arrival time   Execution time
P1        0              20
P2        15             25
P3        30             10
P4        45             15

| P1     | P2     | P3     | P2     | P4     |
0        20       30       40       55       70

What is Preemptive and Non-Preemptive Solution ?

SJF
                          Preemptive   Non-Preemptive
Average wait time         6.25         7.50
Average turnaround time   23.75        25.00

Priority Scheduling
The SJF algorithm is a special case of the general priority scheduling algorithm. A priority is associated with each process; priorities are indicated by a fixed range of numbers. The CPU is allocated to the process with the highest priority. Equal-priority processes are scheduled in FCFS order.

Priority Scheduling
Consider the following set of processes, assumed to have arrived at time 0 in the order P1, P2, ..., P5, with the length of the CPU burst time given in milliseconds and their associated priorities:

Process   Burst time   Priority
P1        10           3
P2        1            1
P3        2            4
P4        1            5
P5        5            2

Priority Scheduling

P2
0

P5
1 6

P1
16

P3
18

P4
19

The average waiting time is 8.2 milliseconds i.e. (0+1+6+16+18)/5 = 8.2 milliseconds Priority can be defined internally or externally

Priority Scheduling
Internally, priorities are set by the OS using measurable quantities to compute the priority of a process: time limits, memory requirements, open files, the ratio of average I/O burst to average CPU burst.

Externally, priorities are set outside the OS: process type (e.g., system or application), sponsoring department (public or private sector), political factors, the amount paid for computer use, etc.

Priority Scheduling
Priority scheduling can be preemptive or non-preemptive. When a process arrives at the ready queue, its priority is compared with the priority of the currently running process.
Preemptive: allot the CPU if the priority of the newly arrived process is higher than the priority of the currently running process.

Priority Scheduling
Non-preemptive: put the new process at the head of the ready queue.

A major problem with priority scheduling is indefinite blocking, or starvation.

A solution to this problem is aging: a technique that gradually increases the priority of processes that wait in the system for a long time.

Round-Robin Scheduling

Designed for time-sharing systems
Preemptive only
Time quantum: a small unit of time (or time slice), generally from 10 to 100 milliseconds

Round-Robin Scheduling
The average waiting time under the round-robin policy is often long. Consider the following set of processes that arrive at time 0, with the length of the CPU burst time given in milliseconds:

Process   Burst time
P1        24
P2        3
P3        3

Time quantum = 4

Round-Robin Scheduling

Round-Robin Scheduling
The average waiting time: 17 / 3 = 5.66 ms
The average turnaround time: 47 / 3 = 15.66 ms

Solve with time slices 8, 12, 16, 20, and 24. Draw the comparison graphs with points from 4 to 24. What is the conclusion?

What if the time slice is very small (say 1)?
What if the time slice is very large (say 25, in this example)?

Round-Robin Scheduling
The performance of RR scheduling depends on the size of the time quantum.

If the time quantum is extremely large, the RR approach is the same as FCFS.

If it is extremely small, the RR approach is called processor sharing.

Round Robin Scheduling


Turnaround Time vs. Quantum

Turnaround time need NOT improve as the size of the time quantum increases.

Round Robin Scheduling


Context Switches

How many context switches involved in each case of the problem discussed in previous slides?

Round Robin Scheduling


If the context-switch time is added, the average turnaround time increases for a small time quantum (since more context switches are added).

The time quantum should be large compared to the context-switch time.

A design rule of thumb: 80% of CPU bursts should be shorter than the time quantum.

Multilevel Queue Scheduling


Processes are classified into different groups: foreground (interactive) processes and background (batch) processes. Foreground processes have higher priority than background processes.
A multilevel queue scheduling algorithm partitions the ready queue into several separate queues, and each queue has its own scheduling policy.

Multilevel Queue Scheduling


Example: a multilevel queue scheduling algorithm with five queues:
1. System processes
2. Interactive processes
3. Interactive editing processes
4. Batch processes
5. Student processes

No process in the batch queue can run unless the queues for system, interactive, and interactive editing processes are empty.

Multilevel Queue Scheduling


Example (highest to lowest priority):
System processes
Interactive processes
Interactive editing processes
Batch processes
Student processes

Multilevel Queue Scheduling


Example:
Foreground queue: RR policy
Background queue: FCFS policy

In addition, there must be scheduling among the queues, typically fixed-priority preemptive scheduling: the foreground queue has priority over the background queue.

Multilevel Queue Scheduling


It separates processes with different CPU-burst characteristics. The processes are permanently assigned to one queue based on properties such as:
Memory size
Process priority
Process type

Multilevel Feedback Queue Scheduling


Consider a multilevel feedback queue scheduler with three queues:
q0: RR, quantum = 8
q1: RR, quantum = 16
q2: FCFS

This scheduling algorithm gives the highest priority to any process with a CPU burst of 8 milliseconds or less.

Multilevel Feedback Queue Scheduling


The scheduler first executes all processes in queue 0; when it is empty, it executes all processes in queue 1, and so on. This is one way to implement aging.

A multilevel feedback queue scheduler is defined by: the number of queues, the scheduling policy of each queue, the method used to upgrade a process, the method used to demote a process, the method used to decide which queue a process enters, and the inter-queue scheduling.

Reading Assignment
Identify various other states of a process (than discussed in the class) and understand them Understand the various scheduling mechanism like
Highest Response Ratio Next Algorithm Feedback Algorithm (paper will be uploaded) Fair-Share Scheduling Algorithm

Reading Assignment...
What are the scheduling algorithms used (and interpreted) in various Operating Systems like
Windows Unix Linux

You can also specify the algorithms used for


General Purpose OS Real-Time OS etc.

Process Address Space

Process Address Space

Process Address Space

Logical Address Space

Physical Address Space

Process Address Space


Memory Management
Contiguous Allocation Non-Contiguous Allocation
Paging Segmentation

Process Hierarchy
Structured as a multilevel hierarchy Lowest level
Virtualize CPU for all processes

Virtual memory
Virtualize memory for all processes.
Single virtual memory shared by all processes Separate virtual memory for each process

Process Hierarchy

Process Hierarchy
init: the initial process; its process ID is always 1 and it is always running.

All the other processes are descendants of the init process.

Each process has a parent and can have many siblings.

Thread
A thread is a single sequential stream of execution within a process.

Because threads have some of the properties of processes, they are sometimes called lightweight processes. Within a process, threads allow multiple streams of execution (multithreading).
Threads are a popular way to improve application performance through parallelism. Threads in a process share the same address space.

Multithreading
The operating system supports multiple threads of execution within a single process. MS-DOS supports a single thread. Traditional UNIX supports multiple user processes but only one thread per process.

Windows 2000, Solaris, Linux, Mach, and OS/2 support multiple threads.

Multithreading

Benefits
Responsiveness
Users often like to do several things at a time; in a web browser, for example, an image loads while the user reads the page

Resource Sharing
Memory and other resources Within the same address space

Economy
A server (e.g. file server) serves multiple requests Context-switching is faster

Utilization of MP architectures
Multiple CPUs sharing the same memory Increases concurrency

Benefits
Takes less time to create a new thread than a process Less time to terminate a thread than a process Less time to switch between two threads within the same process Since threads within the same process share memory and files, they can communicate with each other without invoking the kernel

Multithreaded system

Thread Control Block (TCB)


State Registers Status Program Counter Stack Code

Thread States (in Java)

Thread Types
User Threads
Created in user space, supported above the kernel, NOT managed by the kernel

Kernel Threads
Created by kernel Supported & Managed by Kernel

Recent Operating Systems support Kernel Threads Models were proposed to represent a relationship between User Threads and Kernel Threads.
Many to One One to One Many to Many Two Level

Thread Types

Threading Issues
Semantics of fork() and exec() system calls:
Does fork() duplicate only the calling thread or all threads (single-threaded or multi-threaded duplicate)?
Does exec() replace the entire process, including all its threads?

Thread cancellation
Asynchronous Cancellation Deferred Cancellation

Signal handling
A signal indicates that a particular event has occurred. It is delivered to a process, but to which thread: a particular thread or all the threads?

Thread pools
How many threads? Unlimited? A pool is created at process startup; the number of threads depends on the number of CPUs, available memory, etc.

Thread specific data


Transaction processing

Multiple Processors
Categories of Computer Systems
Single Instruction Single Data (SISD) Single Instruction Multiple Data (SIMD) Multiple Instruction Single Data (MISD) Multiple Instruction Multiple Data (MIMD)

Multiple Processors

MP OS Design Considerations
Simultaneous concurrent processes or threads Scheduling Synchronization Memory Management Reliability and Fault Tolerance

Microkernels
Small operating system core Contains only essential operating systems functions Many services traditionally included in the operating system are now external subsystems
device drivers file systems virtual memory manager windowing system security services

Benefits of a Microkernel Organization


Uniform interface on request made by a process
All services are provided by means of message passing

Extensibility
Allows the addition of new services

Flexibility
New features added Existing features can be subtracted

Benefits of a Microkernel Organization


Portability
Changes needed to port the system to a new processor are made in the microkernel, not in the other services

Reliability
Modular design Small microkernel can be rigorously tested

Benefits of a Microkernel Organization


Distributed system support
Messages are sent without knowing what the target machine is

Object-oriented operating system


Components are objects with clearly defined interfaces that can be interconnected to form software

Microkernel Design
Low-level memory management
mapping each virtual page to a physical page frame

Inter-process communication I/O and interrupt management

Multiple-Processors - Scheduling
Multiple CPUs
Homogeneous processors Heterogeneous processors

CPU scheduling - more complex


Load-sharing issues
Any processor available, or a reserved processor (e.g., one processor with I/O attached)

Multiple-Processors - Scheduling
Approaches
Asymmetric multiprocessing Symmetric multiprocessing

Asymmetric multiprocessing: master-slave.
The master handles scheduling decisions, I/O processing, and other system activities; the slaves execute user code.
Simple, because only one processor accesses the system data structures.

Symmetric multiprocessing
Self-scheduling, possibly with hardware logic (a processor selector).
The ready queue may be common to all processors or separate for each processor.
Careful: two processors might update the same data structure.
Ensure that two processors do not choose the same process.
Popular and efficient: virtually all recent OSs have adopted it.

Symmetric multiprocessing

Symmetric multiprocessing
Issues
Processor Affinity Load Balancing Symmetric Multi-Threading

Symmetric multiprocessing
Issues
Processor Affinity
A process has affinity for the processor on which it is currently executing.
Migration is costly: the cache must be invalidated on the old processor and re-populated on the new one.
Soft affinity: the process can migrate between processors if required.
Hard affinity: migration is not permitted.

Symmetric multiprocessing
Issues
Load Balancing
The workload should be balanced: a few processors may be busy while a few are idle, so distribute the workload evenly. This is necessary for processors with separate ready queues. Two approaches:
Push migration: a periodic task checks and pushes load onto idle processors
Pull migration: an idle processor periodically checks and pulls a waiting process from a busy one

Symmetric multiprocessing
Issues
Symmetric Multi-Threading
Several threads run concurrently on different logical processors (instead of physical processors).
Symmetric multithreading (SMT), or hyper-threading technology: Intel Core i3/i5/i7, Itanium, Pentium 4, and Xeon CPUs.

Reading Assignment
Hyper Threading Super Threading

Thread Scheduling
User level threads
handled by thread libraries

Kernel level threads


handled by kernel itself

Mapping is required!

Assignment is given as part of Project

Real-Time Scheduling
Hard real-time systems
are required to complete a critical task within a guaranteed amount of time.

Soft real-time systems
require that critical processes receive priority over less critical ones.

Explore More !!!
