Few Questions
1. Did all the computers have the same configuration?
CPU (make, cores, speed), memory (make, model, capacity), disk (make, capacity, rotational speed, etc.), devices (mouse? keyboard? pen drive, LCD, etc.)
Few Questions
Name a few operating systems on which you have worked:
Windows, Linux, UNIX, Mac, BSD, etc.
Few Questions
Does the programmer have to alter every program for every piece of hardware? Does a faulty program crash everything? Does every program have access to all the hardware?
Operating Systems
Applications
---------------- (logical interface)
Operating System
---------------- (physical interface)
Hardware
Broadly
What are the components of the System?
Hardware Operating System Applications Users
Few Questions
Why study operating systems?
To build, modify, or administer an operating system
To understand system performance: the behavior of the OS impacts the entire machine; to tune workload performance
To apply knowledge across many areas: computer architecture, programming languages, data structures and algorithms, and performance modeling
Role..
Role #1: Provide a standard library (i.e., abstract the resources). What is a resource?
Challenges
What are the correct abstractions? How much of hardware should be exposed?
Role..
Role #2: Resource coordinator (i.e., manager). Advantages of a resource coordinator:
Virtualize resources so multiple users or applications can share them
Protect applications from one another
Provide efficient and fair access to resources
Challenges
What are the correct mechanisms? What are the correct policies?
OS Components
Kernel: Core components of the OS
Program Execution (Process Management) Interrupts Modes Memory Management Virtual Memory Multitasking Disk access and File Systems Device Drivers
Networking
Enables processes to communicate with one another
Security
Authentication, Authorization (Access Controls)
User Interface
Shell (CLI or GUI)
Functions
Depends on
User expectations
Technology changes
Evolution of the OS
Two distinct phases of history
Phase 1: Computers are expensive
Goal: Use computer time efficiently. Maximize throughput (i.e., jobs per second). Maximize utilization (i.e., percentage of time busy).
Goal of OS
Get the hardware working Single operator/programmer/user runs and debugs interactively
OS Functionality
Standard library only (no sharing or coordination of resources) Monitor that is always resident; transfer control to programs
Problems
Inefficient use of hardware (throughput and utilization)
Batch Processing
Goal of OS: Better throughput and utilization Batch: Group of jobs submitted together
Operator collects jobs; orders efficiently; runs one at a time
Advantages
Reduce setup costs over many jobs Keep machine busy Improves throughput and utilization
Problems
User must wait until batch is done for results Machine idle when job is reading from cards and writing to printers
Spooling
Hardware
Mechanical I/O devices are much slower than the CPU: reading 17 cards/sec vs. executing 1000s of instructions/sec
Goal of OS
Improve performance by overlapping I/O with CPU execution
Spooling: Simultaneous Peripheral Operations On-Line
1. Read card punches to disk
2. Compute (while reading from and writing to disk)
3. Write output from disk to printer
OS Functionality
Buffering and interrupt handling
Problem
Machine is idle when a job waits for I/O to/from disk. The mismatch in speed between devices (producer and consumer) must be addressed.
OS Functionality
Job scheduling policies Memory management and protection
Inexpensive Peripherals
1960s Hardware
Expensive mainframes, but inexpensive keyboards and monitors Enables text editors and interactive debuggers
Goal of OS
Improve users' response time
OS Functionality
Time-sharing: switch between jobs to give appearance of dedicated machine More complex job scheduling Concurrency control and synchronization
Advantage
Users easily submit jobs and get immediate feedback
Goal of OS
Give user control over machine
Advantages
Simplicity
Works with little main memory
Performance is predictable
Disadvantages
No time-sharing or protection between jobs
Goal of OS
Allow single user to run several applications simultaneously Provide security from malicious attacks Efficiently support web servers
Current Systems
Summary: OS changes due to both hardware and users Current trends
Multiprocessors Networked systems Virtual machines
Classification - Interface
Character-based: commands are entered using the keyboard only to perform an operation
e.g., UNIX, MS-DOS
User Interface
Example
Creating a file in MS-DOS mode:
C:\> COPY CON filename.txt
Press the Enter key, type the data, then press Ctrl+Z to save the file.
Multi-User
e.g., UNIX, XENIX, Linux, etc.
Real time
Distributed
Hybrid
Learning Assignment
Mainframe Systems Desktop Systems Multiprocessor Systems Distributed Systems Clustered Systems Real-Time Systems Handheld Systems
Aim : Understanding various types of Operating Systems. Identifying sufficient Examples for each
Note
All the Assignments / Projects / Tutorials etc. will be discussed during Tutorial Sessions
OS Design
User goals
The operating system should be convenient to use, easy to learn, reliable, safe, and fast.
System goals
The operating system should be easy to design, implement, and maintain, as well as flexible, reliable, error-free, and efficient.
OS Implementation
Traditionally written in assembly language, operating systems can now be written in higher-level languages. Code written in a high-level language:
can be written faster
is more compact
is easier to understand and debug
An operating system is far easier to port (move to other hardware) if it is written in a high-level language.
Example
User mode:   application programs
Kernel mode: processor scheduling and other kernel services
Hardware
Learning Assignment
What is the layered architecture for the following Operating Systems:
Windows UNIX LINUX
Program Execution
Simple Program: (test.c)
main() {
    printf("%d", fn());
}

int fn() {
    int d = 5 / 0;   /* divide by zero */
    malloc();        /* incorrect call */
}

System services touched while a program runs: open(file), close(file), create process, terminate process, request device, release device, call sub-routine, write, wait event, signal event (interrupt), memory allocation and de-allocation.
Services
Program execution I/O operations File-system manipulation Communications
exchange of information between processes executing either on the same computer or on different systems tied together by a network. Implemented via shared memory or message passing.
System Calls
The applications that a user writes normally run in user mode.
System Calls
For example, if the program wants to open a file, it requests the service from the kernel.
System Calls
The process of calling the kernel is referred to as system call invocation. The call to the kernel ("please open the file") is known as a system call. The kernel calls the appropriate device drivers to carry out the operation.
System Calls
User program → System call → Device driver → Device
File Management
create file, delete file
open, close
read, write, reposition
get file attributes, set file attributes
Device Management
request device, release device read, write, reposition get device attributes, set device attributes logically attach or detach devices
Information Maintenance
get time or date, set time or date get system data, set system data get process, file or device attributes set process, file or device attributes
Communications
create, delete communication connection send, receive messages transfer status information attach or detach remote devices
Learning Assignment
What are the various system calls provided in the following operating systems:
Windows Unix Linux
Process
Process = a program under execution
Program: passive entity
Process: active entity
Process States
Process changes its state while it executes
Process States
States: NEW, READY, RUNNING, WAITING, TERMINATED
Transitions:
NEW → READY (admitted)
READY → RUNNING (scheduler dispatch)
RUNNING → READY (interrupt)
RUNNING → WAITING (I/O or event wait)
WAITING → READY (I/O or event completion)
RUNNING → TERMINATED (exit)

The long-term scheduler (or job scheduler) selects which processes should be brought into the ready queue.
Job queue: set of all processes in the system
Ready queue: set of all processes ready and waiting to execute
Device queue: set of processes waiting for an I/O device
Short-term scheduler (or CPU scheduler) selects which process should be executed next and allocates CPU. A process can be I/O bound or CPU bound
Queueing Diagram
Context Switching
When CPU switches to another process, the system must save the state of the old process and load the saved state for the new process. Context-switch time is overhead; the system does NO useful work while switching.
The per-process state saved and restored by the operating system includes:
Memory limits: base register, limit register; page tables, segment tables
List of open files and their current status; list of devices opened
Learning Assignment
Identify various possibilities of
creating a process terminating a process
Message Passing
Mail Box
Client Server Model - Socket Programming - Remote Procedure Calls (RPC) - Remote Method Invocation (RMI)
Producer-Consumer Problem
Paradigm for cooperating processes, producer process produces information that is consumed by a consumer process.
Buffering
Queue of messages attached to the link can be implemented in one of three ways.
1. Zero capacity: 0 messages; the sender must wait for the receiver.
2. Bounded capacity: finite length of n messages; the sender must wait if the link is full.
3. Unbounded capacity: infinite length; the sender never waits.
Shared data
#define BUFFER_SIZE 10
typedef struct {
    ...
} item;

item buffer[BUFFER_SIZE];
int in = 0;
int out = 0;

Producer

item nextProduced;
while (1) {
    while (((in + 1) % BUFFER_SIZE) == out)
        ; /* do nothing -- buffer full */
    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
}

Consumer

item nextConsumed;
while (1) {
    while (in == out)
        ; /* do nothing -- buffer empty */
    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
}
Ready Queue
Not necessarily FIFO. It can be implemented as a FIFO queue, a priority queue, a tree, or simply an unordered linked list.
Multi-programmed Environment
Multi-programmed OS objective: maximize utilization
Simple concept: utilize I/O time
Process time (burst time) alternates between:
CPU execution time (CPU burst)
I/O time (I/O burst)
Process Scheduling
Criteria:
CPU Utilization :
Make CPU as busy as possible
Throughput :
Measure of work done Number of processes completed / Unit time
Waiting Time :
Time spent waiting in the ready queue
Response Time :
Time from submission to first response (important in interactive, time-sharing systems)
Process Scheduling
Preemptive Scheduling Non-Preemptive Scheduling
Scheduling algorithms
The CPU scheduler selects a process from the ready queue for execution using a scheduling algorithm. Scheduling algorithms can be classified into:
First come, first served scheduling Shortest job first scheduling Priority scheduling Round robin scheduling Multilevel queue scheduling Multilevel feedback queue scheduling
FCFS
Consider processes P1, P2, P3, all arriving at time 0, with the following CPU burst times:

Process   Burst time (ms)
P1        24
P2        3
P3        3

Gantt chart (order P1, P2, P3):
| P1 | P2 | P3 |
0    24   27   30
FCFS
Average waiting time and average turn around time are calculated using the example in previous slide
Waiting time = start time - arrival time
P1 = 0 - 0 = 0 ms
P2 = 24 - 0 = 24 ms
P3 = 27 - 0 = 27 ms
FCFS - Disadvantages
The average waiting time under the FCFS policy is long: (0 + 24 + 27)/3 = 17 ms
Average turnaround time = (24 + 27 + 30)/3 = 27 ms
For example, if the processes arrive in the order P2, P3, P1, the Gantt chart becomes:
| P2 | P3 | P1 |
0    3    6    30
Average waiting time = (6 + 0 + 3)/3 = 3 ms
FCFS - Disadvantages
The FCFS scheduling algorithm is non-preemptive, which makes it troublesome for time-sharing systems.
SJF Scheduling

Consider the following processes, all arriving at time 0:

Process   Burst time (ms)
P1        6
P2        8
P3        7
P4        3

Gantt chart (shortest job first):
| P4 | P1 | P3 | P2 |
0    3    9    16   24

Waiting times: P1 = 3 ms, P2 = 16 ms, P3 = 9 ms, P4 = 0 ms
Average waiting time = (3 + 16 + 9 + 0)/4 = 7 ms
SJF - Advantages
The SJF algorithm is provably optimal:
it gives the minimum average waiting time for a given set of processes; it decreases the waiting time of short processes more than it increases the waiting time of long processes, so the average waiting time decreases.
SJF - Disadvantages
The difficulty is knowing the length of the next CPU burst.
SJF - Preemptive
The SJF algorithm can also be preemptive (shortest-remaining-time-first): if a new process arrives with a CPU burst shorter than the remaining time of the currently running process, the current process is preempted.
SJF - Preemptive
Process   Burst time (ms)   Arrival time (ms)
P1        20                0
P2        25                15
P3        10                30
P4        15                45

Gantt chart:
| P1 | P2 | P3 | P2 | P4 |
0    20   30   40   55   70
SJF
Average waiting time: preemptive = 7.50 ms, non-preemptive = 25.00 ms
Priority Scheduling
Priority scheduling algorithm is special case of SJF A priority is associated with each process Priorities are indicated by fixed range of numbers
Priority Scheduling
Consider the following set of processes, assumed to have arrived at time 0 in the order P1, P2, ..., P5, with CPU burst times in milliseconds and associated priorities:
Process   Burst time (ms)   Priority
P1        10                3
P2        1                 1
P3        2                 4
P4        1                 5
P5        5                 2

Gantt chart (1 = highest priority):
| P2 | P5 | P1 | P3 | P4 |
0    1    6    16   18   19
The average waiting time is (0 + 1 + 6 + 16 + 18)/5 = 8.2 milliseconds. Priorities can be defined internally or externally.
Priority Scheduling
Internally, priorities are set by the OS using measurable quantities to compute the priority of a process:
time limits, memory requirements, number of open files, the ratio of average I/O burst to average CPU burst
Priority Scheduling
Priority scheduling
Priority scheduling can be preemptive or non-preemptive. When a process arrives at the ready queue, its priority is compared with the priority of the currently running process.
Preemptive: allot the CPU if the priority of the newly arrived process is higher than the priority of the currently running process.
Priority Scheduling
Non-preemptive: put the new process at the head of the ready queue.
Round-Robin Scheduling
Preemptive only.
Time quantum: a small unit of time (time slice), generally from 10 to 100 milliseconds.
Round-Robin Scheduling
The average waiting time under the round-robin policy is often long. Consider the following set of processes, all arriving at time 0, with CPU burst times in milliseconds:

Process   Burst time (ms)
P1        24
P2        3
P3        3

Time quantum = 4
Round-Robin Scheduling
Gantt chart (quantum = 4):
| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30

The average waiting time = (6 + 4 + 7)/3 = 17/3 = 5.66 ms
Solve with time slices 8, 12, 16, 20, and 24. Draw the comparison graph over the range 4 to 24. What is the conclusion?
Round-Robin Scheduling
Performance of RR scheduling depends
on the size of time quantum
If the time quantum is extremely small, the RR approach is called processor sharing.
Turnaround time need NOT improve as the size of the time quantum increases.
How many context switches involved in each case of the problem discussed in previous slides?
A multilevel queue scheduling algorithm partitions the Ready Queue into several separate queues Each Queue has its own scheduling policy
For example, with the queues
1. System processes
2. Interactive processes
3. Interactive editing processes
4. Batch processes
5. Student processes
no process in the batch queue can run unless the queues for system, interactive, and interactive-editing processes are empty.
Multilevel feedback queue example with three queues:
q0: RR with time quantum = 8
q1: RR with time quantum = 16
q2: FCFS
This scheduling algorithm gives the highest priority to any process with a CPU burst of 8 milliseconds or less
Depends on:
the number of queues, the scheduling policy of each queue, the method used to upgrade a process, the method used to demote a process, the method used to determine which queue a process enters, and the inter-queue scheduling.
Reading Assignment
Identify various other states of a process (beyond those discussed in class) and understand them. Understand various scheduling mechanisms such as:
Highest Response Ratio Next Algorithm Feedback Algorithm (paper will be uploaded) Fair-Share Scheduling Algorithm
Reading Assignment...
What are the scheduling algorithms used (and interpreted) in various Operating Systems like
Windows Unix Linux
Process Hierarchy
Structured as a multilevel hierarchy. The lowest level virtualizes the CPU for all processes.
Virtual memory
Virtualize memory for all processes.
Single virtual memory shared by all processes Separate virtual memory for each process
Process Hierarchy
init: the initial process, whose process ID is always 1 and which is always running.
Thread
A thread is a single sequential stream of execution within a process.
Because threads have some of the properties of processes, they are sometimes called lightweight processes. Threads allow multiple streams of execution within one process (multithreading).
Threads are a popular way to improve application performance through parallelism. Threads in a process share the same address space.
Multithreading
The operating system supports multiple threads of execution within a single process. MS-DOS supports a single thread. Traditional UNIX supports multiple user processes but only one thread per process.
Windows 2000, Solaris, Linux, Mach, and OS/2 support multiple threads
Multithreading
Benefits
Responsiveness
Users often like to do several things at a time, e.g., a web browser loads an image while the user reads the page.
Resource Sharing
Memory and other resources Within the same address space
Economy
A server (e.g. file server) serves multiple requests Context-switching is faster
Utilization of MP architectures
Multiple CPUs sharing the same memory Increases concurrency
Benefits
Takes less time to create a new thread than a process Less time to terminate a thread than a process Less time to switch between two threads within the same process Since threads within the same process share memory and files, they can communicate with each other without invoking the kernel
Multithreaded system
Thread Types
User threads
Created by the user; supported above the kernel; NOT managed by the kernel
Kernel threads
Created, supported, and managed by the kernel
Recent Operating Systems support Kernel Threads Models were proposed to represent a relationship between User Threads and Kernel Threads.
Many to One One to One Many to Many Two Level
Thread Types
Threading Issues
Semantics of the fork() and exec() system calls:
Does fork() duplicate only the calling thread or all threads of the process? exec() replaces the entire process, including all its threads.
Thread cancellation
Asynchronous Cancellation Deferred Cancellation
Signal handling
A signal notifies a process that a particular event has occurred. It is delivered to the process, but should it go to a particular thread or to all the threads?
Thread pools
How many threads? Unlimited? A pool of threads is created at process startup; the number of threads depends on the number of CPUs, the amount of memory, etc.
Multiple Processors
Categories of Computer Systems
Single Instruction Single Data (SISD) Single Instruction Multiple Data (SIMD) Multiple Instruction Single Data (MISD) Multiple Instruction Multiple Data (MIMD)
Multiple Processors
MP OS Design Considerations
Simultaneous concurrent processes or threads Scheduling Synchronization Memory Management Reliability and Fault Tolerance
Microkernels
Small operating system core Contains only essential operating systems functions Many services traditionally included in the operating system are now external subsystems
device drivers file systems virtual memory manager windowing system security services
Extensibility
Allows the addition of new services
Flexibility
New features can be added and existing features removed
Reliability
Modular design Small microkernel can be rigorously tested
Microkernel Design
Low-level memory management
mapping each virtual page to a physical page frame
Multiple-Processors - Scheduling
Multiple CPUs
Homogeneous processors Heterogeneous processors
Multiple-Processors - Scheduling
Approaches
Asymmetric multiprocessing Symmetric multiprocessing
Asymmetric multiprocessing
Master/slave structure.
Master: handles scheduling decisions, I/O processing, and other system activities.
Slave: executes user code.
This design is simple because only one processor accesses the system data structures.
Symmetric multiprocessing
Each processor is self-scheduling (or hardware logic, a processor selector, is used).
Ready queue: either common to all processors, or separate for each processor.
Be careful: two processors must not update the same data structure at the same time.
Ensure that two processors do not choose the same process.
SMP is popular and efficient; virtually all recent operating systems have adopted it.
Symmetric multiprocessing
Symmetric multiprocessing
Issues
Processor Affinity Load Balancing Symmetric Multi-Threading
Symmetric multiprocessing
Issues
Processor Affinity
A process has an affinity for the processor on which it is currently executing.
Migration is costly: the cache on the old processor is invalidated (for the departing process) and the cache on the new processor must be re-populated (for the received process).
Soft affinity: the process can migrate between processors if required.
Hard affinity: migration is not permitted.
Symmetric multiprocessing
Issues
Load Balancing
The workload should be balanced: otherwise a few processors may be busy while a few are idle. Distribute the workload evenly. This is necessary for processors with separate ready queues. Two approaches:
Push migration: a task periodically checks the load and pushes work onto idle processors.
Pull migration: an idle processor periodically checks and pulls a waiting process from a busy one.
Symmetric multiprocessing
Issues
Symmetric Multi-Threading
Several threads run concurrently on different logical processors (instead of physical processors).
Symmetric multithreading (SMT), or hyper-threading technology: Intel Core i3/i5/i7, Itanium, Pentium 4, and Xeon CPUs.
Reading Assignment
Hyper Threading Super Threading
Thread Scheduling
User-level threads are handled by thread libraries; a mapping onto kernel threads is required!
Real-Time Scheduling
Hard real-time systems are required to complete a critical task within a guaranteed amount of time.