Operating systems have existed since the very first computer generation, and they keep
evolving with time. There are many operating systems; this evolution has been driven by
customer demand and advances in technology.
Types of Operating Systems: some of the widely used types are as follows.
Multiprogramming is the ability of an operating system to execute more than one
program on a single-processor machine.
More than one task/program/job/process can reside in main memory at one point of
time.
The operating system picks and begins to execute one of the jobs in memory.
When that job needs to wait (for example, for an I/O operation to complete), the operating
system simply switches to, and executes, another job; when that job in turn needs to wait,
the CPU switches to yet another job, and so on.
The following are the advantages of using time-sharing operating systems:
Each task gets an equal opportunity
Less chance of duplication of software
CPU idle time can be reduced
Disadvantages of Time-Sharing OS:
Reliability problem
Care must be taken over the security and integrity of user programs and data
Data communication problem
Examples of time-sharing OSs are Multics, Unix, etc.
In modern operating systems, we can play MP3 music, edit documents in Microsoft
Word, and browse the web in Google Chrome all simultaneously; this is accomplished by
means of multitasking.
4. Distributed Operating System –
1. Since all systems are independent, the failure of any one of them does not affect
communication among the others.
2. Since resources are shared, computation is fast and durable.
3. Distributed systems are easily scalable, as many systems can be added to the network.
4. The data exchange process within the network is fast and reliable.
Disadvantages:
1. Since the entire communication relies on a single network, failure of this network will stop
all communication.
2. The languages used to establish distributed systems are not yet well defined.
3. These systems are expensive and not readily available.
These systems run on a server and provide the capability to manage data, users, groups,
security, applications, and other networking functions.
These types of operating systems allow shared access to files, printers, security, applications,
and other networking functions over a small private network.
Another important aspect of network operating systems is that all users are well aware
of the underlying configuration, of all other users within the network, of their individual
connections, and so on; this is why these computers are popularly known as tightly coupled
systems.
Examples of network operating systems are Microsoft Windows Server 2003, Microsoft
Windows Server 2008, UNIX, Linux, Mac OS X, Novell NetWare, and BSD.
A real-time operating system is used for real-time applications, that is, applications in which
data processing must be done within a fixed, small quantum of time. These types of OSs serve
real-time systems. The time interval required to process and respond to inputs is very small;
this interval is called the response time. A real-time system is one subject to real-time
constraints: a response must be guaranteed within a specified timing constraint, or the
system must meet a specified deadline. Examples include flight control systems and real-time
monitors.
APPLICATIONS:
The following are places where real-time operating systems are used:
Scientific experiments
Medical imaging systems
Industrial control systems
Weapon systems
Air traffic control systems
Robots
Thermal power plants etc…
The applications of such systems make it easy to see why real-time operating systems can
tolerate no buffering delays.
EXAMPLES:
The following are places where time constraints are strictly followed:
Submarine signaling
RADAR
Delays in the signaling of such systems can lead to hazardous accidents.
Each layer can interact with the one just above it and the one just below it. The lowermost
layer, which deals directly with the hardware, mainly performs I/O communication, while the
uppermost layer, which is directly connected to the application program, acts as an interface
between the user and the operating system.
This is a highly advantageous structure because the functionalities reside in different
layers, and hence each layer can be tested and debugged separately.
The Microsoft Windows NT Operating System is a good example of the layered structure.
Fig. Layered Architecture of Operating System
3. System Call
In computing, a system call is the programmatic way in which a computer program
requests a service from the kernel of the operating system it is executed on. System calls
provide the services of the operating system to user programs via the Application
Programming Interface (API). They form an interface between a process and the operating
system that allows user-level processes to request operating-system services. System calls
are the only entry points into the kernel. All programs needing resources must use system
calls.
File Manipulation
These types of system calls are used to handle files. These system calls are responsible for file
manipulation such as creating a file, reading a file, writing into a file etc.
Functions:
create file, delete file
open, close file
read, write, reposition
get and set file attributes
1. We must be able to create and delete files. Either system call requires the name of the file
and perhaps some of the file's attributes.
2. Once the file is created, we need to open it in order to use it.
3. We may also do read, write, or reposition.
4. Finally, we need to close the file, indicating that we are no longer using it.
5. We need to be able to determine the values of various attributes and perhaps to reset them
if necessary. File attributes include the file name, a file type, protection codes, accounting
information, and so on
Device Management
These types of system calls are used to deal with devices. These system calls are responsible for
device manipulation such as reading from device buffers, writing into device buffers etc.
Functions:
request device, release device
read, write, reposition
get device attributes, set device attributes
logically attach or detach devices
A process may need several resources to execute - main memory, disk drives, access to files,
and so on. If the resources are available, they can be granted, and control can be returned to
the user process. Otherwise, the process will have to wait until sufficient resources are
available.
The various resources controlled by the OS can be thought of as devices. Some of these
devices are physical devices (for example, tapes), while others can be thought of as abstract
or virtual devices (for example, files).
Information Maintenance
These types of system calls are used to maintain information. These system calls handle
information and its transfer between the operating system and the user program.
Functions:
get time or date, set time or date
get system data, set system data
get and set process, file, or device attributes
Many system calls exist simply for the purpose of transferring information between the user
program and the OS. For example, most systems have a system call to return the current time
and date.
Other system calls may return information about the system, such as the number of current
users, the version number of the OS, the amount of free memory or disk space, and so on.
In addition, the OS keeps information about all its processes, and system calls are used to
access this information. Generally, calls are also used to reset the process information.
Communication
These types of system calls are used for communication. These system calls are useful for
interprocess communication. They also deal with creating and deleting a communication
connection.
Functions:
create, delete communication connection
send, receive messages
transfer status information
Attach and Detach remote devices
There are two common models of interprocess communication:
o the message-passing model and
o the shared-memory model.
In the message-passing model, the communicating processes exchange messages with
one another to transfer information.
In the shared-memory model, processes use shared-memory create and shared-memory
attach system calls to create and gain access to regions of memory owned by other
processes.
Message passing is useful for exchanging smaller amounts of data, because no conflicts
need be avoided.
Shared memory allows maximum speed and convenience of communication, since it can
be done at memory speeds when it takes place within a computer.
Protection
Protection provides a mechanism for controlling access to the resources provided by a
computer system. The system calls providing protection include set permission() and get
permission(), which manipulate the permission settings of resources such as files and disks. The
allow user() and deny user() system calls specify whether particular users can — or cannot — be
allowed access to certain resources.
Examples of all the above types of system calls exist in both Windows and Unix; for instance,
process control is done with CreateProcess() in Windows and fork() in Unix.
4. Operating-System Services
An operating system provides an environment for the execution of programs. It provides
certain services to programs and to the users of those programs.
One set of operating system services provides functions that are helpful to the user.
User interface. Almost all operating systems have a user interface (UI). This interface can
take several forms. One is a command-line interface (CLI), which uses text commands and
a method for entering them. Most commonly, a graphical user interface (GUI) is used.
Here, the interface is a window system with a pointing device to direct I/O, choose from
menus, and make selections, and a keyboard to enter text.
Program execution. The system must be able to load a program into memory and to run that
program. The program must be able to end its execution, either normally or abnormally
(indicating error).
I/O operations. A running program may require I/O, which may involve a file or an I/O
device.
File-system manipulation. Programs need to read and write files and directories. They also
need to create and delete them by name, search for a given file, and list file information.
Finally, some operating systems include permissions management to allow or deny access to
files or directories based on file ownership.
Communications. There are many circumstances in which one process needs to exchange
information with another process. Such communication may occur between processes that
are executing on the same computer or between processes that are executing on different
computer systems tied together by a computer network.
Error detection. The operating system needs to detect and correct errors
constantly. Errors may occur in the CPU and memory hardware (such as a memory error or a
power failure), in I/O devices (such as a parity error on disk, a connection failure on a
network, or lack of paper in the printer), and in the user program (such as an arithmetic
overflow, an attempt to access an illegal memory location, or a too-great use of CPU time).
Resource allocation. When there are multiple users or multiple jobs running at the same
time, resources must be allocated to each of them. The operating system manages many
different types of resources.
Accounting. The operating system keeps track of which users use how much and what kinds
of computer resources. This record keeping may be used for accounting (so that users can be billed) or
simply for accumulating usage statistics.
A system as large and complex as an operating system can only be created by partitioning
it into smaller pieces. Each of these pieces should be a well defined portion of the system with
carefully defined inputs, outputs, and function. Many modern operating systems share the system
components outlined below.
5.1 Process Management
The CPU executes a large number of programs. A user program in execution is called
a process.
A process needs certain resources — including CPU time, memory, files, and I/O devices
— to accomplish its task. These resources are either given to the process when it is created or
allocated to it while it is running.
The operating system is responsible for the following activities in connection with process
management.
In order for a program to be executed, it must be mapped to absolute addresses and loaded
into memory. As the program executes, it accesses program instructions and data from memory
by generating these absolute addresses. Eventually the program terminates, its memory space
is declared available, and the next program may be loaded and executed.
In order to improve both the utilization of the CPU and the speed of the computer's response
to its users, several processes must be kept in memory. There are many different memory-management
algorithms, and which is best depends on the particular situation. Selection of a memory-management
scheme for a specific system depends upon many factors, but especially upon the hardware design
of the system. Each algorithm requires its own hardware support.
The operating system is responsible for the following activities in connection with memory
management.
o Keep track of which parts of memory are currently being used and by whom.
o Decide which processes are to be loaded into memory when memory space
becomes available.
o Allocate and deallocate memory space as needed.
5.3 Secondary Storage Management
Most modern computer systems use disks as the primary on-line storage of information,
of both programs and data. Most programs, like compilers, assemblers, sort routines, editors,
formatters, and so on, are stored on the disk until loaded into memory, and then use the disk as
both the source and destination of their processing. Hence the proper management of disk
storage is of central importance to a computer system.
There are few alternatives. Magnetic tape systems are generally too slow. In addition,
they are limited to sequential access. Thus tapes are more suited for storing infrequently used
files, where speed is not a primary concern.
The operating system is responsible for the following activities in connection with
disk management
One of the purposes of an operating system is to hide the peculiarities of specific hardware
devices from the user. For example, in Unix, the peculiarities of I/O devices are hidden from the
bulk of the operating system itself by the I/O system. The I/O system consists of:
File management is one of the most visible services of an operating system. Computers
can store information in several different physical forms; magnetic tape, disk, and drum are the
most common forms. Each of these devices has its own characteristics and physical organization.
For convenient use of the computer system, the operating system provides a uniform
logical view of information storage. The operating system abstracts from the physical properties
of its storage devices to define a logical storage unit, the file. Files are mapped, by the operating
system, onto physical devices.
A file is a collection of related information defined by its creator. Commonly, files
represent programs (both source and object forms) and data. Data files may be numeric,
alphabetic or alphanumeric. Files may be free-form, such as text files, or may be rigidly
formatted. In general, a file is a sequence of bits, bytes, lines, or records whose meaning is
defined by its creator and user. It is a very general concept.
Also files are normally organized into directories to ease their use. Finally, when multiple
users have access to files, it may be desirable to control by whom and in what ways files may be
accessed.
The operating system is responsible for the following activities in connection with file
management:
The various processes in an operating system must be protected from each other's activities. For
that purpose, various mechanisms can be used to ensure that the files, memory segments,
CPU, and other resources can be operated on only by those processes that have gained proper
authorization from the operating system.
For example, memory-addressing hardware ensures that a process can execute only within its own
address space. The timer ensures that no process can gain control of the CPU without
relinquishing it. Finally, no process is allowed to do its own I/O, which protects the integrity of the
various peripheral devices.
Protection refers to a mechanism for controlling the access of programs, processes, or users to
the resources defined by a computer system. This mechanism must provide a means for specifying
the controls to be imposed, together with some means of enforcement.
Protection can improve reliability by detecting latent errors at the interfaces between component
subsystems.
Networking
The processors in the system are connected through a communication network, which can
be configured in a number of different ways. The network may be fully or partially connected.
The communication network design must consider routing and connection strategies, and the
problems of connection and security.
A distributed system provides the user with access to the various resources the system
maintains. Access to a shared resource allows computation speed-up, data availability, and
reliability.
PROCESS MANAGEMENT
Process
The system consists of a collection of processes: operating-system processes execute
system code, and user processes execute user code. A process is the unit of work in a modern
time-sharing system. A process is an 'active' entity, as opposed to a program, which is
considered a 'passive' entity.
We write our computer programs in a text file and when we execute this
program, it becomes a process which performs all the tasks mentioned in the
program.
Process memory
When a program is loaded into the memory and it becomes a process, it can be divided
into four sections ─ stack, heap, text and data. The following image shows a simplified layout of
a process inside main memory −
Figure : Process in memory
1 Stack : The process stack contains temporary data such as method/function
parameters, return addresses, and local variables.
2 Heap : This is memory that is dynamically allocated to the process during its run time.
3 Text : This includes the compiled program code, together with the current activity
represented by the value of the program counter.
4 Data : This section contains the global and static variables.
Process State
As a process executes, it changes state. The state of a process is defined in part by the
current activity of that process. A process may be in one of the following states:
• New. The process is being created.
• Running. Instructions are being executed.
• Waiting. The process is waiting for some event to occur (such as an I/O completion or
reception of a signal).
• Ready. The process is waiting to be assigned to a processor.
• Terminated. The process has finished execution.
It is important to realize that only one process can be running on any processor at any
instant. Many processes may be ready and waiting, however. The state diagram corresponding to
these states is presented in Figure.
Each process is represented in the operating system by a process control block (PCB) —
also called a task control block. A PCB is shown in Figure.
It contains many pieces of information associated with a specific process, including these:
Process state. The state may be new, ready, running, waiting, and so on.
Program counter. The counter indicates the address of the next instruction to be executed for
this process.
CPU registers. The registers vary in number and type, depending on the computer architecture.
They include accumulators, index registers, stack pointers, and general-purpose registers, plus
any condition-code information.
Memory-management information. This information may include such items as the value of
the base and limit registers and the page tables, or the segment tables, depending on the memory
system used by the operating system.
I/O status information. This information includes the list of I/O devices allocated to the
process, a list of open files, and so on.
Process Scheduling
Process scheduling is the task by which the operating system schedules processes. It is an
essential part of multiprogramming operating systems. Such operating systems allow more than
one process to be loaded into executable memory at a time, and the loaded processes share the
CPU using time multiplexing.
Scheduling Queues
The OS employs a process scheduler. The process scheduler assigns each process the
necessary resources and its turn for execution on the CPU. The decision to schedule a process is
made by an underlying scheduling algorithm. The scheduler maintains three queues, shown in
Figure, to schedule the processes.
As processes enter the system, they are put into a job queue, which consists of all processes in
the system. The processes that are residing in main memory and are ready and waiting to execute
are kept on a list called the ready queue. A process may also have to wait for an I/O device such
as the disk; the list of processes waiting for a particular I/O device is called a device queue. Each
device has its own device queue.
Job queue - The job queue is the set of all processes on the system
Ready queue - The ready queue has all the processes that are loaded in main memory.
These processes are ready and waiting for their turn to execute as soon as the CPU
becomes available.
Device queue - the set of processes waiting for an I/O device, such as a printer, to
become available. This queue is also known as the blocked queue.
Processes from the job queue are moved to the ready queue when they are ready to be
executed. When an executing process must wait for an I/O device to become available, that
process is moved to the device queue, where it remains until the requested I/O resource becomes
available. Then the process is moved back to the ready queue, where it waits its turn to execute.
Figure Queueing-diagram representation of process scheduling
A new process is initially put in the ready queue. It waits there until it is selected for execution,
or dispatched. Once the process is allocated the CPU and is executing, one of several events
could occur:
• The process could issue an I/O request and then be placed in an I/O queue.
• The process could create a new child process and wait for the child’s termination.
• The process could be removed forcibly from the CPU,as a result of an interrupt, and be put
back in the ready queue.
Schedulers
The medium-term scheduler places the blocked and suspended processes in the secondary memory of a computer system.
The task of moving from main memory to secondary memory is called swapping out. The task
of moving back a swapped out process from secondary memory to main memory is known
as swapping in. The swapping of processes is performed to ensure the best utilization of main
memory.
The short-term scheduler decides the order in which the processes in the ready queue are allocated
central processing unit (CPU) time for their execution. The short-term scheduler is also referred
to as the CPU scheduler.
Context switch
A context switch occurs when a computer’s CPU switches from one process or thread to a
different process or thread.
Typically, there are three situations in which a context switch is necessary, as shown below.
Multitasking – When the CPU needs to switch processes in and out of memory, so that more
than one process can be running.
Kernel/User Switch – When switching between user mode and kernel mode, a context switch
may be used (but is not always necessary).
Interrupts – When the CPU is interrupted, for example to return data from a disk read.
1. Save the context of the processor, including program counter and other registers.
2. Update the process control block of the process that is currently in the Running state. This
includes changing the state of the process to one of the other states (Ready; Blocked;
Ready/Suspend; or Exit). Other relevant fields must also be updated, including the reason for
leaving the Running state and accounting information.
3. Move the process control block of this process to the appropriate queue (Ready; Blocked on
Event i; Ready/Suspend).
4. Select another process for execution.
5. Update the process control block of the process selected, including changing its state to
Running.
6. Update memory management data structures. This may be required, depending on how address
translation is managed.
7. Restore the context of the processor to that which existed at the time the selected process was
last switched out of the Running state, by loading in the previous values of the program counter
and other registers.
Operations on Processes
The processes in most systems can execute concurrently, and they may be created and deleted
dynamically. Thus, these systems must provide a mechanism for process creation and
termination.
Process Creation
During the course of execution, a process may create several new processes. As
mentioned earlier, the creating process is called a parent process, and the new processes are
called the children of that process. Each of these new processes may in turn create other
processes, forming a tree of processes.
1. System initialization.
In addition to the processes created at boot time, new processes can be created afterward
as well. Often a running process will issue system calls to create one or more new processes to
help it do its job. Creating new processes is particularly useful when the work to be done can
easily be formulated in terms of several related, but otherwise independent interacting processes.
For example, if a large amount of data is being fetched over a network for subsequent
processing, it may be convenient to create one process to fetch the data and put them in a shared
buffer while a second process removes the data items and processes them.
In Microsoft Windows, when a process is started it does not have a window, but it can create
one (or more), and most do. In both systems, users may have multiple windows open at once,
each running some process. Using the mouse, the user can select a window and interact with the
process, for example, providing input when needed.
Process Termination
After a process has been created, it starts running and does whatever its job is. Sooner or
later the new process will terminate, usually due to one of the following conditions:
Most processes terminate because they have done their work. When a compiler has
compiled the program given to it, the compiler executes a system call to tell the operating system
that it is finished. This call is exit in UNIX and ExitProcess in Windows.
The second reason for termination is that the process discovers a fatal error. For example,
if a user types the command
cc foo.c
to compile the program foo.c and no such file exists, the compiler simply exits.
Screen-oriented interactive processes generally do not exit when given bad parameters. Instead
they pop up a dialog box and ask the user to try again.
The third reason for termination is an error caused by the process, often due to a program
bug. Examples include executing an illegal instruction, referencing nonexistent memory, or
dividing by zero.
The fourth reason a process might terminate is that the process executes a system call
telling the operating system to kill some other process. In UNIX this call is kill. The
corresponding Win32 function is TerminateProcess. In both cases, the killer must have the
necessary authorization to act on the killee.
Cooperating Process
Processes executing concurrently in the operating system may be either independent
processes or cooperating processes. A process is independent if it cannot affect or be affected by
the other processes executing in the system. Any process that does not share data with any other
process is independent. A process is cooperating if it can affect or be affected by the other
processes executing in the system. Clearly, any process that shares data with other processes is a
cooperating process.
Interprocess Communication
A common paradigm for cooperating processes is the producer-consumer problem. For
example, a print program produces characters that are consumed by the printer driver.
There is a buffer of n slots and each slot is capable of storing one unit of data. There are two
processes running, namely, producer and consumer, which are operating on the buffer.
A producer tries to insert data into an empty slot of the buffer. A consumer tries to
remove data from a filled slot in the buffer. As you might have guessed by now, those two
processes won't produce the expected output if they are being executed concurrently.
There needs to be a way to make the producer and consumer work in an independent manner.
The Unbounded buffer producer-consumer problem places no limit on the size of the
buffer. The consumer may have to wait for new items, but the producer can always
produce new items.
The bounded-buffer producer-consumer problem assumes a fixed buffer size. The
consumer must wait if the buffer is empty, and the producer must wait if the buffer is full.
The solution for the producer is to either go to sleep or discard data if the buffer is full.
The next time the consumer removes an item from the buffer, it notifies the producer,
who starts to fill the buffer again.
In the same way, the consumer can go to sleep if it finds the buffer to be empty.
The next time the producer puts data into the buffer, it wakes up the sleeping consumer.
The solution can be reached by means of inter-process communication, typically using
semaphores.
An inadequate solution could result in a deadlock where both processes are waiting to be
awakened. The problem can also be generalized to have multiple producers and
consumers.
Example
The following variables reside in a region of memory shared by the producer and consumer
processes:
The shared buffer is implemented as a circular array with two logical pointers: in and out.
The variable in points to the next free position in the buffer;
out points to the first full position in the buffer.
The buffer is empty when in == out;
the buffer is full when ((in + 1) % BUFFER_SIZE) == out.
All modern operating systems, however, provide features enabling a process to contain multiple
threads of control.
A thread is an execution unit that consists of its own program counter, a stack, and a set of
registers. Threads are also known as lightweight processes. Threads are a popular way to improve
application performance through parallelism. The CPU switches rapidly back and forth among the
threads, giving the illusion that the threads are running in parallel.
Because each thread has its own independent execution context, a process can carry out
several activities in parallel by increasing the number of threads.
Types of Thread
There are two types of threads:
1. User Threads
2. Kernel Threads
User threads. User threads are supported above the kernel and are managed without kernel
support. These are the threads that application programmers use in their programs.
Kernel threads are supported within the kernel of the OS itself. All modern OSs support kernel
level threads, allowing the kernel to perform multiple simultaneous tasks and/or to service
multiple kernel system calls simultaneously.
Multithreading Models
The user threads must be mapped to kernel threads, by one of the following strategies:
Many to One Model
In this model, many user-level threads are all mapped onto a single kernel thread.
Thread management is handled by the thread library in user space, which is efficient in
nature.
Very few systems continue to use the model because of its inability to take advantage of
multiple processing cores.
One to One Model
The one to one model creates a separate kernel thread to handle each and every user thread.
Most implementations of this model place a limit on how many threads can be created.
It also allows multiple threads to run in parallel on multiprocessors.
Linux and Windows from 95 to XP implement the one-to-one model for threads.
Many to Many Model
The many to many model multiplexes any number of user threads onto an equal or smaller
number of kernel threads, combining the best features of the one-to-one and many-to-one
models.
Users can create any number of the threads.
Blocking the kernel system calls does not block the entire process.
Processes can be split across multiple processors.