Types of Operating Systems
Some of the widely used operating systems are as follows:
Advantages of Distributed Operating Systems:
Failure of one node does not affect communication among the others, as all systems are independent of each other.
Since resources are shared, computation is fast and durable.
The load on the host computer is reduced.
Advantages of RTOS:
Maximum Consumption:
Task Shifting:
Focus on Application:
Disadvantages of RTOS:
Limited Tasks:
Heavy use of system resources:
Complex Algorithms:
OS SERVICES
1. Program Execution
Each program requires input, and after processing that input it produces output. This involves the use of I/O devices, so the operating system makes it convenient for users to run programs by providing the necessary I/O functions.
2. I/O Operations
3. File System Manipulation
4. Communication
5. Error Detection
6. Resource allocation
7. Accounting
8. Protection System
If a computer system has multiple users and allows the concurrent execution
of multiple processes, then the various processes must be protected from
one another's activities.
1. Monolithic Systems
2. Layered Systems
3. Virtual Machines
4. Exokernels
5. Client-Server Systems
1. Main Procedure
2. Service Procedures
3. Utility Procedures
The figure given below shows how the procedures listed above are arranged in the monolithic system model of an operating system.
Layered Systems
This system has 6 layers as shown in the table given below.
Layer Function
5 The operator
4 User programs
3 Input/Output management
2 Operator-process communication
1 Memory and drum management
0 Processor allocation and multiprogramming
Virtual Machines
The system originally called CP/CMS, later renamed VM/370, was based
on an astute observation: a time-sharing system provides both
multiprogramming and an extended machine with a more convenient
interface than the bare hardware.
The heart of the system, known as the virtual machine monitor, runs on
the bare hardware and does the multiprogramming, providing several
virtual machines to the next layer up, as shown in the given figure.
These virtual machines are not extended machines with files and other
nice features. They are exact copies of the bare hardware, including
the kernel/user mode, input/output, interrupts, and everything else the
real machine has.
Exokernels
The exokernel is a program that sits at the bottom layer and runs in
kernel mode.
Its job is simply to allocate resources to the virtual machines and to
check every attempt to use them, making sure no machine is trying to
use another machine's resources.
The advantage of the exokernel scheme is that it saves a layer of mapping.
Client-Server Model
In the client-server model, as shown in the figure given below, all the
kernel does is handle the communication between the clients and the
servers.
By splitting the operating system (OS) up into parts, each of which only
handles one facet of the system, such as file service, terminal service,
process service, or memory service, each part becomes small and
manageable.
The adaptability of the client-server model to use in distributed systems
is an advantage of this model.
System Call
System calls are the interface between the operating system and the user
programs that allow user-level processes to request services of the operating system.
In general, system calls are available as assembly language instructions,
and they are listed in the manuals used by assembly-level programmers.
A system call provides the services of the operating system to user
programs via an Application Program Interface (API). System calls are
usually made when a process in user mode requires access to a resource
managed by the kernel.
As can be seen from the diagram, a process executes normally in user
mode until a system call interrupts it. The system call is then executed,
on a priority basis, in kernel mode. After the execution of the system
call, control returns to user mode and execution of the user program
continues.
The services provided by system calls include:
1. Process creation and management
2. Main memory management
3. File access, directory, and file system management
4. Device handling (I/O)
5. Protection
6. Networking, etc.
There are mainly five types of system calls. These are explained in detail as
follows:
Process Control
These system calls deal with processes, such as process creation and
process termination.
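As an illustration of process-control system calls, the sketch below forks a child process, lets the child terminate with a chosen exit code, and has the parent wait for it. This is a minimal POSIX example; the function name `spawn_and_wait` is invented for illustration.

```c
#include <sys/wait.h>
#include <unistd.h>

/* Fork a child that exits with the given code, then wait for it and
 * return the code the parent observed. Sketch of the process-control
 * system calls fork(), _exit() and waitpid(). */
int spawn_and_wait(int code)
{
    pid_t pid = fork();           /* create a new process */
    if (pid == 0) {
        _exit(code);              /* child: terminate immediately */
    }
    int status = 0;
    waitpid(pid, &status, 0);     /* parent: wait for the child */
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

The parent and child run as independent processes after fork(); the exit code travels back to the parent through the wait() family of calls.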
File Management
These system calls are responsible for file manipulation, such as creating a
file, deleting a file, and opening, closing, reading, and writing a file.
Device Management
These system calls are responsible for device manipulation, such as reading
from and writing to devices, and requesting and releasing devices.
Information Maintenance
These system calls handle information and its transfer between the
operating system and the user program.
Communication
These system calls are useful for interprocess communication. They also
deal with creating and closing a communication connection.
When a program is loaded into the memory and it becomes a process, it can
be divided into four sections ─ stack, heap, text and data.
Process Components
Stack: This contains temporary data, such as function parameters, return
addresses, and local variables.
Heap: This is memory dynamically allocated to a process during its run time.
Data: This section contains the global and static variables.
Text: This includes the current activity represented by the value of the
Program Counter and the contents of the processor's registers.
Arrival Time – Time at which the process arrives in the ready queue.
Completion Time – Time at which process completes its execution.
Burst Time – Time required by a process for CPU execution.
Turn Around Time = Completion Time - Arrival Time
Waiting Time = Turn Around Time - Burst Time
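The two formulas above translate directly into code. This is a small sketch; the function names are invented for illustration.

```c
/* Turnaround and waiting time, directly from the formulas above:
 *   Turn Around Time = Completion Time - Arrival Time
 *   Waiting Time     = Turn Around Time - Burst Time */
int turnaround_time(int completion, int arrival)
{
    return completion - arrival;
}

int waiting_time(int completion, int arrival, int burst)
{
    return turnaround_time(completion, arrival) - burst;
}
```

For example, a process that arrives at time 2, needs a 5-unit CPU burst, and completes at time 10 has a turnaround time of 8 and a waiting time of 3.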
Process Life Cycle
When a process executes, it passes through different states. These states
may differ in different operating systems, and the names of these states are
also not standardized.
In general, a process can be in one of the following five states at a time.
Start: The initial state when a process is first created.
Ready: The process is waiting to be assigned to a processor.
Running: The process's instructions are being executed by the processor.
Waiting: The process moves into the waiting state if it needs to wait for a
resource, such as waiting for user input, or waiting for a file to become
available.
Terminated: The process has finished execution, or it was terminated by the
operating system.
Process State: The current state of the process i.e., whether it is ready,
running, waiting, or whatever.
CPU registers: The various CPU registers whose contents must be saved and
restored when the process enters and leaves the running state.
Accounting information: This includes the amount of CPU used for process
execution, time limits, execution ID etc.
THREAD
Each thread belongs to exactly one process and no thread can exist outside
a process. Each thread represents a separate flow of control.
Advantages of Thread
Threads minimize the context switching time.
Use of threads provides concurrency within a process.
Efficient communication.
It is more economical to create and context switch threads.
Threads allow utilization of multiprocessor architectures to a greater scale
and efficiency.
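The idea that each thread is a separate flow of control within one process can be sketched with POSIX threads. This is a minimal example (error handling omitted); the names `worker` and `run_two_threads` are invented for illustration.

```c
#include <pthread.h>

/* Two threads of one process, each writing to its own slot of a shared
 * array: separate flows of control sharing the process's address space. */
static int results[2];

static void *worker(void *arg)
{
    int id = *(int *)arg;
    results[id] = (id + 1) * 10;   /* thread-private slot, so no race */
    return NULL;
}

int run_two_threads(void)
{
    pthread_t t[2];
    int ids[2] = {0, 1};
    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, worker, &ids[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);
    return results[0] + results[1];
}
```

Both threads see the same `results` array because threads share the data section of their process, which is exactly what makes communication between threads efficient.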
Types of Thread
There are two types of threads:
User Level Threads − user-managed threads, implemented without kernel support.
Kernel Level Threads − operating-system-managed threads acting on the kernel.
Advantages
Thread switching does not require Kernel mode privileges.
User level thread can run on any operating system.
Scheduling can be application specific in the user level thread.
User level threads are fast to create and manage.
Disadvantages
In a typical operating system, most system calls are blocking.
Multithreaded application cannot take advantage of multiprocessing.
Kernel Level Threads
Advantages
The kernel can simultaneously schedule multiple threads from the same
process on multiple processors.
If one thread in a process is blocked, the Kernel can schedule another
thread of the same process.
Kernel routines themselves can be multithreaded.
Disadvantages
Kernel threads are generally slower to create and manage than the user
threads.
Transfer of control from one thread to another within the same process
requires a mode switch to the Kernel.
Schedulers
● Long-Term Scheduler
● Short-Term Scheduler
● Medium-Term Scheduler
Scheduling
The process scheduling is the activity of the process manager that handles
the removal of the running process from the CPU and the selection of another
process on the basis of a particular strategy.
Nonpreemptive:
• Once a process has been given the CPU, it runs until it blocks for I/O or
terminates.
• Treatment of all processes is fair.
• Response times are more predictable.
• Useful in real-time systems.
• Short jobs are made to wait by longer jobs - no priority.
Preemptive:
• Processes are allowed to run for a maximum of some fixed time.
• Useful in systems in which high-priority processes require rapid
attention.
• In time-sharing systems, preemptive scheduling is important in
guaranteeing acceptable response times.
• High overhead.
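The nonpreemptive behavior above ("once a process has the CPU, it runs until it blocks or terminates") can be sketched with First-Come-First-Served, a representative nonpreemptive policy. The function name is invented for illustration, and all processes are assumed to arrive at time 0.

```c
/* First-Come-First-Served (nonpreemptive): once a process gets the CPU
 * it runs for its full burst. Fills completion[] for n processes that
 * all arrive at time 0, in queue order. */
void fcfs_completion(const int burst[], int completion[], int n)
{
    int clock = 0;
    for (int i = 0; i < n; i++) {
        clock += burst[i];       /* process runs until it finishes */
        completion[i] = clock;
    }
}
```

With bursts {24, 3, 3}, the short jobs finish at times 27 and 30 because they wait behind the long job, illustrating the "short jobs are made to wait by longer jobs" drawback.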
1. Shared Memory
2. Message passing
Race Condition
A race condition occurs when two or more processes can access shared data
and try to change it at the same time. Because the process scheduling
algorithm can swap between processes at any time, you do not know the
order in which the processes will attempt to access the shared data. The
result of the change in the data therefore depends on the scheduling
algorithm, i.e. the processes are "racing" to access and change the data.
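The race described above can be made concrete with two threads incrementing one shared counter. A minimal POSIX sketch (names invented for illustration): without the mutex, the read-modify-write sequences of the two threads can interleave and updates get lost; with the mutex, the final count is always exact.

```c
#include <pthread.h>

/* Two threads each add 100000 to a shared counter. The mutex ensures
 * only one thread is inside the critical section (counter++) at a time,
 * so the result is deterministic. */
static long counter;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *add_many(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* enter critical section */
        counter++;
        pthread_mutex_unlock(&lock);  /* leave critical section */
    }
    return NULL;
}

long run_counter_demo(void)
{
    counter = 0;
    pthread_t a, b;
    pthread_create(&a, NULL, add_many, NULL);
    pthread_create(&b, NULL, add_many, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return counter;
}
```

Deleting the lock/unlock pair typically makes the final value fall short of 200000 on a multiprocessor, which is exactly the race condition the text describes.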
Critical Section
The critical section is the part of a process that accesses a shared
variable. The regions of a program that access shared resources, and may
therefore cause race conditions, are called critical sections. One way to
avoid race conditions is not to allow two processes to be in their critical
sections at the same time: we must ensure that only one process at a time
can execute within the critical section.
Mutual Exclusion
Mutual exclusion is a way of making sure that if one process is using shared
modifiable data, the other processes are excluded from doing the same thing.
Formally, while one process accesses the shared variable, all other
processes wishing to do so at the same moment are kept waiting; when that
process has finished with the shared variable, one of the waiting processes
is allowed to proceed. In this fashion, each process accessing the shared
data (variables) excludes all others from doing so simultaneously.
Conclusion
The flaw in this proposal is best explained by example. Suppose process A
sees that the lock is 0. Before it can set the lock to 1, another process B
is scheduled, runs, and sets the lock to 1. When process A runs again, it
will also set the lock to 1, and two processes will be in their critical
sections simultaneously.
Conclusion
Taking turns is not a good idea when one of the processes is much slower
than the other. Suppose process 0 finishes its critical section quickly, so
both processes are now in their non-critical sections. If it is again
process 0's turn but process 0 is not ready, process 1 is blocked by a
process that is not even in its critical section. This violates the
progress condition mentioned above.
Both 'sleep' and 'wakeup' system calls have one parameter that represents a
memory address used to match up 'sleeps' and 'wake ups'.
Peterson’s Solution:
It is a solution to the mutual exclusion problem that does not require strict
alternation, but still uses the idea of lock variables together with the concept
of taking turns.
Peterson's algorithm is used for mutual exclusion and allows two processes
to share a single-use resource without conflict, using only shared memory
for communication. The algorithm originally worked only with two processes,
but it has since been generalized to more than two.
This is a software mechanism implemented at user mode. It is a busy-waiting
solution that can be implemented for only two processes. It uses two shared
variables: a turn variable and an interested array.
The code of the solution is given below.

#define N 2
#define TRUE 1
#define FALSE 0

int interested[N] = {FALSE, FALSE};
int turn;

void Entry_Section(int process)
{
    int other = 1 - process;       /* index of the other process */
    interested[process] = TRUE;    /* announce interest */
    turn = process;                /* give the other process priority */
    while (interested[other] == TRUE && turn == process)
        ;                          /* busy wait */
}

void Exit_Section(int process)
{
    interested[process] = FALSE;   /* leave the critical section */
}
The Peterson solution provides all the necessary requirements:
mutual exclusion, progress, bounded waiting, and portability.
Semaphore
In 1965, Dijkstra proposed a new and very significant technique for
managing concurrent processes, using the value of a simple integer
variable to synchronize the progress of interacting processes. This integer
variable is called a semaphore. It is basically a synchronizing tool, and it
is accessed only through two standard atomic operations, wait and signal,
designated by P(S) and V(S) respectively.
In very simple words, a semaphore is a variable that can hold only a non-
negative integer value, shared between all the threads, with the operations
wait and signal.
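The bookkeeping of wait (P) and signal (V) can be sketched as below. This is a single-threaded illustration of the counting logic only; a real semaphore makes both operations atomic and blocks the caller instead of returning, so the names and the non-blocking `sem_try_wait` variant are invented for illustration.

```c
/* Counting semaphore sketch: wait (P) succeeds only while the value is
 * positive and decrements it; signal (V) increments it. */
typedef struct {
    int value;                 /* non-negative integer value */
} semaphore;

int sem_try_wait(semaphore *s)     /* P(S), non-blocking variant */
{
    if (s->value > 0) {
        s->value--;
        return 1;              /* acquired */
    }
    return 0;                  /* a real wait() would block here */
}

void sem_signal(semaphore *s)      /* V(S) */
{
    s->value++;
}
```

Initializing the value to 1 gives a binary semaphore (mutual exclusion); initializing it to n lets up to n threads proceed at once.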
Types of Semaphores
Semaphores are mainly of two types:
1. Binary Semaphore: A special form of semaphore whose value is restricted
to 0 and 1, used for mutual exclusion and concurrency.
2. Counting Semaphore: A semaphore whose value can range over an
unrestricted non-negative domain, used to control access to a resource
that has multiple instances.
Statement
As an example of how the sleep and wakeup system calls are used, consider
the producer-consumer problem, also known as the bounded-buffer problem.
Two processes share a common, fixed-size (bounded) buffer. The producer
puts information into the buffer and the consumer takes information out.
The goal is to suspend the producers when the buffer is full, to suspend
the consumers when the buffer is empty, and to make sure that only one
process at a time manipulates the buffer, so there are no race conditions
or lost updates. Trouble arises when:
1. The producer wants to put new data in the buffer, but the buffer is
already full. Solution: the producer goes to sleep, to be awakened when the
consumer has removed data.
2. The consumer wants to remove data from the buffer, but the buffer is
already empty. Solution: the consumer goes to sleep until the producer puts
some data in the buffer and wakes it up.
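The full/empty conditions above can be sketched with a bounded buffer whose operations refuse to proceed when the buffer is full or empty. This single-threaded sketch shows only the buffer logic; in a real system the failing caller would sleep (or wait on a semaphore) instead of getting a return code. Names and the buffer size are invented for illustration.

```c
/* Bounded-buffer sketch: produce() fails when the buffer is full and
 * consume() fails when it is empty -- the points where the real
 * producer and consumer would go to sleep. */
#define BUF_SIZE 4

typedef struct {
    int items[BUF_SIZE];
    int count;                 /* items currently in the buffer */
} bounded_buffer;

int produce(bounded_buffer *b, int item)
{
    if (b->count == BUF_SIZE)
        return 0;              /* full: producer must wait */
    b->items[b->count++] = item;
    return 1;
}

int consume(bounded_buffer *b, int *item)
{
    if (b->count == 0)
        return 0;              /* empty: consumer must wait */
    *item = b->items[--b->count];
    return 1;
}
```

With sleep/wakeup, the `return 0` branches become the places where the process sleeps, and each successful operation wakes the other party if it might be waiting.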
Conclusion
This approach also leads to the same race condition seen in the earlier
approaches. The race condition can occur because access to 'count' is
unconstrained. The essence of the problem is that a wakeup call, sent to a
process that is not yet sleeping, is lost.
Chapter 4 Deadlock
Under the normal mode of operation, a process may utilize a resource only
in the following sequence:
1) Request the resource
2) Use the resource
3) Release the resource
Consider an example in which two trains are coming toward each other on the
same track and there is only one track: neither train can move once they
are in front of each other. A similar situation occurs in operating systems
when two or more processes hold some resources and wait for resources held
by the other(s). For example, in the diagram below, Process 1 is holding
Resource 1 and waiting for Resource 2, which is held by Process 2, while
Process 2 is waiting for Resource 1.
Deadlock can arise if following four conditions hold simultaneously
(Necessary Conditions)
Necessary conditions for Deadlocks
1. Mutual Exclusion
2. Hold and Wait
A process is holding at least one resource while waiting to acquire
additional resources that are held by other processes.
3. No preemption
A resource cannot be forcibly taken away from a process; it is released
only voluntarily by the process holding it.
4. Circular Wait
All the processes must be waiting for the resources in a cyclic manner
so that the last process is waiting for the resource which is being held
by the first process.
1. Deadlock Ignorance
Deadlock ignorance is the most widely used approach among all the
mechanisms, and it is used by many operating systems, mainly for end
users. In this approach, the operating system assumes that deadlock never
occurs; it simply ignores deadlock. This approach is best suited to a
single end-user system where the user uses the system only for browsing
and other normal activities.
2. Deadlock prevention
Deadlock happens only when mutual exclusion, hold and wait, no preemption,
and circular wait all hold simultaneously. If it is possible to violate one
of these four conditions at all times, then deadlock can never occur in the
system.
The idea behind the approach is very simple: we have to defeat one of the
four conditions. There can, however, be a big argument about its physical
implementation in the system.
3. Deadlock avoidance
In deadlock avoidance, the operating system checks whether the system is in
a safe state or an unsafe state at every step it performs. The process
continues as long as the system remains in a safe state; once the system
would move to an unsafe state, the OS has to backtrack one step.
We will discuss deadlock detection and recovery later in more detail since it
is a matter of discussion.
Deadlock Prevention
Let's see how we can prevent each of the conditions.
1. Mutual Exclusion
Mutual exclusion, from the resource point of view, means that a resource
can never be used by more than one process simultaneously. That is fair
enough, but it is the main reason behind deadlock: if a resource could be
used by more than one process at the same time, no process would ever have
to wait for a resource.
Spooling
For a device like a printer, spooling can work. There is memory associated
with the printer which stores the jobs from each process. The printer then
collects the jobs and prints each one according to FCFS. With this
mechanism, a process doesn't have to wait for the printer; it can continue
whatever it was doing and collect the output later, once it is produced.
Although spooling can be an effective way to violate mutual exclusion, it
suffers from two kinds of problems.
1. It cannot be applied to every resource; spooling works only for devices
such as printers.
2. After some time, a race condition may arise between the processes
competing for space in the spool.
2. Hold and Wait
Hold and wait can be violated by ensuring its negation:
!(hold and wait) = !hold or !wait (either a process doesn't hold
resources, or it doesn't wait for them).
However, this is difficult in practice:
1. A process is a set of instructions executed by the CPU, and each
instruction may demand multiple resources at multiple times. This need
cannot be determined in advance by the OS.
2. The possibility of starvation increases, because some process may hold
a resource for a very long time.
3. No Preemption
Deadlock arises due to the fact that a process can't be stopped once it
starts. However, if we take the resource away from the process which is
causing deadlock then we can prevent deadlock.
This is not a good approach at all, since if we take away a resource that
is being used by a process, all the work it has done so far may become
inconsistent.
Consider a printer being used by a process. If we take the printer away
from that process and assign it to some other process, then all the data
already printed can become inconsistent and ineffective, and the process
can't resume printing from where it left off, which causes performance
inefficiency.
4. Circular Wait
To violate circular wait, we can assign a priority number to each resource
and require that a process never request a resource with a lower priority
than one it already holds. This ensures that resources are always requested
in a fixed order, so no cycle can form in the wait-for graph.
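The resource-ordering rule can be sketched as a tiny helper that reorders a pair of resource requests so the lower-numbered resource is always acquired first. The function name is invented for illustration.

```c
/* Break circular wait by resource ordering: given two resource numbers
 * a process wants to lock, swap them if needed so the lower-numbered
 * resource is always acquired first. If every process obeys this rule,
 * no cycle can form in the wait-for graph. */
void order_requests(int *first, int *second)
{
    if (*first > *second) {
        int tmp = *first;
        *first = *second;
        *second = tmp;
    }
}
```

For example, if one process wants resources 5 and 2, it acquires 2 first; a second process wanting 2 and 5 also acquires 2 first, so neither can hold one resource while waiting for the other in the reverse order.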
Among all the methods, violating Circular wait is the only approach that can
be implemented practically.
Deadlock Avoidance
It is possible for a process to be in an unsafe state but for this not to result in
a deadlock. The notion of safe/unsafe states only refers to the ability of the
system to enter a deadlock state or not. For example, if a process requests
A which would result in an unsafe state, but releases B which would prevent
circular wait, then the state is unsafe but the system is not in deadlock. One
known algorithm that is used for deadlock avoidance is the Banker's
algorithm, which requires resource usage limit to be known in advance.
However, for many systems it is impossible to know in advance what every
process will request. This makes deadlock avoidance often impractical.
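The safety check at the heart of the Banker's algorithm can be sketched for a single resource type: given each process's current allocation and declared maximum need, plus the number of free instances, decide whether some order exists in which every process can finish. This is a simplified single-resource sketch (the full algorithm handles vectors of resource types); the names are invented for illustration.

```c
/* Banker's-style safety check for one resource type: a state is safe
 * if there is an order in which every process can obtain its remaining
 * need (max - alloc), finish, and release what it holds. */
#define NPROC 4

int is_safe(const int alloc[NPROC], const int max[NPROC], int available)
{
    int finished[NPROC] = {0};
    int done = 0;
    while (done < NPROC) {
        int progress = 0;
        for (int i = 0; i < NPROC; i++) {
            /* process i can finish if its remaining need fits now */
            if (!finished[i] && max[i] - alloc[i] <= available) {
                available += alloc[i];   /* it finishes and releases */
                finished[i] = 1;
                done++;
                progress = 1;
            }
        }
        if (!progress)
            return 0;    /* no process can finish: unsafe state */
    }
    return 1;            /* a safe completion order exists */
}
```

The check is the reason the maximum needs must be known in advance, which is exactly the practical limitation the text points out.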
For Resource
For Process
Kill a process
Killing a process can solve our problem, but the bigger concern is deciding
which process to kill. Generally, the operating system kills the process
which has done the least amount of work so far.
What is a File ?
A file can be defined as a data structure which stores a sequence of
records. Files are stored in a file system, which may exist on a disk or in
main memory. Files can be simple (plain text) or complex (specially
formatted). Files represent both programs and data. Data can be numeric,
alphanumeric, alphabetic, or binary.
Many different types of information can be stored in a file: source
programs, object programs, executable programs, numeric data, payroll
records, graphic images, sound recordings, and so on.
Sequential Access
Most of the operating systems access the file sequentially. In other words,
we can say that most of the files need to be accessed sequentially by the
operating system.
In sequential access, the OS reads the file word by word. A pointer is
maintained which initially points to the base address of the file. If the
user wants to read the first word of the file, the pointer provides that
word to the user and increases its value by one word. This process
continues to the end of the file.
Modern systems do provide direct access and indexed access, but the most
used method is sequential access, because most files, such as text files,
audio files, and video files, need to be accessed sequentially.
Direct Access
The Direct Access is mostly required in the case of database systems. In
most of the cases, we need filtered information from the database. The
sequential access can be very slow and inefficient in such cases.
Suppose every block of storage stores 4 records and we know that the
record we need is stored in the 10th block. In that case, sequential
access is inefficient, because it would have to traverse all the preceding
blocks in order to reach the needed record.
Direct access gives the required result directly, even though the
operating system has to perform some complex tasks, such as determining
the desired block number. This method is generally used in database
applications.
Indexed Access
If a file can be sorted on any of its fields, then an index can be assigned
to a group of certain records, and a particular record can be accessed by
its index. The index is simply the address of a record in the file.
With indexed access, searching a large database becomes very quick and
easy, but we need some extra space in memory to store the index values.
Directory Structure
What is a directory?
Directory can be defined as the listing of the related files on the disk. The
directory may store some or the entire file attributes.
To get the benefit of different file systems on different operating
systems, a hard disk can be divided into a number of partitions of
different sizes. The partitions are also called volumes or mini disks.
Each partition must have at least one directory in which, all the files of the
partition can be listed. A directory entry is maintained for each file in the
directory which stores all the information related to that file.
A directory can be viewed as a file which contains the metadata of a
bunch of files.
Every directory supports a number of common operations on files:
1. File creation
2. File searching
3. File deletion
4. Renaming a file
5. Traversing files
6. Listing of files
Single-Level Directory
Advantages
1. Implementation is very simple.
2. If the sizes of the files are small, searching becomes faster.
3. File creation, searching, and deletion are very simple, since we have
only one directory.
Disadvantages
1. We cannot have two files with the same name.
2. The directory may be very big, so searching for a file may take a long
time.
3. Protection cannot be implemented for multiple users.
4. Files of the same kind cannot be grouped into a single directory for a
particular user.
Every operating system maintains a variable called PWD which contains the
present working directory name, so that searching can be done
appropriately.
Directory Implementation
There are a number of algorithms by which directories can be implemented.
However, the selection of an appropriate directory implementation
algorithm may significantly affect the performance of the system.
The directory implementation algorithms are classified according to the
data structure they use. There are mainly two algorithms in use these
days.
1. Linear List
In this algorithm, all the files in a directory are maintained as a singly
linked list. Each file entry contains pointers to the data blocks assigned
to it and a pointer to the next file in the directory.
Characteristics
1. When a new file is created, the entire list is checked to see whether
the new file name matches an existing file name. If it does not, the
file can be created at the beginning or at the end of the list.
Searching for a unique name is therefore a big concern, because
traversing the whole list takes time.
2. Hash Table
To overcome the drawbacks of singly linked list implementation of
directories, there is an alternative approach that is hash table. This approach
suggests to use hash table along with the linked lists.
A key-value pair is generated and stored in the hash table for each file in
the directory. The key is determined by applying a hash function to the
file name, and the value points to the corresponding file stored in the
directory.
Searching now becomes efficient, because the entire list need not be
traversed on every operation. Only the hash table entries are checked
using the key, and if an entry is found, the corresponding file is fetched
using the value.
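The hash-table directory scheme can be sketched as below: the file name is hashed to a bucket, and only that bucket's short chain is searched instead of the whole linear list. This is a fixed-size, chained sketch; the struct layout, bucket count, and function names are all invented for illustration.

```c
#include <string.h>

/* Directory implemented as a hash table with chaining: hash the file
 * name to a bucket, then search only that bucket's chain. */
#define BUCKETS 8
#define MAXNAME 32

typedef struct entry {
    char name[MAXNAME];
    int  first_block;          /* where the file's data starts */
    struct entry *next;        /* chain within the bucket */
} entry;

static entry *table[BUCKETS];

static unsigned hash_name(const char *s)
{
    unsigned h = 5381;         /* djb2-style string hash */
    while (*s)
        h = h * 33 + (unsigned char)*s++;
    return h % BUCKETS;
}

void dir_add(entry *e)
{
    unsigned b = hash_name(e->name);
    e->next = table[b];        /* push onto the bucket's chain */
    table[b] = e;
}

int dir_lookup(const char *name)
{
    for (entry *e = table[hash_name(name)]; e; e = e->next)
        if (strcmp(e->name, name) == 0)
            return e->first_block;
    return -1;                 /* not found */
}
```

Compared with the linear list, a lookup now touches only one bucket's chain, which is why searching no longer degrades with the total number of files (until the buckets themselves grow long).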
File Systems
The file system is the part of the operating system which is responsible
for file management. It provides a mechanism to store data and to access
file contents, including data and programs. Some operating systems treat
everything as a file, for example Ubuntu.
The File system takes care of the following issues
● File Structure
We have seen various data structures in which the file can be stored.
The task of the file system is to maintain an optimal file structure.
● Recovery of Free Space
Whenever a file gets deleted from the hard disk, free space is created
on the disk. There can be many such spaces, which need to be recovered
in order to re-allocate them to other files.
● Disk Space Assignment to Files
The major concern of the file system is deciding where to store the
files on the hard disk. There are various disk scheduling algorithms,
which will be covered later in this tutorial.
A file may or may not be stored within a single block. It can be stored
in non-contiguous blocks on the disk, so we need to keep track of all
the blocks on which the parts of the file reside.
● I/O controls contain the code, known as device drivers, by which the
file system accesses the hard disk. I/O controls are also responsible
for handling interrupts.
Allocation Methods
There are various methods which can be used to allocate disk space to the
files. Selection of an appropriate allocation method will significantly affect the
performance and efficiency of the system. Allocation method provides a way
in which the disk will be utilized and the files will be accessed.
There are following methods which can be used for allocation.
1. Contiguous Allocation.
2. Extents
3. Linked Allocation
4. Clustering
5. FAT
6. Indexed Allocation
7. Linked Indexed Allocation
8. Multilevel Indexed Allocation
9. Inode
Contiguous Allocation
If the blocks are allocated to the file in such a way that all the logical blocks
of the file get the contiguous physical block in the hard disk then such
allocation scheme is known as contiguous allocation.
In the image shown below, there are three files in the directory. The
starting block and the length of each file are mentioned in the table,
which shows that contiguous blocks are assigned to each file as per its
need.
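With contiguous allocation, logical block k of a file maps directly to physical block (start + k), which is why both sequential and direct access are cheap under this scheme. A minimal sketch; the function name is invented for illustration.

```c
/* Contiguous allocation address mapping: logical block k of a file
 * that starts at `start` and occupies `length` blocks lives at
 * physical block start + k. Returns -1 if k is outside the file. */
int physical_block(int start, int length, int k)
{
    if (k < 0 || k >= length)
        return -1;             /* out of the file's range */
    return start + k;
}
```

For a file starting at block 9 with length 5, logical block 3 is physical block 12, computed with one addition and no traversal.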
Advantages
1. It is simple to implement.
Disadvantages
1. The disk will become fragmented.
Linked Allocation
Advantages
1. There is no external fragmentation.
2. Any free block can be utilized in order to satisfy a file block request.
3. A file can continue to grow as long as free blocks are available.
Disadvantages
1. Random access to a block is inefficient, since the chain must be
traversed from the beginning.
2. The pointers consume extra disk space in every block.
3. None of the pointers in the linked list must be broken, otherwise the
file will get corrupted.
In indexed allocation, the maximum file size depends on the size of a disk
block, since a single index block can hold only a limited number of
pointers. To allow large files, we have to link several index blocks
together. In linked indexed allocation, for larger files, the last entry
of the index block is a pointer to another index block. This is also
called the linked scheme.
Advantage: It removes file size limitations
● The outer level index is used to find the inner level index.
● The inner level index is used to find the desired data block.
1. Bit Vector
If a block is free, its bit is 1; otherwise it is 0. Initially all blocks
are free, so each bit in the bit-map vector contains 1.
As space allocation proceeds, the file system starts allocating blocks to
the files and setting the respective bits to 0.
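The bit-vector bookkeeping above can be sketched with a small bitmap: bit i is 1 while block i is free and is cleared when the block is allocated. The block count and function names are invented for illustration.

```c
/* Free-space bit vector: bit i is 1 when block i is free and is set
 * to 0 when the block is allocated, matching the text above. */
#define NBLOCKS 64

static unsigned char bitmap[NBLOCKS / 8] = {
    0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF  /* all blocks free */
};

int block_is_free(int i)
{
    return (bitmap[i / 8] >> (i % 8)) & 1;
}

void allocate_block(int i)
{
    bitmap[i / 8] &= (unsigned char)~(1u << (i % 8));  /* clear bit i */
}
```

One byte tracks eight blocks, so the whole map for a disk stays compact, and finding a free block is a scan for the first byte that is not 0x00.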
2. Linked List
Therefore, all the free blocks on the disk are linked together with
pointers. Whenever a block gets allocated, its previous free block is
linked to its next free block.
CHAPTER 8 SECURITY
Symmetric Encryption
There are various types of algorithms for encryption, some common algorithms include:
● Secret Key Cryptography (SKC): Here only one key is used for both encryption and
decryption. This type of encryption is also referred to as symmetric encryption.
● Public Key Cryptography (PKC): Here two keys are used. This type of encryption is
also called asymmetric encryption. One key is the public key that anyone can
access. The other key is the private key, and only the owner can access it. The
sender encrypts the information using the receiver’s public key. The receiver
decrypts the message using his/her private key. For nonrepudiation, the sender
encrypts plain text using a private key, while the receiver uses the sender’s public
key to decrypt it. Thus, the receiver knows who sent it.
● Hash Functions: These are different from SKC and PKC. They use no key and are
also called one-way encryption. Hash functions are mainly used to ensure that a
file has remained unchanged.
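The integrity-check idea behind one-way hash functions can be sketched with a simple string hash: two identical copies of a file hash to the same value, and any change to the content almost certainly changes the hash. The djb2 hash used here is illustrative only and is NOT cryptographic; real integrity checks use functions such as SHA-256.

```c
/* djb2 string hash (illustrative, non-cryptographic): identical inputs
 * always give identical hashes; a changed input changes the hash. */
unsigned long djb2(const char *data)
{
    unsigned long h = 5381;
    while (*data)
        h = h * 33 + (unsigned char)*data++;
    return h;
}
```

To verify that a file is unchanged, the stored hash of the original is compared with a freshly computed hash of the current content; a mismatch means the content was modified.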
Asymmetric Encryption
Conclusion:
Being a complex and slow technique, asymmetric encryption is generally
used for exchanging keys, while symmetric encryption, being faster, is
used for bulk data transmission.
OS User Authentication
The user authentication process is used to identify the user, that is, to
verify who the person attempting access actually is.
When a computer user wants to log into a computer system, the operating
system (OS) installed on that computer generally wants to determine or
check who the user is. This process is called user authentication.
Most methods of authenticating the computer users when they attempt or try
to log into the system are based on one of the following three principles:
● Something the user knows (e.g., a password)
● Something the user has (e.g., a card or physical key)
● Something the user is (e.g., a biometric characteristic)
Computer users who want to cause trouble on a specific computer system
have to first log into that system, which means getting past whichever
authentication method or procedure is used. Such users are called hackers.
Now let's describe briefly about all the above authentication process one by
one.
Authenticating the user using their password is an easy method and also
easy to implement.
In this method, the login name typed in is looked up in a list, and the
typed password is compared to the stored password.
If both login and password match, the login is allowed and the user is
successfully authenticated. If no match occurs, a login error is reported.
● Passwords should contain at least one digit and one special character.
● Don't use dictionary words or well-known names such as stick, mouth,
sun, albert, etc.
When one-time passwords (OTPs) are used, the user gets a book containing a
list of passwords. Each login uses the next password in the list.
Therefore, if an intruder ever discovers a password, it won't do him any
good, since a different password must be used the next time.
To authenticate the user, a plastic card is inserted by the user into a
reader associated with the terminal or computer system.
Generally, the user must not only insert the card, which serves as a
physical object to authenticate him/her, but also type in a password, to
prevent someone from using a lost or stolen card.
This method measures physical characteristics of the user that are very
hard to forge. These characteristics are called biometrics.
Basically, the typical biometric system has the following two parts:
● Enrolment
● Identification
Now, let's describe briefly about the above two parts of the biometric system.
Enrolment
In a biometric system, during enrolment, characteristics of the user are
measured and the results are digitized.
Then, significant features are extracted and stored in the record associated
with the user.
Identification
In identification, the user shows up and provides a login name or id. Now,
again, the system makes the measurement.
Now, if the new values match the ones sampled at enrolment time, then the
login is accepted, otherwise the login attempt is rejected.
User Authentication using Countermeasure
User authentication using countermeasures is a method used to make
unauthorized access much harder.
For example, a company could have a policy that employees working in the
Computer Science (CS) department are only allowed to log in from 10 A.M.
to 4 P.M., Monday to Saturday, and only from a machine in the CS
department connected to the company's Local Area Network (LAN).
Third, check for current authority. The system should not check for
permission, determine that access is permitted, and then squirrel away this
information for subsequent use. Many systems check for permission when a
file is opened, and not afterward. This means that a user who opens a file,
and keeps it open for weeks, will continue to have access, even if the owner
has long since changed the file protection or maybe even tried to delete the
file.
Fourth, give each process the least privilege possible. If an editor has only
the authority to access the file to be edited (specified when the editor is
invoked), editors with Trojan horses will not be able to do much damage.
This principle implies a fine-grained protection scheme. We will discuss such
schemes later in this chapter.
Fifth, the protection mechanism should be simple, uniform, and built into the
lowest layers of the system. Trying to retrofit security to an existing insecure
system is nearly impossible. Security, like correctness, is not an add-on
feature.
To this list, we would like to add one other principle that has been gained by
decades of hard-won experience:
If the system is elegant and simple, was designed by a single architect, and
has a few guiding principles that determine the rest, it has a chance of being
secure. If the design is a mess, with no coherence and many fundamental
concessions to ancient insecure systems in the name of backward
compatibility, it is going to be a security nightmare. You can design a system
with many features (options, user-friendliness, etc.) but a system with many
features is a big system. And a big system is potentially an insecure system.
The more code there is, the more security holes and bugs there will be. From
a security perspective, the simplest design is the best design.
A file descriptor is a number that uniquely identifies an open
file in a computer's operating system. It describes a data
resource, and how that resource may be accessed.
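File descriptors can be shown in action with the POSIX open(), write(), and read() calls: open() returns a small integer identifying the open file, and the later calls take that number, not the file name. A minimal sketch with error handling mostly omitted; the function name and the temporary path used in the example are invented for illustration.

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Write `msg` to the file at `path`, then read it back through a fresh
 * file descriptor into `out`. Returns the number of bytes read, or -1
 * on error. Shows that all access goes through the descriptor number. */
int roundtrip(const char *path, const char *msg, char *out, int cap)
{
    int fd = open(path, O_CREAT | O_TRUNC | O_WRONLY, 0644);
    if (fd < 0)
        return -1;
    if (write(fd, msg, strlen(msg)) < 0) {  /* write via the descriptor */
        close(fd);
        return -1;
    }
    close(fd);

    fd = open(path, O_RDONLY);              /* a fresh descriptor */
    if (fd < 0)
        return -1;
    int n = (int)read(fd, out, (size_t)(cap - 1));
    close(fd);
    if (n < 0)
        return -1;
    out[n] = '\0';
    return n;
}
```

Note that the second open() may return a different integer than the first: the descriptor identifies an open-file session, not the file itself.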
1. Multiprogramming – A computer running more than one program at a time
(like running Excel and Firefox simultaneously).
2. Multiprocessing – A computer using more than one CPU at a time.
3. Multitasking – Tasks sharing a common resource (like 1 CPU).
4. Multithreading is an extension of multitasking.
5. Concurrent processing is a computing model in which multiple
processors execute instructions simultaneously for better performance.
6. Parallel processing- a mode of operation in which a process is split into parts,
which are executed simultaneously on different processors attached to the
same computer.
The kernel is the central module of an operating system (OS). It is the part of the operating system that
loads first, and it remains in main memory.