Question 1: State the Readers/Writers problem and write its semaphore-based solution. Also describe the algorithm. Can the producer/consumer problem be considered as a special case of the Reader/Writer problem with a single writer (the producer) and a single reader (the consumer)? Answer: The readers/writers problem is one of the classic synchronization problems. Like the dining philosophers, it is often used to compare and contrast synchronization mechanisms, and it is also an eminently practical problem. A common paradigm in concurrent applications is the isolation of shared data, such as a variable, buffer or document, and the control of access to that data. The problem has two types of clients accessing the shared data. The first type, referred to as readers, only wants to read the shared data. The second type, referred to as writers, may want to modify it. There is also a designated central data server or controller, which enforces exclusive-write semantics: if a writer is active, then no other writer or reader can be active. The server can support clients that wish to both read and write. The readers/writers problem is useful for modelling processes which are competing for a limited shared resource. Let us understand it with the help of a practical example. An airline reservation system consists of a large database with many processes that read and write the data. Reading information from the database causes no problem, since no data is changed. The problem lies in writing information to the database: if no constraints are put on access, data may change at any moment, and by the time a reading process displays the result of a request for information to the user, the actual data in the database may have changed.
What if, for instance, a process reads the number of available seats on a flight, finds a value of one, and reports it to the customer? Before the customer has a chance to make the reservation, another process makes a reservation for another customer, changing the number of available seats to zero.
Solution using semaphores: Semaphores can be used to restrict access to the database under certain conditions. In this example, semaphores are used to prevent any writing process from changing information in the database while other processes are reading from it.
Algorithm:

Semaphore mutex = 1;     // controls access to reader_count
Semaphore db = 1;        // controls access to the database
int reader_count = 0;    // number of processes reading the database

reader()
{
    while (TRUE) {                        // loop forever
        down(&mutex);                     // gain access to reader_count
        reader_count = reader_count + 1;  // increment reader_count
        if (reader_count == 1)
            down(&db);                    // the first reader locks out writers
        up(&mutex);                       // allow other processes to access reader_count
        read_data();                      // read from the database (critical)
        down(&mutex);                     // gain access to reader_count
        reader_count = reader_count - 1;  // decrement reader_count
        if (reader_count == 0)
            up(&db);                      // if there are no more processes reading
                                          // from the database, allow writing processes
                                          // to access the data
        up(&mutex);                       // allow other processes to access reader_count
        use_data();                       // use the data read from the database (non-critical)
    }
}

writer()
{
    while (TRUE) {                        // loop forever
        create_data();                    // create data to enter into the database (non-critical)
        down(&db);                        // gain exclusive access to the database
        write_data();                     // write information to the database (critical)
        up(&db);                          // release exclusive access to the database
    }
}
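The entry/exit bookkeeping of the reader protocol can be modelled in a few lines of C. This is a single-threaded sketch for illustration only: the semaphores are replaced by plain counters (`db_free`, `reader_count`), and the function names are hypothetical; real code would use actual semaphore operations around each step.

```c
/* Single-threaded model of the readers/writers entry/exit protocol.
 * Semaphores are modeled as counters; all names here are illustrative. */
#include <assert.h>

typedef struct {
    int db_free;       /* models the binary semaphore `db` (1 = free) */
    int reader_count;  /* readers currently inside the database       */
} rw_state;

void rw_init(rw_state *s) { s->db_free = 1; s->reader_count = 0; }

/* First reader in takes `db`; later readers just bump the count. */
void reader_enter(rw_state *s) {
    s->reader_count++;
    if (s->reader_count == 1) { assert(s->db_free); s->db_free = 0; }
}

/* Last reader out releases `db` so a writer may proceed. */
void reader_exit(rw_state *s) {
    s->reader_count--;
    if (s->reader_count == 0) s->db_free = 1;
}

/* A writer may be active only when no reader holds `db`. */
int writer_may_enter(const rw_state *s) { return s->db_free; }
```

The model makes the key invariant visible: a writer can enter only while `reader_count` is zero, exactly as enforced by the first-in/last-out handling of `db`.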
The producer/consumer problem can be considered a special case of the Reader/Writer problem with a single writer (the producer) and a single reader (the consumer). Looked at properly, the two problems have a similar structure: the producer repeatedly generates the next item, goes to sleep if the buffer is full, and puts the item in the buffer, while the consumer removes items; the bounded buffer plays the role of the shared database. We can therefore treat the producer/consumer problem as a special case of the Reader/Writer problem.
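As a sketch of the producer/consumer side, the bounded buffer can be modelled like this. The counters `empty` and `full` stand in for the counting semaphores, and the step functions return 0 instead of sleeping; all names and the buffer size are illustrative.

```c
/* Minimal bounded-buffer sketch of the producer/consumer problem.
 * Counters `empty` and `full` model the counting semaphores; real
 * concurrent code would use down()/up() (e.g. sem_wait/sem_post). */
#include <assert.h>

#define N 4                      /* buffer capacity */

typedef struct {
    int buf[N];
    int in, out;                 /* next slot to fill / drain  */
    int empty, full;             /* modeled counting semaphores */
} bounded_buffer;

void bb_init(bounded_buffer *b) {
    b->in = b->out = 0; b->empty = N; b->full = 0;
}

/* Producer step: returns 0 instead of sleeping when the buffer is full. */
int bb_produce(bounded_buffer *b, int item) {
    if (b->empty == 0) return 0;     /* would block: buffer full */
    b->empty--;
    b->buf[b->in] = item;            /* put item in buffer */
    b->in = (b->in + 1) % N;
    b->full++;
    return 1;
}

/* Consumer step: returns 0 instead of sleeping when the buffer is empty. */
int bb_consume(bounded_buffer *b, int *item) {
    if (b->full == 0) return 0;      /* would block: buffer empty */
    b->full--;
    *item = b->buf[b->out];          /* remove item from buffer */
    b->out = (b->out + 1) % N;
    b->empty++;
    return 1;
}
```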
********************
Question 2:
An unblocked process may acquire all the needed resources and execute. It will then release all its resources and remain dormant thereafter. The newly released resources may wake up some previously blocked processes. Continue the above steps as long as possible. If any blocked processes remain, they are deadlocked.
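The detection procedure described above can be sketched as follows. The matrix sizes `NP`/`NR` and all names are illustrative; the function marks every process whose current request can be satisfied, "runs" it, reclaims its allocation, and repeats until no progress is possible.

```c
/* Deadlock detection sketch: repeatedly find an unmarked process whose
 * request can be met by the available vector, let it finish, and reclaim
 * its allocation. Returns the number of deadlocked processes. */
#include <assert.h>

#define NP 3   /* processes       */
#define NR 2   /* resource types  */

int detect_deadlock(int alloc[NP][NR], int request[NP][NR], int avail[NR],
                    int deadlocked[NP]) {
    int finished[NP] = {0};
    int progress = 1;
    while (progress) {
        progress = 0;
        for (int p = 0; p < NP; p++) {
            if (finished[p]) continue;
            int can_run = 1;
            for (int r = 0; r < NR; r++)
                if (request[p][r] > avail[r]) { can_run = 0; break; }
            if (can_run) {           /* process acquires, runs, releases */
                for (int r = 0; r < NR; r++) avail[r] += alloc[p][r];
                finished[p] = 1;
                progress = 1;
            }
        }
    }
    int n = 0;
    for (int p = 0; p < NP; p++) {
        deadlocked[p] = !finished[p];
        if (deadlocked[p]) n++;
    }
    return n;
}
```

With a circular wait (P0 holds resource 0 and wants resource 1, P1 holds resource 1 and wants resource 0, nothing available), P0 and P1 are reported deadlocked while P2, which requests nothing, finishes.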
When a victim process must be chosen, the factors considered include:
1. The priority of the process.
2. CPU time used and expected remaining time before completion.
3. The number and type of resources being used (can they be pre-empted easily?).
4. The number of resources still needed for completion.
5. The number of processes that would need to be terminated.
6. Whether the processes are interactive or batch.
Recovery by checkpointing and rollback: Some systems facilitate deadlock recovery by implementing checkpointing and rollback. Checkpointing is saving enough state of a process so that the process can be restarted at the point in the computation where the checkpoint was taken. Auto-saving file edits is a form of checkpointing. Checkpointing costs depend on the underlying algorithm: very simple algorithms can be checkpointed with a few words of data, while more complicated processes may have to save all of their state and memory. If a deadlock is detected, one or more processes are restarted from a checkpoint. Restarting a process from a checkpoint is called rollback. It is done with the expectation that the resource requests will not interleave again to produce deadlock. Deadlock recovery is generally used when deadlocks are rare, and the cost of recovery (process termination or rollback) is low. *************
Question 3: What is thrashing? How does it happen? What are the two mechanisms to prevent thrashing? Describe them. Answer :-
Fig :- CPU utilization versus degree of multiprogramming (throughput improves up to an optimum, then collapses as thrashing sets in).
The figure shows that there is a degree of multiprogramming that is optimal for system performance. CPU utilization reaches a maximum before a swift decline as the degree of multiprogramming increases and thrashing occurs in the overcommitted system. This indicates that controlling the load on the system is important to avoid thrashing. The selection of a replacement policy to implement virtual memory also plays an important part in eliminating the potential for thrashing. A policy based on local replacement tends to limit the effect of thrashing. A replacement policy based on global replacement is more likely to cause thrashing: since all pages of memory are available to all processes, a memory-intensive process may occupy a large portion of memory, making other processes susceptible to page faults and resulting in a system that thrashes. To prevent thrashing, there are two techniques:
1. Working-set model.
2. Page-fault frequency.
1. Working-Set Model:
Principle of locality: pages are not accessed randomly. At each instant of execution a program tends to use only a small set of pages; as the pages in the set change, the program is said to move from one phase to another. The principle of locality states that most references will be to the current small set of pages in use.
Working-set definition: it is based on the assumption of locality. The idea is to examine the most recent page references; if a page is in active use, it will be in the working set, and if it is no longer being used, it will drop from the working set. The set of pages currently needed by a process is its working set.
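The working set can be computed directly from a page-reference string. A minimal sketch, where the window size `delta` and all names are assumptions rather than anything from the text:

```c
/* Working set W(t, delta): the number of distinct pages referenced in
 * the last `delta` references ending at index t. */
#include <assert.h>

int working_set_size(const int *refs, int t, int delta) {
    int start = t - delta + 1;
    if (start < 0) start = 0;     /* window clipped at the start of the string */
    int pages[64];                /* assumes < 64 distinct pages per window */
    int n = 0;
    for (int i = start; i <= t; i++) {
        int seen = 0;
        for (int j = 0; j < n; j++)
            if (pages[j] == refs[i]) { seen = 1; break; }
        if (!seen) pages[n++] = refs[i];   /* new page enters the working set */
    }
    return n;
}
```

For the reference string 1, 2, 1, 3, 1, 2 with a window of 3 ending at the fourth reference, the working set is {2, 1, 3}, size 3; as the program moves into a phase that reuses fewer pages, the size shrinks.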
Fig :- Page-fault frequency (fault rate plotted against number of allocated frames, with an upper and a lower bound).
2. Page-Fault Frequency: establish an acceptable page-fault rate. If the actual rate is too low, the process loses a frame; if the actual rate is too high, the process gains a frame.
***************
Question 4: What are the factors important for selection of a file organisation? Discuss three important file organisation mechanisms and their relative performance. Answer :-
Fig :- Two-level directory (the master file directory lists the users, and each user's file directory (UFD) holds that user's files, e.g. Abc, Xyz, Hello, Data, Test, Hello1, Hello2, Abc1).
Thus different users may have the same file names, but within each UFD they must be unique. This resolves the name-collision problem to some extent, but this directory structure isolates one user from another, which is not always desirable when users need to share or co-operate on some task.
3. Tree-Structured Directory:
The two-level directory structure is like a 2-level tree. To generalise, we can extend the directory structure to a tree of arbitrary height. The user can then create his or her own directories and sub-directories and can also organise files. One bit in each directory entry defines the entry as a file (0) or as a sub-directory (1).
Fig :- Tree-structured directory.
The tree has a root directory, and every file in it has a unique path name (the path from the root, through all subdirectories, to the specified file). The pathname prefixes the filename and helps to reach the required file from a base directory. Pathnames can be of two types, absolute path names or relative path names, depending on the base directory. An absolute path name begins at the root and follows a path down to the particular file; it is a full pathname and uses the root as its base directory. A relative path name defines the path from the current directory. For example, if we assume that the current directory is /Hello/Hello2, then the file f4.doc has the absolute pathname /Hello/Hello2/Test2/f4.doc and the relative pathname Test2/f4.doc. The pathname is used to simplify the searching of a file in a tree-structured directory hierarchy. ******
Question 5: How do you differentiate between pre-emptive and non-pre-emptive scheduling? Briefly describe Round Robin and Shortest Process Next scheduling, with an example for each. Answer:-
Consider three processes, all arriving at time 0:

Process    Processing time (ms)
P1         24
P2         3
P3         3

If we use a time quantum of 4 milliseconds, then process P1 gets the first 4 milliseconds. Since it requires another 20 milliseconds, it is pre-empted after the first time quantum and the CPU is given to the next process in the queue, process P2. Since process P2 does not need 4 milliseconds, it quits before its time quantum expires. The CPU is then given to the next process, process P3. Once each process has received one time quantum, the CPU is returned to process P1 for an additional time quantum, and so on. The Gantt chart will be:

| P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1 |
0    4    7    10   14   18   22   26   30

Average turnaround time = (30 + 7 + 10) / 3 = 47 / 3 = 15.66 ms
Average waiting time    = (6 + 4 + 7) / 3 = 17 / 3 = 5.66 ms
Throughput              = 3 / 30 = 0.1 processes per ms
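The Gantt chart and averages above can be checked with a small round-robin simulation; the function and variable names here are illustrative, not from the text.

```c
/* Round-robin simulation: bursts {24, 3, 3}, quantum 4, all processes
 * arriving at time 0. Fills completion[] with each finish time. */
#include <assert.h>

void round_robin(const int *burst, int n, int quantum, int *completion) {
    int rem[16], queue[256], head = 0, tail = 0, time = 0;
    for (int i = 0; i < n; i++) { rem[i] = burst[i]; queue[tail++] = i; }
    while (head < tail) {
        int p = queue[head++];
        int run = rem[p] < quantum ? rem[p] : quantum;
        time += run;
        rem[p] -= run;
        if (rem[p] > 0) queue[tail++] = p;   /* pre-empted: back of the queue */
        else completion[p] = time;           /* finished */
    }
}
```

Running it on the example yields completion times 30, 7 and 10, so the turnaround times sum to 47 (average 15.66) and the waiting times, turnaround minus burst, sum to 17 (average 5.66), matching the figures above.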
*********
Question 6: i) What are the main differences between capability lists and access lists? Explain through examples. Answer: The security policy outlines several high-level points: how the data is accessed, the amount of security required, and what the steps are when these requirements are not met. In the abstract, protection can be modelled as a matrix of domains versus objects; storing the full matrix is impractical. Two methods that are practical, however, are storing the matrix by rows or by columns, and then storing only the non-empty elements.
1. Access Control List:-
It consists of associating with each object an (ordered) list containing all the domains that may access the object, and how. This list is called the access control list or ACL. Consider an example of three processes, each belonging to a different domain, A, B and C, and three files F1, F2 and F3. For simplicity, we will assume that each domain corresponds to exactly one user, in this case users A, B and C. Often in the security literature the users are called subjects or principals, to contrast them with the things they own, the objects, such as files. Each file has an ACL associated with it. File F1 has entries in its ACL (separated by semicolons); the first entry says that any process owned by user A may read and write the file, and all accesses by other users are forbidden. The rights are granted by user, not by process: as far as the protection system goes, any process owned by user A can read and write file F1, and it does not matter whether there is one such process or a hundred of them. It is the owner, not the process ID, that matters. File F2 has entries for A, B and C in its ACL: all three can read the file, and in addition B can also write it. No other accesses are allowed. File F3 is apparently an executable program, since B and C can both read and execute it; B can also write it. Many systems support the concept of a group of users. Groups have names and can be included in ACLs. In one variation on the semantics, each process has a user ID (UID) and group ID (GID), and an ACL contains entries of the form
UID1, GID1: rights1; UID2, GID2: rights2; ...
Under these conditions, when a request is made to access an object, a check is made using the caller's UID and GID. If they are present in the ACL, the rights listed are available; if the (UID, GID) combination is not in the list, the access is not permitted.
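The (UID, GID) check described above can be sketched as a scan over the ACL entries attached to an object; the rights bits and all names here are illustrative.

```c
/* ACL check sketch: each object carries a list of (uid, gid, rights)
 * entries; a request succeeds only if the caller's (uid, gid) appears
 * in the list with the requested rights. */
#include <assert.h>

#define R_READ  1
#define R_WRITE 2
#define R_EXEC  4

typedef struct { int uid, gid, rights; } acl_entry;

int acl_allows(const acl_entry *acl, int n, int uid, int gid, int want) {
    for (int i = 0; i < n; i++)
        if (acl[i].uid == uid && acl[i].gid == gid)
            return (acl[i].rights & want) == want;
    return 0;   /* (uid, gid) not listed: access denied */
}
```

Modelling file F2 as readable by three users with write access only for the second reproduces the behaviour described in the text: the second user may write, the first may not, and an unlisted (UID, GID) pair is refused outright.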
2. Capability List:-
The other way of slicing up the matrix is by rows. When this method is used, associated with each process is a list of objects that may be accessed, along with an indication of which operations are permitted on each; in other words, its domain.
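Such a per-process list can be sketched as follows; the rights bits and names are illustrative.

```c
/* Capability-list sketch (the matrix sliced by rows): each process
 * carries its own list of (object id, rights bitmap) capabilities. */
#include <assert.h>

#define C_READ  1
#define C_WRITE 2
#define C_EXEC  4

typedef struct { int object; int rights; } capability;

/* A process may perform `want` on `object` only if one of its own
 * capabilities grants those rights for that object. */
int cap_allows(const capability *clist, int n, int object, int want) {
    for (int i = 0; i < n; i++)
        if (clist[i].object == object)
            return (clist[i].rights & want) == want;
    return 0;   /* no capability for the object: access denied */
}
```

The contrast with the ACL sketch is the lookup key: an ACL check starts from the object and searches for the caller, while a capability check starts from the caller's own list and searches for the object.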
Fig :- When capabilities are used, each process has a capability list.
This list is called a capability list or C-list, and the individual items on it are called capabilities (Dennis and Van Horn, 1966; Fabry, 1974). A set of three processes and their capability lists is shown in the figure. Each capability grants its owner certain rights on a certain object. Here, for example, the process owned by user A can read files F1 and F2. Usually a capability consists of a file (or, more generally, an object) identifier and a bitmap for the various rights. In a UNIX-like system, the file identifier would probably be the i-node number. Capability lists are themselves objects and may be pointed to from other capability lists, thus facilitating the sharing of sub-domains. ***********
Question 6: ii) What is the kernel of an operating system? What functions are normally performed by the kernel? Give several reasons why it is effective to design a microkernel. Answer: The kernel is a bridge between applications and the actual data processing done at the hardware level. The operating system, referred to in UNIX as the kernel, interacts directly with the hardware and provides services to the user programs. User programs interact with the kernel through a set of standard system calls. These system calls request services to be provided by the kernel. Such services include accessing a file (opening, closing, reading, writing, linking or executing it) and starting or terminating a process.
In a multi-user, multi-tasking operating system such as UNIX, many users can be logged into the system simultaneously, each running many programs. It is the kernel's job to keep each process and user separate and to regulate access to system hardware, including the CPU, memory, disks and other I/O devices. Management of processes by the kernel: for each new process created, the kernel sets up an address space in memory. This address space consists of the following logical segments:
Text - contains the program's instructions.
Data - contains initialised program variables.
Bss - contains uninitialised program variables.
Stack - a dynamically growable segment; it contains variables allocated locally and parameters passed to functions in the program.
Each process has two stacks, a user stack and a kernel stack, used when the process executes in user or kernel mode respectively.
Mode switching:
Kernel mode: processes carrying out kernel instructions are said to be running in kernel mode. A user process can be in kernel mode while making a system call, while generating an exception or fault, or in case of an interrupt. Essentially, a mode switch occurs and control is transferred to the kernel when a user program makes a system call; the kernel then executes the instructions on the user's behalf.
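The logical segments described above can be illustrated by where ordinary C objects are conventionally placed. The variable names are hypothetical, and exact placement is ultimately up to the toolchain, but this is the conventional layout.

```c
/* Conventional placement of C objects in the logical segments. */
#include <assert.h>

int data_var = 42;      /* data: initialised global variable          */
int bss_var;            /* bss: uninitialised global, zeroed at start */

int add(int a, int b) { /* the machine code of add() lives in text    */
    int local = a + b;  /* stack: parameters and local variables      */
    return local;
}
```

The zeroing of `bss_var` is visible at run time: the C standard guarantees uninitialised globals start at zero, which is exactly what loading the bss segment as zero-filled memory provides.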
Data: holds the data segments of running processes.
Stack: holds the stack segments of running processes.
Shared memory: this is an area of memory which is available to running programs if they need it. Consider a common use of shared memory: assume we have a program which has been compiled against a shared library, and that five instances of this program are running simultaneously. At run time, the code of the shared library is made resident in the shared area. This way, only a single copy of the library needs to be in memory, resulting in increased efficiency and major cost savings.
Buffer cache: all reads and writes to the file system are cached here first. This is sometimes why a program that is writing to a file doesn't seem to work [nothing is written to the file yet]: the data is still in the buffer cache.
A trend in modern operating systems is to take the idea of moving code up into higher layers even further and remove as much as possible from kernel mode, leaving a minimal kernel with only the minimal characteristics of a kernel (a microkernel).

Fig :- Client processes and the process, file and memory servers all run in user mode; only the minimal kernel runs in kernel mode. Clients obtain services by sending messages to the server processes.
**********