
Virtual Memory Management

• Background
• Demand Paging
• Copy-on-Write
• Page Replacement
• Allocation of Frames
• Thrashing
1. Background
• Virtual memory – separates user logical memory from physical memory.
• Only part of the program needs to be in memory for execution
• Logical address space can be much larger than physical address space

Figure: Virtual Memory That is Larger Than Physical Memory


1. Background Cntd…

• The virtual address space of a process refers to the logical (or virtual) view of how a process is stored in memory
• The heap grows upward in memory as it is used for dynamic memory allocation
• The stack grows downward in memory through successive function calls
• The large blank space (or hole) between the heap and the stack is part of the virtual address space
• Virtual address spaces that include holes are known as sparse address spaces

Figure: Virtual-address Space
1. Background Cntd…

• Virtual memory allows files and memory to be shared by two or more processes through page sharing
• Benefits:
• System libraries can be shared by several processes
• Enables processes to share memory
• Allows pages to be shared during process creation with the fork() system call

Figure: Shared Library Using Virtual Memory
2. Demand Paging
• Bring a page into memory only when it is needed
– Less I/O needed
– Less memory needed
– Faster response
– More users
• Page is needed ⇒ reference to it
– invalid reference ⇒ abort
– not-in-memory ⇒ bring it into memory
• Lazy swapper – never swaps a page into memory unless that page will be needed
– A swapper that deals with pages is a pager
• H/W support required:
– Page table with a valid–invalid bit or a special value of the protection bits
– Secondary memory, which holds those pages that are not present in memory

Figure: Transfer of a Paged Memory to Contiguous Disk Space
2. Demand Paging
Valid-Invalid Bit
• With each page-table entry a valid–invalid bit is associated
– v ⇒ in memory
– i ⇒ not in memory
• Initially the valid–invalid bit is set to i on all entries
• During address translation, if the valid–invalid bit in the page-table entry is i ⇒ page-fault trap to the OS

Figure: Page Table When Some Pages Are Not in Main Memory
2. Demand Paging Cntd…

Procedure for handling the page fault:
1. The OS looks at another table to decide:
– Invalid reference ⇒ abort
– Just not in memory ⇒ continue
2. Get an empty frame
3. Swap the page into the frame
4. Reset the tables
5. Set the validation bit = v
6. Restart the instruction that caused the page fault

Figure: Steps in Handling a Page Fault
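As a rough illustration, the steps above (for the not-in-memory case) can be sketched in C; the helper functions and the page-table layout here are hypothetical, not a real OS interface:

#include <stdbool.h>

#define NUM_PAGES 64

struct pte { bool valid; int frame; };          /* simplified page-table entry */
struct pte page_table[NUM_PAGES];

int  allocate_free_frame(void);                 /* assumed OS services */
void read_page_from_disk(int page, int frame);
void restart_faulting_instruction(void);

void handle_page_fault(int page)
{
    int frame = allocate_free_frame();          /* 2. get an empty frame      */
    read_page_from_disk(page, frame);           /* 3. swap the page into it   */
    page_table[page].frame = frame;             /* 4. reset the page table    */
    page_table[page].valid = true;              /* 5. set the valid bit to v  */
    restart_faulting_instruction();             /* 6. restart the instruction */
}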
2. Demand Paging Cntd…
Performance of Demand Paging
• Let p be the probability of a page fault (0 ≤ p ≤ 1.0):
if p = 0, there are no page faults; if p = 1, every reference is a fault
• Effective Access Time: EAT = (1 – p) x ma + p x page-fault time, where ma is the memory-access time
(page-fault time = page-fault overhead + [swap page out] + swap page in + restart overhead)
• Page fault causes
1. Trap to the operating system
2. Save the user registers and process state
3. Determine that the interrupt was a page fault
4. Check that the page reference was legal and determine the location of the page on
the disk.
5. Issue a read from the disk to a free frame:
Wait in a queue for this device until the read request is serviced.
Wait for the device seek and /or latency time.
Begin the transfer of the page to a free frame.
6. While waiting, allocate the CPU to some other user (CPU scheduling, optional).
7. Receive an interrupt from the disk I/O subsystem (I/O completed).
8. Save the registers and process state for the other user (if step 6 is executed).
9. Determine that the interrupt was from the disk.
10. Correct the page table and other tables to show that the desired page is now in memory.
11. Wait for the CPU to be allocated to this process again.
12. Restore the user registers, process state, and new page table, and then resume the
interrupted instruction.
2. Demand Paging Cntd…

Demand Paging Example


• 3 major components of the page-fault service time:
1. Service the page-fault interrupt (careful coding means just several hundred instructions are needed)
2. Read in the page (lots of time)
3. Restart the process (again, just a small amount of time)
• Memory access time = 200 nanoseconds
• Average page-fault service time = 8 milliseconds
• EAT = (1 – p) x 200 + p x (8 milliseconds)
= (1 – p) x 200 + p x 8,000,000
= 200 + p x 7,999,800 nanoseconds
• If one access out of 1,000 causes a page fault, then EAT = 8.2 microseconds.
This is a slowdown by a factor of 40!
• If we want performance degradation to be less than 10 percent, we need
220 > 200 + 7,999,800 x p
20 > 7,999,800 x p
p < 0.0000025
i.e., fewer than one page fault in every 400,000 memory accesses
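As a quick check of this arithmetic, a small C sketch using the same assumed figures (200 ns memory access, 8 ms page-fault service time):

#include <stdio.h>

/* Effective access time in nanoseconds for the example above. */
static double eat_ns(double p)
{
    return (1.0 - p) * 200.0 + p * 8000000.0;
}

int main(void)
{
    printf("p = 0.001     -> EAT = %.1f ns (about 8.2 microseconds)\n", eat_ns(0.001));
    printf("p = 0.0000025 -> EAT = %.1f ns (about 10%% degradation)\n", eat_ns(0.0000025));
    return 0;
}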



3. Copy-on-Write
• Copy-on-Write (COW) allows both parent and child processes to initially share
the same pages in memory
If either process modifies a shared page, only then is the page copied
• COW allows more efficient process creation as only modified pages are copied
• Free pages are allocated from a pool of zeroed-out pages
• Example:
Figure: Before Process 1 Modifies Page C



3. Copy-on-Write Cntd…

Figure: After Process 1 Modifies Page C
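A small user-space C sketch of the effect, assuming a POSIX system where fork() shares pages copy-on-write (e.g. Linux): the child's write triggers a private copy, so the parent's page is unchanged.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    char *page = malloc(4096);           /* one page of data                  */
    strcpy(page, "original");

    pid_t pid = fork();                  /* parent and child now share pages  */
    if (pid == 0) {                      /* child                             */
        strcpy(page, "modified");        /* write fault -> page gets copied   */
        printf("child sees:  %s\n", page);
        _exit(0);
    }
    wait(NULL);                          /* parent                            */
    printf("parent sees: %s\n", page);   /* still "original"                  */
    free(page);
    return 0;
}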



4. Page Replacement
Figure: Need for Page Replacement
4. Page Replacement Cntd…

4.1 Basic Page Replacement
1. Find the location of the desired page on disk
2. If there is a free frame, use it; otherwise, use a page-replacement algorithm to select a victim frame, write it to the disk, and change the page and frame tables
3. Bring the desired page into the (newly) free frame; update the page and frame tables
4. Restart the process
• If no frames are free, two page transfers (one out and one in) are required
• This overhead can be reduced by using a modify bit (dirty bit)
• The modify bit for a page is set by the hardware whenever any word or byte in the page is written; only a victim whose modify bit is set must be written to the disk before its frame is reused
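A sketch of how the modify bit cuts the replacement cost from two transfers to one; the frame structure and disk helpers here are hypothetical:

#include <stdbool.h>

struct frame { int page; bool dirty; };               /* simplified frame-table entry */

void write_frame_to_disk(struct frame *f);            /* assumed helpers */
void read_page_into_frame(int page, struct frame *f);

void replace_victim(struct frame *victim, int new_page)
{
    if (victim->dirty)                       /* page changed since it was loaded */
        write_frame_to_disk(victim);         /* page-out transfer needed          */
    /* otherwise the copy on disk is still valid, so the page-out is skipped      */

    read_page_into_frame(new_page, victim);  /* page-in transfer                  */
    victim->page  = new_page;
    victim->dirty = false;
}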
4. Page Replacement Cntd…

• As the number of available frames increases, the number of page faults generally decreases

Figure: Graph of Page Faults Versus the Number of Frames
4. Page Replacement Cntd…
4.2 First-In-First-Out (FIFO) Algorithm
• Replace the page that has been in memory the longest
• Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
• 3 frames (3 pages can be in memory at a time per process):

Frame 1: 1 4 5
Frame 2: 2 1 3      → 9 page faults
Frame 3: 3 2 4

• 4 frames:

Frame 1: 1 5 4
Frame 2: 2 1 5      → 10 page faults
Frame 3: 3 2
Frame 4: 4 3

• Belady's Anomaly: in some algorithms, more frames result in more page faults
(The figure in the text shows FIFO producing 15 faults in total on its longer reference string.)
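A small C sketch that counts FIFO faults for the reference string above; it reproduces the 9-fault and 10-fault results that illustrate Belady's anomaly:

#include <stdio.h>
#include <stdbool.h>

static int count_fifo_faults(const int *refs, int n, int nframes)
{
    int frames[16], next = 0, faults = 0;
    for (int j = 0; j < nframes; j++) frames[j] = -1;     /* all frames empty */

    for (int i = 0; i < n; i++) {
        bool hit = false;
        for (int j = 0; j < nframes; j++)
            if (frames[j] == refs[i]) { hit = true; break; }
        if (!hit) {                              /* fault: evict the oldest page */
            frames[next] = refs[i];
            next = (next + 1) % nframes;
            faults++;
        }
    }
    return faults;
}

int main(void)
{
    int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    printf("3 frames: %d faults\n", count_fifo_faults(refs, 12, 3));   /* 9  */
    printf("4 frames: %d faults\n", count_fifo_faults(refs, 12, 4));   /* 10 */
    return 0;
}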
4. Page Replacement Cntd…

Figure: FIFO Illustrating Belady's Anomaly – page-fault curve for FIFO replacement on a reference string
4. Page Replacement Cntd…

4.3 Optimal Algorithm (OPT or MIN)


• Replace page that will not be used for longest period of time
• Never suffers from Belady's anomaly
• Difficult to implement, because it requires future knowledge of the reference string

• 4-frame example, reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5:

Frame 1: 1 4
Frame 2: 2          → 6 page faults
Frame 3: 3
Frame 4: 4 5
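A C sketch of OPT for the same reference string; on each fault it evicts the resident page whose next use lies farthest in the future (or that is never used again), reproducing the 6 faults above:

#include <stdio.h>

static int next_use(const int *refs, int n, int from, int page)
{
    for (int i = from; i < n; i++)
        if (refs[i] == page) return i;
    return n;                                    /* page is never used again */
}

static int count_opt_faults(const int *refs, int n, int nframes)
{
    int frames[16], used = 0, faults = 0;

    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (frames[j] == refs[i]) { hit = 1; break; }
        if (hit) continue;

        faults++;
        if (used < nframes) { frames[used++] = refs[i]; continue; }

        int victim = 0, farthest = -1;           /* evict the farthest next use */
        for (int j = 0; j < used; j++) {
            int nu = next_use(refs, n, i + 1, frames[j]);
            if (nu > farthest) { farthest = nu; victim = j; }
        }
        frames[victim] = refs[i];
    }
    return faults;
}

int main(void)
{
    int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    printf("4 frames: %d faults\n", count_opt_faults(refs, 12, 4));   /* 6 */
    return 0;
}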
4. Page Replacement Cntd…

4.4 Least Recently Used (LRU) Page Replacement Algorithm
• Replace the page that has not been used for the longest period of time
• Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 (4 frames):

Frame 1: 1 1 1 1 5
Frame 2: 2 2 2 2 2      → 8 page faults
Frame 3: 3 5 5 4 4
Frame 4: 4 4 3 3 3

(The figure in the text shows LRU producing 12 faults in total on its longer reference string.)
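A C sketch that counts LRU faults for the reference string above by time-stamping each resident page's most recent use and evicting the page with the oldest stamp:

#include <stdio.h>

static int count_lru_faults(const int *refs, int n, int nframes)
{
    int page[16], last_use[16], used = 0, faults = 0;

    for (int t = 0; t < n; t++) {
        int hit = -1;
        for (int j = 0; j < used; j++)
            if (page[j] == refs[t]) { hit = j; break; }

        if (hit >= 0) { last_use[hit] = t; continue; }    /* update recency     */

        faults++;
        if (used < nframes) {                             /* fill a free frame  */
            page[used] = refs[t]; last_use[used] = t; used++;
        } else {                                          /* evict the LRU page */
            int victim = 0;
            for (int j = 1; j < used; j++)
                if (last_use[j] < last_use[victim]) victim = j;
            page[victim] = refs[t]; last_use[victim] = t;
        }
    }
    return faults;
}

int main(void)
{
    int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    printf("4 frames: %d faults\n", count_lru_faults(refs, 12, 4));   /* 8 */
    return 0;
}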


4. Page Replacement Cntd…

LRU page-replacement implementations
1. Using a counter
– Every page-table entry is associated with a time-of-use field, and a counter (logical clock) is added to the CPU; every time a page is referenced through an entry, the clock is copied into that entry's time-of-use field
– When a page needs to be replaced, look for the entry with the smallest time value
2. Using a stack
– Keep a stack of page numbers in a doubly linked form
– If a page is referenced, move it to the top of the stack (requires 6 pointers to be changed)
– No search is needed for replacement: the least recently used page is at the bottom
Both approaches require H/W support; a S/W implementation through interrupts would slow every memory reference by a factor of at least ten.
4. Page Replacement Cntd…

4.5 LRU Approximation Page Replacement Algorithms


• Each entry in the page table is associated with a reference bit, which is set by the hardware whenever that page is referenced
1. Additional-Reference-Bits Algorithm
– Replace a page whose reference bit is 0 (if one exists)
– To know the order of use, an 8-bit shift register per page holds the history of page use for the last eight time periods (a register value of 11000100 indicates more recent use than 01110111)
2. Second-Chance Algorithm
– If the page to be replaced has reference bit = 1, then:
• set the reference bit to 0 and leave the page in memory
• consider the next page (in clock order), subject to the same rules
4.5.3 Enhanced Second-Chance Algorithm
• Enhance the second-chance algorithm by considering the reference bit and the modify bit as an ordered pair:
1. (0, 0) neither recently used nor modified – best page to replace
2. (0, 1) not recently used but modified – not quite as good, because the page will need to be written out before replacement
3. (1, 0) recently used but clean – probably will be used again soon
4. (1, 1) recently used and modified – probably will be used again soon, and the page will need to be written out to disk before it can be replaced
4. Page Replacement Cntd…

Figure: Second-Chance (Clock) Page-Replacement Algorithm
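A sketch of the clock scan in C, assuming a per-frame reference bit maintained by the hardware and a global clock hand:

#include <stdbool.h>

#define NFRAMES 64

struct frame { int page; bool referenced; };   /* reference bit set by hardware */
struct frame frames[NFRAMES];
static int hand = 0;                           /* the clock hand                */

int select_victim(void)
{
    for (;;) {
        if (frames[hand].referenced) {         /* give the page a second chance */
            frames[hand].referenced = false;
            hand = (hand + 1) % NFRAMES;
        } else {
            int victim = hand;                 /* bit is 0: replace this frame  */
            hand = (hand + 1) % NFRAMES;
            return victim;
        }
    }
}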
4. Page Replacement Cntd…

4.6 Counting-Based Page Replacement
• Keep a counter of the number of references that have been made to each page
• Least Frequently Used (LFU) Algorithm: replaces the page with the smallest count, since an actively used page should have a large reference count
• Most Frequently Used (MFU) Algorithm: based on the argument that the page with the smallest count was probably just brought in and has yet to be used
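A sketch of LFU victim selection in C, assuming a per-frame reference counter is maintained (flipping the comparison gives MFU):

#define NFRAMES 64

struct frame { int page; unsigned long ref_count; };   /* one counter per frame */
struct frame frames[NFRAMES];

int lfu_victim(void)
{
    int victim = 0;
    for (int i = 1; i < NFRAMES; i++)          /* smallest count = least frequently used */
        if (frames[i].ref_count < frames[victim].ref_count)
            victim = i;
    return victim;
}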
4.7 Page-Buffering Algorithms
• Keep a pool of free frames: the desired page is read into a free frame from the pool before the victim is written out, so the process can restart as soon as possible; when the victim is later written out, its frame is added to the free-frame pool
• Maintain a list of modified pages; whenever the paging device is idle, a modified page is selected and written to the disk
• Remember which page was in each frame of the free-frame pool; since frame contents are not modified while a frame sits in the pool, the old page can be reused directly from the pool if it is needed again before that frame is reused


5. Allocation of Frames
• Each process needs minimum number of pages
• Example: IBM 370 – 6 pages to handle MVC (m/y to m/y)instruction:
– instruction is 6 bytes, might span 2 pages
• Allocation Algorithms
– Equal allocation – For example, if there are 100 frames and 5 processes,
give each process 20 frames.
– Proportional allocation – Allocate according to the size of process
Example (proportional allocation):

si = size of process pi
S = Σ si
m = total number of frames
ai = allocation for pi = (si / S) × m

m = 64, s1 = 10, s2 = 127
a1 = (10 / 137) × 64 ≈ 5
a2 = (127 / 137) × 64 ≈ 59
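A small C sketch of the proportional-allocation calculation for this example:

#include <stdio.h>

int main(void)
{
    int s[] = {10, 127};                 /* process sizes, in pages */
    int m = 64;                          /* total number of frames  */
    int S = 0;

    for (int i = 0; i < 2; i++) S += s[i];
    for (int i = 0; i < 2; i++)          /* ai = (si / S) * m       */
        printf("a%d ~= %.0f frames\n", i + 1, (double)s[i] * m / S);
    return 0;
}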
5. Allocation of Frames Cntd…

5.3 Global versus Local Allocation


• Global replacement allows a process to select a replacement frame from the set of all frames, i.e. one process can take a frame from another
– A process can select a replacement from its own frames or the frames of any
lower-priority process
– Allows a high-priority process to increase its frame allocation
• Local replacement requires that each process select from only its own
set of allocated frames
– The number of frames allocated to a process does not change



6. Thrashing
Thrashing: high paging activity, i.e. a process is spending more time paging than executing
6.1 Cause of Thrashing
• If a process does not have enough pages, the page-fault rate is very high. This leads to:
– low CPU utilization, so the OS thinks it needs to increase the degree of multiprogramming, which leads to even more page faults



6. Thrashing Cntd…

Thrashing Prevention
• Why does thrashing occur?
A locality is a set of pages that are actively used together; thrashing occurs when
Σ (size of locality) > total memory size
– A process migrates from one locality to another
– Localities may overlap
Working-Set Model
• Δ ≡ working-set window ≡ a fixed number of page references
Example: 10,000 instructions
• WSSi (working-set size of process Pi) = total number of distinct pages referenced in the most recent Δ (varies in time)
– if Δ is too small, it will not include the entire locality
– if Δ is too large, it will include several localities
– if Δ = ∞, it will include the entire program
6. Thrashing Cntd…

• D = Σ WSSi ≡ total demand for frames
• if D > m ⇒ thrashing
• Policy: if D > m, then suspend one of the processes
• The working set can be approximated with a fixed-interval timer interrupt plus a reference bit
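A C sketch that measures WSS over a sliding window of the last Δ references of a recorded reference trace; the trace and Δ = 10 are made up for illustration:

#include <stdio.h>

#define DELTA     10                     /* working-set window, in references */
#define MAX_PAGE  64

/* Number of distinct pages among the DELTA references ending at position t. */
static int wss(const int *refs, int t)
{
    int seen[MAX_PAGE] = {0}, distinct = 0;
    int start = (t - DELTA + 1 > 0) ? t - DELTA + 1 : 0;
    for (int i = start; i <= t; i++)
        if (!seen[refs[i]]++) distinct++;
    return distinct;
}

int main(void)
{
    int refs[] = {1,2,1,3,1,2,4,4,4,4,1,1,2,2,5,5,5,5,3,3};
    int n = sizeof refs / sizeof refs[0];
    printf("WSS at the end of the trace: %d pages\n", wss(refs, n - 1));  /* 4 */
    return 0;
}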



6. Thrashing Cntd…

Page-Fault Frequency Scheme


• Establish an "acceptable" page-fault rate
– If the actual rate is too low, the process loses a frame
– If the actual rate is too high, the process gains a frame
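A sketch of that control loop in C; the fault-rate band and the grow/shrink operations are hypothetical placeholders, not a real OS interface:

void grow_allocation(int pid);       /* assumed: give the process one more frame */
void shrink_allocation(int pid);     /* assumed: take one frame away              */

void pff_adjust(int pid, double fault_rate)
{
    const double lower = 0.0001;     /* assumed acceptable band (faults/access)   */
    const double upper = 0.0010;

    if (fault_rate > upper)
        grow_allocation(pid);        /* rate too high: the process gains a frame  */
    else if (fault_rate < lower)
        shrink_allocation(pid);      /* rate too low: the process loses a frame   */
}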


