
UNIT 5

Virtual Memory

#Sushanth KJ|Faculty,ECE|BIT, Mlore

Introduction

Virtual Memory Basics


Demand Paging
The Virtual Memory Manager
Page Replacement Policies
Controlling Memory Allocation to a Process
Shared Pages
Memory-Mapped Files
Case Studies of Virtual Memory Using Paging
Virtual Memory Using Segmentation

Virtual Memory Basics


MMU translates a logical address into a physical one
Virtual memory manager is a software component
Uses demand loading
Exploits locality of reference to improve performance


Virtual Memory Using Paging
MMU performs address translation using page table
Effective memory address of logical address (pi, bi)
= start address of the page frame containing page pi + bi
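The translation above can be sketched in Python; the 1 KB page size and the page-table contents below are assumed values for illustration.

```python
# Sketch of paged address translation (page size and page table assumed).
PAGE_SIZE = 1024

page_table = {0: 5, 1: 2, 2: 7}   # page number -> page frame number

def translate(logical_address):
    pi, bi = divmod(logical_address, PAGE_SIZE)   # page number, byte offset
    frame = page_table[pi]                        # KeyError would mean a missing page
    # effective address = start of the frame containing page pi + bi
    return frame * PAGE_SIZE + bi

print(translate(1030))   # page 1, offset 6 -> frame 2 -> 2 * 1024 + 6 = 2054
```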


Demand Paging Preliminaries


Demand Paging Preliminaries (continued)
Memory Management Unit (MMU)
raises a page fault interrupt if page
containing logical address not in
memory


Demand Paging Preliminaries (continued)
A page fault interrupt is raised because the valid bit of page 3 is 0


Demand Paging Preliminaries (continued)
At a page fault, the required page is loaded in a
free page frame
If no page frame is free, virtual memory
manager performs a page replacement
operation
Page replacement algorithm
Page-out initiated if page is dirty (modified bit is set)

Page-in and page-out: page I/O or page traffic


Effective memory access time in demand
paging:
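The slide leaves the formula to a figure; a standard formulation, with assumed symbols pf for the page fault rate, tmem for a memory access time, and tpf for the page fault service time, is:

```latex
t_{\text{effective}} = (1 - p_f) \cdot t_{\text{mem}} + p_f \cdot t_{\text{pf}}
```

Since tpf is orders of magnitude larger than tmem, even a small page fault rate dominates the effective access time.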


Page Replacement
(Empirical) law of locality of
reference: logical addresses used by
process in a short interval tend to be
grouped in certain portions of its
logical address space


Memory Allocation to a Process
How much memory to allocate to a process


Optimal Page Size


Size of a page is defined by computer hardware
Page size determines:
No of bits required to represent byte number in a page
Memory wastage due to internal fragmentation
Size of the page table for a process
Page fault rates when a fixed amount of memory is
allocated to a process

Use of page sizes larger than the optimal value implies somewhat higher page fault rates for a process
Tradeoff between HW cost and efficient operation

Paging Hardware
Page-table-address-register (PTAR)
points to the start of a page table



Memory Protection
Memory protection violation raised if:
Process tries to access a nonexistent
page
Process exceeds its (page) access
privileges

It is implemented through:
Page table size register (PTSR) of MMU
Kernel records number of pages contained in
a process in its PCB
Loads number from PCB into PTSR when process is scheduled


Address Translation and Page Fault Generation
Translation look-aside buffer (TLB):
small and fast associative memory
used to speed up address translation


Address Translation and Page Fault Generation (continued)

TLBs can be HW- or SW-managed


Address Translation and Page Fault Generation (continued)

TLB hit ratio

Some mechanisms used to improve performance:
Wired TLB entries for kernel pages: never replaced
Superpages

Superpages
TLB reach is stagnant even though memory sizes
increase rapidly as technology advances
TLB reach = page size x no of entries in TLB
It affects performance of virtual memory

Superpages are used to increase the TLB reach


A superpage is a power of 2 multiple of page size
Its start address (both logical and physical) is aligned
on a multiple of its own size
Max TLB reach = max superpage size x no of entries
in TLB
Size of a superpage is adapted to execution behavior
of a process through promotions and demotions
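The two reach formulas can be checked with a small worked example; all sizes below are assumed, not taken from the slide.

```python
# TLB reach = page size x number of TLB entries (sizes assumed for illustration)
page_size = 4 * 1024               # 4 KB base pages
tlb_entries = 64

tlb_reach = page_size * tlb_entries
print(tlb_reach)                   # 262144 bytes, i.e., 256 KB

# With superpages up to 4 MB, max TLB reach = max superpage size x entries
max_superpage = 4 * 1024 * 1024
max_tlb_reach = max_superpage * tlb_entries
print(max_tlb_reach)               # 268435456 bytes, i.e., 256 MB
```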

Support for Page Replacement
Virtual memory manager needs
following information for minimizing
page faults and number of page-in
and page-out operations:
The time when a page was last used
Expensive to provide enough bits for this
purpose
Solution: use a single reference bit

Whether a page is dirty
Solution: modified bit in page table entry
A page is clean if it is not dirty


Practical Page Table Organizations
A process with a large address space requires
a large page table, which occupies too much
memory
Solutions:
Inverted page table
Describes contents of each page frame
Size governed by size of memory
Independent of number and sizes of processes
Contains pairs of the form (program id, page #)

Con: information about a page must be searched

Multilevel page table


Page table of process is paged

Inverted Page Tables

Use of a hash table speeds up search
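A minimal sketch of the hash-table lookup; the frame contents below are assumed for illustration.

```python
# Inverted page table: one entry per page frame holding (process id, page number).
frames = [("P1", 0), ("P2", 3), ("P1", 1), ("P2", 0)]   # assumed contents

# Hash index maps (process id, page number) -> frame number, replacing a
# linear search over all frames with an average O(1) lookup.
index = {entry: frame for frame, entry in enumerate(frames)}

def lookup(pid, page):
    return index.get((pid, page))   # frame number, or None (page fault)

print(lookup("P2", 3))   # 1
print(lookup("P1", 9))   # None -> page not in memory
```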


Multilevel Page Tables

If the size of a page table entry is 2^e bytes and the page size is 2^n bytes, the number of page table entries in one PT page is 2^n / 2^e
Logical address (pi, bi) is regrouped into three fields: (pi1, pi2, bi)
The PT page with the number pi1 contains the entry for pi
pi2 is the entry number for pi in that PT page
bi is the byte offset within the page
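The regrouping can be sketched with assumed field widths: a 10-bit byte offset (1 KB pages) and a 10-bit entry number within a PT page.

```python
# Splitting a logical address for a two-level page table (widths assumed).
OFFSET_BITS = 10    # width of b_i
ENTRY_BITS = 10     # width of p_i2

def split(addr):
    bi = addr & ((1 << OFFSET_BITS) - 1)
    pi2 = (addr >> OFFSET_BITS) & ((1 << ENTRY_BITS) - 1)
    pi1 = addr >> (OFFSET_BITS + ENTRY_BITS)
    return pi1, pi2, bi

addr = (3 << 20) | (5 << 10) | 7    # PT page 3, entry 5, offset 7
print(split(addr))                  # (3, 5, 7)
```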


I/O Operations in a Paged Environment
Process makes system call for I/O operations
Parameters include: number of bytes to transfer,
logical address of the data area

Call activates I/O handler in kernel


I/O subsystem does not contain an MMU, so I/O handler
replaces logical address of data area with physical
address, using information from process page table
I/O fix (bit in misc info field) ensures pages of data
area are not paged out
Scatter/gather feature can deposit parts of I/O
operations data in noncontiguous memory areas
Alternatively, data area pages put in contiguous areas

Example: I/O Operations in Virtual Memory


The Virtual Memory Manager


Example: Page Replacement


Overview of Operation of the Virtual Memory Manager
Virtual memory manager makes two
important decisions during its
operation:
Upon a page fault, decides which page
to replace
Periodically decides how many page
frames should be allocated to a process


Page Replacement Policies


A page replacement policy should replace a page
not likely to be referenced in the immediate future
Examples:
Optimal page replacement policy
Minimizes total number of page faults; infeasible in practice

First-in first-out (FIFO) page replacement policy


Least recently used (LRU) page replacement policy
Basis: locality of reference

Page reference strings


Trace of pages accessed by a process during its
operation
We associate a reference time string with each page reference string
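The policies can be compared by simulating them on a page reference string; the string and frame count below are made-up values.

```python
from collections import OrderedDict

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement."""
    memory, faults = [], 0
    for p in refs:
        if p not in memory:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)              # evict the oldest-loaded page
            memory.append(p)
    return faults

def lru_faults(refs, frames):
    """Count page faults under LRU replacement."""
    memory, faults = OrderedDict(), 0
    for p in refs:
        if p in memory:
            memory.move_to_end(p)          # mark page most recently used
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False) # evict least recently used page
            memory[p] = True
    return faults

refs = [1, 2, 3, 2, 1, 4, 2, 1]
print(fifo_faults(refs, 3), lru_faults(refs, 3))   # 5 4
```

LRU faults less often here because it keeps the recently reused pages 1 and 2 resident, which is exactly the locality argument the slide makes.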

Example: Page Reference String
A computer supports instructions
that are 4 bytes in length
Uses a page size of 1KB
Symbols A and B are in pages 2 and 5



Page Replacement Policies (continued)
To achieve desirable page fault characteristics, faults shouldn't increase when memory allocation is increased
Policy must have stack (or inclusion)
property


FIFO page replacement policy does not exhibit the stack property.
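FIFO's lack of the stack property shows up on the classic reference string for Belady's anomaly: giving the process more frames produces more page faults.

```python
def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement."""
    memory, faults = [], 0
    for p in refs:
        if p not in memory:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)      # evict the oldest-loaded page
            memory.append(p)
    return faults

# Classic reference string exhibiting Belady's anomaly
refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))   # 9 faults with 3 page frames
print(fifo_faults(refs, 4))   # 10 faults with 4 page frames
```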

Page Replacement Policies (continued)

Virtual memory manager cannot use FIFO policy


Increasing allocation to a process may increase its page fault frequency
Would make it impossible to control thrashing

Practical Page Replacement Policies

Virtual memory manager has two threads
Free frames manager implements page replacement policy
Page I/O manager performs page-in/page-out operations


Practical Page Replacement Policies (continued)
LRU replacement is not feasible
Computers do not provide sufficient bits in the ref
info field to store the time of last reference

Most computers provide a single reference bit


Not recently used (NRU) policies use this bit
Simplest NRU policy: Replace an unreferenced page and
reset all reference bits if all pages have been referenced
Clock algorithms provide better discrimination between
pages by resetting reference bits periodically
One-handed clock algorithm
Two-handed clock algorithm
Resetting pointer (RP) and examining pointer (EP)
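A one-handed clock (second-chance) variant can be sketched as follows; the two-handed version splits the bit-resetting and victim-examining work between the RP and EP pointers.

```python
# One-handed clock algorithm: the hand sweeps the frames, clearing reference
# bits, and replaces the first page whose reference bit is already 0.
class Clock:
    def __init__(self, nframes):
        self.frames = [None] * nframes   # each slot: [page, ref_bit] or None
        self.hand = 0

    def access(self, page):
        """Reference a page; return True if this access page-faulted."""
        for slot in self.frames:
            if slot and slot[0] == page:
                slot[1] = 1              # hit: set the reference bit
                return False
        while True:                      # fault: look for a victim frame
            slot = self.frames[self.hand]
            if slot is None or slot[1] == 0:
                self.frames[self.hand] = [page, 1]
                self.hand = (self.hand + 1) % len(self.frames)
                return True
            slot[1] = 0                  # referenced recently: second chance
            self.hand = (self.hand + 1) % len(self.frames)

clock = Clock(3)
faults = sum(clock.access(p) for p in [1, 2, 3, 1, 4, 5])
print(faults)   # 5
```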


Example: Two-Handed Clock Algorithm

Both pointers are advanced simultaneously


Algorithm properties defined by pointer
distance:
If pointers are close together, only recently used
pages will survive in memory
If pointers are far apart, only pages that have not been used in a long time would be removed


Controlling Memory Allocation to a Process
Process Pi is allocated alloci number of page frames
Fixed memory allocation
Fixes alloc statically; uses local page replacement

Variable memory allocation


Uses local and/or global page replacement
If local replacement is used, handler periodically
determines correct alloc value for a process
May use working set model

Sets alloc to size of the working set


Implementation of a Working Set Memory Allocator
Swap out a process if alloc page
frames cannot be allocated
Expensive to determine WSi(t, Δ) and alloci at every time instant t
Solution: Determine working sets
periodically
Sets determined at end of an interval are
used to decide values of alloc for use during
the next interval
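Periodic working-set computation can be sketched as follows; the reference string and the window size Δ are assumed values.

```python
# WS_i(t, delta): the set of distinct pages referenced in the last `delta`
# references up to time t (reference string and window size assumed).
def working_set(refs, t, delta):
    return set(refs[max(0, t - delta + 1): t + 1])

refs = [2, 5, 2, 3, 7, 2, 5]
ws = working_set(refs, t=6, delta=4)   # pages referenced at times 3..6
print(sorted(ws))                      # [2, 3, 5, 7]
alloc = len(ws)                        # alloc value for the next interval
```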

Shared Pages
Static sharing results from static binding
performed by a linker/loader before execution
of program
Dynamic binding conserves memory by
binding same copy of a program/data to
several processes
Program or data shared retains its identity
Two conditions should be satisfied:
Shared program should be coded as reentrant
Can be invoked by many processes at the same time

Program should be bound to identical logical addresses in every process that shares it

Shared pages
should have same
page numbers in
all processes


Copy-on-Write
Feature used to conserve memory
when data in shared pages could be
modified
Copy-on-write flag in page table entries
Memory allocation decisions are performed statically
A private copy of page k is made when A modifies it


Memory-Mapped Files

Memory mapping of a file by a process


binds file to a part of the logical address
space of the process
Binding is performed when process makes a
memory map system call
Analogous to dynamic binding of programs
and data
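Python's mmap module illustrates such a system call: after mapping, file contents are read and written through ordinary memory operations. The file path below is a temporary one created only for the example.

```python
import mmap
import os
import tempfile

# Create a small file to map (temporary path, for illustration only).
path = os.path.join(tempfile.mkdtemp(), "data.bin")
with open(path, "wb") as f:
    f.write(b"hello virtual memory")

# Memory-map the file; length 0 means "map the entire file".
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mapped:
        print(mapped[:5])          # b'hello' -- read via memory access
        mapped[0:5] = b"HELLO"     # in-place update, written back to the file

with open(path, "rb") as f:
    print(f.read())                # b'HELLO virtual memory'
```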



Case Studies of Virtual Memory Using Paging
Unix Virtual Memory


Unix Virtual Memory

Paging hardware differs across architectures


Pages can be: resident, unaccessed, swapped-out
Allocation of as little swap space as possible
Copy-on-write for fork
Some HW architectures lack a reference bit; this is compensated by using the valid bit in an interesting manner
Process can fix some pages in memory
Pageout daemon uses a clock algorithm
Swaps out a process if all required pages cannot be in
memory
A swap-in priority is used to avoid starvation

Summary
Basic actions in virtual memory using paging:
address translation and demand loading of
pages
Implemented jointly by
Memory Management Unit (MMU): Hardware
Virtual memory manager: Software

Memory is divided into page frames


Virtual memory manager maintains a page table
Inverted and multilevel page tables use less memory
but are less efficient
A fast TLB is used to speed up address translation

Summary (continued)
Which page should VM manager remove from
memory to make space for a new page?
Page replacement algorithms exploit locality of
reference
LRU has stack property, but is expensive
NRU algorithms are used in practice
E.g., clock algorithms

How much memory should manager allocate?


Use working set model to avoid thrashing

Copy-on-write can be used for shared pages


Memory mapping of files speeds up access to
data
