
What is virtual memory, how is it implemented, and why do operating systems use it?

Real, or physical, memory exists on RAM chips inside the computer. Virtual memory, as its name suggests, doesn't physically exist on a memory chip. It is an optimization technique implemented by the operating system in order to give an application program the impression that it has more memory than actually exists. Virtual memory is implemented by various operating systems such as Windows, Mac OS X, and Linux.
Let's say that an operating system needs 120 MB of memory in order to hold all the running programs, but there is currently only 50 MB of available physical memory on the RAM chips. The operating system will then set up 120 MB of virtual memory and use a component called the virtual memory manager (VMM) to manage that 120 MB. The VMM will create a file on the hard disk that is 70 MB (120 − 50) in size to account for the extra memory that's needed.
Now, how does the VMM function? As mentioned before, the VMM creates a file on the hard disk that holds the extra memory needed by the O.S., which in our case is 70 MB. This file is called a paging file (also known as a swap file), and plays an important role in virtual memory. The paging file combined with the RAM accounts for all of the memory. Whenever the O.S. needs a block of memory that's not in real (RAM) memory, the VMM takes a block from real memory that hasn't been used recently, writes it to the paging file, and then reads the block of memory that the O.S. needs from the paging file. The VMM then takes the block of memory from the paging file and moves it into real memory in place of the old block. This process is called swapping (also known as paging), and the blocks of memory that are swapped are called pages.
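The swapping described above can be sketched as a toy model, with a small dictionary standing in for RAM and another for the paging file. The names, the 3-page capacity, and the eviction choice (longest-resident page, as a crude stand-in for "not used recently") are all illustrative assumptions, not an actual OS interface.

```python
# Toy model of swapping: a fixed-size "RAM" plus a "paging file" overflow.
RAM_CAPACITY = 3  # illustrative: real RAM holds far more pages

ram = {}          # page id -> contents currently in physical memory
paging_file = {}  # page id -> contents swapped out to disk

def access(page, contents=None):
    """Return a page's contents, swapping it in from the paging file if needed."""
    if page in ram:
        return ram[page]
    if len(ram) >= RAM_CAPACITY:
        # Evict the longest-resident page (a stand-in for "not used recently")
        # by writing it out to the paging file.
        victim = next(iter(ram))
        paging_file[victim] = ram.pop(victim)
    # Bring the requested page into RAM (from disk, or newly created).
    ram[page] = paging_file.pop(page, contents)
    return ram[page]

for p in range(5):                     # touch 5 pages with room for only 3
    access(p, contents=f"data-{p}")
print(sorted(ram), sorted(paging_file))  # → [2, 3, 4] [0, 1]
```

After touching five pages, only the last three remain resident; the first two were written out to the paging file, exactly as in the 120 MB / 50 MB example.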
There are two reasons why one would want this: the first is to allow the use of programs that are too big to physically fit in memory. The other is to allow for multitasking: multiple programs running at once.

Virtual Memory Can Slow Down Performance


However, virtual memory can slow down performance. If the size of virtual memory is quite large in comparison to the real memory, then more swapping to and from the hard disk will occur as a result. Accessing the hard disk is far slower than using system memory. Running too many programs at once on a system with an insufficient amount of RAM results in constant disk swapping, also called thrashing, which can severely slow down a system's performance.
A page fault occurs when a program tries to access a page that is mapped in its address space but not loaded in physical memory (the RAM). In other words, a page fault occurs when a program cannot find a page it is looking for in physical memory, which means the program has to access the paging file (which resides on the hard disk) to retrieve the desired page.

In a computer operating system that uses paging for virtual memory management,
page replacement algorithms decide which memory pages to page out (swap out,
write to disk) when a page of memory needs to be allocated. Paging happens when
a page fault occurs and a free page cannot be used to satisfy the allocation, either
because there are none, or because the number of free pages is lower than some
threshold.
When the page that was selected for replacement and paged out is referenced again, it has to be paged in (read in from disk), and this involves waiting for I/O completion. This waiting determines the quality of the page replacement algorithm: the less time spent waiting for page-ins, the better the algorithm. A page replacement algorithm
looks at the limited information about accesses to the pages provided by hardware,
and tries to guess which pages should be replaced to minimize the total number of
page misses, while balancing this with the costs (primary storage and processor
time) of the algorithm itself.
PAGE REPLACEMENT ALGORITHMS

The theoretically optimal page replacement algorithm


The theoretically optimal page replacement algorithm (also known as OPT, the clairvoyant replacement algorithm, or Bélády's optimal page replacement policy) works as follows: when a page needs to be swapped in, the operating system swaps out the page whose next use will occur farthest in the future. For example, a page that is not going to be used for the next 6 seconds will be swapped out over a page that is going to be used within the next 0.4 seconds. This algorithm cannot be implemented in a general-purpose operating system because it is impossible to reliably compute how long it will be before a page is going to be used, except when all software that will run on the system is known beforehand.
Despite this limitation, algorithms exist that can offer near-optimal performance: the operating system keeps track of all pages referenced by the program and uses those data to decide which pages to swap in and out on subsequent runs. This approach can offer near-optimal performance, but not on the first run of a program, and only if the program's memory reference pattern is relatively consistent each time it runs.
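Since OPT needs the full future of references, it can still be simulated offline on a known reference string. A minimal sketch, assuming the whole string is available up front (the function name and reference string are illustrative):

```python
# Sketch of the clairvoyant OPT policy on a fully known reference string.
# Real systems cannot do this at run time.

def opt_faults(refs, frames):
    """Count page faults under Bélády's optimal replacement."""
    resident = set()
    faults = 0
    for i, page in enumerate(refs):
        if page in resident:
            continue                      # hit: nothing to do
        faults += 1
        if len(resident) == frames:
            # Evict the resident page whose next use is farthest in the
            # future (pages never referenced again are evicted first).
            def next_use(p):
                future = refs[i + 1:]
                return future.index(p) if p in future else float("inf")
            resident.remove(max(resident, key=next_use))
        resident.add(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(opt_faults(refs, 3))  # → 7
```

On this string OPT incurs 7 faults with 3 frames, the minimum any policy can achieve, which makes it a useful yardstick for the practical algorithms below.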

Demand Paging
A demand paging system is quite similar to a paging system with swapping, where processes reside in secondary memory and pages are loaded only on demand, not in advance. When a context switch occurs, the operating system does not copy any of the old program's pages out to disk or any of the new program's pages into main memory. Instead, it simply begins executing the new program after loading its first page and fetches that program's pages as they are referenced.

First In First Out (FIFO) algorithm

Oldest page in main memory is the one that will be selected for replacement.

Windows NT and Windows 2000 use this algorithm.

Easy to implement: keep a list, replace pages from the tail, and add new pages at the head.

FIFO is not a stack algorithm. In certain cases, the number of page faults can actually increase when more frames are allocated to the process (Bélády's anomaly). For example, with the reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5, there are 9 page faults with 3 frames and 10 page faults with 4 frames.
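FIFO is short enough to sketch directly, and the classic reference string lets us check the anomaly: more frames produce more faults. The function name is illustrative.

```python
# FIFO replacement, plus a check of Bélády's anomaly on the classic
# reference string 1,2,3,4,1,2,5,1,2,3,4,5.
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults with first-in-first-out replacement."""
    queue = deque()          # oldest page at the left end
    faults = 0
    for page in refs:
        if page in queue:
            continue         # hit: FIFO order is unchanged by a reference
        faults += 1
        if len(queue) == frames:
            queue.popleft()  # evict the page that has been resident longest
        queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3), fifo_faults(refs, 4))  # → 9 10
```

Note that a hit does not reorder the queue; that is exactly why FIFO is not a stack algorithm and can exhibit the anomaly.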

Least Recently Used (LRU) Replacement

choose the page that was last referenced the longest time ago

assumes recent behavior is a good predictor of the near future

can manage LRU with a list called the LRU stack or the paging stack (data structure)

in the LRU stack, the first entry describes the page referenced least recently, and the last entry describes the page referenced most recently.

if a page is referenced, move it to the end of the list

problem: requires updating on every page referenced

too slow to be used in practice for managing the page table, but many systems use
approximations to LRU
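The LRU stack described above can be modeled directly with a Python list: the least recently used page sits at the front, and every reference moves the touched page to the back. A minimal sketch (function name illustrative), which also shows why this is too slow in practice: every single reference mutates the list.

```python
# LRU via the "LRU stack": stack[0] is the LRU page, stack[-1] the MRU page.

def lru_faults(refs, frames):
    """Count page faults with least-recently-used replacement."""
    stack = []
    faults = 0
    for page in refs:
        if page in stack:
            stack.remove(page)   # referenced: will be re-appended as MRU
        else:
            faults += 1
            if len(stack) == frames:
                stack.pop(0)     # evict the least recently used page
        stack.append(page)       # every reference updates the stack
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 3))  # → 10
```

On the same reference string LRU gives 10 faults with 3 frames, between FIFO's 9-or-worse behavior and OPT's minimum of 7.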

Least Frequently Used (LFU) algorithm

The page with the smallest count is the one which will be selected for replacement.

This algorithm suffers from the situation in which a page is used heavily during the initial
phase of a process, but then is never used again.
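A counter-based sketch makes both the policy and its drawback visible. This is one common variant, assuming counts persist only while a page is resident; the function name and the short trace are illustrative.

```python
# LFU sketch: evict the resident page with the smallest reference count.
from collections import Counter

def lfu_faults(refs, frames):
    """Count page faults with least-frequently-used replacement."""
    counts = Counter()       # reference count per resident page
    faults = 0
    for page in refs:
        if page in counts:
            counts[page] += 1
            continue
        faults += 1
        if len(counts) == frames:
            victim = min(counts, key=counts.get)  # smallest count
            del counts[victim]
        counts[page] = 1
    return counts, faults

# Page 1 is used heavily at the start, then never again, yet its high
# count keeps it resident: the drawback noted above.
counts, faults = lfu_faults([1, 1, 1, 2, 3, 2, 4], 3)
print(faults, 1 in counts)  # → 4 True
```

Page 3, referenced once, is evicted instead of the stale page 1, illustrating exactly the "heavy initial use" problem.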

Most Frequently Used (MFU) algorithm

This algorithm is based on the argument that the page with the smallest count was
probably just brought in and has yet to be used.

RAND (Random)

choose any page to replace at random

assumes the next page to be referenced is random

can test other algorithms against random page replacement
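As a baseline for such comparisons, random replacement is a few lines. The seed is there only to make the run reproducible; the function name is illustrative.

```python
# Random replacement baseline: evict a uniformly random resident page.
import random

def rand_faults(refs, frames, seed=0):
    """Count page faults when the victim is chosen at random (seeded)."""
    rng = random.Random(seed)            # seeded for reproducibility
    resident = []
    faults = 0
    for page in refs:
        if page in resident:
            continue
        faults += 1
        if len(resident) == frames:
            resident.pop(rng.randrange(len(resident)))  # random victim
        resident.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(rand_faults(refs, 3))
```

Any candidate policy should at least beat this baseline consistently; on this string the fault count must fall between 5 (the compulsory misses for 5 distinct pages) and 12.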

NRU (Not Recently Used):

as an approximation to LRU, select one of the pages that has not been used recently (as
opposed to identifying exactly which one has not been used for the longest amount of
time)

keep one bit called the "used bit" or "reference bit", where 1 => used recently and 0 =>
not used recently

variants of this scheme are used in many operating systems, including UNIX and the Macintosh

most variations use a scan pointer and go through the page frames one by one, in some
order, looking for a page that has not been used recently.
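The scan-pointer idea can be sketched with a used bit per frame: a frame whose bit is 1 has it cleared and is passed over (a "second chance"), while a frame whose bit is 0 becomes the victim. The frame layout and page names here are illustrative assumptions.

```python
# Reference-bit scan: clear used bits as the pointer passes, pick the
# first frame found with used bit 0.

frames = [{"page": p, "used": 1} for p in ("A", "B", "C")]
pointer = 0

def pick_victim():
    """Advance the scan pointer until a frame with used bit 0 is found."""
    global pointer
    while True:
        frame = frames[pointer]
        if frame["used"] == 0:
            victim = pointer
            pointer = (pointer + 1) % len(frames)
            return victim
        frame["used"] = 0                   # clear the bit: second chance
        pointer = (pointer + 1) % len(frames)

# All bits start at 1, so the scan clears A, B, C and then, on its
# second pass, selects A (the first frame it cleared).
v = pick_victim()
print(frames[v]["page"])  # → A
```

The pointer wraps around the frame table, which is why this family of scans is pictured as a clock hand sweeping over the frames.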

Variations of NRU:
Variation 1: Linear Scanning Algorithm

this algorithm is sometimes called the Clock Algorithm or the Second Chance algorithm,
but both terms are also used for the FIFO Second Chance algorithm described below.
