
Proposed Work

3.1 Problem Definition:


Secondary storage has always been abundant compared to primary memory, and it is also very cheap. However, it suffers from a serious drawback: secondary storage devices tend to be very slow compared to RAM. Read/write access to the hard disk has always been a performance bottleneck, and therefore, for optimal overall system performance, accesses to secondary memory must be minimized. The same problem is present in traditional paging. Note that programs cannot run until the data they need is present in RAM. Therefore, every time an application needs data that is not present in RAM, that data has to be brought back into RAM, and so the paging process takes place. Since a program cannot run if its data is not present in RAM, overall performance suffers because of the paging process, as existing pages must be paged out before new pages can be brought in. However, paging is crucial, as it allows the OS to access more memory than is physically available. One solution is to perform paging not on secondary storage but on a faster device; this would solve the issue.
3.2 Features:
1. High system performance: since the slower disk is accessed less often for paging, the overall paging process is sped up.
2. Lower memory usage: under low memory load conditions, memory usage stays low. The process is adaptive in the sense that the size of the compressed buffer adjusts dynamically to memory load conditions.
3.3 Project Scope:
Adaptive Compressed Paging is a simulation program that aims to improve the existing paging mechanism. We implement it in two segments: in the first segment we implement the traditional paging system, and in the second, Adaptive Compressed Paging. We then compare the performance of the two systems and present the results. By presenting the results visually, Adaptive Compressed Paging lets one clearly see the advantage of such a system over traditional paging. This project does not aim to be a commercial product; rather, it is a research subject. Our goal is to provide strong evidence that adaptive compressed paging has an advantage over the traditional system.
3.4 Goals:
The main goal of this project is to implement Adaptive Compressed Paging. We have tried to show how an adaptive approach to compressed paging can improve system performance by minimizing access to secondary storage, i.e. the hard disk. The project is essentially a simulation of the adaptive compressed paging process in user space; a true implementation of such a system must live inside the kernel itself. In this section, we discuss the scope, target user groups, operating environment, and some design and implementation constraints.
Adaptive Compressed Caching in RAM
3.4.1 Project Statement
In the existing system, the paging process is carried out on the hard disk, which slows down system performance and ultimately increases CPU idle time. To resolve this problem, we introduce a new location for the paging process: RAM itself, rather than the hard disk, which is very slow compared to RAM.
3.5 Objective
The objective of this project is to improve upon the traditional paging technique by storing pages to be removed from memory in a special memory area rather than on disk. This way, pages that were previously paged out need not be fetched from disk; instead, they are decompressed and loaded from RAM directly. We also try to improve this compressed caching technique further by using an adaptive compression algorithm. We have used the LZO algorithm, which offers very fast compression and decompression, thus improving system performance even more.
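The objective above can be sketched in a few lines of Python. The project uses LZO; since a portable LZO binding may not be installed everywhere, this sketch substitutes Python's standard zlib (an assumption, not the project's actual compressor) — the mechanism is the same: an evicted page is compressed into a RAM-resident store instead of being written to disk, and a page fault is served by decompressing it back.

```python
import zlib

PAGE_SIZE = 4096
compressed_cache = {}  # page number -> compressed bytes, kept in RAM

def page_out(page_no, data):
    """Evict a page: compress it into RAM instead of writing it to disk."""
    compressed_cache[page_no] = zlib.compress(data)

def page_in(page_no):
    """Page fault: decompress from the in-RAM cache, no disk access needed."""
    return zlib.decompress(compressed_cache.pop(page_no))

page = b"A" * PAGE_SIZE                       # a highly compressible 4 KiB page
page_out(7, page)
saved = PAGE_SIZE - len(compressed_cache[7])  # RAM saved by compression
assert saved > 0
assert page_in(7) == page                     # round trip restores the page
```

The dictionary stands in for the compressed buffer; in the real design its size adjusts dynamically, as described in section 3.2.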
3.6 Constraints
In this section we list the major design and implementation issues:
3.6.1 Page Cache
Compressed caching has a strong tendency to influence the page cache, as it is commonly larger than other caches. Pages holding data from blocks of all backing stores (like buffer data, regular file data, file system metadata, and even pages with data from swap) are stored in the page cache. Like other system caches, the page cache may be smaller on a system with a compressed cache that only stores pages backed by swap. As a consequence of this possible reduction, blocks (usually from regular files) will have fewer pages with their data cached in memory, which is likely to increase overall I/O. This is a sign that compressed caching should be aware not only of its usefulness to the virtual memory system but also of how it might degrade system performance. Instead of letting the page cache and the compressed cache compete for memory, our approach to this problem is to also store other pages from the page cache (besides the ones holding swap data) in the compressed cache. This actually increases the memory available to all pages in the page cache, not only to those backed by swap.
3.6.2 Page Ordering
In the compressed cache, our primary concern regarding page ordering is to keep the compressed pages in the order in which the virtual memory system evicted them. As we verified in experiments on Linux, which uses an approximation of the least recently used (LRU) replacement policy, not keeping the order in which the compressed pages are stored in the compressed cache rarely improves system performance and usually degrades it severely. Like most operating systems, when a block is read from the backing store, Linux also reads adjacent blocks in advance, because reading these subsequent blocks is usually much cheaper than reading the first one. Reading blocks in advance is known as read-ahead, and the blocks read ahead are stored in pages in non-compressed memory.

Read-ahead operations alter the LRU ordering, since the pages read in advance are taken as more recently used than the ones stored in the compressed cache, although they may never even be used. As a consequence, it is possible that this change forces the release of pages not in conformity with the page replacement algorithm. For this reason, whenever a page is read from the compressed cache, a read-ahead must not be performed. It is not worthwhile to read pages from the compressed cache in advance, since there is no performance penalty for fetching the pages at different moments. Furthermore, compressed pages read ahead from swap are only decompressed when explicitly reclaimed by the virtual memory system. In contrast to pages read only due to the read-ahead operation, a compressed page reclaimed for immediate use preserves the LRU page ordering, since it will be more recently used than any page in the compressed cache. We also consider it essential to preserve the order in which the pages were compressed, so that we can verify the efficiency of compressed caching. Otherwise, the results would be influenced by this extra factor, possibly misleading our conclusions.
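The ordering requirement above can be sketched with a structure that remembers eviction order. This is a minimal illustration, not the project's implementation: `collections.OrderedDict` keeps pages in the order the (hypothetical) virtual memory system handed them over, so when the compressed cache itself must release a page, the least recently evicted one goes first, and no read-ahead is ever performed from it.

```python
from collections import OrderedDict
import zlib

class CompressedCache:
    """Keeps compressed pages in the order the VM system evicted them."""
    def __init__(self, max_pages):
        self.max_pages = max_pages
        self.pages = OrderedDict()          # insertion order == eviction order

    def page_out(self, page_no, data):
        self.pages[page_no] = zlib.compress(data)
        if len(self.pages) > self.max_pages:
            # Release the least recently evicted page first, preserving
            # the LRU approximation of the page replacement algorithm.
            self.pages.popitem(last=False)

    def page_in(self, page_no):
        # A page reclaimed for immediate use becomes the most recently used,
        # so it simply leaves the cache; no pages are read in advance.
        return zlib.decompress(self.pages.pop(page_no))

cache = CompressedCache(max_pages=2)
for n in (1, 2, 3):
    cache.page_out(n, bytes([n]) * 4096)
assert list(cache.pages) == [2, 3]          # page 1, evicted first, left first
```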
3.6.3 Cells with Contiguous Memory Pages
To minimize the problem of poor compression ratios, we propose the adoption of cells composed of contiguous memory pages. With larger cells, it is more likely that we gain memory space even if most pages do not compress very well. For example, if pages compress to 66% on average, we still have space gains if we use cells composed of at least two contiguous memory pages; in fact, in this case it is possible to store three compressed pages in one cell. However, we should note that allocating contiguous memory pages has some tradeoffs. The greater the number of contiguous pages, the greater the probability of failure when allocating them, given system memory fragmentation. Furthermore, the larger the cell, the greater the probability of fragmentation within it and the cost of compacting its compressed pages. As a good side effect, given that part of our metadata is used to store data about the cells, the use of larger cells reduces these data structures. Experimentally, we have concluded that two is the number of contiguous pages to allocate that achieves the best results in our implementation.
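The space arithmetic behind two-page cells can be checked with a short sketch. The `Cell` class below is illustrative only (the names and layout are assumptions, not the project's data structures): assuming pages compress to roughly 66% of their size on average, three compressed pages fit into one cell made of two contiguous 4 KiB pages, since 3 × 0.66 × 4096 B ≈ 8109 B ≤ 8192 B.

```python
PAGE_SIZE = 4096
CELL_PAGES = 2                      # the experiments above favour two-page cells
CELL_SIZE = CELL_PAGES * PAGE_SIZE  # 8192 bytes of contiguous memory

class Cell:
    """A cell of contiguous memory pages holding packed compressed pages."""
    def __init__(self):
        self.used = 0
        self.entries = {}           # page number -> compressed bytes

    def add(self, page_no, blob):
        if self.used + len(blob) > CELL_SIZE:
            return False            # would overflow: caller opens a new cell
        self.entries[page_no] = blob
        self.used += len(blob)
        return True

cell = Cell()
stored = 0
for n in range(3):
    blob = b"x" * int(PAGE_SIZE * 0.66)   # stand-in for a 66%-compressed page
    if cell.add(n, blob):
        stored += 1
assert stored == 3                  # three compressed pages in one two-page cell
```

The same sketch also shows the tradeoff: a cell twice as large halves the per-cell metadata but must be compacted as its entries come and go.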
3.6.4 Disabling Clean Page Compression
If we release clean pages without compressing them into the compressed cache, the LRU page ordering is changed, because some of the pages freed by the virtual memory system will be stored in the compressed cache and others will not. Nevertheless, since few of the clean pages were being reclaimed by the system, most of them would be freed anyway. Hence, releasing them earlier is not expected to have a major impact on system performance. The metadata and processing overhead introduced by this heuristic are insignificant.
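The heuristic reduces to one branch on eviction. A minimal sketch (the `dirty` flag and return values are illustrative assumptions): a clean page's contents can be re-read from its backing store, so it is freed outright, while only dirty pages are worth the compression effort.

```python
import zlib

compressed_cache = {}

def evict(page_no, data, dirty):
    """Free clean pages directly; compress only dirty pages into the cache."""
    if not dirty:
        return "freed"              # clean: contents survive on the backing store
    compressed_cache[page_no] = zlib.compress(data)
    return "compressed"

assert evict(1, b"a" * 4096, dirty=False) == "freed"
assert evict(2, b"b" * 4096, dirty=True) == "compressed"
assert 1 not in compressed_cache and 2 in compressed_cache
```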
3.6.5 Variable Compressed Cache Size
A static compressed cache size is not beneficial, since under low memory load the memory remains allocated to a cache that is hardly used. An adaptive policy that determines the compressed cache size at runtime is close to the optimal solution, even though implementing such a policy is usually a complex task.
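One simple shape such a runtime policy could take is sketched below. The pressure signal, thresholds, and doubling/halving steps are assumptions for illustration, not the project's actual policy: the budget starts small, grows when paging activity is high, and shrinks back when the load drops, so idle systems return the memory.

```python
class AdaptiveCacheBudget:
    """Grows the compressed-cache budget under memory pressure and
    shrinks it back when pressure is low."""
    def __init__(self, min_pages=16, max_pages=1024):
        self.min_pages = min_pages
        self.max_pages = max_pages
        self.budget = min_pages     # start small, as described in section 3.2

    def on_sample(self, page_faults_per_sec, threshold=100):
        if page_faults_per_sec > threshold:
            self.budget = min(self.budget * 2, self.max_pages)   # grow
        else:
            self.budget = max(self.budget // 2, self.min_pages)  # shrink

cache = AdaptiveCacheBudget()
cache.on_sample(500)        # heavy paging: budget doubles
assert cache.budget == 32
cache.on_sample(10)         # load drops: budget shrinks back to the minimum
assert cache.budget == 16
```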
3.6.6 Performance Issues
Here is a list of the major issues that cause a performance penalty in some way:
1) Compression and decompression of pages.
2) Compaction of pages in compressed cache cells.
3) Compression of clean pages.
4) Reduction in the effective memory available to programs.
3.7 Proposed System
Our project implements a (basically simulation) program that shows the performance gain of using a compressed cache over traditional paging. The project therefore consists of two parts: one implements traditional paging, and the other paging with a compressed cache. The compressed cache we are using is special in the sense that the buffer where compressed pages are stored adjusts its size dynamically. The initial cache size is very small and grows as and when needed; however, there must be a limit beyond which it will not grow. In general, we try to achieve better memory usage and compression ratios so that we can store more data in a given memory area. ACP also attempts to compress kernel file caches, since these are stored in kernel memory data structures anyway. By storing them in the compressed cache and modifying the kernel not to cache the file cache separately, further optimization can be carried out. We will clearly point out the difference in performance and time lag between traditional paging and our Adaptive Compressed Paging.
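The two-part comparison can be sketched as a toy simulation. The workload, the relative costs (disk access 100, compression step 2), and the eviction policy are all assumptions chosen for illustration; the point is only the shape of the experiment: run the same access pattern once with evictions going to disk and once with evictions going to an in-RAM compressed cache, then compare the accumulated cost.

```python
import random
import zlib

PAGE_SIZE = 4096
DISK_COST, COMPRESS_COST = 100, 2   # assumed relative access costs

def simulate(use_compressed_cache, accesses=1000, resident=8, total=32):
    random.seed(0)                  # identical workload for both runs
    ram, cache, cost = {}, {}, 0
    for _ in range(accesses):
        n = random.randrange(total)
        if n in ram:
            continue                # hit in uncompressed RAM: free
        if use_compressed_cache and n in cache:
            zlib.decompress(cache.pop(n))
            cost += COMPRESS_COST   # served from the compressed cache
        else:
            cost += DISK_COST       # page must come from disk
        if len(ram) >= resident:    # make room: evict the oldest resident page
            victim = next(iter(ram))
            data = ram.pop(victim)
            if use_compressed_cache:
                cache[victim] = zlib.compress(data)
                cost += COMPRESS_COST
        ram[n] = bytes(PAGE_SIZE)   # dummy page contents
    return cost

assert simulate(True) < simulate(False)   # compressed caching wins here
```

The real project replaces the cost constants with measured timings and presents the two curves side by side.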
