
PRESENTATION BY ALBERT DANQUAH EGHAN

In this presentation we will look at the following:

- The effect of hit ratio on average memory access time
- Cache update policies
- Write buffers
- Cache/main memory structure
- Fully associative mapping


A system's performance is measured in terms of the cache hit ratio (hit rate): the percentage of memory references found to be in the cache.


In other words, it is the probability of finding a requested item in the cache. When the processor makes a request for a memory reference, the request is first sought in the cache. This is because the requested item is simply returned in a fraction of the normal time if it is contained in the cache.


NB: In the memory hierarchy, the cache comes before the main memory. The cache controller determines whether an address placed into the Memory Address Register (MAR) by the CPU corresponds to something in the cache.


In the event of a cache miss, the block in main memory that contains the requested item is brought into the cache and the item is sent to the CPU. Transferring an entire memory block takes advantage of the principle of locality of reference to increase the chance that the next memory reference will correspond to a location that is already in the cache.


The time required to handle a cache miss depends in part on when the access to main memory is initiated.

Case 1: The memory reference is intercepted and the cache is checked first; only if a miss occurs is the access to main memory started. Such a cache is called a look-through cache.

The average memory access time when the cache is always checked first is:

TA = TC + (1 − h) × TM

where TC is the average cache access time, h is the hit ratio and TM is the average access time to main memory. Equivalently:

TA = hit time + (miss rate × miss penalty)
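As a minimal sketch of this formula (the function name, units and example numbers are illustrative, not from the slides):

```python
# Look-through cache: main memory is accessed only after the cache lookup
# misses, so every reference pays the cache access time T_C up front.
def look_through_time(tc_ns, h, tm_ns):
    """Average access time T_A = T_C + (1 - h) * T_M, in nanoseconds."""
    return tc_ns + (1 - h) * tm_ns

# e.g. a 30 ns cache, 250 ns memory and a 90% hit ratio (hypothetical numbers)
print(look_through_time(30, 0.90, 250))   # ≈ 55 ns
```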


Case 2: The miss penalty is reduced by starting the access to main memory in parallel with the cache lookup; if there is a hit, the memory access is cancelled. Such a cache is called a look-aside cache. The cost of a miss in this case is just the memory access time. However, this technique causes extra bus traffic and tends to waste bus bandwidth when accesses are cancelled.

The average memory access time in this case is:

TA = (h × TC) + (1 − h) × TM

or equivalently:

TA = (hit ratio × hit time) + (miss rate × miss penalty)
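The look-aside case can be sketched the same way (function name and example numbers are illustrative): a hit now costs only the cache time, since the parallel memory access is cancelled.

```python
# Look-aside cache: the memory access starts in parallel with the cache
# lookup, so a hit costs only T_C and a miss costs only T_M.
def look_aside_time(tc_ns, h, tm_ns):
    """Average access time T_A = h * T_C + (1 - h) * T_M, in nanoseconds."""
    return h * tc_ns + (1 - h) * tm_ns

# same hypothetical 30 ns cache and 250 ns memory at a 90% hit ratio
print(look_aside_time(30, 0.90, 250))   # ≈ 52 ns
```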


Let's go through the example below. Given a computer system that employs a cache with an access time of 30 ns and a main memory with a cycle time of 250 ns, suppose that the hit ratio for reads is 80%.
a) What would be the average access time for reads if the cache is a look-through cache?
b) What would be the average access time for reads if the cache is a look-aside cache?


Ans: TC = 30 ns; TM = 250 ns; h = 0.80
a) For the look-through cache, TA = TC + (1 − h) × TM. Hence the average read access time = 30 ns + (0.20 × 250 ns) = 80 ns.

b) For the look-aside cache, TA = (h × TC) + (1 − h) × TM. Hence the average read access time in this case = (0.8 × 30 ns) + (0.2 × 250 ns) = 74 ns.


For items residing in the cache, cache update policies come in two forms:
A) Write-through: data are written through to main memory as soon as they are placed in the cache. Reliable, but gives poor performance, since every write takes a time equivalent to the memory cycle time.

B) Write-back (copy-back): modifications are written to the cache, and are written to main memory only when the block containing the altered item is removed from the cache. Fast: some data may be overwritten before they are written back, and so need never be written to main memory at all. Poor reliability: unwritten data are lost whenever the machine crashes. Variations: write on close, update at regular intervals.


E.g. a system employs a write-through cache with a read hit ratio of 80%. The cache has an access time of 30 ns, the main memory has an access time of 250 ns, and the cache operates in look-through mode for reads. If 70% of all memory references are reads and 30% are writes, what would be the average access time for all memory references? Solution: The previous example showed that the average access time for reads is 80 ns. The average access time for writes is the same as the memory access time, 250 ns. Thus the average time for all memory references is 0.70 × 80 ns + 0.30 × 250 ns = 131 ns.
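The combined figure can be checked with a short sketch (the function name is illustrative). Note that the read term must use the 80 ns average read time derived earlier, not the raw 30 ns cache access time, since read misses still occur:

```python
# Write-through cache, look-through for reads: reads average T_C + (1-h)*T_M,
# while every write must go through to main memory and so costs T_M.
def write_through_avg(tc_ns, tm_ns, read_hit, read_frac):
    read_time = tc_ns + (1 - read_hit) * tm_ns   # 80 ns with these numbers
    write_time = tm_ns                           # 250 ns: writes go to memory
    return read_frac * read_time + (1 - read_frac) * write_time

print(round(write_through_avg(30, 250, 0.80, 0.70), 1))   # 131.0
```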


For writes that correspond to items not currently in the cache (i.e. write misses), the item could be updated in main memory only, without affecting the cache. This strategy is known as a write-around policy.


The alternative, known as a write-allocate policy, refers to updating the item in main memory and bringing the block containing the updated item into the cache. This policy anticipates further accesses to the same block, which would result in future cache hits. Write misses would force the CPU to wait until the update to main memory is completed.


Example:

A computer system employs a write-back cache with an 80% hit ratio for writes. The cache operates in look-aside mode and has a 70% read hit ratio. Reads account for 70% of all memory references and writes for 30%. If the main memory cycle time is 200 ns and the cache access time is 20 ns, what would be the average access time for all references (reads as well as writes)?


Solution: The average access time for reads = 0.7 × 20 ns + 0.3 × 200 ns = 74 ns. The average write time = 0.8 × 20 ns + 0.2 × 200 ns = 56 ns. Hence the overall average access time for combined reads and writes is 0.7 × 74 ns + 0.3 × 56 ns = 68.6 ns.
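A quick sketch checking this arithmetic (the function name is illustrative):

```python
# Look-aside average access time: h * T_C + (1 - h) * T_M, in nanoseconds.
def look_aside_avg(tc_ns, tm_ns, h):
    return h * tc_ns + (1 - h) * tm_ns

reads   = look_aside_avg(20, 200, 0.70)    # ≈ 74 ns
writes  = look_aside_avg(20, 200, 0.80)    # ≈ 56 ns
overall = 0.70 * reads + 0.30 * writes
print(round(overall, 1))   # 68.6
```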


To allow the CPU to continue execution without waiting for a write to main memory to complete, a write buffer is employed. The CPU quickly stores the data and destination memory address in the write buffer; the only time the CPU has to wait is when the write buffer becomes full.
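A toy model of this behaviour (all names and the buffer capacity here are invented for illustration; real write buffers are hardware FIFOs drained by the memory controller):

```python
from collections import deque

class WriteBuffer:
    """Sketch of a write buffer: the CPU enqueues (address, data) pairs and
    stalls only when the buffer is full."""

    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = deque()

    def cpu_write(self, address, data):
        if len(self.entries) >= self.capacity:
            self.drain_one()           # CPU must wait: buffer is full
        self.entries.append((address, data))

    def drain_one(self):
        address, data = self.entries.popleft()
        # ... the slow write to main memory would happen here ...

buf = WriteBuffer(capacity=2)
buf.cpu_write(0x1000, 42)
buf.cpu_write(0x1004, 43)
buf.cpu_write(0x1008, 44)   # buffer full: one entry drains first
print(len(buf.entries))     # 2
```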


Memory is logically partitioned into a number of blocks, each of which contains some number of consecutive words. Likewise, the cache is partitioned into a number of blocks (called lines or refill lines) of the same size as the memory blocks.


Three techniques are commonly used to map a block from main memory into a line within the cache:

- Fully associative mapping
- Direct mapping
- Set associative mapping


With fully associative mapping, a block from main memory can be placed into any available line within the cache. Hits are determined by simultaneously comparing the tag field from the referenced address with the tags of all lines within the cache.


A match signals a cache hit, and the referenced item can then be accessed from the cache line. If none of the tags in the cache matches that of the referenced address, a miss is signaled and the block containing the referenced item must be fetched from main memory.
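This lookup can be modelled in a few lines (the line size, tags and data below are invented for illustration; real hardware compares all the tags in parallel rather than in a loop):

```python
LINE_SIZE = 16   # bytes per line (assumed for this sketch)

# Each cache line: (valid bit, tag, data bytes).
lines = [
    (True,  0x12, list(range(16))),   # holds the block for addresses 0x120-0x12F
    (False, 0x00, [0] * 16),          # an invalid (empty) line
]

def lookup(address):
    tag, offset = divmod(address, LINE_SIZE)
    for valid, line_tag, data in lines:   # one comparator per line in hardware
        if valid and line_tag == tag:
            return ("hit", data[offset])
    return ("miss", None)

print(lookup(0x125))   # ('hit', 5)
print(lookup(0x300))   # ('miss', None)
```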


Since each cache line contains multiple words, the offset is required to select the specific word or byte within the line that is being referenced. Most systems are byte-addressable; hence the offset is the number of bytes from the beginning of the cache data line to the referenced byte.
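As a sketch of this address split for a byte-addressable machine with (assumed) 64-byte lines: the low bits select the byte within the line, and the remaining high bits form the tag.

```python
LINE_SIZE = 64                              # must be a power of two (assumed)
OFFSET_BITS = LINE_SIZE.bit_length() - 1    # 6 offset bits for 64-byte lines

def split_address(address):
    offset = address & (LINE_SIZE - 1)      # low 6 bits: byte within the line
    tag = address >> OFFSET_BITS            # everything above the offset
    return tag, offset

print(split_address(0x1A47))   # (105, 7), i.e. tag 0x69, offset 0x07
```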


The main disadvantage of the fully associative scheme is the expense of the separate comparator needed for each line in the cache, so that the tag field of the referenced address can be compared with all of the cache's tags simultaneously.



THANK YOU!!!

