
Chapter 9: Memory Management

Memory addressing: physical vs. logical
Swapping
Contiguous allocation
Paging
Segmentation

Types of Addresses
Symbolic addresses
e.g., variable names

Absolute addresses
Code that must run at a specific position in computer memory
Specific addresses generated, e.g., by the compiler

Relocatable addresses
Can be bound to specific addresses, after compile time

Address Spaces

Logical (a.k.a. virtual)
Addresses manipulated by a program
Define the logical address space

Physical
Addresses of physical hardware memory

Conversion from logical to physical
By the memory management unit (MMU)

Address Binding Time

During compilation
But generally we don't know where the program will be loaded when it is compiled

During load time
In order for a program to be initially loaded, decisions must be made about where it will execute in computer memory, so at least the initial specific addresses must be bound

During execution
We may want to move a program, during execution, from one region of memory to another
Addresses will then have to be re-bound

Some Memory Management Issues


Is all of a program loaded into memory at once?
If we load the program in parts, when are the parts loaded into memory?
How do we divide the program into parts?
Does the program always stay in memory once loaded?
Unloaded only when it terminates?

Methods of Program Loading


Static
All of a program is loaded into memory at once

Dynamic loading
A routine is not loaded until it is called
Not operating system dependent

Dynamic linking
Libraries are loaded on an as-needed basis
E.g., not all executable programs need to have statically linked copies of the standard I/O libraries
Needs operating system support; linking of symbols occurs at run time
Can be tied into a versioning system

Is memory allocated contiguously?


A single sequence of addresses
Or multiple separated sequences of addresses

Overlays
A program is divided by the programmer into multiple components, or overlays; overlay code/data is loaded when it is needed, which requires an overlay driver

How do we deal with memory protection?


When a program tries to access code/data outside of its address space, we need to trap this
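One classic way to trap such out-of-range accesses is a base/limit check; a minimal sketch, with hypothetical register values (the notes only say the access must be trapped, not how):

```python
# A minimal sketch of base/limit protection for contiguous allocation.
# BASE and LIMIT are hypothetical register values, not from the notes.

BASE, LIMIT = 30000, 12000   # assumed base/limit register contents

def check(logical_addr):
    """Relocate a legal logical address; trap anything out of range."""
    if 0 <= logical_addr < LIMIT:
        return BASE + logical_addr        # legal: add the base register
    raise MemoryError("addressing error: trap to the O/S")

print(check(100))   # 30100
```

In real hardware this comparison happens on every memory reference, and the trap transfers control to the operating system.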

Swapping

A running process needs to be in main memory (RAM)
A process in other states (e.g., blocked, waiting) may be saved out to disc
E.g., if it will be waiting a relatively long time and RAM is scarce
Use separate, fast discs
Pending I/O must be saved in O/S buffers for a swapped process

Contiguous Memory Allocation

Assume for now:
All of the program is loaded at once

Contiguous memory allocation
All of the program is loaded into a single area of RAM

Swap out/swap in
Relocatable code is useful
If the program uses absolute addresses, it must be swapped back into the same memory area
Swapping is slow!

O/S divides user memory up into partitions
Keeps a table of these partitions
One process per partition
Partition methods:
Fixed size partitions
Limits degree of multiprogramming
Variable size partitions

Variable Size Partition Issues


Dynamic memory allocation
O/S table with the start and end of allocated partitions, and of unallocated partitions
Unallocated partitions are holes
When memory is returned, merge it with neighboring holes
Allocation of a new partition of size n:
First fit, best fit, worst fit
First fit may be better: faster, and similar to best fit in utilization

Paging

Non-contiguous memory allocation
Memory is divided into physical frames
Fixed size blocks, e.g., 1024 bytes
Process is divided into a sequence of logical pages
Frame size same as page size
Still assuming all of a process is loaded in at once

Fragmentation

External
Free blocks of memory too small to be used
Compaction can be used; needs relocatable code

Internal
Allocating partitions larger than requested; unused space in a process's partition (more typical with fixed size partitions)

Example

A program (process) with logical pages page0, page1, page2 is loaded into memory frames frame0 through frame5 (base 10 addresses).
What physical addresses correspond to logical addresses 0, 1024, 3000 (assuming 1024 page size)?
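The translation asked for above can be sketched as follows; the page-to-frame mapping here is hypothetical, since the slide's figure does not survive in the text:

```python
# Logical-to-physical translation with 1024-byte pages.
# The page table below is an assumed mapping for illustration.

PAGE_SIZE = 1024
page_table = {0: 2, 1: 4, 2: 5}   # logical page -> physical frame (assumed)

def translate(logical):
    page, offset = divmod(logical, PAGE_SIZE)   # split into page, offset
    return page_table[page] * PAGE_SIZE + offset

print(translate(0))     # 2048: page 0, offset 0   -> frame 2
print(translate(1024))  # 4096: page 1, offset 0   -> frame 4
print(translate(3000))  # 6072: page 2, offset 952 -> 5*1024 + 952
```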

Address Structure

Usually the page/frame size is a power of two
E.g., 1024, 2048, 4096
This gives memory addresses two parts:
Top bits: page number
Bottom bits: page offset

Example

1024 page size, 16-bit address
Offset is the lower 10 bits (2^10 = 1024)
Top 6 bits are the page number

What about 32 bit addresses with 4096 page size?
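Splitting an address into its two parts is a shift and a mask; a small sketch using the 16-bit, 1024-byte-page example from above:

```python
# Split an address into page number and offset with a shift and a mask.
# 1024-byte pages give 10 offset bits (2^10 = 1024); in a 16-bit
# address the remaining top 6 bits are the page number.

OFFSET_BITS = 10
OFFSET_MASK = (1 << OFFSET_BITS) - 1   # 0b1111111111 = 1023

def split(addr):
    return addr >> OFFSET_BITS, addr & OFFSET_MASK

print(split(3000))   # (2, 952): page 2, offset 952

# With 32-bit addresses and 4096-byte pages the same formula gives
# 12 offset bits (2^12 = 4096) and 32 - 12 = 20 page-number bits.
```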

Page Table

Each process with an address space needs to have a page table
Maintained by the operating system (kernel space)

The page table maps from logical page numbers to frame numbers

One strategy
Each page table contains the full set of possible entries
Size: 2^(number of bits in the address for the page number)
E.g., 1024-byte page, 16-bit address: 6-bit page number, so 2^6 = 64 entries in each page table

Question: What is the size of a process's page table for a 4096 page size and 32-bit addresses?

Thread vs. Heavy Weight Process

Given the answer to the last question, what is a major advantage of threads as compared to heavy weight processes?
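The page-table size calculation can be sketched in a few lines; the numbers reproduce the 16-bit, 1024-byte-page example worked above:

```python
# Entries in a full page table: 2^(address bits - offset bits),
# where offset bits = log2(page size).
import math

def page_table_entries(address_bits, page_size):
    offset_bits = int(math.log2(page_size))
    return 1 << (address_bits - offset_bits)

print(page_table_entries(16, 1024))   # 64, matching the example
```

The same function answers the question posed for 4096-byte pages and 32-bit addresses.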

Fragmentation

External
With paging, the external fragmentation problem is solved
Every frame of memory can be used

Internal
An average of half a frame of memory per process is lost
Because, in general, a process will not have a size in bytes that is evenly divisible by the page size

Page size is a factor
Smaller pages: less space lost to fragmentation
BUT: larger pages have less overhead (e.g., a smaller page table)

Translation Look-aside Buffer

Without hardware support for page tables
Each memory access requires a page table look-up, to retrieve the frame number for the page
Plus a memory access to retrieve info from the process frame

Standard solution: cache page table entries
Fast associative memory, the TLB
Each entry: a key and a value
E.g., between 64 and 1024 TLB cache entries

On a TLB hit, rapidly obtain the frame for the page
On a TLB miss, have to do two memory accesses
But then put the (key, value) pair into the TLB
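A toy cache in front of the page table can sketch the hit/miss behavior above. The page-table contents are assumed, and a real TLB is associative hardware with a fixed number of entries and an eviction policy, which this dict omits:

```python
# Toy TLB: consult the cache first; on a miss, do the (slow) page-table
# lookup and cache the translation for next time.

page_table = {0: 2, 1: 4, 2: 5}   # page -> frame (assumed mapping)
tlb = {}                          # cache of recent translations

def lookup(page):
    if page in tlb:                # TLB hit: one fast lookup
        return tlb[page], "hit"
    frame = page_table[page]       # TLB miss: extra page-table access
    tlb[page] = frame              # install the entry in the TLB
    return frame, "miss"

print(lookup(1))   # (4, 'miss')
print(lookup(1))   # (4, 'hit')
```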

Effective Access Time (EAT)

Calculating the effective time with which memory is accessed
Need information about TLB hits and misses
EAT = Prob-hit * hit-time + Prob-miss * miss-time

Given:
memory access = 100 time units
TLB access = 20 time units
85% of the time we get a TLB hit
What is EAT?
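A sketch of the EAT arithmetic for these numbers, assuming a hit costs one TLB access plus one memory access, and a miss costs one TLB access plus two memory accesses (page-table lookup, then the actual access):

```python
# EAT = Prob-hit * hit-time + Prob-miss * miss-time, using the
# slide's numbers: 100-unit memory access, 20-unit TLB access,
# 85% TLB hit rate.

MEM, TLB_COST, P_HIT = 100, 20, 0.85

hit_time = TLB_COST + MEM           # 120: TLB, then memory
miss_time = TLB_COST + 2 * MEM      # 220: TLB, page table, then memory

eat = P_HIT * hit_time + (1 - P_HIT) * miss_time
print(eat)   # 135.0
```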

Segmentation
Paging divides memory into equal-sized units (pages, frames)
Segmentation divides memory into different-sized units, depending on program parts
E.g., stack, data area, code area
