Objectives
To provide a detailed description of various ways of organizing memory hardware
To discuss various memory-management techniques, including paging and segmentation
Background
A program must be brought (from disk) into memory and placed within a process for it to be run
Main memory and registers are the only storage the CPU can access directly
Address Binding
Logical vs. Physical Address Space
Logical address: generated by the CPU; also referred to as a virtual address
Physical address: the address seen by the memory unit
The user program deals with logical addresses; it never sees the real physical addresses
Dynamic Linking
Static linking: system libraries and program code are combined by the loader into the binary program image
Dynamic linking: linking is postponed until execution time; a small piece of code, a stub, locates the appropriate memory-resident library routine
The stub replaces itself with the address of the routine, and executes the routine
Swapping
A process can be swapped temporarily out of memory to a
backing store, and then brought back into memory for continued
execution
Total physical memory space of processes can exceed
physical memory
Backing store: a fast disk large enough to accommodate copies of all memory images for all users; it must provide direct access to these memory images
Roll out, roll in: a swapping variant used for priority-based scheduling algorithms; a lower-priority process is swapped out so a higher-priority process can be loaded and executed
Major part of swap time is transfer time; total transfer time is
directly proportional to the amount of memory swapped
System maintains a ready queue of ready-to-run processes
which have memory images on disk
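Since transfer time scales linearly with the amount of memory swapped, the cost is easy to estimate. A minimal sketch, using illustrative numbers (a 100 MB process and a 50 MB/sec disk, neither taken from the slides above):

```python
# Swap time is dominated by transfer time, which is directly proportional
# to the amount of memory swapped.

def swap_transfer_time(process_mb: float, disk_rate_mb_per_sec: float) -> float:
    """Seconds to transfer a process image to or from the backing store."""
    return process_mb / disk_rate_mb_per_sec

out_time = swap_transfer_time(100, 50)  # swap out: 2.0 seconds
round_trip = 2 * out_time               # swap out + swap in: 4.0 seconds
print(out_time, round_trip)
```

Doubling the process size doubles the swap time, which is why swapping large processes dominates context-switch cost.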
Swapping (Cont.)
Does the swapped out process need to swap back in to same
physical addresses?
Depends on address binding method
Plus consider pending I/O to / from process memory space
Modified versions of swapping are found on many systems (e.g., UNIX, Linux, and Windows)
Context switch time can become very high when it includes swap time (example disk transfer rate: 50MB/sec)
Contiguous Allocation
Main memory must support both OS and user processes
Limited resource, must allocate efficiently
Contiguous allocation is one early method
Main memory is usually divided into two partitions: the resident operating system, usually held in low memory, and user processes, held in high memory
Multiple-partition allocation
Variable-partition scheme: a hole is a block of available memory; holes of various sizes are scattered throughout memory
When a process arrives, it is allocated memory from a hole large enough to accommodate it
Dynamic storage-allocation problem: how to satisfy a request of size n from a list of free holes?
First-fit: allocate the first hole that is big enough
Best-fit: allocate the smallest hole that is big enough; must search the entire list unless it is ordered by size; produces the smallest leftover hole
Worst-fit: allocate the largest hole; must also search the entire list; produces the largest leftover hole
First-fit and best-fit are better than worst-fit in terms of speed and storage utilization
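The three strategies can be sketched over a free list. A minimal illustration; the hole list and request size are made-up values, and holes are (start, size) pairs:

```python
# Three hole-selection strategies for dynamic storage allocation.
# Each returns the chosen (start, size) hole, or None if none fits.

def first_fit(holes, n):
    for hole in holes:
        if hole[1] >= n:          # first hole big enough wins
            return hole
    return None

def best_fit(holes, n):
    fits = [h for h in holes if h[1] >= n]
    return min(fits, key=lambda h: h[1]) if fits else None  # smallest adequate hole

def worst_fit(holes, n):
    fits = [h for h in holes if h[1] >= n]
    return max(fits, key=lambda h: h[1]) if fits else None  # largest hole

holes = [(0, 100), (200, 500), (900, 200)]
print(first_fit(holes, 150))   # (200, 500): first hole that fits
print(best_fit(holes, 150))    # (900, 200): smallest hole that fits
print(worst_fit(holes, 150))   # (200, 500): largest hole
```

Note that best-fit and worst-fit both scan the whole list, which is why first-fit tends to win on speed.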
Fragmentation
External fragmentation: total memory space exists to satisfy a request, but it is not contiguous
Internal fragmentation: allocated memory may be slightly larger than requested; the size difference is memory internal to a partition that is not being used
Fragmentation (Cont.)
Reduce external fragmentation by compaction: shuffle memory contents to place all free memory together in one large block
Compaction is possible only if relocation is dynamic and done at execution time
I/O problem: a job doing I/O must be latched in memory, or I/O must be done only into OS buffers
Segmentation
Memory-management scheme that supports user view of memory
A program is a collection of segments; a segment is a logical unit such as a main program, procedure, function, object, stack, or symbol table
[Figure: logical view of segmentation; segments 1, 2, and 3 in user space are mapped onto physical memory]
Segmentation Architecture
A logical address consists of a two-tuple: <segment-number, offset>
Segment table: maps two-dimensional logical addresses into one-dimensional physical addresses; each table entry has a base (the segment's starting physical address) and a limit (the length of the segment)
Protection: each segment-table entry carries a validation bit and read/write/execute privileges
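The base/limit lookup can be sketched in a few lines. A minimal model, not the hardware itself; the table values (1400/1000 and 6300/400) are illustrative:

```python
# Segment-table lookup for logical address <s, d>: the offset d must be
# less than the segment's limit, otherwise the hardware traps.

def translate(segment_table, s, d):
    base, limit = segment_table[s]
    if d >= limit:
        raise MemoryError("addressing error: offset beyond segment limit")
    return base + d

table = [(1400, 1000), (6300, 400)]  # (base, limit) per segment, illustrative
print(translate(table, 0, 53))       # 1400 + 53 = 1453
print(translate(table, 1, 399))      # 6300 + 399 = 6699
```

An offset of 400 into segment 1 would trap, since it equals the limit.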
Segmentation Hardware
Paging
The physical address space of a process can be noncontiguous; the process is allocated physical memory wherever it is available, avoiding external fragmentation
Divide physical memory into fixed-sized blocks called frames and logical memory into blocks of the same size called pages
To run a program of N pages, find N free frames and load the program
Address Translation Scheme
The address generated by the CPU is divided into a page number (p), used as an index into the page table, and a page offset (d), combined with the frame base address to form the physical address
For a logical address space of size 2^m and page size 2^n, the page number is the high-order m - n bits and the page offset is the low-order n bits
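Splitting an address this way is just a shift and a mask. A small sketch under assumed parameters (16-bit addresses, 4 KB pages, i.e. n = 12):

```python
# Split a logical address into (page number, offset) for page size 2**n.

def split_address(addr: int, n: int):
    page = addr >> n                 # high-order m - n bits
    offset = addr & ((1 << n) - 1)   # low-order n bits
    return page, offset

p, d = split_address(0x3A7F, 12)     # 16-bit address, 4 KB pages
print(p, hex(d))                     # 3 0xa7f
```

The frame base address replaces `p` after the page-table lookup, and `d` is carried over unchanged.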
Paging Hardware
Paging Example
Paging (Cont.)
Calculating internal fragmentation
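Internal fragmentation is the unused tail of the last allocated frame. A worked sketch with illustrative numbers (a 72,766-byte process and 2,048-byte pages, chosen here for the example):

```python
# A process of 72,766 bytes with 2,048-byte pages needs 35 full pages plus
# 1,086 bytes, so the last frame wastes 2,048 - 1,086 = 962 bytes.

def internal_fragmentation(process_bytes: int, page_size: int) -> int:
    remainder = process_bytes % page_size
    return 0 if remainder == 0 else page_size - remainder

print(internal_fragmentation(72766, 2048))  # 962
```

On average the waste is half a frame per process, which argues for small pages; page-table overhead argues the other way.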
Free Frames
[Figure: free-frame list and page table, before allocation and after allocation]
Implementation of Page Table
The page table is kept in main memory; a page-table base register (PTBR) points to it
In this scheme every data/instruction access requires two memory accesses: one for the page table and one for the data/instruction
On a TLB miss, the translation is loaded into the TLB for faster access next time
Associative Memory
Associative memory: all entries are searched in parallel
Each entry maps a Page # to a Frame #
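The behavior (not the parallel hardware) can be modeled with a map that is consulted before the page table. A minimal sketch; the page-table contents are made up:

```python
# TLB as a small fast page->frame map; on a miss, fall back to the
# in-memory page table (an extra memory access) and cache the result.

tlb = {}                        # searched "in parallel" in real hardware
page_table = {0: 5, 1: 9, 2: 1}  # illustrative page->frame mapping

def lookup(page: int) -> int:
    if page in tlb:              # TLB hit: one memory access total
        return tlb[page]
    frame = page_table[page]     # TLB miss: consult page table in memory
    tlb[page] = frame            # load into TLB for faster access next time
    return frame

print(lookup(1))  # 9 (miss, then cached)
print(lookup(1))  # 9 (hit)
```

A real TLB is tiny (tens to hundreds of entries), so entries are also evicted, which this sketch omits.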
Effective Access Time (EAT)
Hit ratio α: the percentage of times that a page number is found in the TLB
If a TLB search takes ε and a memory access takes 1 time unit:
EAT = (1 + ε)α + (2 + ε)(1 - α) = 2 + ε - α
Consider α = 80%, ε = 20ns for TLB search, 100ns for memory access:
EAT = 0.80 × 120ns + 0.20 × 220ns = 140ns
Consider a more realistic hit ratio, α = 99%, with the same ε = 20ns and 100ns memory access:
EAT = 0.99 × 120ns + 0.01 × 220ns = 121ns
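The EAT formula is easy to check numerically. A small sketch that charges the TLB search time ε on both hits and misses, with the 20ns/100ns figures from the example above:

```python
# Effective access time: a TLB hit costs eps + mem (one memory access),
# a miss costs eps + 2*mem (page-table access, then the data access).

def eat(alpha: float, eps: float, mem: float) -> float:
    hit = eps + mem
    miss = eps + 2 * mem
    return alpha * hit + (1 - alpha) * miss

print(eat(0.80, 20, 100))  # ~140 ns
print(eat(0.99, 20, 100))  # ~121 ns
```

Raising the hit ratio from 80% to 99% cuts the penalty of the second memory access to almost nothing.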
Memory Protection
Memory protection is implemented by associating a protection bit with each frame, indicating whether read-only or read-write access is allowed
A valid-invalid bit attached to each page-table entry indicates whether the page is in the process's logical address space
Shared Pages
Shared code: one copy of read-only (reentrant) code shared among processes (e.g., text editors, compilers, window systems)
The pages for the private code and data can appear
anywhere in the logical address space
Memory structures for paging can get huge; common methods for structuring the page table:
Hierarchical Paging
Hashed Page Tables
Inverted Page Tables
Hierarchical Page Tables
Break up the logical address space into multiple levels of page tables; a simple technique is the two-level page table
Since the page table is itself paged, the page number is further divided into an outer page number (p1), an index into the outer page table, and an inner page number (p2), the displacement within the page of the inner page table
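The two-level split is again shifts and masks. A sketch assuming the common 32-bit layout with 4 KB pages: 10-bit p1, 10-bit p2, 12-bit offset:

```python
# Two-level paging: split a 32-bit logical address into
# outer page number p1 (10 bits), inner page number p2 (10 bits),
# and offset d (12 bits).

def split_two_level(addr: int):
    d = addr & 0xFFF             # low 12 bits: offset within the page
    p2 = (addr >> 12) & 0x3FF    # next 10 bits: index into inner page table
    p1 = addr >> 22              # high 10 bits: index into outer page table
    return p1, p2, d

print(split_two_level(0xCAFEBABE))  # (811, 1003, 2750)
```

Translation then indexes the outer table with p1 to find an inner table, indexes that with p2 to find the frame, and appends d.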
Address-Translation Scheme
64-bit Logical Address Space
Even two-level paging is insufficient: with a two-level scheme, the inner page tables could be 2^10 4-byte entries each
But then the outer page table would need 2^42 entries; even with a 2nd outer level added, in the following example the 2nd outer page table is still 2^34 bytes in size
Hashed Page Tables
The virtual page number is hashed into a page table; each entry contains a chain of elements hashing to the same location
Each element contains (1) the virtual page number, (2) the value of the mapped page frame, and (3) a pointer to the next element
Virtual page numbers are compared along the chain; if a match is found, the corresponding physical frame is extracted
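The chain structure can be modeled with buckets of (VPN, frame) pairs. A minimal sketch; the bucket count, VPNs, and frame numbers are all illustrative:

```python
# Hashed page table: the virtual page number (VPN) hashes to a bucket;
# elements in the bucket's chain are compared by VPN.

NUM_BUCKETS = 16
buckets = [[] for _ in range(NUM_BUCKETS)]

def insert(vpn: int, frame: int):
    buckets[vpn % NUM_BUCKETS].append((vpn, frame))

def lookup(vpn: int):
    for entry_vpn, frame in buckets[vpn % NUM_BUCKETS]:
        if entry_vpn == vpn:      # compare VPNs along the chain
            return frame
    return None                   # unmapped: a page fault in a real system

insert(0x12345, 7)
insert(0x12355, 8)                # same bucket as 0x12345 (collision)
print(lookup(0x12345))            # 7
print(lookup(0x12355))            # 8
```

This is why hashed tables suit sparse 64-bit address spaces: only mapped pages consume entries.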
Inverted Page Table
Rather than each process having its own page table, keep one entry for each real frame of memory; this decreases the memory needed to store page-table entries but increases the time needed to search the table when a page reference occurs
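The trade-off is visible in a tiny model: one table for the whole machine, but lookup is a search. A sketch with made-up (pid, VPN) entries and a 4 KB frame size:

```python
# Inverted page table: one entry per physical frame, holding (pid, vpn).
# The index of the matching entry IS the frame number.

inverted = [(1, 0), (2, 0), (1, 3)]     # index = frame number, illustrative

def translate(pid: int, vpn: int, offset: int, frame_size: int = 4096):
    for frame, entry in enumerate(inverted):
        if entry == (pid, vpn):         # linear search: slow without hashing
            return frame * frame_size + offset
    raise MemoryError("page fault")

print(translate(1, 3, 100))  # frame 2 -> 2*4096 + 100 = 8292
```

Real systems avoid the linear scan by hashing into the inverted table, trading a little space for search time.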
Oracle SPARC Solaris
A modern 64-bit example with tightly integrated hardware: two hash tables, one for the kernel and one for all user processes, map virtual to physical memory
Each entry represents a contiguous area of mapped virtual memory (more efficient than one entry per page); each entry has a base address and a span (indicating the number of pages the entry represents)
On a TLB miss, the hardware walks the in-memory TSB (translation storage buffer) looking for the TTE (translation table entry) corresponding to the faulting address
If a match is found, the CPU copies the TSB entry into the TLB and translation completes
32-bit address limits led Intel to create the page address extension (PAE), allowing 32-bit systems to address more than 4GB of physical memory
Intel x86-64
In practice only 48-bit addressing is implemented, with a four-level paging hierarchy
It can also use PAE, so virtual addresses are 48 bits and physical addresses are 52 bits
Example: ARM Architecture
A 32-bit CPU supporting 4-KB and 16-KB pages as well as 1-MB and 16-MB pages (termed sections)
[Figure: a 32-bit address is divided into outer page, inner page, and offset fields, resolving either to a 4-KB or 16-KB page or to a 1-MB or 16-MB section]
End of Chapter 8