
Chapter 4 Memory Organization

4.1 Memory Hierarchy


Memory is used for storing the programs and data required to perform a specific task.
For the CPU to operate at its maximum speed, it requires uninterrupted, high-speed access to the memories that
contain these programs and data. Some of the criteria to be considered when deciding which memory to
use are:
• Cost
• Speed
• Memory access time
• Data transfer rate
• Reliability

Fig 4.1 Memory hierarchy in a computer system    Fig 4.2 Types of memory in a computer system
A computer system contains various types of memory, such as auxiliary memory, cache memory, and main memory.
•Auxiliary Memory
Auxiliary memory is at the bottom of the hierarchy and is not connected to the CPU directly. Although slow, it is
present in large volume in the system because of its low cost. The average access time of auxiliary memory is usually
about 1000 times that of main memory.
•Main Memory
The main memory is at the second level of the hierarchy. The memory unit that communicates directly with the
CPU is called the main memory. It is the central storage unit in a computer system. It is a relatively large and fast
memory used to store programs and data during the computer operation.
•Cache Memory
Cache memory is at the top level of the memory hierarchy. This is a high speed memory used to increase the
speed of processing by making current programs and data available to the CPU at a rapid rate. Cache memory is
usually placed between the CPU and the main memory.

4.2 Main Memory


Types:
RAM (Random access memory)
ROM (Read only memory)
RAM (Random Access Memory)
RAM is the best known form of computer memory. RAM is considered "random access" because you can
access any memory cell directly if you know the row and column that intersect at that cell.
Types of RAM:
• Static RAM (SRAM)
• Dynamic RAM (DRAM)
SRAM vs. DRAM:
• SRAM stores a bit of data using the state of a flip-flop; a DRAM cell stores a bit with a capacitor and a transistor.
• SRAM has less storage capacity; DRAM has large storage capacity.
• SRAM retains its value indefinitely, as long as it is kept powered; DRAM needs to be refreshed frequently.
• SRAM is mostly used to build the CPU's cache memory; DRAM is used to build main memory.
• SRAM is faster and more expensive than DRAM; DRAM is slower and cheaper than SRAM.
ROM: ROM is used for storing programs that are permanently resident in the computer and for tables of constants that
do not change in value once the production of the computer is completed. The ROM portion of main memory is needed
for storing an initial program called the bootstrap loader, whose function is to start the computer's operating software
when power is turned on.
There are five basic ROM types:
•ROM - Read Only Memory
•PROM - Programmable Read Only Memory
•EPROM - Erasable Programmable Read Only Memory
•EEPROM - Electrically Erasable Programmable Read Only Memory
•Flash EEPROM memory
RAM and ROM Chips
• A RAM chip is better suited for communication with the CPU if it has one or more control inputs that select the chip
when needed.
• The block diagram of a RAM chip is shown in Fig 4.3(a); the capacity of the memory is 128 words of 8 bits (one byte)
per word.

(a) Block diagram (b) Function table


Fig 4.3 Typical RAM chip
ROM: Assume that a computer system needs 512 bytes of RAM and 512 bytes of ROM. The RAM and ROM chips to be
used are specified in Fig 4.4(a). The memory address map for this configuration is shown in Fig 4.4(b). The
component column specifies whether a RAM or a ROM chip is used. The hexadecimal address column
assigns a range of hexadecimal equivalent addresses for each chip. The address bus lines are listed in the third column.

Fig 4.4 (a) Typical ROM chip (b) Memory address map for a microcomputer

• A memory address map is a pictorial representation (see Fig 4.4(b)) of the address space assigned to each chip in the
system.
• To demonstrate with an example, assume that a computer system needs 512 bytes of RAM and 512 bytes of ROM.
• Each RAM chip has 128 bytes and needs seven address lines, whereas the ROM has 512 bytes and needs nine address
lines.
• The hexadecimal address column assigns a range of hexadecimal equivalent addresses to each chip.
• Lines 8 and 9 provide four distinct binary combinations that specify which RAM chip is selected.
• When line 10 is 0, the CPU selects a RAM chip; when it is 1, it selects the ROM.
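The address decoding just described can be sketched as a small Python helper. This is an illustrative model, not part of the original design: bit 9 of a 10-bit address plays the role of line 10, bits 7-8 play lines 8-9, and bits 0-6 play lines 1-7.

```python
def decode(addr):
    """Model the Fig 4.4(b) address map for a 10-bit address (0-1023)."""
    assert 0 <= addr < 1024
    if addr & (1 << 9):                     # line 10 = 1 -> ROM selected
        return ("ROM", addr & 0x1FF)        # lines 1-9 address the 512-byte ROM
    chip = (addr >> 7) & 0b11               # lines 8-9 pick one of four RAM chips
    return (f"RAM{chip + 1}", addr & 0x7F)  # lines 1-7 address 128 bytes

print(decode(0x000))   # ('RAM1', 0)   -- first byte of RAM chip 1
print(decode(0x0FF))   # ('RAM2', 127) -- last byte of RAM chip 2
print(decode(0x3FF))   # ('ROM', 511)  -- last byte of the ROM
```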

Memory Connection to CPU


RAM and ROM chips are connected to a CPU through the data
and address buses. The connection of memory chips to the CPU is
shown in Fig 4.5. This configuration gives a memory capacity of 512
bytes of RAM and 512 bytes of ROM. It implements the memory map of
Fig 4.4(b).
Fig 4.5 Memory connection to the CPU
Each RAM receives the seven low-order bits of the address bus to select one of 128 possible bytes. The
particular RAM chip selected is determined from lines 8 and 9 in the address bus. This is done through a 2 x 4 decoder
whose outputs go to the CS1 inputs in each RAM chip. Thus, when address lines 8 and 9 are equal to 00, the first RAM
chip is selected. When 01, the second RAM chip is selected, and so on. The RD and WR outputs from the
microprocessor are applied to the inputs of each RAM chip.
The selection between RAM and ROM is achieved through A10. The RAMs are selected when the bit in A10 is
0, and the ROM when this bit is 1. The other chip select CS1 input in the ROM is connected to the RD control line for
the ROM chip to be enabled only during a read operation. Address bus lines 1 to 9 are applied to the input address of
ROM without going through the decoder. This assigns addresses 0 to 511 to RAM and 512 to 1023 to ROM. The data
bus of the ROM has only an output capability, whereas the data bus connected to the RAMs can transfer information in
both directions.

Cache memory
•If the active portions of the program and data are placed in a fast small memory, the average memory access
time can be reduced
•Such a fast small memory is referred to as cache memory
•The cache is the fastest component in the memory hierarchy and approaches the speed of the CPU
•When CPU needs to access memory, the cache is examined
•If the word is found in the cache, it is read from the fast memory
•If the word addressed by the CPU is not found in the cache, the main memory is accessed to read the word
•When the CPU refers to memory and finds the word in cache, it is said to produce a hit
•Otherwise, it is a miss
•The performance of cache memory is frequently measured in terms of a quantity called hit ratio
Hit ratio = hit / (hit + miss)
•The basic characteristic of cache memory is its fast access time
•Therefore, very little or no time must be wasted when searching the words in the cache
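The hit-ratio formula above, together with the effective access time it implies, can be checked numerically. The timings below are illustrative assumptions, not values from the text:

```python
def hit_ratio(hits, misses):
    """Hit ratio = hit / (hit + miss)."""
    return hits / (hits + misses)

def effective_access_time(h, t_cache, t_main):
    """Average time per reference: hits served by cache, misses by main memory."""
    return h * t_cache + (1 - h) * t_main

h = hit_ratio(970, 30)                              # 970 hits out of 1000 references
print(h)                                            # 0.97
print(round(effective_access_time(h, 20, 200), 3))  # 25.4 (ns, illustrative timings)
```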

4.3 Mapping Functions


Mapping functions decide which main memory block occupies which line of the cache. Since there are fewer
cache lines than main memory blocks, an algorithm is needed to make this decision. The transformation of data from
main memory to cache memory is referred to as a mapping process; there are three types of mapping:
1) Associative mapping
2) Direct mapping
3) Set-associative mapping
Fig 4.6 Example of cache memory
To help understand the mapping procedure, we use the example in Fig 4.6.
Cache size: The larger the cache, the greater the number of gates involved in addressing it. The result is that larger
caches tend to be slightly slower than small ones.
4.4 Associative Mapping
•The fastest and most flexible cache organization uses an associative memory
•The associative memory stores both the address and data of the memory word
•This permits any location in cache to store any word from main memory
•The address value of 15 bits is shown as a five-digit octal number and its
corresponding 12-bit word is shown as a four-digit octal number as
given in Fig 4.7.
•A CPU address of 15 bits is placed in the argument register and the associative
memory is searched for a matching address
•If the address is found, the corresponding 12-bits data is read and sent to the
CPU
•If not, the main memory is accessed for the word
•If the cache is full, an address-data pair must be displaced to make room for a
pair that is needed and not presently in the cache, using a first-in, first-out
(FIFO) replacement policy (round-robin order).
Fig 4.7 Associative mapping cache (all numbers in octal)
4.5 Direct Mapping
•Associative memory is expensive compared to RAM
•In the general case, there are 2^k words in cache memory and 2^n words in main memory
(in our example, k = 9 and n = 15)
•The n bit memory address is divided into two fields: k-bits for the index and n-k bits for the tag field
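The tag/index split can be sketched directly, using the chapter's sizes (k = 9, n = 15); the helper name is illustrative:

```python
K = 9   # cache index bits (2^9 = 512-word cache), out of a 15-bit address

def split_address(addr):
    """Split an address into (tag, index): the k low-order bits index the cache."""
    index = addr & ((1 << K) - 1)   # k low-order bits
    tag = addr >> K                 # remaining n - k bits
    return tag, index

tag, index = split_address(0o02000)   # octal, as in the chapter's figures
print(oct(tag), oct(index))           # 0o2 0o0 -- tag 02, index 000
```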

Fig 4.8 Addressing relationships between main and cache memories

Consider the numerical example shown in


Fig 4.9. The word at address zero is presently stored in
the cache (index = 000, tag = 00, data = 1220). Suppose
that the CPU now wants to access the word at address
02000. The index address is 000, so it is used to access
the cache. The two tags are then compared. The cache
tag is 00 but the address tag is 02, which does not
produce a match. Therefore, the main memory is
accessed and the data word 5670 is transferred to the
CPU. The cache word at index address 000 is then
replaced with a tag of 02 and data of 5670.
The direct-mapping example just described uses
a block size of one word. The same organization but
using a block size of B words is shown in Fig 4.10.
The index field is now divided into two parts:
the block field and the word field.
Fig 4.9 Direct mapping cache organization
In a 512-word cache there are 64 blocks of 8 words each, since 64 x 8 = 512.
The block number is specified with a 6-bit field and the word within the block is specified with a 3-bit field.
The tag field stored within the cache is common to all eight words of the same block.
Every time a miss occurs, an entire block of eight words must be transferred from main memory to cache memory.
Although this takes extra time, the hit ratio will most likely improve with a larger block size because of the sequential
nature of computer programs.
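With 8-word blocks, the index splits further into a 6-bit block field and a 3-bit word field. A sketch of the three-way split for the 15-bit address:

```python
WORD_BITS, BLOCK_BITS = 3, 6   # 8 words per block, 64 blocks (64 x 8 = 512)

def fields(addr):
    """Split a 15-bit address into (tag, block, word) per Fig 4.10's layout."""
    word = addr & ((1 << WORD_BITS) - 1)
    block = (addr >> WORD_BITS) & ((1 << BLOCK_BITS) - 1)
    tag = addr >> (WORD_BITS + BLOCK_BITS)
    return tag, block, word

print(fields(0o02000))   # (2, 0, 0): tag 02, block 0, word 0
```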

Fig 4.10 Direct mapping cache with block size of 8 words


4.6 Set Associative Mapping
The disadvantage of direct mapping is that two words with the same index in their addresses but with different
tag values cannot reside in cache memory at the same time. Set-associative mapping is an improvement over direct
mapping in that each word of cache can store two or more words of memory under the same index address.

Each index address refers to two data words and their associated tags
Each tag requires six bits and each data word has 12 bits, so the word
length is 2*(6+12) = 36 bits
An index address of nine bits can accommodate 512 words. Thus the size
of cache memory is 512 x 36.
It can accommodate 1024 words of main memory since each word of
cache contains two data words.
In general, a set-associative cache of set size k will accommodate k
words of main memory in each word of cache.
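A minimal model of the two-way lookup described above, with the cache held as Python lists rather than hardware (the data values are illustrative):

```python
SETS = 512
cache = [[None, None] for _ in range(SETS)]   # two (tag, data) ways per set

def lookup(tag, index):
    """Return the data word on a hit, or None on a miss."""
    for way in cache[index]:
        if way is not None and way[0] == tag:
            return way[1]
    return None

cache[0][0] = (0o01, 0o3450)   # two words with the same index but
cache[0][1] = (0o02, 0o5670)   # different tags coexist in one set
print(oct(lookup(0o02, 0)))    # 0o5670
print(lookup(0o07, 0))         # None -- miss: main memory must be accessed
```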

Fig 4.11 Two-way set-associative mapping cache.


Replacement algorithms: The most common replacement algorithms used are:
Random replacement: This control chooses one tag-data item for replacement at random.
First-In, First-Out (FIFO): This algorithm selects for replacement the item that has been in the set the longest.
Least Recently Used (LRU): This algorithm selects for replacement the item that has been least recently used by the
CPU. Both FIFO and LRU can be implemented by adding a few extra bits in each word of cache.

4.7 External Memory


Devices that provide backup storage are called auxiliary memory. The most common auxiliary memory devices
used in computer systems are magnetic disks and tapes. They are used for storing system programs, large data files, and
other backup information. Only programs and data currently needed by the processor reside in main memory. All other
information is stored in auxiliary memory and transferred to main memory when needed.

4.8 Magnetic Disks


A disk is a circular platter constructed of nonmagnetic material, called the substrate (an aluminum
or aluminum alloy material or glass substrates), coated with a magnetizable material.
Single read/write head: Some units use Single read/write head for each disk surface. In this type of unit, the track
address bits are used by a mechanical assembly to move the head into the specified track position before reading or
writing.
Separate read/write heads: Separate read/write heads are provided in some units for each track in each surface. The
address bits can then select a particular track electronically through a decoder circuit. This type of unit is more
expensive and is found only in very large computer systems.
The head is a relatively small device capable of reading from or writing to a portion of the platter rotating
beneath it. This gives rise to the organization of data on the platter in a concentric set of rings, called tracks.

Fig 4.12 Disk Data Layout Fig 4.13 Components of a Disk Drive
Each track is the same width as the head. There are thousands of tracks per surface. The tracks are typically divided
into hundreds of sectors per track, and these may be of either fixed or variable length. Fig 4.12 depicts this data layout.
Adjacent tracks are separated by gaps. This prevents, or at least minimizes, errors due to misalignment of the head or
simply interference of magnetic fields. Data are transferred to and from the disk in sectors (Fig 4.12).
Some disk drives accommodate multiple platters stacked vertically a fraction of an inch apart. Multiple arms
are provided (Fig 4.13). Multiple–platter disks employ a movable head, with one read-write head per platter surface. All
of the heads are mechanically fixed so that all are at the same distance from the center of the disk and move together.
Thus, at any time, all of the heads are positioned over tracks that are of equal distance from the center of the disk. The
set of all the tracks in the same relative position on the platter is referred to as a cylinder

SEEK TIME: Seek time is the time required to move the disk arm to the required track (~ 10 ms).
ROTATIONAL DELAY: Disks, other than floppy disks, rotate at speeds ranging from 3600 rpm (for handheld devices
such as digital cameras) up to, as of this writing, 20,000 rpm; at this latter speed, there is one revolution per 3 ms. Thus,
on the average, the rotational delay will be 1.5 ms.
TRANSFER TIME: The transfer time to or from the disk depends on the rotation speed of the disk in the following
fashion:

T = b / (rN)

where
T = transfer time
b = number of bytes to be transferred
N = number of bytes on a track
r = rotation speed, in revolutions per second

Thus the total average access time can be expressed as

Ta = Ts + 1/(2r) + b/(rN)

where Ts is the average seek time.


Note that on a zoned drive, the number of bytes per track is variable, complicating the calculation.
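The transfer-time and total-access-time formulas can be checked numerically. The drive parameters below (7500 rpm, 4 ms seek, 500,000 bytes per track, one 512-byte sector) are illustrative assumptions, not values from the text:

```python
def transfer_time(b, N, r):
    """T = b / (r N): the fraction of a revolution needed to pass b of N bytes."""
    return b / (r * N)

def total_access_time(Ts, b, N, r):
    """Ta = Ts + 1/(2r) + b/(rN): seek + average rotational delay + transfer."""
    return Ts + 1 / (2 * r) + transfer_time(b, N, r)

r = 7500 / 60                                   # 7500 rpm -> 125 rev/s
t = total_access_time(0.004, 512, 500_000, r)   # times in seconds
print(round(t * 1000, 3))                       # 8.008 (milliseconds)
```

The transfer itself is tiny here; seek time and rotational delay dominate, which is why average access time barely depends on the sector size.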

4.9 RAID Technology


A multiple-disk database design is known as RAID (Redundant Array of Independent Disks). The RAID
scheme consists of seven levels zero through six.
RAID characteristics
1. RAID is a set of physical disk drives viewed by the operating system as a single logical drive.
2. Data are distributed across the physical drives of an array in a scheme known as striping.
3. Redundant disk capacity is used to store parity information, which guarantees data recoverability in case of a disk
failure.
With multiple disks, the probability of failure increases even though multiple heads and actuators operating
simultaneously achieve higher I/O and transfer rates. To compensate for this decreased reliability, RAID makes use of
stored parity information that enables the recovery of data lost due to a disk failure.
RAID levels:
Category            Level  Description                                Disks Required
Striping            0      Non-redundant                              N
Mirroring           1      Mirrored                                   2N
Parallel access     2      Redundant via Hamming code                 N + m
                    3      Bit-interleaved parity                     N + 1
Independent access  4      Block-interleaved parity                   N + 1
                    5      Block-interleaved distributed parity       N + 1
                    6      Block-interleaved dual distributed parity  N + 2
N = number of data disks; m is proportional to log N
RAID Level 0
RAID level 0 does not provide redundant storage. All N disks in the disk array appear to the operating system as a
single logical disk drive. The logical disk is divided into strips; these strips may be physical blocks, sectors, or some
other unit, and all the user and system data are distributed across them in segments called strips. A stripe is the set of
strips at the same location on all of the disks.

Fig 4.14 (a) RAID 0 (Nonredundant)


A set of logically consecutive strips that maps exactly one strip to each array member is referred to as a stripe. In an n-
disk array, the first n logical strips are physically stored as the first strip on each of the n disks, forming the first stripe;
the second n strips are distributed as the second strips on each disk; and so on. The strips are mapped round robin to
consecutive physical disks in the RAID array.
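The round-robin strip mapping can be sketched as follows; this models the array-management software's address translation, with an illustrative helper name:

```python
def map_strip(logical_strip, n_disks):
    """RAID 0: logical strip i lives on disk i mod n, within stripe i div n."""
    return logical_strip % n_disks, logical_strip // n_disks

# With 4 disks, strips 0-3 form stripe 0, strips 4-7 form stripe 1, and so on.
for i in range(6):
    disk, stripe = map_strip(i, 4)
    print(f"strip {i} -> disk {disk}, stripe {stripe}")
```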

Fig 4.14 (b) indicates the use of array


management software to map between logical and
physical disk space. This software may execute
either in the disk subsystem or in a host computer.

RAID 0 FOR HIGH DATA TRANSFER CAPACITY

There are two requirements:
1) A high transfer capacity must exist along the entire path between host memory and the individual disk drives. This
includes internal controller buses, host system I/O buses, I/O adapters, and host memory buses.
2) A single I/O request involves the parallel transfer of data from multiple disks, increasing the effective transfer rate
compared to a single-disk transfer.
Fig 4.14 (b) Data Mapping for a RAID Level 0 Array

RAID 0 FOR HIGH I/O REQUEST RATE Transaction performance also depends on the strip size. If the strip
size is relatively large, so that a single I/O request involves only a single disk access, then multiple waiting I/O requests
can be handled in parallel, reducing the queuing time for each request.
Applications: Used in supercomputers in which performance and capacity are primary concerns and low cost is more
important than improved reliability.
Main advantage: If two different I/O requests are pending for two different blocks of data, then there is a good chance
that the requested blocks are on different disks. Thus, the two requests can be issued in parallel, reducing the I/O
queuing time.

RAID Level 1
In RAID 1, each logical strip is mapped to two separate physical disks so that every disk in the array has a
mirror disk that contains the same data.

Fig 4.14 (c) RAID 1 (Mirrored)


As Fig 4.14 (c) shows, data striping is used, as in RAID 0.
There are a number of positive aspects to the RAID 1 organization:
1. A read request can be serviced by either of the two disks that contains the requested data, whichever one involves the
minimum seek time plus rotational latency.
2. A write request requires that both corresponding strips be updated, but this can be done in parallel. Thus, the write
performance is dictated by the slower of the two writes
(i.e., the one that involves the larger seek time plus rotational latency).
3. Recovery from a failure is simple. When a drive fails, the data may still be accessed from the second drive.
The principal disadvantage of RAID 1 is the cost; it requires twice the disk space of the logical disk that it supports.

RAID Level 2
RAID level 2 uses a parallel-access technique. Data striping is used.
The strips are very small, with data distributed in small strips often as small as a single byte or word.
An error-correcting code is calculated across corresponding bits on each data disk, and the bits of the code are stored in
the corresponding bit positions on multiple parity disks. Typically, a Hamming code is used, which is able to
correct single-bit errors and detect double-bit errors.
The number of redundant disks is proportional to the log of the number of data disks.
On a single read, all disks are simultaneously accessed. The requested data and the associated error-correcting code are
delivered to the array controller.
RAID 2 would be an effective choice only in an environment in which many disk errors occur; it has not been
implemented commercially.

Fig 4.14 (d) RAID 2 (Redundancy through Hamming code)


RAID Level 3
This also employs parallel access technique.
The strips are very small with data distributed in small strips.
It requires only a single redundant disk.
A simple parity bit is computed for the set of individual bits in the same position on all of the data disks and written on
the redundant disk as shown in Fig 4.14 (e).

Fig 4.14 (e) RAID 3 (Bit-interleaved parity)


REDUNDANCY: In the event of a drive failure, the parity drive is accessed and data is reconstructed from the
remaining devices. Once the failed drive is replaced, the missing data can be restored on the new drive and operation
resumed.
Data reconstruction is simple. Consider an array of five drives in which X0 through X3 contain data and X4 is the
parity disk. The parity for the ith bit is calculated as follows:
X4(i) = X3(i) ⊕ X2(i) ⊕ X1(i) ⊕ X0(i), where ⊕ is the exclusive-OR function.
Suppose that drive X1 has failed. If we add X4(i) ⊕ X1(i) to both sides of the preceding equation, we get
X1(i) = X4(i) ⊕ X3(i) ⊕ X2(i) ⊕ X0(i)
Thus, the contents of each strip of data on X1 can be regenerated from the contents of the corresponding strips on the
remaining disks in the array. This principle is true for RAID levels 3 through 6.
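The reconstruction argument can be verified directly. A sketch using one-byte strips with arbitrary example values:

```python
from functools import reduce
from operator import xor

def parity(strips):
    """Byte-wise XOR across corresponding positions of all strips."""
    return bytes(reduce(xor, col) for col in zip(*strips))

x0, x1, x2, x3 = b"\x12", b"\x34", b"\x56", b"\x78"
x4 = parity([x0, x1, x2, x3])          # the strip on the parity disk

# Drive X1 fails: XOR the surviving data strips with the parity strip
rebuilt = parity([x0, x2, x3, x4])
print(rebuilt == x1)                   # True
```

Because XOR is associative and self-cancelling, the same `parity` call regenerates any single missing strip, which is exactly the principle used by RAID levels 3 through 6.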

RAID Level 4
RAID 4 uses an independent-access technique: each member disk operates independently, so separate I/O requests can
be satisfied in parallel.
It is more suitable for applications that require high I/O request rates and relatively less suited for applications that
require high data transfer rates.
The strips are relatively large (e.g., 16K or 32K).
A bit-by-bit parity strip is calculated across corresponding strips on each data disk, and the parity bits are stored in the
corresponding strip on the parity disk, as shown in Fig 4.14(f).
• Poor write performance on small size requests
• The single parity drive is a bottleneck

Fig 4.14 (f) RAID 4 (Block-level parity)


Consider an array of five drives in which X0 through X3 contain data and X4 is the parity disk. Suppose
that a write is performed that only involves a strip on disk X1. Initially, for each bit i, we have the following
relationship:
X4(i) = X3(i) ⊕ X2(i) ⊕ X1(i) ⊕ X0(i)
After the update, with potentially altered bits indicated by a prime symbol:
X4′(i) = X3(i) ⊕ X2(i) ⊕ X1′(i) ⊕ X0(i)   (a change in X1 also affects the parity disk X4)
       = X3(i) ⊕ X2(i) ⊕ X1′(i) ⊕ X0(i) ⊕ 0
       = X3(i) ⊕ X2(i) ⊕ X1′(i) ⊕ X0(i) ⊕ X1(i) ⊕ X1(i)   (exclusive-OR of any quantity with itself is 0)
       = X3(i) ⊕ X2(i) ⊕ X1(i) ⊕ X0(i) ⊕ X1(i) ⊕ X1′(i)   (by reordering)
       = X4(i) ⊕ X1(i) ⊕ X1′(i)   (replace the first four terms by X4(i))
To calculate the new parity, the array management software must read the old user strip and the old parity
strip. Then it can update these two strips with the new data and the newly calculated parity. Thus, each strip write
involves two reads and two writes.
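The two-reads-two-writes update can be sketched with one-byte strips; the old parity value below is an arbitrary example, since the other data strips are not needed for the update:

```python
def update_parity(old_data, new_data, old_parity):
    """X4'(i) = X4(i) XOR X1(i) XOR X1'(i), applied byte-wise."""
    return bytes(p ^ o ^ n for p, o, n in zip(old_parity, old_data, new_data))

x1_old, x1_new = b"\x34", b"\x99"      # strip being overwritten on disk X1
x4_old = b"\x08"                       # old parity (arbitrary example value)
x4_new = update_parity(x1_old, x1_new, x4_old)
print(x4_new.hex())                    # a5
# Only x1_old and x4_old had to be read; x1_new and x4_new are then written.
```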

RAID Level 5
RAID 5 is organized in a similar fashion to RAID 4. The difference is that RAID 5 distributes the parity strips across
all disks. A typical allocation is a round-robin scheme, as illustrated in Fig 4.14(g).

Fig 4.14 (g) RAID 5 (Block-level distributed parity)


For an n-disk array, the parity strip is on a different disk for the first n stripes, and the pattern then repeats.
The distribution of parity strips across all drives avoids the potential I/O bottleneck found in RAID 4.
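One common convention for the round-robin rotation assigns the parity strip to successive disks in reverse order, as in Fig 4.14(g); this is a sketch, and the exact placement varies between implementations:

```python
def parity_disk(stripe, n_disks):
    """Parity for stripe s sits on disk (n-1) - (s mod n), rotating backward."""
    return (n_disks - 1) - (stripe % n_disks)

# With 5 disks, parity starts on the last disk and walks back, then repeats.
print([parity_disk(s, 5) for s in range(6)])   # [4, 3, 2, 1, 0, 4]
```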
RAID Level 6
In the RAID 6 scheme, two different parity calculations are carried out and stored in separate blocks on
different disks. Thus, a RAID 6 array whose user data require N disks consists of N+2 disks.
Fig 4.14 (h) illustrates the scheme.

Fig 4.14 (h) RAID 6 (Dual redundancy)


P and Q are two different data check algorithms.
One of the two is the exclusive-OR calculation used in RAID 4 and 5. But the other is an independent data check
algorithm. This makes it possible to regenerate data even if two disks containing user data fail.
The advantage of RAID 6 is that it provides extremely high data availability. Three disks would have to fail within the
MTTR (mean time to repair) interval to cause data to be lost. On the other hand, RAID 6 incurs a substantial
write penalty, because each write affects two parity blocks.

4.10 Optical disks


The Optical disk is formed from polycarbonate substrate. Information (either music or computer data) is
imprinted as a series of microscopic pits on the surface of the polycarbonate, using a high-intensity laser. The areas
between pits are called lands. The created disk is called master disk. The master is used, in turn, to make a die to stamp
out copies onto polycarbonate.
The pitted surface is then coated with a highly reflective surface, usually aluminum or gold. This shiny surface is
protected against dust and scratches by a top coating of clear acrylic. Finally, a label can be silkscreened onto the
acrylic.

Fig 4.15 CD-ROM–Capacity 682 MB Fig 4.16 CD Optical Memory Characteristics

Information is retrieved from a CD or CD-ROM by a low-powered laser housed in an optical-disk player, or


drive unit. The laser shines through the clear polycarbonate while a motor spins the disk past it (Fig 4.17).

Fig 4.17 CD Operation


If the laser beam falls on a pit (rough surface), the light scatters and a low intensity is reflected back to the
source. A land is a smooth surface, which reflects back at higher intensity. The change between pits and lands is
detected by a photosensor and converted into a digital signal. The sensor tests the surface at regular intervals. The
beginning or end of a pit represents a 1; when no change in elevation occurs between intervals, a 0 is recorded.

CD
Compact Disk: A non-erasable disk that stores digitized audio information. The standard system uses 12-cm
disks and can record more than 60 minutes of uninterrupted playing time. In CDs and CD-ROMs, the disk contains a
single spiral track, beginning near the center and spiraling out to the outer edge of the disk. Sectors near the outside of
the disk are the same length as those near the inside. The data capacity for a CD-ROM is about 680 MB.
Fig 4.18 CD-ROM Block Format

Data on the CD-ROM are organized as a sequence of blocks. A typical block format is shown in Fig 4.18.
It consists of the following fields:
• Sync: The sync field identifies the beginning of a block.
It consists of a byte of all 0s, 10 bytes of all 1s, and a byte of all 0s.
• Header: The header contains the block address and the mode byte;
mode 0 specifies a blank data field;
mode 1 specifies the use of an error-correcting code and 2048 bytes of data;
mode 2 specifies 2336 bytes of user data with no error-correcting code.
• Data: User data.
• Auxiliary: Additional user data in mode 2. In mode 1, this is a 288-byte error correcting code.

The disadvantages of CD-ROM are as follows:


• It is read-only and cannot be updated.
• It has an access time much longer than that of a magnetic disk drive, as much as half a second.

CD-ROM: Compact Disk Read-Only Memory. A nonerasable disk used for storing computer data.
The standard system uses 12-cm disks and can hold more than 650 Mbytes.
CD-R: CD Recordable. Similar to a CD-ROM. The user can write to the disk only once.
CD-RW: CD Rewritable. Similar to a CD-ROM. The user can erase and rewrite to the disk multiple times.

DVD
Digital Versatile Disk: A technology for producing digitized, compressed representation of video information,
as well as large volumes of other digital data. Both 8 and 12 cm diameters are used, with a double-sided capacity of up
to 17 Gbytes. The basic DVD is read-only (DVD-ROM).

Fig 4.19 DVD-ROM, double-sided, dual-layer–Capacity 17 GB Fig 4.20 DVD Optical Memory
Characteristics
DVD-R
DVD Recordable. Similar to a DVD-ROM. The user can write to the disk only once.
Only one-sided disks can be used.
DVD-RW
DVD Rewritable. Similar to a DVD-ROM. The user can erase and rewrite to the disk multiple times.
Only one-sided disks can be used.
Blu-Ray DVD
High definition video disk. Provides considerably greater data storage density than DVD, using a 405-nm
(blue-violet) laser.
A single layer on a single side can store 25 Gbytes.
Fig 4.21 Blu-ray Optical Memory Characteristics
Blu-ray positions the data layer on the disk closer to the laser as shown in Fig 4.21.This enables a tighter focus
and less distortion and thus smaller pits and tracks. Blu-ray can store 25 GB on a single layer. Three versions are
available: read only (BD-ROM), recordable once (BD-R), and rerecordable (BD-RE).
High-Definition Optical Disks
The HD DVD scheme can store 15 GB on a single layer on a single side. High-definition optical disks are
designed to store high-definition videos and to provide significantly greater storage capacity compared to DVDs. The
higher bit density is achieved by using a laser with a shorter wavelength, in the blue-violet range.

4.11 Magnetic Tape


The medium is flexible polyester tape coated with magnetizable material.
Tape widths vary from 0.38 cm (0.15 inch) to 1.27 cm (0.5 inch).
Parallel recording:
Data on the tape are structured as a number of parallel tracks running lengthwise.
Earlier tape systems have nine tracks to store data one byte at a time, with an additional parity bit as the ninth track.
Similarly, tape systems may have 18 or 36 tracks, corresponding to a digital word or double word.
The recording of data in this form is referred to as parallel recording.
Serpentine recording:
Most modern systems instead use serial recording, in which data are laid out as a sequence of bits along each
track, as is done with magnetic disks. As with the disk, data are read and written in contiguous blocks, called physical
records, on a tape. Blocks on the tape are separated by gaps referred to as interrecord gaps. As with the disk, the tape is
formatted to assist in locating physical records. The typical recording technique used in serial tapes is referred to as
serpentine recording.
In this technique, when data are being recorded, the first set of bits is recorded along the whole length of the
tape. When the end of the tape is reached, the heads are repositioned to record a new track, and the tape is again
recorded on its whole length, this time in the opposite direction. That process continues, back and forth, until the tape is
full (Fig 4.22(a)). To increase speed, the read-write head is capable of reading and writing a number of adjacent tracks
simultaneously (typically two to eight tracks). Data are still recorded serially along individual tracks, but blocks in
sequence are stored on adjacent tracks, as suggested by Fig 4.22(b).

(a) Serpentine reading and writing (b) Block layout for system that reads—writes
four tracks simultaneously
Fig 4.22 Typical Magnetic Tape Features
A tape drive is a sequential-access device. If the tape head is positioned at record 1, then to read record N, it is
necessary to read physical records 1 through N- 1, one at a time. The dominant tape technology today is a cartridge
system known as linear tape-open (LTO).
