
About Me

- Instructor: Chande Kasita
- Office: A 110
- Contacts: 0713 041634, kasitac@yahoo.co.uk

Contact Hours: Anytime between 0800 and 1600 except weekends

Module 4: Information Representation, Organization and Storage


- Overview of Physical Storage Media
- Magnetic Disks
- RAID
- Tertiary Storage
- Storage Access
- File Organization

Storage and Organization


- The first task in building a database system is determining how to store the data on disk.
- Since a database is an application running on an operating system, the database must use the file system provided by the operating system to store its information.
- However, many database systems implement their own file security and organization on top of the operating system file structure.

We will study techniques for storing data on disks including random and sequential files and fast lookup structures such as indexes and hashing.

Disks and Files



DBMS stores information on disks.


In an electronic world, disks are a mechanical anachronism!

This has major implications for DBMS design!


- READ: transfer data from disk to main memory (RAM).
- WRITE: transfer data from RAM to disk.
- Both are high-cost operations relative to in-memory operations, so they must be planned carefully!

Why Not Store It All in Main Memory?



- Costs too much. $100 will buy you either 1 GB of RAM or 150 GB of disk (EIDE/ATA) today.
- High-end databases today are in the 10-200 TB range. Approximately 60% of the cost of a production system is in the disks.

- Main memory is volatile. We want data to be saved between runs. (Obviously!)
- Note: some specialized systems do store the entire database in main memory; their vendors claim a 10x speed-up vs. a traditional disk-based DBMS.

- Physical storage of data depends on the computer system and the devices on which the data is stored.
- Allocating database storage involves:
  - deciding on the physical media used to store the data and its associated properties (hard drive vs. tape drive)
  - understanding how the physical media affects the performance of data update and retrieval (allocating records on disk blocks and pages)

Classification of Physical Storage Media


- Speed with which data can be accessed
- Cost per unit of data
- Reliability
  - data loss on power failure or system crash
  - physical failure of the storage device

Can differentiate storage into:


- volatile storage: loses contents when power is switched off
- non-volatile storage:
  - contents persist even when power is switched off
  - includes secondary and tertiary storage, as well as battery-backed-up main memory

Physical Storage Media (Cont.)



Magnetic-disk

- Data is stored on a spinning disk and read/written magnetically
- Primary medium for the long-term storage of data; typically stores the entire database
- Data must be moved from disk to main memory for access, and written back for storage
- Direct access is possible: data on disk can be read in any order, unlike magnetic tape
- Capacities range up to roughly 750 GB currently (mid 2006)
  - Much larger capacity and lower cost per byte than main memory/flash memory
  - Growing constantly and rapidly with technology improvements (factor of 2 to 3 every 2 years)
- Survives power failures and system crashes
  - Disk failure can destroy data; this is rare but does happen

Physical Storage Media (Cont.)



Optical storage
- Non-volatile; data is read optically from a spinning disk using a laser
- CD-ROM (640 MB) and DVD (4.7 to 17 GB) are the most popular forms
- Write-once, read-many (WORM) optical disks are used for archival storage (CD-R, DVD-R, DVD+R)
- Multiple-write versions are also available (CD-RW, DVD-RW, DVD+RW, and DVD-RAM)
- Reads and writes are slower than with magnetic disk
- Jukebox systems, with large numbers of removable disks, a few drives, and a mechanism for automatic loading/unloading of disks, are available for storing large volumes of data

Physical Storage Media (Cont.)



Tape storage
- Non-volatile; used primarily for backup (to recover from disk failure) and for archival data
- Sequential access: much slower than disk
- Very high capacity (40 to 300 GB tapes available)
- Tape can be removed from the drive; storage costs are much cheaper than disk, but drives are expensive
- Tape jukeboxes are available for storing massive amounts of data
  - hundreds of terabytes (1 terabyte = 10^12 bytes) to even petabytes (1 petabyte = 10^15 bytes)


Tape storage is non-volatile and is used primarily for backup and archiving data. Tapes are sequential-access devices, so they are much slower than disks, but they provide very high capacity (> 80 GB). Tape jukeboxes that store multiple tapes have very large capacity, in the terabyte to petabyte range. Tapes can be removed from the drive, so storage costs are much cheaper than for hard drives. Since most databases can be stored on hard drives and RAID systems that support direct access, tape drives are now relegated to secondary roles as backup devices. Database systems no longer worry about optimizing queries for data stored on tapes.

Magnetic Hard Disk Mechanism

NOTE: The diagram (omitted here) is schematic and simplifies the structure of actual disk drives.


- Data is stored on a hard drive on the surface of platters. Each platter is divided into circular tracks, and each track is divided into sectors. A sector is the smallest unit of data that can be read or written.
- Cylinder i consists of the i-th track of all the platters (surfaces).
- The read-write head is positioned close to the platter surface, where it reads/writes magnetically encoded data. To read a particular sector, the read-write head is moved over the correct track by the arm assembly. Since the platter spins continuously, the read-write head reads the data when the sector rotates under the head.
- Head-disk assemblies allow multiple disk platters on a single spindle, with multiple heads (one per platter) mounted on a common arm.

Magnetic Disks
- Read-write head
  - Positioned very close to the platter surface (almost touching it)
  - Reads or writes magnetically encoded information
- Surface of each platter is divided into circular tracks
  - Over 50K-100K tracks per platter on typical hard disks
- Each track is divided into sectors
  - Sector size is typically 512 bytes
  - Typical sectors per track: 500 (on inner tracks) to 1000 (on outer tracks)
  - (A rough capacity estimate from these figures follows below.)
- To read/write a sector
  - the disk arm swings to position the head on the right track
  - the platter spins continually; data is read/written as the sector passes under the head
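A back-of-the-envelope capacity calculation shows how these geometry figures combine. All numbers below are illustrative assumptions, not the specification of any particular drive:

    # Rough capacity estimate for a hypothetical drive (all figures assumed):
    # 4 platters = 8 surfaces, 100,000 tracks per surface,
    # 750 sectors per track on average, 512-byte sectors.
    surfaces = 8
    tracks_per_surface = 100_000
    sectors_per_track = 750          # average over inner and outer zones
    bytes_per_sector = 512

    capacity = surfaces * tracks_per_surface * sectors_per_track * bytes_per_sector
    print(f"approx. capacity: {capacity / 10**9:.0f} GB")   # ~307 GB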

Magnetic Disks (Cont.)



- Head-disk assemblies
  - multiple disk platters on a single spindle (1 to 5, usually)
  - one head per platter, mounted on a common arm
- Cylinder i consists of the i-th track of all the platters
- Earlier-generation disks were susceptible to head crashes, leading to loss of all data on the disk
  - Current-generation disks are less susceptible to such disastrous failures, but individual sectors may get corrupted

Disk Controller

The disk controller interfaces between the computer system and the disk drive hardware.
- Accepts high-level commands to read or write a sector
- Initiates actions such as moving the disk arm to the right track and actually reading or writing the data
- Computes and attaches checksums to each sector to verify that data is read back correctly (see the sketch below)
  - If the data is corrupted, with very high probability the stored checksum won't match the recomputed checksum
- Ensures successful writing by reading back the sector after writing it
- Performs remapping of bad sectors
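The checksum idea can be sketched in a few lines. Real controllers compute error-detecting/correcting codes in hardware; the CRC32 used here is only a stand-in to show the store-then-verify pattern:

    import zlib

    def write_sector(data: bytes) -> tuple[bytes, int]:
        """Store the sector together with its checksum, as a controller would."""
        return data, zlib.crc32(data)

    def read_sector(stored: tuple[bytes, int]) -> bytes:
        """Recompute the checksum on read; a mismatch means the sector is corrupted."""
        data, checksum = stored
        if zlib.crc32(data) != checksum:
            raise IOError("bad sector: checksum mismatch (re-read or remap)")
        return data

    sector = write_sector(b"\x00" * 512)      # one 512-byte sector
    assert read_sector(sector) == b"\x00" * 512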

Disk Subsystem

Multiple disks connected to a computer system through a controller


- The controller's functionality (checksum computation, bad-sector remapping) is often carried out by the individual disks, which reduces the load on the controller

Families of disk interface standards

- ATA (AT Attachment), a range of standards
- SATA (Serial ATA)
- SCSI (Small Computer System Interface), a range of standards
- Several variants of each standard exist (different speeds and capabilities)

Performance Measures of Disks



- Access time: the time from when a read or write request is issued to when the data transfer begins. It consists of:
  - Seek time: the time it takes to reposition the arm over the correct track.
    - Average seek time is 1/2 the worst-case seek time.
    - It would be 1/3 if all tracks had the same number of sectors and we ignored the time to start and stop arm movement.
    - 4 to 10 milliseconds on typical disks.
  - Rotational latency: the time it takes for the sector to be accessed to appear under the head.
    - Average latency is 1/2 of the worst-case latency.
    - 4 to 11 milliseconds on typical disks (5400 to 15,000 r.p.m.).
    - (A worked example follows below.)
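A small worked example, assuming a 7200 rpm drive and an 8 ms average seek time (both figures are illustrative assumptions within the ranges above):

    rpm = 7200
    revolution_ms = 60_000 / rpm                    # 8.33 ms per revolution
    avg_rotational_latency_ms = revolution_ms / 2   # half a revolution ~ 4.17 ms

    avg_seek_ms = 8.0                               # assumed average seek time
    access_ms = avg_seek_ms + avg_rotational_latency_ms
    print(f"{access_ms:.2f} ms before the transfer even begins")   # ~12.17 ms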

Performance Measures (Cont.)



- Data-transfer rate: the rate at which data can be retrieved from or stored to the disk.
  - 25 to 100 MB per second maximum rate; lower for inner tracks.
  - Multiple disks may share a controller, so the rate the controller can handle is also important (worked example below).
    - E.g. ATA-5: 66 MB/s, SATA: 150 MB/s, Ultra320 SCSI: 320 MB/s, Fibre Channel (FC 2 Gb): 256 MB/s
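Continuing the worked example above, the transfer time for a single block is tiny compared with the seek and rotational delays. The 66 MB/s figure is the ATA-5 rate from the list above; the 4 KB block size is an assumption:

    transfer_rate_mb_s = 66.0                    # e.g. ATA-5 maximum rate
    block_mb = 4 / 1024                          # a 4 KB block

    transfer_ms = block_mb / transfer_rate_mb_s * 1000
    total_ms = 8.0 + 4.17 + transfer_ms          # seek + rotational latency + transfer
    print(f"transfer: {transfer_ms:.3f} ms, total: {total_ms:.2f} ms")
    # transfer is ~0.06 ms, so random I/O cost is dominated by the mechanical delays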

- Mean time to failure (MTTF): the average time the disk is expected to run continuously without any failure.
  - Typically 3 to 5 years.
  - The probability of failure of new disks is quite low, corresponding to a theoretical MTTF of 500,000 to 1,200,000 hours for a new disk.
    - E.g., an MTTF of 1,200,000 hours for a new disk means that given 1000 relatively new disks, on average one will fail every 1200 hours (see the arithmetic below).
  - MTTF decreases as the disk ages.
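The arithmetic behind that example, as a sketch. MTTF is a statistical statement about a population of disks, not a lifetime guarantee for any single disk:

    mttf_hours = 1_200_000
    disks = 1_000
    hours_per_failure = mttf_hours / disks
    print(hours_per_failure)      # 1200 hours, i.e. roughly one failure every 50 days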

RAID
- There are many applications, particularly in a business environment, where there are needs beyond what can be fulfilled by a single hard disk, regardless of its size, performance or quality level.
- Many businesses can't afford to have their systems go down for even an hour in the event of a disk failure; they need large storage subsystems with capacities in the terabytes; and they want to be able to insulate themselves from hardware failures to any extent possible.

- The fundamental principle behind RAID is the use of multiple hard disk drives in an array that behaves in most respects like a single large, fast one.
- There are a number of ways that this can be done, depending on the needs of the application, but in every case the use of multiple drives allows the resulting storage subsystem to exceed the capacity, data security, and performance of the drives that make up the system, to one extent or another.

General RAID Concepts


- These are the concepts that describe how RAID works, how arrays are set up, and how the different RAID levels improve reliability and performance.
- Understanding these concepts provides the foundation for our subsequent discussions of performance issues, reliability concerns, and the various RAID levels.

Physical and Logical Arrays and Drives


The fundamental structure of RAID is the array. An array is a collection of drives that is configured, formatted and managed in a particular way. The number of drives in the array, and the way that data is split between them, is what determines the RAID level, the capacity of the array, and its overall performance and data protection characteristics.

- Physical Drives: the physical, actual hard disks that comprise the array are the "building blocks" of all data storage under RAID.
- Physical Arrays: one or more physical drives are collected together to form a physical array. Most simple RAID setups use just one physical array, but some complex ones can have two or more physical arrays.


Mirroring
Mirroring is one of the two data redundancy techniques used in RAID (the other being parity). In a RAID system using mirroring, all data in the system is written simultaneously to two hard disks instead of one; thus the "mirror" concept. The principle behind mirroring is that this 100% data redundancy provides full protection against the failure of either of the disks containing the duplicated data. Mirroring setups always require an even number of drives for obvious reasons.

The chief advantage of mirroring is that it provides not only complete redundancy of data, but also reasonably fast recovery from a disk failure. Since all the data is on the second drive, it is ready to use if the first one fails. Mirroring also improves some forms of read performance (though it actually hurts write performance).
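A minimal sketch of the mirroring idea, using two in-memory dictionaries as stand-in "disks". The names and structure are assumptions for illustration, not any particular RAID implementation:

    disk_a: dict[int, bytes] = {}
    disk_b: dict[int, bytes] = {}

    def mirrored_write(block_no: int, data: bytes) -> None:
        # every write goes to both drives, so each copy is complete on its own
        disk_a[block_no] = data
        disk_b[block_no] = data

    def mirrored_read(block_no: int) -> bytes:
        # either drive can serve the read; if one has failed, use the survivor
        source = disk_a if block_no in disk_a else disk_b
        return source[block_no]

    mirrored_write(7, b"payroll record")
    disk_a.clear()                               # simulate failure of the first drive
    assert mirrored_read(7) == b"payroll record"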


Duplexing
Duplexing is an extension of mirroring that is based on the same principle as that technique. Like in mirroring, all data is duplicated onto two distinct physical hard drives. Duplexing goes one step beyond mirroring, however, in that it also duplicates the hardware that controls the two hard drives (or sets of hard drives). So if you were doing mirroring on two hard disks, they would both be connected to a single host adapter or RAID controller. If you were doing duplexing, one of the drives would be connected to one adapter and the other to a second adapter.

Duplexing is superior to mirroring in terms of availability because it provides the same protection against drive failure that mirroring does, but also protects against the failure of either of the controllers. It also costs more than mirroring because you are duplicating more hardware.



Striping

- The main performance-limiting issues with disk storage relate to the slow mechanical components that are used for positioning and transferring data.
- Since a RAID array has many drives in it, an opportunity presents itself to improve performance by using the hardware of all these drives in parallel.
- For example, if we need to read a large file, instead of pulling it all from a single hard disk, it is much faster to chop it up into pieces, store some of the pieces on each of the drives in the array, and then use all the disks to read back the file when needed.

This technique is called striping, after the pattern that might be visible if you could see these "chopped up pieces" on the various drives with a different color used for each file. It is similar in concept to the memory performance-enhancing technique called interleaving.
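A sketch of the "chop it up into pieces" idea: stripe units are dealt out to the drives round-robin and gathered back in the same order. The 4 KB stripe unit and the in-memory lists standing in for drives are assumptions for illustration only:

    STRIPE_UNIT = 4096

    def stripe(data: bytes, n_drives: int) -> list[list[bytes]]:
        drives: list[list[bytes]] = [[] for _ in range(n_drives)]
        chunks = [data[i:i + STRIPE_UNIT] for i in range(0, len(data), STRIPE_UNIT)]
        for i, chunk in enumerate(chunks):
            drives[i % n_drives].append(chunk)   # unit i goes to drive i mod n
        return drives

    def unstripe(drives: list[list[bytes]]) -> bytes:
        n = len(drives)
        total = sum(len(d) for d in drives)
        return b"".join(drives[i % n][i // n] for i in range(total))

    data = bytes(range(256)) * 100               # a 25,600-byte "file"
    assert unstripe(stripe(data, 4)) == data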


Parity

- Mirroring is a data redundancy technique used by some RAID levels, in particular RAID level 1, to provide data protection on a RAID array. While mirroring has some advantages and is well suited to certain RAID implementations, it also has limitations: it has a high overhead cost, because fully 50% of the drives in the array are reserved for duplicate data, and it doesn't improve performance as much as data striping does for many applications.
- For this reason, a different way of protecting data is provided as an alternative to mirroring. It involves the use of parity information, which is redundancy information calculated from the actual data values.
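Parity in RAID is computed with bitwise XOR. A minimal sketch (the block contents are made-up values): the parity block is the XOR of the data blocks in a stripe, and XOR-ing the parity with the surviving blocks regenerates a lost block:

    def xor_blocks(*blocks: bytes) -> bytes:
        result = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                result[i] ^= byte
        return bytes(result)

    d1, d2, d3 = b"\x0f" * 4, b"\xf0" * 4, b"\xaa" * 4
    parity = xor_blocks(d1, d2, d3)

    # if the disk holding d2 fails, its contents can be recomputed:
    assert xor_blocks(d1, d3, parity) == d2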

RAID
- Disk array: an arrangement of several disks that gives the abstraction of a single, large disk.
- Goals: increase performance and reliability.
- Two main techniques:
  - Data striping: data is partitioned; the size of a partition is called the striping unit. Partitions are distributed over several disks.
  - Redundancy: more disks => more failures. Redundant information allows reconstruction of data if a disk fails.

RAID
- RAID (Redundant Arrays of Independent Disks) is a disk organization technique that utilizes a large number of inexpensive, mass-market disks to provide increased reliability, performance, and storage capacity.
- RAID manages a large number of disks, providing a view of a single disk with
  - high capacity and high speed, by using multiple disks in parallel, and
  - high reliability, by storing data redundantly so that data can be recovered even if a disk fails.

RAID Controller

- Bit-level striping: split the bits of each byte across multiple disks.
  - In an array of eight disks, write bit i of each byte to disk i.
  - Each access can read data at eight times the rate of a single disk.
  - But seek/access time is worse than for a single disk.
- Block-level striping: with n disks, block i of a file goes to disk (i mod n) + 1.
  - Requests for different blocks can run in parallel if the blocks reside on different disks.
  - A request for a long sequence of blocks can utilize all disks in parallel.
- Bit-level striping is not used much any more (a small mapping sketch for both schemes follows below).
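A small mapping sketch for the two schemes described above. It uses 0-based bit numbering for bit-level striping and the slide's 1-based disk numbering for block-level striping; both functions are illustrative, not a controller API:

    def bit_striping_disk(bit_index: int) -> int:
        # in an 8-disk array, bit i of every byte is written to disk i
        return bit_index                          # 0..7

    def block_striping_disk(block_no: int, n_disks: int) -> int:
        # block i of a file goes to disk (i mod n) + 1
        return (block_no % n_disks) + 1

    print([block_striping_disk(i, 4) for i in range(8)])   # [1, 2, 3, 4, 1, 2, 3, 4]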

RAID 0
- Block striping; non-redundant. Striping is the practice of spreading data over multiple disk drives. It allows greater performance because drives can seek and deliver data simultaneously, rather than one drive having to do all the work by itself.
- There is a significant performance advantage over a single disk: multiple reads or writes are done simultaneously with multiple disks, rather than a read or write to a single disk. Reads/writes are overlapped across all disks.
- If one disk fails, all data is lost, and all disks must be reformatted. Data could be restored across the array from a tape or diskette backup, if available.

RAID-0

Data striping without redundancy (no protection).


- Minimum number of drives: 2
- Strengths: highest performance
- Advantages: low overhead (no parity calculation), very simple design, easy to implement
- Weaknesses: no data protection; if one drive fails, all data is lost

RAID-0

RAID-1
- Mirrored disks with block striping: a disk-mirroring strategy for high performance. All data is written twice, to separate drives. The cost per megabyte of storage is higher, of course, but if one drive fails, normal operations can continue with the duplicate data. If the RAID device permits hot-swapping of drives, the bad drive can be replaced without interruption.
- Mirroring and duplexing:

  - Disk mirroring duplicates the data from one disk onto a second disk using a single disk controller.
  - Disk duplexing is the same as mirroring, except that the disks are attached to separate disk controllers, such as two SCSI adapters.

- Write performance is somewhat reduced, because both drives in the mirrored pair must complete the write operation.
- A read request can be handled by either disk. The drive in the pair that is less busy is issued the read command, leaving the other drive free to perform another read operation.
- If either disk fails, a copy of the data is still available on the other disk. If a disk controller fails while duplexing, the data can still be accessed through the other controller and disk.
- Popular for applications such as storing log files in a database system.

RAID-1

- Disk mirroring: blocks are written to both the data disk and a mirror disk
- Minimum number of drives: 2
- Strengths: very high performance; very high data protection; very minimal penalty on write performance
- Weaknesses: high redundancy cost overhead; because all data is duplicated, twice the storage capacity is required

RAID-1

RAID Level 2:

Memory-Style Error-Correcting-Codes (ECC) with bit striping.


- Performs disk striping at the bit level and uses one or more disks to store parity information.
- RAID-2 is not used very often because it is considered slow and expensive.
- Bit-interleaved data striping with a Hamming code.
- Fast for sequential applications such as graphics modeling.
- Almost never used with PC-based systems.

RAID-2
- Minimum number of drives: 4; not used in LAN environments
- Strengths: previously used in environments requiring on-the-fly error correction (Hamming code), and in disk drives before the use of embedded error correction
- Weaknesses: no practical use today; the same performance can be achieved by RAID 3 at lower cost

RAID-2

RAID Level 3:

- Bit-interleaved parity: uses data striping, generally at the byte level, and uses one disk to store parity information.
- Striping improves the throughput of the system, and using only one disk per set for parity information reduces the cost per megabyte of storage.
- Striping data in small chunks provides excellent performance when transferring large amounts of data, because all disks operate in parallel.
- Two disks must fail within a set before data becomes unavailable.
- Bit-interleaved data striping with parity:
  - access to all drives is needed to retrieve one record
  - best for large sequential reads
  - poor for random transactions
  - faster than a single disk, but significantly slower than RAID 0 or RAID 1 in random environments

RAID-3

Byte-level data striping with dedicated parity drive


- Minimum number of drives: 3
- Strengths: excellent performance for large, sequential data requests
- Weaknesses: not well suited to transaction-oriented network applications; the single parity drive does not support multiple simultaneous read and write requests

RAID-3

RAID-4
- Stripes data in larger chunks, which provides better performance than RAID-3 when transferring small amounts of data.
- Block-interleaved data striping with one parity disk:
  - best for large sequential I/O, but poor write performance
  - faster than a single drive, but significantly slower than RAID 0 or RAID 1

RAID-4

Block-level data striping with dedicated parity drive


- Minimum number of drives: 3
- Strengths: data striping supports multiple simultaneous read requests
- Weaknesses: write requests suffer from the same single-parity-drive bottleneck as RAID 3; RAID 5 offers equal data protection and better performance at the same cost

RAID-4

RAID-5
- Stripes data in blocks sequentially across all disks in the array and distributes the parity data across all disks as well. By distributing parity information across all disks, RAID-5 eliminates the bottleneck sometimes created by a single parity disk.
- RAID-5 is increasingly popular and is well suited to transaction environments.
- RAID-5 is preferred for smaller block transfers, which are typical of network file workloads.
- If any disk fails, the data can be recovered by using the data from the other disks along with the parity information (see the sketch below).
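A sketch of how RAID-5 spreads the parity around. One common rotation scheme is assumed here for illustration; real products differ in the exact layout. For stripe s on n disks, the parity block sits on a different disk each stripe, and a failed disk is rebuilt by XOR-ing the corresponding blocks of the survivors, exactly as in the parity sketch earlier:

    def parity_disk(stripe_no: int, n_disks: int) -> int:
        # rotate the parity block across the disks, one stripe at a time
        return (n_disks - 1 - stripe_no) % n_disks

    # with 5 disks, stripes 0..4 place their parity on disks 4, 3, 2, 1, 0
    print([parity_disk(s, 5) for s in range(5)])   # [4, 3, 2, 1, 0]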

RAID-5

Block-level data striping with distributed parity


- Minimum number of drives: 3
- Strengths: best cost/performance for transaction-oriented networks; very high performance and very high data protection; supports multiple simultaneous reads and writes; can also be optimized for large, sequential requests
- Weaknesses: write performance is slower than RAID 0 or RAID 1

RAID-5

RAID Levels (cont.)


- RAID Level 6: P+Q redundancy scheme
  - Similar to level 5, but stores extra redundant information to guard against multiple disk failures.
  - Better reliability than level 5, at a higher cost; not used as widely.

Choice of RAID Level



Factors in choosing RAID level


- Monetary cost
- Performance: number of I/O operations per second, and bandwidth during normal operation
- Performance during failure
- Performance during rebuild of a failed disk
  - including the time taken to rebuild the failed disk


RAID 0 is used only when data safety is not important


- E.g., data can be recovered quickly from other sources

- Levels 2 and 4 are never used, since they are subsumed by levels 3 and 5.
- Level 3 is not used, since bit-striping forces single-block reads to access all disks, wasting disk arm movement, which block striping (level 5) avoids.
- Level 6 is rarely used, since levels 1 and 5 offer adequate safety for almost all applications.
- So the competition is between levels 1 and 5 only.

Choice of RAID Level (Cont.)


- Level 1 provides much better write performance than level 5.
  - Level 5 requires at least 2 block reads and 2 block writes to write a single block, whereas level 1 only requires 2 block writes (see the sketch below).
  - Level 1 is preferred for high-update environments such as log disks.
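Why the level-5 small write needs four I/Os: updating one data block requires reading the old data and the old parity, then writing the new data and the new parity, because the parity can be updated incrementally. A tiny numeric sketch (the values are made up):

    old_data, new_data = 0b1010, 0b0110
    old_parity = 0b1100          # XOR of old_data with the stripe's other blocks

    # new_parity = old_parity XOR old_data XOR new_data
    new_parity = old_parity ^ old_data ^ new_data
    print(bin(new_parity))       # 0b0

    # 2 reads (old data, old parity) + 2 writes (new data, new parity) = 4 I/Os,
    # versus just 2 writes for the same logical update under RAID level 1.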

- Level 1 has a higher storage cost than level 5.
  - Disk drive capacities are increasing rapidly (50%/year), whereas disk access times have decreased much less (about 3x in 10 years).
  - I/O requirements have increased greatly, e.g. for Web servers.
  - When enough disks have been bought to satisfy the required rate of I/O, they often have spare storage capacity, so there is often no extra monetary cost for level 1!

- Level 5 is preferred for applications with a low update rate and large amounts of data.
- Level 1 is preferred for all other applications.

RAID Benefits
- Higher data security: through the use of redundancy, most RAID levels provide protection for the data stored on the array. This means that the data on the array can withstand even the complete failure of one hard disk (or sometimes more) without any data loss, and without requiring any data to be restored from backup.
- Fault tolerance: RAID implementations that include redundancy provide a much more reliable overall storage subsystem than can be achieved by a single disk. This means there is a lower chance of the storage subsystem as a whole failing due to hardware failures.

- Improved availability: availability refers to access to data. Good RAID systems improve availability both by providing fault tolerance and by providing special features that allow for recovery from hardware faults without disruption.
- Increased, integrated capacity: by turning a number of smaller drives into a larger array, you add their capacities together. This facilitates applications that require large amounts of contiguous disk space, and also makes disk space management simpler.
- Improved performance: last, but certainly not least, RAID systems improve performance by allowing the controller to exploit the capabilities of multiple hard disks to get around the performance-limiting mechanical issues that plague individual hard disks.

End of Module

(Information Representation, Organization and Storage)
