Record Characteristics
Records may be fixed-length or variable-length.
A logical view: the relation schema STUDENT(Name, Number, Class, Major); a query such as SELECT * FROM STUDENT returns tuples like (Smith, 17, 1, CS) and (Brown, 8, 2, CS).
A physical view:
FIXED LENGTH:
every record has the same fields, so each field can be located at a fixed offset relative to the record start
Record blocking
Allocating records to disk blocks:
Unspanned records - each record is fully contained in one block; many records fit in one block.
Blocking factor bfr - the number of records that fit in one block.
Example: block size B = 1024 bytes, fixed record size R = 150 bytes, so bfr = floor(1024/150) = 6.
Spanned organization - a record may continue on the consecutive block; a pointer to the block holding the remainder of the record is required.
If records are of variable length, then bfr represents the average number of records per block (the floor function does not apply).
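The blocking arithmetic above can be checked with a few lines (the function names are illustrative, not from the slides):

```python
import math

def blocking_factor(block_size, record_size):
    """Fixed-length records that fit in one block, unspanned: floor(B/R)."""
    return block_size // record_size

def blocks_needed(num_records, bfr):
    """Blocks required to store num_records, unspanned organization."""
    return math.ceil(num_records / bfr)

# Example from the text: B = 1024 bytes, R = 150 bytes
bfr = blocking_factor(1024, 150)   # floor(1024/150) = 6
print(bfr, blocks_needed(1000, bfr))
```

With bfr = 6, a file of 1000 such records needs ceil(1000/6) = 167 blocks.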
File structure
Blocks allocated:
Contiguous
Linked (use of block pointers)
Linked clusters
Indexed
Operations on Files
Because of the complex path from stored data to the user, a DBMS offers a range of I/O operations:
OPEN - access the file and prepare the file pointer
FIND (LOCATE) - find the first record satisfying a condition
FINDNEXT
FINDALL - returns a set
READ
INSERT
DELETE
MODIFY
CLOSE
REORGANISE
READ-ORDERED (FIND-ORDERED) - read (find) a set of records in order
A file organization is the organization of the data of a file into records, blocks, and access structures.
An access method provides a group of operations that can be applied to a file, resulting in retrieval, modification, or reorganisation.
One file organization can accept many different access methods. Some access methods, though, can be applied only to files with a specific file organization: for example, one cannot apply an indexed access method to a file without an index.
The reason many DBMSs do not rely on the OS file system is that higher-level DB operations, e.g. JOIN, have a known pattern of page accesses and can be translated into known sets of I/O operations; the buffer manager can PRE-FETCH pages by anticipating the next request. This is especially efficient when the required data are stored CONTIGUOUSLY on disk.
The address of the last file block is kept in the file header. The last disk block of the file is copied into a buffer page; the new record is appended, or a new page is opened; the page is then rewritten back to its disk block.
Searching for a record using an arbitrary search condition in a file stored in b blocks:
Cost = b/2 block transfers on average, if exactly one record satisfies the search condition.
Cost = b block transfers if no records, or several records, satisfy the search condition: the program must read and search all b blocks in the file.
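A minimal sketch of this linear scan and its cost in block transfers, assuming the file is represented as a list of blocks:

```python
def linear_search_blocks(blocks, predicate, unique=True):
    """Scan a heap file block by block; return (matches, blocks_read)."""
    matches, blocks_read = [], 0
    for block in blocks:
        blocks_read += 1                  # one block transfer
        for record in block:
            if predicate(record):
                matches.append(record)
                if unique:                # at most one match: stop at first hit
                    return matches, blocks_read
    return matches, blocks_read           # otherwise all b blocks are read
```

For example, searching three blocks for a unique value found in block 2 reads 2 blocks; a condition nothing satisfies reads all 3.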
To delete a record:
find its block and copy the block into a buffer page; delete the record from the buffer; rewrite the updated page back to the disk block.
Note: unused space in the block can be reused later for a new record if it fits (some bookkeeping on unused space in file blocks is necessary).
Each record has an extra byte or bit, called a deletion marker, set to 1 at insertion *). Deleted records are not physically removed; instead, the deletion marker is reset to 0. A record with its deletion marker set to 0 is ignored by application programs. From time to time the file is reorganised: deleted records are physically removed and the unused space reclaimed.
*) For simplicity we assume that the values of deletion markers are 0 or 1. A system may actually choose other characters or combinations of bits as deletion marker values.
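The deletion-marker technique can be sketched as follows (dictionary records and the helper names are assumptions for illustration):

```python
VALID, DELETED = 1, 0   # marker convention from the text: 1 = live, 0 = deleted

def delete(block, key):
    """Logical delete: flip the record's marker instead of moving records."""
    for rec in block:
        if rec["key"] == key and rec["marker"] == VALID:
            rec["marker"] = DELETED
            return True
    return False

def reorganise(blocks):
    """Periodic reorganisation: physically remove logically deleted records."""
    return [[r for r in block if r["marker"] == VALID] for block in blocks]
```

Between reorganisations, every reader must skip records whose marker is 0.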
Ordering key
insertion is expensive; retrieval is easy (efficient) if it exploits the sort order: binary search reduces search time significantly
Fast: SELECT * FROM Course ORDER BY <ordering field>
Slow: SELECT * FROM Course WHERE <any other attribute> = c
Reading the records in order of the ordering key values is extremely efficient.
Finding the next record from the current one in ordering key order usually requires no additional block accesses: the next record is in the same block or in the next one.
A search condition on the value of an ordering key field gives faster access when the binary search technique is used.
A binary search can be done on the blocks rather than on the records: it usually accesses log2(b) blocks, whether the record is found or not.
There is no advantage if the search criterion is specified in terms of non-ordering fields.
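A sketch of binary search over blocks rather than records, counting block accesses (an in-memory list of blocks stands in for disk I/O):

```python
def binary_search_blocks(blocks, key, key_of):
    """Binary search a file ordered on the key field, block at a time.
    Returns (record_or_None, number_of_block_accesses)."""
    lo, hi, accesses = 0, len(blocks) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        block = blocks[mid]              # one block transfer
        accesses += 1
        if key < key_of(block[0]):       # before this block's range
            hi = mid - 1
        elif key > key_of(block[-1]):    # after this block's range
            lo = mid + 1
        else:                            # key falls inside this block's range
            for rec in block:
                if key_of(rec) == key:
                    return rec, accesses
            return None, accesses
    return None, accesses                # ~log2(b) accesses either way
```

On a 4-block file this touches at most 2 blocks, matching log2(b).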
Modifying record
(Figure: after the modification, all records after R15 have changed their page location or their position on the page.)
Main file + overflow file: periodically the overflow file is sorted and merged with the main file.
Deletions from a Heap: R10, R3, R7: using deletion marker technique
(Figure: heap file blocks holding R1, R7, R35, R10, R14, R12, R23, R6, R24, R27, each with its deletion marker set to 1.)
Deletion markers set to 0 and later these records will be physically removed when file is reorganised
Deletions from an ordered file: R10, R3, R7, using the deletion marker technique.
(Figure: ordered file blocks holding R6, R7, R10, R12, R14, R16, R23, R24, R27, R35, each with its deletion marker.)
Deletion markers are set to 0, and these records will later be physically removed when the file is reorganised.
The j-th record is located by position: it is in block ceil(j / bfr).
Average time to find a single record by linear search = b/2 block accesses (b = number of blocks).
Quick summary:
retrieval on the key field is fast - the next record is nearby
any other retrieval either requires a sort or an index, or is as slow as on a heap
update, delete, insert are slow (find block, update block, rewrite block)
Types of hashing: static or dynamic
What is the point of hashing? reduce a large address space; provide close to direct access; provide reasonable performance for all of update, insert, delete, search
What is a hash function? properties; behaviour
Collisions and collision resolution; open addressing
Summary
direct access to the block containing the desired record
reduce the number of blocks read or written
allow for file expansion and contraction with minimal file reorganising
permit retrieval on hashed fields without re-sorting the file
no need to allocate contiguous disk areas
if the file is small, internal hashing is used; otherwise external hashing
no direct access other than by the hashed field
There are 25 rows of seats, with 3 seats per row (75 seats total).
We have to allocate each person to a row in advance, at random.
We will hash on their family name so as to find the person's row number directly, knowing only the name.
The database is logically a single table ROOM(Name, Age, Attention), implemented as a blocked, hashed file.
Where is MCWILLIAM?
Hash(MCWILLIAM) = Row 20
(Figure: occupancy of rows 13-24 after allocation.)
RowNo:    13 14 15 16 17 18 19 20 21 22 23 24
#Sitting:  2  3  0  0  3  3  1  1  3  1  3  1
Each person has a hash key - the name.
Each person is a record.
Each row is a hardware block (bucket).
Each row number is the address of a bucket.
Records here are fixed-length (3 records per block).
The leftover people are collisions (key collisions); they will have to be found a seat by collision resolution.
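The seating analogy can be simulated in a few lines. The hash function here is a toy assumption (the slides do not define the real one), so it will not reproduce Hash(MCWILLIAM) = 20:

```python
ROWS, ROW_CAPACITY = 25, 3

def row_hash(name, rows=ROWS):
    """Toy hash: sum of letter codes mod number of rows (illustrative only)."""
    return sum(ord(c) for c in name.upper()) % rows

room = {r: [] for r in range(ROWS)}   # 25 rows (buckets), 3 seats (records) each

def seat(name):
    """Place a person in their hashed row; a full row is a collision."""
    row = row_hash(name)
    if len(room[row]) < ROW_CAPACITY:
        room[row].append(name)
        return row
    return None                        # full bucket: needs collision resolution
```

The fourth person hashing to the same row gets None back: that is exactly the collision the next slides resolve.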
Collision Resolution
Leave an empty seat in each row: under-population - blocks only 66% full.
A notice at the end of the row, "extra seat for row N can be found at the rear exit": the bucket's overflow chain points to an overflow page containing the record.
Everyone stand up while we reallocate seats: file reorganisation.
Strategy 1 (open addressing):
Example: Nora is the 4th arrival for row 23. Place the new arrival in the next higher-numbered block with a vacancy.
Disadvantages:
May need to read the whole file consecutively for some keys
Blocks gradually fill up with out-of-place records
Deletions cause either immediate or periodic reorganisation
Strategy 2:
Reserve some rows (buckets) for overflow (blocks 25, 26 and 27), or recalculate the hash function with a smaller modulus, say 20 instead of 25.
Julia is then the 4th arrival for block 3.
Insertion: place her in an overflow block with available space (e.g. block 26), optionally placing a pointer in bucket 3 pointing to block 26.
Retrieval: retrieve block 3; if Julia is not found there, either follow the pointer or read the overflow blocks consecutively.
Disadvantages:
Overflow gradually fills up giving longer retrieval times Deletions/additions cause periodic reorganisation
More formally:
Open addressing: if the location specified by the hash address is occupied, the subsequent positions are checked in order until an unused (empty) position is found.
Chaining: various overflow locations are kept, and a pointer field is added to each record location. A collision is resolved by placing the new record in an unused overflow location and setting the pointer of the occupied hash address location to the address of that overflow location.
Multiple hashing: a second hash function is applied if the first results in a collision.
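The first two schemes can be sketched in a few lines each (the table and bucket layouts here are simplified assumptions):

```python
def insert_open_addressing(table, key, h):
    """Open addressing: probe successive positions until an empty one."""
    n = len(table)
    for i in range(n):
        pos = (h(key) + i) % n
        if table[pos] is None:
            table[pos] = key
            return pos
    raise RuntimeError("table full")

def insert_chaining(buckets, overflow, key, h, capacity=2):
    """Chaining: a full bucket points into a shared overflow area."""
    b = buckets[h(key)]
    if len(b["items"]) < capacity:
        b["items"].append(key)
    else:
        overflow.append(key)
        b["overflow"] = overflow      # pointer to the overflow records
```

Open addressing keeps everything in the primary area at the cost of out-of-place records; chaining keeps buckets clean at the cost of following pointers.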
Update: same procedure as retrieval (hash on the key, modify, rewrite).
UPDATE ROOM SET ATTENTION = 'low' WHERE NAME = 'McWilliam'   (hash field: direct access)
UPDATE ROOM SET ATTENTION = 'high' WHERE AGE > 50 OR AGE < 10   (non-hash field: full scan)
Internal Hashing
Internal hashing is used as an internal search structure within a program whenever a group of records is accessed exclusively by using the value of one field. It is applicable to smaller files, hashed in main memory for fast lookup.
To store R records, use an array of length R. The hash function transforms the key field into an array subscript in the range 0 to R - 1:
hash(KeyValue) = KeyValue mod R
The subscript is the record address in store.
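A minimal sketch of internal hashing with hash(KeyValue) = KeyValue mod R, leaving collision resolution aside:

```python
R = 7                 # the array holds at most R records
table = [None] * R

def h(key_value):
    """hash(KeyValue) = KeyValue mod R -> subscript in 0..R-1."""
    return key_value % R

def store(key_value, record):
    slot = h(key_value)
    if table[slot] is None:
        table[slot] = (key_value, record)
        return slot
    return None       # collision: a resolution scheme would be needed here

def fetch(key_value):
    entry = table[h(key_value)]
    return entry[1] if entry and entry[0] == key_value else None
```

Storing key 10 lands in slot 10 mod 7 = 3; key 17 then collides, since 17 mod 7 is also 3.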
External Hashing
Hashing for disk files is called external hashing. The address space is made of buckets, each of which holds multiple records. A bucket is either one disk block or a cluster of contiguous blocks. The hashing function maps a key into a relative bucket number; a table maintained in the file header converts the bucket number into the corresponding disk block address.
The hashing scheme is called static hashing if a fixed number M of buckets is allocated. To retrieve a record with a search condition on the key, the hashing function is applied to the key to determine the bucket that could contain the record, and that bucket is examined for the desired record. If the record is not in that bucket, the search continues in the overflow buckets.
If the bucket is full, the selected collision resolution procedure is applied. If the number of records in overflow buckets grows large, and/or the distribution of records over buckets is highly non-uniform, the file is reorganised using a changed hashing function (tuning).
(Figure: static external hashing. A hash function applied to the key maps records to primary buckets 0..N-1; full buckets chain to overflow pages.)
The number of buckets is fixed:
shrinkage causes wasted space
growth causes long overflow chains
Solutions: reorganise and re-hash, or use dynamic hashing...
Extendible Hashing
Previously, to insert a new record into a full bucket we could add an overflow page, or reorganise by doubling the bucket allocation and redistributing the records. This is a poor solution: the entire file is read, and twice as many pages have to be written.
Solution: extendible hashing
add a directory of pointers to buckets
double the number of addressable bucket entries by doubling the directory
split only the bucket that has overflowed
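A compact sketch of extendible hashing for small integer keys, assuming bucket capacity 2; the directory doubles only when the overflowed bucket is already at global depth, and only that bucket is split:

```python
class ExtendibleHash:
    """Minimal extendible hashing sketch (integer keys, capacity 2)."""

    def __init__(self, capacity=2):
        self.capacity = capacity
        self.global_depth = 1
        # 2**global_depth directory entries, initially two distinct buckets
        self.directory = [{"depth": 1, "items": []}, {"depth": 1, "items": []}]

    def _index(self, key):
        return key & ((1 << self.global_depth) - 1)   # low global_depth bits

    def insert(self, key):
        bucket = self.directory[self._index(key)]
        if len(bucket["items"]) < self.capacity:
            bucket["items"].append(key)
            return
        # Full bucket: double the directory only if this bucket is at max depth
        if bucket["depth"] == self.global_depth:
            self.directory = self.directory + list(self.directory)
            self.global_depth += 1
        # Split just the overflowed bucket
        bucket["depth"] += 1
        new_bucket = {"depth": bucket["depth"], "items": []}
        bit = 1 << (bucket["depth"] - 1)
        for i, b in enumerate(self.directory):
            if b is bucket and (i & bit):
                self.directory[i] = new_bucket        # repoint half the entries
        pending = bucket["items"] + [key]
        bucket["items"] = []
        for k in pending:
            self.insert(k)                            # redistribute; may split again

    def lookup(self, key):
        return key in self.directory[self._index(key)]["items"]
```

Inserting 0..4 triggers exactly one directory doubling and one bucket split; all other buckets and directory entries are untouched.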
LINEAR HASHING
Linear hashing does not have a directory at all. Instead, a family of hash functions manages dynamic expansion and contraction of the file. Start with M buckets 0..M-1 and hashing function mod M. Buckets are split in linear order when more space is needed; the next hashing function is mod 2M, and subsequently mod 4M, mod 8M, etc., as required.
Example: block capacity is 2 records. Records with key values 72, 62 and 32 collide under hashing function mod 10, but after application of the next hashing function, mod 20, they do not: one bucket contains 72 and 32, and another contains 62.
Linear hashing combines controlled overflow with new space acquisition.
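The 72/62/32 example can be verified directly:

```python
# Linear hashing example from the text: initial hash function is mod M.
records = [72, 62, 32]
M = 10

# Under mod M, all three records collide in bucket 2.
initial = [r % M for r in records]
print(initial)                 # [2, 2, 2]

# After the split round, the next function mod 2M separates them:
# bucket 12 holds 72 and 32, bucket 2 holds 62.
after_split = [r % (2 * M) for r in records]
print(after_split)             # [12, 2, 12]
```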
HASHING SUMMARY
Comparison of simple and hashed files: pages in a hashed file are grouped into buckets (one block or a cluster of contiguous blocks). Hashing reduces a large address space and provides close to direct access. Static hashing has some disadvantages, which are addressed by the dynamic hashing solutions (extendible, linear).
Static hashed files are kept at about 80% occupancy and then reorganised; new pages are added when each existing page is about 80% full. Hence, the time to read the entire file is about 1.25 times that of a non-hashed file. Dynamic hashing provides flexibility in the usage of file storage space (expansion and contraction).