
Hierarchical Model

The hierarchical data model organizes data in a tree structure. There is a hierarchy of parent and child data segments. This structure implies that a record can have repeating information, generally in the child data segments. Data is stored in a series of records, each of which has a set of field values attached to it. All instances of a specific record are collected together as a record type. These record types are the equivalent of tables in the relational model, with the individual records being the equivalent of rows. To create links between these record types, the hierarchical model uses Parent Child Relationships. These are a 1:N mapping between record types, implemented using trees, a concept "borrowed" from maths, much as the relational model uses set theory.

For example, an organization might store information about an employee, such as name, employee number, department, salary. The organization might also store information about an employee's children, such as name and date of birth. The employee and children data forms a hierarchy, where the employee data represents the parent segment and the children data represents the child segment. If an employee has three children, then there would be three child segments associated with one employee segment. In a hierarchical database the parent-child relationship is one to many. This restricts a child segment to having only one parent segment. Hierarchical DBMSs were popular from the late 1960s, with the introduction of IBM's Information Management System (IMS) DBMS, through the 1970s.

Network Model
The popularity of the network data model coincided with the popularity of the hierarchical data model. Some data were more naturally modeled with more than one parent per child. So, the network model permitted the modeling of many-to-many relationships in data. In 1971, the Conference on Data Systems Languages (CODASYL) formally defined the network model. The basic data modeling construct in the network model is the set construct. A set consists of an owner record type, a set name, and a member record type. A member record type can have that role in more than one set, hence the multiparent concept is supported. An owner record type can also be a member or owner in another set. The data model is a simple network, and link and intersection record types (called junction records by IDMS) may exist, as well as sets between them. Thus, the complete network of relationships is represented by several pairwise sets; in each set one record type is the owner (at the tail of the network arrow) and one or more record types are members (at the head of the relationship arrow). Usually, a set defines a 1:M relationship, although 1:1 is permitted. The CODASYL network model is based on mathematical set theory.

Relational Model
(RDBMS - relational database management system) A database based on the relational model developed by E.F. Codd. A relational database allows the definition of data structures, storage and retrieval operations, and integrity constraints. In such a database the data and relations between them are organised in tables. A table is a collection of records, and each record in a table contains the same fields.

Properties of Relational Tables:

- Values Are Atomic
- Each Row is Unique
- Column Values Are of the Same Kind
- The Sequence of Columns is Insignificant
- The Sequence of Rows is Insignificant
- Each Column Has a Unique Name

Certain fields may be designated as keys, which means that searches for specific values of that field will use indexing to speed them up. Where fields in two different tables take values from the same set, a join operation can be performed to select related records in the two tables by matching values in those fields. Often, but not always, the fields will have the same name in both tables. For example, an "orders" table might contain (customer-ID, product-code) pairs and a "products" table might contain (product-code, price) pairs, so to calculate a given customer's bill you would sum the prices of all products ordered by that customer by joining on the product-code fields of the two tables. This can be extended to joining multiple tables on multiple fields. Because these relationships are only specified at retrieval time, relational databases are classed as a dynamic database management system. The RELATIONAL database model is based on the Relational Algebra.
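To make the example concrete, here is a sketch of that bill calculation in SQL (the orders and products table definitions and the customer-ID value 42 are hypothetical, not taken from any particular schema):

SELECT o.customer_id, SUM(p.price) AS total_bill
FROM orders o
INNER JOIN products p
        ON p.product_code = o.product_code  -- join on the shared product-code field
WHERE o.customer_id = 42                    -- the given customer
GROUP BY o.customer_id;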

Object/Relational Model
Object/relational database management systems (ORDBMSs) add new object storage capabilities to the relational systems at the core of modern information systems. These new facilities integrate management of traditional fielded data, complex objects such as time-series and geospatial data, and diverse binary media such as audio, video, images, and applets. By encapsulating methods with data structures, an ORDBMS server can execute complex analytical and data manipulation operations to search and transform multimedia and other complex objects. As an evolutionary technology, the object/relational (OR) approach has inherited the robust transaction- and performance-management features of its relational ancestor and the flexibility of its object-oriented cousin. Database designers can work with familiar tabular structures and data definition languages (DDLs) while assimilating new object-management possibilities.

Query and procedural languages and call interfaces in ORDBMSs are familiar: SQL3, vendor procedural languages, and ODBC, JDBC, and proprietary call interfaces are all extensions of RDBMS languages and interfaces. And the leading vendors are, of course, quite well known: IBM, Informix, and Oracle.

Object-Oriented Model
Object DBMSs add database functionality to object programming languages. They bring much more than persistent storage of programming language objects. Object DBMSs extend the semantics of the C++, Smalltalk and Java object programming languages to provide full-featured database programming capability, while retaining native language compatibility. A major benefit of this approach is the unification of the application and database development into a seamless data model and language environment. As a result, applications require less code, use more natural data modeling, and code bases are easier to maintain. Object developers can write complete database applications with a modest amount of additional effort. According to Rao (1994), "The object-oriented database (OODB) paradigm is the combination of object-oriented programming language (OOPL) systems and persistent systems. The power of the OODB comes from the seamless treatment of both persistent data, as found in databases, and transient data, as found in executing programs." In contrast to a relational DBMS, where a complex data structure must be flattened out to fit into tables or joined together from those tables to form the in-memory structure, object DBMSs have no performance overhead to store or retrieve a web or hierarchy of interrelated objects. This one-to-one mapping of object programming language objects to database objects has two benefits over other storage approaches: it provides higher performance management of objects, and it enables better management of the complex interrelationships between objects. This makes object DBMSs better suited to support applications such as financial portfolio risk analysis systems, telecommunications service applications, world wide web document structures, design and manufacturing systems, and hospital patient record systems, which have complex relationships between data.

Semistructured Model
In the semistructured data model, the information that is normally associated with a schema is contained within the data, which is sometimes called "self-describing". In such a database there is no clear separation between the data and the schema, and the degree to which it is structured depends on the application. In some forms of semistructured data there is no separate schema, in others it exists but only places loose constraints on the data. Semistructured data is naturally modelled in terms of graphs which contain labels that give semantics to the underlying structure. Such databases subsume the modelling power of recent extensions of flat relational databases: to nested databases, which allow the nesting (or encapsulation) of entities, and to object databases, which, in addition, allow cyclic references between objects.

Semistructured data has recently emerged as an important topic of study for a variety of reasons. First, there are data sources such as the Web, which we would like to treat as databases but which cannot be constrained by a schema. Second, it may be desirable to have an extremely flexible format for data exchange between disparate databases. Third, even when dealing with structured data, it may be helpful to view it as semistructured for the purposes of browsing.

Associative Model
The associative model divides the real-world things about which data is to be recorded into two sorts:

- Entities are things that have discrete, independent existence. An entity's existence does not depend on any other thing.
- Associations are things whose existence depends on one or more other things, such that if any of those things ceases to exist, then the thing itself ceases to exist or becomes meaningless.

An associative database comprises two data structures:

1. A set of items, each of which has a unique identifier, a name and a type.
2. A set of links, each of which has a unique identifier, together with the unique identifiers of three other things, that represent the source, verb and target of a fact that is recorded about the source in the database. Each of the three things identified by the source, verb and target may be either a link or an item.
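A minimal SQL sketch of these two structures (the table and column names are illustrative only; actual associative DBMSs use their own storage formats):

CREATE TABLE items (
    item_id INTEGER PRIMARY KEY,  -- unique identifier
    name    VARCHAR(64) NOT NULL,
    type    VARCHAR(32) NOT NULL
);

CREATE TABLE links (
    link_id INTEGER PRIMARY KEY,  -- unique identifier
    source  INTEGER NOT NULL,     -- id of an item or of another link
    verb    INTEGER NOT NULL,     -- id of an item naming the relationship
    target  INTEGER NOT NULL      -- id of an item or of another link
);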

Entity-Attribute-Value (EAV) data model


The best way to understand the rationale of EAV design is to understand row modeling (of which EAV is a generalized form). Consider a supermarket database that must manage thousands of products and brands, many of which have a transitory existence. Here, it is intuitively obvious that product names should not be hard-coded as names of columns in tables. Instead, one stores product descriptions in a Products table: purchases/sales of individual items are recorded in other tables as separate rows with a product ID referencing this table. Conceptually an EAV design involves a single table with three columns: an entity (such as an olfactory receptor ID), an attribute (such as species, which is actually a pointer into the metadata table) and a value for the attribute (e.g. rat). In EAV design, one row stores a single fact. In a conventional table that has one column per attribute, by contrast, one row stores a set of facts. EAV design is appropriate when the number of parameters that potentially apply to an entity is vastly more than those that actually apply to an individual entity.
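A hedged sketch of such a single three-column table, using the olfactory receptor example from above (names and types are assumptions):

CREATE TABLE eav_facts (
    entity    INTEGER     NOT NULL,  -- e.g. an olfactory receptor ID
    attribute VARCHAR(64) NOT NULL,  -- in a full design, a pointer into a metadata table
    value     VARCHAR(255),          -- the value for the attribute
    PRIMARY KEY (entity, attribute)
);

-- One row stores a single fact:
INSERT INTO eav_facts (entity, attribute, value) VALUES (1001, 'species', 'rat');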

Context Model
The context data model combines features of all the above models. It can be considered as a collection of object-oriented, network and semistructured models, or as some kind of object database. In other words, this is a flexible model: you can use any type of database structure depending on the task. Such a data model has been implemented in the DBMS ConteXt.

The fundamental unit of information storage in ConteXt is a CLASS. A Class contains METHODS and describes an OBJECT. The Object contains FIELDS and a PROPERTY. A field may be composite, in which case the field contains SubFields, etc. The property is a set of fields that belongs to a particular Object (similar to an AVL database). In other words, fields are a permanent part of an Object but its Property is a variable part. The header of a Class contains the definition of the internal structure of the Object, which includes the description of each field, such as its type, length, attributes and name. The context data model has a set of predefined types as well as user-defined types. The predefined types include not only character strings, texts and digits but also pointers (references) and aggregate types (structures).

Types of Fields

A context model comprises three main data types: REGULAR, VIRTUAL and REFERENCE. A regular (local) field can be ATOMIC or COMPOSITE. The atomic field has no inner structure.

In contrast, a composite field may have a complex structure, and its type is described in the header of the Class. Composite fields are divided into STATIC and DYNAMIC. The type of a static composite field is stored in the header and is permanent. The description of the type of a dynamic composite field is stored within the Object and can vary from Object to Object.

Like a NETWORK database, apart from the fields containing information directly, a context database has fields storing a place where this information can be found, i.e. a POINTER (link, reference) which can point to an Object in this or another Class. Because the main addressable unit of a context database is an Object, the pointer points to an Object rather than to a field of that Object. Pointers are divided into STATIC and DYNAMIC. All pointers that belong to a particular static pointer type point to the same Class (albeit, possibly, to different Objects). In this case, the Class name is an integral part of that pointer type. A dynamic pointer type describes pointers that may refer to different Classes. The Class which may be linked through a pointer can reside on the same or any other computer on the local area network. There is no hierarchy between Classes and a pointer can link to any Class, including its own.

In contrast to pure object-oriented databases, context databases are not so tightly coupled to the programming language and do not support methods directly. Instead, method invocation is partially supported through the concept of VIRTUAL fields. A virtual field is like a regular field: it can be read or written into. However, this field is not physically stored in the database, and it does not have a type described in the schema. A read operation on a virtual field is intercepted by the DBMS, which invokes a method associated with the field, and the result produced by that method is returned. If no method is defined for the virtual field, the field will be blank. A METHOD is a subroutine written in C++ by an application programmer. Similarly, a write operation on a virtual field invokes an appropriate method, which can change the value of the field. The current value of a virtual field is maintained by a run-time process; it is not preserved between sessions. In object-oriented terms, virtual fields represent just two public methods: reading and writing. Experience shows, however, that this is often enough in practical applications. From the DBMS point of view, virtual fields provide a transparent interface to such methods via code written by the application programmer.

A context database that has no composite or pointer fields and no Property is essentially RELATIONAL. With static composite and pointer fields, a context database becomes OBJECT-ORIENTED. If the context database has only a Property, it is an ENTITY-ATTRIBUTE-VALUE database. With dynamic composite fields, a context database becomes what is now known as a SEMISTRUCTURED database. If the database has all the available types... then it is a ConteXt database!

1. Hierarchical Database Model: The hierarchical database model is an inverted tree-like structure. The tables of this model take on a child-parent relationship. Each child table has a single parent table, and each parent table can have multiple child tables. Child tables are completely dependent on parent tables; therefore, a child table can exist only if its parent table does. It follows that any entries in child tables can only exist where corresponding parent entries exist in parent tables. The result of this structure is that the hierarchical database model supports one-to-many relationships.

2. Network Database Model: The network database model is essentially a refinement of the hierarchical database model. The network model allows child tables to have more than one parent, thus creating a networked-like table structure. Multiple parent tables for each child allow for many-to-many relationships, in addition to one-to-many relationships. For example, there may be a many-to-many relationship between employees and tasks: an employee can be assigned many tasks, and a task can be assigned to many different employees. Thus, many employees have many tasks, and vice versa.

3. Relational Database Model: The relational database model improves on the restriction of a hierarchical structure, while not completely abandoning the hierarchy of data. Any table can be accessed directly without having to access all parent objects. The trick is to know what to look for: if you want to find the address of a specific employee, you have to know which employee to look for, or you can simply examine all employees. You don't have to search the entire hierarchy, from the company downward, to find a single employee. Another benefit of the relational database model is that any tables can be linked together, regardless of their hierarchical position. Obviously, there should be a sensible link between the two tables, but you are not restricted by a strict hierarchical structure; therefore, a table can be linked to both any number of parent tables and any number of child tables.

The Relational Data Model, Normalisation and effective Database Design

By Tony Marston
30th September 2004
Amended 12th August 2005

Introduction
What is a database?
The Hierarchical Data Model
The Network Data Model
The Relational Data Model
- The Relation
- Keys
- Relationships
- Relational Joins
- Lossless Joins
- Determinant and Dependent
- Functional Dependencies (FD)
- Transitive Dependencies (TD)
- Multi-Valued Dependencies (MVD)
- Join Dependencies (JD)
- Modification Anomalies
Types of Relational Join
- Inner Join
- Natural Join
- Left [Outer] Join
- Right [Outer] Join
- Full [Outer] Join
- Self Join
- Cross Join
Entity-Relationship Diagram (ERD)
Data Normalisation
- 1st Normal Form
- 2nd Normal Form
- 3rd Normal Form
- Boyce-Codd Normal Form
- 4th Normal Form
- 5th (Projection-Join) Normal Form
- 6th (Domain-Key) Normal Form
De-Normalisation
- Compound Fields
- Summary Fields
- Summary Tables
- Optional Attributes that exist as a group
Personal Guidelines
- Database Names
- Table Names
- Field Names
- Primary Keys
- Foreign Keys
- Generating Unique ids
Comments
- The choice between upper and lower case
- Field names should identify their content
- The naming of Foreign Keys
Amendment History

Introduction
I have been designing and building applications, including the databases used by those applications, for several decades now. I have seen similar problems approached by different designs, and this has given me the opportunity to evaluate the effectiveness of one design over another in providing solutions to those problems.

It may not seem obvious to a lot of people, but the design of the database is the heart of any system. If the design is wrong then the whole application will be wrong, either in effectiveness or performance, or even both. No amount of clever coding can compensate for a bad database design. Sometimes when building an application I may encounter a problem which can only be solved effectively by changing the database rather than by changing the code, so change the database is what I do. I may have to try several different designs before I find one that provides the most benefits and the least number of disadvantages, but that is what prototyping is all about.

The biggest problem I have encountered in all these years is where the database design and software development are handled by different teams. The database designers build something according to their rules, and they then expect the developers to write code around this design. This approach is often fraught with disaster as the database designers often have little or no development experience, so they have little or no understanding of how the development language can use that design to achieve the expected results. This happened on a project I worked on in the 1990s, and every time that we, the developers, hit a problem the response from the database designers was always the same: "Our design is perfect, so you will have to just code around it." So code around it we did, and not only were we not happy with the result, neither were the users as the entire system ran like a pig with a wooden leg.

In this article I will provide you with some tips on how I go about designing a database in the hope that you may learn from my experience. Note that I do not use any expensive modelling tools, just the Mark I Brain.

What is a database?
This may seem a pretty fundamental question, but unless you know what a database consists of you may find it difficult to build one that can be used effectively. Here is a simple definition of a database:
A database is a collection of information that is organised so that it can easily be accessed, managed, and updated.

A database engine may comply with a combination of any of the following:


- The database is a collection of tables, files or datasets.
- Each table is a collection of fields, columns or data items.
- One or more columns in each table may be selected as the primary key.
- There may be additional unique keys or non-unique indexes to assist in data retrieval.
- Columns may be fixed length or variable length.
- Records may be fixed length or variable length.
- Table and column names may be restricted in length (8, 16 or 32 characters).
- Table and column names may be case-sensitive.

Over the years there have been several different ways of constructing databases, amongst which have been the following:

The Hierarchical Data Model
The Network Data Model
The Relational Data Model

Although I will give a brief summary of the first two, the bulk of this document is concerned with The Relational Data Model as it is the most prevalent in today's world.

The Hierarchical Data Model


The Hierarchical Data Model structures data in a tree of records, with each record having one parent record and many children. It can be represented as follows:

Figure 1 - The Hierarchical Data Model

A hierarchical database consists of the following:


1. It contains nodes connected by branches.
2. The top node is called the root.
3. If multiple nodes appear at the top level, the nodes are called root segments.
4. The parent of node nx is a node directly above nx and connected to nx by a branch.
5. Each node (with the exception of the root) has exactly one parent.
6. The child of node nx is the node directly below nx and connected to nx by a branch.
7. One parent may have many children.

By introducing data redundancy, complex network structures can also be represented as hierarchical databases. This redundancy is eliminated in physical implementation by including a 'logical child'. The logical child contains no data but uses a set of pointers to direct the database management system to the physical child in which the data is actually stored. Associated with a logical child are a physical parent and a logical parent. The logical parent provides an alternative (and possibly more efficient) path to retrieve logical child information.

The Network Data Model


The Network Data Model uses a lattice structure in which a record can have many parents as well as many children. It can be represented as follows:

Figure 2 - The Network Data Model

Like the Hierarchical Data Model, the Network Data Model also consists of nodes and branches, but a child may have multiple parents within the network structure instead of being restricted to just one.

I have worked with both hierarchical and network databases, and they both suffered from the following deficiencies (when compared with relational databases):

- Access to the database was not via SQL query strings, but by a specific set of APIs, typically for FIND, CREATE, READ, UPDATE and DELETE.
- Each API would only access a single table (dataset), so it was not possible to implement a JOIN which would return data from several tables.
- It was not possible to provide a variable WHERE clause. The only selection mechanisms available were:
  - read all entries (a full table scan).
  - read a single entry using a specific primary key.
  - read all entries on a child table which were associated with a selected entry on a parent table.
  Any further filtering had to be done within the application code.
- It was not possible to provide an ORDER BY clause. Data was presented in the order in which it existed in the database. This mechanism could be tuned by specifying sort criteria to be used when each record was inserted, but this had several disadvantages:
  - Only a single sort sequence could be defined for each path (link to a parent), so all records retrieved on that path would be provided in that sequence.
  - It could make inserts rather slow when attempting to insert into the middle of a large collection, or where a table had multiple paths each with its own set of sort criteria.

The Relational Data Model


The Relational Data Model has the relation at its heart, but is then governed by a whole series of rules concerning keys, relationships, joins, functional dependencies, transitive dependencies, multi-valued dependencies, and modification anomalies.
The Relation

The Relation is the basic element in a relational data model.

Figure 3 - Relations in the Relational Data Model

A relation is subject to the following rules:


1. Relation (file, table) is a two-dimensional table.
2. Attribute (i.e. field or data item) is a column in the table.
3. Each column in the table has a unique name within that table.
4. Each column is homogeneous. Thus the entries in any column are all of the same type (e.g. age, name, employee-number, etc).
5. Each column has a domain, the set of possible values that can appear in that column.
6. A Tuple (i.e. record) is a row in the table.
7. The order of the rows and columns is not important.
8. Values of a row all relate to some thing or portion of a thing.
9. Repeating groups (collections of logically related attributes that occur multiple times within one record occurrence) are not allowed.
10. Duplicate rows are not allowed (candidate keys are designed to prevent this).
11. Cells must be single-valued (but can be variable length). Single-valued means the following:
   - Cannot contain multiple values such as 'A1,B2,C3'.
   - Cannot contain combined values such as 'ABC-XYZ' where 'ABC' means one thing and 'XYZ' another.

A relation may be expressed using the notation R(A,B,C, ...) where:

- R = the name of the relation.
- (A,B,C, ...) = the attributes within the relation.
- A = the attribute(s) which form the primary key.

Keys

1. A simple key contains a single attribute.
2. A composite key is a key that contains more than one attribute.
3. A candidate key is an attribute (or set of attributes) that uniquely identifies a row. A candidate key must possess the following properties:
   - Unique identification - For every row the value of the key must uniquely identify that row.
   - Non redundancy - No attribute in the key can be discarded without destroying the property of unique identification.
4. A primary key is the candidate key which is selected as the principal unique identifier. Every relation must contain a primary key. The primary key is usually the key selected to identify a row when the database is physically implemented. For example, a part number is selected instead of a part description.
5. A superkey is any set of attributes that uniquely identifies a row. A superkey differs from a candidate key in that it does not require the non redundancy property.
6. A foreign key is an attribute (or set of attributes) that appears (usually) as a non key attribute in one relation and as a primary key attribute in another relation. I say usually because it is possible for a foreign key to also be the whole or part of a primary key:
   - A many-to-many relationship can only be implemented by introducing an intersection or link table which then becomes the child in two one-to-many relationships. The intersection table therefore has a foreign key for each of its parents, and its primary key is a composite of both foreign keys.
   - A one-to-one relationship requires that the child table has no more than one occurrence for each parent, which can only be enforced by letting the foreign key also serve as the primary key.
7. A semantic or natural key is a key for which the possible values have an obvious meaning to the user or the data. For example, a semantic primary key for a COUNTRY entity might contain the value 'USA' for the occurrence describing the United States of America. The value 'USA' has meaning to the user.
8. A technical or surrogate or artificial key is a key for which the possible values have no obvious meaning to the user or the data. These are used instead of semantic keys for any of the following reasons (see the sketch after this list):
   - When the value in a semantic key is likely to be changed by the user, or can have duplicates. For example, on a PERSON table it is unwise to use PERSON_NAME as the key as it is possible to have more than one person with the same name, or the name may change such as through marriage.
   - When none of the existing attributes can be used to guarantee uniqueness. In this case adding an attribute whose value is generated by the system, e.g. from a sequence of numbers, is the only way to provide a unique value. Typical examples would be ORDER_ID and INVOICE_ID. The value '12345' has no meaning to the user as it conveys nothing about the entity to which it relates.
9. A key functionally determines the other attributes in the row, thus it is always a determinant.
10. Note that the term 'key' in most DBMS engines is implemented as an index which does not allow duplicate entries.
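To illustrate points 7 and 8, here is a hedged DDL sketch of a COUNTRY table that combines a technical (surrogate) primary key with a semantic candidate key; the column names and types are mine, not the author's:

CREATE TABLE country (
    country_id   INTEGER     PRIMARY KEY,      -- surrogate key, e.g. generated from a sequence
    country_code CHAR(3)     NOT NULL UNIQUE,  -- semantic candidate key, e.g. 'USA'
    country_name VARCHAR(64) NOT NULL
);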

Relationships

One table (relation) may be linked with another in what is known as a relationship. Relationships may be built into the database structure to facilitate the operation of relational joins at runtime.
1. A relationship is between two tables in what is known as a one-to-many or parent-child or master-detail relationship, where an occurrence on the 'one' or 'parent' or 'master' table may have any number of associated occurrences on the 'many' or 'child' or 'detail' table. To achieve this the child table must contain fields which link back to the primary key on the parent table. These fields on the child table are known as a foreign key, and the parent table is referred to as the foreign table (from the viewpoint of the child).
2. It is possible for a record on the parent table to exist without corresponding records on the child table, but it should not be possible for an entry on the child table to exist without a corresponding entry on the parent table.
3. A child record without a corresponding parent record is known as an orphan.
4. It is possible for a table to be related to itself. For this to be possible it needs a foreign key which points back to the primary key. Note that these two keys cannot be comprised of exactly the same fields otherwise the record could only ever point to itself.
5. A table may be the subject of any number of relationships, and it may be the parent in some and the child in others.
6. Some database engines allow a parent table to be linked via a candidate key, but if this were changed it could result in the link to the child table being broken.
7. Some database engines allow relationships to be managed by rules known as referential integrity or foreign key constraints. These will prevent entries on child tables from being created if the foreign key does not exist on the parent table, or will deal with entries on child tables when the entry on the parent table is updated or deleted (see the sketch below).
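As a sketch of point 7, referential integrity can be declared when the tables are created (generic names; the exact ON UPDATE/ON DELETE options available vary by DBMS):

CREATE TABLE parent_table (
    parent_id INTEGER PRIMARY KEY
);

CREATE TABLE child_table (
    child_id  INTEGER PRIMARY KEY,
    parent_id INTEGER NOT NULL
        REFERENCES parent_table (parent_id)  -- the foreign key
        ON UPDATE CASCADE                    -- follow changes to the parent's key
        ON DELETE RESTRICT                   -- refuse to delete a parent that still has children
);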

Relational Joins

The join operator is used to combine data from two or more relations (tables) in order to satisfy a particular query. Two relations may be joined when they share at least one common attribute. The join is implemented by considering each row in an instance of each relation. A row in relation R1 is joined to a row in relation R2 when the value of the common attribute(s) is equal in the two relations. The join of two relations is often called a binary join. The join of two relations creates a new relation. The notation R1 x R2 indicates the join of relations R1 and R2. For example, consider the following:

Relation R1

A  B  C
1  5  3
2  4  5
8  3  5
9  3  3
1  6  5
5  4  3
2  7  5

Relation R2

B  D  E
4  7  4
6  2  3
5  7  8
7  2  3
3  2  2

Note that the instances of relations R1 and R2 contain the same data values for attribute B. Data normalisation is concerned with decomposing a relation (e.g. R(A,B,C,D,E)) into smaller relations (e.g. R1 and R2). The data values for attribute B in this context will be identical in R1 and R2. The instances of R1 and R2 are projections of the instances of R(A,B,C,D,E) onto the attributes (A,B,C) and (B,D,E) respectively. A projection will not eliminate data values; duplicate rows are removed, but this will not remove a data value from any attribute. The join of relations R1 and R2 is possible because B is a common attribute. The result of the join is:
Relation R1 x R2

A  B  C  D  E
1  5  3  7  8
2  4  5  7  4
8  3  5  2  2
9  3  3  2  2
1  6  5  2  3
5  4  3  7  4
2  7  5  2  3

The row (2 4 5 7 4) was formed by joining the row (2 4 5) from relation R1 to the row (4 7 4) from relation R2. The two rows were joined since each contained the same value for the common attribute B. The row (2 4 5) was not joined to the row (6 2 3) since the values of the common attribute (4 and 6) are not the same. The relations joined in the preceding example shared exactly one common attribute. However, relations may share multiple common attributes. All of these common attributes must be used in creating a join. For example, the instances of relations R1 and R2 in the following example are joined using the common attributes B and C: Before the join:
Relation R1

A  B  C
6  1  4
8  1  4
5  1  2
2  7  1

Relation R2

B  C  D
1  4  9
1  4  2
1  2  1
7  1  2
7  1  3

After the join:

Relation R1 x R2

A  B  C  D
6  1  4  9
6  1  4  2
8  1  4  9
8  1  4  2
5  1  2  1
2  7  1  2
2  7  1  3

The row (6 1 4 9) was formed by joining the row (6 1 4) from relation R1 to the row (1 4 9) from relation R2. The join was created since the common set of attributes (B and C) contained identical values (1 and 4). The row (6 1 4) from R1 was not joined to the row (1 2 1) from R2 since the common attributes did not share identical values - (1 4) in R1 and (1 2) in R2. The join operation provides a method for reconstructing a relation that was decomposed into two relations during the normalisation process. The join of two rows, however, can create a new row that was not a member of the original relation. Thus invalid information can be created during the join process.
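Expressed in SQL these example joins might look as follows (a sketch; R1 and R2 here stand for whichever pair of tables is being joined):

-- First example: one common attribute, B
SELECT R1.A, R1.B, R1.C, R2.D, R2.E
FROM R1
INNER JOIN R2 ON R2.B = R1.B;

-- Second example: all common attributes, B and C, must be used
SELECT R1.A, R1.B, R1.C, R2.D
FROM R1
INNER JOIN R2 ON R2.B = R1.B
             AND R2.C = R1.C;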
Lossless Joins

A set of relations satisfies the lossless join property if the instances can be joined without creating invalid data (i.e. new rows). The term lossless join may be somewhat confusing. A join that is not lossless will contain extra, invalid rows. A join that is lossless will not contain extra, invalid rows. Thus the term gainless join might be more appropriate. To give an example of incorrect information created by an invalid join let us take the following data structure:

R(student, course, instructor, hour, room, grade)

Assuming that only one section of a class is offered during a semester we can define the following functional dependencies:
1. (HOUR, ROOM) → COURSE
2. (COURSE, STUDENT) → GRADE
3. (INSTRUCTOR, HOUR) → ROOM
4. (COURSE) → INSTRUCTOR
5. (HOUR, STUDENT) → ROOM

Take the following sample data:


STUDENT  COURSE   INSTRUCTOR  HOUR  ROOM  GRADE
Smith    Math 1   Jenkins     8:00  100   A
Jones    English  Goldman     8:00  200   B
Brown    English  Goldman     8:00  200   C
Green    Algebra  Jenkins     9:00  400   A

The following four relations, each in 4th normal form, can be generated from the given and implied dependencies:

R1(STUDENT, HOUR, COURSE)
R2(STUDENT, COURSE, GRADE)
R3(COURSE, INSTRUCTOR)
R4(INSTRUCTOR, HOUR, ROOM)

Note that the dependencies (HOUR, ROOM) → COURSE and (HOUR, STUDENT) → ROOM are not explicitly represented in the preceding decomposition. The goal is to develop relations in 4th normal form that can be joined to answer any ad hoc inquiries correctly. This goal can be achieved without representing every functional dependency as a relation. Furthermore, several sets of relations may satisfy the goal. The preceding sets of relations can be populated as follows:
R1

STUDENT  HOUR  COURSE
Smith    8:00  Math 1
Jones    8:00  English
Brown    8:00  English
Green    9:00  Algebra

R2

STUDENT  COURSE   GRADE
Smith    Math 1   A
Jones    English  B
Brown    English  C
Green    Algebra  A

R3

COURSE   INSTRUCTOR
Math 1   Jenkins
English  Goldman
Algebra  Jenkins

R4

INSTRUCTOR  HOUR  ROOM
Jenkins     8:00  100
Goldman     8:00  200
Jenkins     9:00  400

Now suppose that a list of courses with their corresponding room numbers is required. Relations R1 and R4 contain the necessary information and can be joined using the attribute HOUR. The result of this join is:
R1 x R4

STUDENT  COURSE   INSTRUCTOR  HOUR  ROOM
Smith    Math 1   Jenkins     8:00  100
Smith    Math 1   Goldman     8:00  200
Jones    English  Jenkins     8:00  100
Jones    English  Goldman     8:00  200
Brown    English  Jenkins     8:00  100
Brown    English  Goldman     8:00  200
Green    Algebra  Jenkins     9:00  400

This join creates the following invalid information (the rows that did not exist in the original data):

- Smith, Jones, and Brown take the same class at the same time from two different instructors in two different rooms.
- Jenkins (the Maths teacher) teaches English.
- Goldman (the English teacher) teaches Maths.
- Both instructors teach different courses at the same time.

Another possibility for a join is R3 and R4 (joined on INSTRUCTOR). The result would be:
R3 x R4

COURSE   INSTRUCTOR  HOUR  ROOM
Math 1   Jenkins     8:00  100
Math 1   Jenkins     9:00  400
English  Goldman     8:00  200
Algebra  Jenkins     8:00  100
Algebra  Jenkins     9:00  400

This join creates the following invalid information:

Jenkins teaches Math 1 and Algebra simultaneously at both 8:00 and 9:00.

A correct sequence is to join R1 and R3 (using COURSE) and then join the resulting relation with R4 (using both INSTRUCTOR and HOUR). The result would be:
R1 x R3

STUDENT  COURSE   INSTRUCTOR  HOUR
Smith    Math 1   Jenkins     8:00
Jones    English  Goldman     8:00
Brown    English  Goldman     8:00
Green    Algebra  Jenkins     9:00

(R1 x R3) x R4

STUDENT  COURSE   INSTRUCTOR  HOUR  ROOM
Smith    Math 1   Jenkins     8:00  100
Jones    English  Goldman     8:00  200
Brown    English  Goldman     8:00  200
Green    Algebra  Jenkins     9:00  400

Extracting the COURSE and ROOM attributes (and eliminating the duplicate row produced for the English course) would yield the desired result:
COURSE   ROOM
Math 1   100
English  200
Algebra  400

The correct result is obtained since the sequence (R1 x R3) x R4 satisfies the lossless (gainless?) join property. A relational database is in 4th normal form when the lossless join property can be used to answer unanticipated queries. However, the choice of joins must be evaluated carefully. Many different sequences of joins will recreate an instance of a relation. Some sequences are more desirable since they result in the creation of less invalid data during the join operation. Suppose that a relation is decomposed using functional dependencies and multi-valued dependencies. Then at least one sequence of joins on the resulting relations exists that recreates the original instance with no invalid data created during any of the join operations.
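In SQL the correct sequence for the course/room list might be sketched as follows, with DISTINCT removing the duplicate row produced for the English course:

SELECT DISTINCT R1.COURSE, R4.ROOM
FROM R1
INNER JOIN R3 ON R3.COURSE = R1.COURSE         -- R1 x R3, joined using COURSE
INNER JOIN R4 ON R4.INSTRUCTOR = R3.INSTRUCTOR
             AND R4.HOUR = R1.HOUR;            -- then x R4, using INSTRUCTOR and HOUR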

For example, suppose that a list of grades by room number is desired. This question, which was probably not anticipated during database design, can be answered without creating invalid data by either of the following two join sequences:
R1 x R3
(R1 x R3) x R2
((R1 x R3) x R2) x R4

or

R1 x R3
(R1 x R3) x R4
((R1 x R3) x R4) x R2

The required information is contained within relations R2 and R4, but these relations cannot be joined directly. In this case the solution requires joining all 4 relations.

The database may require a 'lossless join' relation, which is constructed to assure that any ad hoc inquiry can be answered with relational operators. This relation may contain attributes that are not logically related to each other. This occurs because the relation must serve as a bridge between the other relations in the database. For example, the lossless join relation will contain all attributes that appear only on the left side of a functional dependency. Other attributes may also be required, however, in developing the lossless join relation.

Consider relational schema R(A, B, C, D) with A → B and C → D. Relations R1(A, B) and R2(C, D) are in 4th normal form. A third relation R3(A, C), however, is required to satisfy the lossless join property. This relation can be used to join attributes B and D. This is accomplished by joining relations R1 and R3 and then joining the result to relation R2. No invalid data is created during these joins. The relation R3(A, C) is the lossless join relation for this database design.

A relation is usually developed by combining attributes about a particular subject or entity. The lossless join relation, however, is developed to represent a relationship among various relations. The lossless join relation may be difficult to populate initially and difficult to maintain - a result of including attributes that are not logically associated with each other. The attributes within a lossless join relation often contain multi-valued dependencies. Consideration of 4th normal form is important in this situation. The lossless join relation can sometimes be decomposed into smaller relations by eliminating the multi-valued dependencies. These smaller relations are easier to populate and maintain.

Determinant and Dependent

The terms determinant and dependent can be described as follows:


1. The expression X → Y means 'if I know the value of X, then I can obtain the value of Y' (in a table or somewhere).
2. In the expression X → Y, X is the determinant and Y is the dependent attribute.
3. The value X determines the value of Y.
4. The value Y depends on the value of X.

Functional Dependencies (FD)

A functional dependency can be described as follows:


1. An attribute is functionally dependent if its value is determined by another attribute which is a key.
2. That is, if we know the value of one (or several) data items, then we can find the value of another (or several).
3. Functional dependencies are expressed as X → Y, where X is the determinant and Y is the functionally dependent attribute.
4. If A → (B,C) then A → B and A → C.
5. If (A,B) → C, then it is not necessarily true that A → C and B → C.
6. If A → B and B → A, then A and B are in a 1:1 relationship.
7. If A → B then for A there can only ever be one value for B.

Transitive Dependencies (TD)

A transitive dependency can be described as follows:


1. An attribute is transitively dependent if its value is determined by another attribute which is not a key.
2. If X → Y and X is not a key then this is a transitive dependency.
3. A transitive dependency exists when A → B → C but NOT A → C.

Multi-Valued Dependencies (MVD)

A multi-valued dependency can be described as follows:


1. A table involves a multi-valued dependency if it may contain multiple values for an entity.
2. A multi-valued dependency may arise as a result of enforcing 1st normal form.
3. X →→ Y, i.e. X multi-determines Y, when for each value of X we can have more than one value of Y.
4. If A →→ B and A →→ C then we have a single attribute A which multi-determines two other independent attributes, B and C.
5. If A →→ (B,C) then we have an attribute A which multi-determines a set of associated attributes, B and C.

Join Dependencies (JD)

A join dependency can be described as follows:


1. If a table can be decomposed into three or more smaller tables, it must be capable of being joined again on common keys to form the original table.

Modification Anomalies

A major objective of data normalisation is to avoid modification anomalies. These come in two flavours:
1. An insertion anomaly is a failure to place information about a new database entry into all the places in the database where information about that new entry needs to be stored. In a properly normalized database, information about a new entry needs to be inserted into only one place in the database. In an inadequately normalized database, information about a new entry may need to be inserted into more than one place, and, human fallibility being what it is, some of the needed additional insertions may be missed. 2. A deletion anomaly is a failure to remove information about an existing database entry when it is time to remove that entry. In a properly normalized database, information about an old, to-be-gotten-rid-of entry needs to be deleted from only one place in the database. In an inadequately normalized database, information about that old entry may need to be deleted from more than one place, and, human fallibility being what it is, some of the needed additional deletions may be missed.

An update of a database involves modifications that may be additions, deletions, or both. Thus 'update anomalies' can be either of the kinds of anomalies discussed above. All three kinds of anomalies are highly undesirable, since their occurrence constitutes corruption of the database. Properly normalised databases are much less susceptible to corruption than are unnormalised databases.

Types of Relational Join


A JOIN is a method of creating a result set that combines rows from two or more tables (relations). When comparing the contents of two tables the following conditions may occur:

- Every row in one relation has a match in the other relation.
- Relation R1 contains rows that have no match in relation R2.
- Relation R2 contains rows that have no match in relation R1.

INNER joins contain only matches. OUTER joins may contain mismatches as well.
Inner Join

This is sometimes known as a simple join. It returns all rows from both tables where there is a match. If there are rows in R1 which do not have matches in R2, those rows will not be listed. There are two possible ways of specifying this type of join:
SELECT * FROM R1, R2 WHERE R1.r1_field = R2.r2_field;

SELECT * FROM R1 INNER JOIN R2 ON R1.r1_field = R2.r2_field;

If the fields to be matched have the same names in both tables then the ON condition, as in:
ON R1.fieldname = R2.fieldname
ON (R1.field1 = R2.field1 AND R1.field2 = R2.field2)

can be replaced by the shorter USING condition, as in:


USING fieldname
USING (field1, field2)

Natural Join

A natural join is based on all columns in the two tables that have the same name. It is semantically equivalent to an INNER JOIN or a LEFT JOIN with a USING clause that names all columns that exist in both tables.
SELECT * FROM R1 NATURAL JOIN R2

The alternative is a keyed join which includes an ON or USING condition.


Left [Outer] Join

Returns all the rows from R1 even if there are no matches in R2. If there are no matches in R2 then the R2 values will be shown as null.
SELECT * FROM R1 LEFT [OUTER] JOIN R2 ON R1.field = R2.field

Right [Outer] Join

Returns all the rows from R2 even if there are no matches in R1. If there are no matches in R1 then the R1 values will be shown as null.
SELECT * FROM R1 RIGHT [OUTER] JOIN R2 ON R1.field = R2.field

Full [Outer] Join

Returns all the rows from both tables even if there are no matches in one of the tables. If there are no matches in one of the tables then its values will be shown as null.
SELECT * FROM R1 FULL [OUTER] JOIN R2 ON R1.field = R2.field

Self Join

This joins a table to itself. This table appears twice in the FROM clause and is followed by table aliases that qualify column names in the join condition.
SELECT a.field1, b.field2 FROM R1 a, R1 b WHERE a.field = b.field

Cross Join

This type of join is rarely used as it does not have a join condition, so every row of R1 is joined to every row of R2. For example, if both tables contain 100 rows the result will be 10,000 rows. This is sometimes known as a cartesian product and can be specified in either one of the following ways:
SELECT * FROM R1 CROSS JOIN R2
SELECT * FROM R1, R2

Entity-Relationship Diagram (ERD)


An entity-relationship diagram (ERD) is a data modeling technique that creates a graphical representation of the entities, and the relationships between entities, within an information system. Any ER diagram has an equivalent relational table, and any relational table has an equivalent ER diagram. ER diagramming is an invaluable aid to engineers in the design, optimization, and debugging of database programs.

The entity is a person, object, place or event for which data is collected. It is equivalent to a database table. An entity can be defined by means of its properties, called attributes. For example, the CUSTOMER entity may have attributes for such things as name, address and telephone number.

The relationship is the interaction between the entities. It can be described using a verb such as:

- A customer places an order.
- A sales rep serves a customer.
- An order contains a product.
- A warehouse stores a product.

In an entity-relationship diagram entities are rendered as rectangles, and relationships are portrayed as lines connecting the rectangles. One way of indicating which is the 'one' or 'parent' and which is the 'many' or 'child' in the relationship is to use an arrowhead, as in figure 4.

Figure 4 - One-to-Many relationship using arrowhead notation

This can produce an ERD as shown in figure 5:

Figure 5 - ERD with arrowhead notation

Another method is to replace the arrowhead with a crowsfoot, as shown in figure 6:

Figure 6 - One-to-Many relationship using crowsfoot notation

The relating line can be enhanced to indicate cardinality which defines the relationship between the entities in terms of numbers. An entity may be optional (zero or more) or it may be mandatory (one or more).

- A single bar indicates one.
- A double bar indicates one and only one.
- A circle indicates zero.
- A crowsfoot or arrowhead indicates many.

As well as using lines and circles the cardinality can be expressed using numbers, as in:

One-to-One expressed as 1:1
Zero-to-Many expressed as 0:M
One-to-Many expressed as 1:M
Many-to-Many expressed as N:M

This can produce an ERD as shown in figure 7:

Figure 7 - ERD with crowsfoot notation and cardinality

In plain language the relationships can be expressed as follows:


1 instance of a SALES REP serves 1 to many CUSTOMERS
1 instance of a CUSTOMER places 1 to many ORDERS
1 instance of an ORDER lists 1 to many PRODUCTS
1 instance of a WAREHOUSE stores 0 to many PRODUCTS

In order to determine if a particular design is correct here is a simple test that I use:
1. Take the written rules and construct a diagram.
2. Take the diagram and try to reconstruct the written rules.

If the output from step (2) is not the same as the input to step (1) then something is wrong. If the model allows a situation to exist which is not allowed in the real world then this could lead to serious problems. The model must be an accurate representation of the real world in order to be effective. If any ambiguities are allowed to creep in they could have disastrous consequences.

We have now completed the logical data model, but before we can construct the physical database there are several steps that must take place:

- Assign attributes (properties or values) to all the entities. After all, a table without any columns will be of little use to anyone.
- Refine the model using a process known as 'normalisation'. This ensures that each attribute is in the right place. During this process it may be necessary to create new tables and new relationships.

Data Normalisation
Relational database theory, and the principles of normalisation, were first constructed by people with a strong mathematical background. They wrote about databases using terminology which was not easily understood outside those mathematical circles. Below is an attempt to provide understandable explanations. Data normalisation is a set of rules and techniques concerned with:

- Identifying relationships among attributes.
- Combining attributes to form relations.
- Combining relations to form a database.

It follows a set of rules worked out by E F Codd in 1970. A normalised relational database provides several benefits:

- Elimination of redundant data storage.
- Close modeling of real world entities, processes, and their relationships.
- Structuring of data so that the model is flexible.

Because the principles of normalisation were first written using the same terminology as was used to define the relational data model this led some people to think that normalisation is difficult. Nothing could be more untrue. The principles of normalisation are simple, common sense ideas that are easy to apply. Although there are numerous steps in the normalisation process - 1NF, 2NF, 3NF, BCNF, 4NF, 5NF and DKNF - a lot of database designers often find it unnecessary to go beyond 3rd Normal Form. This does not mean that those higher forms are unimportant, just that the circumstances for which they were designed often do not exist within a particular database. However, all database designers should be aware of all the forms of normalisation so that they may be in a better position to detect when a particular rule of normalisation is broken and then decide if it is necessary to take appropriate action. The guidelines for developing relations in 3rd Normal Form can be summarised as follows:
1. Define the attributes.
2. Group logically related attributes into relations.
3. Identify candidate keys for each relation.
4. Select a primary key for each relation.
5. Identify and remove repeating groups.
6. Combine relations with identical keys (1st normal form).
7. Identify all functional dependencies.
8. Decompose relations such that each non key attribute is dependent on all the attributes in the key.
9. Combine relations with identical primary keys (2nd normal form).
10. Identify all transitive dependencies.
   - Check relations for dependencies of one non key attribute with another non key attribute.
   - Check for dependencies within each primary key (i.e. dependencies of one attribute in the key on other attributes within the key).
11. Decompose relations such that there are no transitive dependencies.
12. Combine relations with identical primary keys (3rd normal form) if there are no transitive dependencies.

1st Normal Form

A table is in first normal form if all the key attributes have been defined and it contains no repeating groups.

Taking the ORDER entity in figure 7 as an example we could end up with a set of attributes like this:
ORDER

order_id  customer_id  product1  product2  product3
123       456          abc1      def1      ghi1
456       789          abc2

This structure creates the following problems:


- Order 123 has no room for more than 3 products.
- Order 456 has wasted space for product2 and product3.

In order to create a table that is in first normal form we must extract the repeating groups and place them in a separate table, which I shall call ORDER_LINE.
ORDER

order_id  customer_id
123       456
456       789

I have removed 'product1', 'product2' and 'product3', so there are no repeating groups.
ORDER_LINE

order_id  product
123       abc1
123       def1
123       ghi1
456       abc2

Each row contains one product for one order, so this allows an order to contain any number of products. This results in a new version of the ERD, as shown in figure 8:

Figure 8 - ERD with ORDER and ORDER_LINE

The new relationships can be expressed as follows:


1 instance of an ORDER has 1 to many ORDER LINES
1 instance of a PRODUCT has 0 to many ORDER LINES
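A hedged DDL sketch of this decomposition (the column types are assumptions, and the ORDER table is named orders here because ORDER is a reserved word in SQL):

CREATE TABLE orders (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL
);

CREATE TABLE order_line (
    order_id INTEGER     NOT NULL REFERENCES orders (order_id),
    product  VARCHAR(10) NOT NULL,
    PRIMARY KEY (order_id, product)  -- one row per product per order, so no repeating groups
);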

2nd Normal Form

A table is in second normal form (2NF) if and only if it is in 1NF and every non key attribute is fully functionally dependent on the whole of the primary key (i.e. there are no partial dependencies).

1. Anomalies can occur when attributes are dependent on only part of a multi-attribute (composite) key.
2. A relation is in second normal form when all non-key attributes are dependent on the whole key. That is, no attribute is dependent on only a part of the key.
3. Any relation having a key with a single attribute is in second normal form.

Take the following table structure as an example:


order(order_id, cust, cust_address, cust_contact, order_date, order_total)

Here we should realise that cust_address and cust_contact are functionally dependent on cust but not on order_id, therefore they are not dependent on the whole key. To make this table 2NF these attributes must be removed and placed somewhere else.
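One possible decomposition, sketched in DDL with assumed column types:

-- cust_address and cust_contact depend on cust alone, so they move to their own table:
CREATE TABLE customer (
    cust         INTEGER PRIMARY KEY,
    cust_address VARCHAR(100),
    cust_contact VARCHAR(60)
);

CREATE TABLE orders (
    order_id    INTEGER PRIMARY KEY,
    cust        INTEGER NOT NULL REFERENCES customer (cust),
    order_date  DATE,
    order_total DECIMAL(10,2)
);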
3rd Normal Form

A table is in third normal form (3NF) if and only if it is in 2NF and every non key attribute is non transitively dependent on the primary key (i.e. there are no transitive dependencies).

1. Anomalies can occur when a relation contains one or more transitive dependencies.
2. A relation is in 3NF when it is in 2NF and has no transitive dependencies.
3. A relation is in 3NF when 'All non-key attributes are dependent on the key, the whole key and nothing but the key'.

Take the following table structure as an example:

order(order_id, cust, cust_address, cust_contact, order_date, order_total)

Here we should realise that cust_address and cust_contact are functionally dependent on cust, which is not a key. To make this table 3NF these attributes must be removed and placed somewhere else.

You must also note the use of calculated or derived fields. Take the example where a table contains PRICE, QUANTITY and EXTENDED_PRICE, where EXTENDED_PRICE is calculated as QUANTITY multiplied by PRICE. As one of these values can be calculated from the other two it need not be held in the database table. Do not assume that it is safe to drop any one of the three fields, however, as a difference in the number of decimal places between the various fields could lead to different results due to rounding errors. For example, take the following fields:

o AMOUNT - a monetary value in home currency, to 2 decimal places.
o EXCH_RATE - exchange rate, to 9 decimal places.
o CURRENCY_AMOUNT - amount expressed in foreign currency, calculated as AMOUNT multiplied by EXCH_RATE.

If you were to drop EXCH_RATE could it be calculated back to its original 9 decimal places? Reaching 3NF is adequate for most practical needs, but there may be circumstances which would benefit from further normalisation.
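Where it is safe to derive a value, it can simply be computed at retrieval time instead of being stored. As a small illustration (the table and column names here are hypothetical):

-- Derive the extended price in the query rather than storing it.
SELECT price,
       quantity,
       price * quantity AS extended_price
  FROM order_line;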
Boyce-Codd Normal Form

A table is in Boyce-Codd normal form (BCNF) if and only if it is in 3NF and every determinant is a candidate key.

1. Anomalies can occur in relations in 3NF if there is a composite key in which part of that key has a determinant which is not itself a candidate key.
2. This can be expressed as R(A,B,C), C → A where:
   o The relation contains attributes A, B and C.
   o A and B form a candidate key.
   o C is the determinant for A (A is functionally dependent on C).
   o C is not part of any key.
3. Anomalies can also occur where a relation contains several candidate keys where:
   o The keys contain more than one attribute (they are composite keys).
   o An attribute is common to more than one key.

Take the following table structure as an example:

schedule(campus, course, class, time, room/bldg)

Take the following sample data:

campus  course       class  time         room/bldg
East    English 101  1      8:00-9:00    212 AYE
East    English 101  2      10:00-11:00  305 RFK
West    English 101  3      8:00-9:00    102 PPR

Note that no two buildings on any of the university campuses have the same name, thus ROOM/BLDG → CAMPUS. As this determinant is not a candidate key this table is NOT in Boyce-Codd normal form. This table should be decomposed into the following relations:

R1(course, class, room/bldg, time)
R2(room/bldg, campus)
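A sketch of this decomposition in SQL (a sketch only: the types are assumed, and room/bldg and time are renamed room_bldg and class_time because '/' is not valid in an identifier and TIME is a reserved word):

-- A sketch of the BCNF decomposition of the schedule relation.
CREATE TABLE class_schedule (
    course     CHAR(20) NOT NULL,
    class      INTEGER  NOT NULL,
    room_bldg  CHAR(10) NOT NULL,
    class_time CHAR(11) NOT NULL,
    PRIMARY KEY (course, class)
);

CREATE TABLE building (
    room_bldg CHAR(10) NOT NULL PRIMARY KEY,
    campus    CHAR(10) NOT NULL
);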

As another example take the following structure:

enrol(student#, s_name, course#, c_name, date_enrolled)

This table has the following candidate keys:

o (student#, course#)
o (student#, c_name)
o (s_name, course#) - this assumes that s_name is a unique identifier
o (s_name, c_name) - this assumes that c_name is a unique identifier

The relation is in 3NF but not in BCNF because of the following dependencies:

o student# → s_name
o course# → c_name

4th Normal Form

A table is in fourth normal form (4NF) if and only if it is in BCNF and contains no more than one multi-valued dependency.

1. Anomalies can occur in relations in BCNF if there is more than one multi-valued dependency.
2. If A →→ B and A →→ C but B and C are unrelated, i.e. A →→ (B,C) is false, then we have more than one multi-valued dependency.
3. A relation is in 4NF when it is in BCNF and has no more than one multi-valued dependency.

Take the following table structure as an example:

info(employee#, skills, hobbies)

Take the following sample data:

employee#  skills       hobbies
1          Programming  Golf
1          Programming  Bowling
1          Analysis     Golf
1          Analysis     Bowling
2          Analysis     Golf
2          Analysis     Gardening
2          Management   Golf
2          Management   Gardening

This table is difficult to maintain since adding a new hobby requires multiple new rows corresponding to each skill. This problem is created by the pair of multi-valued dependencies EMPLOYEE# →→ SKILLS and EMPLOYEE# →→ HOBBIES. A much better alternative would be to decompose INFO into two relations:

skills(employee#, skill)
hobbies(employee#, hobby)
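In SQL this might look as follows (a sketch only: employee# is renamed employee_id because '#' is not valid in a standard identifier, and the types are assumed):

-- A sketch of the 4NF decomposition: skills and hobbies vary
-- independently, so each multi-valued fact gets its own table.
CREATE TABLE skills (
    employee_id INTEGER  NOT NULL,
    skill       CHAR(20) NOT NULL,
    PRIMARY KEY (employee_id, skill)
);

CREATE TABLE hobbies (
    employee_id INTEGER  NOT NULL,
    hobby       CHAR(20) NOT NULL,
    PRIMARY KEY (employee_id, hobby)
);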
5th (Projection-Join) Normal Form

A table is in fifth normal form (5NF) or Projection-Join Normal Form (PJNF) if it is in 4NF and it cannot have a lossless decomposition into any number of smaller tables.

Another way of expressing this is:

... and each join dependency is a consequence of the candidate keys.

Yet another way of expressing this is:

... and there are no pairwise cyclical dependencies in the primary key comprised of three or more attributes.

Anomalies can occur in relations in 4NF if the primary key has three or more fields. 5NF is based on the concept of join dependence - if a relation cannot be decomposed any further then it is in 5NF. Pairwise cyclical dependency means that:

o You always need to know two values (pairwise).
o For any one you must know the other two (cyclical).

Take the following table structure as an example:

buying(buyer, vendor, item)

This is used to track buyers, what they buy, and from whom they buy. Take the following sample data:

buyer  vendor         item
Sally  Liz Claiborne  Blouses
Mary   Liz Claiborne  Blouses
Sally  Jordach        Jeans
Mary   Jordach        Jeans
Sally  Jordach        Sneakers

The question is, what do you do if Claiborne starts to sell Jeans? How many records must you create to record this fact? The problem is that there are pairwise cyclical dependencies in the primary key. That is, in order to determine the item you must know the buyer and vendor, to determine the vendor you must know the buyer and the item, and finally to know the buyer you must know the vendor and the item. The solution is to break this one table into three tables: Buyer-Vendor, Buyer-Item, and Vendor-Item.
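Sketched in SQL (the table names and column types are my own assumptions):

-- A sketch of the 5NF decomposition of buying(buyer, vendor, item).
CREATE TABLE buyer_vendor (
    buyer  CHAR(20) NOT NULL,
    vendor CHAR(20) NOT NULL,
    PRIMARY KEY (buyer, vendor)
);

CREATE TABLE buyer_item (
    buyer CHAR(20) NOT NULL,
    item  CHAR(20) NOT NULL,
    PRIMARY KEY (buyer, item)
);

CREATE TABLE vendor_item (
    vendor CHAR(20) NOT NULL,
    item   CHAR(20) NOT NULL,
    PRIMARY KEY (vendor, item)
);

With this arrangement the fact that Claiborne sells Jeans is recorded with a single row in vendor_item.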
6th (Domain-Key) Normal Form

A table is in sixth normal form (6NF) or Domain-Key Normal Form (DKNF) if it is in 5NF and if all constraints and dependencies that should hold on the relation can be enforced simply by enforcing the domain constraints and the key constraints specified on the relation.

Another way of expressing this is:

... if every constraint on the table is a logical consequence of the definition of keys and domains.

1. A domain constraint (better called an attribute constraint) is simply a constraint to the effect that a given attribute A of R takes its values from some given domain D.
2. A key constraint is simply a constraint to the effect that a given set A, B, ..., C of attributes of R constitutes a key for R.

This standard was proposed by Ron Fagin in 1981, but interestingly enough he made no mention of multi-valued dependencies, join dependencies, or functional dependencies in his paper, and did not demonstrate how to achieve DKNF. He did, however, manage to demonstrate that DKNF is often impossible to achieve.

If relation R is in DKNF, then it is sufficient to enforce the domain and key constraints for R, and all constraints on R will be enforced automatically. Enforcing those domain and key constraints is, of course, very simple (most DBMS products do it already). To be specific, enforcing domain constraints just means checking that attribute values are always values from the applicable domain (i.e., values of the right type); enforcing key constraints just means checking that key values are unique. Unfortunately lots of relations are not in DKNF in the first place. For example, suppose there's a constraint on R to the effect that R must contain at least ten tuples. Then that constraint is certainly not a consequence of the domain and key constraints that apply to R, and so R is not in DKNF. The sad fact is, not all relations can be reduced to DKNF; nor do we know the answer to the question "Exactly when can a relation be so reduced?"

De-Normalisation
Denormalisation is the process of modifying a perfectly normalised database design for performance reasons. Denormalisation is a natural and necessary part of database design, but must follow proper normalisation. Here are a few words from C J Date on denormalisation:
The general idea of normalization...is that the database designer should aim for relations in the "ultimate" normal form (5NF). However, this recommendation should not be construed as law. Sometimes there are good reasons for flouting the principles of normalization.... The only hard requirement is that relations be in at least first normal form. Indeed, this is as good a place as any to make the point that database design can be an extremely complex task.... Normalization theory is a useful aid in the process, but it is not a panacea; anyone designing a database is certainly advised to be familiar with the basic techniques of normalization...but we do not mean to suggest that the design should necessarily be based on normalization principles alone.

- C.J. Date, An Introduction to Database Systems, pages 528-529
In the 1970s and 1980s, when computer hardware was bulky, expensive and slow, it was often considered necessary to denormalise the data in order to achieve acceptable performance, but this performance boost often came at a cost (refer to Modification Anomalies). By comparison, computer hardware in the 21st century is extremely compact, extremely cheap and extremely fast. When this is coupled with the enhanced performance of today's DBMS engines, the performance of a normalised database is often perfectly acceptable, so there is less need for denormalisation. However, under certain conditions denormalisation can be perfectly acceptable. Take the following table as an example:

Company          City      State  Zip
Acme Widgets     New York  NY     10169
ABC Corporation  Miami     FL     33196
XYZ Inc          Columbia  MD     21046

This table is NOT in 3rd normal form because the city and state are dependent upon the ZIP code. To place this table in 3NF, two separate tables would be created - one containing the company name and ZIP code, and the other containing city, state and ZIP code pairings. This may seem overly complex for daily applications, and indeed it may be. Database designers should always keep in mind the tradeoffs between higher normal forms and the resource issues that such complexity creates. Deliberate denormalisation is commonplace when optimising performance. If you continuously draw data from a related table, it may make sense to duplicate the data redundantly. Denormalisation makes your system potentially less efficient and less flexible, so denormalise as needed, but not frivolously. There are techniques for improving performance that involve storing redundant or calculated data. Some of these techniques break the rules of normalisation, others do not. Sometimes real-world requirements justify breaking the rules. Intelligently and consciously breaking the rules of normalisation for performance purposes is an accepted practice, but it should only be done when the benefits of the change justify breaking the rule.
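As an illustration, the 3NF version of this table might be sketched as follows (the names and types are assumptions for the example):

-- A sketch of the 3NF decomposition: city and state depend on the ZIP code.
CREATE TABLE company (
    company_name CHAR(30) NOT NULL PRIMARY KEY,
    zip          CHAR(5)  NOT NULL
);

CREATE TABLE zip_code (
    zip   CHAR(5)  NOT NULL PRIMARY KEY,
    city  CHAR(30) NOT NULL,
    state CHAR(2)  NOT NULL
);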
Compound Fields

A compound field is a field whose value is the combination of two or more fields in the same record. The cost of using compound fields is the space they occupy and the code needed to maintain them. (Compound fields typically violate 2NF or 3NF.) For example, if your database has a table with addresses including city and state, you can create a compound field (call it City_State) that is made up of the concatenation of the city and state fields. Sorts and queries on City_State are much faster than the same sort or query using the two source fields - sometimes even 40 times faster. The downside of compound fields for the developer is that you have to write code to make sure that the City_State field is updated whenever either the city or the state field value changes. This is not difficult to do, but it is important that there are no 'leaks', or situations where the source data changes and, through some oversight, the compound field value is not updated.
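One way to guard against such leaks is to let the database maintain the compound field itself. The following is only a sketch, using MySQL-style trigger syntax and hypothetical table and column names; a matching BEFORE INSERT trigger would be needed to cover new rows:

-- Keep city_state in step with its source fields on every update.
CREATE TRIGGER address_before_update
BEFORE UPDATE ON address
FOR EACH ROW
    SET NEW.city_state = CONCAT(NEW.city, ', ', NEW.state);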

Summary Fields

A summary field is a field in a 'one' table record whose value is based on data in related 'many' table records. Summary fields eliminate repetitive and time-consuming cross-table calculations and make calculated results directly available for end-user queries, sorts, and reports without new programming. One-table fields that summarise values in multiple related records are a powerful optimization tool. Imagine tracking invoices without maintaining the invoice total! Summary fields like this do not violate the rules of normalisation. Normalisation is often misconceived as forbidding the storage of calculated values, leading people to avoid appropriate summary fields. There are two costs to consider when contemplating using a summary field: the coding time required to maintain accurate data and the space required to store the summary field. Some typical summary fields which you may encounter in an accounting system are:

o For an INVOICE the invoice amount is the total of the amounts on all INVOICE_LINE records for that invoice.
o For an ACCOUNT the account balance will be the sum total of the amounts on all INVOICE and PAYMENT records for that account.
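As a minimal sketch of how such a summary field might be refreshed (the tables and columns here are hypothetical, and in practice this would usually live in a trigger):

-- Refresh one invoice total from its line records.
UPDATE invoice
   SET invoice_total = (SELECT SUM(line_amount)
                          FROM invoice_line
                         WHERE invoice_line.invoice_id = invoice.invoice_id)
 WHERE invoice.invoice_id = 123;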

Summary Tables

A summary table is a table whose records summarise large amounts of related data or the results of a series of calculations. The entire table is maintained to optimise reporting, querying, and generating cross-table selections. Summary tables contain derived data from multiple records and do not necessarily violate the rules of normalisation. People often overlook summary tables based on the misconception that derived data is necessarily denormalised. In order for a summary table to be useful it needs to be accurate. This means you need to update summary records whenever source records change. This task can be taken care of in the program code, or in a database trigger (preferred), or in a batch process. You must also make sure to update summary records if you change source data in your code. Keeping the data valid requires extra work and introduces the possibility of coding errors, so you should factor this cost in when deciding if you are going to use this technique.
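A sketch of such a summary table follows (all names and types are assumptions, and sale_date is assumed to be held as 'YYYY-MM-DD' text to keep the example simple):

-- A summary table holding one row per product per month.
CREATE TABLE sales_summary (
    product_id  INTEGER       NOT NULL,
    sales_month CHAR(7)       NOT NULL,
    total_qty   INTEGER       NOT NULL,
    total_value DECIMAL(12,2) NOT NULL,
    PRIMARY KEY (product_id, sales_month)
);

-- Rebuilt periodically, or maintained by triggers, from the detail table.
INSERT INTO sales_summary (product_id, sales_month, total_qty, total_value)
SELECT product_id,
       SUBSTRING(sale_date FROM 1 FOR 7),
       SUM(quantity),
       SUM(quantity * price)
  FROM sales
 GROUP BY product_id, SUBSTRING(sale_date FROM 1 FOR 7);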
Optional Attributes that exist as a group

As mentioned in the guidelines for developing relations in 3rd normal form, all relations which share the same primary key are supposed to be combined into the same table. However, there are circumstances where it is perfectly valid to ignore this rule. Take the following example, which I encountered in 1984:

A finance company gives loans to customers, and a record is kept of each customer's repayments. If a customer does not meet a scheduled repayment then his account goes into arrears and special action needs to be taken.

Of the total customer base about 5% are in arrears at any one time.

This means that with 100,000 customers there will be roughly 5,000 in arrears. If the arrears data is held on the same record as the basic customer data (both sets of data have customer_id as the primary key) then it requires searching through all 100,000 records to locate those which are in arrears. This is not very efficient. One method tried was to create an index on account_status which identified whether the account was in arrears or not, but the improvement (due to the speed of the hardware and the limitations of the database engine) was minimal. A solution in these circumstances is to extract all the attributes which deal with arrears and put them in a separate table. Thus if there are 5,000 customers in arrears you can reference a table which contains only 5,000 records. As the arrears data is subordinate to the customer data the arrears table must be the 'child' in the relationship with the customer 'parent'. It would be possible to give the arrears table a different primary key as well as the foreign key to the customer table, but this would allow the customer-arrears relationship to be one-to-many instead of one-to-one. To enforce this constraint the foreign key and the primary key should be exactly the same. This situation can be expressed using the following structure:
R(K, A, B, C, X, Y, Z) where:

1. Attribute K is the primary key.
2. Attributes (A B C) exist all the time.
3. Attributes (X Y Z) exist some of the time (but always as a group under the same circumstances).
4. Attributes (X Y Z) require special processing.

After denormalising the result is two separate relations, as follows:

R1 (K, A, B, C)
R2 (K, X, Y, Z) where K is also the foreign key to R1
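A sketch of the arrears example in SQL (the table and column names are hypothetical): the child table reuses the parent's primary key as both its primary key and its foreign key, which is what enforces the one-to-one relationship.

-- Parent and optional child sharing the same primary key.
CREATE TABLE customer (
    customer_id    INTEGER  NOT NULL PRIMARY KEY,
    customer_name  CHAR(40) NOT NULL,
    account_status CHAR(1)  NOT NULL
);

CREATE TABLE customer_arrears (
    customer_id    INTEGER       NOT NULL PRIMARY KEY,
    arrears_amount DECIMAL(10,2) NOT NULL,
    arrears_since  DATE          NOT NULL,
    FOREIGN KEY (customer_id) REFERENCES customer (customer_id)
);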

Personal Guidelines
Even if you obey all the preceding rules it is still possible to produce a database design that causes problems during development. I have come across many different implementation tips and techniques over the years, and some that have worked in one database system have been successfully carried forward into a new database system. Some tips, on the other hand, may only be applicable to a particular database system. For particular options and limitations you must refer to your database manual.

Database Names

1. Database names should be short and meaningful, such as products, purchasing and sales. Short, but not too short, as in prod or purch. Meaningful, but not verbose, as in 'the database used to store product details'.
2. Do not waste time using a prefix such as db to identify database names. The SQL syntax analyser has the intelligence to work that out for itself - so should you.
3. If your DBMS allows a mixture of upper and lowercase names, and it is case sensitive, it is better to stick to a standard naming convention such as:
   o All uppercase.
   o All lowercase (my preference - see The choice between upper and lower case).
   o Leading uppercase, remainder lowercase.
   Inconsistencies may lead to confusion, confusion may lead to mistakes, mistakes can lead to disasters.
4. If a database name contains more than one word, such as in sales orders and purchase orders, decide how to deal with it:
   o Separate the words with a single space, as in sales orders (note that some DBMSs do not allow embedded spaces, while most languages will require such names to be enclosed in quotes).
   o Separate the words with an underscore, as in sales_orders (my preference - see The choice between upper and lower case).
   o Separate the words with a hyphen, as in sales-orders.
   o Use camel caps, as in SalesOrders.
   Again, be consistent.
5. Rather than putting all the tables into a single database it may be better to create separate databases for each logically related set of tables. This may help with security, archiving, replication, etc.

Table Names

1. Table names should be short and meaningful, such as part, customer and invoice. Short, but not too short. Meaningful, but not verbose.
2. Do not waste time using a prefix such as tbl to identify table names. The SQL syntax analyser has the intelligence to work that out for itself - so should you.
3. Table names should be in the singular (e.g. customer not customers). The fact that a table may contain multiple entries is irrelevant - any multiplicity can be derived from the existence of one-to-many relationships.
4. If your DBMS allows a mixture of upper and lowercase names, and it is case sensitive, it is better to stick to a standard naming convention such as:
   o All uppercase.
   o All lowercase (my preference - see The choice between upper and lower case).
   o Leading uppercase, remainder lowercase.
   Inconsistencies may lead to confusion, confusion may lead to mistakes, mistakes can lead to disasters.
5. If a table name contains more than one word, such as in sales order and purchase order, decide how to deal with it:
   o Separate the words with a single space, as in sales order (note that some DBMSs do not allow embedded spaces, while most languages will require such names to be enclosed in quotes).
   o Separate the words with an underscore, as in sales_order (my preference - see The choice between upper and lower case).
   o Separate the words with a hyphen, as in sales-order.
   o Use camel caps, as in SalesOrder.
   Again, be consistent.
6. Be careful if the same table name is used in more than one database - it may lead to confusion.

Field Names

1. Field names should be short and meaningful, such as part_name and customer_name.
   o Short, but not too short, such as in ptnam.
   o Meaningful, but not verbose, such as 'the name of the part'.
2. Do not waste time using a prefix such as col or fld to identify column/field names. The SQL syntax analyser has the intelligence to work that out for itself - so should you.
3. If your DBMS allows a mixture of upper and lowercase names, and it is case sensitive, it is better to stick to a standard naming convention such as:
   o All uppercase.
   o All lowercase (my preference - see The choice between upper and lower case).
   o Leading uppercase, remainder lowercase.
   Inconsistencies may lead to confusion, confusion may lead to mistakes, mistakes can lead to disasters.
4. If a field name contains more than one word, such as in part name and customer name, decide how to deal with it:
   o Separate the words with a single space, as in part name (note that some DBMSs do not allow embedded spaces, while most languages will require such names to be enclosed in quotes).
   o Separate the words with an underscore, as in part_name (my preference - see The choice between upper and lower case).
   o Separate the words with a hyphen, as in part-name.
   o Use camel caps, as in PartName.
   Again, be consistent.
5. Common words in field names may be abbreviated, but be consistent.
   o Do not allow a mixture of abbreviations, such as 'no', 'num' and 'nbr' for 'number'.
   o Publish a list of standard abbreviations and enforce it.
6. Although field names must be unique within a table, it is possible to use the same name on multiple tables even if they are unrelated, or do not share the same set of possible values. It is recommended that this practice be avoided, for reasons described in Field names should identify their content and The naming of Foreign Keys.

Primary Keys

1. It is recommended that the primary key of an entity should be constructed from the table name with a suffix of _ID. This makes it easy to identify the primary key in a long list of field names.
2. Do not waste time using a prefix such as pk to identify primary key fields. This has absolutely no meaning to any database engine or any application.
3. Avoid using generic names for all primary keys. It may seem a clever idea to use the name ID for every primary key field, but this causes problems:
   o It causes the same name to appear on multiple tables with totally different contexts. The string ID='ABC123' is extremely vague as it gives no idea of the entity being referenced. Is it an invoice id, customer id, or what?
   o It also causes a problem with foreign keys.
4. There is no rule that says a primary key must consist of a single attribute - both simple and composite keys are allowed - so don't waste time creating artificial keys.
5. Avoid the unnecessary use of technical keys. If a table already contains a satisfactory unique identifier, whether composite or simple, there is no need to create another one. Although the use of a technical key can be justified in certain circumstances, it takes intelligence to know when those circumstances are right. The indiscriminate use of technical keys shows a distinct lack of intelligence. For further views on this subject please refer to Technical Keys - Their Uses and Abuses.

Foreign Keys

1. It is recommended that where a foreign key is required you use the same name as that of the associated primary key on the foreign table. It is a requirement of a relational join that two relations can only be joined when they share at least one common attribute, and this should be taken to mean the attribute name(s) as well as the value(s). Thus where the customer and invoice tables are joined in a parent-child relationship the following will result:
   o The primary key of customer will be customer_id.
   o The primary key of invoice will be invoice_id.
   o The foreign key which joins invoice to customer will be customer_id.
2. For MySQL users this means that the shortened version of the join condition may be used:
   o Short: A LEFT JOIN B USING (a,b,c)
   o Long: A LEFT JOIN B ON (A.a=B.a AND A.b=B.b AND A.c=B.c)
3. The only exception to this naming recommendation should be where a table contains more than one foreign key to the same parent table, in which case the names must be changed to avoid duplicates. In this situation I would simply add a meaningful suffix to each name to identify the usage, such as:
   o To signify movement I would use location_id_from and location_id_to.
   o To signify positions in a hierarchy I would use node_id_snr and node_id_jnr.
   o To signify replacement I would use part_id_old and part_id_new.
   I prefer to use a suffix rather than a prefix as it makes the leading characters match (as in PART_ID_old and PART_ID_new) instead of having the trailing characters match (as in old_PART_ID and new_PART_ID).
4. Do not waste time using a prefix such as fk to identify foreign key fields. This has absolutely no meaning to any database engine or any application.

Generating Unique ids

Where a technical primary key is used a mechanism is required that will generate new and unique values. Such keys are usually numeric, so there are several methods available:

1. Some database engines will maintain a set of sequence numbers for you which can be referenced using code such as:

   SELECT <seq_name>.NEXTVAL FROM DUAL

   Using such a sequence is a two-step procedure:

   o Access the sequence to obtain a value.
   o Use the supplied value on an INSERT statement.

   It is sometimes possible to access the sequence directly from an INSERT statement, as in the following:

   INSERT INTO tablename (col1,col2,...) VALUES (tablename_seq.nextval,'value2',...)

   If the number just used needs to be retrieved so that it can be passed back to the application, it can be obtained with the following:

   SELECT <seq_name>.CURRVAL FROM DUAL

   I have used this method, but a disadvantage that I have found is that the DBMS has no knowledge of which primary key is linked to which sequence, so it is possible to insert a record with a key not obtained from the sequence and thus cause the two to become unsynchronised. The next time the sequence is used it could therefore generate a value which already exists as a key and so cause an INSERT error.

2. Some database engines will allow you to specify a numeric field as 'auto-increment', and on an INSERT they will automatically generate the next available number (provided that no value is supplied for that field in the first place). This is better than the previous method because:

   o The sequence is tied directly to a particular database table and is not a separate object, thus it is impossible for the two to become unsynchronised.
   o It is not necessary to access the sequence and then use the returned value on an INSERT statement - just leave the field empty and the DBMS will fill in the value automatically.

3. While the previous methods have their merits, they both have a common failing in that they are non-standard extensions to the SQL standard, therefore they are not available in all SQL-compliant database engines. This becomes an important factor if it is ever decided to switch to another database engine. A truly portable method which uses a standard technique, and can therefore be used in any SQL-compliant database, is to use an SQL statement similar to the following to obtain a unique key for a table:

   SELECT max(table_id) FROM <tablename>
   table_id = table_id+1

   Some people seem to think that this method is inefficient as it requires a full table search, but they are missing the fact that table_id is a primary key, therefore the values are held within an index. The SELECT max(...) statement will automatically be optimised to go straight to the last value in the index, therefore the result is obtained with almost no overhead. This would not be the case with SELECT count(...) as that would have to physically count the number of entries. Another reason for not using SELECT count(...) is that if records were to be deleted then the record count would be out of step with the highest current value.

4. The Radicore development framework has separate data access objects for each DBMS to which it can connect. This means that the different code for dealing with auto_increment keys can be contained within each object, so is totally transparent to the application. All that is necessary is that the key be identified as 'auto_increment' in the Data Dictionary and the database object will take care of all the necessary processing.
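As a small illustration of methods 2 and 3 (the AUTO_INCREMENT syntax shown is MySQL's, and the table is hypothetical):

-- Method 2: let the DBMS assign the key (MySQL-style syntax).
CREATE TABLE part (
    part_id   INTEGER  NOT NULL AUTO_INCREMENT PRIMARY KEY,
    part_name CHAR(40) NOT NULL
);
INSERT INTO part (part_name) VALUES ('widget');  -- part_id is filled in automatically

-- Method 3: the portable alternative in standard SQL.
-- COALESCE covers the case where the table is still empty.
SELECT COALESCE(MAX(part_id), 0) + 1 FROM part;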

Comments
Some people disagree with my ideas, but usually because they have limited experience and only know what they have been taught. What I have stated here is the result of decades of experience using various database systems with various languages. This is what I have learned, and goes beyond what I have been taught. There are valid reasons for some of the preferences I have stated in this document, and it may prove beneficial to state these in more detail.
The choice between upper and lower case

When I first started programming in the 1970s all coding was input via punched cards, not a VDU (that's a Visual Display Unit to the uninitiated), and there was no such thing as lowercase as the computer used a 6-bit character instead of an 8-bit byte and did not have enough room to deal with both lower and uppercase characters. CONSEQUENTLY EVERYTHING HAD TO BE IN UPPER CASE. When I progressed to a system where both cases were possible neither the operating system nor the programming language cared which was used - they were both case-insensitive. By common consent all the programmers preferred to use lowercase for everything. The use of uppercase was considered TO BE THE EQUIVALENT OF SHOUTING and was discouraged, except where something important needed to stand out.

Until the last few years all the operating systems, database systems, programming languages and text editors that I used were case-insensitive. The UNIX operating system and its derivatives are case-sensitive (for God's sake WHY??). The PHP programming language is case-sensitive in certain areas. I do not like systems which are case-sensitive for the following reasons:

o I have been working for 30 years with systems which have been case-insensitive and I see no justification in making the switch.
o Case does not make a difference in any spoken language, so why should it make a difference in any computer language?
o When I am merrily hammering away at the keyboard I do not like all those pauses where I have to reach for the shift key. It tends to interrupt my train of thought, and I do not like to be interrupted with trivialities.
o To my knowledge there is no database system which is case-sensitive, so when I am writing code to access a database I do not like to be told which case to use.
o With the growing trend of being able to speak to a computer instead of using a keyboard, how frustrating will it become if you have to specify that particular words and letters are in upper or lower case?

That is why my preference is for all database, table and field names to be in lowercase as it works the same for both case-sensitive and case-insensitive systems, so I don't get suddenly caught out when the software decides to get picky. This also means that I use underscore separators instead of those ugly CamelCaps (i.e. 'field_name' instead of 'FieldName'). This topic is discussed in more detail in Case Sensitive Software is EVIL.
The use of unique and non-unique field names

Some people think that my habit of including the table name inside a field name (as in CUSTOMER.CUSTOMER_ID) introduces a level of redundancy and is therefore wrong. I consider this view to be too narrow as it does not cater for all the different circumstances I have encountered over the years. Field names should identify their content. Over many years I have come to adopt a fairly straightforward convention with the naming of fields:

Field names should give some idea of their content.

If I see several tables which all contain field names such as ID and DESCRIPTION it makes me want to reach for the rubber gloves, disinfectant and scrubbing brush. A field named ID simply says that it contains an identity, but the identity of what? A field named DESCRIPTION simply says that it contains a description, but the description of what?

One of the first database systems which I used did not allow field definitions to be included within the table definitions inside the schema. Instead all the fields were defined in one area, and the table definitions simply listed the fields which they contained. This meant that a field was defined with one set of attributes (type and size) and those attributes could not be changed at the table level. Thus ID could not be C10 in one table and C30 in another. The only time we had fields with the same name existing on more than one table was where there was a logical relationship between records which had the same values in those fields. Because of this it became standard practice to have unique field names on each table using the table name as a prefix, such as table_id and table_desc. One of the benefits of this approach was that we could build standard code which provided the correct label and help text based on nothing more than the field name itself.

When performing SQL JOINs between tables which have common field names that are unrelated you have to give each field a unique alias before you can access its content. If each of these fields were given a unique name to begin with then this step would not be necessary.

Fields with the same context should have the same name.

If primary keys are named table_id instead of just id it then becomes possible, when naming foreign key fields on related tables, to use the same name for both the primary key and the foreign key. This makes it easier for a human being to recognise certain fields for what they are - anything ending in _id is a key field, and if the table prefix is not the current table then it is a foreign key to that table. This is what we called "self-documenting field names". In some circumstances it may not be possible to use the same name. This happens when the same field needs to appear more than once in the same table. In this case I would start with the same basic name and add a suffix for further identification, such as having location_id_FROM and location_id_TO to identify movements from one location to another, or having node_id_SNR and node_id_JNR to identify the senior and junior nodes in a hierarchical relationship.

Fields with different context should have different names.

It is not just primary key fields which should have unique names instead of sharing the common name of id. Non-key fields should follow the same convention for the same reasons. For example, if the CUSTOMER table has a STATUS field with one set of values and the INVOICE table has a STATUS field with another set of values then you should resist the temptation to give the two different fields the same common name of STATUS. They should be given proper names such as CUST_STATUS and INV_STATUS. They are different fields with different meanings, therefore they deserve different names. The breaking of this simple rule caused problems in one of the short-lived new-fangled languages that I used many years ago. This tool was built on the assumption that fields with the same name that existed on more than one table implied a relationship between those tables. If you tried to perform a join between two tables this software would look for field names which existed on both tables and automatically perform a natural join using those fields. This caused our programs not to find the right records when we performed a join, and the only way we could fix it was to give different names to unrelated fields.

Those conventions arose out of experience, to avoid certain problems which were encountered with certain languages. Every time I see these conventions broken I do not have to wait long before I see the same problems reappearing.

The naming of Foreign Keys

In any relationship the foreign key field(s) on the child/junior table are linked with the primary key field(s) on the parent/senior table. These related fields do not have to have the same name as it is still possible to perform a join, as shown in the following example:
SELECT field1, field2, field3 FROM first_table LEFT [OUTER] JOIN second_table ON (first_table.keyfield = second_table.foreign_keyfield)

However, if the fields have the same name then it is possible to replace the ON expression with a shorter USING expression, as in the following example:
SELECT field1, field2, field3 FROM first_table LEFT [OUTER] JOIN second_table USING (field1)

This feature is available in popular databases such as MySQL, PostgreSQL and Oracle, so it just goes to show that using identical field names is a recognised practice that has its benefits. Not only does the use of identical names have an advantage when performing joins in an SQL query, it also has advantages when simulating joins in your software. By this I mean where the reading of the two tables is performed in separate operations. It is possible to perform this using standard code with the following logic:

Operation (1) performs the following after each database row has been read:

o Identify the field(s) which constitute the primary key for the first table.
o Extract the values for those fields from the current row.
o Construct a string in the format field1='value1' [field2='value2'].
o Pass this string to the next operation.

Operation (2) performs the following:

o Use the string passed down from the previous operation as the WHERE clause in a SELECT statement.
o Execute the query on the second table.
o Return the result back to the previous operation.
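As a small illustration of this two-step logic (the customer and invoice tables are hypothetical), the simulated join amounts to two separate queries:

-- Operation (1): read the parent row; the primary key value is then
-- extracted and formatted as a string such as customer_id='ABC123'.
SELECT customer_id, customer_name FROM customer;

-- Operation (2): that string becomes the WHERE clause on the child table,
-- which works unchanged because the field names are identical.
SELECT * FROM invoice WHERE customer_id='ABC123';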

It is possible to perform these functions using standard code that never has to be customised for any particular database table. I should know as I have done it in two completely different languages. The only time that manual intervention (i.e. extra code) is required is where the field names are not exactly the same, which forces operation (2) to convert primary_key_field='value' to foreign_key_field='value' before it can execute the query. Experienced programmers should instantly recognise that the need for extra code incurs its own overhead:

o The time taken to actually write this extra code.
o The time taken to test that the right code has been put in the right place.
o The time taken to amend this code should there be any database changes in the future.

The only occasion where fields with the same name are not possible is when a table contains multiple versions of that field. This is where I would add a suffix to give some extra meaning. For example:

o In a table which records movements or ranges I would have <table>_ID_FROM and <table>_ID_TO.
o In a table which records a senior-to-junior hierarchy I would have <table>_ID_SNR and <table>_ID_JNR.

My view of field names can be summed up as follows:

o Fields with the same context should have the same name.
o Fields with different context should have different names.
o Key fields, whether primary or foreign, should be in the format <table>_id.
o Duplicate foreign keys should be in the format <table>_id_<suffix>.
