Denormalization refers to a refinement of the relational schema such that the degree of
normalization for a modified relation is less than the degree of at least one of the original
relations. Denormalization can also refer to a process in which two relations are combined
into one new relation, where the new relation is still normalized but contains more nulls than the
original relations.
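A minimal sketch of this second sense of denormalization, using SQLite from Python. The Client and Interview relations and all column names here are invented for illustration: combining the two relations keeps every client but introduces nulls for clients with no interview on record.

```python
import sqlite3

# Two normalized relations (hypothetical names): Client and Interview.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE Client (client_no INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE Interview (client_no INTEGER REFERENCES Client,
                            interview_date TEXT);
    INSERT INTO Client VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO Interview VALUES (1, '2024-01-15');
""")

# Denormalized relation: one row per client, with NULL (None in Python)
# wherever no matching interview exists.
cur.execute("""
    CREATE TABLE ClientInterview AS
    SELECT c.client_no, c.name, i.interview_date
    FROM Client c LEFT JOIN Interview i ON c.client_no = i.client_no
""")
rows = cur.execute(
    "SELECT name, interview_date FROM ClientInterview ORDER BY client_no"
).fetchall()
print(rows)  # Grace has no interview, so her row carries a NULL
```

The combined relation is still normalized in form, but every client without an interview now contributes a null, exactly the trade-off described above.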
Normalization
Normalization produces a logical database design that is structurally consistent and has minimal
redundancy. Normalization forces us to understand completely each attribute that has to be
represented in the database. This may be the most important factor contributing to the overall
success of the system.
In addition, the following factors have to be considered: denormalization makes implementation
more complex; denormalization often sacrifices flexibility; and denormalization may speed up
retrievals but slows down updates.
Why, then, denormalize relations?
It is sometimes argued that a normalized database design does not provide maximum processing
efficiency. There may be circumstances where it is necessary to accept the loss of some of
the benefits of a fully normalized design in favor of performance.
Benefits of Normalization
You usually wind up with fewer indexes per table, so data modification commands are
faster.
You wind up with fewer null values and less redundant data, making your database more
compact.
Triggers execute more quickly if you are not maintaining redundant data.
Data modification anomalies are reduced.
Normalization is conceptually cleaner and easier to maintain and change as your needs
change.
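The reduction in redundant data and modification anomalies can be sketched concretely. In this illustrative example (all table and column names are invented), the normalized design stores a department name exactly once, so renaming the department is a single-row update with no risk of inconsistent copies:

```python
import sqlite3

# Normalized design (hypothetical names): dept_name lives only in Department.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE Department (dept_no INTEGER PRIMARY KEY, dept_name TEXT);
    CREATE TABLE Employee (emp_no INTEGER PRIMARY KEY, name TEXT,
                           dept_no INTEGER REFERENCES Department);
    INSERT INTO Department VALUES (10, 'Sales');
    INSERT INTO Employee VALUES (1, 'Ada', 10), (2, 'Grace', 10);
""")

# One update fixes the name everywhere; a denormalized Employee table with
# an embedded dept_name column would need every matching row rewritten.
cur.execute("UPDATE Department SET dept_name = 'Global Sales' WHERE dept_no = 10")
names = cur.execute("""
    SELECT DISTINCT d.dept_name
    FROM Employee e JOIN Department d ON e.dept_no = d.dept_no
""").fetchall()
print(names)  # every employee now sees the single updated name
```

Had dept_name been copied into every Employee row, a missed row in the update would have produced exactly the kind of modification anomaly the list above warns about.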
Good reasons for denormalizing are:
All or nearly all of the most frequent queries require access to the full set of joined data
A majority of applications perform table scans when joining tables
Computational complexity of derived columns requires temporary tables or excessively
complex queries
Advantages of Denormalization
Reducing the number of indexes, saving storage space and reducing data modification
time
Precomputing aggregate values, that is, computing them at data modification time rather
than at select time
Reducing the number of tables (in some cases)
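The second item, precomputing aggregate values at data modification time, can be sketched with a trigger-maintained total column. This is an illustrative SQLite sketch; the Orders and OrderLine tables and the maintain_total trigger are hypothetical names:

```python
import sqlite3

# Denormalized design (hypothetical names): Orders carries a precomputed
# total, kept current by a trigger that fires at data modification time.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE Orders (order_no INTEGER PRIMARY KEY, total REAL DEFAULT 0);
    CREATE TABLE OrderLine (order_no INTEGER REFERENCES Orders, amount REAL);
    CREATE TRIGGER maintain_total AFTER INSERT ON OrderLine
    BEGIN
        UPDATE Orders SET total = total + NEW.amount
        WHERE order_no = NEW.order_no;
    END;
    INSERT INTO Orders (order_no) VALUES (1);
    INSERT INTO OrderLine VALUES (1, 19.5), (1, 5.5);
""")

# Reading the aggregate is now a single-row lookup, not a join plus SUM.
total = cur.execute("SELECT total FROM Orders WHERE order_no = 1").fetchone()[0]
print(total)
```

This trades faster selects for extra work on every insert, which is the retrieval-versus-update trade-off noted earlier in the section.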