Database normalization

Database normalization or database normalisation (see spelling differences) is the process of structuring a relational database in accordance with a series of so-called normal forms in order to reduce data redundancy and improve data integrity. It was first proposed by British computer scientist Edgar F. Codd as part of his relational model.

Normalization entails organizing the columns (attributes) and tables (relations) of a database to ensure that their dependencies are properly enforced by database integrity constraints. It is accomplished by applying some formal rules either by a process of ''synthesis'' (creating a new database design) or ''decomposition'' (improving an existing database design).


Objectives

A basic objective of the first normal form defined by Codd in 1970 was to permit data to be queried and manipulated using a "universal data sub-language" grounded in first-order logic. An example of such a language is SQL, though it is one that Codd regarded as seriously flawed. The objectives of normalisation beyond 1NF (first normal form), as stated by Codd, amount to freeing the database of undesirable modification anomalies and minimizing redesign when extending its structure.

When an attempt is made to modify (update, insert into, or delete from) a relation, the following undesirable side-effects may arise in relations that have not been sufficiently normalized:

* Insertion anomaly. There are circumstances in which certain facts cannot be recorded at all. For example, each record in a "Faculty and Their Courses" relation might contain a Faculty ID, Faculty Name, Faculty Hire Date, and Course Code. The details of any faculty member who teaches at least one course can be recorded, but a newly hired faculty member who has not yet been assigned to teach any courses cannot be recorded, except by setting the Course Code to null.
* Update anomaly. The same information can be expressed on multiple rows; therefore updates to the relation may result in logical inconsistencies. For example, each record in an "Employees' Skills" relation might contain an Employee ID, Employee Address, and Skill; thus a change of address for a particular employee may need to be applied to multiple records (one for each skill). If the update is only partially successful – the employee's address is updated on some records but not others – then the relation is left in an inconsistent state: it gives conflicting answers to the question of what this particular employee's address is. A sketch of this situation follows the list.
* Deletion anomaly. Under certain circumstances, deletion of data representing certain facts necessitates deletion of data representing completely different facts. The "Faculty and Their Courses" relation described above suffers from this type of anomaly: if a faculty member temporarily ceases to be assigned to any courses, the last of the records on which that faculty member appears must be deleted, effectively also deleting the faculty member, unless the Course Code field is set to null.
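The update anomaly above can be made concrete with a small SQL sketch. The table and column names mirror the "Employees' Skills" relation from the prose; the sample data and the generic SQL dialect are illustrative assumptions.

    -- Unnormalized "Employees' Skills" relation: the address is repeated
    -- on every row for the same employee.
    CREATE TABLE employee_skill (
        employee_id      INTEGER      NOT NULL,
        employee_address VARCHAR(100) NOT NULL,
        skill            VARCHAR(50)  NOT NULL,
        PRIMARY KEY (employee_id, skill)
    );

    INSERT INTO employee_skill VALUES (1, '12 Oak Street', 'SQL');
    INSERT INTO employee_skill VALUES (1, '12 Oak Street', 'Python');

    -- Update anomaly: changing the address on only one of the rows leaves
    -- the relation giving two conflicting answers for employee 1's address.
    UPDATE employee_skill
    SET    employee_address = '34 Elm Avenue'
    WHERE  employee_id = 1 AND skill = 'SQL';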


Minimize redesign when extending the database structure

A fully normalized database allows its structure to be extended to accommodate new types of data without changing existing structure too much. As a result, applications interacting with the database are minimally affected. Normalized relations, and the relationship between one normalized relation and another, mirror real-world concepts and their interrelationships.


Normal forms

Codd introduced the concept of normalization and what is now known as the first normal form (1NF) in 1970. Codd went on to define the second normal form (2NF) and third normal form (3NF) in 1971,Codd, E. F. "Further Normalization of the Data Base Relational Model". (Presented at Courant Computer Science Symposia Series 6, "Data Base Systems", New York City, May 24–25, 1971.) IBM Research Report RJ909 (August 31, 1971). Republished in Randall J. Rustin (ed.), ''Data Base Systems: Courant Computer Science Symposia Series 6''. Prentice-Hall, 1972. and Codd and Raymond F. Boyce defined the Boyce–Codd normal form (BCNF) in 1974.Codd, E. F. "Recent Investigations into Relational Data Base Systems". IBM Research Report RJ1385 (April 23, 1974). Republished in ''Proc. 1974 Congress'' (Stockholm, Sweden, 1974), N.Y.: North-Holland (1974). Informally, a relational database relation is often described as "normalized" if it meets third normal form. Most 3NF relations are free of insertion, update, and deletion anomalies. The normal forms, from least normalized to most normalized, are: unnormalized form (UNF), 1NF, 2NF, 3NF, elementary key normal form (EKNF), BCNF, 4NF, essential tuple normal form (ETNF), 5NF, domain-key normal form (DKNF), and 6NF.


Example of a step by step normalization

Normalization is a database design technique which is used to design a relational database table up to a higher normal form. The process is progressive, and a higher level of database normalization cannot be achieved unless the previous levels have been satisfied. That means that, starting from data in unnormalized form (the least normalized) and aiming for the highest level of normalization, the first step would be to ensure compliance with first normal form, the second step would be to ensure second normal form is satisfied, and so forth in the order given above, until the data conform to sixth normal form. However, it is worth noting that normal forms beyond 4NF are mainly of academic interest, as the problems they exist to solve rarely appear in practice.

''The data in the following example were intentionally designed to contradict most of the normal forms. In practice it is often possible to skip some of the normalization steps because the data do not contain anything contradicting the given normal form. It also commonly occurs that fixing a violation of one normal form also fixes a violation of a higher normal form in the process. In this example one table has been chosen for normalization at each step, meaning that at the end of the process there might still be some tables not satisfying the highest normal form.''


Initial data

Let a database table exist with the following structure. For this example, it is assumed that each book has only one author. As a prerequisite to conform to the relational model, a table must have a primary key, which uniquely identifies a row. Two books could have the same title, but an ISBN uniquely identifies a book, so it can be used as the primary key.
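The example table itself did not survive in this copy of the text, so the following SQL sketch only suggests what such a starting table might look like, with ISBN as the primary key. The column list is an assumption assembled from attributes mentioned in the later normalization steps, not the exact original table.

    -- Hypothetical wide Book table keyed by ISBN (columns are assumptions).
    CREATE TABLE book (
        isbn      CHAR(13)     PRIMARY KEY,   -- uniquely identifies a book
        title     VARCHAR(200) NOT NULL,
        author    VARCHAR(100) NOT NULL,      -- each book assumed to have one author
        format    VARCHAR(20),
        price     DECIMAL(8,2),
        subject   VARCHAR(200),               -- holds a set of subjects; violates 1NF
        pages     INTEGER,
        publisher VARCHAR(100)
    );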


Satisfying 1NF

To satisfy first normal form, each column of a table must have a single value. Columns which contain sets of values or nested records are not allowed. In the initial table, Subject contains a set of subject values, meaning it does not comply. To solve the problem, the subjects are extracted into a separate Subject table. A foreign key column is added to the Subject table, which refers to the primary key of the row from which the subject was extracted. The same information is therefore represented, but without the use of non-simple domains. Instead of one table in unnormalized form, there are now two tables conforming to 1NF.
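A possible SQL rendering of this decomposition is sketched below; the table and column names (book_1nf, subject, subject_name) are illustrative assumptions rather than the exact tables of the original example.

    -- Book keeps only single-valued columns; the multi-valued Subject
    -- column is gone.
    CREATE TABLE book_1nf (
        isbn   CHAR(13)     PRIMARY KEY,
        title  VARCHAR(200) NOT NULL,
        author VARCHAR(100) NOT NULL
    );

    -- One subject per row, with a foreign key back to the book it was
    -- extracted from.
    CREATE TABLE subject (
        isbn         CHAR(13)     NOT NULL REFERENCES book_1nf (isbn),
        subject_name VARCHAR(100) NOT NULL,
        PRIMARY KEY (isbn, subject_name)
    );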


Satisfying 2NF

If a table has a single-column primary key, it automatically satisfies 2NF, but if a table has a multi-column (composite) key then it may not satisfy 2NF.
The Book table below has a composite key of (Title, Format) (indicated by the underlining), so it may not satisfy 2NF. At this point in the design the key is not finalised as the primary key, so it is called a candidate key. Consider the following table: all of the attributes that are not part of the candidate key depend on ''Title'', but only ''Price'' also depends on ''Format''. To conform to 2NF and remove duplication, every non-candidate-key attribute must depend on the whole candidate key, not just part of it. To normalize this table, make ''Title'' a (simple) candidate key (the primary key) so that every non-candidate-key attribute depends on the whole candidate key, and move ''Price'' into a separate table so that its dependency on ''Format'' can be preserved. Now the Book table conforms to 2NF; a sketch of the resulting tables follows.
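The split can be sketched in SQL as follows. Column names such as author_nationality and genre_name are assumptions standing in for the non-key attributes of the example; only the key structure (Title alone for Book, Title plus Format for Price) is taken from the prose.

    -- Book now has the simple candidate key Title; every non-key attribute
    -- depends on the whole key.
    CREATE TABLE book_2nf (
        title              VARCHAR(200) PRIMARY KEY,
        author             VARCHAR(100) NOT NULL,
        author_nationality VARCHAR(50),
        genre_id           INTEGER,
        genre_name         VARCHAR(50),
        pages              INTEGER
    );

    -- Price depends on both Title and Format, so it lives in its own table
    -- keyed by that pair.
    CREATE TABLE book_price (
        title  VARCHAR(200) NOT NULL REFERENCES book_2nf (title),
        format VARCHAR(20)  NOT NULL,            -- e.g. 'Hardcover', 'E-book'
        price  DECIMAL(8,2) NOT NULL,
        PRIMARY KEY (title, format)
    );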


Satisfying 3NF

The Book table still has a transitive functional dependency: ''Author Nationality'' is dependent on ''Author'', which is dependent on the key, ''Title''. A similar violation exists for genre: ''Genre Name'' is dependent on ''Genre ID'', which is dependent on ''Title''. Hence, the Book table is not in 3NF. To bring it into 3NF, the transitive functional dependencies are eliminated by placing ''Author Nationality'' and ''Genre Name'' in their own respective tables, as sketched below.
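In SQL, the 3NF decomposition might look like the sketch below; the concrete table and column names are assumptions chosen to match the dependencies described above.

    -- Author-level facts move to their own table ...
    CREATE TABLE author (
        author      VARCHAR(100) PRIMARY KEY,
        nationality VARCHAR(50)  NOT NULL
    );

    -- ... as do genre-level facts.
    CREATE TABLE genre (
        genre_id   INTEGER     PRIMARY KEY,
        genre_name VARCHAR(50) NOT NULL
    );

    -- Book now holds only attributes that depend directly on its key.
    CREATE TABLE book_3nf (
        title    VARCHAR(200) PRIMARY KEY,
        author   VARCHAR(100) NOT NULL REFERENCES author (author),
        genre_id INTEGER      NOT NULL REFERENCES genre (genre_id),
        pages    INTEGER
    );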


Satisfying EKNF

The elementary key normal form (EKNF) falls strictly between 3NF and BCNF and is not much discussed in the literature. It is intended ''"to capture the salient qualities of both 3NF and BCNF"'' while avoiding the problems of both (namely, that 3NF is "too forgiving" and BCNF is "prone to computational complexity"). Since it is rarely mentioned in the literature, it is not included in this example.


Satisfying 4NF

Assume the database is owned by a book retailer franchise that has several franchisees that own shops in different locations, and the retailer therefore decides to add a table that contains data about the availability of the books at different locations. Because this table structure consists entirely of a compound primary key, it doesn't contain any non-key attributes and it is already in BCNF (and therefore also satisfies all the previous normal forms). However, assuming that all available books are offered in each area, the Title is not unambiguously bound to a certain Location and therefore the table doesn't satisfy 4NF. That means that, to satisfy the fourth normal form, this table needs to be decomposed as well. After the decomposition sketched below, every record is unambiguously identified by a superkey, therefore 4NF is satisfied.
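A sketch of the 4NF decomposition in SQL, assuming the availability table held Franchisee ID, Title and Location: the two independent multivalued facts about a franchisee are stored separately.

    -- Which titles a franchisee stocks ...
    CREATE TABLE franchisee_book (
        franchisee_id INTEGER      NOT NULL,
        title         VARCHAR(200) NOT NULL,
        PRIMARY KEY (franchisee_id, title)
    );

    -- ... and which locations a franchisee operates, recorded independently.
    CREATE TABLE franchisee_location (
        franchisee_id INTEGER      NOT NULL,
        location      VARCHAR(100) NOT NULL,
        PRIMARY KEY (franchisee_id, location)
    );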


Satisfying ETNF

Suppose the franchisees can also order books from different suppliers. Let the relation also be subject to the following constraint:
* if a certain supplier supplies a certain title,
* and the title is supplied to the franchisee,
* and the franchisee is being supplied by the supplier,
* then the supplier supplies the title to the franchisee.
This table is in 4NF, but under this constraint it is equal to the join of its projections onto (Supplier ID, Title), (Title, Franchisee ID) and (Franchisee ID, Supplier ID). No component of that join dependency is a superkey (the sole superkey being the entire heading), so the table does not satisfy ETNF and can be further decomposed into those three projections, as sketched below. The decomposition produces ETNF compliance.
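The three projections can be sketched as SQL tables as follows; the names are illustrative assumptions. Under the stated constraint, joining the three pairwise tables reproduces exactly the rows of the original three-column relation.

    CREATE TABLE supplier_title (
        supplier_id INTEGER      NOT NULL,
        title       VARCHAR(200) NOT NULL,
        PRIMARY KEY (supplier_id, title)
    );

    CREATE TABLE title_franchisee (
        title         VARCHAR(200) NOT NULL,
        franchisee_id INTEGER      NOT NULL,
        PRIMARY KEY (title, franchisee_id)
    );

    CREATE TABLE franchisee_supplier (
        franchisee_id INTEGER NOT NULL,
        supplier_id   INTEGER NOT NULL,
        PRIMARY KEY (franchisee_id, supplier_id)
    );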


Satisfying 5NF

To spot a table not satisfying 5NF, it is usually necessary to examine the data thoroughly. Consider the table from the 4NF example with a small modification in its data, and let's examine whether it satisfies 5NF. Decomposing this table lowers redundancies, resulting in two tables; however, the query joining these tables back together returns three more rows than it should. Adding another table to clarify the relation results in three separate tables.
What will the JOIN return now? It is actually not possible to reconstruct the original table by joining these three tables. That means it wasn't possible to decompose the Franchisee - Book - Location table without data loss, therefore the table already satisfies 5NF. C. J. Date has argued that only a database in 5NF is truly "normalized". A sketch of this kind of check follows.
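The check described here can be expressed in SQL. The sketch below assumes hypothetical projection tables of the Franchisee - Book - Location relation; any row returned by the final query is a spurious tuple, i.e. evidence that the candidate decomposition loses information.

    -- Original relation (columns are illustrative assumptions).
    CREATE TABLE franchisee_book_location (
        franchisee_id INTEGER      NOT NULL,
        title         VARCHAR(200) NOT NULL,
        location      VARCHAR(100) NOT NULL,
        PRIMARY KEY (franchisee_id, title, location)
    );

    -- A candidate two-way decomposition.
    CREATE TABLE franchisee_book (
        franchisee_id INTEGER      NOT NULL,
        title         VARCHAR(200) NOT NULL,
        PRIMARY KEY (franchisee_id, title)
    );

    CREATE TABLE book_location (
        title    VARCHAR(200) NOT NULL,
        location VARCHAR(100) NOT NULL,
        PRIMARY KEY (title, location)
    );

    -- Rows produced by joining the projections that are not in the original
    -- relation: if this returns anything, the decomposition is lossy and the
    -- original table already satisfied 5NF with respect to this split.
    SELECT fb.franchisee_id, fb.title, bl.location
    FROM   franchisee_book AS fb
    JOIN   book_location   AS bl ON bl.title = fb.title
    EXCEPT
    SELECT franchisee_id, title, location
    FROM   franchisee_book_location;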


Satisfying DKNF

Let's have a look at the Book table from the previous examples and see whether it satisfies the domain-key normal form. Logically, Thickness is determined by the number of pages; that means it depends on Pages, which is not a key. Let's establish an example convention saying a book up to 350 pages is considered "slim" and a book over 350 pages is considered "thick". This convention is technically a constraint, but it is neither a domain constraint nor a key constraint; therefore we cannot rely on domain constraints and key constraints to keep the data integrity. In other words, nothing prevents us from putting, for example, "Thick" for a book with only 50 pages – and this makes the table violate DKNF. To solve this, a table holding an enumeration that defines Thickness is created, and that column is removed from the original table (see the sketch below). That way, the domain integrity violation has been eliminated, and the table is in DKNF.
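One way to sketch this fix in SQL is a lookup table keyed by a page range, so Thickness is derived rather than stored; the names and the 350-page boundary follow the example convention, while the table layout itself is an assumption.

    -- Enumeration defining Thickness in terms of page ranges.
    CREATE TABLE thickness (
        thickness_name VARCHAR(10) PRIMARY KEY,  -- 'Slim' or 'Thick'
        min_pages      INTEGER NOT NULL,
        max_pages      INTEGER NOT NULL
    );

    INSERT INTO thickness VALUES ('Slim', 1, 350);
    INSERT INTO thickness VALUES ('Thick', 351, 100000);

    -- Book stores only Pages; Thickness is no longer a stored column.
    CREATE TABLE book_dknf (
        title VARCHAR(200) PRIMARY KEY,
        pages INTEGER NOT NULL
    );

    -- Thickness is derived on demand, so a 50-page book can never be
    -- recorded as 'Thick'.
    SELECT b.title, t.thickness_name
    FROM   book_dknf AS b
    JOIN   thickness AS t
           ON b.pages BETWEEN t.min_pages AND t.max_pages;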


Satisfying 6NF

A simple and intuitive definition of the sixth normal form is that ''"a table is in 6NF when the row contains the Primary Key, and at most one other attribute"''. That means, for example, the Publisher table designed while creating the 1NF needs to be further decomposed into two tables (sketched at the end of this section). The obvious drawback of 6NF is the proliferation of tables required to represent the information on a single entity. If a table in 5NF has one primary key column and N attributes, representing the same information in 6NF will require N tables; multi-field updates to a single conceptual record will require updates to multiple tables; and inserts and deletes will similarly require operations across multiple tables. For this reason, in databases intended to serve online transaction processing (OLTP) needs, 6NF should not be used.
However, in data warehouses, which do not permit interactive updates and which are specialized for fast queries on large data volumes, certain DBMSs use an internal 6NF representation – known as a columnar data store. In situations where the number of unique values of a column is far less than the number of rows in the table, column-oriented storage allows significant savings in space through data compression. Columnar storage also allows fast execution of range queries (e.g., show all records where a particular column is between X and Y, or less than X). In all these cases, however, the database designer does not have to perform 6NF normalization manually by creating separate tables. Some DBMSs that are specialized for warehousing, such as Sybase IQ, use columnar storage by default, but the designer still sees only a single multi-column table. Other DBMSs, such as Microsoft SQL Server 2012 and later, let you specify a "columnstore index" for a particular table.Microsoft Corporation. Columnstore Indexes: Overview. https://docs.microsoft.com/en-us/sql/relational-databases/indexes/columnstore-indexes-overview. Accessed March 23, 2020.
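Below is a hedged sketch of a 6NF-style decomposition, assuming a Publisher table with a key and two non-key attributes (name and country); each non-key attribute ends up in its own table. The commented-out statement shows the rough shape of a SQL Server columnstore index, with illustrative index, table and column names.

    -- One table per non-key attribute, each keyed by the publisher.
    CREATE TABLE publisher_name (
        publisher_id INTEGER      PRIMARY KEY,
        name         VARCHAR(100) NOT NULL
    );

    CREATE TABLE publisher_country (
        publisher_id INTEGER     PRIMARY KEY,
        country      VARCHAR(50) NOT NULL
    );

    -- In SQL Server 2012 and later a similar columnar layout can be obtained
    -- without manual decomposition (names below are illustrative):
    -- CREATE NONCLUSTERED COLUMNSTORE INDEX ix_book_cs
    --     ON book_sales (title, format, price);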


See also

* Denormalization
* Database refactoring
* Lossless join decomposition


Notes and references


Further reading

* Date, C. J. (1999). ''An Introduction to Database Systems'' (8th ed.). Addison-Wesley Longman.
* Kent, W. (1983). "A Simple Guide to Five Normal Forms in Relational Database Theory". Communications of the ACM, vol. 26, pp. 120–125.
* Schek, H.-J.; Pistor, P. "Data Structures for an Integrated Data Base Management and Information Retrieval System".


External links

* Database Normalization Intro, Part 2, by Mike Chapple (About.com)
* An Introduction to Database Normalization, by Mike Hillyer
* A tutorial on the first 3 normal forms, by Fred Coulson
* Description of the database normalization basics, by Microsoft
* Normalization in DBMS, by Chaitanya (beginnersbook.com)
* A Step-by-Step Guide to Database Normalization
* ETNF – Essential tuple normal form