In computing, extract, transform, load (ETL) is a three-phase process in which data is extracted, transformed (cleaned, sanitized, scrubbed) and loaded into an output data container. The data can be collated from one or more sources, and it can also be output to one or more destinations. ETL processing is typically executed using software applications, but it can also be done manually by system operators. ETL software typically automates the entire process and can be run manually or on recurring schedules, either as single jobs or aggregated into a batch of jobs.
A properly designed ETL system extracts data from source systems and enforces data type and data validity standards and ensures it conforms structurally to the requirements of the output. Some ETL systems can also deliver data in a presentation-ready format so that application developers can build applications and end users can make decisions.
The ETL process became a popular concept in the 1970s and is often used in data warehousing. ETL systems commonly integrate data from multiple applications (systems), typically developed and supported by different vendors or hosted on separate computer hardware. The separate systems containing the original data are frequently managed and operated by different stakeholders. For example, a cost accounting system may combine data from payroll, sales, and purchasing.
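The three phases can be illustrated with a minimal sketch in Python, assuming two hypothetical in-memory source systems ("payroll" and "sales") feeding a cost accounting target like the example above; all names and values are illustrative, not a standard API.

```python
# Minimal ETL sketch: extract from two hypothetical sources, join and derive
# a value in the transform step, then load into a SQLite output container.
import sqlite3

# Extract: collate records from two hypothetical source systems.
payroll = [{"emp_id": 1, "cost": 5000.0}, {"emp_id": 2, "cost": 6200.0}]
sales = [{"emp_id": 1, "revenue": 9000.0}, {"emp_id": 2, "revenue": 7500.0}]

# Transform: join on the shared key and derive a new calculated value.
merged = {r["emp_id"]: dict(r) for r in payroll}
for r in sales:
    merged[r["emp_id"]]["revenue"] = r["revenue"]
    merged[r["emp_id"]]["margin"] = r["revenue"] - merged[r["emp_id"]]["cost"]

# Load: insert the conformed rows into the output data container.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cost_accounting (emp_id INTEGER, cost REAL, revenue REAL, margin REAL)")
conn.executemany("INSERT INTO cost_accounting VALUES (:emp_id, :cost, :revenue, :margin)",
                 list(merged.values()))
print(conn.execute("SELECT * FROM cost_accounting").fetchall())
```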
Extract
Data extraction involves extracting data from homogeneous or heterogeneous sources; data transformation processes data by data cleaning and transforming it into a proper storage format/structure for the purposes of querying and analysis; finally, data loading describes the insertion of data into the final target database, such as an operational data store, a data mart, a data lake, or a data warehouse.
ETL processing involves extracting the data from the source system(s). In many cases, this represents the most important aspect of ETL, since extracting data correctly sets the stage for the success of subsequent processes. Most data-warehousing projects combine data from different source systems. Each separate system may also use a different data organization and/or format. Common data-source formats include relational databases, XML, JSON, and flat files, but may also include non-relational database structures such as Information Management System (IMS) or other data structures such as Virtual Storage Access Method (VSAM) or Indexed Sequential Access Method (ISAM), or even formats fetched from outside sources by means such as web spidering or screen scraping. Streaming the extracted data source and loading it on-the-fly into the destination database is another way of performing ETL when no intermediate data storage is required.
An intrinsic part of the extraction involves data validation to confirm whether the data pulled from the sources has the correct/expected values in a given domain (such as a pattern/default or list of values). If the data fails the validation rules, it is rejected entirely or in part. The rejected data is ideally reported back to the source system for further analysis to identify and to rectify the incorrect records.
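A minimal sketch of extract-time validation, assuming a hypothetical rule set where "country" must come from a fixed list of values and "id" must match a pattern; failing records are set aside so they can be reported back to the source system.

```python
# Extract-time validation sketch: records failing the rules are rejected
# and kept aside for reporting back to the source system.
import re

VALID_COUNTRIES = {"US", "DE", "JP"}        # illustrative list of values
ID_PATTERN = re.compile(r"^\d{6}$")         # illustrative pattern rule

def validate(record):
    """Return a list of rule violations; an empty list means the record passes."""
    errors = []
    if record.get("country") not in VALID_COUNTRIES:
        errors.append("country outside expected domain")
    if not ID_PATTERN.match(record.get("id", "")):
        errors.append("id does not match expected pattern")
    return errors

extracted = [{"id": "123456", "country": "US"},
             {"id": "12X", "country": "ZZ"}]
accepted, rejected = [], []
for rec in extracted:
    errs = validate(rec)
    (rejected if errs else accepted).append((rec, errs))

print("accepted:", accepted)
print("rejected:", rejected)   # report these back for analysis
```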
Transform
In the data transformation stage, a series of rules or functions are applied to the extracted data in order to prepare it for loading into the end target.
An important function of transformation is data cleansing, which aims to pass only "proper" data to the target. The challenge when different systems interact is in the relevant systems' interfacing and communicating. Character sets that may be available in one system may not be available in others.
In other cases, one or more of the following transformation types may be required to meet the business and technical needs of the server or data warehouse (a sketch illustrating several of these follows the list):
* Selecting only certain columns to load (or selecting null columns not to load). For example, if the source data has three columns (aka "attributes"), roll_no, age, and salary, then the selection may take only roll_no and salary. Or, the selection mechanism may ignore all those records where salary is not present (salary = null).
* Translating coded values: (''e.g.'', if the source system codes male as "1" and female as "2", but the warehouse codes male as "M" and female as "F")
* Encoding free-form values: (''e.g.'', mapping "Male" to "M")
* Deriving a new calculated value: (''e.g.'', sale_amount = qty * unit_price)
* Sorting or ordering the data based on a list of columns to improve search performance
* Joining data from multiple sources (''e.g.'', lookup, merge) and deduplicating the data
* Aggregating (for example, rollup — summarizing multiple rows of data — total sales for each store, and for each region, etc.)
* Generating surrogate-key values
* Transposing or pivoting (turning multiple columns into multiple rows or vice versa)
* Splitting a column into multiple columns (''e.g.'', converting a comma-separated list, specified as a string in one column, into individual values in different columns)
* Disaggregating repeating columns
* Looking up and validating the relevant data from tables or referential files
* Applying any form of data validation; failed validation may result in a full rejection of the data, partial rejection, or no rejection at all, and thus none, some, or all of the data is handed over to the next step depending on the rule design and exception handling; many of the above transformations may result in exceptions, e.g., when a code translation parses an unknown code in the extracted data
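A minimal Python sketch of several transformation types from the list above: selecting columns, translating coded values, deriving a calculated value, and deduplicating rows. The column names, code mapping, and deduplication key are illustrative assumptions.

```python
# Transformation sketch: select columns, translate codes, derive a value,
# and deduplicate on a chosen key.
rows = [
    {"roll_no": 7, "sex": "1", "qty": 3, "unit_price": 9.5, "note": "x"},
    {"roll_no": 8, "sex": "2", "qty": 2, "unit_price": 4.0, "note": "y"},
    {"roll_no": 7, "sex": "1", "qty": 3, "unit_price": 9.5, "note": "x"},  # duplicate
]
SEX_CODES = {"1": "M", "2": "F"}   # translating coded values to warehouse codes

transformed, seen = [], set()
for r in rows:
    key = (r["roll_no"], r["qty"], r["unit_price"])    # deduplication key
    if key in seen:
        continue
    seen.add(key)
    transformed.append({
        "roll_no": r["roll_no"],                        # selecting only certain columns
        "sex": SEX_CODES.get(r["sex"], "?"),            # unknown codes are exceptions
        "sale_amount": r["qty"] * r["unit_price"],      # deriving a calculated value
    })
print(transformed)
```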
Load
The load phase loads the data into the end target, which can be any data store, including a simple delimited flat file or a data warehouse. Depending on the requirements of the organization, this process varies widely. Some data warehouses may overwrite existing information with cumulative information; updating extracted data is frequently done on a daily, weekly, or monthly basis. Other data warehouses (or even other parts of the same data warehouse) may add new data in a historical form at regular intervals — for example, hourly. To understand this, consider a data warehouse that is required to maintain sales records of the last year. This data warehouse overwrites any data older than a year with newer data. However, the entry of data for any one-year window is made in a historical manner. The timing and scope to replace or append are strategic design choices dependent on the time available and the business needs. More complex systems can maintain a history and audit trail of all changes to the data loaded in the data warehouse.
As the load phase interacts with a database, the constraints defined in the database schema — as well as in triggers activated upon data load — apply (for example, uniqueness,
referential integrity, mandatory fields), which also contribute to the overall data quality performance of the ETL process.
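A minimal sketch of constraint enforcement during the load, assuming a hypothetical SQLite target table; a uniqueness violation is caught and set aside rather than aborting the whole load.

```python
# Load-phase sketch: schema constraints (here, a primary key) reject bad rows.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")

rows = [(1, "Ada"), (2, "Grace"), (1, "Ada duplicate")]   # third row violates uniqueness
loaded, failed = 0, []
for row in rows:
    try:
        conn.execute("INSERT INTO customer VALUES (?, ?)", row)
        loaded += 1
    except sqlite3.IntegrityError as exc:   # the schema constraint rejected the row
        failed.append((row, str(exc)))
conn.commit()
print(f"loaded={loaded} failed={failed}")
```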
* For example, a financial institution might have information on a customer in several departments and each department might have that customer's information listed in a different way. The membership department might list the customer by name, whereas the accounting department might list the customer by number. ETL can bundle all of these data elements and consolidate them into a uniform presentation, such as for storing in a database or data warehouse.
* Another way that companies use ETL is to move information to another application permanently. For instance, the new application might use another database vendor and most likely a very different database schema. ETL can be used to transform the data into a format suitable for the new application to use.
* An example would be an Expense and Cost Recovery System (ECRS) such as used by accountancies, consultancies, and legal firms. The data usually ends up in the time and billing system, although some businesses may also utilize the raw data for employee productivity reports to Human Resources (personnel dept.) or equipment usage reports to Facilities Management.
Real-life ETL cycle
The typical real-life ETL cycle consists of the following execution steps (a minimal orchestration sketch follows the list):
# Cycle initiation
# Build reference data
# Extract (from sources)
# Validate
# Transform (clean, apply business rules, check for data integrity, create aggregates or disaggregates)
# Stage (load into staging tables, if used)
# Audit reports (for example, on compliance with business rules; also, in case of failure, helps to diagnose/repair)
# Publish (to target tables)
# Archive
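As a rough illustration of the cycle, the sketch below runs hypothetical step functions in order and stops on the first failure, in the spirit of the audit step above; all step bodies are placeholders, not a real framework.

```python
# Orchestration sketch: run the cycle steps in order; stop on failure so the
# audit output can help diagnose and repair the offending step.
def run_cycle(steps):
    data = None
    for name, fn in steps:
        try:
            data = fn(data)
            print(f"{name}: ok")
        except Exception as exc:
            print(f"{name}: failed ({exc}); see audit reports")
            break

run_cycle([
    ("extract",   lambda _: [{"id": 1}, {"id": 2}]),
    ("validate",  lambda rows: rows),
    ("transform", lambda rows: [{**r, "clean": True} for r in rows]),
    ("stage",     lambda rows: rows),
    ("publish",   lambda rows: rows),
])
```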
Challenges
ETL processes can involve considerable complexity, and significant operational problems can occur with improperly designed ETL systems.
The range of data values or data quality in an operational system may exceed the expectations of designers at the time validation and transformation rules are specified.
Data profiling of a source during data analysis can identify the data conditions that must be managed by transform rules specifications, leading to an amendment of validation rules explicitly and implicitly implemented in the ETL process.
Data warehouses are typically assembled from a variety of data sources with different formats and purposes. As such, ETL is a key process to bring all the data together in a standard, homogeneous environment.
Design analysis should establish the scalability of an ETL system across the lifetime of its usage — including understanding the volumes of data that must be processed within service level agreements. The time available to extract from source systems may change, which may mean the same amount of data may have to be processed in less time. Some ETL systems have to scale to process terabytes of data to update data warehouses with tens of terabytes of data. Increasing volumes of data may require designs that can scale from daily batch to multiple-day micro batch to integration with message queues or real-time change-data-capture for continuous transformation and update.
Performance
ETL vendors benchmark their record-systems at multiple TB (terabytes) per hour (or ~1 GB per second) using powerful servers with multiple CPUs, multiple hard drives, multiple gigabit-network connections, and much memory.
In real life, the slowest part of an ETL process usually occurs in the database load phase. Databases may perform slowly because they have to take care of concurrency, integrity maintenance, and indices. Thus, for better performance, it may make sense to employ:
* ''Direct path extract'' method or bulk unload whenever possible (instead of querying the database) to reduce the load on the source system while getting a high-speed extract
* Most of the transformation processing outside of the database
* Bulk load operations whenever possible
Still, even using bulk operations, database access is usually the bottleneck in the ETL process. Some common methods used to increase performance are (a sketch of the index-handling pattern follows the list):
* Partition tables (and indices): try to keep partitions similar in size (watch for null values that can skew the partitioning)
* Do all validation in the ETL layer before the load: disable integrity checking (disable constraint ...) in the target database tables during the load
* Disable triggers (disable trigger ...) in the target database tables during the load: simulate their effect as a separate step
* Generate IDs in the ETL layer (not in the database)
* Drop the indices (on a table or partition) before the load, and recreate them after the load (SQL: drop index ...; create index ...)
* Use parallel bulk load when possible — works well when the table is partitioned or there are no indices (Note: attempting to do parallel loads into the same table (partition) usually causes locks — if not on the data rows, then on indices)
* If a requirement exists to do insertions, updates, or deletions, find out which rows should be processed in which way in the ETL layer, and then process these three operations in the database separately; you often can do bulk load for inserts, but updates and deletes commonly go through an API (using SQL)
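As referenced above, a minimal sketch of the drop-index/bulk-load/recreate-index pattern, using SQLite purely for illustration; the table, index, and row counts are hypothetical.

```python
# Performance sketch: drop the index, bulk-load with executemany, recreate it.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fact_sales (store_id INTEGER, amount REAL)")
conn.execute("CREATE INDEX idx_store ON fact_sales (store_id)")

rows = [(i % 10, float(i)) for i in range(100_000)]

conn.execute("DROP INDEX idx_store")                              # drop before the load
conn.executemany("INSERT INTO fact_sales VALUES (?, ?)", rows)    # bulk load
conn.execute("CREATE INDEX idx_store ON fact_sales (store_id)")   # recreate after
conn.commit()
print(conn.execute("SELECT COUNT(*) FROM fact_sales").fetchone()[0])
```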
Whether to do certain operations in the database or outside may involve a trade-off. For example, removing duplicates using distinct may be slow in the database; thus, it makes sense to do it outside. On the other hand, if using distinct significantly (x100) decreases the number of rows to be extracted, then it makes sense to remove duplications as early as possible in the database before unloading data.
A common source of problems in ETL is a large number of dependencies among ETL jobs. For example, job "B" cannot start while job "A" is not finished. One can usually achieve better performance by visualizing all processes on a graph, trying to reduce the graph by making maximum use of parallelism, and making "chains" of consecutive processing as short as possible. Again, partitioning of big tables and their indices can really help.
Another common issue occurs when the data are spread among several databases, and processing is done in those databases sequentially. Sometimes database replication may be involved as a method of copying data between databases — it can significantly slow down the whole process. The common solution is to reduce the processing graph to only three layers:
* Sources
* Central ETL layer
* Targets
This approach allows processing to take maximum advantage of parallelism. For example, if you need to load data into two databases, you can run the loads in parallel (instead of loading into the first — and then replicating into the second).
Sometimes processing must take place sequentially. For example, dimensional (reference) data are needed before one can get and validate the rows for main
"fact" tables.
Parallel processing
A development in ETL software is the implementation of parallel processing. It has enabled a number of methods to improve overall performance of ETL when dealing with large volumes of data.
ETL applications implement three main types of parallelism:
* Data: By splitting a single sequential file into smaller data files to provide parallel access
* Pipeline: Allowing the simultaneous running of several components on the same data stream, e.g. looking up a value on record 1 at the same time as adding two fields on record 2
* Component: The simultaneous running of multiple processes on different data streams in the same job, e.g. sorting one input file while removing duplicates on another file
All three types of parallelism usually operate combined in a single job or task.
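A minimal sketch of pipeline parallelism in Python: an extract thread produces records onto a shared queue while a transform thread consumes them concurrently; the records and the derived field are illustrative.

```python
# Pipeline parallelism sketch: extraction and transformation overlap in time.
import queue
import threading

stream = queue.Queue(maxsize=100)
SENTINEL = None   # marks the end of the stream

def extract():
    for i in range(5):
        stream.put({"record": i})   # produce onto the shared data stream
    stream.put(SENTINEL)

def transform():
    while True:
        item = stream.get()
        if item is SENTINEL:
            break
        item["doubled"] = item["record"] * 2   # illustrative field derivation
        print(item)

producer = threading.Thread(target=extract)
consumer = threading.Thread(target=transform)
producer.start(); consumer.start()
producer.join(); consumer.join()
```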
An additional difficulty comes with making sure that the data being uploaded is relatively consistent. Because multiple source databases may have different update cycles (some may be updated every few minutes, while others may take days or weeks), an ETL system may be required to hold back certain data until all sources are synchronized. Likewise, where a warehouse may have to be reconciled to the contents in a source system or with the general ledger, establishing synchronization and reconciliation points becomes necessary.
Rerunnability, recoverability
Data warehousing procedures usually subdivide a large ETL process into smaller pieces running sequentially or in parallel. To keep track of data flows, it makes sense to tag each data row with "row_id", and tag each piece of the process with "run_id". In case of a failure, having these IDs helps to roll back and rerun the failed piece.
Best practice also calls for ''checkpoints'', which are states when certain phases of the process are completed. Once at a checkpoint, it is a good idea to write everything to disk, clean out some temporary files, log the state, etc.
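A minimal sketch of row/run tagging with a checkpoint, assuming a hypothetical checkpoint.json file; on failure, the run_id identifies which piece to roll back and rerun.

```python
# Rerunnability sketch: tag rows with row_id/run_id and checkpoint progress.
import json
import uuid

run_id = str(uuid.uuid4())
rows = [{"row_id": i, "run_id": run_id, "value": v}
        for i, v in enumerate(["a", "b", "c"])]

# Checkpoint: once a phase completes, persist its state so a failed rerun
# can resume here instead of starting the whole process over.
with open("checkpoint.json", "w") as fh:
    json.dump({"run_id": run_id, "rows_done": len(rows)}, fh)
print(f"run {run_id}: checkpointed {len(rows)} rows")
```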
Virtual ETL
Data virtualization has begun to advance ETL processing. The application of data virtualization to ETL allows solving the most common ETL tasks of
data migration and application integration for multiple dispersed data sources. Virtual ETL operates with the abstracted representation of the objects or entities gathered from the variety of relational, semi-structured, and
unstructured data sources. ETL tools can leverage object-oriented modeling and work with entities' representations persistently stored in a centrally located
hub-and-spoke architecture. Such a collection that contains representations of the entities or objects gathered from the data sources for ETL processing is called a metadata repository and it can reside in memory or be made persistent. By using a persistent metadata repository, ETL tools can transition from one-time projects to persistent middleware, performing data harmonization and
data profiling consistently and in near-real time.
Dealing with keys
Unique keys play an important part in all relational databases, as they tie everything together. A unique key is a column that identifies a given entity, whereas a foreign key is a column in another table that refers to a primary key. Keys can comprise several columns, in which case they are composite keys. In many cases, the primary key is an auto-generated integer that has no meaning for the business entity being represented, but exists solely for the purpose of the relational database, commonly referred to as a surrogate key.
As there is usually more than one data source getting loaded into the warehouse, the keys are an important concern to be addressed. For example: customers might be represented in several data sources, with their Social Security Number as the primary key in one source, their phone number in another, and a surrogate in the third. Yet a data warehouse may require the consolidation of all the customer information into one dimension.
A recommended way to deal with the concern involves adding a warehouse surrogate key, which is used as a foreign key from the fact table.
Usually, updates occur to a dimension's source data, which obviously must be reflected in the data warehouse.
If the primary key of the source data is required for reporting, the dimension already contains that piece of information for each row. If the source data uses a surrogate key, the warehouse must keep track of it even though it is never used in queries or reports; it is done by creating a
lookup table that contains the warehouse surrogate key and the originating key.
[Golfarelli/Rizzi, Data Warehouse Design, p 291] This way, the dimension is not polluted with surrogates from various source systems, while the ability to update is preserved.
The lookup table is used in different ways depending on the nature of the source data. There are five types to consider; three are included here (a minimal sketch of Type 2 follows the list):
;Type 1
:The dimension row is simply updated to match the current state of the source system; the warehouse does not capture history; the lookup table is used to identify the dimension row to update or overwrite
;Type 2
:A new dimension row is added with the new state of the source system; a new surrogate key is assigned; source key is no longer unique in the lookup table
;Fully logged
:A new dimension row is added with the new state of the source system, while the previous dimension row is updated to reflect that it is no longer active and to record the time of deactivation.
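As referenced above, a minimal sketch of Type 2 handling with an in-memory lookup table; the source system name and keys are hypothetical.

```python
# Type 2 sketch: each change adds a new dimension row, so one source key
# maps to several warehouse surrogate keys in the lookup table over time.
dimension = []     # warehouse dimension rows, indexed by surrogate key
lookup = {}        # (source_system, source_key) -> list of surrogate keys

def type2_update(source, key, attrs):
    for sk in lookup.setdefault((source, key), []):
        dimension[sk]["current"] = False          # deactivate earlier versions
    sk = len(dimension)                           # assign a new surrogate key
    lookup[(source, key)].append(sk)
    dimension.append({"sk": sk, "current": True, **attrs})

type2_update("crm", "cust-42", {"name": "Ada"})
type2_update("crm", "cust-42", {"name": "Ada L."})   # a change adds a new row
print(dimension)   # the first row now has current=False
print(lookup)      # one source key maps to two surrogate keys
```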
Tools
An established ETL framework may improve connectivity and scalability. A good ETL tool must be able to communicate with the many different relational databases and read the various file formats used throughout an organization. ETL tools have started to migrate into
Enterprise Application Integration, or even Enterprise Service Bus, systems that now cover much more than just the extraction, transformation, and loading of data. Many ETL vendors now have data profiling, data quality, and metadata capabilities. A common use case for ETL tools is converting CSV files to formats readable by relational databases. A typical translation of millions of records is facilitated by ETL tools that enable users to input CSV-like data feeds/files and import them into a database with as little code as possible.
ETL tools are typically used by a broad range of professionals, from students in computer science looking to quickly import large data sets to database architects in charge of company account management. They have become a convenient tool that can be relied on to get maximum performance. ETL tools in most cases contain a GUI that helps users conveniently transform data, using a visual data mapper, as opposed to writing large programs to parse files and modify data types.
While ETL tools have traditionally been for developers and IT staff, research firm Gartner wrote that the new trend is to provide these capabilities to business users so they can themselves create connections and data integrations when needed, rather than going to the IT staff. Gartner refers to these non-technical users as Citizen Integrators.
ETL vs. ELT
Extract, load, transform (ELT) is a variant of ETL where the extracted data is loaded into the target system first.
[Amazon Web Services, Data Warehousing on AWS, p 9]
The architecture for the analytics pipeline must also consider where to cleanse and enrich data as well as how to conform dimensions.
Ralph Kimball and Joe Caserta's book ''The Data Warehouse ETL Toolkit'' (Wiley, 2004), which is used as a textbook for courses teaching ETL processes in data warehousing, addressed this issue.
Cloud-based data warehouses like Amazon Redshift, Google BigQuery, and Snowflake Computing have been able to provide highly scalable computing power. This lets businesses forgo preload transformations and replicate raw data into their data warehouses, where they can transform it as needed using SQL.
After having used ELT, data may be processed further and stored in a data mart.
There are pros and cons to each approach. Most data integration tools skew towards ETL, while ELT is popular in database and data warehouse appliances. Similarly, it is possible to perform TEL (Transform, Extract, Load) where data is first transformed on a blockchain (as a way of recording changes to data, e.g., token burning) before extracting and loading into another data store.
See also
* Architecture patterns (EA reference architecture)
* Create, read, update and delete (CRUD)
* Data cleansing
* Data integration
* Data mart
* Data mediation
* Data mesh, a domain-oriented data architecture
* Data migration
* Electronic data interchange (EDI)
* Enterprise architecture
* Expense and cost recovery system (ECRS)
* Hartmann pipeline
* Legal Electronic Data Exchange Standard (LEDES)
* Metadata discovery
* Online analytical processing (OLAP)
* Online transaction processing (OLTP)
* Spatial ETL