Information extraction (IE) is the task of automatically extracting structured information from
unstructured and/or semi-structured
machine-readable documents and other electronically represented sources. In most cases, this activity involves processing human language texts by means of natural language processing
(NLP). Recent activities in
multimedia
Multimedia is a form of communication that uses a combination of different content forms such as text, audio, images, animations, or video into a single interactive presentation, in contrast to tradition ...
document processing, such as automatic annotation and content extraction from images, audio, video, and documents, can also be seen as information extraction.
Due to the difficulty of the problem, current approaches to IE (as of 2010) focus on narrowly restricted domains. An example is the extraction from newswire reports of corporate mergers, such as denoted by the formal relation:
:MergerBetween(company1, company2, date),
from an online news sentence such as:
:''"Yesterday, New York based Foo Inc. announced their acquisition of Bar Corp."''
A broad goal of IE is to allow computation to be done on the previously unstructured data. A more specific goal is to allow
logical reasoning
to draw inferences based on the logical content of the input data. Structured data is semantically well-defined data from a chosen target domain, interpreted with respect to category and
context.
Information extraction is part of a greater puzzle which deals with the problem of devising automatic methods for text management, beyond its transmission, storage and display. The discipline of
information retrieval
(IR) has developed automatic methods, typically of a statistical flavor, for indexing large document collections and classifying documents. Another complementary approach is that of
natural language processing
(NLP), which has modelled human language processing with considerable success, given the magnitude of the task. In terms of both difficulty and emphasis, IE deals with tasks in between those of IR and NLP. In terms of input, IE assumes the existence of a set of documents in which each document follows a template, i.e. describes one or more entities or events in a manner that is similar to those in other documents but differs in the details. As an example, consider a group of newswire articles on Latin American terrorism, with each article presumed to be based upon one or more terrorist acts. For any given IE task, we also define a template, which is a case frame (or a set of case frames) to hold the information contained in a single document. For the terrorism example, a template would have slots corresponding to the perpetrator, victim, and weapon of the terrorist act, and the date on which the event happened. An IE system for this problem is required to “understand” an attack article only enough to find data corresponding to the slots in this template.
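Such a template can be sketched as a simple data structure; the slot names below follow the terrorism example above, but the exact field set is an illustration, not taken from an actual MUC template definition:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative case frame for the terrorism example; slot names are
# hypothetical stand-ins for a real MUC-style template specification.
@dataclass
class AttackTemplate:
    perpetrator: Optional[str] = None
    victims: List[str] = field(default_factory=list)
    weapon: Optional[str] = None
    date: Optional[str] = None

# An IE system fills one such template per reported attack, leaving
# slots empty when the article does not mention them.
t = AttackTemplate(perpetrator="unknown group", weapon="car bomb")
print(t)
```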
History
Information extraction dates back to the late 1970s in the early days of NLP. An early commercial system from the mid-1980s was JASPER built for
Reuters
by the Carnegie Group Inc with the aim of providing
real-time financial news to financial traders.
Beginning in 1987, IE was spurred by a series of
Message Understanding Conference
s. MUC is a competition-based conference that focused on the following domains:
*MUC-1 (1987), MUC-2 (1989): Naval operations messages.
*MUC-3 (1991), MUC-4 (1992): Terrorism in Latin American countries.
*MUC-5 (1993): Joint ventures and microelectronics domain.
*MUC-6 (1995): News articles on management changes.
*MUC-7 (1998): Satellite launch reports.
Considerable support came from the U.S. Defense Advanced Research Projects Agency (DARPA), which wished to automate mundane tasks performed by government analysts, such as scanning newspapers for possible links to terrorism.
Present significance
The present significance of IE pertains to the growing amount of information available in unstructured form.
Tim Berners-Lee
, inventor of the
World Wide Web
, refers to the existing
Internet
as the web of ''documents'' and advocates that more of the content be made available as a
web of ''data''. Until this transpires, the web largely consists of unstructured documents lacking semantic
metadata
Metadata is "data that provides information about other data", but not the content of the data, such as the text of a message or the image itself. There are many distinct types of metadata, including:
* Descriptive metadata – the descriptive ...
. Knowledge contained within these documents can be made more accessible for machine processing by means of transformation into
relational form, or by marking-up with
XML
tags. An intelligent agent monitoring a news data feed requires IE to transform unstructured data into something that can be reasoned with. A typical application of IE is to scan a set of documents written in a
natural language
and populate a database with the information extracted.
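As a sketch of that typical application: suppose an extractor has already produced fact triples; populating a queryable database is then straightforward. The table layout and the hard-coded facts below are assumptions standing in for real extractor output:

```python
import sqlite3

# Stand-ins for extractor output; a real pipeline would produce these
# triples from the document scan described above.
facts = [("Foo Inc.", "acquired", "Bar Corp."),
         ("Bill", "works_for", "IBM")]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE facts (subject TEXT, relation TEXT, object TEXT)")
conn.executemany("INSERT INTO facts VALUES (?, ?, ?)", facts)

# The structured form now supports queries the raw text could not.
for row in conn.execute(
        "SELECT subject, object FROM facts WHERE relation = 'acquired'"):
    print(row)  # ('Foo Inc.', 'Bar Corp.')
```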
Tasks and subtasks
Applying information extraction to text is linked to the problem of
text simplification
in order to create a structured view of the information present in free text. The overall goal is to produce text that is more easily machine-readable for processing the sentences. Typical IE tasks and subtasks include:
* Template filling: Extracting a fixed set of fields from a document, e.g. extract perpetrators, victims, time, etc. from a newspaper article about a terrorist attack.
** Event extraction: Given an input document, output zero or more event templates. For instance, a newspaper article might describe multiple terrorist attacks.
*
Knowledge Base
Population: Fill a database of facts given a set of documents. Typically the database is in the form of triplets, (entity 1, relation, entity 2), e.g. (
Barack Obama
, Spouse,
Michelle Obama)
**
Named entity recognition
: recognition of known entity names (for people and organizations), place names, temporal expressions, and certain types of numerical expressions, by employing existing knowledge of the domain or information extracted from other sentences (a minimal sketch appears after this list).
Typically the recognition task involves assigning a unique identifier to the extracted entity. A simpler task is ''named entity detection'', which aims at detecting entities without having any existing knowledge about the entity instances. For example, in processing the sentence "M. Smith likes fishing", ''named entity detection'' would denote detecting that the phrase "M. Smith" does refer to a person, but without necessarily having (or using) any knowledge about a certain ''M. Smith'' who is (or, "might be") the specific person whom that sentence is talking about.
**
Coreference
resolution: detection of
coreference
and
anaphoric links between text entities. In IE tasks, this is typically restricted to finding links between previously-extracted named entities. For example, "International Business Machines" and "IBM" refer to the same real-world entity. If we take the two sentences "M. Smith likes fishing. But he doesn't like biking", it would be beneficial to detect that "he" is referring to the previously detected person "M. Smith".
**
Relationship extraction
: identification of relations between entities,
such as:
*** PERSON works for ORGANIZATION (extracted from the sentence "Bill works for IBM.")
*** PERSON located in LOCATION (extracted from the sentence "Bill is in France.")
* Semi-structured information extraction, which may refer to any IE that tries to restore some kind of information structure that has been lost through publication, such as:
** Table extraction: finding and extracting tables from documents.
** Table information extraction: extracting information in a structured manner from tables. This is a more complex task than table extraction, since table extraction is only the first step: understanding the roles of the cells, rows, and columns, linking the information inside the table, and understanding the information presented in the table are additional tasks necessary for table information extraction.
** Comment extraction: extracting comments from the actual content of an article in order to restore the link between the author and the content of each sentence.
* Language and vocabulary analysis
**
Terminology extraction
: finding the relevant terms for a given
corpus.
* Audio extraction
** Template-based music extraction: finding relevant characteristics in an audio signal taken from a given repertoire; for instance, time indexes of occurrences of percussive sounds can be extracted in order to represent the essential rhythmic component of a music piece.
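As a minimal sketch of named entity recognition and knowledge-base-style triples, the following uses NLTK's off-the-shelf chunker (listed under software below). The WorksFor triple construction is a deliberately naive assumption, not a full relation-classification pipeline:

```python
import nltk

# One-time downloads of the pretrained models the NLTK chunker needs
# (resource names may vary slightly across NLTK versions).
for pkg in ("punkt", "averaged_perceptron_tagger", "maxent_ne_chunker", "words"):
    nltk.download(pkg, quiet=True)

sentence = "Bill works for IBM."
tokens = nltk.word_tokenize(sentence)   # ['Bill', 'works', 'for', 'IBM', '.']
tagged = nltk.pos_tag(tokens)           # part-of-speech tagging
tree = nltk.ne_chunk(tagged)            # named entity detection

# Collect (entity, label) pairs from the chunk tree.
entities = [(" ".join(word for word, tag in st.leaves()), st.label())
            for st in tree.subtrees() if st.label() != "S"]
print(entities)  # e.g. [('Bill', 'PERSON'), ('IBM', 'ORGANIZATION')]

# A naive triple for the "PERSON works for ORGANIZATION" relation above;
# real KB population systems classify relations statistically.
persons = [e for e, label in entities if label == "PERSON"]
orgs = [e for e, label in entities if label == "ORGANIZATION"]
if persons and orgs and "works" in tokens:
    print((persons[0], "WorksFor", orgs[0]))  # ('Bill', 'WorksFor', 'IBM')
```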
Note that this list is not exhaustive, that the exact meaning of IE activities is not commonly agreed upon, and that many approaches combine multiple sub-tasks of IE in order to achieve a wider goal. Machine learning, statistical analysis and/or natural language processing are often used in IE.
IE on non-text documents is becoming an increasingly interesting topic in research, and information extracted from multimedia documents can now be expressed in a high level structure as it is done on text. This naturally leads to the fusion of extracted information from multiple kinds of documents and sources.
World Wide Web applications
IE has been the focus of the MUC conferences. The proliferation of the
Web, however, intensified the need for developing IE systems that help people to cope with the
enormous amount of data that are available online. Systems that perform IE from online text should meet the requirements of low cost, flexibility in development and easy adaptation to new domains. MUC systems fail to meet those criteria. Moreover, linguistic analysis performed for unstructured text does not exploit the HTML/
XML
tags and the layout formats that are available in online texts. As a result, less linguistically intensive approaches have been developed for IE on the Web using
wrappers, which are sets of highly accurate rules that extract a particular page's content. Manually developing wrappers has proved to be a time-consuming task, requiring a high level of expertise.
Machine learning
techniques, either
supervised or
unsupervised
, have been used to induce such rules automatically.
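To make the wrapper idea concrete, here is a toy example; the HTML layout and field names are hypothetical assumptions, chosen only to show how page-specific such rules are:

```python
import re

# Hypothetical catalog page whose products all follow one rigid layout.
html = (
    '<div class="product"><span class="name">Widget A</span>'
    '<span class="price">$9.99</span></div>'
    '<div class="product"><span class="name">Widget B</span>'
    '<span class="price">$14.50</span></div>'
)

# A wrapper: a highly accurate rule keyed to the markup, not the language.
RULE = re.compile(
    r'<span class="name">(?P<name>[^<]+)</span>'
    r'<span class="price">\$(?P<price>[\d.]+)</span>'
)

records = [(m["name"], float(m["price"])) for m in RULE.finditer(html)]
print(records)  # [('Widget A', 9.99), ('Widget B', 14.5)]
```

Because such a rule is tied to one layout, any page redesign breaks it, which is part of what motivates the adaptive approaches described next.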
''Wrappers'' typically handle highly structured collections of web pages, such as product catalogs and telephone directories. They fail, however, when the text type is less structured, which is also common on the Web. Recent effort on ''adaptive information extraction'' motivates the development of IE systems that can handle different types of text, from well-structured to almost free text (where common wrappers fail), including mixed types. Such systems can exploit shallow natural language knowledge and thus can also be applied to less structured texts.
A recent development is Visual Information Extraction, which relies on rendering a webpage in a browser and creating rules based on the proximity of regions in the rendered web page. This helps in extracting entities from complex web pages that may exhibit a visual pattern but lack a discernible pattern in the HTML source code.
Approaches
The following standard approaches are now widely accepted:
* Hand-written regular expressions (or nested groups of regular expressions)
* Using classifiers
** Generative:
naïve Bayes classifier
** Discriminative:
maximum entropy models such as
multinomial logistic regression
* Sequence models
**
Recurrent neural network
**
Hidden Markov model
** Conditional Markov model (CMM) /
Maximum-entropy Markov model
(MEMM)
**
Conditional random fields (CRFs) are commonly used in conjunction with IE for tasks ranging from extracting information from research papers to extracting navigation instructions.
Numerous other approaches exist for IE including hybrid approaches that combine some of the standard approaches previously listed.
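As a sketch of the sequence-model approach, the following uses the third-party sklearn-crfsuite package (an assumption; any CRF toolkit would serve) to label tokens with entity tags. The features and the two-sentence training set are toy illustrations:

```python
import sklearn_crfsuite  # third-party CRF toolkit; assumed installed

def features(tokens, i):
    # Minimal per-token features; real systems use many more.
    return {
        "word": tokens[i].lower(),
        "is_title": tokens[i].istitle(),
        "prev": tokens[i - 1].lower() if i > 0 else "<s>",
    }

# Toy training data: token sequences with BIO-style entity labels.
sents = [["Bill", "works", "for", "IBM", "."],
         ["Alice", "joined", "Acme", "Corp", "."]]
labels = [["B-PER", "O", "O", "B-ORG", "O"],
          ["B-PER", "O", "B-ORG", "I-ORG", "O"]]

X = [[features(s, i) for i in range(len(s))] for s in sents]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, labels)

test = ["Bob", "works", "for", "Acme", "."]
print(crf.predict([[features(test, i) for i in range(len(test))]]))
```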
Free or open source software and services
*
General Architecture for Text Engineering
(GATE) is bundled with a free Information Extraction system
* Apache
OpenNLP
is a Java machine learning toolkit for natural language processing
*
OpenCalais is an automated information extraction web service from
Thomson Reuters
(Free limited version)
*
Machine Learning for Language Toolkit (Mallet) is a Java-based package for a variety of natural language processing tasks, including information extraction.
*
DBpedia Spotlight
DBpedia (from "DB" for "database") is a project aiming to extract structured content from the information created in the Wikipedia project. This structured information is made available on the World Wide Web. DBpedia allows users to semantical ...
is an open source tool in Java/Scala (and free web service) that can be used for named entity recognition and
name resolution.
*
Natural Language Toolkit
is a suite of libraries and programs for symbolic and statistical natural language processing (NLP) for the Python programming language
* See also
CRF implementations
See also
; Extraction
*
Data extraction
*
Keyword extraction
*
Knowledge extraction
*
Ontology extraction
*
Open information extraction
*
Table extraction
*
Terminology extraction
; Mining, crawling, scraping, and recognition
*
Apache Nutch
, web crawler
*
Concept mining
*
Named entity recognition
*
Text mining
*
Web scraping
; Search and translation
*
Enterprise search
*
Faceted search
*
Semantic translation
; General
*
Applications of artificial intelligence
*
DARPA TIPSTER Program
; Lists
*
List of emerging technologies
*
Outline of artificial intelligence
External links
Alias-I "competition" pageA listing of academic toolkits and industrial toolkits for natural language information extraction.
Gabor Melli's page on IEDetailed description of the information extraction task.