Natural-language understanding (NLU) or natural-language interpretation (NLI) is a subtopic of natural-language processing in artificial intelligence that deals with machine reading comprehension. Natural-language understanding is considered an AI-hard problem.
There is considerable commercial interest in the field because of its application to automated reasoning, machine translation, question answering, news-gathering, text categorization, voice activation, archiving, and large-scale content analysis.
History
The program STUDENT, written in 1964 by Daniel Bobrow for his PhD dissertation at MIT, is one of the earliest known attempts at natural-language understanding by a computer. Eight years after John McCarthy coined the term artificial intelligence, Bobrow's dissertation (titled ''Natural Language Input for a Computer Problem Solving System'') showed how a computer could understand simple natural-language input to solve algebra word problems.
A year later, in 1965, Joseph Weizenbaum at MIT wrote ELIZA, an interactive program that carried on a dialogue in English on any topic, the most popular being psychotherapy. ELIZA worked by simple parsing and substitution of key words into canned phrases, and Weizenbaum sidestepped the problem of giving the program a database of real-world knowledge or a rich lexicon. Yet ELIZA gained surprising popularity as a toy project and can be seen as a very early precursor to current commercial systems such as those used by Ask.com.
In 1969, Roger Schank at Stanford University introduced the conceptual dependency theory for natural-language understanding. This model, partially influenced by the work of Sydney Lamb, was extensively used by Schank's students at Yale University, such as Robert Wilensky, Wendy Lehnert, and Janet Kolodner.
In 1970, William A. Woods introduced the augmented transition network (ATN) to represent natural-language input. Instead of ''phrase structure rules'', ATNs used an equivalent set of finite-state automata that were called recursively. ATNs and their more general format, called "generalized ATNs", continued to be used for a number of years.
In 1971, Terry Winograd finished writing SHRDLU for his PhD thesis at MIT. SHRDLU could understand simple English sentences in a restricted world of children's blocks to direct a robotic arm to move items. The successful demonstration of SHRDLU provided significant momentum for continued research in the field. Winograd continued to be a major influence in the field with the publication of his book ''Language as a Cognitive Process''. At Stanford, Winograd would later advise Larry Page, who co-founded Google.
In the 1970s and 1980s, the natural-language processing group at SRI International continued research and development in the field. A number of commercial efforts based on the research were undertaken; ''e.g.'', in 1982 Gary Hendrix formed Symantec Corporation, originally as a company for developing a natural-language interface for database queries on personal computers. However, with the advent of mouse-driven graphical user interfaces, Symantec changed direction. A number of other commercial efforts were started around the same time, ''e.g.'', Larry R. Harris at the Artificial Intelligence Corporation and Roger Schank and his students at Cognitive Systems Corp. In 1983, Michael Dyer developed the BORIS system at Yale, which bore similarities to the work of Roger Schank and W. G. Lehnert.
The third millennium saw the introduction of systems using machine learning for text classification, such as IBM Watson. However, experts debate how much "understanding" such systems demonstrate: ''e.g.'', according to John Searle, Watson did not even understand the questions. John Ball, cognitive scientist and inventor of Patom Theory, supports this assessment. Natural language processing has made inroads for applications to support human productivity in service and e-commerce, but this has largely been made possible by narrowing the scope of the application. There are thousands of ways to request something in a human language, and this variability still defies conventional natural language processing. "To have a meaningful conversation with machines is only possible when we match every word to the correct meaning based on the meanings of the other words in the sentence – just like a 3-year-old does without guesswork."
Scope and context
The umbrella term "natural-language understanding" can be applied to a diverse set of computer applications, ranging from small, relatively simple tasks such as short commands issued to robots, to highly complex endeavors such as the full comprehension of newspaper articles or poetry passages. Many real-world applications fall between the two extremes; for instance, text classification for the automatic analysis of emails and their routing to a suitable department in a corporation does not require an in-depth understanding of the text, but it needs to deal with a much larger vocabulary and more diverse syntax than the management of simple queries to database tables with fixed schemata.
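For illustration, such a shallow routing classifier can be sketched with standard machine-learning tooling; the department labels and training emails below are invented for the example, and a real deployment would require far more data:

<syntaxhighlight lang="python">
# Illustrative sketch: routing emails to departments with a shallow
# bag-of-words classifier.  No parsing or semantics is involved --
# only word statistics, which suffices for routing, not "understanding".
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "My invoice shows the wrong amount, please correct the billing.",
    "The application crashes whenever I open the settings page.",
    "I would like a quote for 500 units of your enterprise plan.",
]
train_labels = ["billing", "support", "sales"]

router = make_pipeline(TfidfVectorizer(), MultinomialNB())
router.fit(train_texts, train_labels)

print(router.predict(["The settings page crashes when I open it."]))
# expected to route to 'support' given the overlapping vocabulary
</syntaxhighlight>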
Over the years, various attempts at processing natural-language or ''English-like'' sentences presented to computers have taken place at varying degrees of complexity. Some attempts have not resulted in systems with deep understanding, but have improved overall system usability. For example, Wayne Ratliff originally developed the ''Vulcan'' program with an English-like syntax to mimic the English-speaking computer in ''Star Trek''. Vulcan later became the dBase system, whose easy-to-use syntax effectively launched the personal-computer database industry. Systems with an easy-to-use or ''English-like'' syntax are, however, quite distinct from systems that use a rich lexicon and include an internal representation (often as first-order logic) of the semantics of natural-language sentences.
Hence the breadth and depth of "understanding" aimed at by a system determine both the complexity of the system (and the implied challenges) and the types of applications it can deal with. The "breadth" of a system is measured by the sizes of its vocabulary and grammar. The "depth" is measured by the degree to which its understanding approximates that of a fluent native speaker. At the narrowest and shallowest, ''English-like'' command interpreters require minimal complexity, but have a small range of applications. Narrow but deep systems explore and model mechanisms of understanding, but they still have limited application. Systems that attempt to understand the contents of a document such as a news release beyond simple keyword matching and to judge its suitability for a user are broader and require significant complexity, but they are still somewhat shallow. Systems that are both very broad and very deep are beyond the current state of the art.
Components and architecture
Regardless of the approach used, most natural-language-understanding systems share some common components. The system needs a lexicon of the language, a parser, and grammar rules to break sentences into an internal representation. The construction of a rich lexicon with a suitable ontology requires significant effort; ''e.g.'', the WordNet lexicon required many person-years of effort.
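As a small illustration, such lexical resources can be queried programmatically; the sketch below assumes the NLTK library with the WordNet corpus already downloaded (via nltk.download("wordnet")):

<syntaxhighlight lang="python">
# Minimal sketch: consulting the WordNet lexicon/ontology through NLTK.
# Assumes `pip install nltk` and a one-time nltk.download("wordnet").
from nltk.corpus import wordnet as wn

# A synset groups word senses; hypernyms expose the "is-a" hierarchy
# (e.g. dog -> canine -> carnivore -> ...) that a rich lexicon makes
# available to an understanding system.
for synset in wn.synsets("dog")[:2]:
    print(synset.name(), "-", synset.definition())
    print("  hypernyms:", [h.name() for h in synset.hypernyms()])
</syntaxhighlight>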
The system also needs theory from ''semantics'' to guide the comprehension. The interpretation capabilities of a language-understanding system depend on the semantic theory it uses. Competing semantic theories of language have specific trade-offs in their suitability as the basis of computer-automated semantic interpretation. These range from ''naive semantics'' or ''stochastic semantic analysis'' to the use of ''pragmatics'' to derive meaning from context.
Semantic parsers convert natural-language texts into formal meaning representations.
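A toy example of this mapping, using an invented grammar that covers a single sentence pattern and invented predicate names, might look as follows (real semantic parsers are far more general):

<syntaxhighlight lang="python">
# Toy semantic parser: maps one fixed English pattern to a
# first-order-logic-style meaning representation.  The pattern and
# the predicate names are invented purely for illustration.
import re

def parse(utterance: str) -> str:
    """Map sentences like 'Every student reads a book' to a logical form."""
    m = re.fullmatch(r"every (\w+) (\w+)s a (\w+)", utterance.lower())
    if not m:
        raise ValueError("utterance not covered by this toy grammar")
    subj, verb, obj = m.groups()
    return f"forall x. {subj}(x) -> exists y. ({obj}(y) & {verb}(x, y))"

print(parse("Every student reads a book"))
# forall x. student(x) -> exists y. (book(y) & read(x, y))
</syntaxhighlight>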
Advanced applications of natural-language understanding also attempt to incorporate logical inference within their framework. This is generally achieved by mapping the derived meaning into a set of assertions in predicate logic, then using logical deduction to arrive at conclusions. Therefore, systems based on functional languages such as Lisp need to include a subsystem to represent logical assertions, while logic-oriented systems such as those using the language Prolog generally rely on an extension of the built-in logical representation framework.
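The deduction step can be sketched, in a deliberately simplified form, as forward chaining over ground assertions and Horn-style rules; the facts, rule, and predicate names below are invented for the example:

<syntaxhighlight lang="python">
# Simplified sketch of the inference step: assertions derived from text
# are stored as ground facts, and a Horn-style rule is applied by
# forward chaining until no new conclusions appear.  Only one variable
# (?x) is supported, to keep the matching trivial.
facts = {("student", "daniel"), ("wrote", "daniel", "thesis")}

# Rule: student(?x) & wrote(?x, thesis) -> graduate(?x)
rules = [
    ({("student", "?x"), ("wrote", "?x", "thesis")}, ("graduate", "?x")),
]

def substitute(term, binding):
    return tuple(binding.get(part, part) for part in term)

def forward_chain(facts, rules):
    changed = True
    while changed:
        changed = False
        constants = {arg for fact in facts for arg in fact[1:]}
        for premises, conclusion in rules:
            for c in constants:          # naively try every constant for ?x
                binding = {"?x": c}
                if all(substitute(p, binding) in facts for p in premises):
                    new_fact = substitute(conclusion, binding)
                    if new_fact not in facts:
                        facts.add(new_fact)
                        changed = True
    return facts

print(forward_chain(facts, rules))
# the derived set now also contains ('graduate', 'daniel')
</syntaxhighlight>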
The management of context in natural-language understanding can present special challenges. A large variety of examples and counterexamples have resulted in multiple approaches to the formal modeling of context, each with specific strengths and weaknesses.
See also
* Computational semantics
* Computational linguistics
* Discourse representation theory
* Deep linguistic processing
* History of natural language processing
* Information extraction
* Mathematica
* Natural-language processing
* Natural-language programming
* Natural-language user interface
** Siri (software)
** Wolfram Alpha
* Open information extraction
* Part-of-speech tagging
* Speech recognition