Word-sense disambiguation

Word-sense disambiguation (WSD) is the process of identifying which sense of a word is meant in a sentence or other segment of context. In human language processing and cognition, it is usually subconscious/automatic but can often come to conscious attention when ambiguity impairs clarity of communication, given the pervasive polysemy in natural language. In computational linguistics, it is an open problem that affects other computer-related writing, such as discourse, improving relevance of search engines, anaphora resolution, coherence, and inference. Given that natural language requires reflection of neurological reality, as shaped by the abilities provided by the brain's neural networks, computer science has had a long-term challenge in developing the ability in computers to do natural language processing and machine learning. Many techniques have been researched, including dictionary-based methods that use the knowledge encoded in lexical resources, supervised machine learning methods in which a classifier is trained for each distinct word on a corpus of manually sense-annotated examples, and completely unsupervised methods that cluster occurrences of words, thereby inducing word senses. Among these, supervised learning approaches have been the most successful algorithms to date. Accuracy of current algorithms is difficult to state without a host of caveats. In English, accuracy at the coarse-grained (homograph) level is routinely above 90% (as of 2009), with some methods on particular homographs achieving over 96%. On finer-grained sense distinctions, top accuracies from 59.1% to 69.0% have been reported in evaluation exercises (SemEval-2007, Senseval-2), where the baseline accuracy of the simplest possible algorithm of always choosing the most frequent sense was 51.4% and 57%, respectively.
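
The most-frequent-sense baseline mentioned above is simple enough to sketch directly. The following is a minimal illustration using NLTK's WordNet interface (an assumption of this sketch: NLTK and its WordNet data must be installed, and WordNet's synset ordering only approximates sense frequency):

    from nltk.corpus import wordnet as wn

    def most_frequent_sense(word, pos=None):
        # WordNet lists synsets in rough frequency order, so the first one
        # serves as the most-frequent-sense (MFS) baseline prediction.
        synsets = wn.synsets(word, pos=pos)
        return synsets[0] if synsets else None

    sense = most_frequent_sense("bank")
    if sense is not None:
        print(sense.name(), "-", sense.definition())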


Variants

Disambiguation requires two strict inputs: a dictionary to specify the senses which are to be disambiguated and a corpus of language data to be disambiguated (in some methods, a training corpus of language examples is also required). The WSD task has two variants: the "lexical sample" task (disambiguating the occurrences of a small sample of previously selected target words) and the "all words" task (disambiguating all the words in a running text). The "all words" task is generally considered a more realistic form of evaluation, but the corpus is more expensive to produce because human annotators have to read the definitions for each word in the sequence every time they need to make a tagging judgement, rather than once for a block of instances for the same target word.
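
To make the two settings concrete, here is a purely illustrative sketch of how their inputs might be laid out as data; the field names and sense labels are invented for the example, not a standard format:

    # Lexical sample: a small set of pre-selected target words, each with many
    # tagged occurrences, annotated in one pass per target word.
    lexical_sample = {
        "bank": [
            {"context": "He sat on the bank of the river.", "sense": "bank.n.01"},
            {"context": "She deposited the cheque at the bank.", "sense": "bank.n.02"},
        ],
    }

    # All-words: every content word of a running text receives a sense tag,
    # so annotators must consult the sense inventory for each word in turn.
    all_words = [
        {"token": "sat", "sense": "sit.v.01"},
        {"token": "bank", "sense": "bank.n.01"},
        {"token": "river", "sense": "river.n.01"},
    ]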


History

WSD was first formulated as a distinct computational task during the early days of machine translation in the 1940s, making it one of the oldest problems in computational linguistics.
Warren Weaver first introduced the problem in a computational context in his 1949 memorandum on translation. Later, Bar-Hillel (1960) argued that WSD could not be solved by "electronic computer" because of the need in general to model all world knowledge. In the 1970s, WSD was a subtask of semantic interpretation systems developed within the field of artificial intelligence, starting with Wilks' preference semantics. However, since WSD systems were at the time largely rule-based and hand-coded, they were prone to a knowledge acquisition bottleneck. By the 1980s large-scale lexical resources, such as the Oxford Advanced Learner's Dictionary of Current English (OALD), became available: hand-coding was replaced with knowledge automatically extracted from these resources, but disambiguation was still knowledge-based or dictionary-based. In the 1990s, the statistical revolution advanced computational linguistics, and WSD became a paradigm problem on which to apply supervised machine learning techniques. The 2000s saw supervised techniques reach a plateau in accuracy, and so attention has shifted to coarser-grained senses, domain adaptation, semi-supervised and unsupervised corpus-based systems, combinations of different methods, and the return of knowledge-based systems via graph-based methods. Still, supervised systems continue to perform best.


Difficulties


Differences between dictionaries

One problem with word sense disambiguation is deciding what the senses are, as different dictionaries and thesauri will provide different divisions of words into senses. Some researchers have suggested choosing a particular dictionary, and using its set of senses to deal with this issue. Generally, however, research results using broad distinctions in senses have been much better than those using narrow ones. Most researchers continue to work on fine-grained WSD. Most research in the field of WSD is performed by using WordNet as a reference sense inventory for English. WordNet is a computational lexicon that encodes concepts as synonym sets (e.g. the concept of car is encoded as { car, auto, automobile, machine, motorcar }). Other resources used for disambiguation purposes include Roget's Thesaurus and Wikipedia. More recently, BabelNet, a multilingual encyclopedic dictionary, has been used for multilingual WSD.
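
As a sketch of what such a sense inventory looks like in practice, the WordNet synsets for "car" can be listed with NLTK (assuming the WordNet data has been downloaded):

    from nltk.corpus import wordnet as wn

    for synset in wn.synsets("car"):
        # Each synset is one sense: a set of synonymous lemmas plus a short gloss.
        print(synset.name(), synset.lemma_names(), "-", synset.definition())
    # The first line printed is the synset { car, auto, automobile, machine, motorcar }.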


Part-of-speech tagging

In any real test,
part-of-speech tagging and sense tagging have proven to be very closely related, with each potentially imposing constraints upon the other. The question whether these tasks should be kept together or decoupled is still not unanimously resolved, but recently scientists have tended to test them separately (e.g. in the Senseval/SemEval competitions, parts of speech are provided as input for the text to disambiguate). Both WSD and part-of-speech tagging involve disambiguating or tagging with words. However, algorithms used for one do not tend to work well for the other, mainly because the part of speech of a word is primarily determined by the immediately adjacent one to three words, whereas the sense of a word may be determined by words further away. The success rate for part-of-speech tagging algorithms is at present much higher than that for WSD, state-of-the-art being around 96% accuracy or better, as compared to less than 75% accuracy in word sense disambiguation with supervised learning. These figures are typical for English, and may be very different from those for other languages.


Inter-judge variance

Another problem is inter-judge variance. WSD systems are normally tested by having their results on a task compared against those of a human. However, while it is relatively easy to assign parts of speech to text, training people to tag senses has proven to be far more difficult. While users can memorize all of the possible parts of speech a word can take, it is often impossible for individuals to memorize all of the senses a word can take. Moreover, humans do not agree on the task at hand: give a list of senses and sentences, and humans will not always agree on which sense a word belongs to. As human performance serves as the standard, it is an upper bound for computer performance. Human performance, however, is much better on coarse-grained than fine-grained distinctions, which is why research on coarse-grained distinctions has been put to the test in recent WSD evaluation exercises.


Pragmatics

Some AI researchers like Douglas Lenat argue that one cannot parse meanings from words without some form of common-sense ontology. This linguistic issue is called pragmatics. As agreed by researchers, to properly identify senses of words one must know common-sense facts. Moreover, common sense is sometimes needed to disambiguate words such as pronouns in the case of anaphoras or cataphoras in the text.


Sense inventory and algorithms' task-dependency

A task-independent sense inventory is not a coherent concept: each task requires its own division of word meaning into senses relevant to the task. Additionally, completely different algorithms might be required by different applications. In machine translation, the problem takes the form of target word selection. The "senses" are words in the target language, which often correspond to significant meaning distinctions in the source language ("bank" could translate to the French "banque", that is, 'financial bank', or to "rive", that is, 'edge of river'). In information retrieval, a sense inventory is not necessarily required, because it is enough to know that a word is used in the same sense in the query and a retrieved document; what sense that is, is unimportant.


Discreteness of senses

Finally, the very notion of "
word sense In linguistics, a word sense is one of the meanings of a word. For example, a dictionary may have over 50 different senses of the word " play", each of these having a different meaning based on the context of the word's usage in a sentence, as ...
" is slippery and controversial. Most people can agree in distinctions at the
coarse-grained Granularity (also called graininess), the condition of existing in granules or grains, refers to the extent to which a material or system is composed of distinguishable pieces. It can either refer to the extent to which a larger entity is sub ...
homograph A homograph (from the el, ὁμός, ''homós'', "same" and γράφω, ''gráphō'', "write") is a word that shares the same written form as another word but has a different meaning. However, some dictionaries insist that the words must also ...
level (e.g., pen as writing instrument or enclosure), but go down one level to
fine-grained Granularity (also called graininess), the condition of existing in granules or grains, refers to the extent to which a material or system is composed of distinguishable pieces. It can either refer to the extent to which a larger entity is sub ...
polysemy Polysemy ( or ; ) is the capacity for a sign (e.g. a symbol, a morpheme, a word, or a phrase) to have multiple related meanings. For example, a word can have several word senses. Polysemy is distinct from ''monosemy'', where a word has a singl ...
, and disagreements arise. For example, in Senseval-2, which used fine-grained sense distinctions, human annotators agreed in only 85% of word occurrences. Word meaning is in principle infinitely variable and context-sensitive. It does not divide up easily into distinct or discrete sub-meanings. Lexicographers frequently discover in corpora loose and overlapping word meanings, and standard or conventional meanings extended, modulated, and exploited in a bewildering variety of ways. The art of lexicography is to generalize from the corpus to definitions that evoke and explain the full range of meaning of a word, making it seem like words are well-behaved semantically. However, it is not at all clear if these same meaning distinctions are applicable in computational applications, as the decisions of lexicographers are usually driven by other considerations. In 2009, a task – named lexical substitution – was proposed as a possible solution to the sense discreteness problem. The task consists of providing a substitute for a word in context that preserves the meaning of the original word (potentially, substitutes can be chosen from the full lexicon of the target language, thus overcoming discreteness).


Approaches and methods

There are two main approaches to WSD – deep approaches and shallow approaches. Deep approaches presume access to a comprehensive body of world knowledge. These approaches are generally not considered to be very successful in practice, mainly because such a body of knowledge does not exist in a computer-readable format, outside very limited domains. Additionally, owing to the long tradition in computational linguistics of trying such approaches in terms of coded knowledge, it can in some cases be hard to distinguish between linguistic knowledge and world knowledge. The first attempt was that by Margaret Masterman and her colleagues, at the Cambridge Language Research Unit in England, in the 1950s. This attempt used as data a punched-card version of Roget's Thesaurus and its numbered "heads", as an indicator of topics, and looked for repetitions in text, using a set intersection algorithm. It was not very successful, but had strong relationships to later work, especially Yarowsky's machine learning optimisation of a thesaurus method in the 1990s. Shallow approaches don't try to understand the text, but instead consider the surrounding words. These rules can be automatically derived by the computer, using a training corpus of words tagged with their word senses. This approach, while theoretically not as powerful as deep approaches, gives superior results in practice, due to the computer's limited world knowledge. There are four conventional approaches to WSD:
* Dictionary- and knowledge-based methods: These rely primarily on dictionaries, thesauri, and lexical knowledge bases, without using any corpus evidence.
* Semi-supervised or minimally supervised methods: These make use of a secondary source of knowledge such as a small annotated corpus as seed data in a bootstrapping process, or a word-aligned bilingual corpus.
* Supervised methods: These make use of sense-annotated corpora to train from.
* Unsupervised methods: These eschew (almost) completely external information and work directly from raw unannotated corpora. These methods are also known under the name of word sense discrimination.
Almost all these approaches work by defining a window of ''n'' content words around each word to be disambiguated in the corpus, and statistically analyzing those ''n'' surrounding words. Two shallow approaches used to train and then disambiguate are Naïve Bayes classifiers and decision trees. In recent research, kernel-based methods such as support vector machines have shown superior performance in supervised learning. Graph-based approaches have also gained much attention from the research community, and currently achieve performance close to the state of the art.
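
To illustrate the shallow, window-based setup, here is a minimal sketch of a Naïve Bayes classifier over bag-of-words context features using scikit-learn; the tiny sense-tagged training set for "bass" is invented for the example:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Toy sense-annotated contexts for the target word "bass" (invented examples).
    contexts = [
        "he caught a huge bass while fishing in the lake",
        "grilled bass with lemon is on the menu tonight",
        "she plays bass in a jazz band downtown",
        "turn up the bass on the amplifier",
    ]
    senses = ["fish", "fish", "music", "music"]

    # A bag of words over the context window stands in for the n surrounding words.
    classifier = make_pipeline(CountVectorizer(), MultinomialNB())
    classifier.fit(contexts, senses)

    print(classifier.predict(["the bass guitar solo was loud"]))  # prints ['music']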


Dictionary- and knowledge-based methods

The Lesk algorithm is the seminal dictionary-based method. It is based on the hypothesis that words used together in text are related to each other and that the relation can be observed in the definitions of the words and their senses. Two (or more) words are disambiguated by finding the pair of dictionary senses with the greatest word overlap in their dictionary definitions. For example, when disambiguating the words in "pine cone", the definitions of the appropriate senses both include the words evergreen and tree (at least in one dictionary). A similar approach searches for the shortest path between two words: the second word is iteratively searched among the definitions of every semantic variant of the first word, then among the definitions of every semantic variant of each word in the previous definitions and so on. Finally, the first word is disambiguated by selecting the semantic variant which minimizes the distance from the first to the second word. An alternative to the use of the definitions is to consider general word-sense relatedness and to compute the semantic similarity of each pair of word senses based on a given lexical knowledge base such as WordNet.
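
A minimal sketch of the simplified Lesk idea described above, scoring each WordNet sense of a target word by the overlap between its gloss and the surrounding context words (assuming NLTK and its WordNet data are available); NLTK also ships a ready-made variant as nltk.wsd.lesk:

    from nltk.corpus import wordnet as wn

    def simplified_lesk(target, context_words):
        """Pick the sense of `target` whose gloss overlaps the context the most."""
        best_sense, best_overlap = None, -1
        context = set(w.lower() for w in context_words)
        for sense in wn.synsets(target):
            gloss = set(sense.definition().lower().split())
            overlap = len(gloss & context)
            if overlap > best_overlap:
                best_sense, best_overlap = sense, overlap
        return best_sense

    # Picks the 'sloping land' sense of "bank", whose gloss shares words with the context.
    sense = simplified_lesk("bank", ["river", "sloping", "land", "water"])
    print(sense, "-", sense.definition() if sense else "no sense found")
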
Graph-based methods reminiscent of the spreading-activation research of the early days of AI have been applied with some success. More complex graph-based approaches have been shown to perform almost as well as supervised methods, or even to outperform them on specific domains. Recently, it has been reported that simple graph connectivity measures, such as degree, perform state-of-the-art WSD in the presence of a sufficiently rich lexical knowledge base. Also, automatically transferring knowledge in the form of semantic relations from Wikipedia to WordNet has been shown to boost simple knowledge-based methods, enabling them to rival the best supervised systems and even outperform them in a domain-specific setting. The use of selectional preferences (or selectional restrictions) is also useful: for example, knowing that one typically cooks food, one can disambiguate the word bass in "I am cooking basses" (i.e., it's not a musical instrument).
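
A rough sketch of a degree-based graph method under simplifying assumptions: candidate WordNet noun senses of the words in a context are linked whenever their path similarity exceeds a threshold, and the highest-degree candidate is chosen for each word. The similarity measure and threshold are illustrative choices, not a prescribed setting:

    import networkx as nx
    from nltk.corpus import wordnet as wn

    def degree_wsd(words, threshold=0.2):
        graph = nx.Graph()
        candidates = {w: wn.synsets(w, pos='n') for w in words}
        # Link senses of *different* words whose WordNet path similarity is high enough.
        for w1, senses1 in candidates.items():
            for w2, senses2 in candidates.items():
                if w1 >= w2:
                    continue
                for s1 in senses1:
                    for s2 in senses2:
                        sim = s1.path_similarity(s2)
                        if sim is not None and sim >= threshold:
                            graph.add_edge(s1, s2)
        # For each word, keep the candidate sense with the highest degree in the graph.
        return {w: max(senses, key=lambda s: graph.degree(s) if s in graph else 0)
                for w, senses in candidates.items() if senses}

    print(degree_wsd(["bank", "money", "loan"]))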


Supervised methods

Supervised methods are based on the assumption that the context can provide enough evidence on its own to disambiguate words (hence, common sense and reasoning are deemed unnecessary). Probably every machine learning algorithm going has been applied to WSD, including associated techniques such as feature selection, parameter optimization, and ensemble learning. Support vector machines and memory-based learning have been shown to be the most successful approaches to date, probably because they can cope with the high dimensionality of the feature space. However, these supervised methods are subject to a new knowledge acquisition bottleneck, since they rely on substantial amounts of manually sense-tagged corpora for training, which are laborious and expensive to create.
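
As a minimal sketch of the supervised setup, the following trains one linear SVM for a single target word on bag-of-words context features with scikit-learn; in a real system one such classifier would be trained per ambiguous word on a sense-annotated corpus, and the tiny training set here is invented:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC
    from sklearn.pipeline import make_pipeline

    # Invented sense-annotated training examples for the target word "bank".
    train_contexts = [
        "the boat drifted toward the bank of the river",
        "wild flowers grew along the grassy bank",
        "she opened a savings account at the bank",
        "the bank approved the mortgage application",
    ]
    train_senses = ["bank#shore", "bank#shore", "bank#finance", "bank#finance"]

    # One classifier per ambiguous word; features are the words around the target.
    model = make_pipeline(TfidfVectorizer(), LinearSVC())
    model.fit(train_contexts, train_senses)

    print(model.predict(["she transferred money from her bank account"]))  # ['bank#finance']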


Semi-supervised methods

Because of the lack of training data, many word sense disambiguation algorithms use semi-supervised learning, which allows both labeled and unlabeled data. The Yarowsky algorithm was an early example of such an algorithm. It uses the ‘one sense per collocation’ and the ‘one sense per discourse’ properties of human languages for word sense disambiguation. From observation, words tend to exhibit only one sense in a given discourse and in a given collocation. The bootstrapping approach starts from a small amount of seed data for each word: either manually tagged training examples or a small number of surefire decision rules (e.g., 'play' in the context of 'bass' almost always indicates the musical instrument). The seeds are used to train an initial classifier, using any supervised method. This classifier is then used on the untagged portion of the corpus to extract a larger training set, in which only the most confident classifications are included. The process repeats, each new classifier being trained on a successively larger training corpus, until the whole corpus is consumed, or until a given maximum number of iterations is reached. Other semi-supervised techniques use large quantities of untagged corpora to provide co-occurrence information that supplements the tagged corpora. These techniques have the potential to help in the adaptation of supervised models to different domains. Also, an ambiguous word in one language is often translated into different words in a second language depending on the sense of the word. Word-aligned bilingual corpora have been used to infer cross-lingual sense distinctions, a kind of semi-supervised system.
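
The bootstrapping loop described above can be sketched schematically as follows; the confidence threshold, the Naïve Bayes learner, and the seed examples are placeholder assumptions standing in for the choices a real Yarowsky-style system would make:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    def bootstrap(untagged, seed_contexts, seed_senses, confidence=0.9, max_iter=5):
        tagged_x, tagged_y = list(seed_contexts), list(seed_senses)
        pool = list(untagged)
        model = make_pipeline(CountVectorizer(), MultinomialNB())
        for _ in range(max_iter):
            model.fit(tagged_x, tagged_y)          # retrain on the growing tagged set
            if not pool:
                break
            probabilities = model.predict_proba(pool)
            remaining = []
            for text, probs in zip(pool, probabilities):
                if probs.max() >= confidence:
                    # Only the most confident classifications join the training set.
                    tagged_x.append(text)
                    tagged_y.append(model.predict([text])[0])
                else:
                    remaining.append(text)
            if len(remaining) == len(pool):        # no confident additions: stop early
                break
            pool = remaining
        return model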


Unsupervised methods

Unsupervised learning is the greatest challenge for WSD researchers. The underlying assumption is that similar senses occur in similar contexts, and thus senses can be induced from text by clustering word occurrences using some measure of similarity of context, a task referred to as word sense induction or discrimination. Then, new occurrences of the word can be classified into the closest induced clusters/senses. Performance has been lower than for the other methods described above, but comparisons are difficult since the senses induced must be mapped to a known dictionary of word senses. If a mapping to a set of dictionary senses is not desired, cluster-based evaluations (including measures of entropy and purity) can be performed. Alternatively, word sense induction methods can be tested and compared within an application. For instance, it has been shown that word sense induction improves Web search result clustering by increasing the quality of result clusters and the degree of diversification of result lists. It is hoped that unsupervised learning will overcome the knowledge acquisition bottleneck because it is not dependent on manual effort.
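
A minimal sketch of the clustering step behind word sense induction, grouping occurrences of an ambiguous word by their bag-of-words contexts with k-means; the toy sentences and the choice of two clusters are illustrative assumptions:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    # Toy occurrences of the ambiguous word "cell" (invented examples).
    occurrences = [
        "the prisoner spent years in a tiny cell",
        "guards locked the cell door at night",
        "the biologist examined the cell under a microscope",
        "each cell contains a nucleus and cytoplasm",
    ]

    vectors = TfidfVectorizer(stop_words="english").fit_transform(occurrences)
    clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

    # Each cluster is taken to be one induced sense of "cell".
    for sentence, cluster in zip(occurrences, clusters):
        print(cluster, sentence)
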
Representing words in their context through fixed-size dense vectors (word embeddings) has become one of the most fundamental blocks in several NLP systems. Even though most traditional word-embedding techniques conflate words with multiple meanings into a single vector representation, they can still be used to improve WSD. In addition to word-embedding techniques, lexical databases (e.g., WordNet, ConceptNet, BabelNet) can also assist unsupervised systems in mapping words and their senses as dictionaries. Some techniques that combine lexical databases and word embeddings are presented in AutoExtend and Most Suitable Sense Annotation (MSSA). In AutoExtend, they present a method that decouples an object input representation into its properties, such as words and their word senses. AutoExtend uses a graph structure to map word (e.g. text) and non-word (e.g. synsets in WordNet) objects as nodes and the relationships between nodes as edges. The relations (edges) in AutoExtend can express either the addition or the similarity between its nodes. The former captures the intuition behind the offset calculus, while the latter defines the similarity between two nodes. In MSSA, an unsupervised disambiguation system uses the similarity between word senses in a fixed context window to select the most suitable word sense using a pre-trained word-embedding model and WordNet. For each context window, MSSA calculates the centroid of each word sense definition by averaging the word vectors of its words in WordNet's glosses (i.e., a short defining gloss and one or more usage examples) using a pre-trained word-embeddings model. These centroids are later used to select the word sense with the highest similarity of a target word to its immediately adjacent neighbors (i.e., predecessor and successor words). After all words are annotated and disambiguated, they can be used as a training corpus in any standard word-embedding technique. In its improved version, MSSA can make use of word sense embeddings to repeat its disambiguation process iteratively.
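
The gloss-centroid step of MSSA-style systems can be sketched roughly as below, under the assumption that embedding is a hypothetical dictionary mapping words to pre-trained vectors (e.g. loaded from word2vec or GloVe); this illustrates only the averaging-and-similarity idea, not the full MSSA pipeline:

    import numpy as np
    from nltk.corpus import wordnet as wn

    def gloss_centroid(synset, embedding, dim=300):
        # Average the vectors of the words appearing in the synset's gloss.
        vecs = [embedding[w] for w in synset.definition().lower().split() if w in embedding]
        return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

    def context_centroid(words, embedding, dim=300):
        vecs = [embedding[w] for w in words if w in embedding]
        return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

    def cosine(a, b):
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / denom) if denom else 0.0

    def pick_sense(target, context_words, embedding):
        # Choose the sense whose gloss centroid is most similar to the context centroid.
        ctx = context_centroid(context_words, embedding)
        senses = wn.synsets(target)
        return max(senses, key=lambda s: cosine(gloss_centroid(s, embedding), ctx)) if senses else None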


Other approaches

Other approaches differ in their methods:
* Domain-driven disambiguation;
* Identification of dominant word senses;
* WSD using cross-lingual evidence;
* WSD solution in John Ball's language-independent NLU combining Patom Theory and RRG (Role and Reference Grammar);
* Type inference in constraint-based grammars.


Other languages

* Hindi: Lack of lexical resources in Hindi has hindered the performance of supervised models of WSD, while the unsupervised models suffer due to extensive morphology. A possible solution to this problem is the design of a WSD model by means of parallel corpora. The creation of the Hindi WordNet has paved the way for several supervised methods which have been proven to produce a higher accuracy in disambiguating nouns.


Local impediments and summary

The knowledge acquisition bottleneck is perhaps the major impediment to solving the WSD problem. Unsupervised methods rely on knowledge about word senses, which is only sparsely formulated in dictionaries and lexical databases. Supervised methods depend crucially on the existence of manually annotated examples for every word sense, a requisite that can so far be met only for a handful of words for testing purposes, as is done in the Senseval exercises. One of the most promising trends in WSD research is using the largest corpus ever accessible, the World Wide Web, to acquire lexical information automatically. WSD has been traditionally understood as an intermediate language engineering technology which could improve applications such as information retrieval (IR). In this case, however, the reverse is also true: web search engines implement simple and robust IR techniques that can successfully mine the Web for information to use in WSD. The historic lack of training data has provoked the appearance of some new algorithms and techniques, as described in Automatic acquisition of sense-tagged corpora.


External knowledge sources

Knowledge is a fundamental component of WSD. Knowledge sources provide data which are essential to associate senses with words. They can vary from corpora of texts, either unlabeled or annotated with word senses, to machine-readable dictionaries, thesauri, glossaries, ontologies, etc. They can be classified as follows:
Structured:
# Machine-readable dictionaries (MRDs)
# Ontologies
# Thesauri
Unstructured:
# Collocation resources
# Other resources (such as word frequency lists, stoplists, domain labels, etc.)
# Corpora: raw corpora and sense-annotated corpora


Evaluation

Comparing and evaluating different WSD systems is extremely difficult, because of the different test sets, sense inventories, and knowledge resources adopted. Before the organization of specific evaluation campaigns most systems were assessed on in-house, often small-scale, data sets. To test one's algorithm, developers had to spend their time annotating all word occurrences, and comparing methods even on the same corpus was not possible if different sense inventories were used. In order to define common evaluation datasets and procedures, public evaluation campaigns have been organized. Senseval (now renamed SemEval) is an international word sense disambiguation competition, held every three years since 1998: Senseval-1 (1998), Senseval-2 (2001), Senseval-3 (2004), and its successor SemEval (2007). The objective of the competition is to organize different lectures, prepare and hand-annotate corpora for testing systems, and perform a comparative evaluation of WSD systems in several kinds of tasks, including all-words and lexical sample WSD for different languages and, more recently, new tasks such as semantic role labeling, gloss WSD, lexical substitution, etc. The systems submitted for evaluation to these competitions usually integrate different techniques and often combine supervised and knowledge-based methods (especially for avoiding bad performance when training examples are lacking). In the years 2007-2012, the choice of WSD evaluation tasks grew, and the criteria for evaluating WSD changed drastically depending on the variant of the WSD evaluation task. The variety of WSD tasks is enumerated below:


Task design choices

As technology evolves, Word Sense Disambiguation (WSD) tasks grow in different flavors towards various research directions and for more languages:
* Classic monolingual WSD evaluation tasks use WordNet as the sense inventory and are largely based on supervised/semi-supervised classification with manually sense-annotated corpora:
** Classic English WSD uses the Princeton WordNet as its sense inventory, and the primary classification input is normally based on the SemCor corpus.
** Classical WSD for other languages uses their respective WordNets as sense inventories and sense-annotated corpora tagged in the respective languages. Often researchers will also tap the SemCor corpus and aligned bitexts with English as the source language.
* Cross-lingual WSD evaluation tasks focus on WSD across two or more languages simultaneously. Unlike the multilingual WSD tasks, instead of providing manually sense-annotated examples for each sense of a polysemous noun, the sense inventory is built up on the basis of parallel corpora, e.g. the Europarl corpus.
* Multilingual WSD evaluation tasks focus on WSD across two or more languages simultaneously, using their respective WordNets as sense inventories or BabelNet as a multilingual sense inventory. They evolved from the translation WSD evaluation tasks that took place in Senseval-2. A popular approach is to carry out monolingual WSD and then map the source-language senses into the corresponding target-word translations.
* The word sense induction and disambiguation task is a combined task evaluation where the sense inventory is first induced from fixed training set data, consisting of polysemous words and the sentences that they occurred in, and then WSD is performed on a different testing data set.


Software

* Babelfy, a unified state-of-the-art system for multilingual Word Sense Disambiguation and Entity Linking
* BabelNet API, a Java API for knowledge-based multilingual Word Sense Disambiguation in 6 different languages using the BabelNet semantic network
* WordNet::SenseRelate, a project that includes free, open source systems for word sense disambiguation and lexical sample sense disambiguation
* UKB: Graph Base WSD, a collection of programs for performing graph-based Word Sense Disambiguation and lexical similarity/relatedness using a pre-existing Lexical Knowledge Base
* pyWSD, Python implementations of Word Sense Disambiguation (WSD) technologies


See also

* Ambiguity
* Controlled natural language
* Entity linking
* Lesk algorithm
* Lexical substitution
* Part-of-speech tagging
* Polysemy
* SemEval
* Semantic unification
* Judicial interpretation
* Sentence boundary disambiguation
* Syntactic ambiguity
* Word sense
* Word sense induction


References


Works cited

* Agirre, E.; M. Stevenson. 2006. Knowledge sources for WSD. In Word Sense Disambiguation: Algorithms and Applications, E. Agirre and P. Edmonds, Eds. Springer, New York, NY.
* Buitelaar, P.; B. Magnini, C. Strapparava and P. Vossen. 2006. Domain-specific WSD. In Word Sense Disambiguation: Algorithms and Applications, E. Agirre and P. Edmonds, Eds. Springer, New York, NY.
* Chan, Y. S.; H. T. Ng. 2005. Scaling up word sense disambiguation via parallel texts. In Proceedings of the 20th National Conference on Artificial Intelligence (AAAI, Pittsburgh, PA).
* Edmonds, P. 2000. Designing a task for SENSEVAL-2. Tech. note. University of Brighton, Brighton, U.K.
* Gliozzo, A.; B. Magnini and C. Strapparava. 2004. Unsupervised domain relevance estimation for word sense disambiguation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing (EMNLP, Barcelona, Spain).
* Ide, N.; T. Erjavec, D. Tufis. 2002. Sense discrimination with parallel corpora. In Proceedings of the ACL Workshop on Word Sense Disambiguation: Recent Successes and Future Directions (Philadelphia, PA).
* Kilgarriff, A. 1997. I don't believe in word senses. Comput. Human. 31(2), pp. 91–113.
* Kilgarriff, A.; G. Grefenstette. 2003. Introduction to the special issue on the Web as corpus. Computational Linguistics 29(3), pp. 333–347.
* Kilgarriff, Adam; Joseph Rosenzweig. 2000. English Senseval: Report and Results. May–June 2000, University of Brighton.
* Lapata, M.; F. Keller. 2007. An information retrieval approach to sense ranking. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL, Rochester, NY).
* Lenat, D. Google TechTalks on YouTube (archived at Ghostarchive and the Wayback Machine).
* Lenat, D.; R. V. Guha. 1989. Building Large Knowledge-Based Systems. Addison-Wesley.
* Lesk, M. 1986. Automatic sense disambiguation using machine readable dictionaries: How to tell a pine cone from an ice cream cone. In Proc. of SIGDOC-86: 5th International Conference on Systems Documentation, Toronto, Canada.
* Litkowski, K. C. 2005. Computational lexicons and dictionaries. In Encyclopaedia of Language and Linguistics (2nd ed.), K. R. Brown, Ed. Elsevier Publishers, Oxford, U.K.
* Magnini, B.; G. Cavaglià. 2000. Integrating subject field codes into WordNet. In Proceedings of the 2nd Conference on Language Resources and Evaluation (LREC, Athens, Greece).
* McCarthy, D.; R. Koeling, J. Weeds, J. Carroll. 2007. Unsupervised acquisition of predominant word senses. Computational Linguistics 33(4): 553–590.
* McCarthy, D.; R. Navigli. 2009. The English Lexical Substitution Task. Language Resources and Evaluation, 43(2), Springer.
* Mihalcea, R. 2007. Using Wikipedia for Automatic Word Sense Disambiguation. In Proc. of the North American Chapter of the Association for Computational Linguistics (NAACL 2007), Rochester, April 2007.
* Mohammad, S.; G. Hirst. 2006. Determining word sense dominance using a thesaurus. In Proceedings of the 11th Conference of the European Chapter of the Association for Computational Linguistics (EACL, Trento, Italy).
* Navigli, R. 2006. Meaningful Clustering of Senses Helps Boost Word Sense Disambiguation Performance. Proc. of the 44th Annual Meeting of the Association for Computational Linguistics joint with the 21st International Conference on Computational Linguistics (COLING-ACL 2006), Sydney, Australia.
* Navigli, R.; A. Di Marco. 2013. Clustering and Diversifying Web Search Results with Graph-Based Word Sense Induction. Computational Linguistics, 39(3), MIT Press, pp. 709–754.
* Navigli, R.; G. Crisafulli. 2010. Inducing Word Senses to Improve Web Search Result Clustering. Proc. of the 2010 Conference on Empirical Methods in Natural Language Processing (EMNLP 2010), MIT Stata Center, Massachusetts, USA.
* Navigli, R.; M. Lapata. 2010. An Experimental Study of Graph Connectivity for Unsupervised Word Sense Disambiguation. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 32(4), IEEE Press.
* Navigli, R.; K. Litkowski, O. Hargraves. 2007. SemEval-2007 Task 07: Coarse-Grained English All-Words Task. Proc. of the Semeval-2007 Workshop (SemEval), in the 45th Annual Meeting of the Association for Computational Linguistics (ACL 2007), Prague, Czech Republic.
* Navigli, R.; P. Velardi. 2005. Structural Semantic Interconnections: a Knowledge-Based Approach to Word Sense Disambiguation. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 27(7).
* Palmer, M.; O. Babko-Malaya and H. T. Dang. 2004. Different sense granularities for different applications. In Proceedings of the 2nd Workshop on Scalable Natural Language Understanding Systems in HLT/NAACL (Boston, MA).
* Ponzetto, S. P.; R. Navigli. 2010. Knowledge-rich Word Sense Disambiguation rivaling supervised systems. In Proc. of the 48th Annual Meeting of the Association for Computational Linguistics (ACL).
* Pradhan, S.; E. Loper, D. Dligach, M. Palmer. 2007. SemEval-2007 Task 17: English lexical sample, SRL and all words. Proc. of the Semeval-2007 Workshop (SemEval), in the 45th Annual Meeting of the Association for Computational Linguistics (ACL 2007), Prague, Czech Republic.
* Schütze, H. 1998. Automatic word sense discrimination. Computational Linguistics, 24(1): 97–123.
* Snow, R.; S. Prakash, D. Jurafsky, A. Y. Ng. 2007. Learning to Merge Word Senses. Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL).
* Snyder, B.; M. Palmer. 2004. The English all-words task. In Proc. of the 3rd International Workshop on the Evaluation of Systems for the Semantic Analysis of Text (Senseval-3), Barcelona, Spain.
* Wilks, Y.; B. Slator, L. Guthrie. 1996. Electric Words: dictionaries, computers and meanings. Cambridge, MA: MIT Press.
* Yarowsky, D. 1992. Word-sense disambiguation using statistical models of Roget's categories trained on large corpora. In Proceedings of COLING-92, Nantes, France.