In the fields of computational linguistics and probability, an ''n''-gram (sometimes also called a Q-gram) is a contiguous sequence of ''n'' items from a given sample of text or speech. The items can be phonemes, syllables, letters, words or base pairs according to the application. The ''n''-grams typically are collected from a text or speech corpus. When the items are words, ''n''-grams may also be called ''shingles''. Using Latin numerical prefixes, an ''n''-gram of size 1 is referred to as a "unigram"; size 2 is a "bigram" (or, less commonly, a "digram"); size 3 is a "trigram". English cardinal numbers are sometimes used, e.g., "four-gram", "five-gram", and so on. In computational biology, a polymer or oligomer of a known size is called a ''k''-mer instead of an ''n''-gram, with specific names using Greek numerical prefixes such as "monomer", "dimer", "trimer", "tetramer", "pentamer", etc., or English cardinal numbers, "one-mer", "two-mer", "three-mer", etc.


Applications

An ''n''-gram model is a type of probabilistic language model for predicting the next item in such a sequence in the form of an (''n'' − 1)-order Markov model. ''n''-gram models are now widely used in probability, communication theory, computational linguistics (for instance, statistical natural language processing), computational biology (for instance, biological sequence analysis), and data compression. Two benefits of ''n''-gram models (and algorithms that use them) are simplicity and scalability: with larger ''n'', a model can store more context with a well-understood space–time tradeoff, enabling small experiments to scale up efficiently.


Examples

Figure 1 shows several example sequences and the corresponding 1-gram, 2-gram and 3-gram sequences. Here are further examples; these are word-level 3-grams and 4-grams (and counts of the number of times they appeared) from the Google ''n''-gram corpus.

3-grams
* ceramics collectables collectibles (55)
* ceramics collectables fine (130)
* ceramics collected by (52)
* ceramics collectible pottery (50)
* ceramics collectibles cooking (45)

4-grams
* serve as the incoming (92)
* serve as the incubator (99)
* serve as the independent (794)
* serve as the index (223)
* serve as the indication (72)
* serve as the indicator (120)
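Extracting the ''n''-grams of a sequence is a one-line sliding window. A minimal sketch (the `ngrams` helper name is our own, not from any particular library):

```python
def ngrams(sequence, n):
    """Return all contiguous n-grams of a sequence as tuples."""
    return [tuple(sequence[i:i + n]) for i in range(len(sequence) - n + 1)]

# word-level 3-grams, as in the Google corpus examples above
words = "serve as the index".split()
print(ngrams(words, 3))
# [('serve', 'as', 'the'), ('as', 'the', 'index')]
```

The same function works for characters, words, or any other item type, since it only relies on slicing.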


''n''-gram models

An ''n''-gram model models sequences, notably natural languages, using the statistical properties of ''n''-grams. This idea can be traced to Claude Shannon's work in information theory. Shannon posed the question: given a sequence of letters (for example, the sequence "for ex"), what is the likelihood of the next letter? From training data, one can derive a probability distribution for the next letter given a history of size ''n'': ''a'' = 0.4, ''b'' = 0.00001, ''c'' = 0, ...; where the probabilities of all possible "next letters" sum to 1.0. More concisely, an ''n''-gram model predicts x_i based on x_{i-(n-1)}, \dots, x_{i-1}. In probability terms, this is P(x_i \mid x_{i-(n-1)}, \dots, x_{i-1}). When used for language modeling, independence assumptions are made so that each word depends only on the last ''n'' − 1 words. This Markov model is used as an approximation of the true underlying language. This assumption is important because it massively simplifies the problem of estimating the language model from data. In addition, because of the open nature of language, it is common to group words unknown to the language model together. Note that in a simple ''n''-gram language model, the probability of a word, conditioned on some number of previous words (one word in a bigram model, two words in a trigram model, etc.), can be described as following a categorical distribution (often imprecisely called a "multinomial distribution"). In practice, the probability distributions are smoothed by assigning non-zero probabilities to unseen words or ''n''-grams; see smoothing techniques.
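Shannon's next-letter question can be answered by maximum-likelihood estimation over observed ''n''-grams. A minimal sketch (function name and the toy training text are our own):

```python
from collections import Counter, defaultdict

def train_char_model(text, n):
    """Estimate P(next char | previous n-1 chars) by maximum likelihood."""
    counts = defaultdict(Counter)
    for i in range(len(text) - n + 1):
        history, nxt = text[i:i + n - 1], text[i + n - 1]
        counts[history][nxt] += 1
    # normalize each history's counts into a probability distribution
    return {h: {c: k / sum(ctr.values()) for c, k in ctr.items()}
            for h, ctr in counts.items()}

model = train_char_model("for example, for every", 3)
print(model["fo"])  # {'r': 1.0}: in this tiny text, 'r' always follows "fo"
```

For each history, the resulting probabilities sum to 1.0, exactly as in the ''a'' = 0.4, ''b'' = 0.00001, ... distribution described above; what is missing from this unsmoothed estimate is any probability mass for unseen continuations.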


Applications and considerations

''n''-gram models are widely used in statistical natural language processing. In speech recognition, phonemes and sequences of phonemes are modeled using an ''n''-gram distribution. For parsing, words are modeled such that each ''n''-gram is composed of ''n'' words. For language identification, sequences of characters/graphemes (''e.g.'', letters of the alphabet) are modeled for different languages. For sequences of characters, the 3-grams (sometimes referred to as "trigrams") that can be generated from "good morning" are "goo", "ood", "od ", "d m", " mo", "mor" and so forth, counting the space character as a gram (sometimes the beginning and end of a text are modeled explicitly, adding "__g", "_go", "ng_", and "g__"). For sequences of words, the trigrams (shingles) that can be generated from "the dog smelled like a skunk" are "# the dog", "the dog smelled", "dog smelled like", "smelled like a", "like a skunk" and "a skunk #". Practitioners more interested in multiple-word terms might preprocess strings to remove spaces. Many simply collapse whitespace to a single space while preserving paragraph marks, because whitespace is frequently either an element of writing style or introduces layout or presentation not required by the prediction and deduction methodology. Punctuation is also commonly reduced or removed by preprocessing and is frequently used to trigger functionality.

''n''-grams can also be used for sequences of words or almost any type of data. For example, they have been used for extracting features for clustering large sets of satellite earth images and for determining what part of the Earth a particular image came from. They have also been very successful as the first pass in genetic sequence search and in the identification of the species from which short sequences of DNA originated.

''n''-gram models are often criticized because they lack any explicit representation of long-range dependency. This is because the only explicit dependency range is (''n'' − 1) tokens for an ''n''-gram model, and since natural languages incorporate many cases of unbounded dependencies (such as wh-movement), an ''n''-gram model cannot in principle distinguish unbounded dependencies from noise (since long-range correlations drop exponentially with distance for any Markov model). For this reason, ''n''-gram models have not made much impact on linguistic theory, where part of the explicit goal is to model such dependencies. Another criticism is that Markov models of language, including ''n''-gram models, do not explicitly capture the performance/competence distinction. This is because ''n''-gram models are not designed to model linguistic knowledge as such, and make no claims to being (even potentially) complete models of linguistic knowledge; instead, they are used in practical applications. In practice, ''n''-gram models have been shown to be extremely effective in modeling language data, which is a core component in modern statistical language applications.
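The "good morning" character trigrams with explicit start/end modeling can be generated by padding the string before windowing. A minimal sketch (function name is our own):

```python
def padded_char_trigrams(text, pad="_"):
    """Character 3-grams with explicit start/end padding: '__g', '_go', ..., 'ng_', 'g__'."""
    padded = pad * 2 + text + pad * 2   # two pad chars on each side for trigrams
    return [padded[i:i + 3] for i in range(len(padded) - 2)]

grams = padded_char_trigrams("good morning")
print(grams[:4])   # ['__g', '_go', 'goo', 'ood']
print(grams[-2:])  # ['ng_', 'g__']
```

For an ''n''-gram of general size, the padding width would be ''n'' − 1 on each side; boundary grams let a model learn which characters tend to start or end a text.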
Most modern applications that rely on ''n''-gram based models, such as machine translation applications, do not rely exclusively on such models; instead, they typically also incorporate Bayesian inference. Modern statistical models are typically made up of two parts: a prior distribution describing the inherent likelihood of a possible result, and a likelihood function used to assess the compatibility of a possible result with observed data. When a language model is used, it is used as part of the prior distribution (e.g. to gauge the inherent "goodness" of a possible translation), and even then it is often not the only component in this distribution. Handcrafted features of various sorts are also used, for example variables that represent the position of a word in a sentence or the general topic of discourse. In addition, features based on the structure of the potential result, such as syntactic considerations, are often used. Such features are also used as part of the likelihood function, which makes use of the observed data. Conventional linguistic theory can be incorporated in these features (although in practice it is rare that features specific to generative or other particular theories of grammar are incorporated, as computational linguists tend to be "agnostic" towards individual theories of grammar).


Out-of-vocabulary words

An issue when using ''n''-gram language models is out-of-vocabulary (OOV) words. They are encountered in computational linguistics and natural language processing when the input includes words which were not present in a system's dictionary or database during its preparation. By default, when a language model is estimated, the entire observed vocabulary is used. In some cases, it may be necessary to estimate the language model with a specific fixed vocabulary. In such a scenario, the ''n''-grams in the corpus that contain an out-of-vocabulary word are ignored. The ''n''-gram probabilities are smoothed over all the words in the vocabulary even if they were not observed. Nonetheless, it is essential in some cases to explicitly model the probability of out-of-vocabulary words by introducing a special token (e.g. <unk>) into the vocabulary. Out-of-vocabulary words in the corpus are effectively replaced with this special token before ''n''-gram counts are accumulated. With this option, it is possible to estimate the transition probabilities of ''n''-grams involving out-of-vocabulary words.
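The token-replacement step described above is a simple preprocessing pass. A minimal sketch (function name and the toy vocabulary are our own; the `<unk>` token label is a common convention):

```python
def replace_oov(tokens, vocab, unk="<unk>"):
    """Map words outside a fixed vocabulary to a special token before counting n-grams."""
    return [t if t in vocab else unk for t in tokens]

vocab = {"the", "dog", "barked"}
tokens = "the dog barked at the mailman".split()
print(replace_oov(tokens, vocab))
# ['the', 'dog', 'barked', '<unk>', 'the', '<unk>']
```

After this pass, ''n''-gram counting proceeds as usual, and transitions into and out of `<unk>` get genuine probability estimates rather than being discarded.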


''n''-grams for approximate matching

''n''-grams can also be used for efficient approximate matching. By converting a sequence of items to a set of ''n''-grams, it can be embedded in a vector space, thus allowing the sequence to be compared to other sequences in an efficient manner. For example, if we convert strings containing only letters of the English alphabet into single-character 3-grams, we get a 26^3-dimensional space (the first dimension measures the number of occurrences of "aaa", the second "aab", and so forth for all possible combinations of three letters). Using this representation, we lose information about the string. For example, both the strings "abc" and "bca" give rise to exactly the same 2-gram "bc" (although "abc" is clearly not the same as "bca"). However, we know empirically that if two strings of real text have a similar vector representation (as measured by cosine distance) then they are likely to be similar. Other metrics have also been applied to vectors of ''n''-grams with varying, sometimes better, results. For example, z-scores have been used to compare documents by examining how many standard deviations each ''n''-gram differs from its mean occurrence in a large collection, or text corpus, of documents (which form the "background" vector). In the event of small counts, the g-score (also known as g-test) may give better results for comparing alternative models. It is also possible to take a more principled approach to the statistics of ''n''-grams, modeling similarity as the likelihood that two strings came from the same source, directly as a problem in Bayesian inference. ''n''-gram-based searching can also be used for plagiarism detection.
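The vector-space comparison above can be sketched with sparse count vectors and cosine similarity (function names are our own; a real implementation would use a vector library):

```python
from collections import Counter
from math import sqrt

def ngram_vector(s, n=3):
    """Bag of character n-grams of a string, as a sparse count vector."""
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def cosine(u, v):
    """Cosine similarity of two sparse count vectors."""
    dot = sum(u[g] * v[g] for g in u if g in v)
    norm = lambda w: sqrt(sum(c * c for c in w.values()))
    return dot / (norm(u) * norm(v)) if u and v else 0.0

a = ngram_vector("good morning")
b = ngram_vector("god morning")
print(round(cosine(a, b), 3))  # high: the two strings share most of their trigrams
```

Identical strings score 1.0, and a one-character typo still scores well above unrelated text, which is why this embedding is useful for approximate matching.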


Other applications

''n''-grams find use in several areas of computer science, computational linguistics, and applied mathematics. They have been used to:
* design kernels that allow machine learning algorithms such as support vector machines to learn from string data
* find likely candidates for the correct spelling of a misspelled word
* improve compression in compression algorithms where a small area of data requires ''n''-grams of greater length
* assess the probability of a given word sequence appearing in text of a language of interest in pattern recognition systems, speech recognition, OCR (optical character recognition), Intelligent Character Recognition (ICR), machine translation and similar applications
* improve retrieval in information retrieval systems when it is hoped to find similar "documents" (a term for which the conventional meaning is sometimes stretched, depending on the data set) given a single query document and a database of reference documents
* improve retrieval performance in genetic sequence analysis as in the BLAST family of programs
* identify the language a text is in or the species a small sequence of DNA was taken from
* predict letters or words at random in order to create text, as in the dissociated press algorithm
* perform cryptanalysis


Space required for an ''n''-gram

Consider an ''n''-gram where the units are characters and a text with ''t'' characters, where n, t \in \mathbb{N}. There are t - n + 1 such strings, each requiring n units of space. Thus, the total space required for this ''n''-gram decomposition is (t-n+1) \cdot n, which simplifies to: -n^2 + (t+1)n
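As a worked check of this formula (function name is our own), for a text of t = 1000 characters and n = 3:

```python
def ngram_space(t, n):
    """Total units of space for all character n-grams of a t-character text."""
    count = t - n + 1          # number of n-grams in the text
    return count * n           # each n-gram occupies n units: (t - n + 1) * n

# the expanded product agrees with the simplified form -n^2 + (t+1)n
assert ngram_space(1000, 3) == -3**2 + (1000 + 1) * 3
print(ngram_space(1000, 3))  # 2994
```

In practice shared-substring representations (e.g. suffix structures) can reduce this, but the quadratic-in-''n'' expression is the cost of storing each ''n''-gram independently.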


Bias-versus-variance trade-off

To choose a value for ''n'' in an ''n''-gram model, it is necessary to find the right trade-off between the stability of the estimate and its appropriateness: a trigram model (i.e. triplets of words) is a common choice with large training corpora (millions of words), whereas a bigram model is often used with smaller ones.


Smoothing techniques

There is a problem of balancing weight between ''infrequent grams'' (for example, if a proper name appeared in the training data) and ''frequent grams''. Also, items not seen in the training data will be given a probability of 0.0 without smoothing. For unseen but plausible data from a sample, one can introduce pseudocounts. Pseudocounts are generally motivated on Bayesian grounds. In practice it is necessary to ''smooth'' the probability distributions by also assigning non-zero probabilities to unseen words or ''n''-grams. The reason is that models derived directly from the ''n''-gram frequency counts have severe problems when confronted with any ''n''-grams that have not explicitly been seen before (the zero-frequency problem). Various smoothing methods are used, from simple "add-one" (Laplace) smoothing (assign a count of 1 to unseen ''n''-grams; see Rule of succession) to more sophisticated models, such as Good–Turing discounting or back-off models. Some of these methods are equivalent to assigning a prior distribution to the probabilities of the ''n''-grams and using Bayesian inference to compute the resulting posterior ''n''-gram probabilities. However, the more sophisticated smoothing models were typically not derived in this fashion, but instead through independent considerations. Commonly used methods include:
* Linear interpolation (e.g., taking the weighted mean of the unigram, bigram, and trigram)
* Good–Turing discounting
* Witten–Bell discounting
* Lidstone's smoothing
* Katz's back-off model (trigram)
* Kneser–Ney smoothing
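Add-one (Laplace) smoothing, the simplest of these methods, can be sketched for a bigram model as follows (function name and toy corpus are our own):

```python
from collections import Counter

def laplace_bigram_prob(bigram_counts, unigram_counts, vocab_size, w1, w2):
    """Add-one smoothed P(w2 | w1) = (c(w1 w2) + 1) / (c(w1) + |V|)."""
    return (bigram_counts[(w1, w2)] + 1) / (unigram_counts[w1] + vocab_size)

tokens = "the dog smelled like a skunk the dog ran".split()
uni = Counter(tokens)
bi = Counter(zip(tokens, tokens[1:]))
V = len(uni)  # vocabulary size

print(laplace_bigram_prob(bi, uni, V, "the", "dog"))    # seen bigram: relatively high
print(laplace_bigram_prob(bi, uni, V, "the", "skunk"))  # unseen bigram: small but non-zero
```

Adding 1 to every count guarantees no zero probabilities, at the cost of shifting a fairly large amount of mass to unseen events, which is why the more sophisticated discounting methods listed above usually perform better.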


Skip-gram

In the field of computational linguistics, in particular language modeling, skip-grams are a generalization of ''n''-grams in which the components (typically words) need not be consecutive in the text under consideration, but may leave gaps that are ''skipped'' over. They provide one way of overcoming the data sparsity problem found with conventional ''n''-gram analysis. In the area of computer security, skip-grams have proven more robust to attack than ''n''-grams. Formally, an ''n''-gram is a consecutive subsequence of length ''n'' of some sequence of tokens w_1, \dots, w_m. A ''k''-skip-''n''-gram is a length-''n'' subsequence whose components occur at distance at most ''k'' from each other. For example, in the input text:
:''the rain in Spain falls mainly on the plain''
the set of 1-skip-2-grams includes all the bigrams (2-grams), and in addition the subsequences
:''the in'', ''rain Spain'', ''in falls'', ''Spain mainly'', ''falls on'', ''mainly the'', and ''on plain''.
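The 1-skip-2-grams above can be enumerated with a nested loop over positions (function name is our own):

```python
def skip_bigrams(tokens, k):
    """All k-skip-2-grams: ordered pairs of tokens at positions differing by at most k+1."""
    return [(tokens[i], tokens[j])
            for i in range(len(tokens))
            for j in range(i + 1, min(i + k + 2, len(tokens)))]

words = "the rain in Spain falls".split()
print(skip_bigrams(words, 1))
# ordinary bigrams plus pairs that skip one word, e.g. ('the', 'in'), ('rain', 'Spain')
```

With k = 0 this reduces to the ordinary bigrams; larger k trades more training pairs (less sparsity) for weaker locality.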


Syntactic ''n''-grams

Syntactic ''n''-grams are ''n''-grams defined by paths in syntactic dependency or constituent trees rather than the linear structure of the text. For example, the sentence "economic news has little effect on financial markets" can be transformed to syntactic ''n''-grams following the tree structure of its dependency relations: news-economic, effect-little, effect-on-markets-financial. Syntactic ''n''-grams are intended to reflect syntactic structure more faithfully than linear ''n''-grams, and have many of the same applications, especially as features in a vector space model. For certain tasks, such as authorship attribution, syntactic ''n''-grams give better results than standard ''n''-grams. Another type of syntactic ''n''-gram is the part-of-speech ''n''-gram, defined as a fixed-length contiguous overlapping subsequence extracted from the part-of-speech sequence of a text. Part-of-speech ''n''-grams have several applications, most commonly in information retrieval.


See also

* Collocation
* Hidden Markov model
* ''n''-tuple
* String kernel
* MinHash
* Feature extraction
* Longest common substring problem


References


Further reading

* Christopher D. Manning, Hinrich Schütze, ''Foundations of Statistical Natural Language Processing'', MIT Press: 1999.
* Frederick J. Damerau, ''Markov Models and Linguistic Theory''. Mouton. The Hague, 1971.


External links


* Google's Google Books ''n''-gram viewer (September 2006)
* Microsoft's web ''n''-grams service
* STATOPERATOR N-grams Project: weighted ''n''-gram viewer for every domain in Alexa Top 1M
* 1,000,000 most frequent 2-, 3-, 4- and 5-grams from the 425 million word Corpus of Contemporary American English
* Peachnote's music ngram viewer
* Stochastic Language Models (''n''-Gram) Specification (W3C)
* Michael Collins's notes on ''n''-gram language models
* OpenRefine: Clustering In Depth