Document-term matrix

A document-term matrix is a mathematical matrix that describes the frequency of terms that occur in a collection of documents. In a document-term matrix, rows correspond to documents in the collection and columns correspond to terms. This matrix is a specific instance of a document-feature matrix, where "features" may refer to properties of a document other than terms. It is also common to encounter the transpose, or term-document matrix, where documents are the columns and terms are the rows. Such matrices are useful in the field of natural language processing and computational text analysis. While the value of the cells is commonly the raw count of a given term, there are various schemes for weighting the raw counts, such as row normalizing (i.e. relative frequency/proportions) and tf-idf. Terms are commonly single words separated by whitespace or punctuation on either side (a.k.a. unigrams). In that case, this is also referred to as a "bag of words" representation, because the counts of individual words are retained, but not the order of the words in the document.


General concept

When creating a data set of terms that appear in a corpus of documents, the document-term matrix contains rows corresponding to the documents and columns corresponding to the terms. Each ''ij'' cell, then, is the number of times term ''j'' occurs in document ''i''. As such, each row is a vector of term counts that represents the content of the document corresponding to that row. For instance, if one has the following two (short) documents:
*D1 = "I like databases"
*D2 = "I dislike databases"
then the document-term matrix would be:

          I    like    dislike    databases
    D1    1     1         0           1
    D2    1     0         1           1

which shows which documents contain which terms and how many times they appear. Note that, unlike a representation of a document as just a token-count list, the document-term matrix includes all terms in the corpus (i.e. the corpus vocabulary), which is why there are zero counts for terms in the corpus which do not occur in a specific document. As a result of the power-law distribution of tokens in nearly every corpus (see Zipf's law), it is common to weight the counts. This can be as simple as dividing counts by the total number of tokens in a document (called relative frequency or proportions), dividing by the maximum frequency in each document (called prop max), or taking the log of frequencies (called log count). If one desires to weight most heavily the words most unique to an individual document as compared to the corpus as a whole, it is common to use tf-idf, which divides the term frequency by the term's document frequency.
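
The construction and weighting steps above can be sketched in a few lines of Python. This is an illustrative sketch using NumPy, not a canonical implementation; the tf-idf variant shown (count times log of inverse document frequency) is one of several in common use.

    import numpy as np

    docs = ["I like databases", "I dislike databases"]
    tokens = [d.lower().split() for d in docs]              # unigram tokenization
    vocab = sorted(set(w for doc in tokens for w in doc))   # corpus vocabulary
    index = {w: j for j, w in enumerate(vocab)}

    # Raw counts: rows = documents, columns = terms.
    dtm = np.zeros((len(docs), len(vocab)))
    for i, doc in enumerate(tokens):
        for w in doc:
            dtm[i, index[w]] += 1

    rel_freq = dtm / dtm.sum(axis=1, keepdims=True)   # relative frequency
    log_count = np.log1p(dtm)                         # log-count weighting
    df = (dtm > 0).sum(axis=0)                        # document frequency per term
    tfidf = dtm * np.log(len(docs) / df)              # a simple tf-idf variant

    print(vocab)   # ['databases', 'dislike', 'i', 'like']
    print(dtm)     # [[1. 0. 1. 1.]
                   #  [1. 1. 1. 0.]]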


History of the concept

The document-term matrix emerged in the earliest years of the computerization of text. The increasing capacity for storing documents created the problem of retrieving a given document in an efficient manner. While previously the work of classifying and indexing was accomplished by hand, researchers explored the possibility of doing this automatically using word frequency information. One of the first published document-term matrices appeared in Harold Borko's 1962 article "The construction of an empirically based mathematically derived classification system" (page 282, see also his 1965 article). Borko references two computer programs: "FEAT," which stood for "Frequency of Every Allowable Term" and was written by John C. Olney of the System Development Corporation, and the Descriptor Word Index Program, written by Eileen Stone, also of the System Development Corporation:
Having selected the documents which were to make up the experimental library, the next step consisted of keypunching the entire body of text preparatory to computer processing. The program used for this analysis was FEAT (Frequency of Every Allowable Term). It was written by John C. Olney of the System Development Corporation and is designed to perform frequency and summary counts of individual words and of word pairs. The output of this program is an alphabetical listing, by frequency of occurrence, of all word types which appeared in the text. Certain function words such as and, the, at, a, etc., were placed in a "forbidden word list" table, and the frequency of these words was recorded in a separate listing... A special computer program, called the Descriptor Word Index Program, was written to provide this information and to prepare a document-term matrix in a form suitable for input to the Factor Analysis Program. The Descriptor Word Index program was prepared by Eileen Stone of the System Development Corporation.
Shortly thereafter, Gerard Salton published "Some hierarchical models for automatic document retrieval" in 1963, which also included a visual depiction of a document-term matrix. Salton was at Harvard University at the time, and his work was supported by the Air Force Cambridge Research Laboratories and Sylvania Electric Products, Inc. In this paper, Salton introduces the document-term matrix by comparison to a kind of term-context matrix used to measure similarities between words:
If it is desired to generate document associations or document clusters instead of word associations, the same procedures can be used with slight modifications. Instead of starting with a word-sentence matrix ''C'',... it is now convenient to construct a word-document matrix ''F'', listing frequency of occurrence of word ''Wi'' in document ''Dj''... Document similarities can now be computed as before by comparing pairs of rows and by obtaining similarity coefficients based on the frequency of co-occurrences of the content words included in the given document. This procedure produces a document-document similarity matrix which can in turn be used for the generation of document clusters...
In addition to Borko and Salton, in 1964, F.W. Lancaster published a comprehensive review of automated indexing and retrieval. The work was published while he worked at Herner and Company in Washington D.C., although the paper was written while he was "employed in research work at Aslib, on the Aslib Cranfield Project." Lancaster credits Borko with the document-term matrix:
Harold Borko, of the System Development Corporation, has carried this operation a little further. A significant group of clue words is chosen from the vocabulary of an experimental collection. These are arranged in a document/term matrix to show the frequency of occurrence of each term in each document.... A correlation coefficient for each word pair is then computed, based on their co-occurrence in the document set. The resulting term/term matrix... is then factor analysed and a series of factors are isolated. These factors, when interpreted and named on the basis of the terms with high loadings which appear in each of the factors, become the classes of an empirical classification. The terms with high loadings in each factor are the clue words or predictors of the categories.


Choice of terms

One point of view on the matrix is that each row represents a document. In the vectorial semantic model, which is normally the one used to compute a document-term matrix, the goal is to represent the topic of a document by the frequency of semantically significant terms. The terms are semantic units of the documents. It is often assumed, for Indo-European languages, that nouns, verbs and adjectives are the more significant categories, and that words from those categories should be kept as terms. Adding collocations as terms improves the quality of the vectors, especially when computing similarities between documents.
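
As a hedged illustration (not from the original text), collocations can be found by scoring bigrams with pointwise mutual information (PMI) and merging high-scoring pairs into single terms before building the matrix. The corpus, minimum count, and threshold below are illustrative assumptions.

    import math
    from collections import Counter

    # Toy corpus (illustrative): each sentence is a list of tokens.
    sentences = [["new", "york", "is", "large"],
                 ["new", "york", "has", "databases"]]
    unigrams = Counter(w for s in sentences for w in s)
    bigrams = Counter(b for s in sentences for b in zip(s, s[1:]))
    n_uni = sum(unigrams.values())
    n_bi = sum(bigrams.values())

    def pmi(x, y):
        # log p(x, y) / (p(x) p(y)): high when a pair co-occurs more than chance.
        p_xy = bigrams[(x, y)] / n_bi
        return math.log(p_xy / ((unigrams[x] / n_uni) * (unigrams[y] / n_uni)))

    # Keep bigrams seen at least twice with PMI above an (assumed) threshold;
    # these would be merged, e.g. ("new", "york") -> "new_york", before counting.
    collocations = [b for b in bigrams if bigrams[b] >= 2 and pmi(*b) > 1.0]
    print(collocations)   # [('new', 'york')]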


Applications


Improving search results

Latent semantic analysis (LSA, performing singular-value decomposition on the document-term matrix) can improve search results by disambiguating polysemous words and searching for synonyms of the query. However, searching in the high-dimensional continuous space is much slower than searching the standard trie data structure of search engines.
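
A minimal sketch of this idea, using NumPy's SVD on a toy term-document matrix; the data, the choice of ''k'', and the query are illustrative assumptions, while the fold-in formula follows standard LSA practice.

    import numpy as np

    # Toy term-document matrix: rows = terms, columns = documents.
    A = np.array([[1.0, 1.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 1.0, 1.0],
                  [0.0, 0.0, 1.0]])

    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = 2                                # keep the top-k latent "concepts"
    docs_k = Vt[:k].T                    # each row: a document in concept space

    q = np.array([1.0, 1.0, 0.0, 0.0])   # query as a term-count vector
    q_k = (q @ U[:, :k]) / s[:k]         # standard fold-in: Sigma_k^-1 U_k^T q

    # Rank documents by cosine similarity to the folded-in query.
    sims = docs_k @ q_k / (np.linalg.norm(docs_k, axis=1) * np.linalg.norm(q_k))
    print(np.argsort(sims)[::-1])        # document indices, best match first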


Finding topics

Multivariate analysis of the document-term matrix can reveal topics/themes of the corpus. Specifically, latent semantic analysis and data clustering can be used, and, more recently, probabilistic latent semantic analysis with its generalization latent Dirichlet allocation, and non-negative matrix factorization, have been found to perform well for this task.
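
For instance, here is a bare-bones sketch of topic finding via non-negative matrix factorization, using the classic multiplicative updates of Lee and Seung; the random toy data and the number of topics are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    V = rng.random((6, 8))      # toy document-term matrix: 6 documents, 8 terms
    k = 2                       # number of topics to extract
    W = rng.random((6, k))      # document-topic weights
    H = rng.random((k, 8))      # topic-term weights

    # Multiplicative updates minimize ||V - WH|| while keeping W, H non-negative.
    for _ in range(200):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-10)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-10)

    # Each row of H is a topic; its largest entries index the topic's top terms.
    top_terms = np.argsort(H, axis=1)[:, ::-1][:, :3]
    print(top_terms)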


See also

* Bag-of-words model


Implementations


Gensim
Open-source Python framework for vector space modelling. Contains memory-efficient algorithms for constructing term-document matrices from text, plus common transformations (tf-idf, LSA, LDA).
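
A short usage sketch, assuming a recent Gensim version; the toy corpus is illustrative.

    from gensim import corpora, models

    texts = [["human", "computer", "interaction"],
             ["graph", "trees", "computer"]]
    dictionary = corpora.Dictionary(texts)            # term <-> integer id mapping
    corpus = [dictionary.doc2bow(t) for t in texts]   # sparse per-document counts

    tfidf = models.TfidfModel(corpus)                 # tf-idf weighting
    lsi = models.LsiModel(tfidf[corpus], id2word=dictionary, num_topics=2)  # LSA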

