Hashing Trick

In machine learning, feature hashing, also known as the hashing trick (by analogy to the kernel trick), is a fast and space-efficient way of vectorizing features, i.e. turning arbitrary features into indices in a vector or matrix. It works by applying a hash function to the features and using their hash values as indices directly, rather than looking the indices up in an associative array. This trick is often attributed to Weinberger et al. (2009), but there exists a much earlier description of this method published by John Moody in 1989.


Motivation


Motivating Example

In a typical document classification task, the input to the machine learning algorithm (both during learning and classification) is free text. From this, a bag of words (BOW) representation is constructed: the individual tokens are extracted and counted, and each distinct token in the training set defines a feature (independent variable) of each of the documents in both the training and test sets.

Machine learning algorithms, however, are typically defined in terms of numerical vectors. Therefore, the bags of words for a set of documents are regarded as a term-document matrix where each row is a single document and each column is a single feature/word; the entry (i, j) in such a matrix captures the frequency (or weight) of the j-th term of the ''vocabulary'' in document i. (An alternative convention swaps the rows and columns of the matrix, but this difference is immaterial.) Typically, these vectors are extremely sparse, in accordance with Zipf's law.

The common approach is to construct, at learning time or prior to that, a ''dictionary'' representation of the vocabulary of the training set, and use that to map words to indices. Hash tables and tries are common candidates for dictionary implementation. E.g., the three documents

* ''John likes to watch movies.''
* ''Mary likes movies too.''
* ''John also likes football.''

can be converted, using the dictionary \{\textrm{John}: 1, \textrm{likes}: 2, \textrm{to}: 3, \textrm{watch}: 4, \textrm{movies}: 5, \textrm{Mary}: 6, \textrm{too}: 7, \textrm{also}: 8, \textrm{football}: 9\}, to the term-document matrix

: \begin{pmatrix}
\textrm{John} & \textrm{likes} & \textrm{to} & \textrm{watch} & \textrm{movies} & \textrm{Mary} & \textrm{too} & \textrm{also} & \textrm{football} \\
1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 1 & 1 & 1 & 0 & 0 \\
1 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 1
\end{pmatrix}

(Punctuation was removed, as is usual in document classification and clustering.)

The problem with this process is that such dictionaries take up a large amount of storage space and grow in size as the training set grows. On the contrary, if the vocabulary is kept fixed and not increased with a growing training set, an adversary may try to invent new words or misspellings that are not in the stored vocabulary so as to circumvent a machine learned filter. To address this challenge, Yahoo! Research attempted to use feature hashing for their spam filters.

Note that the hashing trick is not limited to text classification and similar tasks at the document level, but can be applied to any problem that involves large (perhaps unbounded) numbers of features.
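For concreteness, the dictionary-based vectorizer described above can be sketched in a few lines of Python. This is an illustrative sketch only; the names docs, vocab and matrix are not from any particular library, and the indices are zero-based rather than the one-based numbering used in the dictionary above.

 docs = [
     "John likes to watch movies",
     "Mary likes movies too",
     "John also likes football",
 ]
 
 # Build the dictionary: each distinct token gets the next free column index.
 vocab = {}
 for doc in docs:
     for token in doc.split():
         vocab.setdefault(token, len(vocab))
 
 # Build the term-document matrix as a list of per-document count vectors.
 matrix = []
 for doc in docs:
     row = [0] * len(vocab)
     for token in doc.split():
         row[vocab[token]] += 1
     matrix.append(row)
 
 # matrix[0] == [1, 1, 1, 1, 1, 0, 0, 0, 0]  (the first row of the matrix above)

The dictionary vocab is exactly the object whose size grows with the training set; the hashing trick removes it.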


Mathematical motivation

Mathematically, a token is an element t in a finite (or countably infinite) set T . Suppose we only need to process a finite corpus; then we can put all tokens appearing in the corpus into T , meaning that T is finite. However, suppose we want to process all possible words made of the English letters; then T is countably infinite. Most neural networks can only operate on real vector inputs, so we must construct a "dictionary" function \phi: T\to \R^n.

When T is finite, of size |T| = m \leq n, we can use one-hot encoding to map it into \R^n. First, ''arbitrarily'' enumerate T = \{t_1, t_2, ..., t_m\} , then define \phi(t_i) = e_i . In other words, we assign a unique index i to each token, then map the token with index i to the unit vector e_i .

One-hot encoding is easy to interpret, but it requires one to maintain the arbitrary enumeration of T . Given a token t\in T , to compute \phi(t) , we must find the index i of the token t . Thus, to implement \phi efficiently, we need a fast-to-compute bijection h: T\to \{1, ..., m\} , and then we have \phi(t) = e_{h(t)} . In fact, we can relax the requirement slightly: it suffices to have a fast-to-compute ''injection'' h: T\to \{1, ..., n\} , and then use \phi(t) = e_{h(t)} .

In practice, there is no simple way to construct an efficient injection h: T\to \{1, ..., n\} . However, we do not need a strict injection, but only an ''approximate'' injection. That is, when t\neq t' , we should ''probably'' have h(t) \neq h(t') , so that ''probably'' \phi(t) \neq \phi(t') . At this point, we have just specified that h should be a hashing function. Thus, ineluctably, we reach the idea of feature hashing.
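The following Python sketch illustrates such an approximate injection. It is illustrative only: MD5 is used simply as a convenient, stable hash (Python's built-in hash() is salted per process), not as a recommendation, and the helper name token_index is made up.

 import hashlib
 
 def token_index(token: str, n: int) -> int:
     # Map a token to an index in {0, ..., n-1} via a stable hash.
     digest = hashlib.md5(token.encode("utf-8")).digest()
     return int.from_bytes(digest[:8], "little") % n
 
 tokens = ["John", "likes", "to", "watch", "movies", "Mary", "too", "also", "football"]
 indices = {t: token_index(t, 32) for t in tokens}
 collisions = len(tokens) - len(set(indices.values()))
 print(indices)
 print("collisions:", collisions)   # usually 0 or 1 for 9 tokens in 32 buckets

Distinct tokens probably receive distinct indices, and the probability of a collision shrinks as n grows, which is all that feature hashing requires.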


Algorithms


Feature hashing (Weinberger et al. 2009)

The basic feature hashing algorithm presented in (Weinberger et al. 2009) is defined as follows. First, one specifies two hash functions: the kernel hash h: T\to \{1, ..., n\} , and the sign hash \zeta: T\to \{-1, +1\} . Next, one defines the feature hashing function:

: \phi: T\to \R^n, \quad \phi(t) = \zeta(t)\, e_{h(t)}

Finally, extend this feature hashing function to strings of tokens by

: \phi: T^* \to \R^n, \quad \phi(t_1, ..., t_k) = \sum_{j=1}^k \phi(t_j)

where T^* is the set of all finite strings consisting of tokens in T . Equivalently,

: \phi(t_1, ..., t_k) = \sum_{j=1}^k \zeta(t_j)\, e_{h(t_j)} = \sum_{i=1}^n \left(\sum_{j:\, h(t_j) = i} \zeta(t_j)\right) e_i
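A minimal Python sketch of this construction follows. It is illustrative only: the two hash functions are simulated by salting a single MD5 digest, and the helper names are not from any library.

 import hashlib
 import numpy as np
 
 def _stable_hash(token: str, salt: str) -> int:
     # Deterministic 64-bit integer hash; different salts act as independent hash functions.
     digest = hashlib.md5((salt + token).encode("utf-8")).digest()
     return int.from_bytes(digest[:8], "little")
 
 def phi(tokens, n):
     # phi(t_1, ..., t_k) = sum_j zeta(t_j) * e_{h(t_j)}
     x = np.zeros(n)
     for t in tokens:
         h = _stable_hash(t, "kernel") % n                      # kernel hash h: T -> {0, ..., n-1}
         zeta = 1 if _stable_hash(t, "sign") % 2 == 0 else -1   # sign hash zeta: T -> {-1, +1}
         x[h] += zeta
     return x
 
 print(phi(["John", "likes", "to", "watch", "movies"], n=8))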


Geometric properties

We want to say something about the geometric properties of \phi, but T , by itself, is just a set of tokens; we cannot impose a geometric structure on it except the discrete topology, which is generated by the discrete metric. To make it nicer, we lift it to \R^T , and lift \phi from \phi: T \to \R^n to \phi: \R^T \to \R^n by linear extension:

: \phi((x_t)_{t\in T}) = \sum_{t\in T} x_t\, \zeta(t)\, e_{h(t)} = \sum_{i=1}^n \left(\sum_{t\in T:\, h(t) = i} x_t\, \zeta(t)\right) e_i

There is an infinite sum there, which must be handled at once. There are essentially only two ways to handle infinities. One may impose a metric, then take its completion, to allow well-behaved infinite sums, or one may demand that nothing is actually infinite, only potentially so. Here, we go for the potential-infinity way, by restricting \R^T to contain only vectors with finite support: for every (x_t)_{t\in T} \in \R^T , only finitely many entries of (x_t)_{t\in T} are nonzero.

Define an inner product on \R^T in the obvious way:

: \langle e_t, e_{t'} \rangle = \begin{cases} 1, & \text{if } t=t', \\ 0, & \text{otherwise,} \end{cases} \quad \langle x, x'\rangle = \sum_{t, t'\in T} x_t\, x'_{t'}\, \langle e_t, e_{t'} \rangle

As a side note, if T is infinite, then the inner product space \R^T is not complete. Taking its completion would get us to a Hilbert space, which allows well-behaved infinite sums. Now we have an inner product space, with enough structure to describe the geometry of the feature hashing function \phi: \R^T \to \R^n.

First, we can see why h is called a "kernel hash": it allows us to define a kernel K: T\times T \to \R by

: K(t, t') = \langle e_{h(t)}, e_{h(t')}\rangle

In the language of the "kernel trick", K is the kernel generated by the "feature map"

: \varphi : T \to \R^n, \quad \varphi(t) = e_{h(t)}

Note that this is not the feature map we were using, which is \phi(t) = \zeta(t)\, e_{h(t)} . In fact, we have been using ''another kernel'' K_\zeta: T\times T \to \R, defined by

: K_\zeta (t, t') = \langle \zeta(t)\, e_{h(t)}, \zeta(t')\, e_{h(t')}\rangle

The benefit of augmenting the kernel hash h with the binary hash \zeta is that \phi becomes an isometry "on average": \operatorname{E}[\langle \phi(x), \phi(x')\rangle] = \langle x, x'\rangle for all finitely supported x, x' \in \R^T. This statement interprets the binary hash function \zeta not as a deterministic function of type T\to \{-1, +1\} , but as a random binary vector in \{-1, +1\}^T with unbiased entries, meaning that \Pr(\zeta(t) = +1) = \Pr(\zeta(t) = -1) = \frac 1 2 for any t\in T. This is a good intuitive picture, though not rigorous. For a rigorous statement and proof, see Weinberger et al. (2009).
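The "isometry on average" property can be checked numerically. The following sketch is an illustration under the random-\zeta interpretation above: each trial draws a fresh random kernel hash and sign hash (rather than using real hash functions) and the Monte Carlo average of \langle\phi(x), \phi(x')\rangle is compared with the exact inner product.

 import numpy as np
 
 def hashed_embedding(x, h, zeta, n):
     # Linear extension of phi: sum_t x_t * zeta(t) * e_{h(t)}
     out = np.zeros(n)
     for t, x_t in enumerate(x):
         out[h[t]] += x_t * zeta[t]
     return out
 
 rng = np.random.default_rng(0)
 T, n, trials = 50, 8, 20000
 x = rng.normal(size=T)
 y = rng.normal(size=T)
 
 total = 0.0
 for _ in range(trials):
     h = rng.integers(0, n, size=T)           # random kernel hash values for each token
     zeta = rng.choice([-1.0, 1.0], size=T)   # unbiased random sign hash
     total += hashed_embedding(x, h, zeta, n) @ hashed_embedding(y, h, zeta, n)
 
 print("estimated E[<phi(x), phi(y)>]:", total / trials)
 print("exact <x, y>:", x @ y)

Without the sign hash (\zeta \equiv 1), the collision terms would no longer cancel in expectation; for nonnegative count vectors the estimate would be biased upward.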


Pseudocode implementation

Instead of maintaining a dictionary, a feature vectorizer that uses the hashing trick can build a vector of a pre-defined length by applying a hash function to the features (e.g., words), then using the hash values directly as feature indices and updating the resulting vector at those indices. Here, we assume that feature actually means feature vector.

 function hashing_vectorizer(features : array of string, N : integer):
     x := new vector[N]
     for f in features:
         h := hash(f)
         x[h mod N] += 1
     return x

Thus, if our feature vector is ["cat","dog","cat"], the hash function is h(x_f) = 1 if x_f is "cat" and 2 if x_f is "dog", and we take the output feature vector dimension (N) to be 4, then the output will be [0,2,1,0].

It has been suggested that a second, single-bit output hash function ξ be used to determine the sign of the update value, to counter the effect of hash collisions. If such a hash function is used, the algorithm becomes

 function hashing_vectorizer(features : array of string, N : integer):
     x := new vector[N]
     for f in features:
         h := hash(f)
         idx := h mod N
         if ξ(f) == 1:
             x[idx] += 1
         else:
             x[idx] -= 1
     return x
The above pseudocode actually converts each sample into a vector. An optimized version would instead only generate a stream of (h, \zeta) pairs and let the learning and prediction algorithms consume such streams; a linear model can then be implemented as a single hash table representing the coefficient vector.
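A rough Python sketch of this streaming organization follows. It is illustrative only: the function and variable names are made up, MD5 again stands in for the hash functions, and a dictionary plays the role of the hash-table coefficient vector.

 import hashlib
 from collections import defaultdict
 
 def hashed_stream(features, N):
     # Yield (index, sign) pairs instead of materializing a dense vector.
     for f in features:
         digest = hashlib.md5(f.encode("utf-8")).digest()
         h = int.from_bytes(digest[:8], "little")
         sign = 1.0 if digest[8] % 2 == 0 else -1.0   # single-bit sign hash
         yield h % N, sign
 
 # The coefficient vector of a linear model stored as a hash table: only buckets
 # that have actually been updated take up memory.
 weights = defaultdict(float)
 
 def predict(features, N=2**20):
     return sum(weights.get(idx, 0.0) * sign for idx, sign in hashed_stream(features, N))
 
 def sgd_update(features, label, lr=0.1, N=2**20):
     # One step of least-squares stochastic gradient descent on the hashed features.
     err = predict(features, N) - label
     for idx, sign in hashed_stream(features, N):
         weights[idx] -= lr * err * sign

Online learners such as Vowpal Wabbit organize training around a similar stream of hashed features, typically with a fixed-size weight array rather than a dictionary.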


Extensions and variations


Learned feature hashing

Feature hashing generally suffers from hash collisions, which means that there exist pairs of different tokens with the same hash: t\neq t', \phi(t) = \phi(t') = v. A machine learning model trained on feature-hashed words would then have difficulty distinguishing t and t', essentially because v is polysemic. If t' is rare, then performance degradation is small, as the model could always just ignore the rare case and pretend every occurrence of v means t. However, if both are common, then the degradation can be serious. To handle this, one can train supervised hashing functions that avoid mapping common tokens to the same feature vectors.


Applications and practical performance

Ganchev and Dredze showed that in text classification applications with random hash functions and several tens of thousands of columns in the output vectors, feature hashing need not have an adverse effect on classification performance, even without the signed hash function.

Weinberger et al. (2009) applied their version of feature hashing to multi-task learning, and in particular spam filtering, where the input features are pairs (user, feature), so that a single parameter vector captured per-user spam filters as well as a global filter for several hundred thousand users, and found that the accuracy of the filter went up.

Chen et al. (2015) combined the idea of feature hashing with sparse matrices to construct "virtual matrices": large matrices with small storage requirements. The idea is to treat a matrix M\in \R^{n\times n} as a dictionary, with keys in \{1, ..., n\}\times \{1, ..., n\} and values in \R. Then, as usual with hashed dictionaries, one can use a hash function h: \N\times \N\to \{1, ..., m\}, and thus represent M as a vector in \R^m, no matter how big n is. With virtual matrices, they constructed ''HashedNets'', which are large neural networks taking only small amounts of storage.


Implementations

Implementations of the hashing trick are present in:

* Apache Mahout
* Gensim
* scikit-learn
* sofia-ml
* Vowpal Wabbit
* Apache Spark
* R
* TensorFlow
* Dask-ML
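For example, in scikit-learn the trick is exposed through HashingVectorizer (for raw text) and FeatureHasher (for token/count mappings); a minimal usage sketch:

 from sklearn.feature_extraction.text import HashingVectorizer
 
 docs = [
     "John likes to watch movies.",
     "Mary likes movies too.",
     "John also likes football.",
 ]
 
 # alternate_sign=True applies the signed (zeta) variant described above.
 vectorizer = HashingVectorizer(n_features=2**10, alternate_sign=True, norm=None)
 X = vectorizer.transform(docs)    # SciPy sparse matrix of shape (3, 1024)
 print(X.shape, X.nnz)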


See also

* Bloom filter
* Count–min sketch
* Heaps' law
* Locality-sensitive hashing
* MinHash




External links


* Hashing Representations for Machine Learning, on John Langford's website
* What is the "hashing trick"? - MetaOptimize Q+A