Vector quantization (VQ) is a classical quantization technique from signal processing that allows the modeling of probability density functions by the distribution of prototype vectors. It was originally used for data compression. It works by dividing a large set of points (vectors) into groups having approximately the same number of points closest to them. Each group is represented by its centroid point, as in k-means and some other clustering algorithms.
The density matching property of vector quantization is powerful, especially for identifying the density of large and high-dimensional data. Since data points are represented by the index of their closest centroid, commonly occurring data have low error, and rare data high error. This is why VQ is suitable for lossy data compression. It can also be used for lossy data correction and density estimation.
Vector quantization is based on the competitive learning paradigm, so it is closely related to the self-organizing map model and to sparse coding models used in deep learning algorithms such as autoencoders.
Training
The simplest training algorithm for vector quantization is as follows (a code sketch appears after the list):
# Pick a sample point at random
# Move the nearest quantization vector centroid towards this sample point, by a small fraction of the distance
# Repeat
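A minimal sketch of this update rule in Python, assuming a NumPy array of float samples; the function name, learning rate, and step count are illustrative choices, not part of any standard API:

 import numpy as np

 def train_vq_simple(samples, num_centroids=16, lr=0.05, steps=10000, seed=0):
     """Competitive-learning VQ: repeatedly nudge the nearest centroid
     toward a randomly chosen sample point."""
     rng = np.random.default_rng(seed)
     # Initialise the codebook from randomly chosen training samples.
     centroids = samples[rng.choice(len(samples), num_centroids, replace=False)].astype(float)
     for _ in range(steps):
         p = samples[rng.integers(len(samples))]              # pick a sample at random
         nearest = np.argmin(np.linalg.norm(centroids - p, axis=1))
         centroids[nearest] += lr * (p - centroids[nearest])  # move by a small fraction
     return centroids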
A more sophisticated algorithm reduces the bias in the density matching estimation, and ensures that all points are used, by including an extra sensitivity parameter s_i (a code sketch appears after the list):
# Increase each centroid's sensitivity s_i by a small amount
# Pick a sample point P at random
# For each quantization vector centroid c_i, let d(P, c_i) denote the distance of P and c_i
# Find the centroid c_i for which d(P, c_i) - s_i is the smallest
# Move c_i towards P by a small fraction of the distance
# Set s_i to zero
# Repeat
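A sketch of this variant under the same assumptions as above, adding a per-centroid sensitivity array; the sensitivity increment is an illustrative value:

 import numpy as np

 def train_vq_sensitivity(samples, num_centroids=16, lr=0.05,
                          sensitivity_step=0.01, steps=10000, seed=0):
     """VQ training with sensitivities s_i that pull rarely chosen
     centroids back into use, reducing density-matching bias."""
     rng = np.random.default_rng(seed)
     centroids = samples[rng.choice(len(samples), num_centroids, replace=False)].astype(float)
     s = np.zeros(num_centroids)                    # sensitivity s_i per centroid
     for _ in range(steps):
         s += sensitivity_step                      # 1. raise every sensitivity
         p = samples[rng.integers(len(samples))]    # 2. random sample point P
         d = np.linalg.norm(centroids - p, axis=1)  # 3. distances d(P, c_i)
         i = np.argmin(d - s)                       # 4. minimise d(P, c_i) - s_i
         centroids[i] += lr * (p - centroids[i])    # 5. move c_i towards P
         s[i] = 0.0                                 # 6. reset the winner's sensitivity
     return centroids

Because a centroid's sensitivity keeps growing until it wins, every centroid is eventually selected, which is what ensures all codebook entries are used.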
It is desirable to use a cooling schedule to produce convergence: see simulated annealing. Another (simpler) method is LBG, which is based on k-means.
The algorithm can be iteratively updated with 'live' data, rather than by picking random points from a data set, but this will introduce some bias if the data are temporally correlated over many samples.
Applications
Vector quantization is used for lossy data compression, lossy data correction, pattern recognition, density estimation and clustering.
Lossy data correction, or prediction, is used to recover data missing from some dimensions. It is done by finding the nearest group with the data dimensions available, then predicting the result based on the values for the missing dimensions, assuming that they will have the same value as the group's centroid.
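A minimal sketch of this prediction scheme, assuming a trained codebook such as one produced by the training sketches above; the names and the boolean-mask interface are illustrative:

 import numpy as np

 def impute_missing(x, centroids, known_mask):
     """Fill the missing dimensions of x with those of the nearest
     centroid, measuring distance only over the known dimensions."""
     d = np.linalg.norm(centroids[:, known_mask] - x[known_mask], axis=1)
     nearest = centroids[np.argmin(d)]
     filled = x.astype(float)           # copy of x
     filled[~known_mask] = nearest[~known_mask]
     return filled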
For density estimation, the area/volume that is closer to a particular centroid than to any other is inversely proportional to the density (due to the density matching property of the algorithm).
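Under the premise that each group attracts approximately the same number of points, this gives a rough density estimate: with k centroids, a point x falling in the Voronoi cell V_i of centroid c_i has

 \hat{p}(x) \approx \frac{1}{k \,\operatorname{vol}(V_i)},

since each cell then carries about 1/k of the total probability mass. This is an idealisation of the density matching property, not an exact result.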
Use in data compression
Vector quantization, also called "block quantization" or "pattern matching quantization", is often used in lossy data compression. It works by encoding values from a multidimensional vector space into a finite set of values from a discrete subspace of lower dimension. A lower-space vector requires less storage space, so the data is compressed. Due to the density matching property of vector quantization, the compressed data has errors that are inversely proportional to density.
The transformation is usually done by projection or by using a codebook. In some cases, a codebook can also be used to entropy code the discrete value in the same step, by generating a prefix-coded variable-length value as its output.
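A minimal encode/decode sketch against a trained codebook; the names are illustrative, and a real codec would additionally entropy-code the indices as described above:

 import numpy as np

 def encode(vectors, codebook):
     """Map each input vector to the index of its nearest codebook entry."""
     d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
     return np.argmin(d, axis=1)        # one small integer per input vector

 def decode(indices, codebook):
     """Reconstruct (lossily) by looking each index up in the codebook."""
     return codebook[indices]

 # Illustrative usage: compress 2-D points with an 8-entry codebook.
 data = np.random.default_rng(1).normal(size=(1000, 2))
 codebook = train_vq_simple(data, num_centroids=8)   # trainer sketched earlier
 reconstructed = decode(encode(data, codebook), codebook)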
The set of discrete amplitude levels is quantized jointly rather than each sample being quantized separately. Consider a ''k''-dimensional vector