In applied mathematics, K-SVD is a dictionary learning algorithm for creating a dictionary for sparse representations, via a singular value decomposition approach. K-SVD is a generalization of the ''k''-means clustering method, and it works by iteratively alternating between sparse coding the input data based on the current dictionary and updating the atoms in the dictionary to better fit the data. It is structurally related to the expectation–maximization (EM) algorithm.
K-SVD is widely used in applications such as image processing, audio processing, biology, and document analysis.
K-SVD algorithm
K-SVD is a generalization of ''k''-means, as follows.
The ''k''-means clustering method can also be regarded as a method of sparse representation. That is, finding the best possible codebook to represent the data samples <math>\{y_i\}_{i=1}^N</math> by nearest neighbor, by solving

:<math>\min_{D,X} \{\|Y - DX\|_F^2\} \quad \text{subject to } \forall i,\ x_i = e_k \text{ for some } k,</math>

which is nearly equivalent to

:<math>\min_{D,X} \{\|Y - DX\|_F^2\} \quad \text{subject to } \forall i,\ \|x_i\|_0 = 1.</math>
The letter <math>F</math> denotes the Frobenius norm. The sparse representation term <math>x_i = e_k</math> forces the ''k''-means algorithm to use only one atom (column) of the dictionary <math>D</math>. To relax this constraint, the target of the K-SVD algorithm is to represent each signal as a linear combination of atoms in <math>D</math>.
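To make the correspondence concrete, the following is a minimal NumPy sketch (the dimensions and random data are illustrative assumptions, not from the article): with the codes constrained to standard basis vectors <math>x_i = e_k</math>, minimizing <math>\|Y - DX\|_F^2</math> column by column is exactly the nearest-centroid assignment step of ''k''-means.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
D = rng.standard_normal((8, 5))   # codebook: 5 atoms (centroids) in R^8 (illustrative sizes)
Y = rng.standard_normal((8, 20))  # 20 data samples as columns

# Nearest-neighbor assignment: for each y_i, pick the k minimizing ||y_i - d_k||_2.
dists = np.linalg.norm(Y[:, None, :] - D[:, :, None], axis=0)  # shape (5, 20)
assign = dists.argmin(axis=0)

# The equivalent 1-sparse coefficient matrix X, with each column x_i = e_k.
X = np.zeros((D.shape[1], Y.shape[1]))
X[assign, np.arange(Y.shape[1])] = 1.0

print("K-means objective ||Y - DX||_F^2 =", np.linalg.norm(Y - D @ X, "fro") ** 2)
</syntaxhighlight>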
The K-SVD algorithm follows the construction flow of the ''k''-means algorithm. However, in contrast to ''k''-means, in order to achieve a linear combination of atoms in <math>D</math>, the sparsity term of the constraint is relaxed so that the number of nonzero entries of each column <math>x_i</math> can be more than 1, but less than a number <math>T_0</math>.
So, the objective function becomes

:<math>\min_{D,X} \{\|Y - DX\|_F^2\} \quad \text{subject to } \forall i,\ \|x_i\|_0 \le T_0,</math>

or in another objective form,

:<math>\min_{D,X} \sum_i \|x_i\|_0 \quad \text{subject to } \|Y - DX\|_F^2 \le \epsilon.</math>
In the K-SVD algorithm, <math>D</math> is first fixed and the best coefficient matrix <math>X</math> is found. As finding the truly optimal <math>X</math> is impossible, we use an approximation pursuit method. Any pursuit algorithm, such as orthogonal matching pursuit (OMP), can be used for the calculation of the coefficients, as long as it can supply a solution with a fixed and predetermined number of nonzero entries <math>T_0</math>.
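As an illustration of this stage, here is a minimal OMP sketch in NumPy; the function name <code>omp</code> and the setup are assumptions made for this example, and practical applications would typically use an optimized library routine. The sketch greedily picks the atom most correlated with the current residual and re-fits all selected coefficients by least squares until <math>T_0</math> nonzero entries are used.

<syntaxhighlight lang="python">
import numpy as np

def omp(D, y, T0):
    """Greedy T0-sparse approximation of y over the columns of D (atoms assumed l2-normalized)."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(T0):
        # Select the atom most correlated with the current residual.
        k = int(np.argmax(np.abs(D.T @ residual)))
        support.append(k)
        # Re-fit all selected coefficients by least squares (the "orthogonal" step).
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

# Sparse coding of a whole data matrix Y, column by column:
# X = np.column_stack([omp(D, Y[:, i], T0) for i in range(Y.shape[1])])
</syntaxhighlight>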
After the sparse coding task, the next step is to search for a better dictionary <math>D</math>. However, finding the whole dictionary at once is impossible, so the process updates only one column of the dictionary <math>D</math> at a time, while fixing <math>X</math>. The update of the <math>k</math>-th column is done by rewriting the penalty term as

:<math>\|Y - DX\|_F^2 = \left\|Y - \sum_{j=1}^K d_j x_T^j\right\|_F^2 = \left\|\left(Y - \sum_{j \ne k} d_j x_T^j\right) - d_k x_T^k\right\|_F^2 = \|E_k - d_k x_T^k\|_F^2,</math>

where <math>x_T^k</math> denotes the ''k''-th row of <math>X</math>.
By decomposing the multiplication <math>DX</math> into a sum of <math>K</math> rank-1 matrices, we can treat the other <math>K-1</math> terms as fixed, with only the <math>k</math>-th remaining unknown. After this step, we can solve the minimization problem by approximating the <math>E_k</math> term with a rank-1 matrix using singular value decomposition, then updating <math>d_k</math> with it. However, the new solution for the vector <math>x_T^k</math> is very likely to be dense, because the sparsity constraint is not enforced.
To cure this problem, define <math>\omega_k</math> as

:<math>\omega_k = \{i \mid 1 \le i \le N,\ x_T^k(i) \ne 0\},</math>

which points to the examples <math>\{y_i\}</math> that use atom <math>d_k</math> (i.e., the entries of <math>x_T^k</math> that are nonzero). Then, define <math>\Omega_k</math> as a matrix of size <math>N \times |\omega_k|</math>, with ones on the <math>(\omega_k(i), i)</math>-th entries and zeros otherwise. When multiplying <math>\tilde{x}_T^k = x_T^k \Omega_k</math>, this shrinks the row vector <math>x_T^k</math> by discarding the zero entries. Similarly, the multiplication <math>\tilde{Y}_k = Y \Omega_k</math> is the subset of the examples that currently use the <math>d_k</math> atom. The same effect can be seen on <math>\tilde{E}_k = E_k \Omega_k</math>.
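As a small sanity check (a sketch with made-up data, not from the article), the following snippet builds <math>\Omega_k</math> explicitly and confirms that right-multiplication by it amounts to selecting the columns indexed by the support <math>\omega_k</math>:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
N = 6                                              # number of samples (illustrative)
x_k = np.array([0.0, 2.0, 0.0, -1.0, 0.0, 3.0])    # k-th row of X, length N
omega = np.nonzero(x_k)[0]                         # support omega_k: indices 1, 3, 5

# Explicit Omega_k: N x |omega_k|, with ones at the (omega_k(i), i)-th entries.
Omega = np.zeros((N, omega.size))
Omega[omega, np.arange(omega.size)] = 1.0

Y = rng.standard_normal((4, N))
assert np.allclose(Y @ Omega, Y[:, omega])         # Y Omega_k selects the used examples
assert np.allclose(x_k @ Omega, x_k[omega])        # x_T^k Omega_k drops the zero entries
</syntaxhighlight>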
So the minimization problem as mentioned before becomes

:<math>\min_{d_k, x_T^k} \|E_k \Omega_k - d_k x_T^k \Omega_k\|_F^2 = \min_{d_k, \tilde{x}_T^k} \|\tilde{E}_k - d_k \tilde{x}_T^k\|_F^2</math>

and can be solved by directly using SVD. SVD decomposes <math>\tilde{E}_k</math> into <math>U \Delta V^\top</math>. The solution for <math>d_k</math> is the first column of <math>U</math>, and the coefficient vector <math>\tilde{x}_T^k</math> is the first column of <math>V</math> multiplied by <math>\Delta(1,1)</math>. After updating the whole dictionary, the process then alternates between iteratively solving for <math>X</math> and iteratively solving for <math>D</math>.
Limitations
Choosing an appropriate "dictionary" for a dataset is a non-convex problem, and K-SVD operates by an iterative update that is not guaranteed to find the global optimum. However, this is common to other algorithms for this purpose, and K-SVD works fairly well in practice.
See also
* Sparse approximation
* Singular value decomposition
* Matrix norm
* ''k''-means clustering
* Low-rank approximation