In compressed sensing, the nullspace property gives necessary and sufficient conditions on the reconstruction of sparse signals using the techniques of ℓ₁-relaxation. The term "nullspace property" originates from Cohen, Dahmen, and DeVore. The nullspace property is often difficult to check in practice, and the restricted isometry property is a more modern condition in the field of compressed sensing.
The technique of ℓ₁-relaxation

The non-convex ℓ₀-minimization problem,

    min ‖z‖₀ subject to Az = b,

is a standard problem in compressed sensing. However, ℓ₀-minimization is known to be NP-hard in general. As such, the technique of ℓ₁-relaxation is sometimes employed to circumvent the difficulties of signal reconstruction using the ℓ₀-norm. In ℓ₁-relaxation, the ℓ₁ problem,

    min ‖z‖₁ subject to Az = b,

is solved in place of the ℓ₀ problem. Note that this relaxation is convex and hence amenable to the standard techniques of linear programming, a computationally desirable feature. Naturally we wish to know when ℓ₁-relaxation will give the same answer as the ℓ₀ problem. The nullspace property is one way to guarantee agreement.
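The reduction to linear programming can be sketched numerically. The following is an illustrative example, not part of the original article: writing x = u − v with u, v ≥ 0 turns min ‖x‖₁ subject to Ax = b into a linear program, solved here with SciPy's linprog. The matrix and signal are arbitrary choices for the demonstration, and the real-valued case is used for simplicity even though the theory is stated over the complex numbers.

```python
import numpy as np
from scipy.optimize import linprog

def l1_minimize(A, b):
    """Solve min ||x||_1 subject to A x = b via linear programming.

    Substitute x = u - v with u, v >= 0; at the optimum u_i * v_i = 0,
    so ||x||_1 = sum(u + v), giving the LP:
        min 1^T (u, v)  subject to  [A, -A] (u, v) = b,  u, v >= 0.
    """
    m, n = A.shape
    c = np.ones(2 * n)                 # objective: sum(u) + sum(v)
    A_eq = np.hstack([A, -A])          # equality constraint A u - A v = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

# Example: measure a 1-sparse signal with a 2x3 matrix.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
x_true = np.array([0.0, 0.0, 2.0])     # 1-sparse
x_hat = l1_minimize(A, A @ x_true)
print(np.round(x_hat, 6))              # recovers (0, 0, 2)
```

Here the kernel of A is spanned by (1, 1, −1), and the competing feasible point (2, 2, 0) has the larger ℓ₁-norm 4, so the ℓ₁ minimizer coincides with the sparse signal.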
Definition
An m × n complex matrix A has the nullspace property of order s if, for every index set S ⊆ {1, …, n} with |S| ≤ s, we have

    ‖η_S‖₁ < ‖η_{Sᶜ}‖₁ for all η ∈ ker A ∖ {0}.

Here η_S denotes the vector that agrees with η on the indices in S and is zero elsewhere, and Sᶜ denotes the complement of S in {1, …, n}.
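As a small numerical illustration (a sketch, not part of the original article): verifying the property is difficult in general, but the condition is invariant under scaling of η, so when ker A is one-dimensional it suffices to test a single kernel basis vector; moreover it suffices to test index sets of size exactly s, since shrinking S only decreases the left side and increases the right. The helper name has_nsp is introduced for this sketch.

```python
import itertools
import numpy as np
from scipy.linalg import null_space

def has_nsp(A, s):
    """Check the nullspace property of order s, assuming ker A is
    one-dimensional (then one basis vector suffices, by scale invariance)."""
    N = null_space(A)
    assert N.shape[1] == 1, "check only implemented for a 1-D kernel"
    eta = N[:, 0]
    n = A.shape[1]
    for S in itertools.combinations(range(n), s):   # |S| = s suffices
        on_S = sum(abs(eta[i]) for i in S)
        off_S = sum(abs(eta[i]) for i in range(n) if i not in S)
        if not on_S < off_S:
            return False
    return True

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])   # kernel spanned by (1, 1, -1)
print(has_nsp(A, 1))             # True: each |eta_i| = 1 < 2 = sum of the rest
```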
Recovery Condition
The following theorem gives a necessary and sufficient condition for the recoverability of every s-sparse vector in ℂⁿ. The proof of the theorem is a standard one, and the proof supplied here is summarized from Holger Rauhut.

Theorem: Let A be an m × n complex matrix. Then every s-sparse signal x ∈ ℂⁿ is the unique solution to the ℓ₁-relaxation problem with b = Ax if and only if A satisfies the nullspace property of order s.
Proof: For the forwards direction, let η ∈ ker A ∖ {0} and let S be an index set with |S| ≤ s. Notice that η_S and −η_{Sᶜ} are distinct vectors with A(η_S) = A(−η_{Sᶜ}) by the linearity of A (since Aη = 0), and that η_S is s-sparse. Hence, by the assumed uniqueness of the ℓ₁ minimizer for b = A(η_S), we must have ‖η_S‖₁ < ‖−η_{Sᶜ}‖₁ = ‖η_{Sᶜ}‖₁, as desired.

For the backwards direction, let x be s-sparse and let z be another (not necessarily s-sparse) vector such that z ≠ x and Az = Ax. Define the (non-zero) vector η = x − z and notice that it lies in the nullspace of A. Call S the support of x; then x = x_S and |S| ≤ s, so η_S = x − z_S and η_{Sᶜ} = −z_{Sᶜ}, and the result follows from an elementary application of the triangle inequality and the nullspace property:

    ‖x‖₁ ≤ ‖x − z_S‖₁ + ‖z_S‖₁ = ‖η_S‖₁ + ‖z_S‖₁ < ‖η_{Sᶜ}‖₁ + ‖z_S‖₁ = ‖−z_{Sᶜ}‖₁ + ‖z_S‖₁ = ‖z‖₁,

establishing the strict minimality of ‖x‖₁. ∎
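The necessity direction can also be seen numerically (an illustrative sketch beyond the original proof). For the hypothetical 1 × 2 matrix A = [1 2], chosen for this example, the kernel is spanned by η = (−2, 1); taking S = {1} gives ‖η_S‖₁ = 2 ≥ 1 = ‖η_{Sᶜ}‖₁, so the nullspace property of order 1 fails, and ℓ₁-minimization indeed misses a 1-sparse signal:

```python
import numpy as np
from scipy.optimize import linprog

# min ||x||_1 subject to A x = b, via the split x = u - v with u, v >= 0.
A = np.array([[1.0, 2.0]])       # kernel spanned by (-2, 1): NSP of order 1 fails
x_true = np.array([3.0, 0.0])    # a 1-sparse signal
b = A @ x_true

n = A.shape[1]
res = linprog(np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=b,
              bounds=(0, None), method="highs")
x_hat = res.x[:n] - res.x[n:]
print(np.round(x_hat, 6))        # (0, 1.5): smaller l1-norm, but not x_true
```

The feasible point (0, 1.5) has ℓ₁-norm 1.5 < 3 = ‖x_true‖₁, so the relaxation returns it instead of the sparse signal, as the theorem predicts.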