In mathematics, given a field $\mathbb{F}$, nonnegative integers $m, n$, and a matrix $A \in \mathbb{F}^{m \times n}$, a rank decomposition or rank factorization of $A$ is a factorization of $A$ of the form $A = CF$, where $C \in \mathbb{F}^{m \times r}$ and $F \in \mathbb{F}^{r \times n}$, where $r$ is the rank of $A$.
Existence
Every finite-dimensional matrix has a rank decomposition: Let $A$ be an $m \times n$ matrix whose column rank is $r$. Therefore, there are $r$ linearly independent columns in $A$; equivalently, the dimension of the column space of $A$ is $r$. Let $c_1, c_2, \ldots, c_r$ be any basis for the column space of $A$ and place them as column vectors to form the $m \times r$ matrix $C = \begin{bmatrix} c_1 & c_2 & \cdots & c_r \end{bmatrix}$. Therefore, every column vector of $A$ is a linear combination of the columns of $C$. To be precise, if $A = \begin{bmatrix} a_1 & a_2 & \cdots & a_n \end{bmatrix}$ is an $m \times n$ matrix with $a_j$ as its $j$-th column, then
:$a_j = f_{1j}c_1 + f_{2j}c_2 + \cdots + f_{rj}c_r,$
where the $f_{ij}$ are the scalar coefficients of $a_j$ in terms of the basis $c_1, c_2, \ldots, c_r$. This implies that $A = CF$, where $f_{ij}$ is the $(i,j)$-th element of $F$.
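This existence argument is constructive: pick a basis of the column space, then read off the coefficients. The following NumPy sketch illustrates it on the matrix used in the Example section below; the helper name `rank_factorization`, the use of SciPy's QR with column pivoting to select basis columns, and the tolerance `tol` are illustrative choices, not part of the article.

```python
import numpy as np
from scipy.linalg import qr

def rank_factorization(A, tol=1e-10):
    """Return (C, F) with A = C @ F and C of full column rank r = rank(A).

    Hypothetical helper: QR with column pivoting picks r linearly
    independent columns of A to form C; the coefficient matrix F is
    then recovered column by column via least squares.
    """
    Q, R, piv = qr(A, pivoting=True)
    r = int(np.sum(np.abs(np.diag(R)) > tol))   # numerical rank
    C = A[:, sorted(piv[:r])]                   # r independent columns of A
    F, *_ = np.linalg.lstsq(C, A, rcond=None)   # coefficients of each column of A
    return C, F

A = np.array([[1., 3., 1., 4.],
              [2., 7., 3., 9.],
              [1., 5., 3., 1.],
              [1., 2., 0., 8.]])
C, F = rank_factorization(A)
assert np.allclose(C @ F, A)
```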
Non-uniqueness
If $A = C_1F_1$ is a rank factorization, taking $C_2 = C_1R$ and $F_2 = R^{-1}F_1$ gives another rank factorization for any invertible matrix $R$ of compatible dimensions.

Conversely, if $A = C_1F_1 = C_2F_2$ are two rank factorizations of $A$, then there exists an invertible matrix $R$ such that $C_1 = C_2R$ and $F_1 = R^{-1}F_2$.
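A quick numeric illustration of this non-uniqueness, reusing `C` and `F` from the sketch above with a randomly chosen (hence generically invertible) $R$:

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.standard_normal((3, 3))        # random 3x3 matrix, invertible with probability 1
C2 = C @ R                             # C2 = C R
F2 = np.linalg.solve(R, F)             # F2 = R^{-1} F, without forming the inverse
assert np.allclose(C2 @ F2, C @ F)     # both pairs factor the same matrix A
```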
Construction
Rank factorization from reduced row echelon forms
In practice, we can construct one specific rank factorization as follows: we can compute $B$, the reduced row echelon form of $A$. Then $C$ is obtained by removing from $A$ all non-pivot columns (which can be determined by looking for columns in $B$ which do not contain a pivot), and $F$ is obtained by eliminating any all-zero rows of $B$.
Note: For a full-rank square matrix (i.e. when $n = m = r$), this procedure will yield the trivial result $C = A$ and $F = B = I_n$ (the $n \times n$ identity matrix).
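This procedure translates directly into exact arithmetic with SymPy, whose `rref` method returns both the reduced row echelon form and the pivot column indices. A minimal sketch follows; the function name `rref_rank_factorization` is an illustrative choice.

```python
import sympy as sp

def rref_rank_factorization(A):
    """Rank factorization A = C * F built from the reduced row echelon form."""
    B, pivots = A.rref()        # B = rref(A), pivots = indices of pivot columns
    C = A[:, list(pivots)]      # keep only the pivot columns of A
    F = B[:len(pivots), :]      # drop the all-zero rows of B
    return C, F
```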
Example
Consider the matrix
:$A = \begin{bmatrix} 1 & 3 & 1 & 4 \\ 2 & 7 & 3 & 9 \\ 1 & 5 & 3 & 1 \\ 1 & 2 & 0 & 8 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & -2 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix} = B.$
$B$ is in reduced echelon form.

Then $C$ is obtained by removing the third column of $A$, the only one which is not a pivot column, and $F$ by getting rid of the last row of zeroes from $B$, so
:$C = \begin{bmatrix} 1 & 3 & 4 \\ 2 & 7 & 9 \\ 1 & 5 & 1 \\ 1 & 2 & 8 \end{bmatrix}, \qquad F = \begin{bmatrix} 1 & 0 & -2 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}.$
It is straightforward to check that
:$A = \begin{bmatrix} 1 & 3 & 1 & 4 \\ 2 & 7 & 3 & 9 \\ 1 & 5 & 3 & 1 \\ 1 & 2 & 0 & 8 \end{bmatrix} = \begin{bmatrix} 1 & 3 & 4 \\ 2 & 7 & 9 \\ 1 & 5 & 1 \\ 1 & 2 & 8 \end{bmatrix} \begin{bmatrix} 1 & 0 & -2 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} = CF.$
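The SymPy sketch above reproduces this example exactly:

```python
A = sp.Matrix([[1, 3, 1, 4],
               [2, 7, 3, 9],
               [1, 5, 3, 1],
               [1, 2, 0, 8]])
C, F = rref_rank_factorization(A)
assert C * F == A               # exact equality, since SymPy works over the rationals
```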
Proof
Let $P$ be an $n \times n$ permutation matrix such that $AP = (C, D)$ in block partitioned form, where the columns of $C$ are the $r$ pivot columns of $A$. Every column of $D$ is a linear combination of the columns of $C$, so there is a matrix $G$ such that $D = CG$, where the columns of $G$ contain the coefficients of each of those linear combinations. So $AP = (C, CG) = C(I_r, G)$, $I_r$ being the $r \times r$ identity matrix. We will show now that $(I_r, G) = FP$.

Transforming $A$ into its reduced row echelon form $B$ amounts to left-multiplying by a matrix $E$ which is a product of elementary matrices, so $EAP = BP = (EC, ECG)$, where $EC = \begin{pmatrix} I_r \\ 0 \end{pmatrix}$. We then can write $BP = \begin{pmatrix} I_r & G \\ 0 & 0 \end{pmatrix}$, which allows us to identify $(I_r, G) = FP$, i.e. the nonzero $r$ rows of the reduced echelon form, with the same permutation on the columns as we did for $A$. We thus have $AP = C(I_r, G) = CFP$, and since $P$ is invertible this implies $A = CF$, and the proof is complete.
Singular value decomposition
If $\mathbb{F} \in \{\mathbb{R}, \mathbb{C}\}$, then one can also construct a full-rank factorization of $A$ via a singular value decomposition
:$A = U\Sigma V^* = \begin{bmatrix} U_1 & U_2 \end{bmatrix} \begin{bmatrix} \Sigma_r & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} V_1^* \\ V_2^* \end{bmatrix} = U_1\left(\Sigma_r V_1^*\right).$
Since $U_1$ is a full-column-rank matrix and $\Sigma_r V_1^*$ is a full-row-rank matrix, we can take $C = U_1$ and $F = \Sigma_r V_1^*$.
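In floating-point arithmetic the truncation to the first $r$ singular triplets is performed after thresholding the singular values, as in the following NumPy sketch (the tolerance `tol` is an illustrative choice):

```python
import numpy as np

def svd_rank_factorization(A, tol=1e-10):
    """Full-rank factorization A = C @ F built from a singular value decomposition."""
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    r = int(np.sum(s > tol))          # numerical rank of A
    C = U[:, :r]                      # U_1: full column rank
    F = s[:r, None] * Vh[:r, :]       # Sigma_r V_1^*: full row rank
    return C, F

A = np.array([[1., 3., 1., 4.],
              [2., 7., 3., 9.],
              [1., 5., 3., 1.],
              [1., 2., 0., 8.]])
C, F = svd_rank_factorization(A)
assert np.allclose(C @ F, A)
```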
Consequences
rank(A) = rank(Aᵀ)
An immediate consequence of rank factorization is that the rank of $A$ is equal to the rank of its transpose $A^\mathsf{T}$. Since the columns of $A$ are the rows of $A^\mathsf{T}$, the column rank of $A$ equals its row rank.
Proof: To see why this is true, let us first define rank to mean column rank. Since $A = CF$, it follows that $A^\mathsf{T} = F^\mathsf{T}C^\mathsf{T}$. From the definition of matrix multiplication, this means that each column of $A^\mathsf{T}$ is a linear combination of the columns of $F^\mathsf{T}$. Therefore, the column space of $A^\mathsf{T}$ is contained within the column space of $F^\mathsf{T}$ and, hence, $\operatorname{rank}(A^\mathsf{T}) \leq \operatorname{rank}(F^\mathsf{T})$.

Now, $F^\mathsf{T}$ is $n \times r$, so there are $r$ columns in $F^\mathsf{T}$ and, hence, $\operatorname{rank}(A^\mathsf{T}) \leq r = \operatorname{rank}(A)$. This proves that $\operatorname{rank}(A^\mathsf{T}) \leq \operatorname{rank}(A)$.
Now apply the result to $A^\mathsf{T}$ to obtain the reverse inequality: since $(A^\mathsf{T})^\mathsf{T} = A$, we can write $\operatorname{rank}(A) = \operatorname{rank}((A^\mathsf{T})^\mathsf{T}) \leq \operatorname{rank}(A^\mathsf{T})$. This proves $\operatorname{rank}(A) \leq \operatorname{rank}(A^\mathsf{T})$.

We have, therefore, proved $\operatorname{rank}(A^\mathsf{T}) \leq \operatorname{rank}(A)$ and $\operatorname{rank}(A) \leq \operatorname{rank}(A^\mathsf{T})$, so $\operatorname{rank}(A) = \operatorname{rank}(A^\mathsf{T})$.
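A quick numerical spot check of this identity, using NumPy's SVD-based rank routine on a matrix whose rank is at most 3 by construction:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 8)) @ rng.standard_normal((8, 3))  # rank 3 generically
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A.T) == 3
```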