
The rank–nullity theorem is a theorem in linear algebra, which asserts that the dimension of the domain of a linear map is the sum of its rank (the dimension of its image) and its ''nullity'' (the dimension of its kernel).[ p. 70, §2.1, Theorem 2.3]
Stating the theorem
Let T \colon V \to W be a linear transformation between two vector spaces, where T's domain V is finite dimensional. Then
:\operatorname{rank}(T) + \operatorname{nullity}(T) = \dim V,
where
:\operatorname{rank}(T) = \dim(\operatorname{im} T) \qquad \text{and} \qquad \operatorname{nullity}(T) = \dim(\ker T).
In other words,
:\dim(\operatorname{im} T) + \dim(\ker T) = \dim(\operatorname{domain}(T)).
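For a concrete illustration (the map here is chosen purely for exposition), consider the differentiation operator D \colon P_3(\mathbb{R}) \to P_3(\mathbb{R}) on the space of real polynomials of degree at most 3, which has \dim P_3(\mathbb{R}) = 4. Its kernel consists of the constant polynomials, so \operatorname{nullity}(D) = 1, and its image consists of the polynomials of degree at most 2, so \operatorname{rank}(D) = 3; indeed,
:\operatorname{rank}(D) + \operatorname{nullity}(D) = 3 + 1 = 4 = \dim P_3(\mathbb{R}).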
This theorem can be refined via the splitting lemma to be a statement about an isomorphism of spaces, not just dimensions. Explicitly, since T induces an isomorphism from V / \ker(T) to \operatorname{im}(T), the existence of a basis for V that extends any given basis of \ker(T) implies, via the splitting lemma, that
:\operatorname{im}(T) \oplus \ker(T) \cong V.
Taking dimensions, the rank–nullity theorem follows.
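As a simple illustration of this refinement (again an example chosen for exposition), let P \colon \mathbb{R}^3 \to \mathbb{R}^3 be the projection P(x, y, z) = (x, y, 0). Then \operatorname{im}(P) is the xy-plane and \ker(P) is the z-axis, and
:\operatorname{im}(P) \oplus \ker(P) = \mathbb{R}^3
is an internal direct sum, realizing the isomorphism above directly.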
Matrices
Since \operatorname{Mat}_{m \times n}(\mathbb{F}) \cong \operatorname{Hom}(\mathbb{F}^n, \mathbb{F}^m), one can represent linear maps as matrices. In the case of an m \times n matrix, the dimension of the domain is n, the number of columns in the matrix. Thus the rank–nullity theorem for a given matrix M \in \operatorname{Mat}_{m \times n}(\mathbb{F}) immediately becomes
:\operatorname{rank}(M) + \operatorname{nullity}(M) = n.
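For example, for the 2 \times 3 matrix
:M = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \end{pmatrix}
(an illustrative choice), the first two columns are linearly independent while the third is their sum, so \operatorname{rank}(M) = 2; the null space is spanned by (1, 1, -1)^{\mathsf{T}}, so \operatorname{nullity}(M) = 1, and indeed 2 + 1 = 3 = n.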
Proofs
Here we provide two proofs. The first operates in the general case, using linear maps. The second proof looks at the homogeneous system A\mathbf{x} = \mathbf{0} for an m \times n matrix A of rank r and shows explicitly that there exists a set of n - r linearly independent solutions that span the kernel of A.
While the theorem requires that the domain of the linear map be finite-dimensional, there is no such assumption on the codomain. This means that there are linear maps not given by matrices for which the theorem applies. Despite this, the first proof is not actually more general than the second: since the image of the linear map is finite-dimensional, we can represent the map from its domain to its image by a matrix, prove the theorem for that matrix, then compose with the inclusion of the image into the full codomain.
First proof
Let V, W be vector spaces over some field \mathbb{F}, and let T \colon V \to W be defined as in the statement of the theorem, with \dim V = n.

As \ker T \subset V is a subspace, there exists a basis for it. Suppose \dim \ker T = k and let
:\mathcal{K} := \{v_1, \ldots, v_k\} \subset \ker(T)
be such a basis.

We may now, by the Steinitz exchange lemma, extend \mathcal{K} with n - k linearly independent vectors w_1, \ldots, w_{n-k} to form a full basis of V.

Let
:\mathcal{S} := \{w_1, \ldots, w_{n-k}\} \subset V \setminus \ker(T),
so that
:\mathcal{B} := \mathcal{K} \cup \mathcal{S} = \{v_1, \ldots, v_k, w_1, \ldots, w_{n-k}\} \subset V
is a basis for V.
From this, and because T(v_i) = 0_W for each basis vector of the kernel, we know that
:\operatorname{im} T = \operatorname{span} T(\mathcal{B}) = \operatorname{span}\{T(v_1), \ldots, T(v_k), T(w_1), \ldots, T(w_{n-k})\} = \operatorname{span}\{T(w_1), \ldots, T(w_{n-k})\} = \operatorname{span} T(\mathcal{S}).

We now claim that T(\mathcal{S}) is a basis for \operatorname{im} T. The above equality already states that T(\mathcal{S}) is a generating set for \operatorname{im} T; it remains to be shown that it is also linearly independent to conclude that it is a basis.

Suppose T(\mathcal{S}) is not linearly independent, and let
:\sum_{j=1}^{n-k} \alpha_j T(w_j) = 0_W
for some \alpha_j \in \mathbb{F} that are not all zero. Then, owing to the linearity of T, it follows that
:T\left(\sum_{j=1}^{n-k} \alpha_j w_j\right) = 0_W, \quad \text{and hence} \quad \sum_{j=1}^{n-k} \alpha_j w_j \in \ker T = \operatorname{span} \mathcal{K} \subset V.
This contradicts \mathcal{B} being a basis, unless all \alpha_j are equal to zero. This shows that T(\mathcal{S}) is linearly independent, and more specifically that it is a basis for \operatorname{im} T.

To summarize, we have \mathcal{K}, a basis for \ker T, and T(\mathcal{S}), a basis for \operatorname{im} T.

Finally we may state that
:\operatorname{rank}(T) + \operatorname{nullity}(T) = \dim \operatorname{im} T + \dim \ker T = |T(\mathcal{S})| + |\mathcal{K}| = (n - k) + k = n = \dim V.
This concludes our proof.
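To see the construction concretely (an illustrative example), take T \colon \mathbb{R}^3 \to \mathbb{R}^2, T(x, y, z) = (x, y). Here \ker T = \operatorname{span}\{(0, 0, 1)\}, so k = 1; extending by w_1 = (1, 0, 0) and w_2 = (0, 1, 0) yields a basis of \mathbb{R}^3, and T(w_1) = (1, 0) and T(w_2) = (0, 1) form a basis of \operatorname{im} T = \mathbb{R}^2, so that \operatorname{rank}(T) + \operatorname{nullity}(T) = 2 + 1 = 3 = \dim \mathbb{R}^3.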
Second proof
Let A be an m \times n matrix with r linearly independent columns (i.e., \operatorname{rank}(A) = r). We will show that:
# there exists a set of n - r linearly independent solutions to the homogeneous system A\mathbf{x} = \mathbf{0}, and
# every other solution is a linear combination of these n - r solutions.

To do this, we will produce an n \times (n - r) matrix X whose columns form a basis of the null space of A.

Without loss of generality, assume that the first r columns of A are linearly independent. So, we can write A = \begin{pmatrix} A_1 & A_2 \end{pmatrix}, where
* A_1 is an m \times r matrix with r linearly independent column vectors, and
* A_2 is an m \times (n - r) matrix, each of whose n - r columns is a linear combination of the columns of A_1.

This means that A_2 = A_1 B for some r \times (n - r) matrix B (see rank factorization) and, hence, A = \begin{pmatrix} A_1 & A_1 B \end{pmatrix}.

Let
:X = \begin{pmatrix} -B \\ I_{n-r} \end{pmatrix},
where I_{n-r} is the (n - r) \times (n - r) identity matrix. We note that X satisfies
:A X = \begin{pmatrix} A_1 & A_1 B \end{pmatrix} \begin{pmatrix} -B \\ I_{n-r} \end{pmatrix} = -A_1 B + A_1 B = 0_{m \times (n-r)}.
Therefore, each of the n - r columns of X is a particular solution of A\mathbf{x} = \mathbf{0}.

Furthermore, the n - r columns of X are linearly independent, because X\mathbf{u} = \mathbf{0} implies \mathbf{u} = \mathbf{0} for \mathbf{u} \in \mathbb{F}^{n-r}:
:X\mathbf{u} = \mathbf{0} \implies \begin{pmatrix} -B \\ I_{n-r} \end{pmatrix}\mathbf{u} = \begin{pmatrix} -B\mathbf{u} \\ \mathbf{u} \end{pmatrix} = \begin{pmatrix} \mathbf{0} \\ \mathbf{0} \end{pmatrix} \implies \mathbf{u} = \mathbf{0}.
Therefore, the column vectors of X constitute a set of n - r linearly independent solutions of A\mathbf{x} = \mathbf{0}.

We next prove that ''any'' solution of A\mathbf{x} = \mathbf{0} must be a linear combination of the columns of X.

For this, let
:\mathbf{u} = \begin{pmatrix} \mathbf{u}_1 \\ \mathbf{u}_2 \end{pmatrix}
be any vector in \mathbb{F}^n such that A\mathbf{u} = \mathbf{0}, where \mathbf{u}_1 consists of the first r entries of \mathbf{u} and \mathbf{u}_2 of the remaining n - r. Note that since the columns of A_1 are linearly independent, A_1\mathbf{x} = \mathbf{0} implies \mathbf{x} = \mathbf{0}. Therefore,
:A\mathbf{u} = \mathbf{0} \implies A_1\mathbf{u}_1 + A_1 B\mathbf{u}_2 = A_1(\mathbf{u}_1 + B\mathbf{u}_2) = \mathbf{0} \implies \mathbf{u}_1 + B\mathbf{u}_2 = \mathbf{0} \implies \mathbf{u}_1 = -B\mathbf{u}_2,
and hence
:\mathbf{u} = \begin{pmatrix} \mathbf{u}_1 \\ \mathbf{u}_2 \end{pmatrix} = \begin{pmatrix} -B \\ I_{n-r} \end{pmatrix}\mathbf{u}_2 = X\mathbf{u}_2.

This proves that any vector \mathbf{u} that is a solution of A\mathbf{x} = \mathbf{0} must be a linear combination of the n - r special solutions given by the columns of X. And we have already seen that the columns of X are linearly independent. Hence, the columns of X constitute a basis for the null space of A. Therefore, the nullity of A is n - r. Since r equals the rank of A, it follows that \operatorname{rank}(A) + \operatorname{nullity}(A) = n. This concludes our proof.
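As a worked instance of this construction (illustrative numbers, not from the source), take
:A = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 1 \end{pmatrix},
so m = 2, n = 3, and r = 2 with A_1 = I_2 and B = \begin{pmatrix} 1 \\ 1 \end{pmatrix}. Then
:X = \begin{pmatrix} -1 \\ -1 \\ 1 \end{pmatrix},
and one checks directly that AX = 0 and that the single column of X spans the null space of A, so \operatorname{nullity}(A) = 1 = 3 - 2.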
Reformulations and generalizations
This theorem is a statement of the first isomorphism theorem of algebra for the case of vector spaces; it generalizes to the splitting lemma.

In more modern language, the theorem can also be phrased as saying that each short exact sequence of vector spaces splits. Explicitly, given that
:0 \rightarrow U \rightarrow V \overset{T}{\rightarrow} R \rightarrow 0
is a short exact sequence of vector spaces, then U \oplus R \cong V, hence
:\dim(U) + \dim(R) = \dim(V).
Here ''R'' plays the role of im ''T'' and ''U'' is ker ''T'', i.e.
:0 \rightarrow \ker T \hookrightarrow V \overset{T}{\rightarrow} \operatorname{im} T \rightarrow 0.
In the finite-dimensional case, this formulation is susceptible to a generalization: if
:0 \rightarrow V_1 \rightarrow V_2 \rightarrow \cdots \rightarrow V_r \rightarrow 0
is an exact sequence of finite-dimensional vector spaces, then
:\sum_{i=1}^{r} (-1)^i \dim(V_i) = 0.
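For example (a sequence chosen for illustration), the maps x \mapsto (x, 0, 0) and (x, y, z) \mapsto (y, z) make
:0 \rightarrow \mathbb{R} \rightarrow \mathbb{R}^3 \rightarrow \mathbb{R}^2 \rightarrow 0
an exact sequence, and correspondingly -1 + 3 - 2 = 0.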
The rank–nullity theorem for finite-dimensional vector spaces may also be formulated in terms of the ''index'' of a linear map. The index of a linear map T \in \operatorname{Hom}(V, W), where V and W are finite-dimensional, is defined by
:\operatorname{index} T = \dim \ker T - \dim \operatorname{coker} T.
Intuitively, \dim \ker T is the number of independent solutions \mathbf{x} of the equation T\mathbf{x} = \mathbf{0}, and \dim \operatorname{coker} T is the number of independent restrictions that have to be put on \mathbf{y} to make T\mathbf{x} = \mathbf{y} solvable. The rank–nullity theorem for finite-dimensional vector spaces is equivalent to the statement
:\operatorname{index} T = \dim V - \dim W.
We see that we can easily read off the index of the linear map T from the involved spaces, without any need to analyze T in detail. This effect also occurs in a much deeper result: the Atiyah–Singer index theorem states that the index of certain differential operators can be read off the geometry of the involved spaces.
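For instance (an illustrative computation), any linear map T \colon \mathbb{R}^5 \to \mathbb{R}^3 has \operatorname{index} T = 5 - 3 = 2, whatever T is: if T is surjective then \dim \ker T = 2 and \dim \operatorname{coker} T = 0, while if T has rank 2 then \dim \ker T = 3 and \dim \operatorname{coker} T = 1; in either case the difference is 2.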