Kernel of a matrix

In mathematics, the kernel of a linear map, also known as the null space or nullspace, is the linear subspace of the domain of the map which is mapped to the zero vector. That is, given a linear map ''L'' : ''V'' → ''W'' between two vector spaces ''V'' and ''W'', the kernel of ''L'' is the vector space of all elements v of ''V'' such that ''L''(v) = 0, where 0 denotes the zero vector in ''W'', or more symbolically:
:\ker(L) = \left\{ \mathbf{v} \in V \mid L(\mathbf{v}) = \mathbf{0} \right\}.


Properties

The kernel of ''L'' is a linear subspace of the domain ''V''. In the linear map L : V \to W, two elements of ''V'' have the same image in ''W'' if and only if their difference lies in the kernel of ''L'', that is,
:L\left(\mathbf{v}_1\right) = L\left(\mathbf{v}_2\right) \quad \text{if and only if} \quad L\left(\mathbf{v}_1-\mathbf{v}_2\right) = \mathbf{0}.
From this, it follows that the image of ''L'' is isomorphic to the quotient of ''V'' by the kernel:
:\operatorname{im}(L) \cong V / \ker(L).
In the case where ''V'' is finite-dimensional, this implies the rank–nullity theorem:
:\dim(\ker L) + \dim(\operatorname{im} L) = \dim(V),
where the term ''rank'' refers to the dimension of the image of ''L'', \dim(\operatorname{im} L), while ''nullity'' refers to the dimension of the kernel of ''L'', \dim(\ker L). That is,
:\operatorname{rank}(L) = \dim(\operatorname{im} L) \qquad \text{and} \qquad \operatorname{nullity}(L) = \dim(\ker L),
so that the rank–nullity theorem can be restated as
:\operatorname{rank}(L) + \operatorname{nullity}(L) = \dim\left(\operatorname{domain} L\right).
When ''V'' is an inner product space, the quotient V / \ker(L) can be identified with the orthogonal complement in ''V'' of \ker(L). This is the generalization to linear operators of the row space, or coimage, of a matrix.
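The rank–nullity theorem is easy to check numerically. The following sketch uses NumPy and SciPy; the particular 3 × 4 matrix is a hypothetical example chosen for illustration, not taken from the text above.

```python
import numpy as np
from scipy.linalg import null_space

# A hypothetical 3 x 4 example matrix (its second row is twice the first,
# so the rank is 2 and the nullity is 2).
A = np.array([[1., 2., 3., 4.],
              [2., 4., 6., 8.],
              [0., 1., 0., 1.]])

rank = np.linalg.matrix_rank(A)      # dim(im L)
nullity = null_space(A).shape[1]     # dim(ker L): columns of an orthonormal kernel basis
n = A.shape[1]                       # dim(V), the dimension of the domain

assert rank + nullity == n           # the rank-nullity theorem
print(rank, nullity, n)              # 2 2 4
```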


Application to modules

The notion of kernel also makes sense for homomorphisms of modules, which are generalizations of vector spaces where the scalars are elements of a ring, rather than a field. The domain of the mapping is a module, with the kernel constituting a submodule. Here, the concepts of rank and nullity do not necessarily apply.


In functional analysis

If ''V'' and ''W'' are topological vector spaces such that ''W'' is finite-dimensional, then a linear operator ''L'': ''V'' → ''W'' is continuous if and only if the kernel of ''L'' is a closed subspace of ''V''.


Representation as matrix multiplication

Consider a linear map represented as an ''m'' × ''n'' matrix ''A'' with coefficients in a field ''K'' (typically \mathbb{R} or \mathbb{C}), that is operating on column vectors x with ''n'' components over ''K''. The kernel of this linear map is the set of solutions to the equation ''A''x = 0, where 0 is understood as the zero vector. The dimension of the kernel of ''A'' is called the nullity of ''A''. In set-builder notation,
:\operatorname{N}(A) = \operatorname{Null}(A) = \ker(A) = \left\{\mathbf{x} \in K^n \mid A\mathbf{x} = \mathbf{0}\right\}.
The matrix equation is equivalent to a homogeneous system of linear equations:
:A\mathbf{x}=\mathbf{0} \;\;\Leftrightarrow\;\; \begin{align} a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n &= 0 \\ a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n &= 0 \\ &\;\;\vdots \\ a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n &= 0. \end{align}
Thus the kernel of ''A'' is the same as the solution set to the above homogeneous equations.


Subspace properties

The kernel of a matrix ''A'' over a field ''K'' is a linear subspace of K^n. That is, the kernel of ''A'', the set Null(''A''), has the following three properties:
# Null(''A'') always contains the zero vector, since ''A''0 = 0.
# If x ∈ Null(''A'') and y ∈ Null(''A''), then x + y ∈ Null(''A''). This follows from the distributivity of matrix multiplication over addition.
# If x ∈ Null(''A'') and ''c'' is a scalar ''c'' ∈ ''K'', then ''c''x ∈ Null(''A''), since ''A''(''c''x) = ''c''(''A''x) = ''c''0 = 0.


The row space of a matrix

The product ''A''x can be written in terms of the dot product of vectors as follows:
:A\mathbf{x} = \begin{bmatrix} \mathbf{a}_1 \cdot \mathbf{x} \\ \mathbf{a}_2 \cdot \mathbf{x} \\ \vdots \\ \mathbf{a}_m \cdot \mathbf{x} \end{bmatrix}.
Here, a_1, ..., a_m denote the rows of the matrix ''A''. It follows that x is in the kernel of ''A'' if and only if x is orthogonal (or perpendicular) to each of the row vectors of ''A'' (since orthogonality is defined as having a dot product of 0).
The row space, or coimage, of a matrix ''A'' is the span of the row vectors of ''A''. By the above reasoning, the kernel of ''A'' is the orthogonal complement to the row space. That is, a vector x lies in the kernel of ''A'' if and only if it is perpendicular to every vector in the row space of ''A''.
The dimension of the row space of ''A'' is called the rank of ''A'', and the dimension of the kernel of ''A'' is called the nullity of ''A''. These quantities are related by the rank–nullity theorem
:\operatorname{rank}(A) + \operatorname{nullity}(A) = n.
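A quick numerical check of this orthogonality, again using SciPy's null_space (the matrix is a hypothetical example; the columns returned by null_space form an orthonormal basis of the kernel):

```python
import numpy as np
from scipy.linalg import null_space

# Hypothetical 2 x 3 example matrix with linearly independent rows.
A = np.array([[1., 0., 2.],
              [0., 1., 3.]])

N = null_space(A)                    # columns span ker(A)
for row in A:                        # each row vector of A is orthogonal to the kernel
    assert np.allclose(row @ N, 0.0)
```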


Left null space

The left null space, or cokernel, of a matrix ''A'' consists of all column vectors x such that x^T''A'' = 0^T, where T denotes the transpose of a matrix. The left null space of ''A'' is the same as the kernel of ''A''^T. The left null space of ''A'' is the orthogonal complement to the column space of ''A'', and is dual to the cokernel of the associated linear transformation. The kernel, the row space, the column space, and the left null space of ''A'' are the four fundamental subspaces associated to the matrix ''A''.


Nonhomogeneous systems of linear equations

The kernel also plays a role in the solution to a nonhomogeneous system of linear equations:
:A\mathbf{x}=\mathbf{b} \quad \text{or} \quad \begin{align} a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n &= b_1 \\ a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n &= b_2 \\ &\;\;\vdots \\ a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n &= b_m. \end{align}
If u and v are two possible solutions to the above equation, then
:A(\mathbf{u}-\mathbf{v}) = A\mathbf{u} - A\mathbf{v} = \mathbf{b} - \mathbf{b} = \mathbf{0}.
Thus, the difference of any two solutions to the equation ''A''x = b lies in the kernel of ''A''. It follows that any solution to the equation ''A''x = b can be expressed as the sum of a fixed solution v and an arbitrary element of the kernel. That is, the solution set to the equation ''A''x = b is
:\left\{\mathbf{v}+\mathbf{x} \mid A\mathbf{v}=\mathbf{b} \land \mathbf{x}\in\operatorname{Null}(A)\right\}.
Geometrically, this says that the solution set to ''A''x = b is the translation of the kernel of ''A'' by the vector v. See also Fredholm alternative and flat (geometry).
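A small numerical sketch of this structure with NumPy and SciPy (the matrix and right-hand side are hypothetical; lstsq is used only to obtain one particular solution of a consistent system):

```python
import numpy as np
from scipy.linalg import null_space

# Hypothetical consistent system A x = b with a one-dimensional kernel.
A = np.array([[1., 2., 1.],
              [0., 1., 1.]])
b = np.array([3., 2.])

v = np.linalg.lstsq(A, b, rcond=None)[0]   # one particular solution
N = null_space(A)                          # basis of ker(A)

# The particular solution plus any kernel element is again a solution.
for c in (-1.0, 0.0, 2.5):
    x = v + c * N[:, 0]
    assert np.allclose(A @ x, b)
```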


Illustration

The following is a simple illustration of the computation of the kernel of a matrix (see Computation by Gaussian elimination, below, for methods better suited to more complex calculations). The illustration also touches on the row space and its relation to the kernel.
Consider the matrix
:A = \begin{bmatrix} 2 & 3 & 5 \\ -4 & 2 & 3 \end{bmatrix}.
The kernel of this matrix consists of all vectors (''x'', ''y'', ''z'') for which
:\begin{bmatrix} 2 & 3 & 5 \\ -4 & 2 & 3 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix},
which can be expressed as a homogeneous system of linear equations involving ''x'', ''y'', and ''z'':
:\begin{align} 2x + 3y + 5z &= 0, \\ -4x + 2y + 3z &= 0. \end{align}
The same linear equations can also be written in matrix form as:
:\left[\begin{array}{ccc|c} 2 & 3 & 5 & 0 \\ -4 & 2 & 3 & 0 \end{array}\right].
Through Gauss–Jordan elimination, the matrix can be reduced to:
:\left[\begin{array}{ccc|c} 1 & 0 & 1/16 & 0 \\ 0 & 1 & 13/8 & 0 \end{array}\right].
Rewriting the matrix in equation form yields:
:\begin{align} x &= -\frac{1}{16}z, \\ y &= -\frac{13}{8}z. \end{align}
The elements of the kernel can be further expressed in parametric form, as follows:
:\begin{bmatrix} x \\ y \\ z \end{bmatrix} = c \begin{bmatrix} -1/16 \\ -13/8 \\ 1 \end{bmatrix} \quad (\text{where } c \in \mathbb{R}).
Since ''c'' is a free variable ranging over all real numbers, this can be expressed equally well as:
:\begin{bmatrix} x \\ y \\ z \end{bmatrix} = c \begin{bmatrix} -1 \\ -26 \\ 16 \end{bmatrix}.
The kernel of ''A'' is precisely the solution set to these equations (in this case, a line through the origin in R^3). Here, the vector (−1, −26, 16)^T constitutes a basis of the kernel of ''A''. The nullity of ''A'' is 1.
The following dot products are zero:
:\begin{bmatrix} 2 & 3 & 5 \end{bmatrix} \begin{bmatrix} -1 \\ -26 \\ 16 \end{bmatrix} = 0 \quad\text{and}\quad \begin{bmatrix} -4 & 2 & 3 \end{bmatrix} \begin{bmatrix} -1 \\ -26 \\ 16 \end{bmatrix} = 0,
which illustrates that vectors in the kernel of ''A'' are orthogonal to each of the row vectors of ''A''. These two (linearly independent) row vectors span the row space of ''A'', a plane orthogonal to the vector (−1, −26, 16)^T. With the rank 2 of ''A'', the nullity 1 of ''A'', and the dimension 3 of the domain of ''A'', we have an illustration of the rank–nullity theorem.
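The same computation can be reproduced exactly with SymPy; in this case the basis vector it returns agrees with the hand computation above up to the scaling by 16 (a sketch; in general a kernel basis is only determined up to such choices):

```python
from sympy import Matrix

A = Matrix([[2, 3, 5],
            [-4, 2, 3]])

v = A.nullspace()[0]            # here: (-1/16, -13/8, 1), as in the parametric form above
assert A * v == Matrix([0, 0])  # v lies in the kernel

# v is parallel to (-1, -26, 16), the integer basis vector used in the text.
assert v.cross(Matrix([-1, -26, 16])) == Matrix([0, 0, 0])
```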


Examples

*If ''L'': R^''m'' → R^''n'', then the kernel of ''L'' is the solution set to a homogeneous system of linear equations. As in the above illustration, if ''L'' is the operator L(x_1, x_2, x_3) = (2 x_1 + 3 x_2 + 5 x_3,\; -4 x_1 + 2 x_2 + 3 x_3), then the kernel of ''L'' is the set of solutions to the equations 2x_1 + 3x_2 + 5x_3 = 0 and -4x_1 + 2x_2 + 3x_3 = 0.
*Let ''C''[0,1] denote the vector space of all continuous real-valued functions on the interval [0,1], and define ''L'': ''C''[0,1] → R by the rule L(f) = f(0.3). Then the kernel of ''L'' consists of all functions f ∈ ''C''[0,1] for which f(0.3) = 0.
*Let ''C''^∞(R) be the vector space of all infinitely differentiable functions R → R, and let ''D'': ''C''^∞(R) → ''C''^∞(R) be the differentiation operator: D(f) = \frac{df}{dx}. Then the kernel of ''D'' consists of all functions in ''C''^∞(R) whose derivatives are zero, i.e. the set of all constant functions (see the sketch after this list).
*Let R^∞ be the direct product of infinitely many copies of R, and let ''s'': R^∞ → R^∞ be the shift operator s(x_1, x_2, x_3, x_4, \ldots) = (x_2, x_3, x_4, \ldots). Then the kernel of ''s'' is the one-dimensional subspace consisting of all vectors (x_1, 0, 0, \ldots).
*If ''V'' is an inner product space and ''W'' is a subspace, the kernel of the orthogonal projection ''V'' → ''W'' is the orthogonal complement to ''W'' in ''V''.
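As a small symbolic check of the differentiation-operator example, SymPy's ODE solver recovers that the kernel of ''D'' consists exactly of the constant functions (a sketch; f is just a placeholder name):

```python
from sympy import Function, Symbol, Eq, dsolve

x = Symbol('x')
f = Function('f')

# Kernel of the differentiation operator D: all f with f'(x) = 0.
solution = dsolve(Eq(f(x).diff(x), 0), f(x))
print(solution)   # Eq(f(x), C1): the constant functions
```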


Computation by Gaussian elimination

A basis of the kernel of a matrix may be computed by Gaussian elimination. For this purpose, given an ''m'' × ''n'' matrix ''A'', we construct first the row augmented matrix \begin{bmatrix}A\\\hline I\end{bmatrix}, where ''I'' is the ''n'' × ''n'' identity matrix.
Computing its column echelon form by Gaussian elimination (or any other suitable method), we get a matrix \begin{bmatrix}B\\\hline C\end{bmatrix}. A basis of the kernel of ''A'' consists of the non-zero columns of ''C'' such that the corresponding column of ''B'' is a zero column.
In fact, the computation may be stopped as soon as the upper matrix is in column echelon form: the remainder of the computation consists in changing the basis of the vector space generated by the columns whose upper part is zero.
For example, suppose that
:A=\begin{bmatrix} 1 & 0 & -3 & 0 & 2 & -8 \\ 0 & 1 & 5 & 0 & -1 & 4 \\ 0 & 0 & 0 & 1 & 7 & -9 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}.
Then
:\begin{bmatrix}A\\\hline I\end{bmatrix}= \begin{bmatrix} 1 & 0 & -3 & 0 & 2 & -8 \\ 0 & 1 & 5 & 0 & -1 & 4 \\ 0 & 0 & 0 & 1 & 7 & -9 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ \hline 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}.
Putting the upper part in column echelon form by column operations on the whole matrix gives
:\begin{bmatrix}B \\ \hline C\end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ \hline 1 & 0 & 0 & 3 & -2 & 8 \\ 0 & 1 & 0 & -5 & 1 & -4 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & -7 & 9 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}.
The last three columns of ''B'' are zero columns. Therefore, the three last vectors of ''C'',
:\begin{bmatrix} 3 \\ -5 \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix},\; \begin{bmatrix} -2 \\ 1 \\ 0 \\ -7 \\ 1 \\ 0 \end{bmatrix},\; \begin{bmatrix} 8 \\ -4 \\ 0 \\ 9 \\ 0 \\ 1 \end{bmatrix}
are a basis of the kernel of ''A''.
Proof that the method computes the kernel: Since column operations correspond to post-multiplication by invertible matrices, the fact that \begin{bmatrix}A\\\hline I\end{bmatrix} reduces to \begin{bmatrix}B \\ \hline C\end{bmatrix} means that there exists an invertible matrix ''P'' such that \begin{bmatrix}A\\ \hline I\end{bmatrix} P = \begin{bmatrix}B \\ \hline C\end{bmatrix}, with ''B'' in column echelon form. Thus AP=B, IP=C, and AC=B. A column vector \mathbf{v} belongs to the kernel of ''A'' (that is, A \mathbf{v} = \mathbf{0}) if and only if B\mathbf{w}=\mathbf{0}, where \mathbf{w} = P^{-1} \mathbf{v} = C^{-1} \mathbf{v}. As ''B'' is in column echelon form, B \mathbf{w} = \mathbf{0} if and only if the nonzero entries of \mathbf{w} correspond to the zero columns of ''B''. By multiplying by ''C'', one may deduce that this is the case if and only if \mathbf{v} = C \mathbf{w} is a linear combination of the corresponding columns of ''C''.
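A minimal SymPy sketch of this method: column operations on the block matrix [A; I] correspond to row operations on its transpose, so a column echelon form is obtained here by transposing, row-reducing, and transposing back. The helper name kernel_basis is a hypothetical choice made for illustration, not an optimized routine.

```python
from sympy import Matrix, eye, zeros

def kernel_basis(A):
    """Kernel basis from the columns of C whose corresponding column of B is zero (a sketch)."""
    m, n = A.shape
    M = Matrix.vstack(A, eye(n))           # the block matrix [A; I]
    # Column reduction of M == row reduction of M.T, transposed back.
    E = M.T.rref(pivots=False).T           # [B; C], with B in (reduced) column echelon form
    B, C = E[:m, :], E[m:, :]
    return [C[:, j] for j in range(n) if B[:, j] == zeros(m, 1)]

A = Matrix([[1, 0, -3, 0,  2, -8],
            [0, 1,  5, 0, -1,  4],
            [0, 0,  0, 1,  7, -9],
            [0, 0,  0, 0,  0,  0]])

basis = kernel_basis(A)
assert len(basis) == 3                     # the nullity of A, as in the worked example
for v in basis:
    assert A * v == zeros(4, 1)
```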


Numerical computation

The problem of computing the kernel on a computer depends on the nature of the coefficients.


Exact coefficients

If the coefficients of the matrix are exactly given numbers, the column echelon form of the matrix may be computed with the Bareiss algorithm more efficiently than with Gaussian elimination. It is even more efficient to use modular arithmetic and the Chinese remainder theorem, which reduces the problem to several similar ones over finite fields (this avoids the overhead induced by the non-linearity of the computational complexity of integer multiplication).
For coefficients in a finite field, Gaussian elimination works well, but for the large matrices that occur in cryptography and Gröbner basis computation, better algorithms are known, which have roughly the same computational complexity, but are faster and behave better with modern computer hardware.


Floating point computation

For matrices whose entries are floating-point numbers, the problem of computing the kernel makes sense only for matrices such that the number of rows is equal to their rank: because of rounding errors, a floating-point matrix almost always has full rank, even when it is an approximation of a matrix of a much smaller rank. Even for a full-rank matrix, it is possible to compute its kernel only if it is well conditioned, i.e. it has a low condition number.
Even for a well-conditioned full-rank matrix, Gaussian elimination does not behave correctly: it introduces rounding errors that are too large for getting a significant result. As the computation of the kernel of a matrix is a special instance of solving a homogeneous system of linear equations, the kernel may be computed by any of the various algorithms designed to solve homogeneous systems. State-of-the-art software for this purpose is the LAPACK library.
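In floating point, a standard numerically stable approach is to take a singular value decomposition and keep the right singular vectors whose singular values fall below a tolerance; LAPACK-backed routines such as scipy.linalg.null_space package the same idea. The sketch below does this with NumPy; the cutoff rcond and the helper name null_space_svd are illustrative choices.

```python
import numpy as np

def null_space_svd(A, rcond=1e-12):
    """Approximate kernel basis of a floating-point matrix via the SVD (a sketch)."""
    u, s, vh = np.linalg.svd(A)
    tol = rcond * (s.max() if s.size else 0.0)   # singular values below tol count as zero
    rank = int((s > tol).sum())
    return vh[rank:].T            # remaining right singular vectors span ker(A)

A = np.array([[2., 3., 5.],
              [-4., 2., 3.]])     # the matrix from the illustration above
N = null_space_svd(A)
print(N.shape[1])                 # 1: the nullity
print(np.allclose(A @ N, 0.0))    # True
```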


See also

* Kernel (algebra)
* Zero set
* System of linear equations
* Row and column spaces
* Row reduction
* Four fundamental subspaces
* Vector space
* Linear subspace
* Linear operator
* Function space
* Fredholm alternative




External links

* Khan Academy: Introduction to the Null Space of a Matrix