In linear algebra, a Jordan normal form, also known as a Jordan canonical form (JCF), is an upper triangular matrix of a particular form called a Jordan matrix representing a linear operator on a finite-dimensional vector space with respect to some basis. Such a matrix has each non-zero off-diagonal entry equal to 1, immediately above the main diagonal (on the superdiagonal), and with identical diagonal entries to the left and below them.

Let ''V'' be a vector space over a field ''K''. Then a basis with respect to which the matrix has the required form exists if and only if all eigenvalues of the matrix lie in ''K'', or equivalently if the characteristic polynomial of the operator splits into linear factors over ''K''. This condition is always satisfied if ''K'' is algebraically closed (for instance, if it is the field of complex numbers). The diagonal entries of the normal form are the eigenvalues (of the operator), and the number of times each eigenvalue occurs is called the algebraic multiplicity of the eigenvalue.

If the operator is originally given by a square matrix ''M'', then its Jordan normal form is also called the Jordan normal form of ''M''. Any square matrix has a Jordan normal form if the field of coefficients is extended to one containing all the eigenvalues of the matrix. In spite of its name, the normal form for a given ''M'' is not entirely unique, as it is a block diagonal matrix formed of Jordan blocks, the order of which is not fixed; it is conventional to group blocks for the same eigenvalue together, but no ordering is imposed among the eigenvalues, nor among the blocks for a given eigenvalue, although the latter could for instance be ordered by weakly decreasing size.

The Jordan–Chevalley decomposition is particularly simple with respect to a basis for which the operator takes its Jordan normal form. The diagonal form for diagonalizable matrices, for instance normal matrices, is a special case of the Jordan normal form.

The Jordan normal form is named after Camille Jordan, who first stated the Jordan decomposition theorem in 1870 (Brechenmacher, ''Histoire du théorème de Jordan de la décomposition matricielle (1870-1930). Formes de représentation et méthodes de décomposition'', thesis, 2007).


Overview


Notation

Some textbooks have the ones on the subdiagonal; that is, immediately below the main diagonal instead of on the superdiagonal. The eigenvalues are still on the main diagonal.


Motivation

An ''n'' × ''n'' matrix ''A'' is diagonalizable if and only if the sum of the dimensions of the eigenspaces is ''n''. Or, equivalently, if and only if ''A'' has ''n'' linearly independent eigenvectors. Not all matrices are diagonalizable; matrices that are not diagonalizable are called defective matrices. Consider the following matrix:

:A = \begin{pmatrix} 5 & 4 & 2 & 1 \\ 0 & 1 & -1 & -1 \\ -1 & -1 & 3 & 0 \\ 1 & 1 & -1 & 2 \end{pmatrix}.

Including multiplicity, the eigenvalues of ''A'' are ''λ'' = 1, 2, 4, 4. The dimension of the eigenspace corresponding to the eigenvalue 4 is 1 (and not 2), so ''A'' is not diagonalizable. However, there is an invertible matrix ''P'' such that ''J'' = ''P''−1''AP'', where

:J = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 4 & 1 \\ 0 & 0 & 0 & 4 \end{pmatrix}.

The matrix ''J'' is almost diagonal. This is the Jordan normal form of ''A''. The section ''Example'' below fills in the details of the computation.


Complex matrices

In general, a square complex matrix ''A'' is similar to a block diagonal matrix

:J = \begin{pmatrix} J_1 & & \\ & \ddots & \\ & & J_p \end{pmatrix}

where each block ''J''''i'' is a square matrix of the form

:J_i = \begin{pmatrix} \lambda_i & 1 & & \\ & \lambda_i & \ddots & \\ & & \ddots & 1 \\ & & & \lambda_i \end{pmatrix}.

So there exists an invertible matrix ''P'' such that ''P''−1''AP'' = ''J'' is such that the only non-zero entries of ''J'' are on the diagonal and the superdiagonal. ''J'' is called the Jordan normal form of ''A''. Each ''J''''i'' is called a Jordan block of ''A''. In a given Jordan block, every entry on the superdiagonal is 1.

Assuming this result, we can deduce the following properties:
* Counting multiplicities, the eigenvalues of ''J'', and therefore of ''A'', are the diagonal entries.
* Given an eigenvalue ''λ''''i'', its geometric multiplicity is the dimension of ker(''A'' − ''λ''''i''''I''), where ''I'' is the identity matrix, and it is the number of Jordan blocks corresponding to ''λ''''i''.
* The sum of the sizes of all Jordan blocks corresponding to an eigenvalue ''λ''''i'' is its algebraic multiplicity.
* ''A'' is diagonalizable if and only if, for every eigenvalue ''λ'' of ''A'', its geometric and algebraic multiplicities coincide. In particular, the Jordan blocks in this case are 1 × 1 matrices; that is, scalars.
* The Jordan block corresponding to ''λ'' is of the form ''λI'' + ''N'', where ''N'' is a nilpotent matrix defined as ''N''''ij'' = ''δ''''i'',''j''−1 (where ''δ'' is the Kronecker delta). The nilpotency of ''N'' can be exploited when calculating ''f''(''A'') where ''f'' is a complex analytic function. For example, in principle the Jordan form could give a closed-form expression for the exponential exp(''A'').
* The number of Jordan blocks corresponding to ''λ''''i'' of size at least ''j'' is dim ker(''A'' − ''λ''''i''''I'')''j'' − dim ker(''A'' − ''λ''''i''''I'')''j''−1. Thus, the number of Jordan blocks of size exactly ''j'' is (see the sketch after this list)
*:2 \dim \ker (A - \lambda_i I)^j - \dim \ker (A - \lambda_i I)^{j+1} - \dim \ker (A - \lambda_i I)^{j-1}.
* Given an eigenvalue ''λ''''i'', its multiplicity in the minimal polynomial is the size of its largest Jordan block.
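The rank-counting formula above lends itself to direct computation. The following sketch (assuming SymPy, so kernel dimensions are computed exactly) counts Jordan blocks of each size for a given eigenvalue; jordan_block_sizes is an illustrative helper, not a library function.

 from sympy import Matrix, eye
 
 def jordan_block_sizes(A, lam):
     # counts[j] = number of Jordan blocks of size j for eigenvalue lam,
     # from d[j] = dim ker (A - lam*I)**j via 2*d[j] - d[j+1] - d[j-1]
     n = A.rows
     M = A - lam * eye(n)
     d = [0]
     while len(d) < 2 or d[-1] != d[-2]:
         d.append(n - (M ** len(d)).rank())
     return {j: 2*d[j] - d[j+1] - d[j-1]
             for j in range(1, len(d) - 1)
             if 2*d[j] - d[j+1] - d[j-1] > 0}
 
 A = Matrix([[5, 4, 2, 1], [0, 1, -1, -1], [-1, -1, 3, 0], [1, 1, -1, 2]])
 print(jordan_block_sizes(A, 4))   # {2: 1}: one block of size 2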


Example

Consider the matrix ''A'' from the example in the previous section. The Jordan normal form is obtained by some similarity transformation:

:P^{-1}AP = J; that is, AP = PJ.

Let ''P'' have column vectors p_i, i = 1, \ldots, 4; then

:A \begin{pmatrix} p_1 & p_2 & p_3 & p_4 \end{pmatrix} = \begin{pmatrix} p_1 & p_2 & p_3 & p_4 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 4 & 1 \\ 0 & 0 & 0 & 4 \end{pmatrix} = \begin{pmatrix} p_1 & 2p_2 & 4p_3 & p_3+4p_4 \end{pmatrix}.

We see that

:(A - 1I) p_1 = 0
:(A - 2I) p_2 = 0
:(A - 4I) p_3 = 0
:(A - 4I) p_4 = p_3.

For i = 1,2,3 we have p_i \in \ker(A-\lambda_i I); that is, p_i is an eigenvector of ''A'' corresponding to the eigenvalue \lambda_i. For i=4, multiplying both sides by (A-4I) gives

:(A-4I)^2 p_4 = (A-4I) p_3.

But (A-4I)p_3 = 0, so

:(A-4I)^2 p_4 = 0.

Thus, p_4 \in \ker(A-4I)^2. Vectors such as p_4 are called generalized eigenvectors of ''A''.


Example: Obtaining the normal form

This example shows how to calculate the Jordan normal form of a given matrix. Consider the matrix

:A = \begin{pmatrix} 5 & 4 & 2 & 1 \\ 0 & 1 & -1 & -1 \\ -1 & -1 & 3 & 0 \\ 1 & 1 & -1 & 2 \end{pmatrix}

which is mentioned in the beginning of the article. The characteristic polynomial of ''A'' is

:\begin{align} \chi(\lambda) & = \det(\lambda I - A) \\ & = \lambda^4 - 11 \lambda^3 + 42 \lambda^2 - 64 \lambda + 32 \\ & = (\lambda-1)(\lambda-2)(\lambda-4)^2. \end{align}

This shows that the eigenvalues are 1, 2, 4 and 4, according to algebraic multiplicity. The eigenspace corresponding to the eigenvalue 1 can be found by solving the equation ''Av'' = ''λv''. It is spanned by the column vector ''v'' = (−1, 1, 0, 0)T. Similarly, the eigenspace corresponding to the eigenvalue 2 is spanned by ''w'' = (1, −1, 0, 1)T. Finally, the eigenspace corresponding to the eigenvalue 4 is also one-dimensional (even though this is a double eigenvalue) and is spanned by ''x'' = (1, 0, −1, 1)T. So, the geometric multiplicity (that is, the dimension of the eigenspace of the given eigenvalue) of each of the three eigenvalues is one. Therefore, the two eigenvalues equal to 4 correspond to a single Jordan block, and the Jordan normal form of the matrix ''A'' is the direct sum

:J = J_1(1) \oplus J_1(2) \oplus J_2(4) = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 4 & 1 \\ 0 & 0 & 0 & 4 \end{pmatrix}.

There are three Jordan chains. Two have length one: {''v''} and {''w''}, corresponding to the eigenvalues 1 and 2, respectively. There is one chain of length two corresponding to the eigenvalue 4. To find this chain, calculate

:\ker(A-4I)^2 = \operatorname{span} \left\{ \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 0 \\ 1 \\ -1 \end{pmatrix} \right\}

where ''I'' is the 4 × 4 identity matrix. Pick a vector in the above span that is not in the kernel of ''A'' − 4''I''; for example, ''y'' = (1, 0, 0, 0)T. Now, (''A'' − 4''I'')''y'' = ''x'' and (''A'' − 4''I'')''x'' = 0, so {''y'', ''x''} is a chain of length two corresponding to the eigenvalue 4. The transition matrix ''P'' such that ''P''−1''AP'' = ''J'' is formed by putting these vectors next to each other as follows:

:P = \begin{pmatrix} v & w & x & y \end{pmatrix} = \begin{pmatrix} -1 & 1 & 1 & 1 \\ 1 & -1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 1 & 1 & 0 \end{pmatrix}.

A computation shows that the equation ''P''−1''AP'' = ''J'' indeed holds:

:P^{-1}AP = J = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 4 & 1 \\ 0 & 0 & 0 & 4 \end{pmatrix}.

If we had interchanged the order in which the chain vectors appeared, that is, changing the order of ''v'', ''w'' and {''x'', ''y''} together, the Jordan blocks would be interchanged. However, the Jordan forms are equivalent Jordan forms.
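The final verification is mechanical; a minimal SymPy check, assembling ''P'' from the chain vectors ''v'', ''w'', ''x'', ''y'' found above:

 from sympy import Matrix
 
 A = Matrix([[5, 4, 2, 1], [0, 1, -1, -1], [-1, -1, 3, 0], [1, 1, -1, 2]])
 P = Matrix([[-1,  1,  1, 1],
             [ 1, -1,  0, 0],
             [ 0,  0, -1, 0],
             [ 0,  1,  1, 0]])   # columns: v, w, x, y
 print(P.inv() * A * P)          # reproduces the Jordan matrix J above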


Generalized eigenvectors

Given an eigenvalue ''λ'', every corresponding Jordan block gives rise to a Jordan chain of linearly independent vectors ''p''''i'', ''i'' = 1, ..., ''b'', where ''b'' is the size of the Jordan block. The generator, or lead vector, ''p''''b'' of the chain is a generalized eigenvector such that (''A'' − ''λI'')''b''''p''''b'' = 0. The vector ''p''1 = (''A'' − ''λI'')''b''−1''p''''b'' is an ordinary eigenvector corresponding to ''λ''. In general, ''p''''i'' is a preimage of ''p''''i''−1 under ''A'' − ''λI''. So the lead vector generates the chain via multiplication by ''A'' − ''λI''. Therefore the statement that every square matrix ''A'' can be put in Jordan normal form is equivalent to the claim that the underlying vector space has a basis composed of Jordan chains.
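A chain can thus be generated from its lead vector by repeated multiplication. The sketch below (SymPy; the lead vector is the ''y'' = (1, 0, 0, 0)T found for the running example) walks down the chain until it reaches zero.

 from sympy import Matrix, eye, zeros
 
 A = Matrix([[5, 4, 2, 1], [0, 1, -1, -1], [-1, -1, 3, 0], [1, 1, -1, 2]])
 M = A - 4 * eye(4)
 
 chain, v = [], Matrix([1, 0, 0, 0])   # lead vector of the chain for lambda = 4
 while v != zeros(4, 1):
     chain.append(v)
     v = M * v                         # next vector down the chain
 
 # chain == [y, x]; the last entry is an ordinary eigenvector
 print([list(c) for c in chain])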


A proof

We give a proof by induction that any complex-valued square matrix ''A'' may be put in Jordan normal form. Since the underlying vector space can be shown to be the direct sum of invariant subspaces associated with the eigenvalues, ''A'' can be assumed to have just one eigenvalue ''λ''. The 1 × 1 case is trivial. Let ''A'' be an ''n'' × ''n'' matrix. The range of ''A'' − ''λI'', denoted by Ran(''A'' − ''λI''), is an invariant subspace of ''A''. Also, since ''λ'' is an eigenvalue of ''A'', the dimension of Ran(''A'' − ''λI''), ''r'', is strictly less than ''n'', so, by the inductive hypothesis, Ran(''A'' − ''λI'') has a basis composed of Jordan chains.

Next consider the kernel, that is, the subspace ker(''A'' − ''λI''). If

:\operatorname{Ran}(A - \lambda I) \cap \ker(A - \lambda I) = \{0\},

the desired result follows immediately from the rank–nullity theorem. (This would be the case, for example, if ''A'' were Hermitian.)

Otherwise, if

:Q = \operatorname{Ran}(A - \lambda I) \cap \ker(A - \lambda I) \neq \{0\},

let the dimension of ''Q'' be ''s'' ≤ ''r''. Each vector in ''Q'' is an eigenvector, so Ran(''A'' − ''λI'') must contain ''s'' Jordan chains corresponding to ''s'' linearly independent eigenvectors. Therefore the basis must contain ''s'' vectors, say {''p''''r''−''s''+1, ..., ''p''''r''}, that are lead vectors of these Jordan chains. We can "extend the chains" by taking the preimages of these lead vectors. (This is the key step.) Let ''q''''i'' be such that

:(A - \lambda I) q_i = p_i \quad \text{for } i = r-s+1, \ldots, r.

The set {''q''''i''}, being preimages of the linearly independent set {''p''''i''} under ''A'' − ''λI'', is also linearly independent. Clearly no non-trivial linear combination of the ''q''''i'' can lie in ker(''A'' − ''λI''): applying ''A'' − ''λI'' to such a combination would give a non-trivial linear combination of the ''p''''i'', which cannot vanish since {''p''''r''−''s''+1, ..., ''p''''r''} is linearly independent. Furthermore, no non-trivial linear combination of the ''q''''i'' can belong to Ran(''A'' − ''λI''), because it would then be a linear combination of the basis vectors ''p''1, ..., ''p''''r'', and this linear combination would have a contribution of basis vectors not in ker(''A'' − ''λI''), because otherwise it would belong to ker(''A'' − ''λI''). The action of ''A'' − ''λI'' on both linear combinations would then produce an equality of a non-trivial linear combination of lead vectors and such a linear combination of non-lead vectors, which would contradict the linear independence of (''p''1, ..., ''p''''r'').

Finally, we can pick any linearly independent set {''z''1, ..., ''z''''t''} whose projection spans

:\ker(A - \lambda I) / Q.

Each ''z''''i'' forms a Jordan chain of length 1. By construction, the union of the three sets {''p''1, ..., ''p''''r''}, {''q''''r''−''s''+1, ..., ''q''''r''}, and {''z''1, ..., ''z''''t''} is linearly independent, and its members combine to form Jordan chains. By the rank–nullity theorem, the cardinality of the union is ''n''. In other words, we have found a basis composed of Jordan chains, and this shows ''A'' can be put in Jordan normal form.


Uniqueness

It can be shown that the Jordan normal form of a given matrix ''A'' is unique up to the order of the Jordan blocks. Knowing the algebraic and geometric multiplicities of the eigenvalues is not sufficient to determine the Jordan normal form of ''A''. Assuming the algebraic multiplicity ''m''(''λ'') of an eigenvalue ''λ'' is known, the structure of the Jordan form can be ascertained by analyzing the ranks of the powers (''A'' − ''λI'')''m''(''λ''). To see this, suppose an ''n'' × ''n'' matrix ''A'' has only one eigenvalue ''λ''. So ''m''(''λ'') = ''n''. The smallest integer ''k''1 such that

:(A - \lambda I)^{k_1} = 0

is the size of the largest Jordan block in the Jordan form of ''A''. (This number ''k''1 is also called the index of ''λ''. See discussion in a following section.) The rank of

:(A - \lambda I)^{k_1 - 1}

is the number of Jordan blocks of size ''k''1. Similarly, the rank of

:(A - \lambda I)^{k_1 - 2}

is twice the number of Jordan blocks of size ''k''1 plus the number of Jordan blocks of size ''k''1 − 1. The general case is similar.

This can be used to show the uniqueness of the Jordan form. Let ''J''1 and ''J''2 be two Jordan normal forms of ''A''. Then ''J''1 and ''J''2 are similar and have the same spectrum, including algebraic multiplicities of the eigenvalues. The procedure outlined in the previous paragraph can be used to determine the structure of these matrices. Since the rank of a matrix is preserved by similarity transformation, there is a bijection between the Jordan blocks of ''J''1 and ''J''2. This proves the uniqueness part of the statement.
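A concrete illustration that the two multiplicities do not suffice: the two nilpotent matrices below both have the single eigenvalue 0 with algebraic multiplicity 4 and geometric multiplicity 2 (two blocks each), yet their Jordan forms differ; the ranks of the squares distinguish them, as the rank argument above predicts (a SymPy sketch).

 from sympy import Matrix
 
 # Jordan matrices with blocks of sizes (2, 2) and (3, 1), eigenvalue 0
 J22 = Matrix([[0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 1], [0, 0, 0, 0]])
 J31 = Matrix([[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
 
 for J in (J22, J31):
     print(J.rank(), (J**2).rank())   # prints 2 0, then 2 1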


Real matrices

If ''A'' is a real matrix, its Jordan form can still be non-real. Instead of representing it with complex eigenvalues and 1s on the superdiagonal, as discussed above, there exists a real invertible matrix ''P'' such that ''P''−1''AP'' = ''J'' is a real block diagonal matrix with each block being a real Jordan block. A real Jordan block is either identical to a complex Jordan block (if the corresponding eigenvalue \lambda_i is real), or is a block matrix itself, consisting of 2 × 2 blocks (for a non-real eigenvalue \lambda_i = a_i+ib_i with given algebraic multiplicity) of the form

:C_i = \begin{pmatrix} a_i & -b_i \\ b_i & a_i \end{pmatrix}

which describe multiplication by \lambda_i in the complex plane. The superdiagonal blocks are 2 × 2 identity matrices, and hence in this representation the matrix dimensions are larger than those of the complex Jordan form. The full real Jordan block is given by

:J_i = \begin{pmatrix} C_i & I & & \\ & C_i & \ddots & \\ & & \ddots & I \\ & & & C_i \end{pmatrix}.

This real Jordan form is a consequence of the complex Jordan form. For a real matrix the non-real eigenvectors and generalized eigenvectors can always be chosen to form complex conjugate pairs. Taking the real and imaginary parts (linear combinations of the vector and its conjugate), the matrix has this form with respect to the new basis.
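A small numerical sanity check (assuming NumPy): the 4 × 4 real Jordan block for ''λ'' = 2 + 3''i'' has eigenvalues 2 ± 3''i'', each repeated twice, matching a pair of conjugate 2 × 2 complex Jordan blocks.

 import numpy as np
 
 a, b = 2.0, 3.0
 C = np.array([[a, -b], [b, a]])       # multiplication by a + bi
 
 # real Jordan block: two copies of C with a 2x2 identity on the superdiagonal
 J = np.block([[C, np.eye(2)],
               [np.zeros((2, 2)), C]])
 print(np.linalg.eigvals(J))           # 2+3j, 2-3j each twice (up to ordering)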


Matrices with entries in a field

Jordan reduction can be extended to any square matrix ''M'' whose entries lie in a field ''K''. The result states that any ''M'' can be written as a sum ''D'' + ''N'' where ''D'' is semisimple, ''N'' is nilpotent, and ''DN'' = ''ND''. This is called the Jordan–Chevalley decomposition. Whenever ''K'' contains the eigenvalues of ''M'', in particular when ''K'' is algebraically closed, the normal form can be expressed explicitly as the direct sum of Jordan blocks.

Similar to the case when ''K'' is the complex numbers, knowing the dimensions of the kernels of (''M'' − ''λI'')''k'' for 1 ≤ ''k'' ≤ ''m'', where ''m'' is the algebraic multiplicity of the eigenvalue ''λ'', allows one to determine the Jordan form of ''M''. We may view the underlying vector space ''V'' as a ''K''[''x'']-module by regarding the action of ''x'' on ''V'' as application of ''M'' and extending by ''K''-linearity. Then the polynomials (''x'' − ''λ'')''k'' are the elementary divisors of ''M'', and the Jordan normal form is concerned with representing ''M'' in terms of blocks associated to the elementary divisors.

The proof of the Jordan normal form is usually carried out as an application to the ring ''K''[''x''] of the structure theorem for finitely generated modules over a principal ideal domain, of which it is a corollary.
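For the running example the Jordan–Chevalley decomposition can be read off from the Jordan form: ''D'' = ''P'' diag(1, 2, 4, 4) ''P''−1 and ''N'' = ''A'' − ''D''. A hedged SymPy sketch:

 from sympy import Matrix, diag
 
 A = Matrix([[5, 4, 2, 1], [0, 1, -1, -1], [-1, -1, 3, 0], [1, 1, -1, 2]])
 P, J = A.jordan_form()
 
 D = P * diag(*[J[i, i] for i in range(4)]) * P.inv()   # semisimple part
 N = A - D                                              # nilpotent part
 assert N**2 == Matrix.zeros(4, 4)   # largest Jordan block has size 2
 assert D*N == N*D                   # the two parts commute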


Consequences

One can see that the Jordan normal form is essentially a classification result for square matrices, and as such several important results from linear algebra can be viewed as its consequences.


Spectral mapping theorem

Using the Jordan normal form, direct calculation gives a spectral mapping theorem for the polynomial functional calculus: Let ''A'' be an ''n'' × ''n'' matrix with eigenvalues ''λ''1, ..., ''λ''''n'', then for any polynomial ''p'', ''p''(''A'') has eigenvalues ''p''(''λ''1), ..., ''p''(''λ''''n'').
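A quick numerical illustration (NumPy, floating point, so equality holds up to rounding): for ''p''(''z'') = ''z''2 − 3''z'', the eigenvalues of ''p''(''A'') are ''p'' applied to the eigenvalues of ''A''.

 import numpy as np
 
 A = np.array([[5, 4, 2, 1], [0, 1, -1, -1], [-1, -1, 3, 0], [1, 1, -1, 2]], float)
 p = lambda M: M @ M - 3 * M          # p(z) = z**2 - 3z, applied to a matrix
 
 eigs = np.linalg.eigvals(A)          # 1, 2, 4, 4 up to rounding
 print(np.sort(np.linalg.eigvals(p(A))))
 print(np.sort(eigs**2 - 3 * eigs))   # the same multiset: -2, -2, 4, 4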


Characteristic polynomial

The characteristic polynomial of ''A'' is p_A(\lambda)=\det(\lambda I-A). Similar matrices have the same characteristic polynomial. Therefore, p_A(\lambda)=p_J(\lambda)=\prod_i (\lambda-\lambda_i)^{m_i}, where \lambda_i is the ''i''th root of p_J and m_i is its multiplicity, because this is clearly the characteristic polynomial of the Jordan form of ''A''.
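For the running example, a one-line SymPy check:

 from sympy import Matrix, symbols, factor
 
 lam = symbols('lambda')
 A = Matrix([[5, 4, 2, 1], [0, 1, -1, -1], [-1, -1, 3, 0], [1, 1, -1, 2]])
 # prints (lambda - 1)*(lambda - 2)*(lambda - 4)**2, up to ordering of factors
 print(factor(A.charpoly(lam).as_expr()))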


Cayley–Hamilton theorem

The Cayley–Hamilton theorem asserts that every matrix ''A'' satisfies its characteristic equation: if p_A is the characteristic polynomial of ''A'', then p_A(A)=0. This can be shown via direct calculation in the Jordan form, since if \lambda_i is an eigenvalue of multiplicity m_i, then its Jordan block J_i clearly satisfies (J_i-\lambda_i I)^{m_i}=0. As the diagonal blocks do not affect each other, the ''i''th diagonal block of (A-\lambda_i I)^{m_i} is (J_i-\lambda_i I)^{m_i}=0; hence

:p_A(A)=\prod_i (A-\lambda_i I)^{m_i}=0.

The Jordan form can be assumed to exist over a field extending the base field of the matrix, for instance over the splitting field of p_A; this field extension does not change the matrix in any way.
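A direct check for the running example (SymPy): substituting ''A'' into its factored characteristic polynomial gives the zero matrix.

 from sympy import Matrix, eye
 
 A = Matrix([[5, 4, 2, 1], [0, 1, -1, -1], [-1, -1, 3, 0], [1, 1, -1, 2]])
 I = eye(4)
 
 # p_A(lambda) = (lambda - 1)(lambda - 2)(lambda - 4)**2, evaluated at A
 assert (A - I) * (A - 2*I) * (A - 4*I)**2 == Matrix.zeros(4, 4)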


Minimal polynomial

The minimal polynomial ''P'' of a square matrix ''A'' is the unique monic polynomial of least degree, ''m'', such that ''P''(''A'') = 0. Alternatively, the set of polynomials that annihilate a given ''A'' form an ideal ''I'' in '''C'''[''x''], the principal ideal domain of polynomials with complex coefficients. The monic element that generates ''I'' is precisely ''P''.

Let ''λ''1, ..., ''λ''''q'' be the distinct eigenvalues of ''A'', and ''s''''i'' be the size of the largest Jordan block corresponding to ''λ''''i''. It is clear from the Jordan normal form that the minimal polynomial of ''A'' has degree \sum_i s_i.

While the Jordan normal form determines the minimal polynomial, the converse is not true. This leads to the notion of elementary divisors. The elementary divisors of a square matrix ''A'' are the characteristic polynomials of its Jordan blocks. The factors of the minimal polynomial ''m'' are the elementary divisors of the largest degree corresponding to distinct eigenvalues. The degree of an elementary divisor is the size of the corresponding Jordan block, therefore the dimension of the corresponding invariant subspace. If all elementary divisors are linear, ''A'' is diagonalizable.


Invariant subspace decompositions

The Jordan form of an ''n'' × ''n'' matrix ''A'' is block diagonal, and therefore gives a decomposition of the ''n''-dimensional space into invariant subspaces of ''A''. Every Jordan block ''J''''i'' corresponds to an invariant subspace ''X''''i''. Symbolically, we put

:\mathbb{C}^n = \bigoplus_{i=1}^k X_i

where each ''X''''i'' is the span of the corresponding Jordan chain, and ''k'' is the number of Jordan chains.

One can also obtain a slightly different decomposition via the Jordan form. Given an eigenvalue ''λ''''i'', the size of its largest corresponding Jordan block ''s''''i'' is called the index of ''λ''''i'' and denoted by ''ν''(''λ''''i''). (Therefore, the degree of the minimal polynomial is the sum of all indices.) Define a subspace ''Y''''i'' by

:Y_i = \ker(\lambda_i I - A)^{\nu(\lambda_i)}.

This gives the decomposition

:\mathbb{C}^n = \bigoplus_{i=1}^l Y_i

where ''l'' is the number of distinct eigenvalues of ''A''. Intuitively, we glob together the Jordan block invariant subspaces corresponding to the same eigenvalue. In the extreme case where ''A'' is a multiple of the identity matrix we have ''k'' = ''n'' and ''l'' = 1.

The projection onto ''Y''''i'' and along all the other ''Y''''j'' (''j'' ≠ ''i'') is called the spectral projection of ''A'' at ''λ''''i'' and is usually denoted by ''P''(''λ''''i''; ''A''). Spectral projections are mutually orthogonal in the sense that ''P''(''λ''''i''; ''A'') ''P''(''λ''''j''; ''A'') = 0 if ''i'' ≠ ''j''. Also they commute with ''A'' and their sum is the identity matrix. Replacing every ''λ''''i'' in the Jordan matrix ''J'' by one and zeroing all other entries gives ''P''(''λ''''i''; ''J''); moreover, if ''U J U''−1 is the similarity transformation such that ''A'' = ''U J U''−1, then ''P''(''λ''''i''; ''A'') = ''U P''(''λ''''i''; ''J'') ''U''−1. Spectral projections are not confined to finite dimensions. See below for their application to compact operators, and the holomorphic functional calculus for a more general discussion.

Comparing the two decompositions, notice that, in general, ''l'' ≤ ''k''. When ''A'' is normal, the subspaces ''X''''i'' in the first decomposition are one-dimensional and mutually orthogonal. This is the spectral theorem for normal operators. The second decomposition generalizes more easily for general compact operators on Banach spaces.

It might be of interest here to note some properties of the index, ''ν''(''λ''). More generally, for a complex number ''λ'', its index can be defined as the least non-negative integer ''ν''(''λ'') such that

:\ker(\lambda I - A)^{\nu(\lambda)} = \ker(\lambda I - A)^m, \quad \forall m \geq \nu(\lambda).

So ''ν''(''λ'') > 0 if and only if ''λ'' is an eigenvalue of ''A''. In the finite-dimensional case, ''ν''(''λ'') ≤ the algebraic multiplicity of ''λ''.
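The recipe for spectral projections can be carried out directly. A sketch for the running example (SymPy), building ''P''(4; ''A'') from the Jordan form:

 from sympy import Matrix
 
 A = Matrix([[5, 4, 2, 1], [0, 1, -1, -1], [-1, -1, 3, 0], [1, 1, -1, 2]])
 U, J = A.jordan_form()   # A = U*J*U**-1
 
 # replace the eigenvalue-4 entries of J by 1 and zero everything else
 E = Matrix.zeros(4, 4)
 for i in range(4):
     if J[i, i] == 4:
         E[i, i] = 1
 
 P4 = U * E * U.inv()      # spectral projection P(4; A)
 assert P4 * P4 == P4      # idempotent
 assert A * P4 == P4 * A   # commutes with A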


Plane (flat) normal form

The Jordan form is used to find a normal form of matrices up to conjugacy such that normal matrices make up an algebraic variety of a low fixed degree in the ambient matrix space. Sets of representatives of matrix conjugacy classes for the Jordan normal form or rational canonical forms in general do not constitute linear or affine subspaces in the ambient matrix spaces. Vladimir Arnold posed a problem: find a canonical form of matrices over a field for which the set of representatives of matrix conjugacy classes is a union of affine linear subspaces (flats). In other words, map the set of matrix conjugacy classes injectively back into the initial set of matrices so that the image of this embedding (the set of all normal matrices) has the lowest possible degree; it is a union of shifted linear subspaces. The problem was solved for algebraically closed fields by Peteris Daugulis. The construction of a uniquely defined plane normal form of a matrix starts by considering its Jordan normal form.


Matrix functions

Iteration of the Jordan chain motivates various extensions to more abstract settings. For finite matrices, one gets matrix functions; this can be extended to compact operators and the holomorphic functional calculus, as described further below.

The Jordan normal form is the most convenient for computation of matrix functions (though it may not be the best choice for computer computations). Let ''f''(''z'') be an analytic function of a complex argument. Applying the function to an ''n'' × ''n'' Jordan block ''J'' with eigenvalue ''λ'' results in an upper triangular matrix:

:f(J) = \begin{pmatrix} f(\lambda) & f'(\lambda) & \tfrac{f''(\lambda)}{2!} & \cdots & \tfrac{f^{(n-1)}(\lambda)}{(n-1)!} \\ 0 & f(\lambda) & f'(\lambda) & \cdots & \tfrac{f^{(n-2)}(\lambda)}{(n-2)!} \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & 0 & f(\lambda) & f'(\lambda) \\ 0 & 0 & 0 & 0 & f(\lambda) \end{pmatrix},

so that the elements of the ''k''-th superdiagonal of the resulting matrix are \tfrac{f^{(k)}(\lambda)}{k!}. For a matrix of general Jordan normal form the above expression is applied to each Jordan block.

The following example shows the application to the power function ''f''(''z'') = ''z''''n'':

:\begin{pmatrix} \lambda_1 & 1 & 0 & 0 & 0 \\ 0 & \lambda_1 & 1 & 0 & 0 \\ 0 & 0 & \lambda_1 & 0 & 0 \\ 0 & 0 & 0 & \lambda_2 & 1 \\ 0 & 0 & 0 & 0 & \lambda_2 \end{pmatrix}^n = \begin{pmatrix} \lambda_1^n & \tbinom{n}{1}\lambda_1^{n-1} & \tbinom{n}{2}\lambda_1^{n-2} & 0 & 0 \\ 0 & \lambda_1^n & \tbinom{n}{1}\lambda_1^{n-1} & 0 & 0 \\ 0 & 0 & \lambda_1^n & 0 & 0 \\ 0 & 0 & 0 & \lambda_2^n & \tbinom{n}{1}\lambda_2^{n-1} \\ 0 & 0 & 0 & 0 & \lambda_2^n \end{pmatrix},

where the binomial coefficients are defined as \binom{n}{k}=\prod_{i=1}^k \frac{n+1-i}{i}. For positive integer ''n'' it reduces to the standard definition of the coefficients. For negative ''n'' the identity \binom{n}{k} = (-1)^k\binom{k-n-1}{k} may be of use.
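The superdiagonal formula is easy to test. The sketch below (SymPy) fills in ''f''(''J'') for a 3 × 3 Jordan block from the derivatives of ''f''(''z'') = e''z'' and compares with the built-in matrix exponential.

 from sympy import Matrix, symbols, exp, factorial, simplify
 
 lam = symbols('lambda')
 n = 3
 # n x n Jordan block with eigenvalue lambda
 J = Matrix(n, n, lambda i, j: lam if i == j else (1 if j == i + 1 else 0))
 
 f = exp(lam)
 fJ = Matrix.zeros(n, n)
 for k in range(n):           # k-th superdiagonal carries f^(k)(lambda)/k!
     c = f.diff(lam, k) / factorial(k)
     for i in range(n - k):
         fJ[i, i + k] = c
 
 assert simplify(fJ - J.exp()) == Matrix.zeros(n, n)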


Compact operators

A result analogous to the Jordan normal form holds for compact operators on a Banach space. One restricts to compact operators because every point ''x'' in the spectrum of a compact operator ''T'' is an eigenvalue; the only exception is when ''x'' is the limit point of the spectrum. This is not true for bounded operators in general. To give some idea of this generalization, we first reformulate the Jordan decomposition in the language of functional analysis.


Holomorphic functional calculus

Let ''X'' be a Banach space, ''L''(''X'') be the bounded operators on ''X'', and ''σ''(''T'') denote the spectrum of ''T'' ∈ ''L''(''X''). The holomorphic functional calculus is defined as follows:

Fix a bounded operator ''T''. Consider the family Hol(''T'') of complex functions that are holomorphic on some open set ''G'' containing ''σ''(''T''). Let Γ = {''γ''''i''} be a finite collection of Jordan curves such that ''σ''(''T'') lies in the ''inside'' of Γ; we define ''f''(''T'') by

:f(T) = \frac{1}{2 \pi i} \int_\Gamma f(z)(z - T)^{-1} \, dz.

The open set ''G'' could vary with ''f'' and need not be connected. The integral is defined as the limit of the Riemann sums, as in the scalar case. Although the integral makes sense for continuous ''f'', we restrict to holomorphic functions to apply the machinery from classical function theory (for example, the Cauchy integral formula). The assumption that ''σ''(''T'') lies in the inside of Γ ensures ''f''(''T'') is well defined; it does not depend on the choice of Γ. The functional calculus is the mapping Φ from Hol(''T'') to ''L''(''X'') given by

:\Phi(f) = f(T).

We will require the following properties of this functional calculus:
# Φ extends the polynomial functional calculus.
# The ''spectral mapping theorem'' holds: ''σ''(''f''(''T'')) = ''f''(''σ''(''T'')).
# Φ is an algebra homomorphism.


The finite-dimensional case

In the finite-dimensional case, ''σ''(''T'') = {''λ''''i''} is a finite discrete set in the complex plane. Let ''e''''i'' be the function that is 1 in some open neighborhood of ''λ''''i'' and 0 elsewhere. By property 3 of the functional calculus, the operator

:e_i(T)

is a projection. Moreover, let ''ν''''i'' be the index of ''λ''''i'' and

:f(z)= (z - \lambda_i)^{\nu_i}.

The spectral mapping theorem tells us

:f(T) e_i (T) = (T - \lambda_i)^{\nu_i} e_i (T)

has spectrum {0}. By property 1, ''f''(''T'') can be directly computed in the Jordan form, and by inspection, we see that the operator ''f''(''T'')''e''''i''(''T'') is the zero matrix. By property 3, ''f''(''T'') ''e''''i''(''T'') = ''e''''i''(''T'') ''f''(''T''). So ''e''''i''(''T'') is precisely the projection onto the subspace

:\operatorname{Ran}\, e_i (T) = \ker(T - \lambda_i)^{\nu_i}.

The relation

:\sum_i e_i = 1

implies

:\mathbb{C}^n = \bigoplus_i \operatorname{Ran}\, e_i (T) = \bigoplus_i \ker(T - \lambda_i)^{\nu_i}

where the index ''i'' runs through the distinct eigenvalues of ''T''. This is the invariant subspace decomposition

:\mathbb{C}^n = \bigoplus_i Y_i

given in a previous section. Each ''e''''i''(''T'') is the projection onto the subspace spanned by the Jordan chains corresponding to ''λ''''i'' and along the subspaces spanned by the Jordan chains corresponding to ''λ''''j'' for ''j'' ≠ ''i''. In other words, ''e''''i''(''T'') = ''P''(''λ''''i''; ''T''). This explicit identification of the operators ''e''''i''(''T'') in turn gives an explicit form of holomorphic functional calculus for matrices:

:For all ''f'' ∈ Hol(''T''),
:f(T) = \sum_i \sum_{k=0}^{\nu_i - 1} \frac{f^{(k)}(\lambda_i)}{k!} (T - \lambda_i)^k e_i (T).

Notice that the expression of ''f''(''T'') is a finite sum because, on each neighborhood of ''λ''''i'', we have chosen the Taylor series expansion of ''f'' centered at ''λ''''i''.
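The explicit formula can be implemented directly for matrices. A sketch (SymPy) evaluates ''f''(''T'') = exp(''T'') for the running example, reusing the spectral projections built as in the previous section; the indices ''ν''''i'' are hard-coded from the known block structure.

 from sympy import Matrix, exp, eye, factorial, simplify
 
 T = Matrix([[5, 4, 2, 1], [0, 1, -1, -1], [-1, -1, 3, 0], [1, 1, -1, 2]])
 U, J = T.jordan_form()            # T = U*J*U**-1
 
 index = {1: 1, 2: 1, 4: 2}        # nu_i = size of largest block for lambda_i
 
 fT = Matrix.zeros(4, 4)
 for lam, nu in index.items():
     E = Matrix.zeros(4, 4)        # e_i(T) = P(lambda_i; T), as before
     for i in range(4):
         if J[i, i] == lam:
             E[i, i] = 1
     P = U * E * U.inv()
     for k in range(nu):           # f = exp, so f^(k)(lambda) = exp(lambda)
         fT += exp(lam) / factorial(k) * (T - lam * eye(4))**k * P
 
 assert simplify(fT - T.exp()) == Matrix.zeros(4, 4)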


Poles of an operator

Let ''T'' be a bounded operator and ''λ'' be an isolated point of ''σ''(''T''). (As stated above, when ''T'' is compact, every point in its spectrum is an isolated point, except possibly the limit point 0.) The point ''λ'' is called a pole of operator ''T'' with order ''ν'' if the resolvent function ''R''''T'' defined by

:R_T(\lambda) = (\lambda - T)^{-1}

has a pole of order ''ν'' at ''λ''. We will show that, in the finite-dimensional case, the order of an eigenvalue coincides with its index. The result also holds for compact operators.

Consider the annular region ''A'' centered at the eigenvalue ''λ'' with sufficiently small radius ''ε'' such that the intersection of the open disc ''B''''ε''(''λ'') and ''σ''(''T'') is {''λ''}. The resolvent function ''R''''T'' is holomorphic on ''A''. Extending a result from classical function theory, ''R''''T'' has a Laurent series representation on ''A'':

:R_T(z) = \sum_{m=-\infty}^\infty a_m (\lambda - z)^m

where

:a_{-m} = - \frac{1}{2 \pi i} \int_C (\lambda - z)^{m-1} (z - T)^{-1} \, dz

and ''C'' is a small circle centered at ''λ''. By the previous discussion on the functional calculus,

:a_{-m} = -(\lambda - T)^{m-1} e_\lambda (T)

where e_\lambda is 1 on B_\varepsilon(\lambda) and 0 elsewhere. But we have shown that the smallest positive integer ''m'' such that

:a_{-m} \neq 0 \quad \text{and} \quad a_{-l} = 0 \quad \forall \; l \geq m+1

is precisely the index of ''λ'', ''ν''(''λ''). In other words, the function ''R''''T'' has a pole of order ''ν''(''λ'') at ''λ''.


Numerical analysis

If the matrix ''A'' has multiple eigenvalues, or is close to a matrix with multiple eigenvalues, then its Jordan normal form is very sensitive to perturbations. Consider for instance the matrix

:A = \begin{pmatrix} 1 & 1 \\ \varepsilon & 1 \end{pmatrix}.

If ''ε'' = 0, then the Jordan normal form is simply

:\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}.

However, for ''ε'' ≠ 0, the Jordan normal form is

:\begin{pmatrix} 1+\sqrt{\varepsilon} & 0 \\ 0 & 1-\sqrt{\varepsilon} \end{pmatrix}.

This ill conditioning makes it very hard to develop a robust numerical algorithm for the Jordan normal form, as the result depends critically on whether two eigenvalues are deemed to be equal. For this reason, the Jordan normal form is usually avoided in numerical analysis; the stable Schur decomposition or pseudospectra (see Golub & Van Loan (2014), §7.9) are better alternatives.
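A floating-point experiment (NumPy) makes the sensitivity visible: a perturbation of size ''ε'' moves the eigenvalues by about √''ε'', so even an ''ε'' near machine precision splits the double eigenvalue by roughly 10−8.

 import numpy as np
 
 for eps in (1e-16, 1e-8, 1e-4):
     A = np.array([[1.0, 1.0], [eps, 1.0]])
     lams = np.linalg.eigvals(A)
     print(eps, np.abs(lams - 1.0))   # distances from 1 are about sqrt(eps)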


See also

* Canonical basis
* Canonical form
* Frobenius normal form
* Jordan matrix
* Jordan–Chevalley decomposition
* Matrix decomposition
* Modal matrix
* Weyr canonical form




References

''Jordan Canonical Form'' article at mathworld.wolfram.com