In mathematics, the Jordan–Chevalley decomposition, named after Camille Jordan and Claude Chevalley, expresses a linear operator as the sum of its commuting semisimple part and its nilpotent part. The multiplicative decomposition expresses an invertible operator as the product of its commuting semisimple and unipotent parts. The decomposition is easy to describe when the Jordan normal form of the operator is given, but it exists under weaker hypotheses than the existence of a Jordan normal form. Analogues of the Jordan–Chevalley decomposition exist for elements of linear algebraic groups, Lie algebras, and Lie groups, and the decomposition is an important tool in the study of these objects.


Decomposition of a linear operator

Consider linear operators on a finite-dimensional vector space over a field. An operator ''T'' is semisimple if every ''T''-invariant subspace has a complementary ''T''-invariant subspace (if the underlying field is algebraically closed, this is the same as the requirement that the operator be diagonalizable). An operator ''x'' is ''nilpotent'' if some power x^m of it is the zero operator. An operator ''x'' is ''unipotent'' if ''x'' − 1 is nilpotent.

Now, let ''x'' be any operator. A Jordan–Chevalley decomposition of ''x'' is an expression of it as a sum

:x = x_s + x_n,

where x_s is semisimple, x_n is nilpotent, and x_s and x_n commute. Over a perfect field, such a decomposition exists (cf. #Proof of uniqueness and existence), the decomposition is unique, and x_s and x_n are polynomials in ''x'' with no constant terms. In particular, for any such decomposition over a perfect field, an operator that commutes with ''x'' also commutes with x_s and x_n.

If ''x'' is an invertible operator, then a multiplicative Jordan–Chevalley decomposition expresses ''x'' as a product

:x = x_s \cdot x_u,

where x_s is semisimple, x_u is unipotent, and x_s and x_u commute. Again, over a perfect field, such a decomposition exists, it is unique, and x_s and x_u are polynomials in ''x''. The multiplicative version follows from the additive one: since x_s is easily seen to be invertible,

:x = x_s + x_n = x_s\left(1 + x_s^{-1} x_n\right)

and 1 + x_s^{-1} x_n is unipotent. (Conversely, by the same type of argument, one can deduce the additive version from the multiplicative one.)

If ''x'' is written in Jordan normal form (with respect to some basis), then x_s is the endomorphism whose matrix contains just the diagonal terms of ''x'', and x_n is the endomorphism whose matrix contains just the off-diagonal terms; x_u is the endomorphism whose matrix is obtained from the Jordan normal form by dividing all entries of each Jordan block by its diagonal element.
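The read-off from the Jordan normal form can be checked numerically. A minimal sketch in Python with NumPy (a hypothetical example, assuming the matrix is already in Jordan normal form):

```python
import numpy as np

# A matrix already in Jordan normal form: one 2x2 block for eigenvalue 2
# and one 1x1 block for eigenvalue 3.
x = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])

x_s = np.diag(np.diag(x))        # semisimple part: the diagonal terms only
x_n = x - x_s                    # nilpotent part: the off-diagonal terms
x_u = np.linalg.inv(x_s) @ x     # unipotent part, from x = x_s * x_u

assert np.allclose(x_s @ x_n, x_n @ x_s)                            # parts commute
assert np.allclose(np.linalg.matrix_power(x_n, 3), 0)               # x_n is nilpotent
assert np.allclose(np.linalg.matrix_power(x_u - np.eye(3), 3), 0)   # x_u is unipotent
assert np.allclose(x, x_s + x_n) and np.allclose(x, x_s @ x_u)
```

Note that `x_u` here has the entry 1/2 in place of the Jordan block's 1, consistent with dividing each block by its diagonal element.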


Proof of uniqueness and existence

The uniqueness follows from the fact that x_s, x_n are polynomials in ''x'': if x = x_s' + x_n' is another decomposition such that x_s' and x_n' commute, then x_s - x_s' = x_n' - x_n, and both x_s', x_n' commute with ''x'', hence with x_s, x_n since the latter are polynomials in ''x''. The sum of commuting nilpotent endomorphisms is nilpotent, and over a perfect field the sum of commuting semisimple endomorphisms is again semisimple. Since the only operator that is both semisimple and nilpotent is the zero operator, it follows that x_s = x_s' and x_n = x_n'.

We show the existence. Let ''V'' be a finite-dimensional vector space over a perfect field ''k'' and x : V \to V an endomorphism. First assume the base field ''k'' is algebraically closed. Then the vector space ''V'' has the direct sum decomposition V = \bigoplus_{i=1}^r V_i, where each V_i = \ker\left(x - \lambda_i I\right)^{m_i} is a generalized eigenspace and ''x'' stabilizes V_i, meaning x \cdot V_i \subset V_i. Now, define x_s : V \to V so that, on each V_i, it is the scalar multiplication by \lambda_i. In terms of a basis respecting the direct sum decomposition, x_s is a diagonal matrix; hence, it is a semisimple endomorphism. Since x - x_s : V_i \to V_i is then x - \lambda_i I : V_i \to V_i, whose m_i-th power is zero, we also have that x_n := x - x_s is nilpotent, establishing the existence of the decomposition. (Choosing a basis carefully on each V_i, one can then put ''x'' in the Jordan normal form, and x_s, x_n are the diagonal and the off-diagonal parts of the normal form. But this is not needed here.)

The fact that x_s, x_n are polynomials in ''x'' follows from the Chinese remainder theorem. Indeed, let f(t) = \det(t I - x) be the characteristic polynomial of ''x''. Then it is the product of the characteristic polynomials of x : V_i \to V_i; i.e., f(t) = \prod_{i=1}^r (t - \lambda_i)^{d_i}, where d_i = \dim V_i. Also, d_i \ge m_i (because, in general, a nilpotent matrix is killed when raised to the size of the matrix). Now, the Chinese remainder theorem applied to the polynomial ring k[t] gives a polynomial p(t) satisfying the conditions

:p(t) \equiv 0 \bmod t, \quad p(t) \equiv \lambda_i \bmod (t - \lambda_i)^{d_i} (for all ''i'').

(There is a redundancy in the conditions if some \lambda_i is zero, but that is not an issue; just remove it from the conditions.) The condition p(t) \equiv \lambda_i \bmod (t - \lambda_i)^{d_i}, when spelled out, means that p(t) - \lambda_i = g_i(t)(t - \lambda_i)^{d_i} for some polynomial g_i(t). Since (x - \lambda_i I)^{d_i} is the zero map on V_i, p(x) and x_s agree on each V_i; i.e., p(x) = x_s. Also then q(x) = x_n with q(t) = t - p(t). The condition p(t) \equiv 0 \bmod t ensures that p(t) and q(t) have no constant terms. This completes the proof of the algebraically closed field case.

If ''k'' is an arbitrary perfect field, let \Gamma = \operatorname{Gal}\left(\overline{k}/k\right) be the absolute Galois group of ''k''. By the first part, we can choose polynomials p, q over \overline{k} such that x = p(x) + q(x) is the decomposition into the semisimple and nilpotent parts. For each \sigma in \Gamma, x = \sigma(x) = \sigma(p(x)) + \sigma(q(x)). Now, \sigma(p(x)) = \sigma(p)(x) is a polynomial in ''x''; so is \sigma(q(x)). Thus, \sigma(p(x)) and \sigma(q(x)) commute. Also, the application of \sigma evidently preserves semisimplicity and nilpotency. Thus, by the uniqueness of the decomposition (over \overline{k}), \sigma(p(x)) = p(x) and \sigma(q(x)) = q(x). Hence, x_s = p(x), x_n = q(x) are \Gamma-invariant; i.e., they are endomorphisms (represented by matrices) over ''k''. Finally, since \left\{1, x, x^2, \dots\right\} contains a \overline{k}-basis of the space spanned by the powers of ''x'', which contains x_s, x_n, by the same argument we also see that p, q may be taken to have coefficients in ''k''. This completes the proof. Q.E.D.
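As a concrete instance of the construction (a hypothetical worked example, not taken from the source): for a single Jordan block with eigenvalue 2, the congruences p(t) \equiv 0 \bmod t and p(t) \equiv 2 \bmod (t-2)^2 are solved by p(t) = 2 - (t-2)^2/2, and evaluating p at ''x'' recovers the semisimple part:

```python
import numpy as np

x = np.array([[2.0, 1.0],
              [0.0, 2.0]])   # one Jordan block, eigenvalue 2
I = np.eye(2)

# p(t) = 2 - (t - 2)^2 / 2 satisfies p(0) = 0 (so p ≡ 0 mod t)
# and p(t) ≡ 2 mod (t - 2)^2 by construction.
def p(A):
    return 2.0 * I - 0.5 * (A - 2.0 * I) @ (A - 2.0 * I)

x_s = p(x)        # semisimple part, as a polynomial in x with no constant term
x_n = x - x_s     # q(t) = t - p(t) gives the nilpotent part

assert np.allclose(x_s, 2.0 * I)   # (x - 2I)^2 = 0, so p(x) = 2I
assert np.allclose(x_n @ x_n, 0)   # nilpotent
```

Since x_s is a polynomial in x, it automatically commutes with everything that commutes with x, which is the point of the polynomial construction.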


Short proof using abstract algebra

The existence of a decomposition can also be obtained as a consequence of the Wedderburn principal theorem. (This approach is not only short but also makes the role of the assumption that the base field be perfect clearer.) Let ''V'' be a finite-dimensional vector space over a perfect field ''k'', x : V \to V an endomorphism, and A = k[x] \subset \operatorname{End}(V) the subalgebra generated by ''x''. Note that ''A'' is a commutative Artinian ring. The Wedderburn principal theorem states: for a finite-dimensional algebra ''A'' with Jacobson radical ''J'', if A/J is separable, then the natural surjection p : A \to A/J splits; i.e., ''A'' contains a semisimple subalgebra ''B'' such that p|_B : B \overset{\sim}{\to} A/J is an isomorphism. In the setup here, A/J is separable since the base field is perfect (so the theorem is applicable), and ''J'' is also the nilradical of ''A''. There is then the vector-space decomposition A = B \oplus J. In particular, the endomorphism ''x'' can be written as x = x_s + x_n, where x_s is in ''B'' and x_n is in ''J''. Now, the image of ''x'' generates A/J \simeq B; thus x_s is semisimple and is a polynomial in ''x''. Also, x_n is nilpotent since ''J'' is nilpotent, and is a polynomial in ''x'' since x_s is. \square


Nilpotency criterion

The Jordan decomposition can be used to characterize nilpotency of an endomorphism. Let ''k'' be an algebraically closed field of characteristic zero, E = \operatorname{End}_{\mathbb{Q}}(k) the ring of \mathbb{Q}-linear endomorphisms of ''k'', and ''V'' a finite-dimensional vector space over ''k''. Given an endomorphism x : V \to V, let x = s + n be the Jordan decomposition. Then ''s'' is diagonalizable; i.e., V = \bigoplus V_i, where each V_i is the eigenspace for the eigenvalue \lambda_i with multiplicity m_i. Then for any \varphi \in E, let \varphi(s) : V \to V be the endomorphism such that \varphi(s) : V_i \to V_i is the multiplication by \varphi(\lambda_i). Chevalley calls \varphi(s) the replica of ''s'' given by \varphi. (For example, if k = \mathbb{C}, then the complex conjugate of an endomorphism is an example of a replica.) The criterion then states: ''x'' is nilpotent if and only if \operatorname{tr}(x \varphi(s)) = 0 for every \varphi \in E.

''Proof:'' First, since n \varphi(s) is nilpotent,

:0 = \operatorname{tr}(x \varphi(s)) = \sum_i \operatorname{tr}\left(s \varphi(s)|_{V_i}\right) = \sum_i m_i \lambda_i \varphi(\lambda_i).

If \varphi is the complex conjugation, this implies \lambda_i = 0 for every ''i''. Otherwise, take \varphi to be a \mathbb{Q}-linear functional \varphi : k \to \mathbb{Q} followed by \mathbb{Q} \hookrightarrow k. Applying \varphi to the above equation, one gets:

:\sum_i m_i \varphi(\lambda_i)^2 = 0

and, since the \varphi(\lambda_i) are all rational numbers, \varphi(\lambda_i) = 0 for every ''i''. Varying the linear functionals then implies \lambda_i = 0 for every ''i''. \square

A typical application of the above criterion is the proof of Cartan's criterion for solvability of a Lie algebra. It says: if \mathfrak{g} \subset \mathfrak{gl}(V) is a Lie subalgebra over a field ''k'' of characteristic zero such that \operatorname{tr}(xy) = 0 for each x \in \mathfrak{g}, y \in D\mathfrak{g} = [\mathfrak{g}, \mathfrak{g}], then \mathfrak{g} is solvable.

''Proof:'' Without loss of generality, assume ''k'' is algebraically closed. By Lie's theorem and Engel's theorem, it suffices to show that each x \in D\mathfrak{g} is a nilpotent endomorphism of ''V''. Write x = \sum_i [x_i, y_i]. Then we need to show that

:\operatorname{tr}(x \varphi(s)) = \sum_i \operatorname{tr}([x_i, y_i] \varphi(s)) = \sum_i \operatorname{tr}(x_i [y_i, \varphi(s)])

is zero. Let \mathfrak{g}' = \mathfrak{gl}(V). Note we have \operatorname{ad}_{\mathfrak{g}'}(x) : \mathfrak{g} \to D\mathfrak{g} and, since \operatorname{ad}_{\mathfrak{g}'}(s) is the semisimple part of the Jordan decomposition of \operatorname{ad}_{\mathfrak{g}'}(x), it follows that \operatorname{ad}_{\mathfrak{g}'}(s) is a polynomial without constant term in \operatorname{ad}_{\mathfrak{g}'}(x); hence, \operatorname{ad}_{\mathfrak{g}'}(s) : \mathfrak{g} \to D\mathfrak{g}, and the same is true with \varphi(s) in place of ''s''. That is, [\varphi(s), \mathfrak{g}] \subset D\mathfrak{g}, which implies the claim given the assumption. \square
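For k = \mathbb{C}, the complex-conjugation replica turns the nilpotency criterion above into a positivity statement: \operatorname{tr}(x \bar{s}) = \sum_i m_i |\lambda_i|^2, which vanishes only if every eigenvalue is zero. A small numerical illustration (hypothetical example):

```python
import numpy as np

# Upper-triangular x: the semisimple part s is its diagonal,
# with eigenvalues 1+2j and 3.
x = np.array([[1 + 2j, 1.0],
              [0.0,    3.0]])
s = np.diag(np.diag(x))     # semisimple part of x
replica = s.conj()          # the complex-conjugate replica of s

# tr(x * conj(s)) = sum of m_i |lambda_i|^2; nonzero, so x is not nilpotent.
assert np.isclose(np.trace(x @ replica), abs(1 + 2j)**2 + 3**2)

# For a nilpotent x the trace vanishes (its semisimple part is zero,
# so every replica is zero as well).
n = np.array([[0.0, 1.0], [0.0, 0.0]])
assert np.isclose(np.trace(n @ np.diag(np.diag(n)).conj()), 0)
```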


Counterexample to existence over an imperfect field

If the ground field is not perfect, then a Jordan–Chevalley decomposition may not exist. Example: Let ''p'' be a prime number, let ''k'' be imperfect of characteristic ''p'', and choose ''a'' in ''k'' that is not a ''p''th power. Let V = k[X]/\left(X^p - a\right)^2, let x = \overline{X}, and let ''T'' be the ''k''-linear operator given by multiplication by ''x'' on ''V''. This has as its invariant ''k''-linear subspaces precisely the ideals of ''V'' viewed as a ring, which correspond to the ideals of k[X] containing \left(X^p - a\right)^2. Since X^p - a is irreducible in k[X], the ideals of ''V'' are 0, ''V'' and J = \left(x^p - a\right)V.

Suppose T = S + N for commuting ''k''-linear operators ''S'' and ''N'' that are respectively semisimple (just over ''k'', which is weaker than semisimplicity over an algebraic closure of ''k'') and nilpotent. Since ''S'' and ''N'' commute, they each commute with T = S + N and hence each acts k[X]-linearly on ''V''. Therefore ''S'' and ''N'' are each given by multiplication by respective members of ''V'', s = S(1) and n = N(1), with s + n = T(1) = x. Since ''N'' is nilpotent, ''n'' is nilpotent in ''V''; therefore \overline{n} = 0 in V/J, for V/J is a field. Hence, n \in J, so n = \left(x^p - a\right)h(x) for some polynomial h(X) \in k[X]. Also, we see that n^2 = 0. Since ''k'' is of characteristic ''p'', we have x^p = s^p + n^p = s^p. Also, since \overline{x} = \overline{s} in V/J, we have h\left(\overline{s}\right) = h\left(\overline{x}\right), therefore h(s) - h(x) \in J. Since \left(x^p - a\right)J = 0, we have \left(x^p - a\right)h(x) = \left(x^p - a\right)h(s). Combining these results we get x = s + n = s + \left(s^p - a\right)h(s). This shows that ''s'' generates ''V'' as a ''k''-algebra, and thus the ''S''-stable ''k''-linear subspaces of ''V'' are ideals of ''V'', i.e. they are 0, ''J'' and ''V''. We see that ''J'' is an ''S''-invariant subspace of ''V'' which has no complementary ''S''-invariant subspace, contrary to the assumption that ''S'' is semisimple. Thus, there is no decomposition of ''T'' as a sum of commuting ''k''-linear operators that are respectively semisimple and nilpotent.

Note that the minimal polynomial of ''T'' is inseparable over ''k'' and is a square in k[X]. It can be shown that if the minimal polynomial of a ''k''-linear operator ''L'' is separable, then ''L'' has a Jordan–Chevalley decomposition, and that if this polynomial is a product of distinct irreducible polynomials in k[X], then ''L'' is semisimple over ''k''.


Analogous decompositions

The multiplicative version of the Jordan–Chevalley decomposition generalizes to a decomposition in a linear algebraic group, and the additive version of the decomposition generalizes to a decomposition in a Lie algebra.


Lie algebras

Let \mathfrak{gl}(V) denote the Lie algebra of the endomorphisms of a finite-dimensional vector space ''V'' over a perfect field. If x = x_s + x_n is the Jordan decomposition, then \operatorname{ad}(x) = \operatorname{ad}(x_s) + \operatorname{ad}(x_n) is the Jordan decomposition of \operatorname{ad}(x) on the vector space \mathfrak{gl}(V). Indeed, first, \operatorname{ad}(x_s) and \operatorname{ad}(x_n) commute since [\operatorname{ad}(x_s), \operatorname{ad}(x_n)] = \operatorname{ad}([x_s, x_n]) = 0. Second, in general, for each endomorphism y \in \mathfrak{gl}(V), we have:
# If y^m = 0, then \operatorname{ad}(y)^{2m-1} = 0, since \operatorname{ad}(y) is the difference of the left and right multiplications by ''y'', which commute and are each nilpotent.
# If ''y'' is semisimple, then \operatorname{ad}(y) is semisimple.
Hence, by uniqueness, \operatorname{ad}(x)_s = \operatorname{ad}(x_s) and \operatorname{ad}(x)_n = \operatorname{ad}(x_n). If \pi: \mathfrak{g} \to \mathfrak{gl}(V) is a finite-dimensional representation of a semisimple finite-dimensional complex Lie algebra, then \pi preserves the Jordan decomposition in the sense: if x = x_s + x_n, then \pi(x_s) = \pi(x)_s and \pi(x_n) = \pi(x)_n.
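The passage from ''x'' to \operatorname{ad}(x) can be checked in coordinates: with column-stacking vec, \operatorname{ad}(x) acts on matrices as the Kronecker-product matrix I \otimes x - x^T \otimes I. A sketch verifying the nilpotency bound \operatorname{ad}(y)^{2m-1} = 0 and the identity [\operatorname{ad}(a), \operatorname{ad}(b)] = \operatorname{ad}([a, b]) (hypothetical example):

```python
import numpy as np

def ad(x):
    """Matrix of ad(x): Y -> xY - Yx on vec'd matrices (column stacking)."""
    n = x.shape[0]
    return np.kron(np.eye(n), x) - np.kron(x.T, np.eye(n))

x_n = np.array([[0.0, 1.0],
                [0.0, 0.0]])       # nilpotent with x_n^2 = 0, i.e. m = 2
x_s = np.diag([2.0, 5.0])          # semisimple (diagonal)

# ad(x_n)^(2m-1) = ad(x_n)^3 = 0: ad(x_n) is the difference of the commuting
# nilpotent operators of left and right multiplication by x_n.
assert np.allclose(np.linalg.matrix_power(ad(x_n), 3), 0)

# ad is a Lie algebra homomorphism: [ad(a), ad(b)] = ad([a, b]).
comm = x_s @ x_n - x_n @ x_s
assert np.allclose(ad(x_s) @ ad(x_n) - ad(x_n) @ ad(x_s), ad(comm))
```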


Real semisimple Lie algebras

In the formulation of Chevalley and Mostow, the additive decomposition states that an element ''X'' in a real semisimple Lie algebra g with Iwasawa decomposition g = k ⊕ a ⊕ n can be written as the sum of three commuting elements of the Lie algebra ''X'' = ''S'' + ''D'' + ''N'', with ''S'', ''D'' and ''N'' conjugate to elements in k, a and n respectively. In general the terms in the Iwasawa decomposition do not commute.


Linear algebraic groups

Let ''G'' be a linear algebraic group over a perfect field. Then, essentially by definition, there is a closed embedding G \hookrightarrow \mathbf{GL}_n. Now, to each element g \in G, by the multiplicative Jordan decomposition, there is a pair of a semisimple element g_s and a unipotent element g_u, ''a priori'' in \mathbf{GL}_n, such that g = g_s g_u = g_u g_s. But, as it turns out, the elements g_s, g_u can be shown to lie in ''G'' (i.e., they satisfy the defining equations of ''G'') and to be independent of the embedding into \mathbf{GL}_n; i.e., the decomposition is intrinsic. When ''G'' is abelian, ''G'' is the direct product of the closed subgroup of semisimple elements in ''G'' and that of unipotent elements.


Real semisimple Lie groups

The multiplicative decomposition states that if ''g'' is an element of the corresponding connected semisimple Lie group ''G'' with corresponding Iwasawa decomposition ''G'' = ''KAN'', then ''g'' can be written as the product of three commuting elements ''g'' = ''sdu'' with ''s'', ''d'' and ''u'' conjugate to elements of ''K'', ''A'' and ''N'' respectively. In general the terms in the Iwasawa decomposition ''g'' = ''kan'' do not commute.

