In mathematics, specifically the theory of Lie algebras, Lie's theorem states that, over an algebraically closed field of characteristic zero, if \pi: \mathfrak{g} \to \mathfrak{gl}(V) is a finite-dimensional representation of a solvable Lie algebra, then there is a flag V = V_0 \supset V_1 \supset \cdots \supset V_n = 0 of invariant subspaces of \pi(\mathfrak{g}) with \operatorname{codim} V_i = i, meaning that \pi(X)(V_i) \subseteq V_i for each X \in \mathfrak{g} and each ''i''. Put another way, the theorem says there is a basis for ''V'' such that all linear transformations in \pi(\mathfrak{g}) are represented by upper triangular matrices. This generalizes the result of Frobenius that commuting matrices are simultaneously upper triangularizable, since commuting matrices generate an abelian Lie algebra, which is a fortiori solvable.

A consequence of Lie's theorem is that any finite-dimensional solvable Lie algebra over a field of characteristic 0 has a nilpotent derived algebra (see #Consequences). Also, to each flag in a finite-dimensional vector space ''V'' there corresponds a Borel subalgebra (consisting of the linear transformations stabilizing the flag); thus, the theorem says that \pi(\mathfrak{g}) is contained in some Borel subalgebra of \mathfrak{gl}(V).
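The flag picture can be checked numerically. The following is a small NumPy sketch (the dimension n = 3 and the random integer matrices are arbitrary illustrative choices): upper triangular matrices preserve the standard flag whose subspaces are spanned by initial segments of the standard basis, and the bracket of two upper triangular matrices already has zero diagonal.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

def random_upper():
    """A random integer upper triangular n x n matrix."""
    return np.triu(rng.integers(-5, 6, size=(n, n)).astype(float))

A, B = random_upper(), random_upper()

# pi(X)(V_i) subset V_i: applying A to a vector supported on the first k
# coordinates yields a vector still supported on the first k coordinates.
for k in range(1, n + 1):
    v = np.zeros(n)
    v[:k] = rng.standard_normal(k)
    assert np.allclose((A @ v)[k:], 0.0)

# The bracket [A, B] of upper triangular matrices has zero diagonal,
# foreshadowing that the derived algebra is nilpotent.
C = A @ B - B @ A
assert np.allclose(np.diag(C), 0.0)
```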


Counter-example

For algebraically closed fields of characteristic ''p'' > 0, Lie's theorem holds provided the dimension of the representation is less than ''p'' (see the proof below), but it can fail for representations of dimension ''p''. An example is given by the 3-dimensional nilpotent Lie algebra spanned by 1, ''x'', and ''d''/''dx'' acting on the ''p''-dimensional vector space ''k''[''x'']/(''x''<sup>''p''</sup>), which has no common eigenvector. Taking the semidirect product of this 3-dimensional Lie algebra with the ''p''-dimensional representation (considered as an abelian Lie algebra) gives a solvable Lie algebra whose derived algebra is not nilpotent.
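The counter-example can be verified by brute force for a small prime. Below is a sketch with p = 3 (an arbitrary choice): the operators ''x'' and ''d''/''dx'' on F_p[x]/(x^p) satisfy the Heisenberg relation [d/dx, x] = 1 precisely because the characteristic is ''p'', and exhaustive search over the finitely many nonzero vectors confirms there is no common eigenvector.

```python
import itertools
import numpy as np

p = 3  # characteristic; the representation dimension is also p

# Action on F_p[x]/(x^p) in the basis (1, x, x^2), entries mod p:
X = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]])  # multiplication by x
D = np.array([[0, 1, 0], [0, 0, 2], [0, 0, 0]])  # d/dx
I = np.eye(p, dtype=int)

# The relation [D, X] = 1 holds here only because char = p:
# on x^{p-1} it uses x^p = 0 and -(p-1) = 1 (mod p).
assert ((D @ X - X @ D - I) % p == 0).all()

def is_eigenvector(M, v):
    """Is v an eigenvector of M over F_p? (brute force over eigenvalues)"""
    return any(((M @ v - lam * v) % p == 0).all() for lam in range(p))

# No nonzero vector is a common eigenvector of X and D, so Lie's
# theorem fails for this p-dimensional representation.
vectors = [np.array(v) for v in itertools.product(range(p), repeat=p)
           if any(v)]
assert not any(is_eigenvector(X, v) and is_eigenvector(D, v)
               for v in vectors)
```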


Proof

The proof is by induction on the dimension of \mathfrak{g} and consists of several steps. (Note: the structure of the proof is very similar to that for Engel's theorem.) The base case is trivial, so we assume the dimension of \mathfrak{g} is positive. We also assume ''V'' is not zero. For simplicity, we write X \cdot v = \pi(X)(v).

Step 1: Observe that the theorem is equivalent to the statement:

*There exists a vector in ''V'' that is an eigenvector for each linear transformation in \pi(\mathfrak{g}).

Indeed, the theorem says in particular that a nonzero vector spanning V_{n-1} is a common eigenvector for all the linear transformations in \pi(\mathfrak{g}). Conversely, if ''v'' is a common eigenvector, take V_{n-1} to be its span; then \pi(\mathfrak{g}) admits a common eigenvector in the quotient V/V_{n-1}; repeat the argument.

Step 2: Find an ideal \mathfrak{h} of codimension one in \mathfrak{g}.

Let D\mathfrak{g} = [\mathfrak{g}, \mathfrak{g}] be the derived algebra. Since \mathfrak{g} is solvable and has positive dimension, D\mathfrak{g} \ne \mathfrak{g}, and so the quotient \mathfrak{g}/D\mathfrak{g} is a nonzero abelian Lie algebra, which certainly contains an ideal of codimension one; by the ideal correspondence, this corresponds to an ideal \mathfrak{h} of codimension one in \mathfrak{g}.

Step 3: There exists some linear functional \lambda in \mathfrak{h}^* such that

:V_\lambda = \{ v \in V \mid X \cdot v = \lambda(X) v \text{ for all } X \in \mathfrak{h} \}

is nonzero. This follows from the inductive hypothesis applied to \mathfrak{h} (it is easy to check that the eigenvalues determine a linear functional).

Step 4: V_\lambda is a \mathfrak{g}-invariant subspace. (Note this step proves a general fact and does not involve solvability.)

Let Y \in \mathfrak{g} and v \in V_\lambda; we need to prove Y \cdot v \in V_\lambda. If v = 0 this is obvious, so assume v \ne 0 and set recursively v_0 = v, \, v_{i+1} = Y \cdot v_i. Let U = \operatorname{span}\{ v_i \mid i \in \mathbb{N}_0 \} and let \ell \in \mathbb{N}_0 be the largest index such that v_0, \ldots, v_\ell are linearly independent. Then we claim these vectors generate ''U'', so that \alpha = (v_0, \ldots, v_\ell) is a basis of ''U''. Indeed, assume by contradiction that this is not the case and let m \in \mathbb{N}_0 be the smallest index such that v_m \notin \langle v_0, \ldots, v_\ell \rangle; then clearly m \ge \ell + 1. Since v_0, \ldots, v_{\ell+1} are linearly dependent, v_{\ell+1} is a linear combination of v_0, \ldots, v_\ell. Applying the map Y^{m-\ell-1}, it follows that v_m is a linear combination of v_{m-\ell-1}, \ldots, v_{m-1}. Since, by the minimality of ''m'', each of these vectors is a linear combination of v_0, \ldots, v_\ell, so is v_m, and we get the desired contradiction.

Next we prove by induction that for every n \in \mathbb{N}_0 and X \in \mathfrak{h} there exist elements a_{n,0}, \ldots, a_{n,n} of the base field such that a_{n,n} = \lambda(X) and

:X \cdot v_n = \sum_{i=0}^{n} a_{n,i} v_i.

The n = 0 case is straightforward, since X \cdot v_0 = \lambda(X) v_0. Now assume that we have proved the claim for some n \in \mathbb{N}_0 and all elements of \mathfrak{h}, and let X \in \mathfrak{h}. Since \mathfrak{h} is an ideal, [X, Y] \in \mathfrak{h}, so by the inductive hypothesis there are coefficients b_{n,0}, \ldots, b_{n,n} with b_{n,n} = \lambda([X, Y]) and [X, Y] \cdot v_n = \sum_{i=0}^{n} b_{n,i} v_i. Thus

:X \cdot v_{n+1} = Y \cdot (X \cdot v_n) + [X, Y] \cdot v_n = Y \cdot \sum_{i=0}^{n} a_{n,i} v_i + \sum_{i=0}^{n} b_{n,i} v_i = b_{n,0} v_0 + \sum_{i=1}^{n} (a_{n,i-1} + b_{n,i}) v_i + \lambda(X) v_{n+1},

and the induction step follows.

This implies that for every X \in \mathfrak{h} the subspace ''U'' is an invariant subspace of ''X'' and the matrix of the restricted map \pi(X)|_U in the basis \alpha is upper triangular with all diagonal elements equal to \lambda(X); hence \operatorname{tr}(\pi(X)|_U) = \dim(U) \lambda(X). Applying this with [X, Y] \in \mathfrak{h} instead of ''X'' gives \operatorname{tr}(\pi([X, Y])|_U) = \dim(U) \lambda([X, Y]). On the other hand, ''U'' is also obviously an invariant subspace of ''Y'', and so

:\operatorname{tr}(\pi([X, Y])|_U) = \operatorname{tr}([\pi(X), \pi(Y)]|_U) = \operatorname{tr}([\pi(X)|_U, \pi(Y)|_U]) = 0

since commutators have zero trace; thus \dim(U) \lambda([X, Y]) = 0. Since \dim(U) > 0 is invertible (because of the assumption on the characteristic of the base field), \lambda([X, Y]) = 0 and

:X \cdot (Y \cdot v) = Y \cdot (X \cdot v) + [X, Y] \cdot v = Y \cdot (\lambda(X) v) + \lambda([X, Y]) v = \lambda(X) (Y \cdot v),

and so Y \cdot v \in V_\lambda.

Step 5: Finish the proof by finding a common eigenvector.

Write \mathfrak{g} = \mathfrak{h} + L, where ''L'' is a one-dimensional vector subspace. Since the base field is algebraically closed, there exists an eigenvector in V_\lambda for some (thus every) nonzero element of ''L''. Since that vector is also an eigenvector for each element of \mathfrak{h}, the proof is complete. \square


Consequences

The theorem applies in particular to the adjoint representation \operatorname{ad}: \mathfrak{g} \to \mathfrak{gl}(\mathfrak{g}) of a (finite-dimensional) solvable Lie algebra \mathfrak{g} over an algebraically closed field of characteristic zero; thus, one can choose a basis of \mathfrak{g} with respect to which \operatorname{ad}(\mathfrak{g}) consists of upper triangular matrices. It follows easily that for each x, y \in \mathfrak{g}, \operatorname{ad}([x, y]) = [\operatorname{ad}(x), \operatorname{ad}(y)] has diagonal consisting of zeros; i.e., \operatorname{ad}([x, y]) is a strictly upper triangular matrix. This implies that [\mathfrak{g}, \mathfrak{g}] is a nilpotent Lie algebra. Moreover, if the base field is not algebraically closed, then solvability and nilpotency of a Lie algebra are unaffected by extending the base field to its algebraic closure. Hence, one concludes the statement (the other implication is obvious):

:''A finite-dimensional Lie algebra \mathfrak{g} over a field of characteristic zero is solvable if and only if the derived algebra D\mathfrak{g} = [\mathfrak{g}, \mathfrak{g}] is nilpotent.''

Lie's theorem also establishes one direction in Cartan's criterion for solvability:

:''If V is a finite-dimensional vector space over a field of characteristic zero and \mathfrak{g} \subseteq \mathfrak{gl}(V) a Lie subalgebra, then \mathfrak{g} is solvable if and only if \operatorname{tr}(XY) = 0 for every X \in \mathfrak{g} and Y \in [\mathfrak{g}, \mathfrak{g}].''

Indeed, as above, after extending the base field, the implication \Rightarrow is seen easily. (The converse is more difficult to prove.)

Lie's theorem (for various ''V'') is equivalent to the statement:

:''For a solvable Lie algebra \mathfrak{g} over an algebraically closed field of characteristic zero, each finite-dimensional simple \mathfrak{g}-module (i.e., irreducible as a representation) has dimension one.''

Indeed, Lie's theorem clearly implies this statement. Conversely, assume the statement is true. Given a finite-dimensional \mathfrak{g}-module ''V'', let V_1 be a maximal \mathfrak{g}-submodule (which exists by finiteness of the dimension). Then, by maximality, V/V_1 is simple and thus one-dimensional. Induction now finishes the proof.

The statement says in particular that a finite-dimensional simple module over an abelian Lie algebra is one-dimensional; this fact remains true over any base field, since in this case every vector subspace is a Lie subalgebra.

Here is another quite useful application:

:''Let \mathfrak{g} be a finite-dimensional Lie algebra over an algebraically closed field of characteristic zero with radical \operatorname{rad}(\mathfrak{g}). Then each finite-dimensional simple representation \pi: \mathfrak{g} \to \mathfrak{gl}(V) is the tensor product of a simple representation of \mathfrak{g}/\operatorname{rad}(\mathfrak{g}) with a one-dimensional representation of \mathfrak{g} (i.e., a linear functional vanishing on Lie brackets).''

By Lie's theorem, we can find a linear functional \lambda of \operatorname{rad}(\mathfrak{g}) so that there is the weight space V_\lambda of \operatorname{rad}(\mathfrak{g}). By Step 4 of the proof of Lie's theorem, V_\lambda is also a \mathfrak{g}-module; so V = V_\lambda. In particular, for each X \in \operatorname{rad}(\mathfrak{g}), \operatorname{tr}(\pi(X)) = \dim(V) \lambda(X). Extend \lambda to a linear functional on \mathfrak{g} that vanishes on [\mathfrak{g}, \mathfrak{g}]; \lambda is then a one-dimensional representation of \mathfrak{g}. Now, (\pi, V) \simeq (\pi, V) \otimes (-\lambda) \otimes \lambda. Since \pi coincides with \lambda on \operatorname{rad}(\mathfrak{g}), we have that V \otimes (-\lambda) is trivial on \operatorname{rad}(\mathfrak{g}) and thus is the restriction of a (simple) representation of \mathfrak{g}/\operatorname{rad}(\mathfrak{g}). \square
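The two matrix facts behind the first consequence can be verified concretely. A NumPy sketch (n = 4 and the random integer matrices are arbitrary choices): brackets of upper triangular matrices are strictly upper triangular, and nested brackets of strictly upper triangular n x n matrices vanish after n - 1 steps, since each bracket pushes all nonzero entries at least one diagonal further up.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4

def upper(k=0):
    """Random integer upper triangular matrix (strictly upper if k=1)."""
    return np.triu(rng.integers(-5, 6, size=(n, n)).astype(float), k=k)

# The bracket of upper triangular matrices has zero diagonal, i.e. it is
# strictly upper triangular (an element of the nilpotent algebra n).
A, B = upper(), upper()
C = A @ B - B @ A
assert np.allclose(np.diag(C), 0.0)

# Strictly upper triangular matrices form a nilpotent Lie algebra:
# each nested bracket raises the lowest nonzero diagonal by one,
# so n - 1 nested brackets of n x n matrices are zero.
Ms = [upper(k=1) for _ in range(n)]
Z = Ms[0]
for M in Ms[1:]:
    Z = M @ Z - Z @ M
assert np.allclose(Z, 0.0)
```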


See also

*Engel's theorem, which concerns a nilpotent Lie algebra.
*Lie–Kolchin theorem, which is about a (connected) solvable linear algebraic group.




References


Sources

*Jacobson, Nathan, ''Lie algebras''. Republication of the 1962 original. Dover Publications, Inc., New York, 1979.
*Jean-Pierre Serre, ''Complex Semisimple Lie Algebras''. Springer, Berlin, 2001. ISBN 3-540-67827-1.

Theorems about algebras