Whitehead's Lemma (Lie Algebras)
In homological algebra, Whitehead's lemmas (named after J. H. C. Whitehead) represent a series of statements regarding representation theory of finite-dimensional, semisimple Lie algebras in characteristic zero. Historically, they are regarded as leading to the discovery of Lie algebra cohomology.

One usually makes the distinction between Whitehead's first and second lemma for the corresponding statements about first and second order cohomology, respectively, but there are similar statements pertaining to Lie algebra cohomology in arbitrary orders which are also attributed to Whitehead. The first Whitehead lemma is an important step toward the proof of Weyl's theorem on complete reducibility.


Statements

Without mentioning cohomology groups, one can state Whitehead's first lemma as follows: Let \mathfrak{g} be a finite-dimensional, semisimple Lie algebra over a field of characteristic zero, ''V'' a finite-dimensional module over it, and f\colon \mathfrak{g} \to V a linear map such that
:f([x, y]) = xf(y) - yf(x).
Then there exists a vector v \in V such that f(x) = xv for all x \in \mathfrak{g}. In terms of Lie algebra cohomology, this is, by definition, equivalent to the fact that H^1(\mathfrak{g},V) = 0 for every such representation. The proof uses a Casimir element (see the proof below).

Similarly, Whitehead's second lemma states that under the conditions of the first lemma, also H^2(\mathfrak{g},V) = 0.

Another related statement, which is also attributed to Whitehead, describes Lie algebra cohomology in arbitrary order: Given the same conditions as in the previous two statements, but further let V be irreducible under the \mathfrak{g}-action and let \mathfrak{g} act nontrivially, so \mathfrak{g} \cdot V \neq 0. Then H^q(\mathfrak{g},V) = 0 for all q \geq 0.
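As an illustration of the first lemma (a standard special case, not spelled out in the statement above), take V = \mathfrak{g} itself with the adjoint action x \cdot v = [x, v]. The condition on f then reads
:f([x, y]) = [x, f(y)] - [y, f(x)],
which is exactly the statement that f is a derivation of \mathfrak{g}, while the conclusion f(x) = [x, v] = \operatorname{ad}(-v)(x) says that f is an inner derivation. The first lemma thus recovers the classical fact that every derivation of a finite-dimensional semisimple Lie algebra in characteristic zero is inner.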


Proof

As above, let \mathfrak{g} be a finite-dimensional semisimple Lie algebra over a field of characteristic zero and \pi: \mathfrak{g} \to \mathfrak{gl}(V) a finite-dimensional representation (which is semisimple, but the proof does not use that fact). Let \mathfrak{g} = \operatorname{ker}(\pi) \oplus \mathfrak{g}_1, where \mathfrak{g}_1 is an ideal of \mathfrak{g}. Then, since \mathfrak{g}_1 is semisimple, the trace form (x, y) \mapsto \operatorname{tr}(\pi(x)\pi(y)), relative to \pi, is nondegenerate on \mathfrak{g}_1. Let e_i be a basis of \mathfrak{g}_1 and e^i the dual basis with respect to this trace form. Then define the Casimir element c by
:c = \sum_i e_i e^i,
which is an element of the universal enveloping algebra of \mathfrak{g}_1. Via \pi, it acts on ''V'' as a linear endomorphism, namely \pi(c) = \sum_i \pi(e_i) \circ \pi(e^i) : V \to V. The key property is that it commutes with \pi(\mathfrak{g}), in the sense that \pi(x)\pi(c) = \pi(c)\pi(x) for each element x \in \mathfrak{g}. Also,
:\operatorname{tr}(\pi(c)) = \sum_i \operatorname{tr}(\pi(e_i)\pi(e^i)) = \dim \mathfrak{g}_1.
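For concreteness (an illustrative computation, not part of the argument below), take \mathfrak{g} = \mathfrak{g}_1 = \mathfrak{sl}_2 with the standard basis
:e = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad h = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \quad f = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}
(here f temporarily denotes the lowering matrix, not the linear map of the lemma), and let \pi be the standard two-dimensional representation. The trace form gives \operatorname{tr}(ef) = 1, \operatorname{tr}(h^2) = 2, and all other pairings of basis elements vanish, so the dual basis of (e, h, f) is (f, \tfrac{1}{2}h, e). The Casimir element is therefore
:c = ef + \tfrac{1}{2}h^2 + fe,
and in the standard representation
:\pi(c) = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + \tfrac{1}{2}\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} = \tfrac{3}{2}\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},
so that \operatorname{tr}(\pi(c)) = 3 = \dim \mathfrak{sl}_2, as expected, and \pi(c) is an automorphism.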
Now, by Fitting's lemma, we have the vector space decomposition V = V_0 \oplus V_1 such that \pi(c) : V_i \to V_i is a (well-defined) nilpotent endomorphism for i = 0 and is an automorphism for i = 1. Since \pi(c) commutes with \pi(\mathfrak{g}), each V_i is a \mathfrak{g}-submodule. Hence, it is enough to prove the lemma separately for V = V_0 and V = V_1.

First, suppose \pi(c) is a nilpotent endomorphism. Then, by the earlier observation, \dim(\mathfrak{g}/\operatorname{ker}(\pi)) = \operatorname{tr}(\pi(c)) = 0; that is, \pi is a trivial representation. Since \mathfrak{g} = [\mathfrak{g}, \mathfrak{g}], the condition on f implies that f(x) = 0 for each x \in \mathfrak{g}; i.e., the zero vector v = 0 satisfies the requirement.

Second, suppose \pi(c) is an automorphism. For notational simplicity, we will drop \pi and write xv = \pi(x)v. Also let (\cdot, \cdot) denote the trace form used earlier. Let w = \sum_i e_i f(e^i), which is a vector in V. Then
:x w = \sum_i e_i x f(e^i) + \sum_i [x, e_i] f(e^i).
Now,
:[x, e_i] = \sum_j ([x, e_i], e^j) e_j = -\sum_j ([x, e^j], e_i) e_j
and, since [x, e^j] = \sum_i ([x, e^j], e_i) e^i and f satisfies f([x, e^j]) = x f(e^j) - e^j f(x), the second term of the expansion of xw is
:-\sum_j e_j f([x, e^j]) = -\sum_i e_i (x f(e^i) - e^i f(x)).
Thus,
:x w = \sum_i e_i e^i f(x) = c f(x).
Since c is invertible and c^{-1} commutes with x, the vector v = c^{-1}w has the required property. \square
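As a quick consistency check on the final formula (an illustrative remark, not part of the proof), suppose f is already a coboundary, say f(x) = x v_0 for some v_0 \in V. Then
:w = \sum_i e_i f(e^i) = \sum_i e_i e^i v_0 = c v_0,
so v = c^{-1} w = v_0, and the construction returns the original vector. In the \mathfrak{sl}_2 example above, where \pi(c) = \tfrac{3}{2}\operatorname{id}_V, this reads v = \tfrac{2}{3} \sum_i e_i f(e^i).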

