Lie Algebras


Getting Oriented

As with much of mathematics, there are two primary ways to look at Lie algebras: algebraically and geometrically. The formal algebraic definition presents them as a vector space with a product satisfying a few conditions. Geometrically, Lie algebras are basically the ‘local’ picture of Lie groups; they occur as the tangent space at the identity of a Lie group, and encode its local algebraic structure. Thus, the classification and description of Lie algebras is very important in the theory of Lie groups.

Perhaps the guiding questions in the study of Lie algebras are the most common ones in mathematics: how many are there, and what do they look like? The drive to answer these questions leads, of course, to many abstract definitions, to the study of how Lie algebras act on other spaces (representation theory), and to even more abstract theories. In the end, however, the important class of simple Lie algebras can be completely classified using, of all things, a little bit of graph theory (Dynkin diagrams). This classification will conclude our discussion.

The Basics

A Lie algebra is a vector space with an anticommutative product satisfying some special conditions:

Lie algebra
a vector space $\mathsf{g}$ over a field $F$ together with a product $(X,Y)\mapsto [X,Y]$ (called the Lie bracket) which is linear in both $X$ and $Y$ and satisfies: (i) $[X,Y]=-[Y,X]$, and (ii) the Jacobi Identity:
(1)
\begin{align} [[X,Y],Z]+[[Y,Z],X]+[[Z,X],Y]=0 \text{ for all } X,Y,Z \in \mathsf{g}. \end{align}

Lie algebras naturally arise from associative algebras (vector spaces with a bilinear, associative multiplication $XY$) by letting $[X,Y]=XY-YX$. Thus, for example, the space of all $n\times n$ matrices over a field $F$ becomes a Lie algebra in this way; it is called $gl_n(F)$.
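As a quick sanity check, here is a short numerical sketch (assuming NumPy is available; the helper name `bracket` is ours) verifying that the matrix commutator satisfies both conditions of the definition:

```python
import numpy as np

def bracket(X, Y):
    """Commutator bracket on gl_n: [X, Y] = XY - YX."""
    return X @ Y - Y @ X

rng = np.random.default_rng(0)
n = 4
X, Y, Z = rng.standard_normal((3, n, n))

# (i) Anticommutativity: [X, Y] = -[Y, X]
assert np.allclose(bracket(X, Y), -bracket(Y, X))

# (ii) Jacobi identity: [[X,Y],Z] + [[Y,Z],X] + [[Z,X],Y] = 0
jacobi = bracket(bracket(X, Y), Z) + bracket(bracket(Y, Z), X) + bracket(bracket(Z, X), Y)
assert np.allclose(jacobi, np.zeros((n, n)))
print("commutator bracket satisfies the Lie algebra axioms (numerically)")
```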

Some Algebraic Definitions

Much of the structure of groups and rings carries over to Lie algebras. One can define homomorphisms and isomorphisms of Lie algebras analogous to those for groups (meaning they preserve the product).

We also have subalgebras and ideals (which are subspaces $\mathsf{h}$ of $\mathsf{g}$ with $[\mathsf{g},\mathsf{h}]\subset \mathsf{h}$). Since $[\mathsf{h},\mathsf{k}]=[\mathsf{k},\mathsf{h}]$ for all subspaces, it is immediate that every ideal is 2-sided. Ideals give factor Lie algebras $\mathsf{g}/\mathsf{h}$ by setting $[X+\mathsf{h},Y+\mathsf{h}]=[X,Y]+\mathsf{h}$, and a natural factor homomorphism $X\mapsto X+\mathsf{h}$. We also have the First Isomorphism Theorem stating that a surjective homomorphism of Lie algebras $\theta:\mathsf{g}_1\to \mathsf{g}_2$ gives an isomorphism $\mathsf{g}_1/\mathsf{k}\cong \mathsf{g}_2$, where $\mathsf{k}=\ker\theta$ is the kernel (an ideal of $\mathsf{g}_1$).
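As a concrete illustration (a small sketch of ours, not part of the discussion above): the trace map is a homomorphism from $gl_n(F)$ onto the one-dimensional abelian Lie algebra $F$, its kernel is the ideal $sl_n(F)$ of traceless matrices, and the First Isomorphism Theorem gives $gl_n(F)/sl_n(F)\cong F$. The snippet below (assuming NumPy) checks numerically that every bracket lands in this kernel.

```python
import numpy as np

def bracket(X, Y):
    return X @ Y - Y @ X

rng = np.random.default_rng(1)
n = 3
X, Y = rng.standard_normal((2, n, n))

# The trace is a homomorphism gl_n -> F onto the 1-dimensional abelian algebra:
# tr([X, Y]) = tr(XY) - tr(YX) = 0 = [tr X, tr Y], so every bracket lies in
# ker(tr) = sl_n, which is therefore an ideal, and gl_n / sl_n is isomorphic to F.
assert abs(np.trace(bracket(X, Y))) < 1e-10
print("tr[X, Y] =", np.trace(bracket(X, Y)))
```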

Simple algebras are the most important ones, since they are the basic building blocks of the theory (for instance, every semisimple Lie algebra is a direct sum of simple ones):

Simple Lie algebra $\mathsf{g}$
a Lie algebra $\mathsf{g}$ with no proper nontrivial ideals; that is, its only ideals are $\mathsf{g}$ and 0 (by convention, the one-dimensional abelian Lie algebra is also excluded).

Representations and Modules

Here is another important definition:

Representation of a Lie algebra $g$
a homomorphism $\rho: g \to gl_n(F)$.

Representations $\rho_1$ and $\rho_2$ are equivalent if there exists a nonsingular matrix $T$ such that $\rho_2(x)=T^{-1}\rho_1(x) T$ for all $x\in g$. The definition is simple, but representations are fundamental objects throughout modern geometry.

There is a close correspondence between representations and $g$-modules. A left module (over the Lie algebra $g$) is a vector space $V$ over $F$ with multiplication $g\times V\to V$ which is bilinear and satisfies $[xy]v=x(yv)-y(xv)$ for all $x,y\in g, v\in V$. A submodule $U$ of $V$ is a subspace of $V$ with $gU\subset U$. A $g$-module is irreducible if its only submodules are $0$ and $V$.

Now, let's see how a finite dimensional $g$-module gives rise to a representation. Let $e_1,\ldots,e_n$ be a basis of $V$, so that multiplication is given by $xe_j=\sum_{i=1}^n\rho_{ij}(x)e_i$. This gives a representation $x\mapsto\rho(x)$, which is independent, up to equivalence, of the basis chosen for the module.

Note that every Lie algebra $g$ is actually a $g$-module, with multiplication $(x,y)\mapsto[xy]$. This is called the adjoint $g$-module, and gives rise to the adjoint representation of $g$.
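To see how a module yields matrices in practice, here is a small sketch for the adjoint module of $sl_2(\mathbb{C})$ (assuming NumPy; the basis ordering $e,h,f$ and the helper names are our own choices): each $\mathsf{ad}\,x$ becomes a $3\times 3$ matrix whose $j$-th column holds the coordinates of $[x,e_j]$ in the chosen basis.

```python
import numpy as np

# Basis of sl_2(C) (real entries suffice for this illustration).
e = np.array([[0., 1.], [0., 0.]])
h = np.array([[1., 0.], [0., -1.]])
f = np.array([[0., 0.], [1., 0.]])
basis = [e, h, f]

def bracket(X, Y):
    return X @ Y - Y @ X

def coords(M):
    """Coordinates of a traceless 2x2 matrix M = b e + a h + c f in the basis (e, h, f)."""
    return np.array([M[0, 1], M[0, 0], M[1, 0]])

def ad(x):
    """Matrix of the adjoint action: column j holds the coordinates of [x, basis_j]."""
    return np.column_stack([coords(bracket(x, b)) for b in basis])

print(ad(h))   # diagonal with eigenvalues 2, 0, -2
print(ad(e))   # nilpotent

# ad is a homomorphism into gl_3: ad([x, y]) = [ad x, ad y].
assert np.allclose(ad(bracket(e, f)), bracket(ad(e), ad(f)))
```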

Generalizing Abelian: Nilpotent and Soluble Lie Algebras

Lie algebras exhibit some traits of rings and some traits of groups. We've already encountered a few, and here we will give some more.

A Lie algebra $g$ is abelian if $[gg]=0$, so that all elements commute. More generally, it is nilpotent if $g^i=0$ for some $i$, where $g^i$ is defined recursively by $g^1=g$ and $g^i=[g^{i-1}g]$. Of course, this gives a descending sequence of ideals $g=g^1\supset g^2\supset g^3\supset \cdots$, so we're really just asking whether this sequence reaches 0. One example of a nilpotent Lie algebra is the set of strictly upper (or lower) triangular matrices, i.e. triangular matrices with zeros on the diagonal as well.

Even more general than nilpotent Lie algebras are the soluble ones, for which $g^{(i)}=0$ for some $i$, with $g^{(0)}=g$ and $g^{(n)}=[g^{(n-1)}g^{(n-1)}]$. A corresponding example is the set of upper (or lower) triangular matrices (with arbitrary diagonal). As this suggests, every nilpotent Lie algebra is soluble.
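The following numerical sketch (assuming NumPy) illustrates the first example for $3\times 3$ matrices: one bracket of strictly upper triangular matrices pushes the nonzero entries further above the diagonal, and a second bracket kills everything, so $g^3=0$.

```python
import numpy as np

def bracket(X, Y):
    return X @ Y - Y @ X

rng = np.random.default_rng(2)

def strictly_upper(n=3):
    """Random strictly upper triangular n x n matrix (zeros on and below the diagonal)."""
    return np.triu(rng.standard_normal((n, n)), k=1)

N1, N2, N3 = strictly_upper(), strictly_upper(), strictly_upper()

# One bracket lands in the span of E_13 (only the top-right corner survives) ...
B = bracket(N1, N2)
assert np.allclose(B, np.triu(B, k=2))

# ... and a second bracket kills it, so g^3 = [[g, g], g] = 0: g is nilpotent.
assert np.allclose(bracket(B, N3), np.zeros((3, 3)))
print("strictly upper triangular 3x3 matrices form a nilpotent Lie algebra")
```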

An Extended Example: Roots of $sl_n(\mathbb{C})$

The Lie algebra $sl_n(\mathbb{C})$ is the “special linear” algebra of $n\times n$ matrices $A$ with $\mathsf{tr} A=0$. It is an ideal of $gl_n(\mathbb{C})$ since $\mathsf{tr}(AB)=\mathsf{tr}(BA)$, so every bracket has trace zero. This shows that $gl_n(\mathbb{C})$ is not simple, although $sl_n(\mathbb{C})$ is simple for $n\geq 2$.

We now find a special set of representations for this Lie algebra called the roots of $sl_n(\mathbb{C})$. Let $h$ be the abelian subalgebra of $sl_n(\mathbb{C})$ consisting of diagonal matrices. Note that $\dim h=n-1$ (the $-1$ due to the condition $\mathsf{tr} A=0$), and $g$ is actually a left $h$-module, since $[hg]\subset g$.

Now, we can decompose $sl_n(\mathbb{C})=h \oplus \sum_{i\neq j} \mathbb{C} E_{ij}$, where $E_{ij}$ is the matrix whose only nonzero entry is a 1 in the $(i,j)$ position. The $h$-submodule $\mathbb{C} E_{ij}$ corresponds to the 1-dimensional representation of $h$ taking $\mathrm{diag}(\lambda_1,\ldots,\lambda_n)$ to $\lambda_i-\lambda_j$. The $n(n-1)$ such representations are called the roots of $sl_n(\mathbb{C})$ with respect to $h$. The set of roots is denoted $\Phi$; it is a subset of the dual space $h^*=\mathsf{Hom}(h,\mathbb{C})$.

The linearly independent set of roots $\Pi=\{\alpha_1,\ldots,\alpha_{n-1}\}$ with $\alpha_i=\lambda_i-\lambda_{i+1}$ is called the set of fundamental or simple roots. Every root is either a nonnegative or a nonpositive integer combination of the fundamental roots, so $\Phi$ splits as a union of positive and negative roots: $\Phi=\Phi^+\cup\Phi^-$.
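Here is a sketch of this decomposition for $n=3$ (assuming NumPy; the particular traceless diagonal matrix used is an arbitrary choice of ours): each $E_{ij}$ is an eigenvector of the action of $h$ with eigenvalue $\lambda_i-\lambda_j$, and each root is an integer combination of the fundamental roots $\alpha_1$ and $\alpha_2$.

```python
import numpy as np

def bracket(X, Y):
    return X @ Y - Y @ X

def E(i, j, n=3):
    """Matrix unit E_ij (1-based indices)."""
    M = np.zeros((n, n))
    M[i - 1, j - 1] = 1.0
    return M

lam = np.array([0.7, -0.2, -0.5])   # an arbitrary traceless diagonal: lambda_1 + lambda_2 + lambda_3 = 0
H = np.diag(lam)

# Each E_ij spans a root space: [H, E_ij] = (lambda_i - lambda_j) E_ij.
for i in range(1, 4):
    for j in range(1, 4):
        if i != j:
            assert np.allclose(bracket(H, E(i, j)), (lam[i - 1] - lam[j - 1]) * E(i, j))

# Write each root lambda_i - lambda_j as a coefficient vector on (lambda_1, lambda_2, lambda_3).
root = {(i, j): np.array([(k == i) - (k == j) for k in (1, 2, 3)])
        for i in (1, 2, 3) for j in (1, 2, 3) if i != j}
alpha1, alpha2 = root[(1, 2)], root[(2, 3)]              # fundamental roots
assert np.array_equal(root[(1, 3)], alpha1 + alpha2)     # the positive root lambda_1 - lambda_3
assert np.array_equal(root[(3, 1)], -(alpha1 + alpha2))  # ... and its negative
print("roots of sl_3 verified")
```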

The example here is important because it is a microcosm of the general structure of Lie algebras; indeed, the study of roots is central to the classification of certain types of Lie algebras.

Classifying the Simple Lie Algebras

Recall from the previous section that a simple Lie algebra is one with no nontrivial ideals. It is an amazing fact that the simple Lie algebras can be completely classified. This is accomplished by studying the root system of a Lie algebra. It all boils down to the configuration (relative positions) of the fundamental roots in $\mathbb{R}^n$. The relationship between these roots can be encoded into a graph, and then graph theory may be used to enumerate all possible configurations. These configurations in turn give us all possible simple Lie algebras.

Cartan subalgebras

The set of roots depends on the Cartan subalgebra of a Lie algebra $\mathsf{g}$, a special subalgebra existing in every Lie algebra:

Cartan subalgebra
a subalgebra $\mathsf{h}$ of a Lie algebra $\mathsf{g}$ which is nilpotent and its own idealizer: if $\mathsf{h}\subset\mathsf{k}\subset\mathsf{g}$ then $\mathsf{h}$ is not an ideal of $\mathsf{k}$ unless $\mathsf{k}=\mathsf{h}$.

Every Lie algebra has a Cartan subalgebra, and any two such subalgebras are conjugate by an inner automorphism. In the simple case, Cartan subalgebras correspond to maximal diagonalizable (abelian) subalgebras. As an example, we saw above that the Cartan subalgebra of $sl_n(\mathbb{C})$ consists of the diagonal matrices.

Given a Cartan subalgebra $\mathsf{h}$, we can regard $\mathsf{g}$ as a (left) $\mathsf{h}$-module, allowing us to consider the action of $\mathsf{h}$ on $\mathsf{g}$. In this way, we may decompose $\mathsf{g}$ as a direct sum

(2)
\begin{align} \mathsf{g}=\mathsf{h} \oplus \sum_\alpha \mathbb{C} e_\alpha, \end{align}

the sum being taken over the eigenvalues of the $\mathsf{h}$-action. Thus, for all $X\in\mathsf{h}$, we have $[X,e_\alpha]=\alpha(X)e_\alpha$ with $\alpha(X)\in\mathbb{C}$, and the eigenvalues $\alpha$ are elements of the dual space $\mathsf{h}^*$.

Since all Cartan subalgebras are conjugate, this decomposition is essentially unique. Moreover, each of the summands $\mathbb{C} e_\alpha$ is a 1-dimensional $\mathsf{h}$-module, since the algebra is simple. The eigenvalues $\alpha$ are called the roots of the Lie algebra, and form the set of roots $\Phi$. One can verify that if $\alpha\in\Phi$ then $-\alpha\in\Phi$ as well, so we can decompose $\Phi$ into a set of positive roots $\Phi^+$ and a set of negative roots $\Phi^-$. Moreover, $\Phi$ spans $\mathsf{h}^*$, so $\Phi^+$ and $\Phi^-$ do as well. We can take a maximal linearly independent subset $\Pi\subset\Phi^+$, called the set of fundamental roots; this generates $\Phi$ and therefore also spans $\mathsf{h}^*$.

The linear combinations of fundamental roots with real coefficients form a real subspace $\mathsf{h}_\mathbb{R}^*\subset\mathsf{h}^*$; its dimension is the rank of the Lie algebra $\mathsf{g}$. We will see in the next section that it is a Euclidean space, which will form the setting for diagrams of the fundamental roots.

The Killing Form

On any Lie algebra one can define an $\mathsf{Ad}$-invariant ‘inner product’, called the Killing form:

(3)
\begin{align} \langle{X,Y}\rangle=\mathsf{tr}(\mathsf{ad} X\,\mathsf{ad} Y). \end{align}

The Killing form is always symmetric and bilinear. If $\mathsf{g}$ is simple then it is nondegenerate. Moreover, it has nothing to do with the annihilation of a vector or death; it is named after the mathematician Wilhelm Killing.

This form induces a corresponding one on the dual space, giving us a map $\mathsf{h}^*\times\mathsf{h}^*\to\mathbb{C}$, defined by $\langle{f_X,f_Y}\rangle=\langle{X,Y}\rangle$, where $f_X$ is the map taking $Z\mapsto\langle{X,Z}\rangle$. Every element of $\mathsf{h}^*$ has this form: since the Killing form is nondegenerate, the map $X\mapsto f_X$ is bijective. Restricting to $\mathsf{h}_\mathbb{R}^*$, we have a true (positive definite) inner product, making $\mathsf{h}_\mathbb{R}^*$ a Euclidean space which, of course, contains the set of roots $\Phi$.

In our classifying process, we will look at just the configuration of the root system in $\mathsf{h}_\mathbb{R}^*$ rather than the entire Lie algebra. This is something akin to identifying an animal specimen by its skeleton.
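To see the Killing form concretely, here is a small sketch (assuming NumPy; it rebuilds the adjoint matrices of $sl_2(\mathbb{C})$ in the basis $e,h,f$, as in the earlier sketch) that evaluates $\langle X,Y\rangle=\mathsf{tr}(\mathsf{ad} X\,\mathsf{ad} Y)$ on a basis and checks that the resulting Gram matrix is symmetric and nondegenerate.

```python
import numpy as np

e = np.array([[0., 1.], [0., 0.]])
h = np.array([[1., 0.], [0., -1.]])
f = np.array([[0., 0.], [1., 0.]])
basis = [e, h, f]

def bracket(X, Y):
    return X @ Y - Y @ X

def coords(M):
    return np.array([M[0, 1], M[0, 0], M[1, 0]])    # coordinates in the basis (e, h, f)

def ad(x):
    return np.column_stack([coords(bracket(x, b)) for b in basis])

def killing(x, y):
    """Killing form <x, y> = tr(ad x ad y)."""
    return np.trace(ad(x) @ ad(y))

# Gram matrix of the Killing form in the basis (e, h, f); e.g. <h, h> = 8 and <e, f> = 4.
K = np.array([[killing(x, y) for y in basis] for x in basis])
print(K)
assert np.allclose(K, K.T)                  # symmetric
assert abs(np.linalg.det(K)) > 1e-9         # nondegenerate, as sl_2 is simple
```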

The Weyl Group

The Weyl group $W$ encodes the transformations between the roots of our Lie algebra $\mathsf{g}$, and is generated by maps $s_\alpha:\mathsf{h}_\mathbb{R}^*\to\mathsf{h}_\mathbb{R}^*$ for $\alpha\in\Phi$ (or equivalently $\alpha\in\Pi$) with

(4)
\begin{align} s_\alpha(\lambda)=\lambda-2\frac{\langle{\alpha,\lambda}\rangle}{\langle{\alpha,\alpha}\rangle}\alpha. \end{align}

This is essentially a reflection in the hyperplane orthogonal to $\alpha$. The Weyl group $W$ is finite and generates the root system given the fundamental roots: $\Phi=W(\Pi)$.
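The sketch below (assuming NumPy; the explicit coordinates for the fundamental roots of type $A_2$ are one common convention, not taken from the text) implements the reflection $s_\alpha$ and closes the fundamental roots under the generating reflections, recovering all six roots of $sl_3(\mathbb{C})$, in line with $\Phi=W(\Pi)$.

```python
import numpy as np

def s(alpha, lam):
    """The reflection s_alpha(lambda) = lambda - 2<alpha,lambda>/<alpha,alpha> alpha."""
    return lam - 2 * np.dot(alpha, lam) / np.dot(alpha, alpha) * alpha

# Fundamental roots of type A_2 in the plane, 120 degrees apart (one common choice).
alpha1 = np.array([1.0, 0.0])
alpha2 = np.array([-0.5, np.sqrt(3) / 2])
fundamental = [alpha1, alpha2]

# Close the fundamental roots under the generating reflections: Phi = W(Pi).
roots = {tuple(np.round(a, 6)) for a in fundamental}
while True:
    new = {tuple(np.round(s(a, np.array(r)), 6)) for a in fundamental for r in roots}
    if new <= roots:
        break
    roots |= new

print(sorted(roots))        # the 6 roots of A_2, i.e. of sl_3(C)
assert len(roots) == 6
```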

One can encode all the information about the Weyl group in a matrix. Define the Cartan matrix by $A_{ij}=2\frac{\langle{\alpha_i,\alpha_j}\rangle}{\langle{\alpha_i,\alpha_i}\rangle}$, so that $s_{\alpha_i}(\alpha_j)=\alpha_j-A_{ij}\alpha_i$; row $i$ thus records where the generator $s_{\alpha_i}$ sends each fundamental root $\alpha_j$. Since $s_{\alpha_i}(\alpha_j)$ is another root, it must be a $\mathbb{Z}$-combination of $\alpha_i$ and $\alpha_j$, and so $A_{ij}$ is an integer. Moreover, $A_{ii}=2$ for all $i$, and $A_{ij}\leq 0$ for $i\neq j$. The angle $\theta_{ij}$ between roots $\alpha_i$ and $\alpha_j$ is given by $4\cos^2\theta_{ij}=A_{ij}A_{ji}$. We can then define $n_{ij}=A_{ij}A_{ji}\in\{0,1,2,3\}$.

What this means is that there are only 4 possible angles between any two roots, and knowing the set of angles between the fundamental roots completely determines the configuration of the root system in $\mathsf{h}_\mathbb{R}^*$.
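To make these numbers concrete, the sketch below (assuming NumPy; the coordinates for the rank-2 fundamental roots are common conventions, not taken from the text) computes the Cartan matrix, the product $n_{12}=A_{12}A_{21}$, and the corresponding angle for the types $A_2$, $B_2$, and $G_2$.

```python
import numpy as np

def cartan_matrix(simple_roots):
    """A_ij = 2 <alpha_i, alpha_j> / <alpha_i, alpha_i>."""
    return np.array([[2 * np.dot(ai, aj) / np.dot(ai, ai) for aj in simple_roots]
                     for ai in simple_roots])

# Fundamental roots of the three rank-2 types (one common choice of coordinates each).
systems = {
    "A2": [np.array([1.0, 0.0]), np.array([-0.5, np.sqrt(3) / 2])],
    "B2": [np.array([1.0, -1.0]), np.array([0.0, 1.0])],
    "G2": [np.array([1.0, 0.0]), np.array([-1.5, np.sqrt(3) / 2])],
}

for name, roots in systems.items():
    A = cartan_matrix(roots)
    n12 = A[0, 1] * A[1, 0]                             # n_12 = A_12 A_21
    theta = np.degrees(np.arccos(-np.sqrt(n12) / 2))    # 4 cos^2(theta) = n_12, obtuse angle
    print(f"{name}: A = {np.round(A).astype(int).tolist()}, n_12 = {n12:.0f}, angle = {theta:.0f} deg")
```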

Completing the Classification: Dynkin Diagrams

We can encode the relationship between the fundamental roots described by the numbers $n_{ij}$ in a graph: the vertices are the fundamental roots and there are $n_{ij}$ edges between the roots $\alpha_i$ and $\alpha_j$. This graph is called a Dynkin diagram.
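Here is a small sketch of this step (the Cartan matrices are hard-coded illustrative inputs of ours): it reads off the edge multiplicities $n_{ij}$ from a Cartan matrix and also checks positive definiteness of the associated quadratic form, which is the condition used in the classification below.

```python
import numpy as np

def dynkin_edges(A):
    """Edge multiplicities n_ij = A_ij A_ji of the Dynkin diagram of a Cartan matrix A."""
    n = len(A)
    return {(i, j): int(A[i][j] * A[j][i]) for i in range(n) for j in range(i + 1, n)}

def form_is_positive_definite(A):
    """Check positive definiteness of the form Q: its matrix has 2 on the
    diagonal and -sqrt(n_ij) off the diagonal."""
    A = np.array(A, dtype=float)
    M = 2 * np.eye(len(A)) - np.sqrt(A * A.T) * (1 - np.eye(len(A)))
    return bool(np.all(np.linalg.eigvalsh(M) > 1e-9))

A3 = [[2, -1, 0], [-1, 2, -1], [0, -1, 2]]   # Cartan matrix of type A_3, i.e. sl_4(C)
G2 = [[2, -3], [-1, 2]]                      # Cartan matrix of type G_2 (one convention)

print(dynkin_edges(A3), form_is_positive_definite(A3))   # {(0, 1): 1, (0, 2): 0, (1, 2): 1} True
print(dynkin_edges(G2), form_is_positive_definite(G2))   # {(0, 1): 3} True
```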

How do we determine which graphs correspond to simple Lie algebras? It is obvious that the graph must be connected. It is also obvious that the form

(5)
\begin{align} Q(x_1,\ldots,x_n)=2\left\langle \sum_i \frac{x_i\alpha_i}{\sqrt{\langle{\alpha_i,\alpha_i}\rangle}}, \sum_j \frac{x_j\alpha_j}{\sqrt{\langle{\alpha_j,\alpha_j}\rangle}} \right\rangle \end{align}

is positive definite. Surprisingly, these two facts are enough to reduce us to the following graphs:

[Diagram of the admissible graphs omitted.]

The graphs uniquely determine $n_{ij}$, but not necessarily $A_{ij}$: if $n_{ij}$ is either 2 or 3, then one has either $A_{ij}>A_{ji}$ or $A_{ij}<A_{ji}$ (if $n_{ij}$ equals 0 or 1, then $A_{ij}=A_{ji}$). One may distinguish these cases by placing an arrow on the graphs with double and triple edges:

[Diagram of the graphs with arrows on the double and triple edges (the Dynkin diagrams) omitted.]

This essentially completes the classification; the one remaining ingredient is the (true) fact that two non-isomorphic simple Lie algebras cannot give rise to the same diagram. This allows us to list all the simple Lie algebras corresponding to the above diagrams:

  1. $A_n$ corresponds to the Lie algebra $sl_{n+1}(\mathbb{C})$;
  2. $B_n$ corresponds to the skew-symmetric matrices $so_{2n+1}(\mathbb{C})$, or matrices $X$ with $XA+AX^t=0$ for $A=\cdots$;
  3. $C_n$ corresponds to the symplectic Lie algebra $sp_{2n}(\mathbb{C})$, again the matrices $X$ with $XA+AX^t=0$ for $A=\cdots$;
  4. $D_n$ corresponds to the skew-symmetric matrices $so_{2n}(\mathbb{C})$.

The remaining five are the exceptional Lie algebras, with dimensions 14 ($G_2$), 52 ($F_4$), 78 ($E_6$), 133 ($E_7$), and 248 ($E_8$).
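As a numerical sanity check on the classical families (a sketch of ours; the bilinear forms used below, the identity matrix for the orthogonal families and the standard symplectic form for $C_n$, are standard choices, since the matrix $A$ is left unspecified above), one can compute $\dim\{X : XA+AX^t=0\}$ as a nullspace dimension and compare with the familiar formulas.

```python
import numpy as np

def dim_solution_space(A):
    """dim { X : X A + A X^T = 0 }, computed as the nullspace dimension of the
    linear map X -> X A + A X^T on m x m matrices."""
    m = A.shape[0]
    cols = []
    for i in range(m):
        for j in range(m):
            E = np.zeros((m, m))
            E[i, j] = 1.0
            cols.append((E @ A + A @ E.T).ravel())
    L = np.column_stack(cols)
    return m * m - np.linalg.matrix_rank(L)

n = 3
J = np.block([[np.zeros((n, n)), np.eye(n)], [-np.eye(n), np.zeros((n, n))]])  # symplectic form

print("A_n = sl_{n+1}:  ", (n + 1) ** 2 - 1,                      "expected", n * (n + 2))
print("B_n = so_{2n+1}: ", dim_solution_space(np.eye(2 * n + 1)), "expected", n * (2 * n + 1))
print("C_n = sp_{2n}:   ", dim_solution_space(J),                 "expected", n * (2 * n + 1))
print("D_n = so_{2n}:   ", dim_solution_space(np.eye(2 * n)),     "expected", n * (2 * n - 1))
```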

Going Further

The Road Ahead
