# Split Semisimple Linear Algebraic Groups of Type A_n

This post is an amalgamation of some of the various ideas I’ve had in the last year, I guess. A year ago I thought about writing a post going over some of the representation theory of algebraic groups. As the year in between taught me, that was a fool’s errand.

Instead, I’ve decided to write some posts that give details on the algebraic groups of type A — whatever that means at the moment if you haven’t heard it before. There are a couple of reasons why I want to do this: it’s good to work out some examples; I’ll have computations available for later; and I’d eventually like to have a pretty comprehensive list for the algebraic groups of types ABCDEFG, so why not start with A.

Since I’m repurposing an old blog, there might be a bit of heterogeneity in the content below. But I now realize that’s totally fine because this is a blog; it doesn’t have to be a treatise, or a textbook account, or have all the details.

### Preliminaries

Let’s start out with some vocabulary. The objects I want to talk about are certain group schemes. These are objects you should think of as being both a scheme and a group. This is the analog in scheme theory of the concept of a Lie group in, say, complex manifold theory. There are problems with this interpretation from the get-go, however. For example: how do you multiply a nonclosed generic point by a closed point? What does it mean to multiply two points of a scheme at all, given that the underlying set of a product of schemes is not the product of the underlying sets? And how does any of this form a group? This is, in my opinion, one of the biggest differences (or problems) one first encounters when trying to generalize the idea of a Lie group to a scheme.

So how do we get around this problem? Well, we can generalize our definitions. What are the basic properties of a group? They are the existence of an identity, inverses, and the ability to multiply. Since we can’t use points directly, we’ll have to try to capture these properties with morphisms instead.

(1.1) Definition: A group scheme over a scheme $X$ is a group object in the category of schemes over $X$. This means a group scheme $G$ over $X$ is a scheme $G\xrightarrow{\pi} X$ with morphisms $\mu:G\times_X G\rightarrow G$, $i:G\rightarrow G$, and $e:X\rightarrow G$ such that the following equalities hold:

a) $\mu\circ(\mu\times\text{id})= \mu \circ (\text{id}\times \mu)$ (associativity)
b) $\mu\circ (\text{id},i)=\mu\circ(i,\text{id})=e\circ \pi$ (inverses)
c) $\mu\circ (e\times \text{id})= \mu\circ (\text{id}\times e)= \rho$ (identity); here $\rho$ is the canonical isomorphism $X\times_X G\rightarrow G$ provided by the fibered product.

The examples of group schemes this post will focus on are called split semisimple linear algebraic groups of type A. The type A part may be familiar from other areas of Lie theory — it comes from the root data we associate to our algebraic group. When I say algebraic group, I really just mean a group scheme over $k$, where $k$ is some field, and being over $k$ is shorthand for being over $\text{Spec}(k)$. But, to handle the simplest case first, we only want to deal with finite dimensional objects. This means we also impose a finite type (read as: finite dimensional) assumption when we speak about algebraic groups. Formally, we have just said:

(1.2) Definition: an algebraic group is a group scheme of finite type over a field $k$.

Before I talk about why these definitions give us the ability to think of a scheme as a group, let me talk a little about its implications in the case $G$ is an affine algebraic group (when $G$ is a group scheme over an arbitrary scheme $X$, the same conclusions could be made locally when $G$ is affine over $X$. I won’t work in this generality. The most general situation I’ll consider is when $X$ is affine, and $G$ is affine over $X$ which, in particular, implies $G$ is affine).

Because of the equivalence between commutative rings and affine schemes, whenever $G$ is affine we get maps of rings going in the other direction, satisfying the dual conditions of definition 1.1. The ring associated to an affine group scheme is then called a Hopf algebra.

(1.3) Definition: let $R$ be a commutative ring. An $R$-Hopf algebra is an $R$-algebra $R\xrightarrow{\pi} A$ together with comultiplication, coinversion, and counit maps: $\Delta:A\rightarrow A\otimes_R A$, $\iota:A\rightarrow A$, $\epsilon:A\rightarrow R$ which satisfy the relations

a) $(\text{id}\otimes \Delta) \circ \Delta = (\Delta\otimes\text{id})\circ \Delta$ (coassociativity)
b) $m\circ(\iota\otimes \text{id})\circ \Delta = m\circ(\text{id}\otimes \iota)\circ \Delta = \pi\circ \epsilon$ (inverses); here $m:A\otimes_R A\rightarrow A$ is the multiplication of $A$.
c) $(\epsilon \otimes \text{id}) \circ \Delta = \rho^{-1}$ and $(\text{id}\otimes \epsilon)\circ \Delta = (\rho')^{-1}$ (identity); here $\rho:R\otimes_R A\rightarrow A$ and $\rho':A\otimes_R R\rightarrow A$ are the canonical isomorphisms provided by the tensor product.

We’ll really only need to use Hopf algebras to show some properties of the underlying scheme of the algebraic group we are talking about. Possibly in another post (to be added), I’ll work out an example of how one can work with an algebraic group completely ring/scheme theoretically (but really only a simple one, probably $\mathbb{G}_{a,\mathbb{R}}$ or $\mathbb{G}_{m,\mathbb{R}}$ because I’ve done it before).

Now let’s return to why these properties in particular should capture the group theoretic properties an algebraic group is supposed to have. Oftentimes it’s easier to work directly with the functor defined by the “points” of an algebraic group instead of the algebraic group itself. More generally, every scheme $X$ defines a functor $h_X: (\mathsf{Sch}/k)^{op}\rightarrow \mathsf{Set}$, and the group schemes can be identified as those schemes $X$ for which this functor factors as $h_X:(\mathsf{Sch}/k)^{op}\rightarrow \mathsf{Grp}\rightarrow \mathsf{Set}$, with the last arrow the forgetful functor (taking a group to its underlying set).

To see this, look again at the morphisms of definition 1.1. Try to convince yourself that, if these were given on a set $G$ instead of a scheme $G$, then they would determine the structure of a group. Now observe, via the functor description of a scheme $X$, the same arrows translate into natural transformations of functors

$\text{Hom}(-,G\times_X G)\simeq \text{Hom}(-,G)\times_{\text{Hom}(-,X)}\text{Hom}(-,G)\rightarrow \text{Hom}(-,G)$

and so on… If we plug a scheme $Y$ into these functors, then these really are just maps of sets, and from here you’ve convinced yourself these maps determine a group structure (but now on the set $\text{Hom}(Y,G)$).

From this we can work more categorically. I’ll summarize the main points of what we will need in the below paragraphs but, if you want to see more detail then take a look at Alex Youcis’ blog here.

Choose a scheme $X$. As I mentioned above, this defines a functor $h_X:= \text{Hom}_{\mathsf{Sch}}(-, X): (\mathsf{Sch}/k)^{op}\rightarrow \mathsf{Set}$. We can construct a category whose objects are functors from $(\mathsf{Sch}/k)^{op}$ to $\mathsf{Set}$ and whose morphisms are natural transformations of functors. Let’s call this category $\widehat{\mathsf{Sch}/k}$ and define a functor $\omega:\mathsf{Sch}/k\rightarrow \widehat{\mathsf{Sch}/k}$ as the pair of maps $\text{Obj}(\mathsf{Sch}/k)\rightarrow \text{Obj}(\widehat{\mathsf{Sch}/k})$, $X\mapsto h_X$, and $\text{Mor}(\mathsf{Sch}/k)\rightarrow \text{Mor}(\widehat{\mathsf{Sch}/k})$, $(X\xrightarrow{f} Y)\mapsto (h_X\xrightarrow{f\circ -} h_Y)$. With this notation it’s possible to show

(1.4) Proposition: The morphism $\omega: \mathsf{Sch}/k\rightarrow \widehat{\mathsf{Sch}/k}$ is a fully faithful embedding.

This is a great result for many reasons. Here are two that we’ll use fairly often. Proposition 1.4 says that to give a morphism of algebraic groups $G\rightarrow H$ it is equivalent to give a morphism of groups $h_G(Y)\rightarrow h_H(Y)$ for every scheme $Y$, naturally in $Y$. It’s often much easier to do the latter: checking that we have defined a group homomorphism for every scheme $Y$ beats checking that we have defined a morphism of schemes commuting with the group structures. The second use of prop. 1.4 is: say we define a functor $F:(\mathsf{Sch}/k)^{op}\rightarrow \mathsf{Grp}$ and we can find a scheme $X$ such that $F(Y)=h_X(Y)$ for all schemes $Y$. Then $X$ is, up to isomorphism, the only scheme with this property. Said differently: schemes are uniquely determined by the functors they define. We’ll use both of these properties quite frequently.

But, before we continue, we can simplify this even further. One might wonder if we really need to check all schemes $Y$ for these results to be true (maybe because general schemes are just affine schemes glued together). It turns out that we don’t. It’s sufficient to check these conditions on the affine schemes only. Formally,

(1.5) Lemma: Any functor $F:(\mathsf{Sch}/k)^{op}\rightarrow \mathsf{Set}$ which is representable by a scheme is uniquely determined, up to isomorphism, by its restriction to $(\mathsf{Aff}/k)^{op}$.

Now, with a given scheme $G$, I want to emphasize the scheme and not the functor $h_G$, so I’ll use (as is common practice) the notation $G(Y):=h_G(Y):=\text{Hom}_{\mathsf{Sch}}(Y,G)$. Other notational conventions I’ve chosen in this blog are as follows. An arbitrary field is denoted $k$, and a finite type $k$-algebra is denoted $R$. I use $G_R$ for the base change of $G\rightarrow \text{Spec}(k)$ along $\text{Spec}(R)\rightarrow \text{Spec}(k)$. To avoid writing $\text{Spec}(-)$ more than I want to, I’ll often identify $R$ and $\text{Spec}(R)$ unless I need both simultaneously.

To conclude this section, we’ll end with some examples (some of which we’ll end up considering in the detail of the structure theory of algebraic groups in the following sections).

(1.6) Example: Define the functor $\mathbb{G}_a:(\mathsf{Sch}/k)^{op}\rightarrow \mathsf{Set}$ by $Y\mapsto \mathcal{O}_Y(Y)$. We say it is represented by the scheme $\mathbb{A}^1$ since, on affine schemes $\text{Spec}(R)$, we have

$h_{\mathbb{A}^1}(\text{Spec}(R))=\text{Hom}_{\mathsf{Sch}}(\text{Spec}(R),\mathbb{A}^1)=\text{Hom}_{\mathsf{Alg}}(k[X],R)=R:=\mathbb{G}_a(\text{Spec}(R)).$

The only nontrivial equality in the above is a result of the universal mapping property of $k[X]$. This allows us to identify any morphism $k[X]\rightarrow R$ with the image of $X$. We then give the set $\text{Hom}_{\mathsf{Alg}}(k[X],R)$ the group structure $(X\mapsto r)+(X\mapsto s)=(X\mapsto r+s)$ so that the functor factors through $\mathsf{Grp}\rightarrow \mathsf{Set}$. This is called the additive group.

I used some formal terminology in the above, so I might as well include it as a definition.

(1.7) Definition: A functor $F:\mathsf{C}^{op}\rightarrow \mathsf{Set}$ is represented by an object $X\in \text{Obj}(\mathsf{C})$ if there is a natural isomorphism $F\cong h_X=\text{Hom}_\mathsf{C}(-,X)$.

(1.8) Example: Define the functor $V_n$ by $V_n(Y)=\mathcal{O}_Y(Y)^{\times n}$. A similar calculation to the above shows $V_n$ is represented by $\mathbb{A}^n$. Note for an affine scheme $\text{Spec}(R)$ we find $V_n(\text{Spec}(R))=R^n=k^n\otimes_k R$. This functor is called the vector group of dimension $n$. Most of the time I won’t write the subscript $n$; instead I’ll say $V$ has dimension $n$, sticking with the convention for vector spaces.

(1.9) Example: For a vector space $V$, the functor $GL(V)$ is defined on affines $\text{Spec}(R)$ by $\text{Spec}(R)\rightsquigarrow Aut_{R-\text{linear}}(V_R)$. When $V$ is finite dimensional then the choice of a basis for $V$ shows it is represented by the affine scheme $A=\text{Spec}(k[X_{11},X_{12},...,X_{nn},1/\det])$ where $\det$ is the determinant of the $n\times n$ matrix in the variables $X_{11},...,X_{nn}$ (e.g. for $n=2$ the determinant is $X_{11}X_{22}-X_{12}X_{21}$). If we’ve chosen a basis we’ll write $GL_n$ instead of $GL(V)$. To see this functor is represented by $A$, we can proceed as in (Ex. 1.6.):

$h_A(\text{Spec}(R))=\text{Hom}_{\mathsf{Sch}}(\text{Spec}(R),A)=\text{Hom}_{\mathsf{Alg}}(k[X_{11},...,X_{nn}]_{\det},R)$

By the universal property of localization, $\text{Hom}_{\mathsf{Alg}}(k[X_{11},...,X_{nn}]_{\det},R)$ is the subset of $\text{Hom}_{\mathsf{Alg}}(k[X_{11},...,X_{nn}],R)=R^{n^2}$ such that $\det(X_{11},...,X_{nn})$ is invertible. We give this set the group structure of matrix multiplication. This is the general linear group. The special case $GL_1$ is written $\mathbb{G}_m$ and called the multiplicative group.
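To see how the “invertible determinant” condition plays out over a ring that isn’t a field, here is a brute-force sketch of mine over $R=\mathbb{Z}/6$: a $2\times 2$ matrix is invertible exactly when its determinant is a unit, and the count of such matrices agrees with $|GL_2(\mathbb{Z}/6)|=|GL_2(\mathbb{Z}/2)|\cdot|GL_2(\mathbb{Z}/3)|=6\cdot 48=288$, as predicted by the Chinese remainder theorem.

```python
# Brute force over R = Z/6 (illustration, not the post's construction):
# a matrix is a point of GL_2 over R exactly when det is a unit, with
# inverse given by det^{-1} times the adjugate.
from itertools import product

R = 6
units = {u for u in range(R) if any((u * v) % R == 1 for v in range(R))}

count = 0
for a, b, c, d in product(range(R), repeat=4):
    D = (a * d - b * c) % R
    if D in units:
        count += 1
        Dinv = next(v for v in range(R) if (D * v) % R == 1)
        # adjugate inverse: det^{-1} * [[d, -b], [-c, a]]
        m00, m01 = (Dinv * d) % R, (-Dinv * b) % R
        m10, m11 = (-Dinv * c) % R, (Dinv * a) % R
        # verify M * M^{-1} = I over Z/6
        assert (a * m00 + b * m10) % R == 1 and (a * m01 + b * m11) % R == 0
        assert (c * m00 + d * m10) % R == 0 and (c * m01 + d * m11) % R == 1
```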

Remark: In the above example, $GL(V):(\mathsf{Sch}/k)^{op}\rightarrow \mathsf{Grp}$ is a group valued functor for any vector space $V$, and we can always write down natural transformations to or from it, i.e. morphisms in the target category of (Prop 1.4.). But we have only shown such a natural transformation is a morphism of algebraic groups when $V$ is finite dimensional and the other functor involved is represented by a scheme.

(1.10) Example: The special linear group $SL_n$ is the closed subscheme of $GL_n$ defined by the ideal $(\det-1)$; its coordinate ring is the quotient $k[X_{11},...,X_{nn}]/(\det-1)$ (the localization at $\det$ is now redundant, since $\det\equiv 1$ is already invertible). The group scheme structure is the same as in example (1.9). In particular, it is a sub-group scheme of $GL_n$.

### Algebraic groups – type A

From here on out including any background is pretty much out of the question. I’m going to assume everything I know, and a couple of things I don’t know, about the structure theory of algebraic groups. All of our schemes are going to be over a fixed but arbitrary field $k$, and all of our group schemes are going to be affine (these are also called linear; I usually refer to them just as algebraic groups). The starting point for this series of posts is going to be the following type of theorem.

(2.1) Theorem: split semisimple linear algebraic groups $G$ are classified, up to isomorphism, by the pair made of: the root system of $G$ and the fundamental group of $G$.

The secondary starting point, and only for this post, is going to be the following lemma.

(2.2) Lemma: $SL_{n+1}$ is semisimple of type $A_n$.

This will be proved later in this post (just above Theorem 2.8). But for now let’s take it on faith and study some of the geometric properties of $SL_{n}$. There are multiple proofs of the following, and I’m not sure which is the cleanest, but here is one that works, at least in the case of $SL_{n}$.

(2.3) Lemma: $SL_{n}$ is smooth and connected.

Proof. Since $SL_{n}$ is of finite type over a field, to show it is smooth it suffices to show it is geometrically regular. And since we’ve already seen $SL_{n}$ is an algebraic group (example 1.10), it’s in fact sufficient to show $SL_{n}$ is geometrically reduced: a geometrically reduced group scheme of finite type over a field is smooth, by translating the smooth locus around with the group action. For this, it’s enough to show that, after base changing to an algebraic closure of $k$, the polynomial $\det-1$ is irreducible; the coordinate ring is then a domain, which gives both reducedness and connectedness. Here the variables $x_{11},...,x_{nn}$ are independent indeterminates (which was really implicit in my definition of $SL_n$ in the first place). Note that every monomial of $\det$ contains exactly one entry from each row and each column, so $\det-1$ has degree 1 in each variable $x_{ij}$ and contains no monomial divisible by two entries from the same row or the same column. Now suppose $\det-1=fg$. By the degree condition, each variable appears in exactly one of $f,g$; say $x_{11}$ appears in $f$. If $x_{12}$ appeared in $g$, then $fg$ would contain a monomial divisible by $x_{11}x_{12}$ (the product of the leading terms in these variables cannot cancel), contradicting the observation above. So $x_{12}$ appears in $f$, and repeating this argument along rows and columns shows every variable appears in $f$. Then $g$ contains no variables at all, so $g$ is a nonzero constant, and $\det-1$ is irreducible.$\square$
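The irreducibility argument can be spot-checked computationally for small $n$; this is a sketch of mine using sympy’s multivariate factorization, not part of the proof:

```python
# Spot check (small n only): det - 1 is irreducible, via sympy's
# multivariate factorization.
import sympy as sp

def det_minus_one_irreducible(n):
    xs = sp.symbols(f'x1:{n * n + 1}')
    p = sp.expand(sp.Matrix(n, n, xs).det() - 1)
    const, factors = sp.factor_list(p)
    # irreducible: unit content and a single factor of multiplicity one
    return const in (1, -1) and len(factors) == 1 and factors[0][1] == 1

ok = [det_minus_one_irreducible(n) for n in (2, 3)]
```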

This also determines the dimension of $SL_n$: the hypersurface $\det-1=0$ has codimension 1, so $\dim SL_n=\dim GL_n-1$. Since $GL_n$ is a dense open subscheme of the irreducible variety $\mathbb{A}^{n^2}=\text{Spec}(k[x_{11},...,x_{nn}])$, we have $\dim GL_n=n^2$. We get, as a corollary,

(2.4) Corollary: $SL_n$ has dimension $n^2-1$.

Proof. Given above.$\square$

Unfortunately that’s about as far as I can go in terms of the geometry of this algebraic group. It would be interesting if one could say more (the cohomology ring over $k=\mathbb{C}$ has been computed, for example, so we should be able to read off geometric information from there; for arbitrary fields I don’t know what one can say).

In a different direction, we could examine the algebraic structure of $SL_n$.

(2.5) Lemma: the diagonal subgroup variety $T$ of $SL_n$ is a split maximal torus. It can be realized either as $T:=\text{Spec}(\mathcal{O}_{SL_n}(SL_n)/(x_{ij})_{i\neq j})$ or as the subfunctor consisting of diagonal matrices.

Proof. We want to show that $T$ is isomorphic with some power of $\mathbb{G}_m$. Using the functor approach this is pretty straightforward, just define an isomorphism $\mathbb{G}_m^{n-1}\rightarrow T$ by

$(a_1,...,a_{n-1})\mapsto \begin{pmatrix} a_1 & & & \\ & \ddots & & \\ & & a_{n-1} & \\ &&& (a_1\cdots a_{n-1})^{-1}\end{pmatrix}.$

Hence this torus is split. To see that $T$ is maximal, we will show the centralizer of $T$ in $SL_n$ is $T$ itself. For this we’ll use the fact that the Lie algebra of the centralizer equals the fixed points of the action of $T$ on the Lie algebra of $G$. It will turn out that this fixed-point space has dimension $n-1$, which completes the proof: the centralizer of a torus is smooth and connected, so $\text{Lie}(T)=\text{Lie}(C_G(T))$ implies $T=C_G(T)$. The rest is proved in the following lemma.$\square$
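For concreteness, here is a small sympy check of mine (with $n=3$) that the displayed map really lands in $SL_3$ and is a group homomorphism:

```python
# Checking the displayed map for n = 3: (a1, a2) maps to
# diag(a1, a2, (a1*a2)^{-1}), which has determinant 1, and the map
# is a group homomorphism G_m^2 -> T.
import sympy as sp

a1, a2, b1, b2 = sp.symbols('a1 a2 b1 b2', nonzero=True)

def torus(u, v):
    return sp.diag(u, v, 1 / (u * v))

det_is_one = sp.simplify(torus(a1, a2).det() - 1) == 0
is_hom = (torus(a1, a2) * torus(b1, b2)
          - torus(a1 * b1, a2 * b2)).applyfunc(sp.simplify) == sp.zeros(3, 3)
```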

(2.6) Lemma: the Lie algebra of $SL_n$ is isomorphic with the $k$-span of $E_{ij}$ and $E_{ii}-E_{jj}$ where $i\neq j$ and $i,j$ run over the numbers from $1,...,n$. The fixed points of the action of $T$ on $\text{Lie}(SL_n)$ are exactly the vectors $E_{ii}-E_{jj}$.

Proof. The vector $E_{ij}$ represents the matrix with a 1 in the $(i,j)$ entry and 0’s everywhere else. Recall the Lie algebra of an algebraic group $G$ is defined to be the group-kernel of the map $G(k[\varepsilon]/(\varepsilon^2))\rightarrow G(k)$ given by composing with the closed immersion $\text{Spec}(k)\rightarrow \text{Spec}(k[\varepsilon]/(\varepsilon^2))$ corresponding to $\varepsilon\mapsto 0$. This is the tangent space at the identity of the group. To compute the Lie algebra of $SL_n$ we first compute the Lie algebra of $GL_n$. Its elements are the matrices $A$ with values in $k[\varepsilon]/(\varepsilon^2)$, with invertible determinant, which reduce to $I_n$ when $\varepsilon\mapsto 0$. These are exactly the matrices of the form $A=I_n+B\varepsilon$ where $B$ is an arbitrary matrix of $M_{n\times n}(k)$. Multiplying any two matrices of $\text{Lie}(GL_n)$, say $A=I_n+B\varepsilon$ and $A'=I_n+B'\varepsilon$, gives

$AA'=(I_n+B\varepsilon)(I_n+B'\varepsilon)=I_n+(B+B')\varepsilon + BB'\varepsilon^2= I_n+(B+B')\varepsilon$

which allows us to define an isomorphism $\text{Lie}(GL_n)\cong M_{n\times n}(k)$, $I_n+B\varepsilon\mapsto B$, identifying the group structure with addition of $n\times n$ matrices over $k$.
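This computation is easy to replicate symbolically; the following sketch of mine checks the product formula for $n=2$ by treating $\varepsilon$ as a formal symbol and truncating $\varepsilon^2=0$:

```python
# Formal-epsilon check of the product formula for n = 2:
# expand each entry, then truncate eps^2 = 0.
import sympy as sp

eps = sp.Symbol('eps')
B = sp.Matrix(2, 2, sp.symbols('b1:5'))
C = sp.Matrix(2, 2, sp.symbols('c1:5'))
I2 = sp.eye(2)

prod = ((I2 + B * eps) * (I2 + C * eps)).applyfunc(
    lambda e: sp.expand(e).subs(eps**2, 0))
product_ok = (prod - (I2 + (B + C) * eps)).applyfunc(sp.expand) == sp.zeros(2, 2)
```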

We identify $\text{Lie}(SL_n)$ with the subset of $\text{Lie}(GL_n)$ consisting of those $A$ with determinant 1. If $n=2$, it’s easy to see the determinant of $A=I_n+B\varepsilon=I_n+(a_{ij})\varepsilon$ equals $1+(a_{11}+a_{22})\varepsilon=1+\text{tr}(B)\varepsilon$. In general, computing the determinant of $A=I_n+B\varepsilon$ by cofactor expansion along the top row gives $\det(A)= (1+a_{11}\varepsilon)g_{11}+a_{12}\varepsilon\, g_{12}+\cdots + a_{1n}\varepsilon\, g_{1n}$ with $g_{1j}$ the appropriate signed minors. For $j>1$ the minor $g_{1j}$ contains a column all of whose entries are multiples of $\varepsilon$, so $a_{1j}\varepsilon\, g_{1j}$ vanishes; and by induction $g_{11}=1+(a_{22}+\cdots+a_{nn})\varepsilon$, so $\det(A)=1+\text{tr}(B)\varepsilon$. Hence $A$ is an element of $\text{Lie}(SL_n)$ if and only if $\text{tr}(B)=0$. Under the isomorphism $\text{Lie}(GL_n)\cong M_{n\times n}(k)$ we’ve identified $\text{Lie}(SL_n)$ with the matrices having trace 0.

The claim that $\text{Lie}(SL_n)$ is spanned by the $E_{ij}$ ($i\neq j$) and the $E_{ii}-E_{jj}$ can then be seen as follows: their span is a vector space of dimension $n^2-1$ inside the trace-zero matrices, while $\text{Lie}(SL_n)$ is the kernel of the surjective map $\text{tr}:\text{Lie}(GL_n)\rightarrow k$, which also has dimension $n^2-1$. So they must be equal.
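Both claims (the determinant formula and the spanning claim) can be spot-checked for $n=3$ with sympy; this sketch is mine, not part of the proof:

```python
# Two spot checks for n = 3: det(I + B*eps) = 1 + tr(B)*eps modulo eps^2,
# and the matrices E_ij (i != j) together with E_ii - E_33 span the
# trace-zero matrices, a space of dimension n^2 - 1 = 8.
import sympy as sp

eps = sp.Symbol('eps')
B = sp.Matrix(3, 3, sp.symbols('b1:10'))
d = sp.expand((sp.eye(3) + B * eps).det())
det_formula = sp.expand(d.subs({eps**3: 0, eps**2: 0})
                        - (1 + B.trace() * eps)) == 0

def E(i, j):
    M = sp.zeros(3, 3)
    M[i, j] = 1
    return M

basis = [E(i, j) for i in range(3) for j in range(3) if i != j]
basis += [E(i, i) - E(2, 2) for i in range(2)]
# flatten each matrix into a row and compute the rank of the span
span_dim = sp.Matrix([list(M) for M in basis]).rank()
```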

Finally, we come to the torus action. $T$ acts on $SL_n$ via conjugation, $T\times SL_n\rightarrow SL_n$. We have inclusions $G(R)\subset G(R[\varepsilon]/(\varepsilon^2))$ for every finite type $k$-algebra $R$ and every algebraic group $G$, as well as inclusions $G(k[\varepsilon]/(\varepsilon^2))\subset G(R[\varepsilon]/(\varepsilon^2))$. Evaluating the conjugation map on $R[\varepsilon]/(\varepsilon^2)$-points then gives an action $T(R[\varepsilon]/(\varepsilon^2))\times SL_n(R[\varepsilon]/(\varepsilon^2))\rightarrow SL_n(R[\varepsilon]/(\varepsilon^2))$, and the inclusions $T(R)\subset T(R[\varepsilon]/(\varepsilon^2))$ and $\text{Lie}(SL_n)\subset SL_n(k[\varepsilon]/(\varepsilon^2))\subset SL_n(R[\varepsilon]/(\varepsilon^2))$ define the action of $T$ on $\text{Lie}(SL_n)$. In short, it’s just conjugation. If $t=\text{diag}(t_1,...,t_{n-1},(t_1\cdots t_{n-1})^{-1})\in T(R)$ is a diagonal matrix, multiplying out gives $tE_{ij}t^{-1}=(t_i/t_j)E_{ij}$, so the fixed vectors are exactly the span of the $E_{ii}-E_{jj}$, as claimed.$\square$
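The closing computation can be verified symbolically for $n=3$ (my sketch, not the proof’s argument):

```python
# Verifying t E_ij t^{-1} = (t_i/t_j) E_ij for n = 3, and that diagonal
# trace-zero vectors are fixed by the conjugation action.
import sympy as sp

t1, t2, t3 = sp.symbols('t1 t2 t3', nonzero=True)
t = sp.diag(t1, t2, t3)
ts = [t1, t2, t3]

def E(i, j):
    M = sp.zeros(3, 3)
    M[i, j] = 1
    return M

def is_zero(M):
    return M.applyfunc(sp.simplify) == sp.zeros(3, 3)

weights_ok = all(is_zero(t * E(i, j) * t.inv() - (ts[i] / ts[j]) * E(i, j))
                 for i in range(3) for j in range(3) if i != j)
fixed_ok = is_zero(t * (E(0, 0) - E(2, 2)) * t.inv() - (E(0, 0) - E(2, 2)))
```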

Our next goal is to use this torus to study the root system of $SL_n$ (or $SL_{n+1}$). We’ll start by determining the Weyl group for $SL_n$.

(2.7) Lemma: The Weyl group of $SL_n$ is isomorphic with $S_n$.

Proof. We can do this in two ways. I want to do it both ways because they give different information and both are useful. The first way is directly from the definition: the Weyl group of an algebraic group $G$ with respect to a split maximal torus $T$ is the group $W(G,T):=N_G(T)/T$. This is a constant group scheme, so we can identify it with its rational points, or with its points over an algebraic closure, so that we can work directly with a quotient of groups. I’ll just give explicit representatives for this group, and defer the check that they are enough until we look at the second method of proof. We work with the maximal torus described in lemma 2.5.

Let $s_{ij}$ be the permutation matrix which swaps rows $i$ and $j$. This isn’t an element of $SL_n(\overline{k})$, but it is after a sign change: multiply one of the two swapped rows of $s_{ij}$ by $-1$ (the choice doesn’t matter; the results are equivalent modulo $T$). These modified matrices normalize $T$, and as $i,j$ run over $1,...,n$ their products give $n!$ distinct classes modulo $T(\overline{k})$: no two products representing different permutations are equivalent mod $T(\overline{k})$, as one sees by comparing the positions of the 0’s in the matrices. To see that these are all the representatives $W(SL_n,T)$ needs, we will compute this group another way.
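Here is a quick symbolic check of mine (for $n=3$) that the modified permutation matrix for the transposition $(1\,2)$ has determinant 1 and conjugates the torus by permuting its entries:

```python
# The representative of the transposition (1 2) in SL_3: a permutation
# matrix with one row negated has determinant 1, and conjugating a torus
# element by it swaps the first two diagonal entries.
import sympy as sp

t1, t2, t3 = sp.symbols('t1 t2 t3', nonzero=True)
s12 = sp.Matrix([[0, -1, 0],
                 [1,  0, 0],
                 [0,  0, 1]])

in_SL3 = s12.det() == 1
conj = (s12 * sp.diag(t1, t2, t3) * s12.inv()).applyfunc(sp.simplify)
swaps_entries = conj == sp.diag(t2, t1, t3)
```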

Note that the character group $X^*(T)=\text{Hom}(T,\mathbb{G}_m)$ of the split maximal torus of $SL_n$ is a free abelian group of rank $n-1$. Under the isomorphism $T\cong \mathbb{G}_m^{n-1}$ described above, it is generated by the characters $\alpha_i: \text{diag}(t_1,...,t_{n-1},(t_1\cdots t_{n-1})^{-1})\mapsto t_i$ for $i=1,...,n$, subject to the single relation $\alpha_1+\cdots+\alpha_n=0$ (equivalently, $\alpha_n=-\alpha_1-\cdots-\alpha_{n-1}$); the $\alpha_i$ with $i\in[1,n-1]$ form a basis. On the other hand, the cocharacter group $X_*(T)=\text{Hom}(\mathbb{G}_m,T)$ is spanned by the elements $\beta_i-\beta_j$, where $\beta_i:\mathbb{G}_m\rightarrow GL_n$ is the map $t\mapsto \text{diag}(1,...,t,...,1)$ with $t$ in the $i$th spot and 1’s elsewhere (note $\beta_i$ itself does not land in $SL_n$, but the differences $\beta_i-\beta_j$ do). In other words: if $A=\mathbb{Z}^n/\mathbb{Z}(1,...,1)$ is the lattice spanned by the $\alpha_i$ modulo the relation $\alpha_1+\cdots+\alpha_n=0$, and $A^\vee\subset \mathbb{Z}^n$ is the sublattice of the lattice spanned by the $\beta_i$ consisting of elements whose coefficients sum to 0, then $X^*(T)\cong A$ and $X_*(T)\cong A^\vee$, and the two are naturally dual via the pairing $(\alpha_i,\beta_j)=\delta_{ij}$.

Anyways, now we want to look at the root and weight lattices. The root lattice is the sublattice of $X^*(T)$ spanned by the roots. Going back to our calculation of the action of $T$ on $\text{Lie}(SL_n)$, we find $tE_{ij}t^{-1}=(t_i/t_j)E_{ij}$. Hence the root lattice is spanned by the characters $\alpha_i-\alpha_j$. This has index $n$ in $X^*(T)$, but I won’t check this at the moment. Via the above pairing, the coroots are exactly the elements $\beta_i-\beta_j$, which span $X_*(T)$. This also shows $SL_n$ will be simply connected (once we have shown it is semisimple).

The final piece of this proof is to show the reflections $r_{\alpha_i-\alpha_j}$ generate $S_n$. Then, as this group is isomorphic with $W(SL_n,T)$, we will have shown the Weyl group is $S_n$. We will also have found explicit representatives of $W(SL_n,T)$, since we originally found $n!$ distinct elements and this shows there can be at most $n!$.

Now observe that $r_{\alpha_i-\alpha_j}$ acts on characters by swapping $\alpha_i$ and $\alpha_j$, i.e. as the transposition $(i\,j)$. These reflections therefore generate a subgroup of $S_n$ containing all transpositions, and it is known the transpositions generate the entire group. So we are done.$\square$

That was a lot of work for seemingly not very much reward. But we’re now ready to prove Lemma 2.2. Actually, we could have proved the first part of 2.2 at any time, but the second part had to wait until the computation of the Weyl group was complete. I decided to do them together for whatever arbitrary reason.

Proof of 2.2. That $SL_n$ is semisimple follows from the Lie-Kolchin theorem. Change base to an algebraic closure. The radical is a connected solvable subgroup, so by Lie-Kolchin we may conjugate it into the subgroup of upper-triangular matrices. Since the radical is normal, it is stable under conjugation, and conjugating by the permutation matrix reversing the standard basis shows it must also consist of lower-triangular matrices. Hence it is diagonal. Let $t=\text{diag}(t_1,...,t_n)$ be a matrix contained in the radical of $SL_n(\overline{k})$. For $i\neq j$ a computation shows $(1+E_{ij}) t (1-E_{ij})= (1+E_{ij})(t-t_iE_{ij})=t+(t_j-t_i)E_{ij}$, which is diagonal only when $t_i=t_j$. Hence all the entries of $t$ are the same, so the radical is contained in the scalar matrices, which for $SL_n$ form the group $\mu_n$. The only connected subgroup variety of $\mu_n$ is the trivial group, which shows $SL_n$ is semisimple. To see $SL_{n+1}$ has root system of type $A_n$, we start by finding a base of the root system. Relative to the Borel subgroup of upper-triangular matrices, the simple roots are the $\gamma_i=\alpha_i-\alpha_{i+1}$ for $i=1,...,n$. The Cartan integers are $\langle\gamma_i,\gamma_j^\vee\rangle=2$ when $i=j$, $-1$ when $|i-j|=1$, and $0$ otherwise. This is exactly the Cartan matrix of the root system of type $A_n$.$\square$
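The Cartan integers can be recomputed by modeling the simple roots as $\gamma_i=e_i-e_{i+1}$ inside $\mathbb{Z}^{n+1}$ with the standard inner product (a sympy sketch of mine, for $n=3$):

```python
# Recovering the A_3 Cartan matrix from gamma_i = e_i - e_{i+1} in Z^4;
# the Cartan integers are 2(g_i, g_j)/(g_j, g_j).
import sympy as sp

n = 3
def gamma(i):  # e_i - e_{i+1}, 0-indexed column vector in Z^{n+1}
    v = sp.zeros(n + 1, 1)
    v[i], v[i + 1] = 1, -1
    return v

cartan = sp.Matrix(n, n, lambda i, j:
                   2 * gamma(i).dot(gamma(j)) / gamma(j).dot(gamma(j)))
```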

I’m going to end this section with some facts about the family of type $A_n$ that I don’t want to prove at the moment (or don’t know how to prove at the moment).

(2.8) Theorem: The algebraic group $SL_{n+1}$ is the simply connected semisimple algebraic group of type $A_n$. It is almost simple with center $\mu_{n+1}$. The adjoint algebraic group of type $A_n$ is $PSL_{n+1}:=SL_{n+1}/\mu_{n+1}$. The adjoint algebraic group is simple (I think; at least its group of rational points is, over a field with at least four elements). Thus, the almost simple algebraic groups of type $A$ are the quotients of $SL_{n+1}$ by the subgroups $\mu_d$ for varying $n$ and divisors $d$ of $n+1$.

### Homogeneous varieties – type A

This section is, truth be told, the ulterior motive of this whole post. Unfortunately, it’s a broad enough topic I don’t want to include all of the details. Instead, this section will focus on examples. Moreover, all of the examples I’ll work out will be split (as opposed to twisted — what this means is I will consider Grassmannians and Flag varieties instead of their twisted forms, e.g. Severi Brauer varieties will not be discussed).

(3.1) Definition: Let $V$ be a vector space over $k$. Define the functor $\text{Gr}(n,V)$ on points as

$\text{Gr}(n,V)(R)=\{ W\subset V\otimes_k R : W \text{ is a projective } R\text{-module summand of rank } n\}$

This is called the Grassmannian of $n$-planes in $V$.

To be more explicit about the defining conditions of this functor, note that $V\otimes_k R$ is an $R$-module via the right tensor factor. Then $W$ is an element of this set if $W$ is an $R$-submodule of $V\otimes_k R$ which is projective of rank $n$ and for which there is an $R$-submodule $W'$ with $V\otimes_k R=W\oplus W'$. The restriction maps of the functor are those coming from changing the base.

Note that there is a symmetry in the definition: the natural transformation $\text{Gr}(n,V)\rightarrow \text{Gr}(\dim V-n, V^\vee)$ sending a summand $W$ to its annihilator in $V^\vee\otimes_k R$ is an isomorphism (sending a summand to a choice of complement would not be functorial, but the annihilator is). There is also the obligatory mention, $\text{Gr}(1,V)=\mathbb{P}(V)$.

(3.2) Theorem: The Grassmannian is represented by a scheme, which we also call the Grassmannian, and we also denote by $\text{Gr}(n,V)$.

Reference. See the Stacks Project’s section on Grassmannian functors.

The proofs in the Stacks project are more general than are being considered here. I’ve translated some of the conditions to the case where the base is a field.

(3.3) Definition: Let $V$ be a vector space over $k$. Given a strictly increasing sequence of positive integers $d_1<\cdots<d_s$ (with $d_s\leq \dim V$) we define the functor $\text{Fl}^{d_1,...,d_s}(V)$ on points by

$\text{Fl}^{d_1,...,d_s}(V)(R)=\{ (V_1,...,V_s) : V_i\subset V\otimes_k R, V_i\subset V_{i+1}, \text{ each } V_i \text{ is a projective summand of } V\otimes_k R, \text{ and } \text{rank}(V_i)= d_i\}.$

It’s called the $(d_1,...,d_s)$ split partial flag variety of type A.

(3.4) Theorem: For any sequence $(d_1,...,d_s)$ as above, the functor $\text{Fl}^{d_1,...,d_s}(V)$ is represented by a scheme. We give this scheme the same name and notation as the functor it defines.

Proof Sketch. Choose a basis $\{e_1,...,e_n\}$ for $V\cong k^n$. Then two flags in $\text{Fl}^{d_1,...,d_s}(V)(R)$ can be written with respect to this basis, without loss of generality, as $\langle e_1,...,e_{d_1}\rangle \subset \cdots \subset \langle e_1,...,e_{d_s}\rangle$ and $\langle f_1,...,f_{d_1}\rangle\subset \cdots \subset \langle f_1,...,f_{d_s}\rangle$, where the $f_i$ are linear combinations of the $e_i$ forming a linearly independent sequence. An element of $GL_n(R)$ takes the first flag to the second via $e_i\mapsto f_i$. By scaling we can assume this matrix has determinant 1, hence lies in $SL_n(R)$. So $SL_n$ acts transitively, and picking a point of $\text{Fl}^{d_1,...,d_s}(V)(k)$ for this action realizes the flag variety as the quotient of $SL_n$ by the isotropy group of that point.

This is a sketch — not a proof. Mainly because one would need to check that the flag variety functor is the quotient of $SL_n$. This might mean adjusting the definition to include extensions by faithfully flat algebras… It’s actually a lot of work, and I don’t really know how to formalize it properly at the moment. That’s why I insist the above is only a sketch. The change won’t really affect us, whichever one it is.

Remark: Grassmannians are special cases of partial flag varieties.

(3.5) Example: Let’s compute $\text{Gr}(2,k^4):=\text{Gr}(2,4)$ as a quotient $SL_4/P\cong \text{Gr}(2,4)$ by some parabolic subgroup $P$ (I know it’s parabolic because Grassmannians are projective; this follows from the Plücker embedding, which I won’t go into detail about). Let $e_i$ be the standard basis vector of $k^4$ with 1 in the $i$th position and 0’s elsewhere. A plane in $k^4$ is given by the span of the pair $Q=(e_1,e_2)$, and the action of $SL_4$ on the Grassmannian $\text{Gr}(2,4)$ is transitive. We’ll then find $P$ to be the isotropy group $(SL_4)_Q$. A computation shows

$\begin{pmatrix} a_{11} & a_{12} & a_{13} & a_{14}\\ a_{21} & a_{22} & a_{23} & a_{24} \\ 0 & 0 & a_{33} & a_{34} \\ 0 & 0 & a_{43} & a_{44} \end{pmatrix} \cdot Q = Q.$

The subgroup of $SL_4$ consisting of these matrices will then do the trick. To be formal, we should actually say the algebraic group defined by the vanishing of the coordinate functions $X_{31}, X_{32}, X_{41}, X_{42}$.
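As a quick sanity check (not part of the original computation), here is a small Python snippet verifying that a sample matrix with this zero pattern carries the plane $Q=\langle e_1,e_2\rangle$ into itself; the nonzero entries are arbitrary, only the vanishing of the $(3,1),(3,2),(4,1),(4,2)$ entries matters.

```python
def matvec(M, v):
    # multiply a 4x4 matrix by a length-4 vector
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

# a sample matrix with X31 = X32 = X41 = X42 = 0; other entries arbitrary
M = [[1, 2, 3, 4],
     [5, 6, 7, 8],
     [0, 0, 1, 2],
     [0, 0, 3, 7]]

e1, e2 = [1, 0, 0, 0], [0, 1, 0, 0]

# the images of e1 and e2 have vanishing third and fourth coordinates,
# so both lie back in the span <e1, e2>: M stabilizes the plane Q
in_span = all(matvec(M, v)[2] == 0 and matvec(M, v)[3] == 0
              for v in (e1, e2))
```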

I want to continue this example by computing the Bruhat decomposition of $\text{Gr}(2,4)$ (for purposes of a later post). The Plücker embedding actually realizes $\text{Gr}(2,4)$ inside $\mathbb{P}^5$ with a single defining equation; in particular it has dimension 4. This is helpful as a guide but not strictly necessary. Let’s find a Levi decomposition for $P$. That is, we’ll write $P=L_I\ltimes U$ where $L_I$ is a reductive group and $U$ is a unipotent group (the unipotent radical of $P$).

I claim $L_I$ is isogenous to $SL_2\times SL_2 \times \mathbb{G}_m$. More precisely, I claim that for $M\in P(R)$ we have a decomposition

$M=\begin{pmatrix} A & 0 \\ 0 & B \end{pmatrix}\cdot \begin{pmatrix} tI_2 & 0 \\ 0 & t^{-1} I_2\end{pmatrix} \cdot \begin{pmatrix} I_2 & C \\ 0 & I_2\end{pmatrix}$

where $I_2$ is the $2\times 2$ identity, $A,B$ are elements of $SL_2(R)$, $t\in \mathbb{G}_m(R)$, and $C\in M_{2\times 2}(R)$, at least whenever the square root appearing below exists in $R$. To see this, write

$A'=\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}$, $B'=\begin{pmatrix} a_{33} & a_{34}\\ a_{43} & a_{44} \end{pmatrix}$, $C'=\begin{pmatrix}a_{13} & a_{14} \\ a_{23} & a_{24}\end{pmatrix}$.

Observe, $1=\det(M)=\det(A')\det(B')$ since we can compute this using block matrices. Now choose $t$ with $t^2=\det(A')$ (possible, for instance, when $R=k$ is algebraically closed) and set $A=A'/t$, $B=tB'$, $C=(A')^{-1}C'$. Then $\det(A)=\det(A')/t^2=1$ and $\det(B)=t^2\det(B')=\det(A')\det(B')=1$, and multiplying out the three factors recovers $M$. Since the square root need not exist in general, the multiplication map $SL_2\times SL_2\times\mathbb{G}_m\to L_I$ is not an isomorphism but a central isogeny, with kernel $\mu_2$ generated by $(-I_2,-I_2,-1)$.
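Here is a hedged numerical check of the block factorization in Python, using exact rational arithmetic. I pick a sample $M$ whose upper-left block has determinant 4 and take $t=2$, a square root of $\det(A')$, so that $A$ and $B$ land in $SL_2$; the sample entries are my own choices, not anything canonical.

```python
from fractions import Fraction as F

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def block4(A, B, C=None):
    # assemble the 4x4 matrix [[A, C], [0, B]] from 2x2 blocks
    C = C if C is not None else [[F(0)] * 2, [F(0)] * 2]
    return [A[0] + C[0], A[1] + C[1],
            [F(0), F(0)] + B[0], [F(0), F(0)] + B[1]]

# a sample point of P: det(A') = 4 and det(B') = 1/4, so det(M) = 1
Ap = [[F(2), F(0)], [F(1), F(2)]]
Bp = [[F(1, 2), F(0)], [F(0), F(1, 2)]]
Cp = [[F(1), F(2)], [F(3), F(4)]]
M = block4(Ap, Bp, Cp)

t = F(2)                                   # a square root of det(Ap)
A = [[x / t for x in row] for row in Ap]   # det(A) = det(Ap)/t^2 = 1
B = [[t * x for x in row] for row in Bp]   # det(B) = t^2 det(Bp) = 1
d = det2(Ap)
Ap_inv = [[Ap[1][1] / d, -Ap[0][1] / d], [-Ap[1][0] / d, Ap[0][0] / d]]
C = matmul(Ap_inv, Cp)

I2 = [[F(1), F(0)], [F(0), F(1)]]
levi = block4(A, B)                              # diag(A, B)
torus = block4([[t, F(0)], [F(0), t]],
               [[1 / t, F(0)], [F(0), 1 / t]])   # diag(t*I, (1/t)*I)
unip = block4(I2, I2, C)                         # [[I, C], [0, I]]
product = matmul(matmul(levi, torus), unip)      # should recover M
```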

From here it’s easy to see the Weyl group $W(P_I,T)$, where $T$ is the same split torus we described in section 2. It’s $S_2\times S_2$, corresponding to the subgroup of $W(SL_4,T)$ generated by the transpositions $(12)$ and $(34)$. The Bruhat decomposition is then given by the recipe

$SL_4/P_I=\bigcup_{\overline{w}\in W(SL_4,T)/W(P_I,T)} BwP_I/P_I$

where the union takes place over one representative from each coset of $W(SL_4,T)/W(P_I,T)$, and $B$ is the standard Borel of upper triangular matrices. Since $W(SL_4,T)\cong S_4$ has 24 elements and $W(P_I,T)$ has 4, we are consequently looking for 6 specific representatives. Here are some choices, one from each coset, in a list:

$s_e=\begin{pmatrix} 1&&& \\ &1&& \\ &&1& \\ &&&1 \end{pmatrix}$, $s_{23}=\begin{pmatrix} 1&&& \\ &&1& \\ &1&& \\ &&&-1 \end{pmatrix}$, $s_{132}=\begin{pmatrix} && 1& \\ 1&&& \\ &1&& \\ &&&-1\end{pmatrix}$, $s_{234}=\begin{pmatrix} 1&&& \\ &&1& \\ &&&1 \\ &-1&& \end{pmatrix}$, $s_{1243}=\begin{pmatrix} &1&& \\ &&&1 \\ 1&&& \\ &&-1& \end{pmatrix}$, $s_{13,24}=\begin{pmatrix} &&1& \\ &&&1 \\ 1&&& \\ &1&& \end{pmatrix}$.

Counting inversions of the above permutations, we can compute the length of each element: $\ell(s_e)= 0, \ell(s_{23})=1, \ell(s_{132})=2, \ell(s_{234})=2, \ell(s_{1243})=3, \ell(s_{13,24})=4$. From general theory, this computes the dimensions of the closures $X_w=\overline{BwP_I/P_I}$ (the Schubert varieties) in $SL_4/P_I$ as $\dim(X_w)=\ell(w)$. Computing equations defining these subvarieties seems like too large a task at the moment, so I’ll stop here. This is enough for some purposes.
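The inversion counts are easy to verify by machine. In the snippet below each representative is written in one-line notation read off from its cycle name (a convention I'm choosing for illustration — translating the signed matrices into permutations depends on a row-versus-column convention, but the lengths come out the same either way).

```python
from itertools import permutations

def length(w):
    # Coxeter length of a permutation = number of inversions
    return sum(1 for i in range(len(w)) for j in range(i + 1, len(w))
               if w[i] > w[j])

# one-line notation obtained from the cycle names of the representatives
reps = {"s_e": (1, 2, 3, 4), "s_23": (1, 3, 2, 4), "s_132": (3, 1, 2, 4),
        "s_234": (1, 3, 4, 2), "s_1243": (2, 4, 1, 3),
        "s_13,24": (3, 4, 1, 2)}
lengths = {name: length(w) for name, w in reps.items()}

# |S_4| / |S_2 x S_2| = 24 / 4 = 6 cosets, matching the 6 representatives
num_cosets = len(list(permutations(range(4)))) // 4
```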

(3.6) Example: As a second example, we repeat the above computations for the flag variety $\text{Fl}^{2,1}(k^3)$ corresponding to the parabolic subgroup $P$ such that $SL_3/P\cong \text{Fl}^{2,1}(k^3)$. Let $e_i$ represent the standard basis vectors, and consider the flag $F:=(\langle e_1 \rangle \subset \langle e_1,e_2 \rangle)$. Since $SL_3$ acts transitively on $\text{Fl}^{2,1}(k^3)$, computing the isotropy group at $F$ will provide us with such a quotient.

But we can skip the calculation of the matrix group $P$, if we want, because $\text{Fl}^{2,1}(k^3)$ is the variety of complete flags. The stabilizer for the standard coordinate flag is just the Borel subgroup of upper triangular matrices, $B$, which has been referred to throughout this whole blog. That is, $B$ is the subvariety of $SL_3$ corresponding to the vanishing of the functions $X_{21}, X_{31}, X_{32}$.
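For what it’s worth, the same style of sanity check as before confirms that an upper triangular matrix preserves the standard flag; again the nonzero entries are arbitrary sample values.

```python
def matvec(M, v):
    # multiply a 3x3 matrix by a length-3 vector
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

# a sample matrix with X21 = X31 = X32 = 0
B = [[1, 2, 3],
     [0, 4, 5],
     [0, 0, 6]]

e1, e2 = [1, 0, 0], [0, 1, 0]

# B*e1 stays in <e1> and B*e2 stays in <e1, e2>, so the flag
# <e1> in <e1, e2> is preserved
line_preserved = matvec(B, e1)[1] == 0 and matvec(B, e1)[2] == 0
plane_preserved = matvec(B, e2)[2] == 0
```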

To compute the Bruhat decomposition we can use the entire Weyl group $W(SL_3, T)\cong S_3$. More precisely, the decomposition appears in the form

$SL_3/B=\bigcup_{\overline{w}\in W(SL_3,T)} BwB/B.$

The six representatives are the six permutation matrices,

$s_e=\begin{pmatrix} 1 && \\ &1& \\ && 1 \end{pmatrix}$, $s_{12}=\begin{pmatrix} & 1 & \\ 1 && \\ &&-1 \end{pmatrix}$, $s_{13}=\begin{pmatrix} && 1 \\ &1&\\ -1&&\end{pmatrix}$, $s_{23}=\begin{pmatrix} 1&& \\ &&1 \\ & -1 & \end{pmatrix}$, $s_{123}= \begin{pmatrix} & 1 & \\ & & 1 \\ 1 && \end{pmatrix}$, $s_{132}=\begin{pmatrix} & & 1 \\ 1 && \\ & 1 & \end{pmatrix}$

and the lengths of these elements are $\ell(s_e)=0, \ell(s_{12})=1, \ell(s_{23})=1, \ell(s_{123})=2, \ell(s_{132})=2, \ell(s_{13})= 3$.
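As a last check (again just an illustration, in a convention of my choosing), the following verifies both that the six signed matrices above have determinant 1, so they really lie in $SL_3$, and that counting inversions gives the stated lengths.

```python
def det3(M):
    # cofactor expansion along the first row
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def length(w):
    # number of inversions of a permutation in one-line notation
    return sum(1 for i in range(len(w)) for j in range(i + 1, len(w))
               if w[i] > w[j])

# the six signed permutation matrices listed above
matrices = {
    "s_e":   [[1, 0, 0], [0, 1, 0], [0, 0, 1]],
    "s_12":  [[0, 1, 0], [1, 0, 0], [0, 0, -1]],
    "s_13":  [[0, 0, 1], [0, 1, 0], [-1, 0, 0]],
    "s_23":  [[1, 0, 0], [0, 0, 1], [0, -1, 0]],
    "s_123": [[0, 1, 0], [0, 0, 1], [1, 0, 0]],
    "s_132": [[0, 0, 1], [1, 0, 0], [0, 1, 0]],
}
dets = {name: det3(M) for name, M in matrices.items()}

# one-line notation from the cycle names
perms = {"s_e": (1, 2, 3), "s_12": (2, 1, 3), "s_13": (3, 2, 1),
         "s_23": (1, 3, 2), "s_123": (2, 3, 1), "s_132": (3, 1, 2)}
lengths = {name: length(w) for name, w in perms.items()}
```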


## 5 comments on “Split Semisimple Linear Algebraic Groups of Type A_n”

1. […] This post, and this one about K-theory, both serve the same purpose: to work out some explicit examples in intersection theory. The examples of Chow rings I’d like to compute are projective spaces, Grassmannians, and flag varieties. They are, depending on one’s background, the simplest projective homogeneous varieties under the action of a split semisimple algebraic group of type A (that is to say, they are subvarieties of projective space that carry an action of an algebraic group which, on $k$-points, is also transitive). A current goal of mine is to be able to describe the intersection rings for various homogeneous varieties for every split semisimple algebraic group (all types – ABCDEFG); I can say with certainty this will not be achieved in the near future. Here, however, it is achieved for some homogeneous varieties under the action of a split semisimple algebraic group of type A. To get a good understanding of the below, knowing anything about algebraic groups is strictly unnecessary, but interested readers can find more information here. […]


2. alexyoucis says:

Hey there 🙂 One nice thing you can read off about the algebraic geometry of $\text{SL}_n$ from its being simply connected is the fact that it has trivial Picard group! This is kind of surprising to me–it’s not at all obvious why $k[x_{ij}]/(\det-1)$ is a UFD. But, as I’m sure you’re aware, for any semisimple group one has that $\text{Pic}$ is finite. If it were non-trivial then one could produce, using the Kummer sequence, a non-trivial cohomology class in $H^1(G,\mathbb{Z}/n\mathbb{Z})$ (let’s work over $k^\text{sep}$ for the moment), which is impossible since, as you remarked, it’s simply connected. In fact, one can go further and actually say that if $G$ is a semisimple group and $\widetilde{G}\to G$ is its universal cover, with kernel $\pi_1(G)$, then $\text{Pic}(G)$ is the $k$-points of the Cartier dual of $\pi_1(G)$ (as a finite flat group scheme).

Also, I always found the Plücker embedding a bit confusing and hard to remember. You can see that the Grassmannian is projective since it’s a homogeneous variety (so quasi-projective) and it evidently satisfies the valuative criterion (you can just choose a stable flag of lattices for a DVR!).

Nice post by the way!


• eoinmackall says:

Hey! Thanks for reading. I hadn’t really thought about the implication that the coordinate ring was a UFD. It’s certainly interesting that the Picard group can be identified with the rational points of the dual of the fundamental group. I don’t think I know enough to speak with any conviction (on top of a little bit of exhaustion from waking up early to TA today) but, the difficulty for me would be in seeing this for fields of char. p>0. I was unaware that the definition of fundamental group I was using (the kernel of a central isogeny from a universal cover) coincided with the etale fundamental group (well, dual to each other — in char. 0). The best I could find for this was this paper by Brion, (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.221.297&rep=rep1&type=pdf). I think I’ll need to look at this because it seems really interesting.

On a related note, I was thinking of reading through Grothendieck’s “Torsion homologique et sections rationnelles” where I think he shows the Chow ring of a semisimple algebraic group is finite. Maybe I will combine these two observations in the form of a blog post at some point…

That’s funny because I always found the valuative criterion impossible to use. I like that the Plücker embedding gives you a way to find equations describing the Grassmannian in projective space (even though doing so is usually unmanageable). Is it actually intuitive to use the valuative criterion (should I invest in learning it better)?


• alexyoucis says:

I mean, I certainly think so. I generally like to think of things explicitly in terms of their functor of points, so usually criteria that allow me to exploit this perspective are desirable.

Also to be clear, it’s of course not true that if $k$ is of characteristic $0$ then $\pi_1(G)$ can be identified with $\pi_1^{\text{et}}(G)$. More naturally it’s that $\pi_1(G)$ is $\pi_1(G_{\overline{k}})$–namely it classifies geometrically connected finite etale covers of $G$.


• alexyoucis says:

I should mention that if you’re interested in moduli spaces, then almost definitionally you’re going to have to work with (a modification of) the valuative criterion!
