
Tangent Space of $\text{SL}_2(\mathbb R)$

This was a project I did about three years ago, when I was in my third year; I thought it would be a good fit for this blog. I haven't changed much from the original (apart from what was needed to turn an Overleaf document into a Blogger post).


Introduction

The group $\text{SL}_2(\mathbb R)$ consists of $2\times 2$ real matrices of determinant $1$. So, it is the collection of all points $(a,b,c,d)$ satisfying $ad-bc=1$.

Of course, $\text{SL}_2(\mathbb R)\subset \text{M}_2(\mathbb R)$ as $\text{M}_2(\mathbb R)$ consists of all the points of the form $(a,b,c,d)$. And since $\text{M}_2(\mathbb R)\cong \mathbb R^4$, and $\mathbb R^4$ has the Euclidean metric on it, we can induce that metric to $\text{SL}_2(\mathbb R)$ and consider it as a metric space.


Two properties of $\text{SL}_2(\mathbb R)$ immediately stand out:

1. $\text{SL}_2(\mathbb R)$ is closed in $\text{M}_2(\mathbb R)$.

2. $\text{SL}_2(\mathbb R)$ is unbounded.


Proof of 1 : Consider the function

\begin{align*}&\mathtt{det}:\text{M}_2(\mathbb R)\to \mathbb R\\ &(a,b,c,d)\mapsto ad-bc \end{align*}

Clearly, $\text{SL}_2(\mathbb R)=(\mathtt{det})^{-1}(\{1\})$. And since the $\mathtt{det}$ map is continuous, the result immediately follows.


Proof of 2 : By definition, $(a,b,c,d)\in \text{SL}_2(\mathbb R) \iff ad-bc=1$. So, fixing $b=b_0$ and $d=d_0\neq 0$, we have

$$a=\frac{1+b_0c}{d_0}$$

which means that any arbitrarily large value of $a$ can be attained by choosing a suitably large value of $c$.
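As a quick numerical sanity check (a pure-Python sketch; the helpers `det2` and `norm2` are my own names, nothing from the original project), the matrices $\begin{pmatrix}t&0\\0&1/t\end{pmatrix}$ stay on the surface $ad-bc=1$ while escaping every bounded ball:

```python
import math

def det2(m):
    """Determinant of a 2x2 matrix given as [[a, b], [c, d]]."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def norm2(m):
    """Euclidean norm of the matrix viewed as a point of R^4."""
    return math.sqrt(sum(x * x for row in m for x in row))

for t in [1.0, 10.0, 1000.0]:
    m = [[t, 0.0], [0.0, 1.0 / t]]
    assert abs(det2(m) - 1.0) < 1e-12   # stays on the surface ad - bc = 1
    assert norm2(m) >= t                # the norm grows at least like t
```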


Problem 1 :

Let $A\in \text{SL}_2(\mathbb R)$. Show that $\exists \epsilon>0$ such that $B_{\epsilon}(A)\cap \text{SL}_2(\mathbb R)$ is homeomorphic to an open subset of $\mathbb R^3$.

Solution :

First let us only look at the points of the form $p=\left(a,b,c,\frac{1+bc}{a}\right)$ with $a\neq 0$. So, consider the set 

$$U=\left\{\begin{pmatrix} a&b\\ c&d \end{pmatrix}\in \text{SL}_2(\mathbb R) : a\neq 0\right\}$$ 

Let us consider the map

\begin{align*}\phi: &\,U\to \phi (U)\subset \mathbb R^*\times \mathbb R\times \mathbb R\\ &\,\begin{pmatrix} a&b\\ c&d \end{pmatrix}\mapsto (a,b,c)\end{align*}

where $\mathbb R^*=\mathbb R\setminus \{0\}$.

Clearly, $U$ is an open subset of $\text{SL}_2(\mathbb R)$ as $a\neq 0$ is an open condition. Also, the map $\phi$ is clearly continuous.

Now, the inverse of $\phi$ is given by

\begin{align*}\phi^{-1}:&\,\phi(U)\to U\\ &(a,b,c)\mapsto \begin{pmatrix} a&b\\ c&\,\frac{1+bc}{a} \end{pmatrix}\end{align*}

which is continuous, since each of its component functions (in particular $(a,b,c)\mapsto \frac{1+bc}{a}$, which makes sense as $a\neq 0$ on $\phi(U)$) is continuous.

So, $\phi$ is a homeomorphism from $U$ to $\phi(U)$.

Now, let $a=0$. Since $ad-bc=1$, if $a=0$, then $b$ and $c$ must both be non-zero. So, let us consider the set

$$V=\left\{\begin{pmatrix} a&b\\ c&d \end{pmatrix}\in \text{SL}_2(\mathbb R) : b\neq 0\right\}$$

and the map

\begin{align*}\psi: &\,V\to \psi(V)\subset \mathbb R^*\times \mathbb R\times \mathbb R\\&\,\begin{pmatrix}a&b\\c&d\end{pmatrix}\mapsto (a,b,d)\end{align*}

This map is a homeomorphism from $V$ to $\psi(V)$ due to the same arguments as the last one.

So, $\text{SL}_2(\mathbb R)$ is a $3$-dimensional topological manifold with the atlas $\{(U,\phi),(V,\psi)\}$.

So, $\forall A\in \text{SL}_2(\mathbb R)$, $\exists \epsilon>0$ such that $B_{\epsilon}(A)\cap \text{SL}_2(\mathbb R)$ is homeomorphic to an open subset of $\mathbb R^3$.
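The two charts can also be played with concretely. Below is a small Python sketch (the function names `phi`, `phi_inv`, `psi`, `psi_inv` are hypothetical, chosen to mirror the notation above); the round trips confirm that forgetting the redundant entry loses no information:

```python
def phi(m):
    """Chart on U = {a != 0}: forget the redundant entry d."""
    (a, b), (c, d) = m
    assert a != 0
    return (a, b, c)

def phi_inv(p):
    """Inverse chart: reconstruct d from ad - bc = 1."""
    a, b, c = p
    return [[a, b], [c, (1 + b * c) / a]]

def psi(m):
    """Chart on V = {b != 0}: forget the redundant entry c."""
    (a, b), (c, d) = m
    assert b != 0
    return (a, b, d)

def psi_inv(p):
    a, b, d = p
    return [[a, b], [(a * d - 1) / b, d]]

m = [[2.0, 3.0], [1.0, 2.0]]          # det = 4 - 3 = 1
assert phi_inv(phi(m)) == m           # round trip through the first chart
assert psi_inv(psi(m)) == m           # round trip through the second chart
```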


Problem 2 :

Let $\gamma:(\alpha,\beta)\to \text{SL}_2(\mathbb R)$ be a continuous curve. We say $\gamma$ is smooth if $\gamma$ is a smooth curve in $\mathbb R^4$. Show that $B\cdot \gamma$ is a smooth curve for any $B\in \text{SL}_2(\mathbb R)$.

Solution :

Let

$$B=\begin{pmatrix}a_1&a_2\\b_1&b_2\end{pmatrix}$$

and

$$\gamma(t)=\begin{pmatrix}p_1(t)&q_1(t)\\p_2(t)&q_2(t)\end{pmatrix}$$

Then,

$$B\cdot\gamma(t)=\begin{pmatrix}\sum_{i=1}^2 a_ip_i(t)&\sum_{i=1}^2 a_iq_i(t)\\\sum_{i=1}^2 b_ip_i(t)&\sum_{i=1}^2 b_iq_i(t)\end{pmatrix}$$

Now, since $\gamma$ is a smooth curve, each of the $p_i$'s and $q_i$'s is smooth. Each entry of $B\cdot \gamma$ is a linear combination of these, so $B\cdot \gamma$ is also smooth.

We discuss this a little more in Appendix Note 1.


Problem 3 :

Let $\gamma$, $\eta$ be smooth curves in $\text{SL}_2(\mathbb R)$ passing through $I=\begin{pmatrix}1&0\\0&1\end{pmatrix}$, i.e., $\gamma(0)=\eta(0)=I$. We say $\gamma\sim \eta\iff \gamma^\prime(0)=\eta^\prime(0)$. Show that the set of equivalence classes of smooth curves passing through $I$ is a vector space of dimension $3$, denoted by $\mathfrak{sl_2}$.

Solution :

First, we will try to prove that $\mathfrak{sl_2}$ is the set of all traceless matrices. To do that, let us note that if 

$$x(t)=\begin{pmatrix}x_1(t) & x_2(t)  \\ x_3(t) & x_4(t) \end{pmatrix}$$

is a parametrized curve in $\text{SL}_2(\mathbb{R})$ with

\begin{equation} x(0)=\begin{pmatrix}x_1(0) & x_2(0)  \\ x_3(0) & x_4(0) \end{pmatrix}=\begin{pmatrix}1 & 0  \\ 0 & 1 \end{pmatrix}\end{equation}

Differentiating the defining relation $x_1(t)x_4(t)-x_2(t)x_3(t)=1$ and then evaluating at $t=0$ (using $x(0)=I$), we get

\begin{align*}&x_1^\prime (t)x_4(t)+x_1(t)x_4^\prime (t)-x_2^\prime(t)x_3(t)- x_2(t)x_3^\prime (t)=0\\ \implies &x_1^\prime(0)+x_4^\prime(0)=0\end{align*}

This shows that any element in $\mathfrak{sl_2}$ is traceless.

Now, we want to show that if $A$ is a traceless matrix, then $A$ is in $\mathfrak{sl_2}$. This we will prove in the answer to the next problem. For now, we will assume that it is true.

So, the question boils down to proving that the set of traceless matrices has dimension three. To do so, let us prove that

$$\mathtt{span}\left\{ \begin{pmatrix}1 & 0  \\ 0 & -1 \end{pmatrix}, \begin{pmatrix}0 & 1  \\ 0 & 0 \end{pmatrix}, \begin{pmatrix}0 & 0  \\ 1 & 0 \end{pmatrix}\right\}=T$$

where $T$ is the set of all traceless matrices.

But, this is clearly true as given $A=\begin{pmatrix}a & b  \\ c & -a \end{pmatrix}$, we can write

$$A=a\cdot\begin{pmatrix}1 & 0  \\ 0 & -1 \end{pmatrix} +b\cdot\begin{pmatrix}0 & 1  \\ 0 & 0 \end{pmatrix} +c\cdot\begin{pmatrix}0 & 0  \\ 1 & 0 \end{pmatrix}$$

and the linear independence of these three matrices is immediate, which completes our proof.
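The decomposition can be checked mechanically; here is a Python sketch (the helper names `decompose` and `lin_comb` are my own), reconstructing a traceless matrix from its coefficients in the basis above:

```python
def decompose(m):
    """Coefficients of a traceless 2x2 matrix in the basis H, E, F."""
    (a, b), (c, d) = m
    assert a + d == 0          # tracelessness
    return (a, b, c)           # m = a*H + b*E + c*F

H = [[1, 0], [0, -1]]
E = [[0, 1], [0, 0]]
F = [[0, 0], [1, 0]]

def lin_comb(x, y, z):
    """The matrix x*H + y*E + z*F."""
    return [[x * H[i][j] + y * E[i][j] + z * F[i][j] for j in (0, 1)]
            for i in (0, 1)]

m = [[2, 5], [-3, -2]]
assert lin_comb(*decompose(m)) == m   # the basis spans the traceless matrices
```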


Problem 4 :

Show that $\mathfrak{sl_2}$ can be identified, in a natural way, to traceless $2\times 2$ matrices. Moreover, the exponential map

\begin{align*} \text{exp}:&\,\{A\in \text{M}_2(\mathbb R) : \text{tr}(A)=0\}\to \text{SL}_2(\mathbb R)\\ &\,A\mapsto e^A \end{align*}

is well defined and produces a smooth curve

\begin{align*} \gamma_A:&\,\mathbb R\to \text{SL}_2(\mathbb R)\\ &\,\gamma_A(t)=\text{exp}(tA) \end{align*}

passing through $I$ with $\gamma_A^\prime (0)=A$.

Solution :

This map is clearly well defined as the matrix exponential map is well defined. Some details about this can be found in Appendix Problem 1.

Now, we want to show that, for any complex matrix $A$,

$$\det(\text{exp}(A))=e^{\text{tr}(A)}$$

To do that, let us recall that every complex matrix has a Jordan normal form $A=SJS^{-1}$, and that the determinant of a triangular matrix is the product of its diagonal entries. So,

$$\exp(A)=\exp(S J S^{-1}) = S \exp(J) S^{-1}$$

and hence,

\begin{align*} \det(\exp(A))&=\det(\exp(S J S^{-1}))\\&=\det(S \exp(J) S^{-1})\\&=\det(S) \det(\exp(J)) \det (S^{-1})\\&=\det(\exp (J))\\&=\prod_{i=1}^n e^{j_{ii}}\\&=e^{\sum_{i=1}^n{j_{ii}}}\\ &=e^{\text{tr}(J)}\\ &=e^{\text{tr}(A)} \end{align*}

which completes the proof.

So, if the trace of $A$ is zero, the determinant of $\exp (A)$ is $1$, i.e., $\exp(A)\in \text{SL}_2(\mathbb R)$, which (combined with our proof in the last problem) shows that the traceless matrices can be naturally identified with $\mathfrak{sl_2}$.

Now, we have

$$\gamma_A(t):=\text{exp}(tA)$$

This $\gamma_A$ is clearly smooth as

$$\text{exp}(tA)=\sum_{n=0}^\infty \frac{t^nA^n}{n!}$$

which gives

\begin{align*} \left( \text{exp}(tA)\right)^\prime &= \left(\sum_{n=0}^\infty \frac{t^nA^n}{n!}\right)^\prime\\ &=\sum_{n=1}^\infty \frac{nt^{n-1}A^n}{n!}\\ &=\sum_{n=1}^\infty \frac{t^{n-1}A^n}{(n-1)!}\\ &=A\cdot\text{exp}(tA) \end{align*}

which is clearly well defined. We add some rigour to this argument in Appendix Problem 2.

Now,

$$\gamma_A(0)=\text{exp}(\mathbf{0})=I$$

and

$$\gamma_A^\prime (t)=A\cdot\text{exp}(tA)$$

which gives

$$\gamma_A^\prime(0)=A\cdot\text{exp}(\mathbf{0})=A$$

hence completing the proof.
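Both facts can be verified numerically. The sketch below (pure Python; `mat_exp` truncates the power series, and all helper names are mine) checks that $\det(\exp A)=1$ for a traceless $A$, and recovers $\gamma_A^\prime(0)=A$ by a finite difference:

```python
import math

def mat_mul(x, y):
    return [[sum(x[i][k] * y[k][j] for k in (0, 1)) for j in (0, 1)]
            for i in (0, 1)]

def mat_exp(m, terms=30):
    """Truncated power series for exp(m); enough terms for small 2x2 inputs."""
    result = [[1.0, 0.0], [0.0, 1.0]]
    power = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        power = mat_mul(power, m)
        result = [[result[i][j] + power[i][j] / math.factorial(k)
                   for j in (0, 1)] for i in (0, 1)]
    return result

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

A = [[0.3, 1.2], [-0.7, -0.3]]                 # a traceless matrix
g = mat_exp(A)
assert abs(det2(g) - 1.0) < 1e-9               # det(exp A) = e^{tr A} = 1

# gamma_A'(0) is approximately (gamma_A(h) - I)/h for small h
h = 1e-6
gh = mat_exp([[h * A[i][j] for j in (0, 1)] for i in (0, 1)])
deriv = [[(gh[i][j] - (1.0 if i == j else 0.0)) / h for j in (0, 1)]
         for i in (0, 1)]
assert all(abs(deriv[i][j] - A[i][j]) < 1e-4 for i in (0, 1) for j in (0, 1))
```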


Problem 5 :

What is the fundamental group of $\text{SL}_2(\mathbb R)$?

Solution :

We claim that the fundamental group of $\text{SL}_2(\mathbb R)$ is $\mathbb Z$.

To prove our claim, first we will show that $\text{SL}_2(\mathbb R)$ deformation retracts to $\text{SO}_2(\mathbb R)$. Writing a matrix by its columns $(v_1,v_2)$, let us define the retraction map (Gram-Schmidt orthonormalization)

\begin{align*} \varphi : &\,\text{SL}_2(\mathbb R)\to \text{SO}_2(\mathbb R)\\ &\, (v_1,v_2)\mapsto \left(\frac{v_1}{||v_1||},\frac{e_2}{||e_2||}\right) \end{align*}

where $e_2=v_2-(u_1\cdot v_2)u_1$ and $u_1=\frac{v_1}{||v_1||}$. (The output is orthogonal with positive determinant, hence lies in $\text{SO}_2(\mathbb R)$.)

To prove that this is indeed a deformation retract, consider first the straight-line homotopy

$$\widetilde{\text{H}}(A,t)=(1-t)A+t\varphi(A)$$

Writing $A=\varphi(A)R$ with $R$ upper triangular with positive diagonal, we get $\widetilde{\text{H}}(A,t)=\varphi(A)\left((1-t)R+tI\right)$, so $\det \widetilde{\text{H}}(A,t)>0$ for all $t$. However, this determinant need not equal $1$, so we rescale and define

\begin{align*} \text{H} : &\,\text{SL}_2(\mathbb R)\times I\to \text{SL}_2(\mathbb R)\\ &\,(A,t)\mapsto \frac{\widetilde{\text{H}}(A,t)}{\sqrt{\det \widetilde{\text{H}}(A,t)}}\end{align*}

so that $\text{H}(A,0)=A$, $\text{H}(A,1)=\varphi(A)$, and $\text{H}(A,t)=A$ for all $A\in \text{SO}_2(\mathbb R)$, hence completing the proof.
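Here is a numerical sketch of the retraction in pure Python (helper names mine). One subtlety: the straight line from $A$ to its Gram-Schmidt factor only guarantees a positive determinant along the way, so the sketch rescales by $\sqrt{\det}$ at each time $t$ to keep the path inside $\text{SL}_2(\mathbb R)$:

```python
import math

def gs(m):
    """Gram-Schmidt on the columns of a 2x2 matrix with det > 0; lands in SO_2."""
    a, c = m[0][0], m[1][0]           # first column v1
    b, d = m[0][1], m[1][1]           # second column v2
    n1 = math.hypot(a, c)
    u1 = (a / n1, c / n1)
    dot = u1[0] * b + u1[1] * d
    e2 = (b - dot * u1[0], d - dot * u1[1])
    n2 = math.hypot(*e2)
    u2 = (e2[0] / n2, e2[1] / n2)
    return [[u1[0], u2[0]], [u1[1], u2[1]]]

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def homotopy(m, t):
    """Straight-line homotopy to gs(m), rescaled back onto det = 1."""
    q = gs(m)
    h = [[(1 - t) * m[i][j] + t * q[i][j] for j in (0, 1)] for i in (0, 1)]
    s = math.sqrt(det2(h))            # det(h) > 0 along the whole path
    return [[x / s for x in row] for row in h]

m = [[2.0, 3.0], [1.0, 2.0]]          # det = 1
for t in [0.0, 0.25, 0.5, 0.75, 1.0]:
    assert abs(det2(homotopy(m, t)) - 1.0) < 1e-12   # path stays in SL_2
q = homotopy(m, 1.0)
# the endpoint is orthogonal: its columns are perpendicular
assert abs(q[0][0] * q[0][1] + q[1][0] * q[1][1]) < 1e-12
```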

Now, we want to show that $\text{SO}_2(\mathbb R)$ is homeomorphic to $S^1$. To do this, let us note that the map

\begin{align*} f:\;&\text{SO}_2(\mathbb R)\to S^1\\ &\begin{pmatrix}\cos \theta & -\sin \theta  \\ \sin \theta & \cos \theta \end{pmatrix}\mapsto e^{i\theta}=(\cos \theta, \sin \theta) \end{align*}

is one-one, onto and continuous.

Proving that this map is an injection is trivial, as $(\cos \theta_1, \sin \theta_1)=(\cos \theta_2, \sin \theta_2)$ implies $\theta_1=\theta_2 \pmod{2\pi}$, so the corresponding rotation matrices coincide.
Surjectivity is also trivial as any point in $S^1$ is of the form $(\cos \theta, \sin \theta)$ which has the preimage $\begin{pmatrix}\cos \theta & -\sin \theta  \\ \sin \theta & \cos \theta \end{pmatrix}$.
To prove that the map is a homeomorphism, we simply need to note that it is componentwise continuous, and the inverse

\begin{align*} f^{-1}:\;&S^1\to\text{SO}_2(\mathbb R)\\ &(\cos \theta, \sin \theta)\mapsto\begin{pmatrix}\cos \theta & -\sin \theta  \\ \sin \theta & \cos \theta \end{pmatrix} \end{align*}

is also componentwise continuous.

Now, we know that the fundamental group of $S^1$ is $\mathbb Z$. Since a deformation retract induces an isomorphism on fundamental groups, the fundamental group of $\text{SO}_2(\mathbb R)$, and hence of $\text{SL}_2(\mathbb R)$, is also $\mathbb Z$. This completes the proof.


Problem 6 :

What does $\text{SL}_2(\mathbb R)$ look like topologically?

Solution :

We claim that $\text{SL}_2(\mathbb R)$ is homeomorphic to $S^1\times \mathbb R^2$.

Let us recall that any invertible matrix $M$ has a unique decomposition of the form $M=QR$ where $Q$ is orthogonal and $R$ is upper triangular with positive diagonal entries. So, given $M\in \text{SL}_2(\mathbb R)$, we have

$$M=\begin{pmatrix}\cos \theta & -\sin \theta  \\ \sin \theta & \cos \theta \end{pmatrix}R$$

where $R$ is upper triangular with positive diagonal entries. (Since $\det R>0$ and $\det M=1$, the orthogonal factor has determinant $1$, i.e., it is a rotation as written.)

So,

$$\det M=\det\begin{pmatrix}\cos \theta & -\sin \theta  \\ \sin \theta & \cos \theta \end{pmatrix}\det R$$

hence giving $\det R=1$.

So,

$$M=\begin{pmatrix}\cos \theta & -\sin \theta\\\sin \theta & \ \ \ \cos\theta\end{pmatrix}\begin{pmatrix}a&b\\0&1/a\end{pmatrix}$$

where $a>0$, $b\in \mathbb R$.

The map $M\mapsto (\theta, a, b)$ is a continuous bijection onto $S^1\times (0,\infty)\times \mathbb R$ with continuous inverse, and $(0,\infty)\times \mathbb R\cong \mathbb R^2$. This proves our claim.
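The coordinates $(\theta,a,b)$ can be computed explicitly; below is a short Python sketch (helper names `to_coords` and `from_coords` are assumptions of mine), recovering them from the $QR$ factorization and reconstructing the matrix:

```python
import math

def to_coords(m):
    """M = R(theta) * [[a, b], [0, 1/a]] with a > 0: recover (theta, a, b)."""
    a = math.hypot(m[0][0], m[1][0])          # norm of the first column
    theta = math.atan2(m[1][0], m[0][0])
    # b is the (1,2) entry of Q^T M
    b = math.cos(theta) * m[0][1] + math.sin(theta) * m[1][1]
    return theta, a, b

def from_coords(theta, a, b):
    """Rebuild the matrix, using det = 1 to fill in the (2,2) entry of R."""
    ct, st = math.cos(theta), math.sin(theta)
    return [[ct * a, ct * b - st / a], [st * a, st * b + ct / a]]

m = [[2.0, 3.0], [1.0, 2.0]]                  # det = 1
theta, a, b = to_coords(m)
m2 = from_coords(theta, a, b)
assert all(abs(m[i][j] - m2[i][j]) < 1e-9 for i in (0, 1) for j in (0, 1))
```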


Alternately, we may take a different and much more beautiful approach and note that the group $\text{SL}_2(\mathbb R)$ acts transitively on $\mathbb{R}^2 \setminus \{(0, 0)\}$, which is clearly homeomorphic to $\mathbb{R} \times S^1$. The stabilizer of the vector

$$\begin{pmatrix} 1 \\ 0 \end{pmatrix}$$

is the set of matrices of the form

$$\begin{pmatrix} 1 & a \\ 0 & 1 \end{pmatrix}$$

which is clearly homeomorphic to $\mathbb{R}$. So $\text{SL}_2(\mathbb R)$ is a fibre bundle over $\mathbb{R}^2\setminus\{(0,0)\}$ with contractible fibre $\mathbb R$; such a bundle is trivial, hence completing the proof.


Appendix :

Appendix Problem 1 :

Prove that the matrix exponential is well defined.

Proof :

First, we will show that, if $X$ and $Y$ are two $n\times n$ matrices, then

$$||XY|| \le ||X||\cdot||Y||$$

where

$$||X||=\left(\sum_{i,j=1}^n x_{ij}^2\right)^{\frac 12}$$

where $x_{ij}$ is the $ij$-th entry of $X$.

So, let $X=\left[x_{ij}\right]$ and $Y=\left[y_{ij}\right]$. Using the Cauchy-Schwarz inequality, we have

\begin{align*} ||XY||^2&=\sum_{i,j} (XY)_{ij}^2\\ &\le \sum_{i,j} \left(\sum_{k}x_{ik}^2\right)\left(\sum_{k}y_{kj}^2\right)\\ &\le \left(\sum_{i,k} x_{ik}^2\right)\left(\sum_{j,k} y_{kj}^2\right)\\ &=||X||^2||Y||^2 \end{align*}

hence completing the proof.

Now, let us consider the sequence of partial sums

$$S_n=\sum_{k=0}^n \frac{X^k}{k!}$$

We will show that $\{S_n\}_{n=1}^\infty$ is a Cauchy sequence. To do so, let us note that for $n>m$, we have

\begin{align*} ||S_n-S_m|| &=\left\lVert \sum_{k=m+1}^n \frac{X^k}{k!}\right\rVert\\ &\le \sum_{k=m+1}^n \frac{||X^k||}{k!}\\ &\le \sum_{k=m+1}^n \frac{||X||^k}{k!}\end{align*}

which goes to $0$ as $m\to \infty$, since the real series $\sum_{k=0}^\infty \frac{x^k}{k!}$ converges (to $e^x$) for every real $x$. So, $\{S_n\}_{n=1}^\infty$ is a Cauchy sequence.

Now, we know that $(\mathbb R^{n\times n}, ||\cdot||)$ is complete. So, $\{S_n\}_{n=1}^\infty$, being Cauchy, must converge.

This completes the proof.
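Both ingredients of this proof, the submultiplicativity of the norm and the scalar tail bound, are easy to check numerically (a Python sketch; the helper names `fro` and `mat_mul` are mine):

```python
import math

def fro(m):
    """Frobenius norm: the Euclidean norm of the entries."""
    return math.sqrt(sum(x * x for row in m for x in row))

def mat_mul(x, y):
    n = len(x)
    return [[sum(x[i][k] * y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

X = [[1.0, -2.0], [3.0, 0.5]]
Y = [[0.0, 4.0], [-1.0, 2.0]]
# submultiplicativity ||XY|| <= ||X|| ||Y||
assert fro(mat_mul(X, Y)) <= fro(X) * fro(Y) + 1e-12

# the tail of the exponential series is dominated by the scalar tail in ||X||
x = fro(X)
tail = sum(x ** k / math.factorial(k) for k in range(15, 60))
assert tail < 1e-2   # so the partial sums S_n are Cauchy
```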


Appendix Problem 2 :

Prove that $\gamma_A$ as defined in Problem 4 is a smooth curve.

Proof :

Let us consider the partial sums of $\exp (tA)$ given by

\begin{align*} S_n&=\sum_{k=0}^n \frac{(tA)^k}{k!}\\ &=\sum_{k=0}^n \frac{t^kA^k}{k!} \end{align*}

So, we have

\begin{align*} S_n^\prime &=\left(\sum_{k=0}^n \frac{t^kA^k}{k!}\right)^\prime\\ &=\sum_{k=1}^n \frac{kt^{k-1}A^k}{k!}\\ &=A\cdot\sum_{k=0}^{n-1} \frac{t^kA^k}{k!}\\ &=A\cdot S_{n-1} \end{align*}

which (using the result of Appendix Problem 1) converges, uniformly on bounded intervals of $t$.


The result that we have used in the last argument is Theorem 7.17 of Rudin's Principles of Mathematical Analysis:

Suppose $\{f_n\}$ is a sequence of functions, differentiable on $[a,b]$ and such that $\{f_n(x_0)\}$ converges for some point $x_0$ on $[a,b]$. If $\{f_n^{\prime}\}$ converges uniformly on $[a,b]$, then $\{f_n\}$ converges uniformly on $[a,b]$, to a function $f$, and

$$f^{\prime}(x)=\lim_{n\to\infty}f_n^\prime(x),\quad(a\leq x\leq b).$$


Arguing inductively, we can show that $\gamma_A$ is $C^\infty$ and

$$\gamma_A^{(n)}=A^{n}\cdot\gamma_A$$

which completes the proof.


Alternately, we could have noted that each entry of $\gamma_A$, being a power series in $t$, is $C^\infty$, and hence $\gamma_A$ is also $C^\infty$.


Appendix Problem 3 :

We want to understand what a smooth curve in $\text{SL}_2(\mathbb R)$ actually means. This we will do in Appendix Note 1. For now, we wish to set the stage and look at $\text{SL}_2(\mathbb R)$ as a differentiable manifold.

Proof :

We know that, to obtain a $\mathcal C^1$-differentiable manifold from a topological manifold with atlas $\mathscr A$, we only need to check that every transition map between charts in $\mathscr A$ is differentiable in the usual sense.
In this case, we have the atlas $\mathscr A=\{(U,x),(V,y)\}$ from the solution of Problem 1 (writing $x=\phi$ and $y=\psi$). It is easy to see that

$$\left(y\circ x^{-1}\right)(a,b,c)=y\begin{pmatrix}a & b  \\ c & \frac{1+bc}{a} \end{pmatrix}=\left(a,b,\frac{1+bc}{a}\right)$$

This gives us the transition map

\begin{align*} y\circ x^{-1}:&x(U\cap V)\to y(U\cap V)\\ &(a,b,c)\mapsto \left(a,b,\frac{1+bc}{a}\right) \end{align*}

Similarly,

$$\left(x\circ y^{-1}\right)(a,b,d)=x\begin{pmatrix}a & b  \\ \frac{ad-1}{b} & d \end{pmatrix}=\left(a,b,\frac{ad-1}{b}\right)$$

giving us

\begin{align*} x\circ y^{-1}:&y(U\cap V)\to x(U\cap V)\\ &(a,b,d)\mapsto \left(a,b,\frac{ad-1}{b}\right) \end{align*}

Clearly, the transition maps are differentiable, as $a\neq 0$ and $b\neq 0$ on the overlap.


This proves that $\mathscr A$ is a differentiable atlas. So, we see that $\text{SL}_2(\mathbb R)$ is a differentiable manifold.


Appendix Note 1 :

Let us recall that a function $f:\mathbb R^n\to M$ to a manifold $M$ is called smooth if for all charts $(U,\phi)$ of $M$ the function

$$\phi\circ f:f^{-1}(U) \to \phi(U)$$

is smooth (in the Euclidean sense).

In our case (in Problem 2), we were already given the definition that $\gamma$ is smooth if $\gamma$ is a smooth curve in $\mathbb R^4$. We want to check that the two definitions agree, which will establish that the manifold structure on $\text{SL}_2(\mathbb R)$ is compatible with the ambient one. So, given the two charts $\{(U,\phi),(V,\psi)\}$, let us look at

\begin{align*} &\phi\circ\gamma : \gamma^{-1}(U) \to\mathbb{R}^3\\ &\phi\circ \gamma(t)=\phi \begin{pmatrix} p_1(t)&q_1(t)\\ p_2(t)&q_2(t) \end{pmatrix}=\left(p_1(t),q_1(t),p_2(t)\right) \end{align*}

and

\begin{align*} &\psi\circ\gamma : \gamma^{-1}(V) \to\mathbb{R}^3\\ &\psi\circ \gamma(t)=\psi \begin{pmatrix} p_1(t)&q_1(t)\\ p_2(t)&q_2(t) \end{pmatrix}=\left(p_1(t),q_1(t),q_2(t)\right) \end{align*}

Now, these, being maps from open subsets of $\mathbb R$ to $\mathbb R^3$, are smooth in the Euclidean sense, as their components are smooth.

Once this unification is done, we can now say that component-wise smoothness is enough to guarantee manifold-wise smoothness. This justifies the argument we used in Problem 2 to show that $B\cdot \gamma$ is smooth.


Appendix Problem 4 :

Prove that $\text{SL}_2(\mathbb R)$ is a topological space.

Proof :

Let us recall that if $N$ is a subset of $M$, and $\mathcal O$ is a topology on $M$, then

$$\mathcal O \big |_N:=\{U\cap N: U\in \mathcal O\}$$

equips $N$ with the subspace topology inherited from $M$.

So, let us take the standard topology on $\mathbb R$ given by

\begin{align*} &B_r(x):=\{y\in \mathbb R: |x-y|<r\}\\ &U\in \mathcal O_{\mathbb R} \iff \forall x\in U\;\exists r>0:B_r(x)\subset U \end{align*}

Now, we equip $\mathbb R^4$ with the corresponding product topology $\mathcal O_{\mathbb R^4}$, so that we can finally define

$$\mathcal O:=\left (\mathcal O_{\mathbb R^4}\right )\big |_{\text{SL}_2(\mathbb R)}$$

so that $(\text{SL}_2(\mathbb R),\mathcal O)$ is a topological space.

This, added to Appendix Problem 3 and Problem 1, shows that $\text{SL}_2(\mathbb R)$ is a differentiable manifold.


Appendix Problem 5 :

Prove that $\text{SL}_2(\mathbb R)$ forms a group under the usual definition of multiplication.

Proof :

The usual definition of multiplication is of course given by

\begin{align*} &\cdot : \text{SL}_2(\mathbb R)\times\text{SL}_2(\mathbb R) \to \text{SL}_2(\mathbb R)\\ &\left(\begin{pmatrix}a & b  \\ c & d \end{pmatrix},\begin{pmatrix}e & f  \\ g & h \end{pmatrix}\right)\mapsto \begin{pmatrix}ae+bg & af+bh  \\ ce+dg & cf+dh \end{pmatrix} \end{align*}

This operation is well defined: the entries are real, and $\det(AB)=\det A\cdot \det B=1$, so the product again lies in $\text{SL}_2(\mathbb R)$. It is also straightforward to check that

1. this operation is associative

2. has the identity element $\begin{pmatrix}1 & 0  \\ 0 & 1 \end{pmatrix}$, and

3. for each $\begin{pmatrix}a & b  \\ c & d \end{pmatrix}$, admits the inverse $\begin{pmatrix}d & -b  \\ -c & a \end{pmatrix}$.


This completes the proof that $\text{SL}_2(\mathbb R)$ is a (non-commutative) group.
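These axioms are mechanical to verify on examples; here is a Python sketch (helper names hypothetical) checking closure, the inverse formula, associativity on a sample, and non-commutativity:

```python
def mat_mul(x, y):
    return [[sum(x[i][k] * y[k][j] for k in (0, 1)) for j in (0, 1)]
            for i in (0, 1)]

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def inv2(m):
    """Inverse of a determinant-1 matrix: [[d, -b], [-c, a]]."""
    (a, b), (c, d) = m
    return [[d, -b], [-c, a]]

I = [[1, 0], [0, 1]]
m = [[2, 3], [1, 2]]
n = [[1, 1], [0, 1]]
assert det2(mat_mul(m, n)) == 1                                # closure
assert mat_mul(m, inv2(m)) == I                                # inverse
assert mat_mul(mat_mul(m, n), m) == mat_mul(m, mat_mul(n, m))  # associativity
assert mat_mul(m, n) != mat_mul(n, m)                          # non-commutative
```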


Appendix Problem 6 :

Prove that $\text{SL}_2(\mathbb R)$ is connected.

Proof :

We will prove the more general case of $\text{SL}_n(\mathbb R)$.

First, let us prove that the elementary matrices of first type generate $\text{SL}_n(\mathbb R)$. This is Exercise 2.4.8(b) of Artin's Algebra.

If $E$ is an elementary matrix, then $X\mapsto EX$ can be described as follows: if $E$ is Type 1, with the off-diagonal entry $a$ at $(i,j)$, we do $r_i\mapsto r_i+ar_j$; if $E$ is Type 3, with the modified diagonal entry $c\ne0$ at index $i$, then $r_i\mapsto cr_i$.

Note that Type 1 matrices have determinant $1$ and Type 3 matrices have determinant $c$.

In order to show$$M = E_1E_2\dots E_k$$for some (permitted) elementary matrices $E_i$, it suffices to show $$I_n = F_kF_{k-1}\dots F_1M$$for some elementary $F_i$, since then$$M = F_1^{-1}\dots F_k^{-1},$$as elementary matrices are invertible, and their inverses are elementary as well.

Now, we consider $M\in \text{SL}_n(\mathbb{R})$. Using the row operations corresponding to Type 1 elementary matrices, we turn column $i$ into $e_i$ ($1$ at position $i$, $0$ elsewhere) from left to right.

Take the leftmost column $i$ with $c_i \ne e_i$, if it exists (otherwise, we are done). Since $\det(M) = 1 \ne 0$, we cannot have $c_i$ written as a linear combination of$$c_1=e_1,\dots,\,c_{i-1}=e_{i-1};$$hence one of the entries $i,i+1,\dots,n$ of $c_i$ must be nonzero, say the one in row $j$.

Subtracting suitable multiples of $r_j$ from the other rows, we first clear out all of column $i$ except for the entry in row $j$. Note that none of this affects columns $1$ through $i-1$. If $i=n$, we have a diagonal matrix with determinant $1$ whose first $n-1$ diagonal entries are all $1$, so the last entry is $1$ as well, and we are done. Otherwise, if $i<n$, pick an arbitrary row $k\ne j$ from $i$ to $n$, and add a suitable multiple of $r_j$ to $r_k$ so that the $(k,i)$ entry becomes $1$. Now subtract a suitable multiple of $r_k$ from $r_j$ so the $(j,i)$ entry becomes $0$. If $k=i$, we can proceed to column $i+1$; otherwise, add $r_k$ to $r_i$ and subtract $r_i$ from $r_k$, and then proceed to column $i+1$.

Now, let $\sim$ be the binary relation on $\text{SL}_n(\mathbb{R})$ given by path-connectivity, i.e., $A\sim B$ if and only if there is a continuous path in $\text{SL}_n(\mathbb{R})$ from $A$ to $B$. Clearly, $\sim$ is an equivalence relation.

In order to show $\text{SL}_n(\mathbb{R})$ is path-connected, it suffices to show $A\sim I_n$ for all $A\in \text{SL}_n(\mathbb{R})$. But, we just proved that $A$ can be written as a (possibly empty) product of elementary matrices of the first type, so it in fact suffices to prove that

$$E_{uv}(a)M\sim M$$

for all $M\in \text{SL}_n(\mathbb{R})$ and Type 1 elementary matrices $E_{uv}(a)$ ($1\le u,\,v\le n$, $u\neq v$) of the form

$$I_n + [a[(i,\,j) = (u,\,v)]]_{i,\,j\,=\,1}^n$$

Yet

$$M\mapsto E_{uv}(b)M$$

simply adds $b$ times row $v$ to row $u$, i.e. takes $r_u$ to $r_u+br_v$. For fixed $u$, $v$, $M$, this map is continuous in $b$ (and preserves the determinant), so the continuous function

$$X(t) = E_{uv}(ta)M$$

over $[0,1]$ takes

$$X(0) = M \to X(1) = E_{uv}(a)M$$

while remaining inside $\text{SL}_n(\mathbb{R})$, as desired.
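For a concrete instance of this path, here is a Python sketch (`shear_path` is a hypothetical helper implementing $X(t)=E_{12}(ta)M$ for $2\times 2$ matrices); the determinant stays exactly $1$ along the whole path:

```python
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def shear_path(m, a, t):
    """X(t) = E_12(t*a) * M: add t*a times row 2 to row 1."""
    return [[m[0][0] + t * a * m[1][0], m[0][1] + t * a * m[1][1]],
            [m[1][0], m[1][1]]]

m = [[2.0, 3.0], [1.0, 2.0]]                   # det = 1
for k in range(11):
    t = k / 10.0
    assert abs(det2(shear_path(m, 5.0, t)) - 1.0) < 1e-9   # stays in SL_2
assert shear_path(m, 5.0, 0.0) == m                         # X(0) = M
```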

Alternately, we can also first prove that $\text{GL}_n^+(\mathbb R):=\big\{A\in M_n(\mathbb R): \det(A)>0\big\}$ is connected, and then consider the continuous surjective map

$$\Psi\colon \text{GL}_n^+(\mathbb R)\ni A\mapsto \frac{A}{(\det A)^{1/n}}\in \text{SL}_n(\mathbb R)$$

and recall that the continuous image of a connected set is connected.

Now, we can prove $\text{GL}_n^+(\mathbb R)$ is connected by induction on $n$. Consider the map

$$p\colon \text{M}_n(\mathbb R)=\Bbb R^n\times \Bbb R^{n(n-1)} \to \Bbb R^n$$

given by $p(A)=Ae_1$. That is, $p$ sends $A\in \text{M}_n(\mathbb R)$ to its first column. Note that $p$ is a projection map, hence open and continuous.

Note that $\text{GL}_1^+(\mathbb R)=(0,\infty)$, which is connected. Now, let $f\colon \text{GL}_n^+(\mathbb R)\to \Bbb R^n-\{0\}$ be the restriction of $p$ to $\text{GL}_n^+(\mathbb R)\subseteq_{\text{open}}\text{M}_n(\mathbb R)$. So, $f$ is also open and continuous.

Next, $f^{-1}(e_1)=\Bbb R^{n-1}\times \text{GL}_{n-1}^+(\Bbb R)$, which is a connected set by induction. For $y\in \Bbb R^n-\{0\}$, choose $B\in \text{GL}_n^+(\mathbb R)$ with $f(B)=y$. Then $f^{-1}(y)=\big\{B\cdot C:C\in f^{-1}(e_1)\big\}$, so each fibre of $f$ is connected, being a homeomorphic image of a connected set. But we know that if $Y$ is connected and $f\colon X\to Y$ is a surjective continuous open map having connected fibres, then $X$ is also connected. This we can prove by contradiction:

If possible, write $X=U\bigsqcup V$ where $U,V$ are non-empty open subsets of $X$. Then, $f(U),f(V)$ are open subsets of $Y$ such that $f(U)\cup f(V)=Y$. Now, $Y$ being connected implies $f(U)\cap f(V)\not=\emptyset$. Take $y\in f(U)\cap f(V)$; then $f^{-1}(y)\cap U\not =\emptyset$ and $f^{-1}(y)\cap V\not=\emptyset$, contradicting the fact that the fibres are connected.

This proves that $\text{SL}_n(\mathbb R)$, and in particular $\text{SL}_2(\mathbb R)$, is connected (the first argument shows that it is in fact path-connected).


Appendix Problem 7 :

Prove that $\text{SL}_2(\mathbb R)$ forms a Lie group.

Proof :

We have equipped $\text{SL}_2(\mathbb R)$ with both a group and a manifold structure. In order to obtain a Lie group structure, we have to check that these two structures are compatible, i.e., we need to show that the two maps

\begin{align*} \mu:&\text{SL}_2(\mathbb R)\times\text{SL}_2(\mathbb R)\to\text{SL}_2(\mathbb R)\\ &\left(\begin{pmatrix}a & b  \\ c & d \end{pmatrix},\begin{pmatrix}e & f  \\ g & h \end{pmatrix}\right)\mapsto \begin{pmatrix}ae+bg & af+bh  \\ ce+dg & cf+dh \end{pmatrix} \end{align*}

and

\begin{align*} i:&\text{SL}_2(\mathbb R)\to\text{SL}_2(\mathbb R)\\ &\begin{pmatrix}a & b  \\ c & d \end{pmatrix}\mapsto \begin{pmatrix}a & b  \\ c & d \end{pmatrix}^{-1}\end{align*}

are differentiable with the differentiable structure on $\text{SL}_2(\mathbb R)$. For instance, for the inverse map $i$, we have to show that the map $y\circ i\circ x^{-1}$ is differentiable in the usual sense for any pair of charts $(U,x),(V,y)\in \mathscr A$.

\begin{array}{c} U\subseteq \text{SL}_2(\mathbb R) \xrightarrow{\quad i\quad} V\subseteq \text{SL}_2(\mathbb R) \\ \downarrow x\qquad\qquad\qquad\quad \downarrow y \\ x(U)\subseteq \mathbb R^3\xrightarrow{y\circ i\circ x^{-1}} y(V)\subseteq \mathbb R^3 \end{array}

But, since the transition maps in $\mathscr A$ are differentiable, if $y\circ i\circ x^{-1}$ is differentiable for one pair of charts around a point, then it is differentiable for every pair of charts there. Hence, we can simply let $(U,x)$ and $(V,y)$ be the two charts on $\text{SL}_2(\mathbb R)$ defined above. Then we have

$$\left(y\circ i\circ x^{-1}\right)(a,b,c)=(y\circ i)\left(\begin{pmatrix}a & b  \\ c & \frac{1+bc}{a} \end{pmatrix}\right)=y\left(\begin{pmatrix}\frac{1+bc}{a} & -b  \\ -c & a \end{pmatrix}\right)=\left(\frac{1+bc}{a},-b,a\right)$$

which is clearly differentiable as a map between open subsets of $\mathbb R^3$ as $a\neq 0$ on $x(U)$.

To prove that $\mu$ is differentiable, we can proceed almost similarly once we have a differentiable structure on the product manifold $\text{SL}_2(\mathbb R)\times \text{SL}_2(\mathbb R)$. Or, we may argue that the matrix multiplication $\text{M}_2(\mathbb R)\times\text{M}_2(\mathbb R)\to\text{M}_2(\mathbb R)$ is given by smooth expressions in the entries (products and sums), and hence is a smooth map, from which it follows that the restriction $\text{SL}_2(\mathbb R)\times\text{SL}_2(\mathbb R)\to\text{SL}_2(\mathbb R)$ is also smooth.

This completes the proof that $\text{SL}_2(\mathbb R)$ is a 3-dimensional Lie group.
