
The Law of Iterated Logarithm

1. Introduction

The goal is to give a complete proof of the Law of Iterated Logarithm. The setup we will work in is the following. Let $\{X_i\}_{i=1}^\infty$ be a sequence of iid random variables with mean $0$ and variance $1$. Let
$$S_n := \sum_{i=1}^n X_i \qquad \text{ and } \qquad \bar{S}_n := \frac {S_n}n $$
for $n\in \mathbb Z^+$. From now on, $X_n$, $S_n$ and $\bar S_n$ will always refer to these definitions.

We wish to prove the following theorem.
Theorem 1 (The Law of Iterated Logarithm) : Let
$$\Lambda_n := \sqrt{2n \log \log n}$$
wherever it is defined. Then,
$$\limsup_{n\to \infty} \frac{S_n}{\Lambda_n} = 1$$
almost surely. By replacing each $X_i$ by $-X_i$, this also proves that
$$\liminf_{n\to \infty} \frac{S_n}{\Lambda_n} = -1$$
almost surely. In fact, for any $f\in C(\mathbb R, \mathbb R)$,
$$\limsup_{n\to \infty} f\left(\frac{S_n}{\Lambda_n}\right) = \sup_{t\in [-1,1]} f(t)$$
almost surely. That is, the set of limit points of
$$\left\{\frac{S_n}{\Lambda_n} : n\ge 1\right\}$$
coincides with the interval $[-1,1]$ almost surely.
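Although the $\log\log$ normalization converges far too slowly for any simulation to be conclusive, it is still instructive to watch $S_n/\Lambda_n$ along a concrete sequence. Here is a small illustrative sketch (the seed, sample size, and starting index are arbitrary choices of mine) tracking the running maximum for fair $\pm 1$ coin flips, which have mean $0$ and variance $1$:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000

# Fair +/-1 steps: mean 0, variance 1, so the LIL applies.
X = rng.choice([-1.0, 1.0], size=N)
S = np.cumsum(X)

# Lambda_n = sqrt(2 n log log n); we start from n >= 16 so log log n > 0.
n = np.arange(1, N + 1)
valid = n >= 16
ratio = S[valid] / np.sqrt(2 * n[valid] * np.log(np.log(n[valid])))

running_max = np.maximum.accumulate(ratio)
print(f"max of S_n / Lambda_n over the run: {running_max[-1]:.3f}")
```

Even at $n = 10^6$ the running maximum is typically only of order $1$; the limsup value $1$ is approached along extremely sparse subsequences, which is precisely why the proof below works along geometric subsequences $\left\lfloor \beta^m\right\rfloor$.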

In what follows, we will use the notation
$$\Lambda_\beta := \Lambda_{\left\lfloor\beta\right\rfloor} \qquad \text{ and } \qquad \tilde{S}_\beta := \frac{S_{\left\lfloor\beta\right\rfloor}}{\Lambda_\beta}$$
where $\left\lfloor\beta\right\rfloor$ is the greatest integer less than or equal to $\beta$.

The main reference we are following is Probability Theory: An Analytic View by Daniel W. Stroock.

2. Lemmas

To prove Theorem 1, we need three lemmas.

Lemma 1 : For $a\in (0,\infty)$ and $\beta\in (1,\infty)$, if
$$\sum_{m=1}^\infty \mathbb P \left(\left\lvert \tilde{S}_{\beta^m}\right\rvert \ge \frac{a}{\sqrt \beta}\right)<\infty$$
then
$$\limsup_{n\to \infty} \left\lvert \tilde{S}_n\right\rvert \le a$$
almost surely.

Lemma 2 : For any sequence $\{X_n : n\ge 1\}$ of iid random variables with mean $0$ and variance $\sigma^2$, we have
$$\limsup_{n\to \infty} \left\lvert \tilde{S}_n \right\rvert \le 8\sigma$$
almost surely.

Lemma 3 : Let the sequence $\{X_n\}$ have a common distribution $\mu$. Further, assume that the moment-generating function
$$M_\mu (\xi) := \int_{\mathbb R} e^{\xi x} \mu(\text{d}x)$$
is finite for all $\xi \in \mathbb R$. Then, for all $R\in (0,\infty)$, there is an $N(R)\in \mathbb Z^+$ and a suitable constant $K$ such that
$$\mathbb P\left(\left\lvert \tilde{S}_n\right\rvert \ge R\right) \le 2\, \exp \left [-\left(1-K\sqrt{\frac{8R^2\log\log n}{n}}\right) R^2 \log\log n\right]$$
for all $n\ge N(R)$. Also, for each $\varepsilon\in (0,1]$, there is an $N(\varepsilon)\in \mathbb Z^+$ such that for all $n\ge N(\varepsilon)$ and $\left\lvert a\right\rvert \le \frac{1}{\varepsilon}$,
$$\mathbb P \left(\left\lvert \tilde{S}_n - a\right\rvert < \varepsilon\right) \ge \frac 12 \, \exp \left[-\left(a^2 +4K\left\lvert a\right\rvert \varepsilon \right)\log \log n\right]$$
for some suitable constant $K$.

3. Proof of Theorem
We will first give a proof assuming the three lemmas and then give a proof of the lemmas afterwards.

Proof of Theorem 1 : Observe first that for any $X_n$ and $\varepsilon>0$, it is easy to find a $\psi \in C_\text{b}(\mathbb R)$ so that $Y_n := \psi \circ X_n$ has mean $0$ and variance $1$ while $Z_n := X_n - Y_n$ has variance less than $\varepsilon^2$ (what we want is a smooth truncation; one way to do this is to first consider $X_n\cdot \mathbf 1_{\left\lvert X_n\right\rvert<n}$ and then appropriately smooth the edges). Applying Lemma 2 to this $Z_n$, we have
$$\limsup_{n\to \infty} \left\lvert \tilde{S}_n(\omega)\right\rvert \le 1 + \limsup_{n\to \infty} \left\lvert \frac{\sum_{m=1}^n Z_m(\omega)}{\Lambda_n}\right\rvert \le 1+8\varepsilon$$
and for $a\in [-1,1]$, we have
$$\liminf_{n\to \infty}\left\lvert \tilde{S}_n(\omega)-a\right\rvert \le \limsup_{n\to \infty} \left\lvert \frac{\sum_{m=1}^n Z_m(\omega)}{\Lambda_n}\right\rvert \le 8\varepsilon$$
for almost every $\omega\in \Omega$. So, it is enough to assume that the random variables are bounded, and that is what will be assumed in the rest of the proof.

Now, for a given $\beta >1$, we can use Lemma 3 to argue
$$\mathbb P\left(\left\lvert\tilde{S}_{\beta^m}\right\rvert \ge \sqrt\beta\right) \le 2\, \exp\left[-\sqrt\beta \log\log \left\lfloor \beta^m\right\rfloor\right]$$
for all sufficiently large $m\in \mathbb Z^+$. So, using Lemma 1 with $a=\beta$, we see that
$$\limsup_{n\to \infty} \left\lvert\tilde{S}_n\right\rvert \le \beta$$
almost surely for all $\beta\in (1,\infty)$.

So, to complete the proof, we need to show
$$\mathbb P\left(\liminf_{n\to \infty} \left\lvert \tilde{S}_n -a\right\rvert <\varepsilon\right)=1$$
for all $a\in (-1,1)$ and all $\varepsilon>0$. To do this, we wish to use the Borel-Cantelli Lemma. But for that, we need to make sure we are dealing with independent events. To this end, we can use the previously proved result to argue that
$$\liminf_{n\to \infty} \left\lvert \tilde{S}_n -a\right\rvert \le \inf_{k\ge 2}\, \liminf_{m\to \infty} \left\lvert \tilde{S}_{k^m} -a\right\rvert \le \liminf_{k\to \infty}\,\liminf_{m\to \infty} \left\lvert\frac{S_{k^m}-S_{k^{m-1}}}{\Lambda_{k^m}}-a\right\rvert$$
almost surely. Here, the last inequality holds because $\left\lvert S_{k^{m-1}}\right\rvert/\Lambda_{k^m} = \left\lvert \tilde S_{k^{m-1}}\right\rvert\, \Lambda_{k^{m-1}}/\Lambda_{k^m}$ with $\Lambda_{k^{m-1}}/\Lambda_{k^m}\le k^{-1/2}$, so the already proved bound on $\limsup_n \left\lvert \tilde S_n\right\rvert$ shows that this error term vanishes almost surely as $k\to \infty$.

Now, since the events
$$A_{k,m} := \left\{\left\lvert\frac{S_{k^m}-S_{k^{m-1}}}{\Lambda_{k^m}} -a\right\rvert <\varepsilon\right\}, \qquad m\in \mathbb Z^+,$$
are mutually independent for each $k\ge 2$, it is enough to check that
$$\sum_{m=1}^\infty \mathbb P\left(A_{k,m}\right)=\infty$$
for sufficiently large $k$.

But, since
$$\lim_{k\to \infty} \,\max_{m\in \mathbb Z^+} \left\lvert\frac{\Lambda_{k^m}}{\Lambda_{k^m-k^{m-1}}} -1\right\rvert = 0$$
and
$$\mathbb P\left(A_{k,m}\right) = \mathbb P\left(\left\lvert \tilde{S}_{k^m -k^{m-1}}-\frac{a\,\Lambda_{k^m}}{\Lambda_{k^m -k^{m-1}}}\right\rvert < \frac{\varepsilon \,\Lambda_{k^m}}{\Lambda_{k^m -k^{m-1}}}\right)$$
for all $k\ge 2$, it is enough to show
$$\sum_{m=1}^\infty \mathbb P\left(\left\lvert \tilde{S}_{k^m -k^{m-1}}-a\right\rvert < \varepsilon\right) = \infty$$
for every $k\ge 2$ and $a\in (-1,1)$ and $\varepsilon>0$. This follows from the second part of Lemma 3: choose $\tilde \varepsilon>0$ so small that $\rho := a^2 + 4K\tilde{\varepsilon}|a|<1$, from which it follows that when $0<\varepsilon<\tilde{\varepsilon}$, we have
$$\mathbb P\left(\left\lvert \tilde{S}_n -a\right\rvert <\varepsilon\right) \ge \frac 12 \, \exp\left[ -\rho \log\log n\right]$$
for all sufficiently large $n$. With $n = k^m - k^{m-1}$, the $m$-th summand is therefore at least a constant multiple of $(m\log k)^{-\rho}$, and $\sum_m m^{-\rho}=\infty$ since $\rho<1$.

This completes the proof. $\square$

4. Proof of Lemma 1

We first introduce the idea of a median. Given a real-valued random variable $Y$, we say $\alpha\in \mathbb R$ is a median of $Y$ if
$$\mathbb P(Y\le \alpha) \ge \frac 12 \qquad \text{ and } \qquad \mathbb P(Y\ge \alpha) \ge \frac 12$$
and write $\alpha\in \operatorname{med}(Y)$. It is easy to see that
$$\alpha:= \inf \left\{t\in \mathbb R : \mathbb P(Y\le t)\ge \frac 12\right\}$$
is a median of $Y$. In other words, every $Y$ has a median. Additionally, it can be shown that if $Y$ is square-integrable with mean $m$, then
$$\left\lvert \alpha-m \right\rvert \le \sqrt{2 \,\text{Var}(Y)}$$
for all $\alpha\in \text{med}(Y)$.
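These definitions are easy to experiment with. The following sketch (a toy discrete distribution of my own choosing, purely for illustration) computes $\inf\{t : \mathbb P(Y\le t)\ge \frac 12\}$ and checks the mean-median bound $\left\lvert \alpha - m\right\rvert \le \sqrt{2\operatorname{Var}(Y)}$:

```python
import math

# A toy discrete distribution: Y uniform on {0, 1, 3}.
values = [0.0, 1.0, 3.0]
probs = [1 / 3, 1 / 3, 1 / 3]

# alpha := inf{ t : P(Y <= t) >= 1/2 }, found by scanning the sorted support.
cdf = 0.0
for v, p in zip(values, probs):
    cdf += p
    if cdf >= 0.5:
        alpha = v
        break

mean = sum(v * p for v, p in zip(values, probs))
var = sum(v * v * p for v, p in zip(values, probs)) - mean**2

print(alpha)  # → 1.0
assert abs(alpha - mean) <= math.sqrt(2 * var)  # the mean-median bound holds
```

Here $\mathbb P(Y\le 1)=\frac 23\ge \frac 12$ and $\mathbb P(Y\ge 1)=\frac 23\ge \frac 12$, so $1$ is indeed a median, and $\left\lvert 1-\frac 43\right\rvert = \frac 13 \le \sqrt{28/9}$.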

We also have the following theorem.
Theorem 2 (Lévy Reflection Principle) : For $k\le \ell$, choose $\alpha_{\ell,k} \in \operatorname{med}(S_{\ell}-S_k)$. Then, for all $\varepsilon>0$, we have
$$\mathbb P\left(\max_{1\le n\le N} \left\lvert S_n +\alpha_{N,n}\right\rvert \ge \varepsilon\right)\le 2\,\mathbb P(\left\lvert S_N\right\rvert \ge \varepsilon)$$
for all $N\in \mathbb Z^+$.

With this at hand, we move to the proof.

Proof of Lemma 1 : Let $\beta\in (1,\infty)$ be given. For each $m\in \mathbb N$ and $1\le n\le \beta^m$, let $\alpha_{m,n}$ be a median of $S_{\left\lfloor \beta ^m\right\rfloor}-S_n$. Since $S_{\left\lfloor \beta ^m\right\rfloor}-S_n$ has mean $0$ and variance at most $\beta^m$, this gives $\left\lvert \alpha_{m,n}\right\rvert \le \sqrt{2\beta^m}$ so that
\begin{align*}\limsup_{n\to \infty} \left\lvert \tilde{S}_n \right\rvert &= \limsup_{m\to \infty} \max_{\beta^{m-1}\le n\le \beta^m} \left\lvert \tilde{S}_n \right\rvert\\ &\le \sqrt{\beta} \,\limsup_{m\to \infty} \,\max_{n\le \beta^m} \frac{\left\lvert S_n\right\rvert}{\Lambda_{\beta^m}}\\ &\le \sqrt \beta \,\limsup_{m\to \infty} \,\max_{n\le \beta^m} \frac{\left\lvert S_n + \alpha_{m,n}\right\rvert}{\Lambda_{\beta^m}} \end{align*}
which gives
$$\mathbb P \left(\limsup_{n\to \infty} \left\lvert \tilde{S}_n \right\rvert \ge a\right) \le \mathbb P\left(\limsup_{m\to \infty}\max_{n\le \beta^m} \frac{\left\lvert S_n + \alpha_{m,n}\right\rvert}{\Lambda_{\beta^m}} \ge \frac{a}{\sqrt \beta}\right)$$
and hence on application of Theorem 2, we have
$$\mathbb P\left(\max_{n\le \beta^m} \frac{\left\lvert S_n + \alpha_{m,n}\right\rvert}{\Lambda_{\beta^m}} \ge \frac{a}{\sqrt \beta}\right) \le 2\mathbb P\left(\left\lvert \tilde{S}_{\beta^m}\right\rvert \ge \frac{a}{\sqrt \beta}\right)$$
from which the desired result follows using the Borel-Cantelli Lemma. $\square$

5. Proof of Lemma 2

Again, we need to introduce a few objects before we can move to our proof.

We define the Rademacher functions $\{R_n\}_{n\in \mathbb Z^+}$ on $[0,1)$ as follows. Consider the function $R:\mathbb R\to \{-1,1\}$ given by
$$R(t) := \begin{cases} -1 & \text{if}\qquad t-\lfloor t\rfloor \in \left[0,\frac 12\right)\\ \,\,\,\,\,1 & \text{if}\qquad t-\lfloor t\rfloor \in \left[\frac 12, 1\right) \end{cases}$$
and define $R_n$ on $[0,1)$ by
$$R_n(\omega) := R\left(2^{n-1} \omega\right)$$
for $n\in \mathbb Z^+$ and $\omega\in [0,1)$. It can be shown that the Rademacher functions are mutually independent.
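Concretely, $R_n(\omega)$ reads off the $n$-th binary digit of $\omega$. Here is a small sketch implementing the definition verbatim (the test point and grid size are arbitrary choices) and checking the orthogonality that independence implies, exactly on a dyadic grid:

```python
import math

def R(t: float) -> int:
    """R(t) = -1 on [0, 1/2) and +1 on [1/2, 1), extended 1-periodically."""
    return -1 if (t - math.floor(t)) < 0.5 else 1

def R_n(n: int, w: float) -> int:
    """The n-th Rademacher function: R_n(w) = R(2^(n-1) w)."""
    return R(2 ** (n - 1) * w)

# omega = 0.625 = 0.101 in binary, so R_1 = +1, R_2 = -1, R_3 = +1.
print(R_n(1, 0.625), R_n(2, 0.625), R_n(3, 0.625))  # → 1 -1 1

# Orthogonality (a shadow of independence): the average of R_1 R_2 R_3 over
# the midpoints of a fine dyadic grid is exactly 0, since each triple of
# leading binary digits occurs equally often.
grid = [(k + 0.5) / 1024 for k in range(1024)]
avg = sum(R_n(1, w) * R_n(2, w) * R_n(3, w) for w in grid) / len(grid)
print(avg)  # → 0.0
```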

We will use these functions in our proof.

Proof of Lemma 2 : Without loss of generality, it can be assumed that $\sigma=1$. We begin by considering the case where the $X_n$'s are symmetric, i.e., $-X_n$ has the same distribution as $X_n$. For this case, we will be able to prove the result with the constant $8$ replaced by $4$ once we can show
$$\sum_{m=0}^\infty \mathbb P\left(\left\lvert \tilde{S}_{2^m}\right\rvert\ge 2^{3/2}\right)<\infty$$
since we can then use Lemma 1.

Now, let $\left(\Omega, \mathcal F, \mathbb P\right)$ be the probability space on which the $X_n$'s are defined and let $\{R_n\}_{n\in \mathbb Z^+}$ be the sequence of Rademacher functions on $[0,1)$. Set $\mathbb Q := \lambda_{[0,1)}\times \mathbb P$ on $\left([0,1)\times \Omega , \mathcal B_{[0,1)} \times \mathcal F\right)$ where $\lambda_{[0,1)}$ is the Lebesgue measure on $[0,1)$. It is easy to check that the symmetry of $X_n$ is equivalent to the statement that
$$\omega \in \Omega \mapsto \left(X_1(\omega), X_2(\omega), \dots\right) \in \mathbb R^{\mathbb Z^+}$$
has the same distribution under $\mathbb P$ as
$$(t,\omega) \in [0,1)\times \Omega \mapsto \left(R_1(t)X_1(\omega), R_2(t)X_2(\omega), \dots\right)\in \mathbb R^{\mathbb Z^+}$$
has under $\mathbb Q$.

Now, from Hoeffding's inequality, we know that for given real numbers $\{\sigma_k\}_{k=1}^n$ and mutually independent random variables $\{\epsilon_k\}_{k=1}^n$ with the centered Bernoulli distribution given by $\mu\left(\{\pm 1\}\right)=\frac 12$, the sum $S := \sum_{k=1}^n \sigma_k \epsilon_k$ satisfies
$$\mathbb P \left(|S|\ge a\right) \le 2 \,\exp \left[-\frac{a^2}{2 \Sigma^2}\right]$$
for all $a\in [0,\infty)$, where $\Sigma^2 := \sum_{k=1}^n \sigma_k^2$. We use this with $\sigma_k = X_k(\omega)$ and $\epsilon_k = R_k$, and note that
$$\lambda_{[0,1)} \left(\left\{ t\in [0,1) : \left\lvert \sum_{n=1}^{2^m} R_n(t)X_n(\omega)\right\rvert \ge a\right\}\right) \le 2 \,\exp \left[-\frac{a^2}{2\sum_{1}^{2^m} X_n(\omega)^2}\right]$$
for all $a\in [0,\infty)$ and $\omega\in \Omega$.
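For small $n$, the Hoeffding bound can be verified exactly by enumerating all sign patterns. The sketch below (the weights are arbitrary numbers standing in for the frozen values $X_k(\omega)$) compares the exact tail probability with $2\exp\left[-a^2/2\Sigma^2\right]$:

```python
import math
from itertools import product

# Arbitrary fixed weights, standing in for X_k(omega) at a fixed omega.
sigma = [1.5, 0.3, 2.0, 0.7, 1.1, 0.9, 1.3, 0.4]
Sigma2 = sum(s * s for s in sigma)

def exact_tail(a: float) -> float:
    """P(|S| >= a) for S = sum sigma_k eps_k with eps_k = +/-1, each prob 1/2."""
    count = sum(
        1
        for eps in product([-1, 1], repeat=len(sigma))
        if abs(sum(s * e for s, e in zip(sigma, eps))) >= a
    )
    return count / 2 ** len(sigma)

for a in [1.0, 2.0, 4.0]:
    bound = 2 * math.exp(-a * a / (2 * Sigma2))
    assert exact_tail(a) <= bound  # Hoeffding's inequality, verified exactly
    print(f"a={a}: exact {exact_tail(a):.4f} <= bound {bound:.4f}")
```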

So, if we define
$$A_m := \left\{\omega \in \Omega : \frac{1}{2^m} \sum_{n=1}^{2^m} X_n(\omega)^2 \ge 2\right\}$$
and
$$F_m(\omega) := \lambda_{[0,1)} \left(\left\{t\in [0,1) : \left\lvert \sum_{n=1}^{2^m} R_n(t)X_n(\omega)\right\rvert \ge 2^{3/2} \Lambda_{2^m}\right\}\right)$$
then we have
\begin{align*} \mathbb P \left(\left\{ \omega\in \Omega : \left\lvert S_{2^m}(\omega)\right\rvert\ge 2^{3/2} \Lambda_{2^m}\right\}\right) &= \int_{\Omega} F_m(\omega)\, \mathbb P\left(\text{d}\omega\right) \\ &\le 2 \int_{\Omega} \exp \left[-\frac{8\Lambda_{2^m}^2}{2 \sum_{1}^{2^m} X_n(\omega)^2}\right] \mathbb P\left(\text{d}\omega\right)\\ &\le 2\, \exp\left[-4 \log\log 2^m\right] + 2\,\mathbb P \left (A_m \right) \end{align*}
where the first equality is due to Tonelli's Theorem, and the last inequality comes from splitting the integral over $A_m$ and its complement: off $A_m$, the sum in the denominator is less than $2^{m+1}$ while $\Lambda_{2^m}^2 = 2^{m+1}\log\log 2^m$.

So, it is enough to show $\sum_0^\infty \mathbb P\left(A_m\right) <\infty$. To do so, let us set
$$T_m := \sum_{n=1}^{2^m} X_n^2 \qquad B_m := \left\{\frac{T_{m+1}-T_m}{2^m}\ge 2\right\} \qquad \bar{T}_m := \frac{T_m}{2^m}$$
for $m\in \mathbb Z^+$. Clearly $\{B_m\}_{m\in \mathbb Z^+}$ are independent, and $\mathbb P\left(A_m\right) = \mathbb P\left(B_m\right)$ since $T_{m+1}-T_m$ is, like $T_m$, a sum of $2^m$ iid copies of $X_1^2$. So, by the (second) Borel-Cantelli Lemma, it is enough to show
$$\mathbb P\left(\limsup_{m\to \infty} B_m\right) \le \mathbb P \left(\limsup_{m\to \infty} \frac{T_{m+1}-T_m}{2^m} \ge 2\right) = 0.$$

But, by the Strong Law of Large Numbers, we know $\bar{T}_m \to 1$ almost surely, and hence
$$\frac{T_{m+1}-T_m}{2^m} = 2\bar{T}_{m+1} - \bar{T}_m \to 2-1 = 1$$
almost surely. This completes the proof for the symmetric case, with the constant $8$ replaced by $4$.
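The strong-law step above is easy to see numerically. A quick sketch (the uniform distribution and the seed are arbitrary choices of a bounded, mean-$0$, variance-$1$ law):

```python
import numpy as np

rng = np.random.default_rng(1)

# X uniform on (-sqrt(3), sqrt(3)): bounded, mean 0, variance 1.
X = rng.uniform(-np.sqrt(3), np.sqrt(3), size=2**16)

# T_m = sum of the first 2^m squares; the SLLN gives T_m / 2^m -> 1.
T = np.cumsum(X**2)
for m in [4, 8, 12, 16]:
    print(f"m={m}: T_m / 2^m = {T[2**m - 1] / 2**m:.4f}")
```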

To eliminate the symmetry assumption, let $\left(\Omega, \mathcal F, \mathbb P\right)$ be the space on which $X_n$'s are defined and define $\{Y_n\}$ on $\left(\Omega^2, \mathcal F^2, \mathbb P^2\right)$ as
$$Y_n (\omega_1, \omega_2) := X_n(\omega_1) - X_n(\omega_2)$$
for $n\in \mathbb Z^+$. It is easy to check that the $Y_n$'s are symmetric with variance $2$, so the symmetric case gives
$$\limsup_{n\to \infty} \frac{\left\lvert S_n(\omega_1) - S_n(\omega_2)\right\rvert}{\Lambda_n} \le 4\sqrt 2 = 2^{5/2} \le 8$$
for $\mathbb P^2$ almost every $(\omega_1,\omega_2)\in \Omega^2$.

Now, we wish to proceed by contradiction. If possible, let
$$\limsup_{n\to \infty} \frac{\left\lvert S_n(\omega)\right\rvert}{\Lambda_n} >8$$
on a set of positive $\mathbb P$ measure so that by Kolmogorov's $0-1$ Law, there is an $\varepsilon>0$ such that
$$\limsup_{n\to \infty} \frac{\left\lvert S_n(\omega)\right\rvert}{\Lambda_n}\ge 8+\varepsilon$$
for $\mathbb P$ almost every $\omega\in \Omega$. Using Fubini's Theorem, this gives that for $\mathbb P^2$ almost every $(\omega_1,\omega_2)\in \Omega^2$, there is a sequence $\left\{ n_m(\omega_1) : m\in \mathbb Z^+\right\}\subset \mathbb Z^+$ increasing to $\infty$ such that the limit
$$\lim_{m\to \infty} \frac{\left\lvert S_{n_m(\omega_1)}(\omega_1)\right\rvert}{\Lambda_{n_m(\omega_1)}} \ge 8+\varepsilon$$
exists and
$$\liminf_{m\to \infty} \frac{\left\lvert S_{n_m(\omega_1)}(\omega_2)\right\rvert}{\Lambda_{n_m(\omega_1)}} \ge \lim_{m\to \infty} \frac{\left\lvert S_{n_m(\omega_1)}(\omega_1)\right\rvert}{\Lambda_{n_m(\omega_1)}} - \limsup_{m\to \infty} \frac{\left\lvert S_{n_m(\omega_1)}(\omega_1) - S_{n_m(\omega_1)}(\omega_2)\right\rvert}{\Lambda_{n_m(\omega_1)}} \ge \varepsilon$$
so that by Fubini's Theorem, we have an $\{n_m : m\in \mathbb Z^+\}\subset \mathbb Z^+$ such that $n_m$ increases to $\infty$ and
$$\liminf_{m\to \infty} \frac{\left\lvert S_{n_m}(\omega_2)\right\rvert}{\Lambda_{n_m}}\ge \varepsilon$$
for $\mathbb P$ almost every $\omega_2\in \Omega$. This contradicts Markov's Inequality owing to
$$\mathbb E \left[\left(\frac{S_n}{\Lambda_n}\right)^2\right] = \frac{1}{2\log\log n} \to 0$$
hence completing the proof. $\square$

6. Proof of Lemma 3

As usual, we need to discuss some new ideas before we can move to our proof. We define
$$\Lambda_{\mu}(\xi) := \log \left(M_\mu(\xi)\right)$$
and define the Legendre transform of $\Lambda_\mu$ as
$$I_\mu (x) := \sup \left\{\xi x - \Lambda_\mu (\xi) : \xi\in \mathbb R\right\}$$
for $x\in \mathbb R$. Also, let $\Xi_\mu := \left(\Lambda_\mu^\prime\right)^{-1}$.

Lemma 4 : Let $m$ and $\sigma^2$ denote the mean and variance of $\mu$, and let $(\alpha,\beta)$ be an open interval around $m$ on which $\Xi_\mu$ is defined. There exists a $\delta\in (0,1]$ and a $K\in (0,\infty)$ such that $[m-\delta, m+\delta]\subset (\alpha,\beta)$ and
$$\left\lvert \Lambda_{\mu}^{\prime \prime}(\Xi_\mu(x))\right\rvert \le K \qquad \left\lvert \Xi_{\mu}(x)\right\rvert \le K\left\lvert x-m\right\rvert$$
and
$$\left\lvert I_\mu (x) - \frac{(x-m)^2}{2\sigma^2} \right\rvert \le K\left\lvert x-m\right\rvert^3$$
for all $x\in [m-\delta, m+\delta]$. In particular, if $0<\varepsilon <\delta$, then
$$\mathbb P\left(\left\lvert \bar{S}_n-m\right\rvert \ge \varepsilon\right)\le 2 \exp\left[-n \left(\frac{\varepsilon^2}{2\sigma^2}-K\varepsilon^3\right)\right]$$
and if $|a-m|<\delta$, then
$$\mathbb P\left(\left\lvert \bar{S}_n-a\right\rvert < \varepsilon\right)\ge \left(1-\frac{K}{n\varepsilon^2}\right)\exp \left[-n \left(\frac{|a-m|^2}{2\sigma^2} + K|a-m|\left(\varepsilon + |a-m|^2\right)\right)\right]$$
for any $\varepsilon>0$.

Proof : Recall that we started with the assumption that our random variables have mean $0$ and variance $1$. So, we have $\Lambda_\mu(0)=\Lambda_\mu^\prime(0)=0$ and $\Lambda_\mu^{\prime \prime}(0)=1$ and hence $\Xi_\mu(0)=0$ and $\Xi^\prime _\mu (0)=1$. So, we can find an $M\in (0,\infty)$ and a $\delta\in (0,1]$ with $\alpha<-\delta<\delta<\beta$ for which
$$\left\lvert \Xi_{\mu}(x) -x\right\rvert \le M|x|^2$$
whenever $|x|\le \delta$ and
$$\left\lvert \Lambda_{\mu}(\xi)-\frac{\xi^2}{2}\right\rvert \le M|\xi|^3$$
whenever $|\xi|\le (M+1)\delta$. This immediately proves
$$\left\lvert \Xi_\mu (x) \right\rvert \le (M+1) |x|$$
for $|x|\le \delta$ and the estimate for $I_\mu$ follows since $I_\mu(x)=\Xi_\mu(x)\,x-\Lambda_\mu(\Xi_\mu (x))$. $\square$
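As a concrete check of these estimates, take $\mu(\{\pm 1\}) = \frac 12$ (mean $0$, variance $1$), for which everything is computable in closed form: $M_\mu(\xi)=\cosh\xi$, $\Lambda_\mu(\xi)=\log\cosh\xi$, $\Lambda_\mu^\prime(\xi)=\tanh\xi$, so $\Xi_\mu = \operatorname{arctanh}$ and $I_\mu(x)=x\operatorname{arctanh}x+\frac 12\log(1-x^2)$. The sketch below (test points chosen arbitrarily near $0$) verifies the quadratic approximation $I_\mu(x)\approx x^2/2$ with a cubic error:

```python
import math

def I(x: float) -> float:
    """Legendre transform of log cosh: I(x) = x artanh(x) + (1/2) log(1 - x^2)."""
    return x * math.atanh(x) + 0.5 * math.log(1 - x * x)

# Near 0, I(x) = x^2/2 + x^4/12 + ..., so |I(x) - x^2/2| is O(|x|^3);
# here the constant K = 1 already works on these test points.
for x in [0.05, 0.1, 0.2]:
    err = abs(I(x) - x * x / 2)
    assert err <= abs(x) ** 3
    print(f"x={x}: |I(x) - x^2/2| = {err:.2e}")
```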

This gives us enough room to discuss our proof.

Proof of Lemma 3 : Begin by setting
$$\lambda_n := \frac{\Lambda_n}{n}=\sqrt{\frac{2\log\log n}{n}}$$
wherever it is defined.

To prove the first part, we apply the upper bound on $\mathbb P\left(\left\lvert \bar{S}_n-m\right\rvert \ge \varepsilon\right)$ obtained in Lemma 4 to see that
$$\mathbb P\left(\left\lvert \tilde{S}_n\right\rvert \ge R\right) = \mathbb P\left(\left\lvert \bar{S}_n\right\rvert \ge R\lambda_n\right) \le 2 \,\exp\left[-n \left(\frac{R^2\lambda_n^2}{2}-KR^3\lambda_n^3\right)\right]$$
for sufficiently large $n\in \mathbb Z^+$. The stated form follows upon noting that $n\lambda_n^2 = 2\log\log n$, so that $\frac{nR^2\lambda_n^2}{2} = R^2\log\log n$ and $nKR^3\lambda_n^3 = KR^2\sqrt{\tfrac{8R^2\log\log n}{n}}\,\log\log n$.

To prove the next part of the lemma, we note
$$\mathbb P\left(\left\lvert \tilde{S}_n-a\right\rvert < \varepsilon\right) = \mathbb P\left(\left\lvert \bar{S}_n-a_n\right\rvert < \varepsilon_n\right)$$
where $a_n=a\lambda_n$ and $\varepsilon_n=\varepsilon \lambda_n$. So, using the lower bound on $\mathbb P\left(\left\lvert \bar{S}_n-a\right\rvert < \varepsilon\right)$ obtained in Lemma 4, we conclude
\begin{align*} \mathbb P\left(\left\lvert \tilde{S}_n-a\right\rvert < \varepsilon\right) &\ge \left(1-\frac{K}{n\varepsilon_n^2}\right) \exp\left[-n \left(\frac{a_n^2}{2}+K|a_n|\left(\varepsilon_n + a_n^2\right)\right)\right]\\ &\ge \left(1-\frac{K}{2\varepsilon^2 \log\log n}\right) \exp\left[-\left(a^2 + 2K|a|\left(\varepsilon + a^2 \lambda_n\right)\right)\log\log n\right] \end{align*}
for sufficiently large $n$. Since $a^2\lambda_n \le \varepsilon$ and $1-\frac{K}{2\varepsilon^2\log\log n} \ge \frac 12$ once $n$ is large enough, this gives the stated bound. $\square$


Acknowledgement : I'd like to thank Satvik Saha, who has been my partner in reading this proof, patiently answering my barrage of inane questions and making clever observations that I've shamelessly passed off as my own in this write-up. I feel safe admitting it here because, let’s be real, no one’s reading this far anyway.
