A Surprising Proof of an Inequality

Problem : Prove that for real numbers $x_1,\dots ,x_n$,
$$\sum_{i=1}^n \sum_{j=1}^n \left\lvert x_i - x_j\right\rvert^r \le \sum_{i=1}^n \sum_{j=1}^n \left\lvert x_i + x_j\right\rvert^r$$
for any $r\in [0,2]$.

Solution : The cases $r=0,2$ can be checked by hand (for $r=2$, the difference of the two sides is $\sum_{i,j} 4x_i x_j = 4\left(\sum_{i=1}^n x_i\right)^2 \ge 0$). So, we assume $r\in (0,2)$ and begin by noting that the integral
$$\mathcal I (\alpha) = \int_{0}^\infty \frac{1-\cos \left(\alpha x\right)}{x^{r+1}}\;\text{d}x$$
is finite for $r\in(0,2)$ (near $0$ the integrand is $O\left(x^{1-r}\right)$ since $1-\cos(\alpha x)=O\left(x^2\right)$, and near $\infty$ it is at most $2x^{-(r+1)}$) and positive for $\alpha\ne 0$. Also, it is easy to prove using the substitution $y=|\alpha|x$ that $\mathcal I(\alpha)=|\alpha|^r\cdot \mathcal I(1)$.
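
This scaling is easy to sanity-check numerically. The sketch below is only an illustration, not part of the proof: it assumes numpy and scipy are available and truncates the integral at a cutoff $T$, whose tail is bounded by $2/(rT^r)$.

```python
# Numerical sanity check of I(alpha) = |alpha|^r * I(1); not part of the proof.
# Assumes numpy and scipy; the integral is truncated at T, tail <= 2/(r*T^r).
import numpy as np
from scipy.integrate import quad

def I(alpha, r, T=500.0):
    # Integrand (1 - cos(alpha*x)) / x^(r+1): integrable near 0 since
    # 1 - cos(alpha*x) = O(x^2); the truncated tail beyond T is negligible.
    val, _ = quad(lambda x: (1 - np.cos(alpha * x)) / x ** (r + 1), 0, T, limit=500)
    return val

r = 1.3
for alpha in (0.5, 2.0, -3.0):
    print(alpha, I(alpha, r), abs(alpha) ** r * I(1.0, r))
```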

Now, using this, we have
\begin{align*} |a+b|^r-|a-b|^r &= \frac{1}{\mathcal I(1)} \int_0^\infty \frac{\cos \left((a-b)x\right) - \cos \left((a+b)x\right)}{x^{r+1}} \;\text{d}x\\ &= \frac{1}{\mathcal I(1)} \int_0^\infty \frac{2 \sin (ax) \sin (bx)}{x^{r+1}} \;\text{d}x \end{align*}
and hence
$$\sum_{i=1}^n \sum_{j=1}^n \left\lvert x_i + x_j\right\rvert^r - \sum_{i=1}^n \sum_{j=1}^n \left\lvert x_i - x_j\right\rvert^r = \frac{2}{\mathcal I(1)} \int_0^\infty \frac{\left(\sum_{i=1}^n \sin (x_i x)\right)^2}{x^{r+1}} \;\text{d}x \ge 0,$$
which completes the proof.
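
As a quick empirical check of the inequality itself (again, not part of the argument), one can test random inputs; numpy is assumed.

```python
# Random empirical test of sum |x_i - x_j|^r <= sum |x_i + x_j|^r for r in [0, 2].
import numpy as np

rng = np.random.default_rng(0)
for _ in range(10000):
    r = rng.uniform(0.0, 2.0)
    x = rng.normal(size=rng.integers(2, 8))
    lhs = np.sum(np.abs(x[:, None] - x[None, :]) ** r)
    rhs = np.sum(np.abs(x[:, None] + x[None, :]) ** r)
    assert lhs <= rhs + 1e-9, (r, x)
print("no counterexample found")
```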

Prokhorov’s Theorem

The aim is to give a proof of Prokhorov’s Theorem.

Theorem : Let $\{\mu_n : n\ge 1\}$ be a sequence of probability measures on $\left(\mathbb R^d, \mathcal B\left(\mathbb R^d\right)\right)$. If $\{\mu_n : n\ge 1\}$ is tight, then every subsequence has a further subsequence that converges weakly to some probability measure.

We will use the following well-known fact in our proof.
Fact : Let $\{\mu_n : n\ge 1\}$ be a sequence of probability measures on $\left(\mathbb R^d, \mathcal B\left(\mathbb R^d\right)\right)$ and let $\mu_0$ be a probability measure on the same space. Then, weak convergence $\mu_n\xrightarrow[]{d} \mu_0$ is equivalent to $\int g \;\text{d}\mu_n \to \int g \;\text{d}\mu_0$ for every compactly supported continuous function $g$.
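
As a small numerical illustration of this fact (not used anywhere in the proof), take the hypothetical example $\mu_n = N(1/n, 1)$ and $\mu_0 = N(0,1)$ and integrate a compactly supported continuous bump against them; numpy and scipy are assumed.

```python
# Illustration of the fact: for mu_n = N(1/n, 1) and mu_0 = N(0, 1), integrals
# of a compactly supported continuous bump converge.  Assumes numpy and scipy.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def bump(x):
    # Continuous function supported in [-1, 1].
    return np.cos(np.pi * x / 2) if abs(x) < 1 else 0.0

def integral(mean):
    # Integral of bump against N(mean, 1); the integrand vanishes outside [-1, 1].
    val, _ = quad(lambda x: bump(x) * norm.pdf(x, loc=mean), -1, 1)
    return val

target = integral(0.0)
for n in (1, 10, 100, 1000):
    print(n, abs(integral(1.0 / n) - target))
```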

With this at hand, we give the proof of the theorem.

Proof of Theorem : Let $K^N = [-N,N]^d$. Then, $\mu_n$ induces a subprobability measure $\nu_n^N := \mu_n|_{K^N}$ on $K^N$. By weak$^\ast$ compactness, we can find a subsequence $\nu_{n_k}^N$ such that $\nu_{n_k}^N \xrightarrow[]{w^\ast} \nu^N$ for a subprobability measure $\nu^N$ on $K^N$. Repeating the argument starting with this subsequence on $K^{N+1}$ gives a new limit $\nu^{N+1}$. Repeating this procedure countably many times and passing to the diagonal subsequence, we obtain a single subsequence (still denoted $\mu_{n_k}$) along which $\nu_{n_k}^N \xrightarrow[]{w^\ast} \nu^N$ for every $N$. Noticing that $\nu^N = \nu^{N+1}$ on $K^N$, so that $\nu^N\left(A\cap K^N\right)$ is nondecreasing in $N$, we may define
$$\mu(A) := \lim_{N\to \infty} \nu^N \left(A\cap K^N\right)$$
for $A\in \mathcal B\left(\mathbb R^d\right)$.

Now, we will show that $\mu$ is a probability measure. Once this is done, the fact above yields weak convergence of our subsequence to $\mu$: any compactly supported continuous $g$ is supported inside some $K^N$, so $\int g \;\text{d}\mu_{n_k} = \int g \;\text{d}\nu_{n_k}^N \to \int g \;\text{d}\nu^N = \int g \;\text{d}\mu$ by weak$^\ast$ convergence on $K^N$.

Clearly $\mu\left(\mathbb R^d\right)\le 1$ as each $\nu^N$ is a subprobability measure. Given $\varepsilon>0$, tightness yields $N_\varepsilon$ such that $\mu_n\left(K^{N_\varepsilon}\right)\ge 1-\varepsilon$ for all $n\ge 1$. Testing weak$^\ast$ convergence against the constant function $1$ on $K^{N_\varepsilon}$, it follows that $\mu\left(\mathbb R^d\right)\ge \mu\left(K^{N_\varepsilon}\right)\ge 1-\varepsilon$, and since $\varepsilon$ was arbitrary, $\mu\left(\mathbb R^d\right)=1$. It remains to prove that $\mu$ is $\sigma$-additive. Clearly $\mu$ is monotone. First, let
$$A = \bigcup_{i=1}^I A_i$$
be a disjoint union of finitely many measurable sets. Then
$$\left\lvert \mu\left(\bigcup_{i=1}^I A_i\right) - \sum_{i=1}^I \mu\left(A_i\right)\right\rvert \le \left\lvert \mu\left(\bigcup_{i=1}^I A_i\cap K^{N_\varepsilon}\right) - \sum_{i=1}^I \mu\left(A_i \cap K^{N_\varepsilon}\right)\right\rvert + \left(I+1\right)\varepsilon = \left(I+1\right)\varepsilon$$
since the restriction of $\mu$ to $K^{N_\varepsilon}$ (which is $\nu^{N_\varepsilon}$) is $\sigma$-additive, while replacing any measurable set $B$ by $B\cap K^{N_\varepsilon}$ changes $\mu$ by at most $\varepsilon$, which accounts for the $\left(I+1\right)\varepsilon$ term. Since $\varepsilon>0$ was arbitrary, $\mu$ is finitely additive. It follows that $\mu$ is $\sigma$-superadditive as
$$\mu(A) \ge \mu\left(\bigcup_{i=1}^I A_i\right) = \sum_{i=1}^I \mu\left(A_i\right) \to \sum_{i=1}^\infty \mu\left(A_i\right)$$
by letting $I\to \infty$.

For the reverse inequality, using finite additivity of $\mu$ and again that the restriction of $\mu$ to $K^{N_\varepsilon}$ is $\sigma$-additive, we have
\begin{align*} \mu(A) &= \mu\left(A\setminus K^{N_\varepsilon}\right) + \mu\left(\bigcup_{i=1}^\infty A_i \cap K^{N_\varepsilon}\right)\\ &= \mu\left(A\setminus K^{N_\varepsilon}\right) + \sum_{i=1}^\infty \mu\left(A_i\cap K^{N_\varepsilon}\right)\\ &\le \varepsilon + \sum_{i=1}^\infty \mu\left(A_i\right) \end{align*}
and since $\varepsilon>0$ was arbitrary, $\mu(A)\le \sum_{i=1}^\infty \mu\left(A_i\right)$. Together with $\sigma$-superadditivity, this shows that $\mu$ is $\sigma$-additive, thus completing the proof.
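
As an aside, here is a small numerical illustration of the tightness hypothesis (unrelated to the proof itself), using the hypothetical sequences $N(n,1)$, whose mass escapes every compact set, and $N\left(0, 1+\tfrac{1}{n}\right)$, which is tight; scipy is assumed.

```python
# Illustration of tightness; assumes scipy.
# N(n, 1) is not tight: for any fixed interval [-M, M] its mass escapes as n grows.
# N(0, 1 + 1/n) is tight: one large interval captures mass >= 1 - eps uniformly in n.
from scipy.stats import norm

M = 10.0
for n in (1, 5, 20, 50):
    escaping = norm(loc=n, scale=1.0)
    tight = norm(loc=0.0, scale=(1.0 + 1.0 / n) ** 0.5)
    print(n,
          escaping.cdf(M) - escaping.cdf(-M),  # -> 0 as n grows
          tight.cdf(M) - tight.cdf(-M))        # stays close to 1
```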

On the Neighbour Sum Problem

As a kid, I remember being utterly fascinated by this deceptively simple Olympiad problem. The 64 squares of an 8 × 8 chessboard are filled ...