On the Neighbour Sum Problem

As a kid, I remember being utterly fascinated by this deceptively simple Olympiad problem.

The 64 squares of an 8 × 8 chessboard are filled with positive integers in such a way that each integer is the average of the integers on the neighboring squares. (Two squares are neighbors if they share a common edge or a common vertex. Thus a square can have 8, 5 or 3 neighbors depending on its position). Show that all the 64 integer entries are in fact equal.

Brief Solution : Any given entry must lie between the smallest and largest entries among its neighbors. In particular, the largest entry on the board must be surrounded by entries equal to it, and propagating this observation across the board forces all 64 entries to be equal.

A couple of years ago, I got a little cheeky and decided to ask myself what might happen if we take sums instead of averages. Fortunately, I was able to rope in a few equally deranged friends who were as interested as I was in useless problems, and we got a very precise answer to it.

Question : An $n \times n$ chessboard for $n \geq 3$, with each square bearing an integer, is said to be a solution to the neighbor sum problem if each number is the sum of the numbers on the neighboring squares. Two squares are neighbors if they share a common edge or a common vertex. A chessboard with all squares bearing the number zero is said to be a trivial solution. How many such non-trivial solutions exist for a chessboard of given size $n\times n$?

The idea we used to tackle this problem is to imagine a chessboard full of integers and consider the linear transform
$$x \mapsto - x + \sum_{x\sim y} y$$
where $a\sim b$ denotes the relation that $a$ is a neighbour of $b$. The kernel of this linear map is precisely the set of solutions to our question.

It turned out that we were lucky and the matrix representing this linear map can be easily studied using theorems that others have proved before (in other words, we let smarter people from the past do the heavy lifting). Using these theorems, it soon became clear to us that an $n\times n$ chessboard admits a non-trivial solution to the neighbour sum problem if and only if there exist $p, q \in \mathbb{N}$ such that
$$\left(1 + 2\cos\left(\frac{p\pi}{n + 1}\right)\right) \left(1 + 2\cos\left(\frac{q\pi}{n + 1}\right)\right) = 2$$
and $1 \leq p, q \leq n$.
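Though the proof needs algebraic number theory, the criterion itself is easy to probe numerically. Here is a small sketch (the function name `solvable` and the tolerance are my own choices) that scans all pairs $1 \leq p, q \leq n$:

```python
import math

def solvable(n):
    # scan all pairs 1 <= p, q <= n for a solution of
    # (1 + 2cos(p*pi/(n+1))) * (1 + 2cos(q*pi/(n+1))) = 2
    vals = [1 + 2 * math.cos(p * math.pi / (n + 1)) for p in range(1, n + 1)]
    return any(abs(u * v - 2) < 1e-9 for u in vals for v in vals)
```

For small $n$ the scan agrees with the $6\mid (n+1)$ criterion below; for instance `solvable(5)` succeeds via $p=2$, $q=3$, where the two factors are $2$ and $1$.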

Solving this Diophantine equation requires tools from Algebraic Number Theory (which I will not go into in this post), but it turns out that it has a solution if and only if $6\mid (n+1)$. One solution for a $5\times 5$ board is
$$\begin{array}{|c|c|c|c|c|}\hline 1 & 0 & -1 & 0 & 1 \\\hline 1 & 0 & -1 & 0 & 1 \\\hline \phantom{-}0\phantom{-} & \phantom{-}0\phantom{-} & \phantom{-}0\phantom{-} & \phantom{-}0\phantom{-} & \phantom{-}0\phantom{-} \\\hline -1 & 0 & 1 & 0 & -1 \\\hline -1 & 0 & 1 & 0 & -1 \\\hline \end{array}$$
and another is its transpose.
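One can verify such a board mechanically. A minimal checker (names mine):

```python
def is_solution(board):
    # check that every entry equals the sum of its (edge- or vertex-) neighbours
    m, n = len(board), len(board[0])
    for i in range(m):
        for j in range(n):
            s = sum(board[i + di][j + dj]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if (di, dj) != (0, 0) and 0 <= i + di < m and 0 <= j + dj < n)
            if s != board[i][j]:
                return False
    return True

board = [
    [ 1, 0, -1, 0,  1],
    [ 1, 0, -1, 0,  1],
    [ 0, 0,  0, 0,  0],
    [-1, 0,  1, 0, -1],
    [-1, 0,  1, 0, -1],
]
```

Both `board` and its transpose pass the check.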

Soon, we realised that these techniques can also be extended to other kinds of more general boards. For example, we proved that non-trivial $m\times n$ chessboards satisfying the neighbor-sum property exist if and only if $2 \mid (m+1)$ and $3 \mid (n+1)$, and that non-trivial $m\times n$ toroidal chessboards satisfying the neighbor-sum property exist if and only if $4 \mid m$ and $6 \mid n$ (in both cases up to transposing the board, since the transpose of a solution is again a solution). Of course, the next question to ask is about the dimension of the solution space, and that turned out to be more difficult than the first problem. To know more, one can consult our original paper (now accepted in the American Mathematical Monthly).
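These divisibility criteria can also be tested directly by computing the kernel of the map $x \mapsto -x + \sum_{x\sim y} y$ with exact rational arithmetic. A brute-force sketch (function names mine, only feasible for small boards):

```python
from fractions import Fraction

def neighbours(i, j, m, n):
    # king-move neighbours of cell (i, j) on an m x n board, as flat indices
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if (di, dj) != (0, 0) and 0 <= i + di < m and 0 <= j + dj < n:
                yield (i + di) * n + (j + dj)

def nullity(m, n):
    # matrix of the map x -> -x + sum_{y ~ x} y, then exact Gaussian elimination
    N = m * n
    M = [[Fraction(0)] * N for _ in range(N)]
    for i in range(m):
        for j in range(n):
            r = i * n + j
            M[r][r] = Fraction(-1)
            for c in neighbours(i, j, m, n):
                M[r][c] = Fraction(1)
    rank, row = 0, 0
    for col in range(N):
        piv = next((r for r in range(row, N) if M[r][col] != 0), None)
        if piv is None:
            continue
        M[row], M[piv] = M[piv], M[row]
        inv = 1 / M[row][col]
        M[row] = [inv * v for v in M[row]]
        for r in range(N):
            if r != row and M[r][col] != 0:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[row])]
        row += 1
        rank += 1
    return N - rank
```

For example, the nullity is positive for the $5\times 5$ and $3\times 5$ boards and zero for the $4\times 4$ board, as the criteria predict.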

As some of you have already figured out, the more general question to ask is whether it is possible to play the same game on a general graph. This is the question we asked in the first paper and tried to partially answer in a recent preprint where we studied the problem on trees.

Among other things, what we found was an algorithm to test whether a given finite tree satisfies the neighbour sum property. Very briefly, for a given tree and for a fixed (root) vertex $v_0$, we set up the following indexing scheme.

(Figure: the indexing scheme on the tree rooted at $v_0$, labelling the vertices at depth $l$ by indices $r_1,\: r_2,\: \ldots,\: r_l$.)
For notational clarity we will use $\bar{r}_l$ to denote a general indexing $r_1,\: r_2,\: \ldots,\:r_l$.

Finally, for a finite tree with a root vertex $v_0$ such that the longest path starting at $v_0$ is of length $k$, we also define
$$S_{1,k}= S_{1,k}(v_0) := \sum_{r_1}\left(1-\sum_{r_{2}}\left(1-\ldots\sum_{r_{k-1}}\left(1-\sum_{r_k} 1\right)^{-1}\ldots\right)^{-1}\right)^{-1}$$
and generally
$$S_{j,k}(\bar{r}_{j-1}) = S_{j,k}(\bar{r}_{j-1}, v_0) := \sum_{r_j}\left(1-\sum_{r_{j+1}}\left(1-\ldots\sum_{r_{k-1}}\left(1-\sum_{r_k} 1\right)^{-1}\ldots\right)^{-1}\right)^{-1}$$
for $1<j\leq k$.

One of our main theorems is that for a finite tree $\mathcal{T}=(\mathcal{V},\:\mathcal{E})$, there exists a labelling $f$ for which the pair $(\mathcal{T}, f)$ satisfies the neighbour sum property if and only if there is at least one vertex $v_0\in \mathcal{V}$ such that $S_{1,k}(v_0)=1$.
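A sketch of this test in code, under two assumptions of mine: each index $r_j$ ranges over the children of the vertex chosen at level $j-1$, and poles of the nested expression are handled projectively, with the convention $1/(1-\infty)=0$. Exact rational arithmetic avoids rounding issues:

```python
from fractions import Fraction

INF = object()  # sentinel for a pole of the nested expression

def branch_value(adj, v, parent):
    # value of the nested sum below v: sum over children c of 1/(1 - f(c)),
    # where leaves contribute f = 0; assumes the projective rule 1/(1 - INF) = 0
    total = Fraction(0)
    for c in adj[v]:
        if c == parent:
            continue
        fc = branch_value(adj, c, v)
        if fc is INF:
            continue            # 1/(1 - infinity) = 0 contributes nothing
        if fc == 1:
            return INF          # 1/(1 - 1): the whole sum blows up
        total += Fraction(1) / (1 - fc)
    return total

def has_neighbour_sum(adj):
    # the criterion: some root v0 must give S_{1,k}(v0) = 1
    return any(branch_value(adj, v, None) == 1 for v in adj)
```

This reproduces the small cases of the count below: among small trees, the paths $P_2$ and $P_5$ satisfy the property, while $P_3$, $P_4$ and the star on $4$ vertices do not.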

Using this algorithm, we then wrote code counting the trees on $n$ vertices that satisfy the neighbour sum property, and the output was as follows
$$0,1,0,0,1,2,2,6,14,29,63,166,405,977,2481,6530,16757,43534, 115700,308527, 820856, 2205628, 5971743$$
which we have not been able to explain fully. However, we were able to prove that the set of unlabelled trees on $n$ vertices has a positive density subset all of whose elements satisfy the neighbour sum property.

Finally, we also tackled this problem on infinite trees and found a couple of nice results. To know more, the interested reader is invited to take a look at the two papers mentioned above.

Infinitely many Proofs of Pythagoras' Theorem

This post is inspired by an MSE question and the brilliant answers that it received. However, it can be understood independently. Everything here is joint work with Satvik Saha.

Consider the following diagram made of 16 right triangles of sides $a\le b<c$ and four squares of side $b-a$. This being a tiling of the outer square (of side length $2c$), we have
$$(2c)^2 = 4(b-a)^2 + 16\times \frac 12 ab$$
hence implying
$$c^2 = a^2 + b^2$$
thus proving Pythagoras' Theorem.

Outer square of side $2c$
This is similar to previously known proofs (e.g. see proofs #3 and #116 here), but it is hard to say whether this exact tiling has appeared anywhere before.

Now, consider the following diagram made up of right triangles of sides $a\le b<c$, (pink) squares of side $a$, (yellow) squares of side $b$, and (green) squares of side $b-a$. The reader is welcome to check that this also gives a completely valid proof of Pythagoras' Theorem.

Outer square of side $3c$
Some of the readers might have started asking the right question. And the answer is yes, it can be generalised! With the outer square of side $nc$, we get $4(n-2)$ each of the pink and yellow squares, $n^2-4n+8$ of the green squares, and $4(n^2-4n+8)$ of the triangles. This gives
$$n^2 c^2 = 4(n-2) (a^2 + b^2) + (n^2-4n+8) (a-b)^2 + 2(n^2-4n+8) ab$$
which simplifies to $n^2 c^2 = n^2(a^2+b^2)$, hence proving Pythagoras' Theorem. For your viewing pleasure, here's the $16\times 16$ big square. Feel free to print it out and put it on your wall.
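The algebra behind the counting identity can be confirmed mechanically: the right-hand side always equals $n^2(a^2+b^2)$. A quick sanity check (the function name is mine):

```python
def tiling_rhs(n, a, b):
    # 4(n-2)(a^2+b^2) + (n^2-4n+8)(a-b)^2 + 2(n^2-4n+8)ab
    return (4 * (n - 2) * (a * a + b * b)
            + (n * n - 4 * n + 8) * (a - b) ** 2
            + 2 * (n * n - 4 * n + 8) * a * b)

# the identity tiling_rhs(n, a, b) == n^2 (a^2 + b^2) holds for all n, a, b
```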

Outer square of side $16c$

Of course, the keen ones among you would ask whether this is truly a tiling for any values of $a$ and $b$, or a coincidence of the values chosen in the diagrams. To clear this doubt, here's an animated version of the diagrams. Notice that the animation also shows that it doesn't matter whether you call the larger side of the triangle $a$ or $b$.

Outer square of side $6c$
The keener ones would of course ask for a proof that this is indeed a tiling. For that, one only needs to note that the diagram is made of four "spiky triangles", each consisting of a layer of blue triangles, followed by a layer of yellow squares and a layer of pink squares, followed by a number of further layers of blue triangles. Such a piece is almost a triangle, except for the two spikes on two of its sides. One only needs to convince oneself that these spiky triangles fit together properly and leave a space in the middle which is filled exactly by the green squares.

Feel free to play with the animation here (p5js code here). Also, feel free to check out my answer in the aforementioned MSE post.

Worst Way to Calculate Area of a Triangle

A useful technique from Complex Analysis helps us to identify whether a real number $u$ is positive or negative by checking a discontinuous integral. We have
$$\frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty} e^{su}\;\frac{\text{d}s}{s} = \begin{cases}0 &\text{ if } u<0\\ \frac 12 &\text{ if } u=0\\ 1 &\text{ if } u>0\end{cases}$$
for $c>0$.

We will apply this to calculate the area of the triangle
$$T(N) := \left\{(x,y)\in \mathbb R_{\ge 0}^2 : ax+by\le N\right\}$$
for coprime positive integers $a$ and $b$, i.e., $(a,b)=1$. This area is given by
\begin{align*}&\int_{x,y\ge 0} \mathbf{1}_{N-ax-by\ge 0} \;\text{d}x\;\text{d}y\\=&\int_{x,y\ge 0} \frac 1{2\pi i} \int_{s=c-i\infty}^{c+i\infty} e^{s(N-ax-by)}\;\frac{\text{d}s}s \;\text{d}x\;\text{d}y\\=&\frac 1{2\pi i} \int_{s=c-i\infty}^{c+i\infty} \left(\int_{x\ge 0} e^{-asx}\;\text{d}x\right) \left(\int_{y\ge 0} e^{-bsy}\;\text{d}y\right) \frac{e^{sN}}{s}\;\text{d}s\\=&\frac{1}{ab}\cdot \frac{1}{2\pi i} \int_{s=c-i\infty}^{c+i\infty} \frac{e^{sN}}{s^3}\;\text{d}s\end{align*}
by the above-mentioned trick. Notice that the boundary, where $N-ax-by=0$, has measure zero, and hence its contribution (where the discontinuous integral equals $\frac 12$) was ignored.

Now, moving the contour far to the left we pick up the residue from the pole of order $3$ at $s=0$ and hence the area comes out to be
$$\frac{1}{ab}\cdot \frac{N^2}{2} = \frac{N^2}{2ab}$$
which matches the area calculated by other known methods!
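The contour integral itself can be approximated numerically to watch the answer emerge. A rough sketch (the parameter choices $c$, $T$ and the step count are mine) that integrates along the truncated vertical line $\mathrm{Re}(s)=c$ by the midpoint rule:

```python
import cmath
import math

def area_via_contour(N, a, b, c=0.5, T=100.0, steps=100_000):
    # midpoint-rule approximation of (1/ab) * (1/(2*pi*i)) * int e^{sN}/s^3 ds
    # along s = c + it for -T <= t <= T; the imaginary parts cancel by symmetry
    dt = 2 * T / steps
    total = 0.0
    for k in range(steps):
        s = complex(c, -T + (k + 0.5) * dt)
        total += (cmath.exp(s * N) / s ** 3).real
    return total * dt / (2 * math.pi) / (a * b)
```

For $a=2$, $b=3$, $N=1$ this lands close to $N^2/(2ab)=1/12$.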

Acknowledgement : I learnt this from lectures by Prof. Andrew Granville for the course Distribution of Prime Numbers.

Introduction to Sumsets II

This is a sequel to a previous post. However, one can technically read it without reading the previous post.

Given $A,B\subseteq Z$ for a commutative ring $Z$, we define
$$A+B := \{a+b : a\in A, b\in B\}$$
to be the sumset of $A$ and $B$.

Similarly,
$$A-B := \{a-b : a\in A, b\in B\}$$
$$kA := \underbrace{A+\dots +A}_{k \text{ times}}$$
$$b+A := \{b\}+A$$
for $A,B\subseteq Z$.

Theorem 1 : We have
$$\left\vert A-C\right\vert \left\vert B\right\vert \le \left\vert A-B\right\vert \left\vert B-C\right\vert$$
where $A$, $B$ and $C$ are finite subsets of $\mathbb R$.

Proof : For all $d\in A-C$, we fix the representation $d=a_d-c_d$ where $a_d\in A$ and $c_d\in C$. Now, we define
\begin{align*} &f:(A-C)\times B \to (A-B)\times (B-C)\\ &(d,b)\mapsto (a_d-b, b-c_d) \end{align*}
Note that if $f(x,y)=(u,v)$, then we must have $x=u+v$ and $y$ is determined by the equations $y=a_x-u$ and $y=c_x+v$. So, an image determines its unique inverse, that is, $f$ is injective.

This completes the proof.

Remark : Theorem 1 can be interpreted as a triangle inequality for the Ruzsa distance
$$\Delta(A,B)=\log\frac{\left\vert A-B\right\vert}{\sqrt{\left\vert A\right\vert\left\vert B\right\vert}}$$
although $\Delta$ is not a genuine metric, as $\Delta(A,A)\neq 0$ in general.
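Theorem 1 is easy to stress-test on random sets of integers; a throwaway check (names mine):

```python
import random

def diffset(X, Y):
    # the difference set X - Y
    return {x - y for x in X for y in Y}

random.seed(1)
for _ in range(300):
    A, B, C = ({random.randrange(40) for _ in range(random.randrange(1, 9))}
               for _ in range(3))
    # |A - C| |B| <= |A - B| |B - C|
    assert len(diffset(A, C)) * len(B) <= len(diffset(A, B)) * len(diffset(B, C))
```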

Putting $C=A$ and then $B=-A$ in Theorem 1 we have

Corollary 2 : For all finite $A,B\subset \mathbb R$, we have
\begin{align*} &\left\vert A-A\right\vert \le \frac{\left\vert A-B\right\vert^2}{\left\vert B\right\vert}\\ &\left\vert A-A\right\vert \le \frac{\left\vert A+A\right\vert^2}{\left\vert A\right\vert} \end{align*}

The last inequality basically says that if $A$ has small doubling, then $A-A$ is also small. Now, we want to show that $A-A$ being small also implies that $A+A$ is. For that, we proceed with

Lemma 3 : Let $A,B\subset \mathbb R$ be finite. If there is a $c>0$ so that
$$\left\vert A+B\right\vert =c\left\vert A\right\vert \qquad \text{and}\qquad \left\vert A^\prime +B\right\vert \ge c\left\vert A^\prime\right\vert\;\;\;\forall A^\prime \subset A,$$
then, we have
$$\left\vert A+B+C\right\vert \le c\left\vert A+C\right\vert$$
for any finite $C\subset \mathbb R$.

Proof : We use induction on the size of $C$. For $\left\vert C\right\vert =1$, i.e., $C=\{x\}$, we have
$$\left\vert A+B+x\right\vert =\left\vert A+B\right\vert =c\left\vert A\right\vert =c\left\vert A+x\right\vert$$
and hence the statement is trivial.

Now, let us assume that the statement is true for a non-empty set $C$, and define $C^\prime := C\cup \{x\}$ for some $x\notin C$. We define
$$A^\prime := \{a\in A : a+x\in A+C\} = A\cap (A+C-x)\subset A$$
so that
$$\left\vert A+C^\prime\right\vert =\left\vert A+C\right\vert +\left\vert A\right\vert -\left\vert A^\prime\right\vert$$
since $A^\prime +x=(A+x)\cap (A+C)$ and $A+C^\prime =(A+C)\cup (A+x)$.

Now, $A^\prime +x\subset A+C \implies A^\prime +B+x\subset A+B+C$ so that
$$A+B+C^\prime = (A+B+C)\cup (A+B+x) = (A+B+C)\cup \left [(A+B+x)\setminus (A^\prime +B+x)\right ]$$
and
$$\left\vert A+B+C^\prime\right\vert \le \left\vert A+B+C\right\vert + \left\vert A+B\right\vert - \left\vert A^\prime +B\right\vert$$
and hence, using the Induction Hypothesis, we have
$$\left\vert A+B+C^\prime\right\vert \le c\left\vert A+C\right\vert +c\left\vert A\right\vert -c\left\vert A^\prime\right\vert =c\left\vert A+C^\prime\right\vert$$
hence completing the proof.

This lemma will help us to prove the two following important consequences.

Theorem 4 : We have
$$\left\vert A+C\right\vert \left\vert B\right\vert \le \left\vert A+B\right\vert \left\vert B+C\right\vert$$
for all $A,B,C\subset \mathbb R$.

Proof : Let $U\subset B$ be non-empty such that $\frac{\left\vert U+A\right\vert}{\left\vert U\right\vert}=c$ is minimal. This implies
$$\left\vert U+A+C\right\vert \le c\left\vert U+C\right\vert$$
using the previous lemma.

Again, since $B$ itself is considered when the minimum is selected, we have $\left\vert A+C\right\vert \le \left\vert U+A+C\right\vert$, $\left\vert U+C\right\vert \le \left\vert B+C\right\vert$ and $c\le \frac{\left\vert A+B\right\vert}{\left\vert B\right\vert}$. These four inequalities complete the proof.

Corollary 5 : We have
\begin{align*} &\left\vert A+A\right\vert \le \frac{\left\vert A+B\right\vert^2}{\left\vert B\right\vert}\\ &\left\vert A+A\right\vert \le \frac{\left\vert A-A\right\vert^2}{\left\vert A\right\vert} \end{align*}
for all $A, B$.

Proof : Put $C=A$ and then $B=-A$ in the previous theorem.

Corollary 6 : We have
\begin{align*} &\left\vert A-B\right\vert \le \frac{\left\vert A+B\right\vert^3}{\left\vert A\right\vert \left\vert B\right\vert}\\ &\left\vert A+B\right\vert \le \frac{\left\vert A-B\right\vert^3}{\left\vert A\right\vert \left\vert B\right\vert} \end{align*}
for all $A,B$.

Proof : Use the previous corollary in Theorem 1, and then replace $B$ by $-B$.

Corollary 7 : Let $A,B\subset \mathbb R$ be finite. If we have
$$\left\vert A+B\right\vert =c\left\vert A\right\vert \qquad \text{and}\qquad \left\vert A^\prime +B\right\vert \ge c\left\vert A^\prime\right\vert\;\;\;\forall A^\prime\subset A$$
for some $c>0$, then
$$\left\vert A+hB\right\vert \le c^h \left\vert A\right\vert$$
for all positive integers $h$.

Proof : Set $C=(h-1)B$ in Lemma 3 to get $\left\vert A+hB\right\vert \le c\left\vert A+(h-1)B\right\vert$ and use induction.

Theorem 8 (Plünnecke-Ruzsa Inequality) : Let $B\subset \mathbb R$ be finite satisfying $\left\vert B+B\right\vert \le c\left\vert B\right\vert$ (or $\left\vert B-B\right\vert \le c\left\vert B\right\vert$). Then
$$\left\vert mB-nB\right\vert \le c^{m+n}\left\vert B\right\vert$$
for all $m,n\in \mathbb N$.

Proof : Let the non-empty set $A\subset B$ (or $A\subset -B$) be one of the sets that minimizes $\frac{\left\vert A+B\right\vert}{\left\vert A\right\vert}$, so that $\left\vert A+B\right\vert =c^\prime \left\vert A\right\vert$. Obviously $c^\prime\le c$. Also, by minimality, $\left\vert A^\prime +B\right\vert \ge c^\prime \left\vert A^\prime\right\vert$ for all $A^\prime \subset A$, and hence the conditions of Corollary 7 are satisfied. This gives
$$\left\vert hB\right\vert \le \left\vert A+hB\right\vert \le (c^\prime)^h \left\vert A\right\vert \le c^h\left\vert B\right\vert$$
hence proving the statement for the case $m=0$ or $n=0$.

Now, if $m,n\neq 0$, we use Theorem 1 and set $A\mapsto mB$, $B\mapsto -A$ and $C\mapsto nB$ to get
$$\left\vert mB-nB\right\vert \left\vert A\right\vert \le \left\vert A+mB\right\vert \left\vert A+nB\right\vert \le (c^\prime)^m\left\vert A\right\vert (c^\prime)^n \left\vert A\right\vert \le c^{m+n}\left\vert B\right\vert \left\vert A\right\vert$$
hence completing the proof.
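The inequality is also cheap to sanity-check on random sets, using the integer form $\left\vert mB-nB\right\vert \left\vert B\right\vert^{m+n-1} \le \left\vert B+B\right\vert^{m+n}$ obtained by substituting $c=\left\vert B+B\right\vert/\left\vert B\right\vert$ (names mine):

```python
import random

def sumset(X, Y):
    return {x + y for x in X for y in Y}

def iterated(B, h):
    # the h-fold sumset hB, with 0B = {0}
    S = {0}
    for _ in range(h):
        S = sumset(S, B)
    return S

random.seed(2)
for _ in range(50):
    B = {random.randrange(30) for _ in range(random.randrange(2, 7))}
    doubling = len(sumset(B, B))
    for m in range(1, 3):
        for n in range(1, 3):
            D = {p - q for p in iterated(B, m) for q in iterated(B, n)}
            # |mB - nB| |B|^{m+n-1} <= |B+B|^{m+n}
            assert len(D) * len(B) ** (m + n - 1) <= doubling ** (m + n)
```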

Introduction to Sumsets

After finding a question on Math Stack Exchange, I decided to take a large chunk out of my MS thesis and present it.

Given $A,B\subseteq Z$ for a commutative ring $Z$, we define
$$A+B := \{a+b : a\in A, b\in B\}$$
to be the sumset of $A$ and $B$.

Similarly,
$$A-B := \{a-b : a\in A, b\in B\}$$
$$kA := \underbrace{A+\dots +A}_{k \text{ times}}$$
$$b+A := \{b\}+A$$
for $A,B\subseteq Z$.

Theorem 1 : If $A$ and $B$ are two finite subsets of $\mathbb R$, then
$$\vert A\vert + \vert B\vert -1 \le \vert A+B\vert \le \vert A\vert \vert B\vert$$
and the lower bound is sharp iff $A$ and $B$ are arithmetic progressions of same difference.

Proof : The upper bound is obvious, since every element of $A+B$ is of the form $a+b$ and there are at most $\vert A\vert \vert B\vert$ such sums. It is attained when all these sums are distinct; for example,
$$A=\{0,1,\dots ,r-1\},\;\; B=r\cdot\{0,1,\dots ,s-1\}$$
gives
$$A+B=\{0,1,\dots , rs-1\}$$
which has exactly $rs$ elements.
  
Denoting $A=\{a_1<a_2<\dots <a_r\}$ and $B=\{b_1<b_2<\dots <b_s\}$, and noting that the following chain of inequalities
$$a_1+b_1<a_1+b_2<a_1+b_3<\dots <a_1+b_s<a_2+b_s<\dots <a_r+b_s$$
contains $r+s-1$ different elements of $A+B$ immediately completes the proof of the lower bound.
  
Now, let us assume that the lower bound is tight. Then, the following chain of inequalities
$$a_1+b_1<a_2+b_1<a_2+b_2<a_2+b_3<\dots <a_2+b_s<a_3+b_s<\dots <a_r+b_s$$
and the previous one must be the same.

The first and the last $r-1$ terms are clearly equal, the other $s-1$ terms determine the equations
\begin{align*} a_1+b_2&=a_2+b_1\\ a_1+b_3&=a_2+b_2\\ &.\\ &.\\ &.\\ a_1+b_s&=a_2+b_{s-1} \end{align*}
which on rearranging gives
\begin{align*} b_2-b_1&=a_2-a_1\\ b_3-b_2&=a_2-a_1\\ &.\\ &.\\ &.\\ b_s-b_{s-1}&=a_2-a_1 \end{align*}
and hence $B$ is an arithmetic progression with difference $a_2-a_1$. Since the roles of $A$ and $B$ are symmetric in the statement, $A$ must also be an arithmetic progression with the same difference.
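Both bounds, and the equality case, can be poked at with a few lines of throwaway code (names mine):

```python
import random

def sumset(X, Y):
    return {x + y for x in X for y in Y}

random.seed(3)
for _ in range(300):
    A = {random.randrange(50) for _ in range(random.randrange(1, 8))}
    B = {random.randrange(50) for _ in range(random.randrange(1, 8))}
    S = sumset(A, B)
    # |A| + |B| - 1 <= |A + B| <= |A| |B|
    assert len(A) + len(B) - 1 <= len(S) <= len(A) * len(B)

# the lower bound is attained by arithmetic progressions of the same difference
A = {5 + 3 * i for i in range(4)}
B = {1 + 3 * j for j in range(6)}
assert len(sumset(A, B)) == len(A) + len(B) - 1
```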

In fact, a more general statement is true.

Theorem 2 : If $A_1,A_2,\dots ,A_k$, $k\ge 2$ are finite subsets of $\mathbb R$, then
$$\vert A_1\vert + \vert A_2\vert +\dots +\vert A_k\vert -k+1 \le \vert A_1+A_2+\dots +A_k\vert \le \vert A_1\vert \vert A_2\vert\dots \vert A_k\vert$$
and the lower bound is sharp iff $A_1,A_2,\dots ,A_k$ are arithmetic progressions of same difference.

Proof : We proceed by Induction.

The case $k=2$ is precisely the last theorem.

Now, let us assume that the statement is true for $k-1$.

Let $A=A_1+A_2+\dots +A_{k-1}$ and $B=A_k$ so that $A+B=A_1+A_2+\dots +A_{k}$. Using the previous theorem and the Induction Hypothesis, we get
\begin{align*} \vert A_1\vert + \vert A_2\vert +\dots +\vert A_{k-1}\vert -k+2 +\vert A_k\vert -1 &\le \vert A\vert + \vert B\vert -1\\ &\le \vert A+B\vert\\ &\le \vert A\vert \vert B\vert\\ &\le \vert A_1\vert \vert A_2\vert\dots \vert A_k\vert \end{align*}
and hence the proof follows.
  
Now, we just notice that the left inequality can be an equality only if
$$\vert A_1\vert + \vert A_2\vert +\dots +\vert A_{k-1}\vert -k+2 = \vert A_1+A_2+\dots +A_{k-1} \vert = \vert A \vert$$
and
$$\vert A\vert + \vert B\vert -1 = \vert A+B\vert.$$

The first relation implies that $A_1,\dots ,A_{k-1}$ and hence $A_1+\dots +A_{k-1}=A$ are arithmetic progressions of same difference, and the second relation implies that $B$ is also an arithmetic progression of same difference. This completes the proof.


Corollary 3 : Let $A$ be a finite subset of $\mathbb R$. Then, for an integer $k\ge 2$, we have
$$k\vert A \vert -k+1\le \vert kA \vert\le \vert A \vert^k$$
and the lower bound is sharp iff $A$ is an arithmetic progression.


However, it should be noted that the upper bound in the last corollary is not sharp. The sharp upper bound is given by the following theorem.

Theorem 4 : Let $A$ be a finite subset of $\mathbb R$. Then,
$$\vert kA \vert\le \binom{\vert A \vert +k-1}{k}$$
for any integer $k\ge 1$.

Proof : Every element of $kA$ is a sum of $k$ elements of $A$, with repetition allowed. Hence $\vert kA \vert$ is at most the number of unordered $k$-tuples from a set of cardinality $\vert A \vert$, which is the number of non-negative integer solutions of the equation
$$x_1+\dots +x_{\vert A \vert}=k$$
namely $\binom{\vert A \vert+k-1}{k}$.

This upper bound is indeed sharp; for instance, a set attaining it for $k=2$ is called a Sidon set.
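Both the bound and its sharpness are easy to confirm computationally, since $kA$ is exactly the set of multiset sums (names mine):

```python
import random
from itertools import combinations_with_replacement
from math import comb

def kfold(A, k):
    # kA as the set of sums of k elements of A, repetition allowed
    return {sum(t) for t in combinations_with_replacement(A, k)}

random.seed(4)
for _ in range(100):
    A = {random.randrange(40) for _ in range(random.randrange(1, 7))}
    for k in range(1, 4):
        assert len(kfold(A, k)) <= comb(len(A) + k - 1, k)

# a Sidon set attains the bound for k = 2
A = {1, 2, 4, 8}
assert len(kfold(A, 2)) == comb(len(A) + 1, 2)
```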


The moral of Corollary 3 is that if $A+A$ is smallest possible, then $A$ has a well defined structure, namely it is an arithmetic progression. We want to understand whether $A+A$ being small, say $\le C\left\vert A \right\vert$ still implies some structure in $A$. To do that, we investigate some examples now.

Clearly, if $A$ is a set of $r$ members of an arithmetic progression of length $2r$, then $A+A$ is part of an arithmetic progression of length $4r-1$ and hence $\left\vert A+A \right\vert \le 4r-1$.

Now, considering $A = \{0,1,\dots, r-2, N\}$ for some $N > r-2$, we see that
$$A+A = \{0,1,\dots, 2r-4\} \cup \{N,N+1,\dots,N+r-2\} \cup \{2N\}$$
and so, if $r-2 < N \le 2r-4$, then $\left\vert A+A \right\vert = N + r \le 3r - 4$, and if $N > 2r - 4$, then $\left\vert A+A \right\vert = 3r - 3$. The first one is an example of having $r$ members of an arithmetic progression of size $N + 1$ ($\le 2r - 3$) such that $\left\vert A+A \right\vert \le 3r - 4$. And the second one is an example of the same where $\left\vert A+A \right\vert = 3r - 3$ but $A$ cannot be covered with any arithmetic progression of reasonable size.

And next, we consider
$$Q = \{a_0 + a_1n_1 + \dots + a_dn_d : 0 \le n_j \le N_j,\ j=1,\dots,d\}$$
where $a_0, a_1, \dots, a_d$ are fixed, and $N_1, \dots, N_d$ are integers $\ge 2$. This is a generalized arithmetic progression of dimension $\operatorname{dim}(Q) = d$ and volume $\operatorname{vol}(Q) = N_1 \dots N_d$.

Obviously, $\left\vert Q \right\vert \le \operatorname{vol}(Q)$, and equality holds only if all the elements are distinct, in which case we call $Q$ proper.

It was Gregori Freiman in the 1960s who recognized that if $A + A$ is small, say $\le C\left\vert A \right\vert$, then $A$ still has a special structure, though not as strong a one. Freiman's theorem is one of the deepest results in the theory of sumsets, and it is obligatory to state it in any discussion of Additive Combinatorics.

Theorem 5 (Freiman, Ruzsa) : If $A$ is a set of positive integers and $C \ge 2$ such that $\left\vert A + A \right\vert \le C \left\vert A \right\vert$, then there is a generalized AP, $Q$ with $A \subset Q$, $\operatorname{vol}(Q) \le F(C) \left\vert A \right\vert$ and $\operatorname{dim}(Q) \le d(C)$ for some functions $F(C)$ and $d(C)$.

A Surprising Proof of an Inequality

Problem : Prove that for real numbers $x_1,\dots ,x_n$,
$$\sum_{i=1}^n \sum_{j=1}^n \left\lvert x_i - x_j\right\rvert^r \le \sum_{i=1}^n \sum_{j=1}^n \left\lvert x_i + x_j\right\rvert^r$$
for any $r\in [0,2]$.

Solution : The cases $r=0,2$ can be checked by hand. So, we assume $r\in (0,2)$ and begin by noting that the integral
$$\mathcal I (\alpha) = \int_{0}^\infty \frac{1-\cos \left(\alpha x\right)}{x^{r+1}}\;\text{d}x$$
is positive and finite: near $0$ the integrand behaves like $x^{1-r}$, and at infinity it is $O(x^{-r-1})$. Also, it is easy to prove using the substitution $y=|\alpha|x$ that $\mathcal I(\alpha)=|\alpha|^{r}\cdot \mathcal I(1)$.

Now, using this, we have
\begin{align*} |a+b|^r-|a-b|^r &= \frac{1}{\mathcal I(1)} \int_0^\infty \frac{\cos \left((a-b)x\right) - \cos \left((a+b)x\right)}{x^{r+1}} \;\text{d}x\\ &= \frac{1}{\mathcal I(1)} \int_0^\infty \frac{2 \sin (ax) \sin (bx)}{x^{r+1}} \;\text{d}x \end{align*}
and hence
$$\sum_{i=1}^n \sum_{j=1}^n \left\lvert x_i + x_j\right\rvert^r - \sum_{i=1}^n \sum_{j=1}^n \left\lvert x_i - x_j\right\rvert^r = \frac{2}{\mathcal I(1)} \int_0^\infty \frac{\left(\sum_{i=1}^n \sin (x_i x)\right)^2}{x^{r+1}} \;\text{d}x \ge 0$$
hence completing the proof.
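As a sanity check, the inequality survives a brute-force test on random inputs (names and tolerance mine):

```python
import random

def pair_sums(xs, r):
    # returns (sum of |xi - xj|^r, sum of |xi + xj|^r) over all pairs (i, j)
    lhs = sum(abs(a - b) ** r for a in xs for b in xs)
    rhs = sum(abs(a + b) ** r for a in xs for b in xs)
    return lhs, rhs

random.seed(5)
for _ in range(200):
    xs = [random.uniform(-5, 5) for _ in range(random.randrange(1, 6))]
    r = random.uniform(0, 2)
    lhs, rhs = pair_sums(xs, r)
    assert lhs <= rhs + 1e-9
```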

Prokhorov’s Theorem

The aim is to give a proof of Prokhorov’s Theorem.

Theorem : Let $\{\mu_n : n\ge 1\}$ be a sequence of probability measures on $\left(\mathbb R^d, \mathcal B\left(\mathbb R^d\right)\right)$. If $\{\mu_n : n\ge 1\}$ is tight, then it has a subsequence that converges weakly to some probability measure.

We will use the following (well known) fact in our proof.
Fact : Let $\{\mu_n : n\ge 1\}$ be a sequence of probability measures on $\left(\mathbb R^d, \mathcal B\left(\mathbb R^d\right)\right)$. Then, weak convergence $\mu_n\xrightarrow[]{d} \mu_0$ is equivalent to $\int g \;\text{d}\mu_n \to \int g \;\text{d}\mu_0$ for all compactly supported continuous functions $g$.

With this at hand, we give the proof of the theorem.

Proof of Theorem : Let $K^N = [-N,N]^d$. Then, $\mu_n$ induces a subprobability $\nu_n^N := \mu_n|_{K^N}$ on $K^N$. By weak$^\ast$ compactness, we can find a subsequence $\nu_{n_k}^N$ such that $\nu_{n_k}^N \xrightarrow[]{w^\ast} \nu^N$ for a subprobability measure $\nu^N$ on $K^N$. Repeating the argument starting with this subsequence on $K^{N+1}$ gives a new limit $\nu^{N+1}$. Repeating this procedure countably many times and passing to the diagonal subsequence (noticing that $\nu^{N+1} = \nu^N$ on $K^N$), we may define
$$\mu(A) := \lim_{N\to \infty} \nu^N \left(A\cap K^N\right)$$
for $A\in \mathcal B\left(\mathbb R^d\right)$.

Now, we will show that $\mu$ is a probability measure. From here, it follows using the mentioned fact that our subsequence converges weakly to $\mu$.

Clearly $\mu\left(\mathbb R^d\right)\le 1$ as each $\nu^N$ is a subprobability. Given $\varepsilon>0$, tightness yields $N_\varepsilon$ such that $\mu_n\left(K^{N_\varepsilon}\right)\ge 1-\varepsilon$ for all $n\ge 1$. It follows that $\mu\left(\mathbb R^d\right)\ge \mu\left(K^{N_\varepsilon}\right)\ge 1-\varepsilon$. This shows that $\mu\left(\mathbb R^d\right)=1$. It remains to prove that $\mu$ is $\sigma$-additive. Clearly $\mu$ is monotone. Let first
$$A = \bigcup_{i=1}^I A_i$$
be a disjoint union of finitely many measurable sets. Then
$$\left\lvert \mu\left(\bigcup_{i=1}^I A_i\right) - \sum_{i=1}^I \mu\left(A_i\right)\right\rvert \le \left\lvert \mu\left(\bigcup_{i=1}^I A_i\cap K^{N_\varepsilon}\right) - \sum_{i=1}^I \mu\left(A_i \cap K^{N_\varepsilon}\right)\right\rvert + \left(I+1\right)\varepsilon = \left(I+1\right)\varepsilon$$
since the restriction of $\mu$ to $K^{N_\varepsilon}$ (which is $\nu^{N_\varepsilon}$) is $\sigma$-additive. Since $\varepsilon>0$ was arbitrary, $\mu$ is finitely additive. It follows that $\mu$ is $\sigma$-superadditive, as
$$\mu(A) \ge \mu\left(\bigcup_{i=1}^I A_i\right) = \sum_{i=1}^I \mu\left(A_i\right) \to \sum_{i=1}^\infty \mu\left(A_i\right)$$
by letting $I\to \infty$.

Conversely, using again that the restriction of $\mu$ to $K^{N_\varepsilon}$ is $\sigma$-additive, we have
\begin{align*} \mu(A) &= \mu\left(A\setminus K^{N_\varepsilon}\right) + \mu\left(\bigcup_{i=1}^\infty A_i \cap K^{N_\varepsilon}\right)\\ &= \mu\left(A\setminus K^{N_\varepsilon}\right) + \sum_{i=1}^\infty \mu\left(A_i\cap K^{N_\varepsilon}\right)\\ &\le \varepsilon + \sum_{i=1}^\infty \mu\left(A_i\right) \end{align*}
so letting $\varepsilon \to 0$ gives $\sigma$-subadditivity, thus completing the proof.
