# Expanders, Property Testing and the PCP theorem

## Lecture 7: Random walks on expanders are rapidly mixing

Posted in lectures by HQN on September 19, 2008

Let $\hat{\mathbf A}$ be the normalized adjacency matrix of a $d$-regular graph $G$, let $1=\hat \lambda_1 \geq \dots \geq \hat \lambda_n$ be the eigenvalues of this matrix, and define $\hat\lambda(G) = \max\{|\hat\lambda_2|, |\hat\lambda_n|\}$.

Suppose we take a random walk on $G$ starting from an initial distribution $\pi^{(0)}$. Then, the distribution after $t$ steps is $\pi^{(t)} = \hat{\mathbf A}^t\pi^{(0)}$. Using the spectral theorem, it is not difficult to prove that, if $\hat\lambda(G) <1$ then $\lim_{t\to \infty} \pi^{(t)} = \mathbf u$, where $\mathbf u$ is the uniform distribution. The more difficult question is “how fast is the walk converging to this limit?”

To answer this question, we need a notion of distance between two distributions. The total variation distance is probably the most commonly used. Since we are working on a finite probability space, the total variation distance is half the $l_1$-distance between the two distributions. Thus, we would like a theorem bounding the rate at which $\|\pi^{(t)}-\mathbf u\|_1$ tends to zero. We proved the following.

Theorem 1 (Convergence in $l_1$) $\|\pi^{(t)}-\mathbf u\|_1 \leq \sqrt n \hat\lambda^t$

Theorem 2 (Convergence in $l_2$) $\|\pi^{(t)}-\mathbf u\|_2 \leq \hat\lambda^t$

Theorem 1 follows from Theorem 2 by the Cauchy-Schwarz inequality. To prove Theorem 2, note that

$\|\pi^{(t)}-\mathbf u\|_2 = \|\hat{\mathbf A}^t\pi^{(0)} - \mathbf u\|_2 = \|\hat{\mathbf A}^t(\pi^{(0)} - \mathbf u)\|_2 = \|\hat{\mathbf A}^t\mathbf v\|_2$

where $\mathbf v = \pi^{(0)} - \mathbf u$; the second equality holds because $\hat{\mathbf A}\mathbf u = \mathbf u$, and hence $\hat{\mathbf A}^t\mathbf u = \mathbf u$. Note that $\mathbf v$ is perpendicular to $\mathbf u$ (its entries sum to zero) and $\|\mathbf v\|_2 \leq 1$. Each time we apply $\hat{\mathbf A}$ to a vector $\mathbf y$ perpendicular to $\mathbf u$ (its first eigenvector), the length shrinks: $\|\hat{\mathbf A}\mathbf y\|_2 \leq \hat\lambda \|\mathbf y\|_2$. Hence the theorem follows.
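Both bounds are easy to check numerically. The following sketch (a toy example of my own, not from the lecture) runs the walk on a small 3-regular circulant graph, for which the eigenvalues of the adjacency matrix are known in closed form, and verifies the $l_1$ and $l_2$ bounds at every step:

```python
import math

# Toy example: a 3-regular circulant graph on n = 8 vertices, where vertex i
# is adjacent to i-1, i+1, and i+4 (mod 8).
n, d, shifts = 8, 3, [1, 7, 4]
A = [[1 if (j - i) % n in shifts else 0 for j in range(n)] for i in range(n)]

def step(pi):
    """One step of the walk: pi <- A_hat * pi, where A_hat = A / d."""
    return [sum(A[u][v] * pi[v] for v in range(n)) / d for u in range(n)]

# For a circulant graph, the adjacency eigenvalues are known in closed form:
# lambda_j = sum over shifts s of cos(2*pi*j*s / n).
eigs = [sum(math.cos(2 * math.pi * j * s / n) for s in shifts) for j in range(n)]
lam_hat = max(abs(e) for e in eigs[1:]) / d   # normalized lambda_hat(G)

pi = [1.0] + [0.0] * (n - 1)                  # walk starts at vertex 0
u = [1.0 / n] * n
for t in range(1, 31):
    pi = step(pi)
    l1 = sum(abs(p - q) for p, q in zip(pi, u))
    l2 = math.sqrt(sum((p - q) ** 2 for p, q in zip(pi, u)))
    # Theorem 2: l2 <= lam_hat**t;  Theorem 1: l1 <= sqrt(n) * lam_hat**t
    assert l2 <= lam_hat ** t + 1e-12
    assert l1 <= math.sqrt(n) * lam_hat ** t + 1e-12
print(lam_hat, l1)
```

For this graph $\hat\lambda \approx 0.8$, so the distribution visibly collapses to uniform within a few dozen steps.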

An $(n,d,\alpha)$-spectral expander is a $d$-regular graph on $n$ vertices such that $\hat \lambda(G) \leq \alpha$. If $\alpha < 1$, then an $(n,d,\alpha)$-spectral expander certainly has spectral gap bounded away from zero. Conversely, a graph whose spectral gap is bounded away from zero becomes a spectral expander once self-loops are added (the self-loops bound $\hat\lambda_n$ away from $-1$).

## Lecture 6: Cheeger-Alon-Milman Inequality

Posted in lectures by HQN on September 16, 2008

1. We showed that Gap-Max-3SAT$(d')_{1,s}$ is NP-Hard for some constants $s<1$ and $d'$, using $(n,d,1)$-edge expanders ($d$ is a constant). The proof relies on the fact that such an expander can be constructed in polynomial time. In the last lecture, we only proved that such expanders exist for all sufficiently large $n$; we have not yet discussed how to construct them in polynomial time. Efficient constructions of good expanders will probably be discussed next week.

2. We proved half of the so-called Cheeger-Alon-Milman inequality, which states that

$\frac{d-\lambda_2}{2} \leq h(G) \leq \sqrt{2d(d-\lambda_2)}$

for any $d$-regular graph $G$. Here, $\lambda_2$ is the second largest eigenvalue and $h(G)$ is the edge-isoperimetric number of the graph. We showed only that $\frac{d-\lambda_2}{2} \leq h(G)$. Briefly, the proof went as follows.

First, we “symmetrize” $h(G)$.

$\displaystyle{h(G) = \min_{|S|\leq n/2} \frac{|E(S,\bar S)|}{|S|} \geq \min_{|S|\leq n/2} \frac{|E(S,\bar S)|}{|S|\frac{2|\bar S|}{n}}} = \frac 1 2 \min_{\emptyset \neq S \subset V} \frac{|E(S,\bar S)|}{\frac 1 n |S||\bar S|}$

Define the sparsity of $G$ to be

$\displaystyle{\phi(G) = \min_{\emptyset \neq S \subset V} \frac{|E(S,\bar S)|}{\frac 1 n |S||\bar S|}}$

Then, we have just shown that $h(G) \geq \frac 1 2 \phi(G)$. It remains to show that $\phi(G) \geq d - \lambda_2$. To do so, we turn the expression for $\phi(G)$ into a more algebraic form. Assigning a value of 1 to each vertex in $S$ and 0 to each vertex in $\bar S$, we get a vector $\mathbf x \in \{0,1\}^n$. It is not difficult to see that

$\displaystyle{\phi(G) = \min_{\mathbf x \in \{0,1\}^n, \mathbf x \neq \mathbf{0,1}} \frac{\sum_{uv\in E}(x_u-x_v)^2}{\frac{1}{2n} \sum_u\sum_v (x_u-x_v)^2}}$
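This identity is easy to sanity-check by brute force. The following sketch (my own toy verification, using the 5-cycle) confirms that for every 0-1 indicator vector, the quadratic-form ratio equals the cut ratio $|E(S,\bar S)|/(\frac 1 n |S||\bar S|)$:

```python
from itertools import combinations

# Brute-force check on the 5-cycle C5 (a toy choice): for the 0-1 indicator
# vector x of a set S, the ratio of quadratic forms equals
# |E(S, S-bar)| / ((1/n) |S| |S-bar|).
n = 5
E = [(i, (i + 1) % n) for i in range(n)]  # edges of C5

for size in range(1, n):
    for S in combinations(range(n), size):
        x = [1 if v in S else 0 for v in range(n)]
        cut = sum(1 for u, v in E if x[u] != x[v])          # |E(S, S-bar)|
        num = sum((x[u] - x[v]) ** 2 for u, v in E)
        den = sum((x[u] - x[v]) ** 2
                  for u in range(n) for v in range(n)) / (2 * n)
        ratio = cut / (size * (n - size) / n)
        assert abs(num / den - ratio) < 1e-9
print("identity verified on all cuts of C5")
```

The key observations are that the numerator counts exactly the cut edges, and the double sum in the denominator counts each of the $2|S||\bar S|$ ordered mixed pairs once.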

So, $\phi(G)$ is the objective value of a certain optimization problem over zero-one vectors in $\mathbb R^n$. If we relax the zero-one condition, we get a lower bound for $\phi(G)$. Specifically,

$\displaystyle{\phi(G) \geq \min_{\mathbf x \in \mathbb R^n, \mathbf x \notin \mathrm{span}\{\mathbf 1\}} \frac{\sum_{uv\in E}(x_u-x_v)^2}{\frac{1}{2n} \sum_u\sum_v (x_u-x_v)^2}}$

Now, subtracting (or adding) the same amount from (to) each $x_u$ does not change the ratio, and hence does not change the min. We can therefore replace each vector $\mathbf x \in \mathbb R^n$ by $\mathbf z \in \mathbb R^n$ where $z_u = x_u - \frac 1 n \sum_v x_v$. This way, $\sum_u z_u = 0$, or simply $\mathbf z \perp \mathbf 1$. Consequently,

$\displaystyle{\phi(G) \geq \min_{\mathbf z \in \mathbb R^n, \mathbf z \neq \mathbf 0, \mathbf z \perp \mathbf 1} \frac{\sum_{uv\in E}(z_u-z_v)^2}{\frac{1}{2n} \sum_u\sum_v (z_u-z_v)^2}}$

Now, let $\mathbf A$ be the adjacency matrix of $G$, then it can be verified straightforwardly that

$\sum_{uv\in E}(z_u-z_v)^2 = d\mathbf z^T\mathbf z - \mathbf z^T\mathbf A\mathbf z$

Moreover, if $\mathbf z \perp \mathbf 1$, then $\frac{1}{2n} \sum_u\sum_v (z_u-z_v)^2 = \mathbf z^T\mathbf z$. Consequently,

$\displaystyle{\phi(G) \geq \min_{\mathbf z \in \mathbb R^n, \mathbf z \neq \mathbf 0, \mathbf z \perp \mathbf 1} \left( d - \frac{\mathbf z^T\mathbf A\mathbf z}{\mathbf z^T\mathbf z} \right) = d - \max_{\mathbf z \in \mathbb R^n, \mathbf z \neq \mathbf 0, \mathbf z \perp \mathbf 1} \frac{\mathbf z^T\mathbf A\mathbf z}{\mathbf z^T\mathbf z} = d-\lambda_2}$
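As a quick numeric sanity check (my own, not from the lecture), one can brute-force $\phi(G)$ on a small graph and compare it against $d - \lambda_2$. For a cycle, $\lambda_2 = 2\cos(2\pi/n)$ is known in closed form, so no eigensolver is needed:

```python
import math
from itertools import combinations

# Brute-force check of phi(G) >= d - lambda_2 on the 6-cycle C6 (a toy
# choice): the adjacency eigenvalues of the n-cycle are 2*cos(2*pi*j/n),
# so lambda_2 = 2*cos(2*pi/n).
n, d = 6, 2
E = [(i, (i + 1) % n) for i in range(n)]
lam2 = 2 * math.cos(2 * math.pi / n)

# phi(G) = min over nonempty proper S of |E(S, S-bar)| / ((1/n)|S||S-bar|)
phi = min(
    sum(1 for u, v in E if (u in S) != (v in S)) / (len(S) * (n - len(S)) / n)
    for size in range(1, n)
    for S in map(set, combinations(range(n), size))
)
assert phi >= d - lam2 - 1e-9
print(phi, d - lam2)
```

For $C_6$ the minimum is attained by a path of three consecutive vertices, giving $\phi = 4/3$, comfortably above $d - \lambda_2 = 1$.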

For a more elaborate discussion of the Cheeger-Alon-Milman inequality, see a nice series of posts by Luca Trevisan on the topic.

## Lecture 5: Expanders, Probabilistic Proof of Existence

Posted in lectures by HQN on September 14, 2008

We gave a proof showing that most regular graphs are good (edge/vertex) expanders. In terms of edge expansion, Béla Bollobás showed in this paper that, for every $d\geq 3$, most random $d$-regular graphs have edge-expansion rate (i.e., edge-isoperimetric number) at least $d/2 - c_1\sqrt d$ for some constant $c_1$ (specifically, we can set $c_1=\sqrt{\lg 2}$). On the other hand, given any $d$-regular graph $G$ of order $n$, if we select a random subset $U$ of vertices of size $\lfloor n/2 \rfloor$, then the expected number of edges crossing the $(U,\bar U)$ cut is $\frac{\frac{dn}{2}\lfloor n/2 \rfloor \lceil n/2 \rceil}{\binom n 2}.$

Since some set $U$ of size $\lfloor n/2 \rfloor$ achieves a cut no larger than this expectation, dividing by $|U|$ shows the edge-isoperimetric number is bounded above by $\frac d 2 \cdot \frac{n+1}{n-1}$. Thus, for large graphs the edge-expansion rate is roughly at most $\frac d 2$.
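The expectation above follows from linearity: each of the $dn/2$ edges crosses the cut with probability $\lfloor n/2\rfloor\lceil n/2\rceil/\binom n 2$. A quick exhaustive check (my own toy example on the 6-cycle, not from the lecture) confirms the formula:

```python
from itertools import combinations
from math import comb

# Exhaustive check on the 6-cycle C6 (a toy d-regular example): the average
# cut size over all subsets U of size floor(n/2) matches
# (dn/2) * floor(n/2) * ceil(n/2) / C(n, 2).
n, d = 6, 2
E = [(i, (i + 1) % n) for i in range(n)]
k = n // 2

cuts = [sum(1 for u, v in E if (u in U) != (v in U))
        for U in map(set, combinations(range(n), k))]
avg = sum(cuts) / len(cuts)
expected = (d * n / 2) * (n // 2) * ((n + 1) // 2) / comb(n, 2)
assert abs(avg - expected) < 1e-9
print(avg)
```

Here the average over all $\binom 6 3 = 20$ subsets comes out to $3.6$, exactly matching the formula.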

Better yet, Noga Alon proved in this paper that there is a constant $c_2$ such that the edge-isoperimetric number of any $d$-regular graph (with sufficiently large number of vertices) is at most $d/2 - c_2 \sqrt d$.

In summary, we can roughly think of edge-isoperimetric numbers of random $d$-regular graphs to be about $d/2 - \Theta(\sqrt d)$.

For small values of $d$, Bollobás’ analysis yields the following. For every sufficiently large $n$,

• there exists an $(n,3,2/11)$-edge expander
• there exists an $(n,4,11/25)$-edge expander
• there exists an $(n,5,18/25)$-edge expander
• there exists an $(n,6,26/25)$-edge expander
• there exists an $(n,9,2.06)$-edge expander

## Lecture 4: Proof that PCP Theorem is equivalent to hardness of Gap-Max-SAT

Posted in lectures by HQN on September 8, 2008

In today’s lecture, I proved that the PCP theorem is equivalent to the fact that

• Gap-MaxE3SAT$_{1,s}$ is NP-Hard for some constant $s \in (0,1)$

From this fact, it is easy to show that

• MaxE3SAT is NP-Hard to approximate to within some constant $\alpha <1$

I also outlined things that will be discussed about expanders.

## Lecture 3: Outline of Dinur’s Proof

Posted in lectures by HQN on September 5, 2008

In today’s lecture, I discussed the following:

1. A list of results regarding various PCPs, following the outline in Madhu Sudan’s lecture notes (see Section 2.2 on the history of results).
2. Mentioned that the PCP theorem is equivalent to the NP-Hardness of Gap-MaxE3SAT$_{1,s}$ for some constant $s \in (0,1)$, which is equivalent to the NP-Hardness of approximating MaxE3SAT to a certain constant ratio $\alpha<1$.
3. Defined the so-called Gap-Max-CG-$\Sigma_{1,s}$ problem, whose NP-Hardness is also equivalent to the PCP theorem.
4. Briefly showed one direction of the PCP theorem, namely $\text{PCP}[O(\log n), O(1)] \subseteq \text{NP}$.
5. Outlined Dinur’s proof of the PCP theorem.

Another article of interest is from Radhakrishnan and Sudan.

On Monday, I will prove #2 above formally, and start talking about expanders.

## Lecture 2: The two views of PCP

Posted in lectures by atri on August 29, 2008

In today’s lecture, we saw more on the prover-verifier view of PCPs and on approximation algorithms. We also briefly heard some hand-wavy reasons why we should expect codes to crop up in the proof of the PCP theorem.

Next lecture, Hung will connect these two “disparate” views and give an overview of the main steps in Dinur’s proof.

## Lecture 1: Introduction

Posted in lectures by atri on August 28, 2008

In today’s lecture, we saw two “disparate” interpretations of languages in NP: in particular, the proof-verifier definition of NP (and PCPs) and approximation algorithms for NP-complete problems (E3SAT in particular). We also saw the statement of the wonderful and mysterious PCP theorem.

Next lecture, we will wrap up some left over discussion about these two views and then go on to show that the views are actually the same.

Today’s lecture was based on these lecture notes.

Also, in class I mentioned that the PCP theorem has a very interesting history. For a very nice description, see this writeup by Ryan O’Donnell.