Expanders, Property Testing and the PCP theorem

Lecture 6: Cheeger-Alon-Milman Inequality

Posted in lectures by HQN on September 16, 2008

1. We showed that Gap-Max-3SAT(d')_{1,s} is NP-hard for some constants s<1 and d', using (n,d,1)-edge expanders (where d is a constant). The proof relies on the fact that such an expander can be constructed in polynomial time. In the last lecture we only proved that such expanders exist for all sufficiently large n; we have not yet discussed how to construct them in polynomial time. Efficient constructions of good expanders will probably be discussed next week.

2. We proved half of the so-called Cheeger-Alon-Milman inequality, which states that

\frac{d-\lambda_2}{2} \leq h(G) \leq \sqrt{2d(d-\lambda_2)}

for any d-regular graph G. Here, \lambda_2 is the second largest eigenvalue of the adjacency matrix of G, and h(G) is the edge-isoperimetric number of the graph. We proved only the first inequality, \frac{d-\lambda_2}{2} \leq h(G). Briefly, the proof went as follows.
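As a sanity check (not part of the lecture), the inequality can be verified numerically on a small example. The sketch below, assuming NumPy is available, builds the cycle C_8 (an illustrative 2-regular graph), computes \lambda_2 by eigendecomposition, brute-forces h(G) over all subsets of size at most n/2, and prints both bounds.

```python
from itertools import combinations
import math
import numpy as np

# The cycle C_8: a 2-regular graph on 8 vertices (illustrative choice).
n, d = 8, 2
A = np.zeros((n, n))
for u in range(n):
    A[u, (u + 1) % n] = A[(u + 1) % n, u] = 1
edges = [(u, v) for u in range(n) for v in range(u + 1, n) if A[u, v]]

def h(edges, n):
    """Edge-isoperimetric number: min of |E(S, S-bar)| / |S| over
    nonempty S with |S| <= n/2 (brute force, so small n only)."""
    best = math.inf
    for k in range(1, n // 2 + 1):
        for combo in combinations(range(n), k):
            S = set(combo)
            cut = sum(1 for (u, v) in edges if (u in S) != (v in S))
            best = min(best, cut / len(S))
    return best

lam2 = sorted(np.linalg.eigvalsh(A))[-2]  # second largest eigenvalue of A
hG = h(edges, n)
print((d - lam2) / 2, hG, math.sqrt(2 * d * (d - lam2)))
```

For C_8 this gives (2-\sqrt 2)/2 \approx 0.293 \leq h(G) = 1/2 \leq \sqrt{4(2-\sqrt 2)} \approx 1.531, consistent with both bounds.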

First, we “symmetrize” h(G).

\displaystyle{h(G) = \min_{|S|\leq n/2} \frac{|E(S,\bar S)|}{|S|} \geq \min_{|S|\leq n/2} \frac{|E(S,\bar S)|}{|S|\frac{2|\bar S|}{n}}} = \frac 1 2 \min_{\emptyset \neq S \subset V} \frac{|E(S,\bar S)|}{\frac 1 n |S||\bar S|}

Define the sparsity of G to be

\displaystyle{\phi(G) = \min_{\emptyset \neq S \subset V} \frac{|E(S,\bar S)|}{\frac 1 n |S||\bar S|}}

Then, we just showed that h(G) \geq \frac 1 2 \phi(G). It remains to show that \phi(G) \geq d - \lambda_2. To do so, we turn the expression for \phi(G) into a more algebraic form. Assign a value of 1 to each vertex in S and 0 to each vertex in \bar S; this gives a vector \mathbf x \in \{0,1\}^n. It is not difficult to see that

\displaystyle{\phi(G) = \min_{\mathbf x \in \{0,1\}^n, \mathbf x \neq \mathbf{0,1}} \frac{\sum_{uv\in E}(x_u-x_v)^2}{\frac{1}{2n} \sum_u\sum_v (x_u-x_v)^2}}
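To see why this identity holds: each edge crossing the cut contributes exactly 1 to the numerator, and the double sum over all ordered pairs counts each crossing pair twice, so the denominator equals \frac 1 n |S||\bar S|. A quick numerical check (a sketch assuming NumPy, again using C_8 and an arbitrary subset S):

```python
import numpy as np

# Cycle C_8 again (illustrative); x is the 0/1 indicator vector of a subset S.
n = 8
A = np.zeros((n, n))
for u in range(n):
    A[u, (u + 1) % n] = A[(u + 1) % n, u] = 1
edges = [(u, v) for u in range(n) for v in range(u + 1, n) if A[u, v]]

x = np.array([1, 1, 0, 1, 0, 0, 0, 1], dtype=float)  # an arbitrary subset S
S = {u for u in range(n) if x[u] == 1}

cut = sum(1 for (u, v) in edges if (u in S) != (v in S))  # |E(S, S-bar)|
num = sum((x[u] - x[v]) ** 2 for (u, v) in edges)
den = sum((x[u] - x[v]) ** 2 for u in range(n) for v in range(n)) / (2 * n)

assert num == cut                                    # numerator = cut size
assert abs(den - len(S) * (n - len(S)) / n) < 1e-12  # denominator = |S||S-bar|/n
```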

So, \phi(G) is the optimal value of an optimization problem over zero-one vectors in \mathbb R^n. If we relax the zero-one condition, requiring only that \mathbf x not be a constant vector (so that the denominator remains nonzero), we get a lower bound on \phi(G). Specifically,

\displaystyle{\phi(G) \geq \min_{\mathbf x \in \mathbb R^n,\ \mathbf x \text{ not constant}} \frac{\sum_{uv\in E}(x_u-x_v)^2}{\frac{1}{2n} \sum_u\sum_v (x_u-x_v)^2}}

Now, subtracting (or adding) the same amount from (to) each x_u does not change the minimum. Hence, we can replace each vector \mathbf x \in \mathbb R^n by \mathbf z \in \mathbb R^n, where z_u = x_u - \frac 1 n \sum_v x_v. This way \sum_u z_u = 0, or simply \mathbf z \perp \mathbf 1. Consequently,

\displaystyle{\phi(G) \geq \min_{\mathbf z \in \mathbb R^n, \mathbf z \neq \mathbf 0, \mathbf z \perp \mathbf 1} \frac{\sum_{uv\in E}(z_u-z_v)^2}{\frac{1}{2n} \sum_u\sum_v (z_u-z_v)^2}}

Now, let \mathbf A be the adjacency matrix of G, then it can be verified straightforwardly that

\sum_{uv\in E}(z_u-z_v)^2 = d\mathbf z^T\mathbf z - \mathbf z^T\mathbf A\mathbf z

Moreover, if \mathbf z \perp \mathbf 1, then \frac{1}{2n} \sum_u\sum_v (z_u-z_v)^2 = \mathbf z^T\mathbf z. Consequently,
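Both identities are easy to check numerically. The sketch below (assuming NumPy; the cycle C_8 is just an illustrative choice) draws a random vector, centers it so that \mathbf z \perp \mathbf 1, and verifies both equalities.

```python
import numpy as np

# Cycle C_8 (illustrative); any d-regular graph would do.
n, d = 8, 2
A = np.zeros((n, n))
for u in range(n):
    A[u, (u + 1) % n] = A[(u + 1) % n, u] = 1
edges = [(u, v) for u in range(n) for v in range(u + 1, n) if A[u, v]]

rng = np.random.default_rng(1)
z = rng.standard_normal(n)
z -= z.mean()                     # enforce z ⊥ 1 by centering

lhs = sum((z[u] - z[v]) ** 2 for (u, v) in edges)
rhs = d * (z @ z) - z @ A @ z
assert abs(lhs - rhs) < 1e-9      # sum over edges = d z^T z - z^T A z

den = sum((z[u] - z[v]) ** 2 for u in range(n) for v in range(n)) / (2 * n)
assert abs(den - z @ z) < 1e-9    # holds precisely because z ⊥ 1
```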

\displaystyle{\phi(G) \geq \min_{\mathbf z \in \mathbb R^n, \mathbf z \neq \mathbf 0, \mathbf z \perp \mathbf 1} \left( d - \frac{\mathbf z^T\mathbf A\mathbf z}{\mathbf z^T\mathbf z} \right) = d - \max_{\mathbf z \in \mathbb R^n, \mathbf z \neq \mathbf 0, \mathbf z \perp \mathbf 1} \frac{\mathbf z^T\mathbf A\mathbf z}{\mathbf z^T\mathbf z} = d-\lambda_2}

The last equality is the variational (Courant-Fischer) characterization of \lambda_2: since \mathbf 1 is an eigenvector of \mathbf A with the largest eigenvalue d, maximizing the Rayleigh quotient over \mathbf z \perp \mathbf 1 yields exactly \lambda_2.
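Putting it together, the bound \phi(G) \geq d - \lambda_2 can be confirmed on a small graph by brute-forcing the sparsity over all subsets (a sketch assuming NumPy, once more on C_8):

```python
from itertools import combinations
import numpy as np

# Cycle C_8 (illustrative); phi(G) is brute-forced over all proper subsets.
n, d = 8, 2
A = np.zeros((n, n))
for u in range(n):
    A[u, (u + 1) % n] = A[(u + 1) % n, u] = 1
edges = [(u, v) for u in range(n) for v in range(u + 1, n) if A[u, v]]

phi = min(
    sum(1 for (u, v) in edges if (u in S) != (v in S)) / (k * (n - k) / n)
    for k in range(1, n)
    for S in (set(c) for c in combinations(range(n), k))
)
lam2 = sorted(np.linalg.eigvalsh(A))[-2]
assert phi >= d - lam2 - 1e-9   # the spectral lower bound phi(G) >= d - lambda_2
```

For C_8 this gives \phi = 1 (achieved by cutting the cycle into two arcs of four vertices), comfortably above d - \lambda_2 = 2 - \sqrt 2 \approx 0.586.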

For a more elaborate discussion of the Cheeger-Alon-Milman inequality, see the nice series of posts by Luca Trevisan on the topic.
