# Expanders, Property Testing and the PCP theorem

## Lecture 8: Confining a random walk on an expander is hard

Posted in lectures by HQN on September 23, 2008

Let $G$ be an $(n,d,\alpha)$-spectral expander, $B$ be a subset of vertices of $G$ of size $\beta n$. Suppose we uniformly choose a random vertex of $G$ and walk randomly for $t$ steps.

Theorem 1 (Confining a random walk is hard). Let $(B,t)$ denote the event that the walk is confined within $B$ the entire time. Then, Prob$\left[(B,t)\right] \leq \sqrt\beta\,(\alpha + (1-\alpha)\beta)^t$. In particular, if $\alpha, \beta<1$ then $\alpha + (1-\alpha)\beta < 1$, and the confinement probability decays exponentially in $t$.
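To get a feel for the rate, here is the bound evaluated at illustrative values (not from the lecture) $\alpha = 1/4$ and $\beta = 1/2$:

```latex
% Theorem 1 with the illustrative values alpha = 1/4, beta = 1/2:
\[
  \Pr\bigl[(B,t)\bigr]
  \;\le\; \sqrt{\tfrac12}\,\Bigl(\tfrac14 + \tfrac34 \cdot \tfrac12\Bigr)^{t}
  \;=\; \tfrac{1}{\sqrt2}\,\Bigl(\tfrac58\Bigr)^{t}.
\]
```

After $t = 30$ steps, for instance, the walk stays inside $B$ with probability below $10^{-6}$.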

Proof. Let $\mathbf P = (p_{ij})$ be the “projection into $B$” matrix, i.e. $p_{ii} = 1, \forall i \in B$ and $p_{ij}=0$ for all other $i,j$. Noting that $\mathbf P$ is idempotent, it is not difficult to see that

Prob$\left[(B,t)\right] = \|(\mathbf{P\hat A})^t\mathbf{Pu}\|_1 = \|(\mathbf{P\hat AP})^t\mathbf{Pu}\|_1 \leq \sqrt n \|(\mathbf{P\hat AP})^t\mathbf{Pu}\|_2$

where $\mathbf u$ is the uniform distribution on the vertices and $\mathbf{\hat A}$ is the normalized adjacency matrix of $G$. (The second equality uses idempotence of $\mathbf P$, and the last inequality is Cauchy-Schwarz.)

Thus, to bound Prob$\left[(B,t)\right]$ we need to know how much the matrix $\mathbf{M = P\hat AP}$ shrinks a vector with each multiplication. Since $\mathbf M$ is symmetric, the same trick we used in the last lecture gives, for any non-zero vector $\mathbf y$,

$\frac{\|\mathbf{My}\|}{\|\mathbf y\|} \leq \max_{\mathbf z\neq \mathbf 0} \frac{|\mathbf z^T\mathbf{Mz}|}{\mathbf z^T\mathbf z}$

(Below we bound $\mathbf z^T\mathbf{Mz}$ from above; the same computation also gives $\mathbf z^T\mathbf{Mz} \geq -\alpha\|\mathbf z\|^2$, so the absolute value obeys the same bound.)

Now, consider any non-zero vector $\mathbf z$. Let $\mathbf{x = Pz}$. Then, $\mathbf z^T\mathbf{Mz} = \mathbf z^T\mathbf{P\hat APz} = \langle \mathbf{\hat Ax, x} \rangle$. First, express $\mathbf x$ as a linear combination of the orthonormal eigenvectors of $\mathbf{\hat A}$:

$\mathbf x = c_1\mathbf u_1 + c_2 \mathbf u_2 + \cdots + c_n \mathbf u_n$

Let $\mathbf x^{\|} = c_1\mathbf u_1$ and $\mathbf x^{\perp} = \mathbf{x-x^{\|}}$. Then,

$\langle \mathbf{\hat Ax, x} \rangle = \langle \mathbf{\hat A x^{\|}} + \mathbf{\hat A x^{\perp}}, \mathbf x^{\|} + \mathbf x^{\perp} \rangle = \|\mathbf x^{\|}\|^2 + \langle \mathbf{\hat A x^\perp, x^\perp} \rangle,$

since $\mathbf{\hat A}\mathbf x^{\|} = \mathbf x^{\|}$ (the eigenvalue of $\mathbf u_1$ is $1$) and the two cross terms vanish by orthogonality.

The usual trick gives

$\langle \mathbf{\hat A x^\perp, x^\perp} \rangle = \|\mathbf x^\perp\|^2 \frac{\langle \mathbf{\hat A x^\perp, x^\perp} \rangle}{\|\mathbf x^\perp\|^2} \leq \hat\lambda \|\mathbf x^\perp\|^2 = \hat\lambda \left(\|\mathbf x\|^2 - \|\mathbf x^{\|}\|^2\right),$

where $\hat\lambda \leq \alpha$ is the largest absolute value of a non-trivial eigenvalue of $\mathbf{\hat A}$.

Since $\mathbf x$ is supported on the $\beta n$ coordinates of $B$ and $\mathbf u_1 = \frac{1}{\sqrt n}\mathbf 1$, Cauchy-Schwarz gives

$\|\mathbf x^{\|}\|^2 = c_1^2 = \langle \mathbf x, \mathbf u_1\rangle^2 = \frac{1}{n}\Big(\sum_{i\in B} x_i\Big)^2 \leq \frac{|B|}{n}\|\mathbf x\|^2 = \beta \|\mathbf x\|^2$

and we conclude that

$\langle \mathbf{\hat Ax, x} \rangle \leq \hat\lambda\|\mathbf x\|^2 + (1-\hat\lambda)\|\mathbf x^{\|}\|^2 \leq (\alpha + (1-\alpha)\beta)\|\mathbf x\|^2 \leq (\alpha + (1-\alpha)\beta)\|\mathbf z\|^2,$

where the last step uses $\|\mathbf x\| = \|\mathbf{Pz}\| \leq \|\mathbf z\|$.

Consequently, each application of $\mathbf M$ multiplies the length of a vector by a factor of at most $(\alpha + (1-\alpha)\beta)$. There are $t$ applications, and the initial vector $\mathbf{Pu}$ has length $\sqrt{\beta/n}$, so Prob$\left[(B,t)\right] \leq \sqrt n \cdot (\alpha + (1-\alpha)\beta)^t \sqrt{\beta/n} = \sqrt\beta\,(\alpha + (1-\alpha)\beta)^t$. The theorem follows.
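The bound can be checked numerically on a toy example. The sketch below (illustrative choices, not from the lecture) uses the complete graph $K_8$, which is an $(8,7,1/7)$-spectral expander, takes $B$ to be half the vertices, and compares the exact confinement probability $\|(\mathbf{P\hat A})^t\mathbf{Pu}\|_1$ with the bound of Theorem 1:

```python
# Numerical check of Theorem 1 on a toy example (illustrative, not from
# the lecture): K_8 is an (8, 7, 1/7)-spectral expander.
from math import sqrt

n = 8
d = n - 1
alpha = 1.0 / d                # |second eigenvalue| of K_n's normalized adjacency
B = set(range(n // 2))         # confine the walk to half of the vertices
beta = len(B) / n

# Normalized adjacency matrix A_hat of K_n: entry 1/d off the diagonal.
A_hat = [[0.0 if i == j else 1.0 / d for j in range(n)] for i in range(n)]

def mat_vec(M, v):
    """Multiply matrix M (list of rows) by vector v."""
    return [sum(row[j] * v[j] for j in range(n)) for row in M]

def project(v):
    """Apply the projection matrix P: zero out coordinates outside B."""
    return [v[i] if i in B else 0.0 for i in range(n)]

# Start from P u, where u is the uniform distribution, then apply (P A_hat)
# t times; the l1 norm of the result is exactly Prob[(B, t)].
v = project([1.0 / n] * n)
for t in range(1, 11):
    v = project(mat_vec(A_hat, v))
    prob_confined = sum(v)
    bound = sqrt(beta) * (alpha + (1 - alpha) * beta) ** t
    assert prob_confined <= bound
    print(t, prob_confined, bound)
```

Replacing $K_8$ with a better expander (smaller $\alpha$) tightens the bound and speeds the decay correspondingly.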

Theorem 2 (Saving random bits for RP). Recall that a language $L$ is in the class RP if there exists a polynomial-time randomized algorithm $A$ satisfying the following conditions:

• $x \in L \Rightarrow$ Prob$(A$ accepts $x) \geq 1/2$
• $x \notin L \Rightarrow$ Prob$(A$ accepts $x) = 0$

Suppose $A$ uses $r$ random bits. To reduce the error probability to $1/2^k$, the easy way is to run the algorithm $k$ times independently and accept the input if at least one run accepts. However, this approach requires $kr$ random bits. Instead, impose a (strongly explicit) $(n,d,\alpha)$-spectral expander on the space of random strings of length $r$ (so $n = 2^r$): pick a uniformly random vertex, take a random walk of $t = O(k)$ steps, run $A$ with each string visited, and accept if any run accepts. If $x \in L$, the set $B$ of random strings on which $A$ rejects has density $\beta \leq 1/2$, so by Theorem 1 the probability that every run rejects is at most $\sqrt\beta\,(\alpha + (1-\alpha)/2)^t$. Since $(\alpha + (1-\alpha)1/2)<1$, this is at most $1/2^k$ for $t = O(k)$. The total number of random bits used is only $r + O(k)\log d$, which is $r + O(k)$ for constant degree $d$. A strongly explicit expander is one for which, given a vertex, there is a poly-time algorithm computing its neighboring vertices.
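The amplification procedure can be sketched as follows, treating the RP algorithm and the expander as black boxes: `run_A(x, rand)` and `neighbors(v)` are hypothetical interfaces (not from the lecture), where `neighbors(v)` returns the $d$ neighbors of the $r$-bit string $v$ in the strongly explicit expander.

```python
# Sketch of expander-walk amplification for RP; `run_A` and `neighbors`
# are assumed, hypothetical interfaces, not from the lecture.
import random

def amplify(x, run_A, neighbors, r, t):
    """Run A on each string visited by a length-t random walk on the
    expander whose vertices are the r-bit random strings; accept if any
    run accepts.  Uses r bits for the start plus about log2(d) bits per
    step, i.e. r + O(t log d) random bits in total."""
    v = random.getrandbits(r)              # r truly random bits for the start
    if run_A(x, v):
        return True                        # one-sided error: accepts are reliable
    for _ in range(t):
        v = random.choice(neighbors(v))    # one walk step costs log2(d) bits
        if run_A(x, v):
            return True
    return False
```

If $x \in L$, the walk must stay confined to the rejection set $B$ for every run of $A$ to reject, so by Theorem 1 taking $t = O(k)$ drives the error below $2^{-k}$; if $x \notin L$, no run ever accepts.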