Expanders, Property Testing and the PCP theorem

Presentation guidelines

Posted in presentations, spr09 by atri on April 6, 2009

I wrote a guidelines/advice post for the presentations in my coding theory course. Please follow those guidelines when preparing your talk for the seminar.

Hardness of approximation — Lecture 3 & 4

Posted in lectures by HQN on March 27, 2009

We proved

Theorem (Hastad 1997)

NP = PCP_{1-\epsilon, 1/2+\delta}[O(\log n), 3] for any given \epsilon, \delta > 0.

The outline of the proof is as follows. (This exact outline will be used at least one more time, starting from a slightly different version of LabelCover.)

  1. We start from the NP-hard problem Gap-Max-LabelCover_\Sigma(1,\tau), and design a 3-bit PCP verifier for it (with logarithmic randomness).
  2. The verifier expects labels to be encoded with the (binary) long code, which is a map LC: \Sigma \to \{0,1\}^{2^m}, where |\Sigma|=m. Each 01-vector of length 2^m can be viewed as the truth table of a function f: \{0,1\}^m \to \{0,1\}. Thus, the long code LC(a) of a symbol a is also one such function; specifically, it is the dictator function LC(a)(x_1,\dots,x_m) = x_a (see the sketch after this list). The verifier then chooses an edge of the graph at random, picks 3 bits from the (supposed) long codes of the two labels, and performs a simple linear test on those bits.
  3. The completeness of the verifier is straightforward.
  4. For soundness, we prove the contrapositive: if the test passes with high probability, then there is a labelling satisfying more than a \tau-fraction of the edges of the LabelCover instance. To show that such a labelling exists, we use the probabilistic method to choose a random labelling based on the Fourier coefficients of the functions representing (and perhaps pretending to be) long codes.
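
To make the long code in step 2 concrete, here is a minimal Python sketch (the function name and the 0-indexing are mine, for illustration only):

    from itertools import product

    def long_code(a, m):
        """Truth table of the dictator function f(x_0,...,x_{m-1}) = x_a,
        written out as a 0/1 vector of length 2^m."""
        return [x[a] for x in product((0, 1), repeat=m)]

    # Example: long_code(1, 2) == [0, 1, 0, 1], the table of f(x0, x1) = x1.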

The long code was introduced by Bellare, Goldreich, and Sudan; theirs is an excellent expository paper on many of the ideas we have discussed and will discuss. For Fourier analysis of boolean functions, O’Donnell’s tutorial at STOC is a good starting point.

Hardness of approximation — Lecture 2

Posted in lectures by HQN on March 27, 2009

Part of the Lecture 1 blog post was actually presented in Lecture 2. The main theme of Lecture 2 was the following:

  • We showed that the PCP theorem is equivalent to the NP-hardness of several gap problems, Gap-Max-E3SAT and Gap-LabelCover in particular. The previous post showed that Gap-Max-E3SAT is NP-hard. Showing that Gap-Max-LabelCover(1,\rho) is NP-hard for some constant \rho is not difficult: put all variables on the left and all clauses on the right, and connect a variable and a clause if the variable appears in the clause. The labels for a clause are 001, 010, …, 111, corresponding to the combinations of its literals which satisfy the clause; the labels for a variable are 001 or 010, which “stand for” TRUE or FALSE; finally, the constraint on an edge “projects” the clause’s label onto the variable’s truth assignment. (A sketch of this reduction appears after this list.)
  • The above reduction yields bipartite graphs which are 3-regular on the right (each clause contains exactly three variables), but possibly irregular on the left, since each variable can appear in an arbitrary number of clauses. For our purposes, we also want left-regular bipartite instances, which can easily be arranged by reducing from Gap-Max-E3SAT(d) instead. Check Luca Trevisan’s survey for a proof that Gap-Max-E3SAT(d) is NP-hard for some constant d. (Vazirani’s book also contains a proof with d=29, I think.) The proof involves a very nice (but standard) application of expanders.
  • A natural PCP verifier for the Gap-LabelCover problem can be viewed as the verifier of a 2-player 1-round game (2P1R).
  • Then, Raz’s Parallel Repetition theorem can be applied to exponentially reduce the soundness error of the PCP verifier for Gap-LabelCover. Since this result will be used to construct Hastad’s 3-bit PCP, we formally state it here.
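
Here is the sketch promised in the first bullet: a rough Python rendition of the E3SAT-to-LabelCover reduction. Names are illustrative, and for simplicity variable labels are plain bits 0/1 rather than the 001/010 encoding above.

    from itertools import product

    def e3sat_to_labelcover(clauses):
        """clauses: list of 3-tuples of nonzero ints, DIMACS style (-v = NOT v).
        Returns edges (variable, clause_index, projection): the constraint on an
        edge is that the variable's label equals projection[clause's label]."""
        edges = []
        for j, clause in enumerate(clauses):
            # The clause's 7 labels: assignments to its variables satisfying it.
            labels = [bits for bits in product((0, 1), repeat=3)
                      if any((b == 1) == (lit > 0) for b, lit in zip(bits, clause))]
            for pos, lit in enumerate(clause):
                projection = {bits: bits[pos] for bits in labels}
                edges.append((abs(lit), j, projection))
        return edges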

Theorem (Raz’s Gap-LabelCover):

Given any \tau > 0, there exists an alphabet \Sigma with size |\Sigma| = poly(1/\tau) for which Gap-LabelCover(1,\tau) is NP-hard. Moreover, bipartite graph instances of this Gap-LabelCover problem can be assumed to be d_1-regular on the left and d_2-regular on the right where d_1,d_2 are constants. Furthermore, the constraint h_{u,v} for every edge (u,v) of the graph satisfies the projection property, i.e. it checks if L(u) = h_{u,v}(L(v)), where L(u), L(v) are the labels for u, v respectively.

Hardness of approximation – Lecture 1

Posted in lectures, spr09 by HQN on March 16, 2009

The second half of this semester is devoted to proving hardness of approximation. For example, we will show that it is {\mathop{\mathbf{NP}}}-hard to approximate MAX-3SAT to within any constant better than {7/8} (of the optimal). In their FOCS 97 paper, Karloff and Zwick showed how to use SDP to design a {7/8}-approximation algorithm. Thus the above hardness result is essentially optimal.

I am typing this lecture to test Luca Trevisan’s latex2wp converter (thanks, Luca!). I probably won’t have the time to type lectures any more this semester. Here’s a brief outline of what I will be talking about in the next 7 lectures. I hope I can finish them on time:

Lecture 1: gap-producing reduction from PCP.

  • How do we show that an optimization problem is {\mathop{\mathbf{NP}}}-hard to approximate to within some ratio? Answer: design a gap-producing reduction from an {\mathop{\mathbf{NP}}}-hard problem, which is equivalent to showing that the corresponding gap-version of the problem is {\mathop{\mathbf{NP}}}-hard.
  • How do we design such a gap-producing reduction? There are two basic strategies:

    • Start from a problem which already has a gap, i.e. an {\mathop{\mathbf{NP}}}-hard gap-version of some problem. Then, the reduction has to be “gap-preserving” somehow. We will not discuss this strategy in Lecture 1. We will see many more examples along this line later.
    • Use the PCP theorem. In particular, use the PCP verifier for some/any {\mathop{\mathbf{NP}}}-complete problem as a subroutine in the gap-producing reduction. I already gave one example of this last semester, and will restate it below. The FGLSS reduction will be the main example this time.

Lectures 2 + 3: gap-amplification.

  • The “reduction from PCP” strategy may not produce a very good gap. To prove strong hardness results, we need to “amplify” the gap.
  • There are several ways of doing gap-amplification:

    • Repeat the verifier independently many times, at the expense of query and random bits (see the display after this list)
    • Use expanders! (still too many query bits)
    • Use parallel-repetition and then alphabet reduction (somehow). We will discuss Hastad’s 3-bit PCP, its analysis, and some consequences.
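
For the record, the cost of the first strategy: repeating an {(r,q)}-verifier that has perfect completeness and soundness error {s} independently {k} times, accepting iff all runs accept, gives

\displaystyle  \mathop{\mathbf{PCP}}_{1,s}[r,q] \subseteq \mathop{\mathbf{PCP}}_{1,s^k}[kr, kq],

so the soundness error drops exponentially in {k} while the randomness and query complexities grow linearly in {k}.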

Lectures 4 + 5: unique games conjecture (UGC).

  • UGC is a conjecture regarding the {\mathop{\mathbf{NP}}}-hardness of a certain gap problem. Using it, we can design nice gap-producing reductions.
  • There’ll be quite a bit of Fourier analysis of boolean functions: the Majority Is Stablest theorem, and hardness of approximating MAX-CUT.

Lectures 6 + 7: gap-preserving reductions + time filler.

1. How do we show that a problem is {\mathop{\mathbf{NP}}}-hard to approximate to within a certain ratio {\rho}?

To be concrete, take MAX-3SAT as an example. The general strategy is:

  • start from an {\mathop{\mathbf{NP}}}-complete problem {\Pi}
  • let {\mathop{\mathsf{opt}}(I)} denote the optimal cost of an instance {I} of MAX-3SAT; design a polynomial-time (Karp/Cook) reduction {f: \Pi \rightarrow} MAX-3SAT such that, given any input {x} to {\Pi},

    • if {x} is a YES-instance of {\Pi}, then {\mathop{\mathsf{opt}}(f(x)) \geq g(|f(x)|)} for some function {g}
    • if {x} is a NO-instance of {\Pi}, then {\mathop{\mathsf{opt}}(f(x)) < \rho \cdot g(|f(x)|)}

Such a reduction is called a gap-producing reduction. A typical {\mathop{\mathbf{NP}}}-hardness reduction is too weak to produce any “good” gap (for example, with {\rho=7/8} for MAX-3SAT). Here, we use {|y|} to denote the length of an input {y} to the problem at hand (MAX-3SAT in this case).

Let {c,s: {\mathbb N} \rightarrow {\mathbb R}^+} be any two functions. Let Gap-MAX-3SAT{(c,s)} be the (decision) problem of distinguishing between

  • instances {\varphi} of MAX-3SAT for which {\mathop{\mathsf{opt}}(\varphi) \geq c(|\varphi|)}, and
  • instances {\varphi} of MAX-3SAT for which {\mathop{\mathsf{opt}}(\varphi) < s(|\varphi|)}

Proposition 1 The existence of a reduction as described above is equivalent to the fact that Gap-MAX-3SAT{(g,\rho\cdot g)} is {\mathop{\mathbf{NP}}}-hard.

Proposition 2 If Gap-MAX-3SAT{(c,s)} is {\mathop{\mathbf{NP}}}-hard then MAX-3SAT is {\mathop{\mathbf{NP}}}-hard to approximate to within {s/c}.

Proof: Suppose there is an approximation algorithm {A} with ratio {s/c}; namely, for any input {\varphi}, we always have {A(\varphi) \geq (s/c) \cdot \mathop{\mathsf{opt}}(\varphi)}. (Here, {A(\varphi)} is the number of clauses satisfied by the assignment that {A} returns.)

If {\mathop{\mathsf{opt}}(\varphi) \geq c}, then certainly {A(\varphi) \geq s}. If {\mathop{\mathsf{opt}}(\varphi) < s}, then {A(\varphi) \leq \mathop{\mathsf{opt}}(\varphi) < s}. Thus, we can use {A} to decide in polynomial time if {\varphi} is a YES- or a NO-instance of the gap problem, a contradiction to the fact that it is {\mathop{\mathbf{NP}}}-hard. \Box
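
The proof is really a two-line algorithm. Here is a sketch in Python, assuming we are handed an approximation routine A with ratio {s/c} (names are illustrative; s stands for {s(|\varphi|)}):

    def decide_gap(phi, A, s):
        """Decide Gap-MAX-3SAT(c, s), given an approximation algorithm A
        of ratio s/c for MAX-3SAT."""
        # YES-instance: opt(phi) >= c, so A(phi) >= (s/c) * c = s.
        # NO-instance:  opt(phi) <  s, so A(phi) <= opt(phi) <  s.
        return A(phi) >= s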

Certainly, the above line of reasoning is not limited to MAX-3SAT. We could have replaced MAX-3SAT by MAX-{\Pi} for any problem {\Pi}, and Gap-MAX-3SAT by Gap-MAX-{\Pi}. It is also convenient to normalize the objective function of {\Pi} so that the cost is between {0} and {1}, giving {0 < s < c \leq 1}. For example, for MAX-3SAT we can define the objective function to be the fraction of clauses of an input formula {\varphi} that are satisfied. Last but not least, the same line of reasoning works for MIN-{\Pi} and Gap-Min-{\Pi} too! I’ll leave the technical details to you.

2. How do we design a gap-producing reduction?

Equivalently, how do we prove that a gap problem is {\mathop{\mathbf{NP}}}-hard? As we have mentioned, the typical {\mathop{\mathbf{NP}}}-hardness reduction is, in most cases, too weak for this purpose. Fortunately, the PCP theorem gives us precisely one such reduction. Moreover, this PCP “technology” is sufficiently strong that many other gap-producing reductions can be built on top of it.

Note that it is somewhat misleading to talk about the PCP theorem. There are many PCP theorems, each with different parameters. Different PCP theorems give us different starting points for designing gap-producing reductions. When people say the PCP theorem, they mean the following theorem:

Theorem 3 (The PCP Theorem) {\mathop{\mathbf{NP}} = \mathop{\mathbf{PCP}}[O(\log n), O(1)]}

We will prove other PCP theorems in the next few weeks. To illustrate the PCP “technology”, we first show that it is actually equivalent to the hardness of some gap problem.

Theorem 4 The PCP theorem is equivalent to the fact that, there is some constant {\rho<1} for which Gap-MAX-E3SAT{(1,\rho)} is {\mathop{\mathbf{NP}}}-hard.

Proof: Let’s assume the PCP theorem first; we will produce a reduction from an arbitrary {\mathop{\mathbf{NP}}}-complete language {L} to Gap-MAX-E3SAT{(1,\rho)}. The reduction works by constructing in polynomial time an E3-CNF formula {\varphi_x} with {m} clauses, given an input {x}. The construction satisfies the following properties, for some constant {\rho<1}:

\displaystyle  \begin{array}{rcl}  x \in L & \Longrightarrow & \mathop{\mathsf{opt}}(\varphi_x) = 1 \\ x \notin L & \Longrightarrow & \mathop{\mathsf{opt}}(\varphi_x) < \rho.  \end{array}

By the PCP theorem, there is some {(r, q)}-restricted verifier {V} recognizing {L}, where {r = O(\log n)} and {q} is a fixed constant. We will use {V} to construct {\varphi_x} for each input string {x}. In other words, {V} is a sub-routine in the gap-producing reduction we are designing.

Note that when {V} is adaptive, the length of the proof does not need to be more than {2^r2^q}. When {V} is non-adaptive, the proof’s length does not need to be more than {q2^r}. In both cases, {V} only needs polynomial-size proofs. Let {p=2^{r+q} \geq q2^r} be the upper bound on proof sizes.

Construct {\varphi_x} as follows. Create {p} variables {x_1, \dots, x_p}, so that each truth assignment to these variables corresponds to a proof presented to {V}. For each random string {R} of length {r}, there are some combinations of the answers to {V}‘s queries that make {V} accept. We can model this fact by a CNF formula {\psi_R} on {\{x_1,\dots,x_p\}} such that {\psi_R(\mathbf x) = {\tt TRUE}} iff {V} accepts the proof {\mathbf x} given the random string {R}. The formula {\psi_R} can be constructed in polynomial time by simulating {V} on the random string {R} and generating all possible combinations of answers. Since {q} is a constant, there are only constantly ({2^q}) many answer combinations. By adding a few auxiliary variables, we can convert {\psi_R} into E3-CNF form. Originally {\psi_R} has {\leq 2^q} clauses. Each clause gives rise to at most {q} size-{3} clauses. Hence, after the E3-CNF conversion {\psi_R} has at most {q2^q} clauses.

Finally, let {\varphi_x = \bigwedge_{R} \psi_R}; then {\varphi_x} can be constructed in polynomial time since there are only polynomially many random strings {R}. (This is why the randomness of {O(\log n)} is crucial!) Let {m} be the total number of {3}-CNF clauses of {\varphi_x}; then {m \leq 2^{r(|x|)}q2^q = {\tt poly}(|x|) \cdot q2^q}.
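
Putting the last two paragraphs together, here is a rough Python sketch of the construction of {\varphi_x}, assuming a hypothetical interface simulate(x, R, answers) -> (queries, accepts) for a non-adaptive verifier {V}; the E3-CNF conversion with auxiliary variables is omitted.

    from itertools import product

    def build_phi(x, r, q, simulate):
        """Return phi_x as a list of width-<=q clauses over proof positions;
        a clause is a list of (position, required_bit) pairs."""
        clauses = []
        for R in product((0, 1), repeat=r):          # 2^r random strings: poly(n)
            for ans in product((0, 1), repeat=q):    # 2^q answer combinations
                queries, accepts = simulate(x, R, ans)
                if not accepts:
                    # psi_R must rule out this rejecting combination:
                    # at least one queried proof bit has to differ from it.
                    clauses.append([(pos, 1 - a) for pos, a in zip(queries, ans)])
        return clauses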

  • When {x\in L}, there is a proof {\pi} (a truth assignment) such that {V} always accepts. Hence, {\varphi_x} is satisfied under this assignment.

  • When {x \notin L}, take any truth assignment to {x_1,\dots,x_p} and feed it as a proof {\pi} (with {\pi_i = x_i} for all {i}) to {V}. In this case, {V} accepts with probability {< 1/2}. Hence, more than half of the {\psi_R} are not satisfied by this assignment, and each unsatisfied {\psi_R} contains at least one unsatisfied clause. The number of unsatisfied clauses is thus more than {\frac 1 2 2^{r(|x|)}}. Consequently, setting {\rho = (1-\frac{1}{2q2^q})} we have

    \displaystyle  \mathop{\mathsf{opt}}(\varphi_x) < \frac 1 m \left(m - \frac 1 2 2^{r(|x|)}\right) \leq 1 - \frac{1}{2q2^q} = \rho.

Conversely, assume Gap-MAX-E3SAT{(1,\rho)} is {\mathop{\mathbf{NP}}}-hard for some constant {\rho<1}. Let us prove the PCP theorem. The fact that {\mathop{\mathbf{PCP}}[O(\log n), O(1)] \subseteq \mathop{\mathbf{NP}}} is easy. We show {\mathop{\mathbf{NP}} \subseteq \mathop{\mathbf{PCP}}[O(\log n), O(1)]} by designing an {(r,q)}-verifier {V} for some {\mathop{\mathbf{NP}}}-complete language {L}, with {r=O(\log n)} and {q=O(1)}.

Since Gap-MAX-E3SAT{(1,\rho)} is {\mathop{\mathbf{NP}}}-hard, there’s a poly-time reduction from {L} to Gap-MAX-E3SAT{(1,\rho)}. Consider any input string {x}. Use the assumed reduction to construct {\varphi_x}. The strategy for {V} is to pick a constant number {k} of clauses of {\varphi_x} at random, ask the prover for the values of (at most {3k}) variables in these clauses, and accept iff all the clauses are satisfied. Clearly {V} has perfect completeness. When {x\notin L}, at most {\rho m} clauses are satisfied. Hence, the probability that {V} accepts is at most

\displaystyle  \frac{\binom{\rho m}{k}}{\binom{m}{k}} = \frac{(\rho m)(\rho m-1)\dots(\rho m-k+1)} {m(m-1)\dots (m-k+1)} < \rho^k \leq 1/2

when {k \geq \ln 2/ \ln(1/\rho)}. Since {m = {\tt poly}(|x|)}, the number of random bits {V} uses is {O(\lg m) = O(\lg |x|)}, and the number of query bits needed is at most {3\ln 2/\ln (1/\rho)}, which is a constant. \Box
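
A sketch of this verifier in Python (illustrative interface: phi is a list of 3-clauses over signed integers, DIMACS style, and proof maps each variable to a boolean):

    import math
    import random

    def gap_verifier(phi, proof, rho, rng=random):
        """Accept iff k randomly chosen clauses of phi are all satisfied,
        with k >= ln 2 / ln(1/rho) so that rho^k <= 1/2."""
        k = math.ceil(math.log(2) / math.log(1 / rho))
        for clause in rng.sample(phi, k):            # k clauses, no replacement
            if not any(proof[abs(lit)] == (lit > 0) for lit in clause):
                return False
        return True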

3. Max-Clique and the FGLSS Reduction

We give another example of a gap-producing reduction using a PCP verifier as a sub-routine.

The PCP connection refers to the use of a PCP characterization of {\mathop{\mathbf{NP}}} to show hardness results for optimization problems. This connection was first noticed via a reduction from interactive proofs to Max-Clique in the pioneering work of Feige, Goldwasser, Lovász, Safra, and Szegedy. Since then, the reduction is referred to as the FGLSS reduction.

Consider an {(r,q)}-restricted verifier {V} for a language {L \in \mathop{\mathbf{PCP}}_{c,s}[r, q]}. On input {x}, a transcript is a tuple {T = \langle R,Q_1,a_1,\dots,Q_q,a_q \rangle}, where {R} is a random string with {|R|=r}, and the {Q_i} and {a_i} are the queries and corresponding answers that {V} made and received, in that order, given the random string. {T} is an accepting transcript if {V} accepts {x} after seeing the answers.

Two transcripts {T = \langle R,Q_1,a_1,\dots,Q_q,a_q \rangle} and {T' = \langle R',Q'_1,a'_1,\dots,Q'_q,a'_q \rangle} are consistent with each other if {Q_i=Q'_j \Rightarrow a_i=a'_j \ \forall i,j}, i.e. if for the same questions we get the same answers.

Given an input {x}, for which {V} tries to verify whether {x \in L} or not, we will construct a graph {G_x} in polynomial time such that

\displaystyle  \begin{array}{rcl}  x \in L & \Rightarrow & \mathop{\mathsf{opt}}(G_x) \geq \frac{c}{2^q}|V_x| \\ x \notin L & \Rightarrow & \mathop{\mathsf{opt}}(G_x) \leq \frac{s}{2^q}|V_x|. \end{array}

Let {G_x =(V_x,E_x)}, where {V_x} represents all accepting transcripts of {V} on {x} and {E_x} consists of edges connecting consistent pairs of transcripts. It follows that {|V_x| \leq 2^{r+q}}. We can add dummy vertices so that {|V_x| = 2^{r+q}}.

Note that the first query {V} asks is determined by {x} and {R}. Then, knowing the first answer, the second query is determined, and so on. Thus, the queries in a transcript are in fact redundant for the encoding of transcripts. Also note that the vertices of {G_x} sharing the same random string {R} form an independent set: two such accepting transcripts must disagree on some common query, so they are inconsistent.
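
A small Python sketch of the construction (transcripts are encoded as dicts from query to answer, which suffices by the redundancy remark above; names are illustrative):

    from itertools import combinations

    def fglss_graph(accepting_transcripts):
        """Vertices are (indices of) accepting transcripts; edges join
        consistent pairs, i.e. pairs agreeing on all shared queries."""
        def consistent(t1, t2):
            return all(t1[Q] == t2[Q] for Q in t1.keys() & t2.keys())
        n = len(accepting_transcripts)
        edges = [(i, j) for i, j in combinations(range(n), 2)
                 if consistent(accepting_transcripts[i], accepting_transcripts[j])]
        return list(range(n)), edges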

If {x \in L}, then there is some proof {\pi} such that {\mathop{\mathbf{Prob}}[V^{\pi}(x) \ {\tt accepts}] \geq c}. Consider the set of all accepting transcripts whose answers come from {\pi}; all these transcripts are consistent with each other. In other words, they form a clique. The fact that {\mathop{\mathbf{Prob}}[V^{\pi}(x) \ {\tt accepts}] \geq c} implies that the clique size is at least {c2^r}. Hence,

\displaystyle  \mathop{\mathsf{opt}}(G_x) \geq c2^r = \frac{c}{2^q}|V_x|.

Conversely, from a clique of {G_x} of size {k}, say, we can construct a proof {\pi} for which {V^{\pi}} accepts with probability at least {k/2^r}. The proof is constructed by taking the union of the answers of the transcripts from the clique, adding dummy answers for positions not covered by any transcript in the clique. Consequently, when {x \notin L} there cannot be a clique of size more than {s2^r}, otherwise there would be a proof {\pi} for which {V^\pi} accepts with probability more than {s}. Hence, in this case

\displaystyle  \mathop{\mathsf{opt}}(G_x) \leq s2^r = \frac{s}{2^q}|V_x|.

Remark: The FGLSS reduction runs in time {poly(|x|) \cdot poly(2^{r+q})}.

Lemma 5 If {\mathop{\mathbf{NP}} \subseteq \mathop{\mathbf{PCP}}_{c,s}[r,q]}, and if {2^{r+q} = {\tt poly}(n)}, then Max-Clique is hard to approximate to within {\frac s c+\epsilon} for any {\epsilon>0}.

Theorem 6 It is {\mathop{\mathbf{NP}}}-hard to approximate Max-Clique to within any constant {\rho > \frac 1 2}. (Apply Lemma 5 with {c=1} and {s=1/2}, as given by the PCP theorem.)

Next time, we will see how to “amplify” the gap to prove stronger inapproximability results for Max-Clique.

Theory Seminar Talks

Posted in announcements, presentations, spr09 by atri on February 6, 2009

We have two confirmed talks in the theory seminar for this semester. The first one is on March 2nd and the next one is on May 4th. For the latter, please use the comments section to let me know what times work for you to attend the talk (it is finals week).

Due to the March 2nd theory seminar, the first four student presentation dates have been moved up. I have updated the dates in the presentation schedule accordingly.

Spr09 1st presentation schedule

Posted in presentations, spr09 by atri on February 3, 2009

Here is the schedule of the first set of presentations:

  1. Thanh (Feb 16): Lecture notes on Parallel repetition from Venkat and Ryan’s course.
  2. Steve (Feb 18): Ben-Sasson, Sudan: Short PCPs with polylog query complexity.
  3. Nathan (Feb 23): Continues with the lecture notes on Parallel repetition from Venkat and Ryan’s course.
  4. Swapnoneel (Feb 25)
  5. Yang (Mar 4)

Please let us know once you have chosen your paper so that we can make a note of it above. For your reference here is a link to the suggested list of papers.

List of PCP-related papers

Posted in announcements, presentations, spr09 by atri on January 14, 2009

The list of suggested papers for the first set of presentations is now up on the webpage. Some of them come with caveats, so read them carefully. These papers are probably much, much harder than any you presented last semester, so we highly encourage you to start early on picking and preparing your presentations.

Welcome to part II!

Posted in lectures, spr09 by atri on January 13, 2009

In today’s lecture, Hung and I did a quick recap of what we covered last semester. On Wednesday, we will start with Dinur’s proof of the PCP theorem.

I finally put up the summaries of your talks that you sent on the blog. Sorry for the delay.

Student Presentation #8

Posted in presentations by atri on January 13, 2009

(Guest post by Swapnoneel Roy)

I presented the paper titled Bounds on 2-Query Codeword Testing by Eli Ben-Sasson, Oded Goldreich, and Madhu Sudan. In the paper, the authors study 2-query codeword testers. The main results in the paper are upper bounds on the size of linear (respectively, binary) codes that admit such testers (respectively, such testers with perfect completeness).

In other words, it was shown that if C \subseteq F^n is a (2, c, s)-locally testable linear code with minimum relative distance \delta > 0, where c > s, then |C| \le |F|^{3/\delta}.

Student Presentation # 9

Posted in presentations by atri on January 13, 2009

(Guest post by Steve Uurtamo)

I presented the result (by Noga Alon, Eldar Fischer, Ilan Newman and Asaf Shapira) that, in the dense graph model and allowing two-sided error, the graph properties that can be tested with a constant number of queries to the adjacency matrix (constant for any fixed error distance \epsilon) correspond exactly to those that can be determined using a set of Szemerédi regularity constraints.

Three examples of such reductions are given in the paper: vertex k-colorability (testable), co-subgraph isomorphism (testable), and graph isomorphism (not testable).
