Hardness of approximation – Lecture 1
The second half of this semester is devoted to proving hardness of approximation. For example, we will show that it is NP-hard to approximate MAX-3SAT to within any constant better than $7/8$ (of the optimal). In their FOCS 97 paper, Karloff and Zwick have shown us how to use SDP to design a $7/8$-approximation algorithm. Thus the above hardness result is essentially optimal.
I am typing this lecture to test Luca Trevisan’s latex2wp converter (thanks, Luca!). I probably won’t have the time to type lectures any more this semester. Here’s a brief outline of what I will be talking about in the next 7 lectures. I hope I can finish them on time:
Lecture 1: gap-producing reduction from PCP.
- How do we show that an optimization problem is NP-hard to approximate to within some ratio? Answer: design a gap-producing reduction from an NP-hard problem, which is equivalent to showing that the corresponding gap-version of the problem is NP-hard.
- How do we design such a gap-producing reduction? There are two basic strategies:
- Start from a problem which already has a gap, i.e. an NP-hard gap-version of some problem. Then, the reduction has to be “gap-preserving” somehow. We will not discuss this strategy in Lecture 1. We will see many more examples along this line later.
- Use the PCP theorem. In particular, use the PCP verifier for some/any NP-complete problem as a subroutine in the gap-producing reduction. I have already given one example of this last semester. I will re-state the example below. The FGLSS reduction will be the main example this time.
Lectures 2 + 3: gap-amplification.
- The “reduction from PCP” strategy may not produce a very good gap. To prove strong hardness results, we need to “amplify” the gap.
- There are several ways of doing gap-amplification:
- Repeat the verifier independently many times (at the expense of query and random bits)
- Use expanders! (still too many query bits)
- Use parallel repetition and then alphabet reduction (somehow). We will discuss Håstad’s 3-bit PCP, its analysis, and some consequences.
Lectures 4 + 5: the unique games conjecture (UGC)
- UGC is a conjecture regarding the NP-hardness of a certain gap problem. Using it, we can design nice gap-producing reductions.
- There’ll be quite a bit of Fourier analysis of Boolean functions: the Majority is Stablest theorem, and the hardness of approximating MAX-CUT.
Lectures 6 + 7: gap-preserving reductions + time filler.
1. How do we show that a problem is NP-hard to approximate to within a certain ratio $\rho$?
To be concrete, take MAX-3SAT as an example. The general strategy is:
- start from an NP-complete problem $L$
- let $\mathrm{opt}(\varphi)$ denote the optimal cost of an instance $\varphi$ of MAX-3SAT; design a polynomial-time (Karp/Cook) reduction $f$ from $L$ to MAX-3SAT such that, given any input $x$ to $L$,
- if $x$ is a YES-instance of $L$, then $\mathrm{opt}(f(x)) \ge c(n)$ for some function $c$
- if $x$ is a NO-instance of $L$, then $\mathrm{opt}(f(x)) < s(n)$ for some function $s$
Such a reduction is called a gap-producing reduction. A typical NP-hardness reduction is too weak to produce any “good” gap (for example, a standard reduction to MAX-3SAT only separates $c(n) = m$ from $s(n) = m - 1$, where $m$ is the number of clauses). Here, we use $n$ to denote the length of an input to the problem at hand (MAX-3SAT in this case).
Let $c, s : \mathbb{N} \rightarrow \mathbb{N}$ be any two functions with $s(n) \le c(n)$. Let Gap-MAX-3SAT$[c, s]$ be the (decision) problem of distinguishing between
- instances $\varphi$ of MAX-3SAT for which $\mathrm{opt}(\varphi) \ge c(n)$, and
- instances $\varphi$ of MAX-3SAT for which $\mathrm{opt}(\varphi) < s(n)$.
Proposition 1 The existence of a reduction as described above is equivalent to the fact that Gap-MAX-3SAT$[c, s]$ is NP-hard.
Proposition 2 If Gap-MAX-3SAT$[c, s]$ is NP-hard, then MAX-3SAT is NP-hard to approximate to within $s(n)/c(n)$.
Proof: Suppose there is an approximation algorithm $A$ with ratio $s(n)/c(n)$; namely, for any input $\varphi$, we always have $A(\varphi) \ge \frac{s(n)}{c(n)} \mathrm{opt}(\varphi)$. (Here, $A(\varphi)$ denotes the number of satisfied clauses returned by $A$.)
If $\mathrm{opt}(\varphi) \ge c(n)$, then certainly $A(\varphi) \ge s(n)$. If $\mathrm{opt}(\varphi) < s(n)$, then $A(\varphi) \le \mathrm{opt}(\varphi) < s(n)$. Thus, we can use $A$ to decide in polynomial time if $\varphi$ is a YES- or a NO-instance of the gap problem, a contradiction to the fact that it is NP-hard.
Certainly, the above line of reasoning is not limited to MAX-3SAT. We could have replaced MAX-3SAT by MAX-$P$ for any problem $P$, and Gap-MAX-3SAT by Gap-MAX-$P$. It is also convenient to normalize the objective function of $P$ so that the cost is between $0$ and $1$, in which case $s(n) \le c(n) \le 1$. For example, for MAX-3SAT we can define the objective function to be the fraction of satisfied clauses of an input formula $\varphi$. Last but not least, the same line of reasoning works for MIN-$P$ and Gap-MIN-$P$ too! I’ll leave the technical details to you.
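The normalized objective and the decision procedure from the proof of Proposition 2 are easy to make concrete. Below is a minimal Python sketch (the encoding and all function names are my own, not from the lecture): a formula is a list of clauses over signed variable indices, the objective is the fraction of satisfied clauses, and `decide_gap` is exactly the comparison against $s$ used in the proof. Brute force stands in for the hypothetical approximation algorithm.

```python
from itertools import product

# A formula is a list of clauses; a clause is a list of nonzero ints,
# where +i means variable i and -i means its negation (variables 1..n).

def frac_satisfied(formula, assignment):
    """Normalized objective: fraction of clauses satisfied by `assignment`
    (a dict mapping variable index -> bool)."""
    sat = sum(
        any(assignment[abs(l)] == (l > 0) for l in clause)
        for clause in formula
    )
    return sat / len(formula)

def opt(formula, n):
    """Brute-force optimum of the normalized objective (exponential in n;
    stands in for an approximation algorithm in this toy sketch)."""
    return max(
        frac_satisfied(formula, dict(zip(range(1, n + 1), bits)))
        for bits in product([False, True], repeat=n)
    )

def decide_gap(approx_value, s):
    """Proposition 2's decision rule: given the value returned by an
    (s/c)-approximation algorithm, answer YES iff it is at least s."""
    return approx_value >= s

# Toy example: (x1 or x2 or x3) and (not x1 or x2 or x3)
phi = [[1, 2, 3], [-1, 2, 3]]
print(opt(phi, 3))                   # satisfiable, so the optimum is 1.0
print(decide_gap(opt(phi, 3), 0.9))  # YES-instance of the gap problem
```

An exact solver trivially decides the gap problem; the point of the proposition is that even an algorithm guaranteeing only a $s/c$ fraction of the optimum suffices.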
2. How do we design a gap-producing reduction?
Equivalently, how do we prove that a gap problem is NP-hard? As we have mentioned, the typical NP-hardness reduction is, in most cases, too weak for this purpose. Fortunately, the PCP theorem gives us precisely one such reduction. Moreover, this PCP “technology” is sufficiently strong that many gap-producing reductions can be designed based on it.
Note that it is somewhat misleading to talk about the PCP theorem. There are many PCP theorems, each with different parameters. Different PCP theorems give us different starting points for designing gap-producing reductions. When people say the PCP theorem, they mean the following theorem:
Theorem 3 (The PCP Theorem) $\text{NP} = \text{PCP}(O(\log n), O(1))$.
We will prove other PCP theorems in the next few weeks. To illustrate the PCP “technology”, we first show that it is actually equivalent to the hardness of some gap problem.
Theorem 4 The PCP theorem is equivalent to the fact that there is some constant $\rho < 1$ for which Gap-MAX-E3SAT$[1, \rho]$ is NP-hard.
Proof: Let’s assume the PCP theorem first. We will produce a reduction from an NP-complete language $L$ to Gap-MAX-E3SAT$[1, \rho]$. More concretely, consider any NP-complete language $L$. The reduction works by constructing in polynomial time an E3-CNF formula $\varphi_x$ with $m$ clauses, given an input $x$. The construction satisfies the following properties, for some constant $\rho < 1$:
- if $x \in L$, then $\varphi_x$ is satisfiable;
- if $x \notin L$, then $\mathrm{opt}(\varphi_x) \le \rho m$.
By the PCP theorem, there is some $(r, q)$-restricted verifier $V$ recognizing $L$, where $r = O(\log n)$ and $q$ is a fixed constant. We will use $V$ to construct $\varphi_x$ for each input string $x$. In other words, $V$ is a sub-routine in the gap-producing reduction we are designing.
Note that when $V$ is adaptive, the length of the proof does not need to be more than $2^r 2^q$. When $V$ is non-adaptive, the proof’s length does not need to be more than $q 2^r$. In both cases, $V$ only needs polynomial-size proofs. Let $p$ be the upper bound on proof sizes.
Construct $\varphi_x$ as follows. Create $p$ variables $x_1, \dots, x_p$, so that each truth assignment to these variables corresponds to a proof presented to $V$. For each random string $R$ of length $r$, there are some combinations of the answers to $V$’s queries that make $V$ accept. We can model this fact by a CNF formula $\varphi_R$ on the (at most $q$) queried variables such that $\varphi_R(\pi) = 1$ iff $V$ accepts the proof $\pi$ given the random string $R$. The formula $\varphi_R$ can be constructed in polynomial time by simulating $V$ on the random string $R$ and generating all possible combinations of answers. Since $q$ is a constant, there are only constantly ($\le 2^q$) many answer combinations. By adding a few auxiliary variables, we can convert $\varphi_R$ into E3-CNF form. Originally $\varphi_R$ has at most $2^q$ clauses. Each clause gives rise to at most $q$ size-$3$ clauses. Hence, after the E3-CNF conversion $\varphi_R$ has at most $q 2^q$ clauses.
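The E3-CNF conversion just described is the standard clause-splitting trick. Here is a hedged Python sketch (my own encoding: literals are nonzero integers, $+i$ for $x_i$ and $-i$ for its negation): a clause with $k > 3$ literals becomes $k - 2$ three-literal clauses chained through fresh auxiliary variables, and shorter clauses are padded with auxiliary variables in both polarities, preserving satisfiability either way.

```python
def to_e3cnf(clause, next_aux):
    """Convert one clause (list of nonzero ints, +i / -i literals) into
    an equisatisfiable list of exactly-3-literal clauses.  `next_aux` is
    the first unused variable index; returns (clauses, new_next_aux).
    A clause of k > 3 literals yields k - 2 clauses chained through
    fresh auxiliary variables, as in the E3-CNF conversion above."""
    k = len(clause)
    if k == 3:
        return [list(clause)], next_aux
    if k == 2:
        # pad with a fresh variable in both polarities
        y = next_aux
        return [clause + [y], clause + [-y]], next_aux + 1
    if k == 1:
        # pad with two fresh variables, covering all four polarities
        y, z = next_aux, next_aux + 1
        l = clause[0]
        return ([[l, y, z], [l, y, -z], [l, -y, z], [l, -y, -z]],
                next_aux + 2)
    # k > 3: (l1 v l2 v y1)(~y1 v l3 v y2)...(~y_{k-3} v l_{k-1} v lk)
    out = []
    y = next_aux
    out.append([clause[0], clause[1], y])
    for i in range(2, k - 2):
        out.append([-y, clause[i], y + 1])
        y += 1
    out.append([-y, clause[k - 2], clause[k - 1]])
    return out, y + 1

clauses, _ = to_e3cnf([1, 2, 3, 4, 5], next_aux=6)
print(clauses)  # [[1, 2, 6], [-6, 3, 7], [-7, 4, 5]]
```

To satisfy the long clause one must satisfy some literal $l_i$; setting the chain variables appropriately then satisfies every short clause, and conversely.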
Finally, let $\varphi_x = \bigwedge_R \varphi_R$; then $\varphi_x$ itself can be constructed in polynomial time since there are only polynomially many random strings $R$. (This is why the $O(\log n)$ randomness of $V$ is crucial!) Let $m$ be the total number of $3$-CNF clauses of $\varphi_x$; then $m \le 2^r \cdot q 2^q$.
- When $x \in L$, there is a proof $\pi$ (a truth assignment) such that $V$ always accepts. Hence, under this assignment every $\varphi_R$ is satisfied, i.e. $\varphi_x$ is satisfiable.
- When $x \notin L$, consider any truth assignment $\tau$ to $x_1, \dots, x_p$, and feed the corresponding string $\pi$ as a proof to $V$. In this case, $V$ only accepts with probability $< 1/2$. Hence, at least half of the $\varphi_R$ are not satisfied by $\tau$. For each $\varphi_R$ that is not satisfied, there is at least one clause that is not satisfied. The number of non-satisfied clauses is thus at least $2^{r-1}$. Consequently, setting $\rho = 1 - \frac{1}{q 2^{q+1}}$, we have $\mathrm{opt}(\varphi_x) \le m - 2^{r-1} \le m \left(1 - \frac{2^{r-1}}{q 2^q 2^r}\right) = \rho m$.
Conversely, assume Gap-MAX-E3SAT$[1, \rho]$ is NP-hard for some constant $\rho < 1$. Let us prove the PCP theorem. The fact that $\text{PCP}(O(\log n), O(1)) \subseteq \text{NP}$ is easy. We show $\text{NP} \subseteq \text{PCP}(O(\log n), O(1))$ by designing an $(r, q)$-restricted verifier $V$ for some NP-complete language $L$, with $r = O(\log n)$ and $q = O(1)$.
Since Gap-MAX-E3SAT$[1, \rho]$ is NP-hard, there’s a poly-time reduction from $L$ to Gap-MAX-E3SAT$[1, \rho]$. Consider any input string $x$. Use the assumed reduction to construct $\varphi_x$, with $m$ clauses. The strategy for $V$ is to pick a constant number $k$ of clauses of $\varphi_x$ at random, ask the prover for the values of the (at most $3k$) variables in these clauses, and accept iff all the picked clauses are satisfied. Clearly $V$ has perfect completeness. When $x \notin L$, at most $\rho m$ clauses are satisfied. Hence, the probability that $V$ accepts is at most $\rho^k$, which is at most $1/2$ when $k \ge 1/\log_2(1/\rho)$. Since $k$ is a constant, the number of random bits used is $k \lceil \log_2 m \rceil = O(\log n)$, and the number of query bits needed is at most $3k$, which is a constant.
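The clause-sampling verifier in this converse direction is simple enough to simulate. Below is a toy Python sketch (my own naming and encoding, not from the lecture): the verifier picks $k$ clauses independently at random, reads only the proof bits appearing in them, and accepts iff all sampled clauses are satisfied, so repetition drives the soundness from $\rho$ down to $\rho^k$.

```python
import random

def clause_verifier(formula, proof, k, rng):
    """Pick k clauses of `formula` independently and uniformly at random,
    query the (at most 3k) variables appearing in them from `proof`
    (a dict var -> bool), and accept iff every sampled clause is satisfied."""
    for _ in range(k):
        clause = rng.choice(formula)
        if not any(proof[abs(l)] == (l > 0) for l in clause):
            return False
    return True

def acceptance_probability(formula, proof, k, trials=20000, seed=0):
    """Monte Carlo estimate of Pr[verifier accepts] over its coin tosses."""
    rng = random.Random(seed)
    accepts = sum(clause_verifier(formula, proof, k, rng)
                  for _ in range(trials))
    return accepts / trials

# Toy formula with 4 clauses; the proof below satisfies 3 of the 4,
# so one sample (k = 1) accepts with probability 3/4, and k independent
# samples accept with probability (3/4)^k.
phi = [[1, 2, 3], [-1, 2, 3], [1, -2, 3], [-1, -2, -3]]
proof = {1: True, 2: True, 3: True}
print(acceptance_probability(phi, proof, k=1))  # close to 0.75
print(acceptance_probability(phi, proof, k=4))  # close to (3/4)^4
```

Note the verifier never reads the whole proof: its query complexity is the constant $3k$, independent of the formula size.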
3. Max-Clique and the FGLSS Reduction
We give another example of a gap-producing reduction using a PCP verifier as a sub-routine.
The PCP connection refers to the use of a PCP characterization of NP to show hardness results for optimization problems. This connection was first noticed via a reduction from interactive proofs to Max-Clique in the pioneering work of Feige, Goldwasser, Lovász, Safra, and Szegedy. Since then, the reduction has been referred to as the FGLSS reduction.
Consider an $(r, q)$-restricted verifier $V$ for a language $L$. On input $x$, a transcript is a tuple $(R; q_1, a_1, \dots, q_q, a_q)$ such that $R$ is a random string, and the $q_i$ and $a_i$ are the queries and corresponding answers that $V$ made and received, in that order, given the random string. $(R; q_1, a_1, \dots, q_q, a_q)$ is an accepting transcript if $V$ accepts $x$ after seeing the answers.
Two transcripts $(R; q_1, a_1, \dots, q_q, a_q)$ and $(R'; q'_1, a'_1, \dots, q'_q, a'_q)$ are consistent with each other if $q_i = q'_j$ implies $a_i = a'_j$, i.e. if for the same questions we get the same answers.
Given an input $x$ for which $V$ tries to verify whether $x \in L$ or not, we will construct a graph $G_x$ in polynomial time such that, for any $x$, $\omega(G_x) = 2^r \cdot \max_{\pi} \Pr[V \text{ accepts } x \text{ given the proof } \pi]$.
Let $G_x = (V_x, E_x)$, where $V_x$ represents all accepting transcripts of $V$ on $x$ and $E_x$ consists of edges connecting consistent pairs of transcripts. It follows that $|V_x| \le 2^r 2^q$. We can add dummy (isolated) vertices so that $|V_x| = 2^{r+q}$.
Note that the first question $V$ asks is deterministic, knowing $x$ and $R$. Then, knowing the first answer, the second question is determined, etc. Thus, the questions in a transcript are in fact redundant for the encoding of transcripts. Consequently, the vertices of $G_x$ with the same random string $R$ form a cluster of at most $2^q$ independent vertices (two distinct transcripts with the same $R$ must answer some common query differently, hence are inconsistent).
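The construction is small enough to run on toy examples. Here is a hedged Python sketch (my own encoding, simplified to a non-adaptive verifier, so a transcript is just a random string paired with its query/answer list): vertices are accepting transcripts, edges join consistent transcripts with different random strings, and a proof accepted for every random string shows up as a clique with one vertex per random string.

```python
from itertools import product, combinations

def fglss_graph(random_strings, queries, accepts):
    """Build the FGLSS graph of a toy non-adaptive verifier.
    `queries(R)` gives the proof positions read on random string R;
    `accepts(R, answers)` says whether the verifier accepts those answer
    bits.  Vertices are accepting transcripts (R, ((pos, bit), ...));
    edges join consistent transcripts (same position -> same answer)."""
    vertices = []
    for R in random_strings:
        qs = queries(R)
        for ans in product([0, 1], repeat=len(qs)):
            if accepts(R, ans):
                vertices.append((R, tuple(zip(qs, ans))))
    def consistent(t, u):
        a, b = dict(t[1]), dict(u[1])
        return all(a[pos] == b[pos] for pos in a.keys() & b.keys())
    edges = {(t, u) for t, u in combinations(vertices, 2)
             if t[0] != u[0] and consistent(t, u)}
    return vertices, edges

def max_clique(vertices, edges):
    """Brute-force maximum clique size (fine for toy graphs only)."""
    adj = lambda t, u: (t, u) in edges or (u, t) in edges
    best = 0
    for size in range(1, len(vertices) + 1):
        for cand in combinations(vertices, size):
            if all(adj(t, u) for t, u in combinations(cand, 2)):
                best = max(best, size)
    return best

# Toy verifier: the proof has 3 bits; on random string R in {0, 1} it
# reads positions {0, 1 + R} and accepts iff the two bits are equal.
# The proofs 000 and 111 are accepted for every R, so the clique number
# equals the number of random strings, 2^r = 2.
V, E = fglss_graph([0, 1],
                   queries=lambda R: [0, 1 + R],
                   accepts=lambda R, a: a[0] == a[1])
print(len(V), max_clique(V, E))  # 4 vertices, clique number 2
```

The two vertices sharing a random string never get an edge, matching the independent clusters just described.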
If $x \in L$, then there is some proof $\pi$ such that $V$ accepts $\pi$ with probability $1$. Consider the set of all transcripts whose answers come from $\pi$: all these transcripts are consistent with each other. In other words, they form a clique. The fact that $V$ accepts $\pi$ with probability $1$ implies that the clique size is at least $2^r$ (one accepting transcript per random string). Hence, $\omega(G_x) \ge 2^r$.
Conversely, from a clique of $G_x$ of size $s \cdot 2^r$, say, we can construct a proof $\pi$ for which $V$ accepts with probability at least $s$. The proof is constructed by taking the union of the answers of the transcripts from the clique, adding dummy answers for positions not queried by any transcript in the clique. Consequently, when $x \notin L$ there cannot be a clique of size more than $2^{r-1}$, otherwise there would be a proof $\pi$ for which $V$ accepts with probability more than $1/2$. Hence, in this case $\omega(G_x) \le 2^{r-1}$.
Remark: The FGLSS reduction runs in time polynomial in $2^{r+q}$.
Lemma 5 If $\text{NP} \subseteq \text{PCP}_{c,s}(r, q)$, and if $r = O(\log n)$ and $q = O(1)$, then Max-Clique is hard to approximate to within $c/s - \epsilon$ for any $\epsilon > 0$.
Theorem 6 It is NP-hard to approximate Max-Clique to within any constant ratio. (Run the verifier of the PCP theorem independently $k$ times: completeness stays $1$, the soundness drops to $2^{-k}$, and Lemma 5 applies with $c/s = 2^k$ for any constant $k$.)
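The parameter bookkeeping behind this repetition argument can be sketched in a few lines of Python (my own helper, nothing from the lecture): running a verifier with perfect completeness and soundness $1/2$ independently $k$ times multiplies the random and query bits by $k$ but shrinks the soundness to $2^{-k}$, so the clique gap $c/s$ becomes $2^k$.

```python
def repeated_verifier_params(r, q, k):
    """Parameters of the verifier obtained by running an (r, q)-restricted
    verifier with perfect completeness and soundness 1/2 independently
    k times, accepting iff all runs accept."""
    return {
        "random_bits": k * r,     # fresh random string for each run
        "query_bits": k * q,      # each run asks its own queries
        "soundness": 0.5 ** k,    # error probabilities multiply
        "clique_gap": 2 ** k,     # c/s, with completeness c = 1
    }

# With k = 4 repetitions the FGLSS clique gap is already 16,
# at the price of 4x the randomness and 4x the queries.
print(repeated_verifier_params(r=10, q=3, k=4))
```

Since $k$ is a constant, the repeated verifier still uses $O(\log n)$ random bits and $O(1)$ queries, which is all Lemma 5 needs.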
Next time, we will see how to “amplify” the gap to prove stronger inapproximability results for Max-Clique.
Could you please give one more example of gap-reducing reduction?
Thanh
Sorry, I meant: gap-producing reduction using a PCP verifier as a sub-routine
We’ll see more examples today and next week, especially on LabelCover, CSP, and MaxCut