Results 1–10 of 20
Linear Degree Extractors and the Inapproximability of MAX CLIQUE and CHROMATIC NUMBER
 Theory of Computing
, 2007
Abstract

Cited by 66 (2 self)
... that for all ε > 0, approximating MAX CLIQUE and CHROMATIC NUMBER to within n^(1−ε) is NP-hard. We further derandomize results of Khot (FOCS ’01) and show that for some γ > 0, no quasi-polynomial time algorithm approximates MAX CLIQUE or CHROMATIC NUMBER to within n/2^((log n)^(1−γ)), unless ÑP = P̃. The key to these results is a new construction of dispersers, which are related to randomness extractors. A randomness extractor is an algorithm which extracts randomness from a low-quality random source, using some additional truly random bits. We construct new extractors which require only log_2 n + O(1) additional random bits for sources with constant entropy rate, and have constant error. Our dispersers use an arbitrarily small constant ...
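For readers unfamiliar with the terminology, the standard textbook definition of a seeded randomness extractor (our restatement, not quoted from the paper) can be written as:

```latex
% Ext turns any weak source of min-entropy k, plus a short truly random
% seed, into bits that are eps-close to uniform in statistical distance.
\mathrm{Ext}\colon \{0,1\}^n \times \{0,1\}^d \to \{0,1\}^m
\quad\text{is a } (k,\varepsilon)\text{-extractor if}\quad
H_\infty(X) \ge k \;\Longrightarrow\;
\bigl\| \mathrm{Ext}(X, U_d) - U_m \bigr\|_{\mathrm{TV}} \le \varepsilon .
```

In this notation, the extractors above achieve seed length d = log_2 n + O(1) and constant ε for sources of constant entropy rate, i.e., min-entropy k = Ω(n).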
Inapproximability of edge-disjoint paths and low congestion routing on undirected graphs
 Combinatorica
, 2010
Abstract

Cited by 14 (5 self)
In the undirected Edge-Disjoint Paths problem with Congestion (EDPwC), we are given an undirected graph with V nodes, a set of terminal pairs and an integer c. The objective is to route as many terminal pairs as possible, subject to the constraint that at most c demands can be routed through any edge in the graph. When c = 1, the problem is simply referred to as the Edge-Disjoint Paths (EDP) problem. In this paper, we study the hardness of EDPwC in undirected graphs. Our main result is that for every ε > 0 there exists an α > 0 such that for 1 ≤ c ≤ α log log V / log log log V, it is hard to distinguish between instances where we can route all terminal pairs on edge-disjoint paths, and instances where we can route at most a 1/(log V)^((1−ε)/(c+2)) fraction of the terminal pairs, even if we allow congestion c. This implies a (log V)^((1−ε)/(c+2)) hardness of approximation ...
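To make the congestion constraint concrete, here is a small Python sketch (our illustration, not code from the paper) that measures the congestion of a candidate routing; a routing is feasible for EDPwC exactly when this value is at most c:

```python
from collections import Counter

def max_congestion(paths):
    """Given routed paths (each a list of vertices), return the maximum
    number of paths that use any single undirected edge."""
    load = Counter()
    for path in paths:
        for u, v in zip(path, path[1:]):
            load[frozenset((u, v))] += 1  # undirected edge {u, v}
    return max(load.values()) if load else 0

# Two terminal pairs whose routes share the edge {1, 2}:
paths = [[0, 1, 2, 3], [4, 1, 2, 5]]
print(max_congestion(paths))  # 2, so this routing needs congestion c >= 2
```

With c = 1 the same check decides whether the paths are edge-disjoint.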
Hardness of Routing with Congestion in Directed Graphs
 In Proc. of STOC
, 2007
Abstract

Cited by 11 (2 self)
Given as input a directed graph on N vertices and a set of source-destination pairs, we study the problem of routing the maximum possible number of source-destination pairs on paths, such that at most c(N) paths go through any edge. We show that the problem is hard to approximate within an N^(Ω(1/c(N))) factor even when we compare to the optimal solution that routes pairs on edge-disjoint paths, assuming NP doesn’t have N^(O(log log N))-time randomized algorithms. Here the congestion c(N) can be any function in the range 1 ≤ c(N) ≤ α log N / log log N for some absolute constant α > 0. The hardness result is in the right ballpark since a factor N^(O(1/c(N))) approximation algorithm is known for this problem, via rounding a natural multicommodity-flow relaxation. We also give a simple integrality gap construction that shows that the multicommodity-flow relaxation has an integrality gap of N^(Ω(1/c)) for c ranging from 1 to Θ(log n / log log n). A solution to the routing problem involves selecting which pairs to be routed and what paths to assign to each routed pair. Two natural restrictions can be placed on input instances to eliminate one of these aspects of the problem complexity. The first restriction is to consider instances with perfect completeness; an optimal solution is able to route all pairs with congestion 1 in such instances. The second restriction to consider is the unique paths property, where each source-destination pair has a unique path connecting them.
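The multicommodity-flow relaxation referred to above is usually written as the following path-based LP (a standard formulation in our own notation, not reproduced from the paper):

```latex
% P_i = set of paths joining source-destination pair i; f_p = flow on path p.
\max \sum_i \sum_{p \in P_i} f_p
\quad \text{s.t.} \quad
\sum_{p \in P_i} f_p \le 1 \;\; \forall i, \qquad
\sum_{p \,:\, e \in p} f_p \le c \;\; \forall \text{ edges } e, \qquad
f_p \ge 0 .
```

An integral solution with f_p ∈ {0, 1} is exactly a routing of some subset of the pairs with congestion at most c; rounding the fractional optimum gives the N^(O(1/c)) approximation mentioned above.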
Complexity of wavelength assignment in optical network optimization
 in Proceedings of IEEE INFOCOM
, 2006
Sound 3-query PCPPs are long
, 2008
Abstract

Cited by 9 (3 self)
We initiate the study of the tradeoff between the length of a probabilistically checkable proof of proximity (PCPP) and the maximal soundness that can be guaranteed by a 3-query verifier with oracle access to the proof. Our main observation is that a verifier limited to querying a short proof cannot obtain the same soundness as that obtained by a verifier querying a long proof. Moreover, we quantify the soundness deficiency as a function of the proof length and show that any verifier obtaining “best possible” soundness must query an exponentially long proof. In terms of techniques, we focus on the special class of inspective verifiers that read at most 2 proof bits per invocation. For such verifiers we prove exponential length-soundness tradeoffs that are later on used to imply our main results for the case of general (i.e., not necessarily inspective) verifiers. To prove the exponential tradeoff for inspective verifiers we show a connection between PCPP proof length and property-testing query complexity, which may be of independent interest. The connection is that any linear property that can be verified with proofs of length ℓ by linear inspective verifiers must be testable with query complexity ≈ log ℓ.
On the Usefulness of Predicates
, 2012
Abstract

Cited by 9 (1 self)
Motivated by the pervasiveness of strong inapproximability results for Max-CSPs, we introduce a relaxed notion of an approximate solution of a Max-CSP. In this relaxed version, loosely speaking, the algorithm is allowed to replace the constraints of an instance by some other (possibly real-valued) constraints, and then only needs to satisfy as many of the new constraints as possible. To be more precise, we introduce the following notion of a predicate P being useful for a (real-valued) objective Q: given an almost satisfiable Max-P instance, there is an algorithm that beats a random assignment on the corresponding Max-Q instance applied to the same sets of literals. The standard notion of a non-trivial approximation algorithm for a Max-CSP with predicate P is exactly the same as saying that P is useful for P itself. We say that P is useless if it is not useful for any Q. This turns out to be equivalent to the following pseudorandomness property: given an almost satisfiable instance of Max-P it is hard to find an assignment such that the induced distribution on k-bit strings defined by the instance is not essentially uniform. Under the Unique Games Conjecture, we give a complete and simple characterization of useful Max-CSPs defined by a predicate: such a Max-CSP is useless if and only if there is a pairwise independent distribution supported on the satisfying assignments of the predicate. It is natural to also consider the case when no negations are allowed in the CSP instance, and we derive a similar complete characterization (under the UGC) there as well. Finally, we also include some results and examples shedding additional light on the approximability of certain Max-CSPs.
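To illustrate the pairwise-independence criterion, the following Python check (our example; the choice of predicate is ours, not the paper's) verifies that the uniform distribution on the satisfying assignments of the 3-XOR predicate x1 ⊕ x2 ⊕ x3 = 1 is pairwise independent, which by the characterization above (under the UGC) makes the corresponding Max-CSP useless:

```python
from itertools import combinations, product

def pairwise_independent(support):
    """Check that the uniform distribution over `support` (tuples of bits)
    has a uniform 2-bit marginal on every pair of coordinates."""
    k = len(support[0])
    n = len(support)
    for i, j in combinations(range(k), 2):
        for a, b in product((0, 1), repeat=2):
            count = sum(1 for s in support if s[i] == a and s[j] == b)
            if count * 4 != n:  # each 2-bit pattern must have mass exactly 1/4
                return False
    return True

# Satisfying assignments of x1 XOR x2 XOR x3 = 1:
xor_support = [s for s in product((0, 1), repeat=3) if s[0] ^ s[1] ^ s[2] == 1]
print(pairwise_independent(xor_support))  # True
```

By contrast, a support like {(0,0,0), (1,1,1)} fails the check: fixing one coordinate determines the others, so no pairwise independent distribution lives on it.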
Independent set, induced matching, and pricing: connections and tight (subexponential time) approximation hardnesses
 CoRR abs/1308.2617
, 2013
Goldreich’s PRG: Evidence for near-optimal polynomial stretch
, 2013
Abstract

Cited by 6 (0 self)
We explore the connection between pseudorandomness of local functions and integrality gaps for constraint satisfaction problems. Specifically, we study candidate pseudorandom generators f: {0,1}^n → {0,1}^m constructed by applying some fixed predicate P to m randomly chosen sets of input bits. Goldreich first considered using functions of this form for cryptographic purposes. The security of these functions against LP and SDP hierarchies is related to the integrality gap of random instances of the Max-CSP problem with predicate P: if a random (highly unsatisfiable) instance “looks” fully satisfiable to an LP or SDP, the LP or SDP cannot distinguish between the output of the PRG and a random string. For a linear number of rounds of the LS+ and SA+ hierarchies, integrality gaps are known for the Max-CSP problem with pairwise-independent predicate P [BGMT12, TW13]. However, these works typically take m = O(n), whereas for our application to PRGs, we would prefer to take m = n^(1+Ω(1)) to get PRGs with polynomial stretch. We show integrality gaps for instances with n^(1+Ω(1)) constraints and further show integrality gaps for instances with t-wise independent predicates such that m increases with t. In particular, if we consider random instances, we get integrality gap instances with Ω(n^(t/2+1/6−ε)) constraints for both the SA+ and LS+ hierarchies after n^(Ω(1)) rounds. If we allow the deletion of a small number of constraints, we obtain an integrality gap instance with Ω(n^(t/2+1/2−ε)) constraints. This result is, in a sense, optimal, as random planted instances of t-wise independent CSPs with Õ(n^((t+1)/2)) constraints can be solved efficiently. These gap instances can then be used as PRGs with polynomial stretch that are secure against n^(Ω(1)) rounds of SA+ and LS+.
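The generator construction described above is simple to state in code. The following Python sketch is our illustration (the 5-ary predicate shown is a commonly studied example choice, not one specified by this paper): each output bit applies a fixed predicate P to a randomly chosen k-subset of the seed bits.

```python
import random

def goldreich_prg(seed_bits, subsets, predicate):
    """Goldreich-style local PRG: output bit i applies `predicate` to the
    seed bits indexed by subsets[i]. Stretch comes from m = len(subsets) > n."""
    return [predicate(*(seed_bits[j] for j in S)) for S in subsets]

# Illustrative 5-ary predicate: XOR of three bits plus an AND of two.
def tsa(a, b, c, d, e):
    return a ^ b ^ c ^ (d & e)

rng = random.Random(0)
n, m, k = 20, 40, 5                                # m > n gives stretch
seed = [rng.randrange(2) for _ in range(n)]        # n truly random bits in
subsets = [rng.sample(range(n), k) for _ in range(m)]
out = goldreich_prg(seed, subsets, tsa)            # m pseudorandom bits out
print(len(out))  # 40
```

Distinguishing `out` from a uniform 40-bit string corresponds, as the abstract explains, to refuting a random Max-CSP instance with predicate P, which is what the hierarchy lower bounds rule out.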
Using the FGLSS-reduction to Prove Inapproximability Results for Minimum Vertex Cover in Hypergraphs
 Electronic Colloquium on Computational Complexity (ECCC) 102
, 2001
Abstract

Cited by 4 (0 self)
Using known results regarding PCP, we present simple proofs of the inapproximability of vertex cover for hypergraphs. Specifically, we show that 1. Approximating the size of the minimum vertex cover in O(1)-regular hypergraphs to within a factor of 1.99999 is NP-hard. 2. Approximating the ...
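For contrast with these hardness results, the trivial upper bound is easy: in a k-uniform hypergraph, greedily taking all vertices of any uncovered hyperedge gives a k-approximation, since the chosen edges form a matching and any cover must hit each of them. A minimal Python sketch (our illustration, not from the paper):

```python
def greedy_cover(edges):
    """k-approximate vertex cover for a k-uniform hypergraph: repeatedly
    take every vertex of a not-yet-covered hyperedge. The selected edges
    are pairwise disjoint, so OPT >= their number and |cover| <= k * OPT."""
    cover = set()
    for edge in edges:
        if cover.isdisjoint(edge):
            cover |= set(edge)
    return cover

edges = [{1, 2, 3}, {3, 4, 5}, {6, 7, 8}]
print(sorted(greedy_cover(edges)))  # [1, 2, 3, 6, 7, 8]
```

The hardness results above say that even for constant-regular hypergraphs one cannot beat factor 1.99999 in polynomial time unless P = NP, so no polynomial-time algorithm improves dramatically on such simple schemes.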
Three-Query PCPs with Perfect Completeness over non-Boolean Domains
 In Proceedings of the 18th IEEE Conference on Computational Complexity
, 2002
Abstract

Cited by 2 (1 self)
We study non-Boolean PCPs that have perfect completeness and read three positions from the proof. For the case when the proof consists of values from a domain of size d for some integer constant d ≥ 2, we construct a non-adaptive PCP with perfect completeness and soundness d^(−1) + d^(−2) + ε, for any constant ε > 0, and an adaptive PCP with perfect completeness and soundness d^(−1) + ε, for any constant ε > 0. These results match the best known constructions for the case d = 2 and our proofs also show that the particular predicates we use in our PCPs are non-approximable beyond the random assignment threshold.