Results 1–10 of 151
Optimal inapproximability results for MAX-CUT and other 2-variable CSPs?
, 2005
Abstract

Cited by 223 (32 self)
In this paper we show a reduction from the Unique Games problem to the problem of approximating MAX-CUT to within a factor of α_GW + ε, for all ε > 0; here α_GW ≈ .878567 denotes the approximation ratio achieved by the Goemans-Williamson algorithm [25]. This implies that if the Unique Games …
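The constant quoted in this abstract has a closed form: α_GW is the minimum over θ ∈ (0, π] of (2θ/π) / (1 − cos θ), the worst-case ratio between the probability that a random hyperplane separates two unit vectors at angle θ and the SDP contribution (1 − cos θ)/2 of such an edge (both scaled by 2). A minimal numerical sketch, using a plain grid search (the function name is illustrative):

```python
import math

def gw_ratio(samples=2_000_000):
    """Numerically minimize (2*theta/pi) / (1 - cos(theta)) over (0, pi].

    Near theta = 0 the ratio blows up, so starting the grid at i = 1
    also conveniently avoids division by zero.
    """
    best = float("inf")
    for i in range(1, samples + 1):
        theta = math.pi * i / samples
        best = min(best, (2 * theta / math.pi) / (1 - math.cos(theta)))
    return best

print(round(gw_ratio(), 6))  # -> 0.878567
```

The minimum is attained around θ ≈ 2.33 radians; since the ratio is smooth there, even a coarse grid reproduces the constant to six digits.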
The Unique Games Conjecture, integrality gap for cut problems and embeddability of negative type metrics into ℓ1
 In Proc. 46th IEEE Symp. on Foundations of Comp. Sci
, 2005
Abstract

Cited by 170 (11 self)
In this paper we disprove the following conjecture due to Goemans [17] and Linial [25] (also see [5, 27]): “Every negative type metric embeds into ℓ1 with constant distortion.” We show that for every δ > 0, and for large enough n, there is an n-point negative type metric which requires distortion at least (log log n)^(1/6−δ) to embed into ℓ1. Surprisingly, our construction is inspired by the Unique Games Conjecture (UGC) of Khot [20], establishing a previously unsuspected connection between PCPs and the theory of metric embeddings. We first prove that the UGC implies super-constant hardness results for (non-uniform) Sparsest Cut and Minimum Uncut problems. It is already known that the UGC also implies an optimal hardness result for Maximum Cut [21]. Though these hardness results rely on the UGC, we demonstrate, nevertheless, that the corresponding PCP reductions can be used to construct “integrality gap instances” for the respective problems. Towards this, we first construct an integrality gap instance for a natural SDP relaxation of Unique Games. Then, we “simulate” the PCP reduction and “translate” the integrality gap instance of Unique Games into integrality gap instances for the respective cut problems! This enables us to prove …
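The central definition here can be checked mechanically: a metric d is of negative type iff Σ_{i,j} x_i x_j d(i, j) ≤ 0 for every vector x with Σ_i x_i = 0, which is equivalent to the centered matrix −(1/2) J D J being positive semidefinite. A small sketch, assuming numpy; the shortest-path metric of the 4-cycle is used because it embeds isometrically into ℓ1 (as the vertices of a square), and every ℓ1 metric is of negative type:

```python
import numpy as np

def is_negative_type(D, tol=1e-9):
    """Check whether distance matrix D is of negative type, i.e.
    x^T D x <= 0 whenever the entries of x sum to zero. Equivalently,
    G = -(1/2) J D J must be positive semidefinite, where
    J = I - (1/n) * ones is the projection onto mean-zero vectors."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    G = -0.5 * (J @ D @ J)
    return bool(np.linalg.eigvalsh(G).min() >= -tol)

# Shortest-path metric of the 4-cycle C4 (vertices 0-1-2-3-0):
C4 = np.array([[0, 1, 2, 1],
               [1, 0, 1, 2],
               [2, 1, 0, 1],
               [1, 2, 1, 0]], dtype=float)
print(is_negative_type(C4))  # C4 embeds into l1, hence negative type
```

The paper's point is that this containment (ℓ1 ⊆ negative type) cannot be reversed with constant distortion.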
Noise stability of functions with low influences: invariance and optimality
Abstract

Cited by 128 (17 self)
In this paper we study functions with low influences on product probability spaces. The analysis of Boolean functions f: {−1, 1}^n → {−1, 1} with low influences has become a central problem in discrete Fourier analysis. It is motivated by fundamental questions arising from the construction of probabilistically checkable proofs in theoretical computer science and from problems in the theory of social choice in economics. We prove an invariance principle for multilinear polynomials with low influences and bounded degree; it shows that under mild conditions the distribution of such polynomials is essentially invariant for all product spaces. Ours is one of the very few known non-linear invariance principles. It has the advantage that its proof is simple and that the error bounds are explicit. We also show that the assumption of bounded degree can be eliminated if the polynomials are slightly “smoothed”; this extension is essential for our applications to “noise stability”-type problems. In particular, as applications of the invariance principle we prove two conjectures: the “Majority Is Stablest” conjecture [27] from theoretical computer science, which was the original motivation for this work, and the “It Ain’t Over Till It’s Over” conjecture [25] from social choice theory. The “Majority Is Stablest” conjecture and its generalizations proven here, in conjunction with “Unique Games” and its variants, imply a number of (optimal) inapproximability results for graph problems.
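For intuition on the noise stability at play here: S_ρ(f) = E[f(x) f(y)], where x is uniform on {−1, 1}^n and y agrees with x in each coordinate independently with probability (1 + ρ)/2. A brute-force sketch for the 3-bit majority; its Fourier expansion Maj_3 = (x1 + x2 + x3)/2 − x1·x2·x3/2 gives the exact value S_ρ = (3/4)ρ + (1/4)ρ³, which the enumeration reproduces:

```python
from itertools import product

def maj(bits):
    # Majority of an odd number of +/-1 bits.
    return 1 if sum(bits) > 0 else -1

def stability(rho, n=3):
    """Exact S_rho(Maj_n) = E[Maj(x) * Maj(y)], where x is uniform on
    {-1,1}^n and each y_i equals x_i with probability (1+rho)/2,
    computed by enumerating all (x, y) pairs with their weights."""
    p = (1 + rho) / 2
    total = 0.0
    for x in product((-1, 1), repeat=n):
        for y in product((-1, 1), repeat=n):
            w = 1.0
            for xi, yi in zip(x, y):
                w *= p if xi == yi else 1 - p
            total += maj(x) * maj(y) * w / 2 ** n
    return total

# Fourier formula at rho = 0.5: (3/4)*0.5 + (1/4)*0.5**3 = 0.40625
print(stability(0.5))  # -> 0.40625
```

As n grows, S_ρ(Maj_n) tends to 1 − (2/π) arccos ρ, the Gaussian quantity that “Majority Is Stablest” shows is extremal among low-influence functions.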
On the Hardness of Approximating Multicut and Sparsest-Cut
 In Proceedings of the 20th Annual IEEE Conference on Computational Complexity
, 2005
Abstract

Cited by 102 (5 self)
We show that the MULTICUT, SPARSEST-CUT, and MIN-2CNF≡DELETION problems are NP-hard to approximate within every constant factor, assuming the Unique Games Conjecture of Khot [STOC, 2002]. A quantitatively stronger version of the conjecture implies an inapproximability factor of Ω(log log n).
Parameterized complexity and approximation algorithms
 Comput. J
, 2006
Abstract

Cited by 56 (2 self)
Approximation algorithms and parameterized complexity are usually considered to be two separate ways of dealing with hard algorithmic problems. In this paper, our aim is to investigate how these two fields can be combined to achieve better algorithms than either of the two theories could offer on its own. We discuss the different ways parameterized complexity can be extended to approximation algorithms, survey results of this type, and propose directions for future research.
Gaussian Bounds for Noise Correlation of Functions and Tight Analysis of Long Codes
 In IEEE Symposium on Foundations of Computer Science (FOCS)
, 2008
Abstract

Cited by 55 (6 self)
In this paper we derive tight bounds on the expected value of products of low-influence functions defined on correlated probability spaces. The proofs are based on extending Fourier theory to an arbitrary number of correlated probability spaces, on a generalization of an invariance principle recently obtained with O’Donnell and Oleszkiewicz for multilinear polynomials with low influences and bounded degree, and on properties of multidimensional Gaussian distributions. We present two applications of the new bounds to the theory of social choice. We show that Majority is asymptotically the most predictable function among all low-influence functions given a random sample of the voters. Moreover, we derive an almost tight bound in the context of Condorcet aggregation and low-influence voting schemes on a large number of candidates. In particular, we show that for every low-influence aggregation function, the probability that Condorcet voting on k candidates will result in a unique candidate that is preferable to all others is k^(−1+o(1)). This matches the asymptotic behavior of the majority function, for which the probability is k^(−1−o(1)). A number of applications in hardness of approximation in theoretical computer science were …
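The Condorcet setting above can be made concrete at the smallest scale: with 3 candidates and 3 voters holding independent uniformly random strict rankings, exhaustive enumeration of the 6³ = 216 profiles shows a pairwise-majority (Condorcet) winner exists in 204 of them; the remaining 12 are the classic cyclic “Condorcet paradox” profiles. A minimal sketch (function and variable names are illustrative):

```python
from itertools import permutations, product

def condorcet_counts(candidates="ABC", voters=3):
    """Enumerate all profiles of strict rankings; count how many admit
    a Condorcet winner (a candidate beating every other candidate in
    pairwise majority) versus how many yield a majority cycle."""
    rankings = list(permutations(candidates))
    winner = cyclic = 0
    for profile in product(rankings, repeat=voters):
        def beats(a, b):
            # a beats b if a strict majority of voters rank a above b
            return sum(r.index(a) < r.index(b) for r in profile) * 2 > voters
        if any(all(beats(c, d) for d in candidates if d != c)
               for c in candidates):
            winner += 1
        else:
            cyclic += 1
    return winner, cyclic

print(condorcet_counts())  # -> (204, 12) over the 6**3 = 216 profiles
```

The 12 cyclic profiles are exactly the assignments of the three rotations of one cyclic order to the three voters (2 orientations × 3! assignments), so with an odd number of voters no ties arise and every other profile is transitive.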
A new multilayered PCP and the hardness of hypergraph vertex cover
 In Proceedings of the 35th Annual ACM Symposium on Theory of Computing
, 2003
Abstract

Cited by 55 (11 self)
Given a k-uniform hypergraph, the Ek-Vertex-Cover problem is to find the smallest subset of vertices that intersects every hyperedge. We present a new multilayered PCP construction that extends the Raz verifier. This enables us to prove that Ek-Vertex-Cover is NP-hard to approximate within a factor of (k − 1 − ε) for arbitrary constants ε > 0 and k ≥ 3. The result is nearly tight as this problem can be easily approximated within factor k. Our construction makes use of the biased Long Code and is analyzed using combinatorial properties of s-wise t-intersecting families of subsets. We also give a different proof that shows an inapproximability factor of ⌊k/2⌋ − ε. In addition to being simpler, this proof also works for super-constant values of k up to (log N)^(1/c), where …
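The "easily approximated within factor k" remark is the standard greedy maximal-matching argument: repeatedly take all k vertices of some still-uncovered hyperedge. The hyperedges chosen this way are pairwise disjoint, so any cover, including the optimum, must contain at least one vertex from each of them, giving |cover| ≤ k · OPT. A minimal sketch (the instance data is a hypothetical example):

```python
def k_cover(hyperedges):
    """Greedy factor-k approximation for Ek-Vertex-Cover: whenever an
    edge is not yet hit, add all k of its vertices to the cover. The
    selected edges are pairwise disjoint, so |cover| <= k * OPT."""
    cover = set()
    for e in hyperedges:
        if not (e & cover):   # edge not yet covered
            cover |= e        # take all k of its vertices
    return cover

# 3-uniform example: a small hypergraph given as frozensets of vertices
edges = [frozenset(e) for e in [(1, 2, 3), (3, 4, 5), (6, 7, 8), (2, 6, 9)]]
c = k_cover(edges)
assert all(e & c for e in edges)  # every hyperedge is intersected
print(sorted(c))  # -> [1, 2, 3, 6, 7, 8]
```

The paper's (k − 1 − ε) hardness result says this trivial algorithm is essentially the best possible under P ≠ NP.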
Near-optimal algorithms for Unique Games
 In Proceedings of the 38th Annual ACM Symposium on Theory of Computing
, 2006
Abstract

Cited by 46 (8 self)
Unique games are constraint satisfaction problems that can be viewed as a generalization of Max-Cut to a larger domain size. The Unique Games Conjecture states that it is hard to distinguish between instances of unique games where almost all constraints are satisfiable and those where almost none are satisfiable. It has been shown to imply a number of inapproximability results for fundamental problems that seem difficult to obtain by more standard complexity assumptions. Thus, proving or refuting this conjecture is an important goal. We present significantly improved approximation algorithms for unique games. For instances with domain size k where the optimal solution satisfies a 1 − ε fraction of all constraints, our algorithms satisfy roughly a k^(−ε/(2−ε)) and a 1 − O(√(ε log k)) fraction of all constraints. Our algorithms are based on rounding a natural semidefinite programming relaxation for the problem, and their performance almost matches the integrality gap of this relaxation. Our results are near optimal if the Unique Games Conjecture is true, i.e., any improvement (beyond low-order terms) would refute the conjecture.