Results 1–10 of 69
The Importance of Being Biased
, 2002
Cited by 88 (7 self)
The Minimum Vertex Cover problem is the problem of, given a graph, finding a smallest set of vertices that touches all edges. We show that it is NP-hard to approximate this problem to within a factor of 1.36067, improving on the previously known hardness factor of 7/6.
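As context for the hardness factor (a sketch, not from the paper): the classic maximal-matching heuristic gives a 2-approximation for Minimum Vertex Cover, and the result above says no polynomial-time algorithm can do better than 1.36067 unless P = NP.

```python
def vertex_cover_2approx(edges):
    """Greedily take both endpoints of each uncovered edge (a maximal matching)."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.add(u)
            cover.add(v)
    return cover

# A 4-cycle: the optimal cover has size 2; the heuristic returns at most 4.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
cover = vertex_cover_2approx(edges)
assert all(u in cover or v in cover for u, v in edges)
print(len(cover))  # → 4
```

Since both endpoints of a matched edge are taken and any cover must hit each matched edge at least once, the output is at most twice the optimum.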
Ruling out PTAS for graph min-bisection, dense k-subgraph, and bipartite clique
 SIAM J. Comput
Cited by 56 (0 self)
Assuming that NP ⊄ ∩_{ε>0} BPTIME(2^{n^ε}), we show that Graph Min-Bisection, Dense k-Subgraph and Bipartite Clique have no Polynomial Time Approximation Scheme (PTAS). We give a reduction from the Minimum Distance of Code problem (MDC). Starting with an instance of MDC, we build a Quasi-random PCP that suffices to prove the desired inapproximability results. In a Quasi-random PCP, the query pattern of the verifier looks random in a certain precise sense. Among the several new techniques we introduce, the most interesting one gives a way of certifying that a given polynomial belongs to a given linear subspace of polynomials. As is important for our purpose, the certificate itself happens to be another polynomial, and it can be checked probabilistically by reading a constant number of its values.
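The reduction starts from the Minimum Distance of Code problem. As an illustration of what that quantity is (the code below is my own sketch, not part of the reduction), here is a brute-force computation for a small binary linear code given by a generator matrix; the paper's point is that even approximating this value is hard.

```python
from itertools import product

def min_distance(G):
    """Minimum Hamming weight over all nonzero codewords of the code
    generated (over GF(2)) by the rows of G. Exponential in the dimension."""
    k = len(G)      # dimension
    n = len(G[0])   # block length
    best = n + 1
    for msg in product([0, 1], repeat=k):
        if not any(msg):
            continue
        cw = [sum(msg[i] * G[i][j] for i in range(k)) % 2 for j in range(n)]
        best = min(best, sum(cw))
    return best

# Generator matrix of the [7,4] Hamming code; its minimum distance is 3.
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
print(min_distance(G))  # → 3
```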
New results for learning noisy parities and halfspaces
 In Proceedings of the 47th Annual Symposium on Foundations of Computer Science (FOCS)
, 2006
Cited by 55 (11 self)
We address well-studied problems concerning the learnability of parities and halfspaces in the presence of classification noise. Learning of parities under the uniform distribution with random classification noise, also called the noisy parity problem, is a famous open problem in computational learning. We reduce a number of basic problems regarding learning under the uniform distribution to learning of noisy parities, thus highlighting the central role of this problem for learning under the uniform distribution. We show that under the uniform distribution, learning parities with adversarial classification noise reduces to learning parities with random classification noise. Together with the parity learning algorithm of Blum et al. [BKW03], this gives the first nontrivial algorithm for learning parities with adversarial noise. We show that learning of DNF expressions reduces to learning noisy parities of just a logarithmic number of variables, and that learning of k-juntas reduces to learning noisy parities of k variables. These reductions work even in the presence of random classification noise in the original DNF or junta. We then consider the problem of learning halfspaces over Q^n with adversarial noise, or finding a halfspace that maximizes the agreement rate with a given set of examples. Finding the best halfspace is known to be NP-hard [GJ79, PV88] and many inapproximability results are known for this problem [ABSS97, HSH95, AK95, BDEL00, BB02]. We show that even if there is a halfspace that correctly classifies a 1 − ε fraction of the given examples, it is hard to find a halfspace that is correct on a 1/2 + ε fraction, for any ε > 0, assuming ...
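To make the noisy parity problem concrete (my own brute-force sketch, not the BKW algorithm): given uniform examples labeled by a parity with each label flipped independently with small probability, one can try every candidate secret and keep the one agreeing with the most labels. This takes 2^n time, which is exactly what the reductions above make tolerable by shrinking the number of relevant variables.

```python
import random

def learn_noisy_parity(examples, n):
    """Exhaustive maximum-agreement decoding over all 2^n parity functions."""
    best, best_agree = None, -1
    for s in range(2 ** n):
        cand = [(s >> i) & 1 for i in range(n)]
        agree = sum(
            1 for x, y in examples
            if sum(c * xi for c, xi in zip(cand, x)) % 2 == y
        )
        if agree > best_agree:
            best, best_agree = cand, agree
    return best

random.seed(0)
n, secret = 6, [1, 0, 1, 1, 0, 0]
examples = []
for _ in range(300):
    x = [random.randint(0, 1) for _ in range(n)]
    y = sum(s * xi for s, xi in zip(secret, x)) % 2
    if random.random() < 0.1:   # random classification noise, rate 0.1
        y ^= 1
    examples.append((x, y))
print(learn_noisy_parity(examples, n) == secret)
```

With 300 examples and noise rate 0.1, the true secret agrees with roughly 90% of labels while every other parity agrees with about half, so the maximum-agreement candidate is the secret with overwhelming probability.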
Conditional hardness for approximate coloring
 In STOC 2006
, 2006
Cited by 46 (14 self)
We study the APPROXIMATE-COLORING(q, Q) problem: given a graph G, decide whether χ(G) ≤ q or χ(G) ≥ Q (where χ(G) is the chromatic number of G). We derive conditional hardness for this problem for any constant 3 ≤ q < Q. For q ≥ 4, our result is based on Khot's 2-to-1 conjecture [Khot'02]. For q = 3, we base our hardness result on a certain '⋈-shaped' variant of his conjecture. We also prove that the problem ALMOST-3-COLORING_ε is hard for any constant ε > 0, assuming Khot's Unique Games conjecture. This is the problem of deciding, for a given graph, between the case where one can 3-color all but an ε fraction of the vertices without monochromatic edges, and the case where the graph contains no independent set of relative size at least ε. Our result is based on bounding various generalized noise-stability quantities using the invariance principle of Mossel et al. [MOO'05].
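For intuition on the decision problem underlying APPROXIMATE-COLORING(q, Q), here is a brute-force q-colorability check (illustrative only; the paper's point is that even distinguishing χ(G) ≤ q from χ(G) ≥ Q is conditionally hard):

```python
from itertools import product

def is_q_colorable(n, edges, q):
    """Try all q^n colorings of vertices 0..n-1; exponential, for tiny graphs."""
    return any(
        all(col[u] != col[v] for u, v in edges)
        for col in product(range(q), repeat=n)
    )

# K4 minus one edge is 3-colorable; the full K4 is not.
edges_k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(is_q_colorable(4, edges_k4[:-1], 3), is_q_colorable(4, edges_k4, 3))
# → True False
```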
A New Trust Region Technique for the Maximum Weight Clique Problem
 Discrete Applied Mathematics
, 2002
Cited by 31 (2 self)
A new simple generalization of the Motzkin–Straus theorem for the maximum weight clique problem is formulated and directly proved. Within this framework a new trust region heuristic is developed. In contrast to usual trust region methods, it regards not only the global optimum of a quadratic objective over a sphere, but also a set of other stationary points of the program. We formulate and prove a condition under which a Motzkin–Straus optimum coincides with such a point. The developed method has complexity O(n³), where n is the number of graph vertices. It was implemented in a publicly available software package QUALEX-MS.
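For reference, the classical (unweighted) Motzkin–Straus theorem that the paper generalizes states that the maximum of 2·Σ_{(i,j)∈E} x_i x_j over the probability simplex equals 1 − 1/ω(G), attained by spreading weight uniformly over a maximum clique. A small sanity check of that identity (my own sketch, brute-forcing ω on a tiny graph):

```python
from itertools import combinations

def max_clique_size(n, edges):
    """Brute-force clique number omega(G) of a tiny graph."""
    E = set(map(frozenset, edges))
    return max(
        k for k in range(1, n + 1)
        for S in combinations(range(n), k)
        if all(frozenset(p) in E for p in combinations(S, 2))
    )

def ms_value(x, edges):
    """Motzkin-Straus objective 2 * sum over edges of x_i * x_j."""
    return 2 * sum(x[i] * x[j] for i, j in edges)

# A 5-cycle plus the chord (0,2): omega = 3 via the triangle {0,1,2}.
n, edges = 5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]
w = max_clique_size(n, edges)
x = [1 / w if v < w else 0.0 for v in range(n)]   # uniform weight on {0,1,2}
print(w, round(ms_value(x, edges), 6))  # → 3 0.666667 = 1 - 1/3
```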
Hardness Results for Coloring 3-Colorable 3-Uniform Hypergraphs
, 2002
Cited by 25 (9 self)
In this paper, we consider the problem of coloring a 3-colorable 3-uniform hypergraph. In the minimization version of this problem, given a 3-colorable 3-uniform hypergraph, one seeks an algorithm to color the hypergraph with as few colors as possible. We show that it is NP-hard to color a 3-colorable 3-uniform hypergraph with constantly many colors. In fact, we show the stronger result that it is NP-hard to distinguish whether a 3-uniform hypergraph with n vertices is 3-colorable or contains no independent set of size δn, for an arbitrarily small constant δ > 0. In the maximization version of the problem, given a 3-uniform hypergraph, the goal is to color the vertices with 3 colors so as to maximize the number of non-monochromatic edges. We show that it is NP-hard to distinguish whether a 3-uniform hypergraph is 3-colorable or whether every coloring of the vertices with 3 colors makes at most an 8/9 + ε fraction of the edges non-monochromatic, where ε > 0 is an arbitrarily small constant. This result is tight, since assigning a random color independently to every vertex makes an 8/9 fraction of the edges non-monochromatic in expectation.

These results are obtained via a new construction of a probabilistically checkable proof system (PCP) for NP. We develop a new construction of the PCP outer verifier; an important feature of this construction is the smoothening of the projection maps. We believe that the techniques in this paper will be quite useful in the future. As an application of our techniques, we give a simpler proof of Håstad's result [11] that for every constant ε > 0, it is NP-hard to distinguish satisfiable instances of Max-3SAT from instances where no assignment satisfies more than a 7/8 + ε fraction of the clauses. Dinur, Regev and Smyth [6] independently showed that it is NP-hard to color a 2-colorable 3-uniform hypergraph with constantly many colors. In the "good case", the hypergraph they construct is 2-colorable, and hence their result is stronger. In the "bad case", however, the hypergraph we construct has a stronger property: it does not even contain an independent set of size δn.
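The 8/9 figure in the tightness claim comes from a simple calculation: a uniformly random 3-coloring makes any edge on three distinct vertices monochromatic with probability 3·(1/3)³ = 1/9. A quick Monte Carlo check (my own sketch):

```python
import random

random.seed(1)
n = 12
# A small random 3-uniform hypergraph: 50 edges, each on 3 distinct vertices.
edges = [random.sample(range(n), 3) for _ in range(50)]

trials, tot = 2000, 0
for _ in range(trials):
    col = [random.randrange(3) for _ in range(n)]   # uniform random 3-coloring
    tot += sum(1 for e in edges if len({col[v] for v in e}) > 1)

# The empirical non-monochromatic fraction should sit very close to 8/9.
print(abs(tot / (trials * len(edges)) - 8 / 9) < 0.01)
```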
Inapproximability of Vertex Cover and Independent Set in Bounded Degree Graphs
Cited by 21 (0 self)
We study the inapproximability of Vertex Cover and Independent Set on degree-d graphs. We prove that: • Vertex Cover is Unique Games-hard to approximate to within a factor 2 − (2 + o_d(1))·(log log d / log d). This exactly matches the algorithmic result of Halperin [1] up to the o_d(1) term. • Independent Set is Unique Games-hard to approximate to within a factor O(d / log² d). This improves the d / log^{O(1)} d Unique Games hardness result of Samorod ...
Amplifying lower bounds by means of self-reducibility
 IN IEEE CONFERENCE ON COMPUTATIONAL COMPLEXITY
, 2008
Cited by 20 (6 self)
We observe that many important computational problems in NC¹ share a simple self-reducibility property. We then show that, for any problem A having this self-reducibility property, A has polynomial-size TC⁰ circuits if and only if it has TC⁰ circuits of size n^{1+ε} for every ε > 0 (counting the number of wires in a circuit as the size of the circuit). As an example of what this observation yields, consider the Boolean Formula Evaluation problem (BFE), which is complete for NC¹ and has the self-reducibility property. It follows from a lower bound of Impagliazzo, Paturi, and Saks that BFE requires depth-d TC⁰ circuits of size n^{1+ε_d}. If one were able to improve this lower bound to show that there is some constant ε > 0 such that every TC⁰ circuit family recognizing BFE has size n^{1+ε}, then it would follow that TC⁰ ≠ NC¹. We show that proving lower bounds of the form n^{1+ε} is not ruled out by the Natural Proof framework of Razborov and Rudich, and hence there is currently no known barrier for separating classes such as ACC⁰, TC⁰ and NC¹ via existing "natural" approaches to proving circuit lower bounds. We also show that problems with small uniform constant-depth circuits have algorithms that simultaneously have small space and time bounds. We then make use of known time-space tradeoff lower bounds to show that SAT requires uniform depth-d TC⁰ and AC⁰[6] circuits of size n^{1+c} for some constant c depending on d.
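For readers unfamiliar with BFE: the input is a Boolean formula (a tree of AND/OR gates over constants), and the question is its value. Evaluating it sequentially is trivial, as in this sketch (mine, not from the paper); the open question above concerns its wire complexity in constant-depth threshold circuits.

```python
def evaluate(f):
    """Evaluate a formula given as nested tuples ('and'|'or', left, right)
    with Boolean leaves."""
    if isinstance(f, bool):
        return f
    op, left, right = f
    if op == "and":
        return evaluate(left) and evaluate(right)
    if op == "or":
        return evaluate(left) or evaluate(right)
    raise ValueError(f"unknown gate: {op}")

# (True AND (False OR True)) evaluates to True.
formula = ("and", True, ("or", False, True))
print(evaluate(formula))  # → True
```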
On Agnostic Learning of Parities, Monomials and Halfspaces
, 2006
Cited by 19 (7 self)
We study the learnability of several fundamental concept classes in the agnostic learning framework of Haussler [Hau92] and Kearns et al. [KSS94]. We show that under the uniform distribution, agnostically learning parities reduces to learning parities with random classification noise, commonly referred to as the noisy parity problem. Together with the parity learning algorithm of Blum et al. [BKW03], this gives the first nontrivial algorithm for agnostic learning of parities. We use similar techniques to reduce learning of two other fundamental concept classes under the uniform distribution to learning of noisy parities. Namely, we show that learning of DNF expressions reduces to learning noisy parities of just a logarithmic number of variables, and learning of k-juntas reduces to learning noisy parities of k variables. We give essentially optimal hardness results for agnostic learning of monomials over {0, 1}^n and halfspaces over Q^n. We show that for any constant ε, finding a monomial (halfspace) that agrees with an unknown function on a 1/2 + ε fraction of examples is NP-hard even when there exists a monomial (halfspace) that agrees with the unknown function on a 1 − ε fraction of examples. This resolves an open question due to Blum and significantly improves on a number of previous hardness results for these problems. We extend these results to ε = 2^{−log^{1−λ} n} (ε = 2^{−√log n} in the case of halfspaces) for any constant λ > 0 under stronger complexity assumptions.
Every 2-CSP Allows Nontrivial Approximation
Cited by 17 (3 self)
We use semidefinite programming to prove that any constraint satisfaction problem in two variables over any domain allows an efficient approximation algorithm that does better than picking a random assignment. Specifically, we consider the case where each variable can take values in [d] and each constraint rejects t out of the d² possible input pairs. Then, for some universal constant c, we can, in probabilistic polynomial time, find an assignment whose objective value is, in expectation, within a factor 1 − t/d² + ct/(d⁴ log d) of optimal, improving on the trivial bound of 1 − t/d².
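The trivial bound being improved on is easy to verify empirically: if each constraint rejects t of the d² value pairs, a uniformly random assignment satisfies a 1 − t/d² fraction of constraints in expectation. A Monte Carlo sketch of that baseline (mine; the paper's SDP-based algorithm gains the additive ct/(d⁴ log d)):

```python
import random

random.seed(2)
d, t, n, m = 4, 5, 40, 60          # domain size, rejected pairs, variables, constraints
pairs = [(i, j) for i in range(d) for j in range(d)]

# Each constraint: a pair of distinct variables plus t rejected value pairs.
cons = []
for _ in range(m):
    u, v = random.sample(range(n), 2)
    cons.append((u, v, set(random.sample(pairs, t))))

trials, tot = 3000, 0
for _ in range(trials):
    a = [random.randrange(d) for _ in range(n)]    # uniform random assignment
    tot += sum(1 for u, v, rej in cons if (a[u], a[v]) not in rej)

# Empirical satisfied fraction should be close to 1 - t/d^2 = 0.6875 here.
print(abs(tot / (trials * m) - (1 - t / d ** 2)) < 0.01)
```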