Results 11–20 of 101
Fixed-parameter tractability of multicut parameterized by the size of the cutset
2011
Abstract

Cited by 32 (6 self)
Given an undirected graph G, a collection {(s1, t1), ..., (sk, tk)} of pairs of vertices, and an integer p, the EDGE MULTICUT problem asks if there is a set S of at most p edges such that the removal of S disconnects every si from the corresponding ti. VERTEX MULTICUT is the analogous problem where S is a set of at most p vertices. Our main result is that both problems can be solved in time 2^O(p^3) · n^O(1), i.e., they are fixed-parameter tractable parameterized by the size p of the cutset in the solution. By contrast, it is unlikely that an algorithm with running time of the form f(p) · n^O(1) exists for the directed version of the problem, as we show it to be W[1]-hard parameterized by the size of the cutset.
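To make the EDGE MULTICUT definition concrete, here is a minimal brute-force sketch for tiny instances. It is not the paper's 2^O(p^3) · n^O(1) algorithm, just an exhaustive search over cutsets of size at most p; the function names and the small path example are our own illustration.

```python
from itertools import combinations

def separated(n, edges, pairs):
    """Check that every (s, t) pair lies in different components of (V, edges)."""
    # Union-find over vertices 0..n-1.
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in edges:
        parent[find(u)] = find(v)
    return all(find(s) != find(t) for s, t in pairs)

def edge_multicut(n, edges, pairs, p):
    """Return a cutset S of at most p edges separating all pairs, or None."""
    for size in range(p + 1):
        for S in combinations(edges, size):
            remaining = [e for e in edges if e not in S]
            if separated(n, remaining, pairs):
                return set(S)
    return None

# Path 0-1-2-3 with pairs (0, 2) and (1, 3): deleting the middle edge cuts both.
cut = edge_multicut(4, [(0, 1), (1, 2), (2, 3)], [(0, 2), (1, 3)], 1)
```

The exhaustive loop runs in time roughly |E|^p, which is exactly the kind of dependence the paper's fixed-parameter algorithm avoids.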
New approximation guarantee for chromatic number
STOC'06, 2006
Abstract

Cited by 31 (3 self)
We describe how to color every 3-colorable graph with O(n^0.2111) colors, thus improving an algorithm of Blum and Karger from almost a decade ago. Our analysis uses new geometric ideas inspired by the recent work of Arora, Rao, and Vazirani on SPARSEST CUT, and these ideas show promise of leading to further improvements.
Towards Sharp Inapproximability For Any 2-CSP
Abstract

Cited by 29 (1 self)
We continue the recent line of work on the connection between semidefinite-programming-based approximation algorithms and the Unique Games Conjecture. Given any boolean 2-CSP (or more generally, any nonnegative objective function on two boolean variables), we show how to reduce the search for a good inapproximability result to a certain numeric minimization problem. The key objects in our analysis are the vector triples arising when doing clause-by-clause analysis of algorithms based on semidefinite programming. Given a weighted set of such triples of a certain restricted type, which are “hard” to round in a certain sense, we obtain a Unique-Games-based inapproximability matching this “hardness” of rounding the set of vector triples. Conversely, any instance together with an SDP solution can be viewed as a set of vector triples, and we show that we can always find an assignment to the instance which is at least as good as the “hardness” of rounding the corresponding set of vector triples. We conjecture that the restricted type required for the hardness result is in fact no restriction, which would imply that these upper and lower bounds match exactly. This conjecture is supported by all existing results for specific 2-CSPs. As an application, we show that MAX 2-AND is hard to approximate within 0.87435. This improves upon the best previous hardness of α_GW + ε ≈ 0.87856, and comes very close to matching the approximation ratio of the best algorithm known, 0.87401. It also establishes that balanced instances of MAX 2-AND, i.e., instances in which each variable occurs positively and negatively equally often, are not the hardest to approximate, as these can be approximated within a factor α_GW.
On non-approximability for quadratic programs
In 46th Annual Symposium on Foundations of Computer Science, 2005
Abstract

Cited by 28 (4 self)
This paper studies the computational complexity of the following type of quadratic programs: given an arbitrary matrix M whose diagonal elements are zero, find x ∈ {−1, 1}^n that maximizes x^T M x. This problem recently attracted attention due to its application in various clustering settings, as well as an intriguing connection to the famous Grothendieck inequality. It is approximable to within a factor of O(log n), and known to be NP-hard to approximate within any factor better than 13/11 − ε for all ε > 0. We show that it is quasi-NP-hard to approximate to a factor better than O(log^γ n) for some γ > 0. The integrality gap of the natural semidefinite relaxation for this problem is known as the Grothendieck constant of the complete graph, and known to be Θ(log n). The proof of this fact was nonconstructive, and did not yield an explicit problem instance where this integrality gap is achieved. Our techniques yield an explicit instance for which the integrality gap is Ω(log n / log log n), essentially answering one of the open problems of Alon et al. [AMMN].
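The objective x^T M x over x ∈ {−1, 1}^n can be illustrated with a brute-force evaluator for tiny n (exponential time, so purely a definition check, not any of the algorithms the abstract discusses; the example matrix is our own):

```python
from itertools import product

def max_quadratic(M):
    """Brute-force max of x^T M x over x in {-1, 1}^n (tiny n only)."""
    n = len(M)
    best = float("-inf")
    for x in product((-1, 1), repeat=n):
        val = sum(x[i] * M[i][j] * x[j] for i in range(n) for j in range(n))
        best = max(best, val)
    return best

# Zero-diagonal symmetric matrix: one positive and one negative interaction.
M = [[0,  1,  0],
     [1,  0, -1],
     [0, -1,  0]]
best = max_quadratic(M)  # achieved e.g. by x = (1, 1, -1)
```

Note the zero diagonal matters: diagonal terms would contribute the constant sum of M[i][i] regardless of x.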
Understanding Parallel Repetition Requires Understanding Foams
2007
Abstract

Cited by 28 (3 self)
Motivated by the study of Parallel Repetition and also by the Unique Games Conjecture, we investigate the value of the “Odd Cycle Games” under parallel repetition. Using tools from discrete harmonic analysis, we show that after d rounds on the cycle of length m, the value of the game is at most 1 − (1/m) · Ω̃(√d) (for d ≤ m^2, say). This beats the natural barrier of 1 − Θ(1/m^2) · d for Raz-style proofs [Raz98, Hol06] (see [Fei95]) and also the SDP bound of Feige–Lovász [FL92, GW95]; however, it just barely fails to have implications for Unique Games. On the other hand, we also show that improving our bound would require proving nontrivial lower bounds on the surface area of high-dimensional foams. Specifically, one would need to answer: what is the least surface area of a cell that tiles R^d by the lattice Z^d?
Inapproximability results for sparsest cut, optimal linear arrangement, and precedence constrained scheduling
In Proceedings of 48th Annual Symposium on Foundations of Computer Science (FOCS), 2007
Abstract

Cited by 27 (9 self)
This paper is published in the proceedings of FOCS 2007 and is subject to some copyright restrictions. We consider (Uniform) Sparsest Cut, Optimal Linear Arrangement and the precedence constrained scheduling problem 1|prec|∑ w_j C_j. So far, these three notorious NP-hard problems have resisted all attempts to prove inapproximability results. We show that they have no Polynomial Time Approximation Scheme (PTAS), unless NP-complete problems can be solved in randomized subexponential time. Furthermore, we prove that the scheduling problem is as hard to approximate as Vertex Cover when the so-called fixed cost, that is present in all feasible solutions, is subtracted from the objective function.
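The scheduling objective 1|prec|∑ w_j C_j can be stated in a few lines of code: given a single-machine job order that respects the precedence constraints, C_j is the completion time of job j and the objective sums w_j · C_j. This evaluator (names and the three-job example are our own illustration) only checks and scores a schedule; finding the optimal order is the NP-hard part the abstract is about.

```python
def weighted_completion(order, proc, weight, prec):
    """Objective sum of w_j * C_j for a single-machine schedule `order`.
    Raises ValueError if `order` violates a precedence constraint (i before j)."""
    pos = {j: k for k, j in enumerate(order)}
    for i, j in prec:
        if pos[i] > pos[j]:
            raise ValueError(f"precedence {i} -> {j} violated")
    t, total = 0, 0
    for j in order:
        t += proc[j]              # completion time C_j of job j
        total += weight[j] * t
    return total

# Three jobs; job 0 must precede job 2.
proc = {0: 2, 1: 1, 2: 3}
weight = {0: 1, 1: 4, 2: 2}
obj = weighted_completion([1, 0, 2], proc, weight, [(0, 2)])
```

Here the schedule 1, 0, 2 gives C_1 = 1, C_0 = 3, C_2 = 6, hence objective 4·1 + 1·3 + 2·6.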
SDP gaps and UGC-hardness for Max-Cut Gain
2008
Abstract

Cited by 25 (4 self)
Given a graph with maximum cut of (fractional) size c, the Goemans–Williamson semidefinite programming (SDP) based algorithm is guaranteed to find a cut of size at least .878 · c. However, this guarantee becomes trivial when c is near 1/2, since making random cuts guarantees a cut of size 1/2 (i.e., half of all edges). A few years ago, Charikar and Wirth (analyzing an algorithm of Feige and Langberg) showed that given a graph with maximum cut 1/2 + ε, one can find a cut of size 1/2 + Ω(ε/log(1/ε)). The main contribution of our paper is twofold: 1. We give a natural and explicit 1/2 + ε vs. 1/2 + O(ε/log(1/ε)) integrality gap for the Max-Cut SDP based on Euclidean space with the Gaussian probability distribution. This shows that the SDP-rounding algorithm of Charikar–Wirth is essentially best possible. 2. We show how this SDP gap can be translated into a Long Code test with the same parameters. This implies that beating the Charikar–Wirth guarantee with any efficient algorithm is NP-hard, assuming the Unique Games Conjecture (UGC). This result essentially settles the asymptotic approximability of Max-Cut, assuming UGC. Building on the first contribution, we show how “randomness reduction” on related SDP gaps for the Quadratic-Programming problem lets us make the Ω(log(1/ε)) gap as large as Ω(log n) for n-vertex graphs. In addition to optimally answering an open question …
Polynomial Flow-Cut Gaps and Hardness of Directed Cut Problems
 In Proc. of STOC, 2007
Abstract

Cited by 25 (0 self)
We study the multicut and the sparsest cut problems in directed graphs. In the multicut problem, we are given an n-vertex graph G along with k source-sink pairs, and the goal is to find the minimum cardinality subset of edges whose removal separates all source-sink pairs. The sparsest cut problem has the same input, but the goal is to find a subset of edges to delete so as to minimize the ratio of the number of deleted edges to the number of source-sink pairs that are separated by this deletion. The natural linear programming relaxation for multicut corresponds, by LP-duality, to the well-studied maximum (fractional) multicommodity flow problem, while the standard LP-relaxation for sparsest cut corresponds to maximum concurrent flow. Therefore, the integrality gap of the linear programming relaxation for multicut/sparsest cut is also the flow-cut gap: the largest gap, achievable for any graph, between the maximum flow value and the minimum cost solution for the corresponding cut problem. Our first result is that the flow-cut gap between maximum multicommodity flow and minimum multicut is Ω̃(n^(1/7)) in directed graphs. We show a similar result for the gap between maximum concurrent flow and sparsest cut in directed graphs. These results improve upon a …
A unified approach to approximating partial covering problems
In Proceedings of the 14th Annual European Symposium on Algorithms, 2006
Abstract

Cited by 24 (2 self)
An instance of the generalized partial cover problem consists of a ground set U and a family of subsets S ⊆ 2^U. Each element e ∈ U is associated with a profit p(e), whereas each subset S ∈ S has a cost c(S). The objective is to find a minimum cost subcollection S′ ⊆ S such that the combined profit of the elements covered by S′ is at least P, a specified profit bound. In the prize-collecting version of this problem, there is no strict requirement to cover any element; however, if the subsets we pick leave an element e ∈ U uncovered, we incur a penalty of π(e). The goal is to identify a subcollection S′ ⊆ S that minimizes the cost of S′ plus the penalties of uncovered elements. Although problem-specific connections between the partial cover and the prize-collecting variants of a given covering problem have been explored and exploited, a more general connection remained open. The main contribution of this paper is to establish a formal relationship between these two variants. As a result, we present a unified framework for approximating problems that can be formulated or interpreted as special cases of generalized partial cover. We demonstrate the applicability of our method on a diverse collection of covering problems, for some of which we obtain the first nontrivial approximability results.
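As a concrete (hypothetical) instance of the partial cover setup, the following sketch runs a simple greedy heuristic: repeatedly take the set with the best ratio of newly covered profit to cost until the profit bound P is met. This greedy is only an illustration of the problem's ingredients (U, S, c, p, P); the paper's framework, not this heuristic, is what carries the approximation guarantees.

```python
def greedy_partial_cover(sets, cost, profit, P):
    """Greedy heuristic for partial cover: pick sets by (new profit / cost)
    ratio until total covered profit reaches the bound P. Illustrative only."""
    covered, picked, gained = set(), [], 0
    while gained < P:
        best, best_ratio = None, 0.0
        for name, S in sets.items():
            if name in picked:
                continue
            new = sum(profit[e] for e in S - covered)
            if new and new / cost[name] > best_ratio:
                best, best_ratio = name, new / cost[name]
        if best is None:
            return None                    # profit bound unreachable
        gained += sum(profit[e] for e in sets[best] - covered)
        covered |= sets[best]
        picked.append(best)
    return picked

# Hypothetical instance: unit profits, profit bound P = 3.
sets = {"A": {1, 2}, "B": {2, 3, 4}, "C": {4}}
cost = {"A": 2, "B": 3, "C": 1}
profit = {1: 1, 2: 1, 3: 1, 4: 1}
solution = greedy_partial_cover(sets, cost, profit, 3)
```

After taking A (ratio 2/2), set C's single new element at cost 1 beats B's ratio, so the greedy stops at cost 3 with profit 3.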
Approximating unique games
In Proc. SODA’06, 2006
Abstract

Cited by 23 (1 self)
The Unique Games problem is the following: we are given a graph G = (V, E), with each edge e = (u, v) having a weight w_e and a permutation π_uv on [k]. The objective is to find a labeling of each vertex u with a label f_u ∈ [k] to minimize the weight of unsatisfied edges, where an edge (u, v) is satisfied if f_v = π_uv(f_u). The Unique Games Conjecture of Khot [8] essentially says that for each ε > 0, there is a k such that it is NP-hard to distinguish instances of Unique Games with (1 − ε) satisfiable edges from those with only ε satisfiable edges. Several hardness results have recently been proved based on this assumption, including optimal ones for Max-Cut, Vertex Cover and other problems, making it an important challenge to prove or refute the conjecture. In this paper, we give an O(log n)-approximation algorithm for the problem of minimizing the number of unsatisfied edges in any Unique game. Previous results of Khot [8] and Trevisan [12] imply that if the optimal solution has OPT = εm unsatisfied edges, semidefinite relaxations of the problem could give labelings with min{k^2 ε^(1/5), (ε log n)^(1/2)} · m unsatisfied edges. In this paper we show how to round an LP relaxation to get an O(log n)-approximation to the problem, i.e., to find a labeling with only O(εm log n) = O(OPT · log n) unsatisfied edges.
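The satisfaction rule f_v = π_uv(f_u) is easy to state in code. This small checker (the edge encoding and the two-label example are our own, not from any of the cited papers) computes the objective that the O(log n)-approximation algorithm minimizes:

```python
def unsat_weight(edges, labeling):
    """Total weight of unsatisfied edges in a Unique Games instance.
    Each edge is (u, v, w, pi), where pi is a dict encoding the permutation
    pi_uv on the label set; edge (u, v) is satisfied when
    labeling[v] == pi[labeling[u]]."""
    return sum(w for (u, v, w, pi) in edges
               if labeling[v] != pi[labeling[u]])

# k = 2 labels; the second edge's permutation swaps the two labels.
edges = [
    ("a", "b", 1.0, {0: 0, 1: 1}),   # identity: wants f_b == f_a
    ("b", "c", 2.0, {0: 1, 1: 0}),   # swap: wants f_c != f_b
]
labeling = {"a": 0, "b": 0, "c": 0}
bad = unsat_weight(edges, labeling)
```

Because every constraint is a permutation, knowing one endpoint's label forces the other's, which is exactly the "uniqueness" that distinguishes these games from general label cover.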