Results 1–10 of 25
Semidefinite Programming and Combinatorial Optimization
 DOC. MATH. J. DMV
, 1998
Abstract

Cited by 109 (1 self)
We describe a few applications of semidefinite programming in combinatorial optimization.
Outward rotations: a tool for rounding solutions of semidefinite programming relaxations, with applications to MAX CUT and other problems
, 1999
Abstract

Cited by 67 (7 self)
We present a tool, outward rotations, for enhancing the performance of several semidefinite-programming-based approximation algorithms. Using outward rotations, we obtain an approximation algorithm for MAX CUT that, in many interesting cases, performs better than the algorithm of Goemans and Williamson. We also obtain an improved approximation algorithm for MAX NAE{3}SAT. Finally, we provide some evidence that outward rotations can also be used to obtain improved approximation algorithms for MAX NAESAT and MAX SAT.
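The hyperplane rounding of Goemans and Williamson, and the outward-rotation idea this paper introduces, can be sketched as follows. This is a minimal illustration only: the unit vectors are assumed to come from an SDP solver (not shown), and the embedding follows the natural construction rather than the paper's exact parametrization.

```python
import math
import random

def hyperplane_round(vectors, rng):
    """Goemans-Williamson rounding: partition a set of unit vectors by
    the sign of their inner product with a random Gaussian direction."""
    dim = len(vectors[0])
    r = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    return [1 if sum(v[k] * r[k] for k in range(dim)) >= 0 else -1
            for v in vectors]

def outward_rotate(vectors, gamma):
    """Outward rotation (sketch): blend each unit vector v_i with a
    private coordinate axis e_i, embedding into a larger space as
    sqrt(1-gamma)*v_i concatenated with sqrt(gamma)*e_i.
    gamma = 0 recovers plain hyperplane rounding; gamma = 1 amounts to
    a uniformly random assignment. The result is again a unit vector."""
    n = len(vectors)
    out = []
    for i, v in enumerate(vectors):
        head = [math.sqrt(1.0 - gamma) * x for x in v]
        tail = [0.0] * n
        tail[i] = math.sqrt(gamma)
        out.append(head + tail)
    return out
```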
Hardness of learning halfspaces with noise
 In Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science
, 2006
Abstract

Cited by 40 (3 self)
Learning an unknown halfspace (also called a perceptron) from labeled examples is one of the classic problems in machine learning. In the noise-free case, when a halfspace consistent with all the training examples exists, the problem can be solved in polynomial time using linear programming. However, under the promise that a halfspace consistent with a fraction (1 − ε) of the examples exists (for some small constant ε > 0), it was not known how to efficiently find a halfspace that is correct on even 51% of the examples. Nor was a hardness result known that ruled out getting agreement on more than 99.9% of the examples. In this work, we close this gap in our understanding, and prove that even a tiny amount of worst-case noise makes the problem of learning halfspaces intractable in a strong sense. Specifically, for arbitrary ε, δ > 0, we prove that given a set of example-label pairs from the hypercube, a fraction (1 − ε) of which can be explained by a halfspace, it is NP-hard to find a halfspace that correctly labels a fraction (1/2 + δ) of the examples. The hardness result is tight, since it is trivial to get agreement on 1/2 of the examples. In learning-theory parlance, we prove that weak proper agnostic learning of halfspaces is hard. This settles a question that was raised by Blum et al. in their work on learning halfspaces in the presence of random classification noise [10], and in some more recent works as well. Along the way, we also obtain a strong hardness result for another basic computational problem: solving a linear system over the rationals.
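The noise-free case mentioned above is solvable by linear programming; as a lighter-weight illustration of the same setting on separable data, here is the textbook perceptron update the abstract alludes to by "perceptron". This is not the LP method, just a sketch: on linearly separable data it finds a consistent halfspace by the perceptron convergence theorem.

```python
def perceptron(examples, max_epochs=100):
    """Classic perceptron: given (x, y) pairs with y in {-1, +1} that
    are linearly separable (the noise-free case), repeatedly add
    misclassified examples to the weight vector until every example
    is classified correctly."""
    dim = len(examples[0][0])
    w = [0.0] * dim
    for _ in range(max_epochs):
        mistakes = 0
        for x, y in examples:
            # treat a zero dot product as a mistake as well
            if y * sum(w[k] * x[k] for k in range(dim)) <= 0:
                for k in range(dim):
                    w[k] += y * x[k]
                mistakes += 1
        if mistakes == 0:
            return w
    return None  # not separated within the epoch budget
```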
How to Round Any CSP
Abstract

Cited by 26 (3 self)
A large number of interesting combinatorial optimization
Approximation algorithms for MAX 4SAT and rounding procedures for semidefinite programs
 Journal of Algorithms
, 1999
Abstract

Cited by 24 (5 self)
Karloff and Zwick recently obtained an optimal 7/8-approximation algorithm for MAX 3SAT. In an attempt to see whether similar methods can be used to obtain a 7/8-approximation algorithm for MAX SAT, we consider the most natural generalization of MAX 3SAT, namely MAX 4SAT. We present a semidefinite programming relaxation of MAX 4SAT and a new family of rounding procedures that try to cope well with clauses of various sizes. We study the potential, and the limitations, of the relaxation and of the proposed family of rounding procedures using a combination of theoretical and experimental means. We select two rounding procedures from the proposed family. Using the first rounding procedure we seem to obtain an almost optimal 0.8721-approximation algorithm for MAX 4SAT. Using the second rounding procedure we seem to obtain an optimal 7/8-approximation algorithm for satisfiable instances of MAX 4SAT. On the other hand, we show that no rounding procedure from the family considered can be shown, using the current techniques, to yield an approximation algorithm for MAX 4SAT whose performance guarantee on all instances of the problem is greater than 0.8724. We also show that the integrality ratio of the proposed relaxation, as a relaxation of MAX {1,4}SAT, is at most 0.8754.
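For context on the 7/8 benchmark: a uniformly random assignment satisfies a clause with k distinct literals with probability 1 − 2^(−k), which is 7/8 for 3-clauses. The toy exact computation below illustrates this; the clause encoding (positive/negative integers for literals) is an assumption of this sketch, not taken from the paper.

```python
from itertools import product

def satisfied_fraction(clauses, assignment):
    """Fraction of clauses satisfied; literal +i means x_i, -i means not x_i."""
    sat = 0
    for clause in clauses:
        if any((assignment[abs(l)] if l > 0 else not assignment[abs(l)])
               for l in clause):
            sat += 1
    return sat / len(clauses)

def average_over_assignments(clauses, n):
    """Exact expected satisfied fraction under a uniform random assignment,
    by enumerating all 2^n assignments (fine for toy instances)."""
    total = 0.0
    count = 0
    for bits in product([False, True], repeat=n):
        assignment = {i + 1: bits[i] for i in range(n)}
        total += satisfied_fraction(clauses, assignment)
        count += 1
    return total / count
```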
The Complexity of Minimal Satisfiability Problems
, 2001
Abstract

Cited by 21 (1 self)
A dichotomy theorem for a class of decision problems is a result asserting that certain problems in the class are solvable in polynomial time, while the rest are NP-complete. The first remarkable such dichotomy theorem was proved by T.J. Schaefer in 1978. It concerns the class of generalized satisfiability problems Sat(S), whose input is a CNF(S)-formula, i.e., a formula constructed from elements of a fixed set S of generalized connectives using conjunctions and substitutions by variables. Here, we investigate the complexity of minimal satisfiability problems Min Sat(S), where S is a fixed set of generalized connectives. The input to such a problem is a CNF(S)-formula and a satisfying truth assignment; the question is to decide whether there is another satisfying truth assignment that is strictly smaller than the given truth assignment with respect to the coordinate-wise partial order on truth assignments. Minimal satisfiability problems were first studied by researchers in artificial intelligence while investigating the computational complexity of propositional circumscription. The question of whether dichotomy theorems can be proved for these problems was raised at that time, but was left open. We settle this question affirmatively by establishing a dichotomy theorem for the class of all Min Sat(S) problems, where S is a finite set of generalized connectives. We also prove a dichotomy theorem for a variant of Min Sat(S) in which the minimization is restricted to a subset of the variables, whereas the remaining variables may vary arbitrarily (this variant is related to extensions of propositional circumscription and was first studied by Cadoli). Moreover, we show that similar dichotomy theorems hold also when some of the variables are assigned constant values. Fi...
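The Min Sat question can be stated as a brute-force check, as sketched below. This is exponential by design; the point of the dichotomy theorem is to characterize when the question is decidable in polynomial time. Plain CNF clauses stand in here for general CNF(S)-formulas.

```python
from itertools import product

def satisfies(clauses, assignment):
    """CNF check; literal +i means x_i, -i means its negation."""
    return all(any((assignment[abs(l)] if l > 0 else not assignment[abs(l)])
                   for l in clause)
               for clause in clauses)

def strictly_smaller(a, b):
    """a < b in the coordinate-wise partial order: a <= b everywhere, a != b."""
    return all(a[i] <= b[i] for i in a) and a != b

def has_smaller_model(clauses, model, n):
    """Min Sat, brute force: is there a satisfying assignment strictly
    below the given satisfying assignment in the coordinate-wise order?"""
    for bits in product([False, True], repeat=n):
        cand = {i + 1: bits[i] for i in range(n)}
        if strictly_smaller(cand, model) and satisfies(clauses, cand):
            return True
    return False
```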
Near-optimal algorithms for maximum constraint satisfaction problems
 In SODA ’07: Proceedings of the eighteenth annual ACM-SIAM symposium on Discrete algorithms
, 2007
Abstract

Cited by 20 (3 self)
In this paper we present approximation algorithms for the maximum constraint satisfaction problem with k variables in each constraint (MAX kCSP). Given a (1 − ε)-satisfiable 2CSP, our first algorithm finds an assignment of variables satisfying a 1 − O(√ε) fraction of all constraints. The best previously known result, due to Zwick, was 1 − O(ε^{1/3}). The second algorithm finds a ck/2^k approximation for the MAX kCSP problem (where c > 0.44 is an absolute constant). This result improves on the previously best known algorithm, by Hast, which had an approximation guarantee of Ω(k/(2^k log k)). Both results are optimal assuming the Unique Games Conjecture, and are based on rounding natural semidefinite programming relaxations. We also believe that our algorithms and their analysis are simpler than those previously known.
Analyzing the MAX 2SAT and MAX DICUT Approximation Algorithms of Feige and Goemans
, 2000
Abstract

Cited by 12 (4 self)
We present a complete analysis of the MAX 2SAT and MAX DICUT approximation algorithms of Feige and Goemans using various analytical and computational tools. By fine-tuning the rotation functions used, we obtain minutely improved performance ratios for these problems. The rotation functions used for getting these improvements are essentially optimal, as the performance ratios obtained using them almost completely match upper bounds that we obtain on the performance ratios that can be achieved using any rotation function. We also discuss possibilities of getting improved approximation algorithms for these problems.
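The rotation step underlying these algorithms can be sketched as follows: if θ is the angle between a solution vector v_i and the distinguished vector v_0, the vector is replaced by one at angle f(θ) from v_0 before random-hyperplane rounding, where f is the rotation function being tuned. This is a minimal geometric illustration only; the actual Feige-Goemans rotation functions and the subsequent rounding are not reproduced.

```python
import math

def rotate_toward(v0, vi, f):
    """Move vi to angle f(theta) from v0 within the plane spanned by
    v0 and vi, where theta is the current angle between them.
    Both inputs are assumed to be unit vectors."""
    dim = len(v0)
    dot = sum(v0[k] * vi[k] for k in range(dim))
    theta = math.acos(max(-1.0, min(1.0, dot)))
    # u: unit vector orthogonal to v0 inside span(v0, vi)
    u = [vi[k] - dot * v0[k] for k in range(dim)]
    norm = math.sqrt(sum(x * x for x in u))
    if norm < 1e-12:          # vi parallel to v0: nothing to rotate
        return list(vi)
    u = [x / norm for x in u]
    t = f(theta)
    return [math.cos(t) * v0[k] + math.sin(t) * u[k] for k in range(dim)]
```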