Results 1 - 10 of 53
Linear vs. Semidefinite Extended Formulations: Exponential Separation and Strong Lower Bounds
2012. Cited by 51 (11 self).
We solve a 20-year-old problem posed by Yannakakis and prove that there exists no polynomial-size linear program (LP) whose associated polytope projects to the traveling salesman polytope, even if the LP is not required to be symmetric. Moreover, we prove that this holds also for the cut polytope and the stable set polytope. These results were discovered through a new connection that we make between one-way quantum communication protocols and semidefinite programming reformulations of LPs.
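These lower bounds rest on Yannakakis' factorization theorem, which equates the smallest size of an LP extended formulation of a polytope with the nonnegative rank of its slack matrix. A minimal sketch of what a slack matrix is (the square [-1, 1]^2 is chosen purely for illustration):

```python
import numpy as np

# Slack matrix of a polytope P = {x : A x <= b} with vertex set V:
# S[i, j] = b[i] - A[i] . v_j, the slack of vertex j in facet i.
# Yannakakis: the smallest LP extended formulation of P has size
# equal to the nonnegative rank of S.

# The square [-1, 1]^2: four facet inequalities and four vertices.
A = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])
b = np.array([1, 1, 1, 1])
V = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]])

S = b[:, None] - A @ V.T   # 4 x 4 slack matrix with entries in {0, 2}
print(S)
```

The hard part, which the paper addresses for the TSP, cut, and stable set polytopes, is lower-bounding the nonnegative rank of such a matrix when it is exponentially large.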
All quantum adversary methods are equivalent
Theory of Computing, 2006. Cited by 50 (4 self).
The quantum adversary method is one of the most versatile lower-bound methods for quantum algorithms. We show that all known variants of this method are equivalent: spectral adversary (Barnum, Saks, and Szegedy, 2003), weighted adversary (Ambainis, 2003), strong weighted adversary (Zhang, 2005), and the Kolmogorov complexity adversary (Laplante and Magniez, 2004). We also present a few new equivalent formulations of the method. This shows that there is essentially one quantum adversary method. From our approach, all known limitations of these versions of the quantum adversary method easily follow.
Theta Bodies for Polynomial Ideals
2009. Cited by 43 (8 self).
Inspired by a question of Lovász, we introduce a hierarchy of nested semidefinite relaxations of the convex hull of real solutions to an arbitrary polynomial ideal, called theta bodies of the ideal. For the stable set problem in a graph, the first theta body in this hierarchy is exactly Lovász’s theta body of the graph. We prove that theta bodies are, up to closure, a version of Lasserre’s relaxations for real solutions to ideals, and that they can be computed explicitly using combinatorial moment matrices. Theta bodies provide a new canonical set of semidefinite relaxations for the max cut problem. For vanishing ideals of finite point sets, we give several equivalent characterizations of when the first theta body equals the convex hull of the points. We also determine the structure of the first theta body for all ideals.
Quantum query complexity of state conversion
In Proc. of 52nd IEEE FOCS. Cited by 35 (2 self).
State conversion generalizes query complexity to the problem of converting between two input-dependent quantum states by making queries to the input. We characterize the complexity of this problem by introducing a natural information-theoretic norm that extends the Schur product operator norm. The complexity of converting between two systems of states is given by the distance between them, as measured by this norm. In the special case of function evaluation, the norm is closely related to the general adversary bound, a semidefinite program that lower-bounds the number of input queries needed by a quantum algorithm to evaluate a function. We thus obtain that the general adversary bound characterizes the quantum query complexity of any function whatsoever. This generalizes and simplifies the proof of the same result in the case of boolean input and output. Also in the case of function evaluation, we show that our norm satisfies a remarkable composition property, implying that the quantum query complexity of the composition of two functions is at most the product of the query complexities of the functions, up to a constant. Finally, our result implies that discrete and continuous-time query models are equivalent in the bounded-error setting, even for the general state-conversion problem.
Bounded-error quantum state identification and exponential separations in communication complexity
In Proc. of the 38th Symposium on Theory of Computing (STOC), 2006. Cited by 29 (16 self).
We consider the problem of bounded-error quantum state identification: given either state α0 or state α1, we are required to output ‘0’, ‘1’, or ‘?’ (“don’t know”), such that conditioned on outputting ‘0’ or ‘1’, our guess is correct with high probability. The goal is to maximize the probability of not outputting ‘?’. We prove a direct product theorem: if we are given two such problems, with optimal probabilities a and b, respectively, and the states in the first problem are pure, then the optimal probability for the joint bounded-error state identification problem is O(ab). Our proof is based on semidefinite programming duality. Using this result, we present two exponential separations in the simultaneous message passing model of communication complexity. First, we describe a relation that can be computed with O(log n) classical bits of communication in the presence of shared randomness, but needs Ω(n^{1/3}) communication if the parties don’t share randomness, even if communication is quantum. This shows the optimality of Yao’s recent exponential simulation of shared-randomness protocols by quantum protocols without shared randomness. Combined with an earlier separation in the other direction due to Bar-Yossef et al., this shows that the quantum SMP model is incomparable with the classical shared-randomness SMP model. Second, we describe a relation that can be computed with O(log n) classical bits of communication in the presence of shared entanglement, but needs Ω((n / log n)^{1/3}) communication if the parties share randomness but no entanglement, even if communication is quantum. This is the first example in communication complexity of a situation where entanglement buys you much more than quantum communication.
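For intuition, consider the error-free special case of this problem (unambiguous discrimination, where a ‘0’ or ‘1’ answer must always be correct): for two equiprobable pure states the optimal probability of a conclusive outcome is 1 − |⟨α0|α1⟩|, the Ivanovic-Dieks-Peres bound. A minimal sketch, with the two state vectors chosen purely for illustration:

```python
import numpy as np

def conclusive_prob(a0, a1):
    """Optimal probability of a conclusive ('0' or '1') outcome when
    unambiguously discriminating two equiprobable pure states:
    1 - |<a0|a1>| (Ivanovic-Dieks-Peres bound)."""
    overlap = abs(np.vdot(a0, a1))
    return 1.0 - overlap

# |0> versus |+> = (|0> + |1>)/sqrt(2): overlap is 1/sqrt(2)
a0 = np.array([1.0, 0.0])
a1 = np.array([1.0, 1.0]) / np.sqrt(2)
print(conclusive_prob(a0, a1))   # 1 - 1/sqrt(2), about 0.293
```

The paper's bounded-error setting relaxes the always-correct requirement to correctness with high probability, and its direct product theorem bounds how the conclusive probability behaves for two such problems run jointly.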
Semidefinite programs for completely bounded norms
2009. Cited by 20 (3 self).
The completely bounded trace and spectral norms in finite dimensions are shown to be expressible by semidefinite programs. This provides an efficient method by which these norms may be both calculated and verified, and gives alternate proofs of some known facts about them.
Maximum algebraic connectivity augmentation is NP-hard
Oper. Res. Lett. Cited by 19 (0 self).
The algebraic connectivity of a graph, which is the second-smallest eigenvalue of the Laplacian of the graph, is a measure of connectivity. We show that the problem of adding a specified number of edges to an input graph to maximize the algebraic connectivity of the augmented graph is NP-hard.
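While the augmentation problem is NP-hard, the objective itself is easy to evaluate; a minimal numpy sketch (the path graph P4 is chosen purely for illustration) showing how adding one edge changes the algebraic connectivity:

```python
import numpy as np

def algebraic_connectivity(n, edges):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    return np.sort(np.linalg.eigvalsh(L))[1]

path = [(0, 1), (1, 2), (2, 3)]           # path graph P4
cycle = path + [(3, 0)]                   # augment with one edge: cycle C4
print(algebraic_connectivity(4, path))    # 2 - sqrt(2), about 0.586
print(algebraic_connectivity(4, cycle))   # 2.0
```

The hardness result says that choosing which k edges to add so as to maximize this eigenvalue cannot be done in polynomial time unless P = NP.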
Online Oblivious Routing
In Proceedings of the ACM Symposium on Parallelism in Algorithms and Architectures (SPAA), 2003. Cited by 18 (3 self).
We consider an online version of the oblivious routing problem. Oblivious routing is the problem of picking a routing between each pair of nodes (or a set of flows), without knowledge of the traffic or demand between each pair, with the goal of minimizing the maximum congestion on any edge in the graph. In the online version of the problem, we consider a "repeated-game" setting, in which the algorithm is allowed to choose a new routing each night, but is still oblivious to the demands that will occur the next day. The cost of the algorithm at every time step is its competitive ratio, or the ratio of its congestion to the minimum possible congestion for the demands at that time step.
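The cost being minimized can be made concrete: given a fixed routing (one path per demand pair) and a day's demands, the congestion is the load on the most heavily loaded edge. A minimal sketch with a hypothetical four-node graph, unit capacities, and made-up demands:

```python
from collections import defaultdict

def max_congestion(paths, demands):
    """paths: {(s, t): [edge, ...]}, a fixed oblivious routing.
    demands: {(s, t): amount}. Returns the maximum load on any edge
    (unit capacities), the quantity oblivious routing tries to keep low."""
    load = defaultdict(float)
    for pair, amount in demands.items():
        for edge in paths[pair]:
            load[edge] += amount
    return max(load.values())

# Square graph a-b-c-d-a; this routing forces both demands through (a, b).
paths = {("a", "c"): [("a", "b"), ("b", "c")],
         ("a", "d"): [("a", "b"), ("b", "c"), ("c", "d")]}  # detour via b, c
demands = {("a", "c"): 1.0, ("a", "d"): 1.0}
print(max_congestion(paths, demands))  # 2.0: edges (a,b) and (b,c) carry both
```

Routing the ("a", "d") demand directly along the edge ("a", "d") would drop the congestion to 1.0, which is the kind of gap the competitive ratio measures against the offline optimum for that day's demands.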
Integrality Gaps of 2 − o(1) for Vertex Cover SDPs in the Lovász-Schrijver Hierarchy
In: ECCC TR: Electronic Colloquium on Computational Complexity, Technical Reports, 2006. Cited by 15 (6 self).
Linear and semidefinite programming are highly successful approaches for obtaining good approximations for NP-hard optimization problems. For example, breakthrough approximation algorithms for MAX CUT and SPARSEST CUT use semidefinite programming. Perhaps the most prominent NP-hard problem whose exact approximation factor is still unresolved is VERTEX COVER. PCP-based techniques of Dinur and Safra [7] show that it is not possible to achieve a factor better than 1.36; on the other hand, no known algorithm does better than the factor of 2 achieved by the simple greedy algorithm. Furthermore, there is a widespread belief that SDP techniques are the most promising methods available for improving upon this factor of 2. Following a line of study initiated by Arora et al. [3], our aim is to show that a large family of LP- and SDP-based algorithms fail to produce an approximation for VERTEX COVER better than 2. Lovász and Schrijver [21] introduced the systems LS and LS+ for systematically tightening LP and SDP relaxations, respectively, over many rounds. These systems naturally capture large classes of LP and SDP relaxations; indeed, LS+ captures the celebrated SDP-based algorithms for MAX CUT and SPARSEST CUT mentioned above. We rule out polynomial-time 2 − Ω(1) approximations for VERTEX COVER using LS+. In particular, we prove an integrality gap of 2 − o(1) for VERTEX COVER SDPs obtained by tightening the standard LP relaxation with Ω(log n / log log n) rounds of LS+. While tight integrality gaps were known for VERTEX COVER in the weaker LS system [23], previous results did not rule out a 2 − Ω(1) approximation after even two rounds of LS+.
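The integrality gap of 2 already appears for the plain LP relaxation: on the complete graph K_n the all-1/2 vector is feasible (and optimal) with value n/2, while any integral vertex cover needs n − 1 vertices, a ratio of 2 − 2/n. A minimal brute-force sketch (n = 6 chosen purely to keep the enumeration small):

```python
from itertools import combinations

n = 6
edges = list(combinations(range(n), 2))   # complete graph K_n

# Smallest integral vertex cover, found by brute force.
def is_cover(S):
    return all(u in S or v in S for u, v in edges)

int_opt = min(k for k in range(n + 1)
              if any(is_cover(set(S)) for S in combinations(range(n), k)))

# The all-1/2 point satisfies x_u + x_v >= 1 on every edge, with value n/2.
lp_value = n / 2

print(int_opt, lp_value, int_opt / lp_value)  # 5 3.0 and ratio 2 - 2/6
```

The tightening systems LS and LS+ add rounds of derived constraints to shrink this gap; the paper shows that even Ω(log n / log log n) rounds of LS+ leave a gap of 2 − o(1) on suitable instances.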
Convex hulls of algebraic sets
2010. Cited by 10 (1 self).
This article describes a method to compute successive convex approximations of the convex hull of a set of points in R^n that are the solutions to a system of polynomial equations over the reals. The method relies on sums of squares of polynomials and the dual theory of moment matrices. The main feature of the technique is that all computations are done modulo the ideal generated by the polynomials defining the set to be convexified. This work was motivated by questions raised by Lovász concerning extensions of the theta body of a graph to arbitrary real algebraic varieties, and hence the relaxations described here are called theta bodies. The convexification process can be seen as an incarnation of Lasserre’s hierarchy of convex relaxations of a semialgebraic set in R^n. When the defining ideal is real radical the results become especially nice. We provide several examples of the method and discuss convergence issues. Finite convergence, especially after the first step of the method, can be described explicitly for finite point sets.