Results 1–10 of 151
TENSOR RANK AND THE ILL-POSEDNESS OF THE BEST LOW-RANK APPROXIMATION PROBLEM
Cited by 193 (13 self)
There has been continued interest in seeking a theorem describing optimal low-rank approximations to tensors of order 3 or higher that parallels the Eckart–Young theorem for matrices. In this paper, we argue that the naive approach to this problem is doomed to failure because, unlike matrices, tensors of order 3 or higher can fail to have best rank-r approximations. The phenomenon is much more widespread than one might suspect: examples of this failure can be constructed over a wide range of dimensions, orders and ranks, regardless of the choice of norm (or even Brègman divergence). Moreover, we show that in many instances these counterexamples have positive volume: they cannot be regarded as isolated phenomena. In one extreme case, we exhibit a tensor space in which no rank-3 tensor has an optimal rank-2 approximation. The notable exceptions to this misbehavior are rank-1 tensors and order-2 tensors (i.e. matrices). In a more positive spirit, we propose a natural way of overcoming the ill-posedness of the low-rank approximation problem, by using weak solutions when true solutions do not exist. For this to work, it is necessary to characterize the set of weak solutions, and we do this in the case of rank 2, order 3 (in arbitrary dimensions). In our work we emphasize the importance of closely studying concrete low-dimensional examples as a first step towards more general results. To this end, we present a detailed analysis of equivalence classes of 2 × 2 × 2 tensors, and we develop methods for extending results upwards to higher orders and dimensions. Finally, we link our work to existing studies of tensors from an algebraic geometric point of view. The rank of a tensor can in theory be given a semialgebraic description; in other words, it can be determined by a system of polynomial inequalities. We study some of these polynomials in cases of interest to us; in particular we make extensive use of the hyperdeterminant ∆ on ℝ^{2×2×2}.
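The nonexistence result is easy to observe numerically. Below is a minimal NumPy sketch (our own illustration, using the standard 2 × 2 × 2 example of this phenomenon): a rank-3 tensor T is approached arbitrarily closely by rank-2 tensors, so the infimum of the approximation error over rank-2 tensors is 0 but is attained by no rank-2 tensor.

```python
import numpy as np

# Standard 2x2x2 example of a rank-3 tensor with border rank 2:
# T = e1(x)e1(x)e2 + e1(x)e2(x)e1 + e2(x)e1(x)e1.
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def outer3(a, b, c):
    """Rank-1 order-3 tensor a (x) b (x) c."""
    return np.einsum('i,j,k->ijk', a, b, c)

T = outer3(e1, e1, e2) + outer3(e1, e2, e1) + outer3(e2, e1, e1)

def rank2_approx(n):
    """Rank-2 tensor n*(e1 + e2/n)^(x)3 - n*e1^(x)3, which tends to T."""
    v = e1 + e2 / n
    return n * outer3(v, v, v) - n * outer3(e1, e1, e1)

for n in (1, 10, 100, 1000):
    print(f"n = {n:4d}   ||T - T_n||_F = {np.linalg.norm(T - rank2_approx(n)):.2e}")
# The error decays like sqrt(3)/n: the infimum over rank-2 tensors is 0,
# yet T itself has rank 3, so no best rank-2 approximation exists.
```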
Approximating the cut-norm via Grothendieck’s inequality
 Proc. of the 36th ACM STOC
, 2004
Consequences and Limits of Nonlocal Strategies
, 2010
Cited by 120 (20 self)
This paper investigates the powers and limitations of quantum entanglement in the context of cooperative games of incomplete information. We give several examples of such nonlocal games where strategies that make use of entanglement outperform all possible classical strategies. One implication of these examples is that entanglement can profoundly affect the soundness property of two-prover interactive proof systems. We then establish limits on the probability with which strategies making use of entanglement can win restricted types of nonlocal games. These upper bounds may be regarded as generalizations of Tsirelson-type inequalities, which place bounds on the extent to which quantum information can allow for the violation of Bell inequalities. We also investigate the amount of entanglement required by optimal and nearly optimal quantum strategies for some games.
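A concrete instance of such a separation (our own sketch, not taken from the paper) is the well-known CHSH game: the players receive uniform bits s and t and win iff their answers satisfy a ⊕ b = s ∧ t. Brute force over deterministic classical strategies caps the classical winning probability at 3/4, while entangled players can reach Tsirelson’s bound cos²(π/8) ≈ 0.854.

```python
import itertools
import numpy as np

# CHSH game: referee sends uniform bits s, t; players answer a, b without
# communicating; they win iff a XOR b == s AND t.
def win_prob(fa, fb):
    """Winning probability of deterministic strategies fa, fb: {0,1} -> {0,1}."""
    return sum((fa[s] ^ fb[t]) == (s & t) for s in (0, 1) for t in (0, 1)) / 4.0

# Each player's strategy is one of the 4 functions {0,1} -> {0,1};
# shared randomness cannot beat the best deterministic pair.
funcs = list(itertools.product((0, 1), repeat=2))
classical = max(win_prob(fa, fb) for fa in funcs for fb in funcs)

# Tsirelson's bound, attained with one shared EPR pair.
quantum = np.cos(np.pi / 8) ** 2

print(f"best classical winning probability: {classical}")    # 0.75
print(f"entangled winning probability:      {quantum:.4f}")  # 0.8536
```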
Grothendieck’s theorem for operator spaces
 Invent. Math
Cited by 54 (13 self)
We prove several versions of Grothendieck’s Theorem for completely bounded linear maps T: E → F*, when E and F are operator spaces. We prove that if E, F are C*-algebras, of which at least one is exact, then every completely bounded T: E → F* can be factorized through the direct sum of the row and column Hilbert operator spaces. Equivalently, T can be decomposed as T = Tr + Tc where Tr (resp. Tc) factors completely boundedly through a row (resp. column) Hilbert operator space. This settles positively (at least partially) some earlier conjectures of Effros–Ruan and Blecher on the factorization of completely bounded bilinear forms on C*-algebras. Moreover, our result holds more generally for any pair E, F of “exact” operator spaces. This yields a characterization of the completely bounded maps from a C*-algebra (or from an exact operator space) to the operator Hilbert space OH. As a corollary we prove that, up to a complete isomorphism, the row and column Hilbert operator spaces and their direct sums are the only operator spaces E such that both E and its dual E* are exact. We also characterize the Schur multipliers which are completely bounded from the space of compact operators to the ...
Quadratic forms on graphs
 Invent. Math
, 2005
Cited by 51 (10 self)
We introduce a new graph parameter, called the Grothendieck constant of a graph G = (V, E), which is defined as the least constant K such that for every A: E → R, ...
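To make the definition concrete, here is a small self-contained illustration (ours, not from the paper) on the triangle C3 with weight A ≡ −1 on every edge: the best ±1 labelling of the vertices achieves 1, while unit vectors in the plane achieve 3/2, witnessing that the Grothendieck constant of the triangle is at least 3/2.

```python
import itertools
import numpy as np

# Triangle graph C3 with weight A(u,v) = -1 on every edge.
edges = [(0, 1), (0, 2), (1, 2)]

# Integer side of the definition: max over eps in {-1,+1}^3 of
# sum over edges of A(u,v) * eps_u * eps_v.
int_opt = max(sum(-eps[u] * eps[v] for (u, v) in edges)
              for eps in itertools.product((-1, 1), repeat=3))

# Vector side: max over unit vectors x_u of sum A(u,v) <x_u, x_v>.
# Planar unit vectors suffice for this objective; fix theta_0 = 0 by
# rotational symmetry and grid-search the remaining two angles.
thetas = np.linspace(0.0, 2 * np.pi, 721)
T1, T2 = np.meshgrid(thetas, thetas)
vec_opt = (-(np.cos(T1) + np.cos(T2) + np.cos(T1 - T2))).max()

print(f"best +/-1 labelling: {int_opt}")       # 1
print(f"best unit vectors:   {vec_opt:.4f}")   # ~1.5 (angles 120 deg apart)
print(f"=> K(C3) >= {vec_opt / int_opt:.4f}")
```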
TWO PROPOSALS FOR ROBUST PCA USING SEMIDEFINITE PROGRAMMING
, 2010
Cited by 50 (2 self)
The performance of principal component analysis (PCA) suffers badly in the presence of outliers. This paper proposes two novel approaches for robust PCA based on semidefinite programming. The first method, maximum mean absolute deviation rounding (MDR), seeks directions of large spread in the data while damping the effect of outliers. The second method produces a low-leverage decomposition (LLD) of the data that attempts to form a low-rank model for the data by separating out corrupted observations. This paper also presents efficient computational methods for solving these SDPs. Numerical experiments confirm the value of these new techniques.
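The fragility that motivates both proposals is easy to reproduce. The sketch below (a generic illustration, not the paper’s MDR or LLD procedures) shows a single gross outlier rotating the top principal direction of an otherwise well-behaved point cloud from the x-axis to, essentially, the y-axis.

```python
import numpy as np

rng = np.random.default_rng(0)

def top_direction(X):
    """Top principal direction: leading right singular vector of centered data."""
    Xc = X - X.mean(axis=0)
    return np.linalg.svd(Xc, full_matrices=False)[2][0]

# 200 inliers spread along the x-axis (std 10 in x, std 0.5 in y).
inliers = rng.normal(size=(200, 2)) * np.array([10.0, 0.5])
clean_dir = top_direction(inliers)

# One gross outlier far along the y-axis corrupts the fit.
corrupted = np.vstack([inliers, [0.0, 500.0]])
corrupt_dir = top_direction(corrupted)

ex = np.array([1.0, 0.0])
print(f"|<top PC, x-axis>| without outlier: {abs(clean_dir @ ex):.3f}")   # ~1.0
print(f"|<top PC, x-axis>| with outlier:    {abs(corrupt_dir @ ex):.3f}") # ~0.0
```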
A mad day’s work: from Grothendieck to Connes and Kontsevich. The evolution of concepts of space and symmetry
 Bull. Amer. Math. Soc. (N.S
, 2001
Cited by 48 (0 self)
To add to the chorus of praise by referring to my own experience would be of little interest, but I am in no way forgetting the facilities for work provided by the Institut des Hautes Études Scientifiques (IHES) for so many years, particularly the constantly renewed opportunities for meetings and exchanges. While there have ...
Embedding of the operator space OH and the logarithmic “little Grothendieck inequality”
 Invent. Math
SDP gaps and UGC-hardness for MaxCutGain
, 2008
Cited by 26 (3 self)
Given a graph with maximum cut of (fractional) size c, the Goemans–Williamson semidefinite programming (SDP) based algorithm is guaranteed to find a cut of size at least 0.878·c. However, this guarantee becomes trivial when c is near 1/2, since making random cuts guarantees a cut of size 1/2 (i.e., half of all edges). A few years ago, Charikar and Wirth (analyzing an algorithm of Feige and Langberg) showed that given a graph with maximum cut 1/2 + ε, one can find a cut of size 1/2 + Ω(ε/log(1/ε)). The main contribution of our paper is twofold:
1. We give a natural and explicit 1/2 + ε vs. 1/2 + O(ε/log(1/ε)) integrality gap for the Max-Cut SDP based on Euclidean space with the Gaussian probability distribution. This shows that the SDP-rounding algorithm of Charikar–Wirth is essentially best possible.
2. We show how this SDP gap can be translated into a Long Code test with the same parameters. This implies that beating the Charikar–Wirth guarantee with any efficient algorithm is NP-hard, assuming the Unique Games Conjecture (UGC). This result essentially settles the asymptotic approximability of Max-Cut, assuming UGC.
Building on the first contribution, we show how “randomness reduction” on related SDP gaps for the Quadratic-Programming problem lets us make the Ω(log(1/ε)) gap as large as Ω(log n) for n-vertex graphs. In addition to optimally answering an open question ...
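The 0.878 constant quoted above can be checked numerically: under random-hyperplane rounding, an edge whose unit vectors meet at angle θ is cut with probability θ/π, while the SDP credits it (1 − cos θ)/2, so the guarantee is the minimum over θ of the ratio of the two. A quick sketch:

```python
import numpy as np

# alpha_GW = min over theta in (0, pi] of (theta/pi) / ((1 - cos theta)/2):
# hyperplane-rounding cut probability divided by the SDP's contribution.
theta = np.linspace(1e-6, np.pi, 2_000_000)
ratio = (theta / np.pi) / ((1.0 - np.cos(theta)) / 2.0)
alpha = ratio.min()
theta_star = theta[ratio.argmin()]

print(f"alpha  ~= {alpha:.5f}")           # ~0.87856
print(f"theta* ~= {theta_star:.4f} rad")  # ~2.3311
```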