Approximating the cut-norm via Grothendieck’s inequality
 Proc. of the 36th ACM STOC
, 2004
Quadratic forms on graphs
 Invent. Math.
, 2005
Abstract

Cited by 51 (10 self)
We introduce a new graph parameter, called the Grothendieck constant of a graph G = (V, E), which is defined as the least constant K such that for every A: E → R,

sup_{f: V → S^{|V|−1}} Σ_{(u,v)∈E} A(u, v) ⟨f(u), f(v)⟩ ≤ K · sup_{f: V → {−1,1}} Σ_{(u,v)∈E} A(u, v) f(u) f(v).
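The sign-valued right-hand side of this definition can be made concrete on a toy instance: for a small graph it is feasible to brute-force the supremum over all assignments f: V → {−1, 1}. A minimal sketch (the graph and edge weights below are made up for illustration):

```python
import itertools

# Toy graph G = (V, E) with edge weights A: E -> R (illustrative values only).
V = range(4)
E = {(0, 1): 1.0, (1, 2): -2.0, (2, 3): 1.5, (0, 3): -1.0, (0, 2): 0.5}

# Brute-force sup over f: V -> {-1, 1} of sum_{(u,v) in E} A(u,v) f(u) f(v).
best = max(
    sum(a * f[u] * f[v] for (u, v), a in E.items())
    for f in itertools.product([-1, 1], repeat=len(V))
)
print(best)  # → 5.0
```

Note that the maximum 5.0 falls short of the sum of absolute weights 6.0: the sign constraints around the graph's cycles cannot all be satisfied simultaneously, which is exactly the combinatorial obstruction that makes this quantity hard in general.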
The Grothendieck constant is strictly smaller than Krivine’s bound
 In 52nd Annual IEEE Symposium on Foundations of Computer Science. Preprint available at http://arxiv.org/abs/1103.6161
, 2011
Abstract

Cited by 17 (2 self)
The (real) Grothendieck constant K_G is the infimum over those K ∈ (0, ∞) such that for every m, n ∈ N and every m × n real matrix (a_ij) we have

max_{{x_i}_{i=1}^m, {y_j}_{j=1}^n ⊆ S^{n+m−1}} Σ_{i=1}^{m} Σ_{j=1}^{n} a_ij ⟨x_i, y_j⟩ ≤ K · max_{{ε_i}_{i=1}^m, {δ_j}_{j=1}^n ⊆ {−1,1}} Σ_{i=1}^{m} Σ_{j=1}^{n} a_ij ε_i δ_j.

The classical Grothendieck inequality asserts the nonobvious fact that the above inequality does hold true for some K ∈ (0, ∞) that is independent of m, n and (a_ij). Since Grothendieck’s 1953 discovery of this powerful theorem, it has found numerous applications in a variety of areas, but despite attracting a lot of attention, the exact value of the Grothendieck constant K_G remains a mystery. The last progress on this problem was in 1977, when Krivine proved that K_G ≤ π/(2 log(1+√2)) and conjectured that his bound is optimal. Krivine’s conjecture has been restated repeatedly since 1977, focusing subsequent research on the search for examples of matrices (a_ij) which exhibit (asymptotically, as m, n → ∞) a lower bound on K_G that matches Krivine’s bound. Here we obtain an improved Grothendieck inequality that holds for all matrices (a_ij) and yields a bound K_G < π/(2 log(1+√2)) − ε_0 for some effective constant ε_0 > 0. Other than disproving Krivine’s conjecture, and along the way also disproving an intermediate conjecture of König that was made in 2000 as a step towards Krivine’s conjecture, our main contribution is conceptual: despite dealing with a binary rounding problem, random 2-dimensional projections, when combined with a careful partition of R² in order to round the projected vectors to values in {−1, 1}, perform better than the ubiquitous random hyperplane technique. By establishing the usefulness of higher-dimensional rounding schemes, this fact has consequences in approximation algorithms. Specifically, it yields the best known polynomial-time approximation algorithm for the Frieze-Kannan Cut Norm problem, a generic and well-studied optimization problem with many applications.
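The "ubiquitous random hyperplane technique" that this abstract improves upon can be sketched in a few lines: pick a random direction g and round each vector to the sign of its projection onto g. In the sketch below the unit vectors are random stand-ins for the true SDP optimizers (solving the actual SDP, and the paper's 2-dimensional rounding, are beyond this illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance of the Grothendieck inequality: a random m x n matrix (a_ij).
m, n = 5, 5
A = rng.standard_normal((m, n))

# Hypothetical "SDP solution": random unit vectors in place of the true
# optimizers x_i, y_j on the sphere S^{n+m-1}.
d = m + n
X = rng.standard_normal((m, d)); X /= np.linalg.norm(X, axis=1, keepdims=True)
Y = rng.standard_normal((n, d)); Y /= np.linalg.norm(Y, axis=1, keepdims=True)
vector_value = np.sum(A * (X @ Y.T))  # sum_ij a_ij <x_i, y_j>

# Random hyperplane rounding: a random direction g rounds each vector to the
# sign of its projection, producing a feasible +/-1 assignment (eps, delta).
g = rng.standard_normal(d)
eps, delta = np.sign(X @ g), np.sign(Y @ g)
rounded_value = eps @ A @ delta       # sum_ij a_ij eps_i delta_j

print(vector_value, rounded_value)
```

The paper's point is that rounding via random 2-dimensional projections plus a careful partition of R² beats this 1-dimensional scheme in the worst case.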
Towards computing the Grothendieck constant
 In SODA ’09: Proceedings of the 20th Annual ACM-SIAM Symposium on Discrete Algorithms
, 2009
Abstract

Cited by 16 (2 self)
The Grothendieck constant K_G is the smallest constant such that for every d ∈ N and every matrix A = (a_ij),

sup_{u_i, v_j ∈ B(d)} Σ_ij a_ij ⟨u_i, v_j⟩ ≤ K_G · sup_{x_i, y_j ∈ [−1,1]} Σ_ij a_ij x_i y_j,

where B(d) is the unit ball in R^d. Despite several efforts [15, 23], the value of the constant K_G remains unknown. The Grothendieck constant K_G is precisely the integrality gap of a natural SDP relaxation for the K_{M,N}-Quadratic Programming problem. The input to this problem is a matrix A = (a_ij) and the objective is to maximize the quadratic form Σ_ij a_ij x_i y_j over x_i, y_j ∈ [−1, 1]. In this work, we apply techniques from [22] to the K_{M,N}-Quadratic Programming problem. Using some standard but nontrivial modifications, the reduction in [22] yields the following hardness result: assuming the Unique Games Conjecture [9], it is NP-hard to approximate the K_{M,N}-Quadratic Programming problem to any factor better than the Grothendieck constant K_G. By adapting a “bootstrapping” argument used in a proof of the Grothendieck inequality [5], we are able to perform a tighter analysis than [22]. Through this careful analysis, we obtain the following new results:
◦ An approximation algorithm for K_{M,N}-Quadratic Programming that is guaranteed to achieve an approximation ratio arbitrarily close to the Grothendieck constant K_G (the optimal approximation ratio assuming the Unique Games Conjecture).
◦ We show that the Grothendieck constant K_G can be computed within an error η, in time depending only on η. Specifically, for each η, we formulate an explicit finite linear program whose optimum is η-close to the Grothendieck constant. We also exhibit a simple family of operators on the Gaussian Hilbert space that is guaranteed to contain tight examples for the Grothendieck inequality.
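The K_{M,N}-Quadratic Programming objective is bilinear, so its maximum over the box [−1, 1]^m × [−1, 1]^n is attained at a vertex, i.e. at sign vectors; moreover, once x is fixed the optimal y is simply y_j = sign((xᵀA)_j). This makes small instances exactly solvable by brute force over x alone. A minimal sketch (the matrix below is made up for illustration):

```python
import itertools
import numpy as np

# Toy K_{M,N} instance: maximize sum_ij a_ij x_i y_j over x_i, y_j in [-1, 1].
A = np.array([[1.0, -2.0], [3.0, 0.5], [-1.0, 1.0]])
m, n = A.shape

best = -np.inf
for x in itertools.product([-1, 1], repeat=m):
    # For fixed x the objective is linear in y, so the optimal y is the sign
    # vector y_j = sign(sum_i a_ij x_i), giving value sum_j |(x^T A)_j|.
    s = np.asarray(x) @ A
    best = max(best, np.abs(s).sum())

print(best)  # → 7.5
```

The SDP relaxation discussed in the abstract replaces the scalars x_i, y_j with unit vectors; K_G is exactly the worst-case ratio between the two optima.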
Separable Lifting Property And Extensions Of Local Reflexivity
, 2000
Abstract

Cited by 15 (3 self)
A Banach space X is said to have the separable lifting property if for every subspace Y of X** containing X and such that Y/X is separable there exists a bounded linear lifting from Y/X to Y. We show that if a sequence of Banach spaces E_1, E_2, ... has the joint uniform approximation property and E_n is c-complemented in E_n** for every n (with c fixed), then (Σ_n E_n)_0 has the separable lifting property. In particular, if E_n is an L_{p_n,λ} space for every n (1 < p_n < ∞, λ independent of n), an L_∞ or an L_1 space, then (Σ_n E_n)_0 has the separable lifting property. We also show that there exists a Banach space X which is not extendably locally reflexive; moreover, for every n there exists an n-dimensional subspace E ⊆ X** such that if u: X** → X** is an operator (= bounded linear operator) such that u(E) ⊆ X, then ‖(u|_E)^{−1}‖ · ‖u‖ ≥ c√n, where c is a numerical constant. 1. Introduction. At the root of this investigat...
Finite dimensional subspaces of L_p
Abstract

Cited by 13 (2 self)
this article, we chose to devote this section to describing the change of densities that arise later. It turns out that the framework in which this technique is most naturally used is that of an L_p(μ) space when μ is a probability. For us there is no loss of generality in restricting to that case since the space ...
Convex games in Banach spaces
, 2009
Abstract

Cited by 11 (6 self)
We study the regret of an online learner playing a multi-round game in a Banach space B against an adversary that plays a convex function at each round. We characterize the minimax regret when the adversary plays linear functions in terms of the martingale type of the dual of B. The cases when the adversary plays bounded and uniformly convex functions respectively are also considered. Our results connect online convex learning to the study of the geometry of Banach spaces. We also show that appropriate modifications of the Mirror Descent algorithm from convex optimization can be used to achieve our regret upper bounds. Finally, we provide a version of Mirror Descent that adapts to the changing exponent of uniform convexity of the adversary’s functions. This adaptive mirror descent strategy provides new algorithms even for the more familiar Hilbert space case where the loss functions on each round have varying exponents of uniform convexity (curvature).
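In the Hilbert-space case with linear losses, Mirror Descent with the Euclidean mirror map reduces to projected online gradient descent. The sketch below illustrates that special case only: the "adversary" plays random rather than adversarial linear losses, and the step size and horizon are illustrative, not the paper's tuned constants.

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal online linear game: the learner plays w_t in the unit Euclidean
# ball, then sees a linear loss <g_t, .>. Euclidean mirror descent is just
# gradient descent followed by projection onto the ball.
T, d = 200, 5
G = rng.standard_normal((T, d))   # the adversary's linear loss vectors
eta = 1.0 / np.sqrt(T)            # standard step size for O(sqrt(T)) regret

w = np.zeros(d)
loss = 0.0
for g in G:
    loss += g @ w                 # suffer the linear loss at w_t
    w -= eta * g                  # gradient step (gradient of <g, .> is g)
    nrm = np.linalg.norm(w)
    if nrm > 1.0:                 # project back onto the unit ball
        w /= nrm

# Regret against the best fixed point of the ball in hindsight, which for
# linear losses is -sum(G) / ||sum(G)||, achieving loss -||sum(G)||.
s = G.sum(axis=0)
regret = loss - (-np.linalg.norm(s))
print(regret)
```

The Banach-space results in the abstract generalize this picture: the achievable regret exponent is governed by the martingale type of B*, and the mirror map is chosen to match the geometry of B rather than the Euclidean norm.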