The Convex Geometry of Linear Inverse Problems
, 2010
Abstract

Cited by 180 (19 self)
In applications throughout science and engineering one is often faced with the challenge of solving an ill-posed inverse problem, where the number of available measurements is smaller than the dimension of the model to be estimated. However, in many practical situations of interest, models are constrained structurally so that they only have a few degrees of freedom relative to their ambient dimension. This paper provides a general framework to convert notions of simplicity into convex penalty functions, resulting in convex optimization solutions to linear, underdetermined inverse problems. The class of simple models considered are those formed as the sum of a few atoms from some (possibly infinite) elementary atomic set; examples include well-studied cases such as sparse vectors (e.g., signal processing, statistics) and low-rank matrices (e.g., control, statistics), as well as several others including sums of a few permutation matrices (e.g., ranked elections, multi-object tracking), low-rank tensors (e.g., computer vision, neuroscience), orthogonal matrices (e.g., machine learning), and atomic measures (e.g., system identification). The convex programming formulation is based on minimizing the norm induced by the convex hull of the atomic set; this norm is referred to as the atomic norm. The facial …
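As a quick illustration of the two best-known atomic norms named in the abstract (this sketch and its values are ours, not quoted from the paper): for the atomic set of signed standard basis vectors the induced norm is the l1 norm, and for unit-norm rank-1 matrices it is the nuclear norm.

```python
import numpy as np

# Illustration only: for atoms {+e_i, -e_i} the atomic norm of a vector
# is its l1 norm; for unit-norm rank-1 matrices it is the nuclear norm
# (the sum of singular values).
x = np.array([3.0, -1.0, 0.0, 2.0])
l1 = np.abs(x).sum()                                # atomic norm for sparse vectors

M = np.outer([1.0, 2.0], [2.0, 1.0])                # a rank-1 matrix
nuclear = np.linalg.svd(M, compute_uv=False).sum()  # atomic norm for low-rank matrices
print(l1, nuclear)                                  # 6.0 and ~5.0
```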
Consequences and Limits of Nonlocal Strategies
, 2010
Abstract

Cited by 119 (20 self)
This paper investigates the powers and limitations of quantum entanglement in the context of cooperative games of incomplete information. We give several examples of such nonlocal games where strategies that make use of entanglement outperform all possible classical strategies. One implication of these examples is that entanglement can profoundly affect the soundness property of two-prover interactive proof systems. We then establish limits on the probability with which strategies making use of entanglement can win restricted types of nonlocal games. These upper bounds may be regarded as generalizations of Tsirelson-type inequalities, which place bounds on the extent to which quantum information can allow for the violation of Bell inequalities. We also investigate the amount of entanglement required by optimal and nearly optimal quantum strategies for some games.
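The standard concrete instance of such a nonlocal game is CHSH (supplied here as an illustration; the abstract does not name it): brute-forcing all deterministic classical strategies shows the classical value is 3/4, while Tsirelson's bound gives cos²(π/8) ≈ 0.854 for entangled strategies.

```python
import itertools
import math

# CHSH game: players receive bits x, y and answer bits a, b without
# communicating; they win iff a XOR b == x AND y. Shared randomness does
# not help, so the classical value is attained by a deterministic strategy.
best = 0.0
for a in itertools.product([0, 1], repeat=2):        # Alice's answer table a[x]
    for b in itertools.product([0, 1], repeat=2):    # Bob's answer table b[y]
        wins = sum(int((a[x] ^ b[y]) == (x & y))
                   for x in (0, 1) for y in (0, 1))
        best = max(best, wins / 4)

quantum = math.cos(math.pi / 8) ** 2  # Tsirelson's bound: optimal entangled value
print(best, quantum)                  # 0.75 vs ~0.8536
```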
Convergent Sequences of Dense Graphs I: Subgraph Frequencies, Metric Properties and Testing
, 2006
Abstract

Cited by 98 (6 self)
We consider sequences of graphs (Gn) and define various notions of convergence related to these sequences: “left convergence,” defined in terms of the densities of homomorphisms from small graphs into Gn; “right convergence,” defined in terms of the densities of homomorphisms from Gn into small graphs; and convergence in a suitably defined metric. In Part I of this series, we show that left convergence is equivalent to convergence in metric, both for simple graphs Gn and for graphs Gn with node-weights and edge-weights. One of the main steps here is the introduction of a cut-distance comparing graphs, not necessarily of the same size. We also show how these notions of convergence provide natural …
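The quantity behind “left convergence” can be computed by brute force for small graphs (a sketch of ours, not the paper's code): the homomorphism density t(F, G) is the probability that a uniformly random map V(F) → V(G) sends every edge of F to an edge of G.

```python
import itertools

def hom_density(F_edges, nF, adj):
    """Brute-force homomorphism density t(F, G): fraction of all maps
    V(F) -> V(G) that carry every edge of F to an edge of G."""
    n = len(adj)
    hits = sum(all(adj[phi[u]][phi[v]] for u, v in F_edges)
               for phi in itertools.product(range(n), repeat=nF))
    return hits / n ** nF

K3 = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]   # adjacency matrix of a triangle
print(hom_density([(0, 1)], 2, K3))      # edge density of K3: 6/9
```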
Revisiting Frank-Wolfe: Projection-free sparse convex optimization
 In ICML
, 2013
Abstract

Cited by 80 (2 self)
We provide stronger and more general primal-dual convergence results for Frank-Wolfe-type algorithms (a.k.a. conditional gradient) for constrained convex optimization, enabled by a simple framework of duality gap certificates. Our analysis also holds if the linear subproblems are only solved approximately (as well as if the gradients are inexact), and is proven to be worst-case optimal in the sparsity of the obtained solutions. On the application side, this allows us to unify a large variety of existing sparse greedy methods, in particular for optimization over convex hulls of an atomic set, even if those sets can only be approximated, including sparse (or structured sparse) vectors or matrices, low-rank matrices, permutation matrices, or max-norm bounded matrices. We present a new general framework for convex optimization over matrix factorizations, where every Frank-Wolfe iteration will consist of a low-rank update, and discuss the broad application areas of this approach.
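A minimal sketch of a Frank-Wolfe iteration with a duality gap certificate, for the atomic set of signed basis vectors (the l1 ball), where the linear subproblem reduces to picking a single coordinate. The problem instance, names, and step size are our illustrative choices, not the paper's.

```python
import numpy as np

def frank_wolfe_l1(A, b, tau, iters=200):
    """Minimize 0.5*||Ax - b||^2 over the l1 ball of radius tau.
    The linear minimization oracle over this atomic set returns a
    single signed, scaled basis vector; <x - s, grad> certifies the gap."""
    n = A.shape[1]
    x = np.zeros(n)
    gap = np.inf
    for k in range(iters):
        grad = A.T @ (A @ x - b)
        i = np.argmax(np.abs(grad))       # best atom: coordinate of largest gradient
        s = np.zeros(n)
        s[i] = -tau * np.sign(grad[i])
        gap = grad @ (x - s)              # duality gap certificate
        if gap < 1e-8:
            break
        gamma = 2.0 / (k + 2.0)           # standard step-size schedule
        x = (1 - gamma) * x + gamma * s
    return x, gap

A = np.eye(3)
b = np.array([1.0, 0.0, 0.0])
x, gap = frank_wolfe_l1(A, b, tau=1.0)
print(x, gap)
```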
Szemerédi’s Lemma for the analyst
 J. GEOM. FUNC. ANAL
, 2006
Abstract

Cited by 69 (9 self)
Szemerédi’s Regularity Lemma is a fundamental tool in graph theory: it has many applications to extremal graph theory, graph property testing, combinatorial number theory, etc. The goal of this paper is to point out that Szemerédi’s Lemma can be thought of as a result in analysis. We show three different analytic interpretations.
Parallel stochastic gradient algorithms for large-scale matrix completion
 MATHEMATICAL PROGRAMMING COMPUTATION
, 2013
Abstract

Cited by 69 (7 self)
This paper develops Jellyfish, an algorithm for solving data-processing problems with matrix-valued decision variables regularized to have low rank. Particular examples of problems solvable by Jellyfish include matrix completion problems and least-squares problems regularized by the nuclear norm or γ2-norm. Jellyfish implements a projected incremental gradient method with a biased, random ordering of the increments. This biased ordering allows for a parallel implementation that admits a speedup nearly proportional to the number of processors. On large-scale matrix completion tasks, Jellyfish is orders of magnitude more efficient than existing codes. For example, on the Netflix Prize data set, prior art computes rating predictions in approximately 4 hours, while Jellyfish solves the same problem in under 3 minutes on a 12-core workstation.
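A serial toy version of the incremental-gradient update that Jellyfish parallelizes (illustration only; the data, step size, and variable names are ours, not the authors'): recover a rank-1 matrix from half its entries by stochastic gradient steps on low-rank factors.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 20, 1
M = np.outer(rng.normal(size=n), rng.normal(size=n))   # ground-truth rank-1 matrix
obs = np.argwhere(rng.random((n, n)) < 0.5)            # observed entry indices

L = 0.1 * rng.normal(size=(n, r))                      # low-rank factors
R = 0.1 * rng.normal(size=(n, r))

def train_mse():
    return np.mean([(L[i] @ R[j] - M[i, j]) ** 2 for i, j in obs])

before = train_mse()
step = 0.02
for epoch in range(100):
    rng.shuffle(obs)                    # random ordering of the increments
    for i, j in obs:
        err = L[i] @ R[j] - M[i, j]     # one observed entry's residual
        Li = L[i].copy()
        L[i] -= step * err * R[j]       # each update touches only row i of L
        R[j] -= step * err * Li         # ... and row j of R (hence parallelizable)
after = train_mse()
print(before, after)                    # the fit error decreases
```

Because each update touches only one row of L and one row of R, updates on disjoint (row, column) pairs commute, which is the structural fact Jellyfish's biased ordering exploits for parallelism.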
Counting graph homomorphisms
 IN: TOPICS IN DISCRETE MATH
, 2006
Abstract

Cited by 58 (16 self)
Counting homomorphisms between graphs (often with weights) comes up in a wide variety of areas, including extremal graph theory, properties of graph products, partition functions in statistical physics and property testing of large graphs. In this paper we survey recent developments in the study of homomorphism numbers, including the characterization of the homomorphism numbers in terms of the semidefiniteness of “connection matrices”, and some applications of this fact in extremal graph theory. We define a distance of two graphs in terms of similarity of their global structure, which also reflects the closeness of (appropriately scaled) homomorphism numbers into the two graphs. We use homomorphism numbers to define convergence of a sequence of graphs, and show that a graph sequence is convergent if and only if it is Cauchy in this distance. Every convergent graph sequence has a limit in the form of a symmetric measurable function in two variables. We use these notions of distance and graph limits to give a general theory for parameter testing. The convergence can also be characterized in terms of mappings of the graphs into fixed small graphs, which is strongly connected to important parameters like ground state energy in statistical physics, and to weighted maximum cut problems in computer science.
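One classical specialization of homomorphism counting can be checked by brute force (our illustrative choice; the abstract cites partition functions in statistical physics, of which this is the zero-temperature Potts case): with target H = K_q, hom(G, K_q) counts the proper q-colorings of G.

```python
import itertools

def hom_count(G_edges, nG, H_adj):
    """Brute-force count of homomorphisms from G into H: maps that send
    every edge of G to an edge of H. With H = K_q this counts proper
    q-colorings of G."""
    q = len(H_adj)
    return sum(all(H_adj[phi[u]][phi[v]] for u, v in G_edges)
               for phi in itertools.product(range(q), repeat=nG))

triangle = [(0, 1), (1, 2), (0, 2)]
K3 = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(hom_count(triangle, 3, K3))   # proper 3-colorings of a triangle: 6
```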
Quadratic forms on graphs
 Invent. Math
, 2005
Abstract

Cited by 50 (10 self)
We introduce a new graph parameter, called the Grothendieck constant of a graph G = (V, E), which is defined as the least constant K such that for every A: E → R,
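The defining inequality is cut off in this listing. As best recalled from the published paper (reconstructed here for orientation, not quoted; treat the exact statement as approximate), K(G) is the least K such that for every A: E → R,

```latex
\sup_{f\colon V \to S^{n-1}} \sum_{\{u,v\} \in E} A(u,v)\,\langle f(u), f(v)\rangle
\;\le\; K \, \sup_{\varepsilon\colon V \to \{-1,+1\}} \sum_{\{u,v\} \in E} A(u,v)\,\varepsilon(u)\,\varepsilon(v),
```

where the left supremum also ranges over the dimension n of the unit sphere. For G a complete bipartite graph this recovers the classical Grothendieck inequality.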
Approximation bounds for quadratic optimization with homogeneous quadratic constraints
 SIAM J. Optim
, 2007
Abstract

Cited by 48 (24 self)
We consider the NP-hard problem of finding a minimum norm vector in n-dimensional real or complex Euclidean space, subject to m concave homogeneous quadratic constraints. We show that a semidefinite programming (SDP) relaxation for this nonconvex quadratically constrained quadratic program (QP) provides an O(m^2) approximation in the real case and an O(m) approximation in the complex case. Moreover, we show that these bounds are tight up to a constant factor. When the Hessian of each constraint function is of rank 1 (namely, outer products of some given so-called steering vectors) and the phase spread of the entries of these steering vectors is bounded away from π/2, we establish a certain “constant factor” approximation (depending on the phase spread but independent of m and n) for both the SDP relaxation and a convex QP restriction of the original NP-hard problem. Finally, we consider a related problem of finding a maximum norm vector subject to m convex homogeneous quadratic constraints. We show that an SDP relaxation for this nonconvex QP provides an O(1/ln(m)) approximation, which is analogous to a result of Nemirovski et al. [Math. Program., 86 (1999), pp. 463–473] for the real case.

Key words: semidefinite programming relaxation, nonconvex quadratic optimization, approximation bound
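The standard lifting behind such SDP relaxations, sketched here for orientation (notation is ours, not quoted from the paper): writing X = xxᵀ makes each homogeneous quadratic form linear in X, and dropping the rank constraint yields a semidefinite program,

```latex
\min_{x \in \mathbb{R}^n} \; \|x\|^2 \ \ \text{s.t.}\ \ x^{\mathsf T} H_i x \ge 1,\ i=1,\dots,m
\quad\longrightarrow\quad
\min_{X \succeq 0} \; \operatorname{tr}(X) \ \ \text{s.t.}\ \ \operatorname{tr}(H_i X) \ge 1,\ i=1,\dots,m,
```

where the relaxation consists exactly in discarding the constraint rank(X) = 1; the approximation bounds quantify the cost of that omission.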