Results 1–10 of 10
Twice-Ramanujan sparsifiers
in Proc. 41st STOC, 2009
Abstract

Cited by 89 (12 self)
We prove that for every d > 1 and every undirected, weighted graph G = (V, E), there exists a weighted graph H with at most ⌈d|V|⌉ edges such that for every x ∈ ℝ^V, 1 ≤ (xᵀL_H x)/(xᵀL_G x) ≤ (d + 1 + 2√d)/(d + 1 − 2√d), where L_G and L_H are the Laplacian matrices of G and H, respectively.
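The guarantee above bounds a ratio of Laplacian quadratic forms. As a reminder of what that quantity is, here is a minimal sketch: for a weighted graph, xᵀLx equals the sum over edges of w_ij(x_i − x_j)². The triangle graph and test vector below are illustrative choices, not from the paper.

```python
# The Laplacian quadratic form x^T L x of a weighted graph equals
# sum over edges (i, j) of w_ij * (x_i - x_j)^2. The weighted
# triangle and the test vector are illustrative, not from the paper.

def laplacian_quadratic_form(edges, x):
    """edges: list of (i, j, weight) tuples; x: list of vertex values."""
    return sum(w * (x[i] - x[j]) ** 2 for i, j, w in edges)

edges_G = [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 3.0)]  # weighted triangle
x = [1.0, -1.0, 0.5]
print(laplacian_quadratic_form(edges_G, x))
```

A sparsifier H would be a graph with fewer edges whose quadratic form stays within the stated multiplicative factor of G's for every such x.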
A unified framework for approximating and clustering data
, 2011
Abstract

Cited by 33 (6 self)
Given a set F of n positive functions over a ground set X, we consider the problem of computing x* that minimizes the expression ∑_{f∈F} f(x) over x ∈ X. A typical application is shape fitting, where we wish to approximate a set P of n elements (say, points) by a shape x from a (possibly infinite) family X of shapes. Here, each point p ∈ P corresponds to a function f such that f(x) is the distance from p to x, and we seek a shape x that minimizes the sum of distances from each point in P. In the k-clustering variant, each x ∈ X is a tuple of k shapes, and f(x) is the distance from p to its closest shape in x. Our main result is a unified framework for constructing coresets and approximate clustering for such general sets of functions. To achieve our results, we forge a link between the classic and well-defined notion of ε-approximations from the theory of PAC learning and VC dimension, and the relatively new (and not so consistent) paradigm of coresets, which are a kind of "compressed representation" of the input set F. Using traditional techniques, a coreset usually implies an LTAS (linear-time approximation scheme) for the corresponding optimization problem, which can be computed in parallel, via one pass over the data, and using only polylogarithmic space (i.e., in the streaming model). For several function families F for which coresets are known not to exist, or for which the corresponding (approximate) optimization problems are hard, our framework yields bicriteria approximations, or coresets that are large but contained in a low-dimensional space. We demonstrate our unified framework by applying it to projective clustering problems. We obtain new coreset constructions and significantly smaller coresets over the ones that ...
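The shape-fitting formulation above can be made concrete with a toy instance: data points on a line, candidate "shapes" drawn from a grid of single points, each point contributing a distance function f_p(x) = |p − x|, and a brute-force minimizer of the sum. The data and grid are hypothetical choices for illustration, not from the paper.

```python
# Toy instance of the shape-fitting problem described above: the
# ground set X is a grid of candidate 1-D "shapes" (points), each
# data point p contributes f_p(x) = |p - x|, and we minimize
# sum_p f_p(x) by brute force. Data and grid are illustrative only.

P = [0.0, 1.0, 2.0, 10.0]             # input points to be fitted
X = [i * 0.5 for i in range(21)]      # candidate shapes: 0.0 .. 10.0

def cost(x):
    return sum(abs(p - x) for p in P)  # sum of distances f_p(x)

best = min(X, key=cost)
print(best, cost(best))               # a 1-median of P
```

A coreset in this setting would be a small weighted subset of P whose cost function approximates `cost` uniformly over all candidates in X, so the same brute-force search over the subset yields a near-optimal shape.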
Paved with good intentions: Analysis of a randomized Kaczmarz method
, 2012
Abstract

Cited by 21 (6 self)
The block Kaczmarz method is an iterative scheme for solving overdetermined least-squares problems. At each step, the algorithm projects the current iterate onto the solution space of a subset of the constraints. This paper describes a block Kaczmarz algorithm that uses a randomized control scheme to choose the subset at each step. This algorithm is the first block Kaczmarz method with an (expected) linear rate of convergence that can be expressed in terms of the geometric properties of the matrix and its submatrices. The analysis reveals that the algorithm is most effective when it is given a good row paving of the matrix, a partition of the rows into well-conditioned blocks. The operator theory literature provides detailed information about the existence and construction of good row pavings. Together, these results yield an efficient block Kaczmarz scheme that applies to many overdetermined least-squares problems.
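For intuition, the iteration the block method generalizes is the classic single-row randomized Kaczmarz update (each "block" being one row), where a row is sampled with probability proportional to its squared norm and the iterate is projected onto that row's hyperplane. A minimal sketch under those assumptions; the 3×2 system is an illustrative choice, not from the paper.

```python
import random

# Single-row randomized Kaczmarz sketch: sample row i with probability
# proportional to ||a_i||^2, then project the iterate onto the
# hyperplane {x : a_i . x = b_i}. This is the special case of the
# block method with one-row blocks; the system below is illustrative.

def randomized_kaczmarz(A, b, iters=2000, seed=0):
    rng = random.Random(seed)
    norms = [sum(v * v for v in row) for row in A]
    x = [0.0] * len(A[0])
    for _ in range(iters):
        i = rng.choices(range(len(A)), weights=norms)[0]
        residual = b[i] - sum(a * v for a, v in zip(A[i], x))
        scale = residual / norms[i]
        x = [v + scale * a for v, a in zip(x, A[i])]
    return x

# Consistent overdetermined 3x2 system with solution (1, 2).
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 2.0, 3.0]
print(randomized_kaczmarz(A, b))
```

The block variant analyzed in the paper replaces the single-row projection with a least-squares projection onto a whole block of rows from a row paving, which is what ties the convergence rate to the conditioning of the blocks.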
Spectral Sparsification of Graphs: Theory and Algorithms
, 2013
Abstract

Cited by 6 (0 self)
Graph sparsification is the approximation of an arbitrary graph by a sparse graph. We explain what it means for one graph to be a spectral approximation of another and review the development of algorithms for spectral sparsification. In addition to being an interesting concept, spectral sparsification has been an important tool in the design of nearly linear-time algorithms for solving systems of linear equations in symmetric, diagonally dominant matrices. The fast solution of these linear systems has already led to breakthrough results in combinatorial optimization, including a faster algorithm for finding approximate maximum flows and minimum cuts in an undirected network.
THRIFTY APPROXIMATIONS OF CONVEX BODIES BY POLYTOPES
, 2012
Abstract

Cited by 6 (0 self)
Given a convex body C ⊂ ℝ^d containing the origin in its interior and a real number τ > 1, we seek to construct a polytope P ⊂ C with as few vertices as possible such that C ⊂ τP. Our construction is nearly optimal for a wide range of d and τ. In particular, we prove that if C = −C then for any 0 < ε < 1 and τ = 1 + ε one can choose P having roughly ε^(−d/2) vertices, and for τ = √(εd) one can choose P having roughly d^(1/ε) vertices. Similarly, we prove that if C ⊂ ℝ^d is a convex body such that −C ⊂ µC for some µ ≥ 1, then one can choose P having roughly ((µ + 1)/(τ − 1))^(d/2) vertices provided (τ − 1)/(µ + 1) ≪ 1.
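The symmetric bound can be sanity-checked in d = 2, where ε^(−d/2) = ε^(−1/2): a regular n-gon P inscribed in the unit disk C has inradius cos(π/n), so C ⊂ sec(π/n)·P, and n ≈ π/√(2ε) vertices already push the blow-up factor below 1 + ε. The disk example and the vertex-count formula are illustrative, not from the paper.

```python
import math

# Numeric check in d = 2 of the tau = 1 + eps regime: a regular n-gon
# inscribed in the unit disk has inradius cos(pi/n), so the disk is
# contained in sec(pi/n) times the polygon. With n on the order of
# eps^(-1/2) vertices the blow-up stays below 1 + eps.
# Illustrative example only, not the paper's construction.

def vertices_needed(eps):
    return math.ceil(math.pi / math.sqrt(2 * eps))

for eps in (0.1, 0.01, 0.001):
    n = vertices_needed(eps)
    blowup = 1.0 / math.cos(math.pi / n)   # factor tau with C = unit disk
    print(eps, n, blowup, blowup <= 1 + eps)
```

This matches the abstract's ε^(−d/2) scaling for d = 2; the paper's contribution is achieving comparable vertex counts for general symmetric bodies in high dimension.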
A NOTE ON COLUMN SUBSET SELECTION
, 2013
Abstract

Cited by 1 (0 self)
Given a matrix U, using a deterministic method, we extract a "large" submatrix of Ũ (whose columns are obtained by normalizing those of U) and estimate its smallest and largest singular values. We apply this result to the study of contact points of the unit ball with its maximal volume ellipsoid. We also consider the paving problem and give a deterministic algorithm to partition a matrix into almost isometric blocks, recovering previous results of Bourgain-Tzafriri and Tropp. Finally, we partially answer a question raised by Naor about finding an algorithm in the spirit of Batson-Spielman-Srivastava's work to extract a "large" square submatrix of "small" norm.
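One elementary way a set of normalized columns can be certified almost isometric is via pairwise inner products: if k unit columns are pairwise δ-incoherent, Gershgorin's circle theorem applied to their Gram matrix places every eigenvalue in [1 − (k−1)δ, 1 + (k−1)δ]. The greedy sketch below uses that idea; it is far simpler than the deterministic method in the paper, and the matrix is an illustrative choice.

```python
import math

# Hedged, elementary sketch: greedily keep normalized columns whose
# inner product with every previously kept column is at most DELTA.
# For k pairwise DELTA-incoherent unit columns, Gershgorin's circle
# theorem bounds the Gram matrix's eigenvalues within 1 +/- (k-1)*DELTA,
# i.e. the selected submatrix is almost isometric. This is NOT the
# paper's algorithm; the matrix below is illustrative only.

DELTA = 0.5

def normalize(col):
    nrm = math.sqrt(sum(v * v for v in col))
    return [v / nrm for v in col]

def greedy_incoherent_columns(cols, delta=DELTA):
    kept = []
    for col in map(normalize, cols):
        if all(abs(sum(a * b for a, b in zip(col, c))) <= delta for c in kept):
            kept.append(col)
    return kept

cols = [[1.0, 0.0, 0.0],
        [0.9, 0.1, 0.0],   # nearly parallel to the first column: rejected
        [0.0, 1.0, 0.0],
        [0.0, 0.0, 2.0]]
kept = greedy_incoherent_columns(cols)
k = len(kept)
print(k, "columns kept; Gram eigenvalues in",
      (1 - (k - 1) * DELTA, 1 + (k - 1) * DELTA))
```

The paper's point is to do much better than such coherence-based selection: extracting a genuinely large submatrix with two-sided singular-value bounds, deterministically.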