Results 1–10 of 37
Twice-Ramanujan sparsifiers
In Proc. 41st STOC, 2009
Abstract

Cited by 89 (12 self)
We prove that for every d > 1 and every undirected, weighted graph G = (V, E), there exists a weighted graph H with at most ⌈d·|V|⌉ edges such that for every x ∈ ℝ^V, 1 ≤ (xᵀ L_H x)/(xᵀ L_G x) ≤ (d + 1 + 2√d)/(d + 1 − 2√d), where L_G and L_H are the Laplacian matrices of G and H, respectively.
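The approximation ratio in this theorem is easy to evaluate numerically. A minimal sketch (the function name is ours) showing how the bound (d + 1 + 2√d)/(d + 1 − 2√d) tightens toward 1 as the edge budget d grows:

```python
import math

def twice_ramanujan_ratio(d: float) -> float:
    """Approximation ratio (d + 1 + 2*sqrt(d)) / (d + 1 - 2*sqrt(d)), d > 1."""
    s = math.sqrt(d)
    return (d + 1 + 2 * s) / (d + 1 - 2 * s)

# Denser sparsifiers approximate G better: the ratio tends to 1 as d grows.
# For example, d = 4 gives (5 + 4) / (5 - 4) = 9.
for d in (2, 4, 9, 100):
    print(d, round(twice_ramanujan_ratio(d), 3))
```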
A local clustering algorithm for massive graphs and its application to nearly-linear time graph partitioning
, 2013
Abstract

Cited by 58 (8 self)
We study the design of local algorithms for massive graphs. A local graph algorithm is one that finds a solution containing or near a given vertex without looking at the whole graph. We present a local clustering algorithm. Our algorithm finds a good cluster (a subset of vertices whose internal connections are significantly richer than its external connections) near a given vertex. The running time of our algorithm, when it finds a nonempty local cluster, is nearly linear in the size of the cluster it outputs. The running time also depends polylogarithmically on the size of the graph and polynomially on the conductance of the cluster it produces. Our clustering algorithm could be a useful primitive for handling massive graphs, such as social networks and web graphs. As an application of this clustering algorithm, we present a partitioning algorithm that finds an approximate sparsest cut with nearly optimal balance. Our algorithm takes time nearly linear in the number of edges of the graph. Using the partitioning algorithm of this paper, we have designed a nearly linear time algorithm for constructing spectral sparsifiers of graphs, which we in turn use in a nearly linear time algorithm for solving linear systems in symmetric, diagonally dominant matrices. The linear system solver also leads to a nearly linear time algorithm for approximating the second-smallest eigenvalue and corresponding eigenvector of the Laplacian matrix of a graph. These other results are presented in two companion papers.
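The cluster quality in this abstract is measured by conductance: the weight of edges leaving a set divided by the smaller of the two side volumes. A minimal pure-Python sketch of that quantity (the function name and toy graph are ours, not from the paper):

```python
def conductance(adj, S):
    """Conductance of vertex set S in an unweighted undirected graph.

    adj: dict mapping each vertex to the set of its neighbors.
    Returns |edges(S, V-S)| / min(vol(S), vol(V-S)), where vol sums degrees.
    """
    S = set(S)
    cut = sum(1 for u in S for v in adj[u] if v not in S)
    vol_S = sum(len(adj[u]) for u in S)
    vol_rest = sum(len(adj[u]) for u in adj if u not in S)
    if min(vol_S, vol_rest) == 0:
        return float("inf")  # empty side: conductance undefined
    return cut / min(vol_S, vol_rest)

# Two triangles joined by a single bridge edge (2-3): {0, 1, 2} is a good
# cluster, with 1 cut edge over volume 7.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
print(conductance(adj, {0, 1, 2}))
```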
Fast Approximation Algorithms for Cut-based Problems in Undirected Graphs
Abstract

Cited by 20 (3 self)
We present a general method of designing fast approximation algorithms for cut-based minimization problems in undirected graphs. In particular, we develop a technique that, given any such problem that can be approximated quickly on trees, allows approximating it almost as quickly on general graphs while losing only a polylogarithmic factor in the approximation guarantee. To illustrate the applicability of our paradigm, we focus our attention on the undirected sparsest cut problem with general demands and the balanced separator problem. By a simple use of our framework, we obtain polylogarithmic approximation algorithms for these problems that run in time close to linear. The main tool behind our result is an efficient procedure that decomposes general graphs into simpler ones while approximately preserving the cut-flow structure. This decomposition is inspired by the cut-based graph decomposition of Räcke that was developed in the context of oblivious routing schemes, as well as by the construction of ultrasparsifiers due to Spielman and Teng that was employed for preconditioning symmetric diagonally dominant matrices.
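For intuition about the objective being approximated here, a brute-force reference for sparsest cut with general demands (exponential time, tiny graphs only; all names are ours, and this is in no way the paper's near-linear-time algorithm):

```python
from itertools import combinations

def sparsest_cut(n, cap, dem):
    """Brute-force sparsest cut with general demands on vertices 0..n-1.

    cap, dem: dicts mapping frozenset({u, v}) -> capacity / demand.
    Returns (min over cuts S of cap(S, V-S) / dem(S, V-S), a best S).
    Only |S| <= n/2 is enumerated, since the ratio is complement-symmetric.
    """
    best_ratio, best_side = float("inf"), None
    for k in range(1, n // 2 + 1):
        for S in combinations(range(n), k):
            side = set(S)
            c = sum(w for e, w in cap.items() if len(e & side) == 1)
            d = sum(w for e, w in dem.items() if len(e & side) == 1)
            if d > 0 and c / d < best_ratio:
                best_ratio, best_side = c / d, side
    return best_ratio, best_side

# 4-cycle with unit capacities; unit demand between the two opposite pairs.
cap = {frozenset(e): 1.0 for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}
dem = {frozenset((0, 2)): 1.0, frozenset((1, 3)): 1.0}
print(sparsest_cut(4, cap, dem))  # ratio 1.0: S = {0, 1} cuts 2 edges, separates both demands
```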
A General Framework for Graph Sparsification
, 2011
Abstract

Cited by 19 (1 self)
We present a general framework for constructing cut sparsifiers in undirected graphs: weighted subgraphs for which every cut has the same weight as in the original graph, up to a multiplicative factor of (1 ± ǫ). Using this framework, we simplify, unify and improve upon previous sparsification results. As simple instantiations of this framework, we show that sparsifiers can be constructed by sampling edges according to their strength (a result of Benczúr and Karger), effective resistance (a result of Spielman and Srivastava), edge connectivity, or by sampling random spanning trees. Sampling according to edge connectivity is the most aggressive method, and the most challenging to analyze. Our proof that this method produces sparsifiers resolves an open question of Benczúr and Karger. While the above results are interesting from a combinatorial standpoint, we also prove new algorithmic results. In particular, we develop techniques that give the first (optimal) O(m)-time sparsification algorithm for unweighted graphs. Our algorithm has a running time of O(m) + Õ(n/ǫ²) for weighted graphs, which is also linear unless the input graph is very sparse itself. In both cases, this improves upon the previous best running times (due to Benczúr and Karger) of O(m log² n) (for the unweighted case) and O(m log³ n) (for the weighted case), respectively. Our algorithm constructs sparsifiers that contain O(n log n/ǫ²) edges in expectation; the only known construction of sparsifiers with fewer edges is by a substantially slower algorithm running in O(n³m/ǫ²) time. A key ingredient of our proofs is a natural generalization of Karger’s bound on the number of small cuts in an undirected graph. Given the numerous applications of Karger’s bound, we suspect that our generalization will also be of independent interest.
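The sampling schemes this framework unifies share one mechanism: keep edge e with some probability p_e and reweight it by 1/p_e, which preserves the weight of every cut in expectation. A minimal sketch of that unbiasedness (names and the uniform choice of probabilities are ours; the cited results choose p_e by strength, effective resistance, or connectivity):

```python
import random

def sample_sparsifier(edges, p):
    """Importance-sample a weighted edge set: keep edge e with probability
    p[e] and reweight it by 1/p[e], so every cut is correct in expectation."""
    H = {}
    for e, w in edges.items():
        if random.random() < p[e]:
            H[e] = w / p[e]
    return H

def cut_weight(edges, S):
    """Total weight of edges with exactly one endpoint in S."""
    S = set(S)
    return sum(w for (u, v), w in edges.items() if (u in S) != (v in S))

random.seed(0)
# Unit-weight 4-cycle; sample each edge with probability 1/2.
edges = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (3, 0): 1.0}
p = {e: 0.5 for e in edges}
trials = [cut_weight(sample_sparsifier(edges, p), {0}) for _ in range(20000)]
print(sum(trials) / len(trials))  # close to cut_weight(edges, {0}) = 2.0
```

Concentration (every cut simultaneously close to its expectation) is the hard part the paper's cut-counting analysis supplies; this sketch only shows the expectation is right.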
Subgraph Sparsification and Nearly Optimal Ultrasparsifiers
, 2010
Abstract

Cited by 18 (3 self)
We consider a variation of the spectral sparsification problem where we are required to keep a subgraph of the original graph. Formally, given a union of two weighted graphs G and W and an integer k, we are asked to find a k-edge weighted graph W_k such that G + W_k is a good spectral sparsifier of G + W. We will refer to this problem as subgraph (spectral) sparsification. We present a nontrivial condition on G and W under which a good sparsifier exists, and give a polynomial-time algorithm to find the sparsifier. As a significant application of our technique, we show that for each positive integer k, every n-vertex weighted graph has an (n − 1 + k)-edge spectral sparsifier with relative condition number at most (n/k) log n Õ(log log n), where Õ(·) hides lower-order terms. Our bound nearly settles a question left open by Spielman and Teng about ultrasparsifiers, which are a key component in their nearly linear-time algorithms for solving diagonally dominant symmetric linear systems. We also present another application of our technique to spectral optimization, in which the goal is to maximize the algebraic connectivity of a graph (e.g., turn it into an expander) with a limited number of edges.
Global Computation in a Poorly Connected World: Fast Rumor Spreading with No Dependence on Conductance
, 2012
Abstract

Cited by 12 (3 self)
In this paper, we study the question of how efficiently a collection of interconnected nodes can perform a global computation in the GOSSIP model of communication. In this model, nodes do not know the global topology of the network, and they may only initiate contact with a single neighbor in each round. This model contrasts with the much less restrictive LOCAL model, where a node may simultaneously communicate with all of its neighbors in a single round. A basic question in this setting is how many rounds of communication are required for the information dissemination problem, in which each node has some piece of information and is required to collect all others. In the LOCAL model, this is quite simple: each node broadcasts all of its information in each round, and the number of rounds required will be equal to the diameter of the underlying communication graph. In the GOSSIP model, each node must independently choose a single neighbor to contact, and the lack of global information makes it difficult to make any sort of principled choice. As such, researchers have focused on the uniform gossip algorithm, in which each node independently selects a neighbor uniformly at random. When the graph is well-connected, this works quite well. In a string of beautiful papers, researchers proved a sequence of successively stronger bounds on the number of rounds required in terms of the conductance φ and graph size n, culminating in a bound of O(φ⁻¹ log n). In this paper, we show that a fairly simple modification of the protocol gives an algorithm that solves the information dissemination problem in at most O(D + polylog(n)) rounds in a network of diameter D, with no dependence on the conductance. This is ...
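The uniform gossip baseline described above is straightforward to simulate. A minimal sketch of uniform push-pull dissemination (this is the baseline protocol the paper improves on, not the paper's modified protocol; all names are ours):

```python
import random

def gossip_rounds(adj, seed=0):
    """Rounds of uniform push-pull gossip until every node holds every rumor.

    adj: dict mapping each vertex to the set of its neighbors. Each node
    starts knowing only its own rumor; in each round every node contacts
    one uniformly random neighbor and the pair merges rumor sets.
    """
    rng = random.Random(seed)
    know = {v: {v} for v in adj}
    rounds = 0
    while any(len(s) < len(adj) for s in know.values()):
        for u in list(adj):
            v = rng.choice(sorted(adj[u]))
            merged = know[u] | know[v]
            know[u] = merged
            know[v] = set(merged)
        rounds += 1
    return rounds

# Complete graph on 8 nodes: uniform gossip finishes in O(log n) rounds.
K8 = {u: {v for v in range(8) if v != u} for u in range(8)}
print(gossip_rounds(K8))
```

On a well-connected graph such as K8 this terminates in a handful of rounds; on a dumbbell-shaped graph (two cliques joined by one edge) the same protocol slows down dramatically, which is exactly the conductance dependence the paper removes.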
Spectral Sparsification and Restricted Invertibility
, 2010
Abstract

Cited by 11 (1 self)
In this thesis we prove the following two basic statements in linear algebra. Let B be an arbitrary n × m matrix where m ≥ n, and suppose 0 < ε < 1 is given. 1. Spectral Sparsification. There is a nonnegative m × m diagonal matrix S with at most ⌈n/ε²⌉ nonzero entries for which (1 − ε)² BBᵀ ≼ BSBᵀ ≼ (1 + ε)² BBᵀ. Thus the spectral behavior of BBᵀ is captured by a weighted subset of the columns of B, of size proportional to its rank n. 2. Restricted Invertibility. There is an m × m diagonal matrix S with at least k = (1 − ε)² ‖B‖²_F/‖B‖²₂ nonzero entries, all equal to 1, for which BSBᵀ has k eigenvalues greater than ε² ‖B‖²_F/m. Thus there is a large coordinate restriction of B (i.e., a submatrix of its m columns, given by S), of size proportional to its numerical rank ‖B‖²_F/‖B‖²₂, which is well-invertible. This improves a theorem of Bourgain and Tzafriri [14]. We give deterministic algorithms for constructing the promised diagonal matrices S in time O(mn³/ε²) and O((1 − ε)² mn³), respectively. By applying (1) to the class of Laplacian matrices of graphs, we show that every graph on n vertices can be spectrally approximated by a weighted graph with O(n) edges, thus generalizing the concept of expanders, which are constant-degree approximations of the complete graph. Our quantitative bounds are within a factor of two of those achieved by the celebrated Ramanujan graphs. We then present a second graph sparsification algorithm based on random sampling, which produces weaker sparsifiers with O(n log n) edges but runs in nearly-linear time. We also prove a refinement of (1) for the special case of B arising from John’s decompositions of the identity, which allows us to show that every convex body is close to one which has very few contact points with its minimum volume ellipsoid.
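Statement (1) is a two-sided spectral sandwich, which on toy examples can be checked by comparing quadratic forms directly. A minimal sketch in graph-Laplacian form (the reweighted 4-cycle as a sparsifier of K4 is our own toy example, not the thesis construction; for it the quadratic-form ratio lies in [1, 2] on vectors orthogonal to the all-ones kernel):

```python
import random

def quad_laplacian(edges, x):
    """x^T L x for a weighted graph Laplacian L: sum of w * (x_u - x_v)^2."""
    return sum(w * (x[u] - x[v]) ** 2 for (u, v), w in edges.items())

# K4 with unit weights, and a candidate sparsifier: a 4-cycle, each edge
# reweighted to 2. On mean-zero vectors, L_K4 acts as 4*I, while the
# reweighted cycle has eigenvalues {4, 4, 8} there, so the ratio is in [1, 2].
K4 = {(u, v): 1.0 for u in range(4) for v in range(u + 1, 4)}
C4 = {(0, 1): 2.0, (1, 2): 2.0, (2, 3): 2.0, (3, 0): 2.0}

rng = random.Random(1)
ratios = []
for _ in range(1000):
    x = [rng.gauss(0, 1) for _ in range(4)]
    m = sum(x) / 4
    x = [xi - m for xi in x]  # project out the shared all-ones kernel
    ratios.append(quad_laplacian(C4, x) / quad_laplacian(K4, x))
print(min(ratios), max(ratios))
```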
Sparse Quadratic Forms and Their Geometric Applications
Abstract

Cited by 11 (0 self)
In what follows, all matrices are assumed to have real entries, and square matrices are always assumed to be symmetric unless stated otherwise. The support of a k × n matrix A = (a_ij) will be denoted below by supp(A) = {(i, j) ∈ {1, ..., k} × {1, ..., n} : a_ij ≠ 0}.
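The definition of supp(A) translates directly to code. A tiny sketch (0-indexed, unlike the paper's {1, ..., k} × {1, ..., n}; names are ours):

```python
def supp(A):
    """Support of a k x n matrix A (given as a list of rows):
    the set of index pairs (i, j) with A[i][j] != 0, 0-indexed."""
    return {(i, j) for i, row in enumerate(A) for j, a in enumerate(row) if a != 0}

A = [[1, 0, 2],
     [0, 0, 3]]
print(sorted(supp(A)))  # [(0, 0), (0, 2), (1, 2)]
```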
Effective resistances, statistical leverage, and applications to linear equation solving
 CoRR