Results 1–10 of 72
Graph sparsification by effective resistances
 SIAM J. Comput.
Abstract

Cited by 140 (9 self)
We present a nearly-linear time algorithm that produces high-quality sparsifiers of weighted graphs. Given as input a weighted graph G = (V, E, w) and a parameter ε > 0, we produce a weighted subgraph H = (V, Ẽ, w̃) of G such that |Ẽ| = O(n log n / ε²) and for all vectors x ∈ ℝ^V,

(1 − ε) ∑_{uv ∈ E} (x(u) − x(v))² w_uv ≤ ∑_{uv ∈ Ẽ} (x(u) − x(v))² w̃_uv ≤ (1 + ε) ∑_{uv ∈ E} (x(u) − x(v))² w_uv.  (1)

This improves upon the sparsifiers constructed by Spielman and Teng, which had O(n log^c n) edges for some large constant c, and upon those of Benczúr and Karger, which only satisfied (1) for x ∈ {0, 1}^V. We conjecture the existence of sparsifiers with O(n) edges, noting that these would generalize the notion of expander graphs, which are constant-degree sparsifiers for the complete graph. A key ingredient in our algorithm is a subroutine of independent interest: a nearly-linear time algorithm that builds a data structure from which we can query the approximate effective resistance between any two vertices in a graph in O(log n) time.
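The sampling scheme this abstract describes can be sketched in a few lines of numpy: compute each edge's effective resistance, then sample edges with probability proportional to w_e·R_e and reweight the survivors. This is an illustrative sketch, not the authors' implementation; in particular, it computes resistances from a dense Laplacian pseudoinverse (O(n³)) where the paper uses a nearly-linear-time data structure.

```python
import numpy as np

def laplacian(n, edges, w):
    """Weighted graph Laplacian L = D - W."""
    L = np.zeros((n, n))
    for (u, v), wt in zip(edges, w):
        L[u, u] += wt; L[v, v] += wt
        L[u, v] -= wt; L[v, u] -= wt
    return L

def effective_resistances(n, edges, w):
    """R_uv = (e_u - e_v)^T L^+ (e_u - e_v), via a dense pseudoinverse."""
    Lp = np.linalg.pinv(laplacian(n, edges, w))
    return np.array([Lp[u, u] + Lp[v, v] - 2 * Lp[u, v] for u, v in edges])

def sparsify(n, edges, w, q, rng):
    """Draw q i.i.d. edge samples with prob. p_e proportional to w_e R_e;
    an edge kept k times gets weight k * w_e / (q * p_e), so the Laplacian
    quadratic form is preserved in expectation."""
    R = effective_resistances(n, edges, w)
    p = w * R / np.sum(w * R)
    counts = rng.multinomial(q, p)
    keep = np.flatnonzero(counts)
    new_w = counts[keep] * w[keep] / (q * p[keep])
    return [edges[i] for i in keep], new_w
```

A useful sanity check: in the complete graph K_n every effective resistance is 2/n, and for any connected graph the sampling weights w_e·R_e sum to n − 1.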
Twice-Ramanujan sparsifiers
 In Proc. 41st STOC
, 2009
Abstract

Cited by 87 (12 self)
We prove that for every d > 1 and every undirected, weighted graph G = (V, E), there exists a weighted graph H with at most ⌈d|V|⌉ edges such that for every x ∈ ℝ^V,

1 ≤ (xᵀ L_H x) / (xᵀ L_G x) ≤ (d + 1 + 2√d) / (d + 1 − 2√d),

where L_G and L_H are the Laplacian matrices of G and H, respectively.
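Since d + 1 ± 2√d = (√d ± 1)², the approximation factor above equals ((√d + 1)/(√d − 1))², which tends to 1 as the edge budget d grows. A quick numeric check of that algebraic identity (illustrative only):

```python
import math

def twice_ramanujan_bound(d):
    """Approximation factor (d + 1 + 2*sqrt(d)) / (d + 1 - 2*sqrt(d))
    from the abstract; requires d > 1."""
    return (d + 1 + 2 * math.sqrt(d)) / (d + 1 - 2 * math.sqrt(d))

def factored_form(d):
    """Equivalent form ((sqrt(d) + 1) / (sqrt(d) - 1))**2,
    since d + 1 +/- 2*sqrt(d) = (sqrt(d) +/- 1)**2."""
    return ((math.sqrt(d) + 1) / (math.sqrt(d) - 1)) ** 2
```

For example, d = 4 gives the factor 9 and d = 9 gives the factor 4.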
A local clustering algorithm for massive graphs and its application to nearly-linear time graph partitioning
, 2013
Abstract

Cited by 59 (9 self)
We study the design of local algorithms for massive graphs. A local graph algorithm is one that finds a solution containing or near a given vertex without looking at the whole graph. We present a local clustering algorithm. Our algorithm finds a good cluster (a subset of vertices whose internal connections are significantly richer than its external connections) near a given vertex. The running time of our algorithm, when it finds a nonempty local cluster, is nearly linear in the size of the cluster it outputs. The running time also depends polylogarithmically on the size of the graph and polynomially on the conductance of the cluster it produces. Our clustering algorithm could be a useful primitive for handling massive graphs, such as social networks and web graphs.

As an application of this clustering algorithm, we present a partitioning algorithm that finds an approximate sparsest cut with nearly optimal balance. Our algorithm takes time nearly linear in the number of edges of the graph. Using the partitioning algorithm of this paper, we have designed a nearly linear time algorithm for constructing spectral sparsifiers of graphs, which we in turn use in a nearly linear time algorithm for solving linear systems in symmetric, diagonally dominant matrices. The linear system solver also leads to a nearly linear time algorithm for approximating the second-smallest eigenvalue and corresponding eigenvector of the Laplacian matrix of a graph. These other results are presented in two companion papers.
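The quality measure throughout is conductance: the weight of a set's boundary divided by the smaller of the two volumes it separates. A minimal numpy sketch of the quantity itself (not of the local algorithm, whose whole point is to avoid touching the full graph):

```python
import numpy as np

def conductance(adj, S):
    """Conductance phi(S) = w(S, V \\ S) / min(vol(S), vol(V \\ S)),
    where vol(S) is the total weighted degree of S.
    adj: symmetric weighted adjacency matrix; S: iterable of vertex ids."""
    mask = np.zeros(adj.shape[0], dtype=bool)
    mask[list(S)] = True
    cut = adj[mask][:, ~mask].sum()          # weight crossing the boundary
    return cut / min(adj[mask].sum(), adj[~mask].sum())
```

On two unit-weight triangles joined by a single bridge edge, one triangle has cut weight 1 and volume 7, so its conductance is 1/7.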
Approaching optimality for solving SDD linear systems
, 2010
Abstract

Cited by 44 (7 self)
We present an algorithm that, on input a graph G with n vertices and m + n − 1 edges and a value k, produces an incremental sparsifier Ĝ with n − 1 + m/k edges, such that the condition number of G with Ĝ is bounded above by Õ(k log² n), with probability 1 − p. The algorithm runs in time Õ((m log n + n log² n) log(1/p)). As a result, we obtain an algorithm that, on input an n × n symmetric diagonally dominant matrix A with m + n − 1 nonzero entries and a vector b, computes a vector x̄ satisfying ‖x̄ − A⁺b‖_A < ε‖A⁺b‖_A in time Õ(m log² n log(1/ε)). The solver is based on a recursive application of the incremental sparsifier that produces a hierarchy of graphs, which is then used to construct a recursive preconditioned Chebyshev iteration.
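The payoff of the incremental sparsifier is a good preconditioner: the iterative solver's step count scales with the square root of the condition number of the preconditioned system. The sketch below shows the generic preconditioned-iteration template with a plain Jacobi (diagonal) preconditioner and conjugate gradient, standing in for the paper's recursive Chebyshev hierarchy; the function names and structure here are illustrative assumptions, not the paper's code.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxiter=1000):
    """Preconditioned conjugate gradient for a symmetric positive definite A.
    M_inv(r) applies the preconditioner's inverse to a residual vector."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

Usage on a small SDD system: `pcg(A, b, lambda r: r / np.diag(A))`. Swapping the diagonal for a sparsifier-based preconditioner is exactly where the condition-number bound Õ(k log² n) enters.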
Finding sparse cuts locally using evolving sets
 In STOC '09: Proceedings of the 41st Annual ACM Symposium on Theory of Computing
, 2009
Abstract

Cited by 41 (0 self)
A local graph partitioning algorithm finds a set of vertices with small conductance (i.e. a sparse cut) by adaptively exploring part of a large graph G, starting from a specified vertex. For the algorithm to be local, its complexity must be bounded in terms of the size of the set that it outputs, with at most a weak dependence on the number n of vertices in G. Previous local partitioning algorithms find sparse cuts using random walks and personalized PageRank. In this paper, we introduce a randomized local partitioning algorithm that finds a sparse cut by simulating the volume-biased evolving set process, which is a Markov chain on sets of vertices. We prove that for any set of vertices A that has conductance at most φ, for at least half of the starting vertices in A our algorithm will output (with probability at least half) a set of conductance O(φ^(1/2) log^(1/2) n). We prove that for a given run of the algorithm, the expected ratio between its computational complexity and the volume of the set that it outputs is O(φ^(−1/2) polylog(n)). In comparison, the best previous local partitioning algorithm, due to Andersen, Chung, and Lang, has the same approximation guarantee, but a larger ratio of O(φ^(−1) polylog(n)) between the complexity and the output volume. Using our local partitioning algorithm as a subroutine, we construct a fast algorithm for finding balanced cuts. Given a fixed value of φ, the resulting algorithm has complexity O((m + nφ^(−1/2)) polylog(n)) and returns a cut with conductance O(φ^(1/2) log^(1/2) n) and volume at least v_φ/2, where v_φ is the largest volume of any set with conductance at most φ.
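A single step of the plain (not volume-biased) evolving set process is easy to state: given the current set S, draw one uniform threshold U and keep every vertex whose one-step lazy-walk probability of landing in S is at least U. The numpy sketch below implements that simplified update, not the paper's volume-biased simulation:

```python
import numpy as np

def lazy_walk_matrix(adj):
    """Lazy random walk transition matrix P = (I + D^{-1} A) / 2."""
    d = adj.sum(axis=1)
    return 0.5 * (np.eye(len(adj)) + adj / d[:, None])

def evolving_set_step(P, S, rng):
    """One evolving-set update: S' = { v : Q(v, S) >= U }, where
    Q(v, S) = P(walk at v steps into S) and U is a single uniform
    threshold shared by all vertices."""
    Q = P[:, sorted(S)].sum(axis=1)
    U = rng.uniform()
    return set(np.flatnonzero(Q >= U).tolist())
```

On a cycle with S = {0, 1, 2}, vertex 1 has Q(1, S) = 1 (its lazy step stays inside S with certainty), so it survives every threshold; the boundary vertices join or leave depending on U, which is how the chain discovers low-conductance sets.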
Electrical Flows, Laplacian Systems, and Faster Approximation of Maximum Flow in Undirected Graphs
, 2010
Abstract

Cited by 40 (5 self)
We introduce a new approach to computing an approximately maximum s-t flow in a capacitated, undirected graph. This flow is computed by solving a sequence of electrical flow problems. Each electrical flow is given by the solution of a system of linear equations in a Laplacian matrix, and thus may be approximately computed in nearly-linear time. Using this approach, we develop the fastest known algorithm for computing approximately maximum s-t flows. For a graph having n vertices and m edges, our algorithm computes a (1 − ε)-approximately maximum s-t flow in time Õ(mn^(1/3) ε^(−11/3)). A dual version of our approach computes a (1 + ε)-approximately minimum s-t cut in time Õ(m + n^(4/3) ε^(−16/3)), which is the fastest known algorithm for this problem as well. Previously, the best dependence on m and n was achieved by the algorithm of Goldberg and Rao (J. ACM 1998), which can be used to compute approximately maximum s-t flows in time Õ(m√n ε^(−1)) and approximately minimum s-t cuts in time Õ(m + n^(3/2) ε^(−3)).
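Each electrical-flow subproblem is a single Laplacian solve: inject one unit of current at s, extract it at t, solve L v = e_s − e_t for the vertex potentials, and read the flow on each edge off the potential difference times the edge weight (conductance). A dense-pseudoinverse sketch of that one step (the paper instead uses a nearly-linear-time Laplacian solver):

```python
import numpy as np

def electrical_flow(n, edges, w, s, t):
    """Unit s-t electrical flow: solve L v = e_s - e_t, then
    flow on edge (u, v) is w_uv * (v[u] - v[v])."""
    L = np.zeros((n, n))
    for (u, v), wt in zip(edges, w):
        L[u, u] += wt; L[v, v] += wt
        L[u, v] -= wt; L[v, u] -= wt
    b = np.zeros(n)
    b[s], b[t] = 1.0, -1.0
    pot = np.linalg.pinv(L) @ b              # potentials, defined up to a shift
    return np.array([wt * (pot[u] - pot[v]) for (u, v), wt in zip(edges, w)])
```

On a unit-weight triangle with s = 0 and t = 1, the direct edge (resistance 1) carries 2/3 of the unit current and the two-edge path through the third vertex (resistance 2) carries 1/3, matching the current-divider rule.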
Algorithms, Graph Theory, and Linear Equations in Laplacian Matrices
Abstract

Cited by 31 (0 self)
The Laplacian matrices of graphs are fundamental. In addition to facilitating the application of linear algebra to graph theory, they arise in many practical problems. In this talk we survey recent progress on the design of provably fast algorithms for solving linear equations in the Laplacian matrices of graphs. These algorithms motivate and rely upon fascinating primitives in graph theory, including low-stretch spanning trees, graph sparsifiers, ultrasparsifiers, and local graph clustering. These are all connected by a definition of what it means for one graph to approximate another. While this definition is dictated by Numerical Linear Algebra, it proves useful and natural from a graph theoretic perspective.
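The approximation definition the survey refers to is spectral: H is a κ-approximation of G when the nonzero generalized eigenvalues of (L_H, L_G) all lie in an interval of ratio κ. A dense numpy check of that condition, illustrative only and assuming both graphs are connected on the same vertex set (so the two Laplacians share the all-ones kernel):

```python
import numpy as np

def relative_condition(L_G, L_H):
    """Smallest and largest generalized eigenvalues of (L_H, L_G),
    computed on the range of L_G. Their ratio bounds how well H
    spectrally approximates G."""
    vals, vecs = np.linalg.eigh(L_G)
    nz = vals > 1e-9 * vals.max()            # drop the kernel direction(s)
    isqrt = vecs[:, nz] / np.sqrt(vals[nz])  # columns of L_G^{+1/2}
    M = isqrt.T @ L_H @ isqrt
    gen = np.linalg.eigvalsh(M)
    return gen.min(), gen.max()
```

Scaling every edge weight by a constant c gives generalized eigenvalues all equal to c, i.e. a perfect (ratio-1) approximation, which makes a convenient sanity check.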
Cover times, blanket times, and majorizing measures
 In ACM Symposium on Theory of Computing
, 2011
Faster generation of random spanning trees
Abstract

Cited by 19 (1 self)
In this paper, we set forth a new algorithm for generating approximately uniformly random spanning trees in undirected graphs. We show how to sample from a distribution that is within a multiplicative (1 + δ) of uniform in expected time Õ(m√n log(1/δ)). This improves the sparse-graph case of the best previously known worst-case bound of O(min{mn, n^(2.376)}), which has stood for twenty years. To achieve this goal, we exploit the connection between random walks on graphs and electrical networks, and we use this to introduce a new approach to the problem that integrates discrete random-walk-based techniques with continuous linear-algebraic methods. We believe that our use of electrical networks and sparse linear system solvers in conjunction with random walks and combinatorial partitioning techniques is a useful paradigm that will find further applications in algorithmic graph theory.

Keywords: spanning trees; random walks on graphs; electrical flows.
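For contrast, the classical random-walk sampler behind the O(mn) side of the old bound (the Aldous-Broder algorithm) fits in a few lines: walk until every vertex has been visited and keep each vertex's first-entry edge. A hedged sketch using plain adjacency lists, shown only to illustrate the baseline this abstract improves on:

```python
import random

def aldous_broder(adj, start, rng):
    """Sample a uniform spanning tree: run a simple random walk from
    `start` until all vertices are visited; the edge by which each vertex
    is first entered forms the tree. Expected running time is the walk's
    cover time, O(mn) in the worst case.
    adj: list of neighbor lists for vertices 0..n-1."""
    visited = {start}
    tree = []
    u = start
    while len(visited) < len(adj):
        v = rng.choice(adj[u])
        if v not in visited:
            visited.add(v)
            tree.append((u, v))
        u = v
    return tree
```

On any connected graph the result has exactly n − 1 edges and touches every vertex; on a 4-cycle, for instance, it returns the cycle minus one edge.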