Results 1–10 of 24
Approaching optimality for solving SDD linear systems
, 2010
Cited by 44 (7 self)
We present an algorithm that, on input a graph G with n vertices and m + n − 1 edges and a value k, produces an incremental sparsifier Ĝ with n − 1 + m/k edges, such that the condition number of G with Ĝ is bounded above by Õ(k log² n), with probability 1 − p. The algorithm runs in time Õ((m log n + n log² n) log(1/p)). As a result, we obtain an algorithm that, on input an n × n symmetric diagonally dominant matrix A with m + n − 1 nonzero entries and a vector b, computes a vector x̄ satisfying ‖x̄ − A⁺b‖_A < ɛ‖A⁺b‖_A, in time Õ(m log² n log(1/ɛ)). The solver is based on a recursive application of the incremental sparsifier, which produces a hierarchy of graphs that is then used to construct a recursive preconditioned Chebyshev iteration.
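The recursive Chebyshev solver itself is beyond a short sketch, but the underlying mechanism — an iterative SDD solve accelerated by a preconditioner — can be illustrated. A minimal sketch using SciPy's conjugate gradient with a Jacobi (diagonal) preconditioner standing in for the paper's sparsifier hierarchy; the path-graph Laplacian, the diagonal shift, and all parameters are illustrative assumptions, not from the paper.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, LinearOperator

# SDD test system: Laplacian of a path graph plus a small diagonal shift
# (the shift makes the matrix nonsingular, so plain CG applies directly).
n = 50
main = 2.0 * np.ones(n)
main[0] = main[-1] = 1.0
L = diags([main + 1e-3, -np.ones(n - 1), -np.ones(n - 1)],
          [0, -1, 1], format="csr")

b = np.random.default_rng(0).standard_normal(n)

# Jacobi preconditioner -- a crude stand-in for the hierarchy of
# incremental sparsifiers; it only illustrates preconditioned iteration.
d_inv = 1.0 / L.diagonal()
M = LinearOperator((n, n), matvec=lambda v: d_inv * v)

x, info = cg(L, b, M=M, maxiter=2000)
residual = np.linalg.norm(L @ x - b) / np.linalg.norm(b)
assert info == 0 and residual < 1e-4
```

A graph-based preconditioner such as the paper's would replace `M` with a fast solve against the sparsified Laplacian; the iteration structure is the same.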
Electrical Flows, Laplacian Systems, and Faster Approximation of Maximum Flow in Undirected Graphs
, 2010
Cited by 40 (5 self)
We introduce a new approach to computing an approximately maximum s-t flow in a capacitated, undirected graph. This flow is computed by solving a sequence of electrical flow problems. Each electrical flow is given by the solution of a system of linear equations in a Laplacian matrix, and thus may be approximately computed in nearly-linear time. Using this approach, we develop the fastest known algorithm for computing approximately maximum s-t flows. For a graph having n vertices and m edges, our algorithm computes a (1 − ɛ)-approximately maximum s-t flow in time Õ(m n^{1/3} ɛ^{−11/3}). A dual version of our approach computes a (1 + ɛ)-approximately minimum s-t cut in time Õ(m + n^{4/3} ɛ^{−16/3}), which is the fastest known algorithm for this problem as well. Previously, the best dependence on m and n was achieved by the algorithm of Goldberg and Rao (J. ACM 1998), which can be used to compute approximately maximum s-t flows in time Õ(m √n ɛ^{−1}), and approximately minimum s-t cuts in time Õ(m + n^{3/2} ɛ^{−3}). Research partially supported by NSF grant CCF-0843915.
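The basic primitive the abstract describes — an electrical flow obtained from one Laplacian solve — is small enough to show directly. A minimal sketch on a four-vertex graph with unit resistances; the edge list and choice of s and t are illustrative, and a dense pseudoinverse replaces the nearly-linear-time Laplacian solver.

```python
import numpy as np

# Electrical flow on a tiny undirected graph: inject one unit of current
# at s, extract it at t, and read edge flows off the vertex potentials.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]   # unit resistances (illustrative)
n = 4
B = np.zeros((len(edges), n))              # signed edge-vertex incidence matrix
for i, (u, v) in enumerate(edges):
    B[i, u], B[i, v] = 1.0, -1.0
L = B.T @ B                                # graph Laplacian (unit weights)

s, t = 0, 3
chi = np.zeros(n)
chi[s], chi[t] = 1.0, -1.0                 # unit s-t current demand

phi = np.linalg.pinv(L) @ chi              # potentials: solve L phi = chi
flow = B @ phi                             # edge flow = potential drop

# Kirchhoff's current law: net flow at each vertex matches the demand.
assert np.allclose(B.T @ flow, chi)
```

The max-flow algorithm solves a sequence of such systems with adaptively reweighted resistances, using a fast Laplacian solver in place of `pinv`.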
Algorithms, Graph Theory, and Linear Equations in Laplacian Matrices
Cited by 31 (0 self)
The Laplacian matrices of graphs are fundamental. In addition to facilitating the application of linear algebra to graph theory, they arise in many practical problems. In this talk we survey recent progress on the design of provably fast algorithms for solving linear equations in the Laplacian matrices of graphs. These algorithms motivate and rely upon fascinating primitives in graph theory, including low-stretch spanning trees, graph sparsifiers, ultra-sparsifiers, and local graph clustering. These are all connected by a definition of what it means for one graph to approximate another. While this definition is dictated by numerical linear algebra, it proves useful and natural from a graph-theoretic perspective.
A General Framework for Graph Sparsification
, 2011
Cited by 20 (1 self)
We present a general framework for constructing cut sparsifiers in undirected graphs: weighted subgraphs for which every cut has the same weight as in the original graph, up to a multiplicative factor of (1 ± ɛ). Using this framework, we simplify, unify and improve upon previous sparsification results. As simple instantiations of this framework, we show that sparsifiers can be constructed by sampling edges according to their strength (a result of Benczúr and Karger), effective resistance (a result of Spielman and Srivastava), or edge connectivity, or by sampling random spanning trees. Sampling according to edge connectivity is the most aggressive method, and the most challenging to analyze. Our proof that this method produces sparsifiers resolves an open question of Benczúr and Karger. While the above results are interesting from a combinatorial standpoint, we also prove new algorithmic results. In particular, we develop techniques that give the first (optimal) O(m)-time sparsification algorithm for unweighted graphs. Our algorithm has a running time of O(m) + Õ(n/ɛ²) for weighted graphs, which is also linear unless the input graph is very sparse itself. In both cases, this improves upon the previous best running times (due to Benczúr and Karger) of O(m log² n) (for the unweighted case) and O(m log³ n) (for the weighted case), respectively. Our algorithm constructs sparsifiers that contain O(n log n/ɛ²) edges in expectation; the only known construction of sparsifiers with fewer edges is by a substantially slower algorithm running in O(n³m/ɛ²) time. A key ingredient of our proofs is a natural generalization of Karger's bound on the number of small cuts in an undirected graph. Given the numerous applications of Karger's bound, we suspect that our generalization will also be of independent interest.
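The common core of the sampling schemes above is that keeping an edge with probability p and reweighting it by 1/p preserves every cut in expectation. A minimal sketch with uniform p on a random graph; this checks only the expectation, whereas the paper's non-uniform sampling (by strength, resistance, or connectivity) is what yields concentration for all cuts simultaneously. The graph, cut, and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 30, 0.5
A = np.triu((rng.random((n, n)) < 0.3).astype(float), k=1)
A = A + A.T                      # adjacency of a random unweighted graph

def cut_weight(W, mask):
    # Total weight of edges crossing the cut (S, V \ S) given by `mask`.
    return W[mask][:, ~mask].sum()

mask = np.zeros(n, dtype=bool)
mask[: n // 2] = True            # a fixed cut to track
exact = cut_weight(A, mask)

# Monte Carlo: resample the sparsifier many times; the mean cut weight of
# the reweighted subgraph should track the true cut weight.
est, trials = 0.0, 400
for _ in range(trials):
    K = np.triu(rng.random((n, n)) < p, k=1)
    K = K | K.T
    est += cut_weight(np.where(K, A / p, 0.0), mask)
est /= trials

assert exact > 0
assert abs(est - exact) / exact < 0.1
```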
SPECTRAL SPARSIFICATION IN THE SEMI-STREAMING SETTING
Cited by 18 (1 self)
Let G be a graph with n vertices and m edges. A sparsifier of G is a sparse graph on the same vertex set approximating G in some natural way. It allows us to say useful things about G while considering much fewer than m edges. The strongest commonly used notion of sparsification is spectral sparsification; H is a spectral sparsifier of G if the quadratic forms induced by the Laplacians of G and H approximate one another well. This notion is strictly stronger than the earlier concept of combinatorial sparsification. In this paper, we consider a semi-streaming setting, where we have only Õ(n) storage space, and we thus cannot keep all of G. In this case, maintaining a sparsifier instead gives us a useful approximation to G, allowing us to answer certain questions about the original graph without storing all of it. We introduce an algorithm for constructing a spectral sparsifier of G with O(n log n/ɛ²) edges (where ɛ is a parameter measuring the quality of the sparsifier), taking Õ(m) time and requiring only one pass over G. In addition, our algorithm has the property that it maintains at all times a valid sparsifier for the subgraph of G that we have received. Our algorithm is natural and conceptually simple. As we read edges of G, we add them to the sparsifier H. Whenever H gets too big, we resparsify it in Õ(n) time. Adding edges to a graph changes the structure of its sparsifier's restriction to the already existing edges. It would thus seem that the above procedure would cause errors to compound each time that we resparsify, and that we should need either to retain significantly more information or to re-examine previously discarded edges in order to construct the new sparsifier. However, we show how to use the information contained in H to perform this resparsification using only the edges retained by earlier steps, in nearly linear time.
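The one-pass control flow the abstract describes can be sketched schematically. The `sparsify` below is a uniform-sampling stand-in (keep half the edges, double the surviving weights), not the paper's spectral resparsification routine; it only shows the stream-and-resparsify loop, and the stream, budget, and helper are all illustrative.

```python
import random

def sparsify(H):
    # Stand-in resparsifier: keep a uniform half of the edges and double
    # their weights (preserves total weight in expectation). The paper
    # replaces this with a spectral routine that uses only H itself.
    random.shuffle(H)
    kept = H[: len(H) // 2]
    return [(u, v, 2.0 * w) for (u, v, w) in kept]

def stream_sparsifier(edge_stream, budget):
    H = []
    for (u, v, w) in edge_stream:
        H.append((u, v, w))           # always a valid sparsifier so far
        if len(H) > budget:           # too big: resparsify from H alone,
            H = sparsify(H)           # never revisiting discarded edges
    return H

random.seed(0)
stream = [(random.randrange(100), random.randrange(100), 1.0)
          for _ in range(5000)]
H = stream_sparsifier(stream, budget=500)
assert len(H) <= 500
```

The point of the paper is that the real `sparsify` can be made spectrally accurate without errors compounding across resparsifications.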
Improved spectral sparsification and numerical algorithms for SDD matrices
 STACS
, 2012
Cited by 16 (6 self)
We present three spectral sparsification algorithms that, on input a graph G with n vertices and m edges, return a graph H with n vertices and O(n log n/ɛ²) edges that provides a strong approximation of G. Namely, for all vectors x and any ɛ > 0, we have (1 − ɛ) x^T L_G x ≤ x^T L_H x ≤ (1 + ɛ) x^T L_G x, where L_G and L_H are the Laplacians of the two graphs. The first algorithm is a simple modification of the fastest known algorithm and runs in Õ(m log² n) time, an O(log n) factor faster than before. The second algorithm runs in Õ(m log n) time and generates a sparsifier with Õ(n log³ n) edges. The third algorithm applies to graphs where m > n log⁵ n and runs in Õ(m log_{m/(n log⁵ n)} n) time. In the range where m > n^{1+r} for some constant r, this becomes Õ(m). The improved sparsification algorithms are employed to accelerate linear system solvers and algorithms for computing fundamental eigenvectors of dense SDD matrices.
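The defining inequality above can be checked numerically: the smallest ɛ satisfying it over all x is read off the generalized eigenvalues of the pair (L_H, L_G) restricted to the complement of the shared all-ones null space. A small sketch where G is the complete graph (so L_G acts as n·I on that complement) and H is a uniformly sampled, reweighted subgraph; both choices and all parameters are illustrative, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 20, 0.7

W_G = np.ones((n, n)) - np.eye(n)          # complete graph K_n, unit weights
K = np.triu(rng.random((n, n)) < p, k=1)
K = K | K.T
W_H = np.where(K, W_G / p, 0.0)            # kept edges reweighted by 1/p

L_G = np.diag(W_G.sum(1)) - W_G
L_H = np.diag(W_H.sum(1)) - W_H

_, U = np.linalg.eigh(L_G)
U = U[:, 1:]                               # basis orthogonal to all-ones
# On this subspace L_G = n * I, so the pencil eigenvalues are eig(L_H)/n.
lam = np.linalg.eigvalsh(U.T @ L_H @ U) / n
eps = max(1.0 - lam.min(), lam.max() - 1.0)

# By construction of eps, the sandwich inequality holds for every x:
x = rng.standard_normal(n)
qG, qH = x @ L_G @ x, x @ L_H @ x
assert (1 - eps) * qG - 1e-8 <= qH <= (1 + eps) * qG + 1e-8
```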
An almost-linear-time algorithm for approximate max flow in undirected graphs, and its multicommodity generalizations
Cited by 14 (7 self)
In this paper we present an almost-linear-time algorithm for solving approximate maximum flow in undirected graphs. In particular, given a graph with m edges, we show how to produce a (1 − ε)-approximate maximum flow in time O(m^{1+o(1)} · ε^{−2}). Furthermore, we present this algorithm as part of a general framework that also allows us to achieve a running time of O(m^{1+o(1)} ε^{−2} k²) for the maximum concurrent k-commodity flow problem, the first such algorithm with an almost-linear dependence on m. We also note that independently Jonah Sherman has produced an almost-linear-time algorithm for maximum flow, and we thank him for coordinating submissions.
Navigating Central Path with Electrical Flows: From Flows to Matchings, and Back
 FOCS
, 2013
"... We present an Õ(m ..."
Approximating the exponential, the Lanczos method, and an Õ(m)-time spectral algorithm for balanced separator
In: Proceedings of the 44th Annual ACM Symposium on Theory of Computing (STOC)
, 2012
Cited by 10 (3 self)
We give a novel spectral approximation algorithm for the balanced separator problem that, given a graph G, a constant balance b ∈ (0, 1/2], and a parameter γ, either finds an Ω(b)-balanced cut of conductance O(√γ) in G, or outputs a certificate that all b-balanced cuts in G have conductance at least γ, and runs in time Õ(m). This settles the question of designing asymptotically optimal spectral algorithms for balanced separator. Our algorithm relies on a variant of the heat kernel random walk and requires, as a subroutine, an algorithm to compute exp(−L)v, where L is the Laplacian of a graph related to G and v is a vector. Algorithms for computing the matrix-exponential-vector product efficiently comprise our next set of results. Our main result here is a new algorithm which computes a good approximation to exp(−A)v for a class of symmetric positive semidefinite (PSD) matrices A and a given vector v, in time roughly Õ(m_A), where m_A is the number of nonzero entries of A. This uses, in a non-trivial way, the breakthrough result of Spielman and Teng on inverting symmetric and diagonally dominant matrices in Õ(m_A) time. Finally, we prove that e^{−x} can be uniformly approximated, up to a small additive error, on a non-negative interval [a, b] with a polynomial of ...
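The Lanczos idea named in the title can be sketched directly: build an orthonormal Krylov basis Q_k for span{v, Av, ..., A^{k−1}v} and approximate exp(−A)v ≈ ‖v‖ · Q_k · exp(−T_k) · e₁, where T_k is the tridiagonal matrix the iteration produces. A small dense sketch with full reorthogonalization; the random PSD matrix A, dimension, and Krylov depth k are illustrative, and a dense eigendecomposition serves as the exact reference.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 60, 20
M = rng.standard_normal((n, n))
A = M @ M.T / n                          # symmetric PSD test matrix
v = rng.standard_normal(n)

# Lanczos iteration: orthonormal basis Q and tridiagonal coefficients.
beta0 = np.linalg.norm(v)
Q = np.zeros((n, k))
alpha, beta = np.zeros(k), np.zeros(k)
Q[:, 0] = v / beta0
for j in range(k):
    w = A @ Q[:, j]
    if j > 0:
        w -= beta[j - 1] * Q[:, j - 1]
    alpha[j] = Q[:, j] @ w
    w -= alpha[j] * Q[:, j]
    w -= Q[:, : j + 1] @ (Q[:, : j + 1].T @ w)   # full reorthogonalization
    beta[j] = np.linalg.norm(w)
    if j + 1 < k:
        Q[:, j + 1] = w / beta[j]

T = np.diag(alpha) + np.diag(beta[: k - 1], 1) + np.diag(beta[: k - 1], -1)
wT, UT = np.linalg.eigh(T)
expT_e1 = UT @ (np.exp(-wT) * UT[0])     # exp(-T) e_1 via eigendecomposition
approx = beta0 * (Q @ expT_e1)

# Exact reference: exp(-A) v from the eigendecomposition of A.
wA, UA = np.linalg.eigh(A)
exact = UA @ (np.exp(-wA) * (UA.T @ v))
rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
assert rel_err < 1e-6
```

The paper's contribution is making each `A @ q` matvec implicit and cheap (via fast SDD solves) and bounding the polynomial degree k needed for a uniform approximation of e^{−x}.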