Results 1–10 of 17
Twice-Ramanujan sparsifiers
In Proc. 41st STOC
, 2009
Abstract
Cited by 88 (12 self)
We prove that for every d > 1 and every undirected, weighted graph G = (V, E), there exists a weighted graph H with at most ⌈d|V|⌉ edges such that for every x ∈ ℝ^V,

    1 ≤ (xᵀ L_H x) / (xᵀ L_G x) ≤ (d + 1 + 2√d) / (d + 1 − 2√d),

where L_G and L_H are the Laplacian matrices of G and H, respectively.
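The inequality above sandwiches the quadratic form of H between multiples of that of G. A minimal numpy sketch (not the paper's construction — the graphs G and H here are a hand-picked toy example) evaluates the ratio xᵀL_Hx / xᵀL_Gx on random test vectors:

```python
import numpy as np

def laplacian(n, weighted_edges):
    # Graph Laplacian L = D - W built from a weighted edge list.
    L = np.zeros((n, n))
    for u, v, w in weighted_edges:
        L[u, u] += w
        L[v, v] += w
        L[u, v] -= w
        L[v, u] -= w
    return L

# Toy example (hypothetical, not the paper's construction):
# G is a unit-weight triangle; H drops one edge and reweights the rest.
LG = laplacian(3, [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0)])
LH = laplacian(3, [(0, 1, 1.5), (1, 2, 1.5)])

# Evaluate the quadratic-form ratio x^T L_H x / x^T L_G x on random
# vectors with the constant (nullspace) component projected out.
rng = np.random.default_rng(0)
ratios = []
for _ in range(100):
    x = rng.standard_normal(3)
    x -= x.mean()
    ratios.append((x @ LH @ x) / (x @ LG @ x))
# For this pair the ratio always lies in [0.5, 1.5], the extreme
# generalized eigenvalues of the pencil (L_H, L_G).
```

For this hand-picked H the ratio dips below 1; the paper's point is that a sparsifier with only ⌈d|V|⌉ edges can keep the ratio inside the much tighter one-sided band stated above.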
Approaching optimality for solving SDD linear systems
, 2010
Abstract
Cited by 45 (7 self)
We present an algorithm that on input a graph G with n vertices and m + n − 1 edges and a value k, produces an incremental sparsifier Ĝ with n − 1 + m/k edges, such that the condition number of G with Ĝ is bounded above by Õ(k log² n), with probability 1 − p. The algorithm runs in time Õ((m log n + n log² n) log(1/p)). As a result, we obtain an algorithm that on input an n × n symmetric diagonally dominant matrix A with m + n − 1 nonzero entries and a vector b, computes a vector x̄ satisfying ‖x̄ − A⁺b‖_A < ε‖A⁺b‖_A, in time Õ(m log² n log(1/ε)). The solver is based on a recursive application of the incremental sparsifier that produces a hierarchy of graphs which is then used to construct a recursive preconditioned Chebyshev iteration.
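The quantity being bounded is the condition number of G preconditioned by a subgraph Ĝ. A dense toy sketch (the paper's algorithm is nearly linear time; this brute-force pseudoinverse version, with a hand-picked spanning tree as Ĝ, is only illustrative) computes it as the ratio of extreme nonzero generalized eigenvalues:

```python
import numpy as np

def laplacian(n, weighted_edges):
    L = np.zeros((n, n))
    for u, v, w in weighted_edges:
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    return L

def generalized_condition_number(LG, LH):
    # Ratio of the extreme nonzero eigenvalues of LH^+ LG, computed
    # densely via the pseudoinverse (fine for toy sizes only).
    eig = np.linalg.eigvals(np.linalg.pinv(LH) @ LG).real
    nonzero = eig[eig > 1e-9]
    return nonzero.max() / nonzero.min()

# Hypothetical example: G is the 4-cycle and the preconditioner is a
# spanning tree (a path).  The condition number equals 1 + R, where
# R = 3 is the effective resistance across the removed edge, so kappa = 4.
cycle = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0)]
tree = cycle[:3]
kappa = generalized_condition_number(laplacian(4, cycle), laplacian(4, tree))
```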
Single pass sparsification in the streaming model with edge deletions. arXiv preprint arXiv:1203.4900
, 2012
SPARSE QUADRATIC FORMS AND THEIR GEOMETRIC APPLICATIONS
Abstract
Cited by 11 (0 self)
In what follows all matrices are assumed to have real entries, and square matrices are always assumed to be symmetric unless stated otherwise. The support of a k × n matrix A = (a_ij) will be denoted below by supp(A) = { (i, j) ∈ {1, ..., k} × {1, ..., n} : a_ij ≠ 0 }.
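The supp(A) definition translates directly to code; a small sketch keeping the text's 1-based index convention:

```python
import numpy as np

def supp(A):
    # supp(A) = {(i, j) : a_ij != 0}, using the text's 1-based indices.
    k, n = A.shape
    return {(i + 1, j + 1) for i in range(k) for j in range(n)
            if A[i, j] != 0}

A = np.array([[1.0, 0.0, 2.0],
              [0.0, 0.0, -3.0]])
pairs = supp(A)   # {(1, 1), (1, 3), (2, 3)}
```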
A nearly m log n time solver for SDD linear systems
In Proceedings of the IEEE 52nd Annual Symposium on Foundations of Computer Science (FOCS)
, 2011
Gelling, and Melting, Large Graphs by Edge Manipulation
Abstract
Cited by 7 (2 self)
Controlling the dissemination of an entity (e.g., meme, virus, etc.) on a large graph is an interesting problem in many disciplines. Examples include epidemiology, computer security, and marketing. So far, previous studies have mostly focused on removing or inoculating nodes to achieve the desired outcome. We shift the problem to the level of edges and ask: which edges should we add or delete in order to speed up or contain a dissemination? First, we propose effective and scalable algorithms to solve these dissemination problems. Second, we conduct a theoretical study of the two problems and our methods, including the hardness of the problem, the accuracy and complexity of our methods, and the equivalence between the different strategies and problems. Third and lastly, we conduct experiments on real topologies of varying sizes to demonstrate the effectiveness and scalability of our approaches.
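The abstract does not say which edge score the authors use. One common proxy (an assumption here, not taken from the abstract) is the leading adjacency eigenvalue λ₁, which governs epidemic thresholds in standard SIS-type models; a brute-force greedy sketch deletes the edge whose removal most lowers λ₁:

```python
import numpy as np

def leading_eigenvalue(A):
    return np.linalg.eigvalsh(A).max()

def best_edge_to_delete(A):
    # Brute-force greedy step: try every edge deletion and keep the one
    # that most lowers lambda_1 of the adjacency matrix.
    n = A.shape[0]
    best, best_lam = None, leading_eigenvalue(A)
    for i in range(n):
        for j in range(i + 1, n):
            if A[i, j] != 0:
                B = A.copy()
                B[i, j] = B[j, i] = 0.0
                lam = leading_eigenvalue(B)
                if lam < best_lam:
                    best, best_lam = (i, j), lam
    return best, best_lam

# Hypothetical toy graph: triangle (0, 1, 2) plus pendant vertex 3 on 0.
A = np.zeros((4, 4))
for u, v in [(0, 1), (1, 2), (0, 2), (0, 3)]:
    A[u, v] = A[v, u] = 1.0
edge, lam = best_edge_to_delete(A)
```

This O(|E|) scan per deletion is only a sketch; the paper's contribution is precisely that its algorithms scale to large graphs.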
Spectral Sparsification of Graphs: Theory and Algorithms
, 2013
Abstract
Cited by 7 (0 self)
Graph sparsification is the approximation of an arbitrary graph by a sparse graph. We explain what it means for one graph to be a spectral approximation of another and review the development of algorithms for spectral sparsification. In addition to being an interesting concept, spectral sparsification has been an important tool in the design of nearly linear-time algorithms for solving systems of linear equations in symmetric, diagonally dominant matrices. The fast solution of these linear systems has already led to breakthrough results in combinatorial optimization, including a faster algorithm for finding approximate maximum flows and minimum cuts in an undirected network.
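One standard construction from this literature is sampling edges with probability proportional to weight times effective resistance (Spielman–Srivastava). A dense numpy sketch (illustrative only — practical implementations avoid the pseudoinverse and approximate the resistances instead):

```python
import numpy as np

def laplacian(n, weighted_edges):
    L = np.zeros((n, n))
    for u, v, w in weighted_edges:
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    return L

def sparsify_by_effective_resistance(n, edges, q, rng):
    # Sample q edges with probability proportional to w_e * R_e, where
    # R_e is the effective resistance of e; kept edges are reweighted
    # by 1/(q * p_e) so the sparsifier is unbiased in expectation.
    Lpinv = np.linalg.pinv(laplacian(n, edges))
    scores = []
    for u, v, w in edges:
        b = np.zeros(n)
        b[u], b[v] = 1.0, -1.0
        scores.append(w * (b @ Lpinv @ b))
    probs = np.array(scores) / sum(scores)
    new_w = {}
    for idx in rng.choice(len(edges), size=q, p=probs):
        u, v, w = edges[idx]
        new_w[(u, v)] = new_w.get((u, v), 0.0) + w / (q * probs[idx])
    return [(u, v, w) for (u, v), w in new_w.items()]

# Hypothetical example: sparsify the complete graph K6 (15 edges).
edges = [(u, v, 1.0) for u in range(6) for v in range(u + 1, 6)]
H = sparsify_by_effective_resistance(6, edges, q=40,
                                     rng=np.random.default_rng(1))
```

On K6 all edges have the same leverage score, so the sampler reduces to uniform sampling with reweighting; the total edge weight of H matches that of G exactly.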
NEARLY LINEAR TIME ALGORITHMS FOR PRECONDITIONING AND SOLVING SYMMETRIC, DIAGONALLY DOMINANT LINEAR SYSTEMS
, 2014
Abstract
Cited by 3 (0 self)
We present a randomized algorithm that on input a symmetric, weakly diagonally dominant n-by-n matrix A with m nonzero entries and an n-vector b produces an x̃ such that ‖x̃ − A†b‖_A ≤ ε‖A†b‖_A in expected time O(m log^c n log(1/ε)) for some constant c. By applying this algorithm inside the inverse power method, we compute approximate Fiedler vectors in a similar amount of time. The algorithm applies subgraph preconditioners in a recursive fashion. These preconditioners improve upon the subgraph preconditioners first introduced by Vaidya in 1990. For any symmetric, weakly diagonally dominant matrix A with nonpositive off-diagonal entries and k ≥ 1, we construct in time O(m log^c n) a preconditioner B of A with at most 2(n − 1) + O((m/k) log^39 n) nonzero off-diagonal entries such that the finite generalized condition number κ_f(A, B) is at most k, for some other constant c. In the special case when the nonzero structure of the matrix is planar, the corresponding linear system solver runs in expected time O(n log² n + n log n log log n log(1/ε)). We hope that our introduction of algorithms of low asymptotic complexity will lead to the development of algorithms that are also fast in practice.
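The abstract mentions computing approximate Fiedler vectors via the inverse power method. A dense toy sketch, with a pseudoinverse solve standing in for the fast SDD solver (the test graph is hypothetical):

```python
import numpy as np

def laplacian(n, weighted_edges):
    L = np.zeros((n, n))
    for u, v, w in weighted_edges:
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    return L

def approx_fiedler_vector(L, iters=200):
    # Inverse power method on the complement of the all-ones nullspace.
    # Each iteration solves L y = x; here a dense pseudoinverse stands
    # in for the fast SDD solver described in the abstract.
    Lpinv = np.linalg.pinv(L)
    x = np.random.default_rng(0).standard_normal(L.shape[0])
    for _ in range(iters):
        x -= x.mean()              # project out the constant vector
        x = Lpinv @ x
        x /= np.linalg.norm(x)
    return x

# Hypothetical test graph: two triangles joined by one bridge edge.
# The Fiedler vector should separate {0, 1, 2} from {3, 4, 5} by sign.
edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0),
         (3, 4, 1.0), (4, 5, 1.0), (3, 5, 1.0),
         (2, 3, 1.0)]
v = approx_fiedler_vector(laplacian(6, edges))
signs = np.sign(v)
```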