Results 11–20 of 24
A nearly-m log n time solver for SDD linear systems
In Proceedings of the IEEE 52nd Annual Symposium on Foundations of Computer Science (FOCS), 2011
A fast solver for a class of linear systems, 2008
"... The solution of linear systems is a problem of fundamental theoretical importance but also one with a myriad of applications in numerical mathematics, engineering and science. Linear systems that are generated by realworld applications frequently fall into special classes. Recent research led to a ..."
Abstract

Cited by 8 (3 self)
 Add to MetaCart
(Show Context)
The solution of linear systems is a problem of fundamental theoretical importance but also one with a myriad of applications in numerical mathematics, engineering and science. Linear systems that are generated by real-world applications frequently fall into special classes. Recent research led to a fast algorithm for solving symmetric diagonally dominant (SDD) linear systems. We give an overview of this solver and survey the underlying notions and tools from algebra, probability and graph algorithms. We also discuss some of the many and diverse applications of SDD solvers.
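To make the class of systems concrete, here is a minimal sketch (our own toy code, not the surveyed solver): graph Laplacians are the canonical SDD matrices, and conjugate gradients is the iterative method that fast SDD solvers accelerate via preconditioning.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import cg

# Laplacian of a 4-cycle, regularized slightly so it is nonsingular.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4
L = np.zeros((n, n))
for u, v in edges:
    L[u, u] += 1; L[v, v] += 1
    L[u, v] -= 1; L[v, u] -= 1
A = csr_matrix(L + 1e-6 * np.eye(n))  # SDD: each diagonal dominates its row

b = np.array([1.0, -1.0, 1.0, -1.0])
x, info = cg(A, b)        # info == 0 means the iteration converged
print(np.allclose(A @ x, b, atol=1e-3))  # True
```

A fast SDD solver plays the role of `cg` here but with a provable near-linear running time in the number of nonzeros.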
Towards an SDP-based Approach to Spectral Methods: A Nearly-Linear-Time Algorithm for Graph Partitioning and Decomposition
"... In this paper, we consider the following graph partitioning problem: The input is an undirected graph G = (V, E), a balance parameter b ∈ (0, 1/2] and a target conductance value γ ∈ (0, 1). The output is a cut which, if nonempty, is of conductance at most O ( f), for some function f (G, γ), and whi ..."
Abstract

Cited by 5 (2 self)
 Add to MetaCart
(Show Context)
In this paper, we consider the following graph partitioning problem: The input is an undirected graph G = (V, E), a balance parameter b ∈ (0, 1/2] and a target conductance value γ ∈ (0, 1). The output is a cut which, if non-empty, is of conductance at most O(f), for some function f(G, γ), and which is either balanced or well correlated with all cuts of conductance at most γ. In a seminal paper, Spielman and Teng [16] gave an Õ(|E|/γ²)-time algorithm for f = √(γ log³ |V|) and used it to decompose graphs into a collection of near-expanders [18]. We present a new spectral algorithm for this problem which runs in time Õ(|E|/γ) for f = √γ. Our result yields the first nearly-linear time algorithm for the classic Balanced Separator problem that achieves the asymptotically optimal approximation guarantee for spectral methods. Our method has the advantage of being conceptually simple and relies on a primal-dual semidefinite-programming (SDP) approach. We first consider a natural SDP relaxation for the Balanced Separator problem. While it is easy to obtain from this SDP a certificate of the fact that the graph has no balanced cut of conductance less than γ, somewhat surprisingly, we can obtain a certificate for the stronger correlation condition. This is achieved via a novel separation oracle for our SDP and by appealing to Arora and Kale’s [3] framework to bound the running time. Our result contains technical ingredients that may be of independent interest.
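The quantity the algorithm targets can be spelled out with a small, illustrative function (names and example are ours, not from the paper): the conductance of a cut (S, V∖S) is the number of crossing edges divided by the smaller side's volume.

```python
# Conductance of a cut in an unweighted graph: |E(S, V\S)| / min(vol(S), vol(V\S)).
def conductance(edges, S, n):
    S = set(S)
    cut = sum(1 for u, v in edges if (u in S) != (v in S))
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1; deg[v] += 1
    vol_S = sum(deg[u] for u in S)
    vol_rest = sum(deg) - vol_S
    return cut / min(vol_S, vol_rest)

# Two triangles joined by one bridge edge: the bridging cut has low conductance.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(conductance(edges, {0, 1, 2}, 6))  # 1/7 ≈ 0.142857...
```

A γ-conductance cut in this sense is exactly what the output of the partitioning algorithm above certifies or refutes.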
R.: Solving SDD linear systems in time Õ(m log n log(1/ε)). arXiv preprint, available at http://arxiv.org/abs
"... We present an algorithm that on input of an n × n symmetric diagonally dominant matrix A with m nonzero entries constructs in time Õ(m log n) a solver which on input of a vector b computes a vector x satisfying x − A + bA < A + bA in time Õ(m log n log(1 /)) 1. The new algorithm exploits ..."
Abstract

Cited by 4 (1 self)
 Add to MetaCart
(Show Context)
We present an algorithm that on input of an n × n symmetric diagonally dominant matrix A with m nonzero entries constructs in time Õ(m log n) a solver which on input of a vector b computes a vector x satisfying ‖x − A⁺b‖_A < ε‖A⁺b‖_A in time Õ(m log n log(1/ε)). The new algorithm exploits previously unknown structural properties of the output of the incremental sparsification algorithm given in [Koutis, Miller, Peng, FOCS 2010]. We also accelerate the construction of low-stretch spanning trees by rounding the edge weights to ensure that each iteration of the hierarchical star decomposition encounters a small number of distinct edge lengths.
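The weight-rounding idea mentioned at the end can be sketched in a few lines (our own toy code, not the paper's implementation): snapping each edge length to the nearest power of two leaves only O(log(w_max/w_min)) distinct lengths while changing any weight by at most a constant factor.

```python
import math

def round_to_pow2(w):
    # Snap a positive weight to the nearest power of two (in log scale).
    return 2.0 ** round(math.log2(w))

weights = [0.9, 1.1, 3.7, 4.2, 100.5]
rounded = [round_to_pow2(w) for w in weights]
print(rounded)            # [1.0, 1.0, 4.0, 4.0, 128.0]
print(len(set(rounded)))  # 3 distinct lengths instead of 5
```

Fewer distinct edge lengths means each level of the hierarchical star decomposition has less work to do, which is the source of the speedup claimed above.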
Approximate spectral clustering via randomized sketching. arXiv preprint arXiv:1311.2854, 2013
"... Spectral clustering is arguably one of the most important algorithms in data mining and machine intelligence; however, its computational complexity makes it a challenge to use it for large scale data analysis. Recently, several approximation algorithms for spectral clustering have been developed in ..."
Abstract

Cited by 2 (1 self)
 Add to MetaCart
(Show Context)
Spectral clustering is arguably one of the most important algorithms in data mining and machine intelligence; however, its computational complexity makes it a challenge to use it for large-scale data analysis. Recently, several approximation algorithms for spectral clustering have been developed in order to alleviate the relevant costs, but theoretical results are lacking. In this paper, we present a novel approximation algorithm for spectral clustering with strong theoretical evidence of its performance. Our algorithm is based on approximating the eigenvectors of the Laplacian matrix using random projections, a.k.a. randomized sketching. Our experimental results demonstrate that the proposed approximation algorithm compares remarkably well to the exact algorithm.
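A rough sketch of the randomized-projection idea (assumptions and matrix choice are ours, not the paper's exact algorithm): an invariant subspace of a symmetric matrix can be approximated by multiplying it against a random Gaussian test matrix and orthonormalizing the result, as in randomized range-finding.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 3
M = rng.standard_normal((n, n))
A = M + M.T                      # symmetric stand-in for a (shifted) Laplacian
Omega = rng.standard_normal((n, k))  # random sketch/projection matrix
Y = A @ (A @ Omega)              # one power iteration sharpens the sketch
Q, _ = np.linalg.qr(Y)           # columns of Q approximate top eigenvectors
print(Q.shape)                   # (200, 3)
```

Running k-means on the rows of such a `Q` (instead of exactly computed Laplacian eigenvectors) is the kind of trade-off the approximation analysis above addresses.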
Least squares ranking on graphs, 2011
"... Given a set of alternatives to be ranked, and some pairwise comparison data, ranking is a least squares computation on a graph. The vertices are the alternatives, and the edge values comprise the comparison data. The basic idea is very simple and old – come up with values on vertices such that their ..."
Abstract

Cited by 2 (1 self)
 Add to MetaCart
(Show Context)
Given a set of alternatives to be ranked, and some pairwise comparison data, ranking is a least squares computation on a graph. The vertices are the alternatives, and the edge values comprise the comparison data. The basic idea is very simple and old – come up with values on vertices such that their differences match the given edge data. Since an exact match will usually be impossible, one settles for matching in a least squares sense. This formulation was first described by Leake in 1976 for ranking football teams and appears as an example in Professor Gilbert Strang’s classic linear algebra textbook. If one is willing to look into the residual a little further, then the problem really comes alive, as shown effectively by the remarkable recent paper of Jiang et al. With or without this twist, the humble least squares problem on graphs has far-reaching connections with many current areas of research. These connections are to theoretical computer science (spectral graph theory, and multilevel methods for graph Laplacian systems); numerical analysis (algebraic multigrid, and finite element exterior calculus); other mathematics (Hodge decomposition, and random clique complexes); and applications (arbitrage, and ranking of sports teams). Not all of these connections are explored in this paper, but many are. The underlying ideas are easy to explain, requiring only the four fundamental subspaces from elementary linear algebra.
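The formulation described above fits in a few lines (a minimal illustration with made-up comparison data, not the paper's code): find vertex values x whose differences best match the pairwise margins y, i.e. minimize ‖Bx − y‖ over the graph's edge-vertex incidence matrix B.

```python
import numpy as np

# (winner, loser, margin): item u beat item v by this many points.
comparisons = [(0, 1, 2.0),
               (1, 2, 1.0),
               (0, 2, 2.5)]
n = 3
B = np.zeros((len(comparisons), n))
y = np.zeros(len(comparisons))
for i, (u, v, margin) in enumerate(comparisons):
    B[i, u], B[i, v] = 1.0, -1.0   # row encodes the difference x[u] - x[v]
    y[i] = margin

# B has a one-dimensional nullspace (constants); lstsq picks the
# minimum-norm solution, which fixes the values to sum to zero.
x, *_ = np.linalg.lstsq(B, y, rcond=None)
ranking = np.argsort(-x)
print(ranking)  # [0 1 2]: item 0 ranked first
```

The normal equations here are exactly a graph Laplacian system, which is the bridge to the SDD solvers discussed elsewhere on this page.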
Hierarchical Diagonal Blocking and Precision Reduction Applied to Combinatorial Multigrid
"... Abstract—Memory bandwidth is a major limiting factor in the scalability of parallel iterative algorithms that rely on sparse matrixvector multiplication (SpMV). This paper introduces Hierarchical Diagonal Blocking (HDB), an approach which we believe captures many of the existing optimization techni ..."
Abstract

Cited by 2 (0 self)
 Add to MetaCart
(Show Context)
Abstract—Memory bandwidth is a major limiting factor in the scalability of parallel iterative algorithms that rely on sparse matrix-vector multiplication (SpMV). This paper introduces Hierarchical Diagonal Blocking (HDB), an approach which we believe captures many of the existing optimization techniques for SpMV in a common representation. Using this representation in conjunction with precision-reduction techniques, we develop and evaluate high-performance SpMV kernels. We also study the implications of using our SpMV kernels in a complete iterative solver. Our method of choice is a Combinatorial Multigrid solver that can fully utilize our fastest reduced-precision SpMV kernel without sacrificing the quality of the solution. We provide extensive empirical evaluation of the effectiveness of the approach on a variety of benchmark matrices, demonstrating substantial speedups on all matrices considered.
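For reference, here is the plain CSR baseline kernel whose memory traffic HDB and precision reduction aim to cut (our own illustrative code; the paper's hierarchical, blocked kernels are far more involved).

```python
import numpy as np

def csr_spmv(indptr, indices, data, x):
    # y = A @ x for a matrix A stored in compressed sparse row (CSR) form:
    # row r's nonzeros live in data[indptr[r]:indptr[r+1]], with column
    # indices in the matching slice of `indices`.
    y = np.zeros(len(indptr) - 1)
    for row in range(len(y)):
        for k in range(indptr[row], indptr[row + 1]):
            y[row] += data[k] * x[indices[k]]
    return y

# The 2x2 matrix [[4, 1], [0, 3]] in CSR form.
indptr, indices, data = [0, 2, 3], [0, 1, 1], [4.0, 1.0, 3.0]
print(csr_spmv(indptr, indices, data, np.array([1.0, 2.0])))  # [6. 6.]
```

Every access to `data` and `x` here is a memory read per arithmetic operation, which is why SpMV is bandwidth-bound and why blocking and reduced-precision storage pay off.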