Results 1–10 of 30
Approaching optimality for solving SDD linear systems
, 2010
"... We present an algorithm that on input a graph G with n vertices and m + n − 1 edges and a value k, produces an incremental sparsifier ˆ G with n − 1+m/k edges, such that the condition number of G with ˆ G is bounded above by Õ(k log2 n), with probability 1 − p. The algorithm runs in time Õ((m log n ..."
Cited by 44 (7 self)
We present an algorithm that, on input a graph G with n vertices and m + n − 1 edges and a value k, produces an incremental sparsifier Ĝ with n − 1 + m/k edges, such that the condition number of G with Ĝ is bounded above by Õ(k log² n), with probability 1 − p. The algorithm runs in time Õ((m log n + n log² n) log(1/p)). As a result, we obtain an algorithm that, on input an n × n symmetric diagonally dominant matrix A with m + n − 1 nonzero entries and a vector b, computes a vector x̄ satisfying ‖x̄ − A⁺b‖_A < ε‖A⁺b‖_A, in time Õ(m log² n log(1/ε)). The solver is based on a recursive application of the incremental sparsifier, which produces a hierarchy of graphs that is then used to construct a recursive preconditioned Chebyshev iteration.
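The solver above applies a preconditioner inside a Chebyshev iteration. As a minimal illustration of the general idea of preconditioned iterative solving of an SDD system (using conjugate gradient with a simple Jacobi preconditioner as a stand-in, not the paper's incremental-sparsifier hierarchy), a sketch:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=1000):
    """Preconditioned conjugate gradient for a symmetric
    positive-definite A. M_inv applies the preconditioner inverse."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# toy SDD system: a strictly diagonally dominant tridiagonal matrix
n = 50
A = np.diag(3.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
b = np.random.default_rng(0).standard_normal(n)
d = np.diag(A)
x = pcg(A, b, lambda r: r / d)  # Jacobi (diagonal) preconditioner
```

A real SDD solver in this line of work replaces the Jacobi step with a solve against a sparsified graph, which is where the condition-number bound enters.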
Cover times, blanket times, and majorizing measures
 In ACM Symposium on Theory of Computing
, 2011
"... ar ..."
Algorithms for leader selection in stochastically forced consensus networks
 IEEE Trans. Automat. Control
"... Abstract—We are interested in assigning a prespecified number of nodes as leaders in order to minimize the meansquare deviation from consensus in stochastically forced networks. This problem arises in several applications including control of vehicular formations and localization in sensor networ ..."
Cited by 12 (3 self)
Abstract: We are interested in assigning a prespecified number of nodes as leaders in order to minimize the mean-square deviation from consensus in stochastically forced networks. This problem arises in several applications, including control of vehicular formations and localization in sensor networks. For networks with leaders subject to noise, we show that the Boolean constraints (which indicate whether a node is a leader) are the only source of nonconvexity. By relaxing these constraints to their convex hull we obtain a lower bound on the globally optimal value. We also use a simple but efficient greedy algorithm to identify leaders and to compute an upper bound. For networks with leaders that perfectly follow their desired trajectories, we identify an additional source of nonconvexity in the form of a rank constraint. Removal of the rank constraint and relaxation of the Boolean constraints yields a semidefinite program, for which we develop a customized algorithm well-suited for large networks. Several examples ranging from regular lattices to random graphs are provided to illustrate the effectiveness of the developed algorithms. Index Terms: alternating direction method of multipliers (ADMM), consensus networks, convex optimization, convex relaxations, greedy algorithm, leader selection, performance bounds, semidefinite programming (SDP), sensor selection, variance amplification.
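A common objective in this line of work scores a leader set by the steady-state follower variance, tr(L_f⁻¹), where L_f is the Laplacian with leader rows and columns removed (the "grounded" Laplacian). As a toy sketch of the greedy upper-bound computation (an illustration under that assumption, not the paper's implementation), on a connected graph:

```python
import numpy as np

def greedy_leaders(L, k):
    """Greedily pick k leaders to minimize tr(L_f^{-1}), the
    steady-state follower variance under the grounded-Laplacian
    model. L: Laplacian of a connected graph. (Hypothetical
    helper, O(k * n) dense inversions; a sketch only.)"""
    n = L.shape[0]
    leaders = []
    for _ in range(k):
        best, best_val = None, np.inf
        for v in range(n):
            if v in leaders:
                continue
            keep = [i for i in range(n) if i not in leaders + [v]]
            Lf = L[np.ix_(keep, keep)]       # grounded Laplacian
            val = np.trace(np.linalg.inv(Lf))
            if val < best_val:
                best, best_val = v, val
        leaders.append(best)
    return leaders, best_val

# 5-node path: the middle node is the best single leader
L = np.diag([1.0, 2.0, 2.0, 2.0, 1.0])
for i in range(4):
    L[i, i + 1] = L[i + 1, i] = -1.0
leaders, val = greedy_leaders(L, 1)
```

On the path, grounding at the center splits the followers into two short chains, which is why the center wins.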
A nearly-m log n time solver for SDD linear systems
 In Proceedings of the IEEE 52nd Annual Symposium on Foundations of Computer Science (FOCS)
, 2011
"... ar ..."
Lean algebraic multigrid (LAMG): Fast graph Laplacian linear solver
 arXiv e-prints
"... Abstract. Laplacian matrices of graphs arise in largescale computational applications such as semisupervised machine learning; spectral clustering of images, genetic data and web pages; transportation network flows; electrical resistor circuits; and elliptic partial differential equations discreti ..."
Cited by 8 (0 self)
Abstract: Laplacian matrices of graphs arise in large-scale computational applications such as semi-supervised machine learning; spectral clustering of images, genetic data, and web pages; transportation network flows; electrical resistor circuits; and elliptic partial differential equations discretized on unstructured grids with finite elements. A Lean Algebraic Multigrid (LAMG) solver of the symmetric linear system Ax = b is presented, where A is a graph Laplacian. LAMG’s run time and storage are empirically demonstrated to scale linearly with the number of edges. LAMG consists of a setup phase, during which a sequence of increasingly coarser Laplacian systems is constructed, and an iterative solve phase using multigrid cycles. General graphs pose algorithmic challenges not encountered in traditional multigrid applications. LAMG combines a lean piecewise-constant interpolation, judicious node aggregation based on a new node proximity measure (the affinity), and an energy correction of coarse-level systems. This results in fast convergence and substantial setup and memory savings. A serial LAMG implementation scaled linearly for a diverse set of 3774 real-world graphs with up to 47 million edges, with no parameter tuning. LAMG was more robust than the UMFPACK direct solver and Combinatorial Multigrid (CMG), although CMG was faster than LAMG on average. Our methodology is extensible to eigenproblems and other graph …
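The setup phase builds coarser Laplacians via aggregation with piecewise-constant interpolation: each aggregate of fine nodes maps to one coarse node, and the coarse operator is P᷀ᵀLP. As a toy sketch of that mechanism (aggregating each node with its heaviest-weight neighbor, a crude stand-in for LAMG's affinity measure), on a connected graph:

```python
import numpy as np

def aggregate_coarsen(L):
    """Piecewise-constant coarsening sketch: each node joins its
    heaviest-weight neighbor's aggregate; the coarse Laplacian is
    P^T L P. (Heaviest-neighbor pairing is a toy stand-in for
    LAMG's affinity-based aggregation.)"""
    n = L.shape[0]
    agg = -np.ones(n, dtype=int)
    next_agg = 0
    for i in range(n):
        if agg[i] >= 0:
            continue
        w = L[i].copy()
        w[i] = 0
        j = int(np.argmin(w))        # heaviest edge = most negative entry
        if agg[j] >= 0:
            agg[i] = agg[j]          # join the neighbor's aggregate
        else:
            agg[i] = agg[j] = next_agg
            next_agg += 1
    P = np.zeros((n, next_agg))
    P[np.arange(n), agg] = 1.0       # piecewise-constant interpolation
    return P, P.T @ L @ P

# two strongly connected pairs joined by a weak edge
L = np.array([[10.0, -10.0,   0.0,   0.0],
              [-10.0, 10.1,  -0.1,   0.0],
              [  0.0, -0.1,  10.1, -10.0],
              [  0.0,  0.0, -10.0,  10.0]])
P, Lc = aggregate_coarsen(L)
```

Here the strong pairs {0,1} and {2,3} collapse to two coarse nodes joined by the weak 0.1 edge, so the coarse system preserves the weak link. Full LAMG additionally applies an energy correction to the coarse operator.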
Towards an SDP-based Approach to Spectral Methods: A Nearly-Linear-Time Algorithm for Graph Partitioning and Decomposition
"... In this paper, we consider the following graph partitioning problem: The input is an undirected graph G = (V, E), a balance parameter b ∈ (0, 1/2] and a target conductance value γ ∈ (0, 1). The output is a cut which, if nonempty, is of conductance at most O ( f), for some function f (G, γ), and whi ..."
Cited by 6 (2 self)
In this paper, we consider the following graph partitioning problem: the input is an undirected graph G = (V, E), a balance parameter b ∈ (0, 1/2], and a target conductance value γ ∈ (0, 1). The output is a cut which, if non-empty, is of conductance at most O(f), for some function f(G, γ), and which is either balanced or well correlated with all cuts of conductance at most γ. In a seminal paper, Spielman and Teng [16] gave an Õ(|E|/γ²)-time algorithm for f = √(γ log³ |V|) and used it to decompose graphs into a collection of near-expanders [18]. We present a new spectral algorithm for this problem which runs in time Õ(|E|/γ) for f = √γ. Our result yields the first nearly-linear time algorithm for the classic Balanced Separator problem that achieves the asymptotically optimal approximation guarantee for spectral methods. Our method has the advantage of being conceptually simple and relies on a primal-dual semidefinite-programming (SDP) approach. We first consider a natural SDP relaxation for the Balanced Separator problem. While it is easy to obtain from this SDP a certificate of the fact that the graph has no balanced cut of conductance less than γ, somewhat surprisingly, we can obtain a certificate for the stronger correlation condition. This is achieved via a novel separation oracle for our SDP and by appealing to Arora and Kale’s [3] framework to bound the running time. Our result contains technical ingredients that may be of independent interest.
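The quantity driving this problem is the conductance of a cut: the weight crossing the cut divided by the volume (total incident weight) of the smaller side. A small self-contained helper, just to make the definition concrete (not part of the paper's algorithm):

```python
import numpy as np

def conductance(W, S):
    """Conductance of vertex set S in a weighted, symmetric
    adjacency matrix W: cut weight over the smaller side's volume."""
    n = W.shape[0]
    S = np.asarray(sorted(S))
    T = np.setdiff1d(np.arange(n), S)
    cut = W[np.ix_(S, T)].sum()          # weight crossing the cut
    vol_S, vol_T = W[S].sum(), W[T].sum()  # sums of incident degrees
    return cut / min(vol_S, vol_T)

# barbell graph: two triangles {0,1,2} and {3,4,5} joined by edge (2,3)
W = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    W[i, j] = W[j, i] = 1.0
phi = conductance(W, [0, 1, 2])
```

Cutting between the triangles crosses one edge against volume 7, so φ = 1/7, whereas isolating a single triangle vertex gives conductance 1: the bottleneck cut is the low-conductance one.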
Near Linear-Work Parallel SDD Solvers, Low-Diameter Decomposition, and Low-Stretch Subgraphs
"... This paper presents the design and analysis of a near linearwork parallel algorithm for solving symmetric diagonally dominant (SDD) linear systems. On input of a SDD nbyn matrix A with m nonzero entries and a vector b, our algorithm computes a vector ˜x such that ‖˜x − A + b‖A ≤ ε · ‖A + b‖A in O ..."
Cited by 5 (3 self)
This paper presents the design and analysis of a near linear-work parallel algorithm for solving symmetric diagonally dominant (SDD) linear systems. On input of an SDD n-by-n matrix A with m nonzero entries and a vector b, our algorithm computes a vector x̃ such that ‖x̃ − A⁺b‖_A ≤ ε · ‖A⁺b‖_A in O(m log^(O(1)) n log(1/ε)) work and O(m^(1/3+θ) log(1/ε)) depth for any fixed θ > 0. The algorithm relies on a parallel algorithm for generating low-stretch spanning trees or spanning subgraphs. To this end, we first develop a parallel decomposition algorithm that, in polylogarithmic depth and Õ(|E|) work, partitions a graph into components with polylogarithmic diameter such that only a small fraction of the original edges are between the components. This can be used to generate low-stretch spanning trees with average stretch O(n^α) in O(n^(1+α)) work and O(n^α) depth. Alternatively, it can be used to generate spanning subgraphs with polylogarithmic average stretch in Õ(|E|) work and polylogarithmic depth. We apply this subgraph construction to derive our solver. By using the linear system solver in known applications, our results imply improved parallel randomized algorithms for several problems, including single-source shortest paths, maximum flow, min-cost flow, and approximate max-flow.
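The core primitive is the decomposition into low-diameter components with few crossing edges. A toy sequential sketch of the classic ball-growing idea (the paper's contribution is a parallel scheme; this is only the underlying intuition): grow a BFS ball from each unassigned seed while the next BFS level is still a constant fraction of the ball.

```python
def ball_decomposition(adj, beta=1.0):
    """Sequential ball-growing sketch: from each unassigned seed,
    absorb BFS levels while the next level has at least beta * |ball|
    new nodes; stopping early keeps each component's diameter small.
    adj: adjacency lists. Returns a component id per node."""
    n = len(adj)
    comp = [-1] * n
    cid = 0
    for s in range(n):
        if comp[s] != -1:
            continue
        ball, frontier = {s}, {s}
        while True:
            nxt = {v for u in frontier for v in adj[u]
                   if comp[v] == -1 and v not in ball}
            if not nxt or len(nxt) < beta * len(ball):
                break  # frontier growth slowed: close this ball
            ball |= nxt
            frontier = nxt
        for v in ball:
            comp[v] = cid
        cid += 1
    return comp

# path graph on 8 nodes: balls of two nodes each
adj = [[1], [0, 2], [1, 3], [2, 4], [3, 5], [4, 6], [5, 7], [6]]
comp = ball_decomposition(adj)
```

On an expander the ball keeps doubling and one component absorbs everything; on a path it stops quickly, which is exactly the diameter-vs-crossing-edges trade-off the decomposition balances.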
Generalized subgraph preconditioners for largescale bundle adjustment
 In ICCV
, 2011
"... We present a generalized subgraph preconditioning (GSP) technique to solve largescale bundle adjustment problems efficiently. In contrast with previous work which uses either direct or iterative methods as the linear solver, GSP combines their advantages and is significantly faster on large dataset ..."
Cited by 4 (3 self)
We present a generalized subgraph preconditioning (GSP) technique to solve large-scale bundle adjustment problems efficiently. In contrast with previous work, which uses either direct or iterative methods as the linear solver, GSP combines their advantages and is significantly faster on large datasets. Similar to [11], the main idea is to identify a subproblem (subgraph) that can be solved efficiently by sparse factorization methods and use it to build a preconditioner for the conjugate gradient method. The difference is that GSP is more general and leads to much more effective preconditioners. We design a greedy algorithm to build subgraphs which have bounded maximum clique size in the factorization phase and also result in smaller condition numbers than standard preconditioning techniques. When applied to the “bal” datasets [1], GSP displays promising performance.
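The simplest instance of the subgraph-preconditioner idea is a spanning tree: keep the heaviest edges that still factor trivially, and use the resulting Laplacian as the preconditioner. A self-contained sketch under that assumption (Kruskal via union-find; this is the generic idea, not GSP's clique-bounded greedy construction):

```python
import numpy as np

def max_spanning_tree(W):
    """Kruskal's algorithm on a dense symmetric weight matrix W,
    with path-halving union-find; returns (i, j, w) tree edges."""
    n = W.shape[0]
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    edges = sorted(((W[i, j], i, j) for i in range(n)
                    for j in range(i + 1, n) if W[i, j] > 0), reverse=True)
    tree = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j, w))
    return tree

def laplacian(n, edges):
    L = np.zeros((n, n))
    for i, j, w in edges:
        L[i, i] += w; L[j, j] += w
        L[i, j] -= w; L[j, i] -= w
    return L

# toy system: dense weighted graph vs. its max-weight spanning tree
rng = np.random.default_rng(1)
n = 8
W = rng.random((n, n)); W = np.triu(W, 1); W = W + W.T
G_edges = [(i, j, W[i, j]) for i in range(n) for j in range(i + 1, n)]
tree = max_spanning_tree(W)
A = laplacian(n, G_edges) + np.eye(n)     # SDD and nonsingular
M = laplacian(n, tree) + np.eye(n)        # subgraph preconditioner
# the tree's edges are a subset of G's, so A - M is PSD and every
# eigenvalue of M^{-1} A is at least 1; quality is set by the largest
eigs = np.linalg.eigvals(np.linalg.solve(M, A)).real
```

The preconditioned spectrum being bounded below by 1 is what makes the subgraph a one-sided preconditioner; GSP's contribution is choosing a richer subgraph than a tree while keeping the factorization sparse.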