Results 1 - 4 of 4
Fast Iterative Graph Computation with Block Updates
Abstract

Cited by 8 (1 self)
Scaling iterative graph processing applications to large graphs is an important problem. Performance is critical, as data scientists need to execute graph programs many times with varying parameters. The need for a high-level, high-performance programming model has inspired much research on graph programming frameworks. In this paper, we show that the important class of computationally light graph applications – applications that perform little computation per vertex – has severe scalability problems across multiple cores, as these applications hit an early "memory wall" that limits their speedup. We propose a novel block-oriented computation model, in which computation is iterated locally over blocks of highly connected nodes, significantly improving the amount of computation per cache miss. Following this model, we describe the design and implementation of a block-aware graph processing runtime that keeps the familiar vertex-centric programming paradigm while reaping the benefits of block-oriented execution. Our experiments show that block-oriented execution significantly improves the performance of our framework for several graph applications.
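The two-level structure described here can be sketched as follows. This is a hypothetical minimal illustration, not code from the paper: the function name, the PageRank-style update rule, and the toy graph are all assumptions, chosen only to show how local iterations over a block improve computation per cache miss.

```python
# Sketch of block-oriented iterative computation (illustrative only).
# Vertices are grouped into blocks of highly connected nodes; each block
# is iterated locally several times per global sweep, so the block's data
# stays cache-resident across the local iterations.

def block_iterate(adj, blocks, num_global=10, num_local=3, damping=0.85):
    """PageRank-style vertex updates, iterated locally within each block."""
    n = len(adj)
    rank = [1.0 / n] * n
    out_deg = [max(len(adj[v]), 1) for v in range(n)]
    # Build in-neighbor lists for pull-style updates.
    in_nbrs = [[] for _ in range(n)]
    for u in range(n):
        for v in adj[u]:
            in_nbrs[v].append(u)
    for _ in range(num_global):          # global sweep over blocks
        for block in blocks:
            for _ in range(num_local):   # local iterations inside the block
                for v in block:
                    s = sum(rank[u] / out_deg[u] for u in in_nbrs[v])
                    rank[v] = (1 - damping) / n + damping * s
    return rank

# Toy directed graph: 0->1, 1->2, 2->0, 2->3, 3->2, split into two blocks.
adj = [[1], [2], [0, 3], [2]]
ranks = block_iterate(adj, blocks=[[0, 1], [2, 3]])
```

The inner `num_local` loop is what distinguishes this from plain vertex-centric iteration: values for vertices inside the block are refined repeatedly before moving on, amortizing the cache misses incurred when the block is first loaded.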
Revisiting asynchronous linear solvers: Provable convergence rate through randomization. IPDPS
, 2014
Abstract

Cited by 5 (0 self)
Asynchronous methods for solving systems of linear equations have been researched since Chazan and Miranker published their pioneering paper on chaotic relaxation in 1969. The underlying idea of asynchronous methods is to avoid processor idle time by allowing the processors to continue to work and make progress even if not all progress made by other processors has been communicated to them. Historically, work on asynchronous methods for solving linear equations focused on proving convergence in the limit. How the rate of convergence compares to the rate of convergence of the synchronous counterparts, and how it scales as the number of processors increases, was seldom studied and is still not well understood. Furthermore, the applicability of these methods was limited to restricted classes of matrices (e.g., diagonally dominant matrices). We propose a randomized shared-memory asynchronous method for general symmetric positive definite matrices. We rigorously analyze the convergence rate and prove that it is linear and close to that of our method's synchronous counterpart as long as not too many processors are used (relative to the size and sparsity of the matrix). Our analysis presents a significant improvement, both in convergence analysis and in applicability, of asynchronous linear solvers, and suggests randomization as a key paradigm to serve as a foundation for asynchronous methods.
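The core per-coordinate update behind such randomized solvers can be sketched sequentially. The paper runs these updates asynchronously across shared-memory threads; the sketch below shows only the standard randomized Gauss-Seidel (coordinate) update for a symmetric positive definite system, with the function name and test matrix being illustrative assumptions.

```python
import random

def randomized_gauss_seidel(A, b, iters=2000, seed=0):
    """Randomized coordinate (Gauss-Seidel) updates for an SPD system Ax = b.
    Each step picks a random coordinate i and zeroes out its residual:
        x[i] += (b[i] - A[i]·x) / A[i][i]
    In an asynchronous implementation, threads perform these updates
    concurrently on shared x without global synchronization."""
    rng = random.Random(seed)
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        i = rng.randrange(n)
        r = b[i] - sum(A[i][j] * x[j] for j in range(n))
        x[i] += r / A[i][i]
    return x

# Small SPD example; the exact solution is x = (1/11, 7/11).
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = randomized_gauss_seidel(A, b)
```

For SPD matrices, each such update is a coordinate-descent step on the quadratic (1/2)xᵀAx − bᵀx, which is why randomized selection admits a linear convergence-rate analysis.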
ON CONVERGENCE OF THE MAXIMUM BLOCK IMPROVEMENT METHOD
Abstract

Cited by 5 (3 self)
The MBI (maximum block improvement) method is a greedy approach to solving optimization problems where the decision variables can be grouped into a finite number of blocks. Assuming that optimizing over one block of variables while fixing all others is relatively easy, the MBI method updates the maximally improving block of variables at each iteration, which is arguably the most natural and simple process for tackling block-structured problems, with great potential for engineering applications. In this paper we establish global and local linear convergence results for this method. The global convergence is established under the Łojasiewicz inequality assumption, while the local analysis invokes second-order assumptions. We study in particular the tensor optimization model with spherical constraints. Conditions for linear convergence of the famous power method for computing the maximum eigenvalue of a matrix follow in this framework as a special case. The condition is interpreted in various other forms for the rank-one tensor optimization model under spherical constraints. Numerical experiments support the convergence properties of the MBI method.
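The abstract notes that the power method for the maximum eigenvalue falls out of the MBI framework as a special case. A minimal sketch of that classic algorithm (standard power iteration, not code from the paper) makes the connection concrete: each step is a closed-form "block" update of the unit-sphere-constrained variable.

```python
def power_method(A, iters=200):
    """Classic power iteration for the dominant eigenvalue of a symmetric
    matrix. In the MBI view, maximizing v' A v over the unit sphere is a
    block update with a closed-form solution: v <- A v / ||A v||."""
    n = len(A)
    v = [1.0] + [0.0] * (n - 1)  # arbitrary nonzero starting vector
    lam = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
        # Rayleigh quotient v' A v (v is unit-norm) estimates the eigenvalue.
        lam = sum(v[i] * sum(A[i][j] * v[j] for j in range(n))
                  for i in range(n))
    return lam, v

# Symmetric 2x2 example with eigenvalues 3 and 1.
A = [[2.0, 1.0], [1.0, 2.0]]
lam, v = power_method(A)
```

The linear convergence rate of this iteration is governed by the ratio of the two largest eigenvalue magnitudes, which is the quantity the paper's conditions generalize to the rank-one tensor setting.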
ITERATIVE GRAPH COMPUTATION IN THE BIG DATA ERA
, 2015
Abstract
Iterative graph computation is a key component in many real-world applications, as the graph data model naturally captures complex relationships between entities. The big data era has seen the rise of several new challenges to this classic computation model. In this dissertation we describe three projects that address different aspects of these challenges. First, because of the increasing volume of data, it is increasingly important to scale iterative graph computation to large graphs. We observe that an important class of graph applications performing little computation per vertex scales poorly when running on multiple cores. These computationally light applications are limited by memory access rates, and cannot fully utilize the benefits of multiple cores. We propose a new block-oriented computation model which creates two levels of iterative computation. On each processor, a small block of highly connected vertices is iterated locally, while the blocks are updated iteratively at the global level. We show that block-oriented execution reduces the communication-to-computation ratio and significantly improves the performance ...