Results 1 – 10 of 33
Simplified Distributed LTL Model Checking by Localizing Cycles
, 2002
Cited by 13 (0 self)
Distributed model checking avoids the state explosion problem by using the computational resources of parallel environments. LTL model checking mainly entails detecting accepting cycles in a state transition graph. The nested depth-first search algorithm used for this purpose is difficult to parallelize, since it is based on the depth-first traversal order, which is inherently sequential. Proposed solutions make use of data structures and synchronization mechanisms in order to preserve the depth-first order. We propose a simple distributed algorithm that assumes cycles to be localized by the partition function. Cycles can then be checked without requiring particular synchronization mechanisms. Methods for constructing such partition functions are also proposed.
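The classical nested depth-first search that this abstract refers to can be sketched as follows. This is a sequential illustration of the standard algorithm, not the paper's distributed variant; the graph representation (a dict mapping each state to its successors) is an assumption for the example.

```python
# Sketch of classical nested depth-first search for accepting-cycle
# detection in LTL model checking. Sequential illustration only; the
# input format (dict of successor lists) is assumed for this example.

def nested_dfs(graph, initial, accepting):
    """Return True iff some accepting state lies on a cycle reachable
    from `initial`."""
    outer_visited, inner_visited = set(), set()

    def inner(s, seed):
        # Nested (second) search: look for a path back to `seed`.
        for t in graph.get(s, ()):
            if t == seed:
                return True                 # closed an accepting cycle
            if t not in inner_visited:
                inner_visited.add(t)
                if inner(t, seed):
                    return True
        return False

    def outer(s):
        outer_visited.add(s)
        for t in graph.get(s, ()):
            if t not in outer_visited and outer(t):
                return True
        # In post-order, launch the nested search from accepting states.
        if s in accepting and inner(s, s):
            return True
        return False

    return outer(initial)
```

The post-order launch of the inner search is exactly what ties the correctness of the algorithm to depth-first traversal order, which is why the abstract calls it inherently sequential.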
HipG: Parallel Processing of Large-Scale Graphs
Cited by 12 (1 self)
Distributed processing of real-world graphs is challenging due to their size and the inherent irregular structure of graph computations. We present HipG, a distributed framework that facilitates programming parallel graph algorithms by composing the parallel application automatically from the user-defined pieces of sequential work on graph nodes. To make the user code high-level, the framework provides a unified interface to executing methods on local and non-local graph nodes and an abstraction of exclusive execution. The graph computations are managed by logical objects called synchronizers, which we used, for example, to implement distributed divide-and-conquer decomposition into strongly connected components. The code written in HipG is independent of a particular graph representation, to the point that the graph can be created on-the-fly, i.e. by the algorithm that computes on this graph, which we used to implement a distributed model checker. HipG programs are in general short and elegant; they achieve good portability, memory utilization, and performance.
Parallel Algorithms for Radiation Transport on Unstructured Grids
, 2000
Cited by 11 (1 self)
The method of discrete ordinates is commonly used to solve the Boltzmann radiation transport equation for applications ranging from fire simulation to weapon effects. The equations are most efficiently solved by sweeping the radiation flux across the computational grid. For unstructured grids this poses several interesting challenges, particularly when implemented on distributed-memory parallel machines where the grid geometry is scattered across processors. We describe an asynchronous, parallel, message-passing algorithm that performs sweeps simultaneously from many directions across unstructured grids. We identify key factors that limit the algorithm's parallel scalability and discuss two enhancements we have made to the basic algorithm: one to prioritize the work within a processor's subdomain and the other to better decompose the unstructured grid across processors. Performance results are given for the basic and enhanced algorithms implemented within a radiation solver ...
A High-Level Framework for Distributed Processing of Large-Scale Graphs
Cited by 10 (2 self)
Distributed processing of real-world graphs is challenging due to their size and the inherent irregular structure of graph computations. We present HIPG, a distributed framework that facilitates high-level programming of parallel graph algorithms by expressing them as a hierarchy of distributed computations executed independently and managed by the user. HIPG programs are in general short and elegant; they achieve good portability, memory utilization and performance.
Optimizing Graph Algorithms on Pregel-like Systems
, 2014
Cited by 10 (2 self)
We study the problem of implementing graph algorithms efficiently on Pregel-like systems, which can be surprisingly challenging. Standard graph algorithms in this setting can incur unnecessary inefficiencies such as slow convergence or high communication or computation cost, typically due to structural properties of the input graphs such as large diameters or skew in component sizes. We describe several optimization techniques to address these inefficiencies. Our most general technique is based on the idea of performing some serial computation on a tiny fraction of the input graph, complementing Pregel’s vertex-centric parallelism. We base our study on thorough implementations of several fundamental graph algorithms, some of which have, to the best of our knowledge, not been implemented on Pregel-like systems before. The algorithms and optimizations we describe are fully implemented in our open-source Pregel implementation. We present detailed experiments showing that our optimization techniques improve runtime significantly on a variety of very large graph datasets.
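The vertex-centric model this abstract builds on can be illustrated with a minimal single-process sketch: synchronous supersteps in which each vertex processes its inbox and messages its neighbors. The function and variable names here are illustrative, not part of any Pregel-like system's actual API; connected-components by min-label propagation is chosen as a representative workload.

```python
# Minimal single-process sketch of Pregel-style vertex-centric
# computation: synchronous supersteps, message passing between
# neighboring vertices. Illustrative only; not a real Pregel API.

def pregel_min_label(edges, vertices):
    """Connected components by min-label propagation: every vertex
    converges to the smallest vertex id in its component."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    label = {v: v for v in vertices}
    # Superstep 0: each vertex announces its id to its neighbors.
    inboxes = {v: [label[u] for u in adj[v]] for v in vertices}
    while any(inboxes.values()):            # halt when no messages are in flight
        next_inboxes = {v: [] for v in vertices}
        for v, inbox in inboxes.items():
            m = min(inbox, default=label[v])
            if m < label[v]:                # label improved: notify neighbors
                label[v] = m
                for w in adj[v]:
                    next_inboxes[w].append(m)
        inboxes = next_inboxes              # superstep barrier
    return label
```

Note that the number of supersteps is bounded by the graph diameter, which is precisely the slow-convergence problem on large-diameter graphs that the abstract's serial-computation optimization targets.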
Single-Source Shortest Paths with the Parallel Boost Graph Library
Cited by 8 (2 self)
The Parallel Boost Graph Library (Parallel BGL) is a library of graph algorithms and data structures for distributed-memory computation on large graphs. Developed with the Generic Programming paradigm, the Parallel BGL is highly customizable, supporting various graph data structures, arbitrary vertex and edge properties, and different communication media. In this paper, we describe the implementation of two parallel variants of Dijkstra’s single-source shortest paths algorithm in the Parallel BGL. We also provide an experimental evaluation of these implementations using synthetic and real-world benchmark graphs from the 9th DIMACS Implementation Challenge.
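For reference, the sequential algorithm underlying the parallel variants evaluated in this paper is standard Dijkstra with a priority queue. A minimal sketch follows; the adjacency-list input format is an assumption for the example, and this is not the Parallel BGL's implementation.

```python
# Sequential Dijkstra sketch (the baseline the parallel variants
# distribute). Input format is assumed: adj maps each vertex to a list
# of (neighbor, non-negative edge weight) pairs.
import heapq

def dijkstra(adj, source):
    """Return a dict of shortest-path distances from `source`."""
    dist = {source: 0}
    pq = [(0, source)]                      # (tentative distance, vertex)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                        # stale entry: u already settled closer
        for v, w in adj.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd                # relax edge (u, v)
                heapq.heappush(pq, (nd, v))
    return dist
```

The serial bottleneck is the globally ordered priority queue; parallel variants relax this ordering, settling several vertices per phase at the cost of occasional re-relaxation.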
Finding strongly connected components in distributed graphs
J. Parallel Distrib. Comput.
Cited by 7 (1 self)
The traditional serial algorithm for finding the strongly connected components in a graph is based on depth-first search and has complexity linear in the size of the graph. Depth-first search is difficult to parallelize, which creates a need for a different parallel algorithm for this problem. We describe the implementation of a recently proposed parallel algorithm that finds strongly connected components in distributed graphs, and discuss how it is used in a radiation transport solver.
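A well-known divide-and-conquer scheme for parallel SCC detection, often called forward-backward, avoids depth-first search entirely by intersecting forward- and backward-reachable sets of a pivot. The sketch below is a sequential illustration of that general scheme, not necessarily the exact algorithm this paper implements; it assumes every vertex appears as a key of the adjacency dict.

```python
# Sequential sketch of the forward-backward divide-and-conquer scheme
# for strongly connected components. Assumes every vertex is a key of
# `graph`; not a distributed implementation, just the recursion shape.

def fwbw_scc(graph):
    """Return a list of SCCs (as sets of vertices)."""
    # Reverse adjacency, for backward reachability.
    rev = {v: set() for v in graph}
    for u, succs in graph.items():
        for v in succs:
            rev.setdefault(v, set()).add(u)

    def reach(adj, start, allowed):
        # Vertices in `allowed` reachable from `start` (plain BFS/DFS,
        # no depth-first ordering needed).
        seen, stack = {start}, [start]
        while stack:
            u = stack.pop()
            for v in adj.get(u, ()):
                if v in allowed and v not in seen:
                    seen.add(v)
                    stack.append(v)
        return seen

    sccs = []

    def solve(vertices):
        if not vertices:
            return
        pivot = next(iter(vertices))
        fwd = reach(graph, pivot, vertices)
        bwd = reach(rev, pivot, vertices)
        sccs.append(fwd & bwd)              # the pivot's SCC
        # No SCC crosses these three remainders, so they can be solved
        # independently (and, in a parallel setting, concurrently).
        solve(fwd - bwd)
        solve(bwd - fwd)
        solve(vertices - fwd - bwd)

    solve(set(graph))
    return sccs
```

Reachability computations parallelize well, which is what makes this scheme attractive on distributed graphs.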
Finding Strongly Connected Components in Parallel using O(log² n) Reachability Queries
, 2007
Cited by 5 (0 self)
We give a randomized (Las Vegas) parallel algorithm for computing strongly connected components of a graph with n vertices and m edges. The runtime is dominated by O(log² n) parallel reachability queries, i.e. O(log² n) calls to a subroutine that computes the descendants of a given vertex in a given digraph. Our algorithm also topologically sorts the strongly connected components. Using Ullman and Yannakakis’s [21] techniques for the reachability subroutine gives our algorithm runtime Õ(t) using mn/t² processors for any (n²/m)^(1/3) ≤ t ≤ n. On sparse graphs, this improves the number of processors needed to compute strongly connected components and topological sort within time n^(1/3) ≤ t ≤ n from the previously best known (n/t)³ [19] to (n/t)².

1 Introduction and main results

Breadth-first and depth-first search have many applications in the analysis of directed graphs. Breadth-first search can be used to compute the vertices that are reachable from a given vertex and directed spanning trees. Depth-first search can solve these problems, determine if a graph is acyclic, topologically sort an acyclic graph, and compute strongly connected components (SCCs) [20]. ...