Results 1–10 of 19
Optimal and Sublogarithmic Time Randomized Parallel Sorting Algorithms
 SIAM JOURNAL ON COMPUTING
, 1989
Abstract

Cited by 73 (14 self)
We assume a parallel RAM model which allows both concurrent reads and concurrent writes of a global memory. Our main result is an optimal randomized parallel algorithm for INTEGER SORT (i.e., for sorting n integers in the range [1, n]). Our algorithm costs only logarithmic time and is the first known that is optimal: the product of its time and processor bounds is upper bounded by a linear function of the input size. We also give a deterministic sublogarithmic time algorithm for prefix sum. In addition, we present a sublogarithmic time algorithm for obtaining a random permutation of n elements in parallel. Finally, we present sublogarithmic time algorithms for GENERAL SORT and INTEGER SORT. Our sublogarithmic GENERAL SORT algorithm is also optimal.
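The linear time-processor product claimed for INTEGER SORT has a simple sequential counterpart: counting sort over the range [1, n] runs in linear time, with a prefix sum converting counts into output positions. A minimal Python sketch (the function name `integer_sort` is ours, not the paper's):

```python
def integer_sort(a):
    """Stable counting sort for n integers in the range [1, n].

    Sequential counterpart of INTEGER SORT: total work is linear in
    the input size, the bound the parallel algorithm matches as a
    time-processor product."""
    n = len(a)
    count = [0] * (n + 2)
    for v in a:
        count[v] += 1
    pos = [0] * (n + 2)            # pos[v] = number of elements with value < v
    for v in range(1, n + 1):
        pos[v + 1] = pos[v] + count[v]
    out = [0] * n
    for v in a:                    # stable placement via the prefix sums
        out[pos[v]] = v
        pos[v] += 1
    return out
```

The prefix-sum pass in the middle is the same primitive the abstract's deterministic sublogarithmic algorithm computes in parallel.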
Graph partitioning into isolated, high conductance clusters: theory, computation and . . .
, 2008
Fast Generation of Random Permutations Via Networks Simulation
 ALGORITHMICA
, 1998
Abstract

Cited by 10 (4 self)
We consider the problem of generating random permutations with uniform distribution. That is, we require that for an arbitrary permutation π of n elements, with probability 1/n! the machine halts with the ith output cell containing π(i), for 1 ≤ i ≤ n. We study this problem on two models of parallel computation: the CREW PRAM and the EREW PRAM. The main result of the paper is an algorithm for generating random permutations that runs in O(log log n) time and uses O(n^{1+o(1)}) processors on the CREW PRAM. This is the first o(log n)-time CREW PRAM algorithm for this problem. On the EREW PRAM we present a simple algorithm that generates a random permutation in time O(log n) using n processors and O(n) space. This algorithm outperforms each of the previously known algorithms for the exclusive-write PRAMs. The common and novel feature of both our algorithms is first to design a suitable random switching network generating a permutation and then to simulate this network on the PRAM model in a fast way.
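For reference, the uniformity requirement (each permutation with probability exactly 1/n!) is what the sequential Fisher-Yates shuffle achieves in O(n) time; the paper's contribution is meeting it in sublogarithmic parallel time. A sketch of the sequential baseline (not the paper's network-simulation algorithm):

```python
import random

def random_permutation(n, rng=random):
    """Fisher-Yates shuffle: every permutation of [0, n) occurs with
    probability exactly 1/n!, the uniformity requirement stated in
    the abstract.  Sequential reference, not the PRAM algorithm."""
    perm = list(range(n))
    for i in range(n - 1, 0, -1):
        j = rng.randrange(i + 1)   # uniform over positions 0..i
        perm[i], perm[j] = perm[j], perm[i]
    return perm
```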
An Optimal Parallel Matching Algorithm for Cographs
 Journal of Parallel and Distributed Computing
, 1994
Abstract

Cited by 6 (1 self)
The class of cographs, or complement-reducible graphs, arises naturally in many different areas of applied mathematics and computer science. We show that the problem of finding a maximum matching in a cograph can be solved optimally in parallel by reducing it to parenthesis matching. With an n-vertex cograph G represented by its parse tree as input, our algorithm finds a maximum matching in G in O(log n) time using O(n/log n) processors in the EREW PRAM model. Key Words: list ranking, tree contraction, matching, parenthesis matching, scheduling, operating systems, cographs, parallel algorithms, EREW PRAM.
1. Introduction
A well-known class of graphs arising in a wide spectrum of practical applications [1,2,7] is the class of cographs, or complement-reducible graphs. The cographs are defined recursively as follows:
- a single-vertex graph is a cograph;
- if G is a cograph, then its complement is also a cograph;
- if G and H are cographs, then their union is also a cog...
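The reduction target, parenthesis matching, is easy to state sequentially: pair each opening parenthesis with its closing partner. A stack-based Python sketch (the paper's parallel version solves this on the parse tree via list ranking and tree contraction instead):

```python
def match_parentheses(s):
    """Pair each '(' with its matching ')' using a stack.

    Returns a dict mapping each opening index to its closing index.
    Sequential version of the parenthesis-matching primitive the
    cograph matching algorithm reduces to; assumes s is balanced."""
    stack, match = [], {}
    for i, c in enumerate(s):
        if c == '(':
            stack.append(i)        # remember where this group opened
        elif c == ')':
            match[stack.pop()] = i # close the most recent open group
    return match
```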
Systematic Derivation of Tree Contraction Algorithms
 In Proceedings of INFOCOM '90
, 2005
Abstract

Cited by 4 (3 self)
While tree contraction algorithms play an important role in efficient parallel tree computation, it is difficult to develop such algorithms due to the strict conditions imposed on contracting operators. In this paper, we propose a systematic method of deriving efficient tree contraction algorithms from recursive functions on trees of any shape. We identify a general recursive form that can be parallelized to obtain efficient tree contraction algorithms, and present a derivation strategy for transforming general recursive functions into parallelizable form. We illustrate our approach by deriving a novel parallel algorithm for the maximum connected-set sum problem on arbitrary trees, the tree version of the famous maximum segment sum problem.
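The list version of the target problem, maximum segment sum, has a well-known linear-time sequential solution (Kadane's algorithm); the paper derives the analogous computation on arbitrary trees. A sketch for lists, assuming the empty segment (sum 0) is allowed:

```python
def max_segment_sum(xs):
    """Kadane's algorithm: maximum sum of a contiguous segment.

    List analogue of the maximum connected-set sum problem on trees.
    'ending' is the best segment ending at the current element;
    'best' is the best segment seen anywhere so far."""
    best = ending = 0              # the empty segment contributes 0
    for x in xs:
        ending = max(0, ending + x)
        best = max(best, ending)
    return best
```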
Scalable parallel implementations of list ranking on finegrained machines
 IEEE Transactions on Parallel and Distributed Systems
, 1997
Abstract

Cited by 4 (1 self)
Abstract—We present analytical and experimental results for fine-grained list ranking algorithms. We compare the scalability of two representative algorithms on random lists, then address the question of how the locality properties of image edge lists can be used to improve the performance of this highly data-dependent operation. Starting with Wyllie's algorithm and Anderson and Miller's randomized algorithm as bases, we use the spatial locality of edge links to derive scalable algorithms designed to exploit the characteristics of image edges. Tested on actual and synthetic edge data, this approach achieves significant speedup on the MasPar MP-1 and MP-2, compared to the standard list ranking algorithms. The modified algorithms exhibit good scalability and are robust across a wide variety of image types. We also show that load balancing on fine-grained machines performs well only for large problem-to-machine size ratios. Index Terms—List ranking, parallel algorithms, image processing, computer vision, fine-grained parallel processing, scalable algorithms.
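Wyllie's algorithm, one of the two bases above, ranks a list by repeated pointer jumping: every node simultaneously adds its successor's rank to its own and doubles its pointer distance, so ceil(log2 n) synchronous rounds suffice. A sequential simulation of those rounds (function name and representation are ours, not the paper's fine-grained implementation):

```python
import math

def list_rank(succ):
    """Wyllie's list ranking by pointer jumping, simulated sequentially.

    succ[i] is the successor of node i; the tail t satisfies
    succ[t] == t and has rank 0.  Each round all nodes update
    synchronously (one parallel PRAM step); after ceil(log2 n)
    rounds rank[i] is the distance from i to the tail."""
    n = len(succ)
    succ = list(succ)
    rank = [0 if succ[i] == i else 1 for i in range(n)]
    for _ in range(max(1, math.ceil(math.log2(max(n, 2))))):
        # compute both arrays from the old values: synchronous update
        rank = [rank[i] + rank[succ[i]] for i in range(n)]
        succ = [succ[succ[i]] for i in range(n)]
    return rank
```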
Planar Strong Connectivity Helps in Parallel DepthFirst Search
 SIAM Journal on Computing
, 1992
Abstract

Cited by 3 (0 self)
This paper shows that for a strongly connected planar directed graph of size n, a depth-first search tree rooted at a specified vertex can be computed in O(log^5 n) time using n/log n processors. Previously, for planar directed graphs that may not be strongly connected, the best depth-first search algorithm runs in O(log^10 n) time using n processors. Both algorithms run on a parallel random access machine that allows concurrent reads and concurrent writes in its shared memory, and in case of a write conflict, permits an arbitrary processor to succeed. Key words. linear-processor NC algorithms, graph separators, depth-first search, planar directed graphs, strong connectivity, bubble graphs, st-graphs AMS(MOS) subject classification. 68Q10, 05C99
1. Introduction
Depth-first search is one of the most useful tools in graph theory [32], [4]. The depth-first search problem is the following: given a graph and a distinguished vertex, construct a tree that corresponds to performing de...
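For contrast with the parallel bounds above, the sequential problem is straightforward: a stack-based traversal builds a DFS tree in linear time, and it is precisely this sequential-looking computation that the paper parallelizes for strongly connected planar digraphs. A sketch (function name and adjacency representation are ours):

```python
def dfs_tree(adj, root):
    """Sequential depth-first search tree.

    adj maps each vertex to its ordered out-neighbour list.
    Returns parent[v] = DFS parent of each reached vertex v,
    with parent[root] == root."""
    parent = {root: root}
    stack = [root]
    while stack:
        v = stack[-1]
        for w in adj[v]:           # descend into the first unvisited child
            if w not in parent:
                parent[w] = v
                stack.append(w)
                break
        else:                      # all neighbours visited: backtrack
            stack.pop()
    return parent
```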
Fast Parallel and Adaptive Updates for DualDecomposition Solvers

, 2011
Abstract

Cited by 3 (3 self)
Dual-decomposition (DD) methods are quickly becoming important tools for estimating the minimum energy state of a graphical model. DD methods decompose a complex model into a collection of simpler subproblems that can be solved exactly (such as trees) and that in combination provide upper and lower bounds on the exact solution. Subproblem choice can play a major role: larger subproblems tend to improve the bound more per iteration, while smaller subproblems enable highly parallel solvers and can benefit from reusing past solutions when there are few changes between iterations. We propose an algorithm that can balance many of these aspects to speed up convergence. Our method uses a cluster tree data structure that has been proposed for adaptive exact inference tasks, and we apply it in this paper to dual-decomposition approximate inference. This approach allows us to process large subproblems to improve the bounds at each iteration, while allowing a high degree of parallelizability and taking advantage of subproblems with sparse updates. For both synthetic inputs and a real-world stereo matching problem, we demonstrate that our algorithm is able to achieve significant improvement in convergence time.
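A toy instance of the dual-decomposition idea, stripped down to a single shared discrete variable split between two subproblems: each iteration solves both subproblems exactly and a subgradient step on the Lagrange multipliers pushes their copies toward agreement, with the dual value giving a lower bound. All names and the step rule below are illustrative, not the paper's method:

```python
def dual_decomposition(c1, c2, steps=100):
    """Minimize c1[x] + c2[x] over one discrete variable x by letting
    two subproblems each pick their own copy of x, coupled through
    Lagrange multipliers lam.

    Each iteration solves both subproblems exactly (the role the tree
    subproblems play in the abstract); lb is always a lower bound on
    the true minimum, and it is tight once the copies agree."""
    n = len(c1)
    lam = [0.0] * n
    best_lb = float('-inf')
    for t in range(1, steps + 1):
        x1 = min(range(n), key=lambda x: c1[x] + lam[x])
        x2 = min(range(n), key=lambda x: c2[x] - lam[x])
        lb = (c1[x1] + lam[x1]) + (c2[x2] - lam[x2])
        best_lb = max(best_lb, lb)
        if x1 == x2:
            break                  # copies agree: the bound is tight
        step = 1.0 / t             # diminishing subgradient step size
        lam[x1] += step            # make x1 costlier for subproblem 1
        lam[x2] -= step            # make x2 costlier for subproblem 2
    return best_lb
```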
Parallel Maximum Independent Set In Convex Bipartite Graphs
, 1995
Abstract

Cited by 1 (0 self)
A bipartite graph G = (V, W, E) is called convex if the vertices in W can be ordered in such a way that the elements of W adjacent to any vertex v ∈ V form an interval (i.e., a sequence of consecutively numbered vertices). Such a graph can be represented in a compact form that requires O(n) space, where n = max{|V|, |W|}. Given a convex bipartite graph G in the compact form, Dekel and Sahni designed an O(log²(n))-time, n-processor EREW PRAM algorithm to compute a maximum matching in G. We show that the matching produced by their algorithm can be used to construct, optimally in parallel, a maximum set of independent vertices. Our algorithm runs in O(log n) time with n/log n processors on a CRCW PRAM.
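For intuition, a maximum matching in a convex bipartite graph can be found sequentially with Glover's greedy rule: scan the W side in order and match each w to the adjacent, still-unmatched v whose interval ends earliest. A Python sketch of that rule (a sequential counterpart, not the Dekel-Sahni PRAM algorithm; names are ours):

```python
import heapq

def convex_max_matching(intervals):
    """Maximum matching in a convex bipartite graph via Glover's rule.

    intervals[v] = (lo, hi): vertex v of V is adjacent to every w in
    W with lo <= w <= hi (the compact O(n) representation from the
    abstract).  Returns {v: matched w}."""
    order = sorted(range(len(intervals)), key=lambda v: intervals[v][0])
    top = max((hi for _, hi in intervals), default=0)
    match, heap, k = {}, [], 0
    for w in range(1, top + 1):
        while k < len(order) and intervals[order[k]][0] <= w:
            v = order[k]           # interval of v now covers w
            heapq.heappush(heap, (intervals[v][1], v))
            k += 1
        while heap and heap[0][0] < w:
            heapq.heappop(heap)    # interval expired unmatched
        if heap:                   # greedily take earliest-ending interval
            hi, v = heapq.heappop(heap)
            match[v] = w
    return match
```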