Results 1 - 6 of 6
The Complexity of Computation on the Parallel Random Access Machine
, 1993
Abstract

Cited by 34 (3 self)
PRAMs also approximate the situation where communication to and from shared memory is much more expensive than local operations, for example, where each processor is located on a separate chip and access to shared memory is through a combining network. Not surprisingly, abstract PRAMs can be much more powerful than restricted instruction set PRAMs. THEOREM 21.16 Any function of n variables can be computed by an abstract EROW PRAM in O(log n) steps using n/log₂ n processors and n/(2 log₂ n) shared memory cells. PROOF Each processor begins by reading log₂ n input values and combining them into one large value. The information known by the processors is combined in a binary-tree-like fashion. In each round, the remaining processors are grouped into pairs. In each pair, one processor communicates the information it knows about the input to the other processor and then leaves the computation. After ⌈log₂ n⌉ rounds, one processor knows all n input values. Then this processor computes th...
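The combining schedule in this proof can be illustrated by a small sequential simulation. This is an illustrative sketch, not code from the paper: each of the n/log₂ n simulated "processors" first packs log₂ n inputs into a tuple, then the surviving processors pair up each round until one holds everything.

```python
import math

def tree_combine(inputs):
    """Simulate the binary-tree combining schedule from the proof sketch."""
    n = len(inputs)
    block = max(1, int(math.log2(n)))
    # Step 1: each processor reads log2(n) input values and packs them
    # into one large value (here, a tuple).
    known = [tuple(inputs[i:i + block]) for i in range(0, n, block)]
    rounds = 0
    # Step 2: in each round, pair up the remaining processors; one
    # processor of each pair communicates everything it knows to the
    # other and leaves the computation.
    while len(known) > 1:
        known = [known[i] + (known[i + 1] if i + 1 < len(known) else ())
                 for i in range(0, len(known), 2)]
        rounds += 1
    return known[0], rounds

combined, rounds = tree_combine(list(range(16)))
# For n = 16: 4 processors pack 4 inputs each, then 2 pairing rounds
# leave one processor knowing all 16 values.
```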
Transforming comparison model lower bounds to the parallel random-access machine
 INFORMATION PROCESSING LETTERS
, 1997
Abstract

Cited by 4 (0 self)
We provide general transformations of lower bounds in Valiant's parallel-comparison-decision-tree model to lower bounds in the priority concurrent-read concurrent-write parallel random-access machine model. The proofs rely on standard Ramsey-theoretic arguments that simplify the structure of the computation by restricting the input domain. The transformation of comparison model lower bounds, which are usually easier to obtain, to the parallel random-access machine unifies some known lower bounds and gives new lower bounds for several problems.
Lower Bound for String Matching on PRAM
, 1995
Abstract

Cited by 1 (1 self)
Breslauer and Galil have shown that the string matching problem requires Θ(⌈n/p⌉ + log log_{⌈1+p/n⌉} 2p) rounds in the parallel comparison tree model with p comparisons in each round. In this note we show that the same lower bound even holds for p-processor abstract Priority-CRCW PRAMs with bounded but arbitrarily large memory. In this model one assumes that the internal computational power of the processors is unlimited; thus we lower-bound the number of necessary rounds of parallel read/write accesses to the shared memory.
1 Introduction
Let X[1···n], called the text, and Y[1···m], called the pattern, be two strings over an alphabet Σ. The string matching problem is to find all occurrences of the pattern in the text, that is, to find all i, 1 ≤ i ≤ n − m + 1, such that X[i···i+m−1] = Y[1···m]. The problem of finding the period of a text Text[1···n] is to find the mi...
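The problem being lower-bounded can be stated concretely. The following naive scan, with names X and Y taken from the abstract, is for illustration only and is not the PRAM algorithm under discussion:

```python
def occurrences(X, Y):
    """Return all 0-based positions i where pattern Y occurs in text X,
    i.e. all i with X[i:i+m] == Y, matching the definition in the abstract
    (there stated with 1-based indices)."""
    n, m = len(X), len(Y)
    return [i for i in range(n - m + 1) if X[i:i + m] == Y]

occurrences("abababa", "aba")  # -> [0, 2, 4]
```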
Transforming Comparison Model
, 909
Abstract
Reproduction of all or part of this work is permitted for educational or research use on condition that this copyright notice is included in any copy. See back inner page for a list of recent publications in the BRICS Report Series. Copies may be obtained by contacting: BRICS
Large Parallel Machines can be Extremely Slow for Small Problems
Abstract
We consider concurrent-write PRAMs with a large number of processors of unlimited computational power and an infinite shared memory. Our adversary chooses a small number of our processors and gives them a 0-1 input sequence (each chosen processor gets a bit, and each bit is given to one processor). The chosen processors are required to compute the PARITY of their input, while the others do not take part in the computation. If at most q processors are chosen and q ≤ (1/2) log log n, then we show that computing PARITY needs q steps in the worst case. On the other hand, there exists an algorithm which computes PARITY in q steps (for any q ≤ n) in this model; thus our result is sharp. Surprisingly, if our adversary chooses exactly q of our processors, then they can compute PARITY in ⌊q/2⌋ + 2 steps, and in this case we show that it needs at least ⌊q/2⌋ steps. Our result implies that one cannot construct large parallel machines which are efficient when only a small number of their processors are active. On the other hand, a result of Ajtai and Ben-Or [1] shows that if we have n input bits which contain at most polylog n 1's and polynomially many processors which are all allowed to work, then PARITY can be solved in constant time. Current affiliation: Mathematical Institute of the Hungarian Academy of Sciences; Reáltanoda u. 13-15, H-1053 Budapest, Hungary.
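For reference, the function whose round complexity is being bounded is just the XOR of the chosen processors' bits. This hypothetical one-liner shows the function itself; the paper's results concern how many shared-memory rounds a CRCW PRAM needs, not this sequential fold:

```python
from functools import reduce
from operator import xor

def parity(bits):
    """PARITY of a 0-1 sequence: 1 iff the number of 1s is odd."""
    return reduce(xor, bits, 0)

parity([1, 0, 1, 1])  # -> 1
```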
Ramsey Theory Applications
Abstract
There are many interesting applications of Ramsey theory, including results in number theory, algebra, geometry, topology, set theory, logic, ergodic theory, information theory, and theoretical computer science. Relations of Ramsey-type theorems to various fields of mathematics are well documented in published books and monographs. The main objective of this survey is to list applications, mostly in theoretical computer science, from the last two decades that are not contained in these.