Results 1 - 4 of 4
Scheduling DAGs on asynchronous processors
19th ACM Symposium on Parallel Algorithms and Architectures (SPAA), 2007
Abstract

Cited by 10 (1 self)
This paper addresses the problem of scheduling a DAG of unit-length tasks on asynchronous processors, that is, processors having different and changing speeds. The objective is to minimize the makespan, that is, the time to execute the entire DAG. Asynchrony is modeled by an oblivious adversary, which is assumed to determine the processor speeds at each point in time. The oblivious adversary may change processor speeds arbitrarily and arbitrarily often, but makes speed decisions independently of any random choices of the scheduling algorithm. This paper gives bounds on the makespan of two randomized online firing-squad scheduling algorithms, All and Level. These two schedulers are shown to have good makespan even when asynchrony is arbitrarily extreme. Let W and D denote, respectively, the number of tasks and the longest path in the DAG, and let π_ave denote the average speed of the p processors during the execution. In All, each processor repeatedly chooses a random task to execute from among all ready tasks (tasks whose predecessors have been executed). Scheduler All is shown to have a makespan T_p = W
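The All scheduler described in this abstract can be illustrated with a small synchronous simulation. This is a sketch only: the `dag` dictionary interface is an assumption, and every processor is given unit speed per round, whereas the paper's oblivious adversary varies processor speeds arbitrarily.

```python
import random

def schedule_all(dag, num_procs):
    """Simulate the randomized 'All' scheduler on a DAG of unit-length tasks.

    dag: dict mapping each task to the set of its predecessors
    (hypothetical interface, not from the paper).
    Each round, every processor independently picks a uniformly random
    ready task. Returns the makespan in rounds, assuming unit speeds;
    the paper's changing-speed adversary is omitted here.
    """
    preds = {t: set(p) for t, p in dag.items()}
    done = set()
    rounds = 0
    while len(done) < len(dag):
        ready = [t for t in dag if t not in done and preds[t] <= done]
        # Processors choose independently, so several may duplicate the
        # same task; All trades this wasted work for simplicity.
        chosen = {random.choice(ready) for _ in range(num_procs)}
        done |= chosen
        rounds += 1
    return rounds
```

For example, on a three-task chain the ready set has one element per round, so any number of processors needs exactly three rounds, matching the intuition that the critical path D lower-bounds the makespan.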
DISTRIBUTED ALGORITHMS TO PERFORM INDEPENDENT TASKS IN NETWORKS WITH PROCESSOR FAULTS, 2001
Abstract
Acknowledgement: Thanks are due to my thesis supervisor Bogdan Chlebus for his guidance and inspiration. This dissertation contains results obtained as joint work with Bogdan Chlebus, Leszek Gąsieniec, Andrzej Lingas, and Alex Shvartsman. I thank them all for a rewarding and fruitful cooperation.
Revised (1/6/2005)
Abstract
The abstract problem of using P failure-prone processors to cooperatively update all locations of an N-element shared array is called Write-All. Solutions to Write-All can be used iteratively to construct efficient simulations of PRAM algorithms on failure-prone PRAMs. Such use of Write-All in simulations is abstracted in terms of the iterative Write-All problem. The efficiency of the algorithmic solutions for Write-All and iterative Write-All is measured in terms of work complexity, where all processing steps taken by the processors are counted. This paper considers deterministic solutions for the Write-All and iterative Write-All problems in the fail-stop synchronous CRCW PRAM model, where memory access concurrency needs to be controlled. A deterministic algorithm of Kanellakis, Michailidis, and Shvartsman [16] efficiently solves the Write-All problem in this model, while controlling read and write memory access concurrency. However, it was not shown how the number of processor failures f affects the work efficiency of the algorithm. The results herein give a new analysis of the algorithm [16] that obtains failure-sensitive work bounds, while retaining the known memory access concurrency bounds. Specifically, the new result expresses the work bound as a function of N, P, and f. Another contribution of this paper is a new failure-sensitive analysis for iterative Write-All with controlled memory access concurrency. This result yields tighter bounds on work (vs. [16]) for simulations of PRAM algorithms on fail-stop PRAMs.
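The Write-All problem statement above can be illustrated with a naive load-balancing sketch. This is only an illustration under an assumed fail-stop pattern (the `alive` flags are hypothetical), and it is emphatically not the concurrency-controlled algorithm of Kanellakis, Michailidis, and Shvartsman [16]; it merely shows why surviving processors must cover the stripes of failed ones, and how all processing steps are counted as work.

```python
def write_all(n, alive):
    """Naive Write-All sketch: set every cell of an n-element array to 1
    despite fail-stop processor failures.

    alive: one boolean per processor; a False processor contributes no
    steps at all (hypothetical failure pattern, fixed before the run).
    Returns the finished array and the total work (all steps counted).
    """
    array = [0] * n
    work = 0
    procs = [p for p, ok in enumerate(alive) if ok]
    assert procs, "at least one processor must survive"
    # Phase 1: static balanced partition -- each surviving processor
    # sweeps its own stripe of the array.
    for p in procs:
        for i in range(p, n, len(alive)):
            array[i] = 1
            work += 1
    # Phase 2: survivors re-scan the whole array to cover the stripes
    # of failed processors; every inspection step counts as work.
    for p in procs:
        for i in range(n):
            if array[i] == 0:
                array[i] = 1
            work += 1
    return array, work
```

Even this crude scheme shows the tension the paper studies: the rescan phase guarantees completion but costs work proportional to P times N, whereas failure-sensitive analyses bound the overhead in terms of the actual number of failures f.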