Results 1 – 7 of 7
Hundreds of Impossibility Results for Distributed Computing
 Distributed Computing, 2003
Abstract

Cited by 47 (5 self)
We survey results from distributed computing that show tasks to be impossible, either outright or within given resource bounds, in various models. The parameters of the models considered include synchrony, fault-tolerance, different communication media, and randomization. The resource bounds refer to time, space, and message complexity. These results are useful in understanding the inherent difficulty of individual problems and in studying the power of different models of distributed computing.
A Simple Proof of the Uniform Consensus Synchronous Lower Bound
, 2002
Abstract

Cited by 14 (2 self)
We give a simple and intuitive proof of an f + 2 round lower bound for uniform consensus.
The Do-All Problem in Broadcast Networks
, 2001
Abstract

Cited by 8 (4 self)
The problem of performing t tasks in a distributed system on p failure-prone processors is one of the fundamental problems in distributed computing. If the tasks are similar and independent and the processors communicate by sending messages, then the problem is called Do-All. In our work the communication is over a multiple-access channel, and the attached stations may fail by crashing. The measure of performance is work, defined as the number of the available processor steps. Algorithms are required to be reliable in that they perform all the tasks as long as at least one station remains operational. We show that each reliable algorithm always needs to perform at least the minimum amount Ω(t + p√t) of work. We develop an optimal deterministic algorithm for the channel with collision detection performing only the minimum work Θ(t + p√t). Another algorithm is given for the channel without collision detection; it performs work O(t + p√t + p·min{f, t}), where f < p is the number of failures. It is proved to be optimal if the number of faults is the only restriction on the adversary. Finally, we consider the question whether randomization helps for the channel without collision detection against weaker adversaries. We develop a randomized algorithm which needs to perform only the expected minimum work if the adversary may fail a constant fraction of stations but has to select the failure-prone stations prior to the start of the algorithm.
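The work bounds quoted in this abstract are concrete enough to evaluate numerically. A minimal sketch (the formulas are taken from the abstract; the function names and sample parameters below are my own, for illustration only):

```python
import math

def work_lower_bound(t, p):
    """Omega(t + p*sqrt(t)): work every reliable Do-All algorithm must perform,
    matched by the deterministic algorithm for the channel with collision detection."""
    return t + p * math.sqrt(t)

def work_no_collision_detection(t, p, f):
    """O(t + p*sqrt(t) + p*min(f, t)): bound for the channel without collision
    detection, where f < p is the number of crash failures."""
    assert f < p, "the abstract assumes fewer failures than stations"
    return work_lower_bound(t, p) + p * min(f, t)

# Hypothetical parameters: 10,000 tasks, 100 stations, at most 10 crashes.
t, p, f = 10_000, 100, 10
print(work_lower_bound(t, p))                # 20000.0
print(work_no_collision_detection(t, p, f))  # 21000.0
```

The extra p·min{f, t} term is the price the abstract attributes to losing collision detection: each of the f crashes can cost up to p additional processor steps of redundant work.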
Operation-valency and the cost of coordination
 In Proceedings of the 22nd Annual ACM Symposium on Principles of Distributed Computing (PODC)
, 2003
Abstract

Cited by 6 (3 self)
This paper introduces operation-valency, a generalization of the valency proof technique originated by Fischer, Lynch, and Paterson. By focusing on critical events that influence the return values of individual operations rather than on critical events that influence a protocol's single return value, the new technique allows us to derive a collection of realistic lower bounds for lock-free implementations of concurrent objects such as linearizable queues, stacks, sets, hash tables, shared counters, approximate agreement, and more. By realistic we mean that they follow the real-world model introduced by Dwork, Herlihy, and Waarts, counting both memory references and memory stalls due to contention, and that they allow the combined use of read, write, and read-modify-write operations available on current machines. By using the operation-valency technique, we derive an Ω(√n) non-cached shared memory accesses lower bound on the worst-case time complexity of lock-free implementations of objects in Influence(n), a wide class of concurrent objects including all of those mentioned above, in which an individual operation can be influenced by all others. We also prove the existence of a fundamental relationship between the space complexity, latency, contention, and "influence level" of any lock-free object implementation. Our results are broad in that they hold for implementations combining read/write memory and any collection of read-modify-write operations, and in that they apply even if shared memory words have unbounded size.
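Shared counters are among the simplest members of the Influence(n) class described above: the value one increment returns can be influenced by every other concurrent increment. As an illustration only (not the paper's construction), a sketch of the classic lock-free increment built on a compare-and-swap primitive; since CPython exposes no raw CAS, the primitive is simulated here with a lock:

```python
import threading

class SimulatedCAS:
    """Toy atomic cell; the internal lock only stands in for a hardware CAS."""
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def load(self):
        return self._value

    def compare_and_swap(self, expected, new):
        # Atomically: if the cell still holds `expected`, store `new`.
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

def increment(cell):
    """Lock-free increment loop: retry until the CAS succeeds."""
    while True:
        old = cell.load()
        if cell.compare_and_swap(old, old + 1):
            return old  # return value depends on every concurrent increment

cell = SimulatedCAS()
threads = [threading.Thread(target=lambda: [increment(cell) for _ in range(1000)])
           for _ in range(4)]
for th in threads:
    th.start()
for th in threads:
    th.join()
assert cell.load() == 4000
```

Under contention each failed CAS forces a retry, i.e., extra shared-memory accesses and stalls; those retries are exactly the kind of cost the operation-valency bounds charge to any such implementation.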
Performing Work in Broadcast Networks
, 2001
Abstract

Cited by 4 (1 self)
We consider the problem of how to schedule t similar and independent tasks to be performed in a synchronous distributed system of p stations communicating via multiple-access channels. Stations are prone to crashes whose patterns of occurrence are specified by adversarial models. Work, defined as the number of the available processor steps, is the complexity measure. We consider only reliable algorithms that perform all the tasks as long as at least one station remains operational. It is shown that every reliable algorithm has to perform work Ω(t + p√t) even when no failures occur. An optimal deterministic algorithm for the channel with collision detection is developed, which performs work O(t + p√t). Another algorithm, for the channel without collision detection, performs work O(t + p√t + p·min{f, t}), where f < p is the number of failures. This algorithm is proved to be optimal, provided that the adversary is restricted to failing no more than f stations. Finally, we consider the question whether randomization helps against weaker adversaries for the channel without collision detection. A randomized algorithm is developed which performs the expected minimum amount O(t + p√t) of work, provided that the adversary may fail a constant fraction of stations and has to select failure-prone stations prior to the start of an execution of the algorithm.
Performing Work in Broadcast Networks ∗
Abstract
We consider the problem of how to schedule t similar and independent tasks to be performed in a synchronous distributed system of p stations communicating via multiple-access channels. Stations are prone to crashes whose patterns of occurrence are specified by adversarial models. Work, defined as the number of the available processor steps, is the complexity measure. We consider only reliable algorithms that perform all the tasks as long as at least one station remains operational. It is shown that every reliable algorithm has to perform work Ω(t + p√t) even when no failures occur. An optimal deterministic algorithm for the channel with collision detection is developed, which performs only work O(t + p√t). Another algorithm, for the channel without collision detection, performs work O(t + p√t + p·min{f, t}), where f < p is the number of failures. This algorithm is proved to be optimal if the upper bound f on the number of faults is the only restriction on the adversary. Finally, we consider the question whether randomization helps against weaker adversaries for the channel without collision detection. A randomized algorithm is developed which performs only the expected minimum amount O(t + p√t) of work, if the adversary may fail only a constant fraction of stations and has to select failure-prone stations prior to the start of an execution. Key words: distributed algorithm, multiple-access channel, fail-stop failure, adversary, work, lower bound, independent tasks.
unknown title
Abstract
ABSTRACT The problem of performing t tasks in a distributed system on p failure-prone processors is one of the fundamental problems in distributed computing. If the tasks are similar and independent and the processors communicate by sending messages, then the problem is called Do-All. In our work the communication is over a multiple-access channel, and the attached stations may fail by crashing. The measure of performance is work, defined as the number of the available processor steps. Algorithms are required to be reliable in that they perform all the tasks as long as at least one station remains operational. We show that each reliable algorithm always needs to perform at least the minimum amount Ω(t + p√t) of work. We develop an optimal deterministic algorithm for the channel with collision detection performing only the minimum work Θ(t + p√t). Another algorithm is given for the channel without collision detection; it performs work O(t + p√t + p·min{f, t}), where f < p is the number of failures. It is proved to be optimal if the number of faults is the only restriction on the adversary. Finally, we consider the question whether randomization helps for the channel without collision detection against weaker adversaries. We develop a randomized algorithm which needs to perform only the expected minimum work if the adversary may fail a constant fraction of stations, but it has to select the failure-prone stations prior to the start of the algorithm.