Results 1–10 of 44
Hundreds of Impossibility Results for Distributed Computing
Distributed Computing, 2003
Cited by 52 (5 self)
We survey results from distributed computing that show tasks to be impossible, either outright or within given resource bounds, in various models. The parameters of the models considered include synchrony, fault-tolerance, different communication media, and randomization. The resource bounds refer to time, space and message complexity. These results are useful in understanding the inherent difficulty of individual problems and in studying the power of different models of distributed computing.
Conditions on input vectors for consensus solvability in asynchronous distributed systems
Journal of the ACM, 2001
Cited by 39 (13 self)
This article introduces and explores the condition-based approach to solving the consensus problem in asynchronous systems. The approach studies conditions that identify sets of input vectors for which it is possible to solve consensus despite the occurrence of up to f process crashes. The first main result defines acceptable conditions and shows that these are exactly the conditions for which a consensus protocol exists. Two examples of realistic acceptable conditions are presented and proved to be maximal, in the sense that they cannot be extended and remain acceptable. The second main result is a generic consensus shared-memory protocol for any acceptable condition. The protocol always guarantees agreement and validity, and terminates (at least) when the inputs satisfy the condition with which the protocol has been instantiated, or when there are no crashes. An efficient version of the protocol is then designed for the message-passing model that works when f < n/2, and it is shown that no such protocol exists when f ≥ n/2. It is also shown how the protocol's safety can be traded for its liveness.
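As a rough illustration of the kind of condition the abstract describes (this exact predicate is our own example, not necessarily either of the paper's two conditions): accept an input vector only when its most frequent value outnumbers every other value by more than f, so that no pattern of up to f crashes can hide which value dominates.

```python
from collections import Counter

def acceptable_c1(inputs, f):
    """Illustrative condition: the most frequent input value must
    outnumber every other value by more than f crashes' worth."""
    counts = Counter(inputs).most_common()
    top = counts[0][1]
    second = counts[1][1] if len(counts) > 1 else 0
    return top - second > f

print(acceptable_c1([1, 1, 1, 0], f=1))  # True: 3 - 1 = 2 > 1
print(acceptable_c1([1, 1, 0, 0], f=1))  # False: 2 - 2 = 0
```

With such a predicate, a protocol instantiated with it is obligated to terminate only on input vectors where the predicate holds, which matches the conditional-termination guarantee quoted above.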
The IOA Language and Toolset: Support for Designing, Analyzing, and Building Distributed Systems
1998
Cited by 34 (10 self)
This report describes a new language for distributed programming, the IOA language, together with a high-level design and preliminary implementation for a suite of tools, the IOA toolset, to support the production of high-quality distributed software. The language and tools are based on the I/O automaton model, which has been used to describe and verify distributed algorithms. The toolset supports a development process that begins with a high-level specification, refines that specification via successively more detailed designs, and ends by automatically generating distributed programs. The toolset encourages system decomposition, which helps make distributed programs understandable and easy to modify. It also provides a variety of validation methods (theorem proving, model checking, and simulation), which can be used to ensure that the generated programs are correct, subject to assumptions about externally provided system services (e.g., communication services) and about the correctness of hand-coded data type implementations.
Algorithms adaptive to point contention
Journal of the ACM, 2003
Cited by 21 (8 self)
This article introduces the sieve, a novel building block that makes it possible to adapt to the number of simultaneously active processes (the point contention) during the execution of an operation. We present an implementation of the sieve in which each sieve operation requires O(k log k) steps, where k is the point contention during the operation. The sieve is the cornerstone of the first wait-free algorithms that adapt to point contention using only read and write operations. Specifically, we present efficient algorithms for long-lived renaming, timestamping and collecting information.
Adaptive Long-Lived Renaming with Read and Write Operations
1999
Cited by 17 (3 self)
This paper presents an adaptive algorithm for long-lived renaming using only read and write operations. A process p_i obtains a new name in the range {1, ..., k(2k − 1)}, where k is the maximal number of processes simultaneously participating with p_i; the number of steps performed by p_i is O(k^2 log k). The range of names is reduced to {1, ..., 6k − 1} by employing the algorithm of Burns and Peterson.

1 Introduction

Distributed coordination algorithms are designed to accommodate a large number of processes, each with a distinct identifier; in order to coordinate, processes must obtain up-to-date information from each other. A wait-free algorithm guarantees that a process completes its operation within a finite number of its own steps, regardless of the behavior of other processes; in a typical wait-free algorithm, information is collected by reading from an array indexed with processes' identifiers. This scheme is overkill in a well-designed system where often ...
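The two name-range bounds quoted above are simple polynomials in the point contention k; a quick calculation (a sketch using only the formulas stated in the abstract) shows where the Burns-Peterson reduction starts to pay off:

```python
def adaptive_range(k):
    # size of the name range from the base algorithm: k(2k - 1)
    return k * (2 * k - 1)

def reduced_range(k):
    # size after applying the Burns-Peterson algorithm: 6k - 1
    return 6 * k - 1

for k in range(1, 6):
    print(k, adaptive_range(k), reduced_range(k))
```

For small contention the quadratic range k(2k − 1) is actually the smaller one; the linear 6k − 1 range wins once k ≥ 4 (28 names vs. 23 at k = 4).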
A Simple Algorithmic Characterization of Uniform Solvability (Extended Abstract)
Proceedings of the 43rd Annual IEEE Symposium on Foundations of Computer Science (FOCS 2002), 2002
Cited by 11 (6 self)
The Herlihy-Shavit (HS) conditions characterizing the solvability of asynchronous tasks over n processors have been a milestone in the development of the theory of distributed computing. Yet they were of no help when researchers sought algorithms that do not depend on n. To help in this pursuit we investigate the uniform solvability of an infinite uniform sequence of tasks T_0, T_1, T_2, ..., where T_i is a task over processors p_0, p_1, ..., p_i, and T_i extends T_{i-1}. We say that such a sequence is uniformly solvable if there exist protocols to solve each T_i and the protocol for T_i extends the protocol for T_{i-1}. This paper establishes that although each T_i may be solvable, the uniform sequence is not necessarily uniformly solvable. We show this by proposing a novel uniform sequence of solvable tasks and proving that the sequence is not amenable to a uniform solution. We then extend the HS conditions for a task over n processors to uniform solvability in a natural way. The technique we use to accomplish this is to generalize the alternative algorithmic proof, by Borowsky and Gafni, of the HS conditions, by showing that the infinite uniform sequence of Immediate Snapshot tasks is uniformly solvable. A side benefit of the technique is a widely applicable methodology for the development of uniform protocols.
Efficient and Robust Sharing of Memory in Message-Passing Systems
Journal of Algorithms, 1996
Cited by 11 (1 self)
A simulation of a wait-free, atomic, single-writer multi-reader register in an asynchronous message-passing system is presented. The simulation can withstand the failure of up to half of the processors, and requires O(n) messages (for each read or write operation), assuming there are n + 1 processors in the system. It improves on the previous simulation, which requires O(n^2) messages (for each read or write operation). The message complexity of the new simulation is within a constant factor of the optimum. The new simulation improves the complexity of algorithms for the following problems in the message-passing model in the presence of processor failures: multi-writer multi-reader registers, concurrent timestamp systems, ℓ-exclusion, atomic snapshots, randomized consensus, implementation of data structures, as well as improved fault-tolerant algorithms for any solvable decision task.

Keywords: fault-tolerance, shared memory, message passing, wait-free algorithms, processor failures, ...
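The core mechanism behind such register simulations is majority-quorum replication with timestamps, in the style of Attiya, Bar-Noy and Dolev. The following is a minimal sketch of that idea (the class and method names are our own illustration, and the "network" is just a list of replica objects, so this shows only the quorum logic, not real asynchrony or message passing):

```python
import random

class Replica:
    """One simulated server holding a timestamped copy of the register."""
    def __init__(self):
        self.ts, self.val = 0, None

class ABDRegister:
    def __init__(self, n_replicas):
        self.replicas = [Replica() for _ in range(n_replicas)]
        self.majority = n_replicas // 2 + 1
        self.writer_ts = 0  # the single writer keeps its own timestamp

    def _some_majority(self):
        # in a real system any majority may be the first to respond;
        # sampling a random one models that nondeterminism
        return random.sample(self.replicas, self.majority)

    def write(self, v):
        # one round of O(n) messages: install (ts, v) at a majority
        self.writer_ts += 1
        for r in self._some_majority():
            if self.writer_ts > r.ts:
                r.ts, r.val = self.writer_ts, v

    def read(self):
        # round 1: collect timestamps from a majority, keep the newest
        newest = max(self._some_majority(), key=lambda r: r.ts)
        ts, val = newest.ts, newest.val
        # round 2: write the newest value back to a majority, so a
        # later read can never return an older value (atomicity)
        for r in self._some_majority():
            if ts > r.ts:
                r.ts, r.val = ts, val
        return val
```

Because any two majorities of the replicas intersect, a reader always sees the highest timestamp written so far, and the read's write-back phase is what makes the register atomic rather than merely regular.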
Towards a topological characterization of asynchronous complexity
In Proceedings of the 16th Annual ACM Symposium on Principles of Distributed Computing, 1997
Cited by 9 (0 self)
This paper introduces the use of topological models and methods, formerly used to analyze computability, as tools for the quantification and classification of asynchronous complexity. We present the first asynchronous complexity theorem, applied to decision tasks in the iterated immediate snapshot (IIS) model of Borowsky and Gafni. We do so by introducing a novel form of topological tool called the non-uniform chromatic subdivision. Building on the framework of Herlihy and Shavit's topological computability model, our theorem states that the time complexity of any asynchronous algorithm is directly proportional to the level of non-uniform chromatic subdivisions necessary to allow a simplicial map from a task's input complex to its output complex. To show the power of our theorem, we use it to derive a new tight bound on the time to achieve n-process approximate agreement in the IIS model: log_d((max input − min input)/ε), where d = 3 for two processes and d = 2 for three or more. This closes an intriguing gap between the known upper and lower bounds implied by the work of Aspnes and Herlihy. More than the new bounds themselves, the importance of our asynchronous complexity theorem is that the algorithms and lower bounds it allows us to derive are intuitive and simple, with topological proofs that require no mention of concurrency at all.
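To get a feel for the bound, one can evaluate log_d((max input − min input)/ε) directly. Rounding up to a whole number of rounds is our reading; the abstract states only the logarithmic expression:

```python
import math

def iis_rounds(max_input, min_input, eps, n_processes):
    # d = 3 for two processes, d = 2 for three or more (from the abstract)
    d = 3 if n_processes == 2 else 2
    return math.ceil(math.log((max_input - min_input) / eps, d))

# e.g. inputs spanning [0, 10] with eps = 1:
print(iis_rounds(10, 0, 1, 2), iis_rounds(10, 0, 1, 5))  # prints: 3 4
```

The larger base d = 3 for two processes is why the two-process case converges in fewer rounds for the same input spread.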
The Disagreement Power of an Adversary
Cited by 9 (3 self)
At the heart of distributed computing lies the fundamental result that the level of agreement that can be obtained in an asynchronous shared memory model where t processes can crash is exactly t + 1. In other words, an adversary that can crash any subset of size at most t can prevent the processes from agreeing on t values. But what about the remaining (2^(2^n) − n) adversaries that might crash certain combinations of processes and not others? This paper presents a precise way to characterize such adversaries by introducing the notion of disagreement power: the largest integer k for which the adversary can prevent processes from agreeing on k values. We show how to compute the disagreement power of an adversary and how this notion enables us to derive n equivalence classes of adversaries.
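The count of adversaries follows from viewing an adversary as an arbitrary collection of "allowed to crash" subsets of the n processes: there are 2^n subsets, hence 2^(2^n) adversaries in total, from which the abstract subtracts the n classical t-resilient ones. A sketch of that counting argument (our reading of the figure quoted above):

```python
def total_adversaries(n):
    # an adversary = any set of subsets of the n processes,
    # so there are 2**(2**n) of them in total
    return 2 ** (2 ** n)

for n in (1, 2, 3):
    print(n, total_adversaries(n))  # grows doubly exponentially
```

Even at n = 3 there are already 256 adversaries, which is why a characterization by disagreement power, rather than case-by-case analysis, is needed.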