Results 1 – 9 of 9
Hundreds of Impossibility Results for Distributed Computing
 Distributed Computing
, 2003
Abstract

Cited by 47 (5 self)
We survey results from distributed computing that show tasks to be impossible, either outright or within given resource bounds, in various models. The parameters of the models considered include synchrony, fault-tolerance, different communication media, and randomization. The resource bounds refer to time, space, and message complexity. These results are useful in understanding the inherent difficulty of individual problems and in studying the power of different models of distributed computing.
Lower Bounds in Distributed Computing
, 2000
Abstract

Cited by 7 (1 self)
This paper discusses results that say what cannot be computed in certain environments or when insufficient resources are available. A comprehensive survey would require an entire book. As in Nancy Lynch's excellent 1989 paper, "A Hundred Impossibility Proofs for Distributed Computing" [86], we shall restrict ourselves to some of the results we like best or think are most important. Our aim is to give you the flavour of the results and some of the techniques that have been used. We shall also mention some interesting open problems and provide an extensive list of references. The focus will be on results from the past decade.
A Lower Bound on the Local Time Complexity of Universal Constructions
 In Proceedings of the 17th Annual ACM Symposium on Principles of Distributed Computing
Abstract

Cited by 7 (1 self)
Non-blocking and wait-free universal constructions have been a subject of active research in recent years. A universal construction is attractive because, no matter what types of shared objects are needed by applications, they can be implemented simply by instantiating the universal construction with appropriate types. This flexibility, however, comes at a cost: for each universal construction U, we prove that there is a type T such that, if O is an n-process type T object implemented using U, in the worst case some process must perform Ω(n) local computation in order to complete a single operation on O. A universal construction is oblivious if it does not exploit the semantics of the type that it is instantiated with. Our lower bound implies that if a shared object O is implemented using an oblivious universal construction, then no matter what O's type is, in the worst case some process must perform Ω(n) local computation in order to complete a single operation on O. Thu...
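The flexibility the abstract describes can be illustrated with a toy sketch of an oblivious universal construction: it implements any object type purely from that type's sequential specification, never inspecting its semantics. All names below are illustrative, and the single-threaded log replay only stands in for the consensus-based appends a real non-blocking construction would use.

```python
# Toy sketch (hypothetical code, not the paper's construction): a universal
# construction parameterized only by an initial state and a sequential
# apply function -- it is "oblivious" to the type's semantics.

class UniversalConstruction:
    def __init__(self, initial_state, apply_op):
        self.log = []              # shared append-only operation log
        self.initial = initial_state
        self.apply_op = apply_op   # sequential spec: (state, op) -> (state, response)

    def invoke(self, op):
        # A real non-blocking construction would append via consensus
        # (e.g. compare-and-swap); here we simply append.
        self.log.append(op)
        # Replaying the whole log from the initial state is local work that
        # grows with the log -- the kind of per-operation local computation
        # the paper's lower bound concerns.
        state, resp = self.initial, None
        for o in self.log:
            state, resp = self.apply_op(state, o)
        return resp

# Instantiating the same construction with a counter type:
counter = UniversalConstruction(
    0, lambda s, op: (s + 1, s + 1) if op == "inc" else (s, s))
counter.invoke("inc")
counter.invoke("inc")
```

Swapping in a different `apply_op` yields a stack, a queue, or any other type, which is exactly the appeal of universal constructions.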
Efficient wait-free implementation of multiword LL/SC variables
 In Proceedings of the 25th IEEE International Conference on Distributed Computing Systems (ICDCS)
, 2005
Abstract

Cited by 5 (0 self)
Since the design of lock-free data structures often poses a formidable intellectual challenge, researchers are constantly in search of abstractions and primitives that simplify this design. The multiword LL/SC object is such a primitive: many existing algorithms are based on this primitive, including the non-blocking and wait-free universal constructions [1], the closed objects construction [4] and the snapshot algorithms [12, 13]. In this paper, we consider the problem of implementing a W-word LL/SC object shared by N processes. The previous best algorithm, due to Anderson and Moir [1], is time optimal (LL and SC operations run in O(W) time), but has a space complexity of O(N^2 W). We present an algorithm that uses novel buffer management ideas to cut down the space complexity by a factor of N to O(NW), while still being time optimal.
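For readers unfamiliar with the primitive, a minimal single-threaded sketch of LL/SC semantics may help (illustrative code, not the paper's algorithm): SC on a word succeeds only if no successful SC has occurred on that word since the caller's last LL. A per-word version number simulates that check.

```python
# Hypothetical simulation of one LL/SC word. A W-word object would hold W
# of these; real implementations use atomic hardware or CAS, not Python.

class LLSCWord:
    def __init__(self, value):
        self.value = value
        self.version = 0               # bumped on every successful SC

    def ll(self, pid, contexts):
        contexts[pid] = self.version   # remember the version seen at LL time
        return self.value

    def sc(self, pid, contexts, new_value):
        if contexts.get(pid) != self.version:
            return False               # word changed since this pid's LL: fail
        self.value = new_value
        self.version += 1
        return True

ctx = {}
w = LLSCWord(10)
w.ll("p", ctx)                   # process p load-links, sees 10
w.ll("q", ctx)                   # process q load-links too
ok_p = w.sc("p", ctx, 11)        # p's SC succeeds
ok_q = w.sc("q", ctx, 12)        # q's SC fails: the word changed since q's LL
```

This failure of q's SC is what makes LL/SC stronger than plain read/write and convenient for building lock-free structures.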
An adaptive technique for constructing robust and high-throughput shared objects
 In OPODIS
, 2010
Abstract

Cited by 1 (1 self)
Abstract. Shared counters are the key to solving a variety of coordination problems on multiprocessor machines, such as barrier synchronization and index distribution. It is desired that they, like shared objects in general, be robust, linearizable and scalable. We present the first linearizable and wait-free shared counter algorithm that achieves high throughput without a priori knowledge about the system's level of asynchrony. Our algorithm can be easily adapted to any other combinable objects as well, such as stacks and queues. In particular, in an N-process execution E, our algorithm achieves high throughput of Ω(N / (φ_E^2 log^2 φ_E log N)), where φ_E is E's level of asynchrony. Moreover, our algorithm withstands any constant number of faults. If E contains a constant number of faults, then our algorithm still achieves high throughput of Ω(N / (φ′_E^2 log^2 φ′_E log N)), where φ′_E bounds the relative speeds of any two processes, at a time when both of them participated in E and neither of them failed. Our algorithm can be viewed as an adaptive version of the Bounded-Wait-Combining (BWC) prior-art algorithm. BWC receives as input an argument φ as a (supposed) upper bound on φ_E, and achieves optimal throughput if φ = φ_E. However, if the given φ happens to be lower than the actual φ_E, or much greater than φ_E, then the throughput of BWC degrades significantly. Moreover, whereas BWC is only lock-free, our algorithm is more robust, since it is wait-free. To achieve high throughput and wait-freedom, we present a method that guarantees (for some common kinds of procedures) the procedure's successful termination in a bounded time, regardless of shared memory contention. This method may prove useful by itself, for other problems.
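The combining idea underlying such counters can be sketched in a few lines (hypothetical code, not the paper's or BWC's algorithm): waiting processes publish their pending increments, and a single "combiner" applies the whole batch with one access to the central counter, then hands each process its linearized return value.

```python
# Toy combining step for a fetch&increment counter. In a real combining
# algorithm the pending list would be gathered from per-process slots or a
# combining tree; here it is just passed in.

def combine_increments(counter, pending):
    """counter: current shared value; pending: list of process ids with a
    pending fetch&increment. Returns (new counter value, {pid: response})."""
    responses = {}
    base = counter
    for i, pid in enumerate(pending):
        responses[pid] = base + i + 1   # each pid gets a distinct linearized value
    return base + len(pending), responses

value, resp = combine_increments(0, ["p1", "p2", "p3"])
# One access to the shared counter served three fetch&increment operations,
# which is where the throughput gain over per-process contention comes from.
```

The throughput bounds in the abstract quantify how much of this batching survives when process speeds (the asynchrony level φ_E) diverge.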
On the complexity of implementing . . .
, 2003
Abstract
We consider shared memory systems in which asynchronous processes cooperate with each other by communicating via shared data objects, such as counters, queues, stacks, and priority queues. The common approach to implementing such shared objects is based on locking: to perform an operation on a shared object, a process obtains a lock, accesses the object, and then releases the lock. Locking, however, has several drawbacks, including convoying, priority inversion, and deadlocks. Furthermore, lock-based implementations are not fault-tolerant: if a process crashes while holding a lock, other processes can end up waiting forever for the lock. Wait-free linearizable implementations were conceived to overcome most of the above drawbacks of locking. A wait-free implementation guarantees that if a process repeatedly takes steps, then its operation on the implemented data object will eventually complete, regardless of whether other processes are slow, or fast, or have crashed. In this thesis, we first present an efficient wait-free linearizable implementation of a class of object types, called closed and closable types, and then prove time and space lower bounds on wait-free linearizable implementations of another class of object types, called perturbable types.
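The lock-based pattern the abstract describes (obtain a lock, access the object, release the lock) looks as follows when sketched with Python's standard threading.Lock; the class name is illustrative.

```python
import threading

class LockedQueue:
    """A lock-based shared queue, in the style the abstract criticizes."""

    def __init__(self):
        self._items = []
        self._lock = threading.Lock()

    def enqueue(self, x):
        with self._lock:            # obtain the lock ...
            self._items.append(x)   # ... access the shared object ...
        # ... and release it. If a process crashed while holding the lock,
        # every other process would block forever -- the fault-tolerance
        # drawback the abstract points out; wait-free implementations avoid
        # this by never making one process wait on another's progress.

    def dequeue(self):
        with self._lock:
            return self._items.pop(0) if self._items else None

q = LockedQueue()
q.enqueue(1)
q.enqueue(2)
```

A wait-free counterpart would replace the lock with helping or atomic primitives so that a stalled process cannot block the others.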