Results 1 – 6 of 6
Linear lower bounds on real-world implementations of concurrent objects
 In Proceedings of the 46th Annual Symposium on Foundations of Computer Science (FOCS), 2005
Cited by 19 (10 self)
Abstract This paper proves Ω(n) lower bounds on the time to perform a single instance of an operation in any implementation of a large class of data structures shared by n processes. For standard data structures such as counters, stacks, and queues, the bound is tight. The implementations considered may apply any deterministic primitives to a base object. No bounds are assumed on either the number of base objects or their size. Time is measured as the number of steps a process performs on base objects and the number of stalls it incurs as a result of contention with other processes.
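The step-plus-stall time measure described in this abstract can be made concrete with a toy accounting model (a hypothetical sketch, not the paper's formal model): when k processes apply primitives to the same base object simultaneously, the accesses are serialized, so the process served i-th incurs i−1 stalls. The function name and interface below are illustrative only.

```python
def charge_time(accesses_per_object):
    """Toy model of the step/stall time measure (illustrative, not the
    paper's formal definition). accesses_per_object: one count per base
    object, giving how many processes simultaneously access that object.
    Returns (steps, stalls) summed over all processes."""
    steps = sum(accesses_per_object)          # every access costs one step
    # serializing k simultaneous accesses costs 0 + 1 + ... + (k-1) stalls
    stalls = sum(k * (k - 1) // 2 for k in accesses_per_object)
    return steps, stalls

# n processes all hitting one shared base object (e.g. a counter): the
# process serialized last incurs n-1 stalls, which is why a single-object
# implementation meets, but cannot beat, the Omega(n) bound.
n = 8
steps, stalls = charge_time([n])
print(steps, stalls)  # 8 steps in total, 0+1+...+7 = 28 stalls in total
```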
Synchronizing without locks is inherently expensive
 In Proceedings of the ACM Symposium on Principles of Distributed Computing, 2006
Cited by 6 (4 self)
It has been politically correct to blame locks for their fragility, especially since researchers identified obstruction-freedom: a progress condition that precludes locking while being weak enough to raise the hope for good performance. This paper attenuates this hope by establishing lower bounds on the complexity of obstruction-free implementations in contention-free executions: those where obstruction-freedom was precisely claimed to be effective. Through our lower bounds, we argue for an inherent cost of concurrent computing without locks. We first prove that obstruction-free implementations of a large class of objects, using only overwriting or trivial primitives in contention-free executions, have Ω(n) space complexity and Ω(log₂ n) (obstruction-free) step complexity. These bounds apply to implementations of many popular objects, including variants of fetch&add, counter, compare&swap, and LL/SC. When arbitrary primitives can be applied in contention-free executions, we show that, in any implementation of binary consensus, or any perturbable object, the number of distinct base objects accessed and memory stalls incurred by some process in a contention-free execution is Ω(√n). All these results hold regardless of the behavior of processes after they become aware of contention. We also prove that, in any obstruction-free implementation of a perturbable object in which processes are not allowed to fail their operations, the number of memory stalls incurred by some process that is unaware of contention is Ω(n).
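For intuition, here is a minimal counter built from a compare&swap base object, one of the non-trivial primitives named above (a hypothetical Python sketch; the lock inside `CASCell` only models the atomicity of the hardware primitive, and all names are illustrative). The retry loop completes whenever the caller eventually runs without interference, so it is obstruction-free (in fact lock-free); the bounds above quantify what such lock-free designs must pay even in contention-free executions.

```python
import threading

class CASCell:
    """A base object supporting read and compare-and-swap (a non-trivial,
    non-overwriting primitive in the terminology of the abstract above)."""
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()  # models only the primitive's atomicity

    def read(self):
        return self._value

    def cas(self, expected, new):
        """Atomically: if value == expected, set it to new and succeed."""
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

def of_increment(cell):
    """Increment without locks: a concurrent writer can make the CAS fail
    and force a retry, but a process running alone finishes in two steps."""
    while True:
        v = cell.read()
        if cell.cas(v, v + 1):
            return v + 1
```

Usage: `of_increment(cell)` returns the new counter value; a failed `cas` simply restarts the loop with a fresh `read`.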
On the inherent sequentiality of concurrent objects
Cited by 4 (3 self)
Abstract. We present Ω(n) lower bounds on the worst-case time to perform a single instance of an operation in any non-blocking implementation of a large class of concurrent data structures shared by n processes. Time is measured by the number of stalls a process incurs as a result of contention with other processes. For standard data structures such as counters, stacks, and queues, our bounds are tight. The implementations considered may apply any primitives to a base object. No upper bounds are assumed on either the number of base objects or their size.
Lower bounds for adaptive collect and related objects (Extended Abstract)
 In Proc. 23rd Annual ACM Symp. on Principles of Distributed Computing, 2004
Cited by 4 (2 self)
An adaptive algorithm, whose step complexity adjusts to the number of active processes, is attractive for situations in which the number of participating processes is highly variable. This paper studies the number and type of multi-writer registers that are needed for adaptive algorithms. We prove that if a collect algorithm is f-adaptive to total contention, namely, its step complexity is f(k), where k is the number of processes that ever took a step, then it uses Ω(f⁻¹(n)) multi-writer registers, where n is the total number of processes in the system. Furthermore, we show that competition for the underlying registers is inherent for adaptive collect algorithms. We consider c-write registers, to which at most c processes can be concurrently about to write. Special attention is given to exclusive-write registers, the case c = 1 where no competition is allowed, and concurrent-write registers, the case c = n where any amount of competition is allowed. A collect algorithm is f-adaptive to point contention if its step complexity is f(k), where k is the maximum number of simultaneously active processes. Such an algorithm is shown to require Ω(f⁻¹(n/c)) concurrent-write registers, even if an unlimited number of c-write registers are available. A smaller lower bound is also obtained in this situation for collect algorithms that are f-adaptive to total contention. The lower bounds also hold for nondeterministic implementations of sensitive objects from historyless objects. Finally, we present lower bounds on the step complexity in solo executions (i.e., without any contention) when only c-write registers are used: for weak-test&set objects, we present an Ω(log n / (log c + log log n)) lower bound. Our lower bound for collect and sensitive objects is Ω((n − 1)/c).
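The collect object under study can be sketched by its interface (a deliberately simplified, sequential Python sketch with hypothetical names; real f-adaptive algorithms use shared registers and splitter-based announcement structures, and the bounds above concern how many multi-writer registers any such algorithm needs). A process publishes its value with store; collect returns the latest value of every process that ever stored, taking a number of reads proportional to the total contention k rather than to n.

```python
class AdaptiveCollect:
    """Toy sequential sketch of a collect object (illustrative only)."""
    def __init__(self):
        self._slots = {}      # models one single-writer register per process
        self._announced = []  # models a shared announcement structure

    def store(self, pid, value):
        """Publish this process's value; first store also announces pid."""
        if pid not in self._slots:
            self._announced.append(pid)
        self._slots[pid] = value

    def collect(self):
        """Read only the registers of announced processes: O(k) reads,
        where k is the number of processes that ever took a step."""
        return {pid: self._slots[pid] for pid in self._announced}
```

The adaptivity lives in `collect` reading k registers, not n; the lower bounds above say that achieving such f(k) behavior forces the use of many multi-writer registers.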
Remote Storage with Byzantine Servers [Extended abstract]
Abstract
We consider the problem of providing byzantine-tolerant storage in distributed systems where client-server links are much thinner and slower than server-server links. We provide storage algorithms that are unique in two ways. First, our algorithms take into consideration the asymmetry in network connectivity by minimizing client-server communication. To provide this property, we rely on a small amount of partial (eventual) synchrony. Second, our algorithms provide a new property called limited effect, which is important for storage systems. To provide the latter property, we use synchronized clocks, which are increasingly common due to GPS devices and NTP, even in otherwise "asynchronous systems" like the Internet. We present two algorithms called QUAD and LINEAR, which provide a tradeoff between failure resiliency and efficiency. Our algorithms implement an abortable register [3], which is an abstraction used in some real storage systems, but abortable registers are weaker than atomic registers. Thus, one might wonder if we could have implemented atomic registers instead. We answer this question in the negative: we prove that there are no implementations of atomic registers that provide the limited effect property in systems with failures, even with synchronized clocks.
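The abortable-register abstraction mentioned above can be sketched by its interface (a hypothetical toy model: the class, method names, and the contended flag are illustrative, not the paper's API). An operation that detects contention may return a distinguished abort value and take no effect; run alone, the register behaves like an atomic register.

```python
ABORT = object()  # sentinel: the operation aborted due to contention

class AbortableRegister:
    """Toy sequential sketch of an abortable register (illustrative only).
    In a real algorithm, contention is detected by the protocol itself;
    here the caller passes a flag so the interface can be exercised."""
    def __init__(self, value=None):
        self._value = value

    def write(self, value, contended=False):
        if contended:
            return ABORT       # aborted write: takes no effect
        self._value = value
        return True

    def read(self, contended=False):
        if contended:
            return ABORT       # aborted read: returns no value
        return self._value
```

The weakness relative to an atomic register is visible in the interface: callers must be prepared for ABORT, which is exactly the behavior the paper shows cannot be strengthened to atomicity while keeping limited effect.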