The Complexity of Renaming
Abstract

Cited by 15 (10 self)
We study the complexity of renaming, a fundamental problem in distributed computing in which a set of processes need to pick distinct names from a given namespace. We prove an individual lower bound of Ω(k) process steps for deterministic renaming into any namespace of size subexponential in k, where k is the number of participants. This bound is tight: it draws an exponential separation between deterministic and randomized solutions, and implies new tight bounds for deterministic fetch-and-increment registers, queues, and stacks. The proof of the bound is interesting in its own right, as it relies on the first reduction from renaming to another fundamental problem in distributed computing: mutual exclusion. We complement our individual bound with a global lower bound of Ω(k log(k/c)) on the total step complexity of renaming into a namespace of size ck, for any c ≥ 1. This bound applies to randomized algorithms against a strong adversary, and helps derive new global lower bounds for randomized approximate counter and fetch-and-increment implementations, all tight within logarithmic factors.
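The renaming task itself is easy to illustrate with a toy sketch. The following is not the paper's construction (the bounds above concern algorithms using reads and writes only); it cheats by using one test-and-set per name, here modeled by a non-blocking lock acquire:

```python
import threading

class Renaming:
    """Toy renaming: processes claim distinct names from a namespace of
    size m using one test-and-set per name (modeled by Lock.acquire with
    blocking=False). Illustrates the problem statement only; it is not
    the read/write-based setting the lower bounds above apply to."""
    def __init__(self, m):
        self.slots = [threading.Lock() for _ in range(m)]

    def get_name(self):
        # Scan the namespace; the first successful test-and-set wins that name.
        for name, slot in enumerate(self.slots):
            if slot.acquire(blocking=False):
                return name
        raise RuntimeError("namespace exhausted")

r = Renaming(8)
names = []

def worker():
    names.append(r.get_name())   # list.append is atomic in CPython

threads = [threading.Thread(target=worker) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# names now holds 5 distinct values from {0, ..., 7}
```

Note the worst-case step count here is linear in the namespace size, which is why tight namespaces are the interesting regime.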
Closing the complexity gap between FCFS mutual exclusion and mutual exclusion
 Distributed Computing
, 2010
Abstract

Cited by 10 (1 self)
First-Come-First-Served (FCFS) mutual exclusion (ME) is the problem of ensuring that processes attempting to concurrently access a shared resource do so one by one, in a fair order. In this paper, we close the complexity gap between FCFS ME and ME in the asynchronous shared memory model where processes communicate using atomic reads and writes only, and do not fail. Our main result is the first known FCFS ME algorithm that makes O(log N) remote memory references (RMRs) per passage and uses only atomic reads and writes. Our algorithm is also adaptive to point contention. More precisely, the number of RMRs a process makes per passage in our algorithm is Θ(min(k, log N)), where k is the point contention. Our algorithm matches known RMR complexity lower bounds for the class of ME algorithms that use reads and writes only, and beats the RMR complexity of prior algorithms in this class that have the FCFS property.
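The classic FCFS lock from reads and writes only is Lamport's bakery algorithm; a sketch follows for context. Its cost is Θ(N) RMRs per passage, which is exactly the gap the paper's O(log N) algorithm closes; this sketch illustrates only the FCFS doorway/ticket mechanism, not the paper's construction:

```python
import threading

# Lamport's bakery algorithm for N processes, using reads and writes only.
# FCFS: processes enter in the order they complete the doorway.
N = 2
choosing = [False] * N   # "in the doorway" flags
number = [0] * N         # tickets; 0 means "not competing"

def lock(i):
    # Doorway: take a ticket larger than every ticket currently visible.
    choosing[i] = True
    number[i] = 1 + max(number)
    choosing[i] = False
    for j in range(N):
        if j == i:
            continue
        while choosing[j]:        # wait until j's ticket is finalized
            pass
        # Defer to j while it holds a smaller (ticket, id) pair.
        while number[j] != 0 and (number[j], j) < (number[i], i):
            pass

def unlock(i):
    number[i] = 0

counter = 0

def worker(i):
    global counter
    for _ in range(100):
        lock(i)
        counter += 1              # critical section
        unlock(i)

threads = [threading.Thread(target=worker, args=(n,)) for n in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

After both workers finish, `counter` equals 200: every increment happened inside the critical section. (This relies on CPython's effectively sequentially consistent interleaving; on weakly ordered hardware the algorithm needs memory barriers.)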
An O(1) RMRs leader election algorithm
 In Proc. ACM PODC 2006
, 2006
Abstract

Cited by 9 (4 self)
The leader election problem is a fundamental coordination problem. We present leader election algorithms for multiprocessor systems where processes communicate by reading and writing shared memory asynchronously, and do not fail. In particular, we consider the cache-coherent (CC) and distributed shared memory (DSM) models of such systems. We present leader election algorithms that perform a constant number of remote memory references (RMRs) in the worst case. Our algorithms use splitter-like objects [6, 9] in a novel way, by organizing active processes into teams that share work. As there is an Ω(log n) lower bound on the RMR complexity of mutual exclusion for n processes using reads and writes only [10], our result separates the mutual exclusion and leader election problems in terms of RMR complexity in both the CC and DSM models. Our result also implies that any algorithm using reads, writes, and one-time test-and-set objects can be simulated by an algorithm using reads and writes with only a constant blowup of the RMR complexity; proving this is easy in the CC model, but presents subtle challenges in the DSM model.
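For reference, the splitter building block mentioned above can be sketched in a few lines (in the style of Moir and Anderson): among n processes that call it concurrently, at most one returns "stop", at most n−1 return "right", and at most n−1 return "down". How the paper composes splitter-like objects into work-sharing teams is considerably more elaborate than this sketch:

```python
class Splitter:
    """One-shot splitter from two read/write registers: a race register
    and a door flag. Guarantees (under concurrent calls): at most one
    'stop', not all callers 'right', not all callers 'down'."""
    def __init__(self):
        self.last = None        # race register: last process seen
        self.door_open = True   # door flag

    def split(self, pid):
        self.last = pid
        if not self.door_open:
            return "right"      # someone already went through the door
        self.door_open = False  # close the door behind us
        if self.last == pid:    # nobody overtook us in the race
            return "stop"
        return "down"

s = Splitter()
print(s.split("p1"))   # a solo caller always gets "stop"
print(s.split("p2"))   # a late caller finds the door closed: "right"
```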
Adaptive Randomized Mutual Exclusion in Sub-Logarithmic Expected Time
 PODC'10
, 2010
Abstract

Cited by 7 (2 self)
Mutual exclusion is a fundamental distributed coordination problem. Shared-memory mutual exclusion research focuses on local-spin algorithms and uses the remote memory references (RMRs) metric. A mutual exclusion algorithm is adaptive to point contention if its RMR complexity is a function of the maximum number of processes concurrently executing their entry, critical, or exit section. In the best previously known deterministic adaptive mutual exclusion algorithm, presented by Kim and Anderson [22], a process performs O(min(k, log N)) RMRs as it enters and exits its critical section, where k is the point contention and N is the number of processes in the system. Kim and Anderson also proved that a deterministic algorithm with o(k) RMR complexity does not exist [21]. However, they describe a randomized mutual exclusion algorithm that has O(log k) expected RMR complexity against an oblivious adversary. All these results apply to algorithms that use only atomic read and write operations. We present a randomized adaptive mutual exclusion algorithm with O(log k / log log k) expected amortized RMR complexity, even against a strong adversary, for the cache-coherent shared memory read/write model. Using techniques similar to those used in [17], our algorithm can be adapted to the distributed shared memory read/write model. This establishes that sub-logarithmic adaptive mutual exclusion, using reads and writes only, is possible.
Mutual Exclusion with O(log² log n) Amortized Work
Abstract

Cited by 5 (0 self)
This paper presents a new algorithm for mutual exclusion in which each passage through the critical section costs amortized O(log² log n) RMRs with high probability. The algorithm operates in a standard asynchronous, local-spinning, shared-memory model with an oblivious adversary. It guarantees that every process enters the critical section with high probability. The algorithm achieves its efficient performance by exploiting a connection between mutual exclusion and approximate counting.
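The best-known approximate counter is Morris's: it keeps only an exponent x and estimates the count as 2^x − 1, incrementing x with probability 2^−x so the estimate is unbiased. A minimal sketch follows; whether the paper uses this particular counter is not stated in the abstract, so treat it purely as background on approximate counting:

```python
import random

class MorrisCounter:
    """Morris approximate counter: stores only an exponent x (O(log log n)
    bits for n events) and estimates the count as 2**x - 1. Each increment
    bumps x with probability 2**-x, making the estimate unbiased."""
    def __init__(self):
        self.x = 0

    def increment(self):
        if random.random() < 2.0 ** (-self.x):
            self.x += 1

    def estimate(self):
        return 2 ** self.x - 1

random.seed(0)
c = MorrisCounter()
for _ in range(10000):
    c.increment()
# After 10000 increments, x is typically near log2(10000) ~ 13, so
# estimate() is within a constant factor of the true count in expectation.
```

The variance of a single Morris counter is large (about n²/2); averaging several independent counters tightens the estimate.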
A Time Complexity Lower Bound for Adaptive Mutual Exclusion
, 2011
Abstract

Cited by 3 (0 self)
We consider the time complexity of adaptive mutual exclusion algorithms, where “time” is measured by counting the number of remote memory references required per critical-section access. For systems that support (only) read, write, and comparison primitives (such as compare-and-swap), we establish a lower bound that precludes a deterministic algorithm with o(k) time complexity, where k is point contention. In particular, it is impossible to construct a deterministic O(log k) algorithm based on such primitives.
Lower bounds in distributed computing
, 2008
Abstract
Distributed computing is the study of achieving cooperative behavior between independent computing processes with possibly conflicting goals. Distributed computing is ubiquitous in the Internet, wireless networks, multicore and multiprocessor computers, teams of mobile robots, etc. In this thesis, we study two fundamental distributed computing problems, clock synchronization and mutual exclusion. Our contributions are as follows. 1. We introduce the gradient clock synchronization (GCS) problem. As in traditional clock synchronization, a group of nodes in a bounded delay communication network try to synchronize their logical clocks, by reading their hardware clocks and exchanging messages. We say the distance between two nodes is the uncertainty in message delay between the nodes, and we say the clock skew between the nodes is their difference in logical clock values. GCS studies clock skew as a function of distance. We show that, surprisingly, every clock synchronization algorithm exhibits some execution in which two nodes at distance one apart have Ω(log D / log log D) clock skew, where D is the diameter of the network.
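Why delay uncertainty ("distance") limits synchronization can be seen in a small round-trip calculation. The helper below is a hypothetical illustration, not taken from the thesis: given a request stamped by the remote clock somewhere inside the round trip, the remote clock's offset can only be pinned down to an interval whose width is the delay uncertainty:

```python
def offset_bounds(t_send, t_recv_remote, t_reply_local, d_min=0.0):
    """Bound the remote clock's offset (remote time minus local time).

    A request sent at local time t_send is stamped t_recv_remote by the
    remote clock; the reply arrives at local time t_reply_local. If each
    message takes at least d_min, the remote stamp was taken at some local
    instant in [t_send + d_min, t_reply_local - d_min], so:
        t_recv_remote - (t_reply_local - d_min) <= offset
        offset <= t_recv_remote - (t_send + d_min)
    The interval width (round trip minus 2*d_min) is the irreducible
    uncertainty -- the 'distance' in the GCS sense."""
    lo = t_recv_remote - (t_reply_local - d_min)
    hi = t_recv_remote - (t_send + d_min)
    return lo, hi

lo, hi = offset_bounds(0.0, 105.0, 10.0, d_min=1.0)
print(lo, hi)   # 96.0 104.0 -- offset known only to within 8 time units
```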
Tight Time-Space Tradeoff for Mutual Exclusion
, 2011
Abstract
Mutual exclusion is a fundamental problem in distributed computing. Proving upper and lower bounds on the RMR complexity of this problem and its variants has been a topic of intense research in the last two decades. We add a novel dimension to this research by proving matching lower and upper bounds on how RMR complexity trades off with space. Two exciting implications of our results are that constant RMR complexity is impossible with subpolynomial space, and subpolynomial RMR complexity is impossible with constant space (for cache-coherent multiprocessors, regardless of how strong the hardware synchronization operations are). We believe that our technical contributions are equally exciting. A highlight is that, even though mutual exclusion is a “messy” problem to analyze because of system details such as asynchrony and cache coherence, we show that a simple and purely combinatorial bin-pebble game that we design exactly captures the complexity of the mutual exclusion problem. Lower bound proofs in distributed computing are typically based on covering, bivalency, or other indistinguishability arguments. In contrast, our lower bounds are based on the potential method, and we believe this is the first use of this method in lower bounds for distributed computing.