Results 1–10 of 27
Probabilistic Algorithms for the Wakeup Problem in Single-Hop Radio Networks
In Proceedings of the 13th Annual International Symposium on Algorithms and Computation (ISAAC), 2002
"... We consider the problem of waking up n processors in a completely broadcast system. We analyze this problem in both globally and locally synchronous models, with or without n being known to processors and with or without labeling of processors. The main question we answer is: how fast we can wake ..."
Abstract

Cited by 66 (0 self)
 Add to MetaCart
We consider the problem of waking up n processors in a completely broadcast system. We analyze this problem in both globally and locally synchronous models, with or without n being known to processors and with or without labeling of processors. The main question we answer is: how fast can we wake all the processors up with probability 1 − ε in each of these eight models. In [11] a logarithmic waking algorithm for the strongest set of assumptions is described, while for weaker models only linear and quadratic algorithms were obtained. We prove that in the weakest model (local synchronization, no knowledge of n or labeling) the best waking time is O(n/log n). We also show logarithmic or polylogarithmic waking algorithms for all stronger models, which in some cases gives an exponential improvement over previous results.
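To give a feel for the kind of protocol analyzed in these papers, here is a minimal Monte-Carlo sketch (not any paper's actual algorithm) of randomized wakeup on a single-hop channel: each awake station transmits with a fixed probability p per round, and a round wakes the whole network iff exactly one station transmits, since two or more transmissions collide and zero transmissions are silence. The function name and parameters are illustrative only.

```python
import random

def rounds_until_wakeup(awake, p, seed=0, max_rounds=10**6):
    """Simulate randomized wakeup on a single-hop radio channel.

    Each of the `awake` stations independently transmits with
    probability p in every round; a round succeeds (its message is
    heard by all, waking every station) iff exactly one station
    transmits. Returns the round number of the first lone transmission.
    """
    rng = random.Random(seed)
    for t in range(1, max_rounds + 1):
        transmitters = sum(1 for _ in range(awake) if rng.random() < p)
        if transmitters == 1:
            return t  # lone transmission heard by all: network is awake
    return None

# With p = 1/n and all n stations awake, a round succeeds with
# probability n*p*(1-p)**(n-1) ≈ 1/e, so a handful of rounds suffice in
# expectation; the hard cases studied in the paper arise when the number
# of awake stations is unknown.
print(rounds_until_wakeup(awake=64, p=1/64))
```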
The wakeup problem in synchronous broadcast systems (Extended Abstract)
2000
"... This paper studies the differences between two levels of synchronization in a distributed broadcast system (or a multiple access channel). In the globally synchronous model, all processors have access to a global clock. In the locally synchronous model, processors have local clocks ticking at the s ..."
Abstract

Cited by 56 (7 self)
 Add to MetaCart
This paper studies the differences between two levels of synchronization in a distributed broadcast system (or a multiple access channel). In the globally synchronous model, all processors have access to a global clock. In the locally synchronous model, processors have local clocks ticking at the same rate, but each clock starts individually, when the processor wakes up. We consider the fundamental problem of waking up all n processors of a completely connected broadcast system. Some processors wake up spontaneously, while others have to be woken up. Only awake processors can...
Hundreds of Impossibility Results for Distributed Computing
Distributed Computing, 2003
"... We survey results from distributed computing that show tasks to be impossible, either outright or within given resource bounds, in various models. The parameters of the models considered include synchrony, faulttolerance, different communication media, and randomization. The resource bounds refe ..."
Abstract

Cited by 52 (5 self)
 Add to MetaCart
We survey results from distributed computing that show tasks to be impossible, either outright or within given resource bounds, in various models. The parameters of the models considered include synchrony, fault-tolerance, different communication media, and randomization. The resource bounds refer to time, space and message complexity. These results are useful in understanding the inherent difficulty of individual problems and in studying the power of different models of distributed computing.
A Better Wakeup in Radio Networks
2004
"... We present an improved algorithm to wake up a multihop adhoc radio network. The goal is to have all the nodes activated, when some of them may wake up spontaneously at arbitrary times and the remaining nodes need to be awoken by the already active ones. The best previously known wakeup algorithm ..."
Abstract

Cited by 29 (3 self)
 Add to MetaCart
We present an improved algorithm to wake up a multi-hop ad hoc radio network. The goal is to have all the nodes activated, when some of them may wake up spontaneously at arbitrary times and the remaining nodes need to be awoken by the already active ones. The best previously known wakeup algorithm was given by Chrobak, Gąsieniec and Kowalski [11], and operated in time O(n^{5/3} log n), where n is the number of nodes. We give an algorithm with the running time O(n^{3/2} log n). This also yields better algorithms for other synchronization-type primitives, like leader election and local-clock synchronization, each with a time performance that differs from that of wakeup by an extra factor of O(log n) only, and improves the best previously known method for the problem by a factor of n^{1/6}. A wakeup algorithm is a schedule of transmissions for each node. It can be represented as a collection of binary sequences. Useful properties of such collections have been abstracted to define a (radio) synchronizer. It has been known that good radio synchronizers exist and previous algorithms [17, 11] relied on this. We show how to construct such synchronizers in polynomial time, from suitable constructible expanders. As an application, we obtain a wakeup protocol for a multiple-access channel that activates the network in time O(k^2 polylog n), where k is the number of stations that wake up spontaneously, and which can be found in time polynomial in n. We extend the notion of synchronizers to universal synchronizers. We show that there exist universal synchronizers with parameters that guarantee time O(n^{3/2} log n) of wakeup.
Computing in Totally Anonymous Asynchronous Shared Memory Systems (Extended Abstract)
Information and Computation, 2002
"... In the totally anonymous shared memory model of asynchronous distributed computing, processes have no id's and run identical programs. Moreover, processes have identical interface to the shared memory, and in particular, there are no singlewriter registers. This paper assumes that processe ..."
Abstract

Cited by 28 (1 self)
 Add to MetaCart
In the totally anonymous shared memory model of asynchronous distributed computing, processes have no ids and run identical programs. Moreover, processes have an identical interface to the shared memory, and in particular, there are no single-writer registers. This paper assumes that processes do not fail, and the shared memory consists only of read/write registers, which are initialized to some default value. A complete characterization of the functions and relations that can be computed within this model is presented. The consensus problem is an important relation which can be computed. Unlike functions, which can be computed with two registers, the consensus protocol uses a linear number of shared registers and rounds. The paper proves logarithmic lower bounds on the number of registers and rounds needed for solving consensus in this model, indicating the d...
A Time Complexity Lower Bound for Randomized Implementations of Some Shared Objects
In Symposium on Principles of Distributed Computing, 1998
"... Many recent waitfree implementations are based on a sharedmemory that supports a pair of synchronization operations, known as LL and SC. In this paper, we establish an intrinsic performance limitation of these operations: even the simple wakeup problem [16], which requires some process to detect th ..."
Abstract

Cited by 24 (1 self)
 Add to MetaCart
Many recent wait-free implementations are based on a shared memory that supports a pair of synchronization operations, known as LL and SC. In this paper, we establish an intrinsic performance limitation of these operations: even the simple wakeup problem [16], which requires some process to detect that all n processes are up, cannot be solved unless some process performs Ω(log n) shared-memory operations. Using this basic result, we derive an Ω(log n) lower bound on the worst-case shared-access time complexity of n-process implementations of several types of objects, including fetch&increment, fetch&multiply, fetch&and, queue, and stack. (The worst-case shared-access time complexity of an implementation is the number of shared-memory operations that a process performs, in the worst case, in order to complete a single operation on the implementation.) Our lower bound is strong in several ways: it holds even if (1) shared memory has an infinite number of words, each of unbounded size, (2) sh...
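For readers unfamiliar with the LL/SC pair that this lower bound concerns, here is a toy sequential model of its semantics (hypothetical class and names; real LL/SC is a hardware primitive, and this sketch ignores true concurrency): LL reads a word and records a link, and SC writes only if no store has intervened since that process's last LL.

```python
class LLSCWord:
    """Toy model of a shared word with load-linked/store-conditional."""

    def __init__(self, value=0):
        self.value = value
        self.version = 0   # bumped on every successful store
        self.links = {}    # process id -> version observed at LL

    def ll(self, pid):
        """Load-linked: read the word and record a link for pid."""
        self.links[pid] = self.version
        return self.value

    def sc(self, pid, new_value):
        """Store-conditional: write only if nothing was stored since
        pid's last LL; return True on success, False on failure."""
        if self.links.get(pid) != self.version:
            return False
        self.value = new_value
        self.version += 1
        return True

# fetch&increment built on LL/SC: retry the LL/SC pair until SC succeeds.
def fetch_and_increment(word, pid):
    while True:
        old = word.ll(pid)
        if word.sc(pid, old + 1):
            return old

w = LLSCWord()
print(fetch_and_increment(w, pid="p0"))  # prints 0
print(w.value)                           # prints 1
```

The lower bound cited above says that no matter how such retry loops are organized, some process must perform Ω(log n) shared-memory operations for objects of this kind.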
Computing with Faulty Shared Objects
1995
"... This paper investigates the effects of the failure of shared objects on distributed systems. First the notion of a faulty shared object is introduced. Then upper and lower bounds on the space complexity of implementing reliable shared objects are provided. ..."
Abstract

Cited by 19 (0 self)
 Add to MetaCart
This paper investigates the effects of the failure of shared objects on distributed systems. First the notion of a faulty shared object is introduced. Then upper and lower bounds on the space complexity of implementing reliable shared objects are provided.
Time and Space Lower Bounds for Implementations Using CAS
In DISC, 2005
"... Abstract. This paper presents lower bounds on the time and spacecomplexity of implementations that use the k compareandswap (kCAS) synchronization primitives. We prove that the use of kCAS primitives cannot improve neither the time nor the spacecomplexity of implementations of widelyused con ..."
Abstract

Cited by 11 (3 self)
 Add to MetaCart
This paper presents lower bounds on the time and space complexity of implementations that use the k-compare-and-swap (k-CAS) synchronization primitives. We prove that the use of k-CAS primitives can improve neither the time nor the space complexity of implementations of widely used concurrent objects, such as counter, stack, queue, and collect. Surprisingly, the use of k-CAS may even increase the space complexity required by such implementations. We prove that the worst-case average number of steps performed by processes for any n-process implementation of a counter, stack or queue object is Ω(log_{k+1} n), even if the implementation can use j-CAS for j ≤ k. This bound holds even if a k-CAS operation is allowed to read the k values of the objects it accesses and return these values to the calling process. This bound is tight. We also consider more realistic non-reading k-CAS primitives. An operation of a non-reading k-CAS primitive is only allowed to return a success/failure indication. For implementations of the collect object that use such primitives, we prove that the worst-case average number of steps performed by processes is Ω(log₂ n), regardless of the value of k. This implies a round complexity lower bound of Ω(log₂ n) for such implementations. As there is an O(log₂ n) round complexity implementation of collect that uses only reads and writes, these results establish that non-reading k-CAS is no stronger than read and write for collect implementation round complexity. We also prove that k-CAS does not improve the space complexity of implementing many objects (including counter, stack, queue, and single-writer snapshot). An implementation has to use at least n base objects even if k-CAS is allowed, and if all operations (other than read) swap exactly k base objects, then the space complexity must be at least k · n.
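The non-reading k-CAS variant discussed above can be sketched in a few lines (a single-threaded illustration with hypothetical names, not a real concurrent primitive): compare k memory locations against expected values and, only if all match, write k new values, returning just a success/failure indication.

```python
def k_cas(memory, indices, expected, new):
    """Toy non-reading k-CAS on the list `memory`: atomically (in this
    single-threaded model) compare the k locations `indices` against
    `expected` and, if all match, write the values in `new`.
    Returns only True/False, never the values read, as the
    non-reading variant in the abstract does."""
    if all(memory[i] == e for i, e in zip(indices, expected)):
        for i, v in zip(indices, new):
            memory[i] = v
        return True
    return False

mem = [0, 0, 0]
print(k_cas(mem, [0, 2], [0, 0], [5, 7]))  # True; mem becomes [5, 0, 7]
print(k_cas(mem, [0, 1], [0, 0], [9, 9]))  # False; mem[0] is 5, not 0
```

The paper's result is that even this multi-word primitive cannot beat the Ω(log_{k+1} n) and Ω(log₂ n) bounds stated above.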
Optimal Scheduling for Disconnected Cooperation
2001
"... We consider a distributed environment consisting of n processors that need to perform t tasks. We assume that communication is initially unavailable and that processors begin work in isolation. At some unknown point of time an unknown collection of processors may establish communication. Before proc ..."
Abstract

Cited by 9 (3 self)
 Add to MetaCart
We consider a distributed environment consisting of n processors that need to perform t tasks. We assume that communication is initially unavailable and that processors begin work in isolation. At some unknown point of time an unknown collection of processors may establish communication. Before processors begin communication they execute tasks in the order given by their schedules. Our goal is to schedule the work of isolated processors so that when communication is established for the first time, the number of redundantly executed tasks is controlled. We quantify worst-case redundancy as a function of processor advancements through their schedules. In this work we refine and simplify an extant deterministic construction for schedules with n ≤ t, and we develop a new analysis of its waste. The new analysis shows that for any pair of schedules, the number of redundant tasks can be controlled for the entire range of t tasks. Our new result is asymptotically optimal: the tails of these schedules are within a 1 + O(n^{-1/4}) factor of the lower bound. We also present two new deterministic constructions, one for t ≥ n and the other for t ≥ n^{3/2}, which substantially improve pairwise waste for all prefixes of length t/√n, and offer near optimal waste for the tails of the schedules. Finally, we present bounds for the waste of any collection of k ≥ 2 processors for both deterministic and randomized constructions.
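The "waste" notion above can be made concrete for a pair of processors: given two schedules (orderings of the t tasks), the waste when the processors have completed a and b tasks respectively is the number of tasks both have already executed. A small sketch (hypothetical helper, not the paper's construction):

```python
def pairwise_waste(sched1, sched2, a, b):
    """Number of redundantly executed tasks when the first processor has
    run the first a tasks of sched1, in isolation, and the second
    processor the first b tasks of sched2."""
    return len(set(sched1[:a]) & set(sched2[:b]))

# Two toy schedules over t = 4 tasks; the reversed schedule keeps the
# waste of short prefixes low, the idea the constructions optimize.
s1 = [0, 1, 2, 3]
s2 = [3, 2, 1, 0]
print(pairwise_waste(s1, s2, 2, 2))  # 0: prefixes {0,1} and {3,2} are disjoint
print(pairwise_waste(s1, s2, 3, 3))  # 2: both prefixes contain tasks 1 and 2
```

The paper's schedules are designed so that this quantity stays near its lower bound for every pair of prefix lengths, not just for one toy pair as here.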
Contention-free Complexity of Shared Memory Algorithms
Information and Computation, 1994
"... Worstcase time complexity is a measure of the maximumtime needed to solve a problem over all runs. Contentionfree time complexity indicates the maximum time needed when a process executes by itself, without competition from other processes. Since contention is rare in welldesigned systems, it is ..."
Abstract

Cited by 8 (2 self)
 Add to MetaCart
Worst-case time complexity is a measure of the maximum time needed to solve a problem over all runs. Contention-free time complexity indicates the maximum time needed when a process executes by itself, without competition from other processes. Since contention is rare in well-designed systems, it is important to design algorithms which perform well in the absence of contention. We study the contention-free time complexity of shared memory algorithms using two measures: step complexity, which counts the number of accesses to shared registers; and register complexity, which measures the number of different registers accessed. Depending on the system architecture, one of the two measures more accurately reflects the elapsed time. We provide lower and upper bounds for the contention-free step and register complexity of solving the mutual exclusion problem as a function of the number of processes and the size of the largest register that can be accessed in one atomic step. We also present bo...