Results 1–10 of 22
Adding Networks
, 2001
Abstract

Cited by 117 (33 self)
An adding network is a distributed data structure that supports a concurrent, lock-free, low-contention implementation of a fetch&add counter; a counting network is an instance of an adding network that supports only fetch&increment. We present a lower bound showing that adding networks have inherently high latency. Any adding network powerful enough to support addition by at least two values a and b, where a > b > 0, has sequential executions in which each token traverses Ω(n/c) switching elements, where n is the number of concurrent processes and c is a quantity we call one-shot contention; for a large class of switching networks and for conventional counting networks the one-shot contention is constant. By contrast, counting networks have O(log n) latency [4,7]. This bound is tight. We present the first concurrent, lock-free, low-contention networked data structure that supports arbitrary fetch&add operations.
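For readers unfamiliar with the operations being contrasted here, the following sketch shows the fetch&add interface and its fetch&increment restriction. All names are hypothetical, and the lock merely stands in for hardware atomicity; it is not how the paper's lock-free, network-based construction works.

```python
import threading

class FetchAddCounter:
    """Interface sketch of a fetch&add counter. A real adding network
    distributes this state across switching elements; here a single
    lock-protected integer illustrates only the semantics."""

    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def fetch_and_add(self, delta):
        # Atomically return the old value and add delta.
        with self._lock:
            old = self._value
            self._value += delta
            return old

    def fetch_and_increment(self):
        # The restricted operation that a counting network supports.
        return self.fetch_and_add(1)
```

The lower bound above says that supporting two distinct deltas (a > b > 0) is what forces the high latency; supporting only `fetch_and_increment` avoids it.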
Diffracting trees
 In Proceedings of the 5th Annual ACM Symposium on Parallel Algorithms and Architectures. ACM
, 1994
Abstract

Cited by 63 (13 self)
Shared counters are among the most basic coordination structures in multiprocessor computation, with applications ranging from barrier synchronization to concurrent-data-structure design. This article introduces diffracting trees, novel data structures for shared counting and load balancing in a distributed/parallel environment. Empirical evidence, collected on a simulated distributed shared-memory machine and several simulated message-passing architectures, shows that diffracting trees scale better and are more robust than both combining trees and counting networks, currently the most effective known methods for implementing concurrent counters in software. The use of a randomized coordination method together with a combinatorial data structure overcomes the resiliency drawbacks of combining trees. Our simulations show that to handle the same load, diffracting trees and counting networks should have a similar width w, yet the depth of a diffracting tree is O(log w), whereas counting networks have depth O(log^2 w). Diffracting trees have already been used to implement highly efficient producer/consumer queues, and we believe diffraction will prove to be an effective alternative paradigm to combining and queue-locking in the design of many concurrent data structures.
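A minimal sequential sketch of the tree-of-toggles counting scheme that diffracting trees accelerate: each internal node is a one-bit balancer that alternates tokens between its children, and leaf i of a width-w tree hands out i, i+w, i+2w, … The diffraction ("prism") layers that give the real structure its low contention are omitted, and all names are hypothetical.

```python
class ToggleNode:
    # A balancer bit: successive tokens alternate between the children.
    def __init__(self, left, right):
        self.toggle = 0
        self.left, self.right = left, right

    def traverse(self):
        child = self.left if self.toggle == 0 else self.right
        self.toggle ^= 1
        if isinstance(child, ToggleNode):
            return child.traverse()
        return child.next_value()

class LeafCounter:
    # Leaf i of a width-w tree hands out i, i+w, i+2w, ...
    def __init__(self, first, width):
        self.value, self.width = first, width

    def next_value(self):
        v = self.value
        self.value += self.width
        return v

# Width-4 counting tree: the left subtree owns counters 0 and 2,
# the right subtree owns 1 and 3 (stride 4), so sequential tokens
# receive consecutive values.
root = ToggleNode(
    ToggleNode(LeafCounter(0, 4), LeafCounter(2, 4)),
    ToggleNode(LeafCounter(1, 4), LeafCounter(3, 4)),
)
print([root.traverse() for _ in range(8)])  # → [0, 1, 2, 3, 4, 5, 6, 7]
```

In a diffracting tree, pairs of tokens arriving together at a node "diffract" to opposite children via a prism array without touching the toggle, which is what removes the root as a sequential bottleneck.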
Elimination Trees and the Construction of Pools and Stacks
, 1996
Abstract

Cited by 45 (13 self)
Shared pools and stacks are two coordination structures with a history of applications ranging from simple producer/consumer buffers to job schedulers and procedure stacks. This paper introduces elimination trees, a novel form of diffracting trees that offer pool and stack implementations with superior response (on average constant) under high loads, while guaranteeing logarithmic-time "deterministic" termination under sparse request patterns. A preliminary version of this paper appeared in the proceedings of the 7th Annual Symposium on Parallel Algorithms and Architectures (SPAA).
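The core idea of elimination is that a push and a pop which meet can exchange values and cancel out without ever reaching a central structure. The sketch below is a deliberately sequential, hypothetical simplification: one pending-push slot stands in for the tree nodes where tokens pair up, and unmatched operations fall through to an ordinary stack.

```python
def run_with_elimination(ops):
    """Process a sequence of ('push', v) / ('pop', None) operations.
    A pop that finds an unmatched pending push eliminates with it
    (takes its value directly); everything else falls through to the
    central stack. Returns (pop results, final stack contents)."""
    stack, results, pending = [], [], None
    for op, v in ops:
        if op == 'push':
            if pending is not None:          # previous push was never paired:
                stack.append(pending)        # flush it to the central stack
            pending = v
        else:  # pop
            if pending is not None:          # eliminated pair: no stack access
                results.append(pending)
                pending = None
            elif stack:
                results.append(stack.pop())
            else:
                results.append(None)         # empty pool/stack
    if pending is not None:
        stack.append(pending)
    return results, stack
```

Note that elimination preserves LIFO order in this sequential run: a pop always takes the most recent unmatched push, exactly as a plain stack would.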
A Combinatorial Treatment of Balancing Networks
, 1999
Abstract

Cited by 25 (11 self)
Balancing networks, originally introduced by Aspnes et al. (Proc. of the 23rd Annual ACM Symposium on Theory of Computing, pp. 348–358, May 1991), represent a new class of distributed, low-contention data structures suitable for solving many fundamental multiprocessor coordination problems that can be expressed as balancing problems. In this work, we present a mathematical study of the combinatorial structure of balancing networks, and a variety of its applications. Our study identifies important combinatorial transfer parameters of balancing networks. In turn, necessary and sufficient combinatorial conditions are established, expressed in terms of transfer parameters, which precisely characterize many important and well-studied classes of balancing networks such as counting networks and smoothing networks. We propose these combinatorial conditions to be "balancing analogs" of the well-known Zero-One principle holding for sorting networks.
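The defining condition for a counting network is the step property: in any quiescent state, the per-wire output counts y_0 ≥ y_1 ≥ … differ by at most one. The sketch below simulates a balancing network sequentially and checks that property on the width-4 bitonic layout (the same wiring as Batcher's 4-input bitonic sorter); the layout and simulation details are a simplification for illustration.

```python
def simulate(layout, width, entries):
    """Push tokens one at a time through a balancing network given as
    layers of (i, j) balancer wire-pairs; each balancer alternates
    tokens between its two wires. Returns per-wire output counts
    (sequential, quiescent execution only)."""
    toggles = {(d, b): 0 for d, layer in enumerate(layout) for b in layer}
    counts = [0] * width
    for wire in entries:
        for d, layer in enumerate(layout):
            for (i, j) in layer:
                if wire in (i, j):
                    wire = (i, j)[toggles[(d, (i, j))]]
                    toggles[(d, (i, j))] ^= 1
                    break
        counts[wire] += 1
    return counts

def has_step_property(counts):
    # 0 <= y_i - y_j <= 1 for all i < j.
    return all(0 <= counts[i] - counts[j] <= 1
               for i in range(len(counts)) for j in range(i + 1, len(counts)))

# Width-4 bitonic counting network: three layers of 2-balancers.
BITONIC4 = [[(0, 1), (2, 3)], [(0, 2), (1, 3)], [(0, 1), (2, 3)]]

print(simulate(BITONIC4, 4, [0] * 7))  # → [2, 2, 2, 1], a step sequence
```

The transfer-parameter conditions in the paper characterize exactly which such layouts yield the step property on every execution, not just the sequential ones tested here.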
Concurrent Data Structures
, 2001
Abstract

Cited by 11 (0 self)
The proliferation of commercial shared-memory multiprocessor machines has brought about significant changes in the art of concurrent programming. Given current trends towards low-cost chip multithreading (CMT), such machines are bound to become ever more widespread. Shared-memory multiprocessors are systems that concurrently execute multiple threads of computation which communicate and synchronize through data structures in shared memory. The efficiency of these data structures is crucial to performance, yet designing effective data structures for multiprocessor machines is an art currently mastered by few. By most accounts, concurrent data structures are far more difficult to design than sequential ones because threads executing concurrently may interleave their steps in many ways, each with a different and potentially unexpected outcome. This requires designers to modify the way they think about computation, to understand new design methodologies, and to adopt a new collection of programming tools. Furthermore, new challenges arise in designing scalable concurrent data structures that continue to perform well as machines that execute more and more concurrent threads become available. This chapter provides an overview of the challenges involved in designing concurrent data structures, and a summary of relevant work.
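The "potentially unexpected outcome" of interleaved steps can be made concrete with the classic lost-update hazard. The sketch below simulates two logical threads deterministically with generators (no real threads), so the bad schedule is reproducible: both read the counter before either writes back, and one increment is lost.

```python
def increment_steps(shared):
    """One logical thread's read-modify-write, split at the point
    where another thread may interleave."""
    local = shared['n']        # step 1: read the shared counter
    yield                      # ...another thread may run here...
    shared['n'] = local + 1    # step 2: write back

shared = {'n': 0}
a, b = increment_steps(shared), increment_steps(shared)
next(a)                        # thread A reads 0
next(b)                        # thread B also reads 0
for t in (a, b):               # both write back 0 + 1
    try:
        next(t)
    except StopIteration:
        pass
print(shared['n'])             # → 1, not 2: one increment was lost
```

Avoiding this without serializing all threads through one lock is precisely the design problem the chapter surveys.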
Supporting Increment and Decrement Operations in Balancing Networks
 Proceedings of the 16th International Symposium on Theoretical Aspects of Computer Science
, 1998
Abstract

Cited by 11 (8 self)
Counting networks are a class of distributed data structures that support highly concurrent implementations of shared Fetch&Increment counters. Applications of these counters include shared pools and stacks, load balancing, and software barriers [4, 16, 18, 23]. A limitation of counting networks is that the resulting shared counters can be incremented, but not decremented.
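One way to lift this limitation, sketched here in a hypothetical width-2 simplification rather than the paper's actual construction, is to let a decrement traverse as an "antitoken": where a token reads the toggle and then advances it, an antitoken retreats the toggle first and takes the wire it then points to, undoing the most recent token's effect.

```python
class DecrementableCounter:
    """Width-2 sketch of the token/antitoken idea. Leaf i hands out
    the values i, i+2, i+4, ... A token (increment) reads then
    advances the toggle; an antitoken (decrement) retreats the toggle
    and gives back the last value handed out on that wire."""

    def __init__(self):
        self.toggle = 0
        self.leaves = [0, 1]          # next value on each output wire

    def fetch_and_increment(self):
        wire = self.toggle
        self.toggle ^= 1              # advance
        v = self.leaves[wire]
        self.leaves[wire] += 2
        return v

    def fetch_and_decrement(self):
        self.toggle ^= 1              # retreat (mod 2, same as advance)
        wire = self.toggle
        self.leaves[wire] -= 2        # give back the last value on that wire
        return self.leaves[wire]

c = DecrementableCounter()
print(c.fetch_and_increment())  # → 0
print(c.fetch_and_increment())  # → 1
print(c.fetch_and_decrement())  # → 1  (cancels the previous increment)
print(c.fetch_and_increment())  # → 1  (the value is handed out again)
```

The key invariant is that a token immediately followed by an antitoken leaves every balancer's state unchanged, so the pair is invisible to all other operations.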
Counting networks are practically linearizable
 In Proceedings of the Fifteenth Annual ACM Symposium on Principles of Distributed Computing
, 1996
Abstract

Cited by 10 (3 self)
Counting networks are a class of concurrent structures that allow the design of highly scalable concurrent data structures in a way that eliminates sequential bottlenecks and contention. Linearizable counting networks assure that the order of the values returned by the network reflects the real-time order in which they were requested. We argue that in many concurrent systems the worst-case scenarios that violate linearizability require a form of timing anomaly that is uncommon in practice. The linear-time cost of designing networks that achieve linearizability under all circumstances may thus prove an unnecessary burden on applications that are willing to trade off occasional non-linearizability for speed and parallelism. This paper presents a very simple measure that is local to the individual links and nodes of the network, and that quantifies the extent to which a network can suffer from timing anomalies and still remain linearizable. Perhaps counterintuitively, this measure is independent of network depth. We use our measure to mathematically support our experimental results: that in a variety of normal situations tested on a simulated shared-memory multiprocessor, the bitonic counting networks of Aspnes, Herlihy, and Shavit are "for all practical purposes" linearizable.
Sequentially Consistent versus Linearizable Counting Networks
 Proceedings of the 18th Annual ACM Symposium on Principles of Distributed Computing
, 1999
Abstract

Cited by 6 (1 self)
We compare the impact of timing conditions on implementing sequentially consistent and linearizable counters using (uniform) counting networks in distributed systems. For counting problems in application domains which do not require linearizability but will run correctly if only sequential consistency is provided, the results of our investigation, and their potential payoffs, are threefold:
• First, we show that sequential consistency and linearizability cannot be distinguished by the timing conditions previously considered in the context of counting networks; thus, in contexts where these constraints apply, it is possible to rely on the stronger semantics of linearizability, which simplifies proofs and enhances compositionality.
• Second, we identify local timing conditions that support sequential consistency but not linearizability; thus, we suggest weaker, easily implementable timing conditions that are likely to be sufficient in many applications.
• Third, we show that any kind of synchronization that is too weak to support even sequential consistency may violate it significantly for some counting networks; hence ...
Impossibility Results for Weak Threshold Networks
, 1997
Abstract

Cited by 5 (5 self)
It is shown that a weak threshold network (in particular, a threshold network) of width w and depth d cannot be constructed from balancers of width p_0, p_1, ..., p_{m-1}, if w does not divide P^d, where P is the least common multiple of p_0, p_1, ..., p_{m-1}. This holds regardless of the size of the network, as long as it is finite, and it implies a lower bound of log_P w on its depth. More strongly, a lower bound of log_{p_max} w is shown on the length of every path from an input wire to any output wire that exhibits the threshold property, where p_max is the maximum among p_0, p_1, ..., p_{m-1}. Keywords: Distributed computing, parallel processing, impossibility results.
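The divisibility condition above is easy to check numerically. The helper below (hypothetical name, illustrating only the stated necessary condition, not a construction) searches for the smallest depth d with w dividing P^d; when no such d exists, no finite network of that width can be built from those balancers.

```python
from math import lcm

def threshold_network_possible(w, balancer_widths, max_depth=64):
    """Necessary condition from the theorem: a (weak) threshold network
    of width w built from balancers of the given widths needs a depth d
    with w dividing P**d, where P = lcm of the balancer widths. Returns
    the smallest such d, or None if none exists up to max_depth (for
    fixed prime factors, none will ever exist beyond it either)."""
    P = lcm(*balancer_widths)
    for d in range(1, max_depth + 1):
        if P ** d % w == 0:
            return d
    return None

print(threshold_network_possible(8, [2]))     # → 3: depth at least log_2 8
print(threshold_network_possible(6, [2]))     # → None: width 6 from 2-balancers is impossible
print(threshold_network_possible(6, [2, 3]))  # → 1: lcm(2, 3) = 6 already divides
```

For example, width 6 is unreachable from 2-balancers alone because 6 has the prime factor 3, which no power of 2 can supply; adding a 3-balancer makes it possible in principle.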
The Impact of Timing on Linearizability in Counting Networks
 IN PROCEEDINGS OF THE 11TH INTERNATIONAL PARALLEL PROCESSING SYMPOSIUM (IPPS'97
, 1997
Abstract

Cited by 5 (1 self)
Counting networks form a new class of distributed, low-contention data structures, made up of balancers and wires, which are suitable for solving a variety of multiprocessor synchronization problems that can be expressed as counting problems. A linearizable counting network guarantees that the order of the values it returns respects the real-time order in which they were requested. Linearizability significantly raises the capabilities of the network, but at a possible price in network size or synchronization support [13, 18]. In this work, we further pursue the systematic study of the impact of timing assumptions on linearizability for counting networks, along the line of research recently initiated by Lynch et al. in [18]. We consider two basic timing models, the instantaneous balancer model, in which the transition of a token from an input to an output port of a balancer is modeled as an instantaneous event, and the periodic balancer model, where balancers send out tokens at a fixed rate. We also consider lower and upper bounds on the delays incurred by wires connecting the balancers. We present necessary and sufficient conditions for linearizability in the form of precise inequalities that not only involve timing parameters, but also identify structural parameters of the counting network, which may be of more general interest. Our results extend and strengthen previous impossibility and possibility results on linearizability in counting networks [13, 18].