Results 1 - 10 of 146
A Distributed CSMA Algorithm for Throughput and Utility Maximization in Wireless Networks
"... In multi-hop wireless networks, designing distributed scheduling algorithms to achieve the maximal throughput is a challenging problem because of the complex interference constraints among different links. Traditional maximal-weight (MW) scheduling, although throughput-optimal, is difficult to imple ..."
Abstract
-
Cited by 181 (8 self)
- Add to MetaCart
In multi-hop wireless networks, designing distributed scheduling algorithms to achieve the maximal throughput is a challenging problem because of the complex interference constraints among different links. Traditional maximal-weight (MW) scheduling, although throughput-optimal, is difficult to implement in distributed networks, whereas a distributed greedy protocol similar to IEEE 802.11 does not guarantee the maximal throughput. In this paper, we introduce an adaptive CSMA scheduling algorithm that can achieve the maximal throughput distributedly under some assumptions. Major advantages of the algorithm include: (1) it applies to a very general interference model; (2) it is simple, distributed and asynchronous. Furthermore, we combine the algorithm with end-to-end flow control to achieve the optimal utility and fairness of competing flows. The effectiveness of the algorithm is verified by simulations. Finally, we consider some implementation issues in the setting of 802.11 networks.
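The adaptive CSMA idea lends itself to a compact simulation. The sketch below is only a rough illustration of a CSMA scheduler whose per-link aggressiveness is adapted from locally observed arrivals and services; the function name, the update step, and the single-update-per-slot simplification are assumptions for illustration, not the authors' exact algorithm.

```python
# Illustrative sketch of an adaptive-CSMA-style scheduler (not the authors'
# exact algorithm): each link keeps an "aggressiveness" parameter r[i] that it
# adapts from locally observed arrivals and services, while links toggle on/off
# via a Glauber-dynamics-like rule subject to conflict (interference) constraints.
import math
import random

def adaptive_csma(conflicts, arrival_rates, slots=100000, step=0.01):
    """conflicts[i]: set of links interfering with link i (assumed symmetric)."""
    n = len(conflicts)
    r = [0.0] * n          # per-link aggressiveness (log of CSMA intensity)
    active = [False] * n   # current schedule: an independent set of the conflict graph
    queue = [0.0] * n
    for _ in range(slots):
        i = random.randrange(n)                    # pick one link to update this slot
        if any(active[j] for j in conflicts[i]):   # blocked by an interfering neighbor
            active[i] = False
        else:
            p_on = math.exp(r[i]) / (1.0 + math.exp(r[i]))
            active[i] = random.random() < p_on
        for k in range(n):                         # arrivals and services this slot
            arrived = 1.0 if random.random() < arrival_rates[k] else 0.0
            served = 1.0 if active[k] and queue[k] > 0 else 0.0
            queue[k] += arrived - served
            r[k] += step * (arrived - served)      # adapt aggressiveness from local observations
    return r, queue
```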
Understanding the capacity region of the greedy maximal scheduling algorithm in multi-hop wireless networks
- Proc. of IEEE INFOCOM, 2008
"... Abstract—In this paper, we characterize the performance of an important class of scheduling schemes, called Greedy Maximal Scheduling (GMS), for multi-hop wireless networks. While a lower bound on the throughput performance of GMS is relatively well-known in the simple node-exclusive interference mo ..."
Abstract
-
Cited by 125 (9 self)
- Add to MetaCart
(Show Context)
Abstract—In this paper, we characterize the performance of an important class of scheduling schemes, called Greedy Maximal Scheduling (GMS), for multi-hop wireless networks. While a lower bound on the throughput performance of GMS is relatively well known in the simple node-exclusive interference model, it has not been thoroughly explored in the more general K-hop interference model. Moreover, empirical observations suggest that the known bounds are quite loose, and that the performance of GMS is often close to optimal. In this paper, we provide a number of new analytic results characterizing the performance limits of GMS. We first provide an equivalent characterization of the efficiency ratio of GMS through a topological property called the local-pooling factor of the network graph. We then develop an iterative procedure to estimate the local-pooling factor under a large class of network topologies and interference models. We use these results to study the worst-case efficiency ratio of GMS on two classes of network topologies. First, we apply these results to tree networks to prove that GMS achieves the full capacity region under the K-hop interference model. Second, we show that the worst-case efficiency ratio of GMS in geometric network graphs is at least 1/6.
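For concreteness, here is a minimal sketch of the GMS rule itself, assuming queue lengths and per-link conflict sets (which encode the interference model, e.g. the K-hop model) are given; the function name and data layout are illustrative.

```python
# Minimal sketch of Greedy Maximal Scheduling (GMS): repeatedly pick the
# heaviest remaining backlogged link (by queue length) and discard every link
# that conflicts with it, until no candidate links remain.
def greedy_maximal_schedule(queues, conflicts):
    """queues: dict link -> queue length; conflicts: dict link -> set of conflicting links."""
    schedule = set()
    remaining = {l for l, q in queues.items() if q > 0}   # only backlogged links compete
    while remaining:
        heaviest = max(remaining, key=lambda l: queues[l])
        schedule.add(heaviest)
        remaining -= conflicts[heaviest] | {heaviest}     # drop the winner and its conflicts
    return schedule
```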
Distributed link scheduling with constant overhead
- In Proceedings of ACM Sigmetrics, 2007
"... This paper proposes a new class of simple, distributed algorithms for scheduling in wireless networks. The algorithms generate new schedules in a distributed manner via simple local changes to existing schedules. The class is parameterized by integers k ≥ 1. We show that algorithm k of our class ach ..."
Abstract
-
Cited by 102 (3 self)
- Add to MetaCart
(Show Context)
This paper proposes a new class of simple, distributed algorithms for scheduling in wireless networks. The algorithms generate new schedules in a distributed manner via simple local changes to existing schedules. The class is parameterized by integers k ≥ 1. We show that algorithm k of our class achieves k/(k+2) of the capacity region, for every k ≥ 1. The algorithms have small and constant worst-case overheads: in particular, algorithm k generates a new schedule using (a) time less than 4k + 2 round-trip times between neighboring nodes in the network, and (b) at most three control transmissions by any given node, for any k. The control signals are explicitly specified, and face the same interference effects as normal data transmissions. Our class of distributed wireless scheduling algorithms is the first one guaranteed to achieve any fixed fraction of the capacity region while using small and constant overheads that do not scale with network size. The parameter k explicitly captures the trade-off between control overhead and scheduler throughput performance and provides a tuning knob protocol designers can use to harness this trade-off in practice.
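The stated guarantees make the overhead/throughput trade-off easy to tabulate. The snippet below simply evaluates the k/(k+2) fraction and the 4k+2 round-trip bound quoted in the abstract for a few values of k; it is a numerical illustration only, not part of the algorithm.

```python
# Illustrating the trade-off knob k: guaranteed fraction of the capacity region
# versus the bound on neighbor round-trip times needed per new schedule.
for k in (1, 2, 5, 10, 50):
    print(f"k={k:3d}: guaranteed fraction = {k/(k+2):.3f}, round-trip bound < {4*k+2}")
```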
Network Adiabatic Theorem: An Efficient Randomized Protocol for Contention Resolution
"... The popularity of Aloha(-like) algorithms for resolution of contention between multiple entities accessing common resources is due to their extreme simplicity and distributed nature. Example applications of such algorithms include Ethernet and recently emerging wireless multi-access networks. Despit ..."
Abstract
-
Cited by 88 (10 self)
- Add to MetaCart
(Show Context)
The popularity of Aloha(-like) algorithms for resolution of contention between multiple entities accessing common resources is due to their extreme simplicity and distributed nature. Example applications of such algorithms include Ethernet and recently emerging wireless multi-access networks. Despite a long and exciting history of more than four decades, the question of designing an algorithm that is essentially as simple and distributed as Aloha while being efficient has remained unresolved. In this paper, we resolve this question successfully for a network of queues where contention is modeled through independent-set constraints over the network graph. The work by Tassiulas and Ephremides (1992) suggests that an algorithm that schedules queues so that the summation of the “weight” of scheduled queues is maximized, subject to constraints, is efficient. However, implementing such an algorithm using an Aloha-like mechanism has remained a mystery. We design such an algorithm building upon a Metropolis-Hastings sampling mechanism along with selection of the “weight” as an appropriate function of the queue size. The key ingredient in establishing the efficiency of the algorithm is a novel adiabatic-like theorem for the underlying queueing network, which may be of general interest in the context of dynamical systems.
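A minimal sketch of the kind of Metropolis-Hastings (Glauber-style) update alluded to here, assuming a conflict graph and a weight that grows with queue size; the specific choice log(1+q), the single-node update, and the function names are illustrative assumptions, not the paper's exact construction.

```python
# Sketch: sample independent sets of the conflict graph so that, in steady
# state, a configuration's probability is proportional to exp(sum of weights),
# with each weight a slowly growing function of the corresponding queue size.
import math
import random

def mh_step(active, conflicts, queues, weight=lambda q: math.log(1.0 + q)):
    """One randomized update; active/conflicts/queues are dicts keyed by node."""
    i = random.choice(list(queues))
    if any(active[j] for j in conflicts[i]):
        active[i] = False                              # blocked: cannot transmit this step
    else:
        w = math.exp(weight(queues[i]))
        active[i] = random.random() < w / (1.0 + w)    # on with probability w / (1 + w)
    return active
```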
Enabling Distributed Throughput Maximization in Wireless Mesh Networks -- A Partitioning Approach
2006
"... This paper considers the interaction between channel assignment and distributed scheduling in multi-channel multiradio Wireless Mesh Networks (WMNs). Recently, a number of distributed scheduling algorithms for wireless networks have emerged. Due to their distributed operation, these algorithms can a ..."
Abstract
-
Cited by 85 (4 self)
- Add to MetaCart
This paper considers the interaction between channel assignment and distributed scheduling in multi-channel multi-radio Wireless Mesh Networks (WMNs). Recently, a number of distributed scheduling algorithms for wireless networks have emerged. Due to their distributed operation, these algorithms can achieve only a fraction of the maximum possible throughput. As an alternative to increasing the throughput fraction by designing new algorithms, in this paper we present a novel approach that takes advantage of the inherent multi-radio capability of WMNs. We show that this capability can enable partitioning of the network into subnetworks in which simple distributed scheduling algorithms can achieve 100% throughput. The partitioning is based on the recently introduced notion of Local Pooling. Using this notion, we characterize topologies in which 100% throughput can be achieved distributedly. These topologies are used to develop a number of channel assignment algorithms that are based on a matroid intersection algorithm. These algorithms partition a network in a manner that not only expands the capacity regions of the subnetworks but also allows distributed algorithms to achieve these capacity regions. Finally, we evaluate the performance of the algorithms via simulation and show that they significantly increase the distributedly achievable capacity region.
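The channel-assignment algorithms themselves rest on matroid intersection and Local Pooling arguments that do not fit in a short snippet, but the partitioning intuition can be illustrated naively: assign links to channels so that each channel's subgraph stays a forest, since tree topologies are among those where simple distributed schedulers are known to achieve full throughput. The greedy union-find sketch below is an assumption-laden toy, not the paper's algorithm.

```python
# Naive illustration of the partitioning idea: place each link on a channel
# whose current edge set stays acyclic, so every channel induces a forest.
def greedy_forest_partition(edges, num_channels):
    parent = [{} for _ in range(num_channels)]      # one union-find per channel

    def find(p, x):
        p.setdefault(x, x)
        while p[x] != x:
            p[x] = p[p[x]]                          # path halving
            x = p[x]
        return x

    assignment = {}
    for (u, v) in edges:
        for c in range(num_channels):
            ru, rv = find(parent[c], u), find(parent[c], v)
            if ru != rv:                            # adding (u, v) keeps channel c acyclic
                parent[c][ru] = rv
                assignment[(u, v)] = c
                break
        # links that would close a cycle on every channel stay unassigned in this toy
    return assignment
```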
Computing Separable Functions via Gossip
2006
"... Motivated by applications to sensor, peer-to-peer, and adhoc networks, we study the problem of computing functions of values at the nodes in a network in a totally distributed manner. In particular, we consider separable functions, which can be written as linear combinations of functions of individu ..."
Abstract
-
Cited by 75 (6 self)
- Add to MetaCart
(Show Context)
Motivated by applications to sensor, peer-to-peer, and ad hoc networks, we study the problem of computing functions of values at the nodes in a network in a totally distributed manner. In particular, we consider separable functions, which can be written as linear combinations of functions of individual variables. Known iterative algorithms for averaging can be used to compute the normalized values of such functions, but these algorithms do not extend in general to the computation of the actual values of separable functions. The main contribution of this paper is the design of a distributed randomized algorithm for computing separable functions based on properties of exponential random variables. We bound the running time of our algorithm in terms of the running time of an information spreading algorithm used as a subroutine. Since we are interested in totally distributed algorithms, we consider a randomized gossip mechanism for information spreading as the subroutine. Combining these algorithms yields a complete and simple distributed algorithm for computing separable functions. The second contribution of this paper is an analysis of the information spreading time of the gossip algorithm. This analysis yields an upper bound on the information spreading time, and therefore a corresponding upper bound on the running time of the algorithm for computing separable functions, in terms of the conductance of an appropriate stochastic matrix. These bounds imply that, for a class of graphs with small spectral gap (such as grid graphs), the time used by our algorithm to compute averages is of a smaller order than the time required for the computation of averages by a known iterative gossip scheme [5].
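A hedged sketch of the exponential-random-variable estimator described above, with the gossip-based information spreading replaced by a direct minimum for brevity; the function name and the sample count W=1000 are illustrative. Each node i draws W samples from an exponential distribution with rate f_i(x_i); the coordinate-wise minimum over nodes is exponential with rate equal to the sum of the rates, so W divided by the sum of the W minima estimates the value of the separable function.

```python
# Centralized illustration of the estimator (in the real protocol the minima
# are disseminated by gossip rather than computed directly).
import random

def estimate_separable_sum(node_values, W=1000):
    """node_values: list of f_i(x_i) > 0, one per node."""
    # Each node draws W exponential samples; expovariate(r) has rate r, mean 1/r.
    samples = [[random.expovariate(r) for _ in range(W)] for r in node_values]
    # Coordinate-wise minimum across nodes: exponential with rate sum_i f_i(x_i).
    minima = [min(samples[i][l] for i in range(len(node_values))) for l in range(W)]
    return W / sum(minima)

# Example: the true sum is 2 + 3 + 5 = 10; the estimate concentrates around 10.
print(estimate_separable_sum([2.0, 3.0, 5.0]))
```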
Performance of Random Access Scheduling Schemes in Multi-hop Wireless Networks
"... The scheduling problem in multi-hop wireless networks has been extensively investigated. Although throughput optimal scheduling solutions have been developed in the literature, they are unsuitable for multi-hop wireless systems because they are usually centralized and have very high complexity. In ..."
Abstract
-
Cited by 74 (7 self)
- Add to MetaCart
(Show Context)
The scheduling problem in multi-hop wireless networks has been extensively investigated. Although throughput-optimal scheduling solutions have been developed in the literature, they are unsuitable for multi-hop wireless systems because they are usually centralized and have very high complexity. In this paper, we develop a random-access-based scheduling scheme that utilizes local information. The important features of this scheme include constant-time complexity, distributed operation, and a provable performance guarantee. Analytical results show that it guarantees a larger fraction of the optimal throughput performance than the state of the art. Through simulations with both single-hop and multi-hop traffic, we observe that the scheme provides high throughput, close to that of a well-known highly efficient centralized greedy solution called the Greedy Maximal Scheduler.
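As a rough illustration of constant-overhead random access (not the exact scheme analyzed in the paper), the sketch below has each link draw one of M contention mini-slots and join the schedule only if it is strictly earliest among its conflicting links; M, the function name, and the tie-breaking rule are assumptions.

```python
# One contention round with a constant number of mini-slots. Conflict sets are
# assumed symmetric, so the winners form an independent set of the conflict graph.
import random

def random_access_schedule(links, conflicts, M=8):
    slot = {l: random.randrange(M) for l in links}   # each link picks a mini-slot
    schedule = set()
    for l in links:
        if all(slot[l] < slot[j] for j in conflicts[l]):   # strictly earliest among conflicts
            schedule.add(l)
    return schedule
```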
Adaptive network coding and scheduling for maximizing throughput in wireless networks
- In Proceedings of ACM Mobicom, 2007
"... Recently, network coding emerged as a promising technol-ogy that can provide significant improvements in through-put and energy efficiency of wireless networks, even for uni-cast communication. Often, network coding schemes are designed as an autonomous layer, independent of the un-derlying Phy and ..."
Abstract
-
Cited by 64 (1 self)
- Add to MetaCart
Recently, network coding emerged as a promising technology that can provide significant improvements in throughput and energy efficiency of wireless networks, even for unicast communication. Often, network coding schemes are designed as an autonomous layer, independent of the underlying PHY and MAC capabilities and algorithms. Consequently, these schemes are greedy, in the sense that all opportunities of broadcasting combinations of packets are exploited. We demonstrate that this greedy design principle may in fact reduce the network throughput. This begets the need for adaptive network coding schemes. We further show that designing appropriate MAC scheduling algorithms is critical for achieving the throughput gains expected from network coding. In this paper, we propose a general framework to develop optimal and adaptive joint network coding and scheduling schemes. Optimality is shown for various PHY and MAC constraints. We apply this framework to two different network coding architectures: COPE, a scheme recently proposed in [7], and XOR-Sym, a new scheme we present here. XOR-Sym is designed to achieve a lower implementation complexity than that of COPE, and yet to provide similar throughput gains.
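A toy example of the XOR coding primitive behind COPE-style schemes (equal-length packets assumed for simplicity): the relay broadcasts the XOR of two packets headed to different neighbors, and each neighbor recovers its own packet by XORing with the packet it already holds.

```python
# One coded broadcast replaces two unicast transmissions at the relay.
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

pkt_from_A = b"hello from A"
pkt_from_B = b"hello from B"
coded = xor_bytes(pkt_from_A, pkt_from_B)          # relay broadcasts this single packet
assert xor_bytes(coded, pkt_from_A) == pkt_from_B  # node A decodes B's packet
assert xor_bytes(coded, pkt_from_B) == pkt_from_A  # node B decodes A's packet
```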
Polynomial complexity algorithms for full utilization of multi-hop wireless networks
"... In this paper, we propose and study a general framework that allows the development of distributed mechanisms to achieve full utilization of multi-hop wireless networks. In particular, we develop a generic randomized routing, scheduling and flow control scheme that is applicable to a large class o ..."
Abstract
-
Cited by 58 (15 self)
- Add to MetaCart
(Show Context)
In this paper, we propose and study a general framework that allows the development of distributed mechanisms to achieve full utilization of multi-hop wireless networks. In particular, we develop a generic randomized routing, scheduling and flow control scheme that is applicable to a large class of interference models. We prove that any algorithm which satisfies the conditions of our generic scheme maximizes network throughput and utilization. Then, we focus on a specific interference model, namely the two-hop interference model, and develop distributed algorithms with polynomial communication and computation complexity. This is an important result given that earlier throughput-optimal algorithms developed for such a model rely on the solution to an NP-hard problem. To the best of our knowledge, this is the first polynomial-complexity algorithm that guarantees full utilization in multi-hop wireless networks. We further show that our algorithmic approach enables us to efficiently approximate the capacity region of a multi-hop wireless network.
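For reference, the two-hop interference model mentioned above can be checked locally: two links conflict if they share an endpoint or if any endpoint of one is a one-hop neighbor of any endpoint of the other. The helper below assumes an adjacency map `adj` is available; names are illustrative.

```python
# Two-hop interference check between two links, each given as a node pair.
def conflicts_two_hop(link1, link2, adj):
    u1, v1 = link1
    u2, v2 = link2
    ends1, ends2 = {u1, v1}, {u2, v2}
    if ends1 & ends2:                                        # shared endpoint
        return True
    return any(b in adj[a] for a in ends1 for b in ends2)    # endpoints are graph neighbors
```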
Fast distributed algorithms for computing separable functions
- IEEE Trans. Inform. Theory
"... Abstract—The problem of computing functions of values at the nodes in a network in a fully distributed manner, where nodes do not have unique identities and make decisions based only on local information, has applications in sensor, peer-to-peer, and adhoc networks. The task of computing separable f ..."
Abstract
-
Cited by 57 (5 self)
- Add to MetaCart
(Show Context)
Abstract—The problem of computing functions of values at the nodes in a network in a fully distributed manner, where nodes do not have unique identities and make decisions based only on local information, has applications in sensor, peer-to-peer, and ad hoc networks. The task of computing separable functions, which can be written as linear combinations of functions of individual variables, is studied in this context. Known iterative algorithms for averaging can be used to compute the normalized values of such functions, but these algorithms do not extend in general to the computation of the actual values of separable functions. The main contribution of this paper is the design of a distributed randomized algorithm for computing separable functions. The running time of the algorithm is shown to depend on the running time of a minimum computation algorithm used as a subroutine. Using a randomized gossip mechanism for minimum computation as the subroutine yields a complete fully distributed algorithm for computing separable functions. For a class of graphs with small spectral gap, such as grid graphs, the time used by the algorithm to compute averages is of a smaller order than the time required by a known iterative averaging scheme. Index Terms—Data aggregation, distributed algorithms, gossip algorithms, randomized algorithms.
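A hedged sketch of the gossip minimum-computation subroutine referred to above, under a synchronous-rounds simplification: every node contacts a uniformly random neighbor each round and both keep the element-wise minimum, so all nodes converge to the global minima without needing node identities; the round count and function names are assumptions.

```python
# Synchronous gossip for coordinate-wise minima over per-node vectors.
import random

def gossip_minimum(vectors, neighbors, rounds=50):
    """vectors: dict node -> list of values; neighbors: dict node -> list of adjacent nodes."""
    v = {n: list(vals) for n, vals in vectors.items()}
    for _ in range(rounds):
        for node in v:
            peer = random.choice(neighbors[node])
            merged = [min(a, b) for a, b in zip(v[node], v[peer])]
            v[node] = v[peer] = list(merged)      # both endpoints keep the element-wise minimum
    return v
```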