Results 1 - 10 of 63
A tutorial on cross-layer optimization in wireless networks
- IEEE Journal on Selected Areas in Communications, 2006
Cited by 248 (29 self)
This tutorial paper overviews recent developments in optimization-based approaches for resource allocation problems in wireless systems. We begin by overviewing important results in the area of opportunistic (channel-aware) scheduling for cellular (single-hop) networks, where easily implementable myopic policies are shown to optimize system performance. We then describe key lessons learned and the main obstacles in extending the work to general resource allocation problems for multi-hop wireless networks. Towards this end, we show that a clean-slate optimization-based approach to the multi-hop resource allocation problem naturally results in a “loosely coupled” cross-layer solution. That is, the algorithms obtained map to different layers (transport, network, and MAC/PHY) of the protocol stack and are coupled through a limited amount of information being passed back and forth. It turns out that the optimal scheduling component at the MAC layer is very complex and thus needs simpler (potentially imperfect) distributed solutions. We demonstrate how to use imperfect scheduling in the cross-layer framework and describe recently developed distributed algorithms along these lines. We conclude by describing a set of open research problems.
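The myopic opportunistic policies mentioned in the abstract can be illustrated with a toy max-weight scheduler (serve the user maximizing queue length times instantaneous rate). The fading model, arrival process, and all numbers below are illustrative assumptions, not taken from the paper:

```python
import random

def max_weight_schedule(queues, rates):
    """Myopic policy: serve the user with the largest queue * rate product."""
    return max(range(len(queues)), key=lambda i: queues[i] * rates[i])

random.seed(0)
queues = [5.0, 5.0, 5.0]
for _ in range(1000):
    rates = [random.uniform(0.0, 2.0) for _ in queues]  # fading channel states
    i = max_weight_schedule(queues, rates)
    queues[i] = max(queues[i] - rates[i], 0.0)          # serve the chosen user
    for j in range(len(queues)):
        queues[j] += random.uniform(0.0, 0.5)           # fresh arrivals
print([round(q, 1) for q in queues])
```

Because the policy exploits channel peaks, the backlogs stay bounded even though each user's channel is sometimes poor.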
Cross-layer congestion control, routing and scheduling design in ad hoc wireless networks
- Proc. IEEE INFOCOM, 2006
Cited by 151 (10 self)
This paper considers jointly optimal design of cross-layer congestion control, routing and scheduling for ad hoc wireless networks. We first formulate the rate constraint and scheduling constraint using multicommodity flow variables, and formulate resource allocation in networks with fixed wireless channels (or single-rate wireless devices that can mask channel variations) as a utility maximization problem with these constraints. By dual decomposition, the resource allocation problem naturally decomposes into three subproblems: congestion control, routing and scheduling that interact through congestion price. The global convergence property of this algorithm is proved. We next extend the dual algorithm to handle networks with time-varying channels and adaptive multi-rate devices. The stability of the resulting system is established, and its performance is characterized with respect to an ideal reference system which has the best feasible rate region at link layer. We then generalize the aforementioned results to a general model of a queueing network served by a set of interdependent parallel servers with time-varying service capabilities, which models many design problems in communication networks. We show that for a general convex optimization problem where a subset of variables lie in a polytope and the rest in a convex set, the dual-based algorithm remains stable and optimal when the constraint set is modulated by an irreducible finite-state Markov chain. This paper thus presents a step toward a systematic way to carry out cross-layer design in the framework of “layering as optimization decomposition” for time-varying channel models.
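The dual decomposition described here can be sketched on a toy single-link problem: log-utility sources react to a congestion price, and the link adjusts the price by a subgradient step. The weights, capacity, and step size are assumptions chosen for illustration:

```python
def solve_num(weights, capacity, step=0.01, iters=5000):
    """Dual subgradient method for: max sum_i w_i*log(x_i) s.t. sum_i x_i <= c."""
    price = 0.5
    rates = [w / price for w in weights]
    for _ in range(iters):
        # Congestion-control subproblem: x_i = argmax_x w_i*log(x) - price*x
        rates = [w / price for w in weights]
        # Dual (congestion-price) subgradient update at the link
        price = max(price + step * (sum(rates) - capacity), 1e-6)
    return rates, price

rates, price = solve_num([1.0, 2.0], capacity=3.0)
print([round(x, 3) for x in rates], round(price, 3))
```

Note that the sources need only the scalar price and the link only the total rate: the limited message passing that makes such decompositions distributed.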
Energy-efficient resource allocation in wireless networks: An overview of game-theoretic approaches
- IEEE Signal Processing Magazine, 2007
Cited by 55 (8 self)
A game-theoretic model is proposed to study the cross-layer problem of joint power and rate control with quality of service (QoS) constraints in multiple-access networks. In the proposed game, each user seeks to choose its transmit power and rate in a distributed manner in order to maximize its own utility while satisfying its QoS requirements. The user’s QoS constraints are specified in terms of the average source rate and an upper bound on the average delay where the delay includes both transmission and queuing delays. The utility function considered here measures energy efficiency and is particularly suitable for wireless networks with energy constraints. The Nash equilibrium solution for the proposed noncooperative game is derived and a closed-form expression for the utility achieved at equilibrium is obtained. It is shown that the QoS requirements of a user translate into a “size” for the user which is an indication of the amount of network resources consumed by the user. Using this competitive multiuser framework, the tradeoffs among throughput, delay, network capacity and energy efficiency are studied. In addition, analytical expressions are given for users’ delay profiles and the delay performance of the users at Nash equilibrium is quantified.
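A minimal sketch of the best-response dynamics behind such noncooperative power-control games, assuming each user simply targets a common SINR; the channel gains, noise level, and target are hypothetical, and with N users a common target must be below 1/(N-1) to be feasible:

```python
def best_response(powers, gains, noise, target_sinr):
    """Each user sets the power that attains target_sinr given the others."""
    new = []
    for i, h in enumerate(gains):
        interference = noise + sum(gains[j] * powers[j]
                                   for j in range(len(powers)) if j != i)
        new.append(target_sinr * interference / h)
    return new

gains, noise, target = [1.0, 0.8, 0.6], 0.1, 0.4   # target < 1/(N-1) = 0.5
powers = [1.0, 1.0, 1.0]
for _ in range(200):
    powers = best_response(powers, gains, noise, target)
# At the fixed point (a Nash equilibrium of this toy game),
# every user attains exactly the target SINR.
```

The iteration is a standard interference-function update: when the target is feasible it converges geometrically from any positive starting point.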
Mathematics and the Internet: A Source of Enormous Confusion and Great Potential
Cited by 47 (6 self)
For many mathematicians and physicists, the Internet has become a popular real-world domain for the application and/or development of new theories related to the organization and behavior of large-scale, complex, and dynamic systems. In some cases, the Internet has served both as inspiration and justification for the popularization of new models and mathematics within the scientific enterprise. For example, scale-free network models of the preferential attachment type [8] have been claimed to describe the Internet’s connectivity structure, resulting in surprisingly general and strong claims about the network’s resilience to random failures of its components and its vulnerability to targeted attacks against its infrastructure [2]. These models have, as their trademark, power-law type node degree distributions that drastically distinguish them from the classical Erdős–Rényi type random graph models [13]. These “scale-free” network models have attracted significant attention within the scientific community and have been partly responsible for launching and fueling the new field of network science [42, 4]. To date, the main role that mathematics has played in network science has been to put the physicists’ largely empirical findings on solid grounds.
Walter Willinger is at AT&T Labs-Research in Florham Park, NJ. His email address is walter@research.att.com.
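The preferential-attachment mechanism the abstract refers to ([8]) can be reproduced in a few lines; the graph size and seed below are arbitrary. Each new node links to an existing node chosen with probability proportional to its degree, which yields the heavy-tailed degree sequences that distinguish these models from Erdős–Rényi graphs:

```python
import random

def preferential_attachment(n, seed=1):
    """Grow a tree where new nodes attach to old ones proportionally to degree."""
    random.seed(seed)
    endpoints = [0, 1]          # each edge contributes both endpoints, so a
    degree = {0: 1, 1: 1}       # uniform draw from this list is degree-proportional
    for v in range(2, n):
        u = random.choice(endpoints)
        endpoints += [u, v]
        degree[u] += 1
        degree[v] = 1
    return degree

deg = preferential_attachment(5000)
print(max(deg.values()))  # hubs far exceed the mean degree of about 2
```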
Distributed utility maximization for network coding based multicasting: a shortest path approach
- IEEE Journal on Selected Areas in Communications, 2005
Cited by 33 (6 self)
Abstract — A central issue in practically deploying network coding in a shared network is the adaptive and efficient allocation of network resources. This issue can be formulated as an optimization problem of maximizing the net-utility – the difference between a utility derived from the attainable multicast throughput and the total cost of resource provisioning. We develop a primal-subgradient type distributed algorithm to solve this utility maximization problem. The effectiveness of the algorithm hinges upon two key properties we discovered: (1) the set of subgradients of the multicast capacity is the convex hull of the indicator vectors for the critical cuts, and (2) the complexity of finding such critical cuts can be reduced by exploiting the algebraic properties of linear network coding. The extension to multiple multicast sessions is also carried out. The effectiveness of the proposed algorithm is confirmed by simulations on an
DaVinci: Dynamically Adaptive Virtual Networks for a Customized Internet
- Proc. CoNEXT, 2008
Cited by 32 (2 self)
Running multiple virtual networks, customized for different performance objectives, is a promising way to support diverse applications over a shared substrate. Despite being simple, a static division of resources between virtual networks can be highly inefficient, while dynamic resource allocation runs the risk of instability. This paper uses optimization theory to show that adaptive resource allocation can be stable and can maximize the aggregate performance across the virtual networks. In the DaVinci architecture, each substrate link periodically reassigns bandwidth shares between its virtual links, while at a smaller timescale each virtual network runs a distributed protocol that maximizes its own performance objective independently. Numerical experiments with a mix of delay-sensitive and throughput-sensitive traffic show that the bandwidth shares converge quickly to the optimal values. We demonstrate that running several custom protocols in parallel and allocating resources adaptively can be more efficient, more flexible, and easier to manage than a compromise “one-size-fits-all” design.
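The periodic bandwidth reassignment can be caricatured as follows; the congestion signal (demand divided by current share), the damping step, and all numbers are assumptions for illustration, not the DaVinci update itself:

```python
def reassign(shares, demands, capacity, step=0.2):
    """One substrate-link update: shift bandwidth toward congested virtual links."""
    prices = [d / s for d, s in zip(demands, shares)]   # crude congestion prices
    total = sum(prices)
    targets = [capacity * p / total for p in prices]    # price-proportional split
    return [s + step * (t - s) for s, t in zip(shares, targets)]

shares, demands, capacity = [5.0, 5.0], [8.0, 2.0], 10.0
for _ in range(200):
    shares = reassign(shares, demands, capacity)
print([round(s, 2) for s in shares])  # the busier virtual network ends up with more
```

The damping factor plays the stabilizing role the abstract alludes to: reassigning shares all the way to the price-proportional target each period can oscillate, while a partial step converges.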
Towards robust multi-layer traffic engineering: Optimization of congestion control and routing
- IEEE Journal on Selected Areas in Communications, 2007
Cited by 31 (6 self)
Abstract — In the Internet today, traffic engineering is performed assuming that the offered traffic is inelastic. In reality, end hosts adapt their sending rates to network congestion, and network operators adapt the routing to the measured traffic. This raises the question of whether the joint system of congestion control (transport layer) and routing (network layer) is stable and optimal. Using the established optimization model for TCP and that for traffic engineering as a basis, we find the joint system is stable and typically maximizes aggregate user utility, especially under more homogeneous link capacities. We prove that both stability and optimality of the joint system can be guaranteed for sufficiently elastic traffic simply by tuning the cost function used for traffic engineering. Then, we present a new algorithm that adapts on a faster timescale to changes in traffic distribution and is more robust to large traffic bursts. Uniting the network and transport layers in a multi-layer approach, this algorithm, Distributed Adaptive Traffic Engineering (DATE), jointly optimizes the goals of end users and network operators and reacts quickly to avoid bottlenecks. Simulations demonstrate that DATE converges quickly.
Impact of stochastic noisy feedback on distributed network utility maximization
- Proc. IEEE INFOCOM, 2007
Cited by 30 (4 self)
Abstract — The implementation of distributed network utility maximization (NUM) algorithms hinges heavily on information feedback through message passing among network elements. In practical systems the feedback is often obtained using error-prone measurement mechanisms and suffers from random errors. In this paper, we investigate the impact of noisy feedback on distributed NUM. We first study the distributed NUM algorithms based on the Lagrangian dual method, and focus on the primal-dual (P-D) algorithm, which is a single time-scale algorithm in the sense that the primal and dual parameters are updated simultaneously. Assuming strong duality, we study both cases when the stochastic gradients are unbiased or biased, and develop a general theory on the stochastic stability of the P-D algorithms in the presence of noisy feedback. When the gradient estimators are unbiased,
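A toy instance of the unbiased-noise case studied here: primal-dual updates for maximizing log(x) subject to x <= c, with zero-mean Gaussian noise on the constraint measurement and a diminishing step size. The problem data, noise level, and step schedule are illustrative assumptions:

```python
import random

random.seed(0)
c, x, lam = 2.0, 1.0, 1.0
for t in range(1, 200001):
    step = 1.0 / t ** 0.6                         # diminishing (Robbins-Monro) steps
    noisy_gap = (x - c) + random.gauss(0.0, 0.5)  # unbiased noisy feedback
    x = min(max(x + step * (1.0 / x - lam), 1e-3), 10.0)  # primal ascent step
    lam = max(lam + step * noisy_gap, 0.0)        # dual update on the noisy gap
print(round(x, 2), round(lam, 2))  # drifts toward x* = 2, lambda* = 0.5
```

With zero-mean noise and a step size that decays slowly enough to keep moving but fast enough to average the noise out, the iterates settle near the saddle point despite never seeing a clean constraint measurement.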
Rethinking Internet Traffic Management: From Multiple Decompositions to A Practical Protocol
- Proc. CoNEXT, 2007
Cited by 29 (11 self)
In the Internet today, traffic management spans congestion control (at end hosts), routing protocols (on routers), and traffic engineering (by network operators). Historically, this division of functionality evolved organically. In this paper, we perform a top-down redesign of traffic management using recent innovations in optimization theory. First, we propose an objective function that captures the goals of end users and network operators. Using all known optimization decomposition techniques, we generate four distributed algorithms that divide traffic over multiple paths based on feedback from the network links. Combining the best features of the algorithms, we construct TRUMP: a traffic management protocol that is distributed, adaptive, robust, flexible and easy to manage. Further, TRUMP can operate based on implicit feedback about packet loss and delay. We show that using optimization decompositions as a foundation, simulations as a building block, and human intuition as a guide can be a principled approach to protocol design.
Contrasting views of complexity and their implications for network-centric infrastructures
- IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans
Cited by 20 (3 self)
Abstract—There exists a widely recognized need to better understand and manage complex “systems of systems,” ranging from biology, ecology, and medicine to network-centric technologies. This is motivating the search for universal laws of highly evolved systems and driving demand for new mathematics and methods that are consistent, integrative, and predictive. However, the theoretical frameworks available today are not merely fragmented but sometimes contradictory and incompatible. We argue that complexity arises in highly evolved biological and technological systems primarily to provide mechanisms to create robustness. However, this complexity itself can be a source of new fragility, leading to “robust yet fragile” tradeoffs in system design. We focus on the role of robustness and architecture in networked infrastructures, and we highlight recent advances in the theory of distributed control driven by network technologies. This view of complexity in highly organized technological and biological systems is fundamentally different from the dominant perspective in the mainstream sciences, which downplays function, constraints, and tradeoffs, and tends to minimize the role of organization and design. Index Terms—Architecture, complexity theory, networks, optimal control, optimization methods, protocols.