Results 1 - 10 of 336
Fair end-to-end window-based congestion control
- IEEE/ACM Transactions on Networking, 2000
"... In this paper, we demonstrate the existence of fair end-to-end window-based congestion control protocols for packetswitched networks with first come-first served routers. Our definition of fairness generalizes proportional fairness and includes arbitrarily close approximations of max-min fairness. T ..."
Abstract
-
Cited by 676 (3 self)
- Add to MetaCart
In this paper, we demonstrate the existence of fair end-to-end window-based congestion control protocols for packet-switched networks with first-come-first-served routers. Our definition of fairness generalizes proportional fairness and includes arbitrarily close approximations of max-min fairness. The protocols use only information that is available to end hosts and are designed to converge reasonably fast. Our study is based on a multiclass fluid model of the network. The convergence of the protocols is proved using a Lyapunov function. The technical challenge is in the practical implementation of the protocols.
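For reference, the fairness notion generalized here is commonly written as the alpha-fair utility family (a standard formulation from this literature, sketched here rather than quoted from the paper): an allocation is alpha-fair if it maximizes the aggregate utility \sum_s U_\alpha(x_s) over the feasible rate region, where

    U_\alpha(x_s) = \begin{cases} \log x_s, & \alpha = 1 \\ x_s^{1-\alpha}/(1-\alpha), & \alpha > 0,\ \alpha \neq 1 \end{cases}

Setting \alpha = 1 recovers proportional fairness, and letting \alpha \to \infty approaches max-min fairness arbitrarily closely.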
FAST TCP: Motivation, Architecture, Algorithms, Performance
- 2004
"... We describe FAST TCP, a new TCP congestion control algorithm for high-speed long-latency networks, from design to implementation. We highlight the approach taken by FAST TCP to address the four difficulties, at both packet and flow levels, which the current TCP implementation has at large windows. W ..."
Abstract
-
Cited by 369 (18 self)
- Add to MetaCart
(Show Context)
We describe FAST TCP, a new TCP congestion control algorithm for high-speed long-latency networks, from design to implementation. We highlight the approach taken by FAST TCP to address the four difficulties that the current TCP implementation faces at large windows, at both the packet and flow levels. We describe the architecture and characterize the equilibrium and stability properties of FAST TCP. We present experimental results comparing our first Linux prototype with TCP Reno, HSTCP, and STCP in terms of throughput, fairness, stability, and responsiveness. FAST TCP aims to rapidly stabilize high-speed long-latency networks at steady, efficient, and fair operating points in dynamic sharing environments, and the preliminary results are promising.
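The core of the algorithm is a delay-based window update; for reference (this is the update rule given in the FAST TCP paper, with tuning parameters \gamma \in (0,1] and \alpha > 0, and baseRTT the minimum observed round-trip time), each sender periodically sets

    w \leftarrow \min\left\{ 2w,\ (1-\gamma)w + \gamma\left(\frac{\mathrm{baseRTT}}{\mathrm{RTT}}\, w + \alpha\right) \right\}

In equilibrium each flow keeps roughly \alpha packets queued at its bottleneck, which is what yields the fairness and stability properties characterized in the paper.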
A Duality Model of TCP and Queue Management Algorithms
- IEEE/ACM Transactions on Networking, 2002
"... We propose a duality model of congestion control and apply it to understand the equilibrium properties of TCP and active queue management schemes. Congestion control is the interaction of source rates with certain congestion measures at network links. The basic idea is to regard source rates as p ..."
Abstract
-
Cited by 307 (37 self)
- Add to MetaCart
We propose a duality model of congestion control and apply it to understand the equilibrium properties of TCP and active queue management schemes. Congestion control is the interaction of source rates with certain congestion measures at network links. The basic idea is to regard source rates as primal variables and congestion measures as dual variables, and to view congestion control as a distributed primal-dual algorithm carried out over the Internet to maximize aggregate utility subject to capacity constraints. The primal iteration is carried out by TCP algorithms such as Reno or Vegas, and the dual iteration is carried out by queue management schemes such as DropTail, RED, or REM. We present these algorithms and their generalizations, derive their utility functions, and study their interaction.
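The underlying optimization problem can be sketched in standard notation (x_s is the rate of source s, c_l the capacity of link l, and R the 0-1 routing matrix):

    \max_{x \geq 0} \sum_s U_s(x_s) \quad \text{subject to} \quad Rx \leq c

Each link l generates a congestion measure (the dual variable, or price) p_l, each source observes the aggregate price along its path, q_s = \sum_l R_{ls} p_l, and sets its rate to x_s = U_s'^{-1}(q_s). Which utility is maximized and which price is generated depend on the TCP/AQM pair; for delay-based schemes such as Vegas, for instance, queueing delay plays the role of the price.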
Analysis and design of an adaptive virtual queue (AVQ) algorithm for active queue management
- Proceedings of the ACM Conference on Applications, Technologies, Architectures, and Protocols for Computer Communication (SIGCOMM '01), 2001
"... ABSTRACT Virtual Queue-based marking schemes have been recently proposed for AQM (Active Queue Management) in Internet routers. We consider a particular scheme, which we call the Adaptive Virtual Queue (AVQ), and study its following properties: stability in the presence of feedback delays, its abil ..."
Abstract
-
Cited by 259 (22 self)
- Add to MetaCart
(Show Context)
Virtual queue-based marking schemes have recently been proposed for active queue management (AQM) in Internet routers. We consider a particular scheme, which we call the Adaptive Virtual Queue (AVQ), and study the following properties: its stability in the presence of feedback delays, its ability to maintain small queue lengths, and its robustness in the presence of extremely short flows (the so-called web mice). Using a mathematical tool motivated by the earlier work of Hollot et al., we present a simple rule to design the parameters of the AVQ algorithm. We then compare its performance through simulation with several well-known AQM schemes such as RED, REM, the PI controller, and a non-adaptive virtual queue algorithm. With a view towards implementation, we show that AVQ can be implemented as a simple token bucket using only a few lines of code.
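A minimal sketch of that token-bucket implementation follows (a hypothetical rendering in Python, assuming a per-packet discretization of the virtual-capacity update \dot{\tilde{C}} = \alpha(\gamma C - \lambda); all names and default values are illustrative, not the paper's code):

    # Hypothetical AVQ sketch: a token bucket refilled at the virtual capacity.
    class AVQ:
        def __init__(self, capacity, gamma=0.98, alpha=0.15, depth=15000):
            self.capacity = capacity  # real link capacity (bytes/sec)
            self.gamma = gamma        # desired utilization
            self.alpha = alpha        # adaptation gain (set by the design rule)
            self.vc = capacity        # virtual capacity (bytes/sec)
            self.tokens = depth       # current bucket fill (bytes)
            self.depth = depth        # bucket depth = buffer size (bytes)
            self.last_t = 0.0

        def on_packet(self, t, size):
            """Return True if the arriving packet should be marked/dropped."""
            dt, self.last_t = t - self.last_t, t
            # Tokens accumulate at the virtual capacity, capped at the depth.
            self.tokens = min(self.depth, self.tokens + self.vc * dt)
            # Adapt the virtual capacity toward gamma * capacity utilization:
            # d(vc)/dt = alpha * (gamma * C - arrival rate), clamped to [0, C].
            if dt > 0:
                rate = size / dt  # crude one-packet estimate of arrival rate
                self.vc += self.alpha * (self.gamma * self.capacity - rate) * dt
                self.vc = max(0.0, min(self.capacity, self.vc))
            if self.tokens < size:
                return True       # virtual queue would overflow: mark
            self.tokens -= size
            return False

The design rule mentioned in the abstract bounds the gain alpha as a function of the feedback delay so that the closed loop remains stable.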
Impact of Fairness on Internet Performance
- Proceedings of ACM SIGMETRICS, 2000
"... We discuss the relevance of fairness as a design objective for congestion control mechanisms in the Internet. Specifically, we consider a backbone network shared by a dynamic number of short-lived flows, and study the impact of bandwidth sharing on network performance. In particular, we prove that f ..."
Abstract
-
Cited by 220 (15 self)
- Add to MetaCart
We discuss the relevance of fairness as a design objective for congestion control mechanisms in the Internet. Specifically, we consider a backbone network shared by a dynamic number of short-lived flows, and study the impact of bandwidth sharing on network performance. In particular, we prove that for a broad class of fair bandwidth allocations, the total number of flows in progress remains finite if the load of every link is less than one. We also show that, provided the bandwidth allocation is "sufficiently" fair, performance is optimal in the sense that the throughput of the flows is mainly determined by their access rate. Neither property is guaranteed with unfair bandwidth allocations, when priority is given to one class of flow with respect to another. This suggests that current proposals for a differentiated services Internet may lead to suboptimal utilization of network resources.
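In standard flow-level notation (class-s flows arriving at rate \lambda_s with mean size \sigma_s and traversing the set of links s; this is a paraphrase of the result, not a quotation), the stability condition reads:

    \rho_l = \frac{1}{C_l} \sum_{s : l \in s} \lambda_s \sigma_s < 1 \ \text{for every link } l \;\Longrightarrow\; \text{the number of flows in progress remains finite.}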
Statistical bandwidth sharing: a study of congestion at flow level
- 2001
"... In this paper we study the statistics of the realized throughput of elastic document transfers, accounting for the way network bandwidth is shared dynamically between the randomly varying number of concurrent flows. We first discuss the way TCP realizes statistical bandwidth sharing, illustrating es ..."
Abstract
-
Cited by 214 (23 self)
- Add to MetaCart
(Show Context)
In this paper we study the statistics of the realized throughput of elastic document transfers, accounting for the way network bandwidth is shared dynamically between the randomly varying number of concurrent flows. We first discuss the way TCP realizes statistical bandwidth sharing, illustrating essential properties by means of packet-level simulations. Mathematical flow-level models based on the theory of stochastic networks are then proposed to explain the observed behavior. A notable result is that first-order performance (e.g., mean throughput) is insensitive with respect both to the flow size distribution and the flow arrival process, as long as “sessions” arrive according to a Poisson process. Perceived performance is shown to depend most significantly on whether demand at flow level is less than or greater than available capacity. The models provide a key to understanding the effectiveness of techniques for congestion management and service differentiation.
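The flavor of the insensitivity result is easiest to see on a single bottleneck (a standard M/G/1 processor-sharing instance, stated here for illustration rather than taken from the paper): if flows arrive as a Poisson process of rate \lambda with mean size \sigma on a link of capacity C, and the load \rho = \lambda\sigma/C is less than one, the number of flows in progress is geometric with mean \rho/(1-\rho) and the mean per-flow throughput is

    \gamma = C(1 - \rho),

whatever the flow size distribution. Performance thus degrades gracefully while demand stays below capacity and collapses once it exceeds it.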
Bandwidth Sharing and Admission Control for Elastic Traffic
- Telecommunication Systems, 1998
"... We consider the performance of a network like the Internet handling so-called elastic traffic where the rate of flows adjusts to fill available bandwidth. Realized throughput depends both on the way bandwidth is shared and on the random nature of traffic. We assume traffic consists of point to point ..."
Abstract
-
Cited by 214 (18 self)
- Add to MetaCart
We consider the performance of a network like the Internet handling so-called elastic traffic, where the rate of flows adjusts to fill available bandwidth. Realized throughput depends both on the way bandwidth is shared and on the random nature of traffic. We assume traffic consists of point-to-point transfers of individual documents of finite size arriving according to a Poisson process. Notable results are that weighted sharing has limited impact on perceived quality of service and that discrimination in favour of short documents leads to considerably better performance than fair sharing. In a linear network, max-min fairness is preferable to proportional fairness under random traffic, while the converse is true under the assumption of a static configuration of persistent flows. Admission control is advocated as a necessary means to maintain goodput in case of traffic overload.
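The max-min versus proportional fairness comparison can be illustrated on the classic linear network (a textbook example rather than a computation from this paper): with L unit-capacity links, one long flow crossing all of them, and one short flow per link, proportional fairness allocates

    x_{\mathrm{long}} = \frac{1}{L+1}, \qquad x_{\mathrm{short}} = \frac{L}{L+1},

while max-min fairness gives every flow 1/2. The static comparison favors proportional fairness on aggregate throughput; the paper's point is that this ranking reverses once flows arrive and depart at random.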
Internet Congestion Control
- IEEE Control Systems Magazine, 2002
"... Abstract This article reviews the current TCP congestion control protocols and overviews recent advances that have brought analytical tools to this problem. We describe an optimization-based framework that provides an interpretation of various flow control mechanisms, in particular, the utility bei ..."
Abstract
-
Cited by 194 (25 self)
- Add to MetaCart
(Show Context)
This article reviews current TCP congestion control protocols and surveys recent advances that have brought analytical tools to this problem. We describe an optimization-based framework that provides an interpretation of various flow control mechanisms, in particular the utility function that each protocol's equilibrium structure implicitly optimizes. We also look at the dynamics of TCP and employ linear models to exhibit stability limitations in the predominant TCP versions, despite certain built-in compensations for delay. Finally, we present a new protocol that overcomes these limitations and provides stability in a way that is scalable to arbitrary networks, link capacities, and delays.
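One concrete instance of such an optimization-based framework is Kelly's primal algorithm (a standard example from this literature, not the new protocol presented in the article), in which source s adapts its rate according to

    \dot{x}_s = \kappa \Big( w_s - x_s \sum_{l \in s} p_l(t) \Big),

where p_l is the congestion price generated at link l and w_s a willingness-to-pay parameter; its equilibrium is a weighted proportionally fair allocation, and linearizing such loops around equilibrium, with feedback delays included, is exactly the kind of analysis used to exhibit the stability limitations mentioned above.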
SimGrid: a Generic Framework for Large-Scale Distributed Experiments
- 2008
"... Distributed computing is a very broad and active research area comprising fields such as cluster computing, computational grids, desktop grids and peer-to-peer (P2P) systems. Unfortunately, it is often impossible to obtain theoretical or analytical results to compare the performance of algorithms ta ..."
Abstract
-
Cited by 138 (28 self)
- Add to MetaCart
Distributed computing is a very broad and active research area comprising fields such as cluster computing, computational grids, desktop grids, and peer-to-peer (P2P) systems. Unfortunately, it is often impossible to obtain theoretical or analytical results to compare the performance of algorithms targeting such systems. One possibility is to conduct large numbers of back-to-back experiments on real platforms. While this is possible on tightly-coupled platforms, it is infeasible on modern distributed platforms, as experiments are labor-intensive and results are typically not reproducible. Consequently, one must resort to simulations, which enable reproducible results and also make it possible to explore wide ranges of platform and application scenarios. In this paper we describe SimGrid, a simulation-based framework for evaluating cluster, grid, and P2P algorithms and heuristics. This paper focuses on SimGrid v3, which greatly improves on previous versions thanks to a novel and validated modular simulation engine that achieves higher simulation speed without hindering simulation accuracy. Two new user interfaces were also added to broaden the targeted research community. After surveying existing tools and methodologies, we describe the key features and benefits of SimGrid.
Scheduling Distributed Applications: The SimGrid Simulation Framework
- Proceedings of the Third IEEE International Symposium on Cluster Computing and the Grid (CCGrid '03), 2003
"... Since the advent of distributed computer systems an active field of research has been the investigation of scheduling strategies for parallel applications. The common approach is to employ scheduling heuristics that approximate an optimal schedule. Unfortunately, it is often impossible to obtain a ..."
Abstract
-
Cited by 137 (28 self)
- Add to MetaCart
(Show Context)
Since the advent of distributed computer systems, an active field of research has been the investigation of scheduling strategies for parallel applications. The common approach is to employ scheduling heuristics that approximate an optimal schedule. Unfortunately, it is often impossible to obtain analytical results to compare the efficacy of these heuristics. One possibility is to conduct large numbers of back-to-back experiments on real platforms. While this is possible on tightly-coupled platforms, it is infeasible on modern distributed platforms (i.e., grids), as it is labor-intensive and does not enable repeatable results. The solution is to resort to simulation, which not only enables repeatable results but also makes it possible to explore wide ranges of platform and application scenarios. In this paper we present the SimGrid framework, which enables the simulation of distributed applications in distributed computing environments for the specific purpose of developing and evaluating scheduling algorithms. This paper focuses on SimGrid v2, which greatly improves on the first version of the software with more realistic network models and topologies. SimGrid v2 also enables the simulation of distributed scheduling agents, which has become critical for current scheduling research on large-scale platforms. After describing and validating these features, we present a case study in which we demonstrate the usefulness of SimGrid for conducting scheduling research.