Results 1 - 10 of 831
A Case for End System Multicast
- In Proceedings of ACM SIGMETRICS, 2000
"... Abstract — The conventional wisdom has been that IP is the natural protocol layer for implementing multicast related functionality. However, more than a decade after its initial proposal, IP Multicast is still plagued with concerns pertaining to scalability, network management, deployment and suppor ..."
Abstract
-
Cited by 1290 (24 self)
The conventional wisdom has been that IP is the natural protocol layer for implementing multicast related functionality. However, more than a decade after its initial proposal, IP Multicast is still plagued with concerns pertaining to scalability, network management, deployment, and support for higher-layer functionality such as error, flow, and congestion control. In this paper, we explore an alternative architecture that we term End System Multicast, where end systems implement all multicast related functionality, including membership management and packet replication. This shifting of multicast support from routers to end systems has the potential to address most problems associated with IP Multicast. However, the key concern is the performance penalty associated with such a model. In particular, End System Multicast introduces duplicate packets on physical links and incurs larger end-to-end delays than IP Multicast. In this paper, we study these performance concerns in the context of the Narada protocol. In Narada, end systems self-organize into an overlay structure using a fully distributed protocol. Further, end systems attempt to optimize the efficiency of the overlay by adapting to network dynamics and by considering application-level performance. We present details of Narada and evaluate it using both simulation and Internet experiments. Our results indicate that the performance penalties are low both from the application and the network perspectives. We believe the potential benefits of transferring multicast functionality from routers to end systems significantly outweigh the performance penalty incurred.
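The core mechanism described here, end systems rather than routers replicating packets along a self-organized overlay, is easy to illustrate. Below is a minimal sketch, not the Narada implementation: the four-member mesh and its latencies are invented, and Narada's membership management and adaptation logic are omitted. Each end system unicasts one copy of a packet to each of its children in a per-source shortest-path tree computed over the mesh.

```python
# Minimal sketch (not Narada itself): end systems form a mesh, compute a
# per-source shortest-path tree over measured latencies, and each member
# replicates packets to its tree children via unicast.
import heapq
from collections import defaultdict

def shortest_path_tree(mesh, source):
    """Dijkstra over the overlay mesh; returns child lists per node.
    mesh: {node: {neighbor: latency_ms}} -- latencies stand in for
    Narada's runtime measurements."""
    dist = {source: 0.0}
    parent = {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in mesh[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                parent[v] = u
                heapq.heappush(heap, (d + w, v))
    children = defaultdict(list)
    for v, u in parent.items():
        children[u].append(v)
    return children

def multicast(mesh, source, packet):
    """Each end system forwards to its tree children (unicast replication)."""
    children = shortest_path_tree(mesh, source)
    def forward(node):
        for child in children[node]:
            print(f"{node} -> {child}: {packet}")  # one unicast copy per child
            forward(child)
    forward(source)

# Hypothetical four-member overlay with measured latencies (ms).
mesh = {
    "A": {"B": 10, "C": 25},
    "B": {"A": 10, "C": 5, "D": 30},
    "C": {"A": 25, "B": 5, "D": 12},
    "D": {"B": 30, "C": 12},
}
multicast(mesh, "A", "hello")
```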
Resilient Overlay Networks
- 2001
"... A Resilient Overlay Network (RON) is an architecture that allows distributed Internet applications to detect and recover from path outages and periods of degraded performance within several seconds, improving over today’s wide-area routing protocols that take at least several minutes to recover. A R ..."
Abstract
-
Cited by 1160 (31 self)
A Resilient Overlay Network (RON) is an architecture that allows distributed Internet applications to detect and recover from path outages and periods of degraded performance within several seconds, improving over today’s wide-area routing protocols that take at least several minutes to recover. A RON is an application-layer overlay on top of the existing Internet routing substrate. The RON nodes monitor the functioning and quality of the Internet paths among themselves, and use this information to decide whether to route packets directly over the Internet or by way of other RON nodes, optimizing application-specific routing metrics. Results from two sets of measurements of a working RON deployed at sites scattered across the Internet demonstrate the benefits of our architecture. For instance, over a 64-hour sampling period in March 2001 across a twelve-node RON, there were 32 significant outages, each lasting over thirty minutes, over the 132 measured paths. RON’s routing mechanism was able to detect, recover from, and route around all of them in less than twenty seconds on average, showing that its methods for fault detection and recovery work well at discovering alternate paths in the Internet. Furthermore, RON was able to improve the loss rate, latency, or throughput perceived by data transfers; for example, about 5% of the transfers doubled their TCP throughput and 5% of our transfers saw their loss probability reduced by 0.05. We found that forwarding packets via at most one intermediate RON node is sufficient to overcome faults and improve performance in most cases. These improvements, particularly in the area of fault detection and recovery, demonstrate the benefits of moving some of the control over routing into the hands of end-systems.
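The finding that a single intermediate node usually suffices suggests a simple routing core. The sketch below, with invented node names and latency values, picks either the direct Internet path or the best one-hop detour through another overlay node, treating an infinite metric as an outage.

```python
# Minimal sketch of RON-style one-hop indirection (illustrative, not RON's
# code): given measured latencies between overlay nodes, route via at most
# one intermediate node when that beats the direct Internet path.
def best_path(latency, src, dst, nodes):
    """latency[a][b] is a measured metric; float('inf') marks an outage."""
    best = (latency[src][dst], [src, dst])           # direct Internet path
    for hop in nodes:
        if hop in (src, dst):
            continue
        via = latency[src][hop] + latency[hop][dst]  # single-intermediate path
        if via < best[0]:
            best = (via, [src, hop, dst])
    return best

nodes = ["x", "y", "z"]
latency = {
    "x": {"x": 0, "y": float("inf"), "z": 20},  # direct x->y path is out
    "y": {"x": float("inf"), "y": 0, "z": 15},
    "z": {"x": 20, "y": 15, "z": 0},
}
print(best_path(latency, "x", "y", nodes))  # -> (35, ['x', 'z', 'y'])
```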
Congestion control for high bandwidth-delay product networks
- SIGCOMM '02, 2002
"... Theory and experiments show that as the per-flow product of bandwidth and latency increases, TCP becomes inefficient and prone to instability, regardless of the queuing scheme. This failing becomes increasingly important as the Internet evolves to incorporate very high-bandwidth optical links and mo ..."
Abstract
-
Cited by 454 (4 self)
Theory and experiments show that as the per-flow product of bandwidth and latency increases, TCP becomes inefficient and prone to instability, regardless of the queuing scheme. This failing becomes increasingly important as the Internet evolves to incorporate very high-bandwidth optical links and more large-delay satellite links. To address this problem, we develop a novel approach to Internet congestion control that outperforms TCP in conventional environments, and remains efficient, fair, scalable, and stable as the bandwidth-delay product increases. This new eXplicit Control Protocol, XCP, generalizes the Explicit Congestion Notification proposal (ECN). In addition, XCP introduces the new concept of decoupling utilization control from fairness control. This allows a more flexible and analytically tractable protocol design and opens new avenues for service differentiation. Using a control theory framework, we model XCP and demonstrate it is stable and efficient regardless of the link capacity, the round trip delay, and the number of sources. Extensive packet-level simulations show that XCP outperforms TCP in both conventional and high bandwidth-delay environments. Further, XCP achieves fair bandwidth allocation, high utilization, small standing queue size, and near-zero packet drops, with both steady and highly varying traffic. Additionally, the new protocol does not maintain any per-flow state in routers and requires few CPU cycles per packet, which makes it implementable in high-speed routers.
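XCP's decoupling can be seen in its router-side control laws: an efficiency controller computes aggregate feedback from spare bandwidth and the persistent queue, and a separate fairness controller apportions that feedback among flows AIMD-style. The sketch below shows only the efficiency side, using the paper's aggregate-feedback form and stability constants; the traffic numbers in the example are invented.

```python
# Sketch of XCP's efficiency controller. The aggregate feedback computed
# here would then be divided among flows by the fairness controller.
# alpha=0.4 and beta=0.226 are the stability constants from the paper.
ALPHA, BETA = 0.4, 0.226

def aggregate_feedback(capacity, input_rate, queue, avg_rtt):
    """phi = alpha * d * S - beta * Q, in bytes per control interval:
    S is spare bandwidth (bytes/s), Q the persistent queue (bytes),
    d the average RTT (s)."""
    spare = capacity - input_rate
    return ALPHA * avg_rtt * spare - BETA * queue

# Example (invented): 10 MB/s link, 9 MB/s input, 50 KB queue, 100 ms RTT.
phi = aggregate_feedback(10e6, 9e6, 50e3, 0.1)
print(f"aggregate feedback: {phi:+.0f} bytes per interval")  # positive: grow
```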
Bullet: High Bandwidth Data Dissemination Using an Overlay Mesh
- 2003
"... In recent years, overlay networks have become an effective alternative to IP multicast for efficient point to multipoint communication across the Internet. Typically, nodes self-organize with the goal of forming an efficient overlay tree, one that meets performance targets without placing undue burd ..."
Abstract
-
Cited by 424 (22 self)
In recent years, overlay networks have become an effective alternative to IP multicast for efficient point-to-multipoint communication across the Internet. Typically, nodes self-organize with the goal of forming an efficient overlay tree, one that meets performance targets without placing undue burden on the underlying network. In this paper, we target high-bandwidth data distribution from a single source to a large number of receivers. Applications include large-file transfers and real-time multimedia streaming. For these applications, we argue that an overlay mesh, rather than a tree, can deliver fundamentally higher bandwidth and reliability relative to typical tree structures. This paper presents Bullet, a scalable and distributed algorithm that enables nodes spread across the Internet to self-organize into a high-bandwidth overlay mesh. We construct Bullet around the insight that data should be distributed in a disjoint manner to strategic points in the network. Individual Bullet receivers are then responsible for locating and retrieving the data from multiple points in parallel. Key contributions of this work include: i) an algorithm that sends data to different points in the overlay such that any data object is equally likely to appear at any node, ii) a scalable and decentralized algorithm that allows nodes to locate and recover missing data items, and iii) a complete implementation and evaluation of Bullet running across the Internet and in a large-scale emulation environment, revealing up to a factor of two bandwidth improvement under a variety of circumstances. In addition, we find that, relative to tree-based solutions, Bullet reduces the need to perform expensive bandwidth probing.
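Bullet's two key pieces, disjoint distribution and parallel recovery, can be sketched as follows. This is an illustration of the idea rather than Bullet's actual algorithms (which use scalable, decentralized mechanisms to locate peers); the round-robin split, block IDs, and peer names are invented.

```python
# Illustrative sketch: the source deals disjoint block subsets to its
# children, so any block is roughly equally likely to appear anywhere;
# receivers then plan parallel fetches of missing blocks from peers.
import random

def disjoint_split(blocks, children, seed=0):
    """Deal blocks into disjoint subsets, one per child."""
    rng = random.Random(seed)
    shuffled = blocks[:]
    rng.shuffle(shuffled)
    return {c: set(shuffled[i::len(children)]) for i, c in enumerate(children)}

def recover(missing, peer_blocks):
    """Assign each missing block to the least-loaded peer holding it,
    spreading retrieval across multiple peers in parallel."""
    load = {p: 0 for p in peer_blocks}
    plan = {}
    for block in sorted(missing):
        holders = [p for p, have in peer_blocks.items() if block in have]
        if holders:
            peer = min(holders, key=lambda p: load[p])
            plan[block] = peer
            load[peer] += 1
    return plan

subsets = disjoint_split(list(range(8)), ["n1", "n2"])  # disjoint per child
print(subsets)
print(recover({2, 5, 7}, subsets))  # each missing block mapped to a holder
```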
FAST TCP: Motivation, Architecture, Algorithms, Performance
- 2004
"... We describe FAST TCP, a new TCP congestion control algorithm for high-speed long-latency networks, from design to implementation. We highlight the approach taken by FAST TCP to address the four difficulties, at both packet and flow levels, which the current TCP implementation has at large windows. W ..."
Abstract
-
Cited by 369 (18 self)
We describe FAST TCP, a new TCP congestion control algorithm for high-speed long-latency networks, from design to implementation. We highlight the approach taken by FAST TCP to address the four difficulties, at both packet and flow levels, that the current TCP implementation has at large windows. We describe the architecture and characterize the equilibrium and stability properties of FAST TCP. We present experimental results comparing our first Linux prototype with TCP Reno, HSTCP, and STCP in terms of throughput, fairness, stability, and responsiveness. FAST TCP aims to rapidly stabilize high-speed long-latency networks at steady, efficient, and fair operating points in dynamic sharing environments; the preliminary results are promising.
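For reference, the paper's flow-level window update moves the window each RTT toward (baseRTT/RTT)·w + α and smooths the step with a factor γ, capping growth at doubling. The sketch below applies that update; the α, γ, and RTT values are illustrative assumptions.

```python
# FAST's flow-level window update: each RTT the window moves toward
# (baseRTT/RTT)*w + alpha, smoothed by gamma and capped at 2w. At
# equilibrium this keeps roughly alpha packets queued in the network.
def fast_update(w, base_rtt, rtt, alpha=200.0, gamma=0.5):
    target = (base_rtt / rtt) * w + alpha
    return min(2 * w, (1 - gamma) * w + gamma * target)

w = 100.0
for rtt in [0.100, 0.120, 0.150, 0.150, 0.150]:  # base RTT 100 ms; queueing grows
    w = fast_update(w, 0.100, rtt)
    print(f"rtt={rtt*1e3:.0f} ms  cwnd={w:.1f}")
```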
Rate-Distortion Optimized Streaming of Packetized Media
- IEEE Trans. Multimedia, 2001
"... This paper addresses the problem of streaming packetized media ..."
Abstract
-
Cited by 337 (15 self)
This paper addresses the problem of streaming packetized media ...
Enabling Conferencing Applications on the Internet using an Overlay Multicast Architecture
- In Proceedings of ACM SIGCOMM, 2001
"... ..."
(Show Context)
On the constancy of Internet path properties
- In Proceedings of ACM SIGCOMM Internet Measurement Workshop, 2001
"... Abstract — Many Internet protocols and operational procedures use measurements to guide future actions. This is an effective strategy if the quantities being measured exhibit a degree of constancy: that is, in some fundamental sense, they are not changing. In this paper we explore three different no ..."
Abstract
-
Cited by 294 (15 self)
Many Internet protocols and operational procedures use measurements to guide future actions. This is an effective strategy if the quantities being measured exhibit a degree of constancy: that is, in some fundamental sense, they are not changing. In this paper we explore three different notions of constancy: mathematical, operational, and predictive. Using a large measurement dataset gathered from the NIMI infrastructure, we then apply these notions to three Internet path properties: loss, delay, and throughput. Our aim is to provide guidance as to when assumptions of various forms of constancy are sound, versus when they might prove misleading.
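"Predictive constancy" in the paper's sense asks whether simple estimators forecast a property well. The sketch below makes that concrete with a one-step-ahead EWMA predictor over invented delay samples; it illustrates the notion rather than reproducing the paper's methodology.

```python
# A property is predictively constant if a simple estimator forecasts it
# well. Here an EWMA predicts the next delay sample; data are made up.
def ewma_prediction_error(samples, alpha=0.25):
    """Mean absolute error of a one-step-ahead EWMA predictor."""
    est, errors = samples[0], []
    for x in samples[1:]:
        errors.append(abs(x - est))          # predict next with current estimate
        est = alpha * x + (1 - alpha) * est  # then fold the sample in
    return sum(errors) / len(errors)

steady = [50, 51, 49, 50, 52, 50, 49, 51]    # ms; nearly constant path
shifting = [50, 51, 90, 88, 91, 50, 49, 92]  # ms; level shifts hurt prediction
print(ewma_prediction_error(steady), ewma_prediction_error(shifting))
```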
Binomial Congestion Control Algorithms
- 2001
"... This paper introduces and analyzes a class of nonlinear congestion control algorithms called binomial algorithms, motivated in part by the needs of streaming audio and video applications for which a drastic reduction in transmission rate upon each congestion indication (or loss) is problematic. Bino ..."
Abstract
-
Cited by 217 (11 self)
This paper introduces and analyzes a class of nonlinear congestion control algorithms called binomial algorithms, motivated in part by the needs of streaming audio and video applications for which a drastic reduction in transmission rate upon each congestion indication (or loss) is problematic. Binomial algorithms generalize TCP-style additive-increase by increasing inversely proportional to a power k of the current window (for TCP, k = 0); they generalize TCP-style multiplicative-decrease by decreasing proportional to a power l of the current window (for TCP, l = 1). We show that there are an infinite number of deployable TCP-compatible binomial algorithms, those which satisfy k + l = 1, and that all binomial algorithms converge to fairness under a synchronized-feedback assumption provided k + l > 0. Our simulation results show that binomial algorithms interact well with TCP across a RED gateway. We focus on two particular algorithms, IIAD (k = 1, l = 0) and SQRT (k = l = 1/2), showing that they are well-suited to applications that do not react well to large TCP-style window reductions.
Keywords: Congestion control, TCP-friendliness, TCP-compatibility, nonlinear algorithms, transport protocols, TCP, streaming media, Internet.
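The increase and decrease rules above translate directly into code. The sketch below applies one congestion-free increase and one loss-triggered decrease for the TCP, IIAD, and SQRT members of the family; the α and β values are illustrative, and a real transport would apply these per RTT.

```python
# Binomial update rules from the abstract: per RTT, w += alpha / w**k;
# on loss, w -= beta * w**l. (k=0, l=1) recovers TCP; (k=1, l=0) is IIAD;
# (k=l=1/2) is SQRT. alpha and beta are illustrative choices.
def binomial_step(w, k, l, loss, alpha=1.0, beta=0.5):
    if loss:
        return max(1.0, w - beta * w ** l)  # decrease proportional to w**l
    return w + alpha / w ** k               # increase inverse to w**k

for name, (k, l) in {"TCP": (0, 1), "IIAD": (1, 0), "SQRT": (0.5, 0.5)}.items():
    w = binomial_step(20.0, k, l, loss=False)      # one congestion-free RTT
    w_after = binomial_step(w, k, l, loss=True)    # then one loss
    print(f"{name}: grows to {w:.2f}, loss drops it to {w_after:.2f}")
```

Note how IIAD and SQRT shed far less of the window on a loss than TCP does, which is exactly the property the abstract claims matters for streaming media.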
Characterizing Residential Broadband Networks
- Proc. of ACM IMC, 2007
"... A large and rapidly growing proportion of users connect to the Internet via residential broadband networks such as Digital Subscriber Lines (DSL) and cable. Residential networks are often the bottleneck in the last mile of today’s Internet. Their characteristics critically affect Internet applicatio ..."
Abstract
-
Cited by 173 (7 self)
A large and rapidly growing proportion of users connect to the Internet via residential broadband networks such as Digital Subscriber Lines (DSL) and cable. Residential networks are often the bottleneck in the last mile of today’s Internet. Their characteristics critically affect Internet applications, including voice-over-IP, online games, and peer-to-peer content sharing/delivery systems. However, to date, few studies have investigated commercial broadband deployments, and rigorous measurement data that characterize these networks at scale are lacking. In this paper, we present the first large-scale measurement study of major cable and DSL providers in North America and Europe. We describe and evaluate the measurement tools we developed for this purpose. Our study characterizes several properties of broadband networks, including link capacities, packet round-trip times and jitter, packet loss rates, queue lengths, and queue drop policies. Our analysis reveals important ways in which residential networks differ from how the Internet is conventionally thought to operate. We also discuss the implications of our findings for many emerging protocols and systems, including delay-based congestion control (e.g., PCP) and network coordinate systems (e.g., Vivaldi).
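Among the properties measured here, link capacity is the classic target of packet-pair probing: the dispersion of two back-to-back packets at the receiver reveals the bottleneck rate. The sketch below is a generic illustration of that technique, not the authors' tool; the probe size and inter-arrival gaps are invented.

```python
# Generic packet-pair capacity estimate: packet size over the minimum
# inter-arrival gap observed across repeated probes (the minimum filters
# out gaps inflated by cross traffic). Illustrative values only.
def packet_pair_capacity(size_bytes, arrival_gaps_s):
    return size_bytes * 8 / min(arrival_gaps_s)  # bits per second

gaps = [0.00130, 0.00122, 0.00185, 0.00121]      # seconds, from repeated probes
print(f"{packet_pair_capacity(1500, gaps) / 1e6:.1f} Mbit/s")  # ~9.9 Mbit/s
```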