Results 1 - 10 of 295
Binomial Congestion Control Algorithms, 2001
"... This paper introduces and analyzes a class of nonlinear congestion control algorithms called binomial algorithms, motivated in part by the needs of streaming audio and video applications for which a drastic reduction in transmission rate upon each congestion indication (or loss) is problematic. Bino ..."
Abstract
-
Cited by 217 (11 self)
This paper introduces and analyzes a class of nonlinear congestion control algorithms called binomial algorithms, motivated in part by the needs of streaming audio and video applications for which a drastic reduction in transmission rate upon each congestion indication (or loss) is problematic. Binomial algorithms generalize TCP-style additive-increase by increasing inversely proportional to a power k of the current window (for TCP, k = 0); they generalize TCP-style multiplicative-decrease by decreasing proportional to a power l of the current window (for TCP, l = 1). We show that there are an infinite number of deployable TCP-compatible binomial algorithms, those which satisfy k + l = 1, and that all binomial algorithms converge to fairness under a synchronized-feedback assumption provided k + l > 0. Our simulation results show that binomial algorithms interact well with TCP across a RED gateway. We focus on two particular algorithms, IIAD (k = 1, l = 0) and SQRT (k = l = 0.5), showing that they are well-suited to applications that do not react well to large TCP-style window reductions. Keywords: congestion control, TCP-friendliness, TCP-compatibility, nonlinear algorithms, transport protocols, TCP, streaming media, Internet.
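A minimal sketch of the window-update rules the abstract describes; the per-RTT stepping, the alpha/beta values, and the boolean loss signal are simplifying assumptions of this illustration, not details from the paper:

```python
def binomial_update(cwnd, loss, k, l, alpha=1.0, beta=0.5, min_cwnd=1.0):
    """One binomial congestion-control step.

    Increase: cwnd += alpha / cwnd**k   (per RTT without loss)
    Decrease: cwnd -= beta * cwnd**l    (on a congestion indication)
    TCP corresponds to (k=0, l=1); IIAD to (k=1, l=0); SQRT to (k=l=0.5).
    """
    if loss:
        cwnd -= beta * cwnd ** l
    else:
        cwnd += alpha / cwnd ** k
    return max(cwnd, min_cwnd)

# SQRT reduces the window far less drastically than TCP-style halving on loss:
print(binomial_update(20.0, loss=True, k=0.5, l=0.5))  # ~17.8
print(binomial_update(20.0, loss=True, k=0.0, l=1.0))  # 10.0
```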
Detecting shared congestion of flows via end-to-end measurement - IEEE/ACM Transactions on Networking, 2000
"... Abstract—Current Internet congestion control protocols operate independently on a per-flow basis. Recent work has demonstrated that cooperative congestion control strategies between flows can improve performance for a variety of applications, ranging from aggregated TCP transmissions to multiple-sen ..."
Abstract
-
Cited by 165 (6 self)
Abstract—Current Internet congestion control protocols operate independently on a per-flow basis. Recent work has demonstrated that cooperative congestion control strategies between flows can improve performance for a variety of applications, ranging from aggregated TCP transmissions to multiple-sender multicast applications. However, in order for this cooperation to be effective, one must first identify the flows that are congested at the same set of resources. In this paper, we present techniques based on loss or delay observations at end hosts to infer whether or not two flows experiencing congestion are congested at the same network resources. Our novel result is that such detection can be achieved for unicast flows, but the techniques can also be applied to multicast flows. We validate these techniques via queueing analysis, simulation, and experimentation within the Internet. In addition, we demonstrate preliminary simulation results that show that the delay-based technique can determine whether two TCP flows are congested at the same set of resources. We also propose metrics that can be used as a measure of the amount of congestion sharing between two flows. Index Terms—Hypothesis testing, inference, network congestion, queueing analysis.
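The techniques rely on correlating end-host loss or delay observations between a pair of flows. The sketch below is only an illustration of that intuition; the plain correlation statistic and the 0.3 threshold are assumptions for the example, not the authors' actual hypothesis test:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences of delay samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return 0.0 if vx == 0 or vy == 0 else cov / sqrt(vx * vy)

def likely_shared_bottleneck(delays_a, delays_b, threshold=0.3):
    # Intuition: flows queued at the same congested resource see correlated
    # delay variation; flows with independent bottlenecks do not.
    return pearson(delays_a, delays_b) > threshold
```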
Scalable distributed stream processing - In Proc. Conf. for Innovative Database Research (CIDR), 2003
"... Many stream-based applications are naturally distributed. Applications are often embedded in an environment with numerous connected computing devices with heterogeneous capabilities. As data travels from its point of origin (e.g., sensors) downstream to applications, it passes through many computin ..."
Abstract
-
Cited by 156 (16 self)
Many stream-based applications are naturally distributed. Applications are often embedded in an environment with numerous connected computing devices with heterogeneous capabilities. As data travels from its point of origin (e.g., sensors) downstream to applications, it passes through many computing devices, each of which is a potential target of computation. Furthermore, to cope with time-varying load spikes and changing demand, many servers would be brought to bear on the problem. In both cases, distributed computation is the norm.

Stream processing fits a large class of new applications for which conventional DBMSs fall short. Because many stream-oriented systems are inherently geographically distributed and because distribution offers scalable load management and higher availability, future stream processing systems will operate in a distributed fashion. They will run across the Internet on computers typically owned by multiple cooperating administrative domains. This paper describes the architectural challenges facing the design of large-scale distributed stream processing systems, and discusses novel approaches for addressing load management, high availability, and federated operation issues. We describe two stream processing systems, Aurora* and Medusa, which are being designed to explore complementary solutions to these challenges. We begin in Section 2 with a brief description of our centralized stream processing system, Aurora.
A transport layer approach for achieving aggregate bandwidths on multi-homed mobile hosts - In MobiCom ’02: Proceedings of the 8th annual international conference on Mobile computing and networking, 2002
"... ..."
On Making TCP More Robust to Packet Reordering - ACM Computer Communication Review, 2002
"... rare event on some Internet paths. Reordering can cause performance problems for TCP's fast retransmission algorithm, which uses the arrival of duplicate acknowledgments to detect segment loss. Duplicate acknowledgments can be caused by the loss of a segment or by the reordering of segments by ..."
Abstract
-
Cited by 155 (11 self)
Packet reordering is not a rare event on some Internet paths. Reordering can cause performance problems for TCP's fast retransmission algorithm, which uses the arrival of duplicate acknowledgments to detect segment loss. Duplicate acknowledgments can be caused by the loss of a segment or by the reordering of segments by the network. In this paper we illustrate the impact of reordering on TCP performance. In addition, we show the performance of a conservative approach to "undo" the congestion control state changes made in conjunction with spurious retransmissions. Finally, we propose several alternatives to dynamically make the fast retransmission algorithm more tolerant of the reordering observed in the network and assess these algorithms.
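One general idea in this space, raising the duplicate-ACK threshold when a retransmission turns out to be spurious, can be sketched as below. This is a hypothetical simplification for illustration, not any of the paper's specific algorithms; the +1 step and the cap of 16 are arbitrary choices:

```python
class ReorderTolerantFastRetransmit:
    """Toy model of a fast-retransmit trigger with an adaptive dupthresh.

    Standard TCP triggers fast retransmit after 3 duplicate ACKs; when a
    retransmission later proves spurious (the segment was only reordered),
    the threshold is raised so future reordering is tolerated.
    """
    def __init__(self, dupthresh=3, cap=16):
        self.dupthresh = dupthresh
        self.cap = cap
        self.dupacks = 0

    def on_dupack(self):
        self.dupacks += 1
        return self.dupacks >= self.dupthresh   # True => fast retransmit now

    def on_new_ack(self):
        self.dupacks = 0

    def on_spurious_retransmit(self):
        # Reordering, not loss: tolerate more duplicate ACKs next time.
        self.dupthresh = min(self.dupthresh + 1, self.cap)
```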
TCP Friendly Rate Control (TFRC): Protocol Specification - RFC 3448, 2003
"... Status of this Memo This document specifies an Internet standards track protocol for the Internet community, and requests discussion and suggestions for improvements. Please refer to the current edition of the "Internet Official Protocol Standards " (STD 1) for the standardization state an ..."
Abstract
-
Cited by 135 (5 self)
Status of this Memo: This document specifies an Internet standards track protocol for the Internet community, and requests discussion and suggestions for improvements. Please refer to the current edition of the "Internet Official Protocol Standards" (STD 1) for the standardization state and status of this protocol. Distribution of this memo is unlimited. Copyright Notice: Copyright (C) The Internet Society (2003). All Rights Reserved. This document specifies TCP-Friendly Rate Control (TFRC). TFRC is a congestion control mechanism for unicast flows operating in a best-effort Internet environment. It is reasonably fair when competing for bandwidth with TCP flows, but has a much lower variation of throughput over time compared with TCP, making it more suitable for applications such as telephony or streaming media where a relatively smooth sending rate is of importance.
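TFRC derives its allowed sending rate from the TCP throughput equation; the sketch below follows the form of the equation given in RFC 3448 (Section 3.1). Treat the defaults b = 1 and t_RTO = 4R, and the example numbers, as assumptions of this illustration:

```python
from math import sqrt

def tfrc_rate(s, R, p, b=1, t_RTO=None):
    """Allowed sending rate in bytes/second from the TCP throughput equation.

    s: segment size (bytes), R: round-trip time (s),
    p: loss event rate (0 < p <= 1), b: packets acknowledged per ACK.
    """
    if t_RTO is None:
        t_RTO = 4 * R   # common simplification suggested by the RFC
    denom = (R * sqrt(2 * b * p / 3)
             + t_RTO * (3 * sqrt(3 * b * p / 8)) * p * (1 + 32 * p ** 2))
    return s / denom

# Example: 1460-byte segments, 100 ms RTT, 1% loss event rate
# -> roughly 164 kB/s (about 1.3 Mbit/s).
print(tfrc_rate(1460, 0.1, 0.01))
```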
RMX: Reliable Multicast for Heterogeneous Networks - In Proc. IEEE INFOCOM, 2000
"... Although IP Multicast is an effective network primitive for best-effort, large-scale, multi-point communication, many multicast applications such as shared whiteboards, multi-player games and software distribution require reliable data delivery. Building services like reliable sequenced delivery on ..."
Abstract
-
Cited by 125 (2 self)
Although IP Multicast is an effective network primitive for best-effort, large-scale, multi-point communication, many multicast applications such as shared whiteboards, multi-player games and software distribution require reliable data delivery. Building services like reliable sequenced delivery on top of IP Multicast has proven to be a hard problem. The enormous extent of network and end-system heterogeneity in multipoint communication exacerbates the design of scalable end-to-end reliable multicast protocols. In this paper, we propose a radical departure from the traditional end-to-end model for reliable multicast and instead propose a hybrid approach that leverages the successes of unicast reliability protocols such as TCP while retaining the efficiency of IP multicast for multi-point data delivery. Our approach splits a large heterogeneous reliable multicast session into a number of multicast data groups of co-located homogeneous participants. A collection of application-aware agents--Reliable Multicast proxies (RMXs)--organizes these data groups into a spanning tree using an overlay network of TCP connections. Sources transmit data to their local group, and the RMX in that group forwards the data towards the rest of the data groups. RMXs use detailed knowledge of application semantics to adapt to the effects of heterogeneity in the environment. To demonstrate the efficacy of our architecture, we have built a prototype implementation that can be customized for different kinds of applications.
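The forwarding rule implied by the abstract (a proxy relays data between its local multicast group and a spanning tree of TCP links to peer RMXs) can be sketched roughly as follows; the class and callback names are invented for illustration and the transport details are stubbed out:

```python
class RmxNode:
    """Toy RMX proxy bridging one local multicast group and tree neighbours.

    send_local and the per-neighbour callables stand in for real sockets;
    the point is the loop-free split-forwarding rule over a spanning tree.
    """
    def __init__(self, send_local, tcp_neighbors):
        self.send_local = send_local          # deliver to the local multicast group
        self.tcp_neighbors = tcp_neighbors    # neighbor_id -> send-over-TCP callable

    def on_local_data(self, data):
        # Data originated in our group: push it along every tree link.
        for send in self.tcp_neighbors.values():
            send(data)

    def on_tree_data(self, data, from_neighbor):
        # Data from the overlay: deliver locally, then forward to every tree
        # neighbour except the one it came from (no loops on a spanning tree).
        self.send_local(data)
        for nid, send in self.tcp_neighbors.items():
            if nid != from_neighbor:
                send(data)
```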
IRON File Systems - In Proceedings of the 20th ACM Symposium on Operating Systems Principles (SOSP ’05), Brighton, United Kingdom, 2005
"... Commodity file systems trust disks to either work or fail completely, yet modern disks exhibit more complex failure modes. We suggest a new fractured failure model for disks, which incorporates realistic localized faults such as latent sector errors and block corruption. We then develop and apply a ..."
Abstract
-
Cited by 123 (32 self)
Commodity file systems trust disks to either work or fail completely, yet modern disks exhibit more complex failure modes. We suggest a new fractured failure model for disks, which incorporates realistic localized faults such as latent sector errors and block corruption. We then develop and apply a novel fault-injection framework to investigate how commodity file systems react to a range of more realistic disk failures. We classify their failure policies in a new taxonomy that measures their Internal RObustNess (IRON), which includes both failure detection and recovery techniques. We show that commodity file system failure policies are often inconsistent, sometimes buggy, and generally inadequate in their ability to recover from localized disk failures. Finally, we design, implement, and evaluate a prototype IRON file system, ixt3, showing that techniques such as in-disk checksumming and replication greatly enhance file system robustness while incurring minimal time and space overheads.
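A tiny, generic illustration of the in-disk checksumming and replication idea mentioned at the end of the abstract (not ixt3's actual on-disk format; the dict-backed "storage" and the SHA-1 choice are stand-ins for this sketch):

```python
import hashlib

BLOCK_SIZE = 4096

def write_block(storage, block_no, data):
    """Store a block together with a checksum over its contents."""
    assert len(data) == BLOCK_SIZE
    storage[block_no] = (hashlib.sha1(data).digest(), data)

def read_block(storage, block_no, replica=None):
    """Verify the checksum on read; fall back to a replica on corruption."""
    checksum, data = storage[block_no]
    if hashlib.sha1(data).digest() != checksum:
        if replica is not None:
            return read_block(replica, block_no)   # recovery via replication
        raise IOError(f"block {block_no}: checksum mismatch (latent corruption)")
    return data
```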
Designing DCCP: Congestion Control Without Reliability, 2003
"... DCCP, the Datagram Congestion Control Protocol, is a new transport protocol in the TCP/UDP family that provides a congestion-controlled flow of unreliable datagrams. Delay-sensitive applications, such as streaming media and telephony, prefer timeliness to reliability. These applications have histori ..."
Abstract
-
Cited by 115 (2 self)
DCCP, the Datagram Congestion Control Protocol, is a new transport protocol in the TCP/UDP family that provides a congestion-controlled flow of unreliable datagrams. Delay-sensitive applications, such as streaming media and telephony, prefer timeliness to reliability. These applications have historically used UDP and implemented their own congestion control mechanisms---a difficult task---or no congestion control at all. DCCP will make it easy to deploy these applications without risking congestion collapse. It aims to add to a UDP-like foundation the minimum mechanisms necessary to support congestion control, such as possibly-reliable transmission of acknowledgement information. This minimal design should make DCCP suitable as a building block for more advanced application semantics, such as selective reliability. We introduce and motivate the protocol and discuss some of its design principles. Those principles particularly shed light on the ways TCP's reliable byte-stream semantics influence its implementation of congestion control.
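As a purely conceptual sketch of the semantics described above, a congestion-controlled flow of unreliable datagrams: when the controller's current rate is exceeded, data is dropped rather than queued or retransmitted, preserving timeliness. This is an illustration of the idea only, not the DCCP wire protocol or a CCID; the token-bucket rate limiter and names are assumptions:

```python
import time

class UnreliableRateLimitedSender:
    """Send datagrams without retransmission, but no faster than the
    congestion controller currently allows (token bucket stands in for
    a real congestion-control mechanism)."""
    def __init__(self, send_datagram, allowed_rate_bps):
        self.send_datagram = send_datagram      # e.g. a UDP socket's sendto
        self.allowed_rate_bps = allowed_rate_bps
        self.tokens = 0.0
        self.last = time.monotonic()

    def try_send(self, payload):
        now = time.monotonic()
        self.tokens = min(self.allowed_rate_bps,
                          self.tokens + (now - self.last) * self.allowed_rate_bps)
        self.last = now
        if self.tokens >= 8 * len(payload):
            self.tokens -= 8 * len(payload)
            self.send_datagram(payload)
            return True
        return False   # over the allowed rate: drop, never queue or retransmit
```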
A model, analysis, and protocol framework for soft state-based communication, 1999
"... "Soft state" is an often cited yet vague concept in network protocol design in which two or more network entities intercommunicate in a loosely coupled, often anonymous fashion. Researchers often define this concept operationally (if at all) rather than analytically: a source of soft state ..."
Abstract
-
Cited by 106 (7 self)
"Soft state" is an often cited yet vague concept in network protocol design in which two or more network entities intercommunicate in a loosely coupled, often anonymous fashion. Researchers often define this concept operationally (if at all) rather than analytically: a source of soft state transmits periodic "refresh messages" over a (lossy) communication channel to one or more receivers that maintain a copy of that state, which in turn "expires" if the periodic updates cease. Though a number of crucial Internet protocol building blocks are rooted in soft state-based designs | e.g., RSVP refresh messages, PIM membership updates, various routing protocol updates, RTCP control messages, directory services like SAP, and so forth | controversy is building as to whether the performance overhead of soft state refresh messages justify their qualitative benefit of enhanced system "robustness". We believe that this controversy has risen not from fundamental performance tradeo s but rather from our lack of a comprehensive understanding of soft state. To better understand these tradeoffs, we propose herein a formal model for soft state communication based on a probabilistic delivery model with relaxed reliability. Using this model, we conduct queueing analysis and simulation to characterize the data consistency and performance tradeo s under a range of workloads and network loss rates. We then extend our model with feedback and show, through simulation, that adding feedback dramatically improves data consistency (by up to 55%) without increasing network resource consumption. Our model not only provides a foundation for understanding soft state, but also induces a new fundamental transport protocol based on probabilistic delivery. Toward this end, we sketch our design of the "Soft State Transport Protocol" (SSTP), which enjoys the robustness of soft state while retaining the performance benefit of hard state protocols like TCP through its judicious use of feedback.