Results 1 - 10 of 518
Modeling TCP Throughput: A Simple Model and its Empirical Validation, 1998
"... In this paper we develop a simple analytic characterization of the steady state throughput, as a function of loss rate and round trip time for a bulk transfer TCP flow, i.e., a flow with an unlimited amount of data to send. Unlike the models in [6, 7, 10], our model captures not only the behavior of ..."
Abstract
-
Cited by 1337 (36 self)
In this paper we develop a simple analytic characterization of the steady state throughput, as a function of loss rate and round trip time for a bulk transfer TCP flow, i.e., a flow with an unlimited amount of data to send. Unlike the models in [6, 7, 10], our model captures not only the behavior of TCP’s fast retransmit mechanism (which is also considered in [6, 7, 10]) but also the effect of TCP’s timeout mechanism on throughput. Our measurements suggest that this latter behavior is important from a modeling perspective, as almost all of our TCP traces contained more timeout events than fast retransmit events. Our measurements demonstrate that our model is able to more accurately predict TCP throughput and is accurate over a wider range of loss rates. This material is based upon work supported by the National Science Foundation under grants NCR-95-08274, NCR-95-23807 and CDA-95-02639. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
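The abstract above does not reproduce the model itself, but the closed-form approximation associated with this line of work is well known. Below is a minimal Python sketch of that formula, including the timeout term the authors emphasize; the parameter names (mss, rtt, p, t0, b) and the optional receiver-window cap are illustrative assumptions, not the paper's notation.

```python
from math import sqrt

def tcp_throughput(mss, rtt, p, t0, b=2, w_max=None):
    """Approximate steady-state throughput (bytes/sec) of a bulk TCP flow.

    A sketch of the widely cited loss/RTT model: the first term reflects
    the fast-retransmit (window-halving) regime, the second the
    retransmission-timeout regime that dominates at higher loss rates.
    mss: segment size in bytes; rtt: round-trip time in seconds;
    p: loss event rate; t0: initial retransmission timeout in seconds;
    b: packets acknowledged per ACK; w_max: optional receiver window cap.
    """
    if p <= 0:
        raise ValueError("model requires a positive loss rate")
    fast_retx = rtt * sqrt(2 * b * p / 3)
    timeout = t0 * min(1.0, 3 * sqrt(3 * b * p / 8)) * p * (1 + 32 * p ** 2)
    rate = mss / (fast_retx + timeout)
    if w_max is not None:
        rate = min(rate, w_max * mss / rtt)  # window-limited regime
    return rate

# Example: 1460-byte segments, 100 ms RTT, 1% loss, 1 s initial timeout.
print(f"{tcp_throughput(1460, 0.1, 0.01, 1.0):,.0f} bytes/sec")
```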
The Click Modular Router, 2001
"... Click is a new software architecture for building flexible and configurable routers. A Click router is assembled from packet processing modules called elements. Individual elements implement simple router functions like packet classification, queueing, scheduling, and interfacing with network devic ..."
Abstract
-
Cited by 1167 (28 self)
Click is a new software architecture for building flexible and configurable routers. A Click router is assembled from packet processing modules called elements. Individual elements implement simple router functions like packet classification, queueing, scheduling, and interfacing with network devices. A router configuration is a directed graph with elements at the vertices; packets flow along the edges of the graph. Configurations are written in a declarative language that supports user-defined abstractions. This language is both readable by humans and easily manipulated by tools. We present language tools that optimize router configurations and ensure they satisfy simple invariants. Due to Click’s architecture and language, Click router configurations are modular and easy to extend. A standards-compliant Click IP router has sixteen elements on its forwarding path. We present extensions to this router that support dropping policies, fairness among flows, quality-of-service, and …
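To make the element-graph idea concrete, here is a small Python sketch of packets flowing through a chain of processing elements. This is an analogy only: Click itself is a C++ system with its own configuration language, and the classes below (Element, Classifier, Counter, Sink) are invented for illustration.

```python
class Element:
    """Minimal stand-in for a Click-style packet-processing element."""
    def __init__(self):
        self.downstream = None
    def __rshift__(self, other):  # a >> b mimics Click's "a -> b" edges
        self.downstream = other
        return other
    def push(self, packet):
        if self.downstream:
            self.downstream.push(packet)

class Classifier(Element):
    """Pass packets matching a predicate downstream; drop the rest."""
    def __init__(self, predicate):
        super().__init__()
        self.predicate = predicate
    def push(self, packet):
        if self.predicate(packet):
            super().push(packet)

class Counter(Element):
    def __init__(self):
        super().__init__()
        self.count = 0
    def push(self, packet):
        self.count += 1
        super().push(packet)

class Sink(Element):
    def push(self, packet):
        print("delivered:", packet)

# A three-element forwarding path: classify, count, deliver.
clf, cnt = Classifier(lambda pkt: pkt["proto"] == "tcp"), Counter()
clf >> cnt >> Sink()
clf.push({"proto": "tcp", "dst": "10.0.0.1"})
clf.push({"proto": "udp", "dst": "10.0.0.2"})  # dropped by the classifier
print("tcp packets counted:", cnt.count)
```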
Promoting the Use of End-to-End Congestion Control in the Internet - IEEE/ACM Transactions on Networking, 1999
"... This paper considers the potentially negative impacts of an increasing deployment of non-congestion-controlled best-effort traffic on the Internet.’ These negative impacts range from extreme unfairness against competing TCP traffic to the potential for congestion collapse. To promote the inclusion ..."
Abstract
-
Cited by 875 (14 self)
This paper considers the potentially negative impacts of an increasing deployment of non-congestion-controlled best-effort traffic on the Internet. These negative impacts range from extreme unfairness against competing TCP traffic to the potential for congestion collapse. To promote the inclusion of end-to-end congestion control in the design of future protocols using best-effort traffic, we argue that router mechanisms are needed to identify and restrict the bandwidth of selected high-bandwidth best-effort flows in times of congestion. The paper discusses several general approaches for identifying those flows suitable for bandwidth regulation. These approaches are to identify a high-bandwidth flow in times of congestion as unresponsive, “not TCP-friendly,” or simply as using disproportionate bandwidth. A flow that is not “TCP-friendly” is one whose long-term arrival rate exceeds that of any conformant TCP in the same circumstances. An unresponsive flow is one failing to reduce its offered load at a router in response to an increased packet drop rate, and a disproportionate-bandwidth flow is one that uses considerably more bandwidth than other flows in a time of congestion.
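The "TCP-friendly" test described in this abstract can be operationalized with the simple steady-state TCP model, under which a conformant TCP's long-term rate is bounded by roughly 1.22 * MSS / (RTT * sqrt(p)). A minimal sketch, with illustrative function and parameter names:

```python
from math import sqrt

def tcp_friendly_rate(mss, rtt, p):
    """Rough upper bound (bytes/sec) on a conformant TCP's long-term rate.

    sqrt(3/2) ~= 1.22 is the constant from the simple steady-state TCP
    model; mss in bytes, rtt in seconds, p is the packet drop rate.
    """
    return sqrt(1.5) * mss / (rtt * sqrt(p))

def is_tcp_friendly(observed_rate, mss, rtt, p):
    # A flow is "TCP-friendly" if its long-term arrival rate does not
    # exceed what a conformant TCP would get under the same conditions.
    return observed_rate <= tcp_friendly_rate(mss, rtt, p)

# A 2 MB/s flow on a 50 ms, 1%-loss path far exceeds the TCP bound.
print(is_tcp_friendly(2_000_000, 1460, 0.05, 0.01))  # -> False
```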
End-to-End Internet Packet Dynamics - Proc. SIGCOMM '97, 1997
"... Abstract We discuss findings from a large-scale study of Internet packet dynamics conducted by tracing 20,000 TCP bulk transfers between 35 Internet sites. Because we traced each 100 Kbyte transfer at both the sender and the receiver, the measurements allow us to distinguish between the end-to-end ..."
Abstract
-
Cited by 843 (19 self)
We discuss findings from a large-scale study of Internet packet dynamics conducted by tracing 20,000 TCP bulk transfers between 35 Internet sites. Because we traced each 100 Kbyte transfer at both the sender and the receiver, the measurements allow us to distinguish between the end-to-end behaviors due to the different directions of the Internet paths, which often exhibit asymmetries. We characterize the prevalence of unusual network events such as out-of-order delivery and packet corruption; discuss a robust receiver-based algorithm for estimating "bottleneck bandwidth" that addresses deficiencies discovered in techniques based on "packet pair"; investigate patterns of packet loss, finding that loss events are not well-modeled as independent and, furthermore, that the distribution of the duration of loss events exhibits infinite variance; and analyze variations in packet transit delays as indicators of congestion periods, finding that congestion periods also span a wide range of time scales.
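The packet-pair idea this paper improves on is simple enough to sketch: back-to-back packets are spread out by the bottleneck link, so their spacing at the receiver reveals its capacity. The code below is a crude illustration using a median as a naive robustness step; the paper's actual receiver-based algorithm is considerably more careful, and the names here are assumptions.

```python
from statistics import median

def packet_pair_bandwidth(arrival_times, packet_size):
    """Crude receiver-side bottleneck-bandwidth estimate (bytes/sec).

    packet_size / inter-arrival gap estimates the bottleneck capacity;
    the median over many gaps is a naive guard against noise and
    queueing effects.
    """
    gaps = [b - a for a, b in zip(arrival_times, arrival_times[1:]) if b > a]
    return median(packet_size / g for g in gaps)

# 512-byte packets arriving ~4 ms apart imply a ~128 KB/s bottleneck.
print(f"{packet_pair_bandwidth([0.0, 0.004, 0.008, 0.0121], 512):,.0f}")
```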
Managing Energy and Server Resources in Hosting Centers - In Proceedings of the 18th ACM Symposium on Operating Systems Principles (SOSP), 2001
"... Interact hosting centers serve multiple service sites from a common hardware base. This paper presents the design and implementation of an architecture for resource management in a hosting center op-erating system, with an emphasis on energy as a driving resource management issue for large server cl ..."
Abstract
-
Cited by 574 (37 self)
Internet hosting centers serve multiple service sites from a common hardware base. This paper presents the design and implementation of an architecture for resource management in a hosting center operating system, with an emphasis on energy as a driving resource management issue for large server clusters. The goals are to provision server resources for co-hosted services in a way that automatically adapts to offered load, improve the energy efficiency of server clusters by dynamically resizing the active server set, and respond to power supply disruptions or thermal events by degrading service in accordance with negotiated Service Level Agreements (SLAs). Our system is based on an economic approach to managing shared server resources, in which services “bid” for resources as a function of delivered performance. The system continuously monitors load and plans resource allotments by estimating the value of their effects on service performance. A greedy resource allocation algorithm adjusts resource prices to balance supply and demand, allocating resources to their most efficient use. A reconfigurable server switching infrastructure directs request traffic to the servers assigned to each service. Experimental results from a prototype confirm that the system adapts to offered load and resource availability, and can reduce server energy usage by 29% or more for a typical Web workload.
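The bid-based provisioning scheme can be illustrated with a toy price-adjustment loop that raises a shared resource's price while demand exceeds supply and lowers it otherwise. This is a sketch of the general economic idea only, not the paper's algorithm; the bid functions and the damping schedule are invented for illustration.

```python
def allocate(bids, capacity, price=1.0, iters=60):
    """Toy greedy price adjustment balancing resource supply and demand.

    Each bid is a function mapping a unit price to the quantity that
    service will buy at that price. The loop raises the price when the
    cluster is over-subscribed and lowers it when capacity is idle,
    damping the step so the price settles near market-clearing.
    """
    step = 0.5
    for _ in range(iters):
        demand = sum(bid(price) for bid in bids)
        price *= (1 + step) if demand > capacity else 1 / (1 + step)
        step *= 0.9  # shrink adjustments so the search converges
    return price, [bid(price) for bid in bids]

# Two services valuing CPU shares differently, 100 units of capacity.
services = [lambda p: 120 / p, lambda p: 60 / p]
price, shares = allocate(services, capacity=100)
print(round(price, 2), [round(s, 1) for s in shares])
```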
Modeling TCP Reno Performance: A Simple Model and Its Empirical Validation - IEEE/ACM Transactions on Networking, 2000
"... Abstract—The steady-state performance of a bulk transfer TCP flow (i.e., a flow with a large amount of data to send, such as FTP transfers) may be characterized by the send rate, which is the amount of data sent by the sender in unit time. In this paper we develop a simple analytic characterization ..."
Abstract
-
Cited by 371 (4 self)
The steady-state performance of a bulk transfer TCP flow (i.e., a flow with a large amount of data to send, such as FTP transfers) may be characterized by the send rate, which is the amount of data sent by the sender in unit time. In this paper we develop a simple analytic characterization of the steady-state send rate as a function of loss rate and round trip time (RTT) for a bulk transfer TCP flow. Unlike the models in [7]–[9], and [12], our model captures not only the behavior of the fast retransmit mechanism but also the effect of the time-out mechanism. Our measurements suggest that this latter behavior is important from a modeling perspective, as almost all of our TCP traces contained more time-out events than fast retransmit events. Our measurements demonstrate that our model is able to more accurately predict TCP send rate and is accurate over a wider range of loss rates. We also present a simple extension of our model to compute the throughput of a bulk transfer TCP flow, which is defined as the amount of data received by the receiver in unit time. Index Terms—Empirical validation, modeling, retransmission timeouts, TCP.
Difficulties in Simulating the Internet - IEEE/ACM Transactions on Networking, 2001
"... Simulating how the global Internet behaves is an immensely challenging undertaking because of the network's great heterogeneity and rapid change. The heterogeneity ranges from the individual links that carry the network's traffic, to the protocols that interoperate over the links, to the & ..."
Abstract
-
Cited by 341 (8 self)
Simulating how the global Internet behaves is an immensely challenging undertaking because of the network's great heterogeneity and rapid change. The heterogeneity ranges from the individual links that carry the network's traffic, to the protocols that interoperate over the links, to the "mix" of different applications used at a site, to the levels of congestion seen on different links. We discuss two key strategies for developing meaningful simulations in the face of these difficulties: searching for invariants, and judiciously exploring the simulation parameter space. We finish with a brief look at a collaborative effort within the research community to develop a common network simulator.
Multicast-Based Inference of Network-Internal Characteristics: Accuracy of Packet Loss Estimation - IEEE Transactions on Information Theory, 1998
"... We explore the use of end-to-end multicast traffic as measurement probes to infer network-internal characteristics. We have developed in an earlier paper [2] a Maximum Likelihood Estimator for packet loss rates on individual links based on losses observed by multicast receivers. This technique explo ..."
Abstract
-
Cited by 323 (40 self)
We explore the use of end-to-end multicast traffic as measurement probes to infer network-internal characteristics. We have developed in an earlier paper [2] a Maximum Likelihood Estimator for packet loss rates on individual links based on losses observed by multicast receivers. This technique exploits the inherent correlation between such observations to infer the performance of paths between branch points in the multicast tree spanning the probe source and its receivers. We evaluate through analysis and simulation the accuracy of our estimator under a variety of network conditions. In particular, we report on the error between inferred loss rates and actual loss rates as we vary the network topology, propagation delay, packet drop policy, background traffic mix, and probe traffic type. In all but one case, estimated losses and probe losses agree to within 2 percent on average. We feel this accuracy is enough to reliably identify congested links in a wide-area internetwork. Keywords—Internet performance, end-to-end measurements, Maximum Likelihood Estimator, tomography.
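For the simplest topology, a source multicasting to two receivers over a shared root link, the loss estimator has a closed form that shows how receiver correlations reveal an internal link. The sketch below simulates probes and applies that estimator; the variable names and simulation setup are assumptions, and the paper's general estimator handles arbitrary trees.

```python
import random

def shared_link_pass_rate(rx1, rx2):
    """Closed-form estimate of the shared link's pass rate a0.

    With independent per-link pass rates a0 (shared) and a1, a2 (leaf
    links), P(rx1) = a0*a1, P(rx2) = a0*a2, and P(both) = a0*a1*a2, so
    a0 = P(rx1) * P(rx2) / P(both), estimated from empirical frequencies.
    rx1, rx2: per-probe booleans, did each receiver get the probe.
    """
    n = len(rx1)
    g1 = sum(rx1) / n
    g2 = sum(rx2) / n
    g12 = sum(a and b for a, b in zip(rx1, rx2)) / n
    return g1 * g2 / g12

random.seed(1)
a0, a1, a2 = 0.95, 0.90, 0.85  # true per-link pass rates
root_ok = [random.random() < a0 for _ in range(10_000)]
rx1 = [ok and random.random() < a1 for ok in root_ok]
rx2 = [ok and random.random() < a2 for ok in root_ok]
print(f"true a0 = {a0}, estimated a0 = {shared_link_pass_rate(rx1, rx2):.3f}")
```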
The End-to-End Effects of Internet Path Selection - In Proceedings of ACM SIGCOMM, 1999
"... The path taken by a packet traveling across the Internet depends on a large number of factors, including routing protocols and pernetwork routing policies. The impact of these factors on the endto -end performance experienced by users is poorly understood. In this paper, we conduct a measurement-bas ..."
Abstract
-
Cited by 307 (10 self)
The path taken by a packet traveling across the Internet depends on a large number of factors, including routing protocols and per-network routing policies. The impact of these factors on the end-to-end performance experienced by users is poorly understood. In this paper, we conduct a measurement-based study comparing the performance seen using the "default" path taken in the Internet with the potential performance available using some alternate path. Our study uses five distinct datasets containing measurements of "path quality", such as round-trip time, loss rate, and bandwidth, taken between pairs of geographically diverse Internet hosts. We construct the set of potential alternate paths by composing these measurements to form new synthetic paths. We find that in 30-80% of the cases, there is an alternate path with significantly superior quality. We argue that the overall result is robust and we explore two hypotheses for explaining it.
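The synthetic-path construction is easy to sketch: given measured quality between host pairs, compose two measured segments through an intermediate host and compare against the direct path. A minimal illustration using round-trip time as the quality metric; the data and names are invented.

```python
def best_alternate_rtt(rtt, src, dst):
    """Compare the direct path with synthetic two-segment alternates.

    rtt maps host -> {host -> measured round-trip time}. An alternate
    path src -> hop -> dst is scored by composing the two measured
    segments, mirroring how synthetic paths are formed from the data.
    """
    direct = rtt[src][dst]
    alternates = [
        (rtt[src][hop] + rtt[hop][dst], hop)
        for hop in rtt
        if hop not in (src, dst) and hop in rtt[src] and dst in rtt[hop]
    ]
    return direct, min(alternates, default=(direct, None))

# Toy measurements in ms between three hosts; detouring via "b" wins.
rtt = {
    "a": {"b": 20, "c": 120},
    "b": {"a": 20, "c": 30},
    "c": {"a": 120, "b": 30},
}
direct, (alt, via) = best_alternate_rtt(rtt, "a", "c")
print(f"direct: {direct} ms, best alternate: {alt} ms via {via}")
```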
Deriving Traffic Demands for Operational IP Networks: Methodology and Experience - IEEE/ACM Transactions on Networking, 2001
"... Engineering a large IP backbone network without an accurate, network-wide view of the traffic demands is challenging. Shifts in user behavior, changes in routing policies, and failures of network elements can result in significant (and sudden) fluctuations in load. In this paper, we present a model ..."
Abstract
-
Cited by 297 (39 self)
Engineering a large IP backbone network without an accurate, network-wide view of the traffic demands is challenging. Shifts in user behavior, changes in routing policies, and failures of network elements can result in significant (and sudden) fluctuations in load. In this paper, we present a model of traffic demands to support traffic engineering and performance debugging of large Internet Service Provider networks. By defining a traffic demand as a volume of load originating from an ingress link and destined to a set of egress links, we can capture and predict how routing affects the traffic traveling between domains. To infer the traffic demands, we propose a measurement methodology that combines flow-level measurements collected at all ingress links with reachability information about all egress links. We discuss how to cope with situations where practical considerations limit the amount and quality of the necessary data. Specifically, we show how to infer interdomain traffic demands using measurements collected at a smaller number of edge links -- the peering links connecting to neighboring providers. We report on our experiences in deriving the traffic demands in the AT&T IP Backbone, by collecting, validating, and joining very large and diverse sets of usage, configuration, and routing data over extended periods of time. The paper concludes with a preliminary analysis of the observed dynamics of the traffic demands and a discussion of the practical implications for traffic engineering.
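The demand model is straightforward to sketch in code: each demand is keyed by an ingress link and the set of egress links that could carry the traffic, and flow measurements are joined against reachability data. The record formats and names below are illustrative assumptions, not the paper's data model.

```python
from collections import defaultdict

def derive_demands(flow_records, egress_sets):
    """Aggregate ingress flow measurements into traffic demands.

    A demand is a volume of load from an ingress link to the *set* of
    egress links that could carry it. flow_records holds
    (ingress_link, dest_prefix, byte_count) tuples measured at ingress
    links; egress_sets maps dest_prefix -> frozenset of egress links,
    derived from reachability/routing data.
    """
    demands = defaultdict(int)
    for ingress, prefix, nbytes in flow_records:
        egress = egress_sets.get(prefix)
        if egress is None:
            continue  # no reachability information for this prefix
        demands[ingress, egress] += nbytes
    return dict(demands)

flows = [("in1", "10.1.0.0/16", 5000),
         ("in1", "10.2.0.0/16", 2500),
         ("in2", "10.1.0.0/16", 1000)]
reach = {"10.1.0.0/16": frozenset({"out1", "out2"}),
         "10.2.0.0/16": frozenset({"out3"})}
for (ingress, egress), volume in derive_demands(flows, reach).items():
    print(ingress, sorted(egress), volume)
```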