Scalable TCP: Improving Performance in Highspeed Wide Area Networks
ACM SIGCOMM Computer Communication Review, 2002
"... TCP congestion control can perform badly in highspeed wide area networks because of its slow response with large congestion windows. The challenge for any alternative protocol is to better utilize networks with high bandwidth-delay products in a simple and robust manner without interacting badly wit ..."
Cited by 373 (0 self)
TCP congestion control can perform badly in highspeed wide area networks because of its slow response with large congestion windows. The challenge for any alternative protocol is to better utilize networks with high bandwidth-delay products in a simple and robust manner without interacting badly with existing traffic. Scalable TCP is a simple sender-side alteration to the TCP congestion window update algorithm. It offers a robust mechanism to improve performance in highspeed wide area networks using traditional TCP receivers. Scalable TCP is designed to be incrementally deployable and behaves identically to traditional TCP stacks when small windows are sufficient. The performance of the scheme is evaluated through experimental results gathered using a Scalable TCP implementation for the Linux operating system and a gigabit transatlantic network. The results gathered suggest that the deployment of Scalable TCP would have negligible impact on existing network traffic at the same time as improving bulk transfer performance in highspeed wide area networks.
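To make the window update concrete, the sketch below shows the Scalable TCP rule as it is usually described: a fixed additive gain per acknowledgement and a gentler multiplicative back-off per loss, falling back to standard TCP behaviour for small windows. The constants a = 0.01 and b = 0.125 and the switch-over window of 16 segments are the commonly quoted values, used here as assumptions rather than details taken from this abstract.

# Illustrative sketch (not the Linux implementation) of the Scalable TCP
# congestion-window update. Constants are the commonly quoted values; the
# legacy threshold below which standard TCP behaviour is kept is an assumption.

LEGACY_WND = 16.0   # segments; assumed switch-over point to standard TCP
A = 0.01            # additive fraction applied per ACK above the threshold
B = 0.125           # multiplicative decrease applied per loss event


def on_ack(cwnd: float) -> float:
    """Window growth for each acknowledged segment."""
    if cwnd < LEGACY_WND:
        return cwnd + 1.0 / cwnd       # standard TCP congestion avoidance
    return cwnd + A                    # Scalable TCP: fixed increment per ACK


def on_loss(cwnd: float) -> float:
    """Window reduction for each loss event."""
    if cwnd < LEGACY_WND:
        return cwnd / 2.0              # standard TCP halving
    return cwnd * (1.0 - B)            # Scalable TCP: gentler back-off

Because the increase and decrease are both proportional to the window, recovery time after a loss becomes independent of the window size, which is what makes the scheme attractive at large bandwidth-delay products.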
Difficulties in Simulating the Internet
IEEE/ACM Transactions on Networking, 2001
"... Simulating how the global Internet behaves is an immensely challenging undertaking because of the network's great heterogeneity and rapid change. The heterogeneity ranges from the individual links that carry the network's traffic, to the protocols that interoperate over the links, to the & ..."
Cited by 341 (8 self)
Simulating how the global Internet behaves is an immensely challenging undertaking because of the network's great heterogeneity and rapid change. The heterogeneity ranges from the individual links that carry the network's traffic, to the protocols that interoperate over the links, to the "mix" of different applications used at a site, to the levels of congestion seen on different links. We discuss two key strategies for developing meaningful simulations in the face of these difficulties: searching for invariants, and judiciously exploring the simulation parameter space. We finish with a brief look at a collaborative effort within the research community to develop a common network simulator.
Why We Don't Know How to Simulate the Internet
1997
"... Simulating how the global Internet data network behaves is an immensely challenging undertaking because of the network's great heterogeneity and rapid change. The heterogeneity ranges from the individual links that carry the network's traffic, to the protocols that interoperate over the li ..."
Cited by 232 (4 self)
Simulating how the global Internet data network behaves is an immensely challenging undertaking because of the network's great heterogeneity and rapid change. The heterogeneity ranges from the individual links that carry the network's traffic, to the protocols that interoperate over the links, to the "mix" of different applications used at a site and the levels of congestion (load) seen on different links. We discuss two key strategies for developing meaningful simulations in the face of these difficulties: searching for invariants and judiciously exploring the simulation parameter space. We finish with a look at a collaborative effort to build a common simulation environment for conducting Internet studies.
Statistical bandwidth sharing: a study of congestion at flow level
2001
"... In this paper we study the statistics of the realized throughput of elastic document transfers, accounting for the way network bandwidth is shared dynamically between the randomly varying number of concurrent flows. We first discuss the way TCP realizes statistical bandwidth sharing, illustrating es ..."
Cited by 214 (23 self)
In this paper we study the statistics of the realized throughput of elastic document transfers, accounting for the way network bandwidth is shared dynamically between the randomly varying number of concurrent flows. We first discuss the way TCP realizes statistical bandwidth sharing, illustrating essential properties by means of packet level simulations. Mathematical flow level models based on the theory of stochastic networks are then proposed to explain the observed behavior. A notable result is that first order performance (e.g., mean throughput) is insensitive with respect both to the flow size distribution and the flow arrival process, as long as “sessions” arrive according to a Poisson process. Perceived performance is shown to depend most significantly on whether demand at flow level is less than or greater than available capacity. The models provide a key to understanding the effectiveness of techniques for congestion management and service differentiation.
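The insensitivity result mentioned above can be stated compactly for the simplest case, a single bottleneck of capacity C shared fairly (processor sharing) with flows arriving in a Poisson manner at rate \lambda and mean size E[\sigma]. This is a textbook sketch of the kind of flow-level model the paper builds on, not a formula quoted from it:

\[
  \rho = \frac{\lambda\,\mathbb{E}[\sigma]}{C}, \qquad
  \mathbb{E}[N] = \frac{\rho}{1-\rho}, \qquad
  \bar{\gamma} = C\,(1-\rho) \qquad (\rho < 1),
\]

where N is the number of flows in progress and \bar{\gamma} is the mean per-flow throughput. Both quantities depend on the flow size distribution only through its mean, which is the insensitivity property, and performance degrades sharply as the flow-level load \rho approaches the available capacity.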
Low-Rate TCP-Targeted Denial of Service Attacks
In Proc. of ACM SIGCOMM, 2003
"... Denial of Service attacks are presenting an increasing threat to the global inter-networking infrastructure. While TCP’s congestion control algorithm is highly robust to diverse network conditions, its implicit assumption of end-system cooperation results in a wellknown vulnerability to attack by hi ..."
Cited by 201 (2 self)
Denial of Service attacks are presenting an increasing threat to the global inter-networking infrastructure. While TCP’s congestion control algorithm is highly robust to diverse network conditions, its implicit assumption of end-system cooperation results in a well-known vulnerability to attack by high-rate non-responsive flows. In this paper, we investigate a class of low-rate denial of service attacks which, unlike high-rate attacks, are difficult for routers and counter-DoS mechanisms to detect. Using a combination of analytical modeling, simulations, and Internet experiments, we show that maliciously chosen low-rate DoS traffic patterns that exploit TCP’s retransmission time-out mechanism can throttle TCP flows to a small fraction of their ideal rate while eluding detection. Moreover, as such attacks exploit protocol homogeneity, we study fundamental limits of the ability of a class of randomized time-out mechanisms to thwart such low-rate DoS attacks.
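As a rough illustration of the traffic pattern the abstract refers to, the sketch below generates a square-wave attack: short bursts repeated with a period chosen near TCP's minimum retransmission timeout (about 1 s in common stacks), so the average rate stays low even though each burst can briefly fill the bottleneck. The specific numbers are illustrative assumptions, not values from the paper.

# Hypothetical sketch of a low-rate, bursty attack pattern: bursts of length
# burst_len_s repeated every period_s seconds. With period_s near TCP's
# minimum RTO, flows that lose packets in a burst keep timing out, while the
# attacker's average rate is only burst_len_s / period_s of its peak rate.

def burst_schedule(period_s: float = 1.0, burst_len_s: float = 0.1,
                   duration_s: float = 10.0):
    """Yield (start, end) intervals during which the attacker transmits."""
    t = 0.0
    while t < duration_s:
        yield (t, min(t + burst_len_s, duration_s))
        t += period_s


if __name__ == "__main__":
    for start, end in burst_schedule():
        print(f"burst: {start:.1f}s - {end:.1f}s")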
NetScope: Traffic Engineering for IP Networks
IEEE Network Magazine, 2000
"... Managing large IP networks requires an understanding of the current traffic ows, routing policies, and network configuration. Yet, the state-of-the-art for managing IP networks involves manual con guration of each IP router, and traffic engineering based on limited measurements. The networking indus ..."
Cited by 147 (35 self)
Managing large IP networks requires an understanding of the current traffic flows, routing policies, and network configuration. Yet, the state-of-the-art for managing IP networks involves manual configuration of each IP router, and traffic engineering based on limited measurements. The networking industry is sorely lacking in software systems that a large Internet Service Provider (ISP) can use to support traffic measurement and network modeling, the underpinnings of effective traffic engineering. This paper describes the AT&T Labs NetScope, a unified set of software tools for managing the performance of IP backbone networks. The key idea behind NetScope is to generate global views of the network, on the basis of configuration and usage data associated with the individual network elements. Having created an appropriate global view, we are able to infer and visualize the network-wide implications of local changes in traffic, configuration, and control. Using NetScope, a network provider can experiment with changes in network configuration in a simulated environment, rather than the operational network. In addition, the tool provides a sound framework for additional modules for network optimization and performance debugging. We demonstrate the capabilities of the tool through an example traffic-engineering exercise of locating a heavily-loaded link, identifying which traffic demands flow on the link, and changing the configuration of intra-domain routing to reduce the congestion.
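The closing traffic-engineering exercise can be illustrated with a toy calculation (this is not NetScope itself): route a demand matrix over shortest paths given intra-domain link weights, then look for the most heavily loaded link; changing a weight and recomputing shows how a local configuration change shifts network-wide load. The topology, weights, and demand volumes below are invented for the example.

# Toy illustration of weight-based traffic engineering: compute per-link load
# from a demand matrix routed over shortest paths, then report the hottest link.
import heapq
from collections import defaultdict


def shortest_path(graph, src, dst):
    """Dijkstra over a dict {node: {neighbor: weight}}; returns the node list."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return list(reversed(path))


def link_loads(graph, demands):
    """Accumulate each demand's volume on every directed link along its path."""
    loads = defaultdict(float)
    for (src, dst), volume in demands.items():
        path = shortest_path(graph, src, dst)
        for u, v in zip(path, path[1:]):
            loads[(u, v)] += volume
    return loads


if __name__ == "__main__":
    # Invented three-node topology (link weights) and demand matrix in Mb/s.
    weights = {"A": {"B": 1, "C": 3}, "B": {"C": 1, "A": 1},
               "C": {"A": 3, "B": 1}}
    demands = {("A", "C"): 80.0, ("B", "C"): 40.0}
    loads = link_loads(weights, demands)
    hot = max(loads, key=loads.get)
    print(f"most loaded link {hot}: {loads[hot]} Mb/s")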
The War Between Mice and Elephants
2001
"... Recent measurement based studies reveal that most of the Internet connections are short in terms of the amount of traffic they carry (mice), while a small fraction of the connections are carrying a large portion of the traffic (elephants). A careful study of the TCP protocol shows that without help ..."
Cited by 141 (11 self)
Recent measurement-based studies reveal that most of the Internet connections are short in terms of the amount of traffic they carry (mice), while a small fraction of the connections carry a large portion of the traffic (elephants). A careful study of the TCP protocol shows that without help from an Active Queue Management (AQM) policy, short connections tend to lose to long connections in their competition for bandwidth. This is because short connections do not gain detailed knowledge of the network state, and therefore they are doomed to be less competitive due to the conservative nature of the TCP congestion control algorithm.
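A back-of-the-envelope sketch of the effect described above: a short transfer spends its whole lifetime in slow start, so its completion time is dominated by round trips and by any timeout it suffers, rather than by the bandwidth an elephant would eventually obtain. The initial window, RTT, and timeout values below are assumptions for illustration only.

# Rough estimate of a short flow's completion time under slow start, where
# the congestion window doubles each RTT. Values are illustrative assumptions.
def slow_start_rounds(n_segments: int, initial_window: int = 2) -> int:
    """Round trips needed to send n_segments when the window doubles per RTT."""
    sent, cwnd, rounds = 0, initial_window, 0
    while sent < n_segments:
        sent += cwnd
        cwnd *= 2
        rounds += 1
    return rounds


if __name__ == "__main__":
    rtt, timeout = 0.1, 1.0   # seconds; assumed values
    for n in (8, 64, 1024):
        t = slow_start_rounds(n) * rtt
        print(f"{n:5d} segments: ~{t:.1f}s loss-free, "
              f"~{t + timeout:.1f}s if one loss forces a timeout")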
OverQoS: An Overlay based Architecture for Enhancing Internet QoS
2004
"... This paper describes the design, implementation, and experimental evaluation of OverQoS, an overlay-based architecture for enhancing the best-effort service of today's Internet. Using a Controlled loss virtual link (CLVL) abstraction to bound the loss rate observed by a traffic aggregate, OverQ ..."
Cited by 138 (6 self)
This paper describes the design, implementation, and experimental evaluation of OverQoS, an overlay-based architecture for enhancing the best-effort service of today's Internet. Using a Controlled loss virtual link (CLVL) abstraction to bound the loss rate observed by a traffic aggregate, OverQoS can provide a variety of services including: (a) smoothing packet losses; (b) prioritizing packets within an aggregate; (c) statistical loss and bandwidth guarantees.
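OverQoS bounds the loss rate of a traffic aggregate through the CLVL abstraction, which combines redundancy and retransmission. As a generic illustration of the redundancy side only (not the paper's algorithm), the sketch below computes the residual block-loss probability of an (n, k) erasure code under independent packet losses; all parameter values are assumptions.

# Generic illustration of bounding loss with redundancy: with an (n, k)
# erasure code and independent packet loss probability p, a block cannot be
# fully recovered only if more than n - k of its n packets are dropped.
from math import comb


def residual_block_loss(n: int, k: int, p: float) -> float:
    """Probability that fewer than k of the n packets in a block survive."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(n - k + 1, n + 1))


if __name__ == "__main__":
    # Assumed numbers: 2% raw loss, 10% redundancy over 100-packet blocks.
    print(f"residual loss ~ {residual_block_loss(110, 100, 0.02):.2e}")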
A Nonstationary Poisson View of Internet Traffic
In Proceedings of IEEE INFOCOM, 2004
"... Since the identification of long-range dependence in network traffic ten years ago, its consistent appearance across numerous measurement studies has largely discredited Poissonbased models. However, since that original data set was collected, both link speeds and the number of Internet-connected ho ..."
Cited by 97 (4 self)
Since the identification of long-range dependence in network traffic ten years ago, its consistent appearance across numerous measurement studies has largely discredited Poisson-based models. However, since that original data set was collected, both link speeds and the number of Internet-connected hosts have increased by more than three orders of magnitude. Thus, we now revisit the Poisson assumption, by studying a combination of historical traces and new measurements obtained from a major backbone link belonging to a Tier 1 ISP. We show that unlike the older data sets, current network traffic can be well represented by the Poisson model for sub-second time scales. At multi-second scales, we find a distinctive piecewise-linear non-stationarity, together with evidence of long-range dependence. Combining our observations across both time scales leads to a time-dependent Poisson characterization of network traffic that, when viewed across very long time scales, exhibits the observed long-range dependence. This traffic characterization reconciles the seemingly contradictory observations of Poisson and long-memory traffic characteristics. It also seems to be in general agreement with recent theoretical models for large-scale traffic aggregation.
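One way to make the time-dependent Poisson characterization concrete is to sample arrivals from a Poisson process whose rate follows a piecewise-linear profile, for example by thinning. The sketch below does this with an invented rate function, purely to illustrate the model class, not the measured backbone traffic.

# Sampling a nonstationary Poisson process by thinning: draw candidate
# arrivals at a bounding rate and keep each with probability rate(t)/rate_max.
import random


def piecewise_linear_rate(t: float) -> float:
    """Example rate (arrivals/s): ramps from 100 to 200 over 60 s, then flat."""
    return 100.0 + 100.0 * min(t, 60.0) / 60.0


def thinned_arrivals(rate, horizon: float, rate_max: float, seed: int = 0):
    """Return arrival times on [0, horizon) for the time-varying rate."""
    rng = random.Random(seed)
    t, out = 0.0, []
    while True:
        t += rng.expovariate(rate_max)
        if t >= horizon:
            return out
        if rng.random() < rate(t) / rate_max:
            out.append(t)


if __name__ == "__main__":
    arrivals = thinned_arrivals(piecewise_linear_rate, horizon=120.0,
                                rate_max=200.0)
    print(f"{len(arrivals)} arrivals in 120 s")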