Results 11 - 20 of 844
Proxy Prefix Caching for Multimedia Streams, 1999
"... Proxies are emerging as an important way to reduce user-perceived latency and network resource requirements in the Internet. While relaying traffic between servers and clients, a proxy can cache resources in the hope of satisfying future client requests directly at the proxy. However, existing techn ..."
Abstract
-
Cited by 288 (17 self)
- Add to MetaCart
Proxies are emerging as an important way to reduce user-perceived latency and network resource requirements in the Internet. While relaying traffic between servers and clients, a proxy can cache resources in the hope of satisfying future client requests directly at the proxy. However, existing techniques for caching text and images are not appropriate for the rapidly growing number of continuous media streams. In addition, high latency and loss rates in the Internet make it difficult to stream audio and video without introducing a large playback delay. To address these problems, we propose that, instead of caching entire audio or video streams (which may be quite large), the proxy should store a prefix consisting of the initial frames of each clip. Upon receiving a request for the stream, the proxy immediately initiates transmission to the client, while simultaneously requesting the remaining frames from the server. In addition to hiding the latency between the server and the proxy, st...
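The mechanism described above (serve cached initial frames immediately while fetching the remainder from the origin server) is simple to illustrate. Below is a minimal Python sketch; the class name, default prefix length, and fetch callback are illustrative assumptions, not the paper's implementation.

```python
class PrefixCachingProxy:
    """Minimal sketch of proxy prefix caching; sizes are illustrative."""

    def __init__(self, prefix_frames=100):
        self.prefix_frames = prefix_frames   # initial frames to keep per clip
        self.cache = {}                      # clip_id -> list of prefix frames

    def ingest(self, clip_id, frames):
        """While relaying a stream, retain only its initial frames."""
        self.cache[clip_id] = frames[:self.prefix_frames]

    def serve(self, clip_id, fetch_from_server):
        """Start playback from the cached prefix immediately, while the
        remaining frames are requested from the origin server."""
        prefix = self.cache.get(clip_id, [])
        yield from prefix   # client sees no server-to-proxy startup latency
        # A real proxy would fetch the suffix in parallel; sequential here.
        yield from fetch_from_server(clip_id, start=len(prefix))
```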
Dynamics of IP traffic: A study of the role of variability and the impact of control, 1999
"... Using the ns-2-simulator to experiment with different aspects of user- or session-behaviors and network configurations and focusing on the qualitative aspects of a wavelet-based scaling analysis, we present a systematic investigation into how and why variability and feedback-control contribute to th ..."
Abstract
-
Cited by 271 (12 self)
- Add to MetaCart
(Show Context)
Using the ns-2 simulator to experiment with different aspects of user or session behaviors and network configurations, and focusing on the qualitative aspects of a wavelet-based scaling analysis, we present a systematic investigation into how and why variability and feedback control contribute to the intriguing scaling properties observed in actual Internet traces (as our benchmark data, we use measured Internet traffic from an ISP). We illustrate how variability of both user aspects and network environments (i) causes self-similar scaling behavior over large time scales, (ii) determines a more or less pronounced change in scaling behavior around a specific time scale, and (iii) sets the stage for the emergence of surprisingly rich scaling dynamics over small time scales, i.e., multifractal scaling. Moreover, our scaling analyses indicate whether or not open-loop controls such as UDP or closed-loop controls such as TCP impact the local or small-scale behavior of the traffic and how the...
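For readers unfamiliar with scaling analysis: the large-time-scale self-similarity the abstract refers to can be estimated directly from a traffic count series. The sketch below uses the simpler aggregated-variance method rather than the paper's wavelet-based analysis; the function name and the set of scales are illustrative.

```python
import numpy as np

def hurst_aggregated_variance(counts, scales=(1, 2, 4, 8, 16, 32, 64)):
    """Estimate the Hurst parameter H of a 1-D numpy array of per-interval
    packet or byte counts. For self-similar traffic, Var(X^(m)) ~ m^(2H - 2),
    so the slope of log-variance against log-scale gives H."""
    logs_m, logs_v = [], []
    for m in scales:
        n = len(counts) // m                              # assumes a long series
        agg = counts[:n * m].reshape(n, m).mean(axis=1)   # aggregate at scale m
        logs_m.append(np.log(m))
        logs_v.append(np.log(agg.var()))
    slope = np.polyfit(logs_m, logs_v, 1)[0]
    return 1 + slope / 2   # slope = 2H - 2

# i.i.d. noise yields H near 0.5; self-similar traffic yields H well above it.
```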
On Estimating End-to-End Network Path Properties, 1999
"... The more information about current network conditions available to a transport protocol, the more efficiently it can use the network to transfer its data. In networks such as the Internet, the transport protocol must often form its own estimates of network properties based on measurements performed ..."
Abstract
-
Cited by 245 (14 self)
- Add to MetaCart
(Show Context)
The more information about current network conditions available to a transport protocol, the more efficiently it can use the network to transfer its data. In networks such as the Internet, the transport protocol must often form its own estimates of network properties based on measurements performed by the connection endpoints. We consider two basic transport estimation problems: determining the setting of the retransmission timer (RTO) for a reliable protocol, and estimating the bandwidth available to a connection as it begins. We look at both of these problems in the context of TCP, using a large TCP measurement set [Pax97b] for trace-driven simulations. For RTO estimation, we evaluate a number of different algorithms, finding that the performance of the estimators is dominated by their minimum values, and to a lesser extent, the timer granularity, while being virtually unaffected by how often round-trip time measurements are made or the settings of the parameters in the exponentially-weighted moving average estimators commonly used. For bandwidth estimation, we explore techniques previously sketched in the literature [Hoe96, AD98] and find that in practice they perform less well than anticipated. We then develop a receiver-side algorithm that performs significantly better.
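The estimator family the paper evaluates is the conventional EWMA scheme (Jacobson-style, later standardized in RFC 2988/6298). The finding above then reads: the minimum bound and the timer granularity are the knobs that matter, not the EWMA gains or the sampling rate. A sketch with illustrative defaults:

```python
class RTOEstimator:
    """Standard EWMA-based retransmission timeout estimator (sketch)."""

    def __init__(self, min_rto=1.0, granularity=0.5):
        self.srtt = None          # smoothed round-trip time
        self.rttvar = None        # round-trip time variation
        self.min_rto = min_rto    # minimum value: the dominant knob
        self.g = granularity      # timer granularity: the secondary knob

    def sample(self, rtt):
        """Feed in one round-trip time measurement (seconds)."""
        if self.srtt is None:
            self.srtt, self.rttvar = rtt, rtt / 2
        else:
            self.rttvar = 0.75 * self.rttvar + 0.25 * abs(self.srtt - rtt)
            self.srtt = 0.875 * self.srtt + 0.125 * rtt

    def rto(self):
        if self.srtt is None:
            return self.min_rto   # no samples yet (real TCP starts at 3 s)
        return max(self.min_rto, self.srtt + max(self.g, 4 * self.rttvar))
```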
Modeling TCP latency - in IEEE INFOCOM, 2000
"... Abstract—Several analytic models describe the steady-state throughput of bulk transfer TCP flows as a function of round trip time and packet loss rate. These models describe flows based on the assumption that they are long enough to sustain many packet losses. However, most TCP transfers across toda ..."
Abstract
-
Cited by 235 (8 self)
- Add to MetaCart
(Show Context)
Several analytic models describe the steady-state throughput of bulk transfer TCP flows as a function of round trip time and packet loss rate. These models describe flows based on the assumption that they are long enough to sustain many packet losses. However, most TCP transfers across today’s Internet are short enough to see few, if any, losses, and consequently their performance is dominated by startup effects such as connection establishment and slow start. This paper extends the steady-state model proposed in [34] in order to capture these startup effects. The extended model characterizes the expected value and distribution of TCP connection establishment and data transfer latency as a function of transfer size, round trip time, and packet loss rate. Using simulations, controlled measurements of TCP transfers, and live Web measurements, we show that, unlike earlier steady-state models for TCP performance, our extended model describes connection establishment and data transfer latency under a range of packet loss conditions, including no loss.
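The dominant term for a short, loss-free transfer is the number of slow-start rounds, which grows roughly logarithmically in transfer size. A back-of-the-envelope sketch of that term only; it is illustrative, not the paper's model, which also covers connection establishment and non-zero loss rates:

```python
def slowstart_latency(size_pkts, rtt, init_window=2, gamma=1.5):
    """Loss-free data-transfer latency for a short TCP flow: the window
    grows by roughly a factor gamma per round (1.5 with delayed ACKs),
    so count the rounds needed to cover the transfer."""
    sent, window, rounds = 0, init_window, 0
    while sent < size_pkts:
        sent += window
        window = max(int(window * gamma), window + 1)  # grow each round
        rounds += 1
    return rounds * rtt

# e.g. a 10-packet transfer at 100 ms RTT:
# slowstart_latency(10, 0.1) -> 0.4 (four rounds)
```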
Why We Don't Know How to Simulate the Internet, 1997
"... Simulating how the global Internet data network behaves is an immensely challenging undertaking because of the network's great heterogeneity and rapid change. The heterogeneity ranges from the individual links that carry the network's traffic, to the protocols that interoperate over the li ..."
Abstract
-
Cited by 232 (4 self)
- Add to MetaCart
(Show Context)
Simulating how the global Internet data network behaves is an immensely challenging undertaking because of the network's great heterogeneity and rapid change. The heterogeneity ranges from the individual links that carry the network's traffic, to the protocols that interoperate over the links, to the "mix" of different applications used at a site and the levels of congestion (load) seen on different links. We discuss two key strategies for developing meaningful simulations in the face of these difficulties: searching for invariants and judiciously exploring the simulation parameter space. We finish with a look at a collaborative effort to build a common simulation environment for conducting Internet studies.
Measuring Link Bandwidths Using a Deterministic Model of Packet Delay - in Proceedings of ACM SIGCOMM, 2000
"... We describe a deterministic model of packet delay and use it to derive both the packet pair [2] property of FIFO-queueing networks and a new technique (packet tailgating) for actively measuring link bandwidths. Compared to previously known techniques, packet tailgating usually consumes less network ..."
Abstract
-
Cited by 223 (3 self)
- Add to MetaCart
We describe a deterministic model of packet delay and use it to derive both the packet pair [2] property of FIFO-queueing networks and a new technique (packet tailgating) for actively measuring link bandwidths. Compared to previously known techniques, packet tailgating usually consumes less network bandwidth, does not rely on consistent behavior of routers handling ICMP packets, and does not rely on timely delivery of acknowledgments. Preliminary empirical measurements in the Internet indicate that compared to current measurement tools, packet tailgating sends an order of magnitude fewer packets, while maintaining approximately the same accuracy. Unfortunately, for all currently available measurement tools, including our prototype implementation of packet tailgating, accuracy is low for paths longer than a few hops. As long as Internet bandwidth has increased, the amount of traffic sent over the Internet has grown to consume it. This means that despite the increasing li...
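The packet pair property the paper derives is easy to state: two back-to-back packets leave the bottleneck link spaced by size/bandwidth, so the receiver-side gap reveals the bottleneck bandwidth. A sketch of that calculation (an illustrative helper, not the paper's tailgating tool):

```python
def packet_pair_estimate(pkt_size_bytes, arrival_gap_s):
    """Packet-pair property of FIFO-queueing paths: the bottleneck link
    spaces back-to-back packets by (size / bottleneck bandwidth), so the
    gap measured at the receiver reveals that bandwidth."""
    return pkt_size_bytes * 8 / arrival_gap_s   # bits per second

# Two 1500-byte packets arriving 1.2 ms apart imply
# packet_pair_estimate(1500, 0.0012) == 10_000_000, i.e. a 10 Mb/s bottleneck.
```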
SPAND: Shared Passive Network Performance Discovery - in USENIX Symposium on Internet Technologies and Systems, 1997
"... In the Internet today, users and applications must often make decisions based on the performance they expect to receive from other Internet hosts. For example, users can often view many Web pages in low-bandwidth or high-bandwidth versions, while other pages present users with long lists of mirror s ..."
Abstract
-
Cited by 221 (8 self)
- Add to MetaCart
In the Internet today, users and applications must often make decisions based on the performance they expect to receive from other Internet hosts. For example, users can often view many Web pages in low-bandwidth or high-bandwidth versions, while other pages present users with long lists of mirror sites to choose from. Current techniques for making these decisions are often ad hoc or poorly designed. The most common solution used today is to require the user to manually make decisions based on their own experience and whatever information is provided by the application. Previous efforts to automate this decision-making process have relied on isolated, active network probes from a host. Unfortunately, this method of making measurements has several problems. Active probing introduces unnecessary network traffic that can quickly become a significant part of the total traffic handled by busy Web servers. Probing from a single host results in less accurate information and more redundant network probes than a system that shares information with nearby hosts. In this paper, we propose a system called SPAND (Shared Passive Network Performance Discovery) that determines network characteristics by making shared, passive measurements from a collection of hosts. We show why using passive measurements from a collection of hosts has advantages over using active measurements from a single host, and that sharing measurements can significantly increase the accuracy and timeliness of predictions. In addition, we present an initial prototype design of SPAND, the current implementation status of our system, and initial performance results that show the potential benefits of SPAND.
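The data structure this implies is a site-wide cache of passive observations keyed by destination. The sketch below is built on assumed field names, units, and a simple freshness window; none of it is SPAND's actual interface.

```python
import time
from collections import defaultdict

class SharedPerfCache:
    """Sketch of the shared-passive-measurement idea: hosts at a site pool
    observations of completed transfers, so later clients can predict
    performance to a server without active probing."""

    def __init__(self, max_age_s=3600):
        self.reports = defaultdict(list)   # server -> [(timestamp, bytes/s)]
        self.max_age_s = max_age_s         # discard stale observations

    def report(self, server, throughput_bps):
        """Called passively whenever any local host finishes a transfer."""
        self.reports[server].append((time.time(), throughput_bps))

    def predict(self, server):
        """Average of recent shared observations, or None if stale/absent."""
        cutoff = time.time() - self.max_age_s
        recent = [tput for ts, tput in self.reports[server] if ts >= cutoff]
        return sum(recent) / len(recent) if recent else None
```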
Data networks as cascades: Investigating the multifractal nature of Internet WAN traffic, 1998
"... In apparent contrast to the well-documented self-similar (i.e., monofractal) scaling behavior of measured LAN traffic, recent studies have suggested that measured TCP/IP and ATM WAN traffic exhibits more complex scaling behavior, consistent with multifractals. To bring multifractals into the realm o ..."
Abstract
-
Cited by 220 (14 self)
- Add to MetaCart
(Show Context)
In apparent contrast to the well-documented self-similar (i.e., monofractal) scaling behavior of measured LAN traffic, recent studies have suggested that measured TCP/IP and ATM WAN traffic exhibits more complex scaling behavior, consistent with multifractals. To bring multifractals into the realm of networking, this paper provides a simple construction based on cascades (also known as multiplicative processes) that is motivated by the protocol hierarchy of IP data networks. The cascade framework allows for a plausible physical explanation of the observed multifractal scaling behavior of data traffic and suggests that the underlying multiplicative structure is a traffic invariant for WAN traffic that co-exists with self-similarity. In particular, cascades allow us to refine the previously observed self-similar nature of data traffic to account for local irregularities in WAN traffic that are typically associated with networking mechanisms operating on small time scales, such as TCP flo...
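The cascade construction itself is compact: start with unit mass on an interval and recursively split it, giving each half a random share. The conservative binomial variant below is a common textbook instance of a multiplicative process, not necessarily the paper's exact construction; the weight 0.7 is illustrative.

```python
import numpy as np

def binomial_cascade(depth, w=0.7, rng=None):
    """Conservative binomial cascade: repeatedly split each interval's
    mass, sending fraction w to one half and 1 - w to the other (side
    chosen at random). After `depth` stages the 2**depth masses exhibit
    multifractal scaling."""
    if rng is None:
        rng = np.random.default_rng()
    mass = np.ones(1)
    for _ in range(depth):
        flips = rng.random(len(mass)) < 0.5
        left = np.where(flips, w, 1 - w) * mass   # left child's share
        mass = np.column_stack([left, mass - left]).ravel()  # conservative split
    return mass

# binomial_cascade(12) -> 4096 highly bursty weights summing to 1.
```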
TCP congestion control - RFC, 1999
"... This document defines TCP’s four intertwined congestion control algorithms: slow start, congestion avoidance, fast retransmit, and fast recovery. In addition, the document specifies how TCP should begin transmission after a relatively long idle period, as well as discussing various acknowledgment ..."
Abstract
-
Cited by 214 (4 self)
- Add to MetaCart
This document defines TCP’s four intertwined congestion control algorithms: slow start, congestion avoidance, fast retransmit, and fast recovery. In addition, the document specifies how TCP should begin transmission after a relatively long idle period, as well as discussing various acknowledgment generation methods.
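In segment units, the four algorithms reduce to a small state machine over the congestion window and slow-start threshold. The sketch below is a deliberate simplification (real TCP operates in bytes, with window inflation during recovery and many more cases), not the RFC's normative text.

```python
class TcpCongestionControl:
    """Sketch of the four intertwined algorithms, in units of segments."""

    def __init__(self, ssthresh=64):
        self.cwnd = 1                   # congestion window
        self.ssthresh = ssthresh        # slow start threshold

    def on_ack(self):
        if self.cwnd < self.ssthresh:
            self.cwnd += 1              # slow start: exponential growth per RTT
        else:
            self.cwnd += 1 / self.cwnd  # congestion avoidance: ~1 segment per RTT

    def on_triple_dupack(self):
        # fast retransmit + fast recovery: halve, don't restart from 1
        self.ssthresh = max(self.cwnd / 2, 2)
        self.cwnd = self.ssthresh

    def on_timeout(self):
        # after a retransmission timeout (or a long idle), slow start again
        self.ssthresh = max(self.cwnd / 2, 2)
        self.cwnd = 1
```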
Bimodal Multicast - ACM Transactions on Computer Systems, 1998
"... This paper looks at reliability with a new goal: development of a multicast protocol which is reliable in a sense that can be rigorously quantified and includes throughput stability guarantees. We characterize this new protocol as a "bimodal multicast" in reference to its reliability model ..."
Abstract
-
Cited by 210 (16 self)
- Add to MetaCart
This paper looks at reliability with a new goal: development of a multicast protocol which is reliable in a sense that can be rigorously quantified and includes throughput stability guarantees. We characterize this new protocol as a "bimodal multicast" in reference to its reliability model, which corresponds to a family of bimodal probability distributions. Here, we introduce the protocol, provide a theoretical analysis of its behavior, review experimental results, and discuss some candidate applications. These confirm that bimodal multicast is reliable and scalable, and that the protocol provides remarkably stable delivery throughput.
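Bimodal multicast's reliability comes from gossip-based anti-entropy layered over best-effort multicast: in each round, processes compare message histories with a few random peers and fill in the gaps. The toy round below is push-style with illustrative names and no network; the real protocol gossips digests and solicits retransmissions rather than pushing payloads.

```python
import random

def gossip_round(peers, have, fanout=2):
    """One push-style gossip round. `have` maps process id -> set of
    message ids received so far. Assumes at least fanout + 1 processes."""
    for p in list(peers):
        targets = random.sample([x for x in peers if x != p], fanout)
        for q in targets:
            have[q] |= have[p] - have[q]   # fill q's gaps from p's history
    # Repeated rounds drive delivery toward the bimodal
    # (all-or-almost-none) distribution the paper analyzes.
```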