
"Measurements and Analysis of End-to-End Internet Dynamics" (1997)

by V Paxson

Results 11 - 20 of 411

Measuring Link Bandwidths Using a Deterministic Model of Packet Delay

by Kevin Lai, Mary Baker - in Proceedings of ACM SIGCOMM, 2000
"... We describe a deterministic model of packet delay and use it to derive both the packet pair [2] property of FIFO-queueing networks and a new technique (packet tailgating) for actively measuring link bandwidths. Compared to previously known techniques, packet tailgating usually consumes less network ..."
Abstract - Cited by 223 (3 self) - Add to MetaCart
We describe a deterministic model of packet delay and use it to derive both the packet pair [2] property of FIFO-queueing networks and a new technique (packet tailgating) for actively measuring link bandwidths. Compared to previously known techniques, packet tailgating usually consumes less network bandwidth, does not rely on consistent behavior of routers handling ICMP packets, and does not rely on timely delivery of acknowledgments. Preliminary empirical measurements in the Internet indicate that, compared to current measurement tools, packet tailgating sends an order of magnitude fewer packets while maintaining approximately the same accuracy. Unfortunately, for all currently available measurement tools, including our prototype implementation of packet tailgating, accuracy is low for paths longer than a few hops.
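The packet pair property the abstract builds on reduces to a one-line computation: two equal-size packets sent back to back leave the bottleneck link spaced by size/bandwidth, so the receiver can invert the observed spacing. A minimal Python sketch of that receiver-side step (function and parameter names are illustrative, not from the paper):

```python
def packet_pair_bandwidth(size_bytes, arrival_first_s, arrival_second_s):
    """Invert the packet pair dispersion to estimate bottleneck bandwidth.

    Two back-to-back packets of equal size leave a bottleneck link of
    bandwidth b spaced by size/b seconds; the receiver recovers b from
    the spacing it observes between the arrivals.
    """
    dispersion_s = arrival_second_s - arrival_first_s
    if dispersion_s <= 0:
        raise ValueError("second arrival must follow the first")
    return size_bytes * 8 / dispersion_s  # bits per second

# Two 1500-byte packets arriving 1.2 ms apart suggest a ~10 Mb/s bottleneck.
print(packet_pair_bandwidth(1500, 0.0, 0.0012))
```

In practice, and in the paper's measurements, many pairs are combined and filtered, since cross traffic perturbs any single sample.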

SPAND: Shared Passive Network Performance Discovery

by Srinivasan Seshan, Mark Stemm, Randy H. Katz - in USENIX Symposium on Internet Technologies and Systems, 1997
"... In the Internet today, users and applications must often make decisions based on the performance they expect to receive from other Internet hosts. For example, users can often view many Web pages in low-bandwidth or high-bandwidth versions, while other pages present users with long lists of mirror s ..."
Abstract - Cited by 221 (8 self) - Add to MetaCart
In the Internet today, users and applications must often make decisions based on the performance they expect to receive from other Internet hosts. For example, users can often view many Web pages in low-bandwidth or high-bandwidth versions, while other pages present users with long lists of mirror sites to choose from. Current techniques for making these decisions are often ad hoc or poorly designed. The most common solution used today is to require the user to manually make decisions based on their own experience and whatever information is provided by the application. Previous efforts to automate this decision-making process have relied on isolated, active network probes from a host. Unfortunately, this method of making measurements has several problems. Active probing introduces unnecessary network traffic that can quickly become a significant part of the total traffic handled by busy Web servers. Probing from a single host results in less accurate information and more redundant network probes than a system that shares information with nearby hosts. In this paper, we propose a system called SPAND (Shared Passive Network Performance Discovery) that determines network characteristics by making shared, passive measurements from a collection of hosts. We show why using passive measurements from a collection of hosts has advantages over using active measurements from a single host, and that sharing measurements can significantly increase the accuracy and timeliness of predictions. In addition, we present an initial prototype design of SPAND, the current implementation status of our system, and initial performance results that show the potential benefits of SPAND.
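The paper's core mechanism, hosts pooling passive observations so a new client can reuse its neighbors' measurements, fits in a small sketch. Everything below (class name, median-of-recent-samples aggregation, freshness window) is my illustration of the idea, not SPAND's actual protocol or message format:

```python
import time
from collections import defaultdict
from statistics import median

class SharedPerfRepository:
    """Toy shared repository: clients in one network region report
    passively observed throughput to servers; any client can then ask
    for a prediction before choosing, e.g., a mirror site."""

    def __init__(self, max_age_s=600.0):
        self.samples = defaultdict(list)  # server -> [(timestamp, bits/sec)]
        self.max_age_s = max_age_s

    def report(self, server, throughput_bps):
        self.samples[server].append((time.time(), throughput_bps))

    def predict(self, server):
        """Median of fresh samples, or None if nothing recent is known."""
        cutoff = time.time() - self.max_age_s
        fresh = [bps for t, bps in self.samples[server] if t >= cutoff]
        return median(fresh) if fresh else None
```

The freshness window leans on the stability result cited in the context below: client-to-server performance is often steady for many minutes, so recent shared samples remain useful predictions.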

Citation Context

...ng their efficiency and sometimes their scalability. Probing from a single host prevents a client from using the past information of nearby clients to predict future performance. Recent studies [3] [34] have shown that network performance from a client to a server is often stable for many minutes and very similar to the performance observed by other nearby clients, so there are potential benefits of...

Nettimer: A Tool for Measuring Bottleneck Link Bandwidth

by Kevin Lai, Mary Baker - in Proceedings of the USENIX Symposium on Internet Technologies and Systems, 2001
"... Measuring the bottleneck link bandwidth along a path is important for understanding the performance of many Internet applications. Existing tools to measure bottleneck bandwidth are relatively slow, can only measure bandwidth in one direction, and/or actively send probe packets. We present the netti ..."
Abstract - Cited by 201 (1 self) - Add to MetaCart
Measuring the bottleneck link bandwidth along a path is important for understanding the performance of many Internet applications. Existing tools to measure bottleneck bandwidth are relatively slow, can only measure bandwidth in one direction, and/or actively send probe packets. We present the nettimer bottleneck link bandwidth measurement tool, the libdpcap distributed packet capture library, and experiments quantifying their utility. We test nettimer across a variety of bottleneck network technologies ranging from 19.2 Kb/s to 100 Mb/s, wired and wireless, symmetric and asymmetric bandwidth, across local area and cross-country paths, while using both one and two packet capture hosts. In most cases, nettimer has an error of less than 10%, but at worst has an error of 40%, even on cross-country paths of 17 or more hops. It converges within 10 KB of the first large packet arrival while consuming less than 7% of the network traffic being measured.

Citation Context

...oping and verifying the validity of an available bandwidth algorithm that deals with that variability is difficult. In contrast, bottleneck link bandwidth is well understood in theory [Kes91] [Bol93] [Pax97] [LB00], and techniques to measure it are straightforward to validate in practice (see Section 4). Moreover, bottleneck link bandwidth measurement techniques have been shown to be accurate and fast in...

Measuring Bandwidth

by Kevin Lai, Mary Baker, 1999
"... Accurate network bandwidth measurement is important to a variety of network applications. Unfortunately, accurate bandwidth measurement is difficult. We describe some current bandwidth measurement techniques: using throughput, pathchar [8], and Packet Pair [2]. We explain some of the problems with t ..."
Abstract - Cited by 199 (4 self) - Add to MetaCart
Accurate network bandwidth measurement is important to a variety of network applications. Unfortunately, accurate bandwidth measurement is difficult. We describe some current bandwidth measurement techniques: using throughput, pathchar [8], and Packet Pair [2]. We explain some of the problems with these techniques, including poor accuracy, poor scalability, lack of statistical robustness, poor agility in adapting to bandwidth changes, lack of flexibility in deployment, and inaccuracy when used on a variety of traffic types. Our solutions to these problems include using a packet window to adapt quickly to bandwidth changes, Receiver Only Packet Pair to combine accuracy and ease of deployment, and Potential Bandwidth Filtering to increase accuracy. Our techniques are at least as accurate as previously used filtering algorithms, and in some situations, our techniques are more than 37% more accurate.
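The abstract names Potential Bandwidth Filtering without detail, but the general shape of such a filter can be sketched: keep the densest cluster of packet pair samples and discard outliers caused by cross traffic. This is a plain mode-style filter of my own construction, not the paper's algorithm (which additionally accounts for varying packet sizes):

```python
from collections import Counter

def densest_bin_estimate(samples_bps, bin_width_bps=1_000_000.0):
    """Return the mean of the densest bin of bandwidth samples.

    Cross traffic tends to spread bad packet pair samples widely, while
    good samples cluster near the true bandwidth, so the most populated
    bin gives a robust point estimate.
    """
    bins = Counter(int(s // bin_width_bps) for s in samples_bps)
    best_bin, _ = bins.most_common(1)[0]
    in_bin = [s for s in samples_bps if int(s // bin_width_bps) == best_bin]
    return sum(in_bin) / len(in_bin)

# Two samples cluster near 10 Mb/s; the outliers at 3.2 and 41 Mb/s are ignored.
print(densest_bin_estimate([9.6e6, 10.4e6, 10.2e6, 3.2e6, 41e6]))  # ~10.3e6
```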

Citation Context

...an an infinite window. 2. We propose the use of Receiver Only Packet Pair to allow the deployment of special software at only one host while achieving accuracy within 1% of Receiver Based Packet Pair [Pax97b]. 3. We propose the use of Potential Bandwidth Filtering to accurately measure bandwidth in the presence of a variety of packet sizes. In such an environment, it is more than 37% more accurate than pr...

Automated Packet Trace Analysis of TCP Implementations

by Vern Paxson - in ACM SIGCOMM
"... We describe tcpanaly, a tool for automatically analyzing a TCP implementation's behavior by inspecting packet traces of the TCP's activity. Doing so requires surmounting a number of hurdles, including detecting packet filter measurement errors, coping with ambiguities due to the distance b ..."
Abstract - Cited by 195 (10 self) - Add to MetaCart
We describe tcpanaly, a tool for automatically analyzing a TCP implementation's behavior by inspecting packet traces of the TCP's activity. Doing so requires surmounting a number of hurdles, including detecting packet filter measurement errors, coping with ambiguities due to the distance between the measurement point and the TCP, and accommodating a surprisingly large range of behavior among different TCP implementations. We discuss why our efforts to develop a fully general tool failed, and detail a number of significant differences among 8 major TCP implementations, some of which, if ubiquitous, would devastate Internet performance. The most problematic TCPs were all independently written, suggesting that correct TCP implementation is fraught with difficulty. Consequently, it behooves the Internet community to develop testing programs and reference implementations.

Estimation and Removal of Clock Skew from Network Delay Measurements

by Sue B. Moon, Paul Skelly, Don Towsley, 1999
"... Packet delay and loss traces are frequently used by network engineers, as well as network applications, to analyze network performance. The clocks on the end-systems used to measure the delays, however, are not always synchronized, and this lack of synchronization reduces the accuracy of these measu ..."
Abstract - Cited by 181 (9 self) - Add to MetaCart
Packet delay and loss traces are frequently used by network engineers, as well as network applications, to analyze network performance. The clocks on the end-systems used to measure the delays, however, are not always synchronized, and this lack of synchronization reduces the accuracy of these measurements. Therefore, estimating and removing relative skews and offsets from delay measurements between sender and receiver clocks are critical to the accurate assessment and analysis of network performance. In this paper we introduce a linear programming-based algorithm to estimate the clock skew in network delay measurements and compare it with three other algorithms. We show that our algorithm has time complexity of O(N), leaves the delay after the skew removal positive, and is robust in the sense that the error margin of the skew estimate is independent of the magnitude of the skew. We use traces of real Internet delay measurements to assess the algorithm, and compare its performance to t...
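The linear program the abstract describes has a compact form: fit a line alpha + beta*t that stays below every measured delay while minimizing the total vertical distance to the points; beta then estimates the relative skew, and subtracting the line leaves all delays positive. A sketch using scipy (variable names are mine, and a generic LP solver stands in for the paper's specialized O(N) method; the paper also handles complications such as clock resets that this omits):

```python
import numpy as np
from scipy.optimize import linprog

def estimate_clock_skew(t, d):
    """Fit alpha + beta*t below all delay samples d, minimizing the
    summed vertical distances; beta approximates the relative skew."""
    t = np.asarray(t, dtype=float)
    d = np.asarray(d, dtype=float)
    n = len(t)
    # minimize sum(d - alpha - beta*t)  ==  maximize n*alpha + sum(t)*beta
    c = [-n, -t.sum()]
    # subject to alpha + beta*t_i <= d_i for every sample
    A_ub = np.column_stack([np.ones(n), t])
    res = linprog(c, A_ub=A_ub, b_ub=d, bounds=[(None, None), (None, None)])
    alpha, beta = res.x
    return alpha, beta  # d - (alpha + beta*t) is the de-skewed delay, >= 0

# Delays drifting ~1 ms/s under one-sided noise: beta should come out near 1e-3.
rng = np.random.default_rng(0)
t = np.arange(100.0)
d = 10.0 + 1e-3 * t + 0.02 * np.abs(rng.standard_normal(100))
print(estimate_clock_skew(t, d))
```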

Measuring and Analyzing the Characteristics of Napster and Gnutella Hosts

by Stefan Saroiu, Krishna P. Gummadi, Steven D. Gribble, 2003
"... The popularity of peer-to-peer multimedia file sharing applications such as Gnutella and Napster has created a flurry of recent research activity into peer-to-peer architectures. We believe that the proper evaluation of a peer-to-peer system must take into account the characteristics of the peers th ..."
Abstract - Cited by 154 (0 self) - Add to MetaCart
The popularity of peer-to-peer multimedia file sharing applications such as Gnutella and Napster has created a flurry of recent research activity into peer-to-peer architectures. We believe that the proper evaluation of a peer-to-peer system must take into account the characteristics of the peers that choose to participate in it. Surprisingly, however, few of the peer-to-peer architectures currently being developed are evaluated with respect to such considerations. In this paper, we remedy this situation by performing a detailed measurement study of the two popular peer-to-peer file sharing systems, namely Napster and Gnutella. In particular, our measurement study seeks to characterize the population of end-user hosts that participate in these two systems. This characterization includes the bottleneck bandwidths between these hosts and the Internet at large, IP-level latencies to send packets to these hosts, how often hosts connect and disconnect from the system, how many files hosts share and download, the degree of cooperation between the hosts, and several correlations between these characteristics. Our measurements show that there is significant heterogeneity and lack of cooperation across peers participating in these systems.

On Calibrating Measurements of Packet Transit Times

by Vern Paxson - in Proceedings of ACM SIGMETRICS, 1998
"... We discuss the problem of detecting errors in measurements of the total delay experienced by packets transmitted through a wide-area network. We assume that we have measurements of the transmission times of a group of packets sent from an originating host, A, and a corresponding set of measurements ..."
Abstract - Cited by 138 (6 self) - Add to MetaCart
We discuss the problem of detecting errors in measurements of the total delay experienced by packets transmitted through a wide-area network. We assume that we have measurements of the transmission times of a group of packets sent from an originating host, A, and a corresponding set of measurements of their arrival times at their destination host, B, recorded by two separate clocks. We also assume that we have a similar series of measurements of packets sent from B to A (as might occur when recording a TCP connection), but we do not assume that the clock at A is synchronized with the clock at B, nor that they run at the same frequency. We develop robust algorithms for detecting abrupt adjustments to either clock, and for estimating the relative skew between the clocks. By analyzing a large set of measurements of Internet TCP connections, we find that both clock adjustments and relative skew are sufficiently common that failing to detect them can lead to potentially large errors when an...
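One idea underlying the detection of abrupt clock adjustments is a symmetry argument: a step of size delta in B's clock shifts the measured A-to-B transit times by -delta and the B-to-A times by +delta. A toy change-point check built on that observation follows; it is my simplified illustration using a median split, not Paxson's robust algorithm, and it assumes the two series have been aligned on a common index, which glosses over pairing details the paper handles carefully:

```python
import numpy as np

def detect_clock_step(fwd_owd, rev_owd, min_step):
    """Look for an index where the forward and reverse one-way delays
    shift by similar magnitudes in opposite directions, the signature
    of a step adjustment in one endpoint's clock."""
    fwd = np.asarray(fwd_owd, dtype=float)
    rev = np.asarray(rev_owd, dtype=float)
    best = None
    for k in range(2, len(fwd) - 2):
        df = np.median(fwd[k:]) - np.median(fwd[:k])
        dr = np.median(rev[k:]) - np.median(rev[:k])
        # opposite-sign level shifts of comparable size suggest a clock step
        if df * dr < 0 and min(abs(df), abs(dr)) >= min_step:
            step = (abs(df) + abs(dr)) / 2
            if best is None or step > best[1]:
                best = (k, step)
    return best  # (index, estimated step size) or None
```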

Citation Context

...unately, a high degree of synchronization between two clocks does not necessarily mean that the clocks are free of relative errors. Finally, the topics in this paper are discussed in greater depth in [Pa97c]. 2 Basic clock terminology In this section we define basic terminology for discussing the characteristics of the clocks used in our study. The Network Time Protocol (NTP; [Mi92a]) defines a nomenclat...

TCP Fast Start: A Technique For Speeding Up Web Transfers

by Venkata N. Padmanabhan, Randy H. Katz, 1998
"... Web browsing is characterized by short and bursty data transfers interspersed by idle periods. The TCP protocol yields poor performance for such a workload because the TCP slow start procedure, which is initiated both at connection start up and upon restart after an idle period, usually requires sev ..."
Abstract - Cited by 117 (3 self) - Add to MetaCart
Web browsing is characterized by short and bursty data transfers interspersed by idle periods. The TCP protocol yields poor performance for such a workload because the TCP slow start procedure, which is initiated both at connection start-up and upon restart after an idle period, usually requires several round trips to probe the network for bandwidth. When a transfer is short in length, this leads to poor bandwidth utilization and increased latency, which limit the performance benefits of techniques such as P-HTTP. In this paper, we present a new technique, which we call TCP fast start, to speed up short Web transfers. The basic idea is that the sender caches network parameters to avoid paying the slow start penalty for each page download. However, there is the risk of performance degradation if the cached information is stale. The two key contributions of our work are in addressing this problem. First, to shield the network as a whole from the ill effects of stale information, packets...
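The caching idea at the heart of fast start is easy to picture: remember recent congestion-control state per destination and start the next connection from it rather than from slow start, falling back to normal behavior when the cache is stale. A sketch of such a cache (field names and the freshness rule are mine, not the paper's implementation, which adds further safeguards against stale state):

```python
import time

class TcpParamCache:
    """Per-destination cache of recently observed connection state."""

    def __init__(self, ttl_s=300.0):
        self.entries = {}  # dest -> (saved_at, cwnd, ssthresh, srtt)
        self.ttl_s = ttl_s

    def save(self, dest, cwnd, ssthresh, srtt):
        """Record state when a connection to dest closes or goes idle."""
        self.entries[dest] = (time.time(), cwnd, ssthresh, srtt)

    def initial_params(self, dest):
        """Return fresh cached state, else conservative defaults that
        amount to ordinary slow start (cwnd of one segment)."""
        entry = self.entries.get(dest)
        if entry and time.time() - entry[0] < self.ttl_s:
            _, cwnd, ssthresh, srtt = entry
            return cwnd, ssthresh, srtt
        return 1, None, None
```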

Citation Context

...e techniques (Section 4 and Section 5). Several researchers have analyzed wide-area network performance and concluded that the available bandwidth is often stable over periods lasting several minutes [4,24]. This suggests that in the common case fast start would indeed be successful. Application-level approaches have also been proposed and implemented to speed up Web transfers. One common approach is to...

On the Marginal Utility of Network Topology Measurements

by Paul Barford, Azer Bestavros, John Byers, Mark Crovella, 2001
"... The cost and complexity of deploying measurement infrastructure in the Internet for the purpose of analyzing its structure and behavior is considerable. Basic questions about the utility of increasing the number of measurements and measurement sites have not yet been addressed which has led to a &qu ..."
Abstract - Cited by 115 (12 self) - Add to MetaCart
The cost and complexity of deploying measurement infrastructure in the Internet for the purpose of analyzing its structure and behavior is considerable. Basic questions about the utility of increasing the number of measurements and measurement sites have not yet been addressed, which has led to a "more is better" approach to wide-area measurement studies. In this paper, we step toward a more quantifiable understanding of the marginal utility of performing wide-area measurements in the context of Internet topology discovery. We characterize the observable topology in terms of nodes, links, node degree distribution, and distribution of end-to-end flows using statistical and information-theoretic techniques. We classify nodes discovered on the routes between a set of 8 sources and 1277 destinations to differentiate nodes which make up the so-called "backbone" from those which border the backbone and those on links between the border nodes and destination nodes. This process includes reducing nodes that advertise multiple interfaces to single IP addresses. We show that the utility of adding sources beyond the second source quickly diminishes from the perspective of interface, node, link and node degree discovery. We also show that the utility of adding destinations is constant for interfaces, nodes, links and node degree, indicating that it is more important to add destinations than sources.
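The diminishing-returns measurement the abstract reports boils down to a running union: for each added vantage point, count how many interfaces, nodes, or links it discovers that earlier vantage points had not. A small sketch of that computation (the input format is my assumption for illustration):

```python
def marginal_discovery(per_source_items):
    """For each measurement source, in the order added, count the items
    (nodes, links, interfaces) not already seen from earlier sources."""
    seen, gains = set(), []
    for items in per_source_items:
        new = set(items) - seen
        gains.append(len(new))
        seen |= new
    return gains

# Three sources with heavily overlapping views: the third adds nothing new.
print(marginal_discovery([{"a", "b", "c"}, {"b", "c", "d"}, {"c", "d"}]))  # [3, 1, 0]
```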