Results 11 - 20 of 604
On the accuracy of embeddings for Internet coordinate systems
- In Proceedings of the Internet Measurement Conference, ACM
, 2005
"... Internet coordinate systems embed Round-Trip-Times (RTTs) between Internet nodes into some geometric space so that unmeasured RTTs can be estimated using distance computation in that space. If accurate, such techniques would allow us to predict Internet RTTs without extensive measurements. The publi ..."
Abstract
-
Cited by 91 (6 self)
- Add to MetaCart
(Show Context)
Internet coordinate systems embed Round-Trip-Times (RTTs) between Internet nodes into some geometric space so that unmeasured RTTs can be estimated using distance computation in that space. If accurate, such techniques would allow us to predict Internet RTTs without extensive measurements. The published techniques appear to work very well when accuracy is measured using metrics such as absolute relative error. Our main observation is that absolute relative error tells us very little about the quality of an embedding as experienced by a user. We define several new accuracy metrics that attempt to quantify various aspects of user-oriented quality. Evaluation of current Internet coordinate systems using our new metrics indicates that their quality is not as high as that suggested by the use of absolute relative error.
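The gap the authors point to, between absolute relative error and what a user actually experiences, is easy to see in a few lines. The sketch below (not the paper's code; the RTTs, coordinates, and rank-style metric are invented for illustration) computes both the average absolute relative error of a toy 2-D embedding and a simple relative-ordering score over pairs that share an endpoint.

    # Toy comparison of absolute relative error vs. a rank-style accuracy measure
    # for a latency embedding.  All data below is made up for illustration.
    import itertools
    import math

    rtt = {  # measured RTTs in ms between three hypothetical nodes
        ("A", "B"): 30.0, ("A", "C"): 100.0, ("B", "C"): 80.0,
    }
    coords = {"A": (0.0, 0.0), "B": (35.0, 0.0), "C": (35.0, 70.0)}  # embedding output

    def predicted(u, v):
        (x1, y1), (x2, y2) = coords[u], coords[v]
        return math.hypot(x1 - x2, y1 - y2)

    # Absolute relative error, averaged over node pairs.
    pairs = list(rtt)
    are = sum(abs(predicted(u, v) - d) / d for (u, v), d in rtt.items()) / len(pairs)

    # A rank-style metric: how often does the embedding order two destinations
    # correctly from a common endpoint?  (One of many possible user-oriented metrics.)
    correct = total = 0
    for (a, b), (c, d) in itertools.combinations(pairs, 2):
        if len(set((a, b)) & set((c, d))) != 1:
            continue
        total += 1
        if (rtt[(a, b)] < rtt[(c, d)]) == (predicted(a, b) < predicted(c, d)):
            correct += 1

    print(f"avg absolute relative error: {are:.2f}")
    print(f"relative-order accuracy:     {correct}/{total}")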
Network coordinates in the wild
- In Proceedings of USENIX NSDI ’07
, 2007
"... Network coordinates provide a mechanism for selecting and placing servers efficiently in a large distributed system. This approach works well as long as the coordinates continue to accurately reflect network topology. We conducted a long-term study of a subset of a million-plus node coordinate syste ..."
Abstract
-
Cited by 81 (2 self)
- Add to MetaCart
(Show Context)
Network coordinates provide a mechanism for selecting and placing servers efficiently in a large distributed system. This approach works well as long as the coordinates continue to accurately reflect network topology. We conducted a long-term study of a subset of a million-plus node coordinate system and found that it exhibited some of the problems for which network coordinates are frequently criticized, for example, inaccuracy and fragility in the presence of violations of the triangle inequality. Fortunately, we show that several simple techniques remedy many of these problems. Using the Azureus BitTorrent network as our testbed, we show that live, large-scale network coordinate systems behave differently than their tame PlanetLab and simulation-based counterparts. We find higher relative errors, more triangle inequality violations, and higher churn. We present and evaluate a number of techniques that, when applied to Azureus, efficiently produce accurate and stable network coordinates.
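For readers unfamiliar with the coordinate systems being studied, the following is a minimal Vivaldi-style update step of the kind such systems run on each RTT sample; the 2-D Euclidean coordinates, fixed gain, and lack of a height vector are simplifying assumptions for illustration, not the deployed algorithm.

    # A minimal Vivaldi-style coordinate update: nudge our coordinate so the
    # embedded distance to the peer moves toward the measured RTT.
    import math
    import random

    def vivaldi_update(my_pos, peer_pos, measured_rtt_ms, gain=0.25):
        dx = my_pos[0] - peer_pos[0]
        dy = my_pos[1] - peer_pos[1]
        dist = math.hypot(dx, dy)
        if dist == 0:                      # coincident points: pick a random direction
            angle = random.uniform(0, 2 * math.pi)
            dx, dy, dist = math.cos(angle), math.sin(angle), 1.0
        error = measured_rtt_ms - dist     # positive -> move apart, negative -> move closer
        ux, uy = dx / dist, dy / dist      # unit vector pointing away from the peer
        return (my_pos[0] + gain * error * ux, my_pos[1] + gain * error * uy)

    pos = (0.0, 0.0)
    for _ in range(50):                    # repeated samples against one peer at RTT 80 ms
        pos = vivaldi_update(pos, (10.0, 10.0), 80.0)
    print(pos, math.hypot(pos[0] - 10.0, pos[1] - 10.0))  # embedded distance ends near 80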
How much anonymity does network latency leak?
- In CCS ’07: Proceedings of the 14th ACM Conference on Computer and Communications Security, ACM
, 2007
"... Low-latency anonymity systems such as Tor, AN.ON, Crowds, and Anonymizer.com aim to provide anonymous connections that are both untraceable by “local ” adversaries who control only a few machines, and have low enough delay to support anonymous use of network services like web browsing and remote log ..."
Abstract
-
Cited by 76 (1 self)
- Add to MetaCart
Low-latency anonymity systems such as Tor, AN.ON, Crowds, and Anonymizer.com aim to provide anonymous connections that are both untraceable by “local” adversaries who control only a few machines, and have low enough delay to support anonymous use of network services like web browsing and remote login. One consequence of these goals is that these services leak some information about the network latency between the sender and one or more nodes in the system. We present two attacks on low-latency anonymity schemes using this information. The first attack allows a pair of colluding web sites to predict, based on local timing information and with no additional resources, whether two connections from the same Tor exit node are using the same circuit with high confidence. The second attack requires more resources but allows a malicious website to gain several bits of information about a client each time he visits the site. We evaluate both attacks against two low-latency anonymity protocols – the Tor network and the MultiProxy proxy aggregator service – and conclude that both are highly vulnerable to these attacks.
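The intuition behind the first attack can be sketched simply: if two connections arriving at colluding sites show closely matching round-trip timing profiles, they plausibly share a circuit. The comparison below is an illustrative toy (the median test, tolerance, and samples are all invented), not the paper's actual classifier.

    # Toy circuit-linking heuristic: compare timing summaries of two connections.
    import statistics

    def likely_same_circuit(samples_a, samples_b, tolerance_ms=15.0):
        # Decide "same circuit" when the median RTTs are within a tolerance.
        return abs(statistics.median(samples_a) - statistics.median(samples_b)) <= tolerance_ms

    conn1 = [412, 425, 418, 430, 421]   # made-up timing samples (ms)
    conn2 = [419, 428, 415, 433, 424]
    conn3 = [605, 598, 612, 590, 601]

    print(likely_same_circuit(conn1, conn2))  # True  -> plausibly the same circuit
    print(likely_same_circuit(conn1, conn3))  # False -> probably different circuits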
Internet routing policies and round-trip-times
- In PAM
, 2005
"... Abstract. Round trip times (RTTs) play an important role in Internet measurements. In this paper, we explore some of the ways in which routing policies impact RTTs. In particular, we investigate how routing policies for both intra- and inter-domain routing can naturally give rise to violations of th ..."
Abstract
-
Cited by 76 (4 self)
- Add to MetaCart
Round trip times (RTTs) play an important role in Internet measurements. In this paper, we explore some of the ways in which routing policies impact RTTs. In particular, we investigate how routing policies for both intra- and inter-domain routing can naturally give rise to violations of the triangle inequality with respect to RTTs. Triangle Inequality Violations (TIVs) might be exploited by overlay routing if an end-to-end forwarding path can be stitched together with paths routed at layer 3. However, TIVs pose a problem for Internet Coordinate Systems that attempt to associate Internet hosts with points in Euclidean space so that RTTs between hosts are accurately captured by distances between their associated points. Three points having RTTs that violate the triangle inequality cannot be embedded into Euclidean space without some level of inaccuracy. We argue that TIVs should not be treated as measurement artifacts, but rather as natural features of the Internet’s structure. In addition to explaining routing policies that give rise to TIVs, we present illustrative examples from the current Internet.
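A triangle inequality violation is straightforward to test for given an RTT matrix. The sketch below uses an invented three-host example in which the direct A-C RTT exceeds the A-B-C detour, the situation the paper attributes to routing policy.

    # Count triangle inequality violations (TIVs) in a small, made-up RTT matrix.
    from itertools import permutations

    rtt = {  # symmetric RTTs in ms between hypothetical hosts
        ("A", "B"): 40.0, ("B", "C"): 35.0, ("A", "C"): 120.0,
    }

    def get(u, v):
        return rtt.get((u, v)) or rtt[(v, u)]

    hosts = {h for pair in rtt for h in pair}
    violations = [
        (a, b, c)
        for a, b, c in permutations(sorted(hosts), 3)
        if a < c and get(a, c) > get(a, b) + get(b, c)   # the detour through b is faster
    ]
    print(violations)   # [('A', 'B', 'C')] -> A-C is slower than A-B-C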
Towards IP geolocation using delay and topology measurements
- In IMC
, 2006
"... We present Topology-based Geolocation (TBG), a novel approach to estimating the geographic location of arbitrary Internet hosts. We motivate our work by showing that 1) existing approaches, based on end-to-end delay measurements from a set of landmarks, fail to outperform much simpler techniques, an ..."
Abstract
-
Cited by 67 (8 self)
- Add to MetaCart
(Show Context)
We present Topology-based Geolocation (TBG), a novel approach to estimating the geographic location of arbitrary Internet hosts. We motivate our work by showing that 1) existing approaches, based on end-to-end delay measurements from a set of landmarks, fail to outperform much simpler techniques, and 2) the error of these approaches is strongly determined by the distance to the nearest landmark, even when triangulation is used to combine estimates from different landmarks. Our approach improves on these earlier techniques by leveraging network topology, along with measurements of network delay, to constrain host position. We convert topology and delay data into a set of constraints, then solve for router and host locations simultaneously. This approach improves the consistency of location estimates, reducing the error substantially for structured networks in our experiments on Abilene and Sprint. For networks with insufficient structural constraints, our techniques integrate external hints that are validated using measurements before being trusted. Together, these techniques lower the median estimation error for our university-based dataset to 67 km vs. 228 km for the best previous approach.
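Purely delay-based geolocation, the baseline TBG improves on, can be sketched as intersecting per-landmark distance bounds derived from RTTs. The coordinates, RTTs, and the 100 km-per-millisecond-of-RTT rule of thumb below are illustrative assumptions; TBG itself additionally folds topology constraints into a joint solve for router and host positions.

    # Intersect per-landmark distance bounds derived from RTTs (planar toy example).
    import math

    # (x_km, y_km, measured RTT to target in ms) -- invented landmark data
    landmarks = [(0.0, 0.0, 8.0), (500.0, 0.0, 4.0), (250.0, 400.0, 6.0)]
    KM_PER_MS_RTT = 100.0   # ~ half the speed of light in fiber, a common rule of thumb

    def feasible(x, y):
        """True if (x, y) lies within every landmark's distance bound."""
        return all(math.hypot(x - lx, y - ly) <= r * KM_PER_MS_RTT
                   for lx, ly, r in landmarks)

    # Grid-scan the feasible region and report its centroid as a crude estimate.
    points = [(x, y) for x in range(0, 801, 10) for y in range(-200, 601, 10)
              if feasible(x, y)]
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    print(f"{len(points)} feasible grid points, centroid ~ ({cx:.0f} km, {cy:.0f} km)")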
Bandwidth-efficient management of DHT routing tables
, 2005
"... Today an application developer using a distributed hash table (DHT) with n nodes must choose a DHT protocol from the spectrum between O(1) lookup protocols [9, 18] and O(log n) protocols [20–23,25,26]. O(1) protocols achieve low latency lookups on small or low-churn networks because lookups take onl ..."
Abstract
-
Cited by 64 (3 self)
- Add to MetaCart
Today an application developer using a distributed hash table (DHT) with n nodes must choose a DHT protocol from the spectrum between O(1) lookup protocols [9, 18] and O(log n) protocols [20–23, 25, 26]. O(1) protocols achieve low-latency lookups on small or low-churn networks because lookups take only a few hops, but incur high maintenance traffic on large or high-churn networks. O(log n) protocols incur less maintenance traffic on large or high-churn networks but require more lookup hops in small networks. Accordion is a new routing protocol that does not force the developer to make this choice: Accordion adjusts itself to provide the best performance across a range of network sizes and churn rates while staying within a bounded bandwidth budget. The key challenges in the design of Accordion are the algorithms that choose the routing table’s size and content. Each Accordion node learns of new neighbors opportunistically, in a way that causes the density of its neighbors to be inversely proportional to their distance in ID space from the node. This distribution allows Accordion to vary the table size along a continuum while still guaranteeing at most O(log n) lookup hops. The user-specified bandwidth budget controls the rate at which a node learns about new neighbors. Each node limits its routing table size by evicting neighbors that it judges likely to have failed. High churn (i.e., short node lifetimes) leads to a high eviction rate. The equilibrium between the learning and eviction processes determines the table size. Simulations show that Accordion maintains an efficient lookup latency versus bandwidth tradeoff over a wider range of operating conditions than existing DHTs.
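The routing-table shape described above, neighbor density inversely proportional to ID-space distance, can be illustrated with a few lines of sampling code. The ID-space size and table size below are invented for illustration and are not Accordion's actual parameters or maintenance machinery.

    # Sample neighbor offsets with density proportional to 1/d in ID space,
    # so nearby regions are densely covered and distant regions sparsely.
    import math
    import random

    ID_SPACE = 2 ** 32

    def sample_neighbor_offset(rng):
        """Inverse-transform sampling of a 1/d density on [1, ID_SPACE):
        uniform in log space."""
        return int(math.exp(rng.uniform(0.0, math.log(ID_SPACE))))

    rng = random.Random(1)
    offsets = sorted(sample_neighbor_offset(rng) for _ in range(64))
    near = sum(1 for d in offsets if d < 2 ** 16)
    print(f"{near} of 64 neighbors lie within 2^16 of the node; the rest thin out with distance")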
Scale and performance in the CoBlitz large-file distribution service
- In Proceedings of the 3rd USENIX/ACM Symposium on Networked Systems Design and Implementation (NSDI)
, 2006
"... Scalable distribution of large files has been the area of much research and commercial interest in the past few years. In this paper, we describe the CoBlitz system, which efficiently distributes large files using a content distribution network (CDN) designed for HTTP. As a result, CoBlitz is able t ..."
Abstract
-
Cited by 62 (6 self)
- Add to MetaCart
(Show Context)
Scalable distribution of large files has been an area of much research and commercial interest in the past few years. In this paper, we describe the CoBlitz system, which efficiently distributes large files using a content distribution network (CDN) designed for HTTP. As a result, CoBlitz is able to serve large files without requiring any modifications to standard Web servers and clients, making it an interesting option for both end users and infrastructure services. Over the 18 months that CoBlitz and its partner service, CoDeploy, have been running on PlanetLab, we have had the opportunity to observe its algorithms in practice, and to evolve its design. These changes stem not only from observations on its use, but also from a better understanding of the algorithms’ behavior in real-world conditions. This utilitarian approach has led us to better understand the effects of scale, peering policies, replication behavior, and congestion, giving us new insights into how to improve performance. With these changes, CoBlitz is able to deliver in excess of 1 Gbps on PlanetLab, and to outperform a range of systems, including research systems as well as the widely-used BitTorrent.
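The core mechanism described above, serving a large file as many small HTTP requests spread across CDN nodes, can be sketched roughly as follows. The chunk size, node names, and hashing scheme are assumptions made for illustration, not CoBlitz's actual interface.

    # Split a large-file download into byte-range chunk requests and assign each
    # chunk to a CDN node by hashing, so ordinary HTTP servers/caches serve the pieces.
    import hashlib

    CHUNK_SIZE = 1 << 20          # 1 MiB chunks (assumed)
    cdn_nodes = ["node-a.example", "node-b.example", "node-c.example"]

    def chunk_plan(url, total_size):
        plan = []
        for index in range(0, total_size, CHUNK_SIZE):
            key = f"{url}#{index}".encode()
            node = cdn_nodes[int(hashlib.sha1(key).hexdigest(), 16) % len(cdn_nodes)]
            byte_range = (index, min(index + CHUNK_SIZE, total_size) - 1)
            plan.append((node, byte_range))
        return plan

    for node, (lo, hi) in chunk_plan("http://example.org/big.iso", 3_500_000):
        print(f"{node}: GET bytes={lo}-{hi}")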
Octant: a comprehensive framework for the geolocalization of Internet hosts
- In Proceedings of the 4th USENIX NSDI
, 2007
"... Determining the physical location of Internet hosts is a critical enabler for many new location-aware services. In this paper, we present Octant, a novel, comprehen-sive framework for determining the location of Internet hosts in the real world based solely on network mea-surements. The key insight ..."
Abstract
-
Cited by 60 (4 self)
- Add to MetaCart
(Show Context)
Determining the physical location of Internet hosts is a critical enabler for many new location-aware services. In this paper, we present Octant, a novel, comprehensive framework for determining the location of Internet hosts in the real world based solely on network measurements. The key insight behind this framework is to pose the geolocalization problem formally as one of error-minimizing constraint satisfaction, to create a system of constraints by deriving them aggressively from network measurements, and to solve the system geometrically to yield the estimated region in which the target resides. This approach gains its accuracy and precision by taking advantage of both positive and negative constraints, that is, constraints on where the node can and cannot be, respectively. The constraints are represented using regions bounded by Bézier curves, allowing precise constraint representation and low-cost geometric operations. The framework can reason in the presence of uncertainty, enabling it to gracefully cope with aggressively derived constraints that may contain errors. An evaluation of Octant using PlanetLab nodes and public traceroute servers shows that Octant can localize the median node to within 22 mi., a factor of three better than other evaluated approaches.
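The positive/negative constraint idea can be illustrated with circles standing in for the Bézier-bounded regions used by Octant. All coordinates and radii below are invented, and the grid scan is only a crude stand-in for the paper's geometric solver.

    # Keep points inside every "at most this far" region and outside every
    # "at least this far" region (toy positive/negative constraints).
    import math

    positive = [((0.0, 0.0), 500.0), ((300.0, 200.0), 400.0)]   # (center_km, max_dist_km)
    negative = [((0.0, 0.0), 150.0)]                            # (center_km, min_dist_km)

    def admissible(x, y):
        inside_all = all(math.hypot(x - cx, y - cy) <= r for (cx, cy), r in positive)
        outside_all = all(math.hypot(x - cx, y - cy) >= r for (cx, cy), r in negative)
        return inside_all and outside_all

    region = [(x, y) for x in range(-600, 601, 20) for y in range(-600, 601, 20)
              if admissible(x, y)]
    print(f"{len(region)} candidate grid points survive the combined constraints")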
iPlane Nano: Path Prediction for Peer-to-Peer Applications
"... Many peer-to-peer distributed applications can benefit from accurate predictions of Internet path performance. Existing approaches either 1) achieve high accuracy for sophisticated path properties, but adopt an unscalable centralized approach, or 2) are lightweight and decentralized, but work only f ..."
Abstract
-
Cited by 60 (10 self)
- Add to MetaCart
(Show Context)
Many peer-to-peer distributed applications can benefit from accurate predictions of Internet path performance. Existing approaches either 1) achieve high accuracy for sophisticated path properties, but adopt an unscalable centralized approach, or 2) are lightweight and decentralized, but work only for latency prediction. In this paper, we present the design and implementation of iPlane Nano, a library for delivering Internet path information to peer-to-peer applications. iPlane Nano is itself a peer-to-peer application, and scales to a large number of end hosts with little centralized infrastructure and with a low cost of participation. The key enabling idea underlying iPlane Nano is a compact model of Internet routing. Our model can accurately predict end-to-end PoP-level paths, latencies, and loss rates between arbitrary hosts on the Internet, with 70% of AS paths predicted exactly in our evaluation set. Yet our model can be stored in less than 7 MB and updated with approximately 1 MB/day. Our evaluation of iPlane Nano shows that it can provide significant performance improvements for large-scale applications. For example, iPlane Nano yields near-optimal download performance for both small and large files in a P2P content delivery system.
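One simple way to picture a routing model that predicts unmeasured paths is path composition: splice an observed path out of the source with an observed path into the destination at a shared intermediate PoP. The sketch below is only that intuition on invented data; iPlane Nano's actual model is considerably more compact and more accurate.

    # Toy path composition: predict an unmeasured path by splicing two observed
    # PoP-level paths at a shared intermediate PoP.
    observed = [
        ["src1", "popA", "popB", "popC", "dst1"],
        ["src2", "popD", "popB", "popE", "dst2"],
    ]

    def predict_path(src, dst):
        from_src = [p for p in observed if p[0] == src]
        to_dst = [p for p in observed if p[-1] == dst]
        for p1 in from_src:
            for p2 in to_dst:
                shared = set(p1[1:-1]) & set(p2[1:-1])
                if shared:
                    pivot = next(iter(shared))
                    return p1[: p1.index(pivot)] + p2[p2.index(pivot):]
        return None

    print(predict_path("src1", "dst2"))   # ['src1', 'popA', 'popB', 'popE', 'dst2']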
Donnybrook: Enabling Large-Scale, High-Speed, Peer-to-Peer Games
"... Without well-provisioned dedicated servers, modern fast-paced action games limit the number of players who can interact simultaneously to 16–32. This is because interacting players must frequently exchange state updates, and high player counts would exceed the bandwidth available to participating ma ..."
Abstract
-
Cited by 59 (6 self)
- Add to MetaCart
(Show Context)
Without well-provisioned dedicated servers, modern fast-paced action games limit the number of players who can interact simultaneously to 16–32. This is because interacting players must frequently exchange state updates, and high player counts would exceed the bandwidth available to participating machines. In this paper, we describe Donnybrook, a system that enables epic-scale battles without dedicated server resources, even in a fast-paced game with tight latency bounds. It achieves this scalability through two novel components. First, it reduces bandwidth demand by estimating what players are paying attention to, thereby enabling it to reduce the frequency of sending less important state updates. Second, it overcomes resource and interest heterogeneity by disseminating updates via a multicast system designed for the special requirements of games: that they have multiple sources, are latency-sensitive, and have frequent group membership changes. We present user study results using a prototype implementation based on Quake III that show our approach provides a desirable user experience. We also present simulation results that demonstrate Donnybrook’s efficacy in enabling battles of up to 900 players.
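The attention-based update scheduling described above can be sketched as scoring each other player and sending high-rate updates only to a small interest set. The scoring weights, set size, and rates below are invented for illustration and are not Donnybrook's actual parameters.

    # Rank other players by a crude attention score and assign update rates:
    # high-rate updates for the top-k interest set, low-rate for everyone else.
    def attention_score(distance_m, degrees_from_crosshair, seconds_since_interaction):
        # Closer, more centered, and more recently interacted-with players score higher.
        return (1.0 / (1.0 + distance_m)
                + 1.0 / (1.0 + degrees_from_crosshair)
                + 1.0 / (1.0 + seconds_since_interaction))

    def update_rates(players, interest_set_size=5, fast_hz=20, slow_hz=1):
        ranked = sorted(players, key=lambda p: attention_score(*players[p]), reverse=True)
        return {p: (fast_hz if i < interest_set_size else slow_hz)
                for i, p in enumerate(ranked)}

    # player -> (distance m, angle from crosshair deg, seconds since last interaction)
    players = {f"p{i}": (10.0 * i, 5.0 * i, float(i)) for i in range(1, 31)}
    rates = update_rates(players)
    print(sum(1 for hz in rates.values() if hz == 20), "players get 20 Hz updates; the rest get 1 Hz")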