Results 1 - 10 of 340
Meridian: A Lightweight Network Location Service without Virtual Coordinates
- In SIGCOMM, 2005
"... This paper introduces a lightweight, scalable and accurate framework, called Meridian, for performing node selection based on network location. The framework consists of an overlay network structured around multi-resolution rings, query routing with direct measurements, and gossip protocols for diss ..."
Abstract - Cited by 190 (8 self)
This paper introduces a lightweight, scalable and accurate framework, called Meridian, for performing node selection based on network location. The framework consists of an overlay network structured around multi-resolution rings, query routing with direct measurements, and gossip protocols for dissemination. We show how this framework can be used to address three commonly encountered problems, namely closest node discovery, central leader election, and locating nodes that satisfy target latency constraints in large-scale distributed systems, without having to compute absolute coordinates. We show analytically that the framework is scalable, with logarithmic convergence, when Internet latencies are modeled as a growth-constrained metric, a low-dimensional Euclidean metric, or a metric of low doubling dimension. Large-scale simulations based on latency measurements from 6.25 million node pairs, as well as an implementation deployed on PlanetLab, show that the framework is accurate and effective.
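As a rough illustration of the multi-resolution ring organization and the greedy closest-node search mentioned in the abstract, consider the sketch below. The ring spacing (ALPHA, S), the improvement threshold BETA, and the on-demand latency function are illustrative assumptions, not Meridian's published parameters.

```python
import math

# Illustrative sketch of multi-resolution rings and greedy closest-node
# discovery. ALPHA, S (ring spacing) and BETA (required improvement factor)
# are assumed values, not Meridian's published constants.
ALPHA, S, BETA = 1.0, 2.0, 0.5   # ring i holds peers with RTT in [ALPHA*S**(i-1), ALPHA*S**i)

class MeridianNode:
    def __init__(self, name, latency_fn):
        self.name = name
        self.latency = latency_fn        # latency(a, b) -> RTT in ms, measured on demand
        self.rings = {}                  # ring index -> set of peer names

    def ring_index(self, rtt):
        return max(0, math.ceil(math.log(max(rtt, ALPHA) / ALPHA, S)))

    def add_peer(self, peer):
        rtt = self.latency(self.name, peer)
        self.rings.setdefault(self.ring_index(rtt), set()).add(peer)

    def closest_to(self, target, nodes):
        """Probe the target, ask members of the ring at that distance to probe
        it too, and forward the query to the best peer if it improves enough."""
        d = self.latency(self.name, target)
        best, best_d = self.name, d
        for peer in self.rings.get(self.ring_index(d), set()):
            pd = nodes[peer].latency(peer, target)
            if pd < best_d:
                best, best_d = peer, pd
        if best == self.name or best_d > BETA * d:
            return best, best_d          # no peer improves enough: stop here
        return nodes[best].closest_to(target, nodes)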
Online Balancing of Range-Partitioned Data with Applications to Peer-to-Peer Systems
- In VLDB, 2004
"... We consider the problem of horizontally partitioning a dynamic relation across a large number of disks/nodes by the use of range partitioning. Such partitioning is often desirable in large-scale parallel databases, as well as in peer-to-peer (P2P) systems. As tuples are inserted and deleted... ..."
Abstract - Cited by 127 (4 self)
We consider the problem of horizontally partitioning a dynamic relation across a large number of disks/nodes by the use of range partitioning. Such partitioning is often desirable in large-scale parallel databases, as well as in peer-to-peer (P2P) systems. As tuples are inserted and deleted...
Design and Implementation Tradeoffs for Wide-Area Resource Discovery
- In Proceedings of the 14th IEEE Symposium on High Performance Distributed Computing (HPDC-14), Research Triangle Park, 2005
"... We describe the design and implementation of SWORD, a scalable resource discovery service for wide-area distributed systems. In contrast to previous systems, SWORD allows users to describe desired resources as a topology of interconnected groups with required intra-group, inter-group, and per-node c ..."
Abstract - Cited by 98 (13 self)
We describe the design and implementation of SWORD, a scalable resource discovery service for wide-area distributed systems. In contrast to previous systems, SWORD allows users to describe desired resources as a topology of interconnected groups with required intra-group, inter-group, and per-node characteristics, along with the utility that the application derives from specified ranges of metric values. This design gives users the flexibility to find geographically distributed resources for applications that are sensitive to both node and network characteristics, and allows the system to rank acceptable configurations based on their quality for that application. Rather than evaluating a single implementation of SWORD, we explore a variety of architectural designs that deliver the required functionality in a scalable and highly-available manner. We discuss the tradeoffs of using a centralized architecture as compared to a fully decentralized design to perform wide-area resource discovery. To summarize our results, we found that a centralized architecture based on 4-node server cluster sites at network peering facilities outperforms a decentralized DHT-based resource discovery infrastructure with respect to query latency for all but the smallest number of sites. However, although a centralized architecture shows significant promise in stable environments, we find that our decentralized implementation has acceptable performance and also benefits from the DHT’s self-healing properties in more volatile environments. We evaluate the advantages and disadvantages of centralized and distributed resource discovery architectures on 1000 hosts in emulation and on approximately 200 PlanetLab nodes spread across the Internet.
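The query model described here, with per-node requirements and utilities over ranges of metric values, can be sketched roughly as follows. This is a hypothetical, heavily simplified rendering of the per-node side only (it ignores inter-group and intra-group constraints), and the field names and piecewise-linear scoring rule are assumptions rather than SWORD's actual query format.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Requirement:
    ideal: Tuple[float, float]        # full-utility range for the metric
    acceptable: Tuple[float, float]   # zero utility outside this range

@dataclass
class Group:
    size: int                                    # number of nodes requested
    per_node: Dict[str, Requirement] = field(default_factory=dict)

def node_utility(metrics: Dict[str, float], group: Group) -> float:
    """Utility of one candidate node: product over metrics of a piecewise-linear
    score (1 inside the ideal range, 0 outside the acceptable range)."""
    total = 1.0
    for name, req in group.per_node.items():
        x = metrics.get(name)
        lo_a, hi_a = req.acceptable
        lo_i, hi_i = req.ideal
        if x is None or not (lo_a <= x <= hi_a):
            return 0.0
        if x < lo_i:
            total *= (x - lo_a) / (lo_i - lo_a)
        elif x > hi_i:
            total *= (hi_a - x) / (hi_a - hi_i)
    return total

def select(candidates: Dict[str, Dict[str, float]], group: Group) -> List[str]:
    """Rank candidates by utility and return the best group.size node names."""
    scored = [(node_utility(m, group), n) for n, m in candidates.items()]
    return [n for u, n in sorted(scored, reverse=True) if u > 0][:group.size]
```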
Colyseus: A Distributed Architecture for Online Multiplayer Games
- In Proc. Symposium on Networked Systems Design and Implementation (NSDI), 2006
"... This paper presents the design, implementation, and evaluation of Colyseus, a distributed architecture for interactive multiplayer games. Colyseus takes advantage of a game’s tolerance for weakly consistent state and predictable workload to meet the tight latency constraints of game-play and maintai ..."
Abstract - Cited by 83 (2 self)
This paper presents the design, implementation, and evaluation of Colyseus, a distributed architecture for interactive multiplayer games. Colyseus takes advantage of a game’s tolerance for weakly consistent state and predictable workload to meet the tight latency constraints of game-play and maintain scalable communication costs. In addition, it provides a rich distributed query interface and an effective pre-fetching subsystem to help locate and replicate objects before they are accessed at a node. We have implemented Colyseus and modified Quake II, a popular first-person shooter game, to use it. Our measurements of Quake II and our own Colyseus-based game with hundreds of players show that Colyseus effectively distributes game traffic across the participating nodes, allowing Colyseus to support low-latency game-play for an order of magnitude more players than existing single-server designs, with similar per-node bandwidth costs.
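The pre-fetching subsystem mentioned above can be illustrated, very loosely, by the sketch below: predict a player's area of interest a short time ahead and issue a range query so object replicas arrive before they are read. The prediction horizon, interest radius, function names, and the dict standing in for the distributed object index are all assumptions, not Colyseus's actual interface.

```python
from typing import Dict, List, Tuple

LOOKAHEAD_S = 0.25      # assumed prediction horizon in seconds
INTEREST_RADIUS = 50.0  # assumed area-of-interest radius in game units

def predicted_interest_box(pos, vel):
    """Axis-aligned box around the player's predicted future position."""
    px = pos[0] + vel[0] * LOOKAHEAD_S
    py = pos[1] + vel[1] * LOOKAHEAD_S
    return (px - INTEREST_RADIUS, py - INTEREST_RADIUS,
            px + INTEREST_RADIUS, py + INTEREST_RADIUS)

def range_query(index: Dict[str, Tuple[float, float]], box) -> List[str]:
    """Stand-in for the distributed object index: ids of objects inside the box
    (a real system would shard this index by region across nodes)."""
    x0, y0, x1, y1 = box
    return [oid for oid, (x, y) in index.items() if x0 <= x <= x1 and y0 <= y <= y1]

def prefetch(index, pos, vel, replicated: set) -> List[str]:
    """Object ids to start replicating now, skipping ones already local."""
    needed = range_query(index, predicted_interest_box(pos, vel))
    return [oid for oid in needed if oid not in replicated]

if __name__ == "__main__":
    objects = {"rocket": (120.0, 35.0), "medkit": (400.0, 400.0)}
    print(prefetch(objects, pos=(100.0, 30.0), vel=(80.0, 0.0), replicated=set()))
```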
Bandwidth-efficient management of DHT routing tables
, 2005
"... Today an application developer using a distributed hash table (DHT) with n nodes must choose a DHT protocol from the spectrum between O(1) lookup protocols [9, 18] and O(log n) protocols [20–23,25,26]. O(1) protocols achieve low latency lookups on small or low-churn networks because lookups take onl ..."
Abstract - Cited by 64 (3 self)
Today an application developer using a distributed hash table (DHT) with n nodes must choose a DHT protocol from the spectrum between O(1) lookup protocols [9, 18] and O(log n) protocols [20–23,25,26]. O(1) protocols achieve low-latency lookups on small or low-churn networks because lookups take only a few hops, but incur high maintenance traffic on large or high-churn networks. O(log n) protocols incur less maintenance traffic on large or high-churn networks but require more lookup hops in small networks. Accordion is a new routing protocol that does not force the developer to make this choice: Accordion adjusts itself to provide the best performance across a range of network sizes and churn rates while staying within a bounded bandwidth budget. The key challenges in the design of Accordion are the algorithms that choose the routing table’s size and content. Each Accordion node learns of new neighbors opportunistically, in a way that causes the density of its neighbors to be inversely proportional to their distance in ID space from the node. This distribution allows Accordion to vary the table size along a continuum while still guaranteeing at most O(log n) lookup hops. The user-specified bandwidth budget controls the rate at which a node learns about new neighbors. Each node limits its routing table size by evicting neighbors that it judges likely to have failed. High churn (i.e., short node lifetimes) leads to a high eviction rate. The equilibrium between the learning and eviction processes determines the table size. Simulations show that Accordion maintains an efficient lookup latency versus bandwidth tradeoff over a wider range of operating conditions than existing DHTs.
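A minimal sketch of the two processes the abstract describes, neighbor learning with density inversely proportional to ID-space distance and eviction of neighbors judged likely dead, is given below. The sampling rule, the liveness model, and the constants are illustrative assumptions, not Accordion's protocol.

```python
import random

ID_BITS = 32
RING = 1 << ID_BITS

def sample_exploration_id(self_id: int) -> int:
    """Draw a target ID whose clockwise distance from self_id is log-uniform,
    so the expected density of learned neighbors is ~ 1/distance in ID space."""
    log_d = random.uniform(0, ID_BITS)           # uniform in log(distance)
    return (self_id + int(2 ** log_d)) % RING

class RoutingTable:
    def __init__(self, self_id: int, liveness_threshold: float = 0.5):
        self.self_id = self_id
        self.threshold = liveness_threshold
        self.entries = {}                         # neighbor id -> estimated P(alive)

    def learn(self, neighbor_id: int):
        """Learning rate would be paced by the user's bandwidth budget."""
        self.entries[neighbor_id] = 1.0           # freshly contacted: assume alive

    def age(self, per_step_survival: float):
        """Decay liveness estimates and evict entries likely to have failed;
        under churn this eviction rate balances the learning rate, which is
        what determines the equilibrium table size in the sketch."""
        for nid in list(self.entries):
            self.entries[nid] *= per_step_survival
            if self.entries[nid] < self.threshold:
                del self.entries[nid]
```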
Distributed resource discovery on PlanetLab with SWORD
- In WORLDS, 2004
"... Large-scale distributed services such as content distribution networks, peer-to-peer storage, distributed games, and scientific applications, have recently received substantial interest from both researchers and industry. At ..."
Abstract - Cited by 58 (0 self)
Large-scale distributed services such as content distribution networks, peer-to-peer storage, distributed games, and scientific applications, have recently received substantial interest from both researchers and industry. At
Friday: Global comprehension for distributed replay
- In Proceedings of the Fourth Symposium on Networked Systems Design and Implementation (NSDI ’07), 2007
"... Debugging and profiling large-scale distributed applications is a daunting task. We present Friday, a system for debugging distributed applications that combines deterministic replay of components with the power of symbolic, low-level debugging and a simple language for expressing higher-level distr ..."
Abstract - Cited by 56 (5 self)
Debugging and profiling large-scale distributed applications is a daunting task. We present Friday, a system for debugging distributed applications that combines deterministic replay of components with the power of symbolic, low-level debugging and a simple language for expressing higher-level distributed conditions and actions. Friday allows the programmer to understand the collective state and dynamics of a distributed collection of coordinated application components. To evaluate Friday, we consider several distributed problems, including routing consistency in overlay networks, and temporal state abnormalities caused by route flaps. We show via micro-benchmarks and larger-scale application measurement that Friday can be used interactively to debug large distributed applications under replay on common hardware.
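The kind of distributed condition described here can be sketched as a global predicate evaluated over replayed node state; the example below checks successor-pointer consistency across an overlay snapshot, similar in spirit to the routing-consistency case mentioned in the abstract. The snapshot format and the Python rendering are assumptions for illustration, not Friday's actual predicate language.

```python
from typing import Dict

def consistent_successors(snapshot: Dict[int, int]) -> bool:
    """snapshot maps node id -> that node's believed successor id, captured at
    the same replayed virtual time. Consistent iff each successor pointer names
    the next live id clockwise on the ring."""
    ids = sorted(snapshot)
    expected = {nid: ids[(i + 1) % len(ids)] for i, nid in enumerate(ids)}
    return all(snapshot[nid] == expected[nid] for nid in ids)

# Example: node 5 still points at a departed node, so the predicate fires.
ok  = {1: 3, 3: 5, 5: 1}
bad = {1: 3, 3: 5, 5: 9}
assert consistent_successors(ok) and not consistent_successors(bad)
```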
A case study in building layered DHT applications
- In SIGCOMM, 2005
"... Recent research has shown that one can use Distributed Hash Tables (DHTs) to build scalable, robust and efficient applications. One question that is often left unanswered is that of simplicity of implementation and deployment. In this paper, we explore a case study of building an application for whi ..."
Abstract - Cited by 54 (2 self)
Recent research has shown that one can use Distributed Hash Tables (DHTs) to build scalable, robust and efficient applications. One question that is often left unanswered is that of simplicity of implementation and deployment. In this paper, we explore a case study of building an application for which ease of deployment dominated the need for high performance. The application we focus on is Place Lab, an end-user positioning system. We evaluate whether it is feasible to use DHTs as an application-independent building block to implement a key component of Place Lab: its “mapping infrastructure.” We present Prefix Hash Trees, a data structure used by Place Lab for geographic range queries that is built entirely on top of a standard DHT. By strictly layering Place Lab’s data structures on top of a generic DHT service, we were able to decouple the deployment and management of Place Lab from that of the underlying DHT. We identify the characteristics of Place Lab that made it amenable to deployment in this layered manner, and comment on the effect of layering on performance.
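A minimal sketch of the Prefix Hash Tree idea, a binary trie over fixed-length bit-string keys in which every trie node is addressed in the DHT by its prefix label, is shown below. The dict standing in for the DHT, the key length, and the leaf capacity are assumed for illustration and do not reproduce the paper's exact structure (for instance, its binary search over prefix lengths during lookup).

```python
DHT = {}            # prefix label -> {"leaf": bool, "keys": set}; stand-in for put/get
BITS = 8            # keys are 8-bit strings like "01101001"
CAPACITY = 2        # split a leaf when it holds more than this many keys

def node(label):
    return DHT.setdefault(label, {"leaf": True, "keys": set()})

def lookup_leaf(key: str) -> str:
    """Descend from the root along the key's bits until a leaf is reached."""
    label = ""
    while not node(label)["leaf"]:
        label += key[len(label)]
    return label

def insert(key: str):
    label = lookup_leaf(key)
    n = node(label)
    n["keys"].add(key)
    if len(n["keys"]) > CAPACITY and len(label) < BITS:
        # Split: push keys one level down and mark this node internal.
        n["leaf"] = False
        for k in n["keys"]:
            node(label + k[len(label)])["keys"].add(k)
        n["keys"] = set()

def range_query(lo: str, hi: str, label="") -> set:
    """Return all keys in [lo, hi], pruning subtrees whose prefix interval
    cannot intersect the query range."""
    pad_lo, pad_hi = label.ljust(BITS, "0"), label.ljust(BITS, "1")
    if pad_hi < lo or pad_lo > hi:
        return set()                    # subtree interval disjoint from the range
    n = node(label)
    if n["leaf"]:
        return {k for k in n["keys"] if lo <= k <= hi}
    return range_query(lo, hi, label + "0") | range_query(lo, hi, label + "1")

for k in ["00000001", "00000010", "00001000", "01000000", "10000000"]:
    insert(k)
print(sorted(range_query("00000000", "00111111")))
```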
Distributed segment tree: Support of range query and cover query over DHT
- In Electronic publications of the 5th International Workshop on Peer-to-Peer Systems (IPTPS ’06), 2006
"... Range query, which is defined as to find all the keys in a certain range over the underlying P2P network, has received a lot of research attentions recently. However, cover query, which is to find all the ranges currently in the system that cover a given key, is rarely touched. In this paper, we fir ..."
Abstract - Cited by 48 (1 self)
Range query, which is defined as finding all the keys in a certain range over the underlying P2P network, has received a lot of research attention recently. However, cover query, which is to find all the ranges currently in the system that cover a given key, has rarely been addressed. In this paper, we first identify cover query as a functionality highly desired by some popular P2P applications, and then propose the distributed segment tree (DST), a layered DHT structure that incorporates the concept of a segment tree. Due to the intrinsic capability of the segment tree to maintain the structure of ranges, DST is shown to be very efficient for supporting both range query and cover query in a uniform way. It also possesses excellent parallelizability in query operations and can achieve O(1) complexity for moderate query ranges. To balance the load among DHT nodes, we design a downward load-stripping mechanism that controls the tradeoff between load and performance. We implemented DST on the publicly available OpenDHT service and performed extensive real experiments. All the results and comparisons demonstrate the effectiveness of DST for several important metrics.
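The cover-query side of the idea can be sketched as follows: a stored range is registered at the canonical segment-tree nodes that exactly cover it, and a cover query for a key walks the key's leaf-to-root path collecting registrations. The dict-based DHT, the universe size, and the interval keys are stand-ins assumed for illustration, not the paper's encoding or its load-stripping mechanism.

```python
SIZE = 16                     # universe [0, 15]; must be a power of two
DHT = {}                      # (lo, hi) segment -> set of range ids registered there

def canonical_segments(lo, hi, seg=(0, SIZE - 1)):
    """Decompose [lo, hi] into disjoint segment-tree node intervals."""
    s_lo, s_hi = seg
    if hi < s_lo or lo > s_hi:
        return []
    if lo <= s_lo and s_hi <= hi:
        return [seg]                      # node interval fully inside the query
    mid = (s_lo + s_hi) // 2
    return (canonical_segments(lo, hi, (s_lo, mid)) +
            canonical_segments(lo, hi, (mid + 1, s_hi)))

def register_range(range_id, lo, hi):
    """Insert a range: one DHT put per canonical segment covering it."""
    for seg in canonical_segments(lo, hi):
        DHT.setdefault(seg, set()).add(range_id)

def cover_query(key):
    """Find all registered ranges covering `key` by walking the leaf-to-root
    path of segment-tree nodes whose interval contains the key."""
    hits, seg = set(), (0, SIZE - 1)
    while True:
        hits |= DHT.get(seg, set())
        s_lo, s_hi = seg
        if s_lo == s_hi:
            return hits
        mid = (s_lo + s_hi) // 2
        seg = (s_lo, mid) if key <= mid else (mid + 1, s_hi)

register_range("r1", 3, 9)
register_range("r2", 8, 15)
print(cover_query(8))   # both r1 and r2 cover key 8
```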