Results 1–10 of 102
Graph-Theoretic Analysis of Structured Peer-to-Peer Systems: Routing Distances and Fault Resilience
, 2003
Cited by 127 (7 self)
Abstract:
This paper examines graph-theoretic properties of existing peer-to-peer architectures and proposes a new infrastructure based on optimal-diameter de Bruijn graphs. Since generalized de Bruijn graphs possess very short average routing distances and high resilience to node failure, they are well suited for structured peer-to-peer networks. Using the example of Chord, CAN, and de Bruijn, we first study routing performance, graph expansion, and clustering properties of each graph. We then examine bisection width, path overlap, and several other properties that affect routing and resilience of peer-to-peer networks. Having confirmed that de Bruijn graphs offer the best diameter and highest connectivity among the existing peer-to-peer structures, we offer a very simple incremental building process that preserves optimal properties of de Bruijn graphs under uniform user joins/departures. We call the combined peer-to-peer architecture
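The short routing distances the abstract refers to come from the de Bruijn graph's shift structure: a route is obtained by shifting the destination's digits into the current node label, so any two of the n = 2^D nodes are at most D = log2(n) hops apart. A minimal sketch (not the paper's actual construction; names are illustrative):

```python
# Illustrative sketch: shift-based routing in a binary de Bruijn graph.
# Each hop drops the oldest digit of the current label and shifts in the
# next digit of the destination, so routes take at most D hops.

def de_bruijn_route(src, dst):
    """Route between two length-D binary strings; returns the node
    sequence visited, ending at dst after at most D hops."""
    assert len(src) == len(dst)
    path, cur = [src], src
    for digit in dst:
        cur = cur[1:] + digit  # drop oldest digit, shift in a target digit
        path.append(cur)
    return path

# Diameter-3 example: 000 reaches 101 in exactly 3 hops.
print(de_bruijn_route("000", "101"))  # ['000', '001', '010', '101']
```

Smarter variants shorten the path further when a suffix of the source already matches a prefix of the destination; the sketch above only shows the worst-case D-hop shift.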
Complex Networks and Decentralized Search Algorithms
 In Proceedings of the International Congress of Mathematicians (ICM)
, 2006
Cited by 111 (1 self)
Abstract:
The study of complex networks has emerged over the past several years as a theme spanning many disciplines, ranging from mathematics and computer science to the social and biological sciences. A significant amount of recent work in this area has focused on the development of random graph models that capture some of the qualitative properties observed in large-scale network data; such models have the potential to help us reason, at a general level, about the ways in which real-world networks are organized. We survey one particular line of network research, concerned with small-world phenomena and decentralized search algorithms, that illustrates this style of analysis. We begin by describing a well-known experiment that provided the first empirical basis for the "six degrees of separation" phenomenon in social networks; we then discuss some probabilistic network models motivated by this work, illustrating how these models lead to novel algorithmic and graph-theoretic questions, and how they are supported by recent empirical studies of large social networks.
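The decentralized-search setting this survey covers can be illustrated with a toy Kleinberg-style ring: each node knows its ring neighbors plus one long-range contact drawn with probability proportional to 1/distance, and greedy forwarding always moves to the neighbor closest to the target. A minimal sketch, with all parameter choices and names illustrative rather than taken from the survey:

```python
import random

def build_links(n, rng):
    """Each node u gets its two ring neighbors plus one long-range contact,
    chosen with probability proportional to 1/ring-distance (the harmonic
    distribution under which greedy search performs well)."""
    links = {}
    for u in range(n):
        others = [v for v in range(n) if v != u]
        weights = [1.0 / min(abs(u - v), n - abs(u - v)) for v in others]
        long_range = rng.choices(others, weights=weights)[0]
        links[u] = {(u - 1) % n, (u + 1) % n, long_range}
    return links

def greedy_route(links, src, dst, n):
    """Decentralized greedy search: forward to the known neighbor closest
    to dst on the ring; returns the hop count. Terminates because a ring
    neighbor always strictly reduces the remaining distance."""
    hops, cur = 0, src
    while cur != dst:
        cur = min(links[cur], key=lambda v: min(abs(v - dst), n - abs(v - dst)))
        hops += 1
    return hops

print(greedy_route(build_links(64, random.Random(0)), 0, 32, 64))
```

With harmonic long-range links the expected hop count is polylogarithmic in n, which is the phenomenon the survey's analysis explains.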
Hybrid search schemes for unstructured peer-to-peer networks
 In Proceedings of IEEE INFOCOM
, 2005
Cited by 100 (2 self)
Abstract:
We study hybrid search schemes for unstructured peer-to-peer networks. We quantify performance in terms of number of hits, network overhead, and response time. Our schemes combine flooding and random walks, look-ahead, and replication. We consider both regular topologies and topologies with supernodes. We introduce a general search scheme, of which flooding and random walks are special instances, and show how to use locally maintained network information to improve the performance of searching. Our main findings are: (a) A small number of supernodes in an otherwise regular topology can offer sharp savings in the performance of search, both in the case of search by flooding and search by random walk, particularly when combined with 1-step replication. We quantify, analytically and experimentally, that the reason for these savings is that the search is biased towards nodes that yield more information. (b) There is a generalization of search, of which flooding and random walk are special instances, which may take further advantage of locally maintained network information and yield better performance than both flooding and random walk in clustered topologies. The method determines edge criticality and is reminiscent of fundamental heuristics from the area of approximation algorithms.
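The look-ahead idea can be caricatured in a few lines: a walker holding a node also checks that node's neighbors' indices before taking a step, in the spirit of 1-step replication. This is an illustrative toy, not the paper's model; the graph and message accounting are simplified assumptions:

```python
import random

def random_walk_search(adj, holds, start, target, max_steps, rng):
    """Single-walker random-walk search with one-step look-ahead: at each
    node, check its own index and its neighbors' indices; return the
    number of forwarding messages on a hit, or None if the budget runs out."""
    cur, msgs = start, 0
    for _ in range(max_steps):
        if target in holds.get(cur, set()) or \
           any(target in holds.get(v, set()) for v in adj[cur]):
            return msgs
        cur = rng.choice(adj[cur])
        msgs += 1
    return None

# Tiny path graph 0-1-2; node 2 holds object "x". Starting at node 1, the
# look-ahead finds it with zero forwarding messages:
adj = {0: [1], 1: [0, 2], 2: [1]}
print(random_walk_search(adj, {2: {"x"}}, 1, "x", 10, random.Random(0)))  # 0
```

The saving the paper quantifies comes from exactly this effect: look-ahead (or replicating indices one hop out) lets a walk terminate one neighborhood earlier, which compounds with high-degree supernodes.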
Minimizing churn in distributed systems
, 2006
Cited by 80 (3 self)
Abstract:
A pervasive requirement of distributed systems is to deal with churn — change in the set of participating nodes due to joins, graceful leaves, and failures. A high churn rate can increase costs or decrease service quality. This paper studies how to reduce churn by selecting which subset of a set of available nodes to use. First, we provide a comparison of the performance of a range of different node selection strategies in five real-world traces. Among our findings is that the simple strategy of picking a uniform-random replacement whenever a node fails performs surprisingly well. We explain its performance through analysis in a stochastic model. Second, we show that a class of strategies, which we call “Preference List” strategies, arise commonly as a result of optimizing for a metric other than churn, and produce high churn relative to more randomized strategies under realistic node failure patterns. Using this insight, we demonstrate and explain differences in performance for designs that incorporate varying degrees of randomization. We give examples from a variety of protocols, including anycast, overlay multicast, and distributed hash tables. In many cases, simply adding some randomization can go a long way towards reducing churn.
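The contrast between the two strategy classes can be caricatured in a toy simulation (assumption: synthetic heterogeneous failure rates rather than the paper's real-world traces): a Preference List strategy that always refills from the top of a fixed ranking keeps re-selecting flaky nodes, while uniform-random replacement drifts toward stable ones.

```python
import random

def simulate(strategy, k=10, rounds=200, pool_size=100, seed=0):
    """Count replacement events ("churn") while maintaining a working set
    of k nodes. Nodes 0..19 are flaky; the rest are stable (toy rates)."""
    rng = random.Random(seed)
    fail_p = [0.5 if u < 20 else 0.01 for u in range(pool_size)]
    active, churn = set(range(k)), 0
    for _ in range(rounds):
        active = {u for u in active if rng.random() >= fail_p[u]}
        while len(active) < k:
            free = [u for u in range(pool_size) if u not in active]
            # Preference List: always the lowest-ranked free node;
            # otherwise: a uniform-random free node.
            u = free[0] if strategy == "preference" else rng.choice(free)
            active.add(u)
            churn += 1
    return churn

print(simulate("preference"), simulate("random"))
```

Because the fixed ranking here happens to favor flaky nodes, the preference strategy churns far more; randomization escapes that trap, which is the qualitative effect the paper demonstrates on real traces.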
Distance Estimation and Object Location via Rings of Neighbors
 In 24th Annual ACM Symposium on Principles of Distributed Computing (PODC)
, 2005
Cited by 77 (7 self)
Abstract:
We consider four problems on distance estimation and object location which share the common flavor of capturing global information via informative node labels: low-stretch routing schemes [47], distance labeling [24], searchable small worlds [30], and triangulation-based distance estimation [33]. Focusing on metrics of low doubling dimension, we approach these problems with a common technique called rings of neighbors, which refers to a sparse distributed data structure that underlies all our constructions. Apart from improving the previously known bounds for these problems, our contributions include extending Kleinberg’s small-world model to doubling metrics, and a short proof of the main result in Chan et al. [14]. Doubling dimension is a notion of dimensionality for general metrics that has recently become a useful algorithmic concept in the theoretical computer science literature.
Virtual Coordinates for Ad hoc and Sensor Networks
, 2004
Cited by 58 (9 self)
Abstract:
In many applications of wireless ad hoc and sensor networks, position-awareness is of great importance. Often, as in the case of geometric routing, it is sufficient to have virtual coordinates rather than real coordinates. In this paper, we address the problem of obtaining virtual coordinates based on connectivity information. In particular, we propose the first approximation algorithm for this problem and discuss implementation aspects.
On the Windfall of Friendship: Inoculation Strategies on Social Networks
 EC'08
, 2008
Cited by 23 (2 self)
Abstract:
This paper studies a virus inoculation game on social networks. A framework is presented which allows measuring the windfall of friendship, i.e., how much players benefit if they care about the welfare of their direct neighbors in the social network graph compared to purely selfish environments. We analyze the corresponding equilibria and show that computing the worst and best Nash equilibrium is NP-hard. Intriguingly, even though the windfall of friendship can never be negative, the social welfare does not increase monotonically with the extent to which players care for each other. While these phenomena are known on an anecdotal level, our framework allows us to quantify these effects analytically.
Know thy Neighbor's Neighbor: Better Routing for Skip-Graphs and Small Worlds
 In Proc. of IPTPS
, 2004
Cited by 22 (1 self)
Abstract:
We investigate an approach for routing in p2p networks called neighbor-of-neighbor greedy. We show that this approach may significantly reduce the number of hops used when routing in skip graphs and small worlds. Furthermore, we show that a simple variation of Chord is degree optimal. Our algorithm is implemented on top of the conventional greedy algorithms, so it maintains the good properties of greedy routing; implementing it can only improve the performance of the system.
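The neighbor-of-neighbor idea can be sketched on a ring metric (an illustrative stand-in for the paper's skip-graph and small-world settings): instead of moving to the neighbor closest to the target, move to the neighbor from whose own neighborhood the smallest remaining distance is reachable, using only locally exchanged neighbor lists. Names below are illustrative:

```python
def ring_dist(a, b, n):
    """Shortest distance between identifiers a and b on an n-node ring."""
    return min(abs(a - b), n - abs(a - b))

def non_greedy_step(links, cur, dst, n):
    """One neighbor-of-neighbor hop: choose the neighbor w of cur whose
    neighborhood (w together with w's own links) gets closest to dst."""
    best_w, best_d = None, None
    for w in links[cur]:
        d = min(ring_dist(z, dst, n) for z in links[w] | {w})
        if best_d is None or d < best_d:
            best_w, best_d = w, d
    return best_w

# On a plain 8-node ring, node 0 routing toward 3 steps to neighbor 1,
# since a node at distance 1 from the target is reachable from 1's links:
ring = {u: {(u - 1) % 8, (u + 1) % 8} for u in range(8)}
print(non_greedy_step(ring, 0, 3, 8))  # 1
```

Because each choice falls back to the plain greedy one when the extra information does not help, the step above never does worse than ordinary greedy routing, which matches the abstract's claim that the modification can only improve performance.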
Decentralized algorithms using both local and random probes for p2p load balancing
 In Seventeenth ACM Symposium on Parallelism in Algorithms and Architectures (SPAA)
, 2005
Cited by 20 (0 self)
Abstract:
We study randomized algorithms for placing a sequence of n nodes on a circle with unit perimeter. Nodes divide the circle into disjoint arcs. We desire that a newly arrived node (which is oblivious of its index in the sequence) choose its position on the circle by learning the positions of as few existing nodes as possible. At the same time, we desire that the variation in arc lengths be small. To this end, we propose a new algorithm that works as follows: the k-th node chooses r random points on the circle, inspects the sizes of v arcs in the vicinity of each random point, and places itself at the midpoint of the largest arc encountered. We show that for any combination of r and v satisfying rv ≥ c log k, where c is a small constant, the ratio of the largest to the smallest arc length is at most eight w.h.p., for an arbitrarily long
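The placement rule itself is simple enough to sketch. One assumption in the sketch: "arcs in the vicinity" is read as the v arcs clockwise of each probe point; the harness names and parameters are illustrative, not the paper's:

```python
import bisect
import random

def place_node(positions, r, v, rng):
    """Insert one node: probe r random points, inspect the v arcs
    clockwise of each probe, split the largest arc seen at its midpoint.
    positions is a sorted list of node positions on the unit circle."""
    best = None  # (arc_length, midpoint)
    m = len(positions)
    for _ in range(r):
        p = rng.random()
        i = bisect.bisect_left(positions, p) % m
        for j in range(v):
            a = positions[(i + j) % m]
            b = positions[(i + j + 1) % m]
            length = (b - a) % 1.0 or 1.0  # full circle when only one node
            if best is None or length > best[0]:
                best = (length, (a + length / 2) % 1.0)
    bisect.insort(positions, best[1])

# Place 128 nodes with r=2 probes and v=4 inspected arcs per probe:
rng = random.Random(1)
positions = [0.0]
for _ in range(127):
    place_node(positions, 2, 4, rng)
```

The paper's result is that with rv on the order of log k, the max-to-min arc ratio stays at most eight with high probability; this toy harness only illustrates the mechanics of probing and midpoint splitting.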
Skipwebs: Efficient distributed data structures for multidimensional data sets
 In 24th ACM Symp. on Principles of Distributed Computing (PODC)
, 2005
Cited by 19 (0 self)
Abstract:
We present a framework for designing efficient distributed data structures for multidimensional data. Our structures, which we call skip-webs, extend and improve previous randomized distributed data structures, including skipnets and skip graphs. Our framework applies to a general class of data querying scenarios, which include linear (one-dimensional) data, such as sorted sets, as well as multidimensional data, such as d-dimensional octrees and digital tries of character strings defined over a fixed alphabet. We show how to perform a query over such a set of n items spread among n hosts using O(log n / log log n) messages for one-dimensional data, or O(log n) messages for fixed-dimensional data, while using only O(log n) space per host. We also show how to make such structures dynamic so as to allow for insertions and deletions in O(log n) messages for quadtrees, octrees, and digital tries, and O(log n / log log n) messages for one-dimensional data. Finally, we show how to apply a blocking strategy to skip-webs to further improve message complexity for one-dimensional data when hosts can store more data.