Results 1 - 10 of 222
CoolStreaming/DONet: A Data-driven Overlay Network for Peer-to-Peer Live Media Streaming
- IEEE Infocom, 2005
"... This paper presents DONet, a Data-driven Overlay Network for live media streaming. The core operations in DONet are very simple: every node periodically exchanges data availability information with a set of partners, and retrieves unavailable data from one or more partners, or supplies available dat ..."
Cited by 475 (42 self)
This paper presents DONet, a Data-driven Overlay Network for live media streaming. The core operations in DONet are very simple: every node periodically exchanges data availability information with a set of partners, retrieves unavailable data from one or more partners, and supplies available data to partners. We emphasize three salient features of this data-driven design: 1) it is easy to implement, as it does not have to construct and maintain a complex global structure; 2) it is efficient, as data forwarding is dynamically determined according to data availability rather than restricted to specific directions; and 3) it is robust and resilient, as the partnerships enable adaptive and quick switching among multiple suppliers. We show through analysis that DONet is scalable with bounded delay. We also address a set of practical challenges in realizing DONet, and propose an efficient membership and partnership management algorithm, together with an intelligent scheduling algorithm that achieves real-time, continuous distribution of streaming content.
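The exchange-and-pull loop described above is simple enough to sketch. The following minimal Python illustration is ours, not the authors' code: partners advertise buffer maps, and missing segments are requested rarest-first, a stand-in for DONet's deadline- and bandwidth-aware scheduler. All names are hypothetical.

```python
import random

class Node:
    """Hypothetical DONet-style node: a partner set plus a buffer of segments."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.partners = set()   # partner nodes, maintained by membership gossip
        self.buffer = set()     # ids of stream segments currently held

    def buffer_map(self):
        """Advertise data availability to partners (the core DONet exchange)."""
        return frozenset(self.buffer)

    def schedule_pulls(self, partner_maps):
        """Decide which partner to pull each missing segment from.

        partner_maps: {partner: buffer_map}. We pull rarest segments first
        and break ties at random; the real scheduler also weighs playback
        deadlines and partner bandwidth.
        """
        holders = {}
        for partner, bmap in partner_maps.items():
            for seg in bmap - self.buffer:
                holders.setdefault(seg, []).append(partner)
        return {seg: random.choice(holders[seg])
                for seg in sorted(holders, key=lambda s: len(holders[s]))}
```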
Lightweight probabilistic broadcast
- ACM Transactions on Computer Systems, 2003
"... The growing interest in peer-to-peer applications has underlined the importance of scalability in modern distributed systems. Not surprisingly, much research effort has been invested in gossip-based broadcast protocols. These trade the traditional strong reliability guarantees against very good “sca ..."
Cited by 302 (35 self)
The growing interest in peer-to-peer applications has underlined the importance of scalability in modern distributed systems. Not surprisingly, much research effort has been invested in gossip-based broadcast protocols. These trade the traditional strong reliability guarantees for very good scalability properties. Scalability is in that context usually expressed in terms of throughput and delivery latency, but there has been little work on reducing the overhead of membership management at large scale. This paper presents Lightweight Probabilistic Broadcast (lpbcast), a novel gossip-based broadcast algorithm that preserves the inherent throughput scalability of traditional gossip-based algorithms while adding a notion of membership management scalability: every process knows only a fixed-size random subset of the processes in the system. We formally analyze our broadcast algorithm in terms of scalability with respect to the size of individual views, and compare the analytical results with both simulations and concrete measurements.
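The membership-scalability idea, each process knowing only a fixed-size random subset of the others, comes down to a merge-then-truncate rule on views. A minimal sketch under assumed names (VIEW_SIZE and FANOUT mirror the paper's l and F); event buffering and retransmission are omitted.

```python
import random

VIEW_SIZE = 10   # fixed-size partial view (l in the paper)
FANOUT = 3       # gossip targets per round (F in the paper)

def merge_view(view, subscriptions_in_message):
    """lpbcast-style view maintenance: add the subscriptions carried by an
    incoming gossip message, then drop random entries so the view stays at
    its fixed size. Membership state is O(VIEW_SIZE) regardless of system size."""
    view = set(view) | set(subscriptions_in_message)
    while len(view) > VIEW_SIZE:
        view.discard(random.choice(tuple(view)))
    return view

def gossip_targets(view):
    """Pick FANOUT random peers from the partial view to gossip to."""
    return random.sample(tuple(view), min(FANOUT, len(view)))
```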
CYCLON: Inexpensive Membership Management for Unstructured P2P Overlays
- Journal of Network and Systems Management, 2005
"... Unstructured overlays form an important class of peer-to-peer networks, notably when content-based searching is at stake. The construction of these overlays, which is essentially a membership management issue, is crucial. Ideally, the resulting overlays should have low diameter and be resilient to m ..."
Cited by 223 (25 self)
Unstructured overlays form an important class of peer-to-peer networks, notably when content-based searching is at stake. The construction of these overlays, which is essentially a membership management issue, is crucial. Ideally, the resulting overlays should have low diameter and be resilient to massive node failures, both characteristic properties of random graphs. In addition, they should be able to deal with high node churn, i.e., frequent membership changes. Inexpensive membership management that retains random-graph properties is therefore important. In this paper, we describe a novel gossip-based membership management protocol that meets these requirements. Our protocol is shown to construct graphs that have low diameter, low clustering, and highly symmetric node degrees, and that are highly resilient to massive node failures. Moreover, we show that the protocol quickly restores randomness after a large number of nodes fail. Key words: membership management; peer-to-peer; epidemic/gossiping protocols; unstructured overlays; random graphs.
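The exchange behind CYCLON is an age-based shuffle. The condensed sketch below assumes a view represented as a dict from peer id to age; the names are ours, and failure handling is omitted.

```python
import random

VIEW_SIZE = 20    # c in the paper
SHUFFLE_LEN = 5   # l in the paper: entries exchanged per shuffle

def start_shuffle(my_id, view):
    """Initiator side of one shuffle: age all entries, pick the oldest peer
    as partner (and drop it), then send l-1 random entries plus a fresh
    age-0 entry for ourselves."""
    for p in view:
        view[p] += 1
    partner = max(view, key=view.get)
    del view[partner]
    sample = random.sample(list(view), min(SHUFFLE_LEN - 1, len(view)))
    to_send = {p: view[p] for p in sample}
    to_send[my_id] = 0
    return partner, to_send

def merge_shuffle(my_id, view, sent, received):
    """Insert received entries, skipping ourselves and duplicates; when the
    view is full, evict entries we shipped out in this shuffle."""
    for p, age in received.items():
        if p == my_id or p in view:
            continue
        if len(view) >= VIEW_SIZE:
            evictable = [q for q in sent if q in view]
            if not evictable:
                break
            del view[random.choice(evictable)]
        view[p] = age
    return view
```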
The Peer Sampling Service: Experimental Evaluation of Unstructured Gossip-Based Implementations
- In Middleware '04: Proceedings of the 5th ACM/IFIP/USENIX International Conference on Middleware, 2004
"... Abstract. In recent years, the gossip-based communication model in large-scale distributed systems has become a general paradigm with important applications which include information dissemination, aggregation, overlay topology management and synchronization. At the heart of all of these protocols l ..."
Cited by 187 (41 self)
In recent years, the gossip-based communication model in large-scale distributed systems has become a general paradigm with important applications including information dissemination, aggregation, overlay topology management, and synchronization. At the heart of all of these protocols lies a fundamental distributed abstraction: the peer sampling service. In short, the aim of this service is to provide every node with peers to exchange information with. Analytical studies reveal the high reliability and efficiency of gossip-based protocols under the (often implicit) assumption that the peers to send gossip messages to are selected uniformly at random from the set of all nodes. In practice, instead of requiring all nodes to know all the peer nodes so that a random sample could be drawn, a scalable and efficient way to implement the peer sampling service is to construct and maintain dynamic unstructured overlays by gossiping membership information itself. This paper presents a generic framework for implementing reliable and efficient peer sampling services. The framework generalizes existing approaches and makes it easy to introduce new ones. We use this framework to explore and compare several implementations of our abstract scheme. Through extensive experimental analysis, we show that they lead to different peer sampling services, none of which is uniformly random. This clearly renders traditional theoretical approaches invalid when the underlying peer sampling service is based on a gossip-based scheme. Our observations also help explain important differences between design choices of peer sampling algorithms, and how these impact the reliability of the corresponding service.
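Two of the framework's pluggable dimensions, peer selection and view selection, can be indicated in a few lines. The policy names below are simplified stand-ins for the paper's taxonomy, not its exact terminology; views are assumed to map peer ids to descriptor ages.

```python
import random

VIEW_SIZE = 30

def select_peer(view, policy="rand"):
    """Whom to gossip with: a uniform choice ('rand') or the oldest
    descriptor ('tail'), two policies of the kind the paper compares."""
    if policy == "tail":
        return max(view, key=view.get)
    return random.choice(tuple(view))

def select_view(merged, policy="rand"):
    """What to keep after merging views: VIEW_SIZE descriptors chosen at
    random, or the freshest ones (lowest age) under the 'young' policy."""
    items = list(merged.items())
    if policy == "young":
        items.sort(key=lambda kv: kv[1])
    else:
        random.shuffle(items)
    return dict(items[:VIEW_SIZE])
```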
Probabilistic Reliable Dissemination in Large-Scale Systems
- IEEE Transactions on Parallel and Distributed Systems, 2001
"... The growth of the Internet raises new challenges for the design of distributed systems and applications. In the context of group communication protocols, gossip-based schemes have attracted interest as they are scalable, easy to deploy, and resilient to network and process failures. However, tradi ..."
Cited by 181 (24 self)
The growth of the Internet raises new challenges for the design of distributed systems and applications. In the context of group communication protocols, gossip-based schemes have attracted interest because they are scalable, easy to deploy, and resilient to network and process failures. However, traditional gossip-based protocols have two major drawbacks: 1) they rely on each peer having knowledge of the global membership, and 2) being oblivious to the network topology, they can impose a high load on network links in wide-area settings. In this paper, we provide a theoretical analysis of gossip-based protocols that relates their reliability to key system parameters (system size, failure rates, and number of gossip targets). The results provide guidelines for the design of practical protocols. In particular, they show how reliability can be maintained while alleviating drawback 1) by providing each peer with only a small subset of the total membership information, and drawback 2) by organizing members into a hierarchical structure that reflects their proximity according to some network-related metric. We validate the analytical results by simulations and verify that the hierarchical gossip protocol considerably reduces the load on the network compared to the original, nonhierarchical protocol.
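The flavor of the resulting design guidelines can be shown with the classical random-graph heuristic: with fanout F = ln(n) + c, the probability that a gossiped message reaches all n nodes tends to exp(-exp(-c)). The helper below is our illustration of that rule of thumb for the failure-free case, not a formula quoted from the paper.

```python
import math

def fanout_for_reliability(n, target):
    """Smallest integer fanout F = ln(n) + c such that the asymptotic
    probability exp(-exp(-c)) of reaching every node meets `target`."""
    c = -math.log(-math.log(target))   # invert target = exp(-exp(-c))
    return math.ceil(math.log(n) + c)

print(fanout_for_reliability(10_000, 0.99))   # -> 14 gossip targets
```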
Gossip-based Peer Sampling
- 2007
"... Gossip-based communication protocols are appealing in large-scale distributed applications such as information dissemination, aggregation, and overlay topology management. This paper factors out a fundamental mechanism at the heart of all these protocols: the peer-sampling service. In short, this se ..."
Cited by 161 (43 self)
Gossip-based communication protocols are appealing in large-scale distributed applications such as information dissemination, aggregation, and overlay topology management. This paper factors out a fundamental mechanism at the heart of all these protocols: the peer-sampling service. In short, this service provides every node with peers to gossip with. We promote this service to the level of a first-class abstraction of a large-scale distributed system, much as a name service is a first-class abstraction of a local-area system. We present a generic framework for implementing a peer-sampling service in a decentralized manner by constructing and maintaining dynamic unstructured overlays through gossiping membership information itself. Our framework generalizes existing approaches and makes it easy to discover new ones. We use this framework to empirically explore and compare several implementations of the peer-sampling service. Through extensive simulation experiments we show that, although all protocols locally provide each node with a good-quality uniform random stream of peers, traditional theoretical assumptions about the randomness of the unstructured overlay as a whole do not hold in any of the instances. We also show that different design decisions lead to severe differences in two crucial respects: load balancing and fault tolerance. Our simulations are validated by means of a wide-area implementation.
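Of the two aspects stressed here, load balancing is the easier to probe empirically: with a well-balanced peer-sampling service, in-degrees across the overlay stay narrowly distributed. A hypothetical measurement helper:

```python
from collections import Counter

def indegree_distribution(views):
    """views: {node: iterable of neighbor ids}. Counts how often each node
    appears in other nodes' views; a heavy-tailed result means a few nodes
    absorb a disproportionate share of the gossip load."""
    indeg = Counter()
    for node, view in views.items():
        for neighbor in view:
            if neighbor != node:
                indeg[neighbor] += 1
    return indeg
```

Comparing, say, the standard deviation of these counts across protocol variants is one way to reproduce the kind of load-balancing differences the paper reports.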
Survey of Research towards Robust Peer-to-Peer Networks: Search Methods
- Computer Networks, 2004
"... ..."
From Epidemics to Distributed Computing
- IEEE Computer
"... Abstract — Epidemic algorithms have been recently recognized as robust and scalable means to disseminate information in large-scale settings. Information is disseminated reliably in a distributed system the same way an epidemic would be propagated throughout a group of individuals: each process of t ..."
Cited by 95 (4 self)
Epidemic algorithms have recently been recognized as a robust and scalable means of disseminating information in large-scale settings. Information is disseminated reliably in a distributed system the same way an epidemic propagates through a group of individuals: each process of the system chooses random peers to whom it relays the information it has received. The underlying peer-to-peer communication paradigm is the key to the scalability of the dissemination scheme. Epidemic algorithms have been studied theoretically, and their analysis is built on sound mathematical foundations. Although promising, many issues must still be addressed before they can be applied generally to large-scale distributed systems; these constitute an exciting research agenda. Index terms: scalability, peer-to-peer, epidemics, information
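The epidemic analogy is easy to make concrete. The toy push-gossip simulation below (parameters are ours) exhibits the logarithmic spreading time such analyses establish: every informed process relays to a few random peers each round.

```python
import random

def push_rounds(n, fanout, seed=0):
    """Rounds until an infect-forever push epidemic informs all n processes,
    with each informed process relaying to `fanout` uniform random peers."""
    rng = random.Random(seed)
    informed = {0}
    rounds = 0
    while len(informed) < n:
        rounds += 1
        newly = set()
        for _ in informed:
            newly.update(rng.randrange(n) for _ in range(fanout))
        informed |= newly
    return rounds

print(push_rounds(100_000, fanout=2))   # typically on the order of a dozen rounds
```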
Epidemic-Style Proactive Aggregation in Large Overlay Networks
- In Proceedings of the 24th International Conference on Distributed Computing Systems (ICDCS '04), 2004
"... Aggregation---that is, the computation of global properties like average or maximal load, or the number of nodes--- is an important basic functionality in fully distributed environments. In many cases---which include protocols responsible for self-organization in large-scale systems and collaborativ ..."
Cited by 85 (15 self)
Aggregation, that is, the computation of global properties such as the average or maximal load or the number of nodes, is an important basic functionality in fully distributed environments. In many cases, including protocols responsible for self-organization in large-scale systems and collaborative environments, it is useful if all nodes know the value of some aggregates continuously. In this paper we present and analyze novel protocols capable of providing this service. The proposed anti-entropy aggregation protocols compute different aggregates of component properties, such as extremal values, averages, and counts. Our protocols are inspired by the anti-entropy epidemic protocol, in which random pairs of databases periodically resolve their differences. In the case of aggregation, resolving differences is generalized to an arbitrary (numeric) computation based on the states of the two communicating peers. The advantage of this approach is that it is proactive and "democratic": it has no performance bottlenecks, and an approximation of the aggregates is continuously present at all nodes. These properties make our protocol suitable for implementing, for example, collective decision making or automatic system maintenance based on global information in a fully distributed fashion. As our main contribution, we provide fundamental theoretical results on the proposed averaging protocol.
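For averaging, the pairwise resolution step specializes to replacing both peers' values with their mean: the global average is invariant while the variance shrinks, so every node converges to it. A minimal sketch of that step (ours, not the authors' protocol; a real deployment pairs nodes through a peer sampling service rather than a global random pick):

```python
import random

def gossip_average(values, rounds=30, seed=0):
    """Pairwise averaging: each exchange sets two random nodes' values to
    their mean. Runs ~`rounds` exchanges per node and returns the values,
    all of which end up close to the true global average."""
    rng = random.Random(seed)
    vals = list(values)
    n = len(vals)
    for _ in range(rounds * n):
        i, j = rng.randrange(n), rng.randrange(n)
        vals[i] = vals[j] = (vals[i] + vals[j]) / 2.0
    return vals

loads = [random.uniform(0, 100) for _ in range(1000)]
result = gossip_average(loads)
print(min(result), max(result))   # both within a whisker of the true mean
```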
Newscast Computing
- 2003
"... Monitoring large computer networks often involves aggregation of various sorts of data that are distributed across network components. Finding extreme values, counting discrete observations or computing an average or a sum of some parameter values are typical examples of such "background" ..."
Cited by 73 (13 self)
Monitoring large computer networks often involves aggregating various sorts of data that are distributed across network components. Finding extreme values, counting discrete observations, or computing an average or a sum of some parameter values are typical examples of such "background" activities that provide input to monitoring systems. Another aspect of network management is fast and reliable information dissemination, such as the propagation of alarm signals.
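Counting, one of the aggregates mentioned above, reduces to averaging: initialize one node to 1 and all others to 0, and pairwise averaging drives every value toward 1/N, so each node can estimate the system size as the reciprocal of its own value. A self-contained sketch under assumed parameters:

```python
import random

def estimate_size(n_nodes, rounds=40, seed=1):
    """Gossip-based counting via averaging: values converge to 1/N, so the
    reciprocal at any node estimates the system size N."""
    rng = random.Random(seed)
    vals = [1.0] + [0.0] * (n_nodes - 1)
    for _ in range(rounds * n_nodes):
        i, j = rng.randrange(n_nodes), rng.randrange(n_nodes)
        vals[i] = vals[j] = (vals[i] + vals[j]) / 2.0
    return 1.0 / vals[0]

print(round(estimate_size(5000)))   # ~5000
```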