Results 1 - 10 of 1,161
Chord: A Scalable Peer-to-Peer Lookup Service for Internet Applications
- SIGCOMM'01
, 2001
"... A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto ..."
Abstract
-
Cited by 4469 (69 self)
- Add to MetaCart
(Show Context)
A fundamental problem that confronts peer-to-peer applications is to efficiently locate the node that stores a particular data item. This paper presents Chord, a distributed lookup protocol that addresses this problem. Chord provides support for just one operation: given a key, it maps the key onto a node. Data location can be easily implemented on top of Chord by associating a key with each data item, and storing the key/data item pair at the node to which the key maps. Chord adapts efficiently as nodes join and leave the system, and can answer queries even if the system is continuously changing. Results from theoretical analysis, simulations, and experiments show that Chord is scalable, with communication cost and the state maintained by each node scaling logarithmically with the number of Chord nodes.
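The mapping Chord computes is consistent hashing over a circular identifier space. Below is a minimal sketch, assuming SHA-1 identifiers and a centrally known node list; the real protocol instead resolves successors by routing through per-node finger tables in O(log N) hops, and all names here are illustrative.

```python
# Minimal sketch of Chord-style consistent hashing. Keys and node names
# are hashed onto a 2^M identifier ring; each key is assigned to its
# successor, the first node clockwise from the key's identifier.
import hashlib
from bisect import bisect_left

M = 160  # identifier bits; Chord uses SHA-1, hence 160

def ring_id(name: bytes) -> int:
    # Hash a name onto the circular identifier space [0, 2^M).
    return int.from_bytes(hashlib.sha1(name).digest(), "big") % (2 ** M)

class ChordRing:
    def __init__(self, node_names):
        # A sorted list of (identifier, name) pairs stands in for the ring.
        self.nodes = sorted((ring_id(n.encode()), n) for n in node_names)
        self.ids = [i for i, _ in self.nodes]

    def successor(self, key: str) -> str:
        # The key maps to the first node whose identifier is >= the key's
        # identifier, wrapping around to the smallest identifier.
        idx = bisect_left(self.ids, ring_id(key.encode())) % len(self.nodes)
        return self.nodes[idx][1]

ring = ChordRing(["node-a", "node-b", "node-c"])
print(ring.successor("some-data-key"))  # the node responsible for this key
```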
Wide-area cooperative storage with CFS
, 2001
"... The Cooperative File System (CFS) is a new peer-to-peer readonly storage system that provides provable guarantees for the efficiency, robustness, and load-balance of file storage and retrieval. CFS does this with a completely decentralized architecture that can scale to large systems. CFS servers pr ..."
Abstract
-
Cited by 999 (53 self)
- Add to MetaCart
(Show Context)
The Cooperative File System (CFS) is a new peer-to-peer read-only storage system that provides provable guarantees for the efficiency, robustness, and load-balance of file storage and retrieval. CFS does this with a completely decentralized architecture that can scale to large systems. CFS servers provide a distributed hash table (DHash) for block storage. CFS clients interpret DHash blocks as a file system. DHash distributes and caches blocks at a fine granularity to achieve load balance, uses replication for robustness, and decreases latency with server selection. DHash finds blocks using the Chord location protocol, which operates in time logarithmic in the number of servers. CFS is implemented using the SFS file system toolkit and runs on Linux, OpenBSD, and FreeBSD. Experience on a globally deployed prototype shows that CFS delivers data to clients as fast as FTP. Controlled tests show that CFS is scalable: with 4,096 servers, looking up a block of data involves contacting only seven servers. The tests also demonstrate nearly perfect robustness and unimpaired performance even when as many as half the servers fail.
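DHash's block interface is compact enough to sketch. The toy version below, with an in-memory dict standing in for blocks replicated across Chord-located servers, shows the content-hash keying that makes fetched blocks self-verifying; the class and method names are invented for illustration.

```python
# Toy sketch of DHash-style content-hash block storage: each block is
# keyed by the SHA-1 of its contents, so a fetched block can be checked
# against its key. The dict stands in for blocks distributed (and
# replicated) across servers found via Chord lookups.
import hashlib

class DHashSketch:
    def __init__(self):
        self.blocks = {}

    def put(self, data: bytes) -> str:
        key = hashlib.sha1(data).hexdigest()  # content-hash key
        self.blocks[key] = data
        return key

    def get(self, key: str) -> bytes:
        data = self.blocks[key]
        # Self-verifying read: the contents must hash back to the key.
        assert hashlib.sha1(data).hexdigest() == key
        return data

store = DHashSketch()
key = store.put(b"one file-system block")
assert store.get(key) == b"one file-system block"
```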
Scalable Application Layer Multicast
, 2002
"... We describe a new scalable application-layer multicast protocol, specifically designed for low-bandwidth, data streaming applications with large receiver sets. Our scheme is based upon a hierarchical clustering of the application-layer multicast peers and can support a number of different data deliv ..."
Abstract
-
Cited by 731 (21 self)
- Add to MetaCart
(Show Context)
We describe a new scalable application-layer multicast protocol, specifically designed for low-bandwidth, data streaming applications with large receiver sets. Our scheme is based upon a hierarchical clustering of the application-layer multicast peers and can support a number of different data delivery trees with desirable properties. We present extensive simulations of both our protocol and the Narada application-layer multicast protocol over Internet-like topologies. Our results show that for groups of size 32 or more, our protocol has lower link stress (by about 25%), improved or similar end-to-end latencies, and similar failure recovery properties. More importantly, it is able to achieve these results by using orders of magnitude lower control traffic. Finally, we present results from our wide-area testbed in which we experimented with 32-100 member groups distributed over 8 different sites. In our experiments, average group members established and maintained low-latency paths and incurred a maximum packet loss rate of less than 1% as members randomly joined and left the multicast group. The average control overhead during our experiments was less than 1 Kbps for groups of size 100.
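The hierarchical clustering the scheme builds on can be sketched in a few lines. The toy version below simplifies the cluster-size bounds and the leader rule (the actual protocol keeps clusters between k and 3k-1 members and promotes each cluster's graph-theoretic center), but it shows how repeated clustering yields O(log N) layers.

```python
# Rough sketch of hierarchical clustering for application-layer
# multicast: all members sit in layer 0 in bounded-size clusters, each
# cluster's leader also joins the layer above, and the recursion
# terminates after O(log N) layers.
K = 3  # cluster size parameter

def build_layers(members):
    layers, current = [], list(members)
    while len(current) > 1:
        clusters = [current[i:i + K] for i in range(0, len(current), K)]
        layers.append(clusters)
        # The first member of each cluster stands in for its leader.
        current = [cluster[0] for cluster in clusters]
    return layers

for depth, layer in enumerate(build_layers([f"host{i}" for i in range(10)])):
    print(f"layer {depth}: {layer}")
```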
An Integrated Experimental Environment for Distributed Systems and Networks
- In Proc. of the Fifth Symposium on Operating Systems Design and Implementation
, 2002
"... Three experimental environments traditionally support network and distributed systems research: network emulators, network simulators, and live networks. The continued use of multiple approaches highlights both the value and inadequacy of each. Netbed, a descendant of Emulab, provides an experimenta ..."
Abstract
-
Cited by 688 (41 self)
- Add to MetaCart
(Show Context)
Three experimental environments traditionally support network and distributed systems research: network emulators, network simulators, and live networks. The continued use of multiple approaches highlights both the value and inadequacy of each. Netbed, a descendant of Emulab, provides an experimentation facility that integrates these approaches, allowing researchers to configure and access networks composed of emulated, simulated, and wide-area nodes and links. Netbed's primary goals are ease of use, control, and realism, achieved through consistent use of virtualization and abstraction.
A blueprint for introducing disruptive technology into the Internet
, 2002
"... This paper argues that a new class of geographically distributed network services is emerging, and that the most effective way to design, evaluate, and deploy these services is by using an overlay-based testbed. Unlike conventional network testbeds, however, we advocate an approach that supports bot ..."
Abstract
-
Cited by 593 (43 self)
- Add to MetaCart
(Show Context)
This paper argues that a new class of geographically distributed network services is emerging, and that the most effective way to design, evaluate, and deploy these services is by using an overlay-based testbed. Unlike conventional network testbeds, however, we advocate an approach that supports both researchers that want to develop new services, and clients that want to use them. This dual use, in turn, suggests four design principles that are not widely supported in existing testbeds: services should be able to run continuously and access a slice of the overlay’s resources, control over resources should be distributed, overlay management services should be unbundled and run in their own slices, and APIs should be designed to promote application development. We believe a testbed that supports these design principles will facilitate the emergence of a new service-oriented network architecture. Towards this end, the paper also briefly describes PlanetLab, an overlay network being designed with these four principles in mind.
Astrolabe: A Robust and Scalable Technology for Distributed System Monitoring, Management, and Data Mining
- ACM Transactions on Computer Systems
, 2001
"... this paper, we describe a new information management service called Astrolabe. Astrolabe monitors the dynamically changing state of a collection of distributed resources, reporting summaries of this information to its users. Like DNS, Astrolabe organizes the resources into a hierarchy of domains, wh ..."
Abstract
-
Cited by 452 (27 self)
- Add to MetaCart
In this paper, we describe a new information management service called Astrolabe. Astrolabe monitors the dynamically changing state of a collection of distributed resources, reporting summaries of this information to its users. Like DNS, Astrolabe organizes the resources into a hierarchy of domains, which we call zones to avoid confusion, and associates attributes with each zone. Unlike DNS, zones are not bound to specific servers, the attributes may be highly dynamic, and updates propagate quickly, typically in tens of seconds.
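Astrolabe's zone-and-attribute model can be illustrated with a small tree. The sketch below uses a single invented aggregation function (minimum free disk) and omits the gossip protocol that actually propagates updates; zone names and attributes are made up for illustration.

```python
# Toy sketch of Astrolabe-style aggregation: leaf zones carry raw
# attributes, and every internal zone summarizes its children through
# an aggregation function, so readers near the root see compact
# summaries rather than per-host detail.
class Zone:
    def __init__(self, name, children=(), attrs=None):
        self.name, self.children = name, list(children)
        self.attrs = dict(attrs or {})

    def aggregate(self):
        if self.children:
            summaries = [child.aggregate() for child in self.children]
            # Summary attribute: minimum free disk among all descendants.
            self.attrs["min_free_disk"] = min(s["min_free_disk"] for s in summaries)
        else:
            self.attrs["min_free_disk"] = self.attrs["free_disk"]
        return self.attrs

root = Zone("/", [
    Zone("/usa", [Zone("/usa/hostA", attrs={"free_disk": 120}),
                  Zone("/usa/hostB", attrs={"free_disk": 40})]),
    Zone("/europe", [Zone("/europe/hostC", attrs={"free_disk": 75})]),
])
print(root.aggregate()["min_free_disk"])  # 40
```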
PlanetLab: An overlay testbed for broad-coverage services
- ACM SIGCOMM Computer Communication Review
, 2003
"... PlanetLab is a global overlay network for developing and accessing broad-coverage network services. Our goal is to grow to 1000 geographically distributed nodes, connected by a diverse collection of links. PlanetLab allows multiple services to run concurrently and continuously, each in its own slice ..."
Abstract
-
Cited by 445 (3 self)
- Add to MetaCart
(Show Context)
PlanetLab is a global overlay network for developing and accessing broad-coverage network services. Our goal is to grow to 1000 geographically distributed nodes, connected by a diverse collection of links. PlanetLab allows multiple services to run concurrently and continuously, each in its own slice of PlanetLab. This paper describes our initial implementation of PlanetLab, including the mechanisms used to implement virtualization, and the collection of core services used to manage PlanetLab.
End-to-end available bandwidth: Measurement methodology, dynamics, and relation with TCP throughput
- In Proceedings of ACM SIGCOMM
, 2002
"... The available bandwidth (avail-bw) in a network path is of major importance in congestion control, streaming applications, QoS verification, server selection, and overlay networks. We describe an end-to-end methodology, called Self-Loading Periodic Streams (SLoPS), for measuring avail-bw. The basic ..."
Abstract
-
Cited by 414 (20 self)
- Add to MetaCart
(Show Context)
The available bandwidth (avail-bw) in a network path is of major importance in congestion control, streaming applications, QoS verification, server selection, and overlay networks. We describe an end-to-end methodology, called Self-Loading Periodic Streams (SLoPS), for measuring avail-bw. The basic idea in SLoPS is that the one-way delays of a periodic packet stream show an increasing trend when the stream’s rate is higher than the avail-bw. We implemented SLoPS in a tool called pathload. The accuracy of the tool has been evaluated with both simulations and experiments over real-world Internet paths. Pathload is non-intrusive, meaning that it does not cause significant increases in network utilization, delays, or losses. We used pathload to evaluate the variability (‘dynamics’) of the avail-bw in paths that cross the USA and Europe. The avail-bw becomes significantly more variable in heavily utilized paths, as well as in paths with limited capacity (probably due to a lower degree of statistical multiplexing). Finally, we examine the relation between avail-bw and TCP throughput. A persistent TCP connection can be used to roughly measure the avail-bw in a path, but TCP saturates the path and significantly increases path delays and jitter.
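The SLoPS idea reduces to a rate search driven by a trend test on one-way delays. In the schematic sketch below, the trend statistic (comparing medians of the stream's two halves) is a simplified stand-in for pathload's actual tests, and send_stream is an assumed probing function that returns measured one-way delays.

```python
# Schematic sketch of the SLoPS decision rule: probe at rate R with a
# periodic stream, then test whether the stream's one-way delays show
# an increasing trend (R above avail-bw) or stay flat (R below), and
# binary-search on R.
from statistics import median

def increasing_trend(delays, margin=0.1):
    half = len(delays) // 2
    return median(delays[half:]) > median(delays[:half]) * (1 + margin)

def estimate_avail_bw(send_stream, lo=1.0, hi=1000.0, tol=1.0):
    # Binary search over stream rates in Mbps.
    while hi - lo > tol:
        rate = (lo + hi) / 2
        if increasing_trend(send_stream(rate)):
            hi = rate  # delays rose: the stream exceeded avail-bw
        else:
            lo = rate  # delays flat: the path absorbed the stream
    return (lo + hi) / 2
```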
A Taxonomy of DDoS Attack and DDoS Defense Mechanisms
- ACM SIGCOMM Computer Communication Review
, 2004
"... Distributed denial-of-service (DDoS) is a rapidly growing problem. The multitude and variety of both the attacks and the defense approaches is overwhelming. This paper presents two taxonomies for classifying attacks and defenses, and thus provides researchers with a better understanding of the probl ..."
Abstract
-
Cited by 358 (2 self)
- Add to MetaCart
(Show Context)
Distributed denial-of-service (DDoS) is a rapidly growing problem. The multitude and variety of both the attacks and the defense approaches is overwhelming. This paper presents two taxonomies for classifying attacks and defenses, and thus provides researchers with a better understanding of the problem and the current solution space. The attack classification criteria were selected to highlight commonalities and important features of attack strategies that define challenges and dictate the design of countermeasures. The defense taxonomy classifies the body of existing DDoS defenses based on their design decisions; it then shows how these decisions dictate the advantages and deficiencies of proposed solutions.
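A taxonomy of this kind is naturally rendered as structured data. The sketch below uses a few example axes and values as illustrative stand-ins; the paper's own taxonomies define the authoritative criteria.

```python
# Illustrative rendering of attack/defense classification as data; the
# axes and values below are examples, not the paper's full taxonomies.
from dataclasses import dataclass

@dataclass(frozen=True)
class AttackClass:
    degree_of_automation: str  # e.g. "manual", "semi-automatic", "automatic"
    exploited_weakness: str    # e.g. "semantic", "brute-force"
    rate_dynamics: str         # e.g. "constant", "variable"
    impact: str                # e.g. "disruptive", "degrading"

@dataclass(frozen=True)
class DefenseClass:
    activity_level: str        # e.g. "preventive", "reactive"
    deployment_location: str   # e.g. "victim", "intermediate", "source"

syn_flood = AttackClass("automatic", "brute-force", "constant", "disruptive")
ingress_filtering = DefenseClass("preventive", "source")
print(syn_flood, ingress_filtering)
```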
Comparison of routing metrics for static multi-hop wireless networks
- In ACM SIGCOMM
, 2004
"... Routing protocols for wireless ad hoc networks have traditionally focused on finding paths with minimum hop count. However, such paths can include slow or lossy links, leading to poor throughput. A routing algorithm can select better paths by explicitly taking the quality of the wireless links into ..."
Abstract
-
Cited by 331 (3 self)
- Add to MetaCart
(Show Context)
Routing protocols for wireless ad hoc networks have traditionally focused on finding paths with minimum hop count. However, such paths can include slow or lossy links, leading to poor throughput. A routing algorithm can select better paths by explicitly taking the quality of the wireless links into account. In this paper, we conduct a detailed, empirical evaluation of the performance of three link-quality metrics (ETX, per-hop RTT, and per-hop packet pair) and compare them against minimum hop count. We study these metrics using a DSR-based routing protocol running in a wireless testbed. We find that the ETX metric has the best performance when all nodes are stationary. We also find that the per-hop RTT and per-hop packet-pair metrics perform poorly due to self-interference. Interestingly, the hop-count metric outperforms all of the link-quality metrics in a scenario where the sender is mobile.
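ETX, the best-performing metric in the stationary case, has a simple closed form: with forward and reverse probe delivery ratios df and dr, a link's ETX is 1/(df x dr), the expected number of transmissions (retransmissions included) needed to deliver a packet, and a path's metric is the sum over its links. A small sketch with invented numbers:

```python
# Sketch of the ETX metric: 1/(df*dr) is the expected transmission
# count for a link whose forward/reverse probe delivery ratios are
# df and dr; a path's ETX is the sum of its links' values.
def etx(df: float, dr: float) -> float:
    return 1.0 / (df * dr)

def path_etx(links):
    # links: iterable of (df, dr) delivery-ratio pairs along the path
    return sum(etx(df, dr) for df, dr in links)

# Two moderately lossy hops cost about as much as one very lossy hop,
# which is why minimum hop count can pick the worse path:
print(path_etx([(0.9, 0.9), (0.9, 0.9)]))  # ~2.47 expected transmissions
print(path_etx([(0.45, 0.9)]))             # ~2.47 as well
```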