Results 11 - 20 of 248
Erasure-coding based routing for opportunistic networks
2005
"... mobility is a challenging problem because disconnections are prevalent and lack of knowledge about network dynamics hinders good decision making. Current approaches are primarily based on redundant transmissions. They have either high overhead due to excessive transmissions or long delays due to the ..."
Abstract
-
Cited by 126 (4 self)
- Add to MetaCart
(Show Context)
... mobility is a challenging problem because disconnections are prevalent and lack of knowledge about network dynamics hinders good decision making. Current approaches are primarily based on redundant transmissions. They have either high overhead due to excessive transmissions or long delays due to the possibility of making wrong choices when forwarding a few redundant copies. In this paper, we propose a novel forwarding algorithm based on the idea of erasure codes. Erasure coding allows the use of a large number of relays while maintaining a constant overhead, which results in fewer cases of long delays. We use simulation to compare the routing performance of using erasure codes in a DTN (delay tolerant network) with four other categories of forwarding algorithms proposed in the literature. Our simulations are based on a real-world mobility trace collected in a large outdoor wildlife environment. The results show that the erasure-coding based algorithm provides the best worst-case delay performance with a fixed amount of overhead. We also present a simple analytical model to capture the delay characteristics of erasure-coding based forwarding, which provides insights into the potential of our approach.
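To see the intuition behind spreading coded fragments over many relays, here is a minimal Monte Carlo sketch. All numbers (relay count, per-relay delivery probability, replication factor, and fragment count) are illustrative assumptions, and the sketch looks only at delivery probability rather than the delay metric the paper actually studies: with the same total overhead, the coded message no longer hinges on any single relay being chosen well.

```python
import random

def delivery_prob_replication(r, p, trials=20000):
    """Send r full copies via r relays; delivery succeeds if any relay delivers."""
    ok = 0
    for _ in range(trials):
        if any(random.random() < p for _ in range(r)):
            ok += 1
    return ok / trials

def delivery_prob_erasure(k, r, p, trials=20000):
    """Erasure-code the message into n = k*r fragments (any k of them suffice to
    reconstruct) and hand one fragment to each of n relays; the total overhead
    equals that of sending r full copies."""
    n = k * r
    ok = 0
    for _ in range(trials):
        delivered = sum(random.random() < p for _ in range(n))
        if delivered >= k:
            ok += 1
    return ok / trials

if __name__ == "__main__":
    # Illustrative parameters: replication factor 2, 8 fragments per copy,
    # each relay independently reaches the destination with probability 0.6.
    print("full copies    :", delivery_prob_replication(r=2, p=0.6))
    print("coded fragments:", delivery_prob_erasure(k=8, r=2, p=0.6))
```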
Slurpie: A Cooperative Bulk Data Transfer Protocol
In Proceedings of IEEE INFOCOM, 2004
"... We present Slurpie: a peer-to-peer protocol for bulk data transfer. Slurpie is specifically designed to reduce client download times for large, popular files, and to reduce load on servers that serve these files. Slurpie employs a novel adaptive downloading strategy to increase client performance, a ..."
Abstract
-
Cited by 115 (6 self)
- Add to MetaCart
We present Slurpie: a peer-to-peer protocol for bulk data transfer. Slurpie is specifically designed to reduce client download times for large, popular files, and to reduce load on servers that serve these files. Slurpie employs a novel adaptive downloading strategy to increase client performance, and employs a randomized backoff strategy to precisely control load on the server. We describe a full implementation of the Slurpie protocol, and present results from both controlled local-area and wide-area testbeds. Our results show that Slurpie clients improve performance as the size of the network increases, and the server is completely insulated from large flash crowds entering the Slurpie network.
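The randomized backoff that caps server load can be sketched roughly as follows. This is a simplified stand-in: the target-load constant and the group-size estimate are hypothetical inputs, whereas the real protocol derives the group size from its own membership updates.

```python
import random

def should_hit_server(target_server_load, estimated_group_size):
    """Randomized backoff: each peer independently contacts the server with
    probability target/n, so the expected number of concurrent server
    connections stays near the target no matter how large the crowd is."""
    p = min(1.0, target_server_load / max(1, estimated_group_size))
    return random.random() < p

if __name__ == "__main__":
    # Illustrative flash crowd of 10,000 peers with a server budget of 8 connections.
    crowd = 10_000
    hits = sum(should_hit_server(8, crowd) for _ in range(crowd))
    print("peers contacting the server this round:", hits)  # about 8 in expectation
```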
Cooperative security for network coding file distribution
In Proc. of IEEE INFOCOM'06, 2006
"... Abstract-Peer-to-peer content distribution networks can suffer from malicious participants that intentionally corrupt content. Traditional systems verify blocks with traditional cryptographic signatures and hashes. However, these techniques do not apply well to more elegant schemes that use network ..."
Abstract
-
Cited by 109 (2 self)
- Add to MetaCart
Peer-to-peer content distribution networks can suffer from malicious participants that intentionally corrupt content. Traditional systems verify blocks with cryptographic signatures and hashes. However, these techniques do not apply well to more elegant schemes that use network coding techniques for efficient content distribution. Architectures that use network coding are prone to jamming attacks where the introduction of a few corrupted blocks can quickly result in a large number of bad blocks propagating through the system. Identifying such bogus blocks is difficult and requires the use of homomorphic hashing functions, which are computationally expensive. This paper presents a practical security scheme for network coding that reduces the cost of verifying blocks on-the-fly while efficiently preventing the propagation of malicious blocks. In our scheme, users not only cooperate to distribute the content, but (well-behaved) users also cooperate to protect themselves against malicious users by informing affected nodes when a malicious block is found. We analyze and study such a cooperative security scheme and introduce elegant techniques to prevent DoS attacks. We show that the loss in efficiency caused by attackers is limited to the effort the attackers put into corrupting the communication, which is a natural lower bound on the damage to the system. We also show experimentally that checking as little as 1-5% of the received blocks is enough to guarantee low corruption rates.
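The homomorphic hashes referred to above are typically built from discrete-log groups (in the style of Krohn et al.). The toy sketch below uses deliberately tiny, made-up parameters to show the key property that makes on-the-fly checking possible: the hash of a linear combination of blocks can be predicted from the hashes of the original blocks.

```python
import random

# Toy safe-prime group: p = 2q + 1 with p and q prime. Real deployments use
# 1024-bit or larger parameters; these tiny values are for illustration only.
p, q = 1019, 509
m = 4  # symbols per block (made up; real blocks are far larger)

random.seed(1)
# Public generators: random quadratic residues mod p (each has order q).
G = [pow(random.randrange(2, p - 1), 2, p) for _ in range(m)]

def hhash(block):
    """Homomorphic hash: h(b) = prod_i g_i^(b_i) mod p, with entries b_i in Z_q."""
    h = 1
    for g, b in zip(G, block):
        h = (h * pow(g, b, p)) % p
    return h

def combine(blocks, coeffs):
    """Random linear combination of blocks, component-wise over Z_q."""
    return [sum(c * blk[i] for c, blk in zip(coeffs, blocks)) % q
            for i in range(m)]

# Source blocks and their published hashes.
b1 = [random.randrange(q) for _ in range(m)]
b2 = [random.randrange(q) for _ in range(m)]
h1, h2 = hhash(b1), hhash(b2)

# A peer receiving the coded block e = c1*b1 + c2*b2 can check it on the fly
# against the published hashes, without ever seeing b1 or b2:
c1, c2 = random.randrange(q), random.randrange(q)
e = combine([b1, b2], [c1, c2])
print("coded block verifies:", hhash(e) == (pow(h1, c1, p) * pow(h2, c2, p)) % p)
```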
Growth codes: Maximizing sensor network data persistence
In Proc. ACM SIGCOMM
"... Sensor networks are especially useful in catastrophic or emergency scenarios such as floods, fires, terrorist attacks or earthquakes where human participation may be too dangerous. However, such disaster scenarios pose an interesting design challenge since the sensor nodes used to collect and commun ..."
Abstract
-
Cited by 92 (0 self)
- Add to MetaCart
(Show Context)
Sensor networks are especially useful in catastrophic or emergency scenarios such as floods, fires, terrorist attacks or earthquakes where human participation may be too dangerous. However, such disaster scenarios pose an interesting design challenge since the sensor nodes used to collect and communicate data may themselves fail suddenly and unpredictably, resulting in the loss of valuable data. Furthermore, because these networks are often expected to be deployed in response to a disaster, or because of sudden configuration changes due to failure, these networks are often expected to operate in a “zero-configuration” paradigm, where data collection and transmission must be initiated immediately, before the nodes have a chance to assess the current network topology. In this paper, we design and analyze techniques to increase “persistence” of sensed data, so that data is more likely to reach a data sink, even as network nodes fail. This is done by replicating data compactly at neighboring nodes using novel “Growth Codes” that increase in efficiency as data accumulates at the sink. We show that Growth Codes preserve more data in the presence of node failures than previously proposed erasure resilient techniques.
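The two ingredients described above can be sketched as follows: an encoder whose codeword degree starts at one and grows as the sink accumulates data, and a peeling decoder at the sink. This is a toy assuming single-integer sensor readings; the (d-1)/d switch-over rule below is a simplified heuristic stand-in for the paper's exact transition points.

```python
import random

def growth_degree(decoded, n):
    """Simplified degree schedule: use degree d once roughly a (d-1)/d fraction
    of the n symbols has been decoded (heuristic stand-in for the paper's
    exact transition points)."""
    frac = decoded / n
    d = 1
    while d < n and frac > (d - 1) / d:
        d += 1
    return d

def encode_symbol(data, degree):
    """Codeword = XOR of `degree` distinct, randomly chosen source symbols."""
    idx = random.sample(range(len(data)), degree)
    val = 0
    for i in idx:
        val ^= data[i]
    return idx, val

def sink_decode(codewords):
    """Peeling decoder: absorb degree-1 codewords, strip known symbols from the
    rest, and repeat until no further progress is possible."""
    known = {}
    pending = [(set(idx), val) for idx, val in codewords]
    progress = True
    while progress:
        progress = False
        remaining = []
        for idx, val in pending:
            for i in list(idx):
                if i in known:
                    idx.discard(i)
                    val ^= known[i]
            if len(idx) == 1:
                known[idx.pop()] = val
                progress = True
            elif len(idx) > 1:
                remaining.append((idx, val))
        pending = remaining
    return known

if __name__ == "__main__":
    random.seed(0)
    n = 50
    data = [random.randrange(256) for _ in range(n)]  # made-up sensor readings
    received, decoded = [], {}
    while len(decoded) < n:
        received.append(encode_symbol(data, growth_degree(len(decoded), n)))
        decoded = sink_decode(received)
    print("recovered all", n, "readings from", len(received), "codewords")
```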
Network Coding in Undirected Networks
2004
"... Recent work in network coding shows that, it is necessary to consider both the routing and coding strategies to achieve optimal throughput of information transmission in data networks. So far, most research on network coding has focused on the model of directed networks, where each communication li ..."
Abstract
-
Cited by 79 (18 self)
- Add to MetaCart
Recent work in network coding shows that it is necessary to consider both the routing and coding strategies to achieve the optimal throughput of information transmission in data networks. So far, most research on network coding has focused on the model of directed networks, where each communication link has a fixed direction. In this paper, we study the benefits of network coding in undirected networks, where each communication link is bidirectional. Our theoretical results show that, for a single unicast or broadcast session, there is no improvement in throughput due to network coding. In the case of a single multicast session, such an improvement is bounded by a factor of two, as long as half-integer routing is permitted. This is dramatically different from previous results obtained in directed networks. We also show that multicast throughput in an undirected network is independent of the selection of the sender within the multicast group. We finally show that, rather than improving the optimal achievable throughput, the benefit of network coding is to significantly facilitate the design of efficient algorithms to compute and achieve such optimal throughput.
On achieving optimal throughput with network coding
In Proc. IEEE INFOCOM, 2005
"... Abstrkt- With the constraints of network topologies and link capacities, achieving the optimal end-to-end throughput in data networks has been known as a fundamental but camputationally hard problem, In this paper, we seek efficient solutions to the problem of achieving optimal throughput in data ne ..."
Abstract
-
Cited by 72 (30 self)
- Add to MetaCart
With the constraints of network topologies and link capacities, achieving the optimal end-to-end throughput in data networks has been known as a fundamental but computationally hard problem. In this paper, we seek efficient solutions to the problem of achieving optimal throughput in data networks, with single or multiple unicast, multicast and broadcast sessions. Although previous approaches lead to solving NP-complete problems, we show the surprising result that, facilitated by recent advances in network coding, computing the strategies to achieve the optimal end-to-end throughput can be performed in polynomial time. This result holds for one or more communication sessions, as well as in the overlay network model. Supported by empirical studies, we present the surprising observation that in most topologies, applying network coding may not improve the achievable optimal throughput; rather, it facilitates the design of significantly more efficient algorithms to achieve such optimality.
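For a single multicast session, the throughput achievable with network coding equals the smallest max-flow from the source to any receiver, which is what makes a polynomial-time computation possible. Below is a minimal sketch on a made-up topology (the classic butterfly, with unit capacities chosen for illustration), assuming the networkx library is available:

```python
import networkx as nx

# Illustrative directed butterfly topology with unit link capacities.
G = nx.DiGraph()
edges = [("s", "a"), ("s", "b"), ("a", "t1"), ("b", "t2"),
         ("a", "m"), ("b", "m"), ("m", "n"), ("n", "t1"), ("n", "t2")]
G.add_edges_from(edges, capacity=1)

source, receivers = "s", ["t1", "t2"]

# With network coding, the achievable rate of a single multicast session is the
# minimum over receivers of the source-to-receiver max-flow value.
rate = min(nx.maximum_flow_value(G, source, t) for t in receivers)
print("coding-achievable multicast rate:", rate)  # 2 on the butterfly
```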
Using Random Subsets to Build Scalable Network Services
2003
"... In this paper, we argue that a broad range of large-scale network services would benefit from a scalable mechanism for delivering state about a random subset of global participants. Key to this approach is ensuring that membership in the subset changes periodically and with uniform representation ov ..."
Abstract
-
Cited by 72 (12 self)
- Add to MetaCart
In this paper, we argue that a broad range of large-scale network services would benefit from a scalable mechanism for delivering state about a random subset of global participants. Key to this approach is ensuring that membership in the subset changes periodically and with uniform representation over all participants. Random subsets could help overcome inherent scaling limitations of services that maintain global state and perform global network probing. They could further improve the routing performance of peer-to-peer distributed hash tables by locating topologically close nodes. This paper presents the design, implementation, and evaluation of RanSub, a scalable protocol for delivering such state.
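A simplified sketch of the uniform-merge step at the heart of such a mechanism, assuming each interior node of a collection tree receives a fixed-size sample plus a member count from each child (the real RanSub protocol runs collect and distribute phases over a multicast tree; the subset size and node identifiers below are made up):

```python
import random

SUBSET_SIZE = 5  # fixed-size random subset carried in each message (illustrative)

def merge(sample_a, count_a, sample_b, count_b):
    """Merge two uniform samples drawn from disjoint groups of count_a and
    count_b members so the result stays uniform over the union: each slot is
    filled from sample_a with probability count_a / (count_a + count_b)."""
    merged = []
    for _ in range(SUBSET_SIZE):
        if random.random() < count_a / (count_a + count_b):
            merged.append(random.choice(sample_a))
        else:
            merged.append(random.choice(sample_b))
    return merged, count_a + count_b

if __name__ == "__main__":
    # Toy example: child subtrees of 40 and 10 nodes each report a sample.
    left = (random.sample(range(1000, 1040), SUBSET_SIZE), 40)
    right = (random.sample(range(2000, 2010), SUBSET_SIZE), 10)
    sample, count = merge(*left, *right)
    print("merged sample representing", count, "descendants:", sample)
```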
Rateless Codes and Big Downloads
2003
"... This paper presents a novel algorithm for downloading big files from multiple sources in peer-to-peer networks. The algorithm is simple, but offers several compelling properties. It ensures low handshaking overhead between peers that download files (or parts of a files) from each other. It is comput ..."
Abstract
-
Cited by 69 (1 self)
- Add to MetaCart
This paper presents a novel algorithm for downloading big files from multiple sources in peer-to-peer networks. The algorithm is simple, but offers several compelling properties. It ensures low handshaking overhead between peers that download files (or parts of a file) from each other. It is computationally efficient, with cost linear in the amount of data transferred. Most importantly, when nodes leave the network in the middle of uploads, the algorithm minimizes the duplicate information shared by nodes with truncated downloads. Thus, any two peers with partial knowledge of a given file can almost always fully benefit from each other's knowledge. Our algorithm is made possible by the recent introduction of linear-time, rateless erasure codes.
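A minimal sketch of the enabling property, using a dense random linear fountain over GF(2) in place of the linear-time rateless codes the paper relies on (so this toy is quadratic, not linear, in cost, and the block contents are made up): every encoded symbol is generated independently, so symbols fetched from different uploaders are almost never redundant, and any set of slightly more than k linearly independent symbols reconstructs the file.

```python
import numpy as np

rng = np.random.default_rng(7)

def encode(blocks, count):
    """Dense random linear fountain over GF(2): each output symbol is the XOR
    of a random subset of the k source blocks."""
    k = len(blocks)
    rows, vals = [], []
    for _ in range(count):
        coeffs = rng.integers(0, 2, size=k, dtype=np.uint8)
        if not coeffs.any():
            coeffs[rng.integers(k)] = 1
        val = 0
        for i in np.flatnonzero(coeffs):
            val ^= int(blocks[i])
        rows.append(coeffs)
        vals.append(val)
    return np.array(rows, dtype=np.uint8), np.array(vals, dtype=np.uint8)

def decode(rows, vals, k):
    """Gaussian elimination over GF(2); returns the k blocks, or None if the
    received symbols do not yet span all k dimensions."""
    A, y = rows.copy(), vals.copy()
    for col in range(k):
        pivots = np.flatnonzero(A[col:, col])
        if pivots.size == 0:
            return None
        piv = col + pivots[0]
        A[[col, piv]], y[[col, piv]] = A[[piv, col]], y[[piv, col]]
        for r in range(A.shape[0]):
            if r != col and A[r, col]:
                A[r] ^= A[col]
                y[r] ^= y[col]
    return [int(v) for v in y[:k]]

if __name__ == "__main__":
    k = 32
    blocks = rng.integers(0, 256, size=k)  # made-up file of 32 one-byte blocks
    A1, y1 = encode(blocks, 24)            # symbols obtained from one uploader
    A2, y2 = encode(blocks, 24)            # symbols obtained from another uploader
    out = decode(np.vstack([A1, A2]), np.concatenate([y1, y2]), k)
    print("decoded OK:", out == [int(b) for b in blocks])
```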
Improving Collection Selection with Overlap Awareness in P2P Search Engines
In SIGIR, 2005
"... Collection selection has been a research issue for years. Typically, in related work, precomputed statistics are employed in order to estimate the expected result quality of each collection, and subsequently the collections are ranked accordingly. Our thesis is that this simple approach is insuffici ..."
Abstract
-
Cited by 66 (23 self)
- Add to MetaCart
(Show Context)
Collection selection has been a research issue for years. Typically, in related work, precomputed statistics are employed in order to estimate the expected result quality of each collection, and subsequently the collections are ranked accordingly. Our thesis is that this simple approach is insufficient for several applications in which the collections typically overlap. This is the case, for example, for the collections built by autonomous peers crawling the web. We argue for the extension of existing quality measures using estimators of mutual overlap among collections and present experiments in which this combination outperforms CORI, a popular approach based on quality estimation. We outline our prototype implementation of a P2P web search engine, coined MINERVA, which allows handling large amounts of data in a distributed and self-organizing manner. We conduct experiments which show that taking overlap into account during collection selection can drastically decrease the number of collections that have to be contacted in order to reach a satisfactory level of recall, which is a great step toward the feasibility of distributed web search.
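One way to make a quality-based ranking overlap-aware is to discount each candidate collection by its estimated overlap with collections already chosen. The sketch below is illustrative only: the quality scores are invented, and a MinHash estimator stands in for the paper's overlap statistics (which differ in detail), layered under a greedy selection.

```python
import hashlib

NUM_HASHES = 64

def minhash(doc_ids):
    """MinHash signature of a collection's document identifiers."""
    return [min(int(hashlib.sha1(f"{i}:{d}".encode()).hexdigest(), 16)
                for d in doc_ids)
            for i in range(NUM_HASHES)]

def overlap(sig_a, sig_b):
    """Estimated Jaccard similarity of two collections from their signatures."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / NUM_HASHES

def select(candidates, k):
    """Greedy overlap-aware selection: score = quality * (1 - max overlap with
    any collection selected so far)."""
    chosen = []
    while len(chosen) < min(k, len(candidates)):
        best, best_score = None, float("-inf")
        for name, (quality, sig) in candidates.items():
            if name in chosen:
                continue
            redundancy = max((overlap(sig, candidates[c][1]) for c in chosen),
                             default=0.0)
            score = quality * (1.0 - redundancy)
            if score > best_score:
                best, best_score = name, score
        chosen.append(best)
    return chosen

if __name__ == "__main__":
    # Hypothetical peers: (invented quality score, documents they crawled).
    crawls = {
        "peerA": (0.9, range(0, 800)),
        "peerB": (0.8, range(0, 780)),     # nearly the same crawl as peerA
        "peerC": (0.5, range(800, 1200)),  # a different region of the web
    }
    sigs = {name: (q, minhash(docs)) for name, (q, docs) in crawls.items()}
    print(select(sigs, 2))  # expected: ['peerA', 'peerC']
```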