Results 1 - 10 of 568
Raptor codes - IEEE Transactions on Information Theory, 2006
"... LT-Codes are a new class of codes introduced in [1] for the purpose of scalable and fault-tolerant distribution of data over computer networks. In this paper we introduce Raptor Codes, an extension of LT-Codes with linear time encoding and decoding. We will exhibit a class of universal Raptor codes: ..."
Cited by 577 (7 self)
LT-Codes are a new class of codes introduced in [1] for the purpose of scalable and fault-tolerant distribution of data over computer networks. In this paper we introduce Raptor Codes, an extension of LT-Codes with linear time encoding and decoding. We will exhibit a class of universal Raptor codes: for a given integer k and any real ε > 0, Raptor codes in this class produce a potentially infinite stream of symbols such that any subset of symbols of size k(1 + ε) is sufficient to recover the original k symbols with high probability. Each output symbol is generated using O(log(1/ε)) operations, and the original symbols are recovered from the collected ones with O(k log(1/ε)) operations. We will also introduce novel techniques for the analysis of the error probability of the decoder for finite length Raptor codes. Moreover, we will introduce and analyze systematic versions of Raptor codes, i.e., versions in which the first output elements of the coding system coincide with the original k elements.
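The fountain property described in this abstract is easy to see in a stripped-down LT-style encoder with a peeling decoder. The sketch below is a hedged illustration, not the Raptor construction itself: it omits the precode and uses an approximate ideal soliton degree distribution, so occasionally it needs somewhat more than k(1 + ε) symbols to finish.

```python
import math
import random

def soliton_degree(k, rng):
    # Approximate ideal soliton: P(1) ~ 1/k, P(d) ~ 1/(d(d-1)) for d >= 2.
    if rng.random() < 1.0 / k:
        return 1
    return min(k, math.ceil(1.0 / rng.random()))

def lt_encode(source, rng):
    """Endless stream of (index_set, value) output symbols, each the
    XOR of a random subset of the k source symbols."""
    k = len(source)
    while True:
        idx = rng.sample(range(k), soliton_degree(k, rng))
        value = 0
        for i in idx:
            value ^= source[i]
        yield frozenset(idx), value

def peel_decode(k, received):
    """Peeling decoder: resolve degree-1 symbols, substitute them into
    the remaining symbols, and repeat until no progress is possible."""
    decoded = {}
    pending = [[set(idx), val] for idx, val in received]
    progress = True
    while progress and len(decoded) < k:
        progress = False
        for sym in pending:
            idx, val = sym
            for i in [i for i in idx if i in decoded]:
                idx.discard(i)        # substitute already-decoded symbols
                val ^= decoded[i]
            sym[1] = val
            if len(idx) == 1:
                (i,) = idx
                if i not in decoded:
                    decoded[i] = val
                    progress = True
    return decoded

rng = random.Random(42)
k = 100
source = [rng.randrange(256) for _ in range(k)]
stream = lt_encode(source, rng)
received = [next(stream) for _ in range(int(k * 1.3))]   # k(1 + eps), eps = 0.3
decoded = peel_decode(k, received)
print(f"recovered {len(decoded)} of {k} symbols")
```

The Raptor precode is what this toy lacks: it lets the LT stage stop short of full recovery and still reconstruct all k symbols, which is where the sharp failure-probability guarantees come from.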
Bullet: High Bandwidth Data Dissemination Using an Overlay Mesh, 2003
"... In recent years, overlay networks have become an effective alternative to IP multicast for efficient point to multipoint communication across the Internet. Typically, nodes self-organize with the goal of forming an efficient overlay tree, one that meets performance targets without placing undue burd ..."
Cited by 424 (22 self)
In recent years, overlay networks have become an effective alternative to IP multicast for efficient point to multipoint communication across the Internet. Typically, nodes self-organize with the goal of forming an efficient overlay tree, one that meets performance targets without placing undue burden on the underlying network. In this paper, we target high-bandwidth data distribution from a single source to a large number of receivers. Applications include large-file transfers and real-time multimedia streaming. For these applications, we argue that an overlay mesh, rather than a tree, can deliver fundamentally higher bandwidth and reliability relative to typical tree structures. This paper presents Bullet, a scalable and distributed algorithm that enables nodes spread across the Internet to self-organize into a high bandwidth overlay mesh. We construct Bullet around the insight that data should be distributed in a disjoint manner to strategic points in the network. Individual Bullet receivers are then responsible for locating and retrieving the data from multiple points in parallel. Key contributions of this work include: i) an algorithm that sends data to different points in the overlay such that any data object is equally likely to appear at any node, ii) a scalable and decentralized algorithm that allows nodes to locate and recover missing data items, and iii) a complete implementation and evaluation of Bullet running across the Internet and in a large-scale emulation environment, revealing up to a factor-of-two bandwidth improvement under a variety of circumstances. In addition, we find that, relative to tree-based solutions, Bullet reduces the need to perform expensive bandwidth probing.
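The disjoint-dispersal insight can be shown in a few lines. This is a hedged sketch only: Bullet actually layers this on an overlay tree and uses its RanSub mechanism to find peers, whereas the round-robin deal and sequential recovery loop below are illustrative stand-ins.

```python
import random

def disperse(blocks, children, rng):
    """Push *disjoint* subsets of the data to different points in the
    overlay, so any block is roughly equally likely to appear at any
    node. A shuffled round-robin deal is the simplest such rule."""
    order = list(blocks)
    rng.shuffle(order)
    holdings = {c: set() for c in children}
    for i, b in enumerate(order):
        holdings[children[i % len(children)]].add(b)
    return holdings

def recover(node, holdings, wanted):
    """A receiver locates missing blocks and retrieves them from
    several peers (in the real system, in parallel)."""
    have = set(holdings.get(node, ()))
    for peer, blocks in holdings.items():
        if peer != node:
            have |= wanted & blocks
    return have

rng = random.Random(1)
holdings = disperse(range(20), ["A", "B", "C"], rng)
print(sorted(recover("A", holdings, set(range(20)))))  # full set, via peers
```

The design point the sketch captures is that no single child holds everything, so receivers must (and can) assemble the full set from multiple disjoint sources, which is what lifts aggregate bandwidth above any single tree path.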
Informed Content Delivery Across Adaptive Overlay Networks, 2002
"... Overlay networks have emerged as a powerful and highly flexible method for delivering content. We study how to optimize through-put of large, multipoint transfers across richly connected overlay networks, focusing on the question of what to put in each transmit-ted packet. We first make the case for ..."
Cited by 247 (8 self)
Overlay networks have emerged as a powerful and highly flexible method for delivering content. We study how to optimize throughput of large, multipoint transfers across richly connected overlay networks, focusing on the question of what to put in each transmitted packet. We first make the case for transmitting encoded content in this scenario, arguing for the digital fountain approach which enables end-hosts to efficiently reconstitute the original content of size n from a subset of any n symbols from a large universe of encoded symbols. Such an approach affords reliability and a substantial degree of application-level flexibility, as it seamlessly tolerates packet loss, connection migration, and parallel transfers. However, since the sets of symbols acquired by peers are likely to overlap substantially, care must be taken to enable them to collaborate effectively. We provide a collection of useful algorithmic tools for efficient estimation, summarization, and approximate reconciliation of sets of symbols between pairs of collaborating peers, all of which keep messaging complexity and computation to a minimum. Through simulations and experiments on a prototype implementation, we demonstrate the performance benefits of our informed content delivery mechanisms and how they complement existing overlay network architectures.
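The set-summarization problem the abstract describes can be sketched with its simplest primitive, a Bloom filter: a peer sends a compact summary of the symbols it holds so a partner forwards mostly symbols it lacks. Hedged sketch; the paper's toolkit also covers min-wise sketches and approximate reconciliation trees, and the sizes below are arbitrary.

```python
import hashlib

class BloomFilter:
    """Compact, lossy set summary: membership tests may report false
    positives but never false negatives."""
    def __init__(self, m_bits=1024, k_hashes=4):
        self.m, self.k = m_bits, k_hashes
        self.bits = 0
    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m
    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p
    def __contains__(self, item):
        return all((self.bits >> p) & 1 for p in self._positions(item))

# Peer A summarizes its symbols; peer B forwards only apparently new ones.
a_symbols = set(range(0, 600))
b_symbols = set(range(400, 1000))
summary = BloomFilter()
for s in a_symbols:
    summary.add(s)
useful = [s for s in b_symbols if s not in summary]
print(len(useful), "of", len(b_symbols - a_symbols), "new symbols forwarded")
```

A false positive here means B withholds a symbol A actually needs, which is tolerable with encoded content: any other fresh symbol is an equally good substitute, which is exactly why the digital fountain approach pairs well with approximate reconciliation.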
On coding for reliable communication over packet networks, 2008
"... We consider the use of random linear network coding in lossy packet networks. In particular, we consider the following simple strategy: nodes store the packets that they receive and, whenever they have a transmission opportunity, they send out coded packets formed from random linear combinations of ..."
Cited by 217 (37 self)
We consider the use of random linear network coding in lossy packet networks. In particular, we consider the following simple strategy: nodes store the packets that they receive and, whenever they have a transmission opportunity, they send out coded packets formed from random linear combinations of stored packets. In such a strategy, intermediate nodes perform additional coding yet neither decode nor wait for a block of packets before sending out coded packets. Moreover, all coding and decoding operations have polynomial complexity. We show that, provided packet headers can be used to carry an amount of side-information that grows arbitrarily large (but independently of payload size), random linear network coding achieves packet-level capacity for both single unicast and single multicast connections and for both wireline and wireless networks. This result holds as long as packets received on links arrive according to processes that have average rates. Thus packet losses on links may exhibit correlations in time or with losses on other links. In the special case of Poisson traffic with i.i.d. losses, we give error exponents that quantify the rate of decay of the probability of error with coding delay. Our analysis of random linear network coding shows not only that it achieves packet-level capacity, but also that the propagation of packets carrying “innovative” information follows the propagation of jobs through a queueing network, thus implying that fluid flow models yield good approximations.
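The strategy in the first two sentences reduces to a few lines once the field is fixed. A hedged sketch over GF(2), where a linear combination is plain XOR (deployments more commonly use GF(2^8)); the coefficient vector carried in each header is the side information the abstract refers to, and the generation size is an arbitrary choice.

```python
import random

def random_combination(stored, rng):
    """Emit one coded packet: a random GF(2) linear combination (XOR)
    of every packet this node has stored. Each stored packet is a
    (coeff_header, payload_bytes) pair; the header records how the
    packet decomposes over the original generation."""
    header = [0] * len(stored[0][0])
    payload = bytes(len(stored[0][1]))
    for hdr, body in stored:
        if rng.randint(0, 1):
            header = [a ^ b for a, b in zip(header, hdr)]
            payload = bytes(a ^ b for a, b in zip(payload, body))
    return header, payload

def gf2_rank(headers):
    """Rank of the collected coefficient vectors over GF(2), via
    Gaussian elimination; the sink can decode once rank == generation
    size."""
    basis = {}                            # pivot bit -> reduced row
    for h in headers:
        x = int("".join(map(str, h)), 2)
        while x:
            top = x.bit_length() - 1
            if top not in basis:
                basis[top] = x
                break
            x ^= basis[top]
    return len(basis)

rng = random.Random(0)
k = 4                                     # generation size
generation = [([int(j == i) for j in range(k)], bytes([i]) * 8)
              for i in range(k)]          # source packets: unit headers
coded = [random_combination(generation, rng) for _ in range(k + 2)]
print("decodable:", gf2_rank([h for h, _ in coded]) == k)
```

Note what intermediate nodes do not do: they never decode and never wait for a full generation, matching the abstract; they only re-mix whatever they currently hold.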
Minimum-Cost Multicast over Coded Packet Networks - IEEE Transactions on Information Theory, 2006
"... We consider the problem of establishing minimum-cost multicast connections over coded packet networks, i.e., packet networks where the contents of outgoing packets are arbitrary, causal functions of the contents of received packets. We consider both wireline and wireless packet networks as well as b ..."
Cited by 164 (28 self)
We consider the problem of establishing minimum-cost multicast connections over coded packet networks, i.e., packet networks where the contents of outgoing packets are arbitrary, causal functions of the contents of received packets. We consider both wireline and wireless packet networks as well as both static multicast (where membership of the multicast group remains constant for the duration of the connection) and dynamic multicast (where membership of the multicast group changes in time, with nodes joining and leaving the group). For static multicast, we reduce the problem to a polynomial-time solvable optimization problem, ... and we present decentralized algorithms for solving it. These algorithms, when coupled with existing decentralized schemes for constructing network codes, yield a fully decentralized approach for achieving minimum-cost multicast. By contrast, establishing minimum-cost static multicast connections over routed packet networks is a very difficult problem even using centralized computation, except in the special cases of unicast and broadcast connections. For dynamic multicast, we reduce the problem to a dynamic programming problem and apply the theory of dynamic programming to suggest how it may be solved.
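The polynomial-time reduction mentioned for static multicast has a well-known flow-based shape: with coding, the rate needed on an arc is the maximum, not the sum, of the per-sink flows through it. Below is a hedged sketch of that linear program on the classic butterfly network using scipy; the network, costs, and variable names are illustrative, and the paper's decentralized algorithms solve such problems without a central solver.

```python
import numpy as np
from scipy.optimize import linprog

edges = [("s", "a"), ("s", "b"), ("a", "t1"), ("b", "t2"),
         ("a", "c"), ("b", "c"), ("c", "d"), ("d", "t1"), ("d", "t2")]
cost = [1.0] * len(edges)          # a_ij: per-unit cost of arc (i, j)
sinks, R = ["t1", "t2"], 2.0       # multicast rate R to every sink
nodes = sorted({v for e in edges for v in e})
E, T = len(edges), len(sinks)
nvar = T * E + E                   # per-sink flows x^(t), then z

def xi(t, e):                      # index of x_e^(t)
    return t * E + e

def zi(e):                         # index of z_e
    return T * E + e

# Flow conservation per sink and node: out - in = R at s, -R at the sink.
A_eq, b_eq = [], []
for t, sink in enumerate(sinks):
    for v in nodes:
        row = np.zeros(nvar)
        for e, (u, w) in enumerate(edges):
            if u == v:
                row[xi(t, e)] += 1
            if w == v:
                row[xi(t, e)] -= 1
        A_eq.append(row)
        b_eq.append(R if v == "s" else (-R if v == sink else 0.0))

# With coding, an arc must carry only the max per-sink flow: x <= z.
A_ub, b_ub = [], []
for t in range(T):
    for e in range(E):
        row = np.zeros(nvar)
        row[xi(t, e)], row[zi(e)] = 1.0, -1.0
        A_ub.append(row)
        b_ub.append(0.0)

c = np.concatenate([np.zeros(T * E), np.array(cost)])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * nvar)               # unit capacities
print("min multicast cost with coding:", res.fun)   # 9.0 on the butterfly
```

The x ≤ z constraint is the whole trick: per-sink flows may share an arc without adding up, which is precisely what network coding buys, and it is why the coded problem is an LP while routed minimum-cost multicast (Steiner-tree-like) is hard.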
Capacity of Wireless Erasure Networks - IEEE Transactions on Information Theory, 2006
"... In this paper, a special class of wireless networks, called wireless erasure networks, is considered. In these networks, each node is connected to a set of nodes by possibly correlated erasure channels. The network model incorporates the broadcast nature of the wireless environment by requiring eac ..."
Cited by 149 (12 self)
In this paper, a special class of wireless networks, called wireless erasure networks, is considered. In these networks, each node is connected to a set of nodes by possibly correlated erasure channels. The network model incorporates the broadcast nature of the wireless environment by requiring each node to send the same signal on all outgoing channels. However, we assume there is no interference in reception. Such models are therefore appropriate for wireless networks where all information transmission is packetized and where some mechanism for interference avoidance is already built in. This paper looks at multicast problems over these networks. We obtain the capacity under the assumption that erasure locations on all links of the network are provided to the destinations. It turns out that the capacity region has a nice max-flow min-cut interpretation. The definition of cut-capacity in these networks incorporates the broadcast property of the wireless medium. It is further shown that linear coding at nodes in the network suffices to achieve the capacity region. Finally, the performance of different coding schemes in these networks when no side information is available to the destinations is analyzed.
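The broadcast-aware cut-capacity can be made concrete for the special case of independent erasures: a node on the source side of a cut contributes the probability that at least one of its cross-cut links delivers its single broadcast signal, and capacity is the minimum over cuts. A toy brute-force sketch; the independence assumption and the example network are mine.

```python
from itertools import combinations

def cut_capacity(cut_side, edges, eps):
    """Broadcast cut value: each source-side node counts once, with
    probability that at least one of its crossing links survives
    (assuming independent erasures with probabilities eps)."""
    cap = 0.0
    for i in cut_side:
        crossing = [(u, j) for (u, j) in edges
                    if u == i and j not in cut_side]
        if crossing:
            p_all_erased = 1.0
            for e in crossing:
                p_all_erased *= eps[e]
            cap += 1.0 - p_all_erased
    return cap

def min_cut(source, dest, nodes, edges, eps):
    """Enumerate all source-side cuts (fine for toy networks; a real
    evaluation would use a min-cut algorithm)."""
    others = [v for v in nodes if v not in (source, dest)]
    best = float("inf")
    for r in range(len(others) + 1):
        for extra in combinations(others, r):
            best = min(best, cut_capacity({source, *extra}, edges, eps))
    return best

nodes = ["s", "a", "b", "d"]
edges = [("s", "a"), ("s", "b"), ("a", "d"), ("b", "d")]
eps = {e: 0.2 for e in edges}      # each link erases a packet w.p. 0.2
print("capacity ~", min_cut("s", "d", nodes, edges, eps))
```

On this diamond the binding cut is {s}: because s broadcasts one signal on both outgoing links, its contribution is 1 - 0.2 * 0.2 = 0.96, not 0.8 + 0.8, which is exactly how the cut definition encodes the broadcast constraint.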
On-the-fly verification of rateless erasure codes for efficient content distribution - In Proceedings of the IEEE Symposium on Security and Privacy, 2004
"... Abstract — The quality of peer-to-peer content distribution can suffer when malicious participants intentionally corrupt content. Some systems using simple block-by-block downloading can verify blocks with traditional cryptographic signatures and hashes, but these techniques do not apply well to mor ..."
Cited by 137 (4 self)
The quality of peer-to-peer content distribution can suffer when malicious participants intentionally corrupt content. Some systems using simple block-by-block downloading can verify blocks with traditional cryptographic signatures and hashes, but these techniques do not apply well to more elegant systems that use rateless erasure codes for efficient multicast transfers. This paper presents a practical scheme, based on homomorphic hashing, that enables a downloader to perform on-the-fly verification of erasure-encoded blocks.
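The homomorphic hashing idea admits a compact toy demonstration. This is a hedged sketch with deliberately tiny parameters (real schemes use cryptographically sized primes, and the paper adds batching and other optimizations): hash a block b = (b_1, ..., b_m) with entries mod q as H(b) = prod_i g_i^{b_i} mod p, where q divides p - 1, so the hash of any linear combination of blocks equals the matching combination of the blocks' published hashes.

```python
import random

p, q = 47, 23                     # toy primes with q | p - 1
rng = random.Random(7)

m = 4                             # symbols per block, entries mod q
# Bases of order dividing q: raise random residues to (p-1)/q.
g = [pow(rng.randrange(2, p), (p - 1) // q, p) for _ in range(m)]

def H(block):
    """Homomorphic hash: H(b) = prod_i g_i^{b_i} mod p."""
    out = 1
    for gi, bi in zip(g, block):
        out = out * pow(gi, bi, p) % p
    return out

blocks = [[rng.randrange(q) for _ in range(m)] for _ in range(3)]
hashes = [H(b) for b in blocks]   # published (and signed) by the source

# An encoder mixes blocks with coefficients c_j; the downloader checks
# the coded block against the per-block hashes, without decoding first.
c = [rng.randrange(q) for _ in blocks]
coded = [sum(cj * b[i] for cj, b in zip(c, blocks)) % q
         for i in range(m)]
expected = 1
for cj, hj in zip(c, hashes):
    expected = expected * pow(hj, cj, p) % p
print("verified:", H(coded) == expected)
```

The check succeeds for any honest linear combination and fails (with overwhelming probability at real parameter sizes) for a corrupted one, which is what lets peers discard bad encoded blocks immediately instead of after a full, failed decode.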
High Availability in DHTs: Erasure Coding vs. Replication
"... High availability in peer-to-peer DHTs requires data redundancy. This paper compares two popular redundancy schemes: replication and erasure coding. Unlike previous comparisons, we take the characteristics of the nodes that comprise the overlay into account, and conclude that in some cases the benef ..."
Cited by 115 (1 self)
High availability in peer-to-peer DHTs requires data redundancy. This paper compares two popular redundancy schemes: replication and erasure coding. Unlike previous comparisons, we take the characteristics of the nodes that comprise the overlay into account, and conclude that in some cases the benefits from coding are limited, and may not be worth its disadvantages.
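The comparison has a standard back-of-the-envelope baseline under independent, uniform node availability a: replication with r copies fails only if all copies are offline, while an (n, k) erasure code survives whenever at least k fragments remain. The sketch below computes both at equal storage overhead; the paper's point is precisely that real node populations violate this uniform model, so treat it as the baseline only.

```python
from math import comb

def replication_availability(a, r):
    """Object available iff at least one of r replicas is online."""
    return 1 - (1 - a) ** r

def erasure_availability(a, n, k):
    """(n, k) code: available iff >= k of n fragments are online,
    assuming independent nodes with identical availability a."""
    return sum(comb(n, i) * a**i * (1 - a) ** (n - i)
               for i in range(k, n + 1))

a = 0.5                            # per-node availability
print(f"3x replication : {replication_availability(a, 3):.4f}")
print(f"(12, 4) coding : {erasure_availability(a, 12, 4):.4f}")  # same 3x storage
```

At the same 3x overhead the code wins under this model (about 0.93 versus 0.875 here); the paper's contribution is showing when node heterogeneity and maintenance costs erode that apparent advantage.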
Growth codes: Maximizing sensor network data persistence - In Proceedings of ACM SIGCOMM
"... Sensor networks are especially useful in catastrophic or emergency scenarios such as floods, fires, terrorist attacks or earthquakes where human participation may be too dangerous. However, such disaster scenarios pose an interesting design challenge since the sensor nodes used to collect and commun ..."
Cited by 92 (0 self)
Sensor networks are especially useful in catastrophic or emergency scenarios such as floods, fires, terrorist attacks or earthquakes where human participation may be too dangerous. However, such disaster scenarios pose an interesting design challenge since the sensor nodes used to collect and communicate data may themselves fail suddenly and unpredictably, resulting in the loss of valuable data. Furthermore, because these networks are often expected to be deployed in response to a disaster, or because of sudden configuration changes due to failure, these networks are often expected to operate in a “zero-configuration” paradigm, where data collection and transmission must be initiated immediately, before the nodes have a chance to assess the current network topology. In this paper, we design and analyze techniques to increase “persistence” of sensed data, so that data is more likely to reach a data sink, even as network nodes fail. This is done by replicating data compactly at neighboring nodes using novel “Growth Codes” that increase in efficiency as data accumulates at the sink. We show that Growth Codes preserve more data in the presence of node failures than previously proposed erasure resilient techniques.
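The mechanism behind the "growth" can be sketched in a few lines, hedged heavily: codewords start at degree 1 (immediately useful to an empty sink) and the degree grows as the sink accumulates data, since higher-degree symbols are more likely to be innovative late in decoding. The schedule below, roughly d ≈ 1 / (1 - f) once a fraction f has been recovered, is an illustrative stand-in, not the paper's derived switch points.

```python
import random

def encode_symbol(data, recovered_fraction, rng):
    """Toy Growth Codes encoder: XOR a random subset of the data whose
    size grows with the fraction the sink has already recovered."""
    k = len(data)
    f = min(recovered_fraction, 1 - 1 / k)       # keep the degree finite
    degree = min(k, max(1, int(1 / (1 - f))))    # illustrative schedule
    idx = rng.sample(range(k), degree)
    value = 0
    for i in idx:
        value ^= data[i]
    return frozenset(idx), value

rng = random.Random(3)
data = [rng.randrange(256) for _ in range(50)]
early = encode_symbol(data, 0.0, rng)   # degree 1: directly stores a datum
late = encode_symbol(data, 0.9, rng)    # degree ~10: likely innovative late
print(len(early[0]), len(late[0]))
```

The zero-configuration setting explains the design: early degree-1 symbols are decodable by a sink that knows nothing yet, while later high-degree symbols avoid wasting transmissions on data the sink already holds, which is how persistence improves even as nodes fail.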