Results 1–10 of 170
Coding for errors and erasures in random network coding
2007
Cited by 258 (14 self)
Abstract
The problem of error control in random network coding is considered. A “noncoherent” or “channel-oblivious” model is assumed, where neither transmitter nor receiver is assumed to have knowledge of the channel transfer characteristic. Motivated by the property that random network coding is vector-space preserving, information transmission is modelled as the injection into the network of a basis for a vector space V and the collection by the receiver of a basis for a vector space U. We introduce a metric on the space of all subspaces of a fixed vector space, and show that a minimum-distance decoder for this metric achieves correct decoding if the dimension of the space V ∩ U is large enough. If the dimension of each codeword is restricted to a fixed integer, the code forms a subset of a finite-field Grassmannian. Sphere-packing and sphere-covering bounds, as well as a generalization of the Singleton bound, are provided for such codes. Finally, a Reed-Solomon-like code construction, related to Gabidulin’s construction of maximum rank-distance codes, is provided.
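The subspace distance underlying this construction, d(U, V) = dim(U + V) − dim(U ∩ V), is easy to compute from spanning sets. The sketch below is my own illustration over GF(2), not code from the paper (all names are hypothetical); it uses the identity dim(U ∩ V) = dim U + dim V − dim(U + V):

```python
# Illustration only: Koetter-Kschischang subspace distance over GF(2).
# A subspace is represented by a list of spanning vectors, each packed
# into a Python int (bit i = coordinate i).

def gf2_rank(vectors):
    """Rank of the span of `vectors` over GF(2), via an XOR basis."""
    basis = []
    for v in vectors:
        for b in basis:
            v = min(v, v ^ b)   # clear b's leading bit from v if set
        if v:
            basis.append(v)
    return len(basis)

def subspace_distance(U, V):
    """d(U, V) = dim(U + V) - dim(U ∩ V)."""
    d_u, d_v = gf2_rank(U), gf2_rank(V)
    d_sum = gf2_rank(U + V)          # stacking the spanning sets spans U + V
    d_int = d_u + d_v - d_sum        # modular law for subspace dimensions
    return d_sum - d_int             # equals d_u + d_v - 2*dim(U ∩ V)

# U = span{1000, 0100} and V = span{1000, 0010} share a 1-dimensional
# intersection, so d(U, V) = 2 + 2 - 2*1 = 2.
print(subspace_distance([0b1000, 0b0100], [0b1000, 0b0010]))  # → 2
```

Identical subspaces give distance 0, and inserting or deleting one basis vector changes the distance by exactly 1, which is why this metric models packet erasures and injections so cleanly.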
A rank-metric approach to error control in random network coding
 IEEE Transactions on Information Theory
Cited by 163 (11 self)
Abstract
It is shown that the error control problem in random network coding can be reformulated as a generalized decoding problem for rank-metric codes. This result allows many of the tools developed for rank-metric codes to be applied to random network coding. In the generalized decoding problem induced by random network coding, the channel may supply partial information about the error in the form of erasures (knowledge of an error location but not its value) and deviations (knowledge of an error value but not its location). For Gabidulin codes, an important family of maximum-rank-distance codes, an efficient decoding algorithm is proposed that can fully exploit the correction capability of the code; namely, it can correct any pattern of ε errors, μ erasures, and δ deviations provided 2ε + μ + δ ≤ d − 1, where d is the minimum rank distance of the code. Our approach is based on the coding theory for subspaces introduced by Koetter and Kschischang and can be seen as a practical way to construct codes in that context.
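The stated guarantee, 2ε + μ + δ ≤ d − 1, can be checked mechanically. A trivial helper (my own illustration, not from the paper) makes the trade-off explicit: each erasure or deviation spends half as much of the distance budget as a full error, because its location or value is already known:

```python
def correctable(errors, erasures, deviations, d_min):
    """Gabidulin-code guarantee quoted in the abstract: a pattern of
    `errors` full errors, `erasures`, and `deviations` is correctable
    iff 2*errors + erasures + deviations <= d_min - 1."""
    return 2 * errors + erasures + deviations <= d_min - 1

# With minimum rank distance 5 the code corrects 2 plain errors,
# or 1 error plus 2 erasures, but not 2 errors plus an erasure.
print(correctable(2, 0, 0, 5))  # → True
print(correctable(1, 2, 0, 5))  # → True
print(correctable(2, 1, 0, 5))  # → False
```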
Byzantine Modification Detection in Multicast Networks using Randomized Network Coding
In Proc. IEEE International Symposium on Information Theory
2004
Cited by 118 (13 self)
Abstract
We show how distributed randomized network coding, a robust approach to multicasting in distributed network settings, can be extended to provide Byzantine modification detection without the use of cryptographic functions.
Symbol-level Network Coding for Wireless Mesh Networks
Cited by 84 (2 self)
Abstract
This paper describes MIXIT, a system that improves the throughput of wireless mesh networks. MIXIT exploits a basic property of mesh networks: even when no node receives a packet correctly, any given bit is likely to be received correctly by some node. Instead of insisting on forwarding only correct packets, MIXIT routers use physical-layer hints to make their best guess about which bits in a corrupted packet are likely to be correct and forward them to the destination. Even though this approach inevitably lets erroneous bits through, we find that it can achieve high throughput without compromising end-to-end reliability. The core component of MIXIT is a novel network code that operates on small groups of bits, called symbols. It allows the nodes to opportunistically route groups of bits to their destination with low overhead. MIXIT’s network code also incorporates an end-to-end error correction component that the destination uses to correct any errors that might seep through. We have implemented MIXIT on a software radio platform running the Zigbee radio protocol. Our experiments on a 25-node indoor testbed show that MIXIT has a throughput gain of 2.8× over MORE, a state-of-the-art opportunistic routing scheme, and about 3.9× over traditional routing using the ETX metric.
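The forwarding decision the abstract describes can be caricatured in a few lines: keep only the symbols whose physical-layer confidence clears a threshold, then combine what survives. This is my own toy sketch, not MIXIT's actual code; the 0.9 threshold is invented, and the XOR stands in for the random linear code over a larger field that MIXIT really uses:

```python
# Toy sketch of symbol-level filtering and mixing; not MIXIT's real code.
# A received packet is a list of (symbol, phy_confidence) pairs, with
# each symbol a small integer.

def clean_symbols(packet, threshold=0.9):
    """Keep (index, symbol) pairs whose PHY confidence is high enough."""
    return [(i, s) for i, (s, conf) in enumerate(packet) if conf >= threshold]

def mix(clean):
    """XOR-combine the surviving symbols (a GF(2) stand-in for the
    random linear network code MIXIT actually applies per symbol)."""
    out = 0
    for _, s in clean:
        out ^= s
    return out

packet = [(0xA5, 0.97), (0x3C, 0.42), (0x5A, 0.91)]
kept = clean_symbols(packet)   # drops the low-confidence symbol 0x3C
print(hex(mix(kept)))          # → 0xff
```

The indices travel with the symbols so the destination knows which positions each coded symbol covers; the end-to-end error-correcting code then mops up the occasional wrong guess.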
Signatures for content distribution with network coding
 in Proc. 2007 International Symposium on Information Theory
Cited by 74 (6 self)
Abstract
Recent research has shown that network coding can be used in content distribution systems to improve the speed of downloads and the robustness of the systems. However, such systems are very vulnerable to attacks by malicious nodes, and we need a signature scheme that allows nodes to check the validity of a packet without decoding. In this paper, we propose such a signature scheme for network coding. Our scheme makes use of the linearity property of the packets in a coded system, and allows nodes to easily check the integrity of the packets received. We show that the proposed scheme is secure, and that its overhead is negligible for large files.
Signing a Linear Subspace: Signature Schemes for Network Coding
Cited by 73 (9 self)
Abstract
Network coding offers increased throughput and improved robustness to random faults in completely decentralized networks. In contrast to traditional routing schemes, however, network coding requires intermediate nodes to modify data packets en route; for this reason, standard signature schemes are inapplicable and it is a challenge to provide resilience to tampering by malicious nodes. Here, we propose two signature schemes that can be used in conjunction with network coding to prevent malicious modification of data. In particular, our schemes can be viewed as signing linear subspaces, in the sense that a signature σ on V authenticates exactly those vectors in V. Our first scheme is homomorphic and has better performance, with both public-key size and per-packet overhead being constant. Our second scheme does not rely on random oracles and uses weaker assumptions. We also prove a lower bound on the length of signatures for linear subspaces, showing that both of our schemes are essentially optimal in this regard.
On metrics for error correction in network coding
 IEEE Trans. Inf. Theory
2009
Cited by 40 (4 self)
Abstract
The problem of error correction in both coherent and noncoherent network coding is considered under an adversarial model. For coherent network coding, where knowledge of the network topology and network code is assumed at the source and destination nodes, the error correction capability of an (outer) code is succinctly described by the rank metric; as a consequence, it is shown that universal network error-correcting codes achieving the Singleton bound can be easily constructed and efficiently decoded. For noncoherent network coding, where knowledge of the network topology and network code is not assumed, the error correction capability of a (subspace) code is given exactly by a new metric, called the injection metric, which is closely related to, but different from, the subspace metric of Kötter and Kschischang. In particular, in the case of a non-constant-dimension code, the decoder associated with the injection metric is shown to correct more errors than a minimum-subspace-distance decoder. All of these results are based on a general approach to adversarial error correction, which could be useful for other adversarial channels beyond network coding.
Index Terms: Adversarial channels, error correction, injection distance, network coding, rank distance, subspace codes.
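The injection metric mentioned here is d_I(U, V) = max(dim U, dim V) − dim(U ∩ V), versus the subspace metric d_S(U, V) = dim(U + V) − dim(U ∩ V); the two satisfy d_I ≤ d_S ≤ 2·d_I, and they coincide only up to how unequal the dimensions are. A self-contained sketch over GF(2) (my own illustration; the names are hypothetical):

```python
# Illustration only: injection distance vs. subspace distance over GF(2).
# Subspaces are given as lists of spanning vectors packed into ints.

def gf2_rank(vectors):
    """Rank of the span of `vectors` over GF(2), via an XOR basis."""
    basis = []
    for v in vectors:
        for b in basis:
            v = min(v, v ^ b)
        if v:
            basis.append(v)
    return len(basis)

def distances(U, V):
    """Return (injection distance, subspace distance)."""
    d_u, d_v = gf2_rank(U), gf2_rank(V)
    d_sum = gf2_rank(U + V)          # dim(U + V)
    d_int = d_u + d_v - d_sum        # dim(U ∩ V) by the modular law
    d_subspace = d_sum - d_int
    d_injection = max(d_u, d_v) - d_int
    return d_injection, d_subspace

# Equal-dimension spaces sharing a 1-dimensional intersection:
d_i, d_s = distances([0b1000, 0b0100], [0b1000, 0b0010])
print(d_i, d_s)  # → 1 2
assert d_i <= d_s <= 2 * d_i
```

For equal-dimension codewords the two metrics order candidates the same way; the abstract's point is that once codeword dimensions vary, minimizing injection distance can out-correct minimizing subspace distance.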
Practical defenses against pollution attacks in intra-flow network coding for wireless mesh networks
2009
Cited by 34 (6 self)
Abstract
Recent studies show that network coding can provide significant benefits to network protocols, such as increased throughput, reduced network congestion, higher reliability, and lower power consumption. The core principle of network coding is that intermediate nodes actively mix input packets to produce output packets. This mixing subjects network coding systems to a severe security threat, known as a pollution attack, where attacker nodes inject corrupted packets into the network. Corrupted packets propagate in an epidemic manner, depleting network resources and significantly decreasing throughput. Pollution attacks are particularly dangerous in wireless networks, where attackers can easily inject packets or compromise devices due to the increased network vulnerability. In this paper, we address pollution attacks against network coding systems in wireless mesh networks. We demonstrate that previous
Index coding: An interference alignment perspective
In International Symposium on Information Theory
2012
Cited by 33 (9 self)
Abstract
The index coding problem is studied from an interference alignment perspective, providing new results as well as new insights into, and generalizations of, previously known results. An equivalence is established between the capacity of multiple-unicast index coding (where each message is desired by exactly one receiver) and groupcast index coding (where a message can be desired by multiple receivers), which settles the heretofore open question of the insufficiency of linear codes for the multiple-unicast index coding problem by equivalence with groupcast settings, where this question has previously been answered. Necessary and sufficient conditions for the achievability of rate one-half per message in the index coding problem are shown to be a natural consequence of interference alignment constraints, and generalizations to the feasibility of rate 1/(L+1) per message when each destination desires at least L messages are similarly obtained. Finally, capacity-optimal solutions are presented for a series of symmetric index coding problems inspired by the local connectivity and local interference characteristics of wireless networks. The solutions are based on vector linear coding.
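The textbook toy instance makes the side-information gain concrete: three receivers each want one message and already know the other two, so a single XOR broadcast replaces three uncoded transmissions. This is the standard illustrative example, not one of the paper's constructions:

```python
# Toy index coding instance (standard textbook example, not from the paper).
# Receiver i wants msgs[i] and already knows every other message.

def encode(msgs):
    """One broadcast symbol: the XOR of all messages."""
    out = 0
    for m in msgs:
        out ^= m
    return out

def decode(coded, side_info):
    """XOR out the known messages to recover the wanted one."""
    for m in side_info:
        coded ^= m
    return coded

msgs = [5, 12, 7]
coded = encode(msgs)  # 5 ^ 12 ^ 7 = 14
for i in range(len(msgs)):
    side = [m for j, m in enumerate(msgs) if j != i]
    assert decode(coded, side) == msgs[i]
print(coded)  # → 14
```

One transmission serving three receivers is rate 1/1 per message here because each destination knows all other messages; with only L known messages per destination, the alignment arguments in the paper yield the rate 1/(L+1) threshold quoted above.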
Secure network coding over the integers
 In Public Key Cryptography — PKC ’10, Springer LNCS 6056
2010
Cited by 31 (3 self)
Abstract
Network coding has received significant attention in the networking community for its potential to increase throughput and improve robustness without any centralized control. Unfortunately, network coding is highly susceptible to “pollution attacks,” in which malicious nodes modify packets in a way that prevents the reconstruction of information at recipients; such attacks cannot be prevented using standard end-to-end cryptographic authentication because network coding requires that intermediate nodes modify data packets in transit. Specialized solutions to the problem have been developed in recent years based on homomorphic hashing and homomorphic signatures. The latter are more bandwidth-efficient but require more computation; in particular, the only known construction uses bilinear maps. We contribute to this area in several ways. We present the first homomorphic signature scheme based solely on the RSA assumption (in the random oracle model), and present a homomorphic hashing scheme based on composite moduli that is computationally more efficient than existing schemes (and which leads to secure network coding signatures based solely on the hardness of factoring in the standard model). Both schemes use shorter public keys than previous
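The homomorphic-hashing idea behind such schemes is easy to demonstrate: hash a packet vector as a product of fixed generators raised to its entries modulo a composite N, so the hash of any integer linear combination of packets equals the same combination applied to the hashes. The sketch below is my own demonstration of the algebra with tiny, cryptographically worthless parameters, not the paper's construction:

```python
import math

# Toy homomorphic hash over a composite modulus (insecure demo parameters).
N = 1019 * 1031          # product of two small primes; a real N would be huge
G = [2, 3, 5]            # fixed public "generators", one per vector entry

def hash_vec(v):
    """H(v) = prod_i G[i]^v[i] mod N (entries are non-negative integers)."""
    return math.prod(pow(g, x, N) for g, x in zip(G, v)) % N

# Homomorphic property: H(a*v1 + b*v2) == H(v1)^a * H(v2)^b (mod N),
# so an intermediate node's coded packet can be verified against the
# hashes of the original packets alone, without decoding.
v1, v2 = [1, 2, 3], [4, 0, 1]
a, b = 3, 5
combined = [a * x + b * y for x, y in zip(v1, v2)]
lhs = hash_vec(combined)
rhs = (pow(hash_vec(v1), a, N) * pow(hash_vec(v2), b, N)) % N
print(lhs == rhs)  # → True
```

Working over the integers (rather than reducing coefficients modulo a field size) is what lets the exponents add up cleanly even though the group order modulo the composite N stays secret, which is the angle the paper's title refers to.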