Results 11–20 of 60
On the separation of lossy source-network coding and channel coding in wireline networks
in Proc. IEEE International Symposium on Information Theory, 2010
Compressive Sensing Over Networks
Cited by 10 (1 self)
Abstract—In this paper, we demonstrate some applications of compressive sensing over networks. We make a connection between compressive sensing and traditional information-theoretic techniques in source coding and channel coding. Our results provide an explicit tradeoff between the rate and the decoding complexity. The key difference between compressive sensing and traditional information-theoretic approaches lies at the decoding side. Whereas optimal decoders for recovering a source-coded signal have high complexity, the compressive sensing decoder amounts to a linear program or a convex optimization. First, we investigate applications of compressive sensing to distributed compression of correlated sources. Using compressive sensing, we propose a compression scheme for a family of correlated sources with a modularized decoder, providing a tradeoff between the compression rate and the decoding complexity. We call this scheme Sparse Distributed Compression. We use this compression scheme for a general multicast network with correlated sources: we first decode some of the sources by a network decoding technique and then use a compressive sensing decoder to recover all of the sources. Next, we investigate applications of compressive sensing to channel coding. We propose a coding scheme that combines compressive sensing and random channel coding for a high-SNR point-to-point Gaussian channel. We call this scheme Sparse Channel Coding. We propose a modularized decoder providing a tradeoff between the capacity loss and the decoding complexity. At the receiver side, we first use a compressive sensing decoder on the noisy signal to obtain a noisy estimate of the original signal, and then apply a traditional channel coding decoder to find the original signal.
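The abstract's central point, that the compressive sensing decoder is a linear or convex optimization rather than an exponential search, can be illustrated with basis pursuit, min ||x||_1 subject to Ax = y, cast as a linear program. This is a generic sketch; the matrix sizes and sparsity level below are illustrative and not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||x||_1 subject to A x = y as a linear program.

    Standard split x = u - v with u, v >= 0, so ||x||_1 = sum(u + v).
    """
    m, n = A.shape
    c = np.ones(2 * n)            # objective: sum of u and v entries
    A_eq = np.hstack([A, -A])     # A (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]

# toy instance: 3-sparse signal, 20 random measurements of a length-40 vector
rng = np.random.default_rng(0)
n, m, k = 40, 20, 3
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n))
x_hat = basis_pursuit(A, A @ x)   # typically recovers x exactly in this regime
```

The LP guarantees that the returned vector is measurement-consistent and has L1 norm no larger than that of the true sparse signal; exact recovery then holds with high probability for random Gaussian measurements.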
Minimum-cost subgraphs for joint distributed source and network coding
in Proc. of Workshop on Network Coding, Theory and Applications, 2007
Cited by 9 (1 self)
Abstract — We consider multicast of correlated sources over a network. Assuming the use of random network coding, we provide a linear optimization formulation for the allocation of link rates in the network, also known as subgraph construction. Such an approach requires joint distributed source and network coding, which often has a lower cost than that required by separated source and network coding. We support this result with simulations on randomly generated networks and on network data collected from a Future Combat Systems (FCS) exercise at Lakehurst, NJ.
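The flavor of such a subgraph-construction LP can be sketched on a toy instance: a single multicast session at rate 2 on the classic butterfly network with unit edge capacities and unit costs. Under network coding, the subgraph capacity z_e only needs to cover the maximum of the per-sink conceptual flows, not their sum. This is a simplified illustration; the correlated-source (Slepian-Wolf) terms of the paper's formulation are omitted.

```python
import numpy as np
from scipy.optimize import linprog

# classic butterfly: source s multicasts at rate 2 to sinks t1 and t2
edges = [("s", "1"), ("s", "2"), ("1", "3"), ("2", "3"), ("3", "4"),
         ("1", "t1"), ("2", "t2"), ("4", "t1"), ("4", "t2")]
nodes = ["s", "1", "2", "3", "4", "t1", "t2"]
sinks = ["t1", "t2"]
E = len(edges)
R = 2.0                 # multicast rate
cost = np.ones(E)       # unit cost per unit of subgraph capacity z_e

# variables: one conceptual flow f^t per sink (E vars each), then z (E vars)
n_var = (len(sinks) + 1) * E
c = np.concatenate([np.zeros(len(sinks) * E), cost])

# flow conservation per sink flow: inflow - outflow = -R at s, +R at sink t
A_eq, b_eq = [], []
for k, t in enumerate(sinks):
    for v in nodes:
        row = np.zeros(n_var)
        for i, (u, w) in enumerate(edges):
            if w == v:
                row[k * E + i] += 1.0
            if u == v:
                row[k * E + i] -= 1.0
        A_eq.append(row)
        b_eq.append(-R if v == "s" else (R if v == t else 0.0))

# network coding: z_e covers the per-edge maximum, i.e. f^t_e <= z_e
A_ub, b_ub = [], []
for k in range(len(sinks)):
    for i in range(E):
        row = np.zeros(n_var)
        row[k * E + i] = 1.0
        row[len(sinks) * E + i] = -1.0
        A_ub.append(row)
        b_ub.append(0.0)

bounds = [(0, None)] * (len(sinks) * E) + [(0, 1)] * E  # unit capacities
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=bounds)
```

At the optimum every butterfly edge is used at full capacity (total cost 9), and rate 2 is achievable to both sinks only because the bottleneck edge is shared by both conceptual flows.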
Communicating the sum of sources in a 3-sources/3-terminals network. Manuscript, 2009, available at http://www.openu.ac.il/home/mikel/ISIT09/ISIT09.pdf
Cited by 8 (1 self)
Abstract—We consider the network communication scenario in which a number of sources si, each holding independent information Xi, wish to communicate the sum ∑ Xi to a set of terminals tj. In this work we consider directed acyclic graphs with unit-capacity edges and independent sources of unit entropy. The case in which there are only two sources or only two terminals was considered by Ramamoorthy [ISIT 2008], where it was shown that communication is possible if and only if each source-terminal pair si/tj is connected by at least one path. In this work we study the communication problem in general, and show that even for the case of three sources and three terminals, a single path connecting the source/terminal pairs does not suffice to communicate ∑ Xi. We then present an efficient encoding scheme which enables the communication of ∑ Xi for the three-sources, three-terminals case, given that each source-terminal pair is connected by two edge-disjoint paths. Our encoding scheme includes a structural decomposition of the network at hand which may be found useful for other network coding problems as well.
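The basic primitive behind such schemes is that an intermediate node forwards the modular sum of its incoming messages, so the sum ∑ Xi accumulates along the network without any node ever holding all the sources. The toy DAG and field size below are illustrative; this is not the paper's three-source/three-terminal construction.

```python
P = 7  # a small prime field; unit-entropy bit sources would correspond to P = 2

def communicate_sum(edges, source_vals, topo_order):
    """Each non-source node transmits the mod-P sum of its incoming messages.

    The terminal ends up with sum(X_i) mod P provided every source reaches it
    through exactly one path in the chosen subgraph (toy assumption).
    """
    val = dict(source_vals)
    for v in topo_order:
        if v not in val:
            val[v] = sum(val[u] for (u, w) in edges if w == v) % P
    return val

# toy subgraph: s1 and s2 meet at relay m; s3 joins directly at terminal t
edges = [("s1", "m"), ("s2", "m"), ("m", "t"), ("s3", "t")]
out = communicate_sum(edges, {"s1": 3, "s2": 5, "s3": 6}, ["m", "t"])
```

The relay m carries only (X1 + X2) mod P, one symbol on one unit-capacity edge, which is exactly the saving over routing both X1 and X2 separately.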
Practical source-network decoding
in ISWCS, 2009
Cited by 7 (2 self)
Abstract—When correlated sources are to be communicated over a network to more than one sink, joint source-network coding is, in general, required for information-theoretically optimal transmission. Whereas on the encoder side simple randomized schemes based on linear codes suffice, the decoder is required to perform joint source-network decoding, which is computationally expensive. Focusing on maximum a posteriori decoders (or, in the case of continuous sources, conditional mean estimators), we show how to exploit (structural) knowledge about the network topology as well as the source correlations, giving rise to an efficient decoder implementation (in some cases even with linear dependency on the number of nodes). In particular, we show how to statistically represent the overall system (including the packets) by a factor graph on which the sum-product algorithm can be run. A proof of concept is provided in the form of a working decoder for the case of three sources and two sinks.
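The sum-product machinery the decoder is built on can be shown on a toy chain-structured factor graph: three binary variables, two pairwise factors, and message passing that recovers the marginal of the middle variable without enumerating the full joint. The factor tables are made up for illustration and have nothing to do with the paper's source or packet models.

```python
import numpy as np

# chain factor graph: x1 --f12-- x2 --f23-- x3, all variables binary
f12 = np.array([[0.9, 0.1],
                [0.2, 0.8]])   # f12[x1, x2]
f23 = np.array([[0.7, 0.3],
                [0.4, 0.6]])   # f23[x2, x3]

# sum-product messages into x2 from each neighboring factor
msg_f12_to_x2 = f12.sum(axis=0)   # sum out x1
msg_f23_to_x2 = f23.sum(axis=1)   # sum out x3

# marginal of x2 = normalized product of incoming messages
p_x2 = msg_f12_to_x2 * msg_f23_to_x2
p_x2 = p_x2 / p_x2.sum()

# brute-force check over the full joint p(x1, x2, x3)
joint = f12[:, :, None] * f23[None, :, :]
p_x2_bf = joint.sum(axis=(0, 2))
p_x2_bf = p_x2_bf / p_x2_bf.sum()
```

On a tree, message passing is exact and its cost grows linearly with the number of nodes, which is the efficiency the abstract refers to.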
Towards Optimum Cost in Multi-Hop Networks with Arbitrary Network Demands
Cited by 6 (5 self)
This paper considers the problem of minimizing the communication cost for a general multi-hop network with correlated sources and multiple sinks. For the single-sink scenario, it has been shown that this problem can be decoupled, without loss of optimality, into two separate subproblems of distributed source coding and finding the optimal routing (transmission structure). It has further been established that, under certain assumptions, such decoupling also applies in the general case of multiple sinks and arbitrary network demands. We show that these assumptions are significantly restrictive, and further provide examples to substantiate the loss, including settings where removing the assumptions yields unbounded performance gains. Finally, an approach to solving the unconstrained problem, where routing and coding cannot be decoupled, is derived based on Han and Kobayashi’s achievability region for multiterminal coding.
On distributed distortion optimization for correlated sources
in Proc. of IEEE International Symposium on Information Theory, 2007
Cited by 5 (2 self)
Abstract — We consider lossy data compression in capacity-constrained networks with correlated sources. We develop, using dual decomposition, a distributed algorithm that maximizes an aggregate utility measure defined in terms of the distortion levels of the sources. No coordination among sources is required; each source adjusts its distortion level according to distortion prices fed back by the sinks. The algorithm is developed for the case of squared-error distortion and high-resolution coding, where the rate-distortion region is known, and is easily extended to achievable regions that can be expressed in a related form. Our distributed optimization framework applies to unicast and multicast, with and without network coding. A numerical example shows relatively fast convergence, allowing the algorithm to be used in time-varying networks.
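The price-feedback mechanism can be sketched on a toy version of the problem: two unit-variance sources under the high-resolution rate function r(D) = ½ log2(σ²/D), a shared capacity C, and utility -D (i.e., minimize distortion). Each source's best response to the price λ has a closed form, and the sink adjusts λ by subgradient ascent on the dual. All parameters and the utility choice are illustrative assumptions, not the paper's setup.

```python
import math

C = 2.0          # shared capacity (bits)
sigma2 = 1.0     # source variance
n_src = 2

def best_response(lam):
    """Source-local problem: max_D [-D - lam * r(D)], r(D) = 0.5*log2(sigma2/D).

    Setting the derivative to zero gives D = lam / (2 ln 2): no coordination
    with other sources is needed, only the current price lam.
    """
    return lam / (2.0 * math.log(2))

def rate(D):
    return 0.5 * math.log2(sigma2 / D)

lam = 1.0        # initial distortion price
step = 0.01
for _ in range(5000):
    D = best_response(lam)
    excess = n_src * rate(D) - C           # subgradient of the dual function
    lam = max(1e-9, lam + step * excess)   # price rises when capacity is exceeded

# at the optimum the capacity constraint is tight: 2 * r(D) = C, so D = 0.25
```

The sinks never need the sources' utility functions, only the aggregate rate, which is what makes the scheme distributed.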
Reduced-dimension linear transform coding of correlated signals in networks
IEEE Trans. Signal Processing, 2012
On the Continuity of Achievable Rate Regions for Source Coding over Networks
Cited by 4 (0 self)
Abstract — The continuity of achievable rate regions for source coding over networks is considered. We show that rate-distortion regions are continuous with respect to distortion vectors. We then focus on the continuity of lossless rate regions with respect to the source distribution: first, continuity is proven for general networks with independent sources; then, for the case of dependent sources, continuity is proven both in examples where one-letter characterizations are known and in examples where they are not; the proofs in the latter case rely on the concavity of the rate regions for those networks.