Results 1–10 of 25
Wyner-Ziv coding over broadcast channels using hybrid digital/analog transmission
 IEEE International Symposium on Information Theory
, 2008
Optimality and approximate optimality of source-channel separation in networks
 in Proc. IEEE Int. Symp. Inf. Theory
, 2010
Cited by 17 (1 self)
Abstract — We consider the source-channel separation architecture for lossy source coding in communication networks. It is shown that the separation approach is optimal in two general scenarios and is approximately optimal in a third scenario. The two scenarios for which separation is optimal complement each other: the first is when the memoryless sources at source nodes are arbitrarily correlated, each of which is to be reconstructed at possibly multiple destinations within certain distortions, but the channels in this network are synchronized, orthogonal, and memoryless point-to-point channels; the second is when the memoryless sources are mutually independent, each of which is to be reconstructed only at one destination within a certain distortion, but the channels are general, including multi-user channels, such as multiple access, broadcast, interference, and relay channels, possibly with feedback. The third scenario, for which we demonstrate approximate optimality of source-channel separation, generalizes the second scenario by allowing each source to be reconstructed at multiple destinations with different distortions. For this case, the loss from optimality using the separation approach can be upper-bounded when a difference distortion measure is taken, and in the special case of the quadratic distortion measure, this leads to universal constant bounds. Index Terms — Joint source-channel coding, separation.
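For the classical point-to-point special case, the separation principle discussed in this abstract reduces to Shannon's familiar condition: a source can be conveyed within distortion D exactly when its rate-distortion function does not exceed the channel capacity. As a standard textbook illustration (not taken from the paper), for a bandwidth-matched Gaussian source over an AWGN channel:

```latex
R(D) \le C, \qquad
R(D) = \tfrac{1}{2}\log\frac{\sigma^2}{D}, \qquad
C = \tfrac{1}{2}\log(1 + \mathrm{SNR})
\;\;\Longrightarrow\;\;
D_{\min} = \frac{\sigma^2}{1 + \mathrm{SNR}} .
```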
Lossy Source Coding for a Cascade Communication System with Side-Informations
Cited by 15 (0 self)
Abstract — We investigate source coding in a cascade communication system consisting of an encoder, a relay, and an end terminal, where both the relay and the end terminal wish to reconstruct source X with certain fidelities. Additionally, side-informations Z and Y are available at the relay and the end terminal, respectively. The side-information Z at the relay is a physically degraded version of the side-information Y at the end terminal. Inner and outer bounds for the rate-distortion region are provided in this work for general discrete memoryless sources. The rate-distortion region is characterized when the source and side-informations are jointly Gaussian and physically degraded. The doubly symmetric binary source is also investigated, and the inner and outer bounds are shown to coincide in certain distortion regimes.
Side-information scalable source coding
 in EPFL Technical Report
, 2006
Cited by 12 (3 self)
The problem of side-information scalable (SI-scalable) source coding is considered in this work, where the encoder constructs a progressive description such that the receiver with high-quality side information will be able to truncate the bitstream and reconstruct in the rate-distortion sense, while the receiver with low-quality side information will have to receive further data in order to decode. We provide inner and outer bounds on the rate-distortion region for general discrete memoryless sources. The achievable region is shown to be tight for the case that either one of the two decoders requires a lossless reconstruction, as well as for the case with degraded deterministic distortion measures. Furthermore, we show that the gap between the inner and outer bounds can be bounded by a constant when the squared-error distortion measure is used. The notion of perfectly scalable coding is introduced, in which both stages operate on the Wyner-Ziv bound, and necessary and sufficient conditions are given for sources satisfying a mild support condition. Using SI-scalable coding and successive-refinement Wyner-Ziv coding as basic building blocks, we provide a complete characterization of the rate-distortion region for the important quadratic Gaussian case with multiple jointly Gaussian side-informations, where the side-information quality does not have to be monotonic along the scalable coding order. A partial result is provided for the doubly symmetric binary source under the Hamming distortion measure when the worse side information is a constant, for which one of the outer bounds is strictly tighter than the other.
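The Wyner-Ziv bound referenced in this abstract has a well-known closed form in the quadratic Gaussian case (Wyner, 1978); a "perfectly scalable" code has every stage operating on this curve:

```latex
R_{\mathrm{WZ}}(D) \;=\; \left[\tfrac{1}{2}\log\frac{\sigma_{X|Y}^{2}}{D}\right]^{+},
\qquad 0 < D \le \sigma_{X|Y}^{2},
```

where $\sigma_{X|Y}^{2}$ denotes the conditional variance of the source $X$ given the side information $Y$.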
Minimum Expected Distortion in Gaussian Source Coding with Uncertain Side Information
Cited by 7 (1 self)
Abstract — We consider a layered approach to source coding with side information received over an uncertain channel that minimizes expected distortion. Specifically, we assume a Gaussian source encoder whereby the decoder receives a compressed version of the symbol at a given rate, as well as an uncompressed version over a separate side-information channel with slow fading and noise. The decoder knows the realization of the slow fading, but the encoder knows only its distribution. We consider a layered encoding strategy with a base layer describing the source assuming worst-case fading on the side-information channel, and subsequent layers describing the source under better fading conditions. Optimization of the layering scheme utilizes the Heegard-Berger rate-distortion function, which describes the rate required to meet a different distortion constraint for each fading state. When the side-information channel has two discrete fading states, we obtain closed-form expressions for the optimal rate allocation between the fading states and the resulting minimum expected distortion. For multiple fading states, the minimum expected distortion is formulated as the solution of a convex optimization problem. Under discretized Rayleigh fading, we show that the optimal rate allocation puts almost all rate into the base layer associated with the worst-case fading. This implies that uncertain side information yields little performance benefit over no side information. Moreover, as the source coding rate increases, the benefit of uncertain side information decreases.
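The layered optimization described above can be sketched abstractly; the notation below is illustrative rather than the paper's own. With fading states occurring with probabilities $p_1,\dots,p_M$ and the per-state distortions $D_i$ tied to the rate allocation through the Heegard-Berger rate-distortion function $R_{\mathrm{HB}}$, the scheme roughly solves

```latex
\min_{D_1,\dots,D_M}\;\sum_{i=1}^{M} p_i\,D_i
\qquad \text{subject to} \qquad
R_{\mathrm{HB}}(D_1,\dots,D_M) \;\le\; R ,
```

where $R$ is the total source coding rate available across all layers.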
On the Role of Encoder Side-Information in Source Coding for Multiple Decoders
 in Proc. 2006 International Symposium on Information Theory
, 2006
Cited by 6 (2 self)
Abstract — We consider a lossy source coding problem where the description of a source is going to be used by two decoders, each having access to information correlated with the source. This side-information is also present at the encoder. We give inner and outer bounds to the set of achievable rate and distortion triples. For the special case of Gaussian sources with degraded side-information and squared-error distortions, the two bounds coincide and we obtain the true rate-distortion region. As a further specialization, we obtain the rate-distortion region of the Gaussian version of a problem previously solved by Kaspi for discrete memoryless sources. Using this result, we quantify how much revealing the side-information to the encoder helps in such a Gaussian setup.
On scalable source coding for multiple decoders with side-information
 in UCSD ITA Center Inaugural Workshop
, 2006
Cited by 4 (3 self)
Abstract — The problem of side-information scalable (SI-scalable) source coding and the problem of successive refinement in the Wyner-Ziv (SR-WZ) setting are considered in this paper. Both problems can be understood as special cases of the general problem of scalable lossy source coding for multiple decoders, each with access to side information that the encoder does not have. In the former problem, the quality of the side informations deteriorates from the early stage to the later stage, while in the latter problem, the quality of the side informations improves with the stages. The SR-WZ problem was considered by Steinberg and Merhav (TIT, 2004), where it was solved for the special case of two stages. We provide a generalization of their characterization to multiple stages. We also provide achievable rate and outer bounds for the SI-scalable coding problem. Furthermore, the notion of generalized successive refinability with multiple side informations is introduced, which captures whether progressive encoding to satisfy the distortion constraints for different side informations is as good as encoding without the progressive requirement. For the quadratic Gaussian case, by combining the results of SI-scalable and SR-WZ coding, we provide a complete answer to the general scalable source coding problem.
Distributed multistage coding of correlated sources
 in Proc. IEEE Data Compress. Conf
, 2008
Cited by 3 (3 self)
A Calculation of the Heegard-Berger Rate-Distortion Function for a Binary Source
 Information Theory Workshop, 2006. ITW’06 Chengdu. IEEE
, 2006
Cited by 2 (0 self)
Abstract — We provide an explicit calculation of the rate-distortion function for the doubly symmetric binary source (DSBS), when the side information may be absent at the decoder. The rate-distortion function for a general discrete memoryless source was characterized by Heegard and Berger in 1985 [IT-31(6)], who showed that a two-stage coding structure is in fact optimal. However, an explicit characterization of the rate-distortion function for the DSBS, and more importantly the optimal forward testing-channel structure for this source, was not found despite several attempts. In this work, we resolve this open problem. It is shown that in the two-stage coding structure, the optimal testing channel for the first-stage decoder (which does not have side information) is the same as the optimal testing channel for the ordinary symmetric binary source, and this confirms a conjecture made by Fleming and Effros.
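For reference, the Heegard-Berger rate-distortion function computed in this paper, in the two-decoder form where decoder 1 has no side information and decoder 2 observes Y, can be written as (Heegard and Berger, 1985):

```latex
R_{\mathrm{HB}}(D_1, D_2) \;=\;
\min \Big[\, I(X; W_1) \;+\; I(X; W_2 \mid W_1, Y) \,\Big],
```

with the minimum taken over auxiliary random variables $(W_1, W_2)$ jointly distributed with $X$ such that $(W_1, W_2) - X - Y$ forms a Markov chain, decoder 1 can estimate $X$ from $W_1$ within distortion $D_1$, and decoder 2 can estimate $X$ from $(W_1, W_2, Y)$ within distortion $D_2$.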
SCALABLE DISTRIBUTED SOURCE CODING
Cited by 1 (1 self)
This paper considers the problem of scalable distributed coding of correlated sources that are communicated to a central unit. The general setting is typically encountered in sensor networks. The conditions of the communication channels between the source encoders and the fusion center may be time-varying, and it is often desirable to guarantee a base layer of coarse information during channel fades. In addition, the desired system should be robust to various scenarios of channel failure and should utilize all the available information to attain the best possible compression efficiency. Although a standard 'Lloyd-style' distributed coder design algorithm can be generalized to scalable distributed source coding, the resulting algorithm depends heavily on initialization and will virtually always converge to a poor local minimum on the distortion-cost surface. We propose an efficient initialization scheme for such a system, which employs a properly designed multistage distributed coder. In our prior work, we highlighted the fundamental conflict that arises when multistage coding is directly combined with distributed quantization. Here we use the multistage distributed coding system to initialize a scalable distributed coder and propose an iterative algorithm for joint design of all system components once the structural constraint is removed. Simulation results show considerable gains over randomly initialized scalable distributed coder design. Index Terms — Distributed coding, scalable coding, sensor networks.
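The 'Lloyd-style' design loop mentioned in this abstract alternates a nearest-codeword assignment step with a centroid update step. The following is a minimal single-source sketch of that loop (the data and function names are illustrative; the paper's actual design is distributed and scalable, which this sketch does not attempt to reproduce):

```python
import random


def lloyd_quantizer(samples, levels, iters=50):
    """Design a scalar quantizer by alternating assignment and centroid steps."""
    lo, hi = min(samples), max(samples)
    # Initialize the codebook uniformly over the sample range (the kind of
    # naive initialization the paper argues leads to poor local minima in
    # the distributed setting).
    codebook = [lo + (hi - lo) * (i + 0.5) / levels for i in range(levels)]
    for _ in range(iters):
        # Assignment step: map each sample to its nearest codeword.
        cells = [[] for _ in range(levels)]
        for x in samples:
            j = min(range(levels), key=lambda k: (x - codebook[k]) ** 2)
            cells[j].append(x)
        # Centroid step: move each codeword to the mean of its cell
        # (keep the old codeword if its cell is empty).
        codebook = [sum(c) / len(c) if c else codebook[j]
                    for j, c in enumerate(cells)]
    return codebook


def mse(samples, codebook):
    """Mean squared error of quantizing samples with the given codebook."""
    return sum(min((x - c) ** 2 for c in codebook) for x in samples) / len(samples)


if __name__ == "__main__":
    random.seed(0)
    data = [random.gauss(0.0, 1.0) for _ in range(2000)]
    cb = lloyd_quantizer(data, levels=4)
    print(sorted(cb), mse(data, cb))
```

Each iteration can only lower (or keep) the MSE, which is why the algorithm converges, but only to a local minimum that depends on the initialization; this is the sensitivity the paper's multistage initialization scheme is designed to overcome.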