Results 1-3 of 3
"Source coding when the side information may be delayed", submitted to IEEE Trans. Inf. Theory, 2012
Abstract

Cited by 4 (1 self)
Abstract—For memoryless sources, delayed side information at the decoder does not improve the rate-distortion function. However, this is not the case for sources with memory, as demonstrated by a number of works focusing on the special case of (delayed) feedforward. In this paper, a setting is studied in which the encoder is potentially uncertain about the delay with which measurements of the side information, which is available at the encoder, are acquired at the decoder. Assuming a hidden Markov model for the source sequences, at first, a single-letter characterization is given for the setup where the side-information delay is arbitrary and known at the encoder, and the reconstruction at the destination is required to be asymptotically lossless. Then, with delay equal to zero or one source symbol, a single-letter characterization of the rate-distortion region is given for the case where, unbeknownst to the encoder, the side information may be delayed or not. Finally, examples for binary and Gaussian sources are provided. Index Terms—Causal conditioning, hidden Markov model, Markov Gaussian process, multiplexing, rate-distortion function, strictly causal side information.
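As a point of reference for the memoryless baseline this abstract starts from, the classical rate-distortion function of a Bernoulli(p) source under Hamming distortion is R(D) = h(p) − h(D) for 0 ≤ D ≤ min(p, 1−p), and zero beyond. A minimal numerical sketch (function names are illustrative, not from the paper):

```python
import math

def h(x):
    """Binary entropy in bits; h(0) = h(1) = 0 by convention."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def rate_distortion_binary(p, d):
    """R(D) = h(p) - h(D) for a Bernoulli(p) source, Hamming distortion."""
    if d >= min(p, 1 - p):
        return 0.0  # distortion target is achievable at zero rate
    return h(p) - h(d)

# Example: fair binary source, 10% bit-error tolerance.
print(rate_distortion_binary(0.5, 0.1))  # ≈ 0.531 bits/symbol
```

This is the curve that delayed side information cannot improve in the memoryless case; the paper's point is that the picture changes for sources with memory.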
On Successive Refinement for the Kaspi/Heegard-Berger Problem
, 2008
Abstract

Cited by 1 (0 self)
Consider a source that produces independent copies of a triplet of jointly distributed random variables, {Xi, Yi, Zi} for i = 1, 2, .... The process {Xi} is observed at the encoder and is to be reproduced at two decoders, decoder Y and decoder Z, where {Yi} and {Zi} are observed, respectively, in either a causal or non-causal manner. The communication between the encoder and the decoders is carried out in two successive stages. In the first stage, the transmission is available to both decoders, and they reconstruct the source according to the received bitstream and the individual side information ({Zi} or {Yi}). In the second stage, additional information is sent to both decoders, and they refine the reconstructions of the source according to the available side information and the transmissions at both stages. It is desired to find the necessary and sufficient conditions on the communication rates between the encoder and decoders so that the distortions incurred (at each stage) do not exceed given thresholds. For the case of non-degraded causal side information at the decoders, an exact single-letter characterization of the achievable region is derived for the case of pure source coding. Then, for the ...
Source Coding Problems with Conditionally Less Noisy Side Information
, 2014
Abstract

Cited by 1 (0 self)
A computable expression for the rate-distortion (RD) function proposed by Heegard and Berger has eluded information theory for nearly three decades. Heegard and Berger’s single-letter achievability bound is well known to be optimal for physically degraded side information; however, it is not known whether the bound is optimal for arbitrarily correlated side information (general discrete memoryless sources). In this paper, we consider a new setup in which the side information at one receiver is conditionally less noisy than the side information at the other. The new setup includes degraded side information as a special case, and it is motivated by the literature on degraded and less noisy broadcast channels. Our key contribution is a converse proving the optimality of Heegard and Berger’s achievability bound in a new setting. The converse rests upon a certain single-letterization lemma, which we prove using an information-theoretic telescoping identity recently presented by Kramer. We also generalise the above ideas to two different successive-refinement problems.
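The distinction drawn in this abstract between physically degraded and general side information hinges on whether X – Y – Z forms a Markov chain, i.e. whether p(z|x, y) = p(z|y). For finite alphabets this is directly checkable from the joint pmf; a small self-contained sketch (names and the BSC example are illustrative, not from the paper):

```python
from itertools import product

def is_markov_chain(pxyz, tol=1e-12):
    """Check X - Y - Z for a joint pmf given as a dict {(x, y, z): prob}.

    The chain holds iff p(x,y,z) * p(y) == p(x,y) * p(y,z) for all triples,
    which avoids dividing by possibly-zero marginals.
    """
    py, pxy, pyz = {}, {}, {}
    for (x, y, z), p in pxyz.items():
        py[y] = py.get(y, 0.0) + p
        pxy[(x, y)] = pxy.get((x, y), 0.0) + p
        pyz[(y, z)] = pyz.get((y, z), 0.0) + p
    return all(
        abs(p * py[y] - pxy[(x, y)] * pyz[(y, z)]) <= tol
        for (x, y, z), p in pxyz.items()
    )

def bsc(a, b, eps):
    """Transition probability of a binary symmetric channel."""
    return 1 - eps if a == b else eps

# Physically degraded example: X -> BSC(0.1) -> Y -> BSC(0.2) -> Z.
pxyz = {
    (x, y, z): 0.5 * bsc(x, y, 0.1) * bsc(y, z, 0.2)
    for x, y, z in product([0, 1], repeat=3)
}
print(is_markov_chain(pxyz))  # True: Z is a degraded version of Y
```

The conditionally-less-noisy ordering studied in the paper is strictly weaker than this degradedness condition, which is why it covers cases the degraded converse does not.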