Results 1–10 of 16
Capacity of a class of modulo-sum relay channels
IEEE Trans. Inf. Theory, 2009
Cited by 44 (3 self)
Abstract—This paper characterizes the capacity of a class of modulo-additive noise relay channels, in which the relay observes a corrupted version of the noise and has a separate channel to the destination. The capacity is shown to be strictly below the cut-set bound in general and achievable using a quantize-and-forward strategy at the relay. This result confirms a conjecture by Ahlswede and Han about the capacity of channels with rate-limited state information at the destination for this particular class of channels.
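As a toy illustration of the setting in this abstract (a sketch under assumed parameters, not the paper's coding scheme): in a modulo-additive channel Y = (X + Z) mod q, even a coarsely quantized description of the noise Z, forwarded by the relay, lets the destination cancel much of the noise.

```python
import random

# Illustrative sketch: modulo-additive channel Y = (X + Z) mod q.
# The relay describes the noise Z with log2(levels) bits; for simplicity
# the relay here sees Z without corruption (the paper's relay does not).
q = 16        # alphabet size (assumed for illustration)
levels = 4    # relay quantizer levels, i.e. 2 bits per noise symbol

def quantize(z, q, levels):
    """Map z to the midpoint of its quantizer bin."""
    bin_size = q // levels
    return (z // bin_size) * bin_size + bin_size // 2

random.seed(0)
n = 10_000
errors_with_relay = 0
errors_without = 0
for _ in range(n):
    x = random.randrange(q)          # transmitted symbol
    z = random.randrange(q)          # additive noise
    y = (x + z) % q                  # channel output
    z_hat = quantize(z, q, levels)   # relay's coarse noise description
    x_hat = (y - z_hat) % q          # destination cancels quantized noise
    errors_with_relay += (x_hat != x)
    errors_without += (y != x)       # no relay: best guess is x = y

print(errors_with_relay / n, errors_without / n)
```

Even this crude 2-bit description lowers the symbol error rate noticeably relative to ignoring the relay entirely.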
On the Role of the Refinement Layer in Multiple Description Coding and Scalable Coding
Cited by 9 (3 self)
Abstract—We clarify the relationship among several existing achievable multiple description rate-distortion regions by investigating the role of the refinement layer in multiple description coding. Specifically, we show that the refinement layer in the El Gamal–Cover (EGC) scheme and the Venkataramani–Kramer–Goyal (VKG) scheme can be removed; as a consequence, the EGC region is equivalent to the EGC* region (an antecedent version of the EGC region), while the VKG region (when specialized to the 2-description case) is equivalent to the Zhang–Berger (ZB) region. Moreover, we prove that for multiple description coding with individual and hierarchical distortion constraints, the number of layers in the VKG scheme can be significantly reduced when only certain weighted sum rates are concerned. The role of the refinement layer in scalable coding (a special case of multiple description coding) is also studied. Index Terms—Contrapolymatroid, multiple description coding, rate-distortion region, scalable coding, successive refinement.
Improved bounds for the rate loss of multiresolution source codes
IEEE Transactions on Information Theory, 2003
Cited by 9 (4 self)
Abstract—In this paper, we present new bounds for the rate loss of multiresolution source codes (MRSCs). Considering an M-resolution code, the rate loss at the i-th resolution with distortion D_i is defined as L_i = R_i − R(D_i), where R_i is the rate achievable by the MRSC at stage i. This rate loss describes the performance degradation of the MRSC compared to the best single-resolution code with the same distortion. For two-resolution source codes, there are three scenarios of particular interest: i) when both resolutions are equally important; ii) when the rate loss at the first resolution is 0 ...
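In standard MRSC notation (assumed here, since the listing's mathematical symbols were lost in extraction), the rate-loss definition can be written out together with the single-resolution benchmark it is compared against:

```latex
% Rate loss of an M-resolution source code at resolution i:
L_i = R_i - R(D_i), \qquad i = 1,\dots,M,
% where R_i is the rate the MRSC achieves at stage i and R(D_i) is the
% single-resolution rate-distortion function at distortion D_i.
%
% Example (assumed for illustration): for a Gaussian source with
% variance \sigma^2 under squared-error distortion,
R(D) = \tfrac{1}{2}\log_2\frac{\sigma^2}{D}, \qquad 0 < D \le \sigma^2 .
```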
Successive Refinement of Vector Sources Under Individual Distortion Criteria
Cited by 5 (0 self)
Abstract—The successive refinement problem is extended to vector sources where individual distortion constraints are posed on each vector component. For vector Gaussian sources with squared-error distortion, a single-letter rate-distortion characterization is inherited from the previously studied Gaussian multiple descriptions problem with covariance distortion constraints. Though this characterization is amenable to well-known numerical convex optimization techniques, an analytical solution is difficult to obtain in full generality even for 2D sources. In this work, the special case of successive refinability is addressed analytically. Specifically, vector Gaussian sources are shown to be not successively refinable everywhere, unlike scalar Gaussian sources. It is also shown that, for 2D Gaussian sources, the rate loss at the second stage can be as high as 0.5 b/sample in a “degenerate” scenario corresponding to what is known as sequential coding of correlated sources. Finally, analysis of 2D binary symmetric sources with Hamming distortion reveals that the behavior of these sources with respect to successive refinability exhibits remarkable similarity to their 2D Gaussian counterparts. Index Terms—Individual distortion criteria, rate loss, successive refinement, vector sources.
A Random Variable Substitution Lemma With Applications to Multiple Description Coding
2009
Cited by 5 (2 self)
We establish a random variable substitution lemma and use it to investigate the role of the refinement layer in multiple description coding, which clarifies the relationship among several existing achievable multiple description rate-distortion regions. Specifically, it is shown that the El Gamal–Cover (EGC) region is equivalent to the EGC* region (an antecedent version of the EGC region), while the Venkataramani–Kramer–Goyal (VKG) region (when specialized to the 2-description case) is equivalent to the Zhang–Berger (ZB) region. Moreover, we prove that for multiple description coding with individual and hierarchical distortion constraints, the number of layers in the VKG scheme can be significantly reduced when only certain weighted sum rates are concerned. The role of the refinement layer in scalable coding (a special case of multiple description coding) is also studied.
Successive refinement for hypothesis testing and lossless one-helper problem
IEEE Trans. Inf. Theory, 2008
Cited by 5 (0 self)
Abstract—We investigate two closely related successive refinement (SR) coding problems: 1) In the hypothesis testing (HT) problem, bivariate hypothesis ...
Selective Encryption and Scalable Speech Coding for Voice Communications over Multi-Hop Wireless Links
Sparse Linear Representation
Cited by 2 (0 self)
Abstract—This paper studies the question of how well a signal can be represented by a sparse linear combination of reference signals from an overcomplete dictionary. When the dictionary size is exponential in the dimension of the signal, the exact characterization of the optimal distortion is given as a function of the dictionary size exponent and the number of reference signals used in the linear representation. Roughly speaking, every signal is sparse if the dictionary size is exponentially large, no matter how small the exponent is. Furthermore, an iterative method similar to matching pursuit that successively finds the best reference signal at each stage gives asymptotically optimal representations. This method is essentially equivalent to successive refinement for multiple descriptions and provides a simple alternative proof of the successive refinability of white Gaussian sources.
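The iterative method this abstract mentions can be sketched as plain matching pursuit over a random unit-norm dictionary (the dimensions and sparsity below are arbitrary assumptions, not values from the paper):

```python
import numpy as np

# Matching-pursuit sketch: at each stage, pick the dictionary column that
# best correlates with the current residual, then subtract its projection.
rng = np.random.default_rng(0)
d, dict_size, n_terms = 32, 1024, 8      # dimension, dictionary size, sparsity

D = rng.standard_normal((d, dict_size))
D /= np.linalg.norm(D, axis=0)           # unit-norm reference signals

x = rng.standard_normal(d)               # signal to represent

residual = x.copy()
approx = np.zeros(d)
for _ in range(n_terms):
    scores = D.T @ residual              # correlation with every atom
    k = int(np.argmax(np.abs(scores)))   # best-matching reference signal
    approx += scores[k] * D[:, k]        # unit norm, so score = coefficient
    residual -= scores[k] * D[:, k]

# Each stage only shrinks the residual, mirroring successive refinement.
print(np.linalg.norm(residual), np.linalg.norm(x))
```

Because the atoms have unit norm, each subtraction removes the residual's component along the chosen atom, so the residual energy is non-increasing stage by stage.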
On Scalable Distributed Coding of Correlated Sources
Cited by 1 (0 self)
Abstract—This paper considers the problem of scalable distributed coding of correlated sources that are communicated to a central unit, a setting typically encountered in sensor networks. As communication channel conditions may vary with time, it is often desirable to guarantee a base layer of coarse information during channel fades, as well as robustness to channel link (or sensor) failures. The main contribution is twofold. First, we consider the special case of multistage distributed coding, and show that a naive combination of distributed coding with multistage coding yields poor rate-distortion performance, due to underlying conflicts between the objectives of these two coding methods. An appropriate system paradigm is developed, which allows such tradeoffs to be explicitly controlled. Next, we consider the unconstrained scalable distributed coding problem. Although a standard “Lloyd-style” distributed coder design algorithm is easily generalized to encompass scalable coding, the algorithm's performance is heavily dependent on initialization and will virtually always converge to a poor local minimum. We propose an effective initialization scheme for such a system, which employs a properly designed multistage distributed coder. We present iterative design techniques and derive the necessary conditions for optimality for both multistage and unconstrained scalable distributed coding systems. Simulation results show substantial gains for the proposed multistage distributed coding system over naive extensions which incorporate scalability in a multistage distributed coding system. Moreover, the proposed overall scalable distributed coder design consistently and substantially outperforms the randomly initialized “Lloyd-style” scalable distributed coder design. Index Terms—Distributed quantization, multistage coding, scalable coding, sensor networks.
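The initialization sensitivity this abstract describes is easy to see even in the plain (non-distributed, non-scalable) Lloyd algorithm; the sketch below is a generic illustration under assumed parameters, not the paper's distributed design:

```python
import random

# Plain "Lloyd-style" scalar quantizer design: alternate nearest-codeword
# partitioning and centroid updates. The fixed point reached depends
# heavily on the initial codebook.
random.seed(1)
samples = [random.gauss(0.0, 1.0) for _ in range(5000)]

def lloyd(samples, codebook, iters=50):
    codebook = list(codebook)
    for _ in range(iters):
        cells = [[] for _ in codebook]
        for s in samples:  # partition: assign each sample to nearest codeword
            j = min(range(len(codebook)), key=lambda i: (s - codebook[i]) ** 2)
            cells[j].append(s)
        for j, cell in enumerate(cells):  # centroid: re-estimate codewords
            if cell:
                codebook[j] = sum(cell) / len(cell)
    return codebook

def distortion(samples, codebook):
    return sum(min((s - c) ** 2 for c in codebook) for s in samples) / len(samples)

good = lloyd(samples, [-1.5, -0.5, 0.5, 1.5])    # spread initialization
poor = lloyd(samples, [10.0, 11.0, 12.0, 13.0])  # initialization far from data
print(distortion(samples, good), distortion(samples, poor))
```

The poorly initialized run collapses to a single effective codeword (the other three cells stay empty and are never updated), a caricature of the local-minimum problem that the paper's multistage-based initialization is designed to avoid.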
Multiple descriptions with codebook reuse
in Proceedings of the 42nd Asilomar Conf. Signals, Systems, and Computers, 2008
Cited by 1 (1 self)
Abstract—Multiple description coding, in general, requires separate codebooks for each description. Motivated by the problem of sparse linear representation, we propose a simple coding scheme that recursively describes successive description errors using the same codebook, resulting in a sparse linear combination of codewords that achieves the Gaussian rate-distortion region. This result, in particular, provides an elementary proof of successive refinability and additive refinability of white Gaussian sources.
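A scalar caricature of the codebook-reuse idea (the codebook and gain schedule below are assumptions for illustration, not the paper's Gaussian construction): every stage re-describes the current error with the same codebook, only rescaled, so the reconstruction is a linear combination of reused codewords.

```python
# One shared codebook, reused at every stage with a geometrically
# shrinking gain: each stage quantizes the previous stage's error.
codebook = [-1.0, -0.5, 0.0, 0.5, 1.0]  # single shared codebook (assumed)

def describe(x, stages, scale=1.0, shrink=0.5):
    recon, err = 0.0, x
    for _ in range(stages):
        # Reuse the same codebook, only rescaled, to describe the error.
        c = min(codebook, key=lambda cw: (err - scale * cw) ** 2)
        recon += scale * c          # refine the reconstruction
        err = x - recon             # new description error
        scale *= shrink
    return recon

# Adding stages (more reused descriptions) keeps shrinking the error.
for n in (1, 2, 4, 8):
    print(n, abs(0.7 - describe(0.7, n)))
```

Since each stage's reachable points form a grid of spacing 0.5·scale, the error shrinks geometrically with the number of stages, the scalar analogue of successive refinement at a fixed per-stage rate.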