Quantized Frame Expansions as Source-Channel Codes for Erasure Channels
 Proc. IEEE Data Compression Conf
, 1999
Abstract

Cited by 41 (8 self)
Quantized frame expansions are proposed as a method for generalized multiple description coding, where each quantized coefficient is a description. Whereas previous investigations have revealed the robustness of frame expansions to additive noise and quantization, this represents a new application of frame expansions. The performance of a system based on quantized frame expansions is compared to that of a system with a conventional block channel code. The new system performs well when the number of lost descriptions (erasures on an erasure channel) is hard to predict.
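The scheme described in the abstract can be sketched in a few lines: expand the signal with an overcomplete linear transform, quantize each coefficient as a separate description, and reconstruct by least squares from whichever descriptions survive. The frame, step size, and erasure pattern below are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 4, 8        # signal dimension k, number of frame coefficients n > k
delta = 0.05       # quantizer step size (illustrative)

# A random n x k matrix with orthonormal columns serves as the frame operator
F = np.linalg.qr(rng.standard_normal((n, k)))[0]

x = rng.standard_normal(k)
y = delta * np.round(F @ x / delta)   # each quantized coefficient is one description

# Erase two descriptions; reconstruct from the survivors by least squares
kept = [0, 1, 2, 4, 5, 7]
x_hat, *_ = np.linalg.lstsq(F[kept], y[kept], rcond=None)

print(np.max(np.abs(x - x_hat)))  # error stays on the order of the quantizer step
```

Because the surviving rows still span the signal space, the reconstruction error is governed by the quantization step rather than by which descriptions were lost.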
Information Theoretic Proofs of Entropy Power Inequalities
, 2007
Abstract

Cited by 29 (2 self)
While most useful information theoretic inequalities can be deduced from the basic properties of entropy or mutual information, up to now Shannon’s entropy power inequality (EPI) is an exception: Existing information theoretic proofs of the EPI hinge on representations of differential entropy using either Fisher information or minimum mean-square error (MMSE), which are derived from de Bruijn’s identity. In this paper, we first present a unified view of these proofs, showing that they share two essential ingredients: 1) a data processing argument applied to a covariance-preserving linear transformation; 2) an integration over a path of a continuous Gaussian perturbation. Using these ingredients, we develop a new and brief proof of the EPI through a mutual information inequality, which replaces Stam and Blachman’s Fisher information inequality (FII) and an inequality for MMSE by Guo, Shamai and Verdú used in earlier proofs. The result has the advantage of being very simple in that it relies only on the basic properties of mutual information. These ideas are then generalized to various extended versions of the EPI: Zamir and Feder’s generalized EPI for linear transformations of the random variables, Takano and Johnson’s EPI for dependent variables, Liu and Viswanath’s covariance-constrained EPI, and Costa’s concavity inequality for the entropy power.
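For reference, the inequality whose proofs the paper unifies is Shannon's EPI, which in its standard form states that for independent random vectors $X, Y \in \mathbb{R}^n$ with densities,

```latex
e^{2h(X+Y)/n} \;\ge\; e^{2h(X)/n} + e^{2h(Y)/n},
```

with equality if and only if X and Y are Gaussian with proportional covariance matrices.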
Noise reduction in oversampled filter banks using predictive quantization
 IEEE Transactions on Information Theory
, 2001
Abstract

Cited by 29 (1 self)
Abstract—We introduce two methods for quantization noise reduction in oversampled filter banks. These methods are based on predictive quantization (noise shaping or linear prediction). It is demonstrated that oversampled noise shaping or linear predictive subband coders are well suited for subband coding applications where, for technological or other reasons, low-resolution quantizers have to be used. In this case, oversampling combined with noise shaping or linear prediction improves the effective resolution of the subband coder at the expense of increased rate. Simulation results are provided to assess the achievable quantization noise reduction and resolution enhancement, and to investigate the rate-distortion properties of the proposed methods. Index Terms—Filter banks, frame theory, linear prediction, noise reduction, noise shaping, oversampling, quantization, rate-distortion theory, sigma–delta converter, subband coding.
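The core idea of predictive (noise-shaping) quantization can be illustrated with a scalar first-order error-feedback loop: the previous quantization error is added back to the input before quantizing, which high-passes the error spectrum so that a lowpass reconstruction removes most of it. This is a minimal sketch of the general principle with illustrative parameters, not the filter-bank structure analyzed in the paper.

```python
import numpy as np

N, delta = 4096, 0.25
t = np.arange(N)
x = 0.7 * np.sin(2 * np.pi * t / 256)   # oversampled: bandwidth well below Nyquist

def quantize(v):
    return delta * np.round(v / delta)

y_plain = quantize(x)                    # plain low-resolution quantization

y_ns = np.empty(N)                       # first-order noise shaping
e = 0.0
for i in range(N):
    v = x[i] + e                         # feed back the previous quantization error
    y_ns[i] = quantize(v)
    e = v - y_ns[i]

def inband_noise(y):
    # crude lowpass (moving average), then residual power measurement
    kernel = np.ones(16) / 16
    return np.mean(np.convolve(y - x, kernel, mode="same") ** 2)

print(inband_noise(y_plain), inband_noise(y_ns))  # shaping pushes noise out of band
```

After the lowpass, the shaped error telescopes to a difference of two bounded terms, so its in-band power is far below that of plain quantization at the same resolution.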
Information Rates of Pre/Post Filtered Dithered Quantizers
 IEEE Trans. Information Theory
, 1997
Abstract

Cited by 26 (14 self)
We consider encoding of a source with prespecified second-order statistics, but otherwise arbitrary, by Entropy Coded Dithered (lattice) Quantization (ECDQ) incorporating linear pre- and post-filters. In the design and analysis of this scheme we utilize the equivalent additive noise channel model of the ECDQ. For Gaussian sources and the squared-error distortion measure, the coding performance of the pre/post filtered ECDQ approaches the rate-distortion function as the dimension of the (optimal) lattice quantizer becomes large; in fact, in this case the proposed coding scheme simulates the optimal forward channel realization of the rate-distortion function. For non-Gaussian sources and finite dimensional lattice quantizers, the coding rate exceeds the rate-distortion function by at most the sum of two terms: the "information divergence of the source from Gaussianity" and the "information divergence of the quantization noise from Gaussianity". Additional bounds on the excess rate of the s...
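The "equivalent additive noise channel" model rests on subtractive dithering: when a dither signal uniform over a quantizer cell is added before quantizing and subtracted afterwards, the overall error is uniformly distributed and statistically independent of the input. A minimal numerical check (step size and source are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
delta = 0.5
x = rng.standard_normal(100_000)                   # arbitrary source samples
d = rng.uniform(-delta / 2, delta / 2, x.shape)    # dither known to both ends

y = delta * np.round((x + d) / delta) - d          # subtractive dithered quantizer
err = y - x

print(err.min(), err.max())                        # confined to [-delta/2, delta/2]
print(np.corrcoef(x, err)[0, 1])                   # essentially uncorrelated with x
```

This is why the whole coding chain can be analyzed as the source passing through an additive uniform-noise channel.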
Recursive Consistent Estimation with Bounded Noise
 IEEE Trans. Inform. Theory
, 2001
Abstract

Cited by 24 (16 self)
Estimation problems with bounded, uniformly distributed noise arise naturally in reconstruction problems from overcomplete linear expansions with subtractive dithered quantization. We present a simple recursive algorithm for such bounded-noise estimation problems. The mean-square error (MSE) of the algorithm is "almost" O(1/n²), where n is the number of samples. This rate is faster than the O(1/n) MSE obtained by standard recursive least squares estimation and is optimal to within a constant factor.
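The kind of recursive algorithm described can be sketched as a sequence of projections: each new observation with bounded noise defines a slab of consistent estimates, and the running estimate is projected onto that slab whenever it falls outside. The dimensions, noise bound, and sample count below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
k, n, delta = 4, 2000, 0.2
x = rng.standard_normal(k)                 # unknown parameter vector

x_hat = np.zeros(k)
for _ in range(n):
    a = rng.standard_normal(k)
    y = a @ x + rng.uniform(-delta / 2, delta / 2)  # bounded, uniform noise
    r = a @ x_hat - y
    # Project onto the consistent slab {z : |a.z - y| <= delta/2}
    if abs(r) > delta / 2:
        x_hat -= (r - np.sign(r) * delta / 2) * a / (a @ a)

print(np.max(np.abs(x - x_hat)))           # decays much faster than 1/sqrt(n)
```

Since the true x lies in every slab, each projection can only shrink the distance to x; exploiting the hard noise bound this way is what drives the near-1/n² MSE behavior.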
A Framework for Control System Design Subject to Average DataRate Constraints
Abstract

Cited by 20 (10 self)
This paper studies discrete-time control systems subject to average data-rate limits. We focus on a situation where a noisy linear system has been designed assuming transparent feedback and, due to implementation constraints, a source-coding scheme (with unity signal transfer function) has to be deployed in the feedback path. For this situation, and by focusing on a class of source-coding schemes built around entropy coded dithered quantizers, we develop a framework to deal with average data-rate constraints in a tractable manner that combines ideas from both information and control theories. As an illustration of the uses of our framework, we apply it to study the interplay between stability and average data-rates in the considered architecture. It is shown that the proposed class of coding schemes can achieve mean square stability at average data-rates that are, at most, 1.254 bits per sample away from the absolute minimum rate for stability established by Nair and Evans. This rate penalty is compensated by the simplicity of our approach.
Proof of entropy power inequalities via MMSE
 in Proceedings of the IEEE International Symposium on Information Theory
, 2006
Abstract

Cited by 19 (3 self)
Abstract — The differential entropy of a random variable (or vector) can be expressed as the integral over signal-to-noise ratio (SNR) of the minimum mean-square error (MMSE) of estimating the variable (or vector) when observed in additive Gaussian noise. This representation sidesteps Fisher’s information to provide simple and insightful proofs for Shannon’s entropy power inequality (EPI) and two of its variations: Costa’s strengthened EPI in the case in which one of the variables is Gaussian, and a generalized EPI for linear transformations of a random vector due to Zamir and Feder.
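A standard form of the representation the proof builds on, stated here for a scalar X in nats (up to normalization conventions), is

```latex
h(X) \;=\; \frac{1}{2}\log(2\pi e)
\;-\; \frac{1}{2}\int_0^{\infty}\left[\frac{1}{1+\gamma} - \mathrm{mmse}(X,\gamma)\right]\mathrm{d}\gamma ,
```

where $\mathrm{mmse}(X,\gamma) = \mathbb{E}\big[(X - \mathbb{E}[X \mid \sqrt{\gamma}\,X + N])^2\big]$ with $N \sim \mathcal{N}(0,1)$ independent of X. As a sanity check, for Gaussian X of unit variance the MMSE equals $1/(1+\gamma)$, the integrand vanishes, and $h(X) = \tfrac{1}{2}\log(2\pi e)$.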
A Generalization of the Entropy Power Inequality with Applications
 IEEE Trans. Information Theory
, 1993
Abstract

Cited by 17 (5 self)
We prove the following generalization of the Entropy Power Inequality: h(Ax) ≥ h(Ax̃), where h(·) denotes (joint) differential entropy, x = (x_1, …, x_n) is a random vector with independent components, x̃ = (x̃_1, …, x̃_n) is a Gaussian vector with independent components such that h(x̃_i) = h(x_i), i = 1, …, n, and A is any matrix. This generalization of the entropy-power inequality is applied to show that a non-Gaussian vector with independent components becomes "closer" to Gaussianity after a linear transformation, where the distance to Gaussianity is measured by the information divergence. Another application is a lower bound, greater than zero, for the mutual information between non-overlapping spectral components of a non-Gaussian white process. Finally, we describe a dual generalization of the Fisher Information Inequality. Key Words: Entropy Power Inequality, Non-Gaussianity, Divergence, Fisher Information Inequality. This research was supported in part by the Wolf...