Multiple Shrinkage and Subset Selection in Wavelets
, 1997
Abstract

Cited by 140 (16 self)
This paper discusses Bayesian methods for multiple shrinkage estimation in wavelets. Wavelets are used in applications for data denoising, via shrinkage of the coefficients towards zero, and for data compression, by shrinkage and setting small coefficients to zero. We approach wavelet shrinkage by using Bayesian hierarchical models, assigning a positive prior probability to the wavelet coefficients being zero. The resulting estimator for the wavelet coefficients is a multiple shrinkage estimator that exhibits a wide variety of nonlinear shrinkage patterns. We discuss fast computational implementations, with a focus on easy-to-compute analytic approximations as well as importance sampling and Markov chain Monte Carlo methods. Multiple shrinkage estimators prove to have excellent mean squared error performance in reconstructing standard test functions. We demonstrate this in simulated test examples, comparing various implementations of multiple shrinkage to commonly used shrinkage rules. Finally, we illustrate our approach with an application to the so-called "glint" data.
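A prior with a point mass at zero mixed with a continuous component, as described in this abstract, yields a posterior-mean estimator that shrinks each coefficient nonlinearly. A minimal sketch of that idea, assuming a Gaussian N(0, tau^2) slab and known noise level sigma (the function name and default parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def spike_slab_shrink(d, sigma=1.0, tau=3.0, pi0=0.5):
    """Posterior-mean shrinkage of an observed wavelet coefficient d
    under the spike-and-slab prior pi0 * delta_0 + (1 - pi0) * N(0, tau^2),
    with Gaussian noise d ~ N(theta, sigma^2). Illustrative sketch only."""
    s2 = sigma**2 + tau**2
    # marginal densities of d under the spike and under the slab
    f_spike = np.exp(-d**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
    f_slab = np.exp(-d**2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)
    # posterior probability that the true coefficient is nonzero
    p_nonzero = (1 - pi0) * f_slab / (pi0 * f_spike + (1 - pi0) * f_slab)
    # linear shrinkage under the slab, weighted by p_nonzero:
    # small |d| is pulled strongly toward zero, large |d| barely at all
    return p_nonzero * (tau**2 / s2) * d
```

The nonlinearity is visible at the extremes: coefficients near zero are almost annihilated, while large coefficients are only mildly attenuated.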
WaveLab and Reproducible Research
, 1995
Abstract

Cited by 134 (17 self)
WaveLab is a library of Matlab routines for wavelet analysis, wavelet-packet analysis, cosine-packet analysis and matching pursuit. The library is available free of charge over the Internet. Versions are provided for Macintosh, UNIX and Windows machines. WaveLab makes ...
Nonlinear Wavelet Shrinkage With Bayes Rules and Bayes Factors
 Journal of the American Statistical Association
, 1998
"... this article a wavelet shrinkage by coherent ..."
Ideal denoising in an orthonormal basis chosen from a library of bases
 Comptes Rendus Acad. Sci., Ser. I
, 1994
Sparse Code Shrinkage: Denoising of Nongaussian Data by Maximum Likelihood Estimation
, 1999
Abstract

Cited by 116 (13 self)
Sparse coding is a method for finding a representation of data in which each of the components of the representation is only rarely significantly active. Such a representation is closely related to redundancy reduction and independent component analysis, and has some neurophysiological plausibility. In this paper, we show how sparse coding can be used for denoising. Using maximum likelihood estimation of nongaussian variables corrupted by gaussian noise, we show how to apply a soft-thresholding (shrinkage) operator on the components of sparse coding so as to reduce noise. Our method is closely related to the method of wavelet shrinkage, but it has the important benefit over wavelet methods that the representation is determined solely by the statistical properties of the data. The wavelet representation, on the other hand, relies heavily on certain mathematical properties (like self-similarity) that may be only weakly related to the properties of natural data.
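The soft-thresholding (shrinkage) operator this abstract refers to has a simple closed form: each component is moved toward zero by a fixed amount t, and anything within t of zero is set exactly to zero. A minimal NumPy sketch (the function name is ours, not from the paper):

```python
import numpy as np

def soft_threshold(u, t):
    """Soft-thresholding operator: sign(u) * max(|u| - t, 0).
    Shrinks every component toward zero by t; components with
    |u| <= t are set exactly to zero, which is what produces
    sparsity in the denoised representation."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)
```

For example, `soft_threshold(np.array([3.0, -0.5, 1.5]), 1.0)` zeroes the middle component and pulls the other two in by 1.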
Universal Discrete Denoising: Known Channel
 IEEE Trans. Inform. Theory
, 2003
Abstract

Cited by 100 (33 self)
A discrete denoising algorithm estimates the input sequence to a discrete memoryless channel (DMC) based on the observation of the entire output sequence. For the case in which the DMC is known and the quality of the reconstruction is evaluated with a given single-letter fidelity criterion, we propose a discrete denoising algorithm that does not assume knowledge of statistical properties of the input sequence. Yet, the algorithm is universal in the sense of asymptotically performing as well as the optimum denoiser that knows the input sequence distribution, which is only assumed to be stationary and ergodic. Moreover, the algorithm is universal also in a semi-stochastic setting, in which the input is an individual sequence, and the randomness is due solely to the channel noise.
Flexible empirical Bayes estimation for wavelets
 Journal of the Royal Statistical Society, Series B
, 2000
Abstract

Cited by 85 (14 self)
Wavelet shrinkage estimation is an increasingly popular method for signal denoising and compression. Although Bayes estimators can provide excellent mean squared error (MSE) properties, selection of an effective prior is a difficult task. To address this problem, we propose Empirical Bayes (EB) prior selection methods for various error distributions including the normal and the heavier-tailed Student t distributions. Under such EB prior distributions, we obtain threshold shrinkage estimators based on model selection, and multiple shrinkage estimators based on model averaging. These EB estimators are seen to be computationally competitive with standard classical thresholding methods, and to be robust to outliers in both the data and wavelet domains. Simulated and real examples are used to illustrate the flexibility and improved MSE performance of these methods in a wide variety of settings.
On spatial adaptive estimation of nonparametric regression
 Math. Meth. Statistics
, 1997
Abstract

Cited by 83 (4 self)
The paper is devoted to developing spatial adaptive estimates for restoring functions from noisy observations. We show that the traditional least squares (piecewise polynomial) estimate equipped with an adaptively adjusted window simultaneously possesses many attractive adaptive properties, namely: 1) it is near-optimal within a ln n factor for estimating a function (or its derivative) at a single point; 2) it is spatially adaptive in the sense that its quality is close to that which could be achieved if the smoothness of the underlying function were known in advance; 3) it is optimal in order (in the case of the “strong” accuracy measure) or near-optimal within a ln n factor (in the case of the “weak” accuracy measure) for estimating the whole function (or its derivative) over a wide range of classes and global loss functions. We demonstrate that the “spatial adaptive abilities” of our estimate are, in a sense, the best possible. Besides this, our adaptive estimate is computationally efficient and demonstrates reasonable practical behavior.
Wavelet Shrinkage Denoising Using the Non-Negative Garrote
, 1997
Abstract

Cited by 82 (1 self)
In this paper, we combine Donoho and Johnstone's Wavelet Shrinkage denoising technique (known as WaveShrink) with Breiman's non-negative garrote. We show that the non-negative garrote shrinkage estimate enjoys the same asymptotic convergence rate as the hard and the soft shrinkage estimates. Simulations are used to demonstrate that garrote shrinkage offers advantages over both hard shrinkage (generally smaller mean-square error and less sensitivity to small perturbations in the data) and soft shrinkage (generally smaller bias and overall mean-square error). The minimax thresholds for the non-negative garrote are derived and the threshold selection procedure based on Stein's Unbiased Risk Estimate (SURE) is studied. We also propose a threshold selection procedure based on combining Coifman and Donoho's cycle-spinning and SURE. The procedure is called SPINSURE. We use examples to show that SPINSURE is more stable than SURE: smaller standard deviation and smaller range.
Maximum likelihood estimation of a stochastic integrate-and-fire neural encoding model
, 2004
Abstract

Cited by 82 (23 self)
We examine a cascade encoding model for neural response in which a linear filtering stage is followed by a noisy, leaky, integrate-and-fire spike generation mechanism. This model provides a biophysically more realistic alternative to models based on Poisson (memoryless) spike generation, and can effectively reproduce a variety of spiking behaviors seen in vivo. We describe the maximum likelihood estimator for the model parameters, given only extracellular spike train responses (not intracellular voltage data). Specifically, we prove that the log likelihood function is concave and thus has an essentially unique global maximum that can be found using gradient ascent techniques. We develop an efficient algorithm for computing the maximum likelihood solution, demonstrate the effectiveness of the resulting estimator with numerical simulations, and discuss a method of testing the model’s validity using time-rescaling and density evolution techniques.
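The forward (generative) side of the cascade this abstract describes — a linear filter feeding a noisy, leaky integrate-and-fire unit — can be sketched as a simple Euler simulation. Everything below (function name, parameter values, discretization) is an illustrative assumption; it is the forward model only, not the paper's likelihood estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

def lif_cascade(stim, k, v_th=1.0, leak=0.1, noise=0.05, dt=0.01):
    """Simulate a linear-filter + noisy leaky integrate-and-fire cascade.
    The stimulus is convolved with filter k, then drives a leaky
    integrator; whenever the voltage crosses v_th a spike time is
    recorded and the voltage resets to zero."""
    drive = np.convolve(stim, k)[: len(stim)]   # linear filtering stage
    v, spikes = 0.0, []
    for t, current in enumerate(drive):
        # Euler step of dv = (-leak * v + current) dt + noise dW
        v += dt * (-leak * v + current) + noise * np.sqrt(dt) * rng.standard_normal()
        if v >= v_th:
            spikes.append(t)
            v = 0.0                             # post-spike reset
    return spikes
```

With a sustained positive drive the unit fires quasi-regularly, jittered by the noise term; the paper's contribution is recovering k and the noise/leak parameters from such spike times alone.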