Results 1–10 of 114
Wavelets, Ridgelets, and Curvelets for Poisson Noise Removal
Cited by 42 (2 self)
Abstract—In order to denoise Poisson count data, we introduce a variance stabilizing transform (VST) applied on a filtered discrete Poisson process, yielding a near-Gaussian process with asymptotically constant variance. This new transform, which can be deemed an extension of the Anscombe transform to filtered data, is simple, fast, and efficient in (very) low-count situations. We combine this VST with the filter banks of wavelets, ridgelets and curvelets, leading to multiscale VSTs (MSVSTs) and nonlinear decomposition schemes. By doing so, the noise-contaminated coefficients of these MSVST-modified transforms are asymptotically normally distributed with known variances. A classical hypothesis-testing framework is adopted to detect the significant coefficients, and a sparsity-driven iterative scheme properly reconstructs the final estimate. A range of examples shows the power of this MSVST approach for recovering important structures of various morphologies in (very) low-count images. These results also demonstrate that the MSVST approach is competitive relative to many existing denoising methods. Index Terms—Curvelets, filtered Poisson process, multiscale variance stabilizing transform, Poisson intensity estimation, ridgelets, wavelets.
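The classical Anscombe transform that the MSVST generalizes can be sketched in a few lines. The sketch below is illustrative only, not the authors' code: the Poisson sampler, the intensity value, and the sample size are arbitrary choices used to show how the transform makes the variance roughly constant.

```python
import math
import random

def anscombe(x):
    """Classical Anscombe VST: 2*sqrt(x + 3/8) maps Poisson(lam) counts
    to approximately Gaussian values with variance close to 1."""
    return 2.0 * math.sqrt(x + 3.0 / 8.0)

def poisson(lam, rng):
    """Knuth's Poisson sampler (adequate for moderate lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

rng = random.Random(0)
raw = [poisson(20.0, rng) for _ in range(20000)]
stab = [anscombe(x) for x in raw]
print(variance(raw))   # near lam = 20: the noise level depends on the signal
print(variance(stab))  # near 1: variance stabilized, roughly Gaussian regime
```

The MSVST of the paper extends this scalar recipe to coefficients of filtered data, which is what makes the subsequent Gaussian hypothesis tests on wavelet, ridgelet and curvelet coefficients tractable.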
Stochastic Coherent Adaptive Large Eddy Simulation Method
, 2004
Cited by 22 (8 self)
In this thesis the longstanding need for a dynamically adaptive Large Eddy Simulation (LES) method has been addressed. Current LES methodologies rely on, at best, a zonal grid adaptation strategy to attempt to minimize computational cost in resolving large eddies in complex turbulent flow simulations. While an improvement over regular grids, these methodologies fail to resolve the high-wave-number components of the spatially intermittent coherent eddies that typify turbulent flows, thus losing valuable physical information. At the same time, the flow is over-resolved in regions between the intermittent coherent eddies. The Stochastic Coherent Adaptive Large Eddy Simulation (SCALES) methodology addresses the shortcomings of LES by using a dynamic grid adaptation strategy that resolves the most energetic coherent structures in a turbulent flow field. This new methodology inherits from Coherent Vortex Simulation (CVS) the ability to dynamically resolve and “track” the most energetic part of the coherent eddies in a turbulent flow field, while using a field compression similar to LES, which can be considerably higher than with CVS. Unlike CVS, which is able to recover low-order statistics with no subgrid-scale stress model, the effect of the unresolved …
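The coefficient-thresholding idea SCALES inherits from CVS — retain only the wavelet coefficients whose magnitude exceeds a threshold, and measure how few are needed — can be illustrated in one dimension. Everything below is a stand-in: a toy signal (a smooth mode plus one localized "eddy"), a hand-rolled Haar transform rather than the second-generation wavelets such methods actually use, and an arbitrary threshold.

```python
import math

def haar(x):
    """Full multilevel orthonormal Haar transform of a length-2^J list."""
    x = list(x)
    out = []
    while len(x) > 1:
        s = [(a + b) / math.sqrt(2) for a, b in zip(x[0::2], x[1::2])]
        d = [(a - b) / math.sqrt(2) for a, b in zip(x[0::2], x[1::2])]
        out = d + out          # prepend details, coarsest levels first
        x = s
    return x + out             # [approximation, details...]

def compress_ratio(coeffs, eps):
    """Fraction of coefficients retained when |c| <= eps is discarded."""
    kept = sum(1 for c in coeffs if abs(c) > eps)
    return kept / len(coeffs)

# A smooth signal plus a localized "eddy": the energy concentrates in few
# coefficients, so hard thresholding keeps only a small fraction of them.
n = 256
sig = [math.sin(2 * math.pi * i / n) + (2.0 if 100 <= i < 108 else 0.0)
       for i in range(n)]
c = haar(sig)
print(compress_ratio(c, 0.1))
```

The retained coefficient set is exactly what an adaptive grid would track from step to step in a CVS/SCALES-style computation.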
Analytical form for a Bayesian wavelet estimator of images using the Bessel K form densities
 IEEE Trans. Image Processing
Cited by 22 (2 self)
Abstract—A novel Bayesian nonparametric estimator in the wavelet domain is presented. In this approach, a prior model is imposed on the wavelet coefficients, designed to capture the sparseness of the wavelet expansion. Seeking probability models for the marginal densities of the wavelet coefficients, the new family of Bessel K forms (BKF) densities is shown to fit the observed histograms very well. Exploiting this prior, we designed a Bayesian nonlinear denoiser and derived a closed form for its expression. We then compared it to other priors that have been introduced in the literature, such as the generalized Gaussian density (GGD) or the α-stable models, where no analytical form is available for the corresponding Bayesian denoisers. Specifically, the BKF model turns out to be a good compromise between these two extreme cases (hyperbolic tails for the α-stable and exponential tails for the GGD). Moreover, we demonstrate a high degree of match between observed and estimated prior densities using the BKF model. Finally, a comparative study is carried out to show the effectiveness of our denoiser, which clearly outperforms the classical shrinkage or thresholding wavelet-based techniques. Index Terms—Bayesian denoiser, Bessel K forms (BKF), posterior conditional mean, wavelets.
A new algorithm for fixed design regression and denoising
 ANNALS OF THE INSTITUTE OF STATISTICAL MATHEMATICS
, 2004
Cited by 21 (11 self)
In this paper, we present a new algorithm to estimate a regression function in a fixed design regression model, by piecewise (standard and trigonometric) polynomials computed with an automatic choice of the knots of the subdivision and of the degrees of the polynomials on each subinterval. First we give the theoretical background underlying the method: the theoretical performance of our penalized least-squares estimator is based on non-asymptotic evaluations of a mean-square type risk. Then we explain how the algorithm is built and possibly accelerated (to face the case when the number of observations is large), how the penalty term is chosen, and why it contains some constants requiring an empirical calibration. Lastly, a comparison with some well-known or recent wavelet methods is made: this brings out that our algorithm behaves in a very competitive way in terms of denoising and of compression.
Wavelet methods for continuous-time prediction using representations of autoregressive processes
 University Joseph Fourier,
, 2001
Cited by 18 (4 self)
Abstract We consider the prediction problem of a continuous-time stochastic process on an entire time interval in terms of its recent past. The approach we adopt is based on the notion of autoregressive Hilbert processes, which represent a generalization of the classical autoregressive processes to random variables with values in a Hilbert space. A careful analysis reveals, in particular, that this approach is related to the theory of function estimation in linear ill-posed inverse problems. In the deterministic literature, such problems are usually solved by suitable regularization techniques. We describe some recent approaches from the deterministic literature that can be adapted to obtain fast and feasible predictions. For large sample sizes, however, these approaches are not computationally efficient. With this in mind, we propose three linear wavelet methods to efficiently address the aforementioned prediction problem. We present regularization techniques for the sample paths of the stochastic process and obtain consistency results for the resulting prediction estimators. We illustrate the performance of the proposed methods in finite sample situations by means of a real-life data example, which concerns the prediction of the entire annual cycle of a climatological El Niño-Southern Oscillation time series one year ahead. We also compare the resulting predictions with those obtained by other methods available in the literature, in particular with a smoothing spline interpolation method and with a SARIMA model.
On optimality of Bayesian wavelet estimators
, 2004
Cited by 17 (0 self)
We investigate the asymptotic optimality of several Bayesian wavelet estimators corresponding to different losses, namely the posterior mean, the posterior median and the Bayes Factor. The considered prior is a mixture of a point mass at zero and a Gaussian density. We show that, in terms of the mean squared error, for properly chosen hyperparameters of the prior all three resulting Bayesian wavelet estimators achieve optimal minimax rates within any prescribed Besov ball B^s_{p,q} for p ≥ 2. For 1 ≤ p < 2, the Bayes Factor is still optimal for (2s+2)/(2s+1) ≤ p < 2 and always outperforms the posterior mean and the posterior median, which in this case can achieve only the best possible rates for linear estimators. Key words: Bayes Factor; Bayes model; Besov spaces; minimax estimation; nonlinear estimation; nonparametric regression; posterior mean; posterior median; wavelets.
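Under the abstract's prior — a point mass at zero mixed with a Gaussian — the posterior mean of a wavelet coefficient observed in Gaussian noise has a simple closed form, sketched below. The hyperparameter values (`pi`, `tau2`, `sigma2`) are arbitrary illustrative choices, not values from the paper.

```python
import math

def normal_pdf(y, var):
    """Density of N(0, var) at y."""
    return math.exp(-y * y / (2 * var)) / math.sqrt(2 * math.pi * var)

def posterior_mean(y, pi, tau2, sigma2):
    """Posterior mean of theta given y = theta + N(0, sigma2) noise and the
    prior pi * N(0, tau2) + (1 - pi) * delta_0 on theta."""
    m1 = pi * normal_pdf(y, sigma2 + tau2)   # marginal density, nonzero branch
    m0 = (1 - pi) * normal_pdf(y, sigma2)    # marginal density, zero branch
    w = m1 / (m1 + m0)                       # posterior P(theta != 0 | y)
    return w * tau2 / (tau2 + sigma2) * y    # linear shrinkage, mixed by w

# Small observations are shrunk almost to zero; large ones much less so:
print(posterior_mean(0.2, 0.5, 4.0, 1.0))
print(posterior_mean(5.0, 0.5, 4.0, 1.0))
```

The posterior median and the Bayes Factor rule compared in the paper are built from the same two quantities `w` and the Gaussian shrinkage factor, which is why their rate behavior can be analyzed in a common framework.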
Closed-Form MMSE Estimation for Signal Denoising Under Sparse Representation Modeling Over a Unitary Dictionary
Cited by 15 (1 self)
This paper deals with the Bayesian signal denoising problem, assuming a prior based on a sparse representation modeling over a unitary dictionary. It is well known that the Maximum A-Posteriori Probability (MAP) estimator in such a case has a closed-form solution based on a simple shrinkage. The focus in this paper is on the better performing and less familiar Minimum-Mean-Squared-Error (MMSE) estimator. We show that this estimator also leads to a simple formula, in the form of a plain recursive expression for evaluating the contribution of every atom in the solution. An extension of the model to real-world signals is also offered, considering heteroscedastic nonzero entries in the representation, and allowing varying probabilities for the chosen atoms and the overall cardinality of the sparse representation. The MAP and MMSE estimators are redeveloped for this extended model, again resulting in closed-form simple algorithms. Finally, the superiority of the MMSE estimator is demonstrated both on synthetically generated signals and on real-world signals (image patches).
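The well-known MAP baseline the paper starts from — transform by the unitary dictionary, shrink each coefficient, transform back — can be sketched as below. A one-level Haar pair transform stands in for the unitary dictionary, the shrinkage is the classical soft threshold (the closed-form MAP rule under a Laplacian coefficient prior, not the paper's MMSE recursion), and the threshold value is arbitrary.

```python
import math

def soft(y, t):
    """Soft threshold: closed-form MAP shrinkage for a Laplacian prior
    with Gaussian noise."""
    return math.copysign(max(abs(y) - t, 0.0), y)

def haar_fwd(x):
    """One-level Haar on consecutive pairs: a unitary map for even-length x."""
    r2 = math.sqrt(2)
    return [v for a, b in zip(x[0::2], x[1::2])
            for v in ((a + b) / r2, (a - b) / r2)]

def haar_inv(c):
    """Inverse of haar_fwd (its transpose, since the map is unitary)."""
    r2 = math.sqrt(2)
    return [v for s, d in zip(c[0::2], c[1::2])
            for v in ((s + d) / r2, (s - d) / r2)]

def map_denoise(x, t):
    """Analysis -> per-coefficient shrinkage -> synthesis."""
    return haar_inv([soft(c, t) for c in haar_fwd(x)])

noisy = [1.1, 0.9, 1.05, 0.95, -0.02, 0.03, 0.01, -0.04]
print(map_denoise(noisy, 0.1))
```

The paper's point is that the MMSE estimator replaces the hard keep/kill decision implicit in this shrinkage with a weighted average over atom supports, yet still admits a plain recursive closed form in the unitary case.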
EMD-Based Signal Noise Reduction
Cited by 13 (2 self)
Abstract—This paper introduces a new signal denoising method based on the Empirical Mode Decomposition (EMD) framework. The method is a fully data-driven approach. The noisy signal is decomposed adaptively into oscillatory components called Intrinsic Mode Functions (IMFs) by means of a process called sifting. EMD denoising involves filtering or thresholding each IMF and reconstructing the estimated signal from the processed IMFs. The EMD can be combined with a filtering approach or with a nonlinear transformation. In this work the Savitzky-Golay filter and soft-thresholding are investigated. For thresholding, IMF samples are shrunk or scaled below a threshold value. The standard deviation of the noise is estimated for every IMF. The threshold is derived for Gaussian white noise. The method is tested on simulated and real data and compared with averaging, median and wavelet approaches. Keywords—Empirical mode decomposition, signal denoising, nonstationary process.
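The per-IMF thresholding step can be sketched as follows, assuming the IMFs have already been produced by a sifting implementation (e.g. a library); sifting itself is omitted, and the two "IMFs" below are synthetic stand-ins. The noise level per IMF is estimated robustly via the median absolute deviation, and each IMF is soft-thresholded at a universal-style threshold before summation — one common recipe, not necessarily the exact one in the paper.

```python
import math
import statistics

def mad_sigma(imf):
    """Robust noise-level estimate: MAD rescaled by 0.6745 so it matches
    the standard deviation for Gaussian data."""
    med = statistics.median(imf)
    return statistics.median([abs(v - med) for v in imf]) / 0.6745

def soft(v, t):
    return math.copysign(max(abs(v) - t, 0.0), v)

def emd_denoise(imfs):
    """Soft-threshold each IMF at sigma_i * sqrt(2 log N) and sum the
    processed IMFs to reconstruct the estimated signal."""
    n = len(imfs[0])
    out = [0.0] * n
    for imf in imfs:
        t = mad_sigma(imf) * math.sqrt(2 * math.log(n))
        for i, v in enumerate(imf):
            out[i] += soft(v, t)
    return out

# Two synthetic "IMFs": a small noise-like component, and one that is
# quiet except for a strong localized burst that should survive.
n = 128
imf1 = [0.05 * math.sin(17.3 * i) for i in range(n)]
imf2 = [0.02 * math.sin(5.1 * i) + (1.0 if 60 <= i < 68 else 0.0)
        for i in range(n)]
den = emd_denoise([imf1, imf2])
print(max(abs(v) for v in den))
```

Because the threshold is set per IMF, components dominated by noise are suppressed entirely while large localized features in other IMFs pass through only slightly shrunk.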
Stein block thresholding for image denoising
 APPLIED AND COMPUTATIONAL HARMONIC ANALYSIS
, 2010
Adaptive density estimation: a curse of support?
, 2009
Cited by 10 (4 self)
This paper deals with the classical problem of density estimation on the real line. Most of the existing papers devoted to minimax properties assume that the support of the underlying density is bounded and known. But this assumption may be very difficult to handle in practice. In this work, we show that, exactly as a curse of dimensionality exists when the data lie in R^d, there exists a curse of support as well when the support of the density is infinite. As for the dimensionality problem, where the rates of convergence deteriorate when the dimension grows, the minimax rates of convergence may deteriorate as well when the support becomes infinite. This problem is not purely theoretical, since simulations show that support-dependent methods are really affected in practice by the size of the density support, or by the weight of the density tail. We propose a method based on a biorthogonal wavelet thresholding rule that is adaptive with respect to the nature of the support and the regularity of the signal, but that is also robust in practice to this curse of support. The threshold proposed here is very accurately calibrated, so that the gap between optimal theoretical and practical tuning parameters is almost filled.
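A toy version of wavelet-thresholded density estimation can be sketched with a histogram (the Haar scaling-coefficient estimator) followed by thresholding of one level of Haar detail coefficients. Everything here is a simplification of the paper's method: one resolution level instead of a full biorthogonal decomposition, a bounded window instead of the unbounded-support setting the paper actually targets, and an arbitrary threshold rather than their calibrated one.

```python
import math
import random

def haar_histogram(data, a, b, nbins, thr):
    """Histogram density estimate on [a, b), then one level of Haar
    smoothing: adjacent-bin differences (detail coefficients) smaller
    than thr are zeroed, merging bins the data cannot distinguish.
    nbins must be even."""
    n = len(data)
    h = (b - a) / nbins
    counts = [0] * nbins
    for x in data:
        k = int((x - a) / h)
        if 0 <= k < nbins:
            counts[k] += 1
    dens = [c / (n * h) for c in counts]
    out = []
    for p, q in zip(dens[0::2], dens[1::2]):
        s, d = (p + q) / 2.0, (p - q) / 2.0
        if abs(d) < thr:
            d = 0.0            # kill noise-level detail, keep real structure
        out.extend([s + d, s - d])
    return out

rng = random.Random(3)
data = [rng.random() for _ in range(4000)]       # Uniform(0, 1) sample
est = haar_histogram(data, 0.0, 1.0, 16, thr=0.05)
print(sum(v * (1.0 / 16) for v in est))          # integrates to about 1
```

Thresholding leaves the total mass untouched (each pair's sum is preserved), so the output is still a density estimate, only with spurious bin-to-bin fluctuations removed.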