Results 1–10 of 11
Sparse stochastic processes and discretization of linear inverse problems
 IEEE Transactions on Image Processing
, 2013
Abstract

Cited by 10 (6 self)
We present a novel statistically-based discretization paradigm and derive a class of maximum a posteriori (MAP) estimators for solving ill-conditioned linear inverse problems. We are guided by the theory of sparse stochastic processes, which specifies continuous-domain signals as solutions of linear stochastic differential equations. Accordingly, we show that the class of admissible priors for the discretized version of the signal is confined to the family of infinitely divisible distributions. Our estimators not only cover the well-studied methods of Tikhonov and ℓ1-type regularizations as particular cases, but also open the door to a broader class of sparsity-promoting regularization schemes that are typically nonconvex. We provide an algorithm that handles the corresponding nonconvex problems and illustrate the use of our formalism by applying it to deconvolution, magnetic resonance imaging, and X-ray tomographic reconstruction problems. Finally, we compare the performance of estimators associated with models of increasing sparsity.
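As a rough illustration of the contrast this abstract draws between Tikhonov, ℓ1-type, and nonconvex sparsity-promoting MAP regularizers, the following sketch minimizes a toy MAP cost with three prior-induced penalties on the finite differences of the signal. The forward operator, penalty shapes, and regularization weight are assumptions for illustration, not the paper's actual discretization.

```python
import numpy as np
from scipy.optimize import minimize

# Toy MAP estimation for y = H u + noise: one data term, three illustrative
# prior-induced penalties on the finite differences L u (all values assumed).
rng = np.random.default_rng(0)
n = 32
H = np.tril(np.ones((n, n))) / n                 # ill-conditioned forward model
u_true = np.zeros(n)
u_true[10:20] = 1.0                              # sparse-derivative signal
y = H @ u_true + 0.01 * rng.standard_normal(n)
L = np.eye(n) - np.eye(n, k=-1)                  # first-difference operator

penalties = {
    "tikhonov": lambda v: np.sum(v ** 2),            # Gaussian prior
    "l1": lambda v: np.sum(np.abs(v)),               # Laplace prior
    "log": lambda v: np.sum(np.log(1.0 + v ** 2)),   # heavy-tailed, nonconvex
}

def map_estimate(name, lam=0.05):
    cost = lambda u: 0.5 * np.sum((H @ u - y) ** 2) + lam * penalties[name](L @ u)
    return minimize(cost, np.zeros(n), method="L-BFGS-B").x

errors = {name: float(np.linalg.norm(map_estimate(name) - u_true))
          for name in penalties}
```

Swapping the penalty function is the only change needed to move between the convex and nonconvex regimes discussed in the abstract.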
Ground States and Singular Vectors of Convex Variational Regularization Methods
, 2012
Abstract

Cited by 5 (2 self)
Singular value decomposition is the key tool in the analysis and understanding of linear regularization methods in Hilbert spaces. Besides simplifying computations, it provides a good understanding of the properties of the forward problem compared to the prior information introduced by the regularization method. In the last decade, nonlinear variational approaches such as ℓ1 or total-variation regularization became quite prominent regularization techniques, with certain properties superior to those of standard methods. In their analysis, singular values and vectors have so far played no role, for the obvious reason that these problems are nonlinear, together with the issue of defining singular values and singular vectors in the first place. In this paper, however, we start a study of singular values and vectors for nonlinear variational regularization of linear inverse problems, with particular focus on singular one-homogeneous regularization functionals. A major role is played by the smallest singular value, which we define as the ground state of an appropriate functional combining the (semi-)norm introduced by the forward operator and the regularization functional. The optimality condition for the ground state further yields a natural generalization to higher singular values.
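The ground-state definition in this abstract reduces, in the linear Hilbert-space case, to the smallest singular vector. The toy computation below checks this and then minimizes the analogous quotient for an ℓ1 regularization functional; the matrix, functional, and derivative-free optimizer are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import minimize

# Linear case: with J(u) = ||u||_2, the ground state argmin ||A u|| subject
# to J(u) = 1 is the right singular vector of the smallest singular value.
rng = np.random.default_rng(1)
A = rng.standard_normal((8, 5))
_, s, Vt = np.linalg.svd(A)
u0_linear = Vt[-1]                       # smallest singular vector, unit norm

# Variational case: one-homogeneous J(u) = ||u||_1. Minimize the quotient
# ||A u||_2 / ||u||_1 (scale-invariant), then normalize so that J(u0) = 1.
quotient = lambda u: np.linalg.norm(A @ u) / (np.sum(np.abs(u)) + 1e-12)
res = minimize(quotient, rng.standard_normal(5), method="Nelder-Mead")
u0_l1 = res.x / np.sum(np.abs(res.x))    # ground state candidate for J = l1
```

Because the quotient is zero-homogeneous, any local minimizer can be rescaled to satisfy the constraint J(u) = 1, which mirrors the paper's constrained formulation.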
Variational Justification of Cycle Spinning for Wavelet-Based Solutions of Inverse Problems
 IEEE Signal Processing Letters
, 2014
Abstract

Cited by 1 (1 self)
Abstract—Cycle spinning is a widely used approach for improving the performance of wavelet-based methods that solve linear inverse problems. Extensive numerical experiments have shown that it significantly improves the quality of the recovered signal without increasing the computational cost. In this letter, we provide the first theoretical convergence result for cycle spinning applied to general linear inverse problems. We prove that the sequence of reconstructed signals is guaranteed to converge to the minimizer of a global cost function that incorporates all wavelet shifts. Index Terms—Cycle spinning, linear inverse problems, wavelet regularization.
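The scheme analyzed in this letter averages shift-variant wavelet denoisers over all shifts. A minimal numpy sketch of that averaging, using a one-level Haar soft-thresholding as an assumed toy denoiser (not the letter's general setting), looks like this:

```python
import numpy as np

# Cycle spinning: denoise every circular shift of the signal with a
# shift-variant denoiser, undo the shift, and average the results.
def haar_shrink(x, t):
    a = (x[0::2] + x[1::2]) / np.sqrt(2)               # approximation band
    d = (x[0::2] - x[1::2]) / np.sqrt(2)               # detail band
    d = np.sign(d) * np.maximum(np.abs(d) - t, 0.0)    # soft threshold
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2)                     # inverse transform
    y[1::2] = (a - d) / np.sqrt(2)
    return y

def cycle_spin(x, t, shifts):
    return np.mean([np.roll(haar_shrink(np.roll(x, s), t), -s)
                    for s in shifts], axis=0)

rng = np.random.default_rng(2)
clean = np.repeat([0.0, 1.0, 0.0, 2.0], 16)            # piecewise-constant signal
noisy = clean + 0.3 * rng.standard_normal(clean.size)
mse_plain = np.mean((haar_shrink(noisy, 0.3) - clean) ** 2)
mse_spun = np.mean((cycle_spin(noisy, 0.3, range(8)) - clean) ** 2)
```

With threshold zero the Haar step is a perfect-reconstruction transform, so all of the averaging's effect comes from the shift-variance of the shrinkage, which is exactly the structure the letter's convergence result addresses.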
MMSE Denoising of 2D Signals Using Consistent Cycle Spinning Algorithm
Abstract
Abstract: It is well known that real-world signals do not exist without noise, although the noise may be negligible under certain conditions. The process of noise removal is generally known as signal denoising. In this project, a technique for minimum mean square error (MMSE) signal denoising using the consistent cycle spinning (CCS) algorithm is implemented. This method exploits the link between the discrete gradient and Haar-wavelet shrinkage. The additive white Gaussian noise (AWGN) affecting a 2D image can be removed using various methods, and the results obtained are compared. The performance of a denoising algorithm is usually measured by the mean square error (MSE) between the original signal and its reconstructed version.
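Since the project's quality criterion is the MSE between the original 2D signal and its reconstruction under AWGN, here is a minimal sketch of that metric; the test image, its peak value, and the noise level are assumed for illustration.

```python
import numpy as np

# MSE (and the derived PSNR) between an original 2D signal and its
# AWGN-degraded version, the standard evaluation criterion for denoisers.
rng = np.random.default_rng(3)
img = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))      # toy 2D signal in [0, 1]
noisy = img + 0.1 * rng.standard_normal(img.shape)     # AWGN with sigma = 0.1

mse = np.mean((noisy - img) ** 2)           # expected value is sigma^2 = 0.01
psnr = 10.0 * np.log10(1.0 / mse)           # peak value taken as 1.0
```

Any denoiser would be scored by recomputing `mse` against its output in place of `noisy`.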
Image Deblurring with Coupled Dictionary Learning
 Int J Comput Vis, DOI 10.1007/s11263-014-0755-z
, 2014
Abstract
Abstract Image deblurring is a challenging problem in vision computing. Traditionally, this task is addressed as an inverse problem that is enclosed within the image itself. This paper presents a learning-based framework in which the knowledge hidden in huge amounts of available data is explored and exploited for image deblurring. To this end, our algorithm is developed under the conceptual framework of coupled dictionary learning. Specifically, given pairs of blurred image patches and their corresponding clear ones, a learning model is constructed to learn a pair of dictionaries. One dictionary is responsible for the representation of clear images, while the other is responsible for that of the blurred images. Theoretically, the learning model is analyzed with coupled sparse representations for the training samples. As the atoms of these dictionaries are coupled together one by one, the reconstruction information can be transmitted between the clear and blurry images. In the application phase, the blurry dictionary is employed to linearly reconstruct the blurry image to be restored. Then, the reconstruction coefficients ...
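The key mechanism this abstract describes, atoms of the blurred and clear dictionaries paired one by one so that coefficients transfer between domains, can be sketched numerically. The blur kernel and dictionaries below are synthetic stand-ins, and plain least squares stands in for the paper's sparse coding.

```python
import numpy as np

# Coupled-dictionary reconstruction: code a blurred patch in D_blur, then
# apply the same coefficients to the paired clear dictionary D_clear.
rng = np.random.default_rng(4)
p, k = 16, 8                                    # patch length, dictionary size
D_clear = rng.standard_normal((p, k))
blur = 0.5 * np.eye(p) + 0.25 * np.eye(p, k=1) + 0.25 * np.eye(p, k=-1)
D_blur = blur @ D_clear                         # paired (coupled) atoms

alpha_true = np.array([1.0, 0, 0, -2.0, 0, 0, 0, 0])
x_clear = D_clear @ alpha_true                  # ground-truth clear patch
x_blur = blur @ x_clear                         # its blurred observation

# Represent the blurred patch in D_blur, then transmit the coefficients.
alpha, *_ = np.linalg.lstsq(D_blur, x_blur, rcond=None)
x_hat = D_clear @ alpha                         # estimate of the clear patch
```

Because each blurred atom is exactly the blur of its clear partner, the coefficients found on the blurred patch reconstruct the clear one, which is the information transmission the abstract refers to.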
MMSE Estimation of Sparse Lévy Processes
Abstract
Abstract—We investigate a stochastic signal-processing framework for signals with sparse derivatives, where the samples of a Lévy process are corrupted by noise. The proposed signal model covers the well-known Brownian motion and the piecewise-constant Poisson process; moreover, the Lévy family also contains other interesting members exhibiting heavy-tail statistics that fulfill the requirements of compressibility. We characterize the maximum a posteriori probability (MAP) and minimum mean-square error (MMSE) estimators for such signals. Interestingly, some of the MAP estimators for the Lévy model coincide with popular signal-denoising algorithms (e.g., total-variation (TV) regularization). We propose a novel noniterative implementation of the MMSE estimator based on the belief-propagation (BP) algorithm performed in the Fourier domain. Our algorithm takes advantage of the fact that the joint statistics of general Lévy processes are much easier to describe by their characteristic function, as the probability densities do not always admit closed-form expressions. We then use our new estimator as a benchmark to compare the performance of existing algorithms for the optimal recovery of gradient-sparse signals. Index Terms—Lévy process, stochastic modeling, sparse-signal estimation, nonlinear reconstruction, total-variation estimation, belief propagation (BP), message passing.
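The MMSE estimator this abstract characterizes is a posterior mean. As a scalar toy, the sketch below computes E[x | y] by numerical integration for one noisy increment under an assumed Laplace (heavy-tailed) prior; the paper's BP machinery handles the full coupled process and works with characteristic functions, which this pointwise illustration does not attempt.

```python
import numpy as np

# Pointwise MMSE (posterior mean) for a noisy Laplace increment under AWGN,
# computed by brute-force numerical integration on a grid.
grid = np.linspace(-10.0, 10.0, 4001)
prior = 0.5 * np.exp(-np.abs(grid))             # Laplace(0, 1) increment density

def mmse(y, sigma=0.5):
    like = np.exp(-(y - grid) ** 2 / (2.0 * sigma ** 2))   # Gaussian likelihood
    post = prior * like                                     # unnormalized posterior
    return np.sum(grid * post) / np.sum(post)               # posterior mean

# The posterior mean shrinks toward 0 (sparse increments) but, unlike the
# MAP/TV solution, never returns exactly 0 for nonzero observations.
ests = [mmse(y) for y in (0.0, 1.0, 3.0)]
```

Comparing this smooth shrinkage curve with the hard-thresholding behavior of TV/MAP estimators is precisely the kind of benchmark comparison the abstract describes.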
BENEFITS OF CONSISTENCY IN IMAGE DENOISING WITH STEERABLE WAVELETS
Abstract
The steerable wavelet transform is a redundant image representation with the remarkable property that its basis functions can be adaptively rotated to a desired orientation. This makes the transform well suited to the design of wavelet-based algorithms for images with a high amount of directional features. However, arbitrary modification of the wavelet-domain coefficients may violate the consistency constraints that a legitimate redundant representation must satisfy. In this paper, by honoring the redundancy of the coefficients, we demonstrate that it is possible to improve the performance of regularized least-squares problems in the steerable wavelet domain. We illustrate that our consistent method significantly improves upon the performance of conventional denoising with steerable wavelets. Index Terms—Image denoising, sparse estimation, wavelet regularization, steerable wavelet transform.
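The consistency issue raised in this abstract can be shown with any redundant analysis operator: after shrinking the coefficients, the result generally leaves the range of the operator, and projecting back restores consistency. The random frame below is an assumed stand-in for a steerable wavelet transform.

```python
import numpy as np

# A redundant analysis operator W (more coefficients than samples). Shrunk
# coefficients generally fall outside range(W); the projection W W^+ maps
# them back to a consistent representation.
rng = np.random.default_rng(5)
W = rng.standard_normal((12, 8))                # redundant analysis operator
pinv = np.linalg.pinv(W)                        # synthesis via pseudoinverse

x = rng.standard_normal(8)
c = W @ x                                       # consistent by construction
c_shrunk = np.sign(c) * np.maximum(np.abs(c) - 0.5, 0.0)  # arbitrary shrinkage

residual = np.linalg.norm(c_shrunk - W @ (pinv @ c_shrunk))  # inconsistency
c_consistent = W @ (pinv @ c_shrunk)            # nearest consistent coefficients
```

A nonzero `residual` is the signature of the constraint violation the paper exploits; honoring it amounts to optimizing only over coefficient vectors of the form W u.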
Blind Denoising with Random Greedy Pursuits
Abstract
Denoising methods require some assumptions about the signal of interest and the noise. While most denoising procedures require some knowledge of the noise level, which may be unknown in practice, here we assume that the expansion of the signal in a given dictionary has a distribution that is more heavy-tailed than that of the noise. We show how this hypothesis leads to a stopping criterion for greedy pursuit algorithms that is independent of the noise level. Inspired by the success of ensemble methods in machine learning, we propose a strategy to reduce the variance of greedy estimates by averaging pursuits obtained from randomly subsampled dictionaries. We call this denoising procedure Blind Random Pursuit Denoising (BIRD). We also offer a generalization to multidimensional signals with a structured sparse model (S-BIRD). The relevance of this approach is demonstrated on synthetic and experimental MEG signals where, without any parameter tuning, BIRD outperforms state-of-the-art algorithms even when they are informed of the noise level. Code is available to reproduce all experiments. EDICS Category: DSP-SPARSE.
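The ensemble step this abstract describes, averaging greedy pursuits over randomly subsampled dictionaries, can be sketched with plain matching pursuit. The fixed iteration count below is an assumed stand-in for the paper's noise-independent statistical stopping criterion, and all sizes and signals are illustrative.

```python
import numpy as np

# Ensemble of matching pursuits on random subdictionaries, averaged to
# reduce the variance of the greedy estimates.
rng = np.random.default_rng(6)
n, k = 64, 128
D = rng.standard_normal((n, k))
D /= np.linalg.norm(D, axis=0)                # unit-norm atoms

x = 3.0 * D[:, 3] - 2.0 * D[:, 100]           # 2-sparse ground truth
y = x + 0.05 * rng.standard_normal(n)         # noisy observation

def matching_pursuit(sig, atoms, iters=2):
    r, est = sig.copy(), np.zeros_like(sig)
    for _ in range(iters):
        corr = atoms.T @ r                    # correlate residual with atoms
        j = np.argmax(np.abs(corr))           # greedy atom selection
        est += corr[j] * atoms[:, j]
        r -= corr[j] * atoms[:, j]
    return est

# Ensemble step: average pursuits over random half-size subdictionaries.
runs = [matching_pursuit(y, D[:, rng.choice(k, k // 2, replace=False)])
        for _ in range(20)]
x_hat = np.mean(runs, axis=0)
```

Each subsampled pursuit sees a different dictionary, so its errors are partly decorrelated; averaging the runs is the variance-reduction device the abstract borrows from ensemble methods.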