Results 1–10 of 258
A fast iterative shrinkage-thresholding algorithm with application to . . .
2009
Cited by 1055 (8 self)
We consider the class of Iterative Shrinkage-Thresholding Algorithms (ISTA) for solving linear inverse problems arising in signal/image processing. This class of methods is attractive due to its simplicity; however, these methods are also known to converge quite slowly. In this paper we present a Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) which preserves the computational simplicity of ISTA, but with a global rate of convergence which is proven to be significantly better, both theoretically and practically. Initial promising numerical results for wavelet-based image deblurring demonstrate the capabilities of FISTA.
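The ISTA/FISTA iteration described here can be sketched for the l1-regularized least-squares problem min_x 0.5*||Ax-b||^2 + lam*||x||_1. Below is a minimal NumPy version, assuming a dense matrix A and the standard 1/L step size; function and parameter names are illustrative, not from the paper:

```python
import numpy as np

def soft_threshold(x, t):
    # Elementwise shrinkage operator: sign(x) * max(|x| - t, 0).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, b, lam, n_iter=200):
    # L is the Lipschitz constant of the gradient of 0.5*||Ax-b||^2,
    # i.e. the largest eigenvalue of A^T A (spectral norm squared).
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(n_iter):
        # Plain ISTA step applied at the extrapolated point y.
        x_new = soft_threshold(y - A.T @ (A @ y - b) / L, lam / L)
        # FISTA momentum: update the stepsize sequence and extrapolate.
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```

Dropping the momentum step (always using y = x_new) recovers plain ISTA, whose slower convergence is exactly what motivates the paper.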
Image denoising using a scale mixture of Gaussians in the wavelet domain
IEEE Transactions on Image Processing, 2003
Cited by 514 (17 self)
We describe a method for removing noise from digital images, based on a statistical model of the coefficients of an overcomplete multiscale oriented basis. Neighborhoods of coefficients at adjacent positions and scales are modeled as the product of two independent random variables: a Gaussian vector and a hidden positive scalar multiplier. The latter modulates the local variance of the coefficients in the neighborhood, and is thus able to account for the empirically observed correlation between the coefficient amplitudes. Under this model, the Bayesian least squares estimate of each coefficient reduces to a weighted average of the local linear estimates over all possible values of the hidden multiplier variable. We demonstrate through simulations with images contaminated by additive white Gaussian noise that the performance of this method substantially surpasses that of previously published methods, both visually and in terms of mean squared error.
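The generative model behind this method, a Gaussian scale mixture, is easy to simulate: a coefficient is sqrt(z) * u with u Gaussian and z a hidden positive multiplier. The sketch below uses a lognormal z purely as an illustrative choice (the paper treats the multiplier prior more carefully) and checks that the resulting marginal is heavier-tailed than a Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
z = rng.lognormal(mean=0.0, sigma=1.0, size=n)  # hidden scalar multiplier
u = rng.standard_normal(n)                      # Gaussian component
c = np.sqrt(z) * u                              # GSM "wavelet coefficients"

def excess_kurtosis(x):
    # Excess kurtosis is 0 for a Gaussian, positive for heavy tails.
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2) ** 2 - 3.0
```

The hidden multiplier modulates the local variance, which is what lets the model reproduce the clustered large-amplitude coefficients seen in real images.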
Splines: A Perfect Fit for Signal/Image Processing
IEEE Signal Processing Magazine, 1999
Probing the Pareto frontier for basis pursuit solutions
2008
Cited by 363 (4 self)
The basis pursuit problem seeks a minimum one-norm solution of an underdetermined least-squares problem. Basis pursuit denoise (BPDN) fits the least-squares problem only approximately, and a single parameter determines a curve that traces the optimal trade-off between the least-squares fit and the one-norm of the solution. We prove that this curve is convex and continuously differentiable over all points of interest, and show that it gives an explicit relationship to two other optimization problems closely related to BPDN. We describe a root-finding algorithm for finding arbitrary points on this curve; the algorithm is suitable for problems that are large scale and for those that are in the complex domain. At each iteration, a spectral gradient-projection method approximately minimizes a least-squares problem with an explicit one-norm constraint. Only matrix-vector operations are required. The primal-dual solution of this problem gives function and derivative information needed for the root-finding method. Numerical experiments on a comprehensive set of test problems demonstrate that the method scales well to large problems.
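The relationship the abstract refers to can be written out. With residual r = b - Ax, the three problems and the Pareto-curve root-finding idea are (a sketch of the standard formulation; sigma, tau, and lambda are the respective parameters):

```latex
% Basis pursuit denoise and its two close relatives:
\min_x \|x\|_1 \quad \text{s.t.} \quad \|Ax-b\|_2 \le \sigma
  \qquad (\mathrm{BP}_\sigma)
\min_x \|Ax-b\|_2 \quad \text{s.t.} \quad \|x\|_1 \le \tau
  \qquad (\mathrm{LS}_\tau)
\min_x \tfrac{1}{2}\|Ax-b\|_2^2 + \lambda\|x\|_1
  \qquad (\mathrm{QP}_\lambda)

% Let \varphi(\tau) denote the optimal value of LS_\tau (the Pareto
% curve). The method solves \varphi(\tau) = \sigma by Newton's method:
\varphi'(\tau) = -\frac{\|A^\top r\|_\infty}{\|r\|_2},
\qquad
\tau_{k+1} = \tau_k + \frac{\sigma - \varphi(\tau_k)}{\varphi'(\tau_k)} .
```

The derivative comes from the dual solution of the LS subproblem, which is why each Newton step needs only the approximate gradient-projection solve mentioned in the abstract.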
Adaptive Wavelet Thresholding for Image Denoising and Compression
IEEE Transactions on Image Processing, 2000
Cited by 351 (4 self)
The first part of this paper proposes an adaptive, data-driven threshold for image denoising via wavelet soft-thresholding. The threshold is derived in a Bayesian framework, and the prior used on the wavelet coefficients is the generalized Gaussian distribution (GGD) widely used in image processing applications. The proposed threshold is simple and closed-form, and it is adaptive to each subband because it depends on data-driven estimates of the parameters. Experimental results show that the proposed method, called BayesShrink, is typically within 5% of the MSE of the best soft-thresholding benchmark with the image assumed known. It also outperforms Donoho and Johnstone's SureShrink most of the time. The second part . . .
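The closed-form threshold in question is T = sigma_n^2 / sigma_x per subband, with the signal variance estimated as (var(Y) - sigma_n^2)_+. A minimal sketch follows; the noise std is assumed known here, though in practice it is commonly estimated robustly as median(|HH1|)/0.6745:

```python
import numpy as np

def bayes_shrink_threshold(subband, sigma_n):
    # BayesShrink-style subband threshold, T = sigma_n^2 / sigma_x.
    # Signal variance estimate: sigma_x^2 = max(var(Y) - sigma_n^2, 0),
    # assuming zero-mean detail coefficients.
    var_y = np.mean(subband ** 2)
    sigma_x = np.sqrt(max(var_y - sigma_n ** 2, 0.0))
    if sigma_x == 0.0:
        # Subband is essentially all noise: threshold everything away.
        return np.abs(subband).max()
    return sigma_n ** 2 / sigma_x

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
```

Because the threshold depends only on subband statistics, it adapts per subband exactly as the abstract describes: noisy, low-signal subbands get large thresholds and strongly structured subbands get small ones.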
Bayesian Tree-Structured Image Modeling using Wavelet-domain Hidden Markov Models
IEEE Transactions on Image Processing, 1999
Cited by 187 (17 self)
Wavelet-domain hidden Markov models have proven to be useful tools for statistical signal and image processing. The hidden Markov tree (HMT) model captures the key features of the joint probability density of the wavelet coefficients of real-world data. One potential drawback to the HMT framework is the need for computationally expensive iterative training to fit an HMT model to a given data set (using the Expectation-Maximization algorithm, for example). In this paper, we greatly simplify the HMT model by exploiting the inherent self-similarity of real-world images. This simplified model specifies the HMT parameters with just nine meta-parameters (independent of the size of the image and the number of wavelet scales). We also introduce a Bayesian universal HMT (uHMT) that fixes these nine parameters. The uHMT requires no training of any kind. While extremely simple, we show using a series of image estimation/denoising experiments that these two new models retain nearly all of the key structure modeled by the full HMT. Finally, we propose a fast shift-invariant HMT estimation algorithm that outperforms other wavelet-based estimators in the current literature, both in mean-square error and visual metrics.
A New TwIST: Two-Step Iterative Shrinkage/Thresholding Algorithms for Image Restoration
IEEE Transactions on Image Processing, 2007
Cited by 184 (25 self)
Iterative shrinkage/thresholding (IST) algorithms have recently been proposed to handle a class of convex unconstrained optimization problems arising in image restoration and other linear inverse problems. This class of problems results from combining a linear observation model with a non-quadratic regularizer (e.g., total variation or wavelet-based regularization). It happens that the convergence rate of these IST algorithms depends heavily on the linear observation operator, becoming very slow when this operator is ill-conditioned or ill-posed. In this paper, we introduce two-step IST (TwIST) algorithms, exhibiting much faster convergence rate than IST for ill-conditioned problems. For a vast class of non-quadratic convex regularizers (ℓp norms, some Besov norms, and total variation), we show that TwIST converges to a minimizer of the objective function, for a given range of values of its parameters. For non-invertible observation operators, we introduce a monotonic version of TwIST (MTwIST); although the convergence proof does not apply to this scenario, we give experimental evidence that MTwIST exhibits similar speed gains over IST. The effectiveness of the new methods is experimentally confirmed on problems of image deconvolution and of restoration with missing samples.
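The two-step recursion is x_{t+1} = (1-a)*x_{t-1} + (a-b)*x_t + b*Gamma(x_t), where Gamma is one IST step. A minimal sketch for the l1-regularized problem is below; the alpha/beta defaults are illustrative (the paper derives good values from bounds on the spectrum of A^T A), and with alpha = beta = 1 the recursion reduces to plain IST:

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def twist(A, b, lam, alpha=1.0, beta=1.0, n_iter=500):
    # Two-step IST sketch for min 0.5*||Ax-b||^2 + lam*||x||_1.
    # Assumes A is scaled so ||A||_2 <= 1, making the unit step stable.
    def gamma(x):
        # One IST step: gradient step followed by shrinkage (denoising).
        return soft_threshold(x + A.T @ (b - A @ x), lam)

    x_prev = np.zeros(A.shape[1])
    x = gamma(x_prev)
    for _ in range(n_iter):
        # Two-step update mixing the previous two iterates.
        x_new = (1.0 - alpha) * x_prev + (alpha - beta) * x + beta * gamma(x)
        x_prev, x = x, x_new
    return x
```

The point of the second step is that the extra memory term acts like over-relaxation, which is what speeds convergence on ill-conditioned operators.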
Analysis versus synthesis in signal priors
2005
Cited by 147 (17 self)
The concept of prior probability for signals plays a key role in the successful solution of many inverse problems. Much of the literature on this topic can be divided between analysis-based and synthesis-based priors. Analysis-based priors assign probability to a signal through various forward measurements of it, while synthesis-based priors seek a reconstruction of the signal as a combination of atom signals. In this paper we describe these two prior classes, focusing on the distinction between them. We show that although the two become equivalent in the complete and undercomplete formulations, in the more interesting overcomplete formulation the two types depart. Focusing on the ℓ1 denoising case, we present several ways of comparing the two types of priors, establishing the existence of an unbridgeable gap between them.
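For the denoising case the abstract focuses on, the two prior types give the following pair of problems, written in a standard way with D an overcomplete dictionary and Omega an analysis operator:

```latex
% Analysis-based prior: penalize forward measurements of the signal.
\hat{x}_{\mathrm{A}}
  = \arg\min_x \tfrac{1}{2}\|y - x\|_2^2 + \lambda \|\Omega x\|_1

% Synthesis-based prior: build the signal from dictionary atoms.
\hat{x}_{\mathrm{S}} = D\hat{\alpha}, \qquad
\hat{\alpha}
  = \arg\min_\alpha \tfrac{1}{2}\|y - D\alpha\|_2^2 + \lambda \|\alpha\|_1

% When D is square and invertible and \Omega = D^{-1}, a change of
% variables x = D\alpha maps one problem onto the other; when D and
% \Omega are overcomplete, no such change of variables exists and the
% two formulations genuinely differ.
```

The comment in the block spells out the equivalence/gap claim made in the abstract.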
Adaptive wavelet estimation: A block thresholding and oracle inequality approach
Annals of Statistics, 1999
Cited by 146 (20 self)
We study wavelet function estimation via the approach of block thresholding and ideal adaptation with oracle. Oracle inequalities are derived and serve as guides for the selection of smoothing parameters. Based on an oracle inequality and motivated by the data compression and localization properties of wavelets, an adaptive wavelet estimator for nonparametric regression is proposed and the optimality of the procedure is investigated. We show that the estimator achieves simultaneously three objectives: adaptivity, spatial adaptivity and computational efficiency. Specifically, it is proved that the estimator attains the exact optimal rates of convergence over a range of Besov classes and the estimator achieves adaptive local minimax rate for estimating functions at a point. The estimator is easy to implement, at the computational cost of O(n). Simulation shows that the estimator has excellent numerical performance relative to more traditional wavelet estimators.
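A block thresholding rule of the kind studied here applies a James-Stein-type shrinkage to whole blocks of coefficients at once: theta_b = (1 - lam * L * sigma^2 / S_b^2)_+ * y_b, where S_b^2 is the block's energy. The sketch below assumes a known noise level sigma; the default lam is approximately the constant derived in the paper:

```python
import numpy as np

def block_threshold(coeffs, sigma, block_len, lam=4.505):
    # James-Stein-type block shrinkage: a block survives only if its
    # energy S_b^2 exceeds lam * L * sigma^2, and is then shrunk
    # toward zero by the factor (1 - lam * L * sigma^2 / S_b^2).
    out = np.zeros_like(coeffs)
    for start in range(0, len(coeffs), block_len):
        block = coeffs[start:start + block_len]
        s2 = np.sum(block ** 2)
        if s2 > 0:
            shrink = max(1.0 - lam * block_len * sigma ** 2 / s2, 0.0)
        else:
            shrink = 0.0
        out[start:start + block_len] = shrink * block
    return out
```

Thresholding blocks rather than individual coefficients is what yields the simultaneous global and local adaptivity the abstract claims: decisions are made on pooled evidence, yet each block is local.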