Results 1–10 of 46
Automatic Parameter Selection for Denoising Algorithms Using a No-Reference Measure of Image Content
Abstract

Cited by 45 (9 self)
Across the field of inverse problems in image and video processing, nearly all algorithms have various parameters which need to be set in order to yield good results. In practice, the choice of such parameters is usually made empirically, with trial and error, if no “ground-truth” reference is available. Some analytical methods such as cross-validation and Stein’s unbiased risk estimate (SURE) have been successfully used to set such parameters. However, these methods tend to be strongly reliant on restrictive assumptions on the noise, and are also computationally heavy. In this paper, we propose a metric Q which is based on the singular value decomposition of local image gradients, and provides a quantitative measure of true image content (e.g. visually salient geometric structures such as edges) in the presence of noise and other disturbances. This measure (1) is easy to compute, (2) does not require a reference image, (3) reacts reasonably to both blur and random noise, and (4) works well even when the noise is not Gaussian. To illustrate its use in the selection of algorithmic parameters, the proposed measure is used to automatically and effectively set the parameters of two leading image denoising algorithms. While the experimental focus of this paper is on optimizing denoising filters, the proposed metric can also be used for a large variety of other image and video restoration algorithms such as deblurring, super-resolution, and more. Ample simulated and real data experiments illustrate the effectiveness of the proposed approach for denoising applications. For the sake of completeness, the statistical properties of the proposed metric Q in some special cases are also provided.
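The metric described above combines the singular values s1 ≥ s2 of the local gradient matrix. A minimal sketch, assuming the commonly cited form Q = s1·(s1 − s2)/(s1 + s2) per patch; the window size and forward-difference gradients here are illustrative choices, not necessarily the paper's:

```python
import math

def patch_q(patch):
    """Coherence-based content measure for one patch (sketch).

    Builds the local gradient matrix G (one [gx, gy] row per interior
    pixel, via forward differences) and returns s1*(s1-s2)/(s1+s2),
    where s1 >= s2 are the singular values of G.  High on oriented
    structure (edges), low on flat or purely noisy patches.
    """
    h, w = len(patch), len(patch[0])
    # Accumulate the 2x2 structure tensor G^T G instead of storing G.
    gxx = gxy = gyy = 0.0
    for i in range(h - 1):
        for j in range(w - 1):
            gx = patch[i][j + 1] - patch[i][j]
            gy = patch[i + 1][j] - patch[i][j]
            gxx += gx * gx
            gxy += gx * gy
            gyy += gy * gy
    # Eigenvalues of the 2x2 structure tensor; the singular values of
    # G are their square roots.
    tr, det = gxx + gyy, gxx * gyy - gxy * gxy
    disc = math.sqrt(max(tr * tr / 4 - det, 0.0))
    s1 = math.sqrt(max(tr / 2 + disc, 0.0))
    s2 = math.sqrt(max(tr / 2 - disc, 0.0))
    if s1 + s2 == 0.0:
        return 0.0
    return s1 * (s1 - s2) / (s1 + s2)
```

A flat patch scores zero, while a patch containing a sharp edge scores high, which is the behavior a parameter sweep would exploit (pick the denoiser parameter maximizing Q over patches).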
Clustering-Based Denoising With Locally Learned Dictionaries
, 2009
Abstract

Cited by 43 (10 self)
In this paper, we propose K-LLD: a patch-based, locally adaptive denoising method based on clustering the given noisy image into regions of similar geometric structure. In order to effectively perform such clustering, we employ as features the local weight functions derived from our earlier work on steering kernel regression [1]. These weights are exceedingly informative and robust in conveying reliable local structural information about the image, even in the presence of significant amounts of noise. Next, we model each region (or cluster)—which may not be spatially contiguous—by “learning” a best basis describing the patches within that cluster using principal components analysis. This learned basis (or “dictionary”) is then employed to optimally estimate the underlying pixel values using a kernel regression framework. An iterated version of the proposed algorithm is also presented, which leads to further performance enhancements. We also introduce a novel mechanism for optimally choosing the local patch size for each cluster using Stein’s unbiased risk estimator (SURE). We illustrate the overall algorithm’s capabilities with several examples. These indicate that the proposed method appears to be competitive with some of the most recently published state-of-the-art denoising methods.
Optimal inversion of the Anscombe transformation in low-count Poisson image denoising
 IEEE TRANSACTIONS
Abstract

Cited by 28 (4 self)
The removal of Poisson noise is often performed through the following three-step procedure. First, the noise variance is stabilized by applying the Anscombe root transformation to the data, producing a signal in which the noise can be treated as additive Gaussian with unitary variance. Second, the noise is removed using a conventional denoising algorithm for additive white Gaussian noise. Third, an inverse transformation is applied to the denoised signal, obtaining the estimate of the signal of interest. The choice of the proper inverse transformation is crucial in order to minimize the bias error which arises when the nonlinear forward transformation is applied. We introduce optimal inverses for the Anscombe transformation, in particular the exact unbiased inverse, a maximum likelihood (ML) inverse, and a more sophisticated minimum mean square error (MMSE) inverse. We then present an experimental analysis using a few state-of-the-art denoising algorithms and show that the estimation can be consistently improved by applying the exact unbiased inverse, particularly in the low-count regime. This results in a very efficient filtering solution that is competitive with some of the best existing methods for Poisson image denoising.
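The three-step pipeline above can be sketched directly. The algebraic and asymptotically unbiased inverses below are standard closed forms; the paper's exact unbiased inverse has no elementary closed form and is not reproduced here:

```python
import math

def anscombe(x):
    """Forward Anscombe root transform: Poisson -> approx. unit-variance Gaussian."""
    return 2.0 * math.sqrt(x + 3.0 / 8.0)

def inverse_algebraic(d):
    """Direct algebraic inverse of the forward transform (biased at low counts)."""
    return (d / 2.0) ** 2 - 3.0 / 8.0

def inverse_asymptotic(d):
    """Asymptotically unbiased closed-form inverse; the exact unbiased
    inverse replaces this with an integral expression evaluated numerically."""
    return (d / 2.0) ** 2 - 1.0 / 8.0

def denoise_poisson(signal, gaussian_denoiser, inverse=inverse_asymptotic):
    """Three-step pipeline from the abstract: stabilize, denoise, invert.
    `gaussian_denoiser` is any routine for additive white Gaussian noise."""
    stabilized = [anscombe(v) for v in signal]   # step 1: variance stabilization
    denoised = gaussian_denoiser(stabilized)     # step 2: AWGN denoising
    return [inverse(v) for v in denoised]        # step 3: inverse transform
```

With the identity as a stand-in denoiser, the algebraic inverse recovers the input exactly, which is a quick sanity check of the forward/inverse pair.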
Parallel proximal algorithm for image restoration using hybrid regularization
 IEEE Transactions on Image Processing
, 2011
Abstract

Cited by 24 (7 self)
Regularization approaches have demonstrated their effectiveness for solving ill-posed problems. However, in the context of variational restoration methods, a challenging question remains, namely how to find a good regularizer. While total variation introduces staircase effects, wavelet domain regularization brings other artefacts, e.g. ringing. However, a trade-off can be made by introducing a hybrid regularization including several terms not necessarily acting in the same domain (e.g. spatial and wavelet transform domains). While this approach was shown to provide good results for solving deconvolution problems in the presence of additive Gaussian noise, an important issue is to efficiently deal with this hybrid regularization for more general noise models. To solve this problem, we adopt a convex optimization framework where the criterion to be minimized is split into the sum of more than two terms. For spatial domain regularization, isotropic and anisotropic total variation definitions using various gradient filters are considered. An accelerated version of the Parallel Proximal Algorithm is proposed to perform the minimization. Some difficulties in the computation of the proximity operators involved in this algorithm are also addressed in this paper. Numerical experiments performed in the context of Poisson data recovery show the good behaviour of the algorithm, as well as promising results concerning the use of hybrid regularization techniques.
A SURE Approach for Digital Signal/Image Deconvolution Problems
, 2009
Abstract

Cited by 22 (3 self)
In this paper, we are interested in the classical problem of restoring data degraded by a convolution and the addition of white Gaussian noise. The originality of the proposed approach is twofold. Firstly, we formulate the restoration problem as a nonlinear estimation problem, leading to the minimization of a criterion derived from Stein’s unbiased quadratic risk estimate. Secondly, the deconvolution procedure is performed using arbitrary analysis and synthesis frames, which may or may not be overcomplete. New theoretical results concerning the calculation of the variance of Stein’s risk estimate are also provided in this work. Simulations carried out on natural images show the good performance of our method w.r.t. conventional wavelet-based restoration methods.
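As background for the SURE criterion mentioned above, here is the textbook special case of soft thresholding under i.i.d. Gaussian noise (a plain denoising setting, not the paper's more general frame-based deconvolution criterion):

```python
def soft_threshold(y, lam):
    """Componentwise soft thresholding: shrink toward zero by lam."""
    return [max(abs(v) - lam, 0.0) * (1 if v > 0 else -1) for v in y]

def sure_soft_threshold(y, lam, sigma):
    """Stein's unbiased risk estimate for soft thresholding under
    y = x + N(0, sigma^2) i.i.d.:

        SURE = -n*sigma^2 + ||f(y) - y||^2 + 2*sigma^2 * div f(y),

    where div f(y) = #{i : |y_i| > lam} for the soft-threshold map.
    Minimizing SURE over lam selects the threshold without access to x.
    """
    n = len(y)
    f = soft_threshold(y, lam)
    residual = sum((fi - yi) ** 2 for fi, yi in zip(f, y))
    divergence = sum(1 for v in y if abs(v) > lam)
    return -n * sigma ** 2 + residual + 2 * sigma ** 2 * divergence
```

A useful check: at lam = 0 the estimator is the identity, and SURE reduces to n·sigma², the true risk of the identity estimator.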
Accelerated dynamic MRI exploiting sparsity and low-rank structure: k-t SLR
Abstract

Cited by 22 (3 self)
We introduce a novel algorithm to reconstruct dynamic MRI data from undersampled k-t space data. In contrast to classical model-based cine MRI schemes that rely on the sparsity or banded structure in Fourier space, we use the compact representation of the data in the Karhunen–Loève transform (KLT) domain to exploit the correlations in the dataset. The use of the data-dependent KL transform makes our approach ideally suited to a range of dynamic imaging problems, even when the motion is not periodic. In comparison to current KLT-based methods that rely on a two-step approach to first estimate the basis functions and then use them for reconstruction, we pose the problem as a spectrally regularized matrix recovery problem. By simultaneously determining the temporal basis functions and their spatial weights from the entire measured data, the proposed scheme is capable of providing high quality reconstructions at a range of accelerations. In addition to using the compact representation in the KLT domain, we also exploit the sparsity of the data to further improve the recovery rate. Validations using numerical phantoms and in-vivo cardiac perfusion MRI data demonstrate the significant improvement in performance offered by the proposed scheme over existing methods.
Local behavior of sparse analysis regularization: Applications to risk estimation
 Applied and Computational Harmonic Analysis
, 2013
Abstract

Cited by 9 (5 self)
In this paper, we aim at recovering an unknown signal x0 from noisy measurements y = Φx0 + w, where Φ is an ill-conditioned or singular linear operator and w accounts for some noise. To regularize such an ill-posed inverse problem, we impose an analysis sparsity prior. More precisely, the recovery is cast as a convex optimization program where the objective is the sum of a quadratic data fidelity term and a regularization term formed of the ℓ1-norm of the correlations between the sought-after signal and atoms in a given (generally overcomplete) dictionary. The ℓ1 sparsity analysis prior is weighted by a regularization parameter λ > 0. In this paper, we prove that any minimizer of this problem is a piecewise-affine function of the observations y and the regularization parameter λ. As a byproduct, we exploit these properties to get an objectively guided choice of λ. In particular, we develop an extension of the Generalized Stein Unbiased Risk Estimator (GSURE) and show that it is an unbiased and reliable estimator of an appropriately defined risk. The latter encompasses special cases
A signal processing approach to generalized 1D total variation
 IEEE Trans. Signal Process
, 2011
Abstract

Cited by 8 (2 self)
Total variation (TV) is a powerful method that brings great benefit for edge-preserving regularization. Despite being widely employed in image processing, it has restricted applicability for 1D signal processing, since piecewise-constant signals form a rather limited model for many applications. Here we generalize conventional TV in 1D by extending the derivative operator, which is within the regularization term, to any linear differential operator. This provides flexibility for tailoring the approach to the presence of nontrivial linear systems and to different types of driving signals, such as spike-like, piecewise-constant, and so on. Conventional TV remains a special case of this general framework. We illustrate the feasibility of the method by considering a nontrivial linear system and different types of driving signals. Index Terms—Differential operators, linear systems, regularization, sparsity, total variation.
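The generalization described above amounts to replacing the first-difference operator inside the ℓ1 penalty with another linear differential operator. A sketch of the objective (evaluation only; the paper's solver is not reproduced here):

```python
def first_difference(x):
    """Conventional TV operator D: (Dx)_n = x[n+1] - x[n]."""
    return [x[i + 1] - x[i] for i in range(len(x) - 1)]

def second_difference(x):
    """One generalized choice of D, which vanishes on piecewise-linear signals."""
    return [x[i + 2] - 2 * x[i + 1] + x[i] for i in range(len(x) - 2)]

def gtv_cost(x, y, lam, D=first_difference):
    """Generalized TV objective  0.5*||y - x||^2 + lam*||D x||_1.

    With D = first_difference this is conventional 1-D TV; swapping in
    any other linear differential operator gives the generalized form.
    """
    fidelity = 0.5 * sum((yi - xi) ** 2 for yi, xi in zip(y, x))
    penalty = lam * sum(abs(v) for v in D(x))
    return fidelity + penalty
```

For a linear ramp, the second-difference penalty is zero while the first-difference penalty is not, which illustrates why the choice of D should match the signal model.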
Global image denoising
 IEEE Trans. on Image Proc
, 2014
Abstract

Cited by 7 (2 self)
Most existing state-of-the-art image denoising algorithms are based on exploiting similarity between a relatively modest number of patches. These patch-based methods are strictly dependent on patch matching, and their performance is hamstrung by the difficulty of reliably finding sufficiently similar patches. As the number of patches grows, a point of diminishing returns is reached where the performance improvement due to more patches is offset by the lower likelihood of finding sufficiently close matches. The net effect is that while patch-based methods, such as BM3D, are excellent overall, they are ultimately limited in how well they can do on (larger) images with increasing complexity. In this paper, we address these shortcomings by developing a paradigm for truly global filtering, where each pixel is estimated from all pixels in the image. Our objectives in this paper are twofold. First, we give a statistical analysis of our proposed global filter, based on a spectral decomposition of its corresponding operator, and we study the effect of truncation of this spectral decomposition. Second, we derive an approximation to the spectral (principal) components using the Nyström extension. Using these, we demonstrate that this global filter can be implemented efficiently by sampling a fairly small percentage of the pixels in the image. Experiments illustrate that our strategy can effectively globalize any existing denoising filter to estimate each pixel using all pixels in the image, hence improving upon the best patch-based methods. Index Terms—Image denoising, non-local filters, Nyström extension, spatial domain filter, risk estimator.
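The "each pixel from all pixels" filter analyzed above can be written as a row-stochastic matrix W = D⁻¹K applied to the data. A toy dense version for a 1-D signal, with an assumed Gaussian similarity kernel and bandwidth h (the paper's Nyström sampling is what makes this tractable at image scale, where the dense O(N²) form below is not):

```python
import math

def global_filter(pixels, h=0.5):
    """Estimate each sample as a weighted average of ALL samples,
    with similarity weights K_ij = exp(-(p_i - p_j)^2 / h^2), then
    row-normalize -- i.e. apply W = D^{-1} K.  Dense O(N^2) sketch."""
    n = len(pixels)
    out = []
    for i in range(n):
        weights = [math.exp(-((pixels[i] - pixels[j]) ** 2) / h ** 2)
                   for j in range(n)]
        total = sum(weights)  # row sum D_ii; normalizes the row of K
        out.append(sum(w * p for w, p in zip(weights, pixels)) / total)
    return out
```

Because W is row-stochastic, a constant signal passes through unchanged, and every output value stays inside the range of the input.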
Epigraphical projection and proximal tools for solving constrained convex optimization problems
Part I, pp. x+24, 2012, submitted. Preprint: http://arxiv.org/pdf/1210.5844
Abstract

Cited by 6 (2 self)
We propose a proximal approach to deal with convex optimization problems involving nonlinear constraints. A large family of such constraints, proven to be effective in the solution of inverse problems, can be expressed as the lower level set of a sum of convex functions evaluated over different, but possibly overlapping, blocks of the signal. For this class of constraints, the associated projection operator generally does not have a closed form. We circumvent this difficulty by splitting the lower level set into as many epigraphs as there are functions involved in the sum. A closed half-space constraint is also enforced, in order to limit the sum of the introduced epigraphical variables to the upper bound of the original lower level set. In this paper, we focus on a family of constraints involving linear transforms of ℓ1,p balls. Our main theoretical contribution is to provide closed-form expressions of the epigraphical projections associated with the Euclidean norm (p = 2) and the sup-norm (p = +∞). The proposed approach is validated in the context of image restoration with missing samples, by making use of TV-like constraints. Experiments show that our method leads to significant improvements in terms of convergence speed over existing algorithms for solving similar constrained problems.
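For intuition about epigraphical projection, the p = 2 case mentioned above reduces to the standard closed-form projection onto the second-order cone {(z, s) : ||z||₂ ≤ s} (a textbook result, shown here as a sketch rather than the paper's full construction):

```python
import math

def project_epi_l2(x, t):
    """Euclidean projection of the pair (x, t) onto the epigraph of the
    Euclidean norm, {(z, s) : ||z||_2 <= s} (the second-order cone).

    Three cases: already feasible; in the polar cone (project to origin);
    otherwise scale both parts onto the cone boundary."""
    nx = math.sqrt(sum(v * v for v in x))
    if nx <= t:                      # already inside the epigraph
        return list(x), t
    if nx <= -t:                     # in the polar cone: closest point is 0
        return [0.0] * len(x), 0.0
    a = (1.0 + t / nx) / 2.0         # boundary case: scale onto ||z|| = s
    return [a * v for v in x], a * nx
```

In the boundary case the output satisfies ||z||₂ = s exactly, so the projected pair lies on the cone as required.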