Results 1–10 of 358
From Sparse Solutions of Systems of Equations to Sparse Modeling of Signals and Images
, 2007
Abstract

Cited by 427 (35 self)
A full-rank matrix A ∈ ℝ^{n×m} with n < m generates an underdetermined system of linear equations Ax = b having infinitely many solutions. Suppose we seek the sparsest solution, i.e., the one with the fewest nonzero entries: can it ever be unique? If so, when? As optimization of sparsity is combinatorial in nature, are there efficient methods for finding the sparsest solution? These questions have been answered positively and constructively in recent years, exposing a wide variety of surprising phenomena; in particular, the existence of easily verifiable conditions under which optimally sparse solutions can be found by concrete, effective computational methods. Such theoretical results inspire a bold perspective on some important practical problems in signal and image processing. Several well-known signal and image processing problems can be cast as demanding solutions of underdetermined systems of equations. Such problems have previously seemed, to many, intractable. There is considerable evidence that these problems often have sparse solutions. Hence, advances in finding sparse solutions to underdetermined systems energize research on such signal and image processing problems – to striking effect. In this paper we review the theoretical results on sparse solutions of linear systems, empirical …
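The "concrete, effective computational methods" the abstract alludes to include greedy pursuits. A minimal sketch of one such method, orthogonal matching pursuit, on an illustrative random dictionary (problem sizes and the sparsity level are hypothetical, not from the paper):

```python
import numpy as np

def omp(A, b, k):
    """Orthogonal matching pursuit: greedily build a k-sparse x with A @ x ≈ b."""
    m = A.shape[1]
    r = b.astype(float).copy()        # current residual
    support = []                      # indices of selected columns
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))   # column most correlated with residual
        if j not in support:
            support.append(j)
        # re-fit all selected columns by least squares, then update the residual
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        r = b - A[:, support] @ coef
    x = np.zeros(m)
    x[support] = coef
    return x

# 20 equations, 50 unknowns, a 3-sparse ground truth (illustrative sizes)
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
A /= np.linalg.norm(A, axis=0)        # unit-norm columns
x_true = np.zeros(50)
x_true[[5, 17, 33]] = [1.5, -2.0, 0.8]
b = A @ x_true
x_hat = omp(A, b, 3)
```

Under the conditions the survey reviews (sufficiently incoherent columns, sufficiently sparse x), this greedy procedure provably recovers the sparsest solution.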
Spatially adaptive wavelet thresholding with context modeling for image denoising
 in ICIP
, 1998
Bivariate Shrinkage Functions for Wavelet-Based Denoising Exploiting Interscale Dependency
, 2002
Abstract

Cited by 209 (8 self)
Most simple nonlinear thresholding rules for wavelet-based denoising assume that the wavelet coefficients are independent. However, wavelet coefficients of natural images have significant dependencies. In this paper, we consider in detail only the dependencies between the coefficients and their parents. For this purpose, new non-Gaussian bivariate distributions are proposed, and corresponding nonlinear threshold functions (shrinkage functions) are derived from the models using Bayesian estimation theory. The new shrinkage functions do not assume the independence of wavelet coefficients. We give three image denoising examples to demonstrate the performance of these new bivariate shrinkage rules. In the second example, a simple subband-dependent, data-driven image denoising system is described and compared with effective data-driven techniques in the literature, namely VisuShrink, SureShrink, BayesShrink, and hidden Markov models. In the third example, the same idea is applied to the dual-tree complex wavelet coefficients.
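The bivariate shrinkage rule derived in this paper has a simple closed form: the coefficient is attenuated by a factor that depends on the joint magnitude of the coefficient and its parent. A minimal numpy sketch, with the noise standard deviation sigma_n and marginal signal standard deviation sigma passed in as assumed-known parameters:

```python
import numpy as np

def bivariate_shrink(y, y_parent, sigma_n, sigma):
    """Sendur-Selesnick bivariate shrinkage of coefficient y given its parent:
    w_hat = (sqrt(y^2 + y_parent^2) - sqrt(3) * sigma_n^2 / sigma)_+
            / sqrt(y^2 + y_parent^2) * y
    """
    mag = np.sqrt(y**2 + y_parent**2)
    gain = np.maximum(mag - np.sqrt(3.0) * sigma_n**2 / sigma, 0.0)
    return gain / np.maximum(mag, 1e-12) * y   # small floor avoids 0/0

# A weak coefficient with a weak parent is killed; a strong one survives
out = bivariate_shrink(np.array([0.5, 10.0]), np.array([0.5, 0.0]), 1.0, 1.0)
```

Note how the parent enters only through the joint magnitude: a small coefficient with a large parent is shrunk less than the same coefficient in isolation, which is exactly the interscale dependency the abstract describes.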
Bivariate Shrinkage with Local Variance Estimation
, 2002
Abstract

Cited by 121 (7 self)
The performance of image-denoising algorithms using wavelet transforms can be improved significantly by taking into account the statistical dependencies among wavelet coefficients, as demonstrated by several algorithms presented in the literature. In two earlier papers by the authors, a simple bivariate shrinkage rule is described using a coefficient and its parent. The performance can also be improved using simple models by estimating model parameters in a local neighborhood. This letter presents a locally adaptive denoising algorithm using the bivariate shrinkage function. The algorithm is illustrated using both the orthogonal and dual-tree complex wavelet transforms. Some comparisons with the best available results are given to illustrate the effectiveness of the proposed algorithm.
Why simple shrinkage is still relevant for redundant representations
 IEEE Trans. Inf. Theory
, 2006
Abstract

Cited by 115 (12 self)
General Description
• Problem statement: Shrinkage is a well-known and appealing denoising technique, introduced originally by Donoho and Johnstone in 1994. The use of shrinkage for denoising is known to be optimal for Gaussian white noise, provided that sparsity of the signal's representation is enforced using a unitary transform. Still, shrinkage is also practiced with non-unitary, and even redundant, representations, typically leading to satisfactory results. In this paper we shed some light on this behavior.
• Originality of the work: The main argument in this paper is that such simple shrinkage can be interpreted as the first iteration of an algorithm that solves the basis pursuit denoising (BPDN) problem. While the desired solution of BPDN is hard to obtain in general, a simple iterative procedure that amounts to stepwise shrinkage can be employed with quite successful performance.
• New results: We demonstrate how simple shrinkage emerges as the first iteration of such an algorithm. Furthermore, we show how shrinkage can be iterated, turning into an effective algorithm that minimizes the BPDN objective via simple shrinkage steps, in order to further strengthen the denoising effect. Lastly, the emerging algorithm stands between basis pursuit and matching pursuit as a novel and appealing technique for atom decomposition in the presence of noise.
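The iterated-shrinkage idea can be sketched as ISTA for the BPDN objective 0.5*||y - Ax||^2 + lam*||x||_1: starting from x = 0, the first iterate is exactly a shrinkage of the analysis coefficients A^T y, matching the paper's argument (step size, problem sizes, and lam below are illustrative):

```python
import numpy as np

def soft(x, t):
    """Soft-threshold (shrinkage) operator."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, n_iter=200):
    """Iterative shrinkage-thresholding for min 0.5*||y - Ax||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # gradient step on the quadratic term, then shrinkage
        x = soft(x + A.T @ (y - A @ x) / L, lam / L)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 60)) / np.sqrt(30)   # a redundant representation
y = rng.standard_normal(30)
x_hat = ista(A, y, lam=0.1)
```

With a unitary A, one iteration already solves BPDN exactly; with a redundant A, iterating the same shrinkage step monotonically decreases the BPDN objective, which is the paper's explanation for why simple shrinkage works so well in practice.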
Nonsubsampled Contourlet Transform: Filter Design and Applications in Denoising
Abstract

Cited by 109 (4 self)
In this paper we study the nonsubsampled contourlet transform. We address the corresponding filter design problem using the McClellan transformation. We show how zeros can be imposed in the filters so that the iterated structure produces regular basis functions. The proposed design framework yields filters that can be implemented efficiently through a lifting factorization. We apply the constructed transform to image noise removal, where the results obtained are comparable to the state of the art, and superior in some cases.
Estimating the probability of the presence of a signal of interest in multiresolution single and multiband image denoising
 IEEE TRANSACTIONS ON IMAGE PROCESSING
, 2005
Abstract

Cited by 91 (13 self)
We develop three novel wavelet-domain denoising methods for subband-adaptive, spatially adaptive, and multivalued image denoising. The core of our approach is the estimation of the probability that a given coefficient contains a significant noise-free component, which we call the “signal of interest”. In this respect we analyze cases where the probability of signal presence is (i) fixed per subband, (ii) conditioned on a local spatial context, and (iii) conditioned on information from multiple image bands. All the probabilities are estimated assuming a generalized Laplacian prior for noise-free subband data and additive white Gaussian noise. The results demonstrate that the new subband-adaptive shrinkage function outperforms Bayesian thresholding approaches in terms of mean squared error. The spatially adaptive version of the proposed method yields better results than existing spatially adaptive ones of similar and higher complexity. The performance on color and on multispectral images is superior with respect to recent multiband wavelet thresholding.
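The core estimate reduces to a two-hypothesis Bayes rule: shrink each coefficient by the posterior probability that it contains signal. A short sketch; note the paper uses a generalized Laplacian prior, while this sketch substitutes a Gaussian signal model (sigma_s) purely to keep the code self-contained:

```python
import numpy as np

def gauss_pdf(y, var):
    """Zero-mean Gaussian density, used for both hypotheses below."""
    return np.exp(-y**2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def prob_shrink(y, p_signal, sigma_n, sigma_s):
    """Shrink by the posterior probability of signal presence:
    w_hat = P(signal | y) * y  (Gaussian stand-in for the paper's Laplacian prior)."""
    num = p_signal * gauss_pdf(y, sigma_n**2 + sigma_s**2)      # H1: signal + noise
    den = num + (1.0 - p_signal) * gauss_pdf(y, sigma_n**2)     # H0: noise only
    return num / den * y
```

The three methods in the paper differ only in where p_signal comes from: fixed per subband, estimated from a local spatial context, or estimated from the other image bands.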
Wavelets on graphs via spectral graph theory
, 2009
Abstract

Cited by 89 (7 self)
We propose a novel method for constructing wavelet transforms of functions defined on the vertices of an arbitrary finite weighted graph. Our approach is based on defining scaling using the graph analogue of the Fourier domain, namely the spectral decomposition of the discrete graph Laplacian L. Given a wavelet generating kernel g and a scale parameter t, we define the scaled wavelet operator T_g^t = g(tL). The spectral graph wavelets are then formed by localizing this operator, applying it to an indicator function. Subject to an admissibility condition on g, this procedure defines an invertible transform. We explore the localization properties of the wavelets in the limit of fine scales. Additionally, we present a fast Chebyshev polynomial approximation algorithm for computing the transform that avoids the need to diagonalize L. We highlight potential applications of the transform through examples of wavelets on graphs corresponding to a variety of different problem domains.
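The construction is easy to reproduce on a toy graph by full diagonalization (the paper's Chebyshev scheme exists precisely to avoid this cost on large graphs); the kernel g below is one illustrative choice satisfying g(0) = 0:

```python
import numpy as np

# Path graph on 5 vertices: W is the adjacency matrix, L = D - W the Laplacian
n = 5
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(axis=1)) - W

g = lambda x: x * np.exp(-x)             # band-pass kernel with g(0) = 0
t = 1.0                                  # scale parameter
lam, U = np.linalg.eigh(L)               # spectral decomposition of L
T_g = U @ np.diag(g(t * lam)) @ U.T      # wavelet operator g(tL)
psi = T_g @ np.eye(n)[:, 2]              # wavelet localized at vertex 2
```

Because g(0) = 0 annihilates the constant eigenvector of L, each resulting wavelet has zero mean over the graph, just as classical wavelets do over the line.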
The SURE-LET Approach to Image Denoising
, 2007
Abstract

Cited by 67 (18 self)
We propose a new approach to image denoising, based on the image-domain minimization of an estimate of the mean squared error: Stein’s unbiased risk estimate (SURE). Unlike most existing denoising algorithms, using the SURE makes it needless to hypothesize a statistical model for the noiseless image. A key point of our approach is that, although the (nonlinear) processing is performed in a transformed domain (typically, an undecimated discrete wavelet transform, but we also address non-orthonormal transforms), this minimization is performed in the image domain. Indeed, we demonstrate that, when the transform is a “tight” frame (an undecimated wavelet transform using orthonormal filters), separate subband minimization yields substantially worse results. In order for our approach to be viable, we add another principle, that the denoising process can be expressed as a linear combination of elementary denoising processes: linear expansion of thresholds (LET). Armed with the SURE and LET principles, we show that a denoising algorithm merely amounts to solving a linear system of equations, which is obviously fast and efficient. Quite remarkably, the very competitive results obtained by performing a simple threshold (image-domain SURE optimized) on the undecimated Haar wavelet coefficients show that the SURE-LET principle has a huge potential.
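For the simplest case (soft thresholding of coefficients y_i = w_i + Gaussian noise of std sigma), SURE has a closed form, so the threshold can be optimized without access to the clean signal. A SureShrink-style sketch of that principle, not the paper's full LET machinery:

```python
import numpy as np

def sure_soft(y, t, sigma):
    """Stein's unbiased risk estimate of E||soft(y, t) - w||^2 for soft thresholding."""
    n = y.size
    return (n * sigma**2
            - 2.0 * sigma**2 * np.sum(np.abs(y) <= t)
            + np.sum(np.minimum(np.abs(y), t)**2))

def sure_threshold(y, sigma):
    """Pick the candidate threshold (an observed magnitude) minimizing SURE."""
    cands = np.abs(y)
    risks = np.array([sure_soft(y, t, sigma) for t in cands])
    return cands[np.argmin(risks)]
```

The LET step generalizes this idea: since SURE is quadratic in the weights of a linear combination of elementary denoisers, minimizing it reduces to the linear system mentioned in the abstract.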
Denoising by Sparse Approximation: Error Bounds Based on Rate-Distortion Theory
, 2006
Abstract

Cited by 44 (7 self)
If a signal x is known to have a sparse representation with respect to a frame, it can be estimated from a noise-corrupted observation y by finding the best sparse approximation to y. Removing noise in this manner depends on the frame efficiently representing the signal while it inefficiently represents the noise. The mean-squared error (MSE) of this denoising scheme and the probability that the estimate has the same sparsity pattern as the original signal are analyzed. First an MSE bound that depends on a new bound on approximating a Gaussian signal as a linear combination of elements of an overcomplete dictionary is given. Further analyses are for dictionaries generated randomly according to a spherically symmetric distribution and signals expressible with single dictionary elements. Easily computed approximations for the probability of selecting the correct dictionary element and the MSE are given. Asymptotic expressions reveal a critical input signal-to-noise ratio for signal recovery.