Results 1–10 of 306
Compressed sensing, 2004
Abstract

Cited by 3625 (22 self)
We study the notion of Compressed Sensing (CS) as put forward in [14] and related work [20, 3, 4]. The basic idea behind CS is that a signal or image, unknown but supposed to be compressible by a known transform (e.g., wavelet or Fourier), can be subjected to fewer measurements than the nominal number of pixels, and yet be accurately reconstructed. The samples are nonadaptive and measure ‘random’ linear combinations of the transform coefficients. Approximate reconstruction is obtained by solving for the transform coefficients consistent with the measured data and having the smallest possible ℓ1 norm. We perform a series of numerical experiments which validate in general terms the basic idea proposed in [14, 3, 5], in the favorable case where the transform coefficients are sparse in the strong sense that the vast majority are zero. We then consider a range of less-favorable cases, in which the object has all coefficients nonzero, but the coefficients obey an ℓp bound, for some p ∈ (0, 1]. These experiments show that the basic inequalities behind the CS method seem to involve reasonable constants. We next consider synthetic examples modelling problems in spectroscopy and image pro ...
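The reconstruction step described in this abstract — finding the transform coefficients consistent with the measurements that have the smallest ℓ1 norm — can be sketched with a standard iterative soft-thresholding (ISTA) solver. This is an illustrative toy, not the paper's experimental setup; the matrix sizes, sparsity level, and regularization weight below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 40, 128, 4                           # measurements, coefficients, nonzeros (arbitrary)
A = rng.standard_normal((n, m)) / np.sqrt(n)   # 'random' linear measurement matrix
x_true = np.zeros(m)
x_true[rng.choice(m, size=k, replace=False)] = [3.0, -2.0, 1.5, 2.5]
y = A @ x_true                                 # noiseless measurements, n << m

def ista(A, y, lam=0.01, iters=2000):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by iterative soft thresholding."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - y) / L          # gradient step on the data-fit term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

x_hat = ista(A, y)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

With only 40 random measurements of a 128-coefficient vector, the ℓ1 solution recovers the 4-sparse signal up to the small bias introduced by the penalty.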
From Sparse Solutions of Systems of Equations to Sparse Modeling of Signals and Images, 2007
Abstract

Cited by 427 (36 self)
A full-rank matrix A ∈ R^{n×m} with n < m generates an underdetermined system of linear equations Ax = b having infinitely many solutions. Suppose we seek the sparsest solution, i.e., the one with the fewest nonzero entries: can it ever be unique? If so, when? As optimization of sparsity is combinatorial in nature, are there efficient methods for finding the sparsest solution? These questions have been answered positively and constructively in recent years, exposing a wide variety of surprising phenomena; in particular, the existence of easily verifiable conditions under which optimally sparse solutions can be found by concrete, effective computational methods. Such theoretical results inspire a bold perspective on some important practical problems in signal and image processing. Several well-known signal and image processing problems can be cast as demanding solutions of underdetermined systems of equations. Such problems have previously seemed, to many, intractable. There is considerable evidence that these problems often have sparse solutions. Hence, advances in finding sparse solutions to underdetermined systems energize research on such signal and image processing problems – to striking effect. In this paper we review the theoretical results on sparse solutions of linear systems, empirical ...
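One of the "easily verifiable conditions" this abstract alludes to is based on the mutual coherence of A: a solution with fewer than (1 + 1/μ(A))/2 nonzeros is the unique sparsest solution. A minimal sketch of the coherence computation (the small matrix is an arbitrary example, not from the paper):

```python
import numpy as np

def mutual_coherence(A):
    """mu(A): largest absolute inner product between distinct unit-normalized columns."""
    G = A / np.linalg.norm(A, axis=0)   # normalize each column
    C = np.abs(G.T @ G)
    np.fill_diagonal(C, 0.0)            # ignore self-correlations
    return C.max()

# toy example: two orthogonal columns plus one correlated with both
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
mu = mutual_coherence(A)                # = 1/sqrt(2)
bound = 0.5 * (1.0 + 1.0 / mu)          # sparsity strictly below this is provably unique
```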
Wavelet Threshold Estimators for Data With Correlated Noise, 1994
Abstract

Cited by 240 (15 self)
Wavelet threshold estimators for data with stationary correlated noise are constructed by the following prescription. First, form the discrete wavelet transform of the data points. Next, apply a level-dependent soft threshold to the individual coefficients, allowing the thresholds to depend on the level in the wavelet transform. Finally, transform back to obtain the estimate in the original domain. The threshold used at level j is s_j √(2 log n), where s_j is the standard deviation of the coefficients at that level, and n is the overall sample size. The minimax properties of the estimators are investigated by considering a general problem in multivariate normal decision theory, concerned with the estimation of the mean vector of a general multivariate normal distribution subject to squared error loss. An ideal risk is obtained by the use of an 'oracle' that provides the optimum diagonal projection estimate. This 'benchmark' risk can be considered in its own right as a measure of the s...
Modeling the Joint Statistics of Images in the Wavelet Domain. In Proc. SPIE, 44th Annual Meeting, 1999
Abstract

Cited by 118 (2 self)
I describe a statistical model for natural photographic images, when decomposed in a multiscale wavelet basis. In particular, I examine both the marginal and pairwise joint histograms of wavelet coefficients at adjacent spatial locations, orientations, and spatial scales. Although the histograms are highly non-Gaussian, they are nevertheless well described using fairly simple parameterized density models.
Wavelet estimators in nonparametric regression: a comparative simulation study. Journal of Statistical Software, 2001
Abstract

Cited by 114 (19 self)
OATAO is an open access repository that collects the work of Toulouse researchers and makes it freely available over the web where possible.
Wavelet Shrinkage Denoising Using the Non-Negative Garrote, 1997
Abstract

Cited by 84 (1 self)
In this paper, we combine Donoho and Johnstone's wavelet shrinkage denoising technique (known as WaveShrink) with Breiman's non-negative garrote. We show that the non-negative garrote shrinkage estimate enjoys the same asymptotic convergence rate as the hard and the soft shrinkage estimates. Simulations are used to demonstrate that garrote shrinkage offers advantages over both hard shrinkage (generally smaller mean-square error and less sensitivity to small perturbations in the data) and soft shrinkage (generally smaller bias and overall mean-square error). The minimax thresholds for the non-negative garrote are derived and the threshold selection procedure based on Stein's Unbiased Risk Estimate (SURE) is studied. We also propose a threshold selection procedure based on combining Coifman and Donoho's cycle-spinning and SURE. The procedure is called SPINSURE. We use examples to show that SPINSURE is more stable than SURE: smaller standard deviation and smaller range. ...
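The three shrinkage rules compared in this abstract have simple closed forms: for a coefficient x and threshold t, the non-negative garrote keeps x − t²/x when |x| > t and zeroes it otherwise, so it is continuous like soft thresholding yet nearly unbiased for large |x| like hard thresholding. A minimal sketch (variable names are mine, not the paper's):

```python
import numpy as np

def hard(x, t):
    """Hard thresholding: keep-or-kill."""
    return np.where(np.abs(x) > t, x, 0.0)

def soft(x, t):
    """Soft thresholding: kill, then shrink every survivor by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def garrote(x, t):
    """Non-negative garrote: kill, then shrink survivors by t**2 / x."""
    with np.errstate(divide="ignore", invalid="ignore"):
        g = x - t ** 2 / x
    return np.where(np.abs(x) > t, g, 0.0)

x = np.array([0.5, 1.5, 5.0])
# hard(x, 1.0)    keeps 1.5 and 5.0 unchanged (unbiased, but discontinuous at t)
# soft(x, 1.0)    shrinks them to 0.5 and 4.0 (continuous, but biased by t everywhere)
# garrote(x, 1.0) gives 1.5 - 1/1.5 and 4.8 (continuous, bias vanishing as |x| grows)
```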
Proximal Methods for Hierarchical Sparse Coding, 2010
Abstract

Cited by 83 (18 self)
Sparse coding consists of representing signals as sparse linear combinations of atoms selected from a dictionary. We consider an extension of this framework in which the atoms are further assumed to be embedded in a tree. This is achieved using a recently introduced tree-structured sparse regularization norm, which has proven useful in several applications. This norm leads to regularized problems that are difficult to optimize, and in this paper we propose efficient algorithms for solving them. More precisely, we show that the proximal operator associated with this norm is computable exactly via a dual approach that can be viewed as the composition of elementary proximal operators. Our procedure has a complexity linear, or close to linear, in the number of atoms, and allows the use of accelerated gradient techniques to solve the tree-structured sparse approximation problem at the same computational cost as traditional ones using the ℓ1 norm. Our method is efficient and scales gracefully to millions of variables, which we illustrate in two types of applications: first, we consider fixed hierarchical dictionaries of wavelets to denoise natural images. Then, we apply our optimization tools in the context of dictionary learning, where learned dictionary elements naturally organize in a prespecified arborescent structure, leading to better performance in the reconstruction of natural image patches. When applied to text documents, our method learns hierarchies of topics, thus providing a competitive alternative to probabilistic topic models.
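The key computational fact — the proximal operator of the tree-structured norm is a composition of elementary proximal operators — can be sketched for a norm that sums ℓ2 norms over nested groups: applying the elementary group soft-thresholdings in leaves-to-root order yields the exact prox. The tiny tree below (one child group nested in a root group) is an arbitrary illustration, not the paper's hierarchical dictionary.

```python
import numpy as np

def prox_group(v, idx, lam):
    """Elementary prox of lam * ||v[idx]||_2: block soft thresholding."""
    v = v.copy()
    nrm = np.linalg.norm(v[idx])
    v[idx] *= max(0.0, 1.0 - lam / nrm) if nrm > 0 else 0.0
    return v

def prox_tree(v, groups, lam):
    """Prox of lam * sum of group norms for tree-structured groups.
    `groups` must be ordered so that every group appears after all of its
    descendants (leaves to root); one pass of elementary proxes is then exact."""
    for idx in groups:
        v = prox_group(v, idx, lam)
    return v

v = np.array([3.0, 4.0, 0.0])
groups = [np.array([0, 1]), np.array([0, 1, 2])]   # child group, then root group
out = prox_tree(v, groups, lam=1.0)
# child: ||(3,4)|| = 5, scale 0.8 -> (2.4, 3.2, 0); root: norm 4, scale 0.75 -> (1.8, 2.4, 0)
```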
Wavelet Processes and Adaptive Estimation of the Evolutionary Wavelet Spectrum, 1998
Abstract

Cited by 76 (30 self)
This article defines and studies a new class of nonstationary random processes constructed from discrete non-decimated wavelets which generalizes the Cramér (Fourier) representation of stationary time series. We define an evolutionary wavelet spectrum (EWS) which quantifies how process power varies locally over time and scale. We show how the EWS may be rigorously estimated by a smoothed wavelet periodogram and how both these quantities may be inverted to provide an estimable time-localized autocovariance. We illustrate our theory with a pedagogical example based on discrete non-decimated Haar wavelets and also a real medical time series example.
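The raw ingredient of the EWS estimator — non-decimated Haar wavelet coefficients, whose squares form the wavelet periodogram — can be sketched with periodic boundary handling. This is an illustrative reading of the construction, not the authors' code; the smoothing and bias correction of the periodogram are omitted.

```python
import numpy as np

def nondecimated_haar(x, levels):
    """Non-decimated Haar detail coefficients d_j(t) at scales j = 1..levels,
    computed at every time point t (periodic boundaries)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    out = []
    for j in range(1, levels + 1):
        h = 2 ** (j - 1)
        # Haar filter at scale j: h ones then h minus-ones, unit-normalized
        filt = np.concatenate([np.ones(h), -np.ones(h)]) / np.sqrt(2 * h)
        d = np.array([filt @ np.take(x, np.arange(t, t + 2 * h), mode="wrap")
                      for t in range(n)])
        out.append(d)
    return out

# raw wavelet periodogram: I_j(t) = d_j(t)**2, one power trace per scale
details = nondecimated_haar(np.ones(16), levels=3)
```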
Novel Bayesian Multiscale Method for Speckle Removal in Medical Ultrasound Images. IEEE Trans. Med. Imag., 2001
Abstract

Cited by 75 (11 self)
A novel speckle suppression method for medical ultrasound images is presented. First, the logarithmic transform of the original image is decomposed into the multiscale wavelet domain. We show that the subband decompositions of ultrasound images have significantly non-Gaussian statistics that are best described by families of heavy-tailed distributions such as the alpha-stable. Then, we design a Bayesian estimator that exploits these statistics. We use the alpha-stable model to develop a blind noise-removal processor that performs a nonlinear operation on the data. Finally, we compare our technique with current state-of-the-art soft and hard thresholding methods applied to actual ultrasound medical images and we quantify the achieved performance improvement.