Results 1–10 of 41
Coherence-Based Performance Guarantees for Estimating a Sparse Vector Under Random Noise
Abstract

Cited by 43 (15 self)
We consider the problem of estimating a deterministic sparse vector x0 from underdetermined measurements Ax0 + w, where w represents white Gaussian noise and A is a given deterministic dictionary. We analyze the performance of three sparse estimation algorithms: basis pursuit denoising (BPDN), orthogonal matching pursuit (OMP), and thresholding. These algorithms are shown to achieve near-oracle performance with high probability, assuming that x0 is sufficiently sparse. Our results are non-asymptotic and are based only on the coherence of A, so that they are applicable to arbitrary dictionaries. Differences in the precise conditions required for the performance guarantees of each algorithm are manifested in the observed performance at high and low signal-to-noise ratios. This provides insight on the advantages and drawbacks of ℓ1-relaxation techniques such as BPDN as opposed to greedy approaches such as OMP and thresholding.
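As a rough illustration of the setup in this abstract (measurements y = Ax0 + w with a sparse x0), here is a minimal sketch of orthogonal matching pursuit, one of the three algorithms analyzed. The toy dimensions, the random dictionary, and the function name are hypothetical, not taken from the paper:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily add the dictionary column most
    correlated with the residual, then re-fit on the support by least squares."""
    support = []
    x = np.zeros(A.shape[1])
    residual = y.astype(float).copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching column
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef          # orthogonalized residual
    x[support] = coef
    return x

# Toy noiseless check: a 2-sparse vector under a random unit-norm dictionary.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
A /= np.linalg.norm(A, axis=0)                       # normalize the columns
x0 = np.zeros(50)
x0[[3, 17]] = [1.5, -2.0]
x_hat = omp(A, A @ x0, 2)
```

The greedy selection step is a simple correlation argmax; BPDN, by contrast, would solve an ℓ1-regularized least squares problem over all coefficients at once.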
RIP-Based Near-Oracle Performance Guarantees for SP, CoSaMP, and IHT
Abstract

Cited by 20 (4 self)
Abstract—This correspondence presents an average-case denoising performance analysis for the SP, CoSaMP, and IHT algorithms. This analysis considers the recovery of a noisy signal, with the assumptions that it is corrupted by an additive random zero-mean white Gaussian noise and has a sparse representation with respect to a known dictionary D. The proposed analysis is based on the RIP, establishing a near-oracle performance guarantee for each of these algorithms. Beyond bounds for the reconstruction error that hold with high probability, in this work we also provide a bound for the average error. Index Terms—Additive white noise, compressed sensing, Gaussian noise, signal denoising, signal reconstruction, signal representations.
Thresholding-based iterative selection procedures for model selection and shrinkage
Electronic Journal of Statistics, 2009
Abstract

Cited by 16 (5 self)
Abstract: This paper discusses a class of thresholding-based iterative selection procedures (TISP) for model selection and shrinkage. People have long noticed the weakness of the convex l1-constraint (or the soft-thresholding) in wavelets and have designed many different forms of nonconvex penalties to increase model sparsity and accuracy. But for a nonorthogonal regression matrix, there is great difficulty in both investigating the performance in theory and solving the problem in computation. TISP provides a simple and efficient way to tackle this, so that we can successfully borrow the rich results in the orthogonal design to solve the (nonconvexly) penalized regression for a general design matrix. Our starting point is, however, thresholding rules rather than penalty functions. Indeed, there is a universal connection between them. But a drawback of the latter is its nonunique form, and our approach greatly facilitates the computation and the analysis. In fact, we are able to build the convergence theorem and explore theoretical properties of the selection and estimation via TISP non-asymptotically. More importantly, a novel Hybrid-TISP is proposed based on hard-thresholding and ridge-thresholding. It provides a fusion between the l0-penalty and the l2-penalty, and adaptively achieves the right balance between shrinkage and selection in statistical modeling. In practice, Hybrid-TISP shows superior performance in test error and is parsimonious.
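For illustration, the thresholding rules this abstract contrasts can be sketched as below. The soft and hard rules are the standard operators; the hybrid rule is only a sketch of the hard-ridge idea (select by a hard threshold, then ridge-shrink the survivors), and the iteration scheme is a generic gradient-step-plus-threshold loop, not claimed to be the paper's exact Hybrid-TISP:

```python
import numpy as np

def soft_threshold(z, lam):
    """Soft-thresholding (the l1 rule): shrink every entry toward zero by lam."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def hard_threshold(z, lam):
    """Hard-thresholding (the l0 rule): keep entries above lam, unshrunk."""
    return np.where(np.abs(z) > lam, z, 0.0)

def hybrid_threshold(z, lam, eta):
    """Hard-ridge sketch: select by a hard threshold, then ridge-shrink
    the surviving entries by 1/(1 + eta)."""
    return np.where(np.abs(z) > lam, z / (1.0 + eta), 0.0)

def tisp(X, y, lam, rule=soft_threshold, n_iter=200):
    """A plain TISP-style iteration: a scaled gradient step on the least
    squares loss followed by componentwise thresholding."""
    k0 = np.linalg.norm(X, 2) ** 2        # scaling that keeps the step stable
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        beta = rule(beta + X.T @ (y - X @ beta) / k0, lam / k0)
    return beta
```

On an orthogonal design the loop collapses to a single thresholding of the ordinary least squares solution, which is the "rich results in the orthogonal design" the abstract alludes to.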
MAP MODEL SELECTION IN GAUSSIAN REGRESSION
Abstract

Cited by 8 (5 self)
We consider a Bayesian approach to model selection in Gaussian linear regression, where the number of predictors might be much larger than the number of observations. From a frequentist view, the proposed procedure results in the penalized least squares estimation with a complexity penalty associated with a prior on the model size. We investigate the optimality properties of the resulting model selector. We establish the oracle inequality and specify conditions on the prior that imply its asymptotic minimaxity within a wide range of sparse and dense settings for “nearly-orthogonal” and “multicollinear” designs.
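The penalized least squares formulation described here can be illustrated with a brute-force sketch: exhaustive search over supports with a generic complexity penalty. The penalty function below is a placeholder argument, not the prior-induced penalty derived in the paper, and all names are hypothetical:

```python
import numpy as np
from itertools import combinations

def select_model(X, y, penalty):
    """Exhaustive complexity-penalized least squares: over every support M,
    minimize ||y - X_M beta_M||^2 + penalty(|M|)."""
    p = X.shape[1]
    best, best_cost = (), float(np.sum(y ** 2)) + penalty(0)  # empty model
    for k in range(1, p + 1):
        for M in combinations(range(p), k):
            cols = list(M)
            beta, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
            cost = float(np.sum((y - X[:, cols] @ beta) ** 2)) + penalty(k)
            if cost < best_cost:
                best, best_cost = M, cost
    return best

# Toy check: noiseless response generated from columns 0 and 2.
rng = np.random.default_rng(1)
X = rng.standard_normal((30, 5))
y = X[:, [0, 2]] @ np.array([2.0, -3.0])
chosen = select_model(X, y, lambda k: 0.1 * k)  # placeholder linear penalty
```

The exhaustive loop is exponential in the number of predictors, which is exactly why non-asymptotic guarantees for tractable surrogates are of interest.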
Coherence-based near-oracle performance guarantees for sparse estimation under Gaussian noise
in Proc. Int. Conf. Acoustics, Speech, and Signal Processing (ICASSP), 2010
Abstract

Cited by 3 (2 self)
We consider the problem of estimating a deterministic sparse vector x0 from underdetermined measurements Ax0 + w, where w represents white Gaussian noise and A is a given deterministic dictionary. We analyze the performance of three sparse estimation algorithms: basis pursuit denoising, orthogonal matching pursuit, and thresholding. These approaches are shown to achieve near-oracle performance with high probability, assuming that x0 is sufficiently sparse. Our results are non-asymptotic and are based only on the coherence of A, so that they are applicable to arbitrary dictionaries. Index Terms — Sparse estimation, basis pursuit, matching pursuit, thresholding algorithm, oracle
CAN WE ALLOW LINEAR DEPENDENCIES IN THE DICTIONARY IN THE SPARSE SYNTHESIS FRAMEWORK?
Abstract

Cited by 1 (0 self)
Signal recovery from a given set of linear measurements using a sparsity prior has been a major subject of research in recent years. In this model, the signal is assumed to have a sparse representation under a given dictionary. Most of the work dealing with this subject has focused on the reconstruction of the signal’s representation as the means for recovering the signal itself. This approach forced the dictionary to be of low coherence and with no linear dependencies between its columns. Recently, a series of contributions that focus on signal recovery using the analysis model find that linear dependencies in the analysis dictionary are in fact permitted and beneficial. In this paper we show theoretically that the same holds also for signal recovery in the synthesis case for the ℓ0-synthesis minimization problem. In addition, we demonstrate empirically the relevance of our conclusions for recovering the signal using an ℓ1-relaxation. Index Terms — Sparse representations, compressed sensing, analysis versus synthesis, inverse problems.
BAYESIAN MODEL SELECTION IN GAUSSIAN REGRESSION
Abstract

Cited by 1 (1 self)
We consider a Bayesian approach to model selection in Gaussian linear regression, where the number of predictors might be much larger than the number of observations. From a frequentist view, the proposed procedure results in the penalized least squares estimation with a complexity penalty associated with a prior on the model size. We investigate the optimality properties of the resulting estimator. We establish the oracle inequality and specify conditions on the prior that imply its asymptotic minimaxity within a wide range of sparse and dense settings for “nearly-orthogonal” and “multicollinear” designs.
Oracle-Order Recovery Performance of Greedy Pursuits With Replacement Against General Perturbations
IEEE Transactions on Signal Processing, 2013
Abstract

Cited by 1 (1 self)
Applying the theory of compressive sensing in practice always requires taking different kinds of perturbations into consideration. In this paper, the recovery performance of greedy pursuits with replacement for sparse recovery is analyzed when both the measurement vector and the sensing matrix are contaminated with additive perturbations. Specifically, greedy pursuits with replacement include three algorithms, compressive sampling matching pursuit (CoSaMP), subspace pursuit (SP), and iterative hard thresholding (IHT), where the support estimate is evaluated and updated in each iteration. Based on the restricted isometry property, a unified form of the error bounds of these recovery algorithms is derived under general perturbations for compressible signals. The results reveal that the recovery performance is stable against both perturbations. In addition, these bounds are compared with that of oracle recovery: the least squares solution with the locations of the largest entries in magnitude known a priori. The comparison shows that the error bounds of these algorithms differ only in their coefficients from the lower bound of oracle recovery for certain signals and perturbations, which reveals that oracle-order recovery performance of greedy pursuits with replacement is guaranteed. Numerical simulations are performed to verify the conclusions.