Results 1–10 of 30
Sparse Reconstruction by Separable Approximation
, 2008
Abstract

Cited by 373 (36 self)
Finding sparse approximate solutions to large underdetermined linear systems of equations is a common problem in signal/image processing and statistics. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution and reconstruction, and compressed sensing (CS) are a few well-known areas in which problems of this type appear. One standard approach is to minimize an objective function that includes a quadratic (ℓ2) error term added to a sparsity-inducing (usually ℓ1) regularization term. We present an algorithmic framework for the more general problem of minimizing the sum of a smooth convex function and a nonsmooth, possibly nonconvex regularizer. We propose iterative methods in which each step is obtained by solving an optimization subproblem involving a quadratic term with diagonal Hessian (which is therefore separable in the unknowns) plus the original sparsity-inducing regularizer. Our approach is suitable for cases in which this subproblem can be solved much more rapidly than the original problem. In addition to solving the standard ℓ2–ℓ1 case, our framework yields an efficient solution technique for other regularizers, such as an ℓ∞-norm regularizer and group-separable (GS) regularizers. It also generalizes immediately to the case in which the data is complex rather than real. Experiments with CS problems show that our approach is competitive with the fastest known methods for the standard ℓ2–ℓ1 problem, as well as being efficient on problems with other separable regularization terms.
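For the standard ℓ2–ℓ1 instance, the separable subproblem with a diagonal (here scalar) Hessian reduces to componentwise soft-thresholding. A minimal iterative soft-thresholding sketch in NumPy (the constant step-size rule and iteration count are our simplifications, not the paper's adaptive scheme):

```python
import numpy as np

def soft_threshold(x, t):
    """Componentwise soft-thresholding: the prox of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ist_l1(A, y, lam, alpha=None, iters=200):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1.
    alpha is the diagonal-Hessian weight (step size 1/alpha); here we default
    it to ||A||_2^2, a Lipschitz constant of the gradient."""
    if alpha is None:
        alpha = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)                     # gradient of the quadratic term
        x = soft_threshold(x - grad / alpha, lam / alpha)
    return x
```

Each iteration costs two matrix–vector products; SpaRSA itself re-chooses α per iteration (e.g. by a Barzilai–Borwein rule), which this sketch omits.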
Minimization of Nonsmooth, Nonconvex Functionals by Iterative Thresholding
, 2009
Abstract

Cited by 127 (2 self)
Linear convergence of iterative soft-thresholding
 J. Fourier Anal. Appl.
Abstract

Cited by 58 (13 self)
ABSTRACT. In this article a unified approach to iterative soft-thresholding algorithms for the solution of linear operator equations in infinite-dimensional Hilbert spaces is presented. We formulate the algorithm in the framework of generalized gradient methods and present a new convergence analysis. As the main result, we show that the algorithm converges at a linear rate as soon as the underlying operator satisfies the so-called finite basis injectivity property or the minimizer possesses a so-called strict sparsity pattern. Moreover, it is shown that the constants can be calculated explicitly in special cases (i.e. for compact operators). Furthermore, the techniques can also be used to establish linear convergence for related methods, such as the iterative thresholding algorithm for joint sparsity and the accelerated gradient projection method.
Convergence rates and source conditions for Tikhonov regularization with sparsity constraints
 Submitted for publication
, 2008
Abstract

Cited by 42 (15 self)
This paper addresses regularization by sparsity constraints by means of weighted ℓp penalties for 0 ≤ p ≤ 2. For 1 ≤ p ≤ 2, special attention is paid to convergence rates in norm and to source conditions. As the main results, it is proven that one gets a convergence rate of √δ in the 2-norm for 1 < p ≤ 2, and in the 1-norm for p = 1, as soon as the unknown solution is sparse. The case p = 1 needs a special technique in which not only Bregman distances but also a so-called Bregman–Taylor distance has to be employed. For p < 1 only preliminary results are shown. These results indicate that, differently from p ≥ 1, the regularizing properties depend on the interplay of the operator and the basis of sparsity. A counterexample for p = 0 shows that regularization need not happen. AMS Subject classification: Primary 47A52; Secondary 65J20, 65F22.
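In symbols, the weighted-ℓp Tikhonov functional the abstract refers to can be written as follows (the notation is ours, not necessarily the paper's: K is the forward operator, y^δ the noisy data with ‖y − y^δ‖ ≤ δ, and (φ_k) the basis of sparsity):

```latex
\min_{x}\; \bigl\| Kx - y^{\delta} \bigr\|^{2}
  + \alpha \sum_{k} w_{k}\,\bigl|\langle x, \varphi_{k}\rangle\bigr|^{p},
  \qquad 0 \le p \le 2,\quad w_{k} \ge w_{0} > 0.
```

The quoted result then says that the regularized minimizers converge to the true solution at rate O(√δ) in the 2-norm for 1 < p ≤ 2 (and in the 1-norm for p = 1), provided the true solution is sparse and the source conditions hold.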
Stagewise Weak Gradient Pursuits
, 2009
Abstract

Cited by 11 (0 self)
Abstract — Finding sparse solutions to underdetermined inverse problems is a fundamental challenge encountered in a wide range of signal processing applications, from signal acquisition to source separation. This paper looks at greedy algorithms that are applicable to very large problems. The main contribution is the development of a new selection strategy (called stagewise weak selection) that effectively selects several elements in each iteration. The new selection strategy is based on the realisation that many classical proofs for the recovery of sparse signals can be trivially extended to the new setting. What is more, simulation studies show the computational benefits and good performance of the approach. This strategy can be used in several greedy algorithms, and we argue for its use within the gradient pursuit framework, in which selected coefficients are updated using a conjugate update direction. For this update, we present a fast implementation and a novel convergence result.
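A weak selection rule of this kind can be sketched in a few lines: instead of taking only the single best-correlated atom per iteration, accept every atom whose correlation with the current residual reaches a fraction w of the maximum. The dictionary A, the threshold parameter w, and the function name below are illustrative assumptions, not the paper's code:

```python
import numpy as np

def stagewise_weak_select(A, residual, w=0.7):
    """Stagewise weak selection (sketch): return the indices of all columns
    of A whose absolute correlation with the residual is at least w times
    the largest such correlation, so several atoms can enter per iteration."""
    c = np.abs(A.T @ residual)          # correlations with the residual
    return np.flatnonzero(c >= w * c.max())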
Elastic-Net Regularization: Error Estimates and Active Set Methods
, 905
Abstract

Cited by 11 (6 self)
This paper investigates theoretical properties and efficient numerical algorithms for the so-called elastic-net regularization originating from statistics, which simultaneously enforces ℓ1 and ℓ2 regularization. The stability of the minimizer and its consistency are studied, and convergence rates for both a priori and a posteriori parameter choice rules are established. Two iterative numerical algorithms of active set type are proposed, and their convergence properties are discussed. Numerical results are presented to illustrate the features of the functional and the algorithms.
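The combined ℓ1 + ℓ2 penalty keeps a closed-form proximity operator: soft-thresholding followed by a uniform shrinkage. A small sketch (the penalty parameterization a‖x‖₁ + (b/2)‖x‖₂² and the names below are our assumptions, not the paper's notation):

```python
import numpy as np

def prox_elastic_net(z, a, b):
    """Proximity operator of a*||x||_1 + (b/2)*||x||_2^2:
    argmin_x 0.5*||x - z||^2 + a*||x||_1 + (b/2)*||x||_2^2.
    Setting the (separable) subgradient to zero gives, componentwise,
    soft-thresholding at level a followed by division by (1 + b)."""
    return np.sign(z) * np.maximum(np.abs(z) - a, 0.0) / (1.0 + b)
```

The extra 1/(1 + b) shrinkage is what the ℓ2 term contributes; with b = 0 this reduces to plain soft-thresholding, so the usual iterative-thresholding machinery applies with only the threshold step swapped out.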
A quasi-Newton proximal splitting method
 In Advances in Neural Information Processing Systems (NIPS)
Abstract

Cited by 10 (0 self)
A new result in convex analysis on the calculation of proximity operators in certain scaled norms is derived. We describe efficient implementations of the proximity calculation for a useful class of functions; the implementations exploit the piecewise linear nature of the dual problem. The second part of the paper applies the previous result to the acceleration of convex minimization problems, and leads to an elegant quasi-Newton method. The optimization method compares favorably against state-of-the-art alternatives. The algorithm has extensive applications, including signal processing, sparse recovery, and machine learning and classification.
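For intuition, the easiest scaled-norm case is a diagonal metric, where the ℓ1 proximity operator still separates coordinatewise (the non-diagonal, low-rank-modified metrics that make the quasi-Newton method interesting require the paper's dual construction; this sketch and its names are our own):

```python
import numpy as np

def prox_l1_diag(z, lam, d):
    """Proximity operator of lam*||x||_1 in the metric ||x||_D^2 = sum_i d_i*x_i^2
    (D diagonal, d_i > 0): argmin_x 0.5*||x - z||_D^2 + lam*||x||_1.
    The problem separates per coordinate; each coordinate is a scalar
    soft-threshold with its own threshold lam/d_i."""
    t = lam / np.asarray(d, dtype=float)
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
```

Each coordinate gets its own threshold λ/d_i, so a coordinate with a larger curvature estimate d_i is shrunk less.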
Sparse regularization with ℓq penalty term
 Inverse Probl.
, 2008
Abstract

Cited by 9 (1 self)
We consider the stable approximation of sparse solutions to nonlinear operator equations by means of Tikhonov regularization with a subquadratic penalty term. Imposing certain assumptions, which for a linear operator are equivalent to the standard range condition, we derive the usual convergence rate O(√δ) of the regularized solutions in dependence on the noise level δ. Particular emphasis lies on the case where the true solution is known to have a sparse representation in a given basis. In this case, if the differential of the operator satisfies a certain injectivity condition, we can show that the actual convergence rate improves up to O(δ). MSC: 65J20, 65J22, 49N45.