Results 1 - 10 of 54
NESTA: A Fast and Accurate First-Order Method for Sparse Recovery, 2009
"... Accurate signal recovery or image reconstruction from indirect and possibly undersampled data is a topic of considerable interest; for example, the literature in the recent field of compressed sensing is already quite immense. Inspired by recent breakthroughs in the development of novel first-order ..."
Abstract
-
Cited by 171 (2 self)
- Add to MetaCart
(Show Context)
Accurate signal recovery or image reconstruction from indirect and possibly undersampled data is a topic of considerable interest; for example, the literature in the recent field of compressed sensing is already quite immense. Inspired by recent breakthroughs in the development of novel first-order methods in convex optimization, most notably Nesterov’s smoothing technique, this paper introduces a fast and accurate algorithm for solving common recovery problems in signal processing. In the spirit of Nesterov’s work, one of the key ideas of this algorithm is a subtle averaging of sequences of iterates, which has been shown to improve the convergence properties of standard gradient-descent algorithms. This paper demonstrates that this approach is ideally suited for solving large-scale compressed sensing reconstruction problems as 1) it is computationally efficient, 2) it is accurate and returns solutions with several correct digits, 3) it is flexible and amenable to many kinds of reconstruction problems, and 4) it is robust in the sense that its excellent performance across a wide range of problems does not depend on the fine tuning of several parameters. Comprehensive numerical experiments on realistic signals exhibiting a large dynamic range show that this algorithm compares favorably with recently proposed state-of-the-art methods. We also apply the algorithm to solve other problems for which there are fewer alternatives, such as total-variation minimization ...
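The core mechanism is easy to sketch. Below is a minimal, hedged illustration in Python/NumPy of Nesterov-style acceleration applied to a smoothed ℓ1 penalty plus a quadratic data-fidelity term; it is not the full NESTA algorithm (which solves the constrained formulation ‖b − Ax‖2 ≤ ε and maintains two auxiliary prox sequences), and the names and parameter values lam, mu, and iters are illustrative assumptions.

import numpy as np

def smoothed_l1_grad(x, mu):
    # Gradient of a Huber-type smoothing of |x_i| with smoothing parameter mu.
    return np.where(np.abs(x) < mu, x / mu, np.sign(x))

def nesterov_smoothed_l1(A, b, lam=0.1, mu=1e-3, iters=500):
    # Accelerated gradient on F(x) = lam * sum_i huber_mu(x_i) + 0.5 * ||Ax - b||^2.
    n = A.shape[1]
    L = lam / mu + np.linalg.norm(A, 2) ** 2      # Lipschitz constant of grad F
    x = np.zeros(n); y = x.copy(); t = 1.0
    for _ in range(iters):
        grad = lam * smoothed_l1_grad(y, mu) + A.T @ (A @ y - b)
        x_new = y - grad / L
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # averaging of iterates (momentum)
        x, t = x_new, t_new
    return x

The averaging step that blends the new point with the previous iterate is the ingredient the abstract refers to as improving on plain gradient descent.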
Computational methods for sparse solution of linear inverse problems, 2009
"... The goal of sparse approximation problems is to represent a target signal approximately as a linear combination of a few elementary signals drawn from a fixed collection. This paper surveys the major practical algorithms for sparse approximation. Specific attention is paid to computational issues, ..."
Abstract
-
Cited by 167 (0 self)
- Add to MetaCart
The goal of sparse approximation problems is to represent a target signal approximately as a linear combination of a few elementary signals drawn from a fixed collection. This paper surveys the major practical algorithms for sparse approximation. Specific attention is paid to computational issues, to the circumstances in which individual methods tend to perform well, and to the theoretical guarantees available. Many fundamental questions in electrical engineering, statistics, and applied mathematics can be posed as sparse approximation problems, making these algorithms versatile and relevant to a wealth of applications.
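As a concrete instance of the greedy pursuit family covered by such surveys, here is a hedged Python/NumPy sketch of orthogonal matching pursuit (OMP); the fixed sparsity level k used as the stopping rule is an illustrative assumption (residual- or energy-based rules are also common).

import numpy as np

def omp(A, b, k):
    # Greedy sparse approximation: pick the atom most correlated with the
    # residual, then re-fit by least squares on the selected support.
    m, n = A.shape
    residual = b.astype(float).copy()
    support = []
    x = np.zeros(n)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = b - A @ x
    return x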
Templates for Convex Cone Problems with Applications to Sparse Signal Recovery, 2010
"... This paper develops a general framework for solving a variety of convex cone problems that frequently arise in signal processing, machine learning, statistics, and other fields. The approach works as follows: first, determine a conic formulation of the problem; second, determine its dual; third, app ..."
Abstract
-
Cited by 122 (6 self)
- Add to MetaCart
This paper develops a general framework for solving a variety of convex cone problems that frequently arise in signal processing, machine learning, statistics, and other fields. The approach works as follows: first, determine a conic formulation of the problem; second, determine its dual; third, apply smoothing; and fourth, solve using an optimal first-order method. A merit of this approach is its flexibility: for example, all compressed sensing problems can be solved via this approach. These include models with objective functionals such as the total-variation norm, ‖Wx‖1 where W is arbitrary, or a combination thereof. In addition, the paper also introduces a number of technical contributions such as a novel continuation scheme, a novel approach for controlling the step size, and some new results showing that the smoothed and unsmoothed problems are sometimes formally equivalent. Combined with our framework, these lead to novel, stable and computationally efficient algorithms. For instance, our general implementation is competitive with state-of-the-art methods for solving intensively studied problems such as the LASSO. Further, numerical experiments show that one can solve the Dantzig selector problem, for which no efficient large-scale solvers exist, in a few hundred iterations. Finally, the paper is accompanied by a software release. This software is not a single, monolithic solver; rather, it is a suite of programs and routines designed to serve as building blocks for constructing complete algorithms. Keywords: optimal first-order methods, Nesterov’s accelerated descent algorithms, proximal algorithms, conic duality, smoothing by conjugation, the Dantzig selector, the LASSO, nuclear-norm minimization.
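A hedged sketch of the recipe for one template, basis pursuit min ‖x‖1 s.t. Ax = b: add a strongly convex term (μ/2)‖x‖² so that the Lagrangian dual becomes smooth, run accelerated gradient ascent on the dual, and recover the primal iterate by soft-thresholding. This Python/NumPy fragment is a simplified instance, not the released software, and the values of mu and iters are assumptions.

import numpy as np

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def smoothed_dual_bp(A, b, mu=1e-2, iters=2000):
    # Solve min ||x||_1 + (mu/2)||x||^2  s.t. Ax = b by accelerated gradient
    # ascent on the smooth Lagrangian dual of the mu-smoothed problem.
    m = A.shape[0]
    L = np.linalg.norm(A, 2) ** 2 / mu            # Lipschitz constant of the dual gradient
    lam = np.zeros(m); y = lam.copy(); t = 1.0
    for _ in range(iters):
        x = soft(A.T @ y / mu, 1.0 / mu)          # primal point tied to the dual iterate
        lam_new = y + (b - A @ x) / L             # ascent step along the dual gradient
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = lam_new + ((t - 1.0) / t_new) * (lam_new - lam)
        lam, t = lam_new, t_new
    return soft(A.T @ lam / mu, 1.0 / mu)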
Analysis and generalizations of the linearized Bregman method, SIAM J. Imaging Sci., 2010
"... This paper analyzes and improves the linearized Bregman method for solving the basis pursuit and related sparse optimization problems. The analysis shows that the linearized Bregman method has the exact regularization property; namely, it converges to an exact solution of the basis pursuit problem ..."
Abstract
-
Cited by 36 (9 self)
- Add to MetaCart
This paper analyzes and improves the linearized Bregman method for solving the basis pursuit and related sparse optimization problems. The analysis shows that the linearized Bregman method has the exact regularization property; namely, it converges to an exact solution of the basis pursuit problem whenever its smoothing parameter α is greater than a certain value. The analysis is based on showing that the linearized Bregman algorithm is equivalent to gradient descent applied to a certain dual formulation. This result motivates generalizations of the algorithm enabling the use of gradient-based optimization techniques such as line search, Barzilai–Borwein, limited memory BFGS (L-BFGS), nonlinear conjugate gradient, and Nesterov’s methods. In the numerical simulations, the two proposed implementations, one using Barzilai–Borwein steps with nonmonotone line search and the other using L-BFGS, gave more accurate solutions in much shorter times than the basic implementation of the linearized Bregman method with a so-called kicking technique.
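A hedged Python/NumPy sketch of the basic iteration, written directly in the dual-gradient form the abstract describes: the shrinkage output is the primal iterate, and the running variable v is a scaled dual variable updated by a gradient step. The value of mu, the step-size rule, and the iteration count are assumptions; kicking and the proposed Barzilai-Borwein / L-BFGS variants are not shown.

import numpy as np

def shrink(v, mu):
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

def linearized_bregman(A, b, mu=5.0, iters=5000):
    # Solves min mu*||x||_1 + 0.5*||x||^2  s.t. Ax = b; by exact regularization,
    # this coincides with the basis pursuit solution once mu is large enough.
    n = A.shape[1]
    delta = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step for the dual gradient ascent
    v = np.zeros(n)                           # v plays the role of A.T @ (dual variable)
    for _ in range(iters):
        x = shrink(v, mu)                     # primal iterate from the dual one
        v = v + delta * (A.T @ (b - A @ x))   # gradient step on the dual objective
    return shrink(v, mu)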
Sparse signal reconstruction via iterative support detection, SIAM Journal on Imaging Sciences, 2010
"... Abstract. We present a novel sparse signal reconstruction method, iterative support detection (ISD), aiming to achieve fast reconstruction and a reduced requirement on the number of measurements compared to the classical ℓ1 minimization approach. ISD addresses failed reconstructions of ℓ1 minimizati ..."
Abstract
-
Cited by 36 (5 self)
- Add to MetaCart
(Show Context)
We present a novel sparse signal reconstruction method, iterative support detection (ISD), aiming to achieve fast reconstruction and a reduced requirement on the number of measurements compared to the classical ℓ1 minimization approach. ISD addresses failed reconstructions of ℓ1 minimization due to insufficient measurements. It estimates a support set I from a current reconstruction and obtains a new reconstruction by solving the minimization problem min { Σ_{i∉I} |x_i| : Ax = b }, and it iterates these two steps for a small number of times. ISD differs from the orthogonal matching pursuit method, as well as its variants, because (i) the index set I in ISD is not necessarily nested or increasing, and (ii) the minimization problem above updates all the components of x at the same time. We generalize the null space property to the truncated null space property and present our analysis of ISD based on the latter. We introduce an efficient implementation of ISD, called threshold-ISD, for recovering signals with fast decaying distributions of nonzeros from compressive sensing measurements. Numerical experiments show that threshold-ISD has significant advantages over the classical ℓ1 minimization approach, as well as two state-of-the-art algorithms: the iterative reweighted ℓ1 minimization algorithm (IRL1) and the iterative reweighted least-squares algorithm (IRLS). MATLAB code is available for download.
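A hedged Python sketch of threshold-ISD: each truncated ℓ1 subproblem is solved as a linear program (via scipy.optimize.linprog with the split x = u − v), and the support is re-detected by thresholding the current reconstruction. The simple geometric threshold schedule used here is an illustrative assumption; the paper uses a jump-detection rule.

import numpy as np
from scipy.optimize import linprog

def weighted_l1_min(A, b, w):
    # min sum_i w_i |x_i|  s.t. Ax = b, as an LP in (u, v) >= 0 with x = u - v.
    m, n = A.shape
    res = linprog(np.concatenate([w, w]), A_eq=np.hstack([A, -A]), b_eq=b,
                  bounds=[(0, None)] * (2 * n), method="highs")
    return res.x[:n] - res.x[n:]

def threshold_isd(A, b, outer=5, beta=0.5):
    n = A.shape[1]
    x = weighted_l1_min(A, b, np.ones(n))   # first pass: plain l1 minimization
    thresh = beta * np.max(np.abs(x))
    for _ in range(outer - 1):
        support = np.abs(x) > thresh        # detect support from the current reconstruction
        w = np.where(support, 0.0, 1.0)     # exclude detected entries from the objective
        x = weighted_l1_min(A, b, w)
        thresh *= beta                      # simplified decreasing threshold (assumption)
    return x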
Distributed basis pursuit, IEEE Trans. Sig. Proc., 2012
"... Abstract—We propose a distributed algorithm for solving the optimization problem Basis Pursuit (BP). BP finds the least-norm solution of the underdetermined linear system and is used, for example, in compressed sensing for reconstruction. Our algorithm solves BP on a distributed platform such as a s ..."
Abstract
-
Cited by 28 (6 self)
- Add to MetaCart
(Show Context)
We propose a distributed algorithm for solving the optimization problem Basis Pursuit (BP). BP finds the least ℓ1-norm solution of an underdetermined linear system Ax = b and is used, for example, in compressed sensing for reconstruction. Our algorithm solves BP on a distributed platform such as a sensor network, and is designed to minimize the communication between nodes. The algorithm only requires the network to be connected, has no notion of a central processing node, and no node has access to the entire matrix A at any time. We consider two scenarios in which either the columns or the rows of A are distributed among the compute nodes. Our algorithm, named D-ADMM, is a decentralized implementation of the alternating direction method of multipliers. We show through numerical simulation that our algorithm requires considerably less communication between the nodes than state-of-the-art algorithms. Index Terms: augmented Lagrangian, basis pursuit (BP), distributed optimization, sensor networks.
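For reference, a hedged Python/NumPy sketch of plain centralized ADMM for basis pursuit is given below; D-ADMM itself is a decentralized variant in which the rows or columns of A are split across networked nodes and only local variables are exchanged, which this sketch does not attempt to reproduce. The penalty rho and iteration count are assumptions.

import numpy as np

def admm_basis_pursuit(A, b, rho=1.0, iters=500):
    # Centralized ADMM for min ||x||_1  s.t. Ax = b.
    m, n = A.shape
    pinv = A.T @ np.linalg.inv(A @ A.T)      # assumes A has full row rank
    P = np.eye(n) - pinv @ A                 # projector onto the nullspace of A
    q = pinv @ b
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    for _ in range(iters):
        x = P @ (z - u) + q                  # projection onto {x : Ax = b}
        z = soft(x + u, 1.0 / rho)           # l1 proximal (soft-thresholding) step
        u = u + x - z                        # dual update
    return z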
A proximal-gradient homotopy method for the sparse least-squares problem, SIAM Journal on Optimization, 2013
"... Abstract We consider solving the 1 -regularized least-squares ( 1 -LS) problem in the context of sparse recovery, for applications such as compressed sensing. The standard proximal gradient method, also known as iterative soft-thresholding when applied to this problem, has low computational cost pe ..."
Abstract
-
Cited by 12 (2 self)
- Add to MetaCart
(Show Context)
We consider solving the ℓ1-regularized least-squares (ℓ1-LS) problem in the context of sparse recovery, for applications such as compressed sensing. The standard proximal gradient method, also known as iterative soft-thresholding when applied to this problem, has low computational cost per iteration but a rather slow convergence rate. Nevertheless, when the solution is sparse, it often exhibits fast linear convergence in the final stage. We exploit the local linear convergence using a homotopy continuation strategy, i.e., we solve the ℓ1-LS problem for a sequence of decreasing values of the regularization parameter, and use an approximate solution at the end of each stage to warm start the next stage. Although similar strategies have been studied in the literature, there has been no theoretical analysis of their global iteration complexity. This paper shows that under suitable assumptions for sparse recovery, the proposed homotopy strategy ensures that all iterates along the homotopy solution path are sparse. Therefore the objective function is effectively strongly convex along the solution path, and geometric convergence at each stage can be established. As a result, the overall iteration complexity of our method is O(log(1/ε)) for finding an ε-optimal solution, which can be interpreted as a global geometric rate of convergence. We also present empirical results to support our theoretical analysis.
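A hedged Python/NumPy sketch of the strategy: proximal gradient (iterative soft-thresholding) applied to 0.5‖Ax − b‖² + λ‖x‖1, sweeping λ geometrically from ‖Aᵀb‖∞ down to the target value and warm-starting each stage from the previous one. The decrease factor eta and the fixed inner iteration budget are illustrative assumptions; the paper ties these choices to its sparse-recovery conditions.

import numpy as np

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def homotopy_prox_grad(A, b, lam_target, eta=0.7, inner=100):
    # Warm-started ISTA over a decreasing sequence of regularization parameters.
    n = A.shape[1]
    step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1 / Lipschitz constant of the smooth part
    x = np.zeros(n)
    lam = np.max(np.abs(A.T @ b))             # at this lambda, x = 0 is already optimal
    while lam > lam_target:
        lam = max(eta * lam, lam_target)      # next stage of the continuation
        for _ in range(inner):                # x carries over: the warm start
            x = soft(x - step * (A.T @ (A @ x - b)), step * lam)
    return x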
Optimal computational and statistical rates of convergence for sparse nonconvex learning problems, arXiv preprint, 2013
"... We provide theoretical analysis of the statistical and computational properties of penalized M-estimators that can be formulated as the solution to a possibly nonconvex optimization prob-lem. Many important estimators fall in this category, including least squares regression with nonconvex regulariz ..."
Abstract
-
Cited by 11 (5 self)
- Add to MetaCart
(Show Context)
We provide theoretical analysis of the statistical and computational properties of penalized M-estimators that can be formulated as the solution to a possibly nonconvex optimization problem. Many important estimators fall in this category, including least squares regression with nonconvex regularization, generalized linear models with nonconvex regularization, and sparse elliptical random design regression. For these problems, it is intractable to calculate the global solution due to the nonconvex formulation. In this paper, we propose an approximate regularization path following method for solving a variety of learning problems with nonconvex objective functions. Under a unified analytic framework, we simultaneously provide explicit statistical and computational rates of convergence for any local solution obtained by the algorithm. Computationally, our algorithm attains a global geometric rate of convergence for calculating the full regularization path, which is optimal among all first-order algorithms. Unlike most existing methods that only attain geometric rates of convergence for a single regularization parameter, our algorithm calculates the full regularization path with the same iteration complexity. In particular, we provide a refined iteration complexity bound to sharply characterize the performance of each stage along the regularization path. Statistically, we provide sharp sample complexity analysis for all the approximate local solutions along the regularization path. In particular, our analysis improves upon existing results by providing a more refined sample complexity bound as well as an exact support recovery result for the final estimator. These results show that the final estimator attains an oracle statistical property due to the use of a nonconvex penalty.
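To make the path-following idea concrete, here is a hedged Python/NumPy sketch using the MCP penalty as the nonconvex regularizer and a proximal gradient step at each stage; the choice of MCP, its closed-form proximal map, and the geometric stage schedule are illustrative assumptions, not the paper's exact algorithm or conditions.

import numpy as np

def mcp_prox(z, lam, gamma, t):
    # Proximal operator of the MCP penalty (parameters lam, gamma) with step t; requires gamma > t.
    inner = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0) / (1.0 - t / gamma)
    return np.where(np.abs(z) > gamma * lam, z, inner)

def nonconvex_path(A, b, lam_target, gamma=3.0, eta=0.8, inner=100):
    # Approximate regularization path following: warm-started proximal gradient
    # with a nonconvex (MCP) proximal step at each stage.
    n = A.shape[1]
    t = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(n)
    lam = np.max(np.abs(A.T @ b))
    while lam > lam_target:
        lam = max(eta * lam, lam_target)
        for _ in range(inner):
            x = mcp_prox(x - t * (A.T @ (A @ x - b)), lam, gamma, t)
    return x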
A quasi-Newton proximal splitting method, in Advances in Neural Information Processing Systems (NIPS)
"... A new result in convex analysis on the calculation of proximity operators in certain scaled norms is derived. We describe efficient implementations of the proximity calculation for a useful class of functions; the implementations exploit the piece-wise linear nature of the dual problem. The second p ..."
Abstract
-
Cited by 9 (0 self)
- Add to MetaCart
(Show Context)
A new result in convex analysis on the calculation of proximity operators in certain scaled norms is derived. We describe efficient implementations of the proximity calculation for a useful class of functions; the implementations exploit the piecewise linear nature of the dual problem. The second part of the paper applies the previous result to the acceleration of convex minimization problems, and leads to an elegant quasi-Newton method. The optimization method compares favorably against state-of-the-art alternatives. The algorithm has extensive applications including signal processing, sparse recovery, machine learning, and classification.
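The scaled-norm result is easiest to see in the diagonal case, sketched below in Python/NumPy: for a diagonal metric with entries d_i > 0, the prox of λ‖x‖1 stays separable and reduces to per-coordinate soft-thresholding with thresholds λ/d_i. The diagonal curvature estimate used here is an illustrative heuristic, not the paper's diagonal-plus-rank-one quasi-Newton metric.

import numpy as np

def prox_l1_diag(z, lam, d):
    # prox of lam*||x||_1 in the metric (1/2) * sum_i d_i * (x_i - z_i)^2.
    return np.sign(z) * np.maximum(np.abs(z) - lam / d, 0.0)

def scaled_prox_grad(A, b, lam=0.1, iters=300):
    # Proximal gradient for 0.5*||Ax - b||^2 + lam*||x||_1 with a diagonal metric.
    n = A.shape[1]
    d = np.maximum(np.sum(A * A, axis=0), 1e-12)   # diag(A.T @ A): crude curvature estimate
    x = np.zeros(n)
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = prox_l1_diag(x - grad / d, lam, d)     # scaled gradient step, then scaled prox
    return x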