Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems

by Mário A. T. Figueiredo, Robert D. Nowak, Stephen J. Wright
Venue: IEEE Journal of Selected Topics in Signal Processing, 2007
Results 1–10 of 541 citing documents

A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems

by Amir Beck, Marc Teboulle, 2009
"... We consider the class of Iterative Shrinkage-Thresholding Algorithms (ISTA) for solving linear inverse problems arising in signal/image processing. This class of methods is attractive due to its simplicity, however, they are also known to converge quite slowly. In this paper we present a Fast Iterat ..."
Abstract - Cited by 1058 (9 self) - Add to MetaCart
We consider the class of Iterative Shrinkage-Thresholding Algorithms (ISTA) for solving linear inverse problems arising in signal/image processing. This class of methods is attractive due to its simplicity; however, it is also known to converge quite slowly. In this paper we present a Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) which preserves the computational simplicity of ISTA, but with a global rate of convergence which is proven to be significantly better, both theoretically and practically. Initial promising numerical results for wavelet-based image deblurring demonstrate the capabilities of FISTA.

Citation Context

...x − b‖² + λ‖x‖1}, (2) where ‖x‖1 stands for the sum of the absolute values of the components of x, see e.g., [1, 2, 3, 4]. The presence of the l1 term is used to induce sparsity in the optimal solution of (2), see e.g., [5, 6]. In image deblurring for example, A is often chosen as A = RW where R is the blurring matrix ...
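For concreteness, the ISTA/FISTA scheme summarized in the abstract above alternates a gradient step on the smooth part of min_x (1/2)‖Ax − b‖² + λ‖x‖1 with componentwise soft-thresholding, and FISTA adds a momentum combination of the last two iterates. Below is a minimal NumPy sketch of that idea; the function names, the fixed step 1/L, and the iteration count are illustrative assumptions, not the authors' reference implementation.

    import numpy as np

    def soft_threshold(v, t):
        # componentwise shrinkage: argmin_u 0.5*(u - v)**2 + t*|u|
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def fista(A, b, lam, n_iter=200):
        # Sketch of FISTA for min_x 0.5*||A x - b||^2 + lam*||x||_1 (assumes a dense matrix A).
        L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient of the smooth term
        x = np.zeros(A.shape[1])
        y, t = x.copy(), 1.0
        for _ in range(n_iter):
            grad = A.T @ (A @ y - b)         # gradient of 0.5*||A y - b||^2 at y
            x_new = soft_threshold(y - grad / L, lam / L)
            t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
            y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum step
            x, t = x_new, t_new
        return x

Dropping the momentum step (taking y = x_new and t = 1 throughout) recovers plain ISTA, the slower baseline the paper improves on.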

Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers

by Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, Jonathan Eckstein, 2010
"... ..."
Abstract - Cited by 1001 (20 self) - Add to MetaCart
Abstract not found

Gradient methods for minimizing composite objective function

by Yu. Nesterov, 2007
"... ..."
Abstract - Cited by 399 (0 self) - Add to MetaCart
Abstract not found

Sparse Reconstruction by Separable Approximation

by Stephen J. Wright, Robert D. Nowak, Mário A. T. Figueiredo, 2007
"... Finding sparse approximate solutions to large underdetermined linear systems of equations is a common problem in signal/image processing and statistics. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution and reconstruction, and compressed sensing ..."
Abstract - Cited by 373 (38 self) - Add to MetaCart
Finding sparse approximate solutions to large underdetermined linear systems of equations is a common problem in signal/image processing and statistics. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution and reconstruction, and compressed sensing (CS) are a few well-known areas in which problems of this type appear. One standard approach is to minimize an objective function that includes a quadratic (ℓ2) error term added to a sparsity-inducing (usually ℓ1) regularizer. We present an algorithmic framework for the more general problem of minimizing the sum of a smooth convex function and a nonsmooth, possibly nonconvex, sparsity-inducing function. We propose iterative methods in which each step is an optimization subproblem involving a separable quadratic term (diagonal Hessian) plus the original sparsity-inducing term. Our approach is suitable for cases in which this subproblem can be solved much more rapidly than the original problem. In addition to solving the standard ℓ2 − ℓ1 case, our approach handles other problems, e.g., ℓp regularizers with p ≠ 1, or group-separable (GS) regularizers. Experiments with CS problems show that our approach provides state-of-the-art speed for the standard ℓ2 − ℓ1 problem, and is also efficient on problems with GS regularizers. Index Terms — sparse approximation, compressed sensing, optimization, reconstruction.
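The "separable quadratic term (diagonal Hessian)" in the framework above means that, in the common case of a multiple-of-identity Hessian approximation α_k I and an ℓ1 regularizer, each step has a closed-form soft-thresholding solution. A sketch of that step (notation is illustrative; choosing α_k by a Barzilai-Borwein rule is one option within this family of methods):

    x^{k+1} = \arg\min_z\; (z - x^k)^\top \nabla f(x^k) + \frac{\alpha_k}{2}\,\|z - x^k\|_2^2 + \lambda\,\|z\|_1
            = \mathrm{soft}\!\left(x^k - \tfrac{1}{\alpha_k}\nabla f(x^k),\; \tfrac{\lambda}{\alpha_k}\right)

where soft(v, t) applies sign(v_i)·max(|v_i| − t, 0) componentwise; for group-separable regularizers the same subproblem splits into small, independent per-group problems.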

Probing the Pareto frontier for basis pursuit solutions

by Ewout van den Berg, Michael P. Friedlander, 2008
"... The basis pursuit problem seeks a minimum one-norm solution of an underdetermined least-squares problem. Basis pursuit denoise (BPDN) fits the least-squares problem only approximately, and a single parameter determines a curve that traces the optimal trade-off between the least-squares fit and the ..."
Abstract - Cited by 365 (5 self) - Add to MetaCart
The basis pursuit problem seeks a minimum one-norm solution of an underdetermined least-squares problem. Basis pursuit denoise (BPDN) fits the least-squares problem only approximately, and a single parameter determines a curve that traces the optimal trade-off between the least-squares fit and the one-norm of the solution. We prove that this curve is convex and continuously differentiable over all points of interest, and show that it gives an explicit relationship to two other optimization problems closely related to BPDN. We describe a root-finding algorithm for finding arbitrary points on this curve; the algorithm is suitable for problems that are large scale and for those that are in the complex domain. At each iteration, a spectral gradient-projection method approximately minimizes a least-squares problem with an explicit one-norm constraint. Only matrix-vector operations are required. The primal-dual solution of this problem gives function and derivative information needed for the root-finding method. Numerical experiments on a comprehensive set of test problems demonstrate that the method scales well to large problems.
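In other words, writing φ(τ) for the optimal value of the LASSO subproblem min_x ‖Ax − b‖_2 subject to ‖x‖_1 ≤ τ, the method described above finds the τ at which φ(τ) equals a target misfit σ by Newton root-finding on the Pareto curve. A schematic of one outer iteration (the derivative expression is the duality-based formula used in this line of work; treat the exact form as an assumption):

    τ_{k+1} = τ_k + \frac{σ − φ(τ_k)}{φ'(τ_k)}, \qquad φ'(τ_k) = −\frac{\|A^\top r_k\|_\infty}{\|r_k\|_2}

where r_k = b − A x_{τ_k} is the residual of the (approximately solved) subproblem at τ_k; the subproblem itself is handled by the spectral gradient-projection method mentioned in the abstract, using only matrix-vector products.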

Bayesian Compressive Sensing

by Shihao Ji, Ya Xue, Lawrence Carin, 2007
"... The data of interest are assumed to be represented as N-dimensional real vectors, and these vectors are compressible in some linear basis B, implying that the signal can be reconstructed accurately using only a small number M ≪ N of basis-function coefficients associated with B. Compressive sensing ..."
Abstract - Cited by 330 (24 self) - Add to MetaCart
The data of interest are assumed to be represented as N-dimensional real vectors, and these vectors are compressible in some linear basis B, implying that the signal can be reconstructed accurately using only a small number M ≪ N of basis-function coefficients associated with B. Compressive sensing is a framework whereby one does not measure one of the aforementioned N-dimensional signals directly, but rather a set of related measurements, with the new measurements a linear combination of the original underlying N-dimensional signal. The number of required compressive-sensing measurements is typically much smaller than N, offering the potential to simplify the sensing system. Let f denote the unknown underlying N-dimensional signal and g a vector of compressive-sensing measurements; then one may approximate f accurately by utilizing knowledge of the (under-determined) linear relationship between f and g, in addition to knowledge of the fact that f is compressible in B. In this paper we employ a Bayesian formalism for estimating the underlying signal f based on compressive-sensing measurements g. The proposed framework has the following properties: (i) in addition to estimating the underlying signal f, “error bars” are also estimated, these giving a measure of confidence in the inverted signal; (ii) using knowledge of the error bars, a principled means is provided for determining when a sufficient

Citation Context

...easurements if u is highly compressible in the basis Ψ. The utility of this framework has motivated development over the last few years of several techniques for performing the CS inversion v → û [7]–[11]. Before proceeding, it should be emphasized that CS is a framework that is not limited to wavelet-based representations. While wavelets played a key role in early developments of sparse signal repres...
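One standard way to obtain the "error bars" described above is a sparse-Bayesian (relevance-vector-machine style) treatment of the linear model g = Φθ + n: with an independent zero-mean Gaussian prior of precision α_i on each coefficient and Gaussian noise of precision α_0, the posterior over θ is Gaussian. The specific hierarchy below is an assumption for illustration, not a restatement of the paper's exact model:

    Σ = (α_0\, Φ^\top Φ + \mathrm{diag}(α))^{-1}, \qquad μ = α_0\, Σ\, Φ^\top g

so the point estimate is θ ≈ μ and the componentwise error bars are the square roots of the diagonal of Σ; the hyperparameters α and α_0 are then updated (e.g., by evidence maximization) so that most precisions α_i grow large, which is what induces sparsity.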

An interior-point method for large-scale l1-regularized logistic regression

by Kwangmoo Koh, Seung-jean Kim, Stephen Boyd, Yi Lin - Journal of Machine Learning Research, 2007
"... Logistic regression with ℓ1 regularization has been proposed as a promising method for feature selection in classification problems. In this paper we describe an efficient interior-point method for solving large-scale ℓ1-regularized logistic regression problems. Small problems with up to a thousand ..."
Abstract - Cited by 290 (9 self) - Add to MetaCart
Logistic regression with ℓ1 regularization has been proposed as a promising method for feature selection in classification problems. In this paper we describe an efficient interior-point method for solving large-scale ℓ1-regularized logistic regression problems. Small problems with up to a thousand or so features and examples can be solved in seconds on a PC; medium sized problems, with tens of thousands of features and examples, can be solved in tens of seconds (assuming some sparsity in the data). A variation on the basic method, which uses a preconditioned conjugate gradient method to compute the search step, can solve very large problems, with a million features and examples (e.g., the 20 Newsgroups data set), in a few minutes, on a PC. Using warm-start techniques, a good approximation of the entire regularization path can be computed much more efficiently than by solving a family of problems independently.

Citation Context

...e descent methods [19], sequential subspace optimization methods [34], bound optimization methods [17], iterated shrinkage methods [9, 16], gradient methods [35], and gradient projection algorithms [18]. Some of these methods, including the gradient projection algorithms [18], can handle efficiently very large problems. 1.3 Outline The main purpose of this paper is to describe a specialized interior-p...
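For reference, the problem class discussed above is usually written as follows: given examples a_i ∈ R^n with labels b_i ∈ {−1, +1}, weight vector w, and intercept v, ℓ1-regularized logistic regression solves (a standard formulation; minor notational choices here are assumptions)

    \min_{v,\, w}\; \frac{1}{m} \sum_{i=1}^{m} \log\!\left(1 + \exp\!\big(−b_i (a_i^\top w + v)\big)\right) + λ\,\|w\|_1

where larger λ yields a sparser w; it is this nondifferentiable ℓ1 term that an interior-point method typically handles by introducing bound variables u_i with −u_i ≤ w_i ≤ u_i and a log barrier.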

NESTA: A Fast and Accurate First-Order Method for Sparse Recovery

by Stephen Becker, Jérôme Bobin, Emmanuel J. Candès, 2009
"... Accurate signal recovery or image reconstruction from indirect and possibly undersampled data is a topic of considerable interest; for example, the literature in the recent field of compressed sensing is already quite immense. Inspired by recent breakthroughs in the development of novel first-order ..."
Abstract - Cited by 171 (2 self) - Add to MetaCart
Accurate signal recovery or image reconstruction from indirect and possibly undersampled data is a topic of considerable interest; for example, the literature in the recent field of compressed sensing is already quite immense. Inspired by recent breakthroughs in the development of novel first-order methods in convex optimization, most notably Nesterov’s smoothing technique, this paper introduces a fast and accurate algorithm for solving common recovery problems in signal processing. In the spirit of Nesterov’s work, one of the key ideas of this algorithm is a subtle averaging of sequences of iterates, which has been shown to improve the convergence properties of standard gradient-descent algorithms. This paper demonstrates that this approach is ideally suited for solving large-scale compressed sensing reconstruction problems as 1) it is computationally efficient, 2) it is accurate and returns solutions with several correct digits, 3) it is flexible and amenable to many kinds of reconstruction problems, and 4) it is robust in the sense that its excellent performance across a wide range of problems does not depend on the fine tuning of several parameters. Comprehensive numerical experiments on realistic signals exhibiting a large dynamic range show that this algorithm compares favorably with recently proposed state-of-the-art methods. We also apply the algorithm to solve other problems for which there are fewer alternatives, such as total-variation minimization, and

Citation Context

...rge systems of linear equations to compute the Newton steps. On the other hand, inspired by iterative thresholding ideas [24, 30, 20], we have now available a great number of first-order methods, see [31, 9, 34, 35] and the many earlier references therein, which may be faster but not necessarily accurate. Indeed, these methods are shown to converge slowly, and typically need a very large number of iterations whe...
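The "Nesterov smoothing technique" mentioned above replaces the nonsmooth ℓ1 objective with a smooth surrogate whose gradient is Lipschitz, which is then minimized by an accelerated first-order scheme with averaged iterates. One common form of this smoothing (illustrative; the exact prox-function and constants used by the algorithm are not restated here) is the Huber-like function

    f_μ(x) = \max_{\|u\|_\infty \le 1} \left\{ u^\top x − \tfrac{μ}{2}\|u\|_2^2 \right\} = \sum_i h_μ(x_i), \qquad
    h_μ(t) = \begin{cases} t^2/(2μ), & |t| \le μ \\ |t| − μ/2, & |t| > μ \end{cases}

whose gradient has components x_i/μ for |x_i| ≤ μ and sign(x_i) otherwise; smaller μ approximates ‖x‖_1 more tightly at the cost of a larger gradient Lipschitz constant 1/μ.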

Computational methods for sparse solution of linear inverse problems

by Joel A. Tropp, Stephen J. Wright, 2009
"... The goal of sparse approximation problems is to represent a target signal approximately as a linear combination of a few elementary signals drawn from a fixed collection. This paper surveys the major practical algorithms for sparse approximation. Specific attention is paid to computational issues, ..."
Abstract - Cited by 167 (0 self) - Add to MetaCart
The goal of sparse approximation problems is to represent a target signal approximately as a linear combination of a few elementary signals drawn from a fixed collection. This paper surveys the major practical algorithms for sparse approximation. Specific attention is paid to computational issues, to the circumstances in which individual methods tend to perform well, and to the theoretical guarantees available. Many fundamental questions in electrical engineering, statistics, and applied mathematics can be posed as sparse approximation problems, making these algorithms versatile and relevant to a wealth of applications.

Beyond Nyquist: Efficient Sampling of Sparse Bandlimited Signals

by Joel A. Tropp, Jason N. Laska, Marco F. Duarte, Justin K. Romberg, Richard G. Baraniuk, 2009
"... Wideband analog signals push contemporary analog-to-digital conversion systems to their performance limits. In many applications, however, sampling at the Nyquist rate is inefficient because the signals of interest contain only a small number of significant frequencies relative to the bandlimit, alt ..."
Abstract - Cited by 158 (18 self) - Add to MetaCart
Wideband analog signals push contemporary analog-to-digital conversion systems to their performance limits. In many applications, however, sampling at the Nyquist rate is inefficient because the signals of interest contain only a small number of significant frequencies relative to the bandlimit, although the locations of the frequencies may not be known a priori. For this type of sparse signal, other sampling strategies are possible. This paper describes a new type of data acquisition system, called a random demodulator, that is constructed from robust, readily available components. Let K denote the total number of frequencies in the signal, and let W denote its bandlimit in Hz. Simulations suggest that the random demodulator requires just O(K log(W/K)) samples per second to stably reconstruct the signal. This sampling rate is exponentially lower than the Nyquist rate of W Hz. In contrast with Nyquist sampling, one must use nonlinear methods, such as convex programming, to recover the signal from the samples taken by the random demodulator. This paper provides a detailed theoretical analysis of the system’s performance that supports the empirical observations.
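To put the quoted sampling rate in rough numbers (illustrative only; the constant hidden in the O(·) is not specified above): for a signal with K = 100 active tones and bandlimit W = 1 GHz, K·log2(W/K) ≈ 100 × 23.3 ≈ 2.3 × 10^3 samples per second, several orders of magnitude below the stated Nyquist rate of W = 10^9 Hz.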