A fast algorithm for sparse reconstruction based on shrinkage, subspace optimization and continuation (2010)
Venue: SIAM Journal on Scientific Computing
Citations: 53 (8 self)
Citations
3606 | Compressed sensing
- Donoho
- 2006
Citation Context: ...the ℓ0-minimization problem (1.1): min_{x∈R^n} ‖x‖_0 subject to (s.t.) Ax = b, where ‖x‖_0 := |{i : x_i ≠ 0}| and K ≤ m ≤ n (often K ≪ m ≪ n). Moreover, Candes, Romberg, and Tao (see [11, 12, 13]), Donoho [21], and their colleagues have shown that, under some reasonable conditions on x̄ and A, the solution x̄ of problem (1.1) can be found by solving the basis pursuit (BP) problem (1.2): min_{x∈R^n} ‖x‖_1 s.t. Ax = b. ...
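As an aside on the basis pursuit problem (1.2) quoted above: it is a linear program in disguise, via the standard split x = u − v with u, v ≥ 0. A minimal sketch, assuming SciPy is available (the helper name basis_pursuit is illustrative, not the paper's FPC_AS method):

    # Basis pursuit (1.2), min ||x||_1 s.t. Ax = b, as the LP
    # min 1'(u+v) s.t. [A, -A][u; v] = b, u, v >= 0.
    import numpy as np
    from scipy.optimize import linprog

    def basis_pursuit(A, b):
        m, n = A.shape
        c = np.ones(2 * n)                      # objective 1'(u+v) equals ||x||_1
        res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b,
                      bounds=[(0, None)] * (2 * n))
        return res.x[:n] - res.x[n:]            # recover x = u - v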
2620 | Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information
- Candès, Romberg, et al.
- 2006
Citation Context: ...A ∈ R^{m×n} by solving the ℓ0-minimization problem (1.1): min_{x∈R^n} ‖x‖_0 subject to (s.t.) Ax = b, where ‖x‖_0 := |{i : x_i ≠ 0}| and K ≤ m ≤ n (often K ≪ m ≪ n). Moreover, Candes, Romberg, and Tao (see [11, 12, 13]), Donoho [21], and their colleagues have shown that, under some reasonable conditions on x̄ and A, the solution x̄ of problem (1.1) can be found by solving the basis pursuit (BP) problem (1.2): min_{x∈R^n} ...
1666 | Matching pursuits with time-frequency dictionaries
- Mallat, Zhang
- 1993
Citation Context: ...the solution of problem (1.1). Greedy algorithms work when the data satisfy certain conditions, such as the restricted isometry property [13]. These algorithms include Orthogonal Matching Pursuit (OMP) [42, 53], Stagewise OMP (StOMP) [23], CoSaMP [45], Subspace Pursuit (SP) [17], and many other variants. These algorithms, by and large, involve solving a sequence of subspace optimization problems of the form ...
1504 | Near optimal signal recovery from random projections: Universal encoding strategies
- Candès, Tao
- 2006
Citation Context: ...A ∈ R^{m×n} by solving the ℓ0-minimization problem (1.1): min_{x∈R^n} ‖x‖_0 subject to (s.t.) Ax = b, where ‖x‖_0 := |{i : x_i ≠ 0}| and K ≤ m ≤ n (often K ≪ m ≪ n). Moreover, Candes, Romberg, and Tao (see [11, 12, 13]), Donoho [21], and their colleagues have shown that, under some reasonable conditions on x̄ and A, the solution x̄ of problem (1.1) can be found by solving the basis pursuit (BP) problem (1.2): min_{x∈R^n} ...
911 | Greed is Good: Algorithmic Results for Sparse Approximation
- Tropp
- 2004
Citation Context: ...the solution of problem (1.1). Greedy algorithms work when the data satisfy certain conditions, such as the restricted isometry property [13]. These algorithms include Orthogonal Matching Pursuit (OMP) [42, 53], Stagewise OMP (StOMP) [23], CoSaMP [45], Subspace Pursuit (SP) [17], and many other variants. These algorithms, by and large, involve solving a sequence of subspace optimization problems of the form ...
768 | CoSaMP: Iterative signal recovery from incomplete and inaccurate samples
- Needell, Tropp
- 2009
Citation Context: ...algorithms work when the data satisfy certain conditions, such as the restricted isometry property [13]. These algorithms include Orthogonal Matching Pursuit (OMP) [42, 53], Stagewise OMP (StOMP) [23], CoSaMP [45], Subspace Pursuit (SP) [17], and many other variants. These algorithms, by and large, involve solving a sequence of subspace optimization problems of the form (1.3): min_x ‖Ax − b‖²_2 s.t. x_i = 0 ∀i ∈ ...
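The subspace problem (1.3) quoted above reduces to ordinary least squares over the columns of A not indexed by T. A minimal sketch (the helper subspace_ls is hypothetical, not taken from any of the cited codes):

    import numpy as np

    def subspace_ls(A, b, T):
        """Solve min ||Ax - b||_2^2 s.t. x_i = 0 for all i in T (cf. (1.3))."""
        n = A.shape[1]
        free = np.setdiff1d(np.arange(n), np.asarray(T))   # unconstrained indices
        x = np.zeros(n)
        x[free] = np.linalg.lstsq(A[:, free], b, rcond=None)[0]
        return x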
537 | Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems
- Figueiredo, Nowak, et al.
- 2007
Citation Context: ...a smoothed penalty algorithm (SPA) [1] that solves problem (1.4) with the quadratic penalty replaced by an "exact" ℓ2-penalty, the spectral gradient projection method GPSR [29], the fixed-point continuation method FPC [33] for the ℓ1-regularized problem (1.4), and the gradient method in [46] for minimizing the more general function J(x) + H(x), where J is nonsmooth, H is smooth ...
537 | Sparse MRI: The application of compressed sensing for rapid MR imaging
- Lustig, Donoho, et al.
- 2007
Citation Context: ...conditions on x̄ and A, the solution x̄ of problem (1.1) can be found by solving the basis pursuit (BP) problem (1.2): min_{x∈R^n} ‖x‖_1 s.t. Ax = b. For more information on compressive sensing, see, for example, [21, 54, 60, 61, 51, 41, 55, 37, 39, 38, 64]. ...
509 | Signal recovery by proximal forward-backward splitting, Multiscale Modeling and Simulation
- Combettes, Wajs
- 2006
Citation Context: ...= sgn(y) ⊙ max{|y| − ν, 0}, yields the unique minimizer of the function ν‖x‖_1 + ½‖x − y‖²_2. The iteration (2.1) has been independently proposed by different groups of researchers in various contexts [4, 15, 19, 24, 25, 28, 33, 48]. Various modifications and enhancements have been applied to (2.1), which has also been generalized to certain other nonsmooth functions; see [26, 5, 29, 63]. Although (2.1) is very easy to compute, it can take more than thousands of iterations to achieve an acceptable accuracy for difficult problems. It is proved in [33] that (2.1) yields x^k with the sam...
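The shrinkage operator quoted in this context has the closed form S(y, ν) = sgn(y) ⊙ max{|y| − ν, 0}. A minimal sketch of it, together with one iteration of the form (2.1)/(2.2) for ψ_μ(x) = μ‖x‖_1 + ½‖Ax − b‖²_2 (function names are illustrative, not from the cited codes):

    import numpy as np

    def shrink(y, nu):
        # S(y, nu) = sgn(y) * max(|y| - nu, 0): the minimizer of
        # nu*||x||_1 + 0.5*||x - y||_2^2 quoted in the context above
        return np.sign(y) * np.maximum(np.abs(y) - nu, 0.0)

    def shrinkage_step(x, A, b, mu, lam):
        # one iteration of the form (2.1)/(2.2): gradient step on the smooth
        # part f(x) = 0.5*||Ax - b||^2, followed by shrinkage
        g = A.T @ (A @ x - b)
        return shrink(x - lam * g, mu * lam)    # x+ = S(x - lam*g, mu*lam)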
478 | Just relax: Convex programming methods for identifying sparse signals in noise
- Tropp
Citation Context: ...conditions on x̄ and A, the solution x̄ of problem (1.1) can be found by solving the basis pursuit (BP) problem (1.2): min_{x∈R^n} ‖x‖_1 s.t. Ax = b. For more information on compressive sensing, see, for example, [21, 54, 60, 61, 51, 41, 55, 37, 39, 38, 64]. ...
442 | Efficient sparse coding algorithms
- Lee, Battle, et al.
- 2007
Citation Context: ...or linear constraints in [44, 6, 43, 7, 32], and LP and QP subproblems have been used to solve general nonlinear programs in [8, 9]. The difference between our algorithm and the active-set algorithm [40] lies in the way in which the working index set is chosen. Thanks to the solution sparsity, our approach is more aggressive and effective. Our algorithm is different from any existing method in the li...
398 | Gradient methods for minimizing composite objective function
- Nesterov
- 2007
Citation Context: ...an "exact" ℓ2-penalty, the spectral gradient projection method GPSR [29], the fixed-point continuation method FPC [33] for the ℓ1-regularized problem (1.4), and the gradient method in [46] for minimizing the more general function J(x) + H(x), where J is nonsmooth, H is smooth, and both are convex. In this paper, we propose an alternating-stage algorithm that combines the good features of...
376 | Benchmarking optimization software with performance profiles
- Dolan, Moré
Citation Context: ...of the matrix-vector products spent on various tasks in the shrinkage and subspace optimization stages are also presented. We present our numerical results by using performance profiles as proposed in [20]. These profiles provide a way to graphically compare the quantities t_{p,s}, such as the number of iterations or CPU time required to solve problem p by each solver s. Define r_{p,s} to be the ratio between...
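For readers unfamiliar with Dolan-Moré performance profiles: the quoted ratio r_{p,s} = t_{p,s} / min_s t_{p,s} leads directly to the plotted curve ρ_s(τ), the fraction of problems that solver s handles within a factor τ of the best solver. A minimal sketch (the helper name is hypothetical):

    import numpy as np

    def performance_profile(T):
        """T[p, s] = cost t_{p,s} of solver s on problem p; returns (taus, rho)."""
        ratios = T / T.min(axis=1, keepdims=True)          # r_{p,s}
        taus = np.unique(ratios)                           # sorted ratio values
        rho = np.array([(ratios <= tau).mean(axis=0) for tau in taus])
        return taus, rho                                   # plot rho[:, s] vs taus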
371 | Sparse reconstruction by separable approximation
- Wright, Nowak, et al.
- 2009
Citation Context: ...order method. These features make the development of efficient optimization algorithms for CS applications an interesting research area. Examples of such algorithms include shrinkage-based algorithms [28, 48, 19, 4, 24, 25, 14, 33, 63, 56, 49], the interior-point algorithm ℓ1_ℓs [36], SPGL1 [58] for the LASSO problem, NESTA [3] for the BP denoising problem, a smoothed penalty algorithm (SPA) [1] that solves problem (1.4) with the quadratic ...
365 | Probing the Pareto frontier for basis pursuit solutions
- Berg, Friedlander
- 2008
Citation Context: ...‖Ax − b‖ denotes the ℓ2-norm of the residual, and nMat denotes the total number of matrix-vector products involving A or A^⊤. We also use "nnzx" to denote the number of nonzeros in x, which we estimate (as in [5]) by the minimum cardinality of a subset of the components of x that account for 99.5% of ‖x‖_1, i.e., (7.1): nnzx := min{ |Ω| : Σ_{i∈Ω} |x_i| > 0.995 ‖x‖_1 } = min{ k : Σ_{i=1}^{k} |x_(i)| ≥ 0.995 ‖x‖_1 }, where x_(i) ...
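The estimate (7.1) quoted above is simple to compute: sort magnitudes, accumulate, and take the smallest head of the sorted sequence reaching 99.5% of ‖x‖_1. A minimal sketch (hypothetical helper):

    import numpy as np

    def nnzx(x, frac=0.995):
        # smallest k such that the k largest |x_i| sum to >= frac*||x||_1 (cf. (7.1))
        mags = np.sort(np.abs(x))[::-1]         # |x_(1)| >= |x_(2)| >= ...
        csum = np.cumsum(mags)
        return int(np.searchsorted(csum, frac * csum[-1]) + 1)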
352 | An EM algorithm for wavelet-based image restoration
- Figueiredo, Nowak
Citation Context: ...order method. These features make the development of efficient optimization algorithms for CS applications an interesting research area. Examples of such algorithms include shrinkage-based algorithms [28, 48, 19, 4, 24, 25, 14, 33, 63, 56, 49], the interior-point algorithm ℓ1_ℓs [36], SPGL1 [58] for the LASSO problem, NESTA [3] for the BP denoising problem, a smoothed penalty algorithm (SPA) [1] that solves problem (1.4) with the quadratic ...
314 | Two point step size gradient methods
- Barzilai, Borwein
- 1988
Citation Context: ...lim_{k→∞} ‖d^k‖ = 0. It is also proved that the sequence {x^k} is at least R-linearly convergent under some mild assumptions. We now specify a strategy, which is based on the Barzilai-Borwein method (BB) [2], for choosing the parameter λ^k. The shrinkage iteration (2.2) first takes a gradient descent step with step size λ^k along the negative gradient direction g^k of the smooth function f(x), and then a...
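The Barzilai-Borwein choice of λ^k mentioned here uses only the last displacement s = x^k − x^{k−1} and gradient change y = g^k − g^{k−1}. A minimal sketch of a safeguarded BB1 step; the safeguard interval [λ_m, λ_M] matches (3.11), quoted in a later context, while the default bounds are illustrative:

    import numpy as np

    def bb_step(x, x_prev, g, g_prev, lam_min=1e-4, lam_max=1e4):
        s, y = x - x_prev, g - g_prev
        sy = s @ y
        lam = (s @ s) / sy if sy > 0 else lam_max   # BB1: s's / s'y, with a fallback
        return float(np.clip(lam, lam_min, lam_max))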
290 | An interior-point method for large-scale l1-regularized logistic regression
- Koh, Kim, et al.
- 2007
Citation Context: ...algorithms for CS applications an interesting research area. Examples of such algorithms include shrinkage-based algorithms [28, 48, 19, 4, 24, 25, 14, 33, 63, 56, 49], the interior-point algorithm ℓ1_ℓs [36], SPGL1 [58] for the LASSO problem, NESTA [3] for the BP denoising problem, a smoothed penalty algorithm (SPA) [1] that solves problem (1.4) with the quadratic penalty replaced by ...
274 | Sparse solution of underdetermined linear equations by stagewise orthogonal matching pursuit,”
- Donoho, Tsaig, et al.
- 2006
Citation Context: ...Greedy algorithms work when the data satisfy certain conditions, such as the restricted isometry property [13]. These algorithms include Orthogonal Matching Pursuit (OMP) [42, 53], Stagewise OMP (StOMP) [23], CoSaMP [45], Subspace Pursuit (SP) [17], and many other variants. These algorithms, by and large, involve solving a sequence of subspace optimization problems of the form (1.3): min_x ‖Ax − b‖²_2 s.t. ...
233 | A nonmonotone line search technique for Newton's method
- Grippo, Lampariello, et al.
- 1986
Citation Context: ...combination of the previous reference value C_{k−1} and the function value ψ_μ(x^k), and as the iterations proceed, the weight on C_k is increased. For further information on nonmonotone line search methods, see [18, 31, 52]. Algorithm 2. Nonmonotone line search algorithm (NMLS). Initialization: Choose a starting guess x^0 and parameters 0 < η < 1, 0 < ρ < 1, and 0 < λ_m < λ_M < ∞. Set C_0 = ψ_μ(x^0), Q_0 = 1, and k = 0. while "not converge"...
183 | A new TwIST: Two-step iterative shrinkage/thresholding algorithms for image restoratin
- Bioucas-Dias, Figueiredo
Citation Context: ...researchers in various contexts [4, 15, 19, 24, 25, 28, 33, 48]. Various modifications and enhancements have been applied to (2.1), which has also been generalized to certain other nonsmooth functions; see [26, 5, 29, 63]. Although (2.1) is very easy to compute, it can take more than thousands of iterations to achieve an acceptable accuracy for difficult problems. It is proved in [33] that (2.1) yields x^k with the sam...
181 | Quantitative robust uncertainty principles and optimally sparse decompositions
- Candes, Romberg
- 2006
Citation Context: ...A ∈ R^{m×n} by solving the ℓ0-minimization problem (1.1): min_{x∈R^n} ‖x‖_0 subject to (s.t.) Ax = b, where ‖x‖_0 := |{i : x_i ≠ 0}| and K ≤ m ≤ n (often K ≪ m ≪ n). Moreover, Candes, Romberg, and Tao (see [11, 12, 13]), Donoho [21], and their colleagues have shown that, under some reasonable conditions on x̄ and A, the solution x̄ of problem (1.1) can be found by solving the basis pursuit (BP) problem (1.2): min_{x∈R^n} ...
169 | NESTA: A fast and accurate first-order method for sparse recovery
- Becker, Bobin, et al.
Citation Context: ...research area. Examples of such algorithms include shrinkage-based algorithms [28, 48, 19, 4, 24, 25, 14, 33, 63, 56, 49], the interior-point algorithm ℓ1_ℓs [36], SPGL1 [58] for the LASSO problem, NESTA [3] for the BP denoising problem, a smoothed penalty algorithm (SPA) [1] that solves problem (1.4) with the quadratic penalty replaced by an "exact" ℓ2-penalty ...
160 | A coordinate gradient descent method for nonsmooth separable minimization
- Tseng, Yun
- 2009
Citation Context: ...^⊤ (x − S(x − λg, μλ)) + μλ(‖x‖_1 − ‖S(x − λg, μλ)‖_1) ≥ 0, which gives (3.3): g^⊤ d + μ(‖x^+‖_1 − ‖x‖_1) ≤ −(1/λ)‖d‖²_2 after rearranging terms. An alternative derivation of (3.3) is given in Lemma 2.1 in [57]. We can also reformulate (1.4) as (3.4): min f(x) + μξ s.t. (x, ξ) ∈ Ω, whose first-order optimality conditions for a stationary point x* are (3.5): ∇f(x*)(x − x*) + μ(ξ − ‖x*‖_1) ≥ 0 ∀(x, ξ) ∈ Ω, ...
147 | Algorithm 778: L-BFGS-B, Fortran subroutines for large-scale bound constrained optimization
- Zhu, Byrd, et al.
- 1997
Citation Context: ...Therefore, our subspace optimization problem is (3.16): min φ_μ(x) s.t. x ∈ Ω(x^k), which can be solved by a limited-memory quasi-Newton method for problems with simple bound constraints (L-BFGS-B) [67]. In our implementations, we also consider subspace optimization without the bound constraints, i.e., (3.17): min_{x∈R^n} φ_μ(x) s.t. x_i = 0 ∀i ∈ A(x^k). This is essentially an unconstrained minimization p...
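The subspace problem (3.17) quoted above eliminates the variables indexed by A(x^k), leaving a problem in the remaining variables. A minimal sketch via SciPy's L-BFGS-B; here the ℓ1 term is replaced by a signed linear term on the current support, one common way to obtain a smooth objective (an assumption for illustration, since the snippet does not define φ_μ; the helper name is hypothetical):

    import numpy as np
    from scipy.optimize import minimize

    def subspace_opt(A, b, mu, x0, active):
        """Minimize a smooth surrogate of mu*||x||_1 + 0.5*||Ax - b||^2 with
        x_i = 0 for i in `active` (cf. (3.17)); signs are fixed from x0."""
        n = len(x0)
        free = np.setdiff1d(np.arange(n), np.asarray(active))
        s = np.sign(x0[free])                   # fixed sign pattern on the support
        def fg(z):                              # objective and gradient in z
            r = A[:, free] @ z - b
            return mu * s @ z + 0.5 * r @ r, mu * s + A[:, free].T @ r
        res = minimize(fg, x0[free], jac=True, method="L-BFGS-B")
        x = np.zeros(n); x[free] = res.x
        return x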
115 | simple shrinkage is still relevant for redundant representations ?”
- Elad
- 2006
Citation Context: ...order method. These features make the development of efficient optimization algorithms for CS applications an interesting research area. Examples of such algorithms include shrinkage-based algorithms [28, 48, 19, 4, 24, 25, 14, 33, 63, 56, 49], the interior-point algorithm ℓ1_ℓs [36], SPGL1 [58] for the LASSO problem, NESTA [3] for the BP denoising problem, a smoothed penalty algorithm (SPA) [1] that solves problem (1.4) with the quadratic ...
108 | A New Compressive Imaging Camera Architecture using Optical-Domain Compression”,
- Takhar, Laska, et al.
- 2006
Citation Context: ...conditions on x̄ and A, the solution x̄ of problem (1.1) can be found by solving the basis pursuit (BP) problem (1.2): min_{x∈R^n} ‖x‖_1 s.t. Ax = b. For more information on compressive sensing, see, for example, [21, 54, 60, 61, 51, 41, 55, 37, 39, 38, 64]. ...
104 | On the solution of large quadratic programming problems with bound constraints
- Moré, Toraldo
- 1991
Citation Context: ...better able to take advantage of warm starts [47, 50]. For example, gradient projection and conjugate gradient steps have been combined to solve problems with bound constraints or linear constraints in [44, 6, 43, 7, 32], and LP and QP subproblems have been used to solve general nonlinear programs in [8, 9]. The difference between our algorithm and the active-set algorithm [40] lies in the way in which the working in...
96 | Linearized Bregman iterations for compressed sensing
- Cai, Osher, et al.
- 2009
Citation Context: ...order method. These features make the development of efficient optimization algorithms for CS applications an interesting research area. Examples of such algorithms include shrinkage-based algorithms [28, 48, 19, 4, 24, 25, 14, 33, 63, 56, 49], the interior-point algorithm ℓ1_ℓs [36], SPGL1 [58] for the LASSO problem, NESTA [3] for the BP denoising problem, a smoothed penalty algorithm (SPA) [1] that solves problem (1.4) with the quadratic ...
94 | Random filters for compressive sampling and reconstruction
- Tropp, Wakin, et al.
Citation Context: ...conditions on x̄ and A, the solution x̄ of problem (1.1) can be found by solving the basis pursuit (BP) problem (1.2): min_{x∈R^n} ‖x‖_1 s.t. Ax = b. For more information on compressive sensing, see, for example, [21, 54, 60, 61, 51, 41, 55, 37, 39, 38, 64]. ...
89 | Theory and implementation of an analog-to-information converter using random demodulation
- Laska, Kirolos, et al.
- 2007
Citation Context: ...conditions on x̄ and A, the solution x̄ of problem (1.1) can be found by solving the basis pursuit (BP) problem (1.2): min_{x∈R^n} ‖x‖_1 s.t. Ax = b. For more information on compressive sensing, see, for example, [21, 54, 60, 61, 51, 41, 55, 37, 39, 38, 64]. ...
87 | An architecture for compressive imaging.
- Wakin, Laska, et al.
- 2006
Citation Context: ...conditions on x̄ and A, the solution x̄ of problem (1.1) can be found by solving the basis pursuit (BP) problem (1.2): min_{x∈R^n} ‖x‖_1 s.t. Ax = b. For more information on compressive sensing, see, for example, [21, 54, 60, 61, 51, 41, 55, 37, 39, 38, 64]. ...
84 | Subspace pursuit for compressive sensing: Closing the gap between performance and complexity,”
- Dai, Milenkovic
- 2009
Citation Context: ...certain conditions, such as the restricted isometry property [13]. These algorithms include Orthogonal Matching Pursuit (OMP) [42, 53], Stagewise OMP (StOMP) [23], CoSaMP [45], Subspace Pursuit (SP) [17], and many other variants. These algorithms, by and large, involve solving a sequence of subspace optimization problems of the form (1.3): min_x ‖Ax − b‖²_2 s.t. x_i = 0 ∀i ∈ T, where T is a subset of t...
84 | Compressive imaging for video representation and coding,”
- Wakin, Laska, et al.
- 2006
Citation Context: ...conditions on x̄ and A, the solution x̄ of problem (1.1) can be found by solving the basis pursuit (BP) problem (1.2): min_{x∈R^n} ‖x‖_1 s.t. Ax = b. For more information on compressive sensing, see, for example, [21, 54, 60, 61, 51, 41, 55, 37, 39, 38, 64]. ...
83 | Bregman iterative algorithms for ℓ1minimization with applications to compressed sensing
- Yin, Osher, et al.
- 2008
Citation Context: ...problem (1.4): min_{x∈R^n} ψ_μ(x) := μ‖x‖_1 + ½‖Ax − b‖²_2, where μ > 0. The theory for penalty functions implies that the solution of (1.4) goes to the solution of (1.2) as μ goes to zero. It has been shown in [65] that (1.2) is equivalent to (1.4) for a suitable choice of b (which is different from the b in (1.4)). Furthermore, if the measurements are contaminated with noise, problem (1.4) is preferred. Other ...
80 | SPGL1: A solver for large-scale sparse reconstruction
- Berg, Friedlander
- 2007
Citation Context: ...algorithm FPC_AS (version 1.1), we tested it on three different sets of problems and compared it with the state-of-the-art codes FPC (version 2.0) [34], spg_bp in the software package SPGL1 (version 1.5) [4], and CPLEX [36] with a MATLAB interface [1]. In subsection 7.1, we compare FPC_AS to FPC, spg_bp, and CPLEX for solving the basis pursuit problem (1.2) on a set of "pathological" problems with large ...
69 | On the identification of active constraints
- Burke, Moré
- 1988
Citation Context: ...Other active set methods that follow this framework include those in which gradient projection and conjugate gradient methods are combined to solve problems with bound constraints or linear constraints [44, 7, 43, 8, 31] and those that combine linear and quadratic programming approaches to solve general nonlinear programming problems [9, 10]. In our approach, shrinkage is used to identify an active set in the first s...
67 | Coordinate and subspace optimization methods for linear least squares with non-quadratic regularization
- Elad, Matalon, et al.
- 2007
Citation Context: ...researchers in various contexts [4, 15, 19, 24, 25, 28, 33, 48]. Various modifications and enhancements have been applied to (2.1), which has also been generalized to certain other nonsmooth functions; see [26, 5, 29, 63]. Although (2.1) is very easy to compute, it can take more than thousands of iterations to achieve an acceptable accuracy for difficult problems. It is proved in [33] that (2.1) yields x^k with the sam...
64 | On the accurate identification of active constraints
- Facchinei, Fischer, et al.
- 1998
Citation Context: ...(|g^k| − μ)‖_2 provides a measure of the violation of the complementarity conditions at the point x^k. To calculate ξ_k, we use an identification function (3.24): ρ(x^k, ξ_k) := √(χ(x^k, μ, ξ_k) + ζ_k), proposed in [27] for nonlinear programming, that is based on the amount that the current iterate x^k violates the optimality conditions for (1.4). Specifically, we set the threshold ξ_k initially to ξ_0 = ξ̄_m and then up...
62 | Proximal thresholding algorithm for minimization over orthonormal bases,
- Combettes, Pesquet
- 2007
Citation Context: ...order method. These features make the development of efficient optimization algorithms for CS applications an interesting research area. Examples of such algorithms include shrinkage-based algorithms [28, 48, 19, 4, 24, 25, 14, 33, 63, 56, 49], the interior-point algorithm ℓ1_ℓs [36], SPGL1 [58] for the LASSO problem, NESTA [3] for the BP denoising problem, a smoothed penalty algorithm (SPA) [1] that solves problem (1.4) with the quadratic ...
60 | Random sampling for analog-to-information conversion of wideband signals
- Laska, Kirolos, et al.
Citation Context: ...conditions on x̄ and A, the solution x̄ of problem (1.1) can be found by solving the basis pursuit (BP) problem (1.2): min_{x∈R^n} ‖x‖_1 s.t. Ax = b. For more information on compressive sensing, see, for example, [21, 54, 60, 61, 51, 41, 55, 37, 39, 38, 64]. ...
59 | Analog-to-information conversion via random demodulation, Design, Applications, Integration and Software,
- Kirolos, Laska, et al.
- 2006
Citation Context: ...conditions on x̄ and A, the solution x̄ of problem (1.1) can be found by solving the basis pursuit (BP) problem (1.2): min_{x∈R^n} ‖x‖_1 s.t. Ax = b. For more information on compressive sensing, see, for example, [21, 54, 60, 61, 51, 41, 55, 37, 39, 38, 64]. ...
56 | Optimization Theory and Methods
- Yuan, Sun
- 1997
Citation Context: ...signals. Our alternating two-stage algorithm behaves like an active-set method. Compared with interior-point methods, active-set methods are more robust and better able to take advantage of warm starts [47, 50]. For example, gradient projection and conjugate gradient steps have been combined to solve problems with bound constraints or linear constraints in [44, 6, 43, 7, 32], and LP and QP subproblems have ...
54 | A fast alternating direction method for TV l1-l2 signal reconstruction from partial Fourier data
- Yang, Zhang, et al.
Citation Context: ...conditions on x̄ and A, the solution x̄ of problem (1.1) can be found by solving the basis pursuit (BP) problem (1.2): min_{x∈R^n} ‖x‖_1 s.t. Ax = b. For more information on compressive sensing, see, for example, [21, 54, 60, 61, 51, 41, 55, 37, 39, 38, 64]. ...
51 | Fixed-point continuation for l1-minimization: methodology and convergence
- Hale, Yin, et al.
- 2008
Citation Context: ...order method. These features make the development of efficient optimization algorithms for CS applications an interesting research area. Examples of such algorithms include shrinkage-based algorithms [28, 48, 19, 4, 24, 25, 14, 33, 63, 56, 49], the interior-point algorithm ℓ1_ℓs [36], SPGL1 [58] for the LASSO problem, NESTA [3] for the BP denoising problem, a smoothed penalty algorithm (SPA) [1] that solves problem (1.4) with the quadratic ...
50 | A ℓ1-unified variational framework for image restoration
- Bect, Féraud, et al.
Citation Context: ...These features make the development of efficient optimization algorithms for compressive sensing applications an interesting research area. Examples of such algorithms include shrinkage-based algorithms [28, 48, 19, 3, 23, 24, 15, 34, 33, 63, 58, 49], the interior-point algorithm ℓ1_ℓs [37], SPGL1 [59] for the LASSO problem, the spectral gradient projection method GPSR [29], and the fixed-point continuation method FPC [34] for the ℓ1-regularized p...
49 | An algorithm for nonlinear optimization using linear programming and equality constrained subproblems.
- Byrd, Gould, et al.
- 2004
Citation Context: ...gradient steps have been combined to solve problems with bound constraints or linear constraints in [44, 6, 43, 7, 32], and LP and QP subproblems have been used to solve general nonlinear programs in [8, 9]. The difference between our algorithm and the active-set algorithm [40] lies in the way in which the working index set is chosen. Thanks to the solution sparsity, our approach is more aggressive and ...
47 | Algorithms for bound constrained quadratic programming problems
- Moré, Toraldo
- 1989
Citation Context: ...better able to take advantage of warm starts [47, 50]. For example, gradient projection and conjugate gradient steps have been combined to solve problems with bound constraints or linear constraints in [44, 6, 43, 7, 32], and LP and QP subproblems have been used to solve general nonlinear programs in [8, 9]. The difference between our algorithm and the active-set algorithm [40] lies in the way in which the working in...
46 | A nonmonotone line search technique and its application to unconstrained optimization,
- ZHANG, HAGER
- 2004
Citation Context: ...that satisfies (3.7) with C_k = ψ_μ(x^k). Such a line search method is monotone since ψ_μ(x^{k+1}) < ψ_μ(x^k). Instead of using it, we use a nonmonotone line search method based on a strategy proposed in [66] (see algorithm NMLS (Algorithm 2)). In this method, the reference value C_k in the Armijo-like condition (3.7) is taken as a convex combination of the previous reference value C_{k−1} and the function va...
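The convex-combination update of the reference value C_k described here is, in the strategy of [66], driven by a weight sequence Q_k. A minimal sketch of that update (the parameter name eta and its default are illustrative):

    def update_reference(C, Q, psi_new, eta=0.85):
        # Zhang-Hager style update: Q+ = eta*Q + 1, C+ = (eta*Q*C + psi_new)/Q+,
        # so C_k is a running convex combination of past objective values
        Q_new = eta * Q + 1.0
        return (eta * Q * C + psi_new) / Q_new, Q_new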
42 | A wide-angle view at iterated shrinkage algorithms
- Elad, Matalon, et al.
Citation Context: ...order method. These features make the development of efficient optimization algorithms for CS applications an interesting research area. Examples of such algorithms include shrinkage-based algorithms [28, 48, 19, 4, 24, 25, 14, 33, 63, 56, 49], the interior-point algorithm ℓ1_ℓs [36], SPGL1 [58] for the LASSO problem, NESTA [3] for the BP denoising problem, a smoothed penalty algorithm (SPA) [1] that solves problem (1.4) with the quadratic ...
35 | On the convergence of successive linear-quadratic programming algorithms,
- Byrd, Gould, et al.
- 2005
Citation Context: ...gradient steps have been combined to solve problems with bound constraints or linear constraints in [44, 6, 43, 7, 32], and LP and QP subproblems have been used to solve general nonlinear programs in [8, 9]. The difference between our algorithm and the active-set algorithm [40] lies in the way in which the working index set is chosen. Thanks to the solution sparsity, our approach is more aggressive and ...
35 | A new active set algorithm for box constrained optimization
- Hager, Zhang
Citation Context: ...better able to take advantage of warm starts [47, 50]. For example, gradient projection and conjugate gradient steps have been combined to solve problems with bound constraints or linear constraints in [44, 6, 43, 7, 32], and LP and QP subproblems have been used to solve general nonlinear programs in [8, 9]. The difference between our algorithm and the active-set algorithm [40] lies in the way in which the working in...
33 | Private communication
- Becker, Chadwick
- 1995
Citation Context: ...pathological problems described in Table 3. Only the performance of FPC_AS_CG is reported because FPC_AS_BD performed similarly. The first test set includes four problems, CaltechTest1, ..., CaltechTest4 [10], which are pathological because the magnitudes of the nonzero entries of the exact solutions x̄ lie in a large range. Such pathological problems are exaggerations of a large number of realistic probl...
30 | Exposing constraints
- Burke, Mor'e
- 1992
Citation Context: ...better able to take advantage of warm starts [47, 50]. For example, gradient projection and conjugate gradient steps have been combined to solve problems with bound constraints or linear constraints in [44, 6, 43, 7, 32], and LP and QP subproblems have been used to solve general nonlinear programs in [8, 9]. The difference between our algorithm and the active-set algorithm [40] lies in the way in which the working in...
25 | K.C.: A block coordinate gradient descent method for regularized convex separable optimization and covariance selection
- Yun, Tseng, et al.
- 2011
Citation Context: ...order method. These features make the development of efficient optimization algorithms for CS applications an interesting research area. Examples of such algorithms include shrinkage-based algorithms [28, 48, 19, 4, 24, 25, 14, 33, 63, 56, 49], the interior-point algorithm ℓ1_ℓs [36], SPGL1 [58] for the LASSO problem, NESTA [3] for the BP denoising problem, a smoothed penalty algorithm (SPA) [1] that solves problem (1.4) with the quadratic ...
22 | On the nonmonotone line search
- Dai
- 2002
Citation Context: ...combination of the previous reference value C_{k−1} and the function value ψ_μ(x^k), and as the iterations proceed, the weight on C_k is increased. For further information on nonmonotone line search methods, see [18, 31, 52]. Algorithm 2. Nonmonotone line search algorithm (NMLS). Initialization: Choose a starting guess x^0 and parameters 0 < η < 1, 0 < ρ < 1, and 0 < λ_m < λ_M < ∞. Set C_0 = ψ_μ(x^0), Q_0 = 1, and k = 0. while "not converge"...
22 | Fast wavelet-based image deconvolution using the EM algorithm
- Nowak, Figueiredo
- 2001
Citation Context: ...order method. These features make the development of efficient optimization algorithms for CS applications an interesting research area. Examples of such algorithms include shrinkage-based algorithms [28, 48, 19, 4, 24, 25, 14, 33, 63, 56, 49], the interior-point algorithm ℓ1_ℓs [36], SPGL1 [58] for the LASSO problem, NESTA [3] for the BP denoising problem, a smoothed penalty algorithm (SPA) [1] that solves problem (1.4) with the quadratic ...
18 | A note on wavelet-based inversion algorithms
- De Mol, Defrise
- 2001
Citation Context: ...order method. These features make the development of efficient optimization algorithms for CS applications an interesting research area. Examples of such algorithms include shrinkage-based algorithms [28, 48, 19, 4, 24, 25, 14, 33, 63, 56, 49], the interior-point algorithm ℓ1_ℓs [36], SPGL1 [58] for the LASSO problem, NESTA [3] for the BP denoising problem, a smoothed penalty algorithm (SPA) [1] that solves problem (1.4) with the quadratic ...
18 | An assessment of nonmonotone linesearch techniques for unconstrained optimization
- Toint
- 1996
Citation Context: ...combination of the previous reference value C_{k−1} and the function value ψ_μ(x^k), and as the iterations proceed, the weight on C_k is increased. For further information on nonmonotone line search methods, see [18, 31, 52]. Algorithm 2. Nonmonotone line search algorithm (NMLS). Initialization: Choose a starting guess x^0 and parameters 0 < η < 1, 0 < ρ < 1, and 0 < λ_m < λ_M < ∞. Set C_0 = ψ_μ(x^0), Q_0 = 1, and k = 0. while "not converge"...
12 | Two-step algorithms for linear inverse problems with non-quadratic regularization
- Bioucas-Dias, Figueiredo
Citation Context: ...reduced Hessian of f(x) restricted to the support of x*. Various modifications and enhancements have been applied to (2.2), which has also been generalized to certain other nonsmooth functions; see [25, 6, 29, 63]. The first stage of our approach is based on the iterative shrinkage scheme (2.2). Once a "good" approximate solution x^k of the ℓ1-regularized problem (1.4) is obtained from (2.2), the set of indices...
11 | An iterative working-set method for large-scale non-convex quadratic programming
- Toint
Citation Context: ...Algorithm NMLS requires only that λ^k be bounded, and other strategies could easily be adopted. 3.1.1. An exact line search. An exact line search is possible if ψ_μ(·) is a piecewise quadratic function [16, 30]. We want to solve (3.12): min_{α∈[0,1]} ψ_μ(x + αd) := μ‖x + αd‖_1 + ½‖A(x + αd) − b‖²_2 = μ‖x + αd‖_1 + ½c_1α² + c_2α + c_3, where c_1 = ‖Ad‖²_2, c_2 = (Ad)^⊤(Ax − b), and c_3 = ½‖Ax − b‖²_2. The break ...
5 | Quadratic programming via a nondifferentiable penalty function
- Conn, Sinclair
- 1975
Citation Context: ...Algorithm NMLS requires only that λ^k be bounded, and other strategies could easily be adopted. 3.1.1. An exact line search. An exact line search is possible if ψ_μ(·) is a piecewise quadratic function [16, 30]. We want to solve (3.12): min_{α∈[0,1]} ψ_μ(x + αd) := μ‖x + αd‖_1 + ½‖A(x + αd) − b‖²_2 = μ‖x + αd‖_1 + ½c_1α² + c_2α + c_3, where c_1 = ‖Ad‖²_2, c_2 = (Ad)^⊤(Ax − b), and c_3 = ½‖Ax − b‖²_2. The break ...
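Since ψ_μ(x + αd) is piecewise quadratic in α, the exact line search (3.12) quoted above only needs to examine the break points α_i = −x_i/d_i in (0, 1), the interval endpoints, and each piece's interior stationary point. A minimal sketch (hypothetical helper, not the paper's implementation):

    import numpy as np

    def exact_line_search(x, d, A, b, mu):
        """argmin over alpha in [0,1] of mu*||x+a*d||_1 + 0.5*||A(x+a*d)-b||^2."""
        Ad, r = A @ d, A @ x - b
        c1, c2 = Ad @ Ad, Ad @ r                # smooth part: 0.5*c1*a^2 + c2*a + c3
        psi = lambda a: mu * np.abs(x + a * d).sum() + 0.5 * (a * Ad + r) @ (a * Ad + r)
        bp = -x[d != 0] / d[d != 0]             # break points alpha_i = -x_i/d_i
        knots = np.unique(np.concatenate(([0.0, 1.0], bp[(bp > 0) & (bp < 1)])))
        cand = list(knots)
        for lo, hi in zip(knots[:-1], knots[1:]):    # one quadratic piece per interval
            sgn = np.sign(x + 0.5 * (lo + hi) * d)   # sign pattern on this piece
            if c1 > 0:
                a = -(c2 + mu * sgn @ d) / c1        # stationary point of the piece
                if lo < a < hi:
                    cand.append(a)
        return min(cand, key=psi)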
5 | A numerical study of fixed-point continuation applied to compressed sensing
- Hale, Yin, et al.
- 2010
Citation Context: ...take (3.11): λ^k = max{λ_m, min{λ^{k,BB1}, λ_M}} or λ^k = max{λ_m, min{λ^{k,BB2}, λ_M}}, where 0 < λ_m ≤ λ_M < ∞ are fixed parameters. We should point out that the idea of using BB steps in CS has also appeared in [29, 34, 63]. However, Algorithm NMLS requires only that λ^k be bounded, and other strategies could easily be adopted. 3.1.1. An exact line search. An exact line search is possible if ψ_μ(·) is a piecewise quadratic...
5 | Algorithm 890: Sparco: A testing framework for sparse reconstruction
- Berg, Friedlander, et al.
Citation Context: ...3(f). On this measure, FPC_AS_CG and FPC_AS_BD performed better than the SPGL1 solvers. 4.3. Sparco collection. In this subsection, we compare FPC_AS and spg_bp on 13 problems from the Sparco collection [59] (also see [58]). The parameters of spg_bp and spg_bp_sub were set to their default values, and the parameters μ = 10^{−10}, ε = 10^{−12}, and ε_x = 10^{−16} were set for the two variants of FPC_AS. A summary of the ...
4 | CPLEXINT - Matlab interface for the CPLEX solver, http://control.ee.ethz.ch/~hybrid/cplexint.php
- Torrisi, Baotic
Citation Context: ...that λ^k be bounded and other strategies could easily be adopted. 3.1. An exact line search. An exact line search is possible if ψ_μ(·) is a piecewise quadratic function. We want to solve (3.12): min_{α∈[0,1]} ψ_μ(x + αd) := μ‖x + αd‖_1 + ½‖A(x + αd) − b‖²_2 = μ‖x + αd‖_1 + ½c_1α² + c_2α + c_3, where c_1 = ‖Ad‖²_2, c_2 = (Ad)^⊤(Ax − b), and c_3 = ½‖Ax − b‖²_2. The break points of ψ_μ(x + αd) are {α_i = −x_i/d_i ...
3 | A first-order smoothed penalty method for compressed sensing
- Aybat, Iyengar
- 2011
Citation Context: ...algorithms [28, 48, 19, 4, 24, 25, 14, 33, 63, 56, 49], the interior-point algorithm ℓ1_ℓs [36], SPGL1 [58] for the LASSO problem, NESTA [3] for the BP denoising problem, a smoothed penalty algorithm (SPA) [1] that solves problem (1.4) with the quadratic penalty replaced by an "exact" ℓ2-penalty, the spectral gradient projection method GPSR [29], the fixed-poin...
2 | On the convergence of an active set method for l1 minimization
- Wen, Yin, et al.
- 2012
Citation Context: ...theoretical properties of our algorithm, including global convergence, R-linear convergence, and the identification of the active set after a finite number of steps, are studied in a companion paper [62]. Our line search scheme is based on properties of the search direction determined by shrinkage (2.1)–(2.2). Let d^k := d^{(λ_k)}(x^k) denote this direction, i.e., (3.1): d^{(λ)}(x) := x^+ − x, x^+ = S(x ...
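The direction (3.1) quoted here is simply the shrinkage step minus the current point. A minimal sketch, consistent with the shrink operator sketched in an earlier context (helper name illustrative):

    import numpy as np

    def shrink_direction(x, g, mu, lam):
        # d^(lam)(x) = S(x - lam*g, mu*lam) - x  (cf. (3.1))
        y = x - lam * g
        x_plus = np.sign(y) * np.maximum(np.abs(y) - mu * lam, 0.0)
        return x_plus - x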