Results 11–20 of 54
A UNIFIED APPROACH FOR MINIMIZING COMPOSITE NORMS
Cited by 9 (3 self)
Abstract. We propose a first-order augmented Lagrangian algorithm (FALC) to solve the composite norm minimization problem min_{X ∈ R^{m×n}} μ1‖σ(F(X) − G)‖_α + μ2‖C(X) − d‖_β, subject to A(X) − b ∈ Q, where σ(X) denotes the vector of singular values of X ∈ R^{m×n}, the matrix norm ‖σ(X)‖_α denotes either the Frobenius, the nuclear, or the ℓ2-operator norm of X, the vector norm ‖·‖_β denotes either the ℓ1-norm, ℓ2-norm, or ℓ∞-norm, Q is a closed convex set, and A(·), C(·), F(·) are linear operators from R^{m×n} to vector spaces of appropriate dimensions. Basis pursuit, matrix completion, robust principal component pursuit (PCP), and stable PCP are all special cases of the composite norm minimization problem; thus, FALC is able to solve all these problems in a unified manner. We show that any limit point of the FALC iterate sequence is an optimal solution of the composite norm minimization problem. We also show that, for all ε > 0, the FALC iterates are ε-feasible and ε-optimal after O(log(1/ε)) iterations, which require O(1/ε) constrained shrinkage operations and Euclidean projections onto the set Q. Surprisingly, on the problem sets we tested, FALC required only O(log(1/ε)) constrained shrinkage operations, instead of the O(1/ε) worst-case bound, to compute an ε-feasible and ε-optimal solution. To the best of our knowledge, FALC is the first algorithm with a known complexity bound that solves the stable PCP problem.
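When the α-norm above is the nuclear norm, the natural matrix analogue of vector shrinkage is singular value thresholding. The sketch below is only a plausible building block of FALC's "constrained shrinkage" step, not the paper's exact operator; the function name is an illustrative assumption:

```python
import numpy as np

def svt(X, t):
    # Singular value thresholding: the proximal operator of t * (nuclear norm).
    # Shrinks each singular value of X toward zero by t, dropping those below t.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - t, 0.0)) @ Vt
```

For diagonal inputs this reduces to entrywise soft thresholding of the diagonal.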
Sparse Signal Reconstruction via ECME Hard Thresholding
, 2012
Cited by 8 (2 self)
We propose a probabilistic model for sparse signal reconstruction and develop several novel algorithms for computing the maximum likelihood (ML) parameter estimates under this model. The measurements follow an underdetermined linear model where the regression-coefficient vector is the sum of an unknown deterministic sparse signal component and a zero-mean white Gaussian component with an unknown variance. Our reconstruction schemes are based on an expectation-conditional maximization either (ECME) iteration that aims at maximizing the likelihood function with respect to the unknown parameters for a given signal sparsity level. Compared with the existing iterative hard thresholding (IHT) method, the ECME algorithm contains an additional multiplicative term and guarantees monotonic convergence for a wide range of sensing (regression) matrices. We propose a double over-relaxation (DORE) thresholding scheme for accelerating the ECME iteration. We prove that, under certain mild conditions, the ECME and DORE iterations converge to local maxima of the likelihood function. The ECME and DORE iterations can be implemented exactly in small-scale applications and for the important class of large-scale sensing operators with orthonormal rows, used, e.g., in the partial fast Fourier transform (FFT). If the signal sparsity level is unknown, we introduce an unconstrained sparsity selection (USS) criterion and a tuning-free automatic double over-relaxation (ADORE) thresholding method that employs USS to estimate the sparsity level. We compare the proposed and existing sparse signal reconstruction methods via one-dimensional simulation and two-dimensional image reconstruction experiments using simulated and real X-ray CT data.
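For reference, the baseline IHT update that the ECME iteration modifies can be sketched as follows (the multiplicative variance term and the DORE acceleration are omitted; the function names and step-size choice are illustrative assumptions):

```python
import numpy as np

def hard_threshold(v, s):
    # Keep the s largest-magnitude entries of v, zero out the rest.
    idx = np.argsort(np.abs(v))[-s:]
    out = np.zeros_like(v)
    out[idx] = v[idx]
    return out

def iht(A, y, s, steps=200):
    # Gradient step on 0.5*||y - A x||^2, then project onto s-sparse vectors.
    x = np.zeros(A.shape[1])
    mu = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative step size
    for _ in range(steps):
        x = hard_threshold(x + mu * A.T @ (y - A @ x), s)
    return x
```

With orthonormal-row operators (the FFT case highlighted in the abstract) the gradient step is especially cheap, since A @ x and A.T @ r are fast transforms.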
A FIRST-ORDER AUGMENTED LAGRANGIAN METHOD FOR COMPRESSED SENSING
Cited by 8 (5 self)
Abstract. In this paper, we propose a first-order augmented Lagrangian algorithm (FAL) that solves the basis pursuit problem min{‖x‖_1 : Ax = b} by inexactly solving a sequence of subproblems of the form min_{x∈R^n} { λ^(k) ‖x‖_1 + (1/2) ‖Ax − b − λ^(k) θ^(k)‖_2^2 }, for an appropriately chosen sequence of multipliers {(λ^(k), θ^(k))}. Each subproblem is solved using Algorithm 3 in [19], wherein each update reduces to "shrinkage" [12] or constrained "shrinkage". We show that FAL converges to an optimal solution x* of the basis pursuit problem, i.e. x* ∈ argmin{‖x‖_1 : Ax = b}, and that there exists an a priori fixed sequence {λ^(k)} such that for all ε > 0, the iterates x^(k) computed by FAL are ε-feasible, i.e. ‖Ax^(k) − b‖_2 ≤ ε, and ε-optimal, i.e. |‖x^(k)‖_1 − ‖x*‖_1| ≤ ε, after O(1/ε) iterations, where the complexity of each iteration is O(n log(n)). We also report the results of numerical experiments comparing the performance of FAL with SPA [1], NESTA [18], FPC [10, 11], FPC_AS [21], and a Bregman-regularized solver [20]. A very striking result that we observed in our numerical experiments was that FAL always correctly identified the zero set of the target signal, without any thresholding or post-processing, for all reasonably small error tolerance values.
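A minimal sketch of this augmented Lagrangian scheme, with a plain ISTA loop standing in for Algorithm 3 of [19] as the subproblem solver and an illustrative geometric multiplier schedule (both are assumptions, not the paper's exact choices):

```python
import numpy as np

def shrink(v, t):
    # Soft thresholding ("shrinkage"): the proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fal_sketch(A, b, lam0=1.0, outer=25, inner=200):
    m, n = A.shape
    x, theta = np.zeros(n), np.zeros(m)
    lam = lam0
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the quadratic
    for _ in range(outer):
        target = b + lam * theta             # subproblem: lam*||x||_1 + 0.5*||Ax - target||^2
        for _ in range(inner):               # ISTA steps on the subproblem
            x = shrink(x - A.T @ (A @ x - target) / L, lam / L)
        theta -= (A @ x - b) / lam           # multiplier update
        lam *= 0.5                           # shrink the penalty weight geometrically
    return x
```

On the toy system x1 = 1, x2 = 2 (with x3 unconstrained), the loop recovers the minimum-ℓ1 point (1, 2, 0).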
PARNES: A RAPIDLY CONVERGENT ALGORITHM FOR ACCURATE RECOVERY OF SPARSE AND APPROXIMATELY SPARSE SIGNALS
Cited by 6 (0 self)
In this article we propose an algorithm, parnes, for the basis pursuit denoise problem bp(σ), which approximately finds a minimum one-norm solution to an underdetermined least-squares problem. parnes (1) combines what we think are the best features of the currently available methods spgl1 [35] and nesta [3], and (2) incorporates a new improvement that exhibits linear convergence under the assumption of the restricted isometry property (rip). As with spgl1, our approach 'probes the Pareto frontier' and determines a solution to the bpdn problem bp(σ) by exploiting the relation between the lasso problem ls(τ) and bp(σ) given by their Pareto curve. As with nesta, we rely on the accelerated proximal gradient method proposed by Yu. Nesterov [27, 26], which takes a remarkable O(√(L/ε)) iterations to come within ε > 0 of the optimal value, where L is the Lipschitz constant of the gradient of the objective function. Furthermore, we introduce an 'outer loop' that regularly updates the prox center; Nesterov's accelerated proximal gradient method then becomes the 'inner loop'. The restricted isometry property, together with the Lipschitz differentiability of our objective function, permits us to derive a condition for switching between the inner and outer loops in a provably optimal manner. A byproduct of our approach is a new algorithm for the lasso problem that also exhibits linear convergence under rip.
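The accelerated scheme used as the inner loop can be sketched generically; the FISTA-style momentum sequence below is one standard variant of Nesterov's method, and all names are illustrative:

```python
import numpy as np

def accel_prox_grad(grad, prox, L, x0, steps=100):
    # Accelerated proximal gradient: gradient step at the extrapolated
    # point y, prox step, then a Nesterov momentum update.
    x = y = np.asarray(x0, dtype=float)
    t = 1.0
    for _ in range(steps):
        x_new = prox(y - grad(y) / L, 1.0 / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```

For a lasso-type objective 0.5‖Ax − b‖² + λ‖x‖₁, grad is x ↦ A^T(Ax − b) and prox is soft thresholding with threshold λ/L.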
An optimal subgradient algorithm for large-scale convex optimization in simple domains
, 2015
Cited by 6 (6 self)
This paper shows that the OSGA algorithm, which uses first-order information to solve convex optimization problems with optimal complexity, can be used to efficiently solve arbitrary bound-constrained convex optimization problems. This is done by constructing an explicit method as well as an inexact scheme for solving the bound-constrained rational subproblem required by OSGA. This leads to an efficient implementation of OSGA on large-scale problems in applications arising in signal and image processing, machine learning, and statistics. Numerical experiments demonstrate the promising performance of OSGA on such problems. A software package implementing OSGA for bound-constrained convex problems is available.
Improved Iterative Curvelet Thresholding for Compressed Sensing
Cited by 3 (1 self)
Compressed sensing, a new theory for simultaneous sampling and compression of signals, has become popular in the signal processing, imaging, and applied mathematics communities. In this paper, we present improved/accelerated iterative curvelet thresholding methods for compressed sensing reconstruction in the field of remote sensing. Some recent strategies, including Bioucas-Dias and Figueiredo's two-step iteration, Beck and Teboulle's fast method, and Osher et al.'s linearized Bregman iteration, are applied to iterative curvelet thresholding in order to accelerate convergence. Advantages and disadvantages of the proposed methods are studied using the so-called pseudo-Pareto curve in numerical experiments on single-pixel remote sensing and Fourier-domain random imaging.
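Of the acceleration strategies listed, Osher et al.'s linearized Bregman iteration is the simplest to sketch. The version below uses a generic sensing matrix in place of the curvelet transform, and the parameter values are illustrative assumptions:

```python
import numpy as np

def linearized_bregman(A, b, mu=0.5, steps=500):
    # Accumulate residual correlations in v, then soft-threshold v to get x.
    # The unit step on v assumes ||A||_2 <= 1; rescale A otherwise.
    n = A.shape[1]
    v, x = np.zeros(n), np.zeros(n)
    for _ in range(steps):
        v += A.T @ (b - A @ x)
        x = np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)   # soft thresholding
    return x
```

In a curvelet setting, A.T and A would be replaced by the forward and inverse transforms composed with the sampling operator.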
A first-order smoothed penalty method for compressed sensing
 SIAM J. Optim
, 2011
Cited by 3 (1 self)
Abstract. We propose a first-order smoothed penalty algorithm (SPA) to solve the sparse recovery problem min{‖x‖_1 : Ax = b}. SPA is efficient as long as the matrix-vector products Ax and A^T y can be computed efficiently; in particular, A need not be an orthogonal projection matrix. SPA converges to the target signal by solving a sequence of penalized optimization subproblems, each of which is solved using Nesterov's optimal algorithm for simple sets. We also bound the suboptimality |‖x^k‖_1 − ‖x*‖_1| for any iterate x^k; thus, the user can stop the algorithm at any iteration k with a guarantee on the suboptimality. SPA is able to work with an ℓ1, ℓ2, or ℓ∞ penalty on the infeasibility, and SPA can be easily extended to solve the relaxed recovery problem min{‖x‖_1 : ‖Ax − b‖_2 ≤ ε}.
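The penalized-subproblem structure can be sketched with a plain quadratic penalty and ISTA inner iterations; SPA additionally smooths the ℓ1 term and uses Nesterov's optimal method, refinements omitted here. All names and the penalty schedule are illustrative assumptions:

```python
import numpy as np

def shrink(v, t):
    # Soft thresholding: proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def penalty_sketch(A, b, mus=(1.0, 0.1, 0.01, 0.001), inner=300):
    # Solve min ||x||_1 + (1/(2*mu)) * ||Ax - b||^2 for a decreasing
    # sequence of penalty parameters mu, warm-starting each subproblem.
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2
    for mu in mus:
        for _ in range(inner):
            # prox-gradient step with step size mu / ||A||^2
            x = shrink(x - A.T @ (A @ x - b) / L, mu / L)
    return x
```

As mu decreases, the penalized solutions approach the equality-constrained recovery problem's solution.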
ON THE CONVERGENCE OF AN ACTIVE SET METHOD FOR ℓ1 MINIMIZATION
Cited by 3 (1 self)
We analyze an abridged version of the active-set algorithm FPC_AS proposed in [18] for solving the ℓ1-regularized problem, i.e., minimizing a weighted sum of the ℓ1-norm ‖x‖_1 and a smooth function f(x). The active-set algorithm alternates between two stages. In the first, "nonmonotone line search (NMLS)" stage, an iterative first-order method based on "shrinkage" is used to estimate the support at the solution. In the second, "subspace optimization" stage, a smaller smooth problem is solved to recover the magnitudes of the nonzero components of x. We show that NMLS itself is globally convergent and that its convergence rate is at least R-linear. In particular, NMLS is able to identify the zero components of a stationary point after a finite number of steps under some mild conditions. The global convergence of FPC_AS is established based on the properties
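The two-stage idea can be sketched for the least-squares case f(x) = ½‖Ax − b‖²: shrinkage iterations estimate the support, then a solve restricted to that subspace recovers the magnitudes. The second stage below is a plain least-squares "debiasing" solve, a simplification of FPC_AS's smooth subproblem; names and tolerances are illustrative:

```python
import numpy as np

def two_stage_sketch(A, b, lam, shrink_steps=200):
    # Stage 1: shrinkage (ISTA) iterations to estimate the support.
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2
    for _ in range(shrink_steps):
        v = x - A.T @ (A @ x - b) / L
        x = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)
    support = np.flatnonzero(np.abs(x) > 1e-8)
    # Stage 2: unregularized least squares restricted to the support.
    x_out = np.zeros_like(x)
    x_out[support] = np.linalg.lstsq(A[:, support], b, rcond=None)[0]
    return x_out
```

Stage 2 removes the shrinkage bias on the identified nonzero components, which is why the combination can be both fast and accurate.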
A Bayesian max-product EM algorithm for reconstructing structured sparse signals
 in Proc. Conf. Inform. Sci. Syst
, 2012
Cited by 3 (2 self)
A fast hybrid algorithm for large-scale ℓ1-regularized logistic regression
 Journal of Machine Learning Research
Cited by 2 (0 self)
ℓ1-regularized logistic regression, also known as sparse logistic regression, is widely used in machine learning, computer vision, data mining, bioinformatics, and neural signal processing. The use of ℓ1 regularization confers attractive properties on the classifier, such as feature selection, robustness to noise, and, as a result, classifier generality in the context of supervised learning. When a sparse logistic regression problem has large-scale data in high dimensions, it is computationally expensive to minimize the non-differentiable ℓ1-norm in the objective function. Motivated by recent work (Hale et al., 2008; Koh et al., 2007), we propose a novel hybrid algorithm that combines two types of optimization iterations: one very fast and memory-friendly, the other slower but more accurate. Called hybrid iterative shrinkage (HIS), the resulting algorithm comprises a fixed-point continuation phase and an interior-point phase. The first phase is based entirely on memory-efficient operations such as matrix-vector multiplications, while the second phase is based on a truncated Newton's method. Furthermore, we show that various
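The memory-friendly first phase reduces to shrinkage steps on the logistic loss. A minimal proximal-gradient sketch follows (the interior-point second phase and the continuation strategy are omitted; the function name and step size are illustrative assumptions):

```python
import numpy as np

def l1_logistic_sketch(X, y, lam, steps=500):
    # Proximal gradient on (1/n) * logistic_loss(w) + lam * ||w||_1.
    n = X.shape[0]
    w = np.zeros(X.shape[1])
    L = np.linalg.norm(X, 2) ** 2 / (4.0 * n)   # Lipschitz bound for the average logistic loss
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))        # predicted probabilities
        g = X.T @ (p - y) / n                   # gradient of the average loss
        v = w - g / L
        w = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)   # shrinkage
    return w
```

Each iteration costs two matrix-vector products with X, which is what makes this phase suitable for large-scale data.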