Results 1–10 of 38
An accelerated proximal gradient algorithm for nuclear norm regularized least squares problems
, 2009
"... ..."
(Show Context)
Templates for Convex Cone Problems with Applications to Sparse Signal Recovery
, 2010
"... This paper develops a general framework for solving a variety of convex cone problems that frequently arise in signal processing, machine learning, statistics, and other fields. The approach works as follows: first, determine a conic formulation of the problem; second, determine its dual; third, app ..."
Abstract

Cited by 124 (7 self)
 Add to MetaCart
This paper develops a general framework for solving a variety of convex cone problems that frequently arise in signal processing, machine learning, statistics, and other fields. The approach works as follows: first, determine a conic formulation of the problem; second, determine its dual; third, apply smoothing; and fourth, solve using an optimal first-order method. A merit of this approach is its flexibility: for example, all compressed sensing problems can be solved via this approach. These include models with objective functionals such as the total-variation norm, ‖Wx‖1 where W is arbitrary, or a combination thereof. In addition, the paper introduces a number of technical contributions such as a novel continuation scheme, a novel approach for controlling the step size, and some new results showing that the smoothed and unsmoothed problems are sometimes formally equivalent. Combined with our framework, these lead to novel, stable and computationally efficient algorithms. For instance, our general implementation is competitive with state-of-the-art methods for solving intensively studied problems such as the LASSO. Further, numerical experiments show that one can solve the Dantzig selector problem, for which no efficient large-scale solvers exist, in a few hundred iterations. Finally, the paper is accompanied by a software release. This software is not a single, monolithic solver; rather, it is a suite of programs and routines designed to serve as building blocks for constructing complete algorithms.
Keywords. Optimal first-order methods, Nesterov’s accelerated descent algorithms, proximal algorithms, conic duality, smoothing by conjugation, the Dantzig selector, the LASSO, nuclear-norm minimization.
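The fourth step of this recipe, an optimal first-order method, can be illustrated on the LASSO. The sketch below is not the paper's software release; it is a minimal FISTA-style accelerated proximal gradient iteration for min ½‖Ax − b‖² + λ‖x‖1, with the step size taken from the spectral norm of A (a standard choice assumed here for illustration).

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (entrywise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista_lasso(A, b, lam, n_iter=500):
    """Accelerated proximal gradient (FISTA) for min 0.5*||Ax-b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    y = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)   # Nesterov momentum step
        x, t = x_new, t_new
    return x
```

For orthogonal A the iteration reduces to a single soft-thresholding of A^T b, which makes the behavior easy to check by hand.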
Linearized Alternating Direction Method with Adaptive Penalty for Low-Rank Representation
"... Many machine learning and signal processing problems can be formulated as linearly constrained convex programs, which could be efficiently solved by the alternating direction method (ADM). However, usually the subproblems in ADM are easily solvable only when the linear mappings in the constraints ar ..."
Abstract

Cited by 53 (8 self)
 Add to MetaCart
(Show Context)
Many machine learning and signal processing problems can be formulated as linearly constrained convex programs, which can be efficiently solved by the alternating direction method (ADM). However, the subproblems in ADM are usually easily solvable only when the linear mappings in the constraints are identities. To address this issue, we propose a linearized ADM (LADM) method by linearizing the quadratic penalty term and adding a proximal term when solving the subproblems. For fast convergence, we also allow the penalty to change adaptively according to a novel update rule. We prove the global convergence of LADM with adaptive penalty (LADMAP). As an example, we apply LADMAP to solve low-rank representation (LRR), an important subspace clustering technique that nevertheless suffers from high computational cost. By combining LADMAP with a skinny SVD representation technique, we are able to reduce the complexity of the original ADM-based method from O(n^3) to O(rn^2), where r and n are the rank and size of the representation matrix, respectively, hence making LRR practical for large-scale applications. Numerical experiments verify that for LRR our LADMAP-based methods are much faster than state-of-the-art algorithms.
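As a concrete instance of the linearization idea, the sketch below applies a linearized ADM iteration to the toy problem min ‖x‖1 s.t. Ax = b. It is an illustration, not the paper's LADMAP code: the proximal parameter η is taken slightly above ‖A‖², and a plain geometric increase of the penalty β stands in for the paper's adaptive update rule.

```python
import numpy as np

def shrink(v, t):
    """Entrywise soft-thresholding, the proximal operator of t*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ladm_l1(A, b, n_iter=300, beta=1.0, rho=1.02, beta_max=1e4):
    """Linearized ADM sketch for min ||x||_1 subject to Ax = b."""
    eta = 1.02 * np.linalg.norm(A, 2) ** 2    # convergence needs eta >= ||A||^2
    x = np.zeros(A.shape[1])
    lam = np.zeros(A.shape[0])                # Lagrange multiplier
    for _ in range(n_iter):
        # Linearize the quadratic penalty around the current x, then prox.
        g = A.T @ (lam + beta * (A @ x - b))
        x = shrink(x - g / (beta * eta), 1.0 / (beta * eta))
        lam = lam + beta * (A @ x - b)        # dual update on the constraint
        beta = min(beta_max, rho * beta)      # simplified penalty increase
    return x
```

The point of the linearization is visible in the update: the x-subproblem is a single closed-form shrinkage even when A is not the identity.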
Sparse signal reconstruction via iterative support detection
SIAM Journal on Imaging Sciences, 2010
"... Abstract. We present a novel sparse signal reconstruction method, iterative support detection (ISD), aiming to achieve fast reconstruction and a reduced requirement on the number of measurements compared to the classical ℓ1 minimization approach. ISD addresses failed reconstructions of ℓ1 minimizati ..."
Abstract

Cited by 36 (5 self)
 Add to MetaCart
(Show Context)
Abstract. We present a novel sparse signal reconstruction method, iterative support detection (ISD), aiming to achieve fast reconstruction and a reduced requirement on the number of measurements compared to the classical ℓ1 minimization approach. ISD addresses failed reconstructions of ℓ1 minimization due to insufficient measurements. It estimates a support set I from a current reconstruction and obtains a new reconstruction by solving the minimization problem min { ∑_{i∉I} |x_i| : Ax = b }, and it iterates these two steps a small number of times. ISD differs from the orthogonal matching pursuit method, as well as its variants, because (i) the index set I in ISD is not necessarily nested or increasing, and (ii) the minimization problem above updates all the components of x at the same time. We generalize the null space property to the truncated null space property and present our analysis of ISD based on the latter. We introduce an efficient implementation of ISD, called threshold-ISD, for recovering signals with fast-decaying distributions of nonzeros from compressive sensing measurements. Numerical experiments show that threshold-ISD has significant advantages over the classical ℓ1 minimization approach, as well as two state-of-the-art algorithms: the iterative reweighted ℓ1 minimization algorithm (IRL1) and the iterative reweighted least-squares algorithm (IRLS). MATLAB code is available for download from
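The truncated ℓ1 problem in this abstract is a linear program, so the two ISD steps can be sketched directly. The code below is an illustrative reconstruction, not the authors' MATLAB release: the truncated problem is solved via a standard LP split, and a simple fixed-fraction threshold stands in for the paper's support-detection rule.

```python
import numpy as np
from scipy.optimize import linprog

def truncated_l1(A, b, I):
    """Solve min sum_{i not in I} |x_i| subject to Ax = b as an LP.

    Split x = x_plus - x_minus with both parts nonnegative; entries in the
    detected support I get zero cost, so only the rest is penalized.
    """
    m, n = A.shape
    c = np.array([0.0 if i in I else 1.0 for i in range(n)])
    res = linprog(np.concatenate([c, c]),
                  A_eq=np.hstack([A, -A]), b_eq=b,
                  bounds=(0, None), method="highs")
    return res.x[:n] - res.x[n:]

def isd(A, b, n_rounds=4, frac=0.25):
    """Alternate support detection and truncated l1 minimization."""
    I = set()                                  # empty I: first round is plain l1
    for _ in range(n_rounds):
        x = truncated_l1(A, b, I)
        thresh = frac * np.max(np.abs(x))      # assumed rule, not the paper's
        I = set(np.flatnonzero(np.abs(x) > thresh))
    return x
```

Note how I is recomputed from scratch each round, matching point (i) of the abstract: the support estimate need not be nested or increasing.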
Linearized Alternating Direction Method with Gaussian Back Substitution for Separable Convex Programming
, 2011
"... Abstract. Recently, we have proposed to combine the alternating direction method (ADM) with a Gaussian back substitution procedure for solving the convex minimization model with linear constraints and a general separable objective function, i.e., the objective function is the sum of many functions w ..."
Abstract

Cited by 35 (3 self)
 Add to MetaCart
(Show Context)
Abstract. Recently, we have proposed to combine the alternating direction method (ADM) with a Gaussian back substitution procedure for solving the convex minimization model with linear constraints and a general separable objective function, i.e., an objective that is the sum of many functions without coupled variables. In this paper, we further study this topic and show that the decomposed subproblems in the ADM procedure can be substantially alleviated by linearizing the involved quadratic terms arising from the augmented Lagrangian penalty on the model’s linear constraints. When the resolvent operators of the separable functions in the objective have closed-form representations, embedding the linearization into the ADM subproblems yields easy subproblems with closed-form solutions. We thus show theoretically that the blend of ADM, Gaussian back substitution and linearization works effectively for the separable convex minimization model under consideration.
Linearized augmented Lagrangian and alternating direction methods for nuclear norm minimization
 Mathematics of Computation
"... Abstract. The nuclear norm is widely used to induce lowrank solutions for many optimization problems with matrix variables. Recently, it has been shown that the augmented Lagrangian method (ALM) and the alternating direction method (ADM) are very efficient for many convex programming problems arisi ..."
Abstract

Cited by 27 (4 self)
 Add to MetaCart
Abstract. The nuclear norm is widely used to induce low-rank solutions for many optimization problems with matrix variables. Recently, it has been shown that the augmented Lagrangian method (ALM) and the alternating direction method (ADM) are very efficient for many convex programming problems arising from various applications, provided that the resulting subproblems are sufficiently simple to have closed-form solutions. In this paper, we are interested in the application of the ALM and the ADM to some minimization problems involving the nuclear norm. When the resulting subproblems do not have closed-form solutions, we propose to linearize these subproblems so that closed-form solutions of the linearized subproblems can be easily derived. Global convergence of the linearized ALM and ADM is established under standard assumptions. Finally, we verify the effectiveness and efficiency of these new methods by numerical experiments.
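The closed-form solutions referred to here come from the proximal operator of the nuclear norm, singular value thresholding. A minimal sketch of that standard operator (not code from the paper):

```python
import numpy as np

def svt(Y, tau):
    """Singular value thresholding: the closed-form minimizer of
    tau*||X||_* + 0.5*||X - Y||_F^2 over matrices X."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt   # shrink the singular values
```

In a linearized ALM/ADM iteration, the matrix subproblem then reduces to one `svt` call per iteration at a gradient-shifted point.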
AUGMENTED LAGRANGIAN METHOD FOR TOTAL VARIATION RESTORATION WITH NON-QUADRATIC FIDELITY
"... Abstract. Recently augmented Lagrangian method has been successfully applied to image restoration with L2 fidelity. In this paper we extend the method to total variation (TV) restoration models with nonquadratic fidelities. We will first introduce the method and present the iterative algorithm for ..."
Abstract

Cited by 20 (2 self)
 Add to MetaCart
Abstract. Recently the augmented Lagrangian method has been successfully applied to image restoration with L2 fidelity. In this paper we extend the method to total variation (TV) restoration models with non-quadratic fidelities. We first introduce the method and present the iterative algorithm for TV restoration with a quite general fidelity. In each iteration, three subproblems need to be solved, two of which can be solved very efficiently via an FFT implementation or a closed-form solution. In general the third subproblem needs iterative solvers. We then apply our method to TV restoration with L1 and Kullback-Leibler (KL) fidelities, two common and important data terms for deblurring images corrupted by impulsive noise and Poisson noise, respectively. For these typical fidelities, we show that the third subproblem also has a closed-form solution and thus can be efficiently solved. In addition, convergence analysis of these algorithms is given, which could not be obtained by previous analysis techniques.
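The closed-form solutions for the L1 and KL fidelity subproblems are one-dimensional minimizations applied elementwise. The sketch below shows both; the variable names are ours, not the paper's (v is the point supplied by the other subproblems, f the observed data, r the penalty parameter).

```python
import numpy as np

def shrink(v, t):
    """Entrywise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def l1_fidelity_step(v, f, r):
    """Closed-form minimizer of |z - f| + (r/2)(z - v)^2, elementwise."""
    return f + shrink(v - f, 1.0 / r)

def kl_fidelity_step(v, f, r):
    """Closed-form minimizer of (z - f*log z) + (r/2)(z - v)^2 over z > 0:
    the positive root of r*z^2 + (1 - r*v)*z - f = 0."""
    return ((r * v - 1.0) + np.sqrt((r * v - 1.0) ** 2 + 4.0 * r * f)) / (2.0 * r)
```

Both solutions follow from setting the (sub)derivative in z to zero; the KL case multiplies through by z to obtain the quadratic whose positive root is returned.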
Augmented ℓ1 and nuclear-norm models with a globally linearly convergent algorithm. Rice University CAAM
, 2012
"... This paper studies the longexisting idea of adding a nice smooth function to “smooth ” a nondifferentiable objective function in the context of sparse optimization, in particular, the minimization of ‖x‖1 + 1 2α ‖x‖22, where x is a vector, as well as the minimization of ‖X‖ ∗ + 1 2α ‖X‖2F, where X ..."
Abstract

Cited by 9 (5 self)
 Add to MetaCart
(Show Context)
This paper studies the long-existing idea of adding a nice smooth function to “smooth” a nondifferentiable objective function in the context of sparse optimization, in particular, the minimization of ‖x‖1 + (1/(2α))‖x‖2^2, where x is a vector, as well as the minimization of ‖X‖∗ + (1/(2α))‖X‖F^2, where X is a matrix and ‖X‖∗ and ‖X‖F are the nuclear and Frobenius norms of X, respectively. We show that they let sparse vectors and low-rank matrices be efficiently recovered. In particular, they enjoy exact and stable recovery guarantees similar to those known for the minimization of ‖x‖1 and ‖X‖∗ under conditions on the sensing operator such as its null space property, restricted isometry property, spherical section property, or “RIPless” property. To recover a (nearly) sparse vector x0, minimizing ‖x‖1 + (1/(2α))‖x‖2^2 returns (nearly) the same solution as minimizing ‖x‖1 whenever α ≥ 10‖x0‖∞. The same relation also holds between minimizing ‖X‖∗ + (1/(2α))‖X‖F^2 and minimizing ‖X‖∗ for recovering a (nearly) low-rank matrix X0 if α ≥ 10‖X0‖2. Furthermore, we show that the linearized Bregman algorithm, as well as its two fast variants, for minimizing ‖x‖1 + (1/(2α))‖x‖2^2 subject to Ax = b enjoys global linear convergence as long as a nonzero solution exists, and we give an explicit rate of convergence. The convergence property does not require a sparse solution or any properties of A. To our knowledge, this is the best known global convergence result for first-order sparse optimization algorithms.
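A minimal sketch of the basic linearized Bregman iteration for min ‖x‖1 + (1/(2α))‖x‖2^2 s.t. Ax = b (the plain algorithm, not the paper's fast variants; the step size τ is a common safe choice assumed here, not taken from the paper):

```python
import numpy as np

def shrink(v, t):
    """Entrywise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def linearized_bregman(A, b, alpha, n_iter=2000):
    """Linearized Bregman for min ||x||_1 + ||x||_2^2/(2*alpha)  s.t. Ax = b."""
    tau = 1.0 / (alpha * np.linalg.norm(A, 2) ** 2)  # step size (assumption)
    v = np.zeros(A.shape[1])                         # dual-like accumulator
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v = v + tau * (A.T @ (b - A @ x))   # gradient step on the residual
        x = alpha * shrink(v, 1.0)          # closed-form primal update
    return x
```

The α-scaled shrinkage is exactly the proximal map of the augmented objective ‖x‖1 + (1/(2α))‖x‖2^2, which is why each iteration stays closed-form.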
An adaptive inverse scale space method for compressed sensing
, 2011
"... In this paper we introduce a novel adaptive approach for solving ℓ 1minimization problems as frequently arising in compressed sensing, which is based on the recently introduced inverse scale space method. The scheme allows to efficiently compute minimizers by solving a sequence of lowdimensional n ..."
Abstract

Cited by 6 (2 self)
 Add to MetaCart
(Show Context)
In this paper we introduce a novel adaptive approach for solving ℓ1-minimization problems as they frequently arise in compressed sensing, based on the recently introduced inverse scale space method. The scheme allows minimizers to be computed efficiently by solving a sequence of low-dimensional nonnegative least-squares problems. We provide a detailed convergence analysis in a general setup as well as refined results under special conditions. In addition, we discuss experimental observations in several numerical examples.
An optimal subgradient algorithm for large-scale convex optimization in simple domains
, 2015
"... This paper shows that the OSGA algorithm – which uses firstorder information to solve convex optimization problems with optimal complexity – can be used to efficiently solve arbitrary boundconstrained convex optimization problems. This is done by constructing an explicit method as well as an inex ..."
Abstract

Cited by 6 (6 self)
 Add to MetaCart
This paper shows that the OSGA algorithm – which uses first-order information to solve convex optimization problems with optimal complexity – can be used to efficiently solve arbitrary bound-constrained convex optimization problems. This is done by constructing an explicit method as well as an inexact scheme for solving the bound-constrained rational subproblem required by OSGA. This leads to an efficient implementation of OSGA for large-scale problems in applications arising in signal and image processing, machine learning and statistics. Numerical experiments demonstrate the promising performance of OSGA on such problems. A software package implementing OSGA for bound-constrained convex problems is available.