Results 1–10 of 26
Improved iteratively reweighted least squares for unconstrained smoothed ℓq minimization
 SIAM J. Numer. Anal.
, 2013
Abstract
Cited by 26 (2 self)
Abstract. In this paper, we first study ℓq minimization and its associated iterative reweighted algorithm for recovering sparse vectors. Unlike most existing work, we focus on unconstrained ℓq minimization, for which we show a few advantages on noisy measurements and/or approximately sparse vectors. Inspired by the results in [Daubechies et al., Comm. Pure Appl. Math., 63 (2010), pp. 1–38] for constrained ℓq minimization, we start with a preliminary yet novel analysis for unconstrained ℓq minimization, which includes convergence, error bound, and local convergence behavior. Then, the algorithm and analysis are extended to the recovery of low-rank matrices. The algorithms for both vector and matrix recovery have been compared to some state-of-the-art algorithms and show superior performance on recovering sparse vectors and low-rank matrices.
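The reweighting idea the abstract refers to can be sketched in a few lines. This is a minimal illustration, not the paper's exact algorithm: the smoothed objective ‖Ax − b‖² + λ Σ (xᵢ² + ε²)^{q/2}, the geometric annealing of ε, and all parameter values below are our own assumptions.

```python
import numpy as np

def irls_lq(A, b, lam=1e-2, q=0.5, eps=1.0, n_iter=50):
    """Iteratively reweighted least squares for the smoothed objective
    ||Ax - b||_2^2 + lam * sum_i (x_i^2 + eps^2)^(q/2).

    Each iteration freezes the weights at the current iterate and solves
    the resulting (strictly convex) weighted least-squares problem; eps
    is then decreased geometrically (one common annealing heuristic)."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]      # least-squares start
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(n_iter):
        # weights w_i = q * (x_i^2 + eps^2)^(q/2 - 1) from the smoothed penalty
        w = q * (x**2 + eps**2) ** (q / 2 - 1)
        # stationarity of the surrogate: (2 A^T A + lam * diag(w)) x = 2 A^T b
        x = np.linalg.solve(2 * AtA + lam * np.diag(w), 2 * Atb)
        eps = max(eps * 0.7, 1e-8)                # anneal the smoothing
    return x
```

On an easy noiseless instance (3-sparse signal, 30 measurements in dimension 60) this kind of scheme typically recovers the support; with noisy data the unconstrained formulation lets λ trade data fit against sparsity, which is the setting the paper targets.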
Iterative reweighted minimization methods for lp regularized unconstrained nonlinear programming
, 2013
Abstract
Cited by 9 (4 self)
In this paper we study general lp regularized unconstrained minimization problems. In particular, we derive lower bounds for nonzero entries of the first- and second-order stationary points, and hence also of local minimizers, of the lp minimization problems. We extend some existing iterative reweighted l1 (IRL1) and l2 (IRL2) minimization methods to solve these problems and propose new variants for them in which each subproblem has a closed-form solution. Also, we provide a unified convergence analysis for these methods. In addition, we propose a novel Lipschitz continuous approximation to ‖x‖_p^p. Using this result, we develop new IRL1 methods for the lp minimization problems and show that any accumulation point of the sequence generated by these methods is a first-order stationary point, provided that the approximation parameter is below a computable threshold value. This is a remarkable result since all existing iterative reweighted minimization methods require that this parameter be dynamically updated and approach zero. Our computational results demonstrate that the new IRL1 method and the new variants generally outperform the existing IRL1 methods [20, 17].
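A closed-form-subproblem IRL1 variant of the kind described can be illustrated with a proximal reweighted-ℓ1 step. The concrete choices below (smooth part f(x) = ½‖Ax − b‖², step size 1/‖A‖₂², weight formula wᵢ = p(|xᵢ| + ε)^{p−1}, and a fixed ε) are our illustrative assumptions, not necessarily the paper's exact method.

```python
import numpy as np

def irl1_lp(A, b, lam=0.1, p=0.5, eps=0.1, n_iter=300):
    """Proximal iterative reweighted l1 for
    min 0.5*||Ax - b||^2 + lam * sum_i |x_i|^p.

    The concave penalty sum (|x_i| + eps)^p is linearized at the current
    iterate, giving weights w_i = p*(|x_i| + eps)^(p-1); combined with a
    quadratic upper bound on the smooth part, each subproblem is a
    weighted soft-thresholding in closed form. eps is held fixed."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of grad f
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)                  # gradient of the smooth part
        w = p * (np.abs(x) + eps) ** (p - 1)   # reweighting from current iterate
        z = x - g / L                          # forward (gradient) step
        # backward step: weighted soft-threshold, the subproblem's closed form
        x = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)
    return x
```

By the usual majorization argument, each step decreases the surrogate objective, so the iteration is monotone even though ε never changes; the point emphasized in the abstract is that a fixed, small-enough ε already yields first-order stationarity.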
Inference for high-dimensional sparse econometric models
 Advances in Economics and Econometrics. 10th World Congress of Econometric Society
, 2011
Complexity Analysis of Interior Point Algorithms for Non-Lipschitz and Nonconvex Minimization
 MATHEMATICAL PROGRAMMING
Abstract
Cited by 8 (1 self)
We propose a first-order interior point algorithm for a class of non-Lipschitz and nonconvex minimization problems with box constraints, which arise from applications in variable selection and regularized optimization. The objective functions of these problems are typically continuously differentiable only at interior points of the feasible set. Our algorithm is easy to implement and the objective function value is reduced monotonically along the iteration points. We show that the worst-case complexity for finding an ɛ-scaled first-order stationary point is O(ɛ^{-2}). Moreover, we develop a second-order interior point algorithm using the Hessian matrix, which solves a quadratic program with a ball constraint at each iteration. Although the second-order interior point algorithm costs more computational time per iteration than the first-order algorithm, its worst-case complexity for finding an ɛ-scaled second-order stationary point is reduced to O(ɛ^{-3/2}). Every ɛ-scaled second-order stationary point is also an ɛ-scaled first-order stationary point.
Nonconvex TVq-Models in Image Restoration: Analysis and a Trust-Region Regularization Based Superlinearly Convergent Solver
, 2011
Abstract
Cited by 6 (1 self)
A nonconvex variational model is introduced which contains the ℓq-"norm", q ∈ (0, 1), of the gradient of the image to be reconstructed as the regularization term, together with a least-squares type data fidelity term which may depend on a possibly spatially dependent weighting parameter. Hence, the regularization term in this functional is a nonconvex compromise between minimizing the support of the reconstruction and the classical convex total variation model. In the discrete setting, existence of a minimizer is proven, a Newton-type solution algorithm is introduced, and its global as well as locally superlinear convergence is established. The potential indefiniteness (or negative definiteness) of the Hessian of the objective during the iteration is handled by a trust-region based regularization scheme. The performance of the new algorithm is studied by means of a series of numerical tests. For the associated infinite-dimensional model, an existence result based on the weakly lower semicontinuous envelope is established and its relation to the original problem is discussed.
Recovery of sparsest signals via ℓq minimization
, 2011
Abstract
Cited by 4 (0 self)
In this paper, it is proved that every s-sparse vector x ∈ R^n can be exactly recovered from the measurement vector z = Ax ∈ R^m via ℓq minimization for some 0 < q ≤ 1, as soon as each s-sparse vector x ∈ R^n is uniquely determined by the measurement z. Moreover, it is shown that the exponent q in the ℓq minimization can be chosen to be about 0.6796 × (1 − δ_{2s}(A)), where δ_{2s}(A) is the restricted isometry constant of order 2s for the measurement matrix A.
Iterative Reweighted Singular Value Minimization Methods for lp Regularized Unconstrained Matrix Minimization
, 2014
Abstract
Cited by 3 (2 self)
In this paper we study general lp regularized unconstrained matrix minimization problems. In particular, we first introduce a class of first-order stationary points for them, and show that the first-order stationary points introduced in [11] for an lp regularized vector minimization problem are equivalent to those of an lp regularized matrix minimization reformulation. We also establish that any local minimizer of the lp regularized matrix minimization problems must be a first-order stationary point. Moreover, we derive lower bounds for nonzero singular values of the first-order stationary points, and hence also of the local minimizers, of the lp matrix minimization problems. Iterative reweighted singular value minimization (IRSVM) approaches are then proposed to solve these problems, in which each subproblem has a closed-form solution. We show that any accumulation point of the sequence generated by these methods is a first-order stationary point of the problems. In addition, we study a nonmonotone proximal gradient (NPG) method for solving the lp matrix minimization problems and establish its global convergence. Our computational results demonstrate that the IRSVM and NPG methods generally outperform some existing state-of-the-art methods in terms of solution quality and/or speed. Moreover, the IRSVM methods are slightly faster than the NPG method. Key words: lp regularized matrix minimization, iterative reweighted singular value minimization, iterative reweighted least squares, nonmonotone proximal gradient method
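The matrix analogue of the reweighted step can be sketched as a weighted singular value soft-threshold, which is the closed form the abstract alludes to. Everything concrete below is our own assumption for illustration: the smooth part f(X) = ½‖X − M‖² (gradient G, Lipschitz constant L) and the weight formula wᵢ = p(σᵢ + ε)^{p−1} taken from the current iterate's spectrum.

```python
import numpy as np

def irsvm_step(X, G, L, lam, p, eps):
    """One reweighted singular value step for
    min f(X) + lam * sum_i sigma_i(X)^p.

    Linearize the concave spectral penalty at X (weights from sigma(X)),
    take a gradient step on the smooth part, then apply weighted singular
    value soft-thresholding -- the subproblem's closed-form solution when
    the weights are nondecreasing along the sorted spectrum (they are,
    since sigma is sorted descending and p - 1 < 0)."""
    s_x = np.linalg.svd(X, compute_uv=False)      # current spectrum
    w = p * (s_x + eps) ** (p - 1)                # reweighting
    Z = X - G / L                                 # forward (gradient) step
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    s_new = np.maximum(s - lam * w / L, 0.0)      # weighted soft-threshold
    return U @ np.diag(s_new) @ Vt
```

On a noisy rank-one matrix M with f(X) = ½‖X − M‖² (so G = X − M and L = 1), a few such steps kill the small noise singular values while barely shrinking the dominant one, which is the low-rank-recovery behavior the paper's experiments measure.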
Joint power and admission control: Nonconvex approximation and an efficient polynomial time deflation approach. Ya-Feng Liu et al.
 CoRR abs/1311.3045
, 2013