Proximal Splitting Methods in Signal Processing
Abstract

Cited by 264 (32 self)
The proximity operator of a convex function is a natural extension of the notion of a projection operator onto a convex set. This tool, which plays a central role in the analysis and the numerical solution of convex optimization problems, has recently been introduced in the arena of inverse problems and, especially, in signal processing, where it has become increasingly important. In this paper, we review the basic properties of proximity operators which are relevant to signal processing and present optimization methods based on these operators. These proximal splitting methods are shown to capture and extend several well-known algorithms in a unifying framework. Applications of proximal methods in signal recovery and synthesis are discussed.
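As a minimal illustration of the abstract's central objects (not taken from the paper itself): the proximity operator of the ℓ1-norm is component-wise soft-thresholding, and plugging it into a forward-backward iteration yields the classical iterative soft-thresholding scheme. The function names and step-size choice below are a sketch under standard assumptions:

```python
import numpy as np

def prox_l1(x, t):
    """Proximity operator of t * ||.||_1: component-wise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward(A, b, lam, n_iter=500):
    """Forward-backward splitting for min_x 0.5*||A x - b||^2 + lam*||x||_1."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                  # forward (explicit gradient) step
        x = prox_l1(x - step * grad, step * lam)  # backward (proximal) step
    return x
```

For A orthonormal the iteration reduces to a single soft-thresholding of the back-projected data, which is why projection-like closed forms are the building blocks of these splitting methods.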
Computational methods for sparse solution of linear inverse problems
, 2009
Abstract

Cited by 164 (0 self)
The goal of sparse approximation problems is to represent a target signal approximately as a linear combination of a few elementary signals drawn from a fixed collection. This paper surveys the major practical algorithms for sparse approximation. Specific attention is paid to computational issues, to the circumstances in which individual methods tend to perform well, and to the theoretical guarantees available. Many fundamental questions in electrical engineering, statistics, and applied mathematics can be posed as sparse approximation problems, making these algorithms versatile and relevant to a wealth of applications.
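As a concrete instance of the greedy algorithms such surveys cover, here is a minimal Orthogonal Matching Pursuit sketch (the function name and interface are illustrative, not from the paper):

```python
import numpy as np

def omp(A, b, k):
    """Orthogonal Matching Pursuit: greedily select k atoms (columns of A)
    to approximate the target signal b as a sparse linear combination."""
    residual = b.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares refit of the coefficients on the selected atoms
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        x = np.zeros(A.shape[1])
        x[support] = coef
        residual = b - A @ x
    return x
```

The refit step is what distinguishes *orthogonal* matching pursuit from plain matching pursuit: the residual is always orthogonal to the span of the atoms selected so far.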
Minimization of Nonsmooth, Nonconvex Functionals by Iterative Thresholding
, 2009
Cited by 127 (2 self)
Signal Restoration with Overcomplete Wavelet Transforms: Comparison of Analysis and Synthesis Priors
Abstract

Cited by 47 (5 self)
The variational approach to signal restoration calls for the minimization of a cost function that is the sum of a data fidelity term and a regularization term, the latter term constituting a ‘prior’. A synthesis prior represents the sought signal as a weighted sum of ‘atoms’. On the other hand, an analysis prior models the coefficients obtained by applying the forward transform to the signal. For orthonormal transforms, the synthesis prior and analysis prior are equivalent; however, for overcomplete transforms the two formulations are different. We compare analysis and synthesis ℓ1-norm regularization with overcomplete transforms for denoising and deconvolution.
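The two priors can be written down directly. A small sketch (all names illustrative, not the paper's notation) of the two cost functions for a transform W, observation operator H, and data y:

```python
import numpy as np

def synthesis_cost(c, H, W, y, lam):
    """Synthesis prior: the signal x = W^T c is a weighted sum of atoms,
    and the l1 penalty acts on the synthesis coefficients c."""
    x = W.T @ c
    return 0.5 * np.sum((H @ x - y) ** 2) + lam * np.sum(np.abs(c))

def analysis_cost(x, H, W, y, lam):
    """Analysis prior: the l1 penalty acts on the forward-transform
    coefficients W x of the signal itself."""
    return 0.5 * np.sum((H @ x - y) ** 2) + lam * np.sum(np.abs(W @ x))
```

For an orthonormal W, substituting c = W x gives W^T c = x and identical penalties, which is the equivalence the abstract states; for overcomplete W the change of variables is no longer one-to-one and the two costs genuinely differ.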
Convergence rates and source conditions for Tikhonov regularization with sparsity constraints
 Submitted for publication
, 2008
Abstract

Cited by 42 (15 self)
This paper addresses regularization by sparsity constraints by means of weighted ℓp penalties for 0 ≤ p ≤ 2. For 1 ≤ p ≤ 2, special attention is paid to convergence rates in norm and to source conditions. As main results, it is proven that one gets a convergence rate of √δ in the 2-norm for 1 < p ≤ 2 and in the 1-norm for p = 1 as soon as the unknown solution is sparse. The case p = 1 needs a special technique where not only Bregman distances but also a so-called Bregman–Taylor distance has to be employed. For p < 1 only preliminary results are shown. These results indicate that, in contrast to the case p ≥ 1, the regularizing properties depend on the interplay of the operator and the basis of sparsity. A counterexample for p = 0 shows that regularization need not happen. AMS Subject classification: Primary 47A52; Secondary 65J20, 65F22.
Fast Global Convergence of Gradient Methods for High-Dimensional Statistical Recovery
 Submitted to the Annals of Statistics
, 2012
Abstract

Cited by 36 (5 self)
Many statistical M-estimators are based on convex optimization problems formed by the combination of a data-dependent loss function with a norm-based regularizer. We analyze the convergence rates of projected gradient and composite gradient methods for solving such problems, working within a high-dimensional framework that allows the ambient dimension d to grow with (and possibly exceed) the sample size n. Our theory identifies conditions under which projected gradient descent enjoys globally linear convergence up to the statistical precision of the model, meaning the typical distance between the true unknown parameter θ* and an optimal solution θ̂. By establishing these conditions with high probability for numerous statistical models, our analysis applies to a wide range of M-estimators, including sparse linear regression using the Lasso; group Lasso for block sparsity; log-linear models with regularization; low-rank matrix recovery using nuclear norm regularization; and matrix decomposition.
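A hedged sketch of the projected gradient method the abstract analyzes, instantiated for the Lasso in constrained form, min ||Ax − b||² subject to ||x||₁ ≤ r. The ℓ1-ball projection uses the standard sort-based rule; function names and the test problem are illustrative, not the paper's code:

```python
import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection of v onto the l1-ball {x : ||x||_1 <= radius}."""
    if np.sum(np.abs(v)) <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]          # sorted magnitudes, descending
    css = np.cumsum(u)
    # largest k with u_k > (css_k - radius) / k  (1-indexed condition)
    k = np.nonzero(u * np.arange(1, len(u) + 1) > css - radius)[0][-1]
    theta = (css[k] - radius) / (k + 1)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def projected_gradient(A, b, radius, n_iter=300):
    """Projected gradient descent for min ||A x - b||^2 s.t. ||x||_1 <= radius."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(n_iter):
        x = project_l1_ball(x - step * A.T @ (A @ x - b), radius)
    return x
```

Under the restricted strong convexity/smoothness conditions the paper works with, iterations of exactly this form contract linearly up to the statistical precision of the model.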
Nested iterative algorithms for convex constrained image recovery problems
 IEEE Journal of Selected Topics in Signal Processing
, 2007
Abstract

Cited by 30 (8 self)
The objective of this paper is to develop methods for solving image recovery problems subject to constraints on the solution. More precisely, we will be interested in problems which can be formulated as the minimization over a closed convex constraint set of the sum of two convex functions f and g, where f may be nonsmooth and g is differentiable with a Lipschitz-continuous gradient. To reach this goal, we derive two types of algorithms that combine forward–backward and Douglas–Rachford iterations. The weak convergence of the proposed algorithms is proved. In the case when the Lipschitz-continuity property of the gradient of g is not satisfied, we also show that, under some assumptions, it remains possible to apply these methods to the considered optimization problem by making use of a quadratic extension technique. The effectiveness of the algorithms is demonstrated for two wavelet-based image restoration problems involving a signal-dependent Gaussian noise and a Poisson noise, respectively.
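The abstract builds on Douglas–Rachford iterations; as a minimal sketch of that building block alone (not the paper's combined nested algorithm), here is plain Douglas–Rachford splitting on a toy problem, minimizing the ℓ1-norm over a lower-bound constraint set, where both proximity operators have closed forms:

```python
import numpy as np

def soft(x, t):
    """Proximity operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def douglas_rachford_l1_over_box(lower, t=1.0, n_iter=100):
    """Douglas-Rachford splitting for min ||x||_1 subject to x >= lower.

    f = indicator of C = {x : x >= lower}  -> prox_f is projection: max(., lower)
    g = ||.||_1                            -> prox_g is soft-thresholding
    """
    y = np.zeros_like(lower)
    for _ in range(n_iter):
        x = np.maximum(y, lower)          # prox_f: project onto the constraint set
        y = y + soft(2.0 * x - y, t) - x  # reflected prox_g update of the driver sequence
    return np.maximum(y, lower)
```

At a fixed point of the driver sequence y, the shadow sequence x = prox_f(y) solves the sum problem; the paper's contribution is handling the harder case where one summand is accessed through its gradient instead of its prox.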
Dualization of signal recovery problems
, 2009
Abstract

Cited by 18 (7 self)
In convex optimization, duality theory can sometimes lead to simpler solution methods than those resulting from direct primal analysis. In this paper, this principle is applied to a class of composite variational problems arising in particular in signal recovery. These problems are not easily amenable to solution by current methods, but they feature Fenchel–Moreau–Rockafellar dual problems that can be solved by forward–backward splitting. The proposed algorithm produces simultaneously a sequence converging weakly to a dual solution, and a sequence converging strongly to the primal solution. Our framework is shown to capture and extend several existing duality-based signal recovery methods and to be applicable to a variety of new problems beyond their scope.
Elastic-Net Regularization: Error Estimates and Active Set Methods
, 905
Abstract

Cited by 11 (6 self)
This paper investigates theoretical properties and efficient numerical algorithms for the so-called elastic-net regularization originating from statistics, which simultaneously enforces ℓ1 and ℓ2 regularization. The stability of the minimizer and its consistency are studied, and convergence rates for both a priori and a posteriori parameter choice rules are established. Two iterative numerical algorithms of active set type are proposed, and their convergence properties are discussed. Numerical results are presented to illustrate the features of the functional and algorithms.
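For the elastic-net penalty α‖x‖₁ + (β/2)‖x‖², the proximity operator is scaled soft-thresholding. The sketch below (names and parameterization are illustrative) plugs it into a generic proximal-gradient loop; this is *not* the active-set methods the paper proposes, only a baseline solver for the same functional:

```python
import numpy as np

def prox_elastic_net(x, t, alpha, beta):
    """Prox of t*(alpha*||.||_1 + 0.5*beta*||.||^2): soft-threshold, then shrink."""
    return np.sign(x) * np.maximum(np.abs(x) - t * alpha, 0.0) / (1.0 + t * beta)

def elastic_net_ista(A, b, alpha, beta, n_iter=500):
    """Proximal gradient for min 0.5*||A x - b||^2 + alpha*||x||_1 + 0.5*beta*||x||^2."""
    x = np.zeros(A.shape[1])
    t = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(n_iter):
        x = prox_elastic_net(x - t * A.T @ (A @ x - b), t, alpha, beta)
    return x
```

The ℓ2 term makes the functional strongly convex, which is what underlies the stability and convergence-rate results the abstract mentions.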