Results 1-10 of 29
A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms
, 2013
Abstract

Cited by 60 (10 self)
We propose a new first-order splitting algorithm for solving jointly the primal and dual formulations of large-scale convex minimization problems involving the sum of a smooth function with Lipschitzian gradient, a nonsmooth proximable function, and linear composite functions. This is a full splitting approach, in the sense that the gradient and the linear operators involved are applied explicitly without any inversion, while the nonsmooth functions are processed individually via their proximity operators. This work brings together and notably extends several classical splitting schemes, like the forward–backward and Douglas–Rachford methods, as well as the recent primal–dual method of Chambolle and Pock designed for problems with linear composite terms.
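The full-splitting iteration described above can be sketched on a concrete instance, min_x 0.5||x - b||^2 + lam_g||x||_1 + lam_h||Lx||_1, with smooth f, proximable g, and a linear composite term h(Lx). The problem instance and step-size choices below are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

def primal_dual(b, L, lam_g, lam_h, n_iter=2000):
    """Sketch of a Condat-style primal-dual splitting for
    min_x 0.5*||x - b||^2 + lam_g*||x||_1 + lam_h*||L x||_1.
    The gradient of f and the operator L are applied explicitly (no
    inversion); g and h enter only through their proximity operators."""
    x = np.zeros(b.size)
    y = np.zeros(L.shape[0])
    op_norm = np.linalg.norm(L, 2)          # spectral norm of L
    tau = sigma = 1.0 / (op_norm + 1.0)     # illustrative step sizes
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    for _ in range(n_iter):
        # forward step on f (gradient x - b), proximal step on g
        x_new = soft(x - tau * ((x - b) + L.T @ y), tau * lam_g)
        # dual update; prox of h* is projection onto the l_inf ball of radius lam_h
        y = np.clip(y + sigma * (L @ (2.0 * x_new - x)), -lam_h, lam_h)
        x = x_new
    return x
```

Note that the only operations per iteration are a gradient evaluation, two applications of L (or its adjoint), and two proximity operators, which is the "full splitting" property claimed in the abstract.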
iPiano: Inertial Proximal Algorithm for Nonconvex Optimization
, 2014
Abstract

Cited by 21 (6 self)
In this paper we study an algorithm for solving a minimization problem composed of a differentiable (possibly nonconvex) and a convex (possibly nondifferentiable) function. The algorithm iPiano combines forward-backward splitting with an inertial force. It can be seen as a nonsmooth split version of the heavy-ball method of Polyak. A rigorous analysis of the algorithm for the proposed class of problems yields global convergence of the function values and the arguments. This makes the algorithm robust for usage on nonconvex problems. The convergence result is obtained based on the Kurdyka-Łojasiewicz inequality. This is a very weak restriction, which was used to prove convergence for several other gradient methods. First, an abstract convergence theorem for a generic algorithm is proved, and then iPiano is shown to satisfy the requirements of this theorem. Furthermore, a convergence rate is established for the general problem class. We demonstrate iPiano on computer vision problems: image denoising with learned priors and diffusion-based image compression.
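The iPiano update itself is a single line: a forward-backward step plus a heavy-ball inertial term. A minimal sketch, with the step size and inertia left to the caller (the paper ties them to the Lipschitz constant of grad_f; the choices in the usage note are illustrative):

```python
import numpy as np

def ipiano(grad_f, prox_g, x0, alpha, beta, n_iter=1000):
    """Sketch of the iPiano iteration
        x_{k+1} = prox_{alpha*g}(x_k - alpha*grad_f(x_k) + beta*(x_k - x_{k-1})),
    i.e. forward-backward splitting with an inertial (heavy-ball) force."""
    x_prev = x0.copy()
    x = x0.copy()
    for _ in range(n_iter):
        x_next = prox_g(x - alpha * grad_f(x) + beta * (x - x_prev), alpha)
        x_prev, x = x, x_next
    return x
```

For instance, with f(x) = 0.5||x - b||^2 and g = lam*||.||_1, the iterates approach the soft-thresholded vector soft(b, lam), the known minimizer of that problem.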
Fixed points of generalized approximate message passing with arbitrary matrices
 in Proc. ISIT
, 2013
On the convergence of approximate message passing with arbitrary matrices, available at arXiv:1402.3210 [cs.IT]
, 2014
Generalized approximate message passing for the cosparse analysis model, arXiv:1312.3968, 2013 (Matlab codes at http://www2.ece.ohio-state.edu/~schniter/GrAMPA)
Abstract

Cited by 5 (3 self)
In cosparse analysis compressive sensing (CS), one seeks to estimate a non-sparse signal vector from noisy sub-Nyquist linear measurements by exploiting the knowledge that a given linear transform of the signal is cosparse, i.e., has sufficiently many zeros. We propose a novel approach to cosparse analysis CS based on the generalized approximate message passing (GAMP) algorithm. Unlike other AMP-based approaches to this problem, ours works with a wide range of analysis operators and regularizers. In addition, we propose a novel ℓ0-like soft-thresholder based on MMSE denoising for a spike-and-slab distribution with an infinite-variance slab. Numerical experiments on synthetic and practical datasets demonstrate advantages over existing AMP-based, greedy, and reweighted-ℓ1 approaches. Index Terms — Approximate message passing, belief propagation, compressed sensing.
A forward-backward view of some primal-dual optimization methods in image recovery
 in Proc. Int. Conf. Image Process
, 2014
Abstract

Cited by 5 (3 self)
A wide array of image recovery problems can be abstracted into the problem of minimizing a sum of composite convex functions in a Hilbert space. To solve such problems, primal-dual proximal approaches have been developed which provide efficient solutions to large-scale optimization problems. The objective of this paper is to show that a number of existing algorithms can be derived from a general form of the forward-backward algorithm applied in a suitable product space. Our approach also allows us to develop useful extensions of existing algorithms by introducing a variable metric. An illustration to image restoration is provided. Index Terms — convex optimization, duality, parallel computing, proximal algorithm, variational methods, image recovery.
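The base iteration that the paper generalizes is the forward-backward (proximal gradient) step. A minimal sketch of that base case only; the product-space construction and the variable metric, which are the paper's actual contributions, are not reproduced here:

```python
import numpy as np

def forward_backward(grad_f, prox_g, x0, step, n_iter=200):
    """Generic forward-backward iteration
        x <- prox_{step*g}(x - step*grad_f(x)).
    The paper shows several primal-dual schemes reduce to this form in a
    suitable product space; this sketch is the scalar-metric base case."""
    x = x0.copy()
    for _ in range(n_iter):
        x = prox_g(x - step * grad_f(x), step)
    return x
```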
Playing with duality: An overview of recent primal-dual approaches for . . .
, 2014
Abstract

Cited by 5 (1 self)
Optimization methods are at the core of many problems in signal/image processing, computer vision, and machine learning. For a long time, it has been recognized that looking at the dual of an optimization problem may drastically simplify its solution. Deriving efficient strategies jointly bringing into play the primal and the dual problems is, however, a more recent idea which has generated many important new contributions in recent years. These novel developments are grounded on recent advances in convex analysis, discrete optimization, parallel processing, and nonsmooth optimization with emphasis on sparsity issues. In this paper, we aim at presenting the principles of primal-dual approaches, while giving an overview of numerical methods which have been proposed in different contexts. We show the benefits which can be drawn from primal-dual algorithms both for solving large-scale convex optimization problems and discrete ones, and we provide various application examples to illustrate their usefulness.
A TWO-STAGE IMAGE SEGMENTATION METHOD USING A CONVEX VARIANT OF THE MUMFORD-SHAH MODEL AND THRESHOLDING
Abstract

Cited by 4 (2 self)
Abstract. The Mumford-Shah model is one of the most important image segmentation models, and has been studied extensively in the last twenty years. In this paper, we propose a two-stage segmentation method based on the Mumford-Shah model. The first stage of our method is to find a smooth solution g to a convex variant of the Mumford-Shah model. Once g is obtained, then in the second stage, the segmentation is done by thresholding g into different phases. The thresholds can be given by the users or can be obtained automatically using any clustering method. Because of the convexity of the model, g can be solved efficiently by techniques like the split-Bregman algorithm or the Chambolle-Pock method. We prove that our method is convergent and the solution g is always unique. In our method, there is no need to specify the number of segments K (K ≥ 2) before finding g. We can obtain any K-phase segmentation by choosing (K − 1) thresholds after g is found in the first stage; and in the second stage there is no need to recompute g if the thresholds are changed to reveal different segmentation features in the image. Experimental results show that our two-stage method performs better than many standard two-phase or multiphase segmentation methods for very general images, including anti-mass, tubular, MRI, noisy, and blurry images. Key words. Image segmentation, Mumford-Shah model, split-Bregman, total variation. AMS subject classifications. 52A41, 65D15, 68W40, 90C25, 90C90
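The second stage is simple enough to sketch directly: thresholding the smooth solution g is one cheap pass, which is why the thresholds can be changed without recomputing g. The function below is an illustrative sketch, not the authors' code:

```python
import numpy as np

def threshold_phases(g, thresholds):
    """Stage two of the two-stage method (sketch): partition the smooth
    solution g into K phases using the K-1 given thresholds. Changing the
    thresholds re-runs only this step, never the convex solve of stage one."""
    return np.digitize(g, np.sort(np.asarray(thresholds, dtype=float)))
```

For example, with thresholds 0.3 and 0.7, pixel values below 0.3 get label 0, values in [0.3, 0.7) get label 1, and values at or above 0.7 get label 2, yielding a three-phase segmentation.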
An inertial forward-backward algorithm for monotone inclusions
 J. Math. Imaging Vis
, 2014
Abstract

Cited by 3 (1 self)
In this paper, we propose a new accelerated forward-backward splitting algorithm to compute a zero of the sum of two monotone operators, with one of the two operators being cocoercive. The algorithm is inspired by the accelerated gradient method of Nesterov, but can be applied to a much larger class of problems including convex-concave saddle point problems and general monotone inclusions. We prove convergence of the algorithm in a Hilbert space setting and show that several recently proposed first-order methods can be obtained as special cases of the general algorithm. Numerical results show that the proposed algorithm converges faster than existing methods, while keeping the computational cost of each iteration basically unchanged.
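In operator form the iteration is an extrapolation followed by one forward-backward step. A minimal sketch; the parameter choices and the inertial placement here are illustrative assumptions, not the paper's precise rules:

```python
import numpy as np

def inertial_fb(A, resolvent_B, x0, lam, alpha, n_iter=1000):
    """Sketch of an inertial forward-backward step for a zero of A + B:
    extrapolate, take a forward step on the cocoercive operator A, then a
    backward (resolvent) step on B. The caller-supplied resolvent_B must
    correspond to the same step size lam."""
    x_prev = x0.copy()
    x = x0.copy()
    for _ in range(n_iter):
        y = x + alpha * (x - x_prev)               # inertial extrapolation
        x_prev, x = x, resolvent_B(y - lam * A(y))  # forward-backward step
    return x
```

As a sanity check, taking A(x) = x - b (the gradient of a quadratic, 1-cocoercive) and B the subdifferential of mu*||.||_1, whose resolvent is soft-thresholding, the iterates approach the zero soft(b, mu) of A + B.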
A primal-dual algorithmic framework for constrained convex minimization, arXiv preprint arXiv:1406.5403
, 2014
Abstract

Cited by 3 (2 self)
We present a primal-dual algorithmic framework to obtain approximate solutions to a prototypical constrained convex optimization problem, and rigorously characterize how common structural assumptions affect the numerical efficiency. Our main analysis technique provides a fresh perspective on Nesterov's excessive gap technique in a structured fashion and unifies it with smoothing and primal-dual methods. For instance, through the choices of a dual smoothing strategy and a center point, our framework subsumes decomposition algorithms, the augmented Lagrangian method, and the alternating direction method of multipliers as special cases, and provides optimal convergence rates on the primal objective residual as well as the primal feasibility gap of the iterates.