Results 1–10 of 20
An inertial forward-backward-forward primal-dual splitting algorithm for solving monotone inclusion problems, 2015
Cited by 8 (1 self)
We introduce and investigate the convergence properties of an inertial forward-backward-forward splitting algorithm for approaching the set of zeros of the sum of a maximally monotone operator and a single-valued monotone and Lipschitzian operator. By making use of the product space approach, we extend it to solving inclusion problems involving mixtures of linearly composed and parallel-sum type monotone operators. In this way we obtain an inertial forward-backward-forward primal-dual splitting algorithm whose main characteristic is that, in the iterative scheme, all operators are accessed separately, either via forward or via backward evaluations. We also present the variational case, in which one is interested in solving a primal-dual pair of convex optimization problems with complexly structured objectives, and illustrate it by numerical experiments in image processing.
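A minimal sketch of the inertial forward-backward-forward iteration described above, in Python with NumPy; the step size, the inertia parameter `alpha`, and the toy operators below are illustrative assumptions, not the paper's exact conditions:

```python
import numpy as np

def inertial_fbf(resolvent, B, L, x0, alpha=0.2, n_iter=500):
    """Inertial forward-backward-forward (Tseng-type) iteration for a zero
    of A + B, with `resolvent` playing the role of J_{lam*A} and B a
    monotone, L-Lipschitz single-valued operator."""
    lam = 0.5 / L                    # step size below 1/L (illustrative choice)
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(n_iter):
        y = x + alpha * (x - x_prev)            # inertial extrapolation
        p = resolvent(y - lam * B(y))           # forward step, then backward (resolvent) step
        x_prev, x = x, p + lam * (B(y) - B(p))  # second forward (correction) step
    return x

# Toy inclusion: B skew-symmetric (monotone and Lipschitz but not
# cocoercive, the situation the double forward step is designed for) and
# A the normal cone of the box [-1, 1]^2, whose resolvent is a projection.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
x_star = inertial_fbf(resolvent=lambda z: np.clip(z, -1.0, 1.0),
                      B=lambda x: M @ x, L=1.0,
                      x0=np.array([0.9, -0.7]))
# The unique solution of this toy problem is the origin.
```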
iPiasco: Inertial Proximal Algorithm for strongly convex Optimization, 2014
Cited by 5 (1 self)
In this paper, we present a forward-backward splitting algorithm with an additional inertial term for solving a strongly convex optimization problem of a certain type. The strongly convex objective function is assumed to be the sum of a nonsmooth convex and a smooth convex function. This additional knowledge is used to derive a convergence rate for the proposed algorithm. It is proved to be an optimal algorithm with a linear rate of convergence. For certain problems this linear rate of convergence is better than the provably optimal rate of convergence for smooth strongly convex functions. We demonstrate the efficiency of the proposed algorithm in numerical experiments and an example from image processing.
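The underlying inertial forward-backward step can be sketched as follows; the constant extrapolation factor `beta` and the toy problem are illustrative, not the paper's optimal parameter choice derived from strong convexity:

```python
import numpy as np

def soft_threshold(z, tau):
    """Proximal map of tau*||.||_1 (soft thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def inertial_forward_backward(grad_f, prox_g, x0, step, beta, n_iter=300):
    """Forward-backward splitting with an inertial term: extrapolate,
    take a gradient step on the smooth part, then a proximal step on the
    nonsmooth part."""
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(n_iter):
        y = x + beta * (x - x_prev)                        # inertial extrapolation
        x_prev, x = x, prox_g(y - step * grad_f(y), step)  # forward-backward step
    return x

# Toy strongly convex problem: min_x lam*||x||_1 + 0.5*||x - b||^2,
# whose minimizer is the soft thresholding of b.
b, lam = np.array([2.0, -0.3, 1.0]), 0.5
x_star = inertial_forward_backward(
    grad_f=lambda x: x - b,
    prox_g=lambda z, t: soft_threshold(z, lam * t),
    x0=np.zeros(3), step=0.5, beta=0.3)
```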
A general inertial proximal point method for mixed variational inequality problem
Cited by 4 (2 self)
In this paper, we first propose a general inertial proximal point method for the mixed variational inequality (VI) problem. To the best of our knowledge, without stronger assumptions, no convergence rate result is known in the literature for inertial-type proximal point methods. Under certain conditions, we are able to establish the global convergence and an o(1/k) convergence rate result (under a certain measure) for the proposed general inertial proximal point method. We then show that the linearized alternating direction method of multipliers (ADMM) for separable convex optimization with linear constraints is an application of a general proximal point method, provided that the algorithmic parameters are properly chosen. As byproducts of this finding, we establish global convergence and O(1/k) convergence rate results for the linearized ADMM in both the ergodic and the nonergodic sense. In particular, by applying the proposed inertial proximal point method for the mixed VI to linearly constrained separable convex optimization, we obtain an inertial version of the linearized ADMM for which global convergence is guaranteed. We also demonstrate the effect of the inertial extrapolation step via experimental results on the compressive principal component pursuit problem.
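Stripped of the mixed-VI specifics, the general inertial proximal point iteration has the following shape; the resolvent, the extrapolation factor `alpha`, and the toy operator are illustrative assumptions only:

```python
import numpy as np

def inertial_proximal_point(resolvent, x0, alpha=0.3, n_iter=200):
    """Generic inertial proximal point iteration
    x_{k+1} = J_{lam*T}(x_k + alpha*(x_k - x_{k-1})),
    sketched with a constant extrapolation factor alpha."""
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(n_iter):
        y = x + alpha * (x - x_prev)  # inertial extrapolation
        x_prev, x = x, resolvent(y)   # proximal (resolvent) step
    return x

# Toy monotone operator T = gradient of 0.5*||x - c||^2; its resolvent
# with lam = 1 is J(y) = (y + c)/2, and the unique zero of T is c.
c = np.array([1.0, -2.0])
x_star = inertial_proximal_point(lambda y: (y + c) / 2.0, np.zeros(2))
```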
The Primal-Dual Hybrid Gradient Method for Semiconvex Splittings, preprint, http://arxiv.org/abs/1407.1723, 2014
Cited by 2 (2 self)
This paper deals with the analysis of a recent reformulation of the primal-dual hybrid gradient method, which allows one to apply it to nonconvex regularizers. In particular, it investigates variational problems for which the energy to be minimized can be written as G(u) + F(Ku), where G is convex, F is semiconvex, and K is a linear operator. We study the method and prove convergence in the case where the nonconvexity of F is compensated for by the strong convexity of G. The convergence proof yields an interesting requirement for the choice of algorithm parameters, which we show to be not only sufficient but also necessary. Additionally, we show boundedness of the iterates under much weaker conditions. Finally, in several numerical experiments we demonstrate effectiveness and convergence of the algorithm beyond the theoretical guarantees.
Inertial primal-dual algorithms for structured convex optimization
Cited by 2 (0 self)
... that CPA and its variants are closely related to preconditioned versions of the popular alternating direction method of multipliers (abbreviated as ADM). In this paper, we further clarify this connection and show that CPAs generate exactly the same sequence of points as the so-called linearized ADM (abbreviated as LADM) applied to either the primal problem or its Lagrangian dual, depending on the different updating orders of the primal and the dual variables in CPAs, as long as the initial points for the LADM are properly chosen. The dependence on initial points for the LADM can be relaxed by focusing on cyclically equivalent forms of the algorithms. Furthermore, by utilizing the fact that CPAs are applications of a general weighted proximal point method to the mixed variational inequality formulation of the KKT system, where the weighting matrix is positive definite under a parameter condition, we are able to propose and analyze inertial variants of CPAs. Under certain conditions, global point convergence as well as nonasymptotic O(1/k) and asymptotic o(1/k) convergence rates of the proposed inertial CPAs can be guaranteed, where k denotes the iteration index. Finally, we demonstrate the benefits gained by introducing the inertial extrapolation step via experimental results on compressive image reconstruction based on total variation minimization.
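A rough sketch of the inertial idea applied to a primal-dual iteration: the usual step is performed at points extrapolated in the direction of the last movement. The Arrow-Hurwicz-style step and the parameters below are a simplification for illustration, not the paper's exact inertial CPA or its parameter conditions:

```python
import numpy as np

def inertial_primal_dual(K, prox_g, prox_fs, x0, p0, tau, sigma,
                         alpha=0.2, n_iter=400):
    """Primal-dual steps for min_x g(x) + f(Kx), applied at primal and
    dual iterates extrapolated in the direction of the last movement."""
    x_prev, x = x0.copy(), x0.copy()
    p_prev, p = p0.copy(), p0.copy()
    for _ in range(n_iter):
        xh = x + alpha * (x - x_prev)              # inertial extrapolation (primal)
        ph = p + alpha * (p - p_prev)              # inertial extrapolation (dual)
        x_prev, p_prev = x, p
        p = prox_fs(ph + sigma * (K @ xh), sigma)  # dual proximal step on f*
        x = prox_g(xh - tau * (K.T @ p), tau)      # primal proximal step on g
    return x, p

# Toy instance with K = I:  min_x 0.5*||x - b||^2 + lam*||x||_1,
# whose minimizer is the soft thresholding of b.
b, lam = np.array([1.5, -0.2, 0.8]), 0.4
soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
x_star, _ = inertial_primal_dual(
    K=np.eye(3),
    prox_g=lambda z, t: (z + t * b) / (1.0 + t),  # prox of 0.5*||. - b||^2
    prox_fs=lambda z, s: np.clip(z, -lam, lam),   # prox of (lam*||.||_1)*: projection onto [-lam, lam]
    x0=np.zeros(3), p0=np.zeros(3), tau=0.5, sigma=0.5)
```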
Low Rank Priors for Color Image Regularization
Cited by 1 (1 self)
In this work we consider the regularization of vectorial data such as color images. Based on the observation that edge alignment across image channels is a desirable prior for multichannel image restoration, we propose a novel scheme of minimizing the rank of the image Jacobian and extend this idea to second derivatives in the framework of total generalized variation. We compare the proposed convex and nonconvex relaxations of the rank function, based on the Schatten-q norm, to previous color image regularizers and show in our numerical experiments that they have several desirable properties. In particular, the nonconvex relaxations lead to better preservation of discontinuities. The efficient minimization of energies involving nonconvex and nonsmooth regularizers is still an important open question. We extend a recently proposed primal-dual splitting approach for nonconvex optimization and show that it can be effectively used to minimize such energies. Furthermore, we propose a novel algorithm for efficiently evaluating the proximal mapping of the ℓq norm appearing during optimization. We experimentally verify convergence of the proposed optimization method and show that it performs comparably to sequential convex programming.
An Iteratively Reweighted Algorithm for Nonsmooth Nonconvex Optimization in Computer Vision, 2014
Cited by 1 (0 self)
Natural image statistics indicate that we should use nonconvex norms for most regularization tasks in image processing and computer vision. Still, they are rarely used in practice due to the challenge of optimization. Recently, iteratively reweighted ℓ1 minimization (IRL1) has been proposed as a way to tackle a class of nonconvex functions by solving a sequence of convex ℓ2-ℓ1 problems. We extend the problem class to the sum of a convex function and a (nonconvex) nondecreasing function applied to another convex function. The proposed algorithm sequentially optimizes suitably constructed convex majorizers. Convergence to a critical point is proved when the Kurdyka-Łojasiewicz property and additional mild restrictions hold for the objective function. The efficiency and practical importance of the algorithm are demonstrated in computer vision tasks such as image denoising and optical flow. Most applications seek smooth results with sharp discontinuities; this is achieved by combining nonconvexity ...
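The reweighting loop itself is simple; a minimal sketch for a separable toy instance, where each convex weighted-ℓ1 majorizer has a closed-form (soft thresholding) solution. The penalty, `lam`, and `eps` are illustrative assumptions, not the paper's problem class or parameters:

```python
import numpy as np

def irl1(b, lam, eps=0.1, n_iter=20):
    """Iteratively reweighted l1 sketch for
    min_x 0.5*||x - b||^2 + lam * sum_i log(1 + |x_i|/eps):
    each outer step minimizes a convex weighted-l1 majorizer built at the
    current iterate, which for this data term is soft thresholding."""
    x = b.copy()
    for _ in range(n_iter):
        w = lam / (eps + np.abs(x))                      # weights = derivative of the concave penalty
        x = np.sign(b) * np.maximum(np.abs(b) - w, 0.0)  # exact solve of the weighted-l1 subproblem
    return x

b = np.array([2.0, 0.1, -1.5])
x_star = irl1(b, lam=0.05)
# Small entries are driven exactly to zero; large entries are only
# mildly shrunk, unlike plain l1 regularization.
```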
INERTIAL PROXIMAL ADMM FOR LINEARLY CONSTRAINED SEPARABLE CONVEX OPTIMIZATION
The alternating direction method of multipliers (ADMM) is a popular and efficient first-order method that has recently found numerous applications, and the proximal ADMM is an important variant of it. The main contributions of this paper are the proposition and the analysis of a class of inertial proximal ADMMs, which unify the basic ideas of the inertial proximal point method and the proximal ADMM, for linearly constrained separable convex optimization. This class of methods is of an inertial nature because at each iteration the proximal ADMM is applied to a point extrapolated at the current iterate in the direction of the last movement. The recently proposed inertial primal-dual algorithm [1, Algorithm 3] and the inertial linearized ADMM [2, Eq. (3.23)] are covered as special cases. The proposed algorithmic framework is very general in the sense that the weighting matrices in the proximal terms are allowed to be only positive semidefinite, not necessarily positive definite as required by existing methods of the same kind. By setting the two proximal terms to zero, we obtain an inertial variant of the classical ADMM, which is new to the best of our knowledge. We carry out a unified analysis for the entire class of methods under very mild assumptions. In particular, convergence, as well as asymptotic o(1/√k) and nonasymptotic O(1/√k) rates of convergence, are established for the best primal function value and feasibility residues, where k denotes the iteration counter. The global iterate convergence of the generated sequence is established under an additional assumption. We also ...
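The "ADMM applied at an extrapolated point" idea can be sketched on a toy consensus problem; the penalty parameter `rho`, the extrapolation factor `alpha`, and the instance below are illustrative choices, not the paper's general framework with weighting matrices:

```python
import numpy as np

def inertial_admm(b, lam, rho=1.0, alpha=0.2, n_iter=300):
    """Inertial ADMM sketch for min 0.5*||x - b||^2 + lam*||z||_1
    s.t. x = z: the usual ADMM step is applied at (z, u) extrapolated in
    the direction of the last movement."""
    z = u = z_prev = u_prev = np.zeros_like(b)
    for _ in range(n_iter):
        zh = z + alpha * (z - z_prev)            # inertial extrapolation
        uh = u + alpha * (u - u_prev)
        z_prev, u_prev = z, u
        x = (b + rho * (zh - uh)) / (1.0 + rho)  # x-minimization (quadratic)
        z = np.sign(x + uh) * np.maximum(np.abs(x + uh) - lam / rho, 0.0)  # z-minimization (soft threshold)
        u = uh + x - z                           # scaled dual update
    return z

# The minimizer of this toy problem is the soft thresholding of b.
b, lam = np.array([2.0, 0.3, -1.0]), 0.5
z_star = inertial_admm(b, lam)
```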
An Inertial Tseng’s Type Proximal Algorithm for Nonsmooth and Nonconvex Optimization Problems, 2015
We investigate the convergence of a forward-backward-forward proximal-type algorithm with inertial and memory effects when minimizing the sum of a nonsmooth function and a smooth one in the absence of convexity. Convergence is obtained provided an appropriate regularization of the objective satisfies the Kurdyka-Łojasiewicz inequality, which is, for instance, fulfilled for semialgebraic functions.