Results 1 – 9 of 9
An inertial forward-backward-forward primal-dual splitting algorithm for solving monotone inclusion problems
, 2015
Cited by 8 (1 self)
We introduce and investigate the convergence properties of an inertial forward-backward-forward splitting algorithm for approaching the set of zeros of the sum of a maximally monotone operator and a single-valued monotone and Lipschitzian operator. By making use of the product space approach, we extend it to solving inclusion problems involving mixtures of linearly composed and parallel-sum type monotone operators. In this way we obtain an inertial forward-backward-forward primal-dual splitting algorithm whose main characteristic is that, in the iterative scheme, all operators are accessed separately, either via forward or via backward evaluations. We also present the variational case, in which one is interested in solving a primal-dual pair of convex optimization problems with complexly structured objectives, and illustrate it by numerical experiments in image processing.
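The core inertial forward-backward-forward (Tseng-type) iteration behind this scheme can be sketched as follows. This is a minimal toy instance, not the paper's general primal-dual algorithm: the l1-plus-quadratic test problem and the parameter values `lam` and `alpha` are illustrative assumptions.

```python
import numpy as np

def soft(v, t):
    # resolvent of t * (subdifferential of the l1 norm): soft thresholding
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inertial_fbf(B, resolvent, x0, lam, alpha, iters=2000):
    """Inertial forward-backward-forward iteration for 0 in A(x) + B(x):
    B is single-valued, monotone and Lipschitz (two forward evaluations
    per iteration); `resolvent` is J_{lam*A} (one backward evaluation)."""
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        w = x + alpha * (x - x_prev)            # inertial extrapolation
        p = resolvent(w - lam * B(w), lam)      # forward-backward step
        x_prev, x = x, p + lam * (B(w) - B(p))  # correcting forward step
    return x

# Toy instance: A = subdifferential of ||.||_1, B = gradient of a strongly
# convex quadratic; zeros of A + B minimize ||x||_1 + 0.5*x'Qx - b'x.
Q = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([1.5, -0.2])
B = lambda x: Q @ x - b
lam = 0.3                                       # below 1/Lipschitz(B)
x = inertial_fbf(B, soft, np.zeros(2), lam=lam, alpha=0.05)
# fixed-point residual of the forward-backward map certifies optimality
residual = np.linalg.norm(x - soft(x - lam * B(x), lam))
```

Note the hallmark of the method: the single-valued operator `B` is only evaluated forward (twice per iteration), while the set-valued operator is only accessed through its resolvent.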
A general inertial proximal point method for mixed variational inequality problem
Cited by 4 (2 self)
Abstract. In this paper, we first propose a general inertial proximal point method for the mixed variational inequality (VI) problem. To the best of our knowledge, no convergence rate result is known in the literature for inertial-type proximal point methods without stronger assumptions. Under certain conditions, we establish the global convergence and an o(1/k) convergence rate (with respect to a certain measure) of the proposed general inertial proximal point method. We then show that the linearized alternating direction method of multipliers (ADMM) for separable convex optimization with linear constraints is an application of a general proximal point method, provided that the algorithmic parameters are properly chosen. As byproducts of this finding, we establish global convergence and O(1/k) convergence rate results for the linearized ADMM in both the ergodic and the non-ergodic sense. In particular, by applying the proposed inertial proximal point method for mixed VIs to linearly constrained separable convex optimization, we obtain an inertial version of the linearized ADMM for which global convergence is guaranteed. We also demonstrate the effect of the inertial extrapolation step via experimental results on the compressive principal component pursuit problem.
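The inertial proximal point step the abstract refers to can be sketched as below. The monotone operator, its resolvent, and the constant inertial parameter `alpha` are illustrative assumptions, not the paper's general conditions on the parameter sequence.

```python
import numpy as np

def soft(v, t):
    # soft thresholding: proximal map of t*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inertial_ppa(resolvent, x0, alpha=0.3, iters=200):
    """General inertial proximal point iteration for 0 in T(x):
         y_k     = x_k + alpha * (x_k - x_{k-1})   (extrapolation)
         x_{k+1} = J_{lam*T}(y_k)                  (proximal step)
    The step size lam is assumed baked into `resolvent`."""
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        y = x + alpha * (x - x_prev)
        x_prev, x = x, resolvent(y)
    return x

# Toy monotone operator: T = subdifferential of ||x - c||_1, whose
# resolvent with step lam = 1 is y -> c + soft(y - c, 1); its unique
# zero (the solution of the associated problem) is x = c.
c = np.array([1.5, -0.5, 0.0])
x = inertial_ppa(lambda y: c + soft(y - c, 1.0), np.zeros(3))
```

The classical proximal point method is recovered by setting `alpha = 0`; the extrapolation reuses the direction of the last movement before each backward step.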
Inertial proximal ADMM for linearly constrained separable convex optimization
Abstract. The alternating direction method of multipliers (ADMM) is a popular and efficient first-order method that has recently found numerous applications, and the proximal ADMM is an important variant of it. The main contributions of this paper are the proposition and analysis of a class of inertial proximal ADMMs, which unify the basic ideas of the inertial proximal point method and the proximal ADMM, for linearly constrained separable convex optimization. This class of methods is of inertial nature because, at each iteration, the proximal ADMM is applied to a point extrapolated from the current iterate in the direction of the last movement. The recently proposed inertial primal-dual algorithm [1, Algorithm 3] and the inertial linearized ADMM [2, Eq. (3.23)] are covered as special cases. The proposed algorithmic framework is very general in the sense that the weighting matrices in the proximal terms are allowed to be merely positive semidefinite, not necessarily positive definite as required by existing methods of the same kind. By setting the two proximal terms to zero, we obtain an inertial variant of the classical ADMM, which is new to the best of our knowledge. We carry out a unified analysis for the entire class of methods under very mild assumptions. In particular, convergence, as well as asymptotic o(1/k) and nonasymptotic O(1/k) rates of convergence, are established for the best primal function value and feasibility residues, where k denotes the iteration counter. The global convergence of the generated sequence of iterates is established under an additional assumption. We also ...
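The extrapolate-then-ADMM pattern described above can be sketched on a toy problem. The sketch below is the zero-proximal-term special case (an inertial variant of the classical ADMM) for min 0.5*||x-d||^2 + tau*||z||_1 s.t. x = z; the problem data and the constant inertial parameter `alpha` are illustrative assumptions.

```python
import numpy as np

def soft(v, t):
    # proximal map of t*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inertial_admm(d, tau, rho=1.0, alpha=0.2, iters=3000):
    """Inertial ADMM sketch for min 0.5*||x-d||^2 + tau*||z||_1 s.t. x = z:
    each sweep applies the classical ADMM updates to the pair (z, u)
    extrapolated in the direction of its last movement."""
    z = np.zeros_like(d)
    u = np.zeros_like(d)
    z_prev, u_prev = z.copy(), u.copy()
    for _ in range(iters):
        zh = z + alpha * (z - z_prev)              # inertial extrapolation
        uh = u + alpha * (u - u_prev)
        z_prev, u_prev = z, u
        x = (d + rho * (zh - uh)) / (1.0 + rho)    # x-minimization step
        z = soft(x + uh, tau / rho)                # z-minimization step
        u = uh + x - z                             # dual (multiplier) update
    return x, z

d = np.array([2.0, 0.3, -1.0])
tau = 0.5
x, z = inertial_admm(d, tau)
# the minimizer of 0.5*||x-d||^2 + tau*||x||_1 is soft(d, tau)
```

Reinstating positive-semidefinite proximal terms in the x- and z-subproblems would recover the general framework described in the abstract.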
An inertial alternating direction method of multipliers
, 2014
Abstract. In the context of convex optimization problems in Hilbert spaces, we induce inertial effects into the classical ADMM numerical scheme and in this way obtain so-called inertial ADMM algorithms, whose convergence properties we investigate in detail. To this aim we make use of the inertial version of the Douglas-Rachford splitting method for monotone inclusion problems recently introduced in [12], in the context of simultaneously solving a convex minimization problem and its Fenchel dual. The convergence of both the sequence of generated iterates and the sequence of objective function values is addressed. We also show how the obtained results can be extended to the treatment of convex minimization problems whose objective is a finite sum of convex functions.
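The inertial Douglas-Rachford mechanism underlying these inertial ADMM algorithms can be sketched on a toy primal problem min f(x) + g(x) with f = 0.5*||x-d||^2 and g = tau*||.||_1. The test data, step size `gamma`, and constant inertial parameter `alpha` are illustrative assumptions, not the scheme of [12].

```python
import numpy as np

def soft(v, t):
    # proximal map of t*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inertial_dr(prox_f, prox_g, w0, alpha=0.1, iters=2000):
    """Inertial Douglas-Rachford iteration for min f(x) + g(x):
    the governing sequence w is extrapolated before each DR sweep;
    the shadow sequence x = prox_f(w) approaches a minimizer."""
    w_prev, w = w0.copy(), w0.copy()
    for _ in range(iters):
        y = w + alpha * (w - w_prev)   # inertial extrapolation
        x = prox_f(y)                  # backward step on f
        z = prox_g(2.0 * x - y)        # reflected backward step on g
        w_prev, w = w, y + z - x       # governing-sequence update
    return prox_f(w)

# Toy problem: min 0.5*||x-d||^2 + tau*||x||_1, minimizer soft(d, tau).
d = np.array([2.0, -0.3, 1.0])
tau, gamma = 0.8, 1.0
prox_f = lambda v: (v + gamma * d) / (1.0 + gamma)   # prox of gamma*f
prox_g = lambda v: soft(v, gamma * tau)              # prox of gamma*g
x = inertial_dr(prox_f, prox_g, np.zeros(3))
```

Applying this iteration to the Fenchel dual of a linearly constrained problem is what yields an inertial ADMM, mirroring the classical DR-ADMM correspondence.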
An Inertial Tseng’s Type Proximal Algorithm for Nonsmooth and Nonconvex Optimization Problems
, 2015
Abstract. We investigate the convergence of a forward-backward-forward proximal-type algorithm with inertial and memory effects when minimizing the sum of a nonsmooth function and a smooth one in the absence of convexity. Convergence is obtained provided an appropriate regularization of the objective satisfies the Kurdyka-Łojasiewicz inequality, which is, for instance, fulfilled for semi-algebraic functions.
An inertial forward-backward algorithm for the minimization of the sum of two nonconvex functions
, 2015
Abstract. We propose a forward-backward proximal-type algorithm with inertial/memory effects for minimizing the sum of a nonsmooth function and a smooth one in the nonconvex setting. Every sequence of iterates generated by the algorithm converges to a critical point of the objective function provided an appropriate regularization of the objective satisfies the Kurdyka-Łojasiewicz inequality, which is, for instance, fulfilled for semi-algebraic functions. We illustrate the theoretical results with two numerical experiments: the first concerns the ability to recover the local optimal solutions of nonconvex optimization problems, while the second deals with the restoration of a noisy blurred image.
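The inertial forward-backward iteration described above can be sketched in the nonconvex setting with g chosen as an l0 penalty, a nonconvex semi-algebraic function with an explicit proximal map (hard thresholding). The quadratic test problem and the parameters `lam`, `mu`, `beta` are illustrative assumptions; the paper's step-size conditions are not reproduced here.

```python
import numpy as np

def hard(v, thr):
    # proximal map of mu*||.||_0 with thr = sqrt(2*lam*mu): hard thresholding
    return np.where(np.abs(v) > thr, v, 0.0)

def inertial_fb(grad_f, prox_g, x0, lam, beta, iters=500):
    """Inertial forward-backward iteration for min f(x) + g(x),
    f smooth, g nonsmooth (here nonconvex):
        y_k     = x_k + beta * (x_k - x_{k-1})
        x_{k+1} = prox_{lam*g}(y_k - lam * grad_f(y_k))"""
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        y = x + beta * (x - x_prev)
        x_prev, x = x, prox_g(y - lam * grad_f(y))
    return x

# Toy nonconvex instance: f(x) = 0.5*x'Qx - b'x and g = mu*||x||_0.
Q = np.array([[1.5, 0.2], [0.2, 1.0]])
b = np.array([3.0, 0.1])
lam, mu, beta = 0.3, 0.5, 0.2
thr = np.sqrt(2.0 * lam * mu)
grad_f = lambda x: Q @ x - b
x = inertial_fb(grad_f, lambda v: hard(v, thr), np.zeros(2), lam, beta)
# a vanishing fixed-point residual certifies a critical point of f + g
residual = np.linalg.norm(x - hard(x - lam * grad_f(x), thr))
```

In the convex case this reduces to the usual inertial proximal-gradient scheme; here the limit is only guaranteed to be a critical point, which is exactly the setting where the Kurdyka-Łojasiewicz property is invoked.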