Results 1 - 5 of 5
A general inertial proximal point method for mixed variational inequality problem
Cited by 4 (2 self)
Abstract. In this paper, we first propose a general inertial proximal point method for the mixed variational inequality (VI) problem. To the best of our knowledge, no convergence rate result is known in the literature for inertial-type proximal point methods without stronger assumptions. Under certain conditions, we establish the global convergence and an o(1/k) convergence rate result (under a certain measure) for the proposed general inertial proximal point method. We then show that the linearized alternating direction method of multipliers (ADMM) for separable convex optimization with linear constraints is an application of a general proximal point method, provided that the algorithmic parameters are properly chosen. As byproducts of this finding, we establish global convergence and O(1/k) convergence rate results for the linearized ADMM in both the ergodic and the nonergodic sense. In particular, by applying the proposed inertial proximal point method for mixed VI to linearly constrained separable convex optimization, we obtain an inertial version of the linearized ADMM for which global convergence is guaranteed. We also demonstrate the effect of the inertial extrapolation step via experimental results on the compressive principal component pursuit problem.
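The core update this abstract describes, a proximal step taken from an inertially extrapolated point, can be sketched on a toy problem. The sketch below minimizes |x| (whose proximal map is soft-thresholding); the problem instance and the parameter values lam and alpha are illustrative assumptions, not the paper's general mixed-VI setting or its analyzed conditions:

```python
import numpy as np

def prox_abs(v, lam):
    # Proximal map of lam*|.| : soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def inertial_ppa(x0, lam=1.0, alpha=0.3, iters=50):
    """Inertial proximal point sketch for min_x |x|:
       y_k     = x_k + alpha*(x_k - x_{k-1})   (inertial extrapolation)
       x_{k+1} = prox_{lam*f}(y_k)             (proximal step)
    """
    x_prev = x = float(x0)
    for _ in range(iters):
        y = x + alpha * (x - x_prev)   # extrapolate along the last movement
        x_prev, x = x, prox_abs(y, lam)
    return x

print(inertial_ppa(5.0))  # converges to the minimizer 0
```

Setting alpha to 0 recovers the classical proximal point iteration; the inertial step reuses the direction of the previous movement before applying the proximal operator.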
Inertial primal-dual algorithms for structured convex optimization
Cited by 2 (0 self)
... that CPA and its variants are closely related to preconditioned versions of the popular alternating direction method of multipliers (abbreviated as ADM). In this paper, we further clarify this connection and show that CPAs generate exactly the same sequence of points as the so-called linearized ADM (abbreviated as LADM) applied to either the primal problem or its Lagrangian dual, depending on the updating order of the primal and the dual variables in CPAs, as long as the initial points for the LADM are properly chosen. The dependence on initial points for the LADM can be relaxed by focusing on cyclically equivalent forms of the algorithms. Furthermore, by utilizing the fact that CPAs are applications of a general weighted proximal point method to the mixed variational inequality formulation of the KKT system, where the weighting matrix is positive definite under a parameter condition, we are able to propose and analyze inertial variants of CPAs. Under certain conditions, global point-convergence, nonasymptotic O(1/k) and asymptotic o(1/k) convergence rates of the proposed inertial CPAs can be guaranteed, where k denotes the iteration index. Finally, we demonstrate the benefits gained by introducing the inertial extrapolation step via experimental results on compressive image reconstruction based on total variation minimization.
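An inertial CPA-type iteration, a primal-dual step applied from extrapolated primal and dual points, can be sketched on 1D total-variation denoising. The problem instance, step sizes tau and sigma (chosen so that tau*sigma*||D||^2 < 1), and the extrapolation coefficient alpha are illustrative assumptions, not the conditions analyzed in the paper:

```python
import numpy as np

def inertial_cpa_tv(b, mu=0.5, tau=0.45, sigma=0.45, alpha=0.2, iters=300):
    """Inertial Chambolle-Pock-type sketch for
         min_x 0.5*||x - b||^2 + mu*||D x||_1   (1D TV denoising),
    where D is the forward-difference operator."""
    n = b.size
    D = np.diff(np.eye(n), axis=0)                # (n-1) x n difference matrix
    x, x_prev = b.copy(), b.copy()                # primal iterate and predecessor
    y, y_prev = np.zeros(n - 1), np.zeros(n - 1)  # dual iterate and predecessor
    for _ in range(iters):
        xh = x + alpha * (x - x_prev)             # inertial extrapolation
        yh = y + alpha * (y - y_prev)
        x_prev, y_prev = x, y
        # primal prox: f(x) = 0.5||x - b||^2  =>  prox_{tau f}(v) = (v + tau*b)/(1 + tau)
        x = (xh - tau * D.T @ yh + tau * b) / (1.0 + tau)
        # dual prox: g*(y) = indicator{||y||_inf <= mu}  =>  clipping
        y = np.clip(yh + sigma * D @ (2 * x - xh), -mu, mu)
    return x

b = np.array([0., 0., 0., 1., 0., 0., 0.])
x = inertial_cpa_tv(b)   # the isolated spike in b is shrunk toward its neighbours
```

With alpha set to 0 this reduces to a standard primal-dual (CPA) iteration; the dense identity-based D is for clarity only and would be a sparse operator in practice.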
An Iteratively Reweighted Algorithm for Nonsmooth Nonconvex Optimization in Computer Vision
, 2014
Cited by 2 (0 self)
Natural image statistics indicate that we should use nonconvex norms for most regularization tasks in image processing and computer vision. Still, they are rarely used in practice due to the challenge of optimization. Recently, iteratively reweighted ℓ1 minimization (IRL1) has been proposed as a way to tackle a class of nonconvex functions by solving a sequence of convex ℓ2-ℓ1 problems. We extend the problem class to the sum of a convex function and a (nonconvex) nondecreasing function applied to another convex function. The proposed algorithm sequentially optimizes suitably constructed convex majorizers. Convergence to a critical point is proved when the Kurdyka-Łojasiewicz property and additional mild restrictions hold for the objective function. The efficiency and practical importance of the algorithm are demonstrated in computer vision tasks such as image denoising and optical flow. Most applications seek smooth results with sharp discontinuities. This is achieved by combining nonconvexity ...
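The IRL1 idea, majorizing a nonconvex penalty by a weighted ℓ1 term at the current iterate and solving the resulting convex subproblem, can be sketched on a separable instance. The penalty log(1 + |t|/eps), the data term, and the values of lam and eps are illustrative assumptions, not the paper's general model; for this instance each convex subproblem has a closed-form weighted soft-thresholding solution:

```python
import numpy as np

def soft(v, t):
    # Elementwise soft-thresholding: prox of t*|.|
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def irl1(b, lam=0.5, eps=0.1, iters=30):
    """IRL1 sketch for min_x 0.5*||x - b||^2 + lam * sum_i log(1 + |x_i|/eps).
    The concave penalty log(1 + |t|/eps) is majorized at x^k by the
    weighted-l1 term |t| / (eps + |x_i^k|) + const, so each subproblem
    is a weighted soft-thresholding."""
    x = b.copy()
    for _ in range(iters):
        w = 1.0 / (eps + np.abs(x))   # reweighting from the current iterate
        x = soft(b, lam * w)          # closed-form convex subproblem
    return x

print(irl1(np.array([3.0, 0.4, -2.0, 0.05])))
```

Small entries are driven exactly to zero (their weights grow as the iterate shrinks), while large entries incur only a mild, shrinking bias, which is the qualitative behaviour that motivates nonconvex penalties.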
INERTIAL PROXIMAL ADMM FOR LINEARLY CONSTRAINED SEPARABLE CONVEX OPTIMIZATION
Abstract. The alternating direction method of multipliers (ADMM) is a popular and efficient first-order method that has recently found numerous applications, and the proximal ADMM is an important variant of it. The main contributions of this paper are the proposition and the analysis of a class of inertial proximal ADMMs, which unify the basic ideas of the inertial proximal point method and the proximal ADMM, for linearly constrained separable convex optimization. This class of methods is of inertial nature because at each iteration the proximal ADMM is applied to a point extrapolated at the current iterate in the direction of the last movement. The recently proposed inertial primal-dual algorithm [1, Algorithm 3] and the inertial linearized ADMM [2, Eq. (3.23)] are covered as special cases. The proposed algorithmic framework is very general in the sense that the weighting matrices in the proximal terms are allowed to be only positive semidefinite, not necessarily positive definite as required by existing methods of the same kind. By setting the two proximal terms to zero, we obtain an inertial variant of the classical ADMM, which is new to the best of our knowledge. We carry out a unified analysis for the entire class of methods under very mild assumptions. In particular, convergence, as well as asymptotic o(1/√k) and nonasymptotic O(1/√k) rates of convergence, are established for the best primal function value and feasibility residues, where k denotes the iteration counter. The global iterate convergence of the generated sequence is established under an additional assumption. We also ...
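The inertial mechanism described here, one (proximal) ADMM step taken from a point extrapolated along the last movement, can be sketched with both proximal terms set to zero, i.e. an inertial classical ADMM. The toy problem min 0.5*||x - c||^2 + mu*||z||_1 subject to x = z, the choice of extrapolated variables (z, u), and the values of rho and alpha are illustrative assumptions:

```python
import numpy as np

def soft(v, t):
    # Elementwise soft-thresholding: prox of t*|.|
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inertial_admm(c, mu=0.5, rho=1.0, alpha=0.2, iters=500):
    """Inertial ADMM sketch for min 0.5*||x - c||^2 + mu*||z||_1, s.t. x = z.
    Each iteration extrapolates (z, u) along the last movement, then runs
    one classical ADMM step (both proximal terms set to zero)."""
    z = z_prev = np.zeros_like(c)
    u = u_prev = np.zeros_like(c)               # scaled dual variable
    for _ in range(iters):
        zh = z + alpha * (z - z_prev)           # inertial extrapolation
        uh = u + alpha * (u - u_prev)
        z_prev, u_prev = z, u
        x = (c + rho * (zh - uh)) / (1.0 + rho) # x-subproblem (quadratic)
        z = soft(x + uh, mu / rho)              # z-subproblem (shrinkage)
        u = uh + x - z                          # dual update
    return z

# Analytic solution of this instance is soft(c, mu) = [1.5, 0, -0.5].
print(inertial_admm(np.array([2.0, 0.3, -1.0])))
```

With alpha set to 0 this is the textbook scaled-form ADMM; the extrapolation reuses the previous direction of (z, u) before the three standard subproblem updates.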
Cyclic Schemes for PDE-Based Image Analysis
We investigate a class of efficient numerical algorithms for many partial differential equations (PDEs) in image analysis. They are applicable to parabolic or elliptic PDEs that have bounded coefficients and lead to space discretisations with symmetric matrices. Our schemes are easy to implement and well-suited for parallel implementations on GPUs, since they are based on the explicit diffusion scheme in the parabolic case, and the Jacobi method in the elliptic case. By supplementing these methods with cyclically varying time step sizes or relaxation parameters, we achieve efficiency gains of several orders of magnitude. We call the resulting algorithms Fast Explicit Diffusion (FED) and Fast Jacobi (FJ) methods. To achieve a good compromise between efficiency and accuracy, we show that one should use parameter cycles that result from factorisations of box filters. For these cycles we establish stability results in the Euclidean norm. Our ...
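The cyclic step-size idea can be illustrated for explicit 1D homogeneous diffusion. The sketch below uses the FED step sizes derived from the box-filter factorisation, tau_i = tau_max / (2 cos^2(pi (2i+1) / (4n+2))), whose sum over a cycle is tau_max*(n^2+n)/3; the 1D setting, grid, and boundary handling are simplifying assumptions made for this example, not the paper's general PDE setting:

```python
import numpy as np

def fed_taus(n, tau_max):
    # FED step sizes from the box-filter factorisation:
    #   tau_i = tau_max / (2 cos^2(pi (2i+1) / (4n+2))), i = 0..n-1.
    # Individual steps may exceed the explicit stability limit tau_max;
    # only the full cycle is stable in the Euclidean norm.
    i = np.arange(n)
    return tau_max / (2.0 * np.cos(np.pi * (2 * i + 1) / (4 * n + 2)) ** 2)

def fed_cycle(u, taus):
    # One FED cycle of explicit 1D homogeneous diffusion u_t = u_xx
    # with reflecting (Neumann) boundary conditions.
    for tau in taus:
        lap = np.zeros_like(u)
        lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]
        lap[0] = u[1] - u[0]
        lap[-1] = u[-2] - u[-1]
        u = u + tau * lap
    return u

taus = fed_taus(5, 0.5)   # 0.5 is the 1D explicit stability limit
u = fed_cycle(np.array([0., 1., 0., 1., 0., 1., 0., 1.]), taus)
```

One cycle of n FED steps covers the diffusion time of roughly n^2/3 ordinary explicit steps, which is the source of the efficiency gain; the average grey value is preserved exactly, while high-frequency oscillations are strongly damped.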