Results 1 - 2 of 2
A stochastic coordinate descent primal-dual algorithm and applications to large-scale composite optimization, 2014
"... AbstractBased on the idea of randomized coordinate descent of αaveraged operators, a randomized primaldual optimization algorithm is introduced, where a random subset of coordinates is updated at each iteration. The algorithm builds upon a variant of a recent (deterministic) algorithm proposed b ..."
Abstract

Cited by 5 (2 self)
Based on the idea of randomized coordinate descent of α-averaged operators, a randomized primal-dual optimization algorithm is introduced, where a random subset of coordinates is updated at each iteration. The algorithm builds upon a variant of a recent (deterministic) algorithm proposed by Vũ and Condat that includes the well-known ADMM as a particular case. The obtained algorithm is used to solve a distributed optimization problem asynchronously. A network of agents, each having a separate cost function containing a differentiable term, seeks to find a consensus on the minimum of the aggregate objective. The method yields an algorithm where, at each iteration, a random subset of agents wake up, update their local estimates, exchange some data with their neighbors, and go idle. Numerical results demonstrate the attractive performance of the method. The general approach can be naturally adapted to other situations where coordinate descent convex optimization algorithms are used with a random choice of the coordinates.
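The random-subset update pattern the abstract describes can be sketched as a fixed-point iteration of an averaged operator in which only randomly chosen coordinates change at each step. The sketch below is a minimal illustration of that idea, not the paper's algorithm; the operator `T` (a gradient step on a toy least-squares objective) and all names are assumptions made for the example.

```python
import numpy as np

def randomized_coordinate_fixed_point(T, x0, p=0.3, iters=2000, seed=0):
    """Fixed-point iteration of an averaged operator T in which, at each
    iteration, only a random subset of coordinates is updated and the
    remaining coordinates stay idle."""
    rng = np.random.default_rng(seed)
    x = x0.astype(float).copy()
    for _ in range(iters):
        tx = T(x)                        # full operator evaluation (for clarity only)
        mask = rng.random(x.size) < p    # random subset of coordinates to update
        x[mask] = tx[mask]               # coordinates outside the subset are unchanged
    return x

# Toy example (an assumption for illustration): a gradient step on
# f(x) = 0.5 * ||A x - b||^2 is an averaged operator for a small step size.
A = np.diag([2.0, 1.0])
b = np.array([2.0, 3.0])
gamma = 0.2
T = lambda x: x - gamma * A.T @ (A @ x - b)

x_star = randomized_coordinate_fixed_point(T, np.zeros(2))  # converges to [1, 3]
```

With a diagonal `A`, each coordinate contracts independently toward the solution of `A x = b`, so even with only a fraction of the coordinates updated per iteration the iterates converge to the same fixed point.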
CONVERGENCE RATE ANALYSIS OF PRIMAL-DUAL SPLITTING SCHEMES
"... Abstract. Primaldual splitting schemes are a class of powerful algorithms that solve complicated monotone inclusions and convex optimization problems that are built from many simpler pieces. They decompose problems that are built from sums, linear compositions, and infimal convolutions of simple f ..."
Abstract
Primal-dual splitting schemes are a class of powerful algorithms that solve complicated monotone inclusions and convex optimization problems built from many simpler pieces. They decompose problems built from sums, linear compositions, and infimal convolutions of simple functions so that each simple term is processed individually via proximal mappings, gradient mappings, and multiplications by the linear maps. This leads to easily implementable and highly parallelizable or distributed algorithms, which often obtain nearly state-of-the-art performance. In this paper, we analyze a monotone inclusion problem that captures a large class of primal-dual splittings as a special case. We introduce a unifying scheme and use abstract analysis of the algorithm to prove convergence rates of the proximal point algorithm, forward-backward splitting, Peaceman-Rachford splitting, and forward-backward-forward splitting applied to the model problem. Our ergodic convergence rates are deduced under variable metrics, step sizes, and relaxation. Our nonergodic convergence rates are the first shown in the literature. Finally, we apply our results to a large class of primal-dual algorithms that are special cases of our scheme and deduce their convergence rates.
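Of the schemes named in the abstract, forward-backward splitting is the easiest to sketch: a gradient (forward) step on the smooth term followed by a proximal (backward) step on the nonsmooth term. The sketch below applies it to an l1-regularized least-squares problem; the problem instance, step-size choice, and all names are assumptions for the illustration, not taken from the paper.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal mapping of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(A, b, lam, iters=500):
    """Forward-backward splitting for min_x 0.5*||A x - b||^2 + lam*||x||_1:
    each iteration takes a gradient step on the smooth quadratic term,
    then a proximal step on the nonsmooth l1 term."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    gamma = 1.0 / L                    # step size in (0, 2/L)
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                            # forward (gradient) step
        x = soft_threshold(x - gamma * grad, gamma * lam)   # backward (prox) step
    return x

# Tiny sanity check: with A = I the solution is soft_threshold(b, lam).
A = np.eye(2)
b = np.array([3.0, 0.1])
x_hat = forward_backward(A, b, lam=1.0)  # converges to [2.0, 0.0]
```

Each simple term is handled by exactly the operation the abstract mentions: the quadratic via its gradient mapping and the l1 norm via its proximal mapping, which is what makes such schemes easy to implement and parallelize.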