
CiteSeerX

“A Class of Randomized Primal-Dual Algorithms for Distributed Optimization,” arXiv e-prints (2014)

by J.-C. Pesquet and A. Repetti
Results 1–2 of 2

A stochastic coordinate descent primal-dual algorithm and applications to large-scale composite optimization

by P. Bianchi, W. Hachem, F. Iutzeler, 2014
Abstract - Cited by 5 (2 self)
Abstract: Based on the idea of randomized coordinate descent of α-averaged operators, a randomized primal-dual optimization algorithm is introduced, where a random subset of coordinates is updated at each iteration. The algorithm builds upon a variant of a recent (deterministic) algorithm proposed by Vũ and Condat that includes the well-known ADMM as a particular case. The obtained algorithm is used to solve a distributed optimization problem asynchronously. A network of agents, each having a separate cost function containing a differentiable term, seeks a consensus on the minimum of the aggregate objective. The method yields an algorithm where at each iteration a random subset of agents wake up, update their local estimates, exchange some data with their neighbors, and go idle. Numerical results demonstrate the attractive performance of the method. The general approach can be naturally adapted to other situations where coordinate descent convex optimization algorithms are used with a random choice of the coordinates.
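The asynchronous pattern described in this abstract (random agents wake up, exchange data with neighbors, then go idle) can be illustrated with a far simpler scheme than the paper's primal-dual method. In the hypothetical sketch below, each agent n holds the quadratic local cost f_n(x) = ½(x − c_n)², so the minimizer of the aggregate objective is just the mean of the c_n, and randomized gossip averaging over a ring network reaches consensus on it:

```python
import numpy as np

# Minimal sketch (NOT the paper's algorithm): with local costs
# f_n(x) = 0.5 * (x - c_n)^2, the aggregate minimizer is mean(c).
# At each iteration one random agent wakes up, averages its estimate
# with its ring neighbor, and goes idle.
rng = np.random.default_rng(0)
N = 10
c = rng.normal(size=N)            # local cost minimizers
x = c.copy()                      # each agent starts at its own minimizer

for k in range(5000):
    n = rng.integers(N)           # a random agent wakes up
    m = (n + 1) % N               # its neighbor on the ring
    x[n] = x[m] = 0.5 * (x[n] + x[m])   # exchange data and average

print(np.allclose(x, c.mean(), atol=1e-8))
```

This is plain pairwise averaging, not the randomized coordinate descent of α-averaged operators used in the paper; it only shows how random, asynchronous local updates can drive all agents to the aggregate minimizer.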

Citation Context

...collections of sets {A1, A2, . . .} such that P[ξ1 = Ai] is positive satisfies ⋃i Ai = V. In other words, any agent is selected with a positive probability. The following theorem is proven in Appendix D.

Theorem 5. Let Assumptions 4, 5, and 6 hold true. Assume that condition (13) holds true. Let (x_n^{k+1})_{n∈V} be the output of the DAPD algorithm. For any initial value (x^0, λ^0), the sequences x_1^k, . . . , x_N^k converge almost surely as k → ∞ to a random variable x* supported by the set of minimizers of Problem (10).

Before turning to the numerical illustrations, we note that the very recent paper [36] also deals with asynchronous primal-dual distributed algorithms by relying on the idea of random coordinate descent.

V. NUMERICAL ILLUSTRATIONS

We address the problem of the so-called ℓ2-regularized logistic regression. Denoting by m the number of observations and by p the number of features, our optimization problem is written

min_{x ∈ R^p} (1/m) Σ_{t=1}^m log(1 + e^{−y_t a_t^T x}) + μ‖x‖²

where the (y_t)_{t=1}^m are in {−1, +1}, the (a_t)_{t=1}^m are in R^p, and μ > 0 is a scalar. We consider the case where the dataset is scattered over a network. Indeed, massive data sets are often distributed on different phys...
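The ℓ2-regularized logistic regression objective in this citation context is easy to set up and check numerically. The sketch below uses synthetic data and plain gradient descent (rather than the distributed primal-dual methods the papers study) to implement the objective and its gradient exactly as written:

```python
import numpy as np

# Objective: (1/m) sum_t log(1 + exp(-y_t * a_t^T x)) + mu * ||x||^2
# Data here are synthetic illustrations, not from the paper.
rng = np.random.default_rng(1)
m, p = 200, 5
A = rng.normal(size=(m, p))            # rows are the feature vectors a_t
y = np.sign(A @ rng.normal(size=p))    # labels in {-1, +1}
mu = 0.1

def objective(x):
    return np.mean(np.log1p(np.exp(-y * (A @ x)))) + mu * x @ x

def gradient(x):
    # d/dz log(1 + e^{-z}) = -1 / (1 + e^{z}), evaluated at z = y_t a_t^T x
    s = -y / (1.0 + np.exp(y * (A @ x)))
    return A.T @ (s / m) + 2 * mu * x

x = np.zeros(p)
for _ in range(500):
    x -= 0.5 * gradient(x)             # fixed stepsize; adequate at this scale

print(np.linalg.norm(gradient(x)))     # near-stationary after 500 steps
```

Since the regularizer makes the problem strongly convex, gradient descent with a small fixed stepsize converges linearly here; the cited works tackle the harder setting where the data are scattered across a network.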

CONVERGENCE RATE ANALYSIS OF PRIMAL-DUAL SPLITTING SCHEMES

by unknown authors
Abstract
Abstract. Primal-dual splitting schemes are a class of powerful algorithms that solve complicated monotone inclusions and convex optimization problems that are built from many simpler pieces. They decompose problems that are built from sums, linear compositions, and infimal convolutions of simple functions so that each simple term is processed individually via proximal mappings, gradient mappings, and multiplications by the linear maps. This leads to easily implementable and highly parallelizable or distributed algorithms, which often obtain nearly state-of-the-art performance. In this paper, we analyze a monotone inclusion problem that captures a large class of primal-dual splittings as a special case. We introduce a unifying scheme and use some abstract analysis of the algorithm to prove convergence rates of the proximal point algorithm, forward-backward splitting, Peaceman-Rachford splitting, and forward-backward-forward splitting applied to the model problem. Our ergodic convergence rates are deduced under variable metrics, stepsizes, and relaxation. Our nonergodic convergence rates are the first shown in the literature. Finally, we apply our results to a large class of primal-dual algorithms that are a special case of our scheme and deduce their convergence rates.
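As a concrete instance of the forward-backward splitting whose rates this abstract analyzes, the sketch below applies it to the lasso problem min_x ½‖Ax − b‖² + λ‖x‖₁: a forward (gradient) step on the smooth term, then a backward (proximal) step on the ℓ1 term via soft-thresholding. The problem data are synthetic illustrations, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(30, 10))
b = rng.normal(size=30)
lam = 1.0
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L with L = ||A||^2 (spectral norm squared)

def soft_threshold(v, t):
    # prox of t*||.||_1: shrink each entry toward zero by t
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(10)
for _ in range(2000):
    grad = A.T @ (A @ x - b)                       # forward step on 0.5*||Ax - b||^2
    x = soft_threshold(x - step * grad, step * lam)  # backward (proximal) step

# Lasso optimality: |A^T (Ax - b)| <= lam entrywise, with equality on the support of x.
r = A.T @ (A @ x - b)
print(np.all(np.abs(r) <= lam + 1e-6))
```

With A having full column rank the smooth term is strongly convex, so the iterates converge linearly; the paper's contribution is precisely the ergodic and nonergodic rate analysis of this and related splittings in far greater generality.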

Citation Context

... schemes

Reference                 Algorithm  Metric  Level  w    Rates
[15, Algorithm 1]         PPA        (5.13)  1      1    O(1/(k+1)) ergodic [15]
[9, Algorithm 2.2]        PPA        (5.15)  2      1    none
[22, 46]                  FBS        (5.13)  1      1    O(1/(k+1)) ergodic [6]
[16, 18, 39]              FBS        (5.17)  1      1    none
[9, Algorithm 2.1]        PRS        (5.13)  1      1/2  none
[12, Remark 2.9], [35]    PRS        (5.17)  1      0    none
[12, 19]                  FBF        (5.17)  1      0    O(1/(k+1)) ergodic [11]

Table 1: This table lists the original appeara...


Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University