A stochastic coordinate descent primal-dual algorithm and applications to large-scale composite optimization (2014)

by P Bianchi, W Hachem, F Iutzeler
Results 1 - 5 of 5

A Class of Randomized Primal-Dual Algorithms for Distributed Optimization, arXiv preprint arXiv:1406.6404v3

by Jean-Christophe Pesquet, Audrey Repetti, 2014
"... Abstract Based on a preconditioned version of the randomized block-coordinate forward-backward algorithm recently proposed in ..."
Abstract - Cited by 2 (0 self) - Add to MetaCart
Abstract Based on a preconditioned version of the randomized block-coordinate forward-backward algorithm recently proposed in

Success and Failure of Adaptation-Diffusion Algorithms for Consensus in Multi-Agent Networks

by Gemma Morral, Pascal Bianchi, Gersende Fort
"... Abstract—This paper investigates the problem of distributed stochastic approximation in multi-agent systems. The algorithm under study consists of two steps: a local stochastic approxi-mation step and a diffusion step which drives the network to a consensus. The diffusion step uses row-stochastic ma ..."
Abstract - Cited by 1 (1 self) - Add to MetaCart
Abstract—This paper investigates the problem of distributed stochastic approximation in multi-agent systems. The algorithm under study consists of two steps: a local stochastic approximation step and a diffusion step which drives the network to a consensus. The diffusion step uses row-stochastic matrices to weight the network exchanges. As opposed to previous works, the exchange matrices are not assumed to be doubly stochastic, and may also depend on the past estimate. We prove that non-doubly stochastic matrices generally influence the limit points of the algorithm. Nevertheless, the limit points are not affected by the choice of the matrices provided that the latter are doubly stochastic in expectation. This conclusion legitimates the use of broadcast-like diffusion protocols, which are easier to implement. Next, by means of a central limit theorem, we prove that doubly stochastic protocols perform asymptotically as well as centralized algorithms, and we quantify the degradation caused by the use of non-doubly stochastic matrices. Throughout the paper, a special emphasis is put on the special case of distributed non-convex optimization as an illustration of our results.
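The broadcast-like protocols mentioned in the abstract can be made concrete with a small sketch (my own toy construction, not code from the paper): on a complete 3-node graph, the exchange matrix used when a single node broadcasts is row-stochastic but not doubly stochastic, while its average over a uniformly chosen broadcaster is doubly stochastic, which is the property the abstract identifies as sufficient for unbiased limit points. The graph, the mixing weight `beta`, and the matrix construction are all illustrative assumptions.

```python
# Sketch (not from the paper): broadcast-gossip exchange matrices on a
# complete 3-node graph. Each matrix is row-stochastic; only their average
# over the (uniformly chosen) broadcasting node is doubly stochastic.

n, beta = 3, 0.5

def broadcast_matrix(i):
    """Exchange matrix when node i broadcasts: every other node j mixes
    a fraction beta of node i's value into its own."""
    W = [[0.0] * n for _ in range(n)]
    for j in range(n):
        if j == i:
            W[j][j] = 1.0            # the broadcaster keeps its value
        else:
            W[j][j] = 1.0 - beta     # a listener keeps part of its value...
            W[j][i] = beta           # ...and mixes in the broadcast value
    return W

mats = [broadcast_matrix(i) for i in range(n)]
expected = [[sum(W[r][c] for W in mats) / n for c in range(n)] for r in range(n)]

row_sums_each = [[sum(row) for row in W] for W in mats]
col_sums_each = [[sum(W[r][c] for r in range(n)) for c in range(n)] for W in mats]
col_sums_mean = [sum(expected[r][c] for r in range(n)) for c in range(n)]
```

Every individual matrix has unit row sums but unbalanced column sums (the broadcaster's column is inflated), whereas the expectation over the broadcasting node has both row and column sums equal to one.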

Citation Context

...optimize some objective function known by all agents (possibly up to some additive noise). More recently, numerous works extended this kind of algorithm to more involved multi-agent scenarios, see [7]–[19] as a non-exhaustive list. In this context, one seeks to minimize a sum of local private cost functions $f_i$ of the agents:
$$\min_{\theta\in\mathbb{R}^d}\ \sum_{i=1}^{N} f_i(\theta)\,, \qquad (3)$$
where for all $i$, the function $f_i$ is supposed to be...
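A minimal sketch of the two-step adaptation-diffusion scheme applied to problem (3), on a toy instance of my own choosing (quadratic local costs $f_i(\theta) = \tfrac12(\theta - c_i)^2$, a 3-node ring, and exact gradients in place of noisy ones); the minimizer is then the average of the $c_i$:

```python
# Toy sketch (my own instance, not the paper's): adaptation-diffusion
# iterations for min_theta sum_i f_i(theta) with f_i(theta) = (theta - c_i)^2 / 2,
# whose consensus minimizer is the average of the c_i.

N = 3
c = [1.0, 4.0, 7.0]            # private targets of the N agents
theta = [0.0] * N              # each agent keeps a local estimate

def diffuse(x):
    # ring averaging with W = circ(1/2, 1/4, 1/4), a doubly stochastic matrix
    return [x[i] / 2 + x[(i - 1) % N] / 4 + x[(i + 1) % N] / 4 for i in range(N)]

for t in range(1, 2001):
    gamma = 1.0 / t            # decreasing step size
    # adaptation: local gradient (stochastic-approximation) step
    temp = [theta[i] - gamma * (theta[i] - c[i]) for i in range(N)]
    # diffusion: network averaging step
    theta = diffuse(temp)

star = sum(c) / N              # minimizer of problem (3) in this instance
```

Because the diffusion matrix here is doubly stochastic, all agents approach both consensus and the global minimizer, consistent with the result quoted above.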

Stochastic Approximations and Perturbations in Forward-Backward Splitting for Monotone Operators

by Patrick L. Combettes, Jean-Christophe Pesquet
"... Abstract We investigate the asymptotic behavior of a stochastic version of the forward-backward splitting algorithm for finding a zero of the sum of a maximally monotone set-valued operator and a cocoercive operator in Hilbert spaces. Our general setting features stochastic approximations of the co ..."
Abstract - Add to MetaCart
Abstract We investigate the asymptotic behavior of a stochastic version of the forward-backward splitting algorithm for finding a zero of the sum of a maximally monotone set-valued operator and a cocoercive operator in Hilbert spaces. Our general setting features stochastic approximations of the cocoercive operator and stochastic perturbations in the evaluation of the resolvents of the set-valued operator. In addition, relaxations and not necessarily vanishing proximal parameters are allowed. Weak and strong almost sure convergence properties of the iterates are established under mild conditions on the underlying stochastic processes. Leveraging these results, we also establish the almost sure convergence of the iterates of a stochastic variant of a primal-dual proximal splitting method for composite minimization problems.
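The stochastic forward-backward iteration described above can be sketched on a scalar toy problem (my own instance, not the paper's setting): minimize $|x| + \tfrac12(x-3)^2$, whose unique minimizer is $x^\star = 2$. The smooth gradient is corrupted by bounded zero-mean noise, the proximal parameter is kept fixed (non-vanishing, as the abstract allows), and a vanishing relaxation plays the role of the stochastic-approximation step sizes:

```python
import random

# Toy sketch (my own instance, not from the paper): a relaxed stochastic
# forward-backward iteration for min_x |x| + (x - 3)^2 / 2, whose unique
# minimizer is x* = 2 (the slope of |.| shifts the smooth minimizer 3 by 1).

rng = random.Random(0)

def prox_l1(z, g):
    """Proximal operator of g*|.| (soft-thresholding)."""
    if z > g:
        return z - g
    if z < -g:
        return z + g
    return 0.0

a, gamma = 3.0, 0.5     # smooth part (x - a)^2 / 2; fixed proximal parameter
x = 0.0
for n in range(1, 20001):
    lam = min(1.0, 2.0 / n)                         # vanishing relaxation
    noisy_grad = (x - a) + rng.uniform(-1.0, 1.0)   # stochastic gradient
    y = prox_l1(x - gamma * noisy_grad, gamma)      # forward-backward step
    x = (1.0 - lam) * x + lam * y                   # relaxed update
```

The fixed `gamma` satisfies the usual range condition for a 1-cocoercive gradient, and the bounded zero-mean noise leaves the almost sure limit unchanged.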

Citation Context

...$\|V^{-1}\|^{1/2}\alpha_n$, $\beta_n = \|V\|^{1/2}(1 + 2\|U\|^{1/2}\|L\|)\beta_n$. (5.26) Thus, $\sum_{n\in\mathbb{N}} \sqrt{\lambda_n}\,\alpha_n < +\infty$ and $\sum_{n\in\mathbb{N}} \lambda_n \beta_n < +\infty$. Finally, (5.16) and (g) guarantee that $\sup_{n\in\mathbb{N}} (1+\tau_n)\gamma_n < 2\vartheta$. All the assumptions of Proposition 4.4 are therefore satisfied for algorithm (5.17), which concludes the proof. Remark 5.4 (i) Algorithm 5.10 can be viewed as a stochastic version of the primal-dual algorithm investigated in [20, Example 6.4] when the metric is fixed in the latter. Particular cases of such fixed-metric primal-dual algorithms can be found in [12, 16, 30, 34, 35]. (ii) The same type of primal-dual algorithm is investigated in [5, 43] in a different context, since in those papers the stochastic nature of the algorithms stems from the random activation of blocks of variables. 5.2 Example We illustrate an implementation of Algorithm 5.2 in a simple scenario with $\mathsf{H} = \mathbb{R}^N$ by constructing an example in which the gradient approximation conditions are fulfilled. For every $k \in \{1,\ldots,q\}$ and every $n \in \mathbb{N}$, set $s_{k,n} = \nabla j_k^*(v_{k,n})$ and suppose that $(y_n)_{n\in\mathbb{N}}$ is almost surely bounded. This assumption is satisfied, in particular, if $\operatorname{dom} f$ and $(b_n)_{n\in\mathbb{N}}$ are bounded. In addition, let $(\forall n\in\mathbb{N})\ \mathcal{X}_n = \sigma\big(x_0, v_0, (K_{n'}, z_{n'})_{0\le n' < m_n}, (b_{n'}, c_{n'}\ldots$

A Robust Block-Jacobi Algorithm for Quadratic Programming under Lossy Communications

by M. Todescato, G. Cavraro, R. Carli, L. Schenato
"... Abstract: We address the problem distributed quadratic programming under lossy commu-nications where the global cost function is the sum of coupled local cost functions, typical in localization problems and partition-based state estimation. We propose a novel solution based on a generalized gradient ..."
Abstract - Add to MetaCart
Abstract: We address the problem of distributed quadratic programming under lossy communications, where the global cost function is the sum of coupled local cost functions, typical in localization problems and partition-based state estimation. We propose a novel solution based on a generalized gradient descent strategy, namely a Block-Jacobi descent algorithm, which is amenable to a distributed implementation and which is provably robust to communication failures if the step size is sufficiently small. Interestingly, robustness to packet loss also implies robustness of the algorithm to broadcast communication protocols, asynchronous computation and bounded random communication delays. The theoretical analysis relies on the separation of time scales and singular perturbation theory. Our algorithm is numerically studied in the context of partition-based state estimation in smart grids based on the IEEE 123-node distribution feeder benchmark. The proposed algorithm is observed to exhibit a convergence rate similar to that of the well-known ADMM algorithm with no packet losses, while it has considerably better performance under moderate packet losses.
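A hedged sketch of a damped Block-Jacobi iteration under packet loss, on a hypothetical 3-variable quadratic program (not the paper's IEEE 123-node benchmark): each scalar "block" updates from the last neighbor values it successfully received, and a small step size `eps` provides the robustness margin the abstract refers to.

```python
import random

# Hypothetical toy instance (not the paper's benchmark): damped Jacobi
# iterations for min (1/2) x^T A x - b^T x with A = 3I + ones(3,3) and
# b = (1, 2, 3), i.e. Ax = b with solution x* = (0, 1/3, 2/3). Blocks
# update from possibly stale neighbor values when packets are lost.

rng = random.Random(1)
A = [[4.0, 1.0, 1.0], [1.0, 4.0, 1.0], [1.0, 1.0, 4.0]]
b = [1.0, 2.0, 3.0]
n, eps, p_loss = 3, 0.5, 0.3

x = [0.0] * n
# belief[i][j]: last value of x_j successfully received by node i
belief = [[0.0] * n for _ in range(n)]

for _ in range(2000):
    new_x = []
    for i in range(n):
        # Jacobi target computed from (possibly outdated) neighbor beliefs
        target = (b[i] - sum(A[i][j] * belief[i][j]
                             for j in range(n) if j != i)) / A[i][i]
        new_x.append(x[i] + eps * (target - x[i]))  # small damping step
    x = new_x
    for i in range(n):
        for j in range(n):
            if i == j or rng.random() > p_loss:
                belief[i][j] = x[j]  # packet delivered; else keep stale value
```

With a diagonally dominant `A`, the Jacobi map is a max-norm contraction, so the iteration tolerates the randomly stale beliefs, consistent with the robustness claim above.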

Explicit Convergence Rate of a Distributed Alternating Direction Method of Multipliers

by F. Iutzeler, P. Bianchi, Ph. Ciblat, W. Hachem
"... Abstract — Consider a set of N agents seeking to solve dis-tributively the minimization problem infx ∑N n=1 fn(x) where the convex functions fn are local to the agents. The popular Alternating Direction Method of Multipliers has the potential to handle distributed optimization problems of this kind. ..."
Abstract - Add to MetaCart
Abstract — Consider a set of $N$ agents seeking to solve distributively the minimization problem $\inf_x \sum_{n=1}^{N} f_n(x)$ where the convex functions $f_n$ are local to the agents. The popular Alternating Direction Method of Multipliers has the potential to handle distributed optimization problems of this kind. We provide a general reformulation of the problem and obtain a class of distributed algorithms which encompass various network architectures. The rate of convergence of our method is considered. It is assumed that the infimum of the problem is reached at a point $x^\star$, the functions $f_n$ are twice differentiable at this point and $\sum \nabla^2 f_n(x^\star) > 0$ in the positive definite ordering of symmetric matrices. With these assumptions, it is shown that the convergence to the consensus $x^\star$ is linear and the exact rate is provided. Application examples where this rate can be optimized with respect to the ADMM free parameter $\rho$ are also given.
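The consensus-ADMM setting in the abstract can be illustrated on a toy instance of my own (scalar quadratics $f_n(x) = \tfrac12 a_n (x - c_n)^2$ with closed-form local updates, and free parameter $\rho$); the minimizer is then the $a_n$-weighted average of the $c_n$:

```python
# Sketch of consensus ADMM on a toy instance (mine, not the paper's):
# min_x sum_n f_n(x) with f_n(x) = a_n * (x - c_n)^2 / 2, whose minimizer
# is the a_n-weighted average of the c_n; here (1*3 + 2*0 + 3*1)/6 = 1.0.

a = [1.0, 2.0, 3.0]
c = [3.0, 0.0, 1.0]
N, rho = len(a), 1.0

x = [0.0] * N   # local primal variables
u = [0.0] * N   # scaled dual variables
z = 0.0         # consensus variable

for _ in range(200):
    # local x-minimization has a closed form for quadratic f_n
    x = [(a[i] * c[i] + rho * (z - u[i])) / (a[i] + rho) for i in range(N)]
    z = sum(x[i] + u[i] for i in range(N)) / N      # averaging (z-update)
    u = [u[i] + x[i] - z for i in range(N)]         # dual ascent

star = sum(a[i] * c[i] for i in range(N)) / sum(a)  # weighted average
```

For strongly convex quadratics the iterates converge linearly, as the abstract states for the twice-differentiable case; the rate can be tuned through `rho`.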
Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University