Results 1 - 5 of 5
A Class of Randomized Primal-Dual Algorithms for Distributed Optimization, arXiv preprint arXiv:1406.6404v3, 2014
"... Abstract Based on a preconditioned version of the randomized block-coordinate forward-backward algorithm recently proposed in ..."
Cited by 2 (0 self)
Abstract: Based on a preconditioned version of the randomized block-coordinate forward-backward algorithm recently proposed in [...]
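The listing truncates this abstract, but the kind of randomized block-coordinate forward-backward iteration named in the title can be sketched as follows. This is a minimal sketch, assuming an l1-regularized least-squares problem with contiguous coordinate blocks and per-block step sizes as a crude stand-in for preconditioning; none of these choices are taken from the paper itself.

```python
import numpy as np

def soft_threshold(v, tau):
    """Prox of tau * ||.||_1, used as the backward (proximal) step."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

rng = np.random.default_rng(0)
d, B = 40, 8                            # dimension, number of coordinate blocks
size = d // B
A = rng.normal(size=(60, d)) / 8.0      # illustrative least-squares data
b = rng.normal(size=60)
lam = 0.05                              # illustrative l1 weight
x = np.zeros(d)
# per-block step sizes: a simple diagonal "preconditioning" by block Lipschitz constants
gam = [1.0 / np.linalg.norm(A[:, j * size:(j + 1) * size], 2) ** 2 for j in range(B)]

for k in range(5000):
    j = int(rng.integers(B))                     # pick a block uniformly at random
    blk = slice(j * size, (j + 1) * size)
    grad_j = A[:, blk].T @ (A @ x - b)           # forward (gradient) step on block j only
    x[blk] = soft_threshold(x[blk] - gam[j] * grad_j, gam[j] * lam)  # backward step
```

Each iteration touches only one randomly selected block, a gradient step followed by a proximal step, which is what makes schemes of this type attractive for distributed and large-scale problems.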
Success and Failure of Adaptation-Diffusion Algorithms for Consensus in Multi-Agent Networks
"... Abstract—This paper investigates the problem of distributed stochastic approximation in multi-agent systems. The algorithm under study consists of two steps: a local stochastic approxi-mation step and a diffusion step which drives the network to a consensus. The diffusion step uses row-stochastic ma ..."
Cited by 1 (1 self)
Abstract: This paper investigates the problem of distributed stochastic approximation in multi-agent systems. The algorithm under study consists of two steps: a local stochastic approximation step and a diffusion step which drives the network to a consensus. The diffusion step uses row-stochastic matrices to weight the network exchanges. As opposed to previous works, the exchange matrices are not assumed to be doubly stochastic, and may also depend on the past estimate. We prove that non-doubly stochastic matrices generally influence the limit points of the algorithm. Nevertheless, the limit points are not affected by the choice of the matrices provided that the latter are doubly stochastic in expectation. This conclusion legitimates the use of broadcast-like diffusion protocols, which are easier to implement. Next, by means of a central limit theorem, we prove that doubly stochastic protocols perform asymptotically as well as centralized algorithms, and we quantify the degradation caused by the use of non-doubly stochastic matrices. Throughout the paper, a special emphasis is put on the special case of distributed non-convex optimization as an illustration of our results.
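The two-step structure described above (a local stochastic-approximation step followed by a diffusion step with row-stochastic weights that are doubly stochastic only in expectation) can be illustrated with a small simulation. The quadratic local costs, noise level, step-size rule, and broadcast protocol below are illustrative assumptions, not the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 10, 3                            # number of agents, dimension
targets = rng.normal(size=(N, d))       # agent n's local cost: 0.5 * ||x - targets[n]||^2
X = np.zeros((N, d))                    # one row of local estimates per agent

for t in range(1, 5001):
    step = 1.0 / t                      # decreasing stochastic-approximation step
    # (1) local stochastic-approximation step: noisy local gradient descent
    Y = X - step * ((X - targets) + 0.1 * rng.normal(size=X.shape))
    # (2) diffusion step: agent k broadcasts and every other agent averages with it.
    # Each realized W is row-stochastic but not doubly stochastic; averaging over
    # the uniformly random choice of k makes E[W] doubly stochastic.
    k = int(rng.integers(N))
    W = 0.5 * np.eye(N)
    W[:, k] += 0.5
    W[k, k] = 1.0
    X = W @ Y

# the rows of X approach targets.mean(axis=0), the minimizer of the sum of local costs
```

The realized weight matrices are not doubly stochastic, only their expectation is, which is exactly the condition under which the abstract states that the limit points are unaffected.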
Stochastic Approximations and Perturbations in Forward-Backward Splitting for Monotone Operators
"... Abstract We investigate the asymptotic behavior of a stochastic version of the forward-backward splitting algorithm for finding a zero of the sum of a maximally monotone set-valued operator and a cocoercive operator in Hilbert spaces. Our general setting features stochastic approximations of the co ..."
Abstract: We investigate the asymptotic behavior of a stochastic version of the forward-backward splitting algorithm for finding a zero of the sum of a maximally monotone set-valued operator and a cocoercive operator in Hilbert spaces. Our general setting features stochastic approximations of the cocoercive operator and stochastic perturbations in the evaluation of the resolvents of the set-valued operator. In addition, relaxations and not necessarily vanishing proximal parameters are allowed. Weak and strong almost sure convergence properties of the iterates are established under mild conditions on the underlying stochastic processes. Leveraging these results, we also establish the almost sure convergence of the iterates of a stochastic variant of a primal-dual proximal splitting method for composite minimization problems.
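A minimal finite-dimensional sketch of the kind of iteration studied here: the set-valued operator is taken to be the subdifferential of an l1 penalty (so its resolvent is a soft-thresholding), the cocoercive operator is a least-squares gradient observed with noise, and the resolvent is evaluated with an additive perturbation. The problem instance, decaying noise levels, constant proximal parameter, and relaxation value are illustrative assumptions only.

```python
import numpy as np

def soft_threshold(v, tau):
    """Resolvent (prox) of the set-valued part, here the subdifferential of tau*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

rng = np.random.default_rng(0)
m, d = 100, 50
A = rng.normal(size=(m, d)) / 10.0
b = rng.normal(size=m)
lam = 0.1
L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the smooth (cocoercive) part
x = np.zeros(d)

for n in range(1, 2001):
    gamma = 1.0 / L                      # non-vanishing proximal parameter
    rho = 0.9                            # relaxation parameter
    # stochastic approximation of the cocoercive operator (noisy gradient);
    # the decaying noise stands in for the summability-type conditions of the paper
    grad = A.T @ (A @ x - b) + rng.normal(size=d) / n
    # perturbed evaluation of the resolvent of the set-valued operator
    prox = soft_threshold(x - gamma * grad, gamma * lam) + rng.normal(size=d) / n**2
    x = x + rho * (prox - x)             # relaxed forward-backward update
```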
A Robust Block-Jacobi Algorithm for Quadratic Programming under Lossy Communications
"... Abstract: We address the problem distributed quadratic programming under lossy commu-nications where the global cost function is the sum of coupled local cost functions, typical in localization problems and partition-based state estimation. We propose a novel solution based on a generalized gradient ..."
Abstract: We address the problem of distributed quadratic programming under lossy communications, where the global cost function is the sum of coupled local cost functions, as is typical in localization problems and partition-based state estimation. We propose a novel solution based on a generalized gradient descent strategy, namely a Block-Jacobi descent algorithm, which is amenable to a distributed implementation and which is provably robust to communication failures if the step size is sufficiently small. Interestingly, robustness to packet loss also implies robustness of the algorithm to broadcast communication protocols, asynchronous computation, and bounded random communication delays. The theoretical analysis relies on the separation of time scales and singular perturbation theory. Our algorithm is numerically studied in the context of partition-based state estimation in smart grids based on the IEEE 123-node distribution feeder benchmark. The proposed algorithm is observed to exhibit a convergence rate similar to that of the well-known ADMM algorithm when there are no packet losses, while performing considerably better under moderate packet losses.
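A possible reading of the scheme in a toy setting: each agent owns one block of a coupled quadratic cost and repeatedly takes a small gradient step on its own block using its latest, possibly outdated, copy of the other blocks; packet loss is modeled by agents sometimes failing to refresh that copy. The cost, loss model, and step size below are illustrative assumptions, not the paper's estimation benchmark.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 5, 2                               # agents and variables per agent (block size)
M = rng.normal(size=(N * d, N * d))
Q = M @ M.T + N * d * np.eye(N * d)       # coupled positive-definite quadratic cost
c = rng.normal(size=N * d)                # cost: 0.5 * x'Qx - c'x
x = np.zeros(N * d)
views = np.tile(x, (N, 1))                # views[i]: agent i's (possibly stale) copy of x
alpha = 0.5 / np.linalg.norm(Q, 2)        # "sufficiently small" step size
p_loss = 0.3                              # probability a broadcast is lost

for t in range(3000):
    x_new = x.copy()
    for i in range(N):
        blk = slice(i * d, (i + 1) * d)
        # block-Jacobi (generalized gradient) step on agent i's own block,
        # evaluated with possibly outdated neighbor information
        x_new[blk] = x[blk] - alpha * (Q[blk] @ views[i] - c[blk])
    x = x_new
    for i in range(N):
        blk = slice(i * d, (i + 1) * d)
        views[i][blk] = x[blk]            # an agent always knows its own block
        if rng.random() > p_loss:         # otherwise it keeps stale neighbor values
            views[i] = x.copy()

# with a small enough step, x approaches np.linalg.solve(Q, c) despite the lost packets
```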
Explicit Convergence Rate of a Distributed Alternating Direction Method of Multipliers
"... Abstract — Consider a set of N agents seeking to solve dis-tributively the minimization problem infx ∑N n=1 fn(x) where the convex functions fn are local to the agents. The popular Alternating Direction Method of Multipliers has the potential to handle distributed optimization problems of this kind. ..."
Abstract: Consider a set of N agents seeking to solve distributively the minimization problem $\inf_x \sum_{n=1}^{N} f_n(x)$, where the convex functions $f_n$ are local to the agents. The popular Alternating Direction Method of Multipliers has the potential to handle distributed optimization problems of this kind. We provide a general reformulation of the problem and obtain a class of distributed algorithms which encompass various network architectures. The rate of convergence of our method is considered. It is assumed that the infimum of the problem is reached at a point $x^\star$, the functions $f_n$ are twice differentiable at this point, and $\sum_n \nabla^2 f_n(x^\star) > 0$ in the positive definite ordering of symmetric matrices. With these assumptions, it is shown that the convergence to the consensus $x^\star$ is linear and the exact rate is provided. Application examples where this rate can be optimized with respect to the ADMM free parameter $\rho$ are also given.
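For intuition about the role of the free parameter $\rho$, here is a sketch of the textbook global-consensus form of ADMM on quadratic local costs (in scaled dual form); this is a simplified stand-in for the paper's more general network reformulation, and the costs and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 8, 4
a = rng.normal(size=(N, d))        # agent n holds f_n(x) = 0.5 * ||x - a[n]||^2
rho = 1.0                          # the ADMM free parameter discussed in the abstract
x = np.zeros((N, d))               # local primal variables
u = np.zeros((N, d))               # scaled dual variables
z = np.zeros(d)                    # consensus variable
x_star = a.mean(axis=0)            # minimizer of sum_n f_n for these quadratic costs

for k in range(100):
    # local updates: x_n = argmin_x f_n(x) + (rho/2) * ||x - z + u_n||^2
    x = (a + rho * (z - u)) / (1.0 + rho)
    # consensus (averaging) and scaled dual updates
    z = (x + u).mean(axis=0)
    u = u + x - z
```

On such strongly convex quadratics the distance of the local iterates to the consensus point $x^\star$ decays geometrically, i.e. linearly in the sense of the abstract, at a rate that depends on $\rho$.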