Results 1–10 of 28
D-ADMM: A communication-efficient distributed algorithm for separable optimization
IEEE Trans. Sig. Proc., 2013
Local Linear Convergence of the Alternating Direction Method of Multipliers on Quadratic or Linear Programs
Abstract

Cited by 15 (1 self)
We introduce a novel matrix recurrence yielding a new spectral analysis of the local transient convergence behavior of the Alternating Direction Method of Multipliers (ADMM), for the particular case of a quadratic program or a linear program. We identify a particular combination of vector iterates whose convergence can be analyzed via a spectral analysis. The theory predicts that ADMM should go through up to four convergence regimes, such as constant step convergence or linear convergence, ending with the latter when close enough to the optimal solution if the optimal solution is unique and satisfies strict complementarity.
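The QP case above can be made concrete with a toy ADMM recursion (a minimal sketch with assumed problem data; it is not the paper's matrix recurrence or spectral analysis): minimize 0.5(x − 3)² subject to x ≥ 0, split over the consensus constraint x = z.

```python
def admm_qp(c=3.0, rho=1.0, iters=50):
    x = z = u = 0.0
    for _ in range(iters):
        # x-update: argmin 0.5*(x - c)**2 + (rho/2)*(x - z + u)**2 (closed form)
        x = (c + rho * (z - u)) / (1.0 + rho)
        # z-update: projection onto the nonnegative orthant
        z = max(0.0, x + u)
        # dual (scaled multiplier) update
        u += x - z
    return x, z

x, z = admm_qp()
print(round(z, 6))  # → 3.0, the optimum of min 0.5*(x-3)^2 s.t. x >= 0
```

In this toy the error contracts by a constant factor (about 0.5 per iteration for rho = 1), i.e., the linear-convergence regime the abstract refers to.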
Fast distributed gradient methods
 IEEE Trans. Autom. Control
Abstract

Cited by 10 (1 self)
Abstract—We study distributed optimization problems in which nodes minimize the sum of their individual costs subject to a common vector variable. The costs are convex, have Lipschitz continuous gradient (with constant), and bounded gradient. We propose two fast distributed gradient algorithms based on the centralized Nesterov gradient algorithm and establish their convergence rates in terms of the per-node communications and the per-node gradient evaluations. Our first method, Distributed Nesterov Gradient, achieves rates and. Our second method, Distributed Nesterov gradient with Consensus iterations, assumes at all nodes knowledge of and – the second largest singular value of the doubly stochastic weight matrix. It achieves rates and ( arbitrarily small). Further, we give for both methods the explicit dependence of the convergence constants on and. Simulation examples illustrate our findings. Index Terms—Consensus, convergence rate, distributed optimization, Nesterov gradient.
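A minimal sketch of the distributed Nesterov idea described above, with assumed quadratic local costs f_i(x) = 0.5(x − c_i)², an assumed diminishing step-size schedule, and a standard Nesterov-style momentum weight; none of these constants are the paper's:

```python
def distributed_nesterov(c, W, iters=200):
    n = len(c)
    x = [0.0] * n
    y = [0.0] * n
    for k in range(iters):
        alpha = 1.0 / (k + 10)                    # assumed diminishing step size
        # consensus average of neighbors' y, then local gradient step on f_i
        x_new = [sum(W[i][j] * y[j] for j in range(n)) - alpha * (y[i] - c[i])
                 for i in range(n)]
        beta = k / (k + 3.0)                      # Nesterov-style momentum weight
        y = [x_new[i] + beta * (x_new[i] - x[i]) for i in range(n)]
        x = x_new
    return x

c = [1.0, 2.0, 6.0]
W = [[1.0 / 3] * 3 for _ in range(3)]             # complete graph, uniform weights
x = distributed_nesterov(c, W)
print([round(v, 3) for v in x])                   # each node near mean(c) = 3.0
```

The global minimizer of sum_i f_i here is mean(c); the sketch shows the nodes reaching consensus near it.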
Distributed constrained optimization by consensus-based primal-dual perturbation method, submitted to IEEE Trans. Automatic Control. Available on arxiv.org
Distributed sparse signal recovery for sensor networks
in Proc. IEEE Int. Conf. on Acoust., Speech, and Sig. Proc. (ICASSP)
Abstract

Cited by 7 (3 self)
We propose a distributed algorithm for sparse signal recovery in sensor networks based on Iterative Hard Thresholding (IHT). Every agent has a set of measurements of a signal x, and the objective is for the agents to recover x from their collective measurements at a minimal communication cost and with low computational complexity. A naïve distributed implementation of IHT would require global communication of every agent's full state in each iteration. We find that we can dramatically reduce this communication cost by leveraging solutions to the distributed top-K problem in the database literature. Evaluations show that our algorithm requires up to three orders of magnitude less total bandwidth than the best-known distributed basis pursuit method. Index Terms — compressed sensing, distributed algorithm, iterative hard thresholding, top-K
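The IHT recursion underlying the algorithm can be sketched as x ← H_K(x + Aᵀ(y − Ax)), where H_K keeps the K largest-magnitude entries. The distributed top-K machinery is replaced here by a plain sort, and the measurement matrix is a trivial identity; both are assumptions for illustration only.

```python
def hard_threshold(v, K):
    # keep the K largest-magnitude entries, zero out the rest
    keep = set(sorted(range(len(v)), key=lambda i: abs(v[i]), reverse=True)[:K])
    return [v[i] if i in keep else 0.0 for i in range(len(v))]

def matvec(A, v):
    return [sum(a * b for a, b in zip(row, v)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def iht(A, y, K, iters=10):
    x = [0.0] * len(A[0])
    At = transpose(A)
    for _ in range(iters):
        r = [yi - wi for yi, wi in zip(y, matvec(A, x))]   # residual y - A x
        g = matvec(At, r)                                  # gradient step A^T r
        x = hard_threshold([xi + gi for xi, gi in zip(x, g)], K)
    return x

A = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]  # toy A = I
x_true = [0.0, 5.0, 0.0, -2.0]
y = matvec(A, x_true)
print(iht(A, y, K=2))  # → [0.0, 5.0, 0.0, -2.0]
```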
Alternating directions dual decomposition
2012
Abstract

Cited by 4 (2 self)
We propose AD³, a new algorithm for approximate maximum a posteriori (MAP) inference on factor graphs based on the alternating directions method of multipliers. Like dual decomposition algorithms, AD³ uses worker nodes to iteratively solve local subproblems and a controller node to combine these local solutions into a global update. The key characteristic of AD³ is that each local subproblem has a quadratic regularizer, leading to faster consensus than subgradient-based dual decomposition, both theoretically and in practice. We provide closed-form solutions for these AD³ subproblems for binary pairwise factors and factors imposing first-order logic constraints. For arbitrary factors (large or combinatorial), we introduce an active set method which requires only an oracle for computing a local MAP configuration, making AD³ applicable to a wide range of problems. Experiments on synthetic and real-world problems show that AD³ compares favorably with the state-of-the-art.
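As an illustration of a quadratically regularized local subproblem (a hypothetical single-binary-variable case worked out from the stated form, not AD³'s published factor library): for a unary score theta and current consensus value z, the subproblem min over q in [0, 1] of −theta·q + (rho/2)(q − z)² has a closed-form clipped solution.

```python
def binary_subproblem(theta, z, rho=1.0):
    # argmin over q in [0, 1] of  -theta*q + (rho/2)*(q - z)**2:
    # set the derivative -theta + rho*(q - z) to zero, then clip to [0, 1]
    q = z + theta / rho
    return min(1.0, max(0.0, q))

print(binary_subproblem(theta=2.0, z=0.3))    # → 1.0 (clipped at the boundary)
print(binary_subproblem(theta=-0.5, z=0.75))  # → 0.25
```

The quadratic term is what distinguishes this from the linear subproblems of subgradient-based dual decomposition: the update is a projection rather than an extreme point, which is the source of the smoother consensus the abstract describes.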
Distributed soft thresholding for sparse signal recovery
 CoRR
Abstract

Cited by 3 (0 self)
Abstract—In this paper, we address the problem of distributed sparse recovery of signals acquired via compressed measurements in a sensor network. We propose a new class of distributed algorithms to solve Lasso regression problems when communication to a fusion center is not possible, e.g., due to communication cost or privacy reasons. More precisely, we introduce a distributed iterative soft thresholding algorithm (DISTA) that consists of three steps: an averaging step, a gradient step, and a soft thresholding operation. We prove the convergence of DISTA in networks represented by regular graphs, and we compare it with existing methods in terms of performance, memory, and complexity. Index Terms—Distributed compressed sensing, distributed optimization, consensus algorithms, gradient-thresholding algorithms.
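The three steps named above (averaging, gradient, soft thresholding) can be sketched on a toy Lasso instance; the measurement matrices (identity), step size tau, and threshold lam are all illustrative assumptions, not the paper's choices.

```python
import math

def soft(a, t):
    # soft-thresholding: shrink a toward zero by t
    return math.copysign(max(abs(a) - t, 0.0), a)

def dista(y_list, W, tau=0.5, lam=0.1, iters=60):
    n_nodes, dim = len(y_list), len(y_list[0])
    x = [[0.0] * dim for _ in range(n_nodes)]
    for _ in range(iters):
        new = []
        for i in range(n_nodes):
            # 1) averaging step over neighbors (weights W)
            v = [sum(W[i][j] * x[j][d] for j in range(n_nodes))
                 for d in range(dim)]
            # 2) gradient step on 0.5*||y_i - v||^2 (A_i = I in this toy)
            g = [v[d] + tau * (y_list[i][d] - v[d]) for d in range(dim)]
            # 3) soft-thresholding operation
            new.append([soft(g[d], tau * lam) for d in range(dim)])
        x = new
    return x

y = [[0.0, 4.0, 0.0], [0.0, 4.0, 0.0]]   # two nodes, same toy measurements
W = [[0.5, 0.5], [0.5, 0.5]]
est = dista(y, W)
print([round(v, 3) for v in est[0]])     # → [0.0, 3.9, 0.0]
```

The nonzero coefficient settles at 3.9 rather than 4.0: the soft threshold biases the Lasso estimate toward zero by lam·tau/(tau) worth of shrinkage at the fixed point, as expected for ℓ1-regularized recovery.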
On the convergence of decentralized gradient descent
2013
Abstract

Cited by 3 (1 self)
Consider the consensus problem of minimizing f(x) = ∑_{i=1}^n fi(x), where each fi is known only to one individual agent i belonging to a connected network of n agents. All the agents shall collaboratively solve this problem and obtain the solution via data exchanges only between neighboring agents. Such algorithms avoid the need for a fusion center, offer better network load balance, and improve data privacy. We study the decentralized gradient descent method in which each agent i updates its variable x(i), a local approximation of the unknown variable x, by averaging over its neighbors' variables and then taking a local negative gradient step −α∇fi(x(i)). The iteration is x(i)(k + 1) = ∑j wij x(j)(k) − α∇fi(x(i)(k)), where wij are averaging weights over agent i's neighbors.
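A minimal sketch of this iteration with assumed quadratic local costs fi(x) = 0.5(x − ci)² on a 3-node path graph; the weights and step size are illustrative:

```python
def dgd(c, W, alpha=0.05, iters=400):
    n = len(c)
    x = [0.0] * n
    for _ in range(iters):
        # average neighbors via mixing matrix W, then local gradient step
        # (grad of 0.5*(x - c_i)^2 is x - c_i)
        x = [sum(W[i][j] * x[j] for j in range(n)) - alpha * (x[i] - c[i])
             for i in range(n)]
    return x

c = [1.0, 2.0, 6.0]
W = [[0.5, 0.5, 0.0],
     [0.5, 0.0, 0.5],
     [0.0, 0.5, 0.5]]   # doubly stochastic weights for the path graph 0-1-2
est = dgd(c, W)
print([round(v, 3) for v in est])   # each node close to mean(c) = 3.0
```

With a constant step size α, the nodes converge to an O(α)-neighborhood of the true minimizer mean(c) rather than to it exactly, which is the behavior this paper analyzes.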
for consensus on colored networks
2012 IEEE 51st Annual Conference on Decision and Control (CDC)
Abstract

Cited by 2 (0 self)
Abstract—We propose a novel distributed algorithm for one of the most fundamental problems in networks: average consensus. We view average consensus as an optimization problem, which allows us to use recent techniques and results from the optimization area. Based on the assumption that a coloring scheme of the network is available, we derive a decentralized, asynchronous, and communication-efficient algorithm based on the Alternating Direction Method of Multipliers (ADMM). Our simulations with other state-of-the-art consensus algorithms show that the proposed algorithm exhibits the most stable performance across several network models.
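The consensus-as-optimization viewpoint can be sketched with plain gradient descent on the disagreement objective (note: the paper's algorithm is ADMM-based and uses a network coloring; this toy deliberately substitutes the simpler gradient method). Minimizing F(x) = 0.5 ∑_{(i,j)∈E} (xi − xj)² by gradient descent gives x ← x − αLx with L the graph Laplacian, which preserves the average and drives every node to it.

```python
def consensus(x, edges, alpha=0.3, iters=100):
    x = list(x)
    for _ in range(iters):
        g = [0.0] * len(x)
        for i, j in edges:          # gradient of F is the Laplacian product L x
            g[i] += x[i] - x[j]
            g[j] += x[j] - x[i]
        # gradient step; the sum of g is zero, so the average is preserved
        x = [x[i] - alpha * g[i] for i in range(len(x))]
    return x

x0 = [1.0, 2.0, 6.0]
edges = [(0, 1), (1, 2)]            # path graph
result = consensus(x0, edges)
print([round(v, 4) for v in result])  # → [3.0, 3.0, 3.0]
```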
Distributed Compressed Sensing for Static and Time-Varying Networks
Abstract

Cited by 2 (1 self)
Abstract—We consider the problem of in-network compressed sensing from distributed measurements. Every agent has a set of measurements of a signal x, and the objective is for the agents to recover x from their collective measurements using only communication with neighbors in the network. Our distributed approach to this problem is based on the centralized Iterative Hard Thresholding algorithm (IHT). We first present a distributed IHT algorithm for static networks that leverages standard tools from distributed computing to execute in-network computations with minimized bandwidth consumption. Next, we address distributed signal recovery in networks with time-varying topologies. The network dynamics necessarily introduce inaccuracies into our in-network computations. To accommodate these inaccuracies, we show how centralized IHT can be extended to include inexact computations while still providing the same recovery guarantees as the original IHT algorithm. We then leverage these new theoretical results to develop a distributed version of IHT for time-varying networks. Evaluations show that our distributed algorithms for both static and time-varying networks outperform previously proposed solutions in time and bandwidth by several orders of magnitude. Index Terms—compressed sensing, distributed algorithm, iterative hard thresholding, distributed consensus
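A sketch of one consensus-based distributed IHT round in the spirit of the description above (assumed shape: average with neighbors, local gradient step, hard-threshold to the K largest entries; the identity measurement matrices and step size are illustrative, and the paper's bandwidth-efficient in-network computations are replaced by exact local operations):

```python
def top_k(v, K):
    # hard-thresholding: keep the K largest-magnitude entries
    keep = set(sorted(range(len(v)), key=lambda i: abs(v[i]), reverse=True)[:K])
    return [v[i] if i in keep else 0.0 for i in range(len(v))]

def distributed_iht(y_list, W, K, step=0.5, iters=40):
    n, dim = len(y_list), len(y_list[0])
    x = [[0.0] * dim for _ in range(n)]
    for _ in range(iters):
        new = []
        for i in range(n):
            # consensus average with neighbors (weights W)
            v = [sum(W[i][j] * x[j][d] for j in range(n)) for d in range(dim)]
            # local gradient step (A_i = I in this toy)
            g = [v[d] + step * (y_list[i][d] - v[d]) for d in range(dim)]
            new.append(top_k(g, K))
        x = new
    return x

y = [[0.0, 4.0, 0.0], [0.0, 4.0, 0.0]]   # two nodes, same toy measurements
W = [[0.5, 0.5], [0.5, 0.5]]
est = distributed_iht(y, W, K=1)
print([round(v, 6) for v in est[0]])      # → [0.0, 4.0, 0.0]
```

Unlike soft thresholding, the hard-threshold operator does not shrink the surviving entries, so the sketch recovers the nonzero coefficient without bias.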