Results 1–10 of 93
Gossip algorithms for distributed signal processing
 PROCEEDINGS OF THE IEEE
, 2010
"... Gossip algorithms are attractive for innetwork processing in sensor networks because they do not require any specialized routing, there is no bottleneck or single point of failure, and they are robust to unreliable wireless network conditions. Recently, there has been a surge of activity in the co ..."
Abstract

Cited by 116 (30 self)
Gossip algorithms are attractive for in-network processing in sensor networks because they do not require any specialized routing, there is no bottleneck or single point of failure, and they are robust to unreliable wireless network conditions. Recently, there has been a surge of activity in the computer science, control, signal processing, and information theory communities, developing faster and more robust gossip algorithms and deriving theoretical performance guarantees. This paper presents an overview of recent work in the area. We describe convergence rate results, which are related to the number of transmitted messages and thus the amount of energy consumed in the network for gossiping. We discuss issues related to gossiping over wireless links, including the effects of quantization and noise, and we illustrate the use of gossip algorithms for canonical signal processing tasks including distributed estimation, source localization, and compression.
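The pairwise gossip primitive surveyed here can be illustrated with a minimal sketch (this is a generic randomized-gossip example, not the paper's exact protocol; the ring topology and round count are illustrative choices):

```python
import random

def gossip_average(values, adjacency, rounds=2000, seed=0):
    """Randomized pairwise gossip: a random node repeatedly averages its
    value with a random neighbor. All values converge to the global mean
    with no routing and no central coordinator."""
    rng = random.Random(seed)
    x = list(values)
    n = len(x)
    for _ in range(rounds):
        i = rng.randrange(n)               # node that wakes up
        j = rng.choice(adjacency[i])       # random neighbor
        x[i] = x[j] = (x[i] + x[j]) / 2.0  # pairwise average preserves the sum
    return x

# 4 sensors on a ring holding readings [0, 4, 8, 12]; the global mean is 6.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
est = gossip_average([0.0, 4.0, 8.0, 12.0], adj)
```

Because each exchange replaces two values by their average, the network sum is invariant, which is why the common limit is exactly the mean.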
On ergodicity, infinite flow and consensus in random models
, 2010
"... We consider the ergodicity and consensus problem for a discretetime linear dynamic model driven by random stochastic matrices, which is equivalent to studying these concepts for the product of such matrices. Our focus is on the model where the random matrices have independent but timevariant dist ..."
Abstract

Cited by 31 (14 self)
We consider the ergodicity and consensus problem for a discrete-time linear dynamic model driven by random stochastic matrices, which is equivalent to studying these concepts for the product of such matrices. Our focus is on the model where the random matrices have independent but time-variant distributions. We introduce a new phenomenon, the infinite flow, and we study its fundamental properties and its relations to ergodicity and consensus. The central result is the infinite flow theorem, establishing the equivalence between the infinite flow and ergodicity for a class of independent random models in which the matrices have a common steady state in expectation and a feedback property. For such models, this result demonstrates that the expected infinite flow is both necessary and sufficient for ergodicity. The result provides a deterministic characterization of ergodicity, which can be used for studying consensus and average consensus over random graphs.
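The ergodicity of products of random stochastic matrices can be observed directly in simulation: applying a long product of such matrices to a vector drives all entries to a common value. A toy sketch (the entries bounded away from zero are an assumption made so the product mixes; the paper's models are far more general):

```python
import random

def random_stochastic_matrix(n, rng):
    """A row-stochastic matrix with entries bounded away from zero,
    so that products of such matrices forget initial conditions."""
    M = []
    for _ in range(n):
        row = [rng.random() + 0.1 for _ in range(n)]
        s = sum(row)
        M.append([v / s for v in row])
    return M

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))]

rng = random.Random(1)
x = [1.0, 5.0, 9.0]
for _ in range(300):  # x_k = A_k ... A_1 x_0 for independent random A_k
    x = matvec(random_stochastic_matrix(3, rng), x)
# the entries of x now agree: the random product reached consensus
```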
Distributed Asynchronous Constrained Stochastic Optimization
, 2011
"... In this paper we study two problems which often occur in various applications arising in wireless sensor networks. These are the problem of reaching an agreement on the value of local variables in a network of computational agents and the problem of cooperative solution to a convex optimization prob ..."
Abstract

Cited by 28 (2 self)
In this paper we study two problems which often occur in various applications arising in wireless sensor networks. These are the problem of reaching an agreement on the value of local variables in a network of computational agents and the problem of cooperative solution to a convex optimization problem, where the objective function is the aggregate sum of local convex objective functions. We incorporate the presence of a random communication graph between the agents in our model as a more realistic abstraction of the gossip and broadcast communication protocols of a wireless network. An added ingredient is the presence of local constraint sets to which the local variables of each agent are constrained. Our model allows for the objective functions to be nondifferentiable and accommodates the presence of noisy communication links and subgradient errors. For the consensus problem we provide a diminishing step size algorithm which guarantees asymptotic convergence. The distributed optimization algorithm uses two diminishing step size sequences to account for communication noise and subgradient errors. We establish conditions on these step sizes under which we can achieve the dual task of reaching consensus and convergence to the optimal set with probability one. In both cases we consider the constant step size behavior of the algorithm and establish asymptotic error bounds.
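The mix-then-step structure of such consensus-plus-subgradient methods can be sketched on toy scalar quadratics (a deterministic sketch only: fixed graph with Metropolis weights, no noise; the paper treats random graphs, noisy links, and subgradient errors):

```python
# Metropolis weights for a 3-node path graph (a standard doubly stochastic choice).
W = [[2/3, 1/3, 0.0],
     [1/3, 1/3, 1/3],
     [0.0, 1/3, 2/3]]

def consensus_subgradient(targets, W, rounds=3000):
    """Each agent i holds f_i(x) = (x - a_i)^2 and the constraint set [0, 10].
    Per round: average with neighbors, then take a projected gradient step
    with diminishing step size alpha_k = 1/(k+1). The network minimizes
    sum_i f_i, whose solution is the mean of the a_i."""
    n = len(targets)
    x = list(targets)
    for k in range(rounds):
        alpha = 1.0 / (k + 1)
        mixed = [sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]
        # gradient of (x - a_i)^2 is 2(x - a_i); project onto [0, 10]
        x = [min(10.0, max(0.0, mixed[i] - alpha * 2.0 * (mixed[i] - targets[i])))
             for i in range(n)]
    return x

est = consensus_subgradient([1.0, 4.0, 7.0], W)  # optimum is the mean, 4.0
```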
Cooperative Convex Optimization in Networked Systems: Augmented Lagrangian Algorithms with Directed Gossip Communication
 IEEE Transactions on Signal Processing
"... We study distributed optimization in networked systems, where nodes cooperate to find the optimal quantity of common interest, x = x ⋆. The objective function of the corresponding10 optimization problem is the sum of private (known only by a node,) convex, nodes ’ objectives and each node imposes a ..."
Abstract

Cited by 26 (12 self)
We study distributed optimization in networked systems, where nodes cooperate to find the optimal quantity of common interest, x = x⋆. The objective function of the corresponding optimization problem is the sum of private (known only to a node) convex objectives of the nodes, and each node imposes a private convex constraint on the allowed values of x. We solve this problem for generic connected network topologies with asymmetric random link failures with a novel distributed, decentralized algorithm. We refer to this algorithm as AL–G (augmented Lagrangian gossiping), and to its variants as AL–MG (augmented Lagrangian multi-neighbor gossiping) and AL–BG (augmented Lagrangian broadcast gossiping). The AL–G algorithm is based on the augmented Lagrangian dual function. Dual variables are updated by the standard method of multipliers, at a slow time scale. To update the primal variables, we propose a novel Gauss-Seidel type randomized algorithm, at a fast time scale. AL–G uses unidirectional gossip communication, only between immediate neighbors in the network, and is resilient to random link failures. For networks with reliable communication (i.e., no failures), the simplified AL–BG (augmented Lagrangian broadcast gossiping) algorithm reduces communication, computation and data storage cost. We prove convergence for all proposed algorithms and demonstrate by simulations ...
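The two time scales described here, a slow method-of-multipliers dual update wrapped around fast Gauss-Seidel primal sweeps, can be seen on a toy centralized problem (a sketch of the augmented Lagrangian mechanics only, not the distributed AL–G protocol): minimize x² + y² subject to x + y = 2, whose solution is x = y = 1 with multiplier λ = −2.

```python
def method_of_multipliers(rho=1.0, outer=50, inner=20):
    """Augmented Lagrangian for: min x^2 + y^2  s.t.  x + y = 2.
    L(x, y, lam) = x^2 + y^2 + lam*(x+y-2) + (rho/2)*(x+y-2)^2.
    Fast scale: Gauss-Seidel sweeps, each variable set to its exact
    coordinate minimizer of L. Slow scale: standard multiplier step."""
    x = y = lam = 0.0
    for _ in range(outer):
        for _ in range(inner):
            x = (rho * (2.0 - y) - lam) / (2.0 + rho)  # argmin_x L(x, y, lam)
            y = (rho * (2.0 - x) - lam) / (2.0 + rho)  # argmin_y L(x, y, lam)
        lam += rho * (x + y - 2.0)                     # dual (multiplier) update
    return x, y, lam

x, y, lam = method_of_multipliers()
```

With exact inner minimization the dual error contracts by a factor 1/(1+ρ) per outer iteration, so a modest number of outer steps suffices on this toy problem.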
Diffusion strategies outperform consensus strategies for distributed estimation over adaptive networks
, 2012
"... ..."
Hierarchical Spatial Gossip for Multi-Resolution Representations in Sensor Networks
, 2007
"... In this paper we propose a lightweight algorithm for constructing multiresolution data representations for sensor networks. We compute, at each sensor node u, O(log n) aggregates about exponentially enlarging neighborhoods centered at u. The ith aggregate is the aggregated data among nodes approxim ..."
Abstract

Cited by 25 (10 self)
In this paper we propose a lightweight algorithm for constructing multi-resolution data representations for sensor networks. We compute, at each sensor node u, O(log n) aggregates about exponentially enlarging neighborhoods centered at u. The i-th aggregate is the aggregated data among nodes approximately within 2^i hops of u. We present a scheme, named the hierarchical spatial gossip algorithm, to extract and construct these aggregates, for all sensors simultaneously, with a total communication cost of O(n polylog n). The hierarchical gossip algorithm adopts atomic communication steps, with each node choosing to exchange information with a node at distance d away with probability proportional to 1/d^3. The attractiveness of the algorithm lies in its simplicity, low communication cost, distributed nature, and robustness to node and link failures. Besides the natural applications of multi-resolution data summaries in data validation and information mining, we also demonstrate the application of the precomputed spatial multi-resolution data summaries in answering range queries efficiently.
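The atomic step above, picking a partner at distance d with probability proportional to 1/d³, can be sketched as follows (node coordinates and the candidate set are illustrative assumptions; the paper works over multi-hop sensor fields):

```python
import random

def pick_partner(u, positions, rng):
    """Sample a gossip partner for node u with probability proportional to
    1/d^3, where d is the Euclidean distance to each candidate node."""
    candidates, weights = [], []
    for v, (px, py) in positions.items():
        if v == u:
            continue
        d = ((positions[u][0] - px) ** 2 + (positions[u][1] - py) ** 2) ** 0.5
        candidates.append(v)
        weights.append(1.0 / d ** 3)
    return rng.choices(candidates, weights=weights, k=1)[0]

# Nodes on a line at x = 0, 1, 2, 4: nearby partners dominate the samples,
# but distant ones are still reachable, which is what spreads information.
positions = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (2.0, 0.0), 3: (4.0, 0.0)}
rng = random.Random(0)
counts = {1: 0, 2: 0, 3: 0}
for _ in range(5000):
    counts[pick_partner(0, positions, rng)] += 1
```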
On the O(1/k) Convergence of Asynchronous Distributed Alternating Direction Method of Multipliers
, 2013
"... We consider a network of agents that are cooperatively solving a global optimization problem, where the objective function is the sum of privately known local objective functions of the agents and the decision variables are coupled via linear constraints. Recent literature focused on special cases o ..."
Abstract

Cited by 22 (0 self)
We consider a network of agents that are cooperatively solving a global optimization problem, where the objective function is the sum of privately known local objective functions of the agents and the decision variables are coupled via linear constraints. Recent literature has focused on special cases of this formulation and studied their distributed solution through either subgradient based methods with O(1/√k) rate of convergence (where k is the iteration number) or Alternating Direction Method of Multipliers (ADMM) based methods, which require a synchronous implementation and a globally known order on the agents. In this paper, we present a novel asynchronous ADMM based distributed method for the general formulation and show that it converges at the rate O(1/k).
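A minimal synchronous consensus-ADMM sketch on scalar quadratics conveys the update structure (the paper's contribution, asynchrony without a global agent order, is not reproduced here; this is the textbook synchronous special case):

```python
def consensus_admm(a, rho=1.0, iters=100):
    """min_x sum_i (x - a_i)^2 via consensus ADMM: local copies x_i,
    a shared variable z, and scaled dual variables u_i.
    x_i-update: argmin (x - a_i)^2 + (rho/2)(x - z + u_i)^2  (closed form)
    z-update:   average of x_i + u_i
    u_i-update: u_i += x_i - z"""
    n = len(a)
    x = [0.0] * n
    u = [0.0] * n
    z = 0.0
    for _ in range(iters):
        x = [(2.0 * a[i] + rho * (z - u[i])) / (2.0 + rho) for i in range(n)]
        z = sum(x[i] + u[i] for i in range(n)) / n
        u = [u[i] + x[i] - z for i in range(n)]
    return z

z = consensus_admm([1.0, 5.0])  # optimum is the mean of a, here 3.0
```

On strongly convex quadratics this iteration converges linearly; the O(1/k) rate in the paper is for the much broader convex setting.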
Push-sum distributed dual averaging for convex optimization
 in IEEE CDC
, 2012
"... Abstract — In this paper we extend and analyze the distributed dual averaging algorithm [1] to handle communication delays and general stochastic consensus protocols. Assuming each network link experiences some fixed bounded delay, we show that distributed dual averaging converges and the error dec ..."
Abstract

Cited by 20 (7 self)
In this paper we extend and analyze the distributed dual averaging algorithm [1] to handle communication delays and general stochastic consensus protocols. Assuming each network link experiences some fixed bounded delay, we show that distributed dual averaging converges and the error decays at a rate O(T^{-0.5}), where T is the number of iterations. This bound is an improvement over [1] by a logarithmic factor in T for networks of fixed size. Finally, we extend the algorithm to the case of using general non-averaging consensus protocols. We prove that the bias introduced in the optimization can be removed by a simple correction that depends on the stationary distribution of the consensus matrix.
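The push-sum primitive underlying this line of work can be sketched on a small directed graph (the topology and round count are illustrative; the paper couples this primitive with dual averaging):

```python
import random

def push_sum(values, out_neighbors, rounds=500, seed=0):
    """Push-sum averaging: every node carries a pair (value, weight), with
    weight starting at 1. Each round it keeps half of its pair and pushes
    the other half to a random out-neighbor. The ratio value/weight at
    every node converges to the global average, even on directed graphs
    where plain averaging with row-stochastic weights would be biased."""
    rng = random.Random(seed)
    n = len(values)
    x = list(values)
    w = [1.0] * n
    for _ in range(rounds):
        nx = [0.0] * n
        nw = [0.0] * n
        for i in range(n):
            for t in (i, rng.choice(out_neighbors[i])):  # keep half, push half
                nx[t] += x[i] / 2.0
                nw[t] += w[i] / 2.0
        x, w = nx, nw
    return [x[i] / w[i] for i in range(n)]

# Directed 3-cycle with one extra edge; the initial values average to 6.
out = {0: [1, 2], 1: [2], 2: [0]}
ratios = push_sum([3.0, 6.0, 9.0], out)
```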
Asynchronous Broadcast-Based Convex Optimization over a Network
, 2010
"... We consider a distributed multiagent network system where each agent has its own convex objective function, which can be evaluated with stochastic errors. The problem consists of minimizing the sum of the agent functions over a commonly known constraint set, but without a central coordinator and w ..."
Abstract

Cited by 19 (2 self)
We consider a distributed multi-agent network system where each agent has its own convex objective function, which can be evaluated with stochastic errors. The problem consists of minimizing the sum of the agent functions over a commonly known constraint set, but without a central coordinator and without agents sharing the explicit form of their objectives. We propose an asynchronous broadcast-based algorithm where the communications over the network are subject to random link failures. We investigate the convergence properties of the algorithm for a diminishing (random) step size and a constant step size, where each agent chooses its own step size independently of the other agents. Under some standard conditions on the gradient errors, we establish almost sure convergence of the method to an optimal point for diminishing step size. For constant step size, we establish some error bounds on the expected distance from the optimal point and the expected function value. We also provide numerical results.
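The broadcast communication pattern itself can be sketched in the scalar case with no gradients or noise (parameters illustrative): a random node broadcasts and every neighbor moves a fraction toward the broadcast value. Unlike pairwise gossip, broadcast gossip reaches consensus without exactly preserving the network average.

```python
import random

def broadcast_gossip(values, adjacency, gamma=0.5, rounds=2000, seed=0):
    """Asynchronous broadcast gossip: a random node wakes up and broadcasts
    its value; each neighbor moves a fraction gamma toward it. The broadcaster
    itself does not update, so the network average drifts, but all values
    still converge to a common point inside the initial value range."""
    rng = random.Random(seed)
    x = list(values)
    for _ in range(rounds):
        i = rng.randrange(len(x))            # node that wakes up and broadcasts
        for j in adjacency[i]:
            x[j] = (1.0 - gamma) * x[j] + gamma * x[i]
    return x

# Ring of 4 nodes; every update is a convex combination of current values.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
x = broadcast_gossip([0.0, 4.0, 12.0, 8.0], adj)
```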