Results 1–10 of 25
Distributed stochastic subgradient projection algorithms for convex optimization
 Journal of Optimization Theory and Applications
, 2010
"... Abstract. We consider a distributed multiagent network system where the goal is to minimize a sum of convex objective functions of the agents subject to a common convex constraint set. Each agent maintains an iterate sequence and communicates the iterates to its neighbors. Then, each agent combines ..."
Abstract

Cited by 87 (1 self)
 Add to MetaCart
(Show Context)
Abstract. We consider a distributed multi-agent network system where the goal is to minimize a sum of convex objective functions of the agents subject to a common convex constraint set. Each agent maintains an iterate sequence and communicates the iterates to its neighbors. Then, each agent combines weighted averages of the received iterates with its own iterate, and adjusts the iterate by using subgradient information (known with stochastic errors) of its own function and by projecting onto the constraint set. The goal of this paper is to explore the effects of stochastic subgradient errors on the convergence of the algorithm. We first consider the behavior of the algorithm in mean, and then the convergence with probability 1 and in mean square. We consider general stochastic errors that have uniformly bounded second moments, and obtain bounds on the limiting performance of the algorithm in mean for diminishing and nondiminishing stepsizes. When the means of the errors diminish, we prove that there is mean consensus between the agents and mean convergence to the optimum function value for diminishing stepsizes. When the mean errors diminish sufficiently fast, we strengthen the results to consensus and convergence of the iterates to an optimal solution with probability 1 and in mean square.
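The iteration described in this abstract (mix neighbors' iterates, take a noisy subgradient step, project onto the common constraint set) can be sketched as follows. The quadratic local objectives, the interval constraint set, the uniform weight matrix, the stepsize rule, and the noise level are all illustrative assumptions, not taken from the paper.

```python
import numpy as np

def project(x, lo=-1.0, hi=1.0):
    """Euclidean projection onto the common constraint set X = [lo, hi]."""
    return np.clip(x, lo, hi)

def distributed_step(x, W, subgrads, alpha, noise_std, rng):
    """One iteration: mix neighbors' iterates with weights W, take a
    stochastically perturbed subgradient step on the local objective,
    then project onto X."""
    mixed = W @ x                      # weighted average of received iterates
    g = np.array([subgrads[i](mixed[i]) for i in range(len(x))])
    g_noisy = g + noise_std * rng.standard_normal(len(x))
    return project(mixed - alpha * g_noisy)

rng = np.random.default_rng(0)
n = 4
# f_i(x) = (x - c_i)^2; the sum is minimized at mean(c) = 0.1
c = np.array([0.2, -0.4, 0.6, 0.0])
subgrads = [lambda x, ci=ci: 2.0 * (x - ci) for ci in c]
W = np.full((n, n), 1.0 / n)           # doubly stochastic mixing matrix

x = rng.uniform(-1, 1, n)
for k in range(1, 2001):               # diminishing stepsize alpha_k = 1/k
    x = distributed_step(x, W, subgrads, alpha=1.0 / k, noise_std=0.1, rng=rng)

print(np.round(x, 2))                  # all agents end near the optimum 0.1
```

The diminishing stepsize is what suppresses the persistent subgradient noise here; with a constant stepsize the iterates would only reach a noise-dependent neighborhood of the optimum, matching the error bounds the abstract mentions.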
Distributed parameter estimation in sensor networks: Nonlinear observation models and imperfect communication
 IEEE Transactions on Information Theory
, 2012
"... ar ..."
(Show Context)
On ergodicity, infinite flow and consensus in random models
, 2010
"... We consider the ergodicity and consensus problem for a discretetime linear dynamic model driven by random stochastic matrices, which is equivalent to studying these concepts for the product of such matrices. Our focus is on the model where the random matrices have independent but timevariant dist ..."
Abstract

Cited by 31 (14 self)
 Add to MetaCart
We consider the ergodicity and consensus problem for a discrete-time linear dynamic model driven by random stochastic matrices, which is equivalent to studying these concepts for the product of such matrices. Our focus is on the model where the random matrices have independent but time-variant distributions. We introduce a new phenomenon, the infinite flow, and we study its fundamental properties and its relations with ergodicity and consensus. The central result is the infinite flow theorem, which establishes the equivalence between the infinite flow and ergodicity for a class of independent random models in which the matrices have a common steady state in expectation and a feedback property. For such models, this result demonstrates that the expected infinite flow is both necessary and sufficient for ergodicity. The result provides a deterministic characterization of ergodicity, which can be used for studying consensus and average consensus over random graphs.
Distributed Asynchronous Constrained Stochastic Optimization
, 2011
"... In this paper we study two problems which often occur in various applications arising in wireless sensor networks. These are the problem of reaching an agreement on the value of local variables in a network of computational agents and the problem of cooperative solution to a convex optimization prob ..."
Abstract

Cited by 28 (2 self)
 Add to MetaCart
In this paper we study two problems which often arise in applications involving wireless sensor networks: the problem of reaching an agreement on the value of local variables in a network of computational agents, and the problem of cooperatively solving a convex optimization problem whose objective function is the aggregate sum of local convex objective functions. We incorporate a random communication graph between the agents into our model as a more realistic abstraction of the gossip and broadcast communication protocols of a wireless network. An added ingredient is the presence of local constraint sets to which the local variables of each agent are constrained. Our model allows the objective functions to be nondifferentiable and accommodates noisy communication links and subgradient errors. For the consensus problem we provide a diminishing step size algorithm which guarantees asymptotic convergence. The distributed optimization algorithm uses two diminishing step size sequences to account for communication noise and subgradient errors. We establish conditions on these step sizes under which we can achieve the dual task of reaching consensus and converging to the optimal set with probability one. In both cases we consider the constant step size behavior of the algorithm and establish asymptotic error bounds.
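The two-stepsize structure described above can be sketched in a few lines: one diminishing sequence weighs the noisy neighbor states, a second scales the noisy subgradient step, and each agent projects onto its own local constraint set. All problem data below (absolute-value objectives, the constraint intervals, the noise levels, and the two stepsize exponents) are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
# f_i(x) = |x - c_i|: nondifferentiable local objectives; the sum is
# minimized at x = 0.1 over the intersection of the constraint sets
c = np.array([-0.5, 0.1, 0.7])
# local constraint intervals X_i; their intersection is [0, 0.5]
boxes = [(-1.0, 0.5), (0.0, 1.0), (-0.5, 0.5)]

x = rng.uniform(-1, 1, n)
for k in range(1, 20001):
    beta = 1.0 / k ** 0.6          # consensus stepsize (handles link noise)
    alpha = 1.0 / k                # subgradient stepsize
    # every agent hears every other through a noisy link
    noisy = x[None, :] + 0.1 * rng.standard_normal((n, n))
    mix = x + beta * (noisy.mean(axis=1) - x)
    g = np.sign(mix - c) + 0.1 * rng.standard_normal(n)   # noisy subgradient
    # each agent projects onto its OWN constraint set X_i
    x = np.array([np.clip(mix[i] - alpha * g[i], *boxes[i]) for i in range(n)])

print(np.round(x, 2))  # iterates approach a common point near x = 0.1
```

Using a consensus stepsize that decays more slowly than the subgradient stepsize (here k^-0.6 versus k^-1) mirrors the separation of roles the abstract describes: agreement must be maintained faster than the optimization drift moves the iterates.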
Distributed consensus with limited communication data rate
 IEEE TRANSACTIONS ON AUTOMATIC CONTROL
, 2011
"... Communication data rate and energy constraints are important factors which have to be considered when investigating distributed coordination of multiagent networks. Although many proposed averageconsensus protocols are available, a fundamental theoretic problem remains open, namely, how many bits ..."
Abstract

Cited by 27 (3 self)
 Add to MetaCart
Communication data rate and energy constraints are important factors which must be considered when investigating distributed coordination of multi-agent networks. Although many average-consensus protocols have been proposed, a fundamental theoretic problem remains open: how many bits of information must each pair of adjacent agents exchange at each time step to ensure average consensus? In this paper, we consider average-consensus control of undirected networks of discrete-time first-order agents under communication constraints. Each agent has a real-valued state but can only exchange symbolic data with its neighbors. A distributed protocol is proposed based on dynamic encoding and decoding. It is proved that, under the protocol designed, average consensus can be achieved for a connected network with an exponential convergence rate based on merely one bit of information exchange between each pair of adjacent agents at each time step. An explicit form of the asymptotic convergence rate is given. It is shown that as the number of agents increases, the asymptotic convergence rate is related to the scale of the network, the number of quantization levels, and the ratio of the second smallest to the largest eigenvalue of the Laplacian of the communication graph. We also give a performance index to characterize the total communication energy needed to achieve average consensus, and show that minimizing the communication energy leads to a tradeoff between the convergence rate and the number of quantization levels.
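A toy sketch of the one-bit dynamic encoding/decoding idea follows: each agent transmits only the sign of its coding error, every decoder (sender included) integrates those bits with a shared, geometrically decaying scale, and the consensus update uses only the decoded estimates. The ring graph, gains, and scaling sequence are illustrative choices, and this simplified scheme is not the paper's exact protocol.

```python
import numpy as np

n = 4
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}  # ring graph

x = np.array([0.9, -0.7, 0.3, -0.1])   # real-valued initial states
x0_mean = x.mean()
e = np.zeros(n)        # decoder states: every agent keeps the same copies
h = 0.1                # consensus gain
s = 1.0                # quantizer scale, decays geometrically

for t in range(200):
    # Each agent transmits a single bit: the sign of its coding error.
    bits = np.sign(x - e)
    bits[bits == 0] = 1.0
    # All decoders (sender included) update the shared estimates.
    e = e + s * bits
    # The consensus update uses decoded estimates only, so the one-bit
    # channel is all the information that crosses each link; symmetry of
    # the graph keeps the network average invariant.
    x = x + h * np.array([sum(e[j] - e[i] for j in neighbors[i])
                          for i in range(n)])
    s *= 0.95          # exponentially decaying scaling function

print(np.round(x, 3))  # states cluster around the preserved average
```

Because both endpoints of a link apply identical decoder updates, the update term sums to zero over the undirected graph, so the average is preserved exactly even though each link carries a single bit per step.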
Optimal strategies in the average consensus problem
 in Proceedings of the IEEE Conference on Decision and Control
, 2007
"... Abstract — We prove that for a set of communicating agents to compute the average of their initial positions (average consensus problem), the optimal topology of communication is given by a de Bruijn’s graph. Consensus is then reached in a finitely many steps. A more general family of strategies, co ..."
Abstract

Cited by 13 (0 self)
 Add to MetaCart
(Show Context)
Abstract — We prove that for a set of communicating agents to compute the average of their initial positions (the average consensus problem), the optimal communication topology is given by a de Bruijn graph. Consensus is then reached in finitely many steps. A more general family of strategies, constructed by block Kronecker products, is investigated and compared to Cayley strategies.
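The finite-step claim can be illustrated concretely on a binary de Bruijn graph with N = 2^k nodes: node i averages nodes 2i mod N and (2i+1) mod N (its label left-shifted, with either bit appended), and after k such rounds every node holds the exact average. The specific initial values below are made up.

```python
import numpy as np

k = 3
N = 2 ** k
x = np.array([5.0, -1.0, 2.0, 4.0, 0.0, 3.0, -2.0, 7.0])
target = x.mean()   # 2.25

for _ in range(k):
    # Agent i averages its out-neighbors in the binary de Bruijn graph:
    # the nodes whose labels are i shifted left by one bit.
    x = np.array([(x[(2 * i) % N] + x[(2 * i + 1) % N]) / 2.0
                  for i in range(N)])

print(x)  # every agent now holds exactly 2.25, after only k = 3 steps
```

After round r, each node's value depends only on the last k - r bits of its label, so after k rounds every node has averaged over all N initial values; no iterative protocol on a graph of bounded in-degree 2 can do better than this finite horizon.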
Distributed consensus over network with noisy links
 in Proceedings of the 12th International Conference on Information Fusion, 2009
"... Abstract – We consider a distributed consensus problem where a set of agents want to agree on a common value through local computations and communications. We assume that agents communicate over a network with timevarying topology and noisy communication links. We are interested in the case when t ..."
Abstract

Cited by 12 (4 self)
 Add to MetaCart
(Show Context)
Abstract – We consider a distributed consensus problem where a set of agents want to agree on a common value through local computations and communications. We assume that agents communicate over a network with time-varying topology and noisy communication links. We are interested in the case when the link noise is independent in time, with zero mean and bounded variance. We present and study an iterative algorithm with a diminishing stepsize. We show that the algorithm converges in expectation and almost surely to a “random” consensus, and we characterize the statistics of the consensus. In particular, we give the expected value of the consensus and provide an upper bound on its variance.
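The qualitative behavior described above, agreement on a value that is itself random because of the link noise, is easy to reproduce in simulation. The complete graph, 1/k stepsize, and noise level below are illustrative assumptions, not the paper's setting.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
x = rng.uniform(-2, 2, n)

for k in range(1, 5001):
    alpha = 1.0 / k                    # diminishing stepsize
    # Each agent receives every state (including a noisy copy of its
    # own, for simplicity) through a zero-mean, bounded-variance link.
    received = x[None, :] + 0.2 * rng.standard_normal((n, n))
    x = x + alpha * (received.mean(axis=1) - x)

# The agents agree, but the agreed value is random: it sits near the
# initial average, shifted by the accumulated link noise.
print(np.round(x, 3))
```

Rerunning with different seeds shows the consensus value fluctuating from run to run while the agreement itself persists, which is exactly the distinction between the expected value of the consensus and its variance that the abstract draws.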
Analysis of consensus protocols with bounded measurement errors
"... This paper analyzes two classes of consensus algorithms in presence of bounded measurement errors. The considered protocols adopt an updating rule based either on constant or vanishing weights. Under the assumption of bounded error, the consensus problem is cast in a setmembership framework, and t ..."
Abstract

Cited by 5 (1 self)
 Add to MetaCart
This paper analyzes two classes of consensus algorithms in the presence of bounded measurement errors. The considered protocols adopt an updating rule based on either constant or vanishing weights. Under the bounded-error assumption, the consensus problem is cast in a set-membership framework, and the agreement of the team is studied by analyzing the evolution of the feasible state set. Bounds on the asymptotic difference between the states of the agents are explicitly derived in terms of the bounds on the measurement noise and the values of the weight matrix.
Distributed linear parameter estimation: Asymptotically efficient adaptive strategies
"... Abstract. This paper considers the problem of distributed adaptive linear parameter estimation in multiagent inference networks. Local sensing model information is only partially available at the agents, and interagent communication is assumed to be unpredictable. The paper develops a generic mixed ..."
Abstract

Cited by 4 (2 self)
 Add to MetaCart
(Show Context)
Abstract. This paper considers the problem of distributed adaptive linear parameter estimation in multi-agent inference networks. Local sensing model information is only partially available at the agents, and inter-agent communication is assumed to be unpredictable. The paper develops a generic mixed time-scale stochastic procedure consisting of simultaneous distributed learning and estimation, in which the agents adaptively assess their relative observation quality over time and fuse the innovations accordingly. Under rather weak assumptions on the statistical model and the inter-agent communication, it is shown that, by properly tuning the consensus potential with respect to the innovation potential, the asymptotic information rate loss incurred in the learning process may be made negligible. As such, it is shown that the agent estimates are asymptotically efficient, in that their asymptotic covariance coincides with that of a centralized estimator (the inverse of the centralized Fisher information rate for Gaussian systems) with perfect global model information and access to all observations at all times. The proof techniques are mainly based on convergence arguments for non-Markovian mixed time-scale stochastic approximation procedures. Several approximation results developed in the process are of independent interest.