Opinion Dynamics and Learning in Social Networks
, 2010
Abstract

Cited by 40 (0 self)
We provide an overview of recent research on belief and opinion dynamics in social networks. We discuss both Bayesian and non-Bayesian models of social learning and focus on the implications of the form of learning (e.g., Bayesian vs. non-Bayesian), the sources of information (e.g., observation vs. communication), and the structure of social networks in which individuals are situated, on three key questions: (1) whether social learning will lead to consensus, i.e., to agreement among individuals starting with different views; (2) whether social learning will effectively aggregate dispersed information and thus weed out incorrect beliefs; (3) whether media sources, prominent agents, politicians and the state will be able to manipulate beliefs and spread misinformation in a society.
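The non-Bayesian models surveyed here include linear-averaging ("DeGroot-style") updates. A minimal sketch, with made-up trust weights, of how repeated averaging drives initially different beliefs to agreement:

```python
import numpy as np

# Illustrative DeGroot-style non-Bayesian learning (weights are made up).
# Each agent replaces its belief with a weighted average of all beliefs:
# x(t+1) = W x(t), with W row-stochastic.
W = np.array([
    [0.5, 0.25, 0.25],
    [0.3, 0.4,  0.3 ],
    [0.2, 0.3,  0.5 ],
])  # trust weights; each row sums to 1

x = np.array([0.0, 0.5, 1.0])  # initial beliefs
for _ in range(100):
    x = W @ x

print(np.round(x, 4))  # all agents converge to a common belief
```

The consensus value is a weighted average of the initial beliefs, with weights given by the left Perron eigenvector of W; this is how such models formalize which agents are influential.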
An overview of recent progress in the study of distributed multiagent coordination
, 2012
On ergodicity, infinite flow and consensus in random models
, 2010
Abstract

Cited by 31 (14 self)
We consider the ergodicity and consensus problem for a discrete-time linear dynamic model driven by random stochastic matrices, which is equivalent to studying these concepts for the product of such matrices. Our focus is on the model where the random matrices have independent but time-variant distributions. We introduce a new phenomenon, the infinite flow, and study its fundamental properties and its relations with ergodicity and consensus. The central result is the infinite flow theorem, establishing the equivalence between the infinite flow and ergodicity for a class of independent random models in which the matrices have a common steady state in expectation and a feedback property. For such models, this result demonstrates that the expected infinite flow is both necessary and sufficient for ergodicity. The result provides a deterministic characterization of ergodicity, which can be used for studying consensus and average consensus over random graphs.
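Ergodicity of the matrix product can be seen numerically: a long product of (here, everywhere-positive, hence well-mixing) random stochastic matrices approaches a rank-one matrix with identical rows. This is a toy sketch of the phenomenon, not of the infinite flow theorem itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_stochastic(n):
    # random row-stochastic matrix; adding the identity keeps the
    # diagonal positive, so every sample is well-mixing
    A = rng.random((n, n)) + np.eye(n)
    return A / A.sum(axis=1, keepdims=True)

n = 4
P = np.eye(n)
for _ in range(200):
    P = random_stochastic(n) @ P  # backward product W(200)...W(1)

# Ergodicity: the product becomes (numerically) rank-one, i.e. all
# rows agree, so the initial condition is forgotten.
print(np.ptp(P, axis=0).max())
```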
Distributed Subgradient Methods for Convex Optimization over Random Networks
, 2009
Abstract

Cited by 27 (3 self)
We consider the problem of cooperatively minimizing the sum of convex functions, where the functions represent local objective functions of the agents. We assume that each agent has information about his local function and communicates with the other agents over a time-varying network topology. For this problem, we propose a distributed subgradient method that uses averaging algorithms for locally sharing information among the agents. In contrast to previous works on multi-agent optimization that make worst-case assumptions about the connectivity of the agents (such as bounded communication intervals between nodes), we assume that links fail according to a given stochastic process. Under the assumption that the link failures are independent and identically distributed over time (possibly correlated across links), we provide almost sure convergence results for our subgradient algorithm.
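A minimal sketch of the two ingredients described above, local averaging plus a diminishing-step subgradient move, on made-up quadratic objectives with i.i.d. Bernoulli link failures (the weight rule and problem data are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed setup: n agents minimize sum_i f_i(x) with
# f_i(x) = 0.5*(x - a_i)^2, whose optimum is mean(a).
a = np.array([1.0, 3.0, 5.0, 7.0])
n = len(a)
x = np.zeros(n)

for k in range(1, 2001):
    alive = rng.random((n, n)) < 0.5   # i.i.d. link failures
    alive = np.triu(alive, 1)
    alive = alive | alive.T            # undirected realized links
    W = alive / n                      # uniform off-diagonal weights
    np.fill_diagonal(W, 1.0 - W.sum(axis=1))  # doubly stochastic
    # mix with neighbors, then take a diminishing subgradient step
    x = W @ x - (1.0 / k) * (x - a)

print(np.round(x, 2))  # all estimates near mean(a) = 4.0
```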
On the O(1/k) Convergence of Asynchronous Distributed Alternating Direction Method of Multipliers
, 2013
Abstract

Cited by 22 (0 self)
We consider a network of agents that are cooperatively solving a global optimization problem, where the objective function is the sum of privately known local objective functions of the agents and the decision variables are coupled via linear constraints. Recent literature focused on special cases of this formulation and studied their distributed solution through either subgradient-based methods with an O(1/√k) rate of convergence (where k is the iteration number) or Alternating Direction Method of Multipliers (ADMM) based methods, which require a synchronous implementation and a globally known order on the agents. In this paper, we present a novel asynchronous ADMM-based distributed method for the general formulation and show that it converges at the rate O(1/k).
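For orientation, a sketch of the standard synchronous global-consensus ADMM that the asynchronous method relaxes; the splitting x_i = z, the quadratic problem data, and the penalty ρ are all illustrative choices, not the paper's formulation:

```python
import numpy as np

# Synchronous global-consensus ADMM sketch (assumed setup):
# minimize sum_i f_i(x), f_i(x) = 0.5*(x - a_i)^2,
# via local copies x_i coupled by the constraint x_i = z.
a = np.array([1.0, 2.0, 6.0])
n, rho = len(a), 1.0
x = np.zeros(n)
z = 0.0
u = np.zeros(n)  # scaled dual variables

for _ in range(100):
    x = (a + rho * (z - u)) / (1.0 + rho)  # local primal updates (closed form)
    z = np.mean(x + u)                     # consensus (averaging) step
    u = u + x - z                          # dual ascent updates

print(round(z, 4))  # -> 3.0, the minimizer of sum_i f_i
```

Note the z-update is a global average, which is exactly the synchronization point an asynchronous scheme must avoid.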
Distributed Multi-Agent Optimization with State-Dependent Communication
, 2010
Abstract

Cited by 22 (2 self)
We study distributed algorithms for solving global optimization problems in which the objective function is the sum of local objective functions of agents and the constraint set is given by the intersection of local constraint sets of agents. We assume that each agent knows only his own local objective function and constraint set, and exchanges information with the other agents over a randomly varying network topology to update his information state. We assume a state-dependent communication model over this topology: communication is Markovian with respect to the states of the agents, and the probability with which the links are available depends on the states of the agents. In this paper, we study a projected multi-agent subgradient algorithm under state-dependent communication. The algorithm involves each agent performing a local averaging to combine his estimate with the other agents' estimates, taking a subgradient step along his local objective function, and projecting the estimates …
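The three steps named in the abstract (average, subgradient step, project) can be sketched as follows; the fixed all-to-all weights, the piecewise-linear objectives, and the common interval constraint are illustrative simplifications, and the state-dependent Markovian communication is not modeled:

```python
import numpy as np

# Projected multi-agent subgradient sketch (illustrative data):
# minimize sum_i |x - a_i| over the common set X = [1, 4].
a = np.array([0.0, 2.0, 10.0])
lo, hi = 1.0, 4.0
n = len(a)
x = np.zeros(n)
W = np.full((n, n), 1.0 / n)  # simple all-to-all averaging weights

for k in range(1, 5001):
    v = W @ x                       # 1) average neighbors' estimates
    g = np.sign(v - a)              # 2) subgradient of |v - a_i|
    x = np.clip(v - g / k, lo, hi)  # 3) step, then project onto X
```

With these data the estimates settle near x = 2, the median of the a_i, which lies inside the constraint set.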
Almost sure convergence to consensus in Markovian random graphs
 in Proc. 47th IEEE CDC, Cancun
, 2008
Abstract

Cited by 18 (2 self)
In this paper we discuss the consensus problem for a network of dynamic agents with undirected information flow and random switching topologies. The switching is determined by a Markov chain, each topology corresponding to a state of the Markov chain. We show that in order to achieve consensus almost surely and from any initial state, the sets of graphs corresponding to the closed positive recurrent sets of the Markov chain must be jointly connected. The analysis relies on tools from matrix theory, Markovian jump linear systems theory and random processes theory. The distinctive feature of this work is addressing the consensus problem with “Markovian switching” topologies.
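A toy simulation of this setting: the active topology is the state of a two-state Markov chain, and since the two graphs are jointly connected, the agents' values contract to consensus almost surely (the weight matrices and transition probabilities are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two averaging topologies; neither is connected alone,
# but their union is, so joint connectivity holds.
W = [
    np.array([[0.5, 0.5, 0.0],     # topology 0: agents 1-2 average
              [0.5, 0.5, 0.0],
              [0.0, 0.0, 1.0]]),
    np.array([[1.0, 0.0, 0.0],     # topology 1: agents 2-3 average
              [0.0, 0.5, 0.5],
              [0.0, 0.5, 0.5]]),
]
P = np.array([[0.7, 0.3],          # Markov chain over topologies
              [0.4, 0.6]])

state = 0
x = np.array([0.0, 3.0, 9.0])
for _ in range(500):
    x = W[state] @ x
    state = rng.choice(2, p=P[state])

print(np.ptp(x))  # spread of the agents' values shrinks toward 0
```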
Distributed Subgradient Methods over Random Networks
, 2008
Abstract

Cited by 16 (3 self)
We consider the problem of cooperatively minimizing the sum of convex functions, where the functions represent local objective functions of the agents. We assume that each agent has information about his local function and communicates with the other agents over a time-varying network topology. For this problem, we propose a distributed subgradient method that uses averaging algorithms for locally sharing information among the agents. In contrast to previous works that make worst-case assumptions about the connectivity of the agents (such as bounded communication intervals between nodes), we assume that links fail according to a given stochastic process. Under the assumption that the link failures are independent and identically distributed over time (possibly correlated across links), we provide convergence results and convergence rate estimates for our subgradient algorithm.
Distributed randomized algorithms for the PageRank computation
 IEEE Trans. Autom. Control
, 2010
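For reference, the quantity being computed: the centralized PageRank power iteration on a made-up four-page graph (the paper's contribution is a distributed randomized scheme for the same fixed point):

```python
import numpy as np

# Centralized PageRank power iteration on a hypothetical tiny web graph.
links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
n, d = 4, 0.85  # number of pages, damping factor

# column-stochastic link matrix: column j spreads page j's rank
# uniformly over its outgoing links
M = np.zeros((n, n))
for src, outs in links.items():
    for dst in outs:
        M[dst, src] = 1.0 / len(outs)

r = np.full(n, 1.0 / n)
for _ in range(100):
    r = (1 - d) / n + d * (M @ r)

print(np.round(r, 3))  # ranks sum to 1; node 2 ranks highest here
```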
Convergence Analysis of Distributed Subgradient Methods over Random Networks
Abstract

Cited by 14 (1 self)
We consider the problem of cooperatively minimizing the sum of convex functions, where the functions represent local objective functions of the agents. We assume that each agent has information about his local function and communicates with the other agents over a time-varying network topology. For this problem, we propose a distributed subgradient method that uses averaging algorithms for locally sharing information among the agents. In contrast to previous works that make worst-case assumptions about the connectivity of the agents (such as bounded communication intervals between nodes), we assume that links fail according to a given stochastic process. Under the assumption that the link failures are independent and identically distributed over time (possibly correlated across links), we provide convergence results and convergence rate estimates for our subgradient algorithm.