Results 1 - 10 of 133
Distributed stochastic subgradient projection algorithms for convex optimization
- Journal of Optimization Theory and Applications, 2010
Cited by 87 (1 self)
Abstract. We consider a distributed multi-agent network system where the goal is to minimize a sum of convex objective functions of the agents subject to a common convex constraint set. Each agent maintains an iterate sequence and communicates the iterates to its neighbors. Then, each agent combines weighted averages of the received iterates with its own iterate, and adjusts the iterate by using subgradient information (known with stochastic errors) of its own function and by projecting onto the constraint set. The goal of this paper is to explore the effects of stochastic subgradient errors on the convergence of the algorithm. We first consider the behavior of the algorithm in mean, and then the convergence with probability 1 and in mean square. We consider general stochastic errors that have uniformly bounded second moments and obtain bounds on the limiting performance of the algorithm in mean for diminishing and non-diminishing stepsizes. When the means of the errors diminish, we prove that there is mean consensus between the agents and mean convergence to the optimum function value for diminishing stepsizes. When the mean errors diminish sufficiently fast, we strengthen the results to consensus and convergence of the iterates to an optimal solution with probability 1 and in mean square.
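The combine-then-project update described in this abstract can be sketched for a toy instance. Everything below (scalar iterates, a ring of agents, absolute-value objectives, the noise level, and the weight matrix) is an illustrative assumption, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy instance: agent i holds f_i(x) = |x - t_i| on the common constraint
# set X = [-1, 1]; the minimizer of sum_i f_i over X is the median of t.
n = 5
t = np.linspace(-0.8, 0.8, n)        # median (and optimum) is 0.0
x = rng.uniform(-1, 1, n)            # one scalar iterate per agent

# Doubly stochastic averaging weights for a ring topology.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

for k in range(1, 3001):
    step = 1.0 / k                        # diminishing stepsize
    v = W @ x                             # weighted average of neighbors' iterates
    g = np.sign(v - t)                    # subgradient of f_i at v_i
    g += 0.1 * rng.standard_normal(n)     # zero-mean stochastic error, bounded variance
    x = np.clip(v - step * g, -1.0, 1.0)  # step, then project onto X
```

With zero-mean errors of bounded second moment and stepsize 1/k, the iterates reach consensus near the optimum, which is the regime of the abstract's probability-1 result.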
Consensus-based decentralized auctions for robust task allocation
- IEEE Transactions on Robotics, 2009
Cited by 79 (28 self)
Abstract—This paper addresses task allocation to coordinate a fleet of autonomous vehicles by presenting two decentralized algorithms: the consensus-based auction algorithm (CBAA) and its generalization to the multi-assignment problem, i.e., the consensus-based bundle algorithm (CBBA). These algorithms utilize a market-based decision strategy as the mechanism for decentralized task selection and use a consensus routine based on local communication as the conflict resolution mechanism to achieve agreement on the winning bid values. Under reasonable assumptions on the scoring scheme, both of the proposed algorithms are proven to guarantee convergence to a conflict-free assignment, and it is shown that the converged solutions exhibit provable worst-case performance. It is also demonstrated that CBAA and CBBA produce conflict-free feasible solutions that are robust to both inconsistencies in the situational awareness across the fleet and variations in the communication network topology. Numerical experiments confirm superior convergence properties and performance when compared with existing auction-based task-allocation algorithms. Index Terms—Distributed robot systems, networked robots, task allocation for multiple mobile robots.
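A stripped-down, single-assignment version of the auction-plus-consensus idea can be written directly; the instance, ring topology, and round budget below are illustrative assumptions, and this is a sketch in the spirit of CBAA rather than the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
n_agents, n_tasks = 4, 4
score = rng.random((n_agents, n_tasks))       # private score s[i, j]; ties unlikely
neighbors = {i: [(i - 1) % n_agents, (i + 1) % n_agents] for i in range(n_agents)}

winning_bid = np.zeros((n_agents, n_tasks))   # highest bid agent i knows per task
winner = -np.ones((n_agents, n_tasks), dtype=int)  # who placed that bid
assignment = -np.ones(n_agents, dtype=int)

for _ in range(50):                           # generous round budget for this size
    # Auction phase: release a task if outbid, then bid on the best winnable task.
    for i in range(n_agents):
        if assignment[i] >= 0 and winner[i, assignment[i]] != i:
            assignment[i] = -1
        if assignment[i] < 0:
            winnable = score[i] > winning_bid[i]
            if winnable.any():
                j = int(np.argmax(np.where(winnable, score[i], -np.inf)))
                assignment[i] = j
                winning_bid[i, j] = score[i, j]
                winner[i, j] = i
    # Consensus phase: keep the largest bid heard from neighbors for each task.
    bid_next, win_next = winning_bid.copy(), winner.copy()
    for i in range(n_agents):
        for k in neighbors[i]:
            better = winning_bid[k] > bid_next[i]
            bid_next[i, better] = winning_bid[k, better]
            win_next[i, better] = winner[k, better]
    winning_bid, winner = bid_next, win_next
```

Because bids only increase and the highest bidder on a task is never outbid on it, the loop settles on a conflict-free assignment, which is the convergence property the paper proves for the full algorithms.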
On Distributed Convex Optimization Under Inequality and Equality Constraints
- University of California, San Diego, 2012
Cited by 52 (8 self)
We consider a general multi-agent convex optimization problem where the agents are to collectively minimize a global objective function subject to a global inequality constraint, a global equality constraint, and a global constraint set. The objective function is defined by a sum of local objective functions, while the global constraint set is produced by the intersection of local constraint sets. In particular, we study two cases: one where the equality constraint is absent, and the other where the local constraint sets are identical. We devise two distributed primal-dual subgradient algorithms based on the characterization of the primal-dual optimal solutions as the saddle points of the Lagrangian and penalty functions. These algorithms can be implemented over networks with dynamically changing topologies but satisfying a standard connectivity property, and allow the agents to asymptotically agree on optimal solutions and optimal values of the optimization problem under Slater's condition.
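The saddle-point characterization behind these algorithms can be illustrated by a single-agent, one-dimensional primal-dual subgradient iteration; the problem instance and stepsize below are illustrative assumptions:

```python
# Minimize f(x) = (x - 2)^2 subject to g(x) = x - 1 <= 0.
# Lagrangian L(x, mu) = (x - 2)^2 + mu * (x - 1); saddle point: x* = 1, mu* = 2.
alpha = 0.05                              # constant stepsize
x, mu = 0.0, 0.0
for _ in range(2000):
    grad_x = 2.0 * (x - 2.0) + mu         # subgradient of L in x (a gradient here)
    grad_mu = x - 1.0                     # gradient of L in mu
    x = x - alpha * grad_x                # primal descent step
    mu = max(0.0, mu + alpha * grad_mu)   # dual ascent, projected onto mu >= 0
```

The distributed versions in the paper interleave such primal-dual steps with consensus averaging across agents.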
Incremental stochastic subgradient algorithms for convex optimization
- SIAM J. Optim., 2008
Cited by 49 (7 self)
In this paper we study the effect of stochastic errors on two constrained incremental subgradient algorithms. We view the incremental subgradient algorithms as decentralized network optimization algorithms applied to minimizing a sum of functions when each component function is known only to a particular agent of a distributed network. We first study the standard cyclic incremental subgradient algorithm in which the agents form a ring structure and pass the iterate in a cycle. We consider the method with stochastic errors in the subgradient evaluations and provide sufficient conditions on the moments of the stochastic errors that guarantee almost sure convergence when a diminishing step-size is used. We also obtain almost sure bounds on the algorithm’s performance when a constant step-size is used. We then consider the Markov randomized incremental subgradient method, which is a non-cyclic version of the incremental algorithm where the sequence of computing agents is modeled as a time non-homogeneous Markov chain. Such a model is appropriate for mobile networks, as the network topology changes over time in these networks. We establish convergence results and error bounds for the Markov randomized method in the presence of stochastic errors for diminishing and constant step-sizes, respectively.
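The cyclic variant can be sketched on a toy ring: a single scalar iterate is passed around the cycle, and each agent applies one noisy subgradient step. The objectives, noise level, and constraint set are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
# Ring of agents; agent i holds f_i(x) = |x - t_i|. The minimizer of the
# sum over X = [-10, 10] is the median of t.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])     # median (and optimum) is 2.0
x = 10.0                                     # iterate passed around the ring
for k in range(1, 2001):
    step = 1.0 / k                           # diminishing stepsize
    for ti in t:                             # one full cycle over the agents
        g = np.sign(x - ti)                  # subgradient of |x - t_i|
        g += 0.05 * rng.standard_normal()    # zero-mean stochastic error
        x = np.clip(x - step * g, -10, 10)   # project back onto X
```

With zero-mean errors and a diminishing step-size, the iterate converges to the optimum, matching the almost-sure convergence regime analyzed in the paper.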
Spread of (mis)information in social networks, 2009
Cited by 43 (7 self)
We provide a model to investigate the tension between information aggregation and spread of misinformation in large societies (conceptualized as networks of agents communicating with each other). Each individual holds a belief represented by a scalar. Individuals meet pairwise and exchange information, which is modeled as both individuals adopting the average of their pre-meeting beliefs. When all individuals engage in this type of information exchange, the society will be able to effectively aggregate the initial information held by all individuals. There is also the possibility of misinformation, however, because some of the individuals are “forceful,” meaning that they influence the beliefs of (some of) the other individuals they meet, but do not change their own opinion. The paper characterizes how the presence of forceful agents interferes with information aggregation. Under the assumption that even forceful agents obtain some information (however infrequently) from some others (and additional weak regularity conditions), we first show that beliefs in this class of societies converge to a consensus among all individuals. This consensus value is a random variable, however, and we characterize its behavior. Our main results quantify the extent of misinformation in the society by either providing bounds or exact results (in some special cases) on how far the consensus value can be from the benchmark without forceful agents (where there is efficient information aggregation). The worst outcomes obtain when there are several forceful agents and forceful agents themselves update their beliefs only on the basis of information they obtain from individuals most likely to have received their own information previously.
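The averaging model can be simulated directly. The sketch below adds one “forceful” agent who usually influences whoever it meets without updating its own belief; the population size, meeting model, and listening probability are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20
beliefs = rng.random(n)
benchmark = beliefs.mean()          # consensus value absent forceful agents

FORCEFUL = 0                        # agent 0 listens only 10% of the time
for _ in range(50000):
    i, j = rng.choice(n, size=2, replace=False)   # a pairwise meeting
    if FORCEFUL in (i, j) and rng.random() > 0.1:
        other = j if i == FORCEFUL else i
        # One-sided influence: the other agent moves, the forceful one does not.
        beliefs[other] = 0.5 * (beliefs[other] + beliefs[FORCEFUL])
    else:
        beliefs[i] = beliefs[j] = 0.5 * (beliefs[i] + beliefs[j])
```

Because agent 0 still listens occasionally, beliefs converge to a (random) consensus, but one biased toward agent 0's information rather than the efficient benchmark, which is the distortion the paper quantifies.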
Distributed control of robotic networks: a mathematical approach to motion coordination algorithms, 2009
Cited by 41 (1 self)
(i) You are allowed to freely download, share, print, or photocopy this document. (ii) You are not allowed to modify, sell, or claim authorship of any part of this document. (iii) We thank you for any feedback, including errors, suggestions, evaluations, and teaching or research uses. “Distributed Control of Robotic Networks” by F. Bullo, J. Cortés and S. Martínez
Opinion Dynamics and Learning in Social Networks, 2010
Cited by 40 (0 self)
We provide an overview of recent research on belief and opinion dynamics in social networks. We discuss both Bayesian and non-Bayesian models of social learning and focus on the implications of the form of learning (e.g., Bayesian vs. non-Bayesian), the sources of information (e.g., observation vs. communication), and the structure of social networks in which individuals are situated on three key questions: (1) whether social learning will lead to consensus, i.e., to agreement among individuals starting with different views; (2) whether social learning will effectively aggregate dispersed information and thus weed out incorrect beliefs; (3) whether media sources, prominent agents, politicians and the state will be able to manipulate beliefs and spread misinformation in a society.
A distributed Newton method for network utility maximization, 2010
Cited by 38 (5 self)
Abstract — Most existing work uses dual decomposition and subgradient methods to solve Network Utility Maximization (NUM) problems in a distributed manner, but these suffer from slow convergence. This work develops an alternative distributed Newton-type fast-converging algorithm for solving network utility maximization problems with self-concordant utility functions. By using novel matrix splitting techniques, both primal and dual updates for the Newton step can be computed using iterative schemes in a decentralized manner with limited information exchange. Similarly, the stepsize can be obtained via an iterative consensus-based averaging scheme. We show that even when the Newton direction and the stepsize in our method are computed within some error (due to finite truncation of the iterative schemes), the resulting objective function value still converges superlinearly to an explicitly characterized error neighborhood. Simulation results demonstrate significant convergence rate improvement of our algorithm relative to the existing subgradient methods based on dual decomposition.
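For contrast, the dual-decomposition/subgradient baseline the paper compares against can be written in a few lines for a toy NUM instance with logarithmic utilities; the two-link, three-source topology and stepsize are illustrative assumptions:

```python
import numpy as np

# maximize sum_s log(x_s)  subject to  R x <= c  (link capacity constraints)
R = np.array([[1.0, 0.0, 1.0],      # link-route incidence: source 3 crosses
              [0.0, 1.0, 1.0]])     # both links, sources 1 and 2 one each
c = np.array([1.0, 1.0])            # link capacities
p = np.ones(2)                      # link prices (dual variables)
alpha = 0.01

for _ in range(20000):
    q = np.maximum(R.T @ p, 1e-9)   # total path price seen by each source
    x = 1.0 / q                     # argmax_x of log(x) - q*x, solved locally
    p = np.maximum(0.0, p + alpha * (R @ x - c))   # projected dual subgradient
```

By symmetry the optimum here is x = (2/3, 2/3, 1/3) with both link prices equal to 3/2; the dual subgradient iteration approaches it only gradually, which is the slow convergence the Newton-type method is designed to overcome.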
An overview of recent progress in the study of distributed multi-agent coordination, 2012
Self-improving algorithms
- in SODA ’06: Proceedings of the Seventeenth Annual ACM-SIAM Symposium on Discrete Algorithms
Cited by 33 (6 self)
We investigate ways in which an algorithm can improve its expected performance by fine-tuning itself automatically with respect to an arbitrary, unknown input distribution. We give such self-improving algorithms for sorting and computing Delaunay triangulations. The highlights of this work: (i) an algorithm to sort a list of numbers with optimal expected limiting complexity; and (ii) an algorithm to compute the Delaunay triangulation of a set of points with optimal expected limiting complexity. In both cases, the algorithm begins with a training phase during which it adjusts itself to the input distribution, followed by a stationary regime in which the algorithm settles to its optimized incarnation.
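The training-phase/stationary-regime structure can be sketched for sorting, with learned bucket boundaries standing in for the paper's more refined per-position structures; the function names, bucket count, and input distribution are illustrative assumptions:

```python
import bisect
import random

random.seed(4)

def train(sample_inputs, n_buckets):
    """Training phase: learn bucket boundaries as empirical quantiles."""
    pooled = sorted(v for xs in sample_inputs for v in xs)
    return [pooled[len(pooled) * k // n_buckets] for k in range(1, n_buckets)]

def self_improving_sort(xs, boundaries):
    """Stationary regime: bucket by the learned boundaries, sort each bucket.
    Correct for any input; fast when the input matches the trained distribution."""
    buckets = [[] for _ in range(len(boundaries) + 1)]
    for v in xs:
        buckets[bisect.bisect_left(boundaries, v)].append(v)
    out = []
    for b in buckets:
        out.extend(sorted(b))   # buckets stay small under the trained distribution
    return out

def draw():
    return [random.gauss(0.0, 1.0) for _ in range(200)]

boundaries = train([draw() for _ in range(50)], n_buckets=32)
```

Concatenating the sorted buckets yields a correctly sorted list regardless of the boundaries; the learned quantiles only affect speed, which mirrors the paper's distinction between worst-case correctness and expected limiting complexity.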