Results 1–10 of 51
private communication
Abstract

Cited by 88 (6 self)
A rigid interval graph is an interval graph that has only one clique tree. In 2009, Panda and Das showed that all connected unit interval graphs are rigid interval graphs. Generalizing the two classic graph search algorithms, Lexicographic Breadth-First Search (LBFS) and Maximum Cardinality Search (MCS), Corneil and Krueger proposed in 2008 the so-called Maximal Neighborhood Search (MNS) and showed that one sweep of MNS is enough to recognize chordal graphs. We develop the MNS properties of rigid interval graphs and characterize this graph class in several different ways. This allows us to obtain several linear-time multi-sweep MNS algorithms for recognizing rigid interval graphs and unit interval graphs, generalizing a corresponding 3-sweep LBFS algorithm for unit interval graph recognition designed by Corneil in 2004. For unit interval graphs, we even present a new linear-time 2-sweep MNS certifying recognition algorithm.
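Since the abstract builds on LBFS sweeps, a minimal sketch of Lexicographic Breadth-First Search may help. This uses the simple label-list formulation rather than the linear-time partition-refinement implementation the paper's complexity claims would require, and the path graph below is an assumed toy example.

```python
def lbfs(adj, start):
    """Lexicographic BFS: repeatedly visit the unvisited vertex with the
    lexicographically largest label, stamping its unvisited neighbors."""
    labels = {v: [] for v in adj}
    labels[start] = [len(adj)]            # force the start vertex to go first
    unvisited = set(adj)
    order = []
    while unvisited:
        v = max(unvisited, key=lambda u: labels[u])   # lexicographic max
        unvisited.remove(v)
        order.append(v)
        for w in adj[v]:
            if w in unvisited:
                labels[w].append(len(unvisited))      # decreasing stamps
    return order

# Usage on an assumed toy path graph a - b - c - d:
order = lbfs({'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd'], 'd': ['c']}, 'a')
```

On a path started from an endpoint, LBFS simply walks the path, which is the deterministic case worth checking first; on general graphs ties between equal labels may be broken arbitrarily.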
Distributed Multi-Agent Optimization with State-Dependent Communication
, 2010
Abstract

Cited by 23 (2 self)
We study distributed algorithms for solving global optimization problems in which the objective function is the sum of local objective functions of agents and the constraint set is given by the intersection of local constraint sets of agents. We assume that each agent knows only his own local objective function and constraint set, and exchanges information with the other agents over a randomly varying network topology to update his information state. We assume a state-dependent communication model over this topology: communication is Markovian with respect to the states of the agents, and the probability with which the links are available depends on the states of the agents. In this paper, we study a projected multi-agent subgradient algorithm under state-dependent communication. The algorithm involves each agent performing a local averaging to combine his estimate with the other agents' estimates, taking a subgradient step along his local objective function, and projecting the estimates onto his local constraint set.
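The three-part iteration described above (average, subgradient step, project) can be sketched for a synchronous round with a fixed network; the scalar states, absolute-value costs, shared box constraint, and weight matrix below are illustrative assumptions, not the paper's state-dependent Markovian model.

```python
import numpy as np

def projected_subgradient_round(x, A, c, alpha, lo=-1.0, hi=1.0):
    """One round: local averaging, subgradient step for f_i(x) = |x - c_i|,
    then projection onto the (here shared) box constraint [lo, hi]."""
    v = A @ x                              # average with neighbors' estimates
    g = np.sign(v - c)                     # subgradient of |x - c_i| at v_i
    return np.clip(v - alpha * g, lo, hi)  # project onto the constraint set

# Usage: three agents, doubly stochastic weights, targets c = (0.2, 0.4, 0.9);
# the sum cost is minimized at the median, 0.4.
A = np.array([[0.5, 0.25, 0.25],
              [0.25, 0.5, 0.25],
              [0.25, 0.25, 0.5]])
c = np.array([0.2, 0.4, 0.9])
x = np.array([0.0, 1.0, -1.0])
for k in range(200):
    x = projected_subgradient_round(x, A, c, alpha=0.5 / (k + 1))
```

With the diminishing step size, the agents' estimates contract toward a common value near the minimizer of the sum of the local costs.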
Newton-Raphson consensus for distributed convex optimization
 In CDC and European Control Conference
, 2011
Abstract

Cited by 21 (9 self)
Abstract — We study the problem of unconstrained distributed optimization in the context of multi-agent systems subject to limited communication connectivity. In particular, we focus on the minimization of a sum of convex cost functions, where each component of the global function is available only to a specific agent and can thus be seen as a private local cost. The agents need to cooperate to compute the minimizer of the sum of all costs. We propose a consensus-like strategy to estimate a Newton-Raphson descent update for the local estimates of the global minimizer at each agent. In particular, the algorithm is based on the separation-of-time-scales principle, and it is proved to converge to the global minimizer if a specific parameter that tunes the rate of convergence is chosen sufficiently small. We also provide numerical simulations and compare them with alternative distributed optimization strategies such as the Alternating Direction Method of Multipliers and the Distributed Subgradient Method. Index Terms — distributed optimization, convex optimization, consensus algorithms, multi-agent systems, Newton-Raphson methods
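The core idea (agents reach consensus on the numerator and denominator of a Newton step) can be sketched with an explicit separation of time scales: many averaging sweeps per Newton update. The scalar quadratic costs and weight matrix are illustrative assumptions; the paper's actual algorithm runs both dynamics on a single time scale tuned by a small parameter.

```python
import numpy as np

def newton_raphson_consensus(a, b, A, outer=30, inner=50):
    """Toy Newton-Raphson consensus for local costs f_i(x) = a_i/2 (x - b_i)^2.
    Each outer round: form local Newton ingredients, average them, step."""
    x = np.zeros_like(b)                  # local estimates of the minimizer
    for _ in range(outer):
        g = a * x - a * (x - b)           # f_i''(x_i) x_i - f_i'(x_i) = a_i b_i
        h = a.copy()                      # f_i''(x_i) = a_i
        for _ in range(inner):            # fast time scale: consensus averaging
            g, h = A @ g, A @ h
        x = g / h                         # Newton update from averaged data
    return x

# Usage: the minimizer of sum_i a_i/2 (x - b_i)^2 is sum(a*b)/sum(a) = 7/6.
a = np.array([1.0, 2.0, 3.0])
b = np.array([1.0, 0.0, 2.0])
A = np.array([[0.5, 0.25, 0.25],
              [0.25, 0.5, 0.25],
              [0.25, 0.25, 0.5]])
x = newton_raphson_consensus(a, b, A)
```

For quadratics the Newton ingredients are constants, so a single outer round already lands on the exact minimizer once the averages have mixed; general convex costs are the interesting case the paper analyzes.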
Achieving Pareto Optimality Through Distributed Learning
, 2012
Abstract

Cited by 19 (5 self)
We propose a simple payoff-based learning rule that is completely decentralized and that leads to an efficient configuration of actions in any n-person finite strategic-form game with generic payoffs. The algorithm follows the theme of exploration versus exploitation and is hence stochastic in nature. We prove that if all agents adhere to this algorithm, then the agents will select the action profile that maximizes the sum of the agents' payoffs a high percentage of the time. The algorithm requires no communication. Agents respond solely to changes in their own realized payoffs, which are affected by the actions of other agents in the system in ways that they do not necessarily understand. The method can be applied to the optimization of complex systems with many distributed components, such as the routing of information in networks and the design and control of wind farms. The proof of the proposed learning algorithm relies on the theory of large deviations for perturbed Markov chains.
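A much-simplified experimentation rule in the spirit of the abstract can illustrate the explore-versus-exploit theme: agents occasionally try a random action and keep it only if their own realized payoff improved. The paper's actual rule uses mood states and a large-deviations analysis; the 2x2 coordination game below, where (1, 1) maximizes the payoff sum, is an assumed example.

```python
import random

# Toy 2x2 game: joint action (1, 1) maximizes the sum of payoffs.
PAYOFF = {(0, 0): (2, 2), (0, 1): (0, 0), (1, 0): (0, 0), (1, 1): (3, 3)}

def play(rounds=5000, eps=0.2, seed=0):
    """Payoff-based experimentation: with prob. eps each agent trials a
    random action; it adopts the trial only if its realized payoff improved.
    Returns the fraction of rounds spent at the welfare-optimal profile."""
    rng = random.Random(seed)
    action = [0, 0]
    baseline = list(PAYOFF[(0, 0)])       # best realized payoff seen so far
    count_opt = 0
    for _ in range(rounds):
        trial = [a if rng.random() > eps else rng.randrange(2) for a in action]
        pay = PAYOFF[tuple(trial)]
        for i in range(2):                # no communication: own payoff only
            if pay[i] > baseline[i]:
                action[i], baseline[i] = trial[i], pay[i]
        count_opt += tuple(action) == (1, 1)
    return count_opt / rounds
```

Here a unilateral experiment is always rejected (it yields payoff 0), so the agents escape (0, 0) only when they happen to experiment simultaneously, after which (1, 1) is absorbing; the realized fraction of optimal play therefore approaches one as the horizon grows.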
A distributed simplex algorithm for degenerate linear programs and multi-agent assignment. Automatica
, 2011
Abstract

Cited by 14 (2 self)
In this paper we propose a novel distributed algorithm to solve degenerate linear programs on asynchronous peer-to-peer networks with distributed information structures. We propose a distributed version of the well-known simplex algorithm for general degenerate linear programs. A network of agents, running our algorithm, will agree on a common optimal solution, even if the optimal solution is not unique, or will determine infeasibility or unboundedness of the problem. We establish how the multi-agent assignment problem can be efficiently solved by means of our distributed simplex algorithm. We provide simulations supporting the conjecture that the completion time scales linearly with the diameter of the communication graph.
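For reference, the multi-agent assignment problem the abstract mentions is small enough to state and solve by brute force on tiny instances; this centralized baseline (not the paper's distributed simplex) just fixes what "a common optimal solution" means: a minimum-cost perfect matching of agents to tasks.

```python
from itertools import permutations

def assignment_bruteforce(C):
    """Minimum-cost assignment for an n x n cost matrix C (list of lists):
    enumerate all permutations p mapping agent i to task p[i]."""
    n = len(C)
    best = min(permutations(range(n)),
               key=lambda p: sum(C[i][p[i]] for i in range(n)))
    return sum(C[i][best[i]] for i in range(n)), best

# Usage: two agents, two tasks; the diagonal assignment costs 1 + 1 = 2.
cost, match = assignment_bruteforce([[1, 2], [2, 1]])
```

Enumeration is factorial-time, which is exactly why a distributed LP formulation matters for larger networks; the baseline is only useful to check small cases.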
Distributed continuous-time convex optimization on weight-balanced digraphs
, 2013
Abstract

Cited by 13 (0 self)
This paper studies the continuous-time distributed optimization of a sum of convex functions over directed graphs. Contrary to what is known in the consensus literature, where the same dynamics works for both undirected and directed scenarios, we show that the consensus-based dynamics that solves the continuous-time distributed optimization problem for undirected graphs fails to converge when transcribed to the directed setting. This study sets the basis for the design of an alternative distributed dynamics, which we show is guaranteed to converge, on any strongly connected weight-balanced digraph, to the set of minimizers of a sum of convex differentiable functions with globally Lipschitz gradients. Our technical approach combines notions of invariance and cocoercivity with the positive definiteness properties of graph matrices to establish the results.
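The undirected baseline the abstract refers to can be sketched as a forward-Euler discretization of the standard consensus-plus-gradient saddle dynamics (xdot = -grad f(x) - Lx - Lz, zdot = Lx); the scalar quadratic costs, triangle graph, and step size below are illustrative assumptions, and this is the dynamics that the paper shows fails on general digraphs, not its weight-balanced remedy.

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])          # local costs f_i(x) = a_i/2 (x - b_i)^2
b = np.array([1.0, 0.0, 2.0])          # sum-cost minimizer: sum(a*b)/sum(a) = 7/6
L = np.array([[ 2.0, -1.0, -1.0],      # Laplacian of the (undirected) triangle
              [-1.0,  2.0, -1.0],
              [-1.0, -1.0,  2.0]])

x = np.zeros(3)                         # primal estimates, one per node
z = np.zeros(3)                         # auxiliary (integrator) states
h = 0.01                                # Euler step size
for _ in range(20000):
    grad = a * (x - b)
    # simultaneous update: both right-hand sides use the old (x, z)
    x, z = x + h * (-grad - L @ x - L @ z), z + h * (L @ x)
```

At an equilibrium, Lx = 0 forces consensus and summing the x-equation forces the total gradient to vanish, so all states settle at the minimizer of the sum; on an unbalanced digraph Laplacian this argument breaks down, which is the paper's starting point.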
Designing Games for Distributed Optimization
Abstract

Cited by 12 (2 self)
Abstract — The central goal in multi-agent systems is to design local control laws for the individual agents to ensure that the emergent global behavior is desirable with respect to a given system-level objective. Ideally, a system designer seeks to satisfy this goal while conditioning each agent's control law on the least amount of information possible. Unfortunately, there are no existing methodologies for addressing this design challenge. The goal of this paper is to address this challenge using the field of game theory. Utilizing game theory for the design and control of multi-agent systems requires two steps: (i) defining a local objective function for each decision maker and (ii) specifying a distributed learning algorithm to reach a desirable operating point. One of the core advantages of this game-theoretic approach is that this two-step process can be decoupled by utilizing specific classes of games. For example, if the designed objective functions result in a potential game, then the system designer can utilize distributed learning algorithms for potential games to complete step (ii) of the design process. Unfortunately, designing agent objective functions to meet objectives such as locality of information and efficiency of resulting equilibria within the framework of potential games is fundamentally challenging and in many cases impossible. In this paper we develop a systematic methodology for meeting these objectives using a broader framework of games termed state-based potential games. State-based potential games are an extension of potential games in which an additional state variable is introduced into the game environment, hence permitting more flexibility in the design space. Furthermore, state-based potential games possess an underlying structure that can be exploited by distributed learning algorithms in a similar fashion to potential games, hence providing a new baseline for our decomposition.
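Step (i) above can be made concrete with the classic marginal-contribution ("wonderful life") utility design, one standard way to obtain a potential game; the two-agent resource-covering game below is an assumed toy example, not taken from the paper.

```python
from itertools import product

def welfare(profile):
    """System-level objective: number of distinct resources covered."""
    return len(set(profile))

def utility(i, profile):
    """Agent i's utility = its marginal contribution to the welfare."""
    others = [p for j, p in enumerate(profile) if j != i]
    return welfare(profile) - len(set(others))

# Potential-game check: any unilateral deviation changes the deviator's
# utility by exactly the change in welfare, so welfare is an exact potential.
for profile in product(range(2), repeat=2):
    for i in range(2):
        for dev in range(2):
            q = list(profile)
            q[i] = dev
            assert (utility(i, tuple(q)) - utility(i, profile)
                    == welfare(tuple(q)) - welfare(profile))
```

With such utilities, any learning algorithm that works in potential games applies in step (ii); the paper's point is that this class of designs is too restrictive, motivating the state-based extension.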
Distributed constrained optimization by consensus-based primal-dual perturbation method," submitted to IEEE Trans. Automatic Control. Available on arxiv.org
Fast distributed gradient methods
 IEEE Trans. Autom. Control
"... Abstract—We study distributed optimization problems when nodes minimize the sum of their individual costs subject to a common vector variable. The costs are convex, have Lipschitz continuous gradient (with constant), and bounded gradient. We propose two fast distributed gradient algorithms based on ..."
Abstract

Cited by 8 (1 self)
Abstract—We study distributed optimization problems in which nodes minimize the sum of their individual costs subject to a common vector variable. The costs are convex, have Lipschitz continuous gradient (with constant ...), and bounded gradient. We propose two fast distributed gradient algorithms based on the centralized Nesterov gradient algorithm and establish their convergence rates in terms of the per-node communications and the per-node gradient evaluations. Our first method, Distributed Nesterov Gradient, achieves rates ... and .... Our second method, Distributed Nesterov gradient with Consensus iterations, assumes that all nodes know ... and the second-largest singular value of the doubly stochastic weight matrix. It achieves rates ... and ... (... arbitrarily small). Further, for both methods we give the explicit dependence of the convergence constants on ... and .... Simulation examples illustrate our findings. Index Terms—Consensus, convergence rate, distributed optimization, Nesterov gradient.
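The basic shape of a distributed Nesterov-style iteration can be sketched as each node mixing with its neighbors, taking a gradient step, and then extrapolating; the quadratic costs, weight matrix, diminishing step size, and momentum schedule below are illustrative choices in the spirit of the abstract, not the paper's exact algorithm or parameters.

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])          # local costs f_i(x) = a_i/2 (x - b_i)^2
b = np.array([1.0, 0.0, 2.0])          # global minimizer: sum(a*b)/sum(a) = 7/6
W = np.array([[0.5, 0.25, 0.25],       # doubly stochastic mixing matrix
              [0.25, 0.5, 0.25],
              [0.25, 0.25, 0.5]])

x = np.zeros(3)                         # per-node estimates
y = x.copy()                            # extrapolated (momentum) sequence
for k in range(2000):
    grad = a * (y - b)                          # local gradients at y
    x_new = W @ y - (0.5 / (k + 1)) * grad      # mix + diminishing gradient step
    y = x_new + (k / (k + 3.0)) * (x_new - x)   # Nesterov-style extrapolation
    x = x_new
```

The diminishing step size is what keeps the momentum-amplified consensus error under control in analyses of this family of methods; with a constant step the disagreement term need not vanish.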