Results 1–10 of 10
Achieving Pareto Optimality Through Distributed Learning, 2012
"... We propose a simple payoffbased learning rule that is completely decentralized, and that leads to an efficient configuration of actions in any nperson finite strategicform game with generic payoffs. The algorithm follows the theme of exploration versus exploitation and is hence stochastic in natu ..."
Abstract

Cited by 19 (5 self)
 Add to MetaCart
We propose a simple payoff-based learning rule that is completely decentralized, and that leads to an efficient configuration of actions in any n-person finite strategic-form game with generic payoffs. The algorithm follows the theme of exploration versus exploitation and is hence stochastic in nature. We prove that if all agents adhere to this algorithm, then the agents will select the action profile that maximizes the sum of the agents' payoffs a high percentage of the time. The algorithm requires no communication. Agents respond solely to changes in their own realized payoffs, which are affected by the actions of other agents in the system in ways that they do not necessarily understand. The method can be applied to the optimization of complex systems with many distributed components, such as the routing of information in networks and the design and control of wind farms. The proof of the proposed learning algorithm relies on the theory of large deviations for perturbed Markov chains.
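As an illustration of the exploration-versus-exploitation theme, here is a minimal payoff-based rule in the same spirit (a hedged sketch, not the paper's exact algorithm): each agent experiments with a small probability and keeps a change only if its own realized payoff did not drop. The 2x2 team game and all numbers are hypothetical.

```python
import random

# A 2x2 team game: each agent's payoff is the shared welfare W[a1][a2].
# The efficient profile (1, 1) maximizes the sum of payoffs.
W = [[1.0, 0.0],
     [0.0, 2.0]]

def run(eps=0.1, steps=20000, seed=0):
    rng = random.Random(seed)
    action = [0, 0]                  # each agent's current baseline action
    base = [W[0][0], W[0][0]]        # last accepted payoff for each agent
    counts = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 0}
    for _ in range(steps):
        trial = list(action)
        for i in range(2):           # explore with probability eps
            if rng.random() < eps:
                trial[i] = rng.randrange(2)
        payoff = W[trial[0]][trial[1]]
        for i in range(2):           # exploit: keep a change only if the
            if payoff >= base[i]:    # realized payoff did not drop
                action[i] = trial[i]
                base[i] = payoff
        counts[tuple(action)] += 1
    return counts

counts = run()
```

Once both agents stumble onto (1, 1) during simultaneous exploration, no unilateral experiment pays at least as much, so the efficient profile dominates the visit counts — a toy version of "efficient a high percentage of the time."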
Designing Games for Distributed Optimization
"... Abstract — The central goal in multiagent systems is to design local control laws for the individual agents to ensure that the emergent global behavior is desirable with respect to a given system level objective. Ideally, a system designer seeks to satisfy this goal while conditioning each agent’s c ..."
Abstract

Cited by 12 (2 self)
 Add to MetaCart
(Show Context)
The central goal in multi-agent systems is to design local control laws for the individual agents to ensure that the emergent global behavior is desirable with respect to a given system-level objective. Ideally, a system designer seeks to satisfy this goal while conditioning each agent's control law on the least amount of information possible. Unfortunately, there are no existing methodologies for addressing this design challenge. The goal of this paper is to address this challenge using the field of game theory. Utilizing game theory for the design and control of multi-agent systems requires two steps: (i) defining a local objective function for each decision maker and (ii) specifying a distributed learning algorithm to reach a desirable operating point. One of the core advantages of this game-theoretic approach is that this two-step process can be decoupled by utilizing specific classes of games. For example, if the designed objective functions result in a potential game, then the system designer can utilize distributed learning algorithms for potential games to complete step (ii) of the design process. Unfortunately, designing agent objective functions to meet objectives such as locality of information and efficiency of resulting equilibria within the framework of potential games is fundamentally challenging and in many cases impossible. In this paper we develop a systematic methodology for meeting these objectives using a broader framework of games termed state-based potential games. State-based potential games are an extension of potential games in which an additional state variable is introduced into the game environment, permitting more flexibility in our design space. Furthermore, state-based potential games possess an underlying structure that can be exploited by distributed learning algorithms in a similar fashion to potential games, providing a new baseline for our decomposition.
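A small sketch of the two-step decoupling for ordinary potential games (the state-based extension is beyond this toy): marginal-contribution utilities make a coverage game an exact potential game with the welfare itself as the potential, so plain best-response dynamics settle at a pure Nash equilibrium. The two-resource game and its values are hypothetical.

```python
# Welfare: each of two resources is covered with value v[r] if at least one
# agent selects it (a simple coverage game).
v = [1.0, 2.0]

def welfare(profile):
    return sum(v[r] for r in set(profile))

def utility(i, profile):
    # Step (i): marginal-contribution ("wonderful life") utility — agent i's
    # payoff is the welfare it adds relative to opting out. This choice makes
    # the game a potential game with welfare as the potential function.
    others = [a for j, a in enumerate(profile) if j != i]
    return welfare(profile) - sum(v[r] for r in set(others))

def best_response_dynamics(profile, n_resources=2, rounds=10):
    # Step (ii): a learning rule for potential games — agents take turns
    # switching to a best response; the potential rises until no one moves.
    profile = list(profile)
    for _ in range(rounds):
        changed = False
        for i in range(len(profile)):
            best = max(range(n_resources),
                       key=lambda r: utility(i, profile[:i] + [r] + profile[i + 1:]))
            if utility(i, profile[:i] + [best] + profile[i + 1:]) > utility(i, profile):
                profile[i] = best
                changed = True
        if not changed:
            break
    return tuple(profile)

eq = best_response_dynamics([0, 0])   # both agents start on resource 0
```

Here the dynamics stop at a profile covering both resources, which is simultaneously a pure Nash equilibrium and the welfare-optimal allocation.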
Decoupling Coupled Constraints Through Utility Design
"... The central goal in multiagent systems is to engineer a decision making architecture where agents make independent decisions in response to local information while ensuring that the emergent global behavior is desirable with respect to a given system level objective. In many systems this control de ..."
Abstract

Cited by 10 (3 self)
 Add to MetaCart
(Show Context)
The central goal in multi-agent systems is to engineer a decision-making architecture where agents make independent decisions in response to local information while ensuring that the emergent global behavior is desirable with respect to a given system-level objective. In many systems this control design is further complicated by coupled constraints on the agents' behavior. This paper seeks to address the design of such algorithms using the field of game theory. In particular, we derive a systematic methodology for designing local agent utility functions such that (i) all resulting pure Nash equilibria of the designed game optimize the given system-level objective and satisfy the given coupled constraint, and (ii) the resulting game possesses an inherent structure that can be exploited in distributed learning, e.g., potential games. Such developments would greatly simplify the control design by eliminating the need to explicitly consider the constraint. One key to this realization is introducing an estimate of the coupled constraint and incorporating exterior penalty functions and barrier functions into the design of the agents' utility functions.
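A toy sketch of the exterior-penalty idea under assumed numbers (not the paper's exact design): folding a penalty for a coupled capacity constraint into each agent's utility makes every pure Nash equilibrium of the resulting game feasible, so agents never reason about the constraint directly.

```python
from itertools import product

# Two agents each pick a discrete power level. Each agent likes more power,
# but a coupled constraint caps the total at CAP. An exterior penalty folds
# the constraint violation into every agent's utility. LEVELS, CAP, and MU
# are hypothetical values chosen for illustration.
LEVELS = (0, 1, 2, 3)
CAP = 4
MU = 10.0   # penalty weight

def utility(i, profile):
    violation = max(0, sum(profile) - CAP)
    return profile[i] - MU * violation ** 2

def pure_nash_equilibria():
    eqs = []
    for profile in product(LEVELS, repeat=2):
        # A profile is a pure Nash equilibrium if no unilateral deviation helps.
        if all(utility(i, profile) >= utility(i, profile[:i] + (d,) + profile[i + 1:])
               for i in range(2) for d in LEVELS):
            eqs.append(profile)
    return eqs

eqs = pure_nash_equilibria()
feasible = all(sum(p) <= CAP for p in eqs)
```

With this penalty weight, the equilibria are exactly the profiles that use the full capacity without exceeding it: raising a level past the cap costs more in penalty than it gains.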
Overcoming the Limitations of Utility Design for Multiagent Systems, 2011
"... Cooperative control focuses on deriving desirable collective behavior in multiagent systems through the design of local control algorithms. Game theory is beginning to emerge as a valuable set of tools for achieving this objective. A central component of this game theoretic approach is the assignmen ..."
Abstract

Cited by 5 (3 self)
 Add to MetaCart
(Show Context)
Cooperative control focuses on deriving desirable collective behavior in multi-agent systems through the design of local control algorithms. Game theory is beginning to emerge as a valuable set of tools for achieving this objective. A central component of this game-theoretic approach is the assignment of utility functions to the individual agents. Here, the goal is to assign utility functions within an “admissible” design space such that the resulting game possesses desirable properties. Our first set of results illustrates the complexity associated with such a task. In particular, we prove that if we restrict the class of utility functions to be local, scalable, and budget-balanced, then (i) ensuring that the resulting game possesses a pure Nash equilibrium requires computing a Shapley value, which can be computationally prohibitive for large-scale systems, and (ii) ensuring that the allocation which optimizes the system-level objective is a pure Nash equilibrium is impossible. The last part of this paper demonstrates that both limitations can be overcome by introducing an underlying state space into the potential game structure.
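To see why the Shapley-value requirement can be computationally prohibitive, note that direct computation averages marginal contributions over all n! orderings of the agents. A sketch with a hypothetical 3-agent welfare function:

```python
from itertools import permutations
from math import factorial

def shapley(agents, welfare):
    # Average each agent's marginal contribution over every ordering.
    # The n! orderings are what make this prohibitive at scale.
    value = {i: 0.0 for i in agents}
    for order in permutations(agents):
        coalition = set()
        prev = 0.0
        for i in order:
            coalition.add(i)
            w = welfare(frozenset(coalition))
            value[i] += w - prev
            prev = w
    n = len(agents)
    return {i: v / factorial(n) for i, v in value.items()}

# Hypothetical welfare: value 1 if agent 0 participates, plus 1 more if
# agents 1 and 2 both participate.
def welfare(S):
    return (1.0 if 0 in S else 0.0) + (1.0 if {1, 2} <= S else 0.0)

phi = shapley([0, 1, 2], welfare)
```

The resulting shares are budget-balanced (they sum to the grand-coalition welfare), which is precisely the property that forces the Shapley computation in the abstract's impossibility trade-off.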
Potential games are necessary to ensure pure Nash equilibria in cost sharing games, Mathematics of Operations Research
"... We consider the problem of designing distribution rules to share ‘welfare ’ (cost or revenue) among individually strategic agents. There are many known distribution rules that guarantee the existence of a (pure) Nash equilibrium in this setting, e.g., the Shapley value and its weighted variants; ho ..."
Abstract

Cited by 4 (1 self)
 Add to MetaCart
(Show Context)
We consider the problem of designing distribution rules to share ‘welfare’ (cost or revenue) among individually strategic agents. There are many known distribution rules that guarantee the existence of a (pure) Nash equilibrium in this setting, e.g., the Shapley value and its weighted variants; however, a characterization of the space of distribution rules that guarantee the existence of a Nash equilibrium is unknown. Our work provides an exact characterization of this space for a specific class of scalable and separable games, which includes a variety of applications such as facility location, routing, network formation, and coverage games. Given arbitrary local welfare functions W, we prove that a distribution rule guarantees equilibrium existence for all games (i.e., all possible sets of resources, agent action sets, etc.) if and only if it is equivalent to a generalized weighted Shapley value on some ‘ground’ welfare functions W′, which can be distinct from W. However, if budget balance is required in addition to the existence of a Nash equilibrium, then W′ must be the same as W. We also provide an alternate characterization of this space in terms of ‘generalized’ marginal contributions, which is more appealing from the point of view of computational tractability. A possibly surprising consequence of our result is that, in order to guarantee equilibrium existence in all games with any fixed local welfare functions, it is necessary to work within the class of potential games.
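For intuition, a sketch of a weighted Shapley value in the special case of strictly positive weights (the paper's generalized version also allows ordered weight systems, which this omits): each coalition's Harsanyi dividend is split among its members in proportion to their weights, and equal weights recover the ordinary Shapley value. The two-agent welfare function is hypothetical.

```python
from itertools import combinations

def harsanyi_dividends(agents, welfare):
    # d(S) = welfare(S) minus the dividends of all strict subcoalitions,
    # computed in order of increasing coalition size.
    d = {}
    for r in range(1, len(agents) + 1):
        for S in combinations(agents, r):
            d[S] = welfare(frozenset(S)) - sum(d[T] for T in d if set(T) < set(S))
    return d

def weighted_shapley(agents, welfare, w):
    # Split every dividend in proportion to the members' positive weights.
    d = harsanyi_dividends(agents, welfare)
    phi = {i: 0.0 for i in agents}
    for S, div in d.items():
        tot = sum(w[i] for i in S)
        for i in S:
            phi[i] += div * w[i] / tot
    return phi

# Hypothetical example: welfare 6 is generated only when both agents join,
# and agent 1 carries twice the weight of agent 0.
def W(S):
    return 6.0 if len(S) == 2 else 0.0

phi = weighted_shapley([0, 1], W, {0: 1.0, 1: 2.0})
```

The rule remains budget-balanced (shares sum to W of the grand coalition) while the weights skew the split, 2 versus 4 here, matching the abstract's "weighted variants" of the Shapley value.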
Stable Utility Design for Distributed Resource Allocation
"... Abstract — The framework of resource allocation games is becoming an increasingly popular modeling choice for distributed control and optimization. In recent years, this approach has evolved into the paradigm of gametheoretic control, which consists of first modeling the interaction between the di ..."
Abstract
 Add to MetaCart
(Show Context)
The framework of resource allocation games is becoming an increasingly popular modeling choice for distributed control and optimization. In recent years, this approach has evolved into the paradigm of game-theoretic control, which consists of first modeling the interaction between the distributed agents as a strategic-form game, and then designing local utility functions for these agents such that the resulting game possesses a stable outcome (e.g., a pure Nash equilibrium) that is efficient (e.g., good “price of anarchy” properties). One then appeals to the large existing literature on learning in games for distributed algorithms that guarantee the agents' convergence to such an equilibrium. An important first problem is to obtain a characterization of stable utility designs, that is, those that guarantee equilibrium existence for a large class of games. Recent work has explored this question in the general, multi-selection context, that is, when agents are allowed to choose more than one resource at a time, showing that the only stable utility designs are the so-called “weighted Shapley values”. It remains an open problem to obtain a similar characterization in the single-selection context, into which several practical problems, such as vehicle-target assignment and sensor coverage, fall. We survey recent work in the multi-selection scenario, and show that even though other utility designs become stable for specific single-selection applications, perhaps surprisingly, in a broader context the limitation to “weighted Shapley value” utility design continues to prevail.
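For intuition on the efficiency side, a brute-force check on a hypothetical two-agent, two-resource single-selection game with equal-share utilities: enumerate all pure Nash equilibria and compare the worst equilibrium welfare against the optimum, which is the price of anarchy.

```python
from itertools import product

# Each agent picks one of two resources; a resource of value v[r] shared by
# k agents pays each of them v[r] / k. The values are hypothetical.
v = (2.0, 1.0)

def utility(i, profile):
    r = profile[i]
    return v[r] / profile.count(r)

def welfare(profile):
    return sum(v[r] for r in set(profile))

profiles = list(product(range(2), repeat=2))
nash = [p for p in profiles
        if all(utility(i, p) >= utility(i, p[:i] + (d,) + p[i + 1:])
               for i in range(2) for d in range(2))]
opt = max(welfare(p) for p in profiles)
poa = opt / min(welfare(p) for p in nash)   # price of anarchy
```

Here the split profiles are optimal equilibria, but both agents crowding the valuable resource is also a (weak) equilibrium, so the worst equilibrium loses efficiency and the price of anarchy exceeds 1.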
Coarse Resistance Tree Methods For Stochastic Stability Analysis
"... Abstract — Emergent behavior in natural and manmade systems can often be characterized by the limiting distribution of a special class of Markov processes termed regular perturbed processes. Resistance trees have gained popularity as a computationally efficient way to characterize the stochastically ..."
Abstract
 Add to MetaCart
(Show Context)
Emergent behavior in natural and man-made systems can often be characterized by the limiting distribution of a special class of Markov processes termed regular perturbed processes. Resistance trees have gained popularity as a computationally efficient way to characterize the stochastically stable states (i.e., the support of the limiting distribution); however, there are three main limitations of this approach. First, it often requires finding a minimum-weight spanning tree for each state in a potentially large state space. Second, perturbations to transition probabilities must decay at an exponentially smooth rate. Lastly, the approach is shown to hold purely in the context of finite Markov chains. In this paper we seek to address these limitations by developing new tools for characterizing the stochastically stable states. First, we provide necessary conditions for stochastic stability via a coarse, and less computationally intensive, state-space analysis. Next, we identify necessary conditions for stochastic stability when smooth convergence requirements are relaxed. Lastly, we establish similar tools for stochastic stability analysis in Markov chains over a continuous state space.
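A minimal illustration of a regular perturbed process, using a hypothetical two-state chain: transition resistances (the exponents on the noise level eps) determine which state retains all probability mass as eps vanishes, which is exactly what resistance-tree analysis computes.

```python
# Two states a and b. Leaving a has resistance 2 (probability ~ eps^2) and
# leaving b has resistance 1 (~ eps^1), so resistance analysis predicts that
# state a is the stochastically stable state.
def stationary_mass_on_a(eps):
    p_ab = eps ** 2          # perturbed probability of leaving a
    p_ba = eps ** 1          # perturbed probability of leaving b
    # Stationary distribution of a 2-state chain: pi_a / pi_b = p_ba / p_ab.
    return p_ba / (p_ab + p_ba)

masses = [stationary_mass_on_a(eps) for eps in (0.1, 0.01, 0.001)]
```

As eps shrinks, the mass on the low-resistance state approaches 1, so its support is the limiting distribution: the support-of-the-limit definition and the resistance comparison agree on this example.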