Results 1-10 of 56
Cooperative Multi-Agent Learning: The State of the Art
Autonomous Agents and Multi-Agent Systems, 2005
Abstract

Cited by 182 (8 self)
Cooperative multiagent systems are ones in which several agents attempt, through their interaction, to jointly solve tasks or to maximize utility. Due to the interactions among the agents, multiagent problem complexity can rise rapidly with the number of agents or their behavioral sophistication. The challenge this presents to the task of programming solutions to multiagent systems problems has spawned increasing interest in machine learning techniques to automate the search and optimization process. We provide a broad survey of the cooperative multiagent learning literature. Previous surveys of this area have largely focused on issues common to specific subareas (for example, reinforcement learning or robotics). In this survey we attempt to draw from multiagent learning work in a spectrum of areas, including reinforcement learning, evolutionary computation, game theory, complex systems, agent modeling, and robotics. We find that this broad view leads to a division of the work into two categories, each with its own special issues: applying a single learner to discover joint solutions to multiagent problems (team learning), or using multiple simultaneous learners, often one per agent (concurrent learning). Additionally, we discuss direct and indirect communication in connection with learning, plus open issues in task decomposition, scalability, and adaptive dynamics. We conclude with a presentation of multiagent learning problem domains, and a list of multiagent learning resources.
An Empirical Analysis of Collaboration Methods in Cooperative Coevolutionary Algorithms
In Proceedings of the Genetic and Evolutionary Computation Conference
Abstract

Cited by 68 (6 self)
Although a variety of coevolutionary methods have been explored over the years, it has only been recently that a general architecture for cooperative coevolution has been proposed. Since that time, the flexibility and success of this cooperative coevolutionary architecture (CCA) has been shown in an array of different kinds of problems.
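The cooperative coevolutionary architecture (CCA) the abstract refers to decomposes a problem into components, evolves each component in its own subpopulation, and scores an individual by joining it with collaborators from the other subpopulations. A minimal Python sketch, with an assumed toy joint objective and a best-of-population collaboration method (both illustrative choices, not the paper's experimental setup):

```python
import random

def joint_score(x, y):
    # Toy joint objective (an assumption for illustration): the two
    # components, one contributed by each subpopulation, should sum to 1.0.
    return -((x + y - 1.0) ** 2)

def cooperative_coevolution(pop_size=20, generations=50, seed=0):
    rng = random.Random(seed)
    # One subpopulation per component of the joint solution.
    pops = [[rng.uniform(-2.0, 2.0) for _ in range(pop_size)] for _ in range(2)]
    reps = [pops[0][0], pops[1][0]]  # arbitrary initial representatives
    for _ in range(generations):
        for p in (0, 1):
            other = reps[1 - p]
            # An individual is scored by pairing it with the other
            # subpopulation's current representative (best collaborator).
            def fitness(ind, p=p, other=other):
                return joint_score(ind, other) if p == 0 else joint_score(other, ind)
            pops[p].sort(key=fitness, reverse=True)
            survivors = pops[p][:pop_size // 2]
            # Refill with mutated copies of the survivors (elitist).
            pops[p] = survivors + [s + rng.gauss(0.0, 0.1) for s in survivors]
            reps[p] = pops[p][0]
    return reps, joint_score(reps[0], reps[1])
```

The choice of collaborators (single best individual, random individuals, or several of each) is exactly the design axis the paper analyzes empirically.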
Ideal Evaluation from Coevolution
Evolutionary Computation, 2004
Abstract

Cited by 68 (6 self)
In many problems of interest, performance can be evaluated using tests, such as examples in concept learning, test points in function approximation, and opponents in game-playing. Evaluation on all tests is often infeasible. Identification of an accurate evaluation or fitness function is a difficult problem in itself, and approximations are likely to introduce human biases into the search process. Coevolution evolves the set of tests used for evaluation, but has so far often led to inaccurate evaluation. We show that for any set of learners, a Complete Evaluation Set can be determined that provides ideal evaluation as specified by Evolutionary Multi-Objective Optimization. This provides a principled approach to evaluation in coevolution, and thereby brings automatic ideal evaluation within reach. The Complete Evaluation Set is of manageable size, and progress towards it can be accurately measured. Based on this observation, an algorithm named DELPHI is developed. The algorithm is tested on problems likely to permit progress on only a subset of the underlying objectives. Where all comparison methods result in overspecialization, the proposed method and a variant achieve sustained progress in all underlying objectives. These findings demonstrate that ideal evaluation may be approximated by practical algorithms, and that accurate evaluation for test-based problems is possible even when the underlying objectives of a problem are unknown.
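One way to make the notion of ideal evaluation concrete is through distinctions: a test is informative if it separates some pair of learners. The sketch below greedily keeps tests until every distinction the available tests can make is preserved. It illustrates the idea behind a Complete Evaluation Set, not the DELPHI algorithm itself; the `outcome` function and the greedy cover are assumptions.

```python
from itertools import permutations

def distinctions(test, learners, outcome):
    # The ordered pairs of learners this test tells apart: a is
    # distinguished from b when a scores strictly higher on the test.
    return {(a, b) for a, b in permutations(range(len(learners)), 2)
            if outcome(learners[a], test) > outcome(learners[b], test)}

def evaluation_set(tests, learners, outcome):
    # Greedily keep tests until every distinction any available test can
    # make between two learners is covered by the kept subset.
    needed = set()
    for t in tests:
        needed |= distinctions(t, learners, outcome)
    kept, covered = [], set()
    for t in sorted(tests, key=lambda t: -len(distinctions(t, learners, outcome))):
        gained = distinctions(t, learners, outcome) - covered
        if gained:
            kept.append(t)
            covered |= gained
        if covered == needed:
            break
    return kept
```

With threshold-style tests (a learner passes a test when its skill meets the threshold), a test every learner passes makes no distinctions and is dropped, which is one sense in which the retained evaluation set stays "of manageable size".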
Pareto optimality in coevolutionary learning
2001
Abstract

Cited by 66 (11 self)
We develop a novel coevolutionary algorithm based upon the concept of Pareto optimality. The Pareto criterion is core to conventional multiobjective optimization (MOO) algorithms. We can think of agents in a coevolutionary system as performing MOO, as well: An agent interacts with many other agents, each of which can be regarded as an objective for optimization. We adapt the Pareto concept to allow agents to follow gradient and create gradient for others to follow, such that coevolutionary learning succeeds. We demonstrate our Pareto coevolution methodology with the majority function, a density classification task for cellular automata.
Pareto coevolution: Using performance against coevolved opponents in a game as dimensions for Pareto selection
In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2001), 2001
Abstract

Cited by 45 (3 self)
When using an automatic discovery method to find a good strategy in a game, we hope to find one that performs well against a wide variety of opponents. An appealing notion in the use of evolutionary algorithms to coevolve strategies is that the population represents a set of different strategies against which a player must do well. Implicit here is the idea that different players represent different "dimensions" of the domain, and being a robust player means being good in many (preferably all) dimensions of the game. Pareto coevolution makes this idea of "players as dimensions" explicit. By explicitly treating each player as a dimension, or objective, we may then use established multiobjective optimization techniques to find robust strategies. In this paper, we apply Pareto coevolution to Texas Hold'em poker, a complex real-world game of imperfect information. The performance of our Pareto coevolution algorithm is compared with that of a conventional genetic algorithm and shown to be promising.
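The "players as dimensions" idea can be sketched directly: score a player against each opponent, treat the resulting score vector as multi-objective outcomes, and keep the non-dominated set. A minimal sketch (the toy `play` function used in the test is illustrative, not the paper's poker evaluator):

```python
def outcomes(player, opponents, play):
    # One objective per opponent: this player's score against it.
    return [play(player, opp) for opp in opponents]

def dominates(a, b):
    # a Pareto-dominates b: no worse against every opponent, and
    # strictly better against at least one.
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def pareto_front(players, opponents, play):
    # Keep exactly the players whose outcome vectors no other player
    # Pareto-dominates: the robust, mutually incomparable strategies.
    scored = [(p, outcomes(p, opponents, play)) for p in players]
    return [p for p, s in scored
            if not any(dominates(t, s) for _, t in scored)]
```

A player that is weaker in every dimension is discarded, while specialists that excel against different opponents are all retained, which is the robustness notion the abstract describes.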
A Mathematical Framework for the Study of Coevolution
Foundations of Genetic Algorithms 7, 2003
Abstract

Cited by 36 (11 self)
Despite achieving compelling results in engineering and optimization problems, coevolutionary algorithms remain difficult to understand, with most knowledge to date coming from practical successes and failures, not from theoretical understanding. Thus, explaining why coevolution succeeds is still more art than science. In this paper, we present a theoretical framework for studying coevolution based on the mathematics of ordered sets.
Improving coevolutionary search for optimal multiagent behaviors
In Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence (IJCAI), 2003
Abstract

Cited by 34 (12 self)
Evolutionary computation is a useful technique for learning behaviors in multiagent systems. Among the several types of evolutionary computation, one natural and popular method is to coevolve multiagent behaviors in multiple, cooperating populations. Recent research has suggested that coevolutionary systems may favor stability rather than performance in some domains. In order to improve upon existing methods, this paper examines the idea of modifying traditional coevolution, biasing it to search for maximal rewards. We introduce a theoretical justification of the improved method and present experiments in three problem domains. We conclude that biasing can help coevolution find better results in some multiagent problem domains.
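A simple way to picture the biasing idea: blend the reward an individual earns with its current collaborators against an optimistic estimate of the best reward it could earn, so selection is pulled toward maximal rather than merely stable joint rewards. The particular blend and parameter below are illustrative assumptions; the paper derives its bias formally.

```python
def biased_fitness(rewards_now, best_seen, delta=0.5):
    # rewards_now: joint rewards obtained with the current collaborators.
    # best_seen: best joint reward this individual has ever obtained with
    # any collaborator, an optimistic estimate of its maximal reward.
    # delta: bias strength; 0.0 recovers plain average-reward fitness.
    current = sum(rewards_now) / len(rewards_now)
    return (1.0 - delta) * current + delta * best_seen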
The MaxSolve algorithm for coevolution
 In Beyer, H.G. (Ed.), Proceedings of the Genetic and Evolutionary Computation Conference, GECCO05
, 2005
"... Coevolution can be used to adaptively choose the tests used for evaluating candidate solutions. A longstanding question is how this dynamic setup may be organized to yield reliable search methods. Reliability can only be considered in connection with a particular solution concept specifying what co ..."
Abstract

Cited by 30 (2 self)
 Add to MetaCart
(Show Context)
Coevolution can be used to adaptively choose the tests used for evaluating candidate solutions. A longstanding question is how this dynamic setup may be organized to yield reliable search methods. Reliability can only be considered in connection with a particular solution concept specifying what constitutes a solution. Recently, monotonic coevolution algorithms have been proposed for several solution concepts. Here, we introduce a new algorithm that guarantees monotonicity for the solution concept of maximizing the expected utility of a candidate solution. The method, called MaxSolve, is compared to the IPCA algorithm and found to perform more efficiently for a range of parameter values on an abstract test problem.