Results 1–10 of 215
Learning Bayesian networks: The combination of knowledge and statistical data
Machine Learning, 1995
Cited by 1142 (36 self)
We describe scoring metrics for learning Bayesian networks from a combination of user knowledge and statistical data. We identify two important properties of metrics, which we call event equivalence and parameter modularity. These properties have been mostly ignored, but when combined, greatly simplify the encoding of a user’s prior knowledge. In particular, a user can express his knowledge—for the most part—as a single prior Bayesian network for the domain.
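The flavor of such a scoring metric can be illustrated with a short, hypothetical sketch: a BDeu-style log marginal likelihood for a single node given its parents, using uniform Dirichlet priors over counted data. The toy data, variable names, and equivalent sample size below are assumptions for illustration, not the paper's exact metric (which additionally incorporates a prior network):

```python
from math import lgamma
from collections import Counter
from itertools import product

def node_score(data, child, parents, arity, ess=1.0):
    """Log marginal likelihood of one node under a BDeu-style
    Dirichlet prior with equivalent sample size `ess`."""
    r = arity[child]                      # number of child states
    q = 1
    for p in parents:
        q *= arity[p]                     # number of parent configurations
    a_ij = ess / q                        # prior mass per parent configuration
    a_ijk = ess / (q * r)                 # prior mass per (configuration, state)
    counts = Counter((tuple(row[p] for p in parents), row[child]) for row in data)
    score = 0.0
    for cfg in product(*(range(arity[p]) for p in parents)):
        n_ij = sum(counts[(cfg, k)] for k in range(r))
        score += lgamma(a_ij) - lgamma(a_ij + n_ij)
        for k in range(r):
            score += lgamma(a_ijk + counts[(cfg, k)]) - lgamma(a_ijk)
    return score

# Toy data over two binary variables where A perfectly predicts B.
data = [{"A": 0, "B": 0}, {"A": 0, "B": 0}, {"A": 1, "B": 1}, {"A": 1, "B": 1}]
arity = {"A": 2, "B": 2}
# On this data, B with parent A should score higher than B alone.
print(node_score(data, "B", ["A"], arity) > node_score(data, "B", [], arity))
```

In a full structure learner, per-node scores like this decompose over the network (parameter modularity), so local structure changes can be evaluated without rescoring the whole model.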
Approximate Signal Processing
1997
Cited by 516 (2 self)
It is increasingly important to structure signal processing algorithms and systems to allow for trading off between the accuracy of results and the utilization of resources in their implementation. In any particular context, there are typically a variety of heuristic approaches to managing these tradeoffs. One of the objectives of this paper is to suggest that there is the potential for developing a more formal approach, including utilizing current research in Computer Science on Approximate Processing and one of its central concepts, Incremental Refinement. Toward this end, we first summarize a number of ideas and approaches to approximate processing as currently being formulated in the computer science community. We then present four examples of signal processing algorithms/systems that are structured with these goals in mind. These examples may be viewed as partial inroads toward the ultimate objective of developing, within the context of signal processing design and implementation,...
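Incremental refinement, the central concept mentioned above, can be sketched with a toy example: rebuilding a signal from its largest DFT coefficients one at a time, so each additional unit of computation tightens the approximation. The naive DFT and the two-tone test signal are illustrative choices, not from the paper:

```python
import cmath
import math

def dft(x):
    """Naive O(N^2) DFT; fine for a sketch."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def refine(x, steps):
    """Incremental refinement: rebuild x from its largest DFT coefficients,
    one coefficient per step, reporting the RMS error after each step."""
    n = len(x)
    coeffs = dft(x)
    order = sorted(range(n), key=lambda k: -abs(coeffs[k]))
    approx = [0.0] * n
    errors = []
    for k in order[:steps]:
        for t in range(n):
            approx[t] += (coeffs[k] * cmath.exp(2j * cmath.pi * k * t / n) / n).real
        errors.append((sum((a - b) ** 2 for a, b in zip(approx, x)) / n) ** 0.5)
    return errors

# Two-tone test signal: the error collapses after a handful of steps.
sig = [math.cos(2 * math.pi * 3 * t / 32) + 0.5 * math.cos(2 * math.pi * 7 * t / 32)
       for t in range(32)]
errs = refine(sig, 5)
print(all(e2 <= e1 + 1e-9 for e1, e2 in zip(errs, errs[1:])))  # monotone refinement
```

The design point is that the intermediate approximations are themselves usable results, so the computation can be interrupted whenever the resource budget runs out.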
Coalition Structure Generation with Worst Case Guarantees
1999
Cited by 266 (10 self)
Coalition formation is a key topic in multiagent systems. One may prefer a coalition structure that maximizes the sum of the values of the coalitions, but often the number of coalition structures is too large to allow exhaustive search for the optimal one. Furthermore, finding the optimal coalition structure is NP-complete. But then, can the coalition structure found via a partial search be guaranteed to be within a bound from optimum? We show that none of the previous coalition structure generation algorithms can establish any bound because they search fewer nodes than a threshold that we show necessary for establishing a bound. We present an algorithm that establishes a tight bound within this minimal amount of search, and show that any other algorithm would have to search strictly more. The fraction of nodes needed to be searched approaches zero as the number of agents grows. If additional time remains, our anytime algorithm searches further, and establishes a progressively lower tight bound. Surprisingly, just searching one more node drops the bound in half. As desired, our algorithm lowers the bound rapidly early on, and exhibits diminishing returns to computation. It also significantly outperforms its obvious contenders. Finally, we show how to distribute the desired...
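The combinatorics driving this result can be seen in a small sketch: enumerating every coalition structure (set partition) over a few agents and exhaustively picking the best one under a toy characteristic function. The value function here is an illustrative assumption; the point is that the structure count (the Bell numbers) explodes with the number of agents, which is what makes bounded partial search interesting:

```python
from itertools import combinations

def partitions(items):
    """Yield all set partitions (coalition structures) of `items`."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for size in range(len(rest) + 1):
        for members in combinations(rest, size):
            coalition = (first,) + members
            remaining = [a for a in rest if a not in members]
            for sub in partitions(remaining):
                yield [coalition] + sub

def best_structure(agents, value):
    """Exhaustive search for the value-maximizing coalition structure."""
    return max(partitions(agents), key=lambda cs: sum(value(c) for c in cs))

# Toy superadditive characteristic function: value grows as |C|^2.
v = lambda c: len(c) ** 2
agents = ["a", "b", "c", "d"]
structures = list(partitions(agents))
print(len(structures))            # Bell(4) = 15 structures
print(best_structure(agents, v))  # here the grand coalition wins
```

Already at 20 agents there are more than 10^13 coalition structures, so exhaustive search of this kind is hopeless and worst-case bounds from partial search become the relevant question.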
Coalitions Among Computationally Bounded Agents
Artificial Intelligence, 1997
Cited by 202 (26 self)
This paper analyzes coalitions among self-interested agents that need to solve combinatorial optimization problems to operate efficiently in the world. By colluding (coordinating their actions by solving a joint optimization problem) the agents can sometimes save costs compared to operating individually. A model of bounded rationality is adopted where computation resources are costly. It is not worthwhile solving the problems optimally: solution quality is decision-theoretically traded off against computation cost. A normative, application- and protocol-independent theory of coalitions among bounded-rational agents is devised. The optimal coalition structure and its stability are significantly affected by the agents' algorithms' performance profiles and the cost of computation. This relationship is first analyzed theoretically. Then a domain classification including rational and bounded-rational agents is introduced. Experimental results are presented in vehicle routing with real data from five dispatch centers. This problem is NP-complete and the instances are so large that, with current technology, any agent's rationality is bounded by computational complexity.
Introducing the Tileworld: Experimentally evaluating agent architectures
In Proceedings of the National Conference on Artificial Intelligence, 1990
Cited by 194 (13 self)
We describe a system called Tileworld, which consists of a simulated robot agent and a simulated environment which is both dynamic and unpredictable. Both the agent and the environment are highly parameterized, enabling one to control certain characteristics of each. We can thus experimentally investigate the behavior of various meta-level reasoning strategies by tuning the parameters of the agent, and can assess the success of alternative strategies in different environments by tuning the environmental parameters. Our hypothesis is that the appropriateness of a particular meta-level reasoning strategy will depend in large part upon the characteristics of the environment in which the agent incorporating that strategy is situated. We describe our initial experiments using Tileworld, in which we have been evaluating a version of the meta-level reasoning strategy proposed in earlier work by one of the authors [5].
Using Anytime Algorithms in Intelligent Systems
1996
Cited by 192 (8 self)
Anytime algorithms give intelligent systems the capability to trade deliberation time for quality of results. This capability is essential for successful operation in domains such as signal interpretation, real-time diagnosis and repair, and mobile robot control. What characterizes these domains is that it is not feasible (computationally) or desirable (economically) to compute the optimal answer. This article surveys the main control problems that arise when a system is composed of several anytime algorithms. These problems relate to optimal management of uncertainty and precision. After a brief introduction to anytime computation, I outline a wide range of existing solutions to the meta-level control problem and describe current work that is aimed at increasing the applicability of anytime computation.
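A minimal sketch of the anytime idea, assuming a toy task (estimating pi via the Leibniz series): the algorithm yields a usable answer after every step, so a meta-level controller can cut deliberation short at any budget and still act, with quality improving as the budget grows:

```python
import math

def anytime_pi():
    """Interruptible anytime estimator of pi (Leibniz series): yields a
    usable result after every refinement step, so the caller can stop
    whenever its deliberation-time budget runs out."""
    total, k = 0.0, 0
    while True:
        total += (-1) ** k / (2 * k + 1)
        k += 1
        yield 4 * total

def run_with_budget(gen, steps):
    """Consume `steps` refinement steps and return the latest answer."""
    result = None
    for _, result in zip(range(steps), gen):
        pass
    return result

coarse = run_with_budget(anytime_pi(), 10)      # cheap, rough answer
fine = run_with_budget(anytime_pi(), 10_000)    # more deliberation, better answer
print(abs(math.pi - fine) < abs(math.pi - coarse))  # more time, better result
```

The control problems the article surveys begin exactly here: deciding, at the meta-level, how much budget each such component deserves when several of them compete for the same deadline.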
CIRCA: A Cooperative Intelligent Real-Time Control Architecture
IEEE Trans. Systems, Man, and Cybernetics, 1993
Cited by 181 (50 self)
Most research into applying AI techniques to real-time control problems has limited the power of AI methods or embedded "reactivity" in an AI system. We present an alternative, cooperative architecture that uses separate AI and real-time subsystems to address the problems for which each is designed; a structured interface allows the subsystems to communicate without compromising their respective performance goals. By reasoning about its own bounded reactivity, CIRCA can guarantee that it will meet hard deadlines while still using unpredictable AI methods. With its abilities to guarantee or trade off the timeliness, precision, confidence, and completeness of its output, CIRCA provides more flexible performance than previous systems.
Principles of Metareasoning
Artificial Intelligence, 1991
Cited by 178 (10 self)
In this paper we outline a general approach to the study of metareasoning, not in the sense of explicating the semantics of explicitly specified meta-level control policies, but in the sense of providing a basis for selecting and justifying computational actions. This research contributes to a developing attack on the problem of resource-bounded rationality, by providing a means for analysing and generating optimal computational strategies. Because reasoning about a computation without doing it necessarily involves uncertainty as to its outcome, probability and decision theory will be our main tools. We develop a general formula for the utility of computations, this utility being derived directly from the ability of computations to affect an agent's external actions. We address some philosophical difficulties that arise in specifying this formula, given our assumption of limited rationality. We also describe a methodology for applying the theory to particular problem-solving systems...
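The utility-of-computation idea can be illustrated with a deliberately simplified meta-level decision rule: compute only while the expected improvement to the agent's eventual external action exceeds the cost of the time the computation takes. The probabilities and costs below are illustrative inputs, not the paper's derivation:

```python
def value_of_computation(p_improve, gain, time_cost):
    """Net expected value of a computation, in the spirit of the formula
    sketched above: expected improvement in the agent's eventual external
    action, minus the cost of the time the computation consumes."""
    return p_improve * gain - time_cost

def deliberate(p_improve, gain, time_cost):
    """Meta-level decision rule: keep computing only while computation has
    positive net expected value; otherwise commit to acting now."""
    return "compute" if value_of_computation(p_improve, gain, time_cost) > 0 else "act"

print(deliberate(0.30, 10.0, 1.0))   # 0.3 * 10 - 1 =  2.0 > 0 -> "compute"
print(deliberate(0.05, 10.0, 1.0))   # 0.05 * 10 - 1 = -0.5 < 0 -> "act"
```

The hard part the paper addresses is estimating `p_improve` and `gain` before the computation is done, since that estimate is itself made under uncertainty and limited rationality.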
Decision-Theoretic Deliberation Scheduling for Problem Solving in ...
Artificial Intelligence, 1994
Cited by 174 (3 self)
We are interested in the problem faced by an agent with limited computational capabilities, embedded in a complex environment with other agents and processes not under its control. Careful management of computational resources is important for complex problem-solving tasks in which the time spent in decision making affects the quality of the responses generated by a system.
Iterative Combinatorial Auctions: Achieving Economic and Computational Efficiency
Department of Computer and Information Science, University of Pennsylvania, 2001
Cited by 158 (21 self)
This thesis presents new auction-based mechanisms to coordinate systems of self-interested and autonomous agents, and new methods to design such mechanisms and prove their optimality...