Results 1–10 of 2,135,059
Planning and acting in partially observable stochastic domains
Artificial Intelligence, 1998
Cited by 1089 (42 self)
Abstract: "... In this paper, we bring techniques from operations research to bear on the problem of choosing optimal actions in partially observable stochastic domains. We begin by introducing the theory of Markov decision processes (MDPs) and partially observable MDPs (POMDPs). We then outline a novel algorithm ..."
Acting Optimally in Partially Observable Stochastic Domains
1994
Cited by 320 (18 self)
Abstract: "... In this paper, we describe the partially observable Markov decision process (POMDP) approach to finding optimal or near-optimal control strategies for partially observable stochastic environments, given a complete model of the environment. The POMDP approach was originally developed in the oper ..."
Probabilistic robot navigation in partially observable environments
In Proc. of the International Joint Conference on Artificial Intelligence (IJCAI), 1995
Cited by 293 (13 self)
Abstract: "... Autonomous mobile robots need very reliable navigation capabilities in order to operate unattended for long periods of time. This paper reports on first results of a research program that uses partially observable Markov models to robustly track a robot’s location in office environments and to direc ..."
Learning policies for partially observable environments: Scaling up
1995
Cited by 296 (12 self)
Abstract: "... Partially observable Markov decision processes (POMDPs) model decision problems in which an agent tries to maximize its reward in the face of limited and/or noisy sensor feedback. While the study of POMDPs is motivated by a need to address realistic problems, existing techniques for fin ..."
Partial Observers
2006
Abstract: "... We attempt to dissolve the measurement problem using an anthropic principle which allows us to invoke rational observers. We argue that the key feature of such observers is that they are rational (we need not care whether they are ‘classical’ or ‘macroscopic’, for example) and thus, since quantum t ..."
Timed Control with Partial Observability
2003
Cited by 50 (6 self)
Abstract: "... We consider the problem of synthesizing controllers for timed systems modeled using timed automata. The point of departure from earlier work is that we consider controllers that have only a partial observation of the system they control. In discrete event systems (where continuous time is not ..."
Dynamic Programming for Partially Observable Stochastic Games
In Proceedings of the Nineteenth National Conference on Artificial Intelligence, 2004
Cited by 156 (25 self)
Abstract: "... We develop an exact dynamic programming algorithm for partially observable stochastic games (POSGs). The algorithm is a synthesis of dynamic programming for partially observable Markov decision processes (POMDPs) and iterated elimination of dominated strategies in normal form games. ..."
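One of the two ingredients named in this abstract, iterated elimination of dominated strategies, can be sketched in a few lines. This sketch covers only pure-strategy strict domination for the row player on an invented payoff table; the paper's algorithm works with mixed strategies, which requires solving a linear program instead of the simple pairwise check below.

```python
# Iterated elimination of strictly dominated pure strategies (row player only).
# Payoffs and action names are an illustrative prisoner's-dilemma-style example,
# not data from the paper.
def iterated_elimination(payoff, rows, cols):
    """Remove every row strategy strictly dominated by some surviving row."""
    rows = set(rows)
    changed = True
    while changed:
        changed = False
        for r in list(rows):
            dominated = any(
                r2 != r and all(payoff[r2][c] > payoff[r][c] for c in cols)
                for r2 in rows
            )
            if dominated:
                rows.discard(r)  # r can never be a best response
                changed = True
    return rows

payoff = {"C": {"C": 3, "D": 0}, "D": {"C": 5, "D": 1}}
surviving = iterated_elimination(payoff, ["C", "D"], ["C", "D"])  # {"D"}
```

Re-running the outer loop until nothing changes is what makes the elimination "iterated": removing one dominated strategy can expose new dominations among the survivors.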
Complexity of Planning with Partial Observability
In ICAPS 2004: Proceedings of the Fourteenth International Conference on Automated Planning and Scheduling, 2004
Cited by 48 (3 self)
Abstract: "... We show that for conditional planning with partial observability, the problem of testing existence of plans with success probability 1 is 2EXP-complete. This result completes the complexity picture for non-probabilistic propositional planning. We also give new proofs for the EXP-hardness of conditio ..."
of Interactive Partially Observable Markov
Abstract: "... This paper extends the framework of dynamic influence diagrams (DIDs) to the multiagent setting. DIDs are computational representations of partially observable Markov decision processes (POMDPs), which are frameworks for sequential decision-making in single ..."
Partial Functions
Cited by 494 (10 self)
Abstract: "... this article we prove some auxiliary theorems and schemes related to the articles [1] and [2]. MML Identifier: PARTFUN1. WWW: http://mizar.org/JFM/Vol1/partfun1.html. The articles [4], [6], [3], [5], [7], [8], and [1] provide the notation and terminology for this paper. We adopt the following rules: x, y, y1, y2, z, z1, z2 denote sets; P, Q, X, X0, X1, X2, Y, Y0, Y1, Y2, V, Z denote sets; and C, D denote non-empty sets. We now state three propositions: (1) If P ⊆ [: X1 ..."