Results 1–10 of 18,176
Planning and acting in partially observable stochastic domains
Artificial Intelligence, 1998
"... In this paper, we bring techniques from operations research to bear on the problem of choosing optimal actions in partially observable stochastic domains. We begin by introducing the theory of Markov decision processes (mdps) and partially observable mdps (pomdps). We then outline a novel algorithm ..."
Cited by 1095 (38 self)
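The POMDP machinery the abstract above introduces rests on tracking a belief state, a probability distribution over hidden states updated by Bayes' rule after each action and observation. A minimal sketch, with a hand-made two-state model whose states, matrices, and numbers are purely illustrative (not from the cited paper):

```python
# Minimal sketch of a POMDP belief-state update. The tiny model below
# (2 states, 1 action, 2 observations) is an illustrative assumption,
# not taken from the cited paper.

def belief_update(belief, action, observation, T, O):
    """Bayes filter: b'(s') is proportional to O[a][s'][o] * sum_s T[a][s][s'] * b(s)."""
    n = len(belief)
    new_belief = []
    for s2 in range(n):
        predicted = sum(T[action][s][s2] * belief[s] for s in range(n))
        new_belief.append(O[action][s2][observation] * predicted)
    total = sum(new_belief)
    return [x / total for x in new_belief]  # normalize to a distribution

# T[a][s][s']: transition probabilities; O[a][s'][o]: observation probabilities.
T = {0: [[0.9, 0.1], [0.2, 0.8]]}
O = {0: [[0.8, 0.2], [0.3, 0.7]]}
b = belief_update([0.5, 0.5], action=0, observation=0, T=T, O=O)
# Observation 0 is likelier in state 0, so the belief shifts toward state 0.
```

An optimal POMDP policy is then a mapping from such beliefs to actions, which is what the algorithms surveyed in this listing compute or approximate.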
Acting Optimally in Partially Observable Stochastic Domains
1994
"... In this paper, we describe the partially observable Markov decision process (POMDP) approach to finding optimal or nearoptimal control strategies for partially observable stochastic environments, given a complete model of the environment. The POMDP approach was originally developed in the operation ..."
Cited by 327 (16 self)
Probabilistic robot navigation in partially observable environments
In Proc. of the International Joint Conference on Artificial Intelligence (IJCAI), 1995
"... Autonomous mobile robots need very reliable navigation capabilities in order to operate unattended for long periods of time. This paper reports on first results of a research program that uses partially observable Markov models to robustly track a robot’s location in office environments and to direc ..."
Cited by 293 (13 self)
Learning policies for partially observable environments: Scaling up
1995
"... Partially observable Markov decision processes (pomdp's) model decision problems in which an agent tries to maximize its reward in the face of limited and/or noisy sensor feedback. While the study of pomdp's is motivated by a need to address realistic problems, existing techniques for fin ..."
Cited by 296 (11 self)
Partial Observers
2006
"... We attempt to dissolve the measurement problem using an anthropic principle which allows us to invoke rational observers. We argue that the key feature of such observers is that they are rational (we need not care whether they are ‘classical ’ or ‘macroscopic ’ for example) and thus, since quantum t ..."
Timed Control with Partial Observability
2003
"... We consider the problem of synthesizing controllers for timed systems modeled using timed automata. The point of departure from earlier work is that we consider controllers that have only a partial observation of the system that it controls. In discrete event systems (where continuous time is not ..."
Cited by 52 (6 self)
Dynamic Programming for Partially Observable Stochastic Games
In Proceedings of the Nineteenth National Conference on Artificial Intelligence, 2004
"... We develop an exact dynamic programming algorithm for partially observable stochastic games (POSGs). The algorithm is a synthesis of dynamic programming for partially observable Markov decision processes (POMDPs) and iterated elimination of dominated strategies in normal form games. ..."
Cited by 159 (25 self)
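One of the two ingredients this abstract names, iterated elimination of dominated strategies in normal-form games, is easy to sketch in isolation. The payoff matrix below is made up for illustration; this shows only the game-theoretic subroutine, not the paper's full POSG dynamic programming algorithm:

```python
# Hedged sketch of iterated elimination of strictly dominated pure
# strategies for one player in a normal-form game. The payoff matrix
# below is an illustrative assumption, not from the cited paper.

def eliminate_dominated(payoffs):
    """Repeatedly remove any row strategy strictly dominated by another.

    payoffs: dict mapping row strategy -> list of payoffs, one entry per
    opponent action. Returns the set of surviving row strategies.
    """
    rows = dict(payoffs)
    changed = True
    while changed:
        changed = False
        for r in list(rows):
            for r2 in list(rows):
                # r2 strictly dominates r if it is better against every column.
                if r2 != r and all(a > b for a, b in zip(rows[r2], rows[r])):
                    del rows[r]
                    changed = True
                    break
            if changed:
                break
    return set(rows)

# "C" is strictly dominated by "A" (3 > 2 and 2 > 1) and is eliminated;
# "A" and "B" survive because neither dominates the other.
surviving = eliminate_dominated({"A": [3, 2], "B": [1, 4], "C": [2, 1]})
```

In the paper's setting, an analogous pruning step is interleaved with POMDP-style dynamic programming backups over the players' policy trees.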
Complexity of Planning with Partial Observability
ICAPS 2004: Proceedings of the Fourteenth International Conference on Automated Planning and Scheduling, 2004
"... We show that for conditional planning with partial observability the problem of testing existence of plans with success probability 1 is 2EXPcomplete. This result completes the complexity picture for nonprobabilistic propositional planning. We also give new proofs for the EXPhardness of conditio ..."
Cited by 51 (3 self)
Partially observable Markov decision processes with continuous observations for dialogue management
Computer Speech and Language, 2005
"... This work shows how a dialogue model can be represented as a Partially Observable Markov Decision Process (POMDP) with observations composed of a discrete and continuous component. The continuous component enables the model to directly incorporate a confidence score for automated planning. Using a t ..."
Cited by 217 (52 self)
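The mixed observation this abstract describes, a discrete component plus a continuous confidence score, can be sketched as a factored observation likelihood. The dialogue states, words, and Gaussian confidence model below are illustrative assumptions, not the cited paper's model:

```python
# Illustrative sketch: a POMDP observation likelihood with a discrete
# component (the recognized word) and a continuous component (a speech
# recognizer's confidence score). Modeling the confidence as a Gaussian
# per (state, word) pair is an assumption made for this example only.
import math

def gaussian_pdf(x, mean, std):
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def observation_likelihood(state, word, confidence, p_word, conf_model):
    """p(o | s) factored as P(word | s) * p(confidence | s, word)."""
    mean, std = conf_model[(state, word)]
    return p_word[(state, word)] * gaussian_pdf(confidence, mean, std)

# Two hypothetical user goals; confidence tends to be high when the
# recognized word matches the true goal.
p_word = {("want_a", "a"): 0.8, ("want_a", "b"): 0.2,
          ("want_b", "a"): 0.3, ("want_b", "b"): 0.7}
conf_model = {("want_a", "a"): (0.9, 0.1), ("want_a", "b"): (0.4, 0.2),
              ("want_b", "a"): (0.4, 0.2), ("want_b", "b"): (0.9, 0.1)}

# Hearing "a" with high confidence supports the goal want_a over want_b.
like_a = observation_likelihood("want_a", "a", 0.85, p_word, conf_model)
like_b = observation_likelihood("want_b", "a", 0.85, p_word, conf_model)
```

Plugging such a likelihood into a standard belief update is what lets the dialogue manager weight recognition hypotheses by their confidence rather than treating them as certain.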
Partially observed values
In Proc. Int. Joint Conf. on Neural Networks (IJCNN), 2004
"... It is common to have both observed and missing values in data. This paper concentrates on the case where a value can be somewhere between those two ends, partially observed and partially missing. To achieve that, a method of using evidence nodes in a Bayesian network is studied. Different ways of ha ..."
Cited by 7 (4 self)