Results 1 - 10 of 202
Monte Carlo Localization: Efficient Position Estimation for Mobile Robots
- In Proc. of the National Conference on Artificial Intelligence (AAAI), 1999
"... This paper presents a new algorithm for mobile robot localization, called Monte Carlo Localization (MCL). MCL is a version of Markov localization, a family of probabilistic approaches that have recently been applied with great practical success. However, previous approaches were either computational ..."
Abstract - Cited by 343 (46 self)
This paper presents a new algorithm for mobile robot localization, called Monte Carlo Localization (MCL). MCL is a version of Markov localization, a family of probabilistic approaches that have recently been applied with great practical success. However, previous approaches were either computationally cumbersome (such as grid-based approaches that represent the state space by high-resolution 3D grids), or had to resort to extremely coarse-grained resolutions. Our approach is computationally efficient while retaining the ability to represent (almost) arbitrary distributions. MCL applies sampling-based methods for approximating probability distributions, in a way that places computation "where needed." The number of samples is adapted on-line, thereby invoking large sample sets only when necessary. Empirical results illustrate that MCL yields improved accuracy while requiring an order of magnitude less computation when compared to previous approaches. It is also much easier to implement...
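The core of MCL is a particle filter: predict each sample forward under the motion model, weight by the sensor likelihood, then resample. Below is a minimal sketch under simplified assumptions (a 1-D corridor with a single landmark at a known position and a Gaussian range sensor, all invented for illustration; the paper's robots localize in (x, y, theta) with learned sensor models, and adapt the sample count on-line rather than keeping it fixed):

```python
import math
import random

def mcl_step(particles, control, measurement,
             motion_noise=0.1, sensor_sigma=0.5, landmark=10.0):
    # 1. Prediction: propagate each particle through the noisy motion model.
    moved = [p + control + random.gauss(0.0, motion_noise) for p in particles]

    # 2. Correction: weight each particle by the measurement likelihood,
    #    here a Gaussian over the range to a single known landmark.
    def likelihood(pose):
        err = measurement - abs(landmark - pose)
        return math.exp(-err * err / (2.0 * sensor_sigma ** 2))

    weights = [likelihood(p) for p in moved]
    total = sum(weights) or 1e-12      # guard against all-zero weights

    # 3. Resampling: draw a new sample set proportional to the weights.
    return random.choices(moved, weights=[w / total for w in weights],
                          k=len(moved))

# Start globally uncertain, then fold in one motion and one range reading.
particles = [random.uniform(0.0, 20.0) for _ in range(500)]
particles = mcl_step(particles, control=1.0, measurement=8.5)
```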
The interactive museum tour-guide robot
1998
"... This paper describes the software architecture of an autonomous tour-guide/tutor robot. This robot was recently deployed in the “Deutsches Museum Bonn, ” were it guided hundreds of visitors through the museum during a six-day deployment period. The robot’s control software integrates low-level proba ..."
Abstract - Cited by 225 (32 self)
This paper describes the software architecture of an autonomous tour-guide/tutor robot. This robot was recently deployed in the “Deutsches Museum Bonn,” where it guided hundreds of visitors through the museum during a six-day deployment period. The robot’s control software integrates low-level probabilistic reasoning with high-level problem solving embedded in first-order logic. A collection of software innovations, described in this paper, enabled the robot to navigate at high speeds through dense crowds, while reliably avoiding collisions with obstacles—some of which could not even be perceived. Also described in this paper is a user interface tailored towards non-expert users, which was essential for the robot’s success in the museum. Based on these experiences, this paper argues that the time is ripe for the development of AI-based commercial service robots that assist people in everyday life.
Probabilistic Algorithms and the Interactive Museum Tour-Guide Robot Minerva
2000
"... This paper describes Minerva, an interactive tour-guide robot that was successfully deployed in a Smithsonian museum. Minerva's software is pervasively probabilistic, relying on explicit representations of uncertainty in perception and control. This article describes ..."
Abstract - Cited by 196 (38 self)
This paper describes Minerva, an interactive tour-guide robot that was successfully deployed in a Smithsonian museum. Minerva's software is pervasively probabilistic, relying on explicit representations of uncertainty in perception and control. This article describes ...
Exact and Approximate Algorithms for Partially Observable Markov Decision Processes
1998
"... Automated sequential decision making is crucial in many contexts. In the face of uncertainty, this task becomes even more important, though at the same time, computing optimal decision policies becomes more complex. The more sources of uncertainty there are, the harder the problem becomes to solve. ..."
Abstract - Cited by 186 (2 self)
Automated sequential decision making is crucial in many contexts. In the face of uncertainty, this task becomes even more important, though at the same time, computing optimal decision policies becomes more complex. The more sources of uncertainty there are, the harder the problem becomes to solve. In this work, we look at sequential decision making in environments where the actions have probabilistic outcomes and in which the system state is only partially observable. We focus on using a model called a partially observable Markov decision process (POMDP) and explore algorithms which address computing both optimal and approximate policies for use in controlling processes that are modeled using POMDPs. Although solving for the optimal policy is PSPACE-complete (or worse), studying and improving exact algorithms lends insight into the structure of optimal solutions and provides a basis for approximate solutions. We present some improvements, analysis, and empirical comparisons for some existing and some novel approaches for computing the optimal POMDP policy exactly. Since it is also hard (NP-complete or worse) to derive close approximations to the optimal solution for POMDPs, we consider a number of approaches for deriving policies that yield sub-optimal control and empirically explore their performance on a range of problems. These approaches ...
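The computational object all of these algorithms manipulate is the belief state, updated after each action a and observation o by b'(s') ∝ O(o | s', a) Σ_s T(s' | s, a) b(s). Here is a minimal sketch for a toy two-state problem (the transition and observation probabilities below are invented; the thesis treats general finite POMDPs):

```python
import numpy as np

# T[a, s, s']: probability of reaching s' from s under action a.
T = np.array([[[0.9, 0.1],
               [0.2, 0.8]]])           # one action, two states
# O[a, s', o]: probability of observing o in s' after taking action a.
O = np.array([[[0.8, 0.2],
               [0.3, 0.7]]])           # two possible observations

def belief_update(b, a, o):
    predicted = b @ T[a]               # prediction: sum over prior states s
    weighted = predicted * O[a, :, o]  # correction: observation likelihood
    return weighted / weighted.sum()   # renormalize to a distribution

b = np.array([0.5, 0.5])               # uniform initial belief
b = belief_update(b, a=0, o=1)
print(b)                               # belief shifts toward state 1
```

Exact algorithms then optimize a value function over this continuous belief space, which is the source of the complexity results the abstract cites.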
Bayesian Map Learning in Dynamic Environments
- In Neural Info. Proc. Systems (NIPS)
"... We show how map learning can be formulated as inference in a graphical model, which allows us to handle changing environments in a natural manner. We describe several different approximation schemes for the problem, and illustrate some results on a simulated grid-world with doors that can open a ..."
Abstract - Cited by 163 (2 self)
We show how map learning can be formulated as inference in a graphical model, which allows us to handle changing environments in a natural manner. We describe several different approximation schemes for the problem, and illustrate some results on a simulated grid-world with doors that can open and close. We close by briefly discussing how to learn more general models of (partially observed) environments, which can contain a variable number of objects with changing internal state.

1 Introduction

Mobile robots need to navigate in dynamic environments: on a short time scale, obstacles, such as people, can appear and disappear, and on longer time scales, structural changes, such as doors opening and closing, can occur. In this paper, we consider how to create models of dynamic environments. In particular, we are interested in modeling the location of objects, which we can represent using a map. This enables the robot to perform path planning, etc. We propose a Bayesian approach in ...
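One way to read the central idea: each dynamic feature of the map, such as a door, becomes a hidden state tracked over time, so the map update is just filtering in a graphical model. A minimal sketch for a single door, treated as a two-state Markov chain with a noisy sensor (the flip and sensor probabilities are invented for illustration; the paper learns and reasons about such models rather than fixing them):

```python
import numpy as np

P_CHANGE = 0.05                        # chance the door flips state per step
A = np.array([[1 - P_CHANGE, P_CHANGE],      # state 0 = closed, 1 = open
              [P_CHANGE, 1 - P_CHANGE]])
P_CORRECT = 0.9                        # sensor reads the true state 90% of the time
B = np.array([[P_CORRECT, 1 - P_CORRECT],    # B[s, z] = P(observe z | state s)
              [1 - P_CORRECT, P_CORRECT]])

def door_belief_update(belief, z):
    predicted = belief @ A             # the door may have changed since last seen
    posterior = predicted * B[:, z]    # fold in the current observation
    return posterior / posterior.sum()

belief = np.array([0.5, 0.5])          # initially unknown door state
for z in [1, 1, 0]:                    # observations over repeated visits
    belief = door_belief_update(belief, z)
print(belief.round(3))
```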
Adapting the Sample Size in Particle Filters Through KLD-Sampling
- International Journal of Robotics Research, 2003
"... Over the last years, particle filters have been applied with great success to a variety of state estimation problems. In this paper we present a statistical approach to increasing the efficiency of particle filters by adapting the size of sample sets during the estimation process. ..."
Abstract - Cited by 150 (9 self)
In recent years, particle filters have been applied with great success to a variety of state estimation problems. In this paper we present a statistical approach to increasing the efficiency of particle filters by adapting the size of sample sets during the estimation process.
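The statistical test behind KLD-sampling fixes the sample count n so that, with probability 1 − δ, the KL divergence between the sample-based approximation and the true posterior stays below a bound ε; n grows with the number k of histogram bins the particles currently occupy. A sketch of that bound (parameter values are illustrative):

```python
import math

def kld_sample_size(k, epsilon=0.05, z=1.96):
    # k       : number of histogram bins with at least one particle
    # epsilon : bound on the KL divergence of the approximation
    # z       : (1 - delta)-quantile of a standard normal (1.96 -> delta ~ 0.025)
    if k < 2:
        return 1
    c = 2.0 / (9.0 * (k - 1))
    return int(math.ceil((k - 1) / (2.0 * epsilon)
                         * (1.0 - c + math.sqrt(c) * z) ** 3))

# A focused belief occupies few bins and needs few particles; a spread-out
# belief (e.g. during global localization) occupies many bins and needs many.
print(kld_sample_size(k=5), kld_sample_size(k=500))
```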
Learning Topological Maps with Weak Local Odometric Information
- In Proceedings of IJCAI-97, 1997
"... Topological maps provide a useful abstraction for robotic navigation and planning. Although stochastic maps can theoretically be learned using the Baum-Welch algorithm, without strong prior constraint on the structure of the model it is slow to converge, requires a great deal of data, and is o ..."
Abstract - Cited by 136 (4 self)
Topological maps provide a useful abstraction for robotic navigation and planning. Although stochastic maps can theoretically be learned using the Baum-Welch algorithm, without strong prior constraints on the structure of the model the algorithm is slow to converge, requires a great deal of data, and often gets stuck in local minima. In this paper, we consider a special case of hidden Markov models for robot-navigation environments, in which states are associated with points in a metric configuration space. We assume that the robot has some odometric ability to measure relative transformations between its configurations. Such odometry is typically not precise enough to suffice for building a global map, but it does give valuable local information about relations between adjacent states. We present an extension of the Baum-Welch algorithm that takes advantage of this local odometric information, yielding faster convergence to better solutions with less data.
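One way to picture the extension: the standard Baum-Welch E-step already yields posterior state-occupation probabilities gamma(t, s), and the odometric readings can be folded into the M-step by re-estimating each state's metric location as a gamma-weighted average of the odometry. The sketch below is a simplified 1-D illustration of that idea, not the authors' exact update equations:

```python
import numpy as np

def m_step_locations(gamma, odometry):
    # gamma[t, s] : posterior probability of being in state s at time t (E-step)
    # odometry[t] : cumulative odometric position estimate at time t
    # Each state's location becomes the gamma-weighted mean of the readings.
    expected_visits = gamma.sum(axis=0)
    return (gamma.T @ odometry) / np.maximum(expected_visits, 1e-12)

gamma = np.array([[0.9, 0.1],          # mostly state 0 early in the run...
                  [0.8, 0.2],
                  [0.1, 0.9],          # ...mostly state 1 later
                  [0.2, 0.8]])
odometry = np.array([0.0, 0.4, 2.1, 2.3])
print(m_step_locations(gamma, odometry))   # approx. [0.5, 1.9]
```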
Xavier: A Robot Navigation Architecture Based on Partially Observable Markov Decision Process Models
- In Artificial Intelligence Based Mobile Robotics: Case Studies of Successful Robot Systems, 1998
"... Autonomous mobile robots need very reliable navigation capabilities in order to operate unattended for long periods of time. We present a technique for achieving this goal that uses partially observable Markov decision process models (POMDPs) to explicitly model navigation uncertainty, including act ..."
Abstract - Cited by 115 (11 self)
Autonomous mobile robots need very reliable navigation capabilities in order to operate unattended for long periods of time. We present a technique for achieving this goal that uses partially observable Markov decision process models (POMDPs) to explicitly model navigation uncertainty, including actuator and sensor uncertainty and approximate knowledge of the environment. This allows the robot to maintain a probability distribution over its current pose. Thus, while the robot rarely knows exactly where it is, it always has some belief as to what its true pose is, and is never completely lost. We present a navigation architecture based on POMDPs that provides a uniform framework with an established theoretical foundation for pose estimation, path planning, robot control during navigation, and learning. Our experiments show that this architecture indeed leads to robust corridor navigation for an actual indoor mobile robot.
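To make the "never completely lost" claim concrete, here is a toy version of the pose belief such an architecture maintains: a distribution over discrete corridor positions, updated under a noisy "move forward" action (the 5-node corridor and noise values are invented for illustration; Xavier's models also include sensor updates and orientation):

```python
import numpy as np

N = 5                                  # corridor discretized into 5 nodes
belief = np.zeros(N)
belief[0] = 1.0                        # pose initially known: node 0

P_STAY, P_MOVE, P_OVERSHOOT = 0.1, 0.8, 0.1   # actuator uncertainty

def forward_update(b):
    nb = np.zeros_like(b)
    for i, p in enumerate(b):
        nb[i] += p * P_STAY                      # wheels slipped, no progress
        nb[min(i + 1, N - 1)] += p * P_MOVE      # advanced one node
        nb[min(i + 2, N - 1)] += p * P_OVERSHOOT # advanced two nodes
    return nb

for _ in range(3):                     # after three forward commands the
    belief = forward_update(belief)    # belief is spread, but never "lost"
print(belief.round(3))
```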
A Layered Architecture for Office Delivery Robots
, 1997
"... Office delivery robots have to perform many tasks. They have to determine the order in which to visit ofces, plan paths to those offices, follow paths reliably, and avoid static and dynamic obstacles in the process. Reliability and efficiency are key issues in the design of such autonomous robot sys ..."
Abstract - Cited by 101 (20 self)
Office delivery robots have to perform many tasks. They have to determine the order in which to visit offices, plan paths to those offices, follow paths reliably, and avoid static and dynamic obstacles in the process. Reliability and efficiency are key issues in the design of such autonomous robot systems. They must deal reliably with noisy sensors and actuators and with incomplete knowledge of the environment. They must also act efficiently, in real time, to deal with dynamic situations. Our architecture is composed of four abstraction layers: obstacle avoidance, navigation, path planning, and task scheduling. The layers are independent, communicating processes that are always active, processing sensory data and status information to update their decisions and actions. A version of our robot architecture has been in nearly daily use in our building since December 1995. As of July 1996, the robot has traveled more than 75 kilometers in service of over 1800 navigation requests that were specified using our World Wide Web interface.
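A toy sketch of that four-layer organization, with each layer as an independent, always-active worker that refines requests and passes them down (real deployments would use separate processes and richer message types; all names below are illustrative):

```python
import queue
import threading
import time

LAYERS = ["task_scheduling", "path_planning", "navigation", "obstacle_avoidance"]
channels = {name: queue.Queue() for name in LAYERS}

def run_layer(name, downstream):
    while True:
        try:
            msg = channels[name].get(timeout=0.5)
        except queue.Empty:
            break                      # demo only: real layers never exit
        print(f"{name}: handling '{msg}'")
        if downstream:                 # refine the request and hand it down
            channels[downstream].put(f"{msg} via {name}")

for upper, lower in zip(LAYERS, LAYERS[1:] + [None]):
    threading.Thread(target=run_layer, args=(upper, lower)).start()

channels["task_scheduling"].put("deliver to office 5313")
time.sleep(0.1)                        # let the request cascade down the stack
```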
Exploiting Structure to Efficiently Solve Large Scale Partially Observable Markov Decision Processes
2005
"... Partially observable Markov decision processes (POMDPs) provide a natural and principled framework to model a wide range of sequential decision making problems under uncertainty. To date, the use of POMDPs in real-world problems has been limited by the poor scalability of existing solution algorithm ..."
Abstract - Cited by 91 (6 self)
Partially observable Markov decision processes (POMDPs) provide a natural and principled framework to model a wide range of sequential decision making problems under uncertainty. To date, the use of POMDPs in real-world problems has been limited by the poor scalability of existing solution algorithms, which can only solve problems with up to ten thousand states. In fact, the complexity of finding an optimal policy for a finite-horizon discrete POMDP is PSPACE-complete. In practice, two important sources of intractability plague most solution algorithms: large policy spaces and large state spaces. On the other hand, ...