Results 11–20 of 52
Bayesian inference for motion control and planning, 2007
Cited by 5 (2 self)
Abstract. Bayesian motion control and planning is based on the idea of fusing motion objectives (constraints, goals, priors, etc.) using probabilistic inference techniques in a way similar to Bayesian sensor fusing. This approach seems promising for tackling two fundamental problems in robotic control and planning: (1) Bayesian inference methods are an ideal candidate for fusing many sources of information or constraints – usually employed in the sensor processing context. Complex motion is characterised by such a multitude of concurrent constraints and tasks, and the Bayesian approach provides a solution of which classical solutions (e.g., prioritised inverse kinematics) are a special case. (2) In the future we will require planning methods that are not based on representing the system state as one high-dimensional state variable but rather cope with structured state representations (distributed, hierarchical, hybrid discrete-continuous) that more directly reflect and exploit the natural structure of the environment. Probabilistic inference offers methods that can in principle handle such representations. Our approach will, for the first time, allow us to transfer these methods to the realm of motion control and planning. The first part of this technical report will review standard optimal (motion rate or dynamic) control from an optimisation perspective and then derive Bayesian versions of the classical solutions. The new control laws show that motion control can be framed as an inference problem in an appropriately formulated probabilistic model. In the second part, by extending the probabilistic models to be Markovian models of the whole trajectory, we show that probabilistic inference methods (belief propagation) yield solutions to motion planning problems. This approach computes a posterior distribution over trajectories and control signals conditioned on goals and constraints.
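As a minimal illustration of the fusion idea in this abstract (our own sketch, not the report's code): with a Gaussian task likelihood over the Jacobian-mapped motion and a Gaussian prior over joint velocities, the MAP motion rate reduces to regularized least squares, recovering classical damped inverse kinematics as a special case. The Jacobian, precisions, and dimensions below are hypothetical.

```python
import numpy as np

def bayesian_motion_rate(J, dy, C, W):
    """MAP joint motion under task likelihood N(dy | J dq, C^-1)
    and prior N(dq | 0, W^-1):  dq* = (J^T C J + W)^-1 J^T C dy."""
    A = J.T @ C @ J + W
    return np.linalg.solve(A, J.T @ C @ dy)

J = np.array([[1.0, 0.5]])   # hypothetical task Jacobian: 1 task dim, 2 joints
dy = np.array([0.2])         # desired task-space displacement
C = np.eye(1) * 1e3          # task precision: large, so the task is nearly hard
W = np.eye(2)                # prior precision penalizing joint motion
dq = bayesian_motion_rate(J, dy, C, W)
```

Raising the task precision C recovers the exact (pseudo-inverse) solution; finite C trades task accuracy against the motion prior, which is exactly the fusion the abstract describes.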
WHAT THE DRAUGHTSMAN’S HAND TELLS THE DRAUGHTSMAN’S EYE: A SENSORIMOTOR ACCOUNT OF DRAWING
Cited by 5 (0 self)
In this paper we address the challenging problem of sensorimotor integration, with reference to eye-hand coordination of an artificial agent engaged in a natural drawing task. Under the assumption that eye-hand coupling influences observed movements, a motor continuity hypothesis is exploited to account for how gaze shifts are constrained by hand movements. A Bayesian model of such coupling is presented in the form of a novel Dynamic Bayesian Network, namely an Input–Output Coupled Hidden Markov Model. Simulation results are compared to those obtained by eye-tracked human subjects involved in drawing experiments.
Human Behavior Modeling with Maximum Entropy Inverse Optimal Control
Cited by 5 (0 self)
In our research, we view human behavior as a structured sequence of context-sensitive decisions. We develop a conditional probabilistic model for predicting human decisions given the contextual situation. Our approach employs the principle of maximum entropy within the Markov Decision Process framework. Modeling human behavior is reduced to recovering a context-sensitive utility function that explains demonstrated behavior within the probabilistic model. In this work, we review the development of our probabilistic model (Ziebart et al. 2008a) and the results of its application to modeling the context-sensitive route preferences of drivers (Ziebart et al. 2008b). We additionally expand the approach’s applicability to domains with stochastic dynamics, present preliminary experiments on modeling time-usage, and discuss remaining challenges for applying our approach to other human behavior modeling problems.
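A toy sketch of the maximum-entropy principle this abstract relies on (our own illustration, not the authors' model): under the MaxEnt assumption, the probability of a path is proportional to its exponentiated reward, so lower-cost routes are exponentially preferred while suboptimal ones retain probability mass. The route names and costs below are invented.

```python
import numpy as np

# Hypothetical route costs for one origin-destination pair; in the real
# model these would be weighted sums of learned context-sensitive features.
paths = {"highway": -1.0, "side_roads": -1.5, "scenic": -3.0}

w = np.array(list(paths.values()))
p = np.exp(w) / np.exp(w).sum()   # MaxEnt distribution over paths
probs = dict(zip(paths, p))
```

Inverse optimal control then runs this forward model in reverse: adjust the cost weights until the model's path distribution matches the demonstrated (observed) routes.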
MAP Estimation for Graphical Models by Likelihood Maximization
Cited by 5 (2 self)
Computing a maximum a posteriori (MAP) assignment in graphical models is a crucial inference problem for many practical applications. Several provably convergent approaches have been successfully developed using linear programming (LP) relaxation of the MAP problem. We present an alternative approach, which transforms the MAP problem into that of inference in a mixture of simple Bayes nets. We then derive the Expectation Maximization (EM) algorithm for this mixture that also monotonically increases a lower bound on the MAP assignment until convergence. The update equations for the EM algorithm are remarkably simple, both conceptually and computationally, and can be implemented using a graph-based message passing paradigm similar to max-product computation. Experiments on the real-world protein design dataset show that EM’s convergence rate is significantly higher than the previous LP relaxation based approach MPLP. EM also achieves a solution quality within 95% of optimal for most instances.
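The message-passing flavor mentioned in the abstract can be sketched with plain max-product on a small chain (our own toy model, not the paper's EM mixture formulation): forward messages carry running maxima, and the backtracked assignment equals the brute-force MAP.

```python
import numpy as np
from itertools import product

# Hypothetical 3-variable chain model with K states per variable.
np.random.seed(0)
K, N = 3, 3
unary = np.random.rand(N, K)          # unary potentials
pair = np.random.rand(N - 1, K, K)    # pairwise potentials on chain edges

# Forward max-product pass: m[i][k] is the best score of any partial
# assignment ending in state k at position i.
m = np.zeros((N, K))
m[0] = unary[0]
back = np.zeros((N, K), dtype=int)
for i in range(1, N):
    scores = m[i - 1][:, None] * pair[i - 1]   # (prev state, current state)
    back[i] = scores.argmax(0)                 # best predecessor per state
    m[i] = unary[i] * scores.max(0)

# Backtrack from the best final state to recover the MAP assignment.
x = [int(m[-1].argmax())]
for i in range(N - 1, 0, -1):
    x.append(int(back[i][x[-1]]))
x = x[::-1]
```

On a tree-structured model this is exact; the paper's contribution is an EM-based alternative for the loopy case, where max-product alone lacks such guarantees.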
Parallels between sensory and motor information processing
Cited by 5 (0 self)
The computational problems solved by the sensory and motor systems appear very different: one has to do with inferring the state of the world given sensory data, the other with generating motor commands appropriate for given task goals. However, recent mathematical developments summarized in this chapter show that these two problems are in many ways related. Therefore information processing in the sensory and motor systems may be more similar than previously thought – not only in terms of computations but also in terms of algorithms and neural representations. Here we explore these similarities as well as clarify some differences between the two systems.
Similarity between inference and control: an intuitive introduction
Consider a control problem where we want to achieve a certain goal at some point in time in the future – say, grasp a coffee cup within 1 sec. To achieve this goal, the motor system has to generate a sequence of muscle activations which result in joint torques which act on the musculoskeletal plant in such a way that the fingers end up curled around the cup. Actually the motor system does not have to compute the entire
Bayesian Maps: Probabilistic and Hierarchical Models for Mobile Robot Navigation, in "Probabilistic Reasoning and Decision Making in Sensory-Motor Systems", Springer
Cited by 5 (1 self)
Imagine yourself lying in your bed at night. Now try and answer these questions: Is your body parallel or not to the sofa you have, two rooms away from your bedroom? What is the distance between your bed and the sofa? If we except cases like rotating beds, people who actually sleep in their sofas, or
Hierarchies of Probabilistic Models of Space for Mobile Robots: the Bayesian Map and the Abstraction operator
In Reasoning with Uncertainty in Robotics (IJCAI’03 Workshop), 2003
Cited by 3 (3 self)
This paper presents a new method for probabilistic modelling of space, called the Bayesian Map formalism.
Optimal limit-cycle control recast as Bayesian inference
Cited by 3 (2 self)
We introduce an algorithm that generates an optimal controller for stochastic nonlinear problems with a periodic solution, e.g. locomotion. Uniquely, the quantity we approximate is neither the Value nor Policy functions, but rather the stationary state-distribution of the optimally-controlled process. We recast the control problem as Bayesian inference over a graphical model with a ring topology. The posterior approximates the controlled stationary distribution with local Gaussians along the optimal limit-cycle. Linear-feedback gains and open-loop controls are extracted from the covariances and the means, respectively. Complexity scales linearly or quadratically with the state dimension, depending on the dynamics approximation. We demonstrate our algorithm on a toy 2-dimensional problem and then on a challenging 23-dimensional simulated walking robot.
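The gain-extraction step described in the abstract can be sketched as follows (our own notation and invented numbers, not the paper's): if the posterior over state and control at a point on the cycle is jointly Gaussian, the conditional mean E[u|x] is linear in x, and its slope is the linear feedback gain.

```python
import numpy as np

# Hypothetical joint posterior covariance at one cycle point:
# state x in R^2 (first two rows/cols), control u in R^1 (last row/col).
Sigma = np.array([[2.0, 0.5, 0.8],
                  [0.5, 1.0, 0.3],
                  [0.8, 0.3, 1.5]])

Sxx = Sigma[:2, :2]   # state covariance
Sux = Sigma[2:, :2]   # control-state cross-covariance

# Gaussian conditioning: E[u|x] = u_bar + K (x - x_bar), with
# K = Sigma_ux Sigma_xx^-1 -- the linear feedback gain.
K = Sux @ np.linalg.inv(Sxx)
```

The open-loop control at that cycle point would simply be the posterior mean of u, matching the abstract's statement that gains come from covariances and controls from means.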
Optimal control for autonomous motor behavior, 2012
Cited by 3 (0 self)
This dissertation presents algorithms that allow robots to generate optimal behavior from first principles. Instead of hard-coding every desired behavior, we encode the task as a cost function, and use numerical optimization to find action sequences that can accomplish the task. Using the theoretical framework of optimal control, we develop methods for generating autonomous motor behavior in high-dimensional domains of legged locomotion. We identify three foundational problems that limit the application of existing optimal control algorithms, and present guiding principles that address these issues. First, some traditional algorithms use global optimization, where every possible state is considered. This approach cannot be applied in continuous domains, where every additional mechanical degree of freedom exponentially increases the volume of state space. In order to sidestep this curse of dimensionality, we focus on trajectory optimization, which finds locally-optimal solutions while scaling only polynomially with state dimensionality. Second, many algorithms of optimal control and reinforcement learning cannot be directly applied to continuous domains with contacts, due to the non-smooth dynamics. We present techniques of contact smoothing that enable the use of standard continuous optimization