Results 1 - 4 of 4
Explore to See, Learn to Perceive, Get the Actions for Free: SKILLABILITY, 2014
"... How can a humanoid robot autonomously learn and refine multiple sensorimotor skills as a byproduct of curiosity driven exploration, upon its high-dimensional unprocessed visual input? We present SKILLABILITY, which makes this possible. It combines the recently introduced Curiosity Driven Modular In ..."
Abstract
How can a humanoid robot autonomously learn and refine multiple sensorimotor skills as a byproduct of curiosity-driven exploration, using its high-dimensional, unprocessed visual input? We present SKILLABILITY, which makes this possible. It combines the recently introduced Curiosity Driven Modular Incremental Slow Feature Analysis (Curious Dr. MISFA) with the well-known options framework. Curious Dr. MISFA’s objective is to acquire abstractions as quickly as possible. These abstractions map high-dimensional pixel-level vision to a low-dimensional manifold. We find that each learnable abstraction augments the robot’s state space (a set of poses) with new information about the environment, for example, whether the robot is grasping a cup. The abstraction is a function on an image, called a slow feature, which can effectively discretize a high-dimensional visual sequence. For example, it maps the sequence of the robot watching its arm as it moves around, grasping randomly, then grasping a cup, and moving around some more while holding the cup, into a step function with two outputs, indicating whether the cup is currently grasped or not. The new state space includes this grasped/not-grasped information. Each abstraction is coupled with an option. The reward function for the option’s policy (learned through Least Squares Policy Iteration) is high for transitions that produce a large change in the step-function-like slow features. This corresponds to finding bottleneck states, which are known to be good subgoals for hierarchical reinforcement learning; in the example, the subgoal corresponds to grasping the cup. The final skill includes both the learned policy and the learned abstraction. SKILLABILITY makes our iCub the first humanoid robot to learn complex skills, such as toppling or grasping an object, from raw high-dimensional video input, driven purely by its intrinsic motivations.
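The central reward construction described above, a high reward for transitions that flip a step-like slow feature, can be sketched in a few lines. The following is only an illustrative linear SFA plus reward function (the names linear_sfa, slow_feature_reward, and phi are hypothetical); it is not the incremental Curious Dr. MISFA learner or the authors' code.

```python
import numpy as np

def linear_sfa(X, n_features=1):
    """Minimal linear Slow Feature Analysis on a time series X (T x D):
    whiten the data, then keep the whitened directions whose temporal
    derivative has the smallest variance (the slowest-varying outputs)."""
    mu = X.mean(axis=0)
    Xc = X - mu                                      # center
    eigval, eigvec = np.linalg.eigh(np.cov(Xc, rowvar=False))
    keep = eigval > 1e-10                            # drop near-null directions
    W = eigvec[:, keep] / np.sqrt(eigval[keep])      # whitening matrix
    Z = Xc @ W                                       # whitened signal
    _, dvec = np.linalg.eigh(np.cov(np.diff(Z, axis=0), rowvar=False))
    P = dvec[:, :n_features]                         # slowest directions first
    return lambda x: ((x - mu) @ W) @ P              # slow-feature function phi

def slow_feature_reward(phi, obs_before, obs_after):
    """Reward a transition by how much it changes the step-like slow feature,
    so state changes such as grasp events receive high reward."""
    return float(np.abs(phi(obs_after) - phi(obs_before)).sum())
```

In the scenario above, phi would be fit on flattened image frames of the robot's visual stream, and the reward would peak exactly at the grasp/release transitions that serve as subgoals.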
Rapid Humanoid Motion Learning through Coordinated, Parallel Evolution
"... Abstract. Planning movements for humanoid robots is still a major challenge due to the very high degrees-of-freedom involved. Most hu-manoid control frameworks incorporate dynamical constraints related to a task that require detailed knowledge of the robot’s dynamics, making them impractical as effi ..."
Abstract
Planning movements for humanoid robots is still a major challenge due to the very high degrees of freedom involved. Most humanoid control frameworks incorporate dynamical constraints related to a task that require detailed knowledge of the robot’s dynamics, making them impractical for efficient planning. In previous work, we introduced a novel planning method that uses an inverse kinematics solver called Natural Gradient Inverse Kinematics (NGIK) to build task-relevant roadmaps (graphs in task space representing robot configurations that satisfy task constraints) by searching the configuration space via the Natural Evolution Strategies (NES) algorithm. The approach places minimal requirements on the constraints, allowing for complex planning in the task space. However, building a roadmap via NGIK is too slow for dynamic environments. In this paper, the approach is scaled up to a fully parallelized implementation where additional constraints coordinate the interaction between independent NES searches running on separate threads. Parallelization yields a 12x speedup that moves this promising planning method a major step closer to working in dynamic environments.
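For readers unfamiliar with NES, the sketch below shows a single NES-style update with an isotropic Gaussian search distribution and rank-based fitness shaping; the names nes_step and fitness_fn are hypothetical, and the sketch omits NGIK, the covariance adaptation of full NES, and the inter-thread coordination constraints discussed in the abstract.

```python
import numpy as np

def nes_step(theta, fitness_fn, sigma=0.05, pop=32, lr=0.1, rng=None):
    """One iteration of a basic NES-style search: sample a population of
    Gaussian perturbations of theta, rank them by fitness, and move theta
    along the fitness-weighted average direction (maximizes fitness_fn)."""
    rng = rng or np.random.default_rng()
    eps = rng.standard_normal((pop, theta.size))     # search directions
    fit = np.array([fitness_fn(theta + sigma * e) for e in eps])
    order = fit.argsort()                            # indices from worst to best
    utils = np.empty(pop)
    utils[order] = np.linspace(-0.5, 0.5, pop)       # rank-based fitness shaping
    grad = utils @ eps / (pop * sigma)               # search-gradient estimate
    return theta + lr * grad
```

In the roadmap setting, fitness_fn would score how well a candidate joint configuration satisfies the task constraints, and several such searches could run on separate threads.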
A Bottom-Up Integration of Vision and Actions To Create Cognitive Humanoids
"... ..."
(Show Context)
Reactive Reaching and Grasping on a Humanoid: Towards Closing the Action-Perception Loop on the iCub
"... robot perception, eye-hand coordination, computer vision Abstract: We propose a system incorporating a tight integration between computer vision and robot control modules on a complex, high-DOF humanoid robot. Its functionality is showcased by having our iCub humanoid robot pick-up objects from a ta ..."
Keywords: robot perception, eye-hand coordination, computer vision
Abstract
We propose a system incorporating a tight integration between computer vision and robot control modules on a complex, high-DOF humanoid robot. Its functionality is showcased by having our iCub humanoid robot pick up objects from a table in front of it. An important feature is that the system can avoid obstacles (other objects detected in the visual stream) while reaching for the intended target object. Our integration also allows for non-static environments, i.e., the reaching is adapted on the fly from the visual feedback received, e.g., when an obstacle is moved into the trajectory. Furthermore, we show that this system can be used in both autonomous and tele-operation scenarios.
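As a rough illustration of the kind of on-the-fly reactive adaptation described above, the sketch below applies a simple attractive/repulsive velocity rule in Cartesian space; the function and parameter names (reactive_step, k_att, k_rep, d0) are hypothetical, and this is not the paper's vision or control architecture.

```python
import numpy as np

def reactive_step(hand, target, obstacles, k_att=1.0, k_rep=0.5, d0=0.15, dt=0.05):
    """One step of a reactive Cartesian reacher: attractive velocity toward the
    target plus a repulsive term pushing away from any obstacle closer than d0."""
    v = k_att * (target - hand)                      # pull toward the target
    for obs in obstacles:
        diff = hand - obs
        d = np.linalg.norm(diff)
        if 1e-6 < d < d0:                            # obstacle inside the safety zone
            v += k_rep * (1.0 / d - 1.0 / d0) * diff / d**2
    return hand + dt * v                             # next Cartesian hand position
```

Called in a loop with the latest hand, target, and obstacle estimates from the vision pipeline, such a rule lets a newly detected or moved obstacle deflect the reaching trajectory immediately.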