Results 1 - 3 of 3
Bayesian Calibration of the Hand-Eye Kinematics of an Anthropomorphic Robot
"... Abstract — We present a Bayesian approach to calibrating the hand-eye kinematics of an anthropomorphic robot. In our approach, the robot perceives the pose of its end-effector with its head-mounted camera through visual markers attached to its end-effector. It collects training observations at sever ..."
Cited by 2 (0 self)
Abstract — We present a Bayesian approach to calibrating the hand-eye kinematics of an anthropomorphic robot. In our approach, the robot perceives the pose of its end-effector with its head-mounted camera through visual markers attached to its end-effector. It collects training observations at several configurations of its 7-DoF arm and 2-DoF neck, which are subsequently used for an optimization in a batch process. We tune Denavit-Hartenberg parameters and joint gear reductions as a minimal representation of the rigid kinematic chain. In order to handle the uncertainties of marker pose estimates and joint position measurements, we use a maximum a posteriori formulation that allows for incorporating prior model knowledge. This way, a multitude of parameters can be optimized from only a few observations. We demonstrate our approach in simulation experiments and with a real robot, and provide in-depth experimental analysis of our optimization approach.
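Under Gaussian observation noise and a Gaussian prior on the parameters, the maximum a posteriori formulation the abstract describes reduces to a nonlinear least-squares problem. The following is a minimal illustrative sketch of that idea only, not the paper's code: the toy 2-link chain, the parameter layout, and the data are hypothetical, and scipy's least_squares stands in for whatever batch optimizer the authors actually use.

    # Minimal sketch of MAP calibration as nonlinear least squares
    # (hypothetical toy example; Gaussian noise and prior assumed).
    import numpy as np
    from scipy.optimize import least_squares

    def dh_transform(theta, d, a, alpha):
        """Homogeneous transform for one Denavit-Hartenberg link."""
        ct, st = np.cos(theta), np.sin(theta)
        ca, sa = np.cos(alpha), np.sin(alpha)
        return np.array([[ct, -st * ca,  st * sa, a * ct],
                         [st,  ct * ca, -ct * sa, a * st],
                         [0.0,      sa,       ca,      d],
                         [0.0,     0.0,      0.0,    1.0]])

    def forward_kinematics(params, q):
        """End-effector position of a toy 2-link chain.

        params = [a1, alpha1, d1, a2, alpha2, d2] are the DH parameters
        being calibrated; q supplies the measured joint angles.
        """
        a1, al1, d1, a2, al2, d2 = params
        T = dh_transform(q[0], d1, a1, al1) @ dh_transform(q[1], d2, a2, al2)
        return T[:3, 3]

    def map_residuals(params, observations, prior_mean, prior_std, obs_std):
        """Data misfit stacked with a Gaussian-prior penalty.

        Minimizing the squared norm of this vector maximizes the posterior
        under Gaussian observation noise and a Gaussian parameter prior.
        """
        res = [(forward_kinematics(params, q) - marker) / obs_std
               for q, marker in observations]
        res.append((params - prior_mean) / prior_std)  # prior, e.g. from CAD data
        return np.concatenate(res)

    # Hypothetical usage: a prior from nominal design values and a handful of
    # (joint configuration, camera-observed marker position) training pairs.
    prior_mean = np.array([0.30, 0.0, 0.10, 0.25, 0.0, 0.0])
    prior_std = np.full(6, 0.05)
    rng = np.random.default_rng(0)
    true_params = prior_mean + 0.01  # simulated miscalibration
    observations = [(q, forward_kinematics(true_params, q))
                    for q in rng.uniform(-1.0, 1.0, size=(8, 2))]
    fit = least_squares(map_residuals, prior_mean,
                        args=(observations, prior_mean, prior_std, 0.005))
    print("calibrated DH parameters:", fit.x)

The prior term is what lets many parameters be fit from few observations: poorly observed parameters are simply held near their nominal values.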
Mirror Perspective-Taking with a Humanoid Robot
"... The ability to use a mirror as an instrument for spatial rea-soning enables an agent to make meaningful inferences about the positions of objects in space based on the appearance of their reflections in mirrors. The model presented in this paper enables a robot to infer the perspective from which ob ..."
Abstract
The ability to use a mirror as an instrument for spatial reasoning enables an agent to make meaningful inferences about the positions of objects in space based on the appearance of their reflections in mirrors. The model presented in this paper enables a robot to infer the perspective from which objects reflected in a mirror appear to be observed, allowing the robot to use this perspective as a virtual camera. Prior work by our group presented an architecture through which a robot learns the spatial relationship between its body and visual sense, mimicking an early form of self-knowledge in which infants learn about their bodies and senses through their interactions with each other. In this work, this self-knowledge is utilized in order to determine the mirror's perspective. Witnessing the position of its end-effector in a mirror in several distinct poses, the robot determines a perspective that is consistent with these observations. The system is evaluated by measuring how well the robot's predictions of its end-effector's position in 3D, relative to the robot's egocentric coordinate system, and in 2D, as projected onto its cameras, match measurements of a marker tracked by its stereo vision system. Reconstructions of the 3D position of the end-effector, as computed from the perspective of the mirror, are found to agree with the forward kinematic model within a mean of 31.55 mm. When observed directly by the robot's cameras, reconstructions agree within 5.12 mm. Predictions of the 2D position of the end-effector in the visual field agree with visual measurements within a mean of 18.47 pixels when observed in the mirror, or 5.66 pixels when observed directly by the robot's cameras.
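The "mirror as virtual camera" idea rests on standard reflection geometry: reflecting the camera pose across the mirror plane yields a virtual camera that observes a real point exactly where the real camera observes that point's reflection. The sketch below illustrates only that geometric core, under the simplifying assumption of a known mirror plane; in the paper the mirror's perspective is estimated from observations of the end-effector, and all names here are hypothetical.

    # Minimal sketch of the reflection geometry behind the virtual camera
    # (hypothetical example; the mirror plane is assumed known here,
    # whereas the paper estimates it from observations).
    import numpy as np

    def reflect_point(x, n, p):
        """Reflect a 3D point across the mirror plane (normal n, point p)."""
        n = n / np.linalg.norm(n)
        return x - 2.0 * np.dot(x - p, n) * n

    def virtual_camera_pose(R, c, n, p):
        """Mirror a camera with world-to-camera rotation R and center c.

        The virtual camera observes a real point exactly where the real
        camera observes that point's reflection (handedness is flipped).
        """
        n = n / np.linalg.norm(n)
        S = np.eye(3) - 2.0 * np.outer(n, n)  # Householder reflection
        return R @ S, reflect_point(c, n, p)

    # Hypothetical usage: camera at the origin facing a mirror lying in
    # the plane z = 1; the end-effector's mirror image lies behind it.
    R = np.eye(3)                   # world-to-camera rotation
    c = np.zeros(3)                 # camera center
    n = np.array([0.0, 0.0, 1.0])   # mirror plane normal
    p = np.array([0.0, 0.0, 1.0])   # a point on the mirror plane
    end_effector = np.array([0.1, 0.2, 0.5])

    R_v, c_v = virtual_camera_pose(R, c, n, p)
    print("virtual camera center:", c_v)              # (0, 0, 2)
    print("mirror image of end-effector:",
          reflect_point(end_effector, n, p))          # (0.1, 0.2, 1.5)

Fitting the plane parameters n and p so that the virtual camera's predictions match several observed end-effector poses is one plausible reading of the "perspective consistent with these observations" step.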
Robot Self-Modeling
, 2014
"... Traditionally, models of a robot’s kinematics and sensors have been provided by designers through manual processes. Such models are used for sensorimotor tasks, such as manipulation and stereo vision. However, these techniques often yield static models based on one-time calibrations or ideal enginee ..."
Abstract
Traditionally, models of a robot's kinematics and sensors have been provided by designers through manual processes. Such models are used for sensorimotor tasks, such as manipulation and stereo vision. However, these techniques often yield static models based on one-time calibrations or ideal engineering drawings; models that often fail to represent the actual hardware, or in which individual unimodal models, such as those describing kinematics and vision, may disagree with each other. Humans, on the other hand, are not so limited. One of the earliest forms of self-knowledge learned during infancy is knowledge of the body and senses. Infants learn about their bodies and senses through the experience of using them in conjunction with each other. Inspired by this early form of self-awareness, the research presented in this thesis attempts to enable robots to learn unified models of themselves through data sampled during operation. In the presented experiments, an upper torso humanoid robot, Nico, creates a highly-accurate self-representation through data sampled by its sensors while it operates. The power of this model is demonstrated through a novel robot vision task in which the robot infers the visual ...