Gain-field modulation mechanism in multimodal networks for spatial perception
- in International Conference on Humanoid Robots, Osaka, Japan, 2011
"... Abstract—Seeing is not just done through the eyes, it involves the integration of other modalities such as auditory, proprio-ceptive and tactile information, to locate objets, persons and also the limbs. We hypothesize that the neural mechanism of gain-field modulation, which is found to process coo ..."
Cited by 3 (1 self)
Abstract: Seeing is not done through the eyes alone; it involves the integration of other modalities, such as auditory, proprioceptive and tactile information, to locate objects, persons and also the limbs. We hypothesize that the neural mechanism of gain-field modulation, which is found to process coordinate transforms between modalities in the superior colliculus and in the parietal area, plays a key role in building such a unified perceptual world. In experiments with a head-neck-eyes robot equipped with a camera and microphones, we study how gain-field modulation in neural networks can serve to transcribe one modality's reference frame into another (e.g., audio signals into eye coordinates). It follows that each modality influences the estimate of a stimulus's position (multimodal enhancement). This can be used, for example, to map sound signals into retinal coordinates for audio-visual speech perception.
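As a rough illustration of the mechanism this abstract describes, the sketch below shows how multiplicative gain modulation lets a simple linear readout recover a head-centred position from retina-centred responses. The Gaussian tuning curves, sigmoid gains, and all dimensions are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Minimal gain-field sketch (illustrative, not the paper's network):
# units tuned to retinal target position are multiplicatively modulated
# ("gain-modulated") by eye position, so a downstream readout can recover
# the head-centred location  head = retina + eye.

rng = np.random.default_rng(0)

retina_prefs = np.linspace(-40, 40, 41)      # preferred retinal positions (deg)
eye_prefs = np.linspace(-20, 20, 21)         # preferred eye positions (deg)

def gain_field_response(retina_pos, eye_pos, sigma=8.0):
    """Population response: Gaussian retinal tuning x sigmoid eye-position gain."""
    tuning = np.exp(-(retina_prefs[:, None] - retina_pos) ** 2 / (2 * sigma ** 2))
    gain = 1.0 / (1.0 + np.exp(-(eye_pos - eye_prefs[None, :]) / 5.0))
    return (tuning * gain).ravel()           # outer product -> gain field

# Train a linear readout of head-centred position from the gain field.
X, y = [], []
for _ in range(2000):
    r = rng.uniform(-30, 30)                 # retinal position of stimulus
    e = rng.uniform(-15, 15)                 # eye (gaze) position
    X.append(gain_field_response(r, e))
    y.append(r + e)                          # head-centred target position
w, *_ = np.linalg.lstsq(np.asarray(X), np.asarray(y), rcond=None)

# The readout now performs the coordinate transform implicitly.
print(w @ gain_field_response(10.0, -5.0))   # should be close to 5.0
```

The key property is that the product of retinal tuning and eye-position gain spans the functions needed for the coordinate transform, so the readout itself can stay linear.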
Augmenting the Reachable Space in the NAO Humanoid Robot
"... Reaching for a target requires estimating the spatial po-sition of the target and to convert such a position in a suitable arm-motor command. In the proposed frame-work, the location of the target is represented implic-itly by the gaze direction of the robot and by the dis-tance of the target. The N ..."
Cited by 1 (0 self)
Abstract: Reaching for a target requires estimating the spatial position of the target and converting that position into a suitable arm-motor command. In the proposed framework, the location of the target is represented implicitly by the gaze direction of the robot and by the distance of the target. The NAO robot is provided with two cameras, one to look ahead and one to look down, which constitute two independent head-centered coordinate systems. These head-centered frames of reference are converted into reaching commands by two neural networks. The network weights are learned by moving the arm while gazing at the hand, using an online learning algorithm that maintains the covariance matrix of the weights. This work adapts a previously proposed model, which worked on a full humanoid robot torso, to the NAO, and is a step toward a more generic framework for the implicit representation of peripersonal space in humanoid robots.
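The abstract does not name the online algorithm; recursive least squares (RLS) is one standard scheme that maintains exactly such a weight covariance matrix, so a hedged sketch along those lines might look like this (the feature and joint counts below are invented, not taken from the paper):

```python
import numpy as np

class RLSRegressor:
    """Online linear map with per-weight uncertainty (recursive least squares)."""
    def __init__(self, n_in, n_out, p0=100.0, lam=0.999):
        self.W = np.zeros((n_out, n_in))   # weights: gaze features -> arm command
        self.P = np.eye(n_in) * p0         # covariance (uncertainty) of weights
        self.lam = lam                     # forgetting factor

    def update(self, x, y):
        """One gaze-the-hand sample: x = gaze/head features, y = arm posture."""
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)       # gain vector
        err = y - self.W @ x               # prediction error on arm command
        self.W += np.outer(err, k)
        self.P = (self.P - np.outer(k, Px)) / self.lam

    def predict(self, x):
        return self.W @ x

# Usage: learn a toy gaze->arm map from random "motor babbling" samples.
rng = np.random.default_rng(1)
true_map = rng.normal(size=(4, 6))         # 6 gaze features -> 4 arm joints
model = RLSRegressor(n_in=6, n_out=4)
for _ in range(500):
    x = rng.normal(size=6)
    model.update(x, true_map @ x + 0.01 * rng.normal(size=4))
print(np.abs(model.predict(np.ones(6)) - true_map @ np.ones(6)).max())  # small
```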
Pose Estimation through Cue Integration: a Neuroscience-Inspired Approach
"... Abstract—Primates possess a superior ability in dealing with objects in their environment. One of the keys for achieving such ability is the continuous concurrent use of multiple cues, especially of visual nature. This work is aimed at improving the skills of robotic systems in their interaction wit ..."
Abstract: Primates possess a superior ability to deal with objects in their environment. One of the keys to this ability is the continuous, concurrent use of multiple cues, especially visual ones. This work aims at improving the skills of robotic systems in their interaction with nearby objects. The basic idea is to improve visual estimation of objects in the world by merging different visual cues of the same stimuli. A computational model of stereoptic and perspective orientation estimators, merged according to different criteria, is implemented on a robotic setup and tested in different conditions. Experimental results suggest that the integration of monocular and binocular cues can make robot sensory systems more reliable and versatile.
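One classical criterion for merging two estimates of the same quantity is maximum-likelihood (inverse-variance weighted) integration. The paper compares several merging criteria; the sketch below shows only this standard rule, with made-up numbers:

```python
def ml_fuse(est_a, var_a, est_b, var_b):
    """Inverse-variance weighted fusion of two Gaussian estimates."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# Example: the stereoptic cue says 32 deg (reliable up close, variance 4),
# the perspective cue says 38 deg (variance 16).
orientation, variance = ml_fuse(32.0, 4.0, 38.0, 16.0)
print(orientation, variance)   # 33.2 deg, variance 3.2 -- lower than either cue alone
```

Note that the fused variance is smaller than that of either cue, which is the sense in which integration makes the estimate more reliable.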
A Neural Model of Binocular Saccade Planning and Vergence Control
- in Adaptive Behaviour, doi: 10.1177/1059712315607363
"... The human visual system uses saccadic and vergence eye movements to foveate visual targets. To mimic this aspect of the biological visual system the PC/BC-DIM neural network is used as an omni-directional basis function network for learning and performing sensory-sensory and sensory-motor transforma ..."
Abstract: The human visual system uses saccadic and vergence eye movements to foveate visual targets. To mimic this aspect of the biological visual system, the PC/BC-DIM neural network is used as an omni-directional basis function network for learning and performing sensory-sensory and sensory-motor transformations without using any hard-coded geometric information. A hierarchical PC/BC-DIM network is used to learn a head-centred representation of visual targets by dividing the whole problem into independent subtasks. The learnt head-centred representation is then used to generate saccade and vergence motor commands. The performance of the proposed system is tested using the iCub humanoid robot simulator.
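A minimal sketch of the basis-function idea, deliberately using a plain Gaussian RBF network in place of PC/BC-DIM: it learns the version/vergence commands for a target from the target's angle in each eye, without hard-coded geometry. All dimensions and the training signal are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Grid of Gaussian basis-function centres over (left-eye, right-eye) angles.
centers = np.stack(np.meshgrid(np.linspace(-20, 20, 15),
                               np.linspace(-20, 20, 15)), -1).reshape(-1, 2)

def rbf_features(theta_left, theta_right, sigma=4.0):
    d2 = ((centers - np.array([theta_left, theta_right])) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * sigma ** 2))

# Training pairs: eye-centred target angles -> (version, vergence) commands.
X, Y = [], []
for _ in range(3000):
    tl, tr = rng.uniform(-15, 15, size=2)
    X.append(rbf_features(tl, tr))
    Y.append([(tl + tr) / 2.0, tl - tr])     # the geometry the network must learn
W, *_ = np.linalg.lstsq(np.asarray(X), np.asarray(Y), rcond=None)

version, vergence = rbf_features(8.0, 2.0) @ W
print(version, vergence)                     # close to 5.0 and 6.0
```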
"... Abstract—Previous research suggests that reaching and walk-ing behaviors may be linked developmentally as reaching changes at the onset of walking. Here we report new evidence on an appar-ent loss of the distinction between the reachable and nonreachable distances as children start walking. The expe ..."
Abstract: Previous research suggests that reaching and walking behaviors may be linked developmentally, as reaching changes at the onset of walking. Here we report new evidence on an apparent loss of the distinction between reachable and nonreachable distances as children start walking. The experiment compared non-walkers, walkers with help, and independent walkers in a reaching task with targets at varying distances, both reachable and nonreachable. Reaching attempts, contact, leaning, and communication behaviors were recorded. Most of the children reached for the unreachable objects on the first trial. Non-walkers, however, reached less on subsequent trials, showing clear adjustment of their reaching decisions after failures. In contrast, walkers consistently attempted reaches to targets at unreachable distances. We suggest that these reaching errors may result from inappropriate integration of reaching and locomotor actions, attention control, and near/far visual space. We propose a reward-mediated model, implemented on a NAO humanoid robot, that replicates the main results of our study, showing an increase in reaching attempts to nonreachable distances after the onset of walking.
Index Terms: infant reaching, perceived reachability, reaching and walking, near and far space integration.
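A hedged sketch of how such a reward-mediated decision model could work: the value of attempting a reach at each distance is updated from reward, and adding a locomotor reward for far targets (a stand-in for walking onset) keeps far-reach attempts high. The constants and reward structure are invented for illustration; the abstract does not specify the NAO implementation.

```python
import numpy as np

rng = np.random.default_rng(3)
distances = np.linspace(0.2, 1.2, 6)      # target distances (m)
reach_limit = 0.6                         # arm length: beyond this, a reach fails

def simulate(walker, trials=200, alpha=0.1):
    value = np.ones_like(distances)       # initial optimism: try everything
    for _ in range(trials):
        i = rng.integers(len(distances))
        reward = 1.0 if distances[i] <= reach_limit else 0.0
        if walker and distances[i] > reach_limit:
            reward += 0.8                 # stepping toward the toy also pays off
        value[i] += alpha * (reward - value[i])
    return value

# Non-walkers learn that far targets don't pay; walkers keep trying them.
print("non-walker far-target value:", simulate(walker=False)[-1].round(2))
print("walker far-target value:   ", simulate(walker=True)[-1].round(2))
```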
When humanoid robots become human-like interaction partners: co-representation of robotic actions