Results 1 - 10 of 13
Voluntary head movement and allocentric perception of space
- Psychological Science
Cited by 13 (3 self)
Abstract. Although visual input is egocentric, some visual perceptions and representations may be allocentric, i.e., independent of the observer’s vantage point or motion. By comparing the visual perception of 3D object motion during voluntary and involuntary motion in human subjects, results of three experiments show that the motor command contributes to the objective perception of space: observers executing voluntary head movements are more likely to apply, consciously and unconsciously, spatial criteria relative to an allocentric frame of reference than while undergoing similar involuntary displacements (which lead to a more egocentric bias). Furthermore, details of the motor command are crucial to spatial vision, since allocentric bias decreases or disappears unless self-motion and motor command match. An important property of our visual system is that its viewpoint constantly moves through space, usually as a result of voluntary motor action on the observer’s part. At least two reference frames are therefore possible
Perceptual Stability During Head Movement in Virtual Reality
- Proceedings of the IEEE Virtual Reality 2002 (VR 2002)
, 2002
Cited by 13 (1 self)
Virtual reality displays introduce spatial distortions that are very hard to correct because of the difficulty of precisely modelling the camera from the nodal point of each eye. How significant are these distortions for spatial perception in virtual reality? In this study we used a helmet-mounted display and a mechanical head tracker to investigate the tolerance to errors between head motions and the resulting visual display. The relationship between the head movement and the associated updating of the visual display was adjusted by subjects until the image was judged as stable relative to the world. Both rotational and translational movements were tested, and the relationship between the movements and the direction of gravity was varied. For the display to be judged as stable, subjects needed the visual world to be moved in the opposite direction of the head movement by an amount greater than the head movement itself, during both rotational and translational head movements, although a large range of movement was tolerated and judged as appearing stable. These results suggest that it is not necessary to model the visual geometry accurately, and suggest circumstances in which tracker drift can be corrected by jumps in the display that will pass unnoticed by the user.
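The display-update rule being probed in this abstract can be sketched in a few lines (a minimal illustration, not the study's implementation; the function name and all numbers are our assumptions): the rendered scene is counter-rotated against the tracked head movement by an adjustable visual gain, and the study reports that subjects judged the display stable at gains somewhat above the geometrically correct value of 1.0.

```python
def displayed_scene_angle(head_angle, visual_gain):
    """Angle at which a world-fixed scene is rendered relative to the head:
    the scene is counter-rotated by visual_gain times the head rotation."""
    return -visual_gain * head_angle

# A gain of 1.0 exactly cancels a 10-degree head turn; a gain of 1.2
# over-compensates, moving the scene opposite the head by an extra 2 degrees
# (the regime the study found was judged as perceptually stable).
print(displayed_scene_angle(10.0, 1.0))  # -10.0
print(displayed_scene_angle(10.0, 1.2))  # -12.0
```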
Vestibular signals of posterior parietal cortex neurons during active and passive head movements in macaque monkeys
- Ann. N.Y. Acad. Sci.
, 2003
Cited by 11 (0 self)
ABSTRACT: The posterior parietal cortex may function as an interface between sensory and motor cortices and thus could be involved in the formation of motor plans as well as abstract representations of space. We have recorded from neurons in the intraparietal sulcus, namely, the ventral and medial intraparietal areas (VIP and MIP, respectively), and analyzed their head-movement-related signals in relation to passive and active movements. To generate active head movements, we made the animals track a moving fixation spot in the horizontal plane under head-free conditions. When under certain circumstances the animals were tracking the fixation spot almost exclusively via head movements, a clear correlation between neuronal firing rate and head movement could be established. Furthermore, a newly employed paradigm, the “replay method,” made possible a direct comparison of neuronal firing behavior under active and passive movement conditions. In this paradigm, the animals were allowed to make spontaneous head movements in darkness. Subsequently, the heads were fixed and the previously recorded active head-movement profile was reproduced by a turntable as passive stimulation. Neuronal responses ranged from total extinction of the vestibular signal during active movement to presence of activity only during active movement. Furthermore, in approximately one-third of the neurons, a change of vestibular on-direction depending on active versus passive movement mode was observed, that is, type I neurons became type II neurons, etc. We suggest that the role of parietal vestibular neurons has to be sought in sensory space representation rather than in reflex behavior and motor control contexts.
Simulating self motion I: cues for the perception of motion
- in Virtual Reality, Springer-Verlag, Issue 6, Num
, 2002
Cited by 6 (1 self)
When people move there are many visual and non-visual cues that can inform them about their movement. Simulating self motion in a virtual-reality environment thus needs to take these non-visual cues into account in addition to the normal high-quality visual display. Here we examine the contribution of visual and non-visual cues to our perception of self-motion. The perceived distance of self motion can be estimated from the visual flow field, physical forces or the act of moving. On its own, passive visual motion is a very effective cue to self motion, and evokes a perception of self motion that is related to the actual motion in a way that varies with acceleration. Passive physical motion turns out to be a particularly potent self motion cue: not only does it evoke an exaggerated sensation of motion, but it also tends to dominate other cues.
Allocentric Perception Of Space And Voluntary Head Movement
- Psychological Science
, 2003
Cited by 2 (0 self)
Although visual input depends on the position of the eye, at least some neural representations of space in mammals are allocentric, i.e., independent of the observer's vantage point or motion. By comparing the visual perception of 3D object motion in actively and passively moving human subjects, I show that the motor command contributes to the perception of space in an observer-independent reference frame: observers executing active head movements are more likely to apply minimal-motion criteria relative to an allocentric frame of reference than are observers undergoing similar passive displacements. However, the bias towards an allocentric reference frame decreases or disappears unless self-motion matches the motor command. A fundamental feature of our visual system is that the viewpoint can move through space, either as a result of voluntary motor action on the observer's part, or passively. While eye rotations result in almost uniform shifts of the entire 2D retinal ima...
by
, 2011
To my grandmothers, Ana and Lydia. Acknowledgements: To me, graduate school has been a time of growth, not only professional and intellectual, but also in terms of my expanding and ever supportive family. This family is made up of a number of separate but essential units. My laboratory family: my advisor W. Michael King and technician and friend Jonie Dye, who were always there to help me run an experiment, listen to and improve a crazy idea, or support me through a particularly hard patch. My neuroscience family, especially Joonkoo Park’s willingness to teach me about statistics and great cooking, Youngbin Kwak’s encouragement and support, Christy Itoga’s lessons about the balance between hard work and a full life and, of course, Elizabeth Gibbs’ willingness to listen, advise, discuss or just be there in any capacity I might have needed during the many years and life events that went by in graduate school. My family of friends, which consists of many beloved members and
OCULAR COUNTER ROTATION DURING GAZE SHIFTS
Abducens motor neurons (ABD) are known to receive oculomotor signals via the excitatory and inhibitory burst neurons (BN) as well as head-velocity-related signals via the vestibular nucleus (VN). If the oculomotor input to the ABD were the same, would there be a difference in the properties of the observed eye movement between head-restrained (HR) and head-unrestrained (HU) gaze shifts? To answer this question, the activity of 22 BN was recorded during HR and HU visuomotor tasks performed by nonhuman primates. A template-matching algorithm was used to find pairs of trials (HR, HU) with matching BN activity, which guaranteed that the oculomotor input to the ABD was the same. Matched trials were found to have similar gaze amplitudes, but the peak eye velocity of HU movements was lower than that of HR movements. A time-varying gain of the head-velocity input was calculated as the ratio of the difference between the eye velocities over the head velocity. This yielded a gain that was high at the onset of the movement, decreased throughout the gaze shift, and plateaued at one after gaze-shift offset. Thus the head movement strongly inhibited the eye movement at HU gaze onset, and this inhibition decreased throughout the gaze shift until the gain reached and remained at one at the end of the gaze shift. Finally, a computer simulation was used to check whether the difference in the eye velocities could be explained by VN inputs. The simulation modeled the difference in ABD firing rate (ΔABD) between HR and HU matched trials as the weighted sum of the differences in the firing rates of the VN inputs (ΔPVPc, ΔPVPi and ΔEHc). The BN input was not used in this model because for matched trials it was identical, so its difference was zero. The simulation showed that the weight of the EHc cell input was the highest, accounting for most of the difference in the ABD. This led to the conclusion that EHc cells play a major role in reducing eye velocity during head-unrestrained gaze shifts.
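The time-varying gain defined in this abstract is a simple pointwise computation, sketched below under stated assumptions (function name and all velocity values are hypothetical, chosen only to illustrate the definition): the gain at each time step is the difference between the matched HR and HU eye velocities divided by the head velocity.

```python
def head_input_gain(eye_vel_hr, eye_vel_hu, head_vel):
    """Gain g(t) = (HR eye velocity - HU eye velocity) / head velocity,
    computed sample by sample over matched HR/HU trials."""
    return [
        (hr - hu) / h
        for hr, hu, h in zip(eye_vel_hr, eye_vel_hu, head_vel)
    ]

# Synthetic matched trials (illustrative numbers): HU eye velocity is
# constructed from a gain that decays from 2.0 toward 1.0 over the gaze
# shift, mimicking the reported high-onset, plateau-at-one profile.
head_vel   = [10.0, 20.0, 40.0, 30.0]
eye_vel_hr = [100.0, 160.0, 180.0, 120.0]
true_gain  = [2.0, 1.5, 1.2, 1.0]
eye_vel_hu = [hr - g * h for hr, g, h in zip(eye_vel_hr, true_gain, head_vel)]

print(head_input_gain(eye_vel_hr, eye_vel_hu, head_vel))
# recovers [2.0, 1.5, 1.2, 1.0]
```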
Voluntary Head Movement and Allocentric Perception of Space
- Psychological Science (Research Article)
Abstract—Although visual input is egocentric, at least some visual perceptions and representations are allocentric, that is, independent of the observer’s vantage point or motion. Three experiments investigated the visual perception of three-dimensional object motion during voluntary and involuntary motion in human subjects. The results show that the motor command contributes to the objective perception of space: Observers are more likely to apply, consciously and unconsciously, spatial criteria relative to an allocentric frame of reference when they are executing voluntary head movements than while they are undergoing similar involuntary displacements (which lead to a more egocentric bias). Furthermore, details of the motor command are crucial to spatial vision, as allocentric bias decreases or disappears when self-motion and motor command do not match. An important property of the human visual system is that its viewpoint constantly moves through space, usually as a result of voluntary
- Journal of Vestibular Research 13 (2003) 265-271, IOS Press
, 2003
We measured how much the visual world could be moved during various head rotations and translations and still be perceived as visually stable. Using this as a monitor of how well subjects know about their own movement, we compared performance in different directions relative to gravity. For head rotations, we compared the range of visual motion judged compatible with a stable environment while rotating around an axis orthogonal to gravity (where rotation created a rotating gravity vector across the otolith macula), with judgements made when rotation was around an earth-vertical axis. For translations, we compared the corresponding range of visual motion when translation was parallel to gravity (when imposed accelerations added to or subtracted from gravity), with translations orthogonal to gravity. Ten subjects wore a head-mounted display and made active head movements at 0.5 Hz that were monitored by a low-latency mechanical tracker. Subjects adjusted the ratio between head and image motion until the display appeared perceptually stable. For neither rotation nor translation were there any differences in judgements of perceptual stability that depended on the direction of the movement with respect to the direction of gravity.
The Vestibular System Implements a Linear–Nonlinear Transformation In Order to Encode Self-Motion
, 2012
Although it is well established that the neural code representing the world changes at each stage of a sensory pathway, the transformations that mediate these changes are not well understood. Here we show that self-motion (i.e. vestibular) sensory information encoded by VIIIth nerve afferents is integrated nonlinearly by post-synaptic central vestibular neurons. This response nonlinearity was characterized by a strong (~50%) attenuation in neuronal sensitivity to low frequency stimuli when presented concurrently with high frequency stimuli. Using computational methods, we further demonstrate that a static boosting nonlinearity in the input-output relationship of central vestibular neurons accounts for this unexpected result. Specifically, when low and high frequency stimuli are presented concurrently, this boosting nonlinearity causes an intensity-dependent bias in the output firing rate, thereby attenuating neuronal sensitivities. We suggest that nonlinear integration of afferent input extends the coding range of central vestibular neurons and enables them to better extract the high frequency features of self-motion when embedded in low frequency motion during natural movements. These findings challenge the traditional notion that the vestibular system uses a linear rate code to transmit information and have important consequences for understanding how the representation of sensory information changes across sensory pathways.
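The core observation, that a static nonlinearity makes measured low-frequency sensitivity depend on concurrent high-frequency input, can be illustrated with a toy simulation (this is not the authors' model; the piecewise-linear "boosting" nonlinearity, the threshold, and all stimulus parameters are our assumptions): the gain at the low frequency, estimated from the Fourier component of the output, differs when a high-frequency stimulus is added.

```python
import math

def boost(x, theta=60.0, k=1.0):
    # Static boosting nonlinearity (assumed form): linear below the
    # threshold theta, steeper (slope 1 + k) above it.
    return x if x < theta else x + k * (x - theta)

def low_freq_gain(include_high):
    """Response gain at the low frequency: Fourier amplitude of the
    output at f_low, normalised by the low-frequency stimulus amplitude."""
    f_low, f_high = 0.5, 15.0          # Hz (illustrative)
    n, dt = 4000, 0.001                # 4 s, integer cycles of both
    r0, a_low, a_high = 50.0, 20.0, 30.0
    acc = 0.0
    for i in range(n):
        t = i * dt
        x = r0 + a_low * math.sin(2 * math.pi * f_low * t)
        if include_high:
            x += a_high * math.sin(2 * math.pi * f_high * t)
        acc += boost(x) * math.sin(2 * math.pi * f_low * t)
    return (2.0 * acc / n) / a_low

g_alone = low_freq_gain(False)       # low-frequency stimulus alone
g_concurrent = low_freq_gain(True)   # with concurrent high-frequency input
print(round(g_alone, 3), round(g_concurrent, 3))
```

Because the nonlinearity is static but not linear, the operating range explored by the concurrent high-frequency stimulus changes the effective slope seen by the low-frequency component, so the two gains differ; in the actual neurons, the firing-rate floor and the measured bias produced the ~50% attenuation reported above.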