Results 1 - 10 of 277
Noise characteristics and prior expectations in human visual speed perception - Nature Neuroscience, 2006
"... Human visual speed perception is qualitatively consistent with a Bayesian observer that optimally combines noisy measurements with a prior preference for lower speeds. Quantitative validation of this model, however, is difficult because the precise noise characteristics and prior expectations are u ..."
Abstract
-
Cited by 111 (11 self)
Human visual speed perception is qualitatively consistent with a Bayesian observer that optimally combines noisy measurements with a prior preference for lower speeds. Quantitative validation of this model, however, is difficult because the precise noise characteristics and prior expectations are unknown. Here, we present an augmented observer model that accounts for the variability of subjective responses in a speed discrimination task. This allowed us to infer the shape of the prior probability as well as the internal noise characteristics directly from psychophysical data. For all subjects, we found that the fitted model provides an accurate description of the data across a wide range of stimulus parameters. The inferred prior distribution shows significantly heavier tails than a Gaussian, and the amplitude of the internal noise is approximately proportional to stimulus speed and depends inversely on stimulus contrast. The framework is general and should prove applicable to other experiments and perceptual modalities.
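The core computation this abstract describes can be illustrated with a minimal sketch: a noisy speed measurement (noisier in log-speed units at low contrast) is combined with a prior favoring slow speeds, and the posterior mean is read out as the percept. The power-law prior, the speed grid, and all parameter values below are illustrative assumptions, not the noise or prior shapes the paper infers from data.

import numpy as np

def bayes_speed_estimate(measured_speed, sigma_log, prior_exponent=4.0):
    """Posterior-mean speed estimate for one noisy measurement.

    measured_speed : measured speed in deg/s
    sigma_log      : measurement noise (std) in log-speed units; larger at low contrast
    prior_exponent : exponent of an illustrative slow-speed prior, p(v) ~ v**-prior_exponent
    """
    speeds = np.linspace(0.01, 60.0, 6000)                  # candidate speeds (deg/s)
    likelihood = np.exp(-0.5 * ((np.log(measured_speed) - np.log(speeds)) / sigma_log) ** 2)
    prior = speeds ** (-prior_exponent)                     # heavier-tailed than a Gaussian
    posterior = likelihood * prior
    posterior /= posterior.sum()                            # discrete normalization on the grid
    return float(np.sum(speeds * posterior))

# Larger measurement noise (e.g. lower contrast) pulls the estimate toward slower speeds.
print(bayes_speed_estimate(8.0, sigma_log=0.1))
print(bayes_speed_estimate(8.0, sigma_log=0.5))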
Causal inference in multisensory perception - PLoS ONE, 2007
"... Perceptual events derive their significance to an animal from their meaning about the world, that is from the information they carry about their causes. The brain should thus be able to efficiently infer the causes underlying our sensory events. Here we use multisensory cue combination to study caus ..."
Abstract
-
Cited by 71 (9 self)
Perceptual events derive their significance to an animal from their meaning about the world, that is, from the information they carry about their causes. The brain should thus be able to efficiently infer the causes underlying our sensory events. Here we use multisensory cue combination to study causal inference in perception. We formulate an ideal-observer model that infers whether two sensory cues originate from the same location and that also estimates their location(s). This model accurately predicts the nonlinear integration of cues by human subjects in two auditory-visual localization tasks. The results show that indeed humans can efficiently infer the causal structure as well as the location of causes. By combining insights from the study of causal inference with the ideal-observer approach to sensory cue combination, we show that the capacity to infer causal structure is not limited to conscious, high-level cognition; it is also performed continually and effortlessly in perception.
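A minimal sketch of the kind of ideal observer described here: compare the likelihood that the visual and auditory measurements came from one common source against the likelihood of two independent sources, then model-average the location estimate. The Gaussian spatial prior, the noise levels, and the prior probability of a common cause used below are illustrative assumptions, not the paper's fitted values.

import numpy as np

def causal_inference_estimate(x_v, x_a, sigma_v=1.0, sigma_a=4.0,
                              sigma_p=10.0, p_common=0.5):
    """Return (posterior probability of a common cause, visual location estimate)."""
    var_v, var_a, var_p = sigma_v**2, sigma_a**2, sigma_p**2

    # Likelihood of both measurements given one common source,
    # with the source integrated out against a zero-mean Gaussian spatial prior.
    denom_c1 = var_v*var_a + var_v*var_p + var_a*var_p
    like_c1 = np.exp(-0.5 * ((x_v - x_a)**2*var_p + x_v**2*var_a + x_a**2*var_v)
                     / denom_c1) / (2*np.pi*np.sqrt(denom_c1))

    # Likelihood given two independent sources.
    like_c2 = np.exp(-0.5 * (x_v**2/(var_v + var_p) + x_a**2/(var_a + var_p))) \
              / (2*np.pi*np.sqrt((var_v + var_p)*(var_a + var_p)))

    post_common = like_c1*p_common / (like_c1*p_common + like_c2*(1 - p_common))

    # Reliability-weighted location estimates under each causal structure.
    s_common = (x_v/var_v + x_a/var_a) / (1/var_v + 1/var_a + 1/var_p)
    s_separate = (x_v/var_v) / (1/var_v + 1/var_p)

    # Model-averaged estimate of the visual source location.
    return post_common, post_common*s_common + (1 - post_common)*s_separate

print(causal_inference_estimate(2.0, 3.0))    # small conflict: cues mostly integrated
print(causal_inference_estimate(2.0, 20.0))   # large conflict: cues mostly segregated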
Slant from texture and disparity cues: Optimal cue combination - Journal of Vision
"... How does the visual system combine information from different depth cues to estimate three-dimensional scene parameters? We tested a maximum-likelihood estimation (MLE) model of cue combination for perspective (texture) and binocular disparity cues to surface slant. By factoring the reliability of e ..."
Abstract
-
Cited by 60 (5 self)
How does the visual system combine information from different depth cues to estimate three-dimensional scene parameters? We tested a maximum-likelihood estimation (MLE) model of cue combination for perspective (texture) and binocular disparity cues to surface slant. By factoring the reliability of each cue into the combination process, MLE provides more reliable estimates of slant than would be available from either cue alone. We measured the reliability of each cue in isolation across a range of slants and distances using a slant-discrimination task. The reliability of the texture cue increases as |slant| increases and does not change with distance. The reliability of the disparity cue decreases as distance increases and varies with slant in a way that also depends on viewing distance. The trends in the single-cue data can be understood in terms of the information available in the retinal images and issues related to solving the binocular correspondence problem. To test the MLE model, we measured perceived slant of two-cue stimuli when disparity and texture were in conflict and the reliability of slant estimation when both cues were available. Results from the two-cue study indicate, consistent with the MLE model, that observers weight each cue according to its relative reliability: Disparity weight decreased as distance and |slant| increased. We also observed the expected improvement in slant estimation when both cues were available. With few discrepancies, our data indicate that observers combine cues in a statistically optimal fashion and thereby reduce the variance of slant estimates below that which could be achieved from either cue alone. These results are consistent with other studies that quantitatively examined the MLE model of cue combination.
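The MLE rule tested in this study reduces to inverse-variance weighting, sketched below together with the predicted reduction in variance when both cues are available. The cue values and noise levels are illustrative only, not measurements from the paper.

import numpy as np

def mle_combine(slant_texture, sigma_texture, slant_disparity, sigma_disparity):
    """Reliability-weighted (inverse-variance) combination of two slant cues."""
    w_t, w_d = 1/sigma_texture**2, 1/sigma_disparity**2
    slant_hat = (w_t*slant_texture + w_d*slant_disparity) / (w_t + w_d)
    sigma_hat = np.sqrt(1.0 / (w_t + w_d))   # never larger than either single-cue sigma
    return slant_hat, sigma_hat

# At a near viewing distance the disparity cue is the more reliable one, so it gets more weight.
print(mle_combine(30.0, 6.0, 34.0, 2.0))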
Medium-Term Review, 2005
"... Updated information and services can be found at: ..."
(Show Context)
Combining priors and noisy visual cues in a rapid pointing task - J. Neurosci., 2006
"... Statistical decision theory suggests that choosing an ideal action requires taking several factors into account: (1) prior knowledge of the probability of various world states, (2) sensory information concerning the world state, (3) the probability of outcomes given a choice of action, and (4) the l ..."
Abstract
-
Cited by 32 (1 self)
Statistical decision theory suggests that choosing an ideal action requires taking several factors into account: (1) prior knowledge of the probability of various world states, (2) sensory information concerning the world state, (3) the probability of outcomes given a choice of action, and (4) the loss or gain associated with those outcomes. In previous work, we found that, in many circumstances, humans act like ideal decision makers in planning a reaching movement. They select a movement aim point that maximizes expected gain, thus taking into account outcome uncertainty (motor noise) and the consequences of their actions. Here, we ask whether humans can optimally combine prior knowledge and uncertain sensory information in planning a reach. Subjects rapidly pointed at unseen targets, indicated with dots drawn from a distribution centered on the invisible target location. Target location had a prior distribution, the form of which was known to the subject. We varied the number of dots and hence target spatial uncertainty. An analysis of the sources of uncertainty impacting performance in this task indicated that the optimal strategy was to aim between the mean of the prior (the screen center) and the mean stimulus location (centroid of the dot cloud). With increased target location uncertainty, the aim point should have moved closer to the prior. Subjects used near-optimal strategies, combining stimulus uncertainty and prior information appropriately. Observer behavior was well modeled as having three additional sources of inefficiency originating in the motor system, calculation of centroid location, and calculation of aim points.
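As a rough sketch of the optimal strategy described above, the aim point is an inverse-variance-weighted average of the prior mean and the dot-cloud centroid, with the centroid's reliability growing with the number of dots. The prior width, dot scatter, and function below are assumptions for illustration, not the experiment's parameters or the authors' model.

import numpy as np

def optimal_aim(dots, prior_mean=0.0, prior_sigma=2.0, dot_sigma=1.5):
    """Reliability-weighted aim point from the dot-cloud centroid and the prior mean.

    dots : 1-D array of dot positions drawn around the unseen target (one spatial axis)
    """
    dots = np.asarray(dots, dtype=float)
    centroid_var = dot_sigma**2 / len(dots)        # the centroid gets more reliable with more dots
    w_likelihood, w_prior = 1/centroid_var, 1/prior_sigma**2
    return (w_likelihood*dots.mean() + w_prior*prior_mean) / (w_likelihood + w_prior)

rng = np.random.default_rng(0)
print(optimal_aim(rng.normal(3.0, 1.5, size=2)))    # few dots: aim pulled toward the prior mean
print(optimal_aim(rng.normal(3.0, 1.5, size=20)))   # many dots: aim close to the centroid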
Robust cue integration: a Bayesian model and evidence from cue-conflict studies with stereoscopic and figure cues to slant - Journal of Vision, 2007
"... Most research on depth cue integration has focused on stimulus regimes in which stimuli contain the small cue conflicts that one might expect to normally arise from sensory noise. In these regimes, linear models for cue integration provide a good approximation to system performance. This article fo ..."
Abstract
-
Cited by 31 (2 self)
Most research on depth cue integration has focused on stimulus regimes in which stimuli contain the small cue conflicts that one might expect to normally arise from sensory noise. In these regimes, linear models for cue integration provide a good approximation to system performance. This article focuses on situations in which large cue conflicts can naturally occur in stimuli. We describe a Bayesian model for nonlinear cue integration that makes rational inferences about scenes across the entire range of possible cue conflicts. The model derives from the simple intuition that multiple properties of scenes or causal factors give rise to the image information associated with most cues. To make perceptual inferences about one property of a scene, an ideal observer must necessarily take into account the possible contribution of these other factors to the information provided by a cue. In the context of classical depth cues, large cue conflicts most commonly arise when one or another cue is generated by an object or scene that violates the strongest form of constraint that makes the cue informative. For example, when binocularly viewing a slanted trapezoid, the slant interpretation of the figure derived by assuming that the figure is rectangular may conflict greatly with the slant suggested by stereoscopic disparities. An optimal Bayesian estimator incorporates the possibility that different constraints might apply to objects in the world and robustly integrates cues with large conflicts by effectively switching between different internal models of the prior constraints underlying one or both cues. We performed two experiments to test the predictions of the model when applied to estimating surface slant from binocular disparities and the compression cue (the aspect ratio of figures in an image). The apparent weight that subjects gave to the compression cue decreased smoothly as a function of the conflict between the cues but did not shrink to zero; that is, subjects did not fully veto the compression cue at large cue conflicts. A Bayesian model that assumes a mixed prior distribution of figure shapes in the world, with a large proportion being very regular and a smaller proportion having random shapes, provides a good quantitative fit for subjects' performance. The best fitting model parameters are consistent with the sensory noise to be expected in measurements of figure shape, further supporting the Bayesian model as an account of robust cue integration.
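A simplified sketch of the robust-integration idea (not the paper's full model): the observer evaluates whether the agreement between the cues is better explained by a regular figure or a randomly shaped one, and the compression cue's effective weight falls, without dropping abruptly to zero, as the conflict grows. The mixture proportion and noise values are illustrative assumptions.

import numpy as np

def robust_slant_estimate(slant_disp, sigma_disp, slant_comp, sigma_comp,
                          p_regular=0.9, sigma_random=30.0):
    """Mixture-prior (regular vs. random figure) estimate of slant from two cues."""
    conflict_sq = (slant_disp - slant_comp)**2

    # How well the observed cue agreement is explained by each figure model.
    like_regular = np.exp(-0.5*conflict_sq/(sigma_disp**2 + sigma_comp**2)) \
                   / np.sqrt(sigma_disp**2 + sigma_comp**2)
    like_random = np.exp(-0.5*conflict_sq/(sigma_disp**2 + sigma_random**2)) \
                  / np.sqrt(sigma_disp**2 + sigma_random**2)
    post_regular = like_regular*p_regular / (like_regular*p_regular
                                             + like_random*(1 - p_regular))

    # Linear cue combination if the figure is regular; disparity alone otherwise.
    w_d, w_c = 1/sigma_disp**2, 1/sigma_comp**2
    combined = (w_d*slant_disp + w_c*slant_comp) / (w_d + w_c)
    return post_regular*combined + (1 - post_regular)*slant_disp

print(robust_slant_estimate(40.0, 4.0, 35.0, 4.0))   # small conflict: close to the weighted average
print(robust_slant_estimate(40.0, 4.0, 5.0, 4.0))    # large conflict: compression cue mostly discounted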
The combination of vision and touch depends on spatial proximity - Journal of Vision, 2005
"... The nervous system often combines visual and haptic information about object properties such that the combined estimate is more precise than with vision or haptics alone. We examined how the system determines when to combine the signals. Presumably, signals should not be combined when they come from ..."
Abstract
-
Cited by 27 (1 self)
The nervous system often combines visual and haptic information about object properties such that the combined estimate is more precise than with vision or haptics alone. We examined how the system determines when to combine the signals. Presumably, signals should not be combined when they come from different objects. The likelihood that signals come from different objects is highly correlated with the spatial separation between the signals, so we asked how the spatial separation between visual and haptic signals affects their combination. To do this, we first created conditions for each observer in which the effect of combination (the increase in discrimination precision with two modalities relative to performance with one modality) should be maximal. Then under these conditions, we presented visual and haptic stimuli separated by different spatial distances and compared human performance with predictions of a model that combined signals optimally. We found that discrimination precision was essentially optimal when the signals came from the same location, and that discrimination precision was poorer when the signals came from different locations. Thus, the mechanism of visual-haptic combination is specialized for signals that coincide in space.
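The optimal benchmark used in this kind of comparison can be stated in one line: the bimodal discrimination threshold predicted from the two single-modality thresholds. The threshold values below are illustrative, not data from the study.

import numpy as np

def predicted_bimodal_threshold(jnd_vision, jnd_haptic):
    """Discrimination threshold predicted by optimal combination of the two signals."""
    return np.sqrt((jnd_vision**2 * jnd_haptic**2) / (jnd_vision**2 + jnd_haptic**2))

# The optimal prediction is always below the better single-modality threshold.
print(predicted_bimodal_threshold(4.0, 6.0))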
Bayesian integration of spatial information - Psychological Bulletin, 2007
"... Spatial judgments and actions are often based on multiple cues. The authors review a multitude of phenomena on the integration of spatial cues in diverse species to consider how nearly optimally animals combine the cues. Under the banner of Bayesian perception, cues are sometimes combined and weight ..."
Abstract
-
Cited by 27 (3 self)
Spatial judgments and actions are often based on multiple cues. The authors review a multitude of phenomena on the integration of spatial cues in diverse species to consider how nearly optimally animals combine the cues. Under the banner of Bayesian perception, cues are sometimes combined and weighted in a near optimal fashion. In other instances when cues are combined, how optimal the integration is might be unclear. Only 1 cue may be relied on, or cues may seem to compete with one another. The authors attempt to bring some order to the diversity by taking into account the subjective discrepancy in the dictates of multiple cues. When cues are too discrepant, it may be best to rely on 1 cue source. When cues are not too discrepant, it may be advantageous to combine cues. Such a dual principle provides an extended Bayesian framework for understanding the functional reasons for the integration of spatial cues.
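A toy sketch of the dual principle described in this review: combine cues by reliability when their discrepancy is small relative to the assumed noise, and fall back on the single more reliable cue when it is not. The criterion and noise values below are arbitrary illustrations, not quantities from the review.

import numpy as np

def dual_principle_estimate(x1, sigma1, x2, sigma2, criterion=3.0):
    """Combine two spatial cues unless their discrepancy is too large for the assumed noise."""
    discrepancy = abs(x1 - x2) / np.sqrt(sigma1**2 + sigma2**2)
    if discrepancy > criterion:                  # too discrepant: rely on the more reliable cue
        return x1 if sigma1 <= sigma2 else x2
    w1, w2 = 1/sigma1**2, 1/sigma2**2            # otherwise combine by relative reliability
    return (w1*x1 + w2*x2) / (w1 + w2)

print(dual_principle_estimate(10.0, 2.0, 12.0, 3.0))   # small discrepancy: weighted combination
print(dual_principle_estimate(10.0, 2.0, 40.0, 3.0))   # large discrepancy: single-cue fallback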
Humans Trade Off Viewing Time and Movement Duration to Improve Visuomotor Accuracy in a Fast Reaching Task, 2007
"... Previous research has shown that the brain uses statistical knowledge of both sensory and motor accuracy to optimize behavioral performance. Here, we present the results of a novel experiment in which participants could control both of these quantities at once. Specifically, maximum performance dema ..."
Abstract
-
Cited by 17 (1 self)
Previous research has shown that the brain uses statistical knowledge of both sensory and motor accuracy to optimize behavioral performance. Here, we present the results of a novel experiment in which participants could control both of these quantities at once. Specifically, maximum performance demanded the simultaneous choices of viewing and movement durations, which directly impacted visual and motor accuracy. Participants reached to a target indicated imprecisely by a two-dimensional distribution of dots within a 1200 ms time limit. By choosing when to reach, participants selected the quality of visual information regarding target location as well as the remaining time available to execute the reach. New dots, and consequently more visual information, appeared until the reach was initiated; after reach initiation, no new dots appeared. However, speed-accuracy trade-offs in motor control make early reaches (much remaining time) precise and late reaches (little remaining time) imprecise. Based on each participant’s visual- and motor-only target-hitting performances, we computed an “ideal reacher” that selects reach initiation times that minimize predicted reach endpoint deviations from the true target location. The participants’ timing choices were qualitatively consistent with ideal predictions: choices varied with stimulus changes (but less than the predicted magnitude) and resulted in near-optimal performance despite the absence of direct feedback defining ideal performance. Our results suggest that visual estimates and their respective accuracies are passed to motor planning systems, which in turn predict the precision of potential reaches and control viewing and movement timing to favorably trade off visual and motor accuracy.
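The "ideal reacher" idea can be sketched as a one-dimensional optimization: pick the reach initiation time that minimizes predicted endpoint variance, where visual uncertainty falls as dots accumulate and motor uncertainty rises as the remaining movement time shrinks. The specific variance models and constants below are assumptions for illustration, not the components fitted in the paper.

import numpy as np

def ideal_initiation_time(total_ms=1200, dots_per_ms=0.01, dot_sigma=20.0,
                          motor_scale=8000.0, margin_ms=100):
    """Reach initiation time that minimizes predicted endpoint variance."""
    init_times = np.arange(margin_ms, total_ms - margin_ms)   # candidate initiation times (ms)
    move_times = total_ms - init_times
    visual_var = dot_sigma**2 / (dots_per_ms*init_times)      # centroid variance shrinks with more dots
    motor_var = (motor_scale/move_times)**2                   # endpoint variance grows as movement time shrinks
    return int(init_times[np.argmin(visual_var + motor_var)])

print(ideal_initiation_time())   # optimal viewing time in ms under these toy assumptions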