Results 1 - 10 of 13
Arousal-Biased Competition in Perception and Memory - Perspectives on Psychological Science, 2011
"... Much research indicates that experiencing emotional arousal (such as when hearing a loud gunshot) modifies perception, memory encoding and decision processes. But effects of arousal range from enhancing to impairing across different paradigms and stimuli and canonical models of emotional arousal can ..."
Abstract - Cited by 26 (1 self)
Much research indicates that experiencing emotional arousal (such as when hearing a loud gunshot) modifies perception, memory encoding, and decision processes. But the effects of arousal range from enhancing to impairing across different paradigms and stimuli, and canonical models of emotional arousal cannot explain both the enhancing and impairing effects. In this talk, I will make the case that arousal biases neural competition to enhance high-priority information and suppress low-priority information.
Organizing Multimodal Perception for Autonomous Learning and Interactive Systems, 2008
"... A stable perception of the environment is a crucial prerequisite for researching the learning of semantics from human-robot interaction and also for the generation of behavior relying on the robots perception. In this paper, we propose several contributions to this research field. To organize visu ..."
Abstract - Cited by 4 (2 self)
A stable perception of the environment is a crucial prerequisite for researching the learning of semantics from human-robot interaction and also for the generation of behavior relying on the robot's perception. In this paper, we propose several contributions to this research field. To organize visual perception, the concept of proto-objects is used for the representation of scene elements. These proto-objects are created by several different sources and can be combined to provide the means for interactive autonomous behavior generation. They are also processed by several classifiers, extracting different visual properties. The robot learns to associate speech labels with these properties by using the outcome of the classifiers for online training of a speech recognition system. To ease the combination of visual and speech classifier outputs, a necessity for the online training and a basis for future learning of semantics, a common representation for all classifier results is used. This uniform handling of multimodal information provides the necessary flexibility for further extension. We will show the feasibility of the proposed approach by interactive experiments with the humanoid robot ASIMO.
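(Illustrative sketch, not from the paper: the abstract describes a common representation for the outputs of several visual property classifiers attached to each proto-object, but gives no implementation details. The Python example below, with made-up property names and confidence values, shows one way such a uniform structure could look.)

```python
# Illustrative sketch only: the paper's actual data structures are not specified.
# A "proto-object" here is a scene element carrying the normalized confidence
# vectors produced by several visual property classifiers, all stored in the
# same format so that visual and speech hypotheses can be combined uniformly.
from dataclasses import dataclass, field

@dataclass
class ProtoObject:
    object_id: int
    # One confidence vector per property classifier, e.g.
    # {"color": {"red": 0.7, "green": 0.2, "blue": 0.1}, "shape": {...}}
    properties: dict = field(default_factory=dict)

    def add_classifier_result(self, prop, confidences):
        """Store a classifier output after normalizing it to sum to 1."""
        total = sum(confidences.values()) or 1.0
        self.properties[prop] = {k: v / total for k, v in confidences.items()}

    def best_label(self, prop):
        """Return the most confident label for a property, if available."""
        if prop not in self.properties:
            return None
        return max(self.properties[prop], key=self.properties[prop].get)

# Example: a proto-object seen by hypothetical color and shape classifiers.
obj = ProtoObject(object_id=1)
obj.add_classifier_result("color", {"red": 3.5, "green": 0.4, "blue": 0.1})
obj.add_classifier_result("shape", {"round": 0.8, "elongated": 0.2})
print(obj.best_label("color"))   # -> "red"
```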
Top-Down Feedback in an HMAX-Like Cortical Model of Object Perception Based on Hierarchical Bayesian Networks and Belief Propagation - PLoS ONE, 2012
"... Hierarchical generative models, such as Bayesian networks, and belief propagation have been shown to provide a theoretical framework that can account for perceptual processes, including feedforward recognition and feedback modulation. The framework explains both psychophysical and physiological expe ..."
Abstract - Cited by 3 (0 self)
Hierarchical generative models, such as Bayesian networks, and belief propagation have been shown to provide a theoretical framework that can account for perceptual processes, including feedforward recognition and feedback modulation. The framework explains both psychophysical and physiological experimental data and maps well onto the hierarchical distributed cortical anatomy. However, the complexity required to model cortical processes makes inference, even using approximate methods, very computationally expensive. Thus, existing object perception models based on this approach are typically limited to tree-structured networks with no loops, use small toy examples or fail to account for certain perceptual aspects such as invariance to transformations or feedback reconstruction. In this study we develop a Bayesian network with an architecture similar to that of HMAX, a biologically-inspired hierarchical model of object recognition, and use loopy belief propagation to approximate the model operations (selectivity and invariance). Crucially, the resulting Bayesian network extends the functionality of HMAX by including top-down recursive feedback. Thus, the proposed model not only achieves successful feedforward recognition invariant to noise, occlusions, and changes in position and size, but is also able to reproduce modulatory effects such as illusory contour completion and attention. Our novel and rigorous methodology covers key aspects such as learning using a layerwise greedy algorithm, combining feedback information from multiple parents and reducing the number of operations required. Overall, this work extends an
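(For readers unfamiliar with the machinery the abstract refers to, the sketch below runs plain sum-product loopy belief propagation on a tiny pairwise network containing a cycle. The variables, potentials, and iteration count are invented for illustration and bear no relation to the paper's HMAX-like architecture.)

```python
# Minimal sum-product loopy belief propagation on a small pairwise MRF
# (a triangle of binary variables), for illustration only; the paper builds
# a far larger HMAX-like Bayesian network with learned potentials.
import numpy as np

nodes = [0, 1, 2]
edges = [(0, 1), (1, 2), (0, 2)]                 # the cycle makes BP "loopy"
unary = {0: np.array([0.7, 0.3]),                # local evidence per node
         1: np.array([0.4, 0.6]),
         2: np.array([0.5, 0.5])}
pairwise = np.array([[1.0, 0.5],                 # same pairwise potential on
                     [0.5, 1.0]])                # every edge (favors agreement)

# Messages m[(i, j)] from node i to node j, initialized uniformly.
m = {(i, j): np.ones(2) for i, j in edges + [(j, i) for (i, j) in edges]}

for _ in range(50):                              # fixed number of sweeps
    new_m = {}
    for (i, j) in m:
        # Product of unary evidence at i and all incoming messages except from j.
        incoming = unary[i].copy()
        for (k, l) in m:
            if l == i and k != j:
                incoming *= m[(k, i)]
        msg = pairwise.T @ incoming              # marginalize over node i's states
        new_m[(i, j)] = msg / msg.sum()          # normalize for numerical stability
    m = new_m

# Approximate marginals: unary evidence times all incoming messages.
for n in nodes:
    belief = unary[n].copy()
    for (i, j) in m:
        if j == n:
            belief *= m[(i, n)]
    print(n, belief / belief.sum())
```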
Segregation of complex acoustic scenes based on temporal coherence
"... Abstract In contrast to the complex acoustic environments we encounter everyday, most studies of auditory segregation have used relatively simple signals. Here, we synthesized a new stimulus to examine the detection of coherent patterns (‘figures’) from overlapping ‘background ’ signals. In a series ..."
Abstract - Cited by 1 (0 self)
In contrast to the complex acoustic environments we encounter every day, most studies of auditory segregation have used relatively simple signals. Here, we synthesized a new stimulus to examine the detection of coherent patterns (‘figures’) from overlapping ‘background’ signals. In a series of experiments, we demonstrate that human listeners are remarkably sensitive to the emergence of such figures and can tolerate a variety of spectral and temporal perturbations. This robust behavior is consistent with the existence of automatic auditory segregation mechanisms that are highly sensitive to correlations across frequency and time. The observed behavior cannot be explained purely on the basis of adaptation-based models used to explain the segregation of deterministic narrowband signals. We show that the present results are consistent with the predictions of a model of auditory perceptual organization based on temporal coherence. Our data thus support a role for temporal coherence as an organizational principle underlying auditory segregation. DOI: 10.7554/eLife.00699.001
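(The stimulus and model are described in the paper itself; as a toy illustration of the temporal-coherence idea, the Python sketch below shows how frequency channels that fluctuate together over time can be separated from independently fluctuating background channels by their pairwise correlation. All signal parameters are invented.)

```python
# Toy illustration of temporal coherence: frequency channels whose envelopes
# rise and fall together (the "figure") are separable from independently
# fluctuating background channels by their pairwise temporal correlation.
# This is a simplification for illustration, not the model from the paper.
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_channels, n_figure = 200, 12, 4

# Background channels fluctuate independently; figure channels share a
# common slowly varying envelope plus a little channel-specific noise.
background = rng.random((n_frames, n_channels - n_figure))
common = np.convolve(rng.random(n_frames), np.ones(10) / 10, mode="same")
figure = common[:, None] + 0.1 * rng.random((n_frames, n_figure))
spectrogram = np.hstack([background, figure])    # frames x channels

# Coherence score per channel: mean correlation with all other channels.
corr = np.corrcoef(spectrogram.T)
score = (corr.sum(axis=1) - 1.0) / (n_channels - 1)

print(np.round(score, 2))   # the last n_figure channels score much higher
```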
There’s Waldo! A Normalization Model of Visual Search Predicts Single-Trial Human Fixations in an Object Search Task
"... When searching for an object in a scene, how does the brain decide where to look next? Visual search theories suggest the existence of a global “priority map ” that integrates bottom-up visual information with top-down, target-specific signals. We propose amechanisticmodel of visual search that is c ..."
Abstract
When searching for an object in a scene, how does the brain decide where to look next? Visual search theories suggest the existence of a global “priority map” that integrates bottom-up visual information with top-down, target-specific signals. We propose a mechanistic model of visual search that is consistent with recent neurophysiological evidence, can localize targets in cluttered images, and predicts single-trial behavior in a search task. This model posits that a high-level retinotopic area selective for shape features receives global, target-specific modulation and implements local normalization through divisive inhibition. The normalization step is critical to prevent highly salient bottom-up features from monopolizing attention. The resulting activity pattern constitutes a priority map that tracks the correlation between local input and target features. The maximum of this priority map is selected as the locus of attention. The visual input is then spatially enhanced around the selected location, allowing object-selective visual areas to determine whether the target is present at this location. This model can localize objects both in array images and when objects are pasted in natural scenes. The model can also predict single-trial human fixations, including those in error and target-absent trials, in a search task involving complex objects. Key words: computational modeling, normalization, object recognition, visual attention, visual search
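(A rough numerical sketch of the divisive-normalization idea described above, with invented feature maps, a random target template, and an arbitrary normalization constant; it is not the authors' implementation.)

```python
# Rough sketch of a normalized priority map for visual search: top-down
# modulation weights each feature map by the target template, and the result
# is divided by pooled local activity so that a few very salient bottom-up
# features cannot monopolize attention. Illustration only; the maps, sizes,
# and constants are invented, not taken from the paper.
import numpy as np

rng = np.random.default_rng(1)
H, W, n_features = 32, 32, 8

feature_maps = rng.random((n_features, H, W))    # bottom-up feature activity
target = rng.random(n_features)                  # top-down target template

# Target-specific modulation: weight each feature map by the template.
drive = np.tensordot(target, feature_maps, axes=1)        # H x W

# Divisive normalization by activity pooled over all features, plus a
# small constant (sigma) that prevents division by zero.
pooled = feature_maps.sum(axis=0)
sigma = 0.1
priority = drive / (pooled + sigma)

# The maximum of the priority map is selected as the next locus of attention.
y, x = np.unravel_index(np.argmax(priority), priority.shape)
print("next fixation at", (y, x))
```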
GPGPU-BASED CORTICAL MODELING, 2012
"... Cortical modeling is an area of research seeking to model and simulate the cerebral cortex of the brain, which is of fundamental importance to conscious thought and action. Computational power is a major challenge in this eld and the problem is inherently well-suited to SIMD architectures. This su ..."
Abstract
Cortical modeling is an area of research seeking to model and simulate the cerebral cortex of the brain, which is of fundamental importance to conscious thought and action. Computational power is a major challenge in this field, and the problem is inherently well-suited to SIMD architectures. This suggests the implementation of a general-purpose GPU framework for the development and execution of cortical models, and indeed, several such frameworks do exist. However, they suffer from hardware and software vendor lock-in and unnecessary assumptions that limit the generality of the models they can execute. In order to overcome these obstacles in preparation for anticipated future work by the author and others, we have implemented a new cortical modeling framework in OpenCL using PyOpenCL. The new framework has several notable advantages: it is open-source, it does not suffer from hardware or software vendor lock-in, it is cross-platform compatible, and in principle it can simulate any model expressed as a message-passing architecture.
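(As a minimal illustration of the PyOpenCL style of GPU programming the abstract mentions, the sketch below updates a vector of cortical units in parallel using a made-up leaky-integration rule; it is not code from the framework described.)

```python
# Minimal PyOpenCL example of the style of computation such a framework runs:
# every unit is updated in parallel by a simple (made-up) leaky integration
# rule. This illustrates PyOpenCL usage only, not the author's framework.
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

n_units = 1024
state = np.random.rand(n_units).astype(np.float32)
inputs = np.random.rand(n_units).astype(np.float32)

kernel_src = """
__kernel void update(__global float *state,
                     __global const float *inp,
                     const float leak)
{
    int i = get_global_id(0);
    state[i] = (1.0f - leak) * state[i] + leak * inp[i];
}
"""

mf = cl.mem_flags
state_buf = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=state)
input_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=inputs)

prog = cl.Program(ctx, kernel_src).build()
prog.update(queue, (n_units,), None, state_buf, input_buf, np.float32(0.1))

cl.enqueue_copy(queue, state, state_buf)   # read the updated states back
print(state[:5])
```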
"... Our everyday surroundings besiege us with information. The battle is for a share of our limited attention and memory, with the brain selecting the winners and discarding the losers. Previous research shows that both bottom–up and top–down factors bias competition in favor of high priority stimuli. W ..."
Abstract
Our everyday surroundings besiege us with information. The battle is for a share of our limited attention and memory, with the brain selecting the winners and discarding the losers. Previous research shows that both bottom-up and top-down factors bias competition in favor of high-priority stimuli. We propose that arousal during an event increases this bias both in perception and in long-term memory of the event. Arousal-biased competition theory provides specific predictions about when arousal will enhance memory for events and when it will impair it, which accounts for some puzzling contradictions in the emotional memory literature.
Keywords: arousal, emotional memory, biased competition, attention
"Selection is the very keel on which our mental ship is built. And in this case of memory its utility is obvious. If we remembered everything, we should on most occasions be as ill off as if we remembered nothing." —William James, The Principles of Psychology (1890, p. 680)
The brain's ability to prioritize information allows us to think and take action without being overwhelmed by external
Neural Control: Closed-Loop Human
"... Closed-loop experimental testing of single medial temporal lobe neurons in humans reveals top-down effects, opening new possibilities for describing neural representations at the highest level. ..."
Abstract
Closed-loop experimental testing of single medial temporal lobe neurons in humans reveals top-down effects, opening new possibilities for describing neural representations at the highest level.