Results 1 - 10 of 29
Spatial constraints on learning in visual search: Modeling contextual cuing
- Journal of Experimental Psychology: Human Perception and Performance
, 2007
"... Predictive visual context facilitates visual search, a benefit termed contextual cuing (M. M. Chun & Y. Jiang, 1998). In the original task, search arrays were repeated across blocks such that the spatial configuration (context) of all of the distractors in a display predicted an embedded target ..."
Abstract
-
Cited by 31 (3 self)
- Add to MetaCart
(Show Context)
Predictive visual context facilitates visual search, a benefit termed contextual cuing (M. M. Chun & Y. Jiang, 1998). In the original task, search arrays were repeated across blocks such that the spatial configuration (context) of all of the distractors in a display predicted an embedded target location. The authors modeled existing results using a connectionist architecture and then designed new behavioral experiments to test the model’s assumptions. The modeling and behavioral results indicate that learning may be restricted to the local context even when the entire configuration is predictive of target location. Local learning constrains how much guidance is produced by contextual cuing. The modeling and new data also demonstrate that local learning requires that the local context maintain its location in the overall global context.
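The local-learning mechanism lends itself to a compact illustration. The following is a minimal sketch, not the authors' connectionist architecture; the grid size, local radius, learning rate, and display layout are all assumptions made for the example:

    # Illustrative sketch of local contextual cuing, not the authors'
    # connectionist model; grid size, radius, and learning rate are
    # assumptions made for the example.
    import numpy as np

    GRID = 8      # display discretized into an 8x8 grid (assumed)
    RADIUS = 2    # extent of the "local context" around the target (assumed)
    LR = 0.1      # associative learning rate (assumed)

    # weights[d, t]: association strength from distractor cell d to target cell t
    weights = np.zeros((GRID * GRID, GRID * GRID))

    def cell(pos):
        row, col = pos
        return row * GRID + col

    def learn(distractors, target):
        """Strengthen associations only from distractors near the target,
        mimicking learning restricted to the local context."""
        for d in distractors:
            if max(abs(d[0] - target[0]), abs(d[1] - target[1])) <= RADIUS:
                weights[cell(d), cell(target)] += LR

    def predict(distractors):
        """Pool the learned associations over all distractors and return
        the most strongly cued cell."""
        activation = np.zeros(GRID * GRID)
        for d in distractors:
            activation += weights[cell(d)]
        row, col = np.unravel_index(activation.argmax(), (GRID, GRID))
        return int(row), int(col)

    # One repeated display whose configuration predicts a target at (3, 3).
    distractors = [(2, 3), (4, 4), (1, 6), (6, 1), (7, 5), (0, 2)]
    for _ in range(20):          # repeated exposures across blocks
        learn(distractors, (3, 3))
    print(predict(distractors))  # -> (3, 3); only (2, 3) and (4, 4) contribute

Because only the distractor cells inside the radius ever acquire weights, relocating the local cluster to new cells leaves the prediction without support, consistent with the finding that the local context must maintain its position in the global configuration.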
Initial scene representations facilitate eye movement guidance in visual search
- Journal of Experimental Psychology: Human Perception and Performance
, 2006
"... What role does the initial glimpse of a scene play in subsequent eye movement guidance? In 4 experiments, a brief scene preview was followed by object search through the scene via a small moving window that was tied to fixation position. Experiment 1 demonstrated that the scene preview resulted in m ..."
Abstract
-
Cited by 28 (4 self)
- Add to MetaCart
(Show Context)
What role does the initial glimpse of a scene play in subsequent eye movement guidance? In 4 experiments, a brief scene preview was followed by object search through the scene via a small moving window that was tied to fixation position. Experiment 1 demonstrated that the scene preview resulted in more efficient eye movements compared with a control preview. Experiments 2 and 3 showed that this scene preview benefit was not due to the conceptual category of the scene or identification of the target object in the preview. Experiment 4 demonstrated that the scene preview benefit was unaffected by a change in the size of the scene from preview to search. These results suggest that an abstract (size invariant) visual representation is generated in an initial scene glimpse and that this representation can be retained in memory and used to guide subsequent eye movements.
Overt attentional prioritization of new objects and feature changes during real-world scene viewing
- Visual Cognition
, 2009
"... The authors investigated the extent to which a change to an object’s colour is overtly prioritized for fixation relative to the appearance of a new object during real-world scene viewing. Both types of scene change captured gaze (and attention) when introduced during a fixation, although colour chan ..."
Abstract
-
Cited by 4 (0 self)
- Add to MetaCart
The authors investigated the extent to which a change to an object’s colour is overtly prioritized for fixation relative to the appearance of a new object during real-world scene viewing. Both types of scene change captured gaze (and attention) when introduced during a fixation, although colour changes captured attention less often than new objects. Neither of these scene changes captured attention when they occurred during a saccade, but slower and less reliable memory-based mechanisms were nevertheless able to prioritize new objects and colour changes relative to the other stable objects in the scene. These results indicate that online memory for object identity and at least some object features is functional in detecting changes to real-world scenes. Additionally, visual factors such as the salience of onsets and colour changes did not affect prioritization of these events. We discuss these results in terms of current theories of attention allocation within, and online memory representations of, real-world scenes.
Guidance of attention to objects and locations by long-term memory of natural scenes
- J
, 2008
"... Four flicker change-detection experiments demonstrate that scene-specific long-term memory guides attention to both behaviorally relevant locations and objects within a familiar scene. Participants performed an initial block of change-detection trials, detecting the addition of an object to a natura ..."
Abstract
-
Cited by 3 (1 self)
- Add to MetaCart
Four flicker change-detection experiments demonstrate that scene-specific long-term memory guides attention to both behaviorally relevant locations and objects within a familiar scene. Participants performed an initial block of change-detection trials, detecting the addition of an object to a natural scene. After a 30-min delay, participants performed an unanticipated 2nd block of trials. When the same scene occurred in the 2nd block, the change within the scene was (a) identical to the original change, (b) a new object appearing in the original change location, (c) the same object appearing in a new location, or (d) a new object appearing in a new location. Results suggest that attention is rapidly allocated to previously relevant locations and then to previously relevant objects. This pattern of locations dominating objects remained when object identity information was made more salient. Eye tracking verified that scene memory results in more direct scan paths to previously relevant locations and objects. This contextual guidance suggests that a high-capacity long-term memory for scenes is used to ensure that limited attentional capacity is allocated efficiently rather than being squandered.
An object-mediated updating account of insensitivity to transsaccadic change
When does repeated search in scenes involve memory? Looking at versus looking for objects in scenes
- Journal of Experimental Psychology: Human Perception and Performance
, 2012
"... One might assume that familiarity with a scene or previous encounters with objects embedded in a scene would benefit subsequent search for those items. However, in a series of experiments we show that this is not the case: When participants were asked to subsequently search for multiple objects in t ..."
Abstract
-
Cited by 2 (1 self)
- Add to MetaCart
(Show Context)
One might assume that familiarity with a scene or previous encounters with objects embedded in a scene would benefit subsequent search for those items. However, in a series of experiments we show that this is not the case: When participants were asked to subsequently search for multiple objects in the same scene, search performance remained essentially unchanged over the course of searches despite increasing scene familiarity. Similarly, looking at target objects during previews, which included letter search, 30 seconds of free viewing, or even 30 seconds of memorizing a scene, also did not benefit search for the same objects later on. However, when the same object was searched for again, memory for the previous search was capable of producing very substantial speeding of search despite many different intervening searches. This was especially the case when the previous search engagement had been active rather than supported by a cue. While these search benefits speak to the strength of memory-guided search when the same search target is repeated, the lack of memory guidance during initial object searches—despite previous encounters with the target objects—demonstrates the dominance of guidance by generic scene knowledge in real-world search.
Incremental Learning of Target Locations in Visual Search
"... The top-down guidance of visual attention is one of the main factors allowing humans to effectively process vast amounts of incoming visual information. Nevertheless we still lack a full understanding of the visual, semantic, and memory processes governing visual attention. In this paper, we present ..."
Abstract
-
Cited by 2 (2 self)
- Add to MetaCart
(Show Context)
The top-down guidance of visual attention is one of the main factors allowing humans to effectively process vast amounts of incoming visual information. Nevertheless, we still lack a full understanding of the visual, semantic, and memory processes governing visual attention. In this paper, we present a computational model of visual search capable of predicting the most likely positions of target objects. The model does not require a separate training phase, but learns likely target positions in an incremental fashion based on a memory of previous fixations. We evaluate the model on two search tasks and show that it outperforms saliency alone and comes close to the maximal performance the Contextual Guidance Model (CGM; Torralba et al., 2006; Ehinger et al., 2009) can achieve, even though our model does not perform scene recognition or compute global image statistics.
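A rough sketch of the incremental idea follows; it is not the paper's implementation, and the map resolution, Gaussian kernel width, and prior-times-saliency combination rule are all assumptions made for the example:

    # Rough sketch of incremental learning of target locations, not the
    # paper's implementation; map resolution, kernel width, and the
    # prior-times-saliency combination rule are assumptions.
    import numpy as np

    H, W = 48, 64        # resolution of the spatial maps (assumed)
    SIGMA = 3.0          # Gaussian kernel width in cells (assumed)

    ys, xs = np.mgrid[0:H, 0:W]
    prior = np.full((H, W), 1.0 / (H * W))   # start from a uniform prior

    def update_prior(fix_row, fix_col):
        """Fold one remembered fixation into the location prior."""
        global prior
        bump = np.exp(-((ys - fix_row) ** 2 + (xs - fix_col) ** 2)
                      / (2 * SIGMA ** 2))
        prior = prior + bump / bump.sum()
        prior /= prior.sum()

    def predict(saliency):
        """Combine the learned prior with bottom-up saliency and return
        the most likely target cell."""
        score = saliency * prior
        row, col = np.unravel_index(score.argmax(), score.shape)
        return int(row), int(col)

    # No separate training phase: each trial's fixated target location
    # simply updates the prior before the next prediction.
    rng = np.random.default_rng(1)
    saliency = rng.random((H, W))            # stand-in for a saliency map
    for _ in range(10):
        update_prior(20, 30)                 # target repeatedly found here
    print(predict(saliency))                 # -> at or near (20, 30)

Each update folds a single fixation into the prior in constant time, which is what lets a model of this kind dispense with a separate training phase.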
Toward ecologically realistic theories in visual short-term memory research
- Attention, Perception, & Psychophysics
, 2014
"... Abstract Recent evidence from neuroimaging and psycho-physics suggests common neural and representational sub-strates for visual perception and visual short-term memory (VSTM). Visual perception is adapted to a rich set of statistical regularities present in the natural visual environment. Common ne ..."
Abstract
-
Cited by 2 (0 self)
- Add to MetaCart
(Show Context)
Recent evidence from neuroimaging and psychophysics suggests common neural and representational substrates for visual perception and visual short-term memory (VSTM). Visual perception is adapted to a rich set of statistical regularities present in the natural visual environment. Common neural and representational substrates for visual perception and VSTM suggest that VSTM is adapted to these same statistical regularities too. This article discusses how the study of VSTM can be extended to stimuli that are ecologically more realistic than those commonly used in standard VSTM experiments and what the implications of such an extension could be for our current view of VSTM. We advocate for the development of unified models of visual perception and VSTM—probabilistic and hierarchical in nature—incorporating prior knowledge of natural scene statistics.
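One ingredient of such a probabilistic model can be shown with a toy Gaussian example, assuming recall from VSTM combines a noisy memory sample with a prior over feature values; every number below is invented for the illustration:

    # Toy illustration of the probabilistic ingredient, assuming Gaussian
    # forms throughout: VSTM recall as a Bayesian combination of a noisy
    # memory sample with a prior reflecting natural scene statistics.
    prior_mean, prior_var = 0.0, 1.0   # feature prior from scene statistics (assumed)
    noise_var = 0.5                    # noise on the remembered value (assumed)

    def recall(noisy_sample: float) -> float:
        """Posterior mean: the memory sample shrunk toward the prior."""
        weight = prior_var / (prior_var + noise_var)
        return weight * noisy_sample + (1 - weight) * prior_mean

    print(recall(1.2))   # -> 0.8: recall is biased toward typical values

The posterior mean shrinks the remembered value toward the prior, so recall is biased toward feature values that are typical under natural scene statistics.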
Emotion modulates eye movement patterns and subsequent memory for the gist and details of movie scenes
- J. Vision
, 2014
"... A basic question in vision research regards where people look in complex scenes and how this influences their performance in various tasks. Previous studies with static images have demonstrated a close link between where people look and what they remember. Here, we examined the pattern of eye movem ..."
Abstract
-
Cited by 2 (0 self)
- Add to MetaCart
(Show Context)
A basic question in vision research concerns where people look in complex scenes and how this influences their performance in various tasks. Previous studies with static images have demonstrated a close link between where people look and what they remember. Here, we examined the pattern of eye movements when participants watched neutral and emotional clips from Hollywood-style movies. Participants answered multiple-choice memory questions concerning visual and auditory scene details immediately upon viewing 1-min-long neutral or emotional movie clips. Fixations were more narrowly focused for emotional clips, and immediate memory for object details was worse compared to matched neutral scenes, implying preferential attention to emotional events. Although we found the expected correlation between where people looked and what they remembered for neutral clips, this relationship broke down for emotional clips. When participants were subsequently presented with key frames (static images) extracted from the movie clips such that presentation duration of the target objects (TOs) corresponding to the multiple-choice questions was matched and the earlier questions were repeated, more fixations were observed on the TOs, and memory performance also improved significantly, confirming that emotion modulates the relationship between gaze position and memory performance. Finally, in a long-term memory test, old/new recognition performance was significantly better for emotional scenes as compared to neutral scenes. Overall, these results are consistent with the hypothesis that emotional content draws eye fixations and strengthens memory for the scene gist while weakening encoding of peripheral scene details.