Visual memory for natural scenes: Evidence from change detection and visual search. (2006)

by A. Hollingworth
Venue: Visual Cognition
Results 1 - 10 of 29

Spatial constraints on learning in visual search: Modeling contextual cuing

by Timothy F. Brady, Marvin M. Chun - Journal of Experimental Psychology: Human Perception and Performance, 2007
"... Predictive visual context facilitates visual search, a benefit termed contextual cuing (M. M. Chun & Y. Jiang, 1998). In the original task, search arrays were repeated across blocks such that the spatial configuration (context) of all of the distractors in a display predicted an embedded target ..."
Abstract - Cited by 31 (3 self) - Add to MetaCart
Predictive visual context facilitates visual search, a benefit termed contextual cuing (M. M. Chun & Y. Jiang, 1998). In the original task, search arrays were repeated across blocks such that the spatial configuration (context) of all of the distractors in a display predicted an embedded target location. The authors modeled existing results using a connectionist architecture and then designed new behavioral experiments to test the model’s assumptions. The modeling and behavioral results indicate that learning may be restricted to the local context even when the entire configuration is predictive of target location. Local learning constrains how much guidance is produced by contextual cuing. The modeling and new data also demonstrate that local learning requires that the local context maintain its location in the overall global context.

Citation Context

..., 1999; Mackworth & Morandi, 1967; but see Hollingworth & Henderson, 1998). Likewise, context information constrains the positions of objects within scenes (Biederman, Mezzanotte, & Rabinowitz, 1982; Hollingworth, 2006; Palmer, 1975). This helps cut down on the massive information overload because it provides constraints on the range of possible objects that can be expected to occur in a particular context (e.g., v...
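For illustration, the basic contextual-cuing mechanism summarized in this abstract (a repeated distractor configuration becomes associated with its target location and then guides search) can be sketched as a simple associative lookup. This is a hypothetical sketch, not the connectionist model from the cited paper; the class name and the whole-configuration key are assumptions, and the paper's central result is that in practice only the local context near the target is learned.

# Hypothetical sketch of contextual cuing as associative learning.
# Positions are (x, y) tuples. The whole distractor configuration is used
# as the retrieval key here, whereas Brady and Chun's modeling suggests
# that only the distractors local to the target are actually learned.
class ContextualCuing:
    def __init__(self):
        self.memory = {}  # distractor configuration -> learned target location

    def learn(self, distractors, target):
        """Associate a repeated configuration with its target location."""
        self.memory[frozenset(distractors)] = target

    def cue(self, distractors):
        """A repeated configuration retrieves its learned target location;
        a novel configuration returns None and search proceeds unguided."""
        return self.memory.get(frozenset(distractors))

# Example: after one exposure, the repeated display cues the target location.
cuing = ContextualCuing()
cuing.learn(distractors=[(1, 2), (5, 5), (9, 3)], target=(7, 8))
assert cuing.cue([(1, 2), (5, 5), (9, 3)]) == (7, 8)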

Initial scene representations facilitate eye movement guidance in visual search

by Monica S. Castelhano, John M. Henderson - Journal of Experimental Psychology: Human Perception and Performance, 2006
"... What role does the initial glimpse of a scene play in subsequent eye movement guidance? In 4 experiments, a brief scene preview was followed by object search through the scene via a small moving window that was tied to fixation position. Experiment 1 demonstrated that the scene preview resulted in m ..."
Abstract - Cited by 28 (4 self) - Add to MetaCart
What role does the initial glimpse of a scene play in subsequent eye movement guidance? In 4 experiments, a brief scene preview was followed by object search through the scene via a small moving window that was tied to fixation position. Experiment 1 demonstrated that the scene preview resulted in more efficient eye movements compared with a control preview. Experiments 2 and 3 showed that this scene preview benefit was not due to the conceptual category of the scene or identification of the target object in the preview. Experiment 4 demonstrated that the scene preview benefit was preserved across a change in the size of the scene, suggesting that an abstract (size invariant) visual representation is generated in an initial scene glimpse and that this representation can be retained in memory and used to guide subsequent eye movements.

Citation Context

... scene memory, evidence for relatively detailed representation is found (Castelhano & Henderson, 2005; Hollingworth, 2005; Hollingworth & Henderson, 2002; see reviews by Henderson & Castelhano, 2005; Hollingworth, 2006). The results of the present study are consistent with these latter findings in the sense that a representation generated from an initial scene glimpse was detailed enoug...

Overt attentional prioritization of new objects and feature changes during real-world scene viewing

by Michi Matsukura, James R. Brockmole, John M. Henderson - Visual Cognition, 2009
"... The authors investigated the extent to which a change to an object’s colour is overtly prioritized for fixation relative to the appearance of a new object during real-world scene viewing. Both types of scene change captured gaze (and attention) when introduced during a fixation, although colour chan ..."
Abstract - Cited by 4 (0 self) - Add to MetaCart
The authors investigated the extent to which a change to an object’s colour is overtly prioritized for fixation relative to the appearance of a new object during real-world scene viewing. Both types of scene change captured gaze (and attention) when introduced during a fixation, although colour changes captured attention less often than new objects. Neither of these scene changes captured attention when they occurred during a saccade, but slower and less reliable memory-based mechanisms were nevertheless able to prioritize new objects and colour changes relative to the other stable objects in the scene. These results indicate that online memory for object identity and at least some object features are functional in detecting changes to real-world scenes. Additionally, visual factors such as the salience of onsets and colour changes did not affect prioritization of these events. We discuss these results in terms of current theories of attention allocation within, and online memory representations of, real-world scenes.

Guidance of attention to objects and locations by long-term memory of natural scenes

by Mark W. Becker, Ian P. Rasmussen - J , 2008
"... Four flicker change-detection experiments demonstrate that scene-specific long-term memory guides attention to both behaviorally relevant locations and objects within a familiar scene. Participants performed an initial block of change-detection trials, detecting the addition of an object to a natura ..."
Abstract - Cited by 3 (1 self) - Add to MetaCart
Four flicker change-detection experiments demonstrate that scene-specific long-term memory guides attention to both behaviorally relevant locations and objects within a familiar scene. Participants performed an initial block of change-detection trials, detecting the addition of an object to a natural scene. After a 30-min delay, participants performed an unanticipated 2nd block of trials. When the same scene occurred in the 2nd block, the change within the scene was (a) identical to the original change, (b) a new object appearing in the original change location, (c) the same object appearing in a new location, or (d) a new object appearing in a new location. Results suggest that attention is rapidly allocated to previously relevant locations and then to previously relevant objects. This pattern of locations dominating objects remained when object identity information was made more salient. Eye tracking verified that scene memory results in more direct scan paths to previously relevant locations and objects. This contextual guidance suggests that a high-capacity long-term memory for scenes is used to insure that limited attentional capacity is allocated efficiently rather than being squandered.

An object-mediated updating account of insensitivity to transsaccadic change

by A. Caglar Tas, Cathleen M. Moore, Andrew Hollingworth
"... change ..."
Abstract - Cited by 3 (0 self) - Add to MetaCart
Abstract not found

Citation Context

...esentations. Instead, higher-level object representations are retained in VWM and LTM from previously fixated objects during the course of extended scene viewing (Hollingworth, 2004; for reviews, see Hollingworth, 2006, 2008). An important aspect of the present results is that they demonstrate a direct role for surface feature representations in the computation of object correspondence and the experience of continu...

When does repeated search in scenes involve memory? Looking at versus looking for objects in scenes

by Melissa L.-H. Võ, Jeremy M. Wolfe - Journal of Experimental Psychology: Human Perception and Performance, 2012
"... One might assume that familiarity with a scene or previous encounters with objects embedded in a scene would benefit subsequent search for those items. However, in a series of experiments we show that this is not the case: When participants were asked to subsequently search for multiple objects in t ..."
Abstract - Cited by 2 (1 self) - Add to MetaCart
One might assume that familiarity with a scene or previous encounters with objects embedded in a scene would benefit subsequent search for those items. However, in a series of experiments we show that this is not the case: When participants were asked to subsequently search for multiple objects in the same scene, search performance remained essentially unchanged over the course of searches despite increasing scene familiarity. Similarly, looking at target objects during previews, which included letter search, 30 seconds of free viewing, or even 30 seconds of memorizing a scene, also did not benefit search for the same objects later on. However, when the same object was searched for again, memory for the previous search was capable of producing very substantial speeding of search despite many different intervening searches. This was especially the case when the previous search engagement had been active rather than supported by a cue. While these search benefits speak to the strength of memory-guided search when the same search target is repeated, the lack of memory guidance during initial object searches—despite previous encounters with the target objects—demonstrates the dominance of guidance by generic scene knowledge in real-world search.

Citation Context

...at the observers have never seen before and that change from trial to trial (e.g., Castelhano & Henderson, 2007; Eckstein, Drescher, & Shimozaki, 2006; Henderson, Brockmole, Castelhano, & Mack, 2007; Hollingworth, 2006; Malcolm & Henderson, 2010; Võ & Henderson, 2010). Unsurprisingly, models that aim to explain the deployment of visual attention during real-world search have focused on search behavior in novel sc...

Incremental Learning of Target Locations in Visual Search

by Michal Dziemianko, Frank Keller, Moreno I. Coco
"... The top-down guidance of visual attention is one of the main factors allowing humans to effectively process vast amounts of incoming visual information. Nevertheless we still lack a full understanding of the visual, semantic, and memory processes governing visual attention. In this paper, we present ..."
Abstract - Cited by 2 (2 self) - Add to MetaCart
The top-down guidance of visual attention is one of the main factors allowing humans to effectively process vast amounts of incoming visual information. Nevertheless we still lack a full understanding of the visual, semantic, and memory processes governing visual attention. In this paper, we present a computational model of visual search capable of predicting the most likely positions of target objects. The model does not require a separate training phase, but learns likely target positions in an incremental fashion based on a memory of previous fixations. We evaluate the model on two search tasks and show that it outperforms saliency alone and comes close to the maximal performance the Contextual Guidance Model can achieve (CGM, Torralba et al. 2006; Ehinger et al. 2009), even though our model does not perform scene recognition or compute global image statistics.

Citation Context

...ge is converted into a saliency map. The map is then modulated with a bound calculated based on memorized target fixations. The resulting map is thresholded to select likely fixation locations. 1998; Hollingworth, 2006) and our assumptions are not entirely consistent with current theories of memory, we believe they are sufficient, as previous studies have either been conducted on artificial stimuli, or focused on a...
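The pipeline summarized in this passage (convert the image to a saliency map, modulate it by a memory of previously fixated target locations, then threshold it to propose likely fixation locations) can be sketched as follows. This is a minimal illustrative sketch, not the cited model's implementation: the gradient-based saliency measure, the Gaussian memory kernel, and all function names are assumptions.

import numpy as np

def saliency_map(image):
    """Crude bottom-up saliency for a 2-D grayscale array: local gradient magnitude, normalized to [0, 1]."""
    gy, gx = np.gradient(image.astype(float))
    s = np.hypot(gx, gy)
    return s / (s.max() + 1e-9)

def memory_map(shape, fixations, sigma=20.0):
    """Sum of Gaussians centred on previously memorized target fixations, given as (row, col) pairs."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    m = np.zeros(shape)
    for fy, fx in fixations:
        m += np.exp(-((ys - fy) ** 2 + (xs - fx) ** 2) / (2 * sigma ** 2))
    return m / (m.max() + 1e-9) if fixations else m

def likely_fixation_locations(image, past_fixations, threshold=0.5):
    """Modulate saliency by the memory map and threshold to select candidate fixation locations."""
    combined = saliency_map(image) * (1.0 + memory_map(image.shape, past_fixations))
    combined /= combined.max() + 1e-9
    return np.argwhere(combined >= threshold)  # (row, col) candidate locations

Fixations memorized on earlier trials increase the weight of nearby salient regions, so the thresholded map incrementally favours previously learned target positions, in the spirit of the incremental learning the abstract describes.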

Toward ecologically realistic theories in visual short-term memory research

by A. Emin Orhan, Robert A. Jacobs - Attention, Perception, & Psychophysics, 2014
"... Abstract Recent evidence from neuroimaging and psycho-physics suggests common neural and representational sub-strates for visual perception and visual short-term memory (VSTM). Visual perception is adapted to a rich set of statistical regularities present in the natural visual environment. Common ne ..."
Abstract - Cited by 2 (0 self) - Add to MetaCart
Recent evidence from neuroimaging and psychophysics suggests common neural and representational substrates for visual perception and visual short-term memory (VSTM). Visual perception is adapted to a rich set of statistical regularities present in the natural visual environment. Common neural and representational substrates for visual perception and VSTM suggest that VSTM is adapted to these same statistical regularities too. This article discusses how the study of VSTM can be extended to stimuli that are ecologically more realistic than those commonly used in standard VSTM experiments and what the implications of such an extension could be for our current view of VSTM. We advocate for the development of unified models of visual perception and VSTM, probabilistic and hierarchical in nature, incorporating prior knowledge of natural scene statistics.

Citation Context

...limitations of visual attention or VSTM, such as insufficient time given subjects to encode the details of the scene (Brady, Konkle, Oliva, & Alvarez, 2009) or a lack of fixation near target objects (Hollingworth, 2006; Hollingworth & Henderson, 2002). Therefore, a failure to detect changes in natural scenes in change blindness studies does not necessarily imply a severe capacity limitation in visual attention or i...

Emotion modulates eye movement patterns and subsequent memory for the gist and details of movie scenes

by Ramanathan Subramanian, Divya Shankar, David Melcher - Journal of Vision, 2014
"... A basic question in vision research regards where people look in complex scenes and how this influences their performance in various tasks. Previous studies with static images have demonstrated a close link between where people look and what they remember. Here, we examined the pattern of eye movem ..."
Abstract - Cited by 2 (0 self) - Add to MetaCart
A basic question in vision research regards where people look in complex scenes and how this influences their performance in various tasks. Previous studies with static images have demonstrated a close link between where people look and what they remember. Here, we examined the pattern of eye movements when participants watched neutral and emotional clips from Hollywood-style movies. Participants answered multiple-choice memory questions concerning visual and auditory scene details immediately upon viewing 1-min-long neutral or emotional movie clips. Fixations were more narrowly focused for emotional clips, and immediate memory for object details was worse compared to matched neutral scenes, implying preferential attention to emotional events. Although we found the expected correlation between where people looked and what they remembered for neutral clips, this relationship broke down for emotional clips. When participants were subsequently presented with key frames (static images) extracted from the movie clips such that presentation duration of the target objects (TOs) corresponding to the multiple-choice questions was matched and the earlier questions were repeated, more fixations were observed on the TOs, and memory performance also improved significantly, confirming that emotion modulates the relationship between gaze position and memory performance. Finally, in a long-term memory test, old/new recognition performance was significantly better for emotional scenes as compared to neutral scenes. Overall, these results are consistent with the hypothesis that emotional content draws eye fixations and strengthens memory for the scene gist while weakening encoding of peripheral scene details.

Citation Context

..., 2001; Tatler, Gilchrist, & Land, 2005; Pertzov, Avidan, & Zohary, 2009). First, it is generally reported that participants are better at remembering items that were fixated (Melcher & Kowler, 2001; Hollingworth, 2006). Second, studies of memory for pictures and photographic images have provided evidence that observers accumulate information about the visual details of scenes over time and across separate glances ...

Change Blindness and Cueing

by Anja Kühnel, Dipl.-Psych. - Fachbereich Erziehungswissenschaft und Psychologie
"... Erstgutachter/in Prof. Dr. Michael Niedeggen ..."
Abstract - Add to MetaCart
Erstgutachter/in Prof. Dr. Michael Niedeggen

Citation Context

...nformation processing. Memory for visual information contains four different stores: 1) visible persistence, 2) informational persistence, 3) visual short-term memory, and 4) visual long-term memory (Hollingworth, 2006). Visible persistence represents the visual information as sensed for an extremely short time (80-100 ms) whereas informational persistence is somewhat longer (150-300 ms) but does not represent the...
