Results 1 - 10 of 321
Six views of embodied cognition - Psychonomic Bulletin and Review, 2002
"... The emerging viewpoint of embodied cognition holds that cognitive processes are deeply rooted in the body’s interactions with the world. This position actually houses a number of distinct claims, some of which are more controversial than others. This paper distinguishes and evaluates the following s ..."
Abstract
-
Cited by 339 (1 self)
- Add to MetaCart
The emerging viewpoint of embodied cognition holds that cognitive processes are deeply rooted in the body’s interactions with the world. This position actually houses a number of distinct claims, some of which are more controversial than others. This paper distinguishes and evaluates the following six claims: 1) cognition is situated; 2) cognition is time-pressured; 3) we off-load cognitive work onto the environment; 4) the environment is part of the cognitive system; 5) cognition is for action; 6) off-line cognition is body-based. Of these, the first three and the fifth appear to be at least partially true, and their usefulness is best evaluated in terms of the range of their applicability. The fourth claim, I argue, is deeply problematic. The sixth claim has received the least attention in the literature on embodied cognition, but it may in fact be the best documented and most powerful of the six claims.
Accurate visual memory for previously attended objects in natural scenes, 2002
"... The nature of the information retained from previously fixated (and hence attended) objects in natural scenes was investigated. In a saccade-contingent change paradigm, participants successfully detected type and token changes (Experiment 1) or token and rotation changes (Experiment 2) to a target o ..."
Abstract
-
Cited by 159 (30 self)
- Add to MetaCart
(Show Context)
The nature of the information retained from previously fixated (and hence attended) objects in natural scenes was investigated. In a saccade-contingent change paradigm, participants successfully detected type and token changes (Experiment 1) or token and rotation changes (Experiment 2) to a target object when the object had been previously attended but was no longer within the focus of attention when the change occurred. In addition, participants demonstrated accurate type-, token-, and orientation-discrimination performance on subsequent long-term memory tests (Experiments 1 and 2) and during online perceptual processing of a scene (Experiment 3). These data suggest that relatively detailed visual information is retained in memory from previously attended objects in natural scenes. A model of scene perception and long-term memory is proposed.
Visual Indexes, Preconceptual Objects, and Situated Vision - Cognition, 2002
"... ..."
(Show Context)
Current approaches to change blindness - Visual Cognition: Special Issue on Change Detection and Visual Memory
"... Across saccades, blinks, blank screens, movie cuts, and other interruptions, ob-servers fail to detect substantial changes to the visual details of objects and scenes. This inability to spot changes (“change blindness”) is the focus of this special issue of Visual Cognition. This introductory paper ..."
Abstract
-
Cited by 146 (10 self)
- Add to MetaCart
(Show Context)
Across saccades, blinks, blank screens, movie cuts, and other interruptions, observers fail to detect substantial changes to the visual details of objects and scenes. This inability to spot changes (“change blindness”) is the focus of this special issue of Visual Cognition. This introductory paper briefly reviews recent studies of change blindness, noting the relation of these findings to earlier research and discussing the inferences we can draw from them. Most explanations of change blindness assume that we fail to detect changes because the changed display masks or overwrites the initial display. Here I draw a distinction between intentional and incidental change detection tasks and consider how alternatives to the “overwriting” explanation may provide better explanations for change blindness. Imagine you are watching a movie in which an actor is sitting in a cafeteria with a jacket slung over his shoulder. The camera then cuts to a close-up and his jacket is now over the back of his chair. You might think that everyone would notice this obvious editing mistake. Yet, recent research on visual memory has found that people are surprisingly poor at noticing large changes to objects, photographs, and motion pictures from one instant to the next (see Simons & Levin, 1997, for a review). Although researchers have long noted the existence
An Active Vision Architecture based on Iconic Representations - Artificial Intelligence, 1995
"... Active vision systems have the capability of continuously interacting with the environment. The rapidly changing environment of such systems means that it is attractive to replace static representations with visual routines that compute information on demand. Such routines place a premium on image d ..."
Abstract
-
Cited by 146 (13 self)
- Add to MetaCart
(Show Context)
Active vision systems have the capability of continuously interacting with the environment. The rapidly changing environment of such systems means that it is attractive to replace static representations with visual routines that compute information on demand. Such routines place a premium on image data structures that are easily computed and used. The purpose of this paper is to propose a general active vision architecture based on efficiently computable iconic representations. This architecture employs two primary visual routines, one for identifying the visual image near the fovea (object identification), and another for locating a stored prototype on the retina (object location). This design allows complex visual behaviors to be obtained by composing these two routines with different parameters. The iconic representations are comprised of high-dimensional feature vectors obtained from the responses of an ensemble of Gaussian derivative spatial filters at a number of orientations and...
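The iconic representation described above lends itself to a compact sketch: sample the responses of Gaussian-derivative filters at several scales and orientations at one image location and stack them into a high-dimensional feature vector. The sketch below is a minimal illustration under my own assumptions (the function name, scales, and the use of steered first derivatives are placeholders), not the paper's implementation.

```python
# Minimal sketch: an "iconic" feature vector built from oriented
# Gaussian-derivative responses at one image location (e.g. the fovea).
# Scales, orientation count, and the steering of first derivatives are
# illustrative choices, not the architecture's actual filter bank.
import numpy as np
from scipy.ndimage import gaussian_filter

def iconic_feature(image, row, col, sigmas=(1.0, 2.0, 4.0), n_orientations=4):
    """Return a feature vector of oriented Gaussian-derivative responses."""
    features = []
    for sigma in sigmas:
        # First derivatives of the Gaussian-smoothed image along y and x.
        dy = gaussian_filter(image, sigma, order=(1, 0))
        dx = gaussian_filter(image, sigma, order=(0, 1))
        for k in range(n_orientations):
            theta = np.pi * k / n_orientations
            # Steer the first derivative to orientation theta.
            response = np.cos(theta) * dx[row, col] + np.sin(theta) * dy[row, col]
            features.append(response)
    return np.array(features)

# Example: describe the centre of a random test image.
img = np.random.rand(64, 64)
vec = iconic_feature(img, 32, 32)
print(vec.shape)  # (len(sigmas) * n_orientations,) == (12,)
```

Comparing such vectors against stored prototypes would then support the two routines the abstract names: identification (what is at the fovea) and location (where a remembered prototype appears).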
Dynamic Model of Visual Recognition Predicts Neural Response Properties in the Visual Cortex - Neural Computation, 1995
"... this paper, we describe a hierarchical network model of visual recognition that explains these experimental observations by using a form of the extended Kalman filter as given by the Minimum Description Length (MDL) principle. The model dynamically combines input-driven bottom-up signals with expec ..."
Abstract
-
Cited by 113 (20 self)
- Add to MetaCart
In this paper, we describe a hierarchical network model of visual recognition that explains these experimental observations by using a form of the extended Kalman filter as given by the Minimum Description Length (MDL) principle. The model dynamically combines input-driven bottom-up signals with expectation-driven top-down signals to predict the current recognition state. Synaptic weights in the model are adapted in a Hebbian manner according to a learning rule also derived from the MDL principle. The resulting prediction/learning scheme can be viewed as implementing a form of the Expectation-Maximization (EM) algorithm. The architecture of the model posits an active computational role for the reciprocal connections between adjoining visual cortical areas in determining neural response properties. In particular, the model demonstrates the possible role of feedback from higher cortical areas in mediating neurophysiological effects due to stimuli from beyond the classical receptive field.
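The combination of expectation-driven top-down prediction with input-driven bottom-up correction can be sketched as a small predictive estimator. The matrices, learning rates, and update rule below are illustrative assumptions in the general spirit of a Kalman-filter-like prediction loop with Hebbian-style learning, not the authors' model.

```python
# Toy sketch (assumptions, not the published model): an internal state
# estimate is updated by combining a top-down prediction with a bottom-up,
# error-driven correction, and the generative weights adapt from the
# outer product of the residual and the state (Hebbian-like).
import numpy as np

rng = np.random.default_rng(0)
n_input, n_state = 16, 4
U = rng.normal(scale=0.1, size=(n_input, n_state))   # state -> predicted input
V = rng.normal(scale=0.1, size=(n_state, n_state))   # top-down state dynamics
r = np.zeros(n_state)                                # current recognition state
eta, lam = 0.1, 0.05                                 # step size, shrinkage weight

for t in range(100):
    x = rng.normal(size=n_input)                     # incoming image data (placeholder)
    r_pred = V @ r                                   # expectation-driven prediction
    error = x - U @ r_pred                           # bottom-up prediction error
    r = r_pred + eta * U.T @ error - lam * r_pred    # prediction corrected by the residual
    U += eta * np.outer(error, r)                    # error-driven weight update
```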
Learning to Use Selective Attention and Short-Term Memory in Sequential Tasks - From Animals to Animats 4: Proceedings of the Fourth International Conference on Simulation of Adaptive Behavior, 1996
"... This paper presents U-Tree, a reinforcement learning algorithm that uses selective attention and shortterm memory to simultaneously address the intertwined problems of large perceptual state spaces and hidden state. By combining the advantages of work in instance-based (or "memory-based") ..."
Abstract
-
Cited by 88 (2 self)
- Add to MetaCart
This paper presents U-Tree, a reinforcement learning algorithm that uses selective attention and short-term memory to simultaneously address the intertwined problems of large perceptual state spaces and hidden state. By combining the advantages of work in instance-based (or "memory-based") learning and work with robust statistical tests for separating noise from task structure, the method learns quickly, creates only task-relevant state distinctions, and handles noise well. U-Tree uses a tree-structured representation, and is related to work on Prediction Suffix Trees [Ron et al., 1994], Parti-game [Moore, 1993], G-algorithm [Chapman and Kaelbling, 1991], and Variable Resolution Dynamic Programming [Moore, 1991]. It builds on Utile Suffix Memory [McCallum, 1995c], which only used short-term memory, not selective perception. The algorithm is demonstrated solving a highway driving task in which the agent weaves around slower and faster traffic. The agent uses active perception with ...
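As a toy illustration of the instance-based idea behind U-Tree, the sketch below stores raw transition instances and introduces a memory-based state distinction only when a history feature (here, the previous observation) separates expected reward. The split test is a simple hypothetical threshold, not the robust statistical tests used in the paper.

```python
# Toy sketch of instance-based state splitting (not McCallum's implementation):
# keep experience as transition instances, and distinguish states by short-term
# memory of the previous observation only when doing so changes expected reward.
from collections import defaultdict

instances = []  # each instance: (prev_obs, obs, action, reward)

def record(prev_obs, obs, action, reward):
    instances.append((prev_obs, obs, action, reward))

def should_split(obs, action, threshold=0.5):
    """Split on the previous observation if it separates expected reward."""
    by_history = defaultdict(list)
    for prev_obs, o, a, r in instances:
        if o == obs and a == action:
            by_history[prev_obs].append(r)
    means = [sum(rs) / len(rs) for rs in by_history.values() if rs]
    return len(means) > 1 and max(means) - min(means) > threshold

# Example: observation 'B' is worth more after 'A' than after 'C', so the
# previous observation is a task-relevant distinction worth keeping in memory.
record('A', 'B', 'go', 1.0)
record('A', 'B', 'go', 0.9)
record('C', 'B', 'go', 0.0)
print(should_split('B', 'go'))  # True
```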
Doing without schema hierarchies: A recurrent connectionist approach to normal and impaired routine sequential action - Psychological Review, 2004
"... In everyday tasks, selecting actions in the proper sequence requires a continuously updated representation of temporal context. Many existing models address this problem by positing a hierarchy of processing units, mirroring the roughly hierarchical structure of naturalistic tasks themselves. Such a ..."
Abstract
-
Cited by 85 (13 self)
- Add to MetaCart
(Show Context)
In everyday tasks, selecting actions in the proper sequence requires a continuously updated representation of temporal context. Many existing models address this problem by positing a hierarchy of processing units, mirroring the roughly hierarchical structure of naturalistic tasks themselves. Such an approach has led to a number of difficulties, including a reliance on overly rigid sequencing mechanisms, an inability to account for context sensitivity in behavior, and a failure to address learning. We consider here an alternative framework, according to which the representation of temporal context is facilitated by recurrent connections within a network mapping from environmental inputs to actions. Applying this approach to a specific, and in many ways prototypical, everyday task (coffee-making), we examine its ability to account for several central characteristics of normal and impaired human performance. The model we consider learns to deal flexibly with a complex set of sequencing constraints, encoding contextual information at multiple time-scales within a single, distributed internal representation. Mildly degrading this context representation leads
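A minimal sketch of the kind of architecture described above is an Elman-style recurrent network whose hidden state carries temporal context from environmental inputs to action outputs. Sizes, weights, and the toy loop below are assumptions for illustration, not the published coffee-making model.

```python
# Minimal sketch: a simple recurrent (Elman-style) network in which the
# hidden state is the distributed, continuously updated context representation
# mapping environmental inputs to actions. Dimensions and weights are placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 10, 20, 5
W_in = rng.normal(scale=0.1, size=(n_hidden, n_in))
W_rec = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
W_out = rng.normal(scale=0.1, size=(n_out, n_hidden))

def step(x, h):
    """One time step: update the context-bearing hidden state, emit action scores."""
    h_new = np.tanh(W_in @ x + W_rec @ h)   # recurrent context representation
    return W_out @ h_new, h_new

h = np.zeros(n_hidden)
for t in range(3):
    x = rng.normal(size=n_in)               # environmental input at time t
    scores, h = step(x, h)
    print(t, int(np.argmax(scores)))        # index of the selected action
```

Degrading the hidden state (for example, by adding noise to h) is the kind of manipulation the abstract describes for modeling impaired performance.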
Intelligence by Design: Principles of Modularity and Coordination for Engineering Complex Adaptive Agents, 2001
"... All intelligence relies on search --- for example, the search for an intelligent agent's next action. Search is only likely to succeed in resource-bounded agents if they have already been biased towards finding the right answer. In artificial agents, the primary source of bias is engineering. T ..."
Abstract
-
Cited by 81 (27 self)
- Add to MetaCart
All intelligence relies on search --- for example, the search for an intelligent agent's next action. Search is only likely to succeed in resource-bounded agents if they have already been biased towards finding the right answer. In artificial agents, the primary source of bias is engineering. This dissertation
Soft Constraints In Interactive Behavior: The Case Of Ignoring Perfect Knowledge In-The-World For Imperfect Knowledge In-The-Head, 2004
"... Constraints and dependencies among the elements of embodied cognition form patterns or microstrategies of interactive behavior. Hard constraints determine which microstrategies are possible. Soft constraints determine which of the possible microstrategies are most likely to be selected. When selecti ..."
Abstract
-
Cited by 79 (16 self)
- Add to MetaCart
Constraints and dependencies among the elements of embodied cognition form patterns or microstrategies of interactive behavior. Hard constraints determine which microstrategies are possible. Soft constraints determine which of the possible microstrategies are most likely to be selected. When selection is non-deliberate or automatic, the least-effort microstrategy is chosen. In calculating the effort required to execute a microstrategy, each of the three types of operations (memory retrieval, perception, and action) is given equal weight; that is, perceptual-motor activity does not have a privileged status with respect to memory. Soft constraints can work contrary to the designer's intentions by making the access of perfect knowledge in-the-world more effortful than the access of imperfect knowledge in-the-head. These implications of soft constraints are tested in two experiments. In Experiment 1 we varied the perceptual-motor effort of accessing knowledge in-the-world as well as the effort of retrieving items from memory. In Experiment 2 we replicated one of the Experiment 1 conditions to collect eye movement data. The results suggest that milliseconds matter. Soft constraints lead to a reliance on knowledge in-the-head even when the absolute difference in perceptual-motor versus memory retrieval effort is small, and even when relying on memory leads to a higher error rate and lower performance. We discuss the implications of ...
Supplementary data associated with this article can be found, in the online version, at doi:10.1016/j.cogsci.2003.12.001. An earlier, much simpler version of this report was presented as an eight-page conference paper at CHI 2001; that paper is archived as Gray and Fu (2001).
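The least-effort selection rule described above can be illustrated with a hypothetical cost calculation: each microstrategy is costed as the sum of its memory-retrieval, perceptual, and motor operations, with no operation type privileged, and the cheapest microstrategy wins. The millisecond costs and operation lists below are invented for illustration, not the paper's data.

```python
# Hypothetical illustration of least-effort microstrategy selection.
# All three operation types contribute to effort with equal status;
# the specific costs here are made up.
COSTS_MS = {"retrieve": 300, "perceive": 250, "move_eyes": 150}

def effort(operations):
    return sum(COSTS_MS[op] for op in operations)

in_the_world = ["move_eyes", "perceive", "move_eyes", "perceive"]  # re-check the display
in_the_head = ["retrieve"]                                         # rely on (imperfect) memory

strategies = {"knowledge in-the-world": in_the_world,
              "knowledge in-the-head": in_the_head}
chosen = min(strategies, key=lambda name: effort(strategies[name]))
print({name: effort(ops) for name, ops in strategies.items()}, "->", chosen)
```

On these invented numbers, relying on memory is cheaper in milliseconds even though it risks more errors, which is the trade-off the experiments examine.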