Results 11 - 20 of 808
Searching imagined environments - Journal of Experimental Psychology: General, 1990
Abstract - Cited by 114 (20 self)
Subjects read narratives describing directions of objects around a standing or reclining observer, who was periodically reoriented. RTs were measured to identify which object was currently located beyond the observer's head, feet, front, back, right, and left. When the observer was standing, head/feet RTs were fastest, followed by front/back and then right/left. For the reclining observer, front/back RTs were fastest, followed by head/feet and then right/left. The data support the spatial framework model, according to which space is conceptualized in terms of three axes whose accessibility depends on body asymmetries and the relation of the body to the world. The data allow rejection of the equiavailability model, according to which RTs to all directions are equal, and the mental transformation model, according to which RTs increase with angular disparity from front. Consider the following passage ("The Gambler, the Nun, and the Radio," Hemingway, 1927, p. 41): Out of the window of the hospital you could see a field with tumbleweed coming out of the snow, and a bare clay butte.... From the other window, if the bed was turned, you could see the
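The three competing models in this abstract make distinct, easily stated RT predictions. The following sketch contrasts them; the function names, rank values, and latency parameters are illustrative assumptions, not figures from the paper.

```python
def spatial_framework_ranks(posture):
    # Spatial framework model: axis accessibility depends on body
    # asymmetries and the body's relation to the world. Upright, the
    # head/feet axis aligns with gravity and is fastest; reclining,
    # the front/back asymmetry dominates. (1 = fastest RT rank.)
    if posture == "standing":
        return {"head/feet": 1, "front/back": 2, "right/left": 3}
    return {"front/back": 1, "head/feet": 2, "right/left": 3}

def equiavailability_ranks(posture):
    # Equiavailability model: all directions equally accessible,
    # regardless of posture.
    return {"head/feet": 1, "front/back": 1, "right/left": 1}

def mental_transformation_rt(angle_from_front_deg, base_ms=500.0, rate=2.0):
    # Mental transformation model: RT grows with angular disparity
    # from the observer's front (parameter values are hypothetical).
    shortest = min(angle_from_front_deg % 360, 360 - angle_from_front_deg % 360)
    return base_ms + rate * shortest
```

The observed reversal of head/feet and front/back between postures is what separates the spatial framework model from the two alternatives.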
Dynamic mental representations - Psychological Review, 1987
Abstract - Cited by 110 (1 self)
This article pursues the possibility that perceivers are sensitive to implicit dynamic information even when they are not able to observe real-time change. Recent empirical results in the domains of handwriting recognition and picture perception are discussed in support of the hypothesis that perception involves acquiring information about transitions, whether the stimuli are static or dynamic. It is then argued that dynamic information has a special status in mental representation as well as in perception. In particular I propose that some mental representations may be dynamic, in that a temporal dimension is necessary to the representation. Recent evidence that mental representations may exhibit a form of momentum is discussed in support of this claim. There has been a growing appreciation of the impressive ability that the human mind has for perceiving events that take place over time. J. J. Gibson (1979), Johansson (1975), and others have noted that we are particularly receptive to information contained in patterns of change in the environment, as opposed to static information (such as that contained in a snapshot). In this article I will first propose that people perceive dynamic information even when the stimuli being inspected (such as snapshots) are not changing in real time. I will then propose that the importance of dynamic information to perception has implications for mental representation. In particular I will argue that mental representations may sometimes contain a temporal dimension and may thus themselves be dynamic. Perceiving Transitions I propose that in perception, acquiring information about transitions between states is as important as acquiring information about the states themselves. I believe that the proclivity people show for picking up transitional information extends to situations in which the stimuli are static.
A more precise proposition can be stated as follows: When the perceptual system cannot directly perceive change over time it will seek out implicit dynamic information.
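The "form of momentum" mentioned above (representational momentum) can be stated very compactly: memory for a moving target's final position is displaced forward along its implied trajectory. A minimal sketch, with a hypothetical linear gain that is not Freyd's actual model:

```python
def remembered_position(final_pos, velocity, gain=0.1):
    # Representational momentum: the remembered final position of a
    # moving target is shifted forward in the direction of motion.
    # The linear "gain" term is an illustrative assumption.
    return final_pos + gain * velocity
```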
A survey of design issues in spatial input, 1994
Abstract - Cited by 110 (3 self)
We present a survey of design issues for developing effective free-space three-dimensional (3D) user interfaces. Our survey is based upon previous work in 3D interaction, our experience in developing free-space interfaces, and our informal observations of test users. We illustrate our design issues using examples drawn from instances of 3D interfaces. For example, our first issue suggests that users have difficulty understanding three-dimensional space. We offer a set of strategies which may help users to better perceive a 3D virtual environment, including the use of spatial references, relative gesture, two-handed interaction, multisensory feedback, physical constraints, and head tracking. We describe interfaces which employ these strategies. Our major contribution is the synthesis of many scattered results, observations, and examples into a common framework. This framework should serve as a guide to researchers or systems builders who may not be familiar with design issues in spatial input. Where appropriate, we also try to identify areas in free-space 3D interaction which we see as likely candidates for additional research. An extended and annotated version of the references list for this paper is available on-line through mosaic at address
Cognitive coordinate systems: Accounts of mental rotation and individual differences in spatial ability - Psychological Review, 1985
Abstract - Cited by 105 (13 self)
Strategic differences in spatial tasks can be explained in terms of different cognitive coordinate systems that subjects adopt. The strategy of mental rotation that occurs in many recent experiments uses a coordinate system defined by the standard axes of our visual world (i.e., horizontal, vertical, and depth axes). Several other possible coordinate systems (and hence other strategies) for solving the problems that occur in psychometric tests of spatial ability are examined in this article. One alternative strategy uses a coordinate system defined by the demands of each test item, resulting in mental rotation around arbitrary, task-defined axes. Another strategy uses a coordinate system defined exclusively by the objects, producing representations that are invariant with the objects' orientation. A detailed theoretical account of the mental rotation of individuals of low and high spatial ability, solving problems taken from psychometric tests, is instantiated as two related computer simulation models whose performance corresponds to the response latencies, eye-fixation patterns, and retrospective strategy reports of the two ability groups. The main purpose of this article is to provide a theory of how people solve problems on psychometric tests of spatial ability, focusing on the mental operations, representations, and strategies that are used for different types of problems. The theory is instantiated in terms of computer simulation models whose performance characteristics resemble human characteristics. A second purpose of the article is to analyze the processing differences between people of high and low spatial ability. One computer model simulates the processes
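The two strategies contrasted in this abstract predict different latency profiles: rotation in a world- or task-defined coordinate system produces the classic linear RT/angle function, while an object-defined representation is orientation-invariant. A hypothetical sketch (intercept and slope values are illustrative, not the paper's fitted parameters):

```python
def rotation_rt(angular_disparity_deg, intercept_ms=400.0, rate_ms_per_deg=2.5):
    # Rotation strategy: latency grows linearly with the shortest
    # rotation needed to bring the two figures into alignment.
    shortest = min(angular_disparity_deg % 360, 360 - angular_disparity_deg % 360)
    return intercept_ms + rate_ms_per_deg * shortest

def orientation_free_rt(angular_disparity_deg, intercept_ms=400.0):
    # Object-centered strategy: the representation is invariant with
    # the objects' orientation, so latency is flat across disparities.
    return intercept_ms
```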
Toward a unified theory of similarity and recognition - Psychological Review, 1988
Abstract - Cited by 104 (6 self)
A new theory of similarity, rooted in the detection and recognition literatures, is developed. The general recognition theory assumes that the perceptual effect of a stimulus is random but that on any single trial it can be represented as a point in a multidimensional space. Similarity is a function of the overlap of perceptual distributions. It is shown that the general recognition theory contains Euclidean distance models of similarity as a special case but that unlike them, it is not constrained by any distance axioms. Three experiments are reported that test the empirical validity of the theory. In these experiments the general recognition theory accounts for similarity data as well as the currently popular similarity theories do, and it accounts for identification data as well as the long-standing "champion" identification model does. The concept of similarity is of fundamental importance in psychology. Not only is there a vast literature concerned directly with the interpretation of subjective similarity judgments (e.g., as in multidimensional scaling) but the concept also plays a crucial but less direct role in the modeling of many psychophysical tasks. This is particularly true in the case of pattern and form recognition. It is frequently assumed that the greater the similarity between a pair of stimuli, the more likely one will be confused with the other in a recognition task (e.g., Luce, 1963; Shepard, 1964; Tversky & Gati, 1982). Yet despite the potentially close relationship between the two, there have been only a few attempts at developing theories that unify the similarity and recognition literatures. Most attempts to link the two have used a distance-based similarity measure to predict the confusions in recognition experiments.
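"Similarity as overlap of perceptual distributions" can be made concrete for the one-dimensional Gaussian case using the Bhattacharyya coefficient, which is 1 for identical distributions and falls toward 0 as they separate. This is only one way to quantify overlap, chosen for illustration; the theory in the paper is multidimensional and not tied to this measure.

```python
import math

def gaussian_overlap(mu1, sd1, mu2, sd2):
    # Bhattacharyya coefficient of two 1-D Gaussian perceptual
    # distributions: exp of the negative Bhattacharyya distance.
    var_sum = sd1 ** 2 + sd2 ** 2
    dist = (0.25 * (mu1 - mu2) ** 2 / var_sum
            + 0.5 * math.log(var_sum / (2.0 * sd1 * sd2)))
    return math.exp(-dist)
```

Note that an overlap measure like this need not obey the distance axioms (e.g., the triangle inequality), consistent with the abstract's point that the theory is not constrained by them.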
Maximizing Rigidity: The Incremental Recovery Of 3-D Structure From Rigid And . . . - Perception, 1983
Abstract - Cited by 101 (2 self)
The human visual system can extract 3-D shape information of unfamiliar moving objects from their projected transformations. Computational studies of this capacity have established that 3-D shape can be extracted correctly from a brief presentation, provided that the moving objects are rigid. The human visual system requires a longer temporal extension, but it can cope with considerable deviations from rigidity. It is shown how the 3-D structure of rigid and non-rigid objects can be recovered by maintaining an internal model of the viewed object and modifying it at each instant by the minimal non-rigid change that is sufficient to account for the observed transformation. The results of applying this incremental rigidity scheme to rigid and non-rigid objects in motion are described and compared with human perceptions.
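The incremental rigidity idea (keep an internal 3-D model; at each frame pick the interpretation that changes it least) can be sketched as a toy single-step update. A brute-force grid search over candidate depths stands in for the paper's continuous minimization, and the "non-rigid change" cost is simplified to squared change in interpoint distances; everything here is illustrative.

```python
import itertools
import math

def pairwise_distances(points):
    # All interpoint distances of a 3-D point set, in a fixed order.
    return [math.dist(p, q) for p, q in itertools.combinations(points, 2)]

def incremental_update(model, projections, depth_grid):
    # Among candidate depth assignments for the observed (x, y)
    # projections, keep the 3-D interpretation whose interpoint
    # distances deviate least from the current internal model,
    # i.e., the minimal non-rigid change accounting for the view.
    current = pairwise_distances(model)
    best, best_cost = None, float("inf")
    for depths in itertools.product(depth_grid, repeat=len(projections)):
        candidate = [(x, y, z) for (x, y), z in zip(projections, depths)]
        cost = sum((a - b) ** 2
                   for a, b in zip(pairwise_distances(candidate), current))
        if cost < best_cost:
            best, best_cost = candidate, cost
    return best
```

When the projections are consistent with the current model and the true depths lie on the grid, the fully rigid (zero-change) interpretation is recovered.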
Motor Processes in Mental Rotation, 1998
Abstract - Cited by 96 (8 self)
Much indirect evidence supports the hypothesis that transformations of mental images are
Modulation of Parietal Activation by Semantic Distance in a Number Comparison Task - NeuroImage, 2001
Abstract - Cited by 93 (20 self)
INTRODUCTION How do we go from seeing a word to accessing its meaning? Classical models of word processing postulate that words are initially recognized in modality-specific input lexicons before contacting a common semantic representation (Caramazza, 1996; Morton, 1979). This predicts that areas which are engaged in semantic-level processing should activate in direct correlation with the amount of semantic manipulation required by the task and do so independent of the modality of presentation of the concept (Chao et al., 2000; Perani et al., 1999; Vandenberghe et al., 1996). Here, we attempt to identify the cerebral areas engaged in the coding and internal manipulation of an abstract semantic content, the meaning of number words. Although numbers can be written in multiple notations, such as words or digits, the parietal lobes are thought to comprise a notation-independent representation of their semantic content as quantities. According to the "triple-code model" of number processing
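The semantic-distance manipulation central to this study has a simple behavioral signature: comparison gets faster as the numerical distance between the target and the reference grows. A hypothetical descriptive model of that distance effect; the logarithmic form, the reference value, and all parameters are illustrative assumptions, not estimates from the study.

```python
import math

def comparison_rt(n, reference=65, intercept_ms=400.0, slope_ms=200.0):
    # Numerical distance effect: RT for judging whether n is larger or
    # smaller than the reference decreases as |n - reference| grows.
    distance = abs(n - reference)
    if distance == 0:
        raise ValueError("n must differ from the reference")
    return intercept_ms + slope_ms / math.log(distance + 1)
```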