Results 1 - 10 of 179
Rule-plusexception model of classification learning
- Psychological Review, 1994
Abstract (Cited by 287, 19 self):
The authors propose a rule-plus-exception model (RULEX) of classification learning. According to RULEX, people learn to classify objects by forming simple logical rules and remembering occasional exceptions to those rules. Because the learning process in RULEX is stochastic, the model predicts that individual Ss will vary greatly in the particular rules that are formed and the exceptions that are stored. Averaged classification data are presumed to represent mixtures of these highly idiosyncratic rules and exceptions. RULEX accounts for numerous fundamental classification phenomena, including prototype and specific exemplar effects, sensitivity to correlational information, difficulty of learning linearly separable versus nonlinearly separable categories, selective attention effects, and difficulty of learning concepts with rules of differing complexity. RULEX also predicts distributions of generalization patterns observed at the individual subject level. Psychologists have witnessed a major shift in the study of category learning during the past few decades. Early research was dominated by the concept-identification paradigm, in which subjects learned well-defined categories structured according to simple logical rules. Owing to the influence of researchers such as Rosch (1973) and Posner and Keele (1968), interest shifted to more ill-defined categories as might be found in the natural world. For ill-defined categories, no simple logical rules exist for classifying objects, and the boundaries demarcating alternative categories are fuzzy. With the shift in emphasis from well-defined to ill-defined categories, there has also been a major shift in the types of models used for explaining classification learning. Early research was dominated by hypothesis-testing and rule-formation
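The rule-plus-exception idea the abstract describes can be illustrated with a toy sketch. This is not the published RULEX model (which is stochastic and far more detailed); the dimension-selection heuristic and the training set below are invented for illustration: learn the single binary feature that best predicts the label, then store any training items the rule misclassifies as explicit exceptions.

```python
# Toy sketch of the rule-plus-exception idea (NOT the published RULEX
# model): pick the single binary feature that best predicts the label,
# then store misclassified training items as explicit exceptions.

def best_single_feature_rule(training):
    """training: list of (features, label) with 0/1 features and labels.
    Returns (dimension, mapping): mapping 1 means 'predict the feature
    value', 0 means 'predict its complement'."""
    best = None
    for d in range(len(training[0][0])):
        correct = sum(1 for f, y in training if f[d] == y)
        score = max(correct, len(training) - correct)   # allow inverted rule
        if best is None or score > best[0]:
            mapping = 1 if correct >= len(training) - correct else 0
            best = (score, d, mapping)
    return best[1], best[2]

def classify(stimulus, rule, exceptions):
    if tuple(stimulus) in exceptions:       # exceptions override the rule
        return exceptions[tuple(stimulus)]
    d, mapping = rule
    return stimulus[d] if mapping == 1 else 1 - stimulus[d]

# Invented training set: dimension 0 predicts the label except for one item.
training = [((1, 0, 1), 1), ((1, 1, 0), 1), ((0, 0, 1), 0),
            ((0, 1, 0), 0), ((1, 0, 0), 0)]
rule = best_single_feature_rule(training)
exceptions = {tuple(f): y for f, y in training if classify(f, rule, {}) != y}
```

Per the abstract, it is the averaging over many such idiosyncratic rule-plus-exception learners that lets RULEX reproduce group-level prototype and exemplar effects.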
Tests of an exemplar model for relating perceptual classification and recognition memory
- Journal of Experimental Psychology: Human Perception & Performance, 1991
Abstract (Cited by 112, 21 self):
Experiments were conducted in which Ss made classification, recognition, and similarity judgments for 34 schematic faces. A multidimensional scaling (MDS) solution for the faces was derived on the basis of the similarity judgments. This MDS solution was then used in conjunction with an exemplar-similarity model to accurately predict Ss' classification and recognition judgments. Evidence was provided that Ss allocated attention to the psychological dimensions differentially for classification and recognition. The distribution of attention came close to the ideal-observer distribution for classification, and some tendencies in that direction were observed for recognition. Evidence was also provided for interactive effects of individual exemplar frequencies and similarities on classification and recognition, in accord with the predictions of the exemplar model. Unexpectedly, however, the frequency effects appeared to be larger for classification than for recognition. The purpose of this study was to provide tests of a model for relating perceptual classification performance and old-new recognition memory. The model under investigation is the context theory of classification proposed by Medin and
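The exemplar-similarity machinery this abstract relies on can be sketched along the lines of the context-model family: similarity decays exponentially with an attention-weighted distance in the MDS space, and classification follows a Luce choice rule over summed similarities. The 2-D coordinates, attention weights, and sensitivity parameter below are invented, not the paper's fitted values:

```python
import math

def similarity(x, y, weights, c=1.0):
    """Exponential decay of an attention-weighted city-block distance,
    the usual exemplar-model similarity function."""
    return math.exp(-c * sum(w * abs(a - b) for w, a, b in zip(weights, x, y)))

def classification_probs(probe, exemplars, weights, c=1.0):
    """exemplars: dict mapping category -> list of stored MDS coordinates.
    Returns Luce-choice probabilities from summed similarities."""
    evidence = {cat: sum(similarity(probe, e, weights, c) for e in items)
                for cat, items in exemplars.items()}
    total = sum(evidence.values())
    return {cat: s / total for cat, s in evidence.items()}

# Invented 2-D 'face space'; attention concentrated on dimension 0.
exemplars = {'A': [(0.1, 0.2), (0.2, 0.9)], 'B': [(0.9, 0.1), (0.8, 0.8)]}
probs = classification_probs((0.15, 0.5), exemplars, weights=(0.9, 0.1), c=3.0)
```

The abstract's finding that attention is allocated differently for classification and recognition corresponds here to fitting different `weights` vectors for the two tasks.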
Representation is Representation of Similarities
- Behavioral and Brain Sciences, 1996
Abstract (Cited by 110, 21 self):
Advanced perceptual systems are faced with the problem of securing a principled relationship between the world and its internal representation. I propose a unified approach to visual representation, based on Shepard's (1968) notion of second-order isomorphism. According to the proposed theory, a shape is represented internally by the responses of a few tuned modules, each of which is broadly selective for some reference shape, whose similarity to the stimulus it measures. The result is a philosophically appealing, computationally feasible, biologically credible, and formally veridical representation of a distal shape space. This approach supports representation of and discrimination among shapes radically different from the reference ones, while bypassing the need for the computationally problematic decomposition into parts; it also addresses the needs of shape categorization, and can be used to derive a range of models of perceived similarity.
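The "few tuned modules" scheme admits a compact sketch: represent a stimulus by its vector of similarities to a handful of reference shapes, then compare shapes by proximity in that similarity space, so that representation-space distances mirror distal shape similarities (second-order isomorphism). The Gaussian tuning and the treatment of shapes as plain feature vectors below are invented stand-ins for the paper's modules:

```python
import math

def module_response(stimulus, reference, sigma=1.0):
    """One tuned module: Gaussian response peaking when the stimulus
    matches the module's reference shape (shapes = feature vectors here)."""
    d2 = sum((a - b) ** 2 for a, b in zip(stimulus, reference))
    return math.exp(-d2 / (2 * sigma ** 2))

def represent(stimulus, references, sigma=1.0):
    """Second-order representation: the vector of similarities to the
    references, not a reconstruction of the stimulus itself."""
    return [module_response(stimulus, r, sigma) for r in references]

def rep_distance(r1, r2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(r1, r2)))

refs = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]   # three reference 'shapes'
a = represent((0.1, 0.1), refs)
b = represent((0.2, 0.1), refs)               # similar to the first stimulus
c = represent((0.9, 0.9), refs)               # dissimilar stimulus
```

Note that the representation never reconstructs the stimulus; novel shapes far from every reference are still discriminable as long as their similarity vectors differ.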
Recent views on conceptual structure
- Psychological Bulletin, 1992
Abstract (Cited by 110, 0 self):
This article reviews theories of concept structure proposed since the mid-1970s, when the discovery of typicality effects led to the rejection of the view that instances of a concept share necessary and sufficient attributes. To replace that classical view, psychologists proposed the family resemblance and exemplar views (and hybrids of the 2), which argue that instances of a concept share a certain level of overall similarity, rather than necessary and sufficient attributes. These similarity-based views account for much of the typicality data but fail to provide an adequate explanation of the coherence of conceptual categories and of various context effects. Recently proposed explanation-based accounts address these issues but raise further questions about the distinction between concept-specific information and general knowledge and about the relationship between conceptual knowledge and various forms of inference. Psychologists have traditionally equated knowing the meaning of a word with knowing (or perhaps more accurately, having) the concept labeled by a word (e.g., Ogden & Richards, 1956; but see Clark, 1983). In this approach, a concept is assumed to be the mental representation of a category or class (Gleitman, Armstrong, & Gleitman, 1983; Medin & Smith, 1984). The contents of such a mental representation (i.e., the intension of a word), in concert with certain assumptions about how those contents are processed, have been taken to explain a wide variety of phenomena, including people's knowledge of linguistic relations (e.g., synonymy, antonymy, hyponymy), how people recognize the objects, events, and so on properly labeled by the word (i.e., the extension of the word), how people understand novel combinations of the word with other words, and the inferences people are able to make about an object, event, and so on, properly labeled by the word (Johnson-Laird, Herrmann, &
Determinants of wordlikeness: Phonotactics or lexical neighborhoods?
- Journal of Memory and Language, 2001
Abstract (Cited by 91, 0 self):
Wordlikeness, the extent to which a sound sequence is typical of words in a language, affects language acquisition, language processing, and verbal short-term memory. Wordlikeness has generally been equated with phonotactic knowledge of the possible or probable sequences of sounds within a language. Alternatively, wordlikeness might be derived directly from the mental lexicon, depending only on similarity to known words. This paper tests these two cognitively different possibilities by comparing measures of phonotactic probability and lexical influence, including a new model of lexical neighborhoods, in their ability to explain empirical wordlikeness judgments. Our data show independent contributions of both phonotactic probability and the lexicon, with relatively greater influence from the lexicon. The influence of a lexical neighbor is found to be an inverted-U-shaped function of its token frequency. However, our results also indicate that current measures are limited in their ability to account for sequence typicality. © 2001 Academic Press. Key Words: wordlikeness; phonotactics; token frequency; lexical neighborhood; sequence typicality. Any speaker of English can tell that Zbigniew
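The two candidate sources of wordlikeness can be caricatured side by side: a smoothed bigram score standing in for phonotactic probability, versus a count of edit-distance-1 neighbors standing in for lexical influence. The five-word lexicon, letter alphabet, and add-one smoothing are invented simplifications, and the paper's actual neighborhood model is similarity-weighted by token frequency, not a raw count:

```python
from collections import Counter
import math

def train_bigrams(lexicon):
    bigrams = Counter(p for w in lexicon for p in zip(w, w[1:]))
    unigrams = Counter(ch for w in lexicon for ch in w[:-1])
    return bigrams, unigrams

def phonotactic_logprob(word, bigrams, unigrams, alphabet=26):
    """Phonotactic account: summed add-one-smoothed log bigram
    probabilities over adjacent symbols (toy illustration)."""
    return sum(math.log((bigrams[(a, b)] + 1) / (unigrams[a] + alphabet))
               for a, b in zip(word, word[1:]))

def within_one_edit(a, b):
    """True if b is one substitution, insertion, or deletion from a."""
    if a == b or abs(len(a) - len(b)) > 1:
        return False
    if len(a) == len(b):
        return sum(x != y for x, y in zip(a, b)) == 1
    if len(a) > len(b):
        a, b = b, a                     # make a the shorter string
    return any(b[:i] + b[i + 1:] == a for i in range(len(b)))

def neighborhood_size(word, lexicon):
    """Lexical account (crude): number of known words within one edit."""
    return sum(1 for w in lexicon if within_one_edit(word, w))

lexicon = ['cat', 'bat', 'rat', 'can', 'cap']
bigrams, unigrams = train_bigrams(lexicon)
```

The paper's point is that these two measures dissociate empirically, so a regression on human wordlikeness judgments shows independent contributions from each.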
An instance theory of attention and memory
- Psychological Review, 2002
Abstract (Cited by 78, 10 self):
An instance theory of attention and memory (ITAM) is presented that integrates formal theories of attention and memory phenomena by exploiting commonalities in their formal structure. The core idea in each theory is that performance depends on a choice process that can be modeled as a race between competing alternatives. Attention and categorization are viewed as different perspectives on the same race. Attention selects objects by categorizing them; objects are categorized by attending to them. ITAM incorporates each of its ancestors as a special case, so it inherits their successes. Imagine yourself on your way home from work. You walk into the parking lot and look for your car. It takes you a second, perhaps. Now imagine your colleagues analyzing the simple act of cognition underlying that look. A student of attention would be interested in how your gaze went to the cars rather than other structural features. A student of categorization would be interested in how you knew those were cars in the parking lot. And a student of memory would be interested in how you did (or did not) pick your own car out of the group. These differences in perspective
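The "choice process as a race" core can be sketched as a simulation: each stored instance samples an exponential finishing time at a rate tied to its similarity to the probe, and the first finisher's label is the response. The stimuli, similarity function, and rates below are invented; ITAM's actual equations are in the paper:

```python
from collections import Counter
import math
import random

def similarity(a, b, c=2.0):
    return math.exp(-c * sum(abs(x - y) for x, y in zip(a, b)))

def race_trial(probe, instances, rng):
    """instances: list of (features, label). Each memory trace finishes
    at an exponential time whose rate is its similarity to the probe;
    the fastest trace determines the response."""
    finish = [(rng.expovariate(similarity(probe, f)), label)
              for f, label in instances]
    return min(finish)[1]

def choice_probs(probe, instances, n=5000, seed=0):
    rng = random.Random(seed)
    wins = Counter(race_trial(probe, instances, rng) for _ in range(n))
    return {lab: k / n for lab, k in wins.items()}

# Invented traces: two 'car' instances near the probe, one distant 'truck'.
instances = [((0.0,), 'car'), ((0.1,), 'car'), ((1.0,), 'truck')]
probs = choice_probs((0.0,), instances)
```

For exponential racers the win probability of a label equals the ratio of its summed rate to the total rate, a Luce choice rule, which is the formal commonality that lets one race model cover attention, categorization, and memory retrieval at once.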
Contextualizing concepts using a mathematical generalization of the quantum formalism
- Trends in Cognitive Science, 2000
Abstract (Cited by 78, 40 self):
We outline the rationale and preliminary results of using the State Context Property (SCOP) formalism, originally developed as a generalization of quantum mechanics, to describe the contextual manner in which concepts are evoked, used, and combined to generate meaning. The quantum formalism was developed to cope with problems arising in the description of (1) the measurement process, and (2) the generation of new states with new properties when particles become entangled. Similar problems arising with concepts motivated the formal treatment introduced here. Concepts are viewed not as fixed representations, but as entities existing in states of potentiality that require interaction with a context—a stimulus or another concept—to ‘collapse’ to an instantiated form (e.g. exemplar, prototype, or other possibly imaginary instance). The stimulus situation plays the role of the measurement in physics, acting as context that induces a change of the cognitive state from superposition state to collapsed state. The collapsed state is more likely to consist of a conjunction of concepts for associative than analytic thought because more stimulus or concept properties take part in the collapse. We provide two contextual measures of conceptual distance—one using collapse probabilities and the other weighted properties—and show how they can be applied to conjunctions using the pet fish problem.
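The "weighted properties" distance can be illustrated very loosely: a context reweights each concept's property vector, and conceptual distance is computed between the reweighted vectors. This is a plain cosine sketch, not the SCOP formalism itself, and all concepts, properties, and weights below are invented:

```python
import math

def apply_context(concept, context):
    """A context reweights a concept's property vector, loosely playing
    the role of the 'measurement' that shifts the concept's state."""
    return {p: v * context.get(p, 0.0) for p, v in concept.items()}

def conceptual_distance(c1, c2, context):
    """Toy weighted-properties distance: 1 - cosine similarity of the
    two context-weighted property vectors."""
    a, b = apply_context(c1, context), apply_context(c2, context)
    props = set(a) | set(b)
    dot = sum(a.get(p, 0.0) * b.get(p, 0.0) for p in props)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return 1.0 if na == 0 or nb == 0 else 1.0 - dot / (na * nb)

# Invented property vectors for the pet fish problem.
pet = {'lives_in_home': 1.0, 'cuddly': 0.8, 'has_fins': 0.1}
fish = {'has_fins': 1.0, 'lives_in_water': 1.0, 'cuddly': 0.05}
aquarium = {'has_fins': 1.0, 'lives_in_home': 0.3}   # 'pet fish'-like context
neutral = {p: 1.0 for p in set(pet) | set(fish)}
```

The intended behavior: under the aquarium-like context, pet and fish move closer together than under a neutral context, which is the qualitative shape of the pet fish effect the abstract targets.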
Rules and exemplars in categorization, identification, and recognition
- Journal of Experimental Psychology: Learning, Memory, and Cognition, 1989
Abstract (Cited by 77, 8 self):
Subjects learned to classify perceptual stimuli varying along continuous, separable dimensions into rule-described categories. The categories were designed to contrast the predictions of a selective-attention exemplar model and a simple rule-based model formalizing an economy-of-description view. Converging evidence about categorization strategies was obtained by also collecting identification and recognition data and by manipulating strategies via instructions. In free-strategy conditions, the exemplar model generally provided an accurate quantitative account of identification, categorization, and recognition performance, and it allowed for the interrelationship of these paradigms within a unified framework. Analyses of individual subject data also provided some evidence for the use of rules, but in general, the rules seemed to have a great deal in common with exemplar storage processes. Classification and recognition performance for subjects given explicit instructions to use specific rules contrasted dramatically with performance in the free-strategy conditions and could not be predicted by the exemplar model. Markedly different theoretical approaches have been applied to account for the learning and representation of well-defined categories structured according to simple rules and more natural, ill-defined categories (Rosch, 1973; E. E. Smith & Medin, 1981). In the case of well-defined categories, it is generally assumed that people formulate and test hypotheses concerning the "rules" that determine category membership
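The contrast the abstract draws can be made concrete: a one-dimensional boundary rule and a summed-similarity exemplar classifier disagree exactly on exception-like probes near stored opposite-category items. The stimuli, boundary, and sensitivity parameter below are invented, not the paper's design:

```python
import math

def rule_classify(x, dim=0, boundary=0.5):
    """'Economy of description' rule: a criterion on a single dimension."""
    return 'A' if x[dim] < boundary else 'B'

def exemplar_classify(x, exemplars, c=4.0):
    """Summed exponential-similarity (exemplar) classification."""
    s = {'A': 0.0, 'B': 0.0}
    for e, label in exemplars:
        s[label] += math.exp(-c * sum(abs(p - q) for p, q in zip(x, e)))
    return max(s, key=s.get)

# Invented 2-D stimuli; one 'A' item sits in the rule's 'B' region.
exemplars = [((0.2, 0.2), 'A'), ((0.3, 0.8), 'A'), ((0.9, 0.1), 'A'),
             ((0.8, 0.3), 'B'), ((0.9, 0.9), 'B')]
probe = (0.85, 0.15)          # near the stored exception
```

Designs like this let the two models make opposite predictions for the same probe, which is what makes the instructed-rule conditions in the abstract diagnostic.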