CiteSeerX citation results for: Word and Object (1960) by W. V. O. Quine

Sorted by: Citation Count
Results 1 - 10 of 1,203

A solution to Plato’s problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge

by Thomas K. Landauer, Susan T. Dumais - Psychological Review, 1997
"... How do people know as much as they do with as little information as they get? The problem takes many forms; learning vocabulary from text is an especially dramatic and convenient case for research. A new general theory of acquired similarity and knowledge representation, latent semantic analysis (LS ..."
Abstract - Cited by 1816 (10 self) - Add to MetaCart
How do people know as much as they do with as little information as they get? The problem takes many forms; learning vocabulary from text is an especially dramatic and convenient case for research. A new general theory of acquired similarity and knowledge representation, latent semantic analysis (LSA), is presented and used to successfully simulate such learning and several other psycholinguistic phenomena. By inducing global knowledge indirectly from local co-occurrence data in a large body of representative text, LSA acquired knowledge about the full vocabulary of English at a comparable rate to schoolchildren. LSA uses no prior linguistic or perceptual similarity knowledge; it is based solely on a general mathematical learning method that achieves powerful inductive effects by extracting the right number of dimensions (e.g., 300) to represent objects and contexts. Relations to other theories, phenomena, and problems are sketched.
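
The dimension-reduction step the abstract describes can be illustrated with a small sketch: build a term-by-document co-occurrence matrix, truncate its singular value decomposition, and compare words by cosine similarity in the reduced space. The toy corpus and the choice of 2 dimensions below are illustrative assumptions; the paper works from large text collections and roughly 300 dimensions.

    # Toy LSA sketch: term-document counts -> truncated SVD -> cosine similarity.
    # Corpus, vocabulary, and k=2 are illustrative assumptions, not the paper's setup.
    import numpy as np

    docs = [
        "the doctor treated the patient in the hospital",
        "the nurse helped the doctor at the hospital",
        "the pilot flew the plane to the airport",
        "the plane landed at the airport at night",
    ]
    vocab = sorted({w for d in docs for w in d.split()})
    index = {w: i for i, w in enumerate(vocab)}

    # Term-by-document count matrix.
    X = np.zeros((len(vocab), len(docs)))
    for j, d in enumerate(docs):
        for w in d.split():
            X[index[w], j] += 1

    # Truncated SVD: keep k latent dimensions.
    k = 2
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    word_vecs = U[:, :k] * s[:k]   # word vectors in the latent space

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    # "nurse" and "patient" never co-occur in the same document, yet the reduced
    # space tends to place them closer together than unrelated words, because
    # they share contexts (doctor, hospital) - the indirect induction LSA relies on.
    print(cosine(word_vecs[index["nurse"]], word_vecs[index["patient"]]))
    print(cosine(word_vecs[index["nurse"]], word_vecs[index["plane"]]))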

Citation Context

...f events results in beliefs that are usually correct or behaviors that are usually adaptive in a large, potentially infinite variety of situations. Following Plato, philosophers (e.g., Goodman, 1972; Quine, 1960), psychologists (e.g., Shepard, 1987; Vygotsky, 1968), linguists (e.g., Chomsky, 1991; Jackendoff, 1992; Pinker, 1990), computation scientists (e.g., Angluin & Smith, 1983; Michaelski, 1983) and comb...

A method for disambiguating word senses in a large corpus

by William A. Gale, Kenneth W. Church, David Yarowsky - Computers and the Humanities, 1992
"... Word sense disambiguation has been recognized as a major problem in natural language processing research for over forty years. Both quantitive and qualitative methods have been tried, but much of this work has been stymied by difficulties in acquiring appropriate lexical resources, such as semantic ..."
Abstract - Cited by 273 (14 self) - Add to MetaCart
Word sense disambiguation has been recognized as a major problem in natural language processing research for over forty years. Both quantitative and qualitative methods have been tried, but much of this work has been stymied by difficulties in acquiring appropriate lexical resources, such as semantic networks and annotated corpora. In particular, much of the work on qualitative methods has had to focus on “toy” domains since currently available semantic networks generally lack broad coverage. Similarly, much of the work on quantitative methods has had to depend on small amounts of hand-labeled text for testing and training. We have achieved considerable progress recently by taking advantage of a new source of testing and training materials. Rather than depending on small amounts of hand-labeled text, we have been making use of relatively large amounts of parallel text, text such as the Canadian Hansards, which are available in multiple languages. The translation can often be used in lieu of hand-labeling. For example, consider the polysemous word sentence, which has two major senses: (1) a judicial sentence, and (2) a syntactic sentence. We can collect a number of sense (1) examples by extracting instances that are translated as peine, and we can collect a number of sense (2) examples by extracting instances that are translated as
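
The core trick above, using a translation in place of a hand-assigned sense label, can be sketched in a few lines: occurrences of the ambiguous word are labeled by the French word they align with, and a simple Bayesian context classifier is trained on those labels. The tiny aligned corpus, the context features, and the add-one smoothing are assumptions for illustration; the paper's actual model and feature set differ.

    # Sketch: treat French translations as sense labels for the ambiguous word
    # "sentence" (peine = judicial, phrase = syntactic), then score new contexts
    # with a naive Bayes classifier. The aligned corpus, features, and smoothing
    # below are illustrative assumptions.
    import math
    from collections import Counter, defaultdict

    aligned = [  # (English context words, French word aligned to "sentence")
        (["judge", "imposed", "prison"], "peine"),
        (["court", "reduced", "prison"], "peine"),
        (["grammar", "verb", "noun"],    "phrase"),
        (["paragraph", "words", "verb"], "phrase"),
    ]

    word_counts = defaultdict(Counter)
    sense_counts = Counter()
    for context, sense in aligned:
        sense_counts[sense] += 1
        word_counts[sense].update(context)

    def score(sense, context, vocab_size=1000):
        # log P(sense) + sum of log P(word | sense), with add-one smoothing
        total = sum(word_counts[sense].values())
        s = math.log(sense_counts[sense] / sum(sense_counts.values()))
        for w in context:
            s += math.log((word_counts[sense][w] + 1) / (total + vocab_size))
        return s

    test_context = ["judge", "prison", "years"]
    print(max(sense_counts, key=lambda sense: score(sense, test_context)))  # peine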

Learning words from sights and sounds: a computational model

by Deb K. Roy, Alex P. Pentland, 2002
"... This paper presents an implemented computational model of word acquisition which learns directly from raw multimodal sensory input. Set in an information theoretic framework, the model acquires a lexicon by finding and statistically modeling consistent cross-modal structure. The model has been imple ..."
Abstract - Cited by 270 (31 self) - Add to MetaCart
This paper presents an implemented computational model of word acquisition which learns directly from raw multimodal sensory input. Set in an information theoretic framework, the model acquires a lexicon by finding and statistically modeling consistent cross-modal structure. The model has been implemented in a system using novel speech processing, computer vision, and machine learning algorithms. In evaluations the model successfully performed speech segmentation, word discovery and visual categorization from spontaneous infant-directed speech paired with video images of single objects. These results demonstrate the possibility of using state-of-the-art techniques from sensory pattern recognition and machine learning to implement cognitive models which can process raw sensor data without the need for human transcription or labeling.
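
A deliberately simplified stand-in for the cross-modal pairing idea: treat each utterance-plus-scene episode as a set of word candidates paired with a visual category, and score candidate (word, category) pairings by mutual information across episodes. The real system works from raw audio and video with its own speech and vision front ends; the symbolic episodes and the scoring function below are illustrative assumptions.

    # Score (word candidate, visual category) pairs by mutual information over
    # paired episodes; consistent cross-modal structure yields higher scores.
    # The symbolic episodes below stand in for the paper's raw audio/video input.
    import math
    from collections import Counter

    episodes = [  # (word candidates heard, object category in view)
        ({"look", "at", "the", "ball"},    "ball"),
        ({"a", "red", "ball"},             "ball"),
        ({"look", "see", "the", "doggie"}, "dog"),
        ({"nice", "doggie"},               "dog"),
        ({"look", "at", "the", "shoe"},    "shoe"),
    ]

    def mutual_information(word, category):
        n = len(episodes)
        joint = Counter((word in words, cat == category) for words, cat in episodes)
        mi = 0.0
        for (w_in, c_in), k in joint.items():
            p_xy = k / n
            p_x = sum(v for (wx, _), v in joint.items() if wx == w_in) / n
            p_y = sum(v for (_, cy), v in joint.items() if cy == c_in) / n
            mi += p_xy * math.log(p_xy / (p_x * p_y))
        return mi

    print(mutual_information("doggie", "dog"))  # high: reliable co-occurrence
    print(mutual_information("look", "dog"))    # near zero: no consistent pairing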

Introduction to the special issue on word sense disambiguation

by Nancy Ide - Computational Linguistics, 1998
"... ..."
Abstract - Cited by 265 (4 self) - Add to MetaCart
Abstract not found
(Show Context)

Citation Context

...bject of discussion since antiquity: Aristotle devoted a section of his Topics to this subject in 350 B.C. Since then, philosophers and linguists have continued to discuss the topic at length (e.g. Quine, 1960; Asprejan, 1974; Lyons, 1977; Weinrich, 1980; Cruse, 1986), but the lack of resolution over 2,000 years is striking. 1.6.2 Granularity. One of the foremost problems for WSD is to determine the approp...

The importance of shape in early lexical learning

by B. Smith, S. Jones - Cognitive Development, 1988
"... We ask if certain dimensions of perceptual similarity are weighted more heavily than others in determining word extension. The specific dimensions examined were shape, size, and texture. In four experiments, subjects were asked either to extend a novel count noun to new instances or, in a nonword cl ..."
Abstract - Cited by 235 (31 self) - Add to MetaCart
We ask if certain dimensions of perceptual similarity are weighted more heavily than others in determining word extension. The specific dimensions examined were shape, size, and texture. In four experiments, subjects were asked either to extend a novel count noun to new instances or, in a nonword classification task, to put together objects that go together. The subjects were 2-year-olds, 3-year-olds, and adults. The results of all four experiments indicate that 2- and 3-year-olds and adults all weight shape more heavily than they do size or texture. This observed emphasis on shape, however, depends on the age of the subject and the task. First, there is a developmental trend. The shape bias increases in strength and generality from 2 to 3 years of age and more markedly from early childhood to adulthood. Second, in young children, the shape bias is much stronger in word extension than in nonword classification tasks. These results suggest that the development of the shape bias originates in language learning - it reflects a fact about language - and does not stem from general perceptual processes. Within the first few years of life, children learn many hundreds of words for different kinds of natural objects and artifacts. As many have noted, the rapidity

Citation Context

...reasonable, descriptions of the same scene. The child might guess that the word refers to a particular part or property of the rabbit or to the rabbit’s relation to some other aspect of the scene (cf Quine, 1960). Even if the child assumed that the word referred to concrete whole objects, the question of category level would remain: the word rabbit might refer to some higher level category such as ‘animal,’ ...

How to build a baby: II. Conceptual primitives

by Jean M. Mandler - Psychological Review, 1992
"... A mechanism of perceptual analysis by which infants derive meaning from perceptual activity is described. Infants use this mechanism to redescribe perceptual information into image-schematic format. Image-schemas create conceptual structure from the spatial structure of objects and their movements, ..."
Abstract - Cited by 221 (5 self) - Add to MetaCart
A mechanism of perceptual analysis by which infants derive meaning from perceptual activity is described. Infants use this mechanism to redescribe perceptual information into image-schematic format. Image-schemas create conceptual structure from the spatial structure of objects and their movements, resulting in notions such as animacy, inanimacy, agency, and containment. These earliest meanings are nonpropositional, analogical representations grounded in the perceptual world of the infant. In contrast with most perceptual processing, which is not analyzed in this fashion, redescription into image-schematic format simplifies perceptual information and makes it potentially accessible for purposes of concept formation and thought. In addition to enabling preverbal thought, image-schemas provide a foundation for language acquisition by creating an interface between the continuous processes of perception and the discrete nature of language. When you keep putting questions to Nature and Nature keeps saying "no, " it is not unreasonable to suppose that somewhere among the things you believe there is something that isn't true. (Fodor, 1981, p. 316) One of the least understood developments in infancy is how

Citation Context

...g the names for things. Even to learn names, it has been thought necessary to propose that children have a bias to assume that labels refer to whole objects rather than to object parts (Markman, 1989; Quine, 1960). A similar kind of bias may operate with respect to various relational constructions. Brown (1973) and others showed that a number of relational morphemes, such as the prepositions in and on, are ea...

Word Learning as Bayesian Inference

by Fei Xu, Joshua B. Tenenbaum - In Proceedings of the 22nd Annual Conference of the Cognitive Science Society, 2000
"... The authors present a Bayesian framework for understanding how adults and children learn the meanings of words. The theory explains how learners can generalize meaningfully from just one or a few positive examples of a novel word’s referents, by making rational inductive inferences that integrate pr ..."
Abstract - Cited by 175 (33 self) - Add to MetaCart
The authors present a Bayesian framework for understanding how adults and children learn the meanings of words. The theory explains how learners can generalize meaningfully from just one or a few positive examples of a novel word’s referents, by making rational inductive inferences that integrate prior knowledge about plausible word meanings with the statistical structure of the observed examples. The theory addresses shortcomings of the two best known approaches to modeling word learning, based on deductive hypothesis elimination and associative learning. Three experiments with adults and children test the Bayesian account’s predictions in the context of learning words for object categories at multiple levels of a taxonomic hierarchy. Results provide strong support for the Bayesian account over competing accounts, in terms of both quantitative model fits and the ability to explain important qualitative phenomena. Several extensions of the basic theory are discussed, illustrating the broader potential for Bayesian models of word learning.
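
A minimal sketch of the inference the abstract describes: hypotheses are nested categories, the likelihood embodies a size principle (smaller hypotheses consistent with the examples win out as more examples arrive), and generalization to a new item averages over the posterior. The toy taxonomy and the uniform prior below are assumptions; the paper uses priors derived from similarity judgments and a richer hypothesis space.

    # Bayesian word learning in miniature: nested candidate extensions, a size-
    # principle likelihood, and generalization by averaging over the posterior.
    # The taxonomy and the uniform prior are illustrative assumptions.
    hypotheses = {
        "dalmatians": {"max", "dot", "pongo"},
        "dogs":       {"max", "dot", "pongo", "rex", "fido", "lassie"},
        "animals":    {"max", "dot", "pongo", "rex", "fido", "lassie",
                       "tweety", "nemo", "felix", "tom", "jerry", "bambi"},
    }
    prior = {h: 1 / len(hypotheses) for h in hypotheses}

    def posterior(examples):
        # Size principle: P(examples | h) = (1/|h|)^n if every example falls in h.
        scores = {h: prior[h] * (1 / len(ext)) ** len(examples)
                  if all(x in ext for x in examples) else 0.0
                  for h, ext in hypotheses.items()}
        z = sum(scores.values())
        return {h: s / z for h, s in scores.items()}

    def p_applies(item, examples):
        # Probability the word extends to `item`, averaged over hypotheses.
        return sum(p for h, p in posterior(examples).items()
                   if item in hypotheses[h])

    # One Dalmatian example leaves the category level ambiguous; three examples
    # make a broad meaning a suspicious coincidence and sharpen the inference
    # toward the subordinate category.
    print(p_applies("rex", ["max"]))                  # ~0.43
    print(p_applies("rex", ["max", "dot", "pongo"]))  # ~0.12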

Citation Context

...n models of word learning. Keywords: word learning, Bayesian inference, concepts, computational modeling Learning even the simplest names for object categories presents a difficult induction problem (Quine, 1960). Consider a typical dilemma faced by a child learning English. Upon observing a competent adult speaker use the word dog in reference to Max, a particular Dalmatian running by, what can the child in...

File Change Semantics and the Familiarity Theory of Definiteness

by Irene Heim - Reprinted in Formal Semantics: The Essential Readings, 1983
"... ..."
Abstract - Cited by 160 (0 self) - Add to MetaCart
Abstract not found
(Show Context)

Citation Context

... his or her own illustrations. (35) Let F be a file, and let p be a molecular proposition whose immediate constituents are a negation operator and the proposition q. Then: Sat(F + p) = {a_N ∈ Sat(F): there is no b_M ⊇ a_N such that b_M ∈ Sat(F + q)}. Notes: The ideas contained in this article are elaborated more fully in my Ph.D. thesis (Heim 1982). All the people whose help I acknowledge there should also be mentioned here, in particular Angelika Kratzer and my thesis advisor Barbara Partee. 1 The label is due to Hawkins (1978). 2 See in particular Russell (1919, Ch. 16), Quine (1960), Kaplan (1972), and Geach (1962). 3 Karttunen (1968a, b, 1976). 4 The file metaphor was first suggested to me by Angelika Kratzer, in response to an earlier attempt of mine to modify Grice's and Stalnaker's notion of "common ground" (cf. especially Stalnaker 1979) in such a way as to impose on common grounds an essentially file-like structure. I subsequently found uses of the file metaphor for more or less similar purposes elsewhere in the literature, e.g. in Karttunen (1976). With respect to their role in a model of semantics, my files are closely related not only to Stalnaker's "common g...
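
Rule (35) quoted above can be rendered as a toy computation: an assignment in Sat(F) survives the update with a negated proposition exactly when none of its extensions satisfies F incremented with q. The dictionary encoding of partial assignments and the example file below are illustrative assumptions, not Heim's own formalism.

    # Toy rendering of rule (35): Sat(F + not-q) keeps an assignment a from
    # Sat(F) iff no assignment b extending a belongs to Sat(F + q).
    # Partial assignments are dicts from discourse referents to individuals;
    # this encoding is an illustrative assumption.
    def extends(b, a):
        # b extends a: b agrees with a wherever a is defined
        return all(b.get(r) == v for r, v in a.items())

    def update_with_negation(sat_F, sat_F_plus_q):
        return [a for a in sat_F
                if not any(extends(b, a) for b in sat_F_plus_q)]

    # Example: F has introduced referent 1; q would add referent 2 and require
    # both referents to denote "b".
    sat_F = [{1: "a"}, {1: "b"}, {1: "c"}]
    sat_F_plus_q = [{1: "b", 2: "b"}]
    print(update_with_negation(sat_F, sat_F_plus_q))  # [{1: 'a'}, {1: 'c'}]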

Situation models in language comprehension and memory

by R. A. Zwaan, G. A. Radvansky - Psychological Bulletin, 1998
"... ..."
Abstract - Cited by 153 (8 self) - Add to MetaCart
Abstract not found

Word sense disambiguation: The state of the art

by Nancy Ide, Jean Véronis - Computational Linguistics, 1998
"... The automatic disambiguation of word senses has been an interest and concern since the earliest days of computer treatment of language in the 1950's. Sense disambiguation is an “intermediate task ” (Wilks and Stevenson, 1996) which is not an end in itself, but rather is necessary at one level o ..."
Abstract - Cited by 152 (3 self) - Add to MetaCart
The automatic disambiguation of word senses has been an interest and concern since the earliest days of computer treatment of language in the 1950's. Sense disambiguation is an “intermediate task” (Wilks and Stevenson, 1996) which is not an end in itself, but rather is necessary at one level or another to accomplish most natural language processing tasks. It is