Results 1 - 10 of 480
The Generative Lexicon
Computational Linguistics, 1991
"... this paper, I will discuss four major topics relating to current research in lexical semantics: methodology, descriptive coverage, adequacy of the representation, and the computational usefulness of representations. In addressing these issues, I will discuss what I think are some of the central prob ..."
Abstract
-
Cited by 1341 (45 self)
- Add to MetaCart
(Show Context)
In this paper, I will discuss four major topics relating to current research in lexical semantics: methodology, descriptive coverage, adequacy of the representation, and the computational usefulness of representations. In addressing these issues, I will discuss what I think are some of the central problems facing the lexical semantics community, and suggest ways of best approaching these issues. Then, I will provide a method for the decomposition of lexical categories and outline a theory of lexical semantics embodying a notion of cocompositionality and type coercion, as well as several levels of semantic description, where the semantic load is spread more evenly throughout the lexicon. I argue that lexical decomposition is possible if it is performed generatively. Rather than assuming a fixed set of primitives, I will assume a fixed number of generative devices that can be seen as constructing semantic expressions. I develop a theory of Qualia Structure, a representation language for lexical items, which renders much lexical ambiguity in the lexicon unnecessary, while still explaining the systematic polysemy that words carry. Finally, I discuss how individual lexical structures can be integrated into the larger lexical knowledge base through a theory of lexical inheritance. This provides us with the necessary principles of global organization for the lexicon, enabling us to fully integrate our natural language lexicon into a conceptual whole.
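The Qualia Structure representation described in this abstract is easiest to picture as a small typed record attached to each lexical item. The sketch below is illustrative only: the four roles (formal, constitutive, telic, agentive) are the ones standardly associated with the theory, but the field names and the concrete entry for "novel" are assumptions of mine, not material from the paper.

from dataclasses import dataclass

@dataclass
class Qualia:
    formal: str        # what kind of thing the item is
    constitutive: str  # what it is made of, its parts
    telic: str         # its purpose or built-in function
    agentive: str      # how it comes into being

# Hypothetical entry: one lexical item whose qualia roles let readings be
# derived generatively instead of listing separate senses in the lexicon.
novel = Qualia(
    formal="book",                # a novel is, formally, a kind of book
    constitutive="narrative text",
    telic="read",                 # "enjoy the novel" can coerce to "enjoy reading the novel"
    agentive="write",             # "begin the novel" can coerce to "begin writing/reading it"
)

The telic and agentive roles are what a type-coercion account appeals to when an event-selecting verb combines with an object-denoting noun, which is the kind of systematic polysemy the abstract mentions.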
Using information content to evaluate semantic similarity in a taxonomy
In Proceedings of the 14th International Joint Conference on Artificial Intelligence (IJCAI-95), 1995
"... philip.resnikfleast.sun.com This paper presents a new measure of semantic similarity in an IS-A taxonomy, based on the notion of information content. Experimental evaluation suggests that the measure performs encouragingly well (a correlation of r = 0.79 with a benchmark set of human similarity judg ..."
Abstract
-
Cited by 1097 (8 self)
- Add to MetaCart
This paper presents a new measure of semantic similarity in an IS-A taxonomy, based on the notion of information content. Experimental evaluation suggests that the measure performs encouragingly well (a correlation of r = 0.79 with a benchmark set of human similarity judgments, with an upper bound of r = 0.90 for human subjects performing the same task), and significantly better than the traditional edge counting approach (r = 0.66).
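The information-content measure this abstract reports on is usually stated as the information content of the most informative concept that subsumes both arguments; in my notation (not quoted from the paper):

\mathrm{sim}(c_1, c_2) = \max_{c \,\in\, S(c_1, c_2)} \bigl[ -\log p(c) \bigr]

where S(c_1, c_2) is the set of concepts in the IS-A taxonomy that subsume both c_1 and c_2, and p(c) is the probability of encountering an instance of concept c, so that -\log p(c) is its information content.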
Interpretation as Abduction
1990
"... An approach to abductive inference developed in the TACITUS project has resulted in a dramatic simplification of how the problem of interpreting texts is conceptualized. Its use in solving the local pragmatics problems of reference, compound nominals, syntactic ambiguity, and metonymy is described ..."
Abstract
-
Cited by 687 (38 self)
- Add to MetaCart
An approach to abductive inference developed in the TACITUS project has resulted in a dramatic simplification of how the problem of interpreting texts is conceptualized. Its use in solving the local pragmatics problems of reference, compound nominals, syntactic ambiguity, and metonymy is described and illustrated. It also suggests an elegant and thorough integration of syntax, semantics, and pragmatics.
Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory
1995
"... Damage to the hippocampal system disrupts recent memory but leaves remote memory intact. The account presented here suggests that memories are first stored via synaptic changes in the hippocampal system, that these changes support reinstatement of recent memories in the neocortex, that neocortical s ..."
Abstract
-
Cited by 675 (39 self)
- Add to MetaCart
Damage to the hippocampal system disrupts recent memory but leaves remote memory intact. The account presented here suggests that memories are first stored via synaptic changes in the hippocampal system, that these changes support reinstatement of recent memories in the neocortex, that neocortical synapses change a little on each reinstatement, and that remote memory is based on accumulated neocortical changes. Models that learn via changes to connections help explain this organization. These models discover the structure in ensembles of items if learning of each item is gradual and interleaved with learning about other items. This suggests that the neocortex learns slowly to discover the structure in ensembles of experiences. The hippocampal system permits rapid learning of new items without disrupting this structure, and reinstatement of new memories interleaves them with others to integrate them into structured neocortical memory systems.
Semantic Similarity in a Taxonomy: An Information-Based Measure and its Application to Problems of Ambiguity in Natural Language
1999
"... This article presents a measure of semantic similarityinanis-a taxonomy based on the notion of shared information content. Experimental evaluation against a benchmark set of human similarity judgments demonstrates that the measure performs better than the traditional edge-counting approach. The a ..."
Abstract
-
Cited by 609 (9 self)
- Add to MetaCart
(Show Context)
This article presents a measure of semantic similarity in an IS-A taxonomy based on the notion of shared information content. Experimental evaluation against a benchmark set of human similarity judgments demonstrates that the measure performs better than the traditional edge-counting approach. The article presents algorithms that take advantage of taxonomic similarity in resolving syntactic and semantic ambiguity, along with experimental results demonstrating their effectiveness.
1. Introduction
Evaluating semantic relatedness using network representations is a problem with a long history in artificial intelligence and psychology, dating back to the spreading activation approach of Quillian (1968) and Collins and Loftus (1975). Semantic similarity represents a special case of semantic relatedness: for example, cars and gasoline would seem to be more closely related than, say, cars and bicycles, but the latter pair are certainly more similar. Rada et al. (Rada, Mili, Bicknell, & Blett...
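As a concrete illustration of the taxonomic-similarity idea in this abstract, here is a minimal Python sketch over a toy IS-A hierarchy. The hierarchy, the concept counts, and the function names are assumptions made up for illustration; they are not the article's data or code.

import math

# child -> parent IS-A links (a tiny, made-up taxonomy)
parent = {
    "car": "vehicle", "bicycle": "vehicle", "vehicle": "artifact",
    "gasoline": "substance", "artifact": "entity", "substance": "entity",
}

# hypothetical corpus counts, already propagated upward so that a concept's
# count includes the counts of everything it subsumes
freq = {"car": 40, "bicycle": 10, "vehicle": 60, "gasoline": 20,
        "substance": 25, "artifact": 70, "entity": 100}
total = freq["entity"]  # the root subsumes every observation

def ancestors(c):
    """Return the concept itself plus all concepts that subsume it."""
    chain = [c]
    while c in parent:
        c = parent[c]
        chain.append(c)
    return chain

def information_content(c):
    # IC(c) = -log p(c) = log(total / freq[c]), using the propagated counts
    return math.log(total / freq[c])

def similarity(c1, c2):
    # similarity = information content of the most informative shared subsumer
    shared = set(ancestors(c1)) & set(ancestors(c2))
    return max(information_content(c) for c in shared)

print(round(similarity("car", "bicycle"), 3))   # share "vehicle": 0.511
print(round(similarity("car", "gasoline"), 3))  # share only the root: 0.0

The ordering mirrors the example in the abstract: car and bicycle share the fairly specific subsumer "vehicle", while car and gasoline share only the uninformative root, so the pair that is merely related scores lower than the pair that is similar.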
From Simple Associations to Systematic Reasoning: a Connectionist Representation of Rules, Variables and Dynamic Bindings Using Temporal Synchrony
Behavioral and Brain Sciences, 1993
"... Abstract: Human agents draw a variety of inferences effortlessly, spontaneously, and with remarkable efficiency — as though these inferences are a reflex response of their cognitive apparatus. Furthermore, these inferences are drawn with reference to a large body of background knowledge. This remark ..."
Abstract
-
Cited by 273 (32 self)
- Add to MetaCart
(Show Context)
Abstract: Human agents draw a variety of inferences effortlessly, spontaneously, and with remarkable efficiency — as though these inferences are a reflex response of their cognitive apparatus. Furthermore, these inferences are drawn with reference to a large body of background knowledge. This remarkable human ability seems paradoxical given the results about the complexity of reasoning reported by researchers in artificial intelligence. It also poses a challenge for cognitive science and computational neuroscience: How can a system of simple and slow neuron-like elements represent a large body of systematic knowledge and perform a range of inferences with such speed? We describe a computational model that is a step toward addressing the cognitive science challenge and resolving the artificial intelligence paradox. We show how a connectionist network can encode millions of facts and rules involving n-ary predicates and variables, and perform a class of inferences in a few hundred msec. Efficient reasoning requires the rapid representation and propagation of dynamic bindings. Our model achieves this by i) representing dynamic bindings as the synchronous firing of appropriate nodes, ii) rules as interconnection patterns
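To make the synchrony idea in this abstract concrete, here is a minimal sketch in which a role-filler binding is represented by a role node and an entity node firing in the same phase, and a rule is an interconnection pattern along which phases propagate. The predicate, role names, and phase assignments are illustrative assumptions, not the paper's model.

# Dynamic bindings as shared firing phases (illustrative toy only).
# Asserting give(John, Mary, Book1): each entity fires in its own phase of a
# cycle, and a role is bound to whichever entity it fires in synchrony with.
entity_phase = {"John": 0, "Mary": 1, "Book1": 2}
role_phase = {"giver": 0, "recipient": 1, "object": 2}

def decode_bindings(role_phase, entity_phase):
    """Recover role-filler bindings: a role and an entity firing in the same
    phase are taken to be bound together."""
    filler_at = {phase: entity for entity, phase in entity_phase.items()}
    return {role: filler_at[phase] for role, phase in role_phase.items()}

print(decode_bindings(role_phase, entity_phase))
# {'giver': 'John', 'recipient': 'Mary', 'object': 'Book1'}

# A rule such as give(x, y, z) -> own(y, z) is just a wiring pattern between
# role nodes; inference propagates the firing phases along those links.
rule_links = {"owner": "recipient", "owned": "object"}
own_role_phase = {new_role: role_phase[src] for new_role, src in rule_links.items()}
print(decode_bindings(own_role_phase, entity_phase))
# {'owner': 'Mary', 'owned': 'Book1'}

Because a binding is carried by relative timing rather than by a dedicated unit for every role-entity pair, the same fixed network can represent many different bindings from one moment to the next, which is the point the abstract makes about rapid propagation of dynamic bindings.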
Introduction to the special issue on word sense disambiguation
Computational Linguistics J., 1998
Language and Memory
Cognitive Science, 1980
"... This paper outlines some of the issues and basic philosophy that have guided my work and that of my students in the last ten years. It describes the progression of conceptual representational theories developed during that time, as well OS some of the research models built to implement those theorie ..."
Abstract
-
Cited by 246 (4 self)
- Add to MetaCart
This paper outlines some of the issues and basic philosophy that have guided my work and that of my students in the last ten years. It describes the progression of conceptual representational theories developed during that time, as well as some of the research models built to implement those theories. The paper concludes with a discussion of my most recent work in the area of modelling memory. It presents a theory of MOPs (Memory Organization Packets), which serve as both processors and organizers of information in memory. This enables effective categorization of experiences in episodic memory, which in turn enables better predictive understanding of new experiences. PREFACE: As an undergraduate, I naturally developed a simultaneous interest in the problem of cognition and its computer simulation. I also had a strong interest in language. Attempting to combine these three interests led me to the conclusion that there existed no academic discipline that could comfortably accommodate my interests. Linguists were not seriously interested in cognition. Psychologists were, but did not take seriously the idea of a computer program as the embodiment of a theory. Computer Science was still nascent and in many ways resistant to the “mushiness” of Artificial Intelligence (AI). Where AI did exist, concern with people as opposed to machines was frequently lacking. In the last few years the situation in all three fields has begun to change. In AI, cognitive concerns have not only been accepted but are considered to be of prime importance. Many linguists have abandoned their overriding concern with syntax for a more balanced view of language phenomena. Psychologists are ...
Requirements Engineering in the Year 00: A Research Perspective
2000
"... Requirements engineering (RE) is concerned with the identification of the goals to be achieved by the envisioned system, the operationalization of such goals into services and constraints, and the assignment of responsibilities for the resulting requirements to agents such as humans, devices, a ..."
Abstract
-
Cited by 172 (11 self)
- Add to MetaCart
Requirements engineering (RE) is concerned with the identification of the goals to be achieved by the envisioned system, the operationalization of such goals into services and constraints, and the assignment of responsibilities for the resulting requirements to agents such as humans, devices, and software. The processes involved in RE include domain analysis, elicitation, specification, assessment, negotiation, documentation, and evolution. Getting high-quality requirements is difficult and critical. Recent surveys have confirmed the growing recognition of RE as an area of utmost importance in software engineering research and practice. The paper presents a brief history of the main concepts and techniques developed to date to support the RE task, with a special focus on modeling as a common denominator to all RE processes. The initial description of a complex safety-critical system is used to illustrate a number of current research trends in RE-specific areas such as go...