Results 1 - 10 of 25
Interpretation as Abduction
1990
"... An approach to abductive inference developed in the TACITUS project has resulted in a dramatic simplification of how the problem of interpreting texts is conceptualized. Its use in solving the local pragmatics problems of reference, compound nominals, syntactic ambiguity, and metonymy is described ..."
Abstract
-
Cited by 687 (38 self)
An approach to abductive inference developed in the TACITUS project has resulted in a dramatic simplification of how the problem of interpreting texts is conceptualized. Its use in solving the local pragmatics problems of reference, compound nominals, syntactic ambiguity, and metonymy is described and illustrated. It also suggests an elegant and thorough integration of syntax, semantics, and pragmatics.
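
To make the idea concrete, here is a minimal propositional sketch of weighted abduction in Python: goals are proved by backward-chaining through Horn rules, unprovable literals may be assumed at a cost, and the cheapest proof is taken as the preferred interpretation. The rules, costs, and predicate names below are invented for illustration and are not the TACITUS knowledge base.

# Horn rules: head <- body (all propositional for simplicity).
RULES = {
    "car(x)":     [["vehicle(x)", "has_engine(x)"]],
    "vehicle(x)": [["owned(x)"]],
}
# Literals that may be assumed rather than proved, with their costs.
ASSUMPTION_COST = {"has_engine(x)": 1.0, "owned(x)": 2.0}
FACTS = set()  # nothing is known outright; everything must be assumed

def best_proof(goal, depth=0):
    """Return (cost, assumptions) of the cheapest abductive proof of goal."""
    if goal in FACTS:
        return 0.0, set()
    candidates = []
    # Option 1: assume the goal outright, if it is assumable.
    if goal in ASSUMPTION_COST:
        candidates.append((ASSUMPTION_COST[goal], {goal}))
    # Option 2: backward-chain through each rule whose head matches.
    for body in RULES.get(goal, []):
        if depth > 10:  # crude guard against cyclic rule chains
            continue
        cost, assumed, ok = 0.0, set(), True
        for lit in body:
            sub = best_proof(lit, depth + 1)
            if sub is None:
                ok = False
                break
            cost += sub[0]
            assumed |= sub[1]
        if ok:
            candidates.append((cost, assumed))
    return min(candidates, key=lambda c: c[0]) if candidates else None

print(best_proof("car(x)"))  # cost 3.0, assuming has_engine(x) and owned(x)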
Active Logics: A Unified Formal Approach to Episodic Reasoning
"... Artificial intelligence research falls roughly into two categories: formal and implementational. This division is not completely firm: there are implementational studies based on (formal or informal) theories (e.g., CYC, SOAR, OSCAR), and there are theories framed with an eye toward implementabili ..."
Abstract
-
Cited by 36 (2 self)
Artificial intelligence research falls roughly into two categories: formal and implementational. This division is not completely firm: there are implementational studies based on (formal or informal) theories (e.g., CYC, SOAR, OSCAR), and there are theories framed with an eye toward implementability (e.g., predicate circumscription). Nevertheless, formal/theoretical work tends to focus on very narrow problems (and even on very special cases of very narrow problems) while trying to get them "right" in a very strict sense, while implementational work tends to aim at fairly broad ranges of behavior but often at the expense of any kind of overall conceptually unifying framework that informs understanding. It is sometimes urged that this gap is intrinsic to the topic: intelligence is not a unitary thing for which there will be a unifying theory, but rather a "society" of subintelligences whose overall behavior cannot be reduced to useful characterizing and predictive principles.
Conversational Adequacy: Mistakes are the Essence
Int. J. Human-Computer Studies, 1997
"... We argue that meta-dialog and meta-reasoning, far from being of only occasional use, are the very essence of conversation and communication between agents. We give four paradigm examples of massive use of meta-dialog where only limited object dialog may be present, and use these to bolster our c ..."
Abstract
-
Cited by 34 (13 self)
We argue that meta-dialog and meta-reasoning, far from being of only occasional use, are the very essence of conversation and communication between agents. We give four paradigm examples of massive use of meta-dialog where only limited object dialog may be present, and use these to bolster our claim of centrality for meta-dialog. We further illustrate this with related work in active logics. We argue moreover that there may be a core set of meta-dialog principles that is in some sense complete, and that may correspond to the human ability to engage in "free-ranging" conversation. If we are right, then implementing such a set would be of considerable interest. We give examples of existing computer programs that converse inadequately according to our guidelines.
The Commonsense Algorithm As A Basis For Computer Models Of Human Memory, Inference, Belief And Contextual Language Comprehension
1975
"... The notion of a commonsense alorithm is presented as a basic data structure for modeling human cognition. This data structure unifies many current ideas about human memory and information processing. The structure is defined by specifying a set of proposed cognitive primitive links which, when used ..."
Abstract
-
Cited by 16 (0 self)
The notion of a commonsense algorithm is presented as a basic data structure for modeling human cognition. This data structure unifies many current ideas about human memory and information processing. The structure is defined by specifying a set of proposed cognitive primitive links which, when used to build up large structures of actions, states, state-changes and tendencies, provide an adequate formalism for expressing human plans and activities, as well as general mechanisms and computer algorithms. The commonsense algorithm is a type of framework (as Minsky has defined the term) for representing algorithmic processes, hopefully in the way humans do.
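
As a rough illustration of the kind of structure described, the sketch below encodes nodes of the four kinds the abstract names (actions, states, state-changes, tendencies) connected by typed links. The link labels used here ("causes", "enables") are illustrative stand-ins, not necessarily the paper's exact primitive-link inventory.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Node:
    kind: str   # "action" | "state" | "statechange" | "tendency"
    label: str

@dataclass
class CSAGraph:
    # Typed links stored as (source, link_type, target) triples.
    links: list = field(default_factory=list)

    def link(self, src: Node, link_type: str, dst: Node) -> None:
        self.links.append((src, link_type, dst))

    def outgoing(self, node: Node):
        return [(t, d) for s, t, d in self.links if s == node]

# Fragment of a plan: striking a match causes it to burn,
# and the burning match enables lighting the stove.
strike = Node("action", "strike match")
burning = Node("state", "match burning")
light = Node("action", "light stove")

g = CSAGraph()
g.link(strike, "causes", burning)
g.link(burning, "enables", light)

print(g.outgoing(strike))  # [('causes', Node(kind='state', label='match burning'))]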
Learning Event Durations from Event Descriptions
In Proceedings of the 44th Conference of the Association for Computational Linguistics (COLING-ACL), 2006
"... We have constructed a corpus of news articles in which events are annotated for estimated bounds on their duration. Here we describe a method for measuring inter-annotator agreement for these event duration distributions. We then show that machine learning techniques applied to this data yield coars ..."
Abstract
-
Cited by 15 (3 self)
We have constructed a corpus of news articles in which events are annotated for estimated bounds on their duration. Here we describe a method for measuring inter-annotator agreement for these event duration distributions. We then show that machine learning techniques applied to this data yield coarse-grained event duration information, considerably outperforming a baseline and approaching human performance.
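
A hedged sketch of the coarse-grained learning task this abstract describes: classify whether an event clause denotes something lasting less than a day. The toy training set, the bag-of-words features, and the logistic-regression learner below are stand-ins; the paper's actual features and classifier may differ.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# (event clause, 1 = lasts less than a day, 0 = a day or longer)
train = [
    ("she sneezed during the meeting",         1),
    ("the committee met on Tuesday afternoon", 1),
    ("rebels occupied the capital",            0),
    ("the company built a new factory",        0),
]
texts, labels = zip(*train)

# Bag-of-words features feeding a linear classifier.
clf = make_pipeline(CountVectorizer(), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["the senate debated the bill"]))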
Lost Intuitions and Forgotten Intentions
Centering Theory in Discourse, 1998
"... This paper does not provide a comprehensive review of work on centering, as this volume in its entirety serves that purpose. Rather we present our view of the history of centering and our perceptions of the most important areas for future work. The paper begins with a description of the intuitions u ..."
Abstract
-
Cited by 9 (0 self)
This paper does not provide a comprehensive review of work on centering, as this volume in its entirety serves that purpose. Rather we present our view of the history of centering and our perceptions of the most important areas for future work. The paper begins with a description of the intuitions underlying our previous research, a statement of the intended properties of the attentional state models and proposed theories, and the major claims made. We then look to the future development of centering and argue briefly for additional empirical research, analysis of more complex types of discourse, and more detailed examination of the interaction of centering with other discourse processes at both the local and global levels.
An Annotated Corpus of Typical Durations of Events
In Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC), 2006
"... In this paper, we present our work on generating an annotated corpus for extracting information about the typical durations of events from texts. We include the annotation guidelines, the event classes we categorized, the way we use normal distributions to model vague and implicit temporal informati ..."
Abstract
-
Cited by 8 (4 self)
In this paper, we present our work on generating an annotated corpus for extracting information about the typical durations of events from texts. We include the annotation guidelines, the event classes we categorized, the way we use normal distributions to model vague and implicit temporal information, and how we evaluate inter-annotator agreement. The experimental results show that our guidelines are effective in improving the inter-annotator agreement.
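
One plausible reading of the normal-distribution modeling mentioned here, sketched in Python: map an annotator's [lower, upper] duration bounds to a normal distribution on a log-seconds scale, and score inter-annotator agreement as the overlap area of the two annotators' distributions. The log-scale convention, the placement of the bounds at mean +/- one standard deviation, and the overlap metric are assumptions, not necessarily the paper's exact formulation.

import math
import numpy as np
from scipy.stats import norm

def bounds_to_normal(lower_s, upper_s):
    # Map duration bounds in seconds to a normal on the log-seconds scale,
    # with the bounds placed at mean +/- one standard deviation
    # (an assumed convention).
    lo, hi = math.log(lower_s), math.log(upper_s)
    return (lo + hi) / 2.0, (hi - lo) / 2.0

def overlap(n1, n2, grid=10_000):
    # Agreement = area under the pointwise minimum of the two densities
    # (1.0 = identical judgments, 0.0 = disjoint), integrated numerically.
    (m1, s1), (m2, s2) = n1, n2
    xs = np.linspace(min(m1 - 5 * s1, m2 - 5 * s2),
                     max(m1 + 5 * s1, m2 + 5 * s2), grid)
    dx = xs[1] - xs[0]
    return float(np.minimum(norm.pdf(xs, m1, s1),
                            norm.pdf(xs, m2, s2)).sum() * dx)

# Two annotators judge the same event: "1 to 3 years" vs. "6 months to 2 years".
YEAR = 365 * 24 * 3600
a = bounds_to_normal(1 * YEAR, 3 * YEAR)
b = bounds_to_normal(0.5 * YEAR, 2 * YEAR)
print(f"agreement (overlap area): {overlap(a, b):.2f}")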
Annotating and Learning Event Durations in Text
"... This article presents our work on constructing a corpus of news articles in which events are annotated for estimated bounds on their duration, and automatically learning from this corpus. We describe the annotation guidelines, the event classes we categorized to reduce gross discrepancies in inter-a ..."
Abstract
-
Cited by 4 (1 self)
This article presents our work on constructing a corpus of news articles in which events are annotated for estimated bounds on their duration, and automatically learning from this corpus. We describe the annotation guidelines, the event classes we categorized to reduce gross discrepancies in inter-annotator judgments, and our use of normal distributions to model vague and implicit temporal information and to measure inter-annotator agreement for these event duration distributions. We then show that machine learning techniques applied to this data can produce coarse-grained event duration information automatically, considerably outperforming a baseline and approaching human performance. The methods described here should be applicable to other kinds of vague but substantive information in texts.
Semantic networks and the generation of context
Proc. Fourth Int. Joint Conf. Artif. Intel., Tbilisi, 1975
"... by ..."
(Show Context)