Results 1 - 10 of 673
Agents and the Semantic Web
- IEEE INTELLIGENT SYSTEMS, 2001
"... Many challenges of bringing communicating multiagent systems to the Web require ontologies. The integration of agent technology and ontologies could significantly affect the use of Web services and the ability to extend programs to perform tasks for users more efficiently and with less human interve ..."
Abstract
-
Cited by 2352 (18 self)
Many challenges of bringing communicating multiagent systems to the Web require ontologies. The integration of agent technology and ontologies could significantly affect the use of Web services and the ability to extend programs to perform tasks for users more efficiently and with less human intervention.
The Generative Lexicon
- Computational Linguistics, 1991
"... this paper, I will discuss four major topics relating to current research in lexical semantics: methodology, descriptive coverage, adequacy of the representation, and the computational usefulness of representations. In addressing these issues, I will discuss what I think are some of the central prob ..."
Abstract
-
Cited by 1341 (45 self)
In this paper, I will discuss four major topics relating to current research in lexical semantics: methodology, descriptive coverage, adequacy of the representation, and the computational usefulness of representations. In addressing these issues, I will discuss what I think are some of the central problems facing the lexical semantics community, and suggest ways of best approaching these issues. Then, I will provide a method for the decomposition of lexical categories and outline a theory of lexical semantics embodying a notion of co-compositionality and type coercion, as well as several levels of semantic description, where the semantic load is spread more evenly throughout the lexicon. I argue that lexical decomposition is possible if it is performed generatively. Rather than assuming a fixed set of primitives, I will assume a fixed number of generative devices that can be seen as constructing semantic expressions. I develop a theory of Qualia Structure, a representation language for lexical items, which renders much lexical ambiguity in the lexicon unnecessary, while still explaining the systematic polysemy that words carry. Finally, I discuss how individual lexical structures can be integrated into the larger lexical knowledge base through a theory of lexical inheritance. This provides us with the necessary principles of global organization for the lexicon, enabling us to fully integrate our natural language lexicon into a conceptual whole.
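The theory's central data structure lends itself to a small illustration. Below is a toy sketch of a qualia structure and of type coercion, assuming a drastically simplified reading of the four roles (formal, constitutive, telic, agentive); the lexical entries and the coercion rule are invented examples, not the paper's formalism.

```python
# A toy illustration of Qualia Structure and type coercion, loosely
# following the roles named in the abstract. All entries and the
# coercion rule are invented examples.
from dataclasses import dataclass

@dataclass
class Qualia:
    formal: str        # what kind of thing it is
    constitutive: str  # what it is made of
    telic: str         # its purpose or function
    agentive: str      # how it comes into being

LEXICON = {
    "novel": Qualia(formal="book", constitutive="narrative text",
                    telic="read", agentive="write"),
    "cake":  Qualia(formal="food", constitutive="flour, sugar, eggs",
                    telic="eat", agentive="bake"),
}

def coerce_event(verb: str, noun: str) -> str:
    """Interpret '<verb> a <noun>' by coercing the noun to the event
    in its telic role, e.g. 'begin a novel' -> 'begin reading the novel'."""
    q = LEXICON[noun]
    return f"{verb} {q.telic}ing the {noun}"

print(coerce_event("begin", "novel"))   # begin reading the novel
print(coerce_event("finish", "cake"))   # finish eating the cake
```

This is the sense in which the paper's generative devices remove the need for separate lexical entries: one entry for "novel" supports both its object and its event readings.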
Bayesian Description Logics
- Proc. of DL'14, CEUR Workshop Proceedings, 2014
"... Abstract This chapter considers, on the one hand, extensions of Description Logics by features not available in the basic framework, but considered important for using Description Logics as a modeling language. In particular, it addresses the extensions concerning: concrete domain constraints; moda ..."
Abstract
-
Cited by 394 (49 self)
This chapter considers, on the one hand, extensions of Description Logics by features not available in the basic framework, but considered important for using Description Logics as a modeling language. In particular, it addresses the extensions concerning: concrete domain constraints; modal, epistemic, and temporal operators; probabilities and fuzzy logic; and defaults. On the other hand, it considers non-standard inference problems for Description Logics, i.e., inference problems that, unlike subsumption or instance checking, are not available in all systems, but have turned out to be useful in applications. In particular, it addresses the non-standard inference problems: least common subsumer and most specific concept; unification and matching of concepts; and rewriting.
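One of the non-standard inferences named above, the least common subsumer (LCS), can be shown in a drastically simplified form. In the sketch below, concepts are bare conjunctions of atomic names modeled as Python sets, so the LCS is just set intersection; real Description Logics require a structural computation over full concept descriptions, and this toy language is my assumption, not the chapter's.

```python
# Toy least-common-subsumer sketch: a concept is a set of atomic
# names read as a conjunction, so the most specific concept that
# subsumes both inputs is their intersection.
def lcs(concept_a: set, concept_b: set) -> set:
    """Most specific conjunction of atoms subsuming both concepts."""
    return concept_a & concept_b

def subsumes(general: set, specific: set) -> bool:
    """In this toy language, C subsumes D iff C's atoms are a subset
    of D's atoms (fewer conjuncts means a more general concept)."""
    return general <= specific

mother = {"Person", "Female", "HasChild"}
father = {"Person", "Male", "HasChild"}
parent = lcs(mother, father)
print(parent)                    # {'Person', 'HasChild'} (set order may vary)
print(subsumes(parent, mother))  # True
```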
CLASSIC: A Structural Data Model for Objects
- 1989
"... CLASSIC is a data model that encourages the description ofobjects not only in terms of their relations to other known objects, but in terms of a level of intensional structure as well. The CLASSIC language of structured descriptions permits i) partial descriptions of individuals, under an `open worl ..."
Abstract
-
Cited by 369 (26 self)
CLASSIC is a data model that encourages the description of objects not only in terms of their relations to other known objects, but in terms of a level of intensional structure as well. The CLASSIC language of structured descriptions permits i) partial descriptions of individuals, under an 'open world' assumption, ii) answers to queries either as extensional lists of values or as descriptions that necessarily hold of all possible answers, and iii) an easily extensible schema, which can be accessed uniformly with the data. One of the strengths of the approach is that the same language plays multiple roles in the processes of defining and populating the DB, as well as querying and answering. CLASSIC (for which we have a prototype main-memory implementation) can actively discover new information about objects from several sources: it can recognize new classes under which an object falls based on a description of the object, and it can propagate some deductive consequences of DB updates...
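The 'recognition' behavior described above admits a minimal sketch. Here classes are defined by property constraints, individuals carry partial open-world descriptions, and the system re-derives class membership as information arrives; the class and property names are invented, and CLASSIC's real description language is far richer than equality constraints.

```python
# Minimal recognition sketch: infer which defined classes a partially
# described individual falls under. Unknown properties simply fail to
# entail membership; they never contradict it (open world).
CLASS_DEFINITIONS = {
    # class name -> properties an individual must be known to have
    "Wine":    {"category": "wine"},
    "RedWine": {"category": "wine", "color": "red"},
}

def recognized_classes(description: dict) -> set:
    """Return every defined class whose constraints are entailed by
    the (possibly partial) description."""
    return {name for name, constraints in CLASS_DEFINITIONS.items()
            if all(description.get(p) == v for p, v in constraints.items())}

bordeaux = {"category": "wine"}      # color not yet known
print(recognized_classes(bordeaux))  # {'Wine'}
bordeaux["color"] = "red"            # new information arrives
print(recognized_classes(bordeaux))  # {'Wine', 'RedWine'} (order may vary)
```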
Computational Interpretations of the Gricean Maxims in the Generation of Referring Expressions
- COGNITIVE SCIENCE, 1995
"... We examine the problem of generating definite noun phrases that are appropriate referring expressions: that is, noun phrases that (a) successfully identify the intended referent to the hearer whilst (b) not conveying to him or her any false conversational implicatures (Grice, 1975). We review severa ..."
Abstract
-
Cited by 368 (36 self)
We examine the problem of generating definite noun phrases that are appropriate referring expressions: that is, noun phrases that (a) successfully identify the intended referent to the hearer whilst (b) not conveying to him or her any false conversational implicatures (Grice, 1975). We review several possible computational interpretations of the conversational implicature maxims, with different computational costs, and argue that the simplest may be the best, because it seems to be closest to what human speakers do. We describe our recommended algorithm in detail, along with a specification of the resources a host system must provide in order to make use of the algorithm, and an implementation used in the natural language generation component of the IDAS system.
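A compact sketch in the spirit of the incremental strategy the paper recommends is given below: walk a fixed preference order of attributes, keep a property only if it rules out at least one distractor, and stop once the intended referent is uniquely identified. The domain objects and the preference order are invented examples, and details of the paper's algorithm (such as its special handling of the type attribute) are elided.

```python
# Incremental referring-expression sketch: greedily add properties
# with discriminatory power until no distractors remain.
def make_referring_expression(referent, distractors, preference_order):
    description = {}
    remaining = list(distractors)
    for attr in preference_order:
        value = referent.get(attr)
        ruled_out = [d for d in remaining if d.get(attr) != value]
        if value is not None and ruled_out:   # property rules something out
            description[attr] = value
            remaining = [d for d in remaining if d.get(attr) == value]
            if not remaining:                 # referent is now unique
                return description
    return None                               # no distinguishing description

target = {"type": "dog", "size": "small", "color": "black"}
others = [{"type": "dog", "size": "large", "color": "black"},
          {"type": "cat", "size": "small", "color": "white"}]
print(make_referring_expression(target, others, ["type", "color", "size"]))
# {'type': 'dog', 'size': 'small'}  -> "the small dog"
```

Note that the greedy pass never reconsiders earlier choices, which is what keeps it cheap; the paper's argument is that this simplicity also matches human speakers, who routinely produce mildly over-specified but implicature-safe descriptions.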
Retrieving And Integrating Data from Multiple Information Sources
- 1993
"... With the current explosion of data, retrieving and integrating information from various sources is a critical problem. Work in multidatabase systems has begun to address this problem, but it has primarily focused on methods for communicating between databases and requires significant effort for e ..."
Abstract
-
Cited by 332 (31 self)
With the current explosion of data, retrieving and integrating information from various sources is a critical problem. Work in multidatabase systems has begun to address this problem, but it has primarily focused on methods for communicating between databases and requires significant effort for each new database added to the system. This paper describes a more general approach that exploits a semantic model of a problem domain to integrate the information from various information sources. The information sources handled include both databases and knowledge bases, and other information sources (e.g., programs) could potentially be incorporated into the system. This paper describes how both the domain and the information sources are modeled, shows how a query at the domain level is mapped into a set of queries to individual information sources, and presents algorithms for automatically improving the efficiency of queries using knowledge about both the domain and the information sources...
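The mapping step described above can be sketched in a few lines: a query phrased against domain-model terms is translated into one local query per information source, using declared mappings from domain concepts to each source's schema. The sources, mappings, and query shape below are all invented for illustration, not the paper's actual modeling language.

```python
# Bare-bones domain-to-source query mapping: each source declares how
# domain concepts and fields appear in its local schema.
SOURCE_MAPPINGS = {
    "flights_db": {"Airport": "airports", "name": "apt_name", "country": "apt_country"},
    "geo_kb":     {"Airport": "airfield", "name": "label",    "country": "nation"},
}

def reformulate(domain_concept: str, domain_fields: list) -> dict:
    """Produce a local query plan for each source that covers the
    concept and all requested fields."""
    plans = {}
    for source, mapping in SOURCE_MAPPINGS.items():
        if domain_concept in mapping and all(f in mapping for f in domain_fields):
            plans[source] = {
                "table": mapping[domain_concept],
                "columns": [mapping[f] for f in domain_fields],
            }
    return plans

print(reformulate("Airport", ["name", "country"]))
# {'flights_db': {'table': 'airports', 'columns': ['apt_name', 'apt_country']},
#  'geo_kb':     {'table': 'airfield', 'columns': ['label', 'nation']}}
```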
OBSERVER: An Approach for Query Processing in Global Information Systems based on Interoperation across Pre-existing Ontologies
- 1996
"... The huge number of autonomousand heterogeneous data repositories accessible on the “global information infrastructure” makes it impossible for users to be aware of the locations, structure/organization, query languages and semantics of the data in various repositories. There is a critical need to co ..."
Abstract
-
Cited by 295 (36 self)
The huge number of autonomous and heterogeneous data repositories accessible on the “global information infrastructure” makes it impossible for users to be aware of the locations, structure/organization, query languages and semantics of the data in various repositories. There is a critical need to complement current browsing, navigational and information retrieval techniques with a strategy that focuses on information content and semantics. In any strategy that focuses on information content, the most critical problem is that of different vocabularies used to describe similar information across domains. We discuss a scalable approach for vocabulary sharing. The objects in the repositories are represented as intensional descriptions by pre-existing ontologies expressed in Description Logics characterizing information in different domains. User queries are rewritten by using interontology relationships to obtain semantics-preserving translations across the ontologies.
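The vocabulary-sharing idea admits a toy sketch: a query written in one ontology's terms is rewritten into another ontology's terms via declared interontology relationships. The ontologies and synonym table below are invented, and OBSERVER itself handles richer relationships (hyponyms, hypernyms) and reasons over full Description Logic descriptions rather than flat term lists.

```python
# Toy interontology rewriting: translate query terms through declared
# synonym relationships, failing loudly when no semantics-preserving
# translation exists.
SYNONYMS = {
    # (source ontology, term) -> equivalent term in the target ontology
    ("bib_onto", "Article"):   "Paper",
    ("bib_onto", "hasAuthor"): "writtenBy",
}

def rewrite_query(terms: list, source_onto: str) -> list:
    """Rewrite each query term into the target ontology's vocabulary."""
    rewritten = []
    for term in terms:
        try:
            rewritten.append(SYNONYMS[(source_onto, term)])
        except KeyError:
            raise ValueError(f"no translation for {term!r} from {source_onto}")
    return rewritten

print(rewrite_query(["Article", "hasAuthor"], "bib_onto"))
# ['Paper', 'writtenBy']
```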
A Scheme for Integrating Concrete Domains into Concept Languages
- 1991
"... A drawback which concept languages based on kl-one have is that all the terminological knowledge has to be defined on an abstract logical level. In many applications, one would like to be able to refer to concrete domains and predicates on these domains when defining concepts. Examples for such conc ..."
Abstract
-
Cited by 280 (22 self)
A drawback of concept languages based on KL-ONE is that all the terminological knowledge has to be defined on an abstract logical level. In many applications, one would like to be able to refer to concrete domains and predicates on these domains when defining concepts. Examples of such concrete domains are the integers, the real numbers, or also non-arithmetic domains, and predicates could be equality, inequality, or more complex predicates. In the present paper we shall propose a scheme for integrating such concrete domains into concept languages, rather than describing a particular extension by some specific concrete domain. We shall define a terminological and an assertional language, and consider the important inference problems such as subsumption, instantiation, and consistency. The formal semantics as well as the reasoning algorithms are given on the scheme level. In contrast to existing KL-ONE based systems, these algorithms will be not only sound but also complete. The...
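The motivating idea, concept definitions that mix abstract class membership with predicates over a concrete domain such as the integers, can be illustrated in miniature. The concept syntax and instantiation checker below are my simplifications, not the paper's terminological language.

```python
# Toy concrete-domain sketch: a concept pairs an abstract class with
# a predicate over integer attribute values, e.g. Adult = Person with
# age >= 18. Instantiation checks both parts.
CONCEPTS = {
    # concept -> (required abstract class, attribute, concrete predicate)
    "Adult":   ("Person", "age",   lambda n: n >= 18),
    "Bargain": ("Item",   "price", lambda n: n < 10),
}

def instance_of(individual: dict, concept: str) -> bool:
    """The individual must belong to the abstract class and its
    attribute value must satisfy the concrete predicate."""
    cls, attr, predicate = CONCEPTS[concept]
    return (cls in individual.get("classes", ())
            and attr in individual
            and predicate(individual[attr]))

anna = {"classes": {"Person"}, "age": 34}
toy  = {"classes": {"Item"},   "price": 12}
print(instance_of(anna, "Adult"))   # True
print(instance_of(toy, "Bargain"))  # False
```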
Query Reformulation for Dynamic Information Integration
- JOURNAL OF INTELLIGENT INFORMATION SYSTEMS, 1996
"... The standard approach to integrating heterogeneous information sources is to build a global schema that relates all of the information in the different sources, and to pose queries directly against it. The problem is that schema integration is usually difficult, and as soon as any of the information ..."
Abstract
-
Cited by 274 (32 self)
The standard approach to integrating heterogeneous information sources is to build a global schema that relates all of the information in the different sources, and to pose queries directly against it. The problem is that schema integration is usually difficult, and as soon as any of the information sources change or a new source is added, the process may have to be repeated. The SIMS system uses an alternative approach. A domain model of the application domain is created, establishing a fixed vocabulary for describing data sets in the domain. Using this language, each available information source is described. Queries to SIMS against the collection of available information sources are posed using terms from the domain model, and reformulation operators are employed to dynamically select an appropriate set of information sources and to determine how to integrate the available information to satisfy a query. This approach results in a system that is more flexible than existing ones, more easily scalable, and able to respond dynamically to newly available or unexpectedly missing information sources.
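The dynamic-selection behavior described above is what distinguishes this approach from a fixed global schema: the plan depends only on which sources are available right now. The sketch below assumes an invented coverage table and ignores the paper's actual reformulation operators.

```python
# Minimal dynamic source selection: pick whichever currently available
# sources cover the queried domain concept, so a source appearing or
# disappearing changes the plan without any schema re-integration.
SOURCE_COVERAGE = {
    "seaport_db": {"Seaport", "Ship"},
    "geo_kb":     {"Seaport", "Country"},
}

def plan_sources(concept: str, available: set) -> list:
    """Return available sources able to answer queries about the
    concept; an empty list means the query cannot be satisfied now."""
    return [s for s, covered in SOURCE_COVERAGE.items()
            if s in available and concept in covered]

print(plan_sources("Seaport", {"seaport_db", "geo_kb"}))  # ['seaport_db', 'geo_kb']
print(plan_sources("Seaport", {"geo_kb"}))                # ['geo_kb'] (seaport_db down)
```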