Results 1 - 10 of 257
Ontology Development 101: A Guide to Creating Your First Ontology
2001
"... In recent years the development of ontologies—explicit formal specifications of the terms in the domain and relations among them (Gruber 1993)—has been moving from the realm of Artificial-Intelligence laboratories to the desktops of domain experts. Ontologies have become common on the World-Wide Web ..."
Cited by 830 (5 self)
In recent years the development of ontologies—explicit formal specifications of the terms in the domain and relations among them (Gruber 1993)—has been moving from the realm of Artificial-Intelligence laboratories to the desktops of domain experts. Ontologies have become common on the World-Wide Web. The ontologies on the Web range from large taxonomies categorizing Web sites (such as on Yahoo!) to categorizations of products for sale and their features (such as on Amazon.com). The WWW Consortium (W3C) is developing the Resource Description Framework (Brickley and Guha 1999), a language for encoding knowledge on Web pages to make it understandable to electronic agents searching for information. The Defense Advanced Research Projects Agency (DARPA), in conjunction with the W3C, is developing DARPA Agent Markup Language (DAML) by extending RDF with more expressive constructs aimed at facilitating agent interaction on the Web (Hendler and McGuinness 2000). Many disciplines now develop standardized ontologies that domain experts can use to share and annotate information in their fields. Medicine, for example, has produced large, standardized, structured vocabularies such as SNOMED (Price and Spackman 2000) and the semantic network of the Unified Medical Language System (Humphreys and Lindberg 1993). Broad general-purpose ontologies are
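As a concrete illustration of "explicit formal specifications of the terms in the domain and relations among them", the following minimal Python sketch models an ontology as classes, a subclass taxonomy, and slots (relations) between classes. The class and slot names are illustrative assumptions, not taken from the guide.

```python
# A minimal sketch (not the paper's methodology) of an ontology as classes,
# a subclass taxonomy, and slots (relations). All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class OntClass:
    name: str
    parent: "OntClass | None" = None           # subclass-of link (taxonomy)
    slots: dict = field(default_factory=dict)  # relation name -> range class

def ancestors(c: OntClass):
    """Yield all superclasses of c, nearest first."""
    while c.parent is not None:
        c = c.parent
        yield c

# Tiny taxonomy with one relation between classes.
thing   = OntClass("Thing")
product = OntClass("Product", parent=thing)
wine    = OntClass("Wine", parent=product)
winery  = OntClass("Winery", parent=thing)
wine.slots["has_maker"] = winery   # Wine --has_maker--> Winery

assert product in ancestors(wine)
```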
Bayesian Description Logics
In Proc. of DL'14, CEUR Workshop Proceedings, 2014
"... Abstract This chapter considers, on the one hand, extensions of Description Logics by features not available in the basic framework, but considered important for using Description Logics as a modeling language. In particular, it addresses the extensions concerning: concrete domain constraints; moda ..."
Cited by 394 (49 self)
This chapter considers, on the one hand, extensions of Description Logics by features not available in the basic framework, but considered important for using Description Logics as a modeling language. In particular, it addresses the extensions concerning: concrete domain constraints; modal, epistemic, and temporal operators; probabilities and fuzzy logic; and defaults. On the other hand, it considers non-standard inference problems for Description Logics, i.e., inference problems that, unlike subsumption or instance checking, are not available in all systems, but have turned out to be useful in applications. In particular, it addresses the non-standard inference problems: least common subsumer and most specific concept; unification and matching of concepts; and rewriting.
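As one concrete instance of the "fuzzy logic" extension surveyed here, the sketch below evaluates concept constructors under Zadeh-style fuzzy semantics (min for conjunction, max for disjunction, 1 - x for negation). The membership degrees and concept names are invented for illustration; the chapter covers a broader family of semantics.

```python
# Zadeh-style fuzzy semantics for concept constructors (one possible choice).
def f_and(x: float, y: float) -> float:   # conjunction: C AND D
    return min(x, y)

def f_or(x: float, y: float) -> float:    # disjunction: C OR D
    return max(x, y)

def f_not(x: float) -> float:             # negation: NOT C
    return 1.0 - x

# Degrees to which one individual belongs to two fuzzy concepts (made up).
tall, heavy = 0.7, 0.4
print(f_and(tall, heavy))   # 0.4 -> degree of membership in Tall AND Heavy
print(f_or(tall, heavy))    # 0.7 -> degree of membership in Tall OR Heavy
print(f_not(tall))          # 0.3 -> degree of membership in NOT Tall
```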
Ontolingua: A Mechanism to Support Portable Ontologies
1992
"... An ontology is a set of definitions of content-specific knowledge representation primitives: classes, relations, functions, and object constants. Ontolingua is mechanism for writing ontologies in a canonical format, such that they can be easily translated into a variety of representation and reasoni ..."
Cited by 245 (5 self)
An ontology is a set of definitions of content-specific knowledge representation primitives: classes, relations, functions, and object constants. Ontolingua is a mechanism for writing ontologies in a canonical format, such that they can be easily translated into a variety of representation and reasoning systems. This allows one to maintain the ontology in a single, machine-readable form while using it in systems with different syntax and reasoning capabilities. The syntax and semantics are based on the KIF knowledge interchange format [11]. Ontolingua extends KIF with standard primitives for defining classes and relations, and organizing knowledge in object-centered hierarchies with inheritance. The Ontolingua software provides an architecture for translating from KIF-level sentences into forms that can be efficiently stored and reasoned about by target representation systems. Currently, there are translators into LOOM, Epikit, and Algernon, as well as a canonical form of KIF. This paper describes the basic approach of Ontolingua to the ontology-sharing problem, introduces the syntax, and describes the semantics of a few ontological commitments made in the software. Those commitments, which are reflected in the ontological syntax and the primitive vocabulary of the frame ontology, include: a distinction between definitional and nondefinitional assertions; the organization of knowledge with classes, instances, sets, and second-order relations; and assertions whose meaning depends on the contents of the knowledge base. Limitations of Ontolingua's "conservative" approach to sharing ontologies and alternative approaches to the problem are discussed.
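The sketch below illustrates the core idea of the translation architecture: keep one canonical definition and emit it in the syntax of different target systems. Both output formats here are invented placeholders, not the actual KIF, LOOM, Epikit, or Algernon syntaxes.

```python
# One canonical class definition, rendered into two hypothetical target forms.
canonical = {"class": "Author", "superclass": "Person",
             "slots": {"writes": "Book"}}

def to_sexpr(d: dict) -> str:
    """Lisp-like rendering, loosely in the spirit of frame definitions."""
    slots = " ".join(f"({r} {c})" for r, c in d["slots"].items())
    return (f"(define-class {d['class']} (?x) "
            f":superclass {d['superclass']} :slots ({slots}))")

def to_triples(d: dict) -> list:
    """Triple-style rendering for a graph-based target system."""
    out = [(d["class"], "subclass-of", d["superclass"])]
    out += [(d["class"], r, c) for r, c in d["slots"].items()]
    return out

print(to_sexpr(canonical))
print(to_triples(canonical))
```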
Data Model and Query Evaluation in Global Information Systems
Journal of Intelligent Information Systems, 1991
"... . Global information systems involve a large number of information sources distributed over computer networks. The variety of information sources and disparity of interfaces makes the task of easily locating and efficiently accessing information over the network very cumbersome. We describe an archi ..."
Cited by 216 (14 self)
Global information systems involve a large number of information sources distributed over computer networks. The variety of information sources and the disparity of their interfaces make the task of easily locating and efficiently accessing information over the network very cumbersome. We describe an architecture for global information systems that is especially tailored to address the challenges raised in such an environment, and distinguish our architecture from architectures of multidatabase and distributed database systems. Our architecture is based on presenting a conceptually unified view of the information space to a user, specifying rich descriptions of the contents of the information sources, and using these descriptions for optimizing queries posed in the unified view. The contributions of this paper include: (1) we identify aspects of site descriptions that are useful in query optimization; (2) we describe query optimization techniques that minimize the number of information source...
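The kind of description-based pruning the abstract alludes to can be sketched as follows: each source carries a small content description, and the optimizer contacts only the sources whose description can contribute to the query. The source names and descriptions below are made up for illustration.

```python
# Hypothetical content descriptions of three networked sources.
sources = {
    "src_cars_eu": {"class": "Car",    "regions": {"EU"}},
    "src_cars_us": {"class": "Car",    "regions": {"US"}},
    "src_flights": {"class": "Flight", "regions": {"EU", "US"}},
}

def relevant(query_class: str, query_region: str) -> list:
    """Return the sources whose description overlaps the query constraints."""
    return [name for name, d in sources.items()
            if d["class"] == query_class and query_region in d["regions"]]

print(relevant("Car", "EU"))   # ['src_cars_eu'] -- the other sources are pruned
```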
Supporting Ontological Analysis of Taxonomic Relationships
2001
"... Taxonomies are an important part of conceptual modeling. They provide substantial structural information, and are typically the key elements in integration efforts, however there has been little guidance as to what makes a proper taxonomy. We have adopted several notions from the philosophical pract ..."
Cited by 189 (2 self)
Taxonomies are an important part of conceptual modeling. They provide substantial structural information and are typically the key elements in integration efforts; however, there has been little guidance as to what makes a proper taxonomy. We have adopted several notions from the philosophical practice of formal ontology and adapted them for use in information systems. These tools (identity, essence, unity, and dependence) provide a solid logical framework within which the properties that form a taxonomy can be analyzed. This analysis helps make intended meaning more explicit, improving human understanding and reducing the cost of integration.
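The flavor of such an analysis can be sketched with a single well-known constraint on rigidity: a rigid class (whose instances cannot stop being instances, e.g. Person) should not be placed under an anti-rigid class (e.g. Student). The class names and the single rule shown are illustrative; the paper develops a fuller framework around identity, essence, unity, and dependence.

```python
# Check one rigidity constraint over a tiny taxonomy (child -> parent links).
taxonomy = {"Student": "Person", "Person": None}
rigid = {"Person": True, "Student": False}   # False = anti-rigid

def violations(tax: dict, rigid: dict) -> list:
    bad = []
    for child, parent in tax.items():
        if parent and rigid.get(child) and not rigid.get(parent):
            bad.append(f"rigid {child} placed under anti-rigid {parent}")
    return bad

print(violations(taxonomy, rigid))               # [] -- Student under Person is fine
print(violations({"Person": "Student"}, rigid))  # flags the inverted, improper link
```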
The Information Manifold
In Working Notes of the AAAI Spring Symposium, 1995
"... Abstract We describe the Information Manifold (IM), a system for browsing and querying of multiple networked information sources. As a first contribution, the system demonstrates the viability of knowledge representation technology for retrieval and organization of information from disparate (struc ..."
Cited by 173 (5 self)
We describe the Information Manifold (IM), a system for browsing and querying of multiple networked information sources. As a first contribution, the system demonstrates the viability of knowledge representation technology for retrieval and organization of information from disparate (structured and unstructured) information sources. Such an organization allows the user to pose high-level queries that use data from multiple information sources. As a second contribution, we describe novel query processing algorithms used to combine information from multiple sources. In particular, our algorithms are guaranteed to find exactly the set of information sources relevant to a query, and to completely exploit knowledge about local closed world information.
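How local closed world (completeness) knowledge can prune a query plan can be sketched as follows: if some source is declared complete for the queried class, the remaining relevant sources need not be consulted. The source names and completeness flags are invented for illustration, not taken from IM.

```python
# Hypothetical source descriptions with coverage and completeness information.
sources = {
    "db_employees":  {"covers": {"Employee"}, "complete_for": {"Employee"}},
    "web_directory": {"covers": {"Employee", "Department"}, "complete_for": set()},
}

def plan(query_class: str) -> list:
    relevant = [s for s, d in sources.items() if query_class in d["covers"]]
    complete = [s for s in relevant if query_class in sources[s]["complete_for"]]
    # One source known to be complete already yields all answers for the class.
    return complete[:1] if complete else relevant

print(plan("Employee"))    # ['db_employees'] -- web_directory is skipped
print(plan("Department"))  # ['web_directory']
```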
The DARPA Knowledge Sharing Effort: Progress Report
In Principles of Knowledge Representation and Reasoning: Proceedings of the Third International Conference (KR'92), 1992
"... ..."
P-CLASSIC: A tractable probabilistic description logic
In Proceedings of AAAI-97, 1997
"... Knowledge representation languages invariably reflect a trade-off between expressivity and tractability. Evidence suggests that the compromise chosen by description logics is a particularly successful one. However, description logic (as for all variants of first-order logic) is severely limited in i ..."
Cited by 119 (4 self)
Knowledge representation languages invariably reflect a trade-off between expressivity and tractability. Evidence suggests that the compromise chosen by description logics is a particularly successful one. However, description logic (as with all variants of first-order logic) is severely limited in its ability to express uncertainty. In this paper, we present P-CLASSIC, a probabilistic version of the description logic CLASSIC. In addition to terminological knowledge, the language utilizes Bayesian networks to express uncertainty about the basic properties of an individual, the number of fillers for its roles, and the properties of these fillers. We provide a semantics for P-CLASSIC and an effective inference procedure for probabilistic subsumption: computing the probability that a random individual in class C is also in class D. The effectiveness of the algorithm relies on independence assumptions and on our ability to execute lifted inference: reasoning about similar individuals as a gr...
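The probabilistic-subsumption query, P(D | C) for a random individual, can be sketched by brute-force enumeration of a tiny joint distribution over boolean properties. P-CLASSIC itself obtains this from a Bayesian network over properties and role fillers; the property names and numbers below are invented.

```python
from itertools import product

# Joint distribution over two boolean properties, factored for the example as
# P(flies) * P(has_feathers | flies). All probabilities are made up.
p_flies = {True: 0.3, False: 0.7}
p_feathers_given_flies = {True: {True: 0.8, False: 0.2},
                          False: {True: 0.05, False: 0.95}}

def joint(flies: bool, feathers: bool) -> float:
    return p_flies[flies] * p_feathers_given_flies[flies][feathers]

def prob(event) -> float:
    """Probability of an event over the two boolean properties."""
    return sum(joint(f, w) for f, w in product([True, False], repeat=2)
               if event(f, w))

# C = "flies", D = "has_feathers": P(D | C) = P(C and D) / P(C).
p_c_and_d = prob(lambda f, w: f and w)
p_c       = prob(lambda f, w: f)
print(p_c_and_d / p_c)   # 0.8
```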
Computing Least Common Subsumers in Description Logics with Existential Restrictions
1999
"... Computing the least common subsumer (lcs) is an inference task that can be used to support the "bottom-up " construction of knowledge bases for KR systems based on description logics. Previous work on how to compute the lcs has concentrated on description logics that allow for univ ..."
Cited by 119 (29 self)
Computing the least common subsumer (lcs) is an inference task that can be used to support the "bottom-up" construction of knowledge bases for KR systems based on description logics. Previous work on how to compute the lcs has concentrated on description logics that allow for universal value restrictions, but not for existential restrictions. The main new contribution of this paper is the treatment of description logics with existential restrictions. Our approach for computing the lcs is based on an appropriate representation of concept descriptions by certain trees, and a characterization of subsumption by homomorphisms between these trees. The lcs operation then corresponds to the product operation on trees.
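The product-of-trees construction can be sketched directly: a concept with conjunction and existential restrictions is a tree whose nodes carry sets of concept names and whose edges carry role names, and the lcs tree pairs up equal roles and intersects node labels. The example concepts below are made up; the real algorithm also handles normalization of the input descriptions.

```python
def lcs(t1: dict, t2: dict) -> dict:
    """Product of two description trees = tree of their least common subsumer."""
    return {
        "names": t1["names"] & t2["names"],             # keep shared concept names
        "edges": [(r1, lcs(c1, c2))                     # pair up edges with equal roles
                  for r1, c1 in t1["edges"]
                  for r2, c2 in t2["edges"] if r1 == r2],
    }

# C = Parent AND (EXISTS child.(Doctor AND Rich))
C = {"names": {"Parent"},
     "edges": [("child", {"names": {"Doctor", "Rich"}, "edges": []})]}
# D = Parent AND Tall AND (EXISTS child.Doctor)
D = {"names": {"Parent", "Tall"},
     "edges": [("child", {"names": {"Doctor"}, "edges": []})]}

print(lcs(C, D))   # Parent AND (EXISTS child.Doctor)
```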