Results 1 - 10 of 589
Yago: A Core of Semantic Knowledge
In Proc. of WWW ’07, 2007
"... We present YAGO, a light-weight and extensible ontology with high coverage and quality. YAGO builds on entities and relations and currently contains roughly 900,000 entities and 5,000,000 facts. This includes the Is-A hierarchy as well as non-taxonomic relations between entities (such as hasWonPrize ..."
Abstract
-
Cited by 504 (66 self)
We present YAGO, a light-weight and extensible ontology with high coverage and quality. YAGO builds on entities and relations and currently contains roughly 900,000 entities and 5,000,000 facts. This includes the Is-A hierarchy as well as non-taxonomic relations between entities (such as hasWonPrize). The facts have been automatically extracted from the unification of Wikipedia and WordNet, using a carefully designed combination of rule-based and heuristic methods described in this paper. The resulting knowledge base is a major step beyond WordNet: in quality by adding knowledge about individuals like persons, organizations, products, etc. with their semantic relationships – and in quantity by increasing the number of facts by more than an order of magnitude. Our empirical evaluation of fact correctness shows an accuracy of about 95%. YAGO is based on a logically clean model, which is decidable, extensible, and compatible with RDFS. Finally, we show how YAGO can be further extended by state-of-the-art information extraction techniques.
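To make the shape of such facts concrete, here is a minimal sketch of YAGO-style triples in Python with rdflib; the namespace, entity names, and the query are illustrative assumptions rather than YAGO's actual identifiers or API.

```python
# Minimal sketch of YAGO-style facts as RDF triples (illustrative only;
# the namespace and identifiers below are invented, not YAGO's real ones).
from rdflib import Graph, Namespace, RDF, RDFS

EX = Namespace("http://example.org/yago/")  # hypothetical namespace

g = Graph()
# Is-A hierarchy: an individual typed by a class, and a subclass link.
g.add((EX.Albert_Einstein, RDF.type, EX.Physicist))
g.add((EX.Physicist, RDFS.subClassOf, EX.Scientist))
# A non-taxonomic relation between entities.
g.add((EX.Albert_Einstein, EX.hasWonPrize, EX.Nobel_Prize_in_Physics))

# Query the facts with SPARQL, which RDFS-compatible stores support.
query = "SELECT ?prize WHERE { ?person <http://example.org/yago/hasWonPrize> ?prize }"
for row in g.query(query):
    print(row.prize)
```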
Semantic Integration: A Survey Of Ontology-Based Approaches
SIGMOD Record, 2004
"... Semantic integration is an active area of research in several disciplines, such as databases, information-integration, and ontologies. This paper provides a brief survey of the approaches to semantic integration developed by researchers in the ontology community. We focus on the approaches that diff ..."
Abstract
-
Cited by 333 (2 self)
Semantic integration is an active area of research in several disciplines, such as databases, information integration, and ontologies. This paper provides a brief survey of the approaches to semantic integration developed by researchers in the ontology community. We focus on the approaches that differentiate the ontology research from other related areas. The goal of the paper is to provide a reader who may not be very familiar with ontology research with an introduction to major themes in this research and with pointers to different research projects. We discuss techniques for finding correspondences between ontologies, declarative ways of representing these correspondences, and use of these correspondences in various semantic-integration tasks.
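As a toy illustration of the simplest family of techniques covered by such surveys, the sketch below proposes correspondences between two small ontologies by comparing class labels with a string-similarity ratio; the class lists and the 0.8 threshold are arbitrary assumptions, and real matchers also exploit structure, instances, and background knowledge.

```python
# Toy label-based ontology matcher: propose correspondences between two
# ontologies by string similarity of class names (illustrative only).
from difflib import SequenceMatcher

ontology_a = ["Person", "Organization", "Publication", "Event"]
ontology_b = ["People", "Organisation", "Paper", "Meeting"]

def similarity(a: str, b: str) -> float:
    """Ratio in [0, 1] based on longest matching subsequences."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

THRESHOLD = 0.8  # arbitrary cut-off for this sketch
correspondences = [
    (a, b, round(similarity(a, b), 2))
    for a in ontology_a
    for b in ontology_b
    if similarity(a, b) >= THRESHOLD
]
print(correspondences)  # e.g. [('Organization', 'Organisation', 0.92)]
```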
An Ontology for Context-Aware Pervasive Computing Environments
Special Issue on Ontologies for Distributed Systems, Knowledge Engineering Review, 2003
"... Ontologies are a key component for building open and dynamic distributed pervasive computing systems in which agents and devices share contextual information. We describe our use of the Web Ontology Language OWL and other tools for building the foundation ontology for the Context Broker Archite ..."
Abstract
-
Cited by 257 (9 self)
Ontologies are a key component for building open and dynamic distributed pervasive computing systems in which agents and devices share contextual information. We describe our use of the Web Ontology Language OWL and other tools for building the foundation ontology for the Context Broker Architecture (CoBrA), a new context-aware pervasive computing framework. The current version of the CoBrA ontology models the basic concepts of people, agents, places, and presentation events in an intelligent meeting room environment. It provides a vocabulary of terms for classes and properties suitable for building practical systems that model context in pervasive computing environments. We also describe our ongoing research in developing an OWL inference engine using Flora-2 and in extending the present CoBrA ontology to use the DAML spatial and temporal ontologies.
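To give a feel for the kind of vocabulary described above, here is a minimal rdflib sketch of a context-ontology fragment for people, places, and events; the namespace, class names, and the locatedIn property are hypothetical stand-ins, not CoBrA's actual OWL terms.

```python
# Sketch of a tiny context-ontology fragment in the spirit of an OWL
# vocabulary for people, places, and events (hypothetical terms, not CoBrA's).
from rdflib import Graph, Namespace, Literal, RDF, RDFS, OWL

CTX = Namespace("http://example.org/context#")  # invented namespace

g = Graph()
# Class declarations for the basic context concepts.
for cls in (CTX.Person, CTX.Place, CTX.MeetingRoom, CTX.PresentationEvent):
    g.add((cls, RDF.type, OWL.Class))
g.add((CTX.MeetingRoom, RDFS.subClassOf, CTX.Place))

# A property linking a person to the place they are currently in.
g.add((CTX.locatedIn, RDF.type, OWL.ObjectProperty))
g.add((CTX.locatedIn, RDFS.domain, CTX.Person))
g.add((CTX.locatedIn, RDFS.range, CTX.Place))

# Context assertions a broker might share with agents in the room.
g.add((CTX.alice, RDF.type, CTX.Person))
g.add((CTX.room210, RDF.type, CTX.MeetingRoom))
g.add((CTX.alice, CTX.locatedIn, CTX.room210))
g.add((CTX.alice, RDFS.label, Literal("Alice")))

print(g.serialize(format="turtle"))
```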
YAGO2: A Spatially and Temporally Enhanced Knowledge Base from Wikipedia
2010
"... We present YAGO2, an extension of the YAGO knowledge base, in which entities, facts, and events are anchored in both time and space. YAGO2 is built automatically from Wikipedia, GeoNames, and WordNet. It contains 80 million facts about 9.8 million entities. Human evaluation confirmed an accuracy o ..."
Abstract
-
Cited by 158 (20 self)
We present YAGO2, an extension of the YAGO knowledge base, in which entities, facts, and events are anchored in both time and space. YAGO2 is built automatically from Wikipedia, GeoNames, and WordNet. It contains 80 million facts about 9.8 million entities. Human evaluation confirmed an accuracy of 95% of the facts in YAGO2. In this paper, we present the extraction methodology, the integration of the spatio-temporal dimension, and our knowledge representation SPOTL, an extension of the original SPO-triple model.
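The SPOTL idea, extending subject-predicate-object triples with time and location, can be sketched in a few lines of plain Python; the field names, example facts, and date filter below are assumptions made for illustration, not YAGO2's actual encoding.

```python
# Illustrative SPOTL-style fact: an SPO triple extended with time and location
# (field names and example data are invented for this sketch).
from collections import namedtuple
from datetime import date

Spotl = namedtuple("Spotl", ["subject", "predicate", "object", "time", "location"])

facts = [
    Spotl("Albert_Einstein", "wonPrize", "Nobel_Prize_in_Physics",
          date(1921, 1, 1), "Stockholm"),
    Spotl("Albert_Einstein", "graduatedFrom", "ETH_Zurich",
          date(1900, 1, 1), "Zurich"),
]

# A simple spatio-temporal query: facts about a subject before a given date.
def before(facts, subject, cutoff):
    return [f for f in facts if f.subject == subject and f.time < cutoff]

print(before(facts, "Albert_Einstein", date(1910, 1, 1)))
```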
YAGO: A Large Ontology from Wikipedia and WordNet
2008
"... This article presents YAGO, a large ontology with high coverage and precision. YAGO has been automatically derived from Wikipedia and WordNet. It comprises entities and relations, and currently contains more than 1.7 million entities and 15 million facts. These include the taxonomic Is-A hierarchy a ..."
Abstract
-
Cited by 148 (16 self)
This article presents YAGO, a large ontology with high coverage and precision. YAGO has been automatically derived from Wikipedia and WordNet. It comprises entities and relations, and currently contains more than 1.7 million entities and 15 million facts. These include the taxonomic Is-A hierarchy as well as semantic relations between entities. The facts for YAGO have been extracted from the category system and the infoboxes of Wikipedia and have been combined with taxonomic relations from WordNet. Type checking techniques help us keep YAGO’s precision at 95% – as proven by an extensive evaluation study. YAGO is based on a clean logical model with a decidable consistency. Furthermore, it allows representing n-ary relations in a natural way while maintaining compatibility with RDFS. A powerful query model facilitates access to YAGO’s data.
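One way to picture the claim about n-ary relations is the fact-identifier pattern: a base triple receives an identifier, and further triples attach extra arguments to that identifier. The sketch below illustrates that pattern with invented names; it is not YAGO's concrete syntax or query model.

```python
# Representing an n-ary statement with fact identifiers: the base fact gets an
# id, and additional arguments (here, a year) are attached as facts about that
# id. Names and identifiers are invented for illustration.
facts = {
    "#1": ("Albert_Einstein", "hasWonPrize", "Nobel_Prize_in_Physics"),
    "#2": ("#1", "inYear", "1921"),          # a fact about fact #1
    "#3": ("Albert_Einstein", "bornIn", "Ulm"),
}

def arguments_of(fact_id):
    """Collect the extra arguments attached to a base fact."""
    return {p: o for (s, p, o) in facts.values() if s == fact_id}

print(arguments_of("#1"))  # {'inYear': '1921'}
```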
YAGO: A Core of Semantic Knowledge Unifying WordNet and Wikipedia
2007
"... We present YAGO, a light-weight and extensible ontology with high coverage and quality. YAGO builds on entities and relations and currently contains more than 1 million entities and 5 million facts. This includes the Is-A hierarchy as well as non-taxonomic relations between entities (such as hasWo ..."
Abstract
-
Cited by 132 (16 self)
We present YAGO, a light-weight and extensible ontology with high coverage and quality. YAGO builds on entities and relations and currently contains more than 1 million entities and 5 million facts. This includes the Is-A hierarchy as well as non-taxonomic relations between entities (such as hasWonPrize). The facts have been automatically extracted from Wikipedia and unified with WordNet, using a carefully designed combination of rule-based and heuristic methods described in this paper. The resulting knowledge base is a major step beyond WordNet: in quality by adding knowledge about individuals like persons, organizations, products, etc. with their semantic relationships – and in quantity by increasing the number of facts by more than an order of magnitude. Our empirical evaluation of fact correctness shows an accuracy of about 95%. YAGO is based on a logically clean model, which is decidable, extensible, and compatible with RDFS. Finally, we show how YAGO can be further extended by state-of-the-art information extraction techniques.
Linking Lexicons and Ontologies: Mapping WordNet to the Suggested Upper Merged Ontology
Proceedings of the 2003 International Conference on Information and Knowledge Engineering (IKE 03), Las Vegas, 2003
"... Ontologies are becoming extremely useful tools for sophisticated software engineering. Designing applications, databases, and knowledge bases with reference to a common ontology can mean shorter development cycles, easier and faster integration with other software and content, and a more scalable pr ..."
Abstract
-
Cited by 99 (9 self)
Ontologies are becoming extremely useful tools for sophisticated software engineering. Designing applications, databases, and knowledge bases with reference to a common ontology can mean shorter development cycles, easier and faster integration with other software and content, and a more scalable product. Although ontologies are a very promising solution to some of the most pressing problems that confront software engineering, they also raise some issues and difficulties of their own. Consider, for example, the questions below:
• How can a formal ontology be used effectively by those who lack extensive training in logic and mathematics?
• How can an ontology be used automatically by applications (e.g. Information Retrieval and Natural Language Processing applications) that process free text?
• How can we know when an ontology is complete?
In this paper we will begin by describing the upper-level ontology SUMO (Suggested Upper Merged Ontology), which has been proposed as the initial version of an eventual Standard Upper Ontology (SUO). We will then describe the popular, free, and structured WordNet lexical database. After this preliminary discussion, we will describe the methodology that we are using to align WordNet with the SUMO. We close this paper by discussing how this alignment of WordNet with SUMO will provide answers to the questions posed above.
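The alignment described here amounts to annotating WordNet synsets with SUMO concepts plus a relation type; the sketch below shows that shape with a hand-made table, in which the synset identifiers, SUMO terms, and relation labels are invented for illustration rather than taken from the published mapping files.

```python
# Illustrative shape of a WordNet-to-SUMO alignment: each synset is annotated
# with a SUMO concept and the kind of mapping (equivalent vs. subsumed).
# All identifiers and labels below are invented for illustration.
wordnet_to_sumo = {
    "dog.n.01":    ("Canine", "subsumed_by"),
    "animal.n.01": ("Animal", "equivalent"),
    "run.v.01":    ("Running", "equivalent"),
}

def sumo_concept(synset_id):
    """Look up the SUMO concept and relation type for a synset, if mapped."""
    return wordnet_to_sumo.get(synset_id)

print(sumo_concept("dog.n.01"))  # ('Canine', 'subsumed_by')
```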
BabelNet: The automatic construction, evaluation and application of a . . .
Artificial Intelligence, 2012
"... ..."
Structure-Based Partitioning of Large Concept Hierarchies
In: International Semantic Web Conference, 2004
"... The increasing awareness of the benefits of ontologies for information processing has lead to the creation of a number of large ontologies about real world domains. The size of these ontologies and their monolithic character cause serious problems in handling them. In other areas, e.g. software e ..."
Abstract
-
Cited by 96 (5 self)
The increasing awareness of the benefits of ontologies for information processing has led to the creation of a number of large ontologies about real-world domains. The size of these ontologies and their monolithic character cause serious problems in handling them. In other areas, e.g. software engineering, these problems are tackled by partitioning monolithic entities into sets of meaningful and mostly self-contained modules. In this paper, we suggest a similar approach for ontologies. We propose a method for automatically partitioning large ontologies into smaller modules based on the structure of the class hierarchy. We show that the structure-based method performs surprisingly well on real-world ontologies.
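As a rough illustration of the general idea of structure-based partitioning (not the criterion proposed in the paper), the sketch below splits a small class hierarchy into modules by grouping every class under its top-level branch of the is-a tree; the class names are invented and networkx is assumed to be available.

```python
# Simplified structure-based partitioning sketch: split a class hierarchy into
# modules, one per top-level branch under the root. This illustrates the general
# idea only, not the partitioning criterion from the paper.
import networkx as nx

# Directed is-a edges: child -> parent (invented example hierarchy).
hierarchy = nx.DiGraph([
    ("Dog", "Mammal"), ("Cat", "Mammal"), ("Mammal", "Animal"),
    ("Oak", "Tree"), ("Tree", "Plant"),
    ("Animal", "Thing"), ("Plant", "Thing"),
])

root = "Thing"
modules = {}
for branch in hierarchy.predecessors(root):  # top-level classes under the root
    # The module is the branch class plus everything that transitively
    # reaches it via is-a edges.
    modules[branch] = {branch} | nx.ancestors(hierarchy, branch)

print(modules)
# e.g. {'Animal': {'Animal', 'Mammal', 'Dog', 'Cat'},
#       'Plant': {'Plant', 'Tree', 'Oak'}}
```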