Results 1 - 10 of 10
Ontologies: Principles, methods and applications
- Knowledge Engineering Review, 1996
"... This paper is intended to serve as a comprehensive introduction to the emerging field concerned with the design and use of ontologies. We observe that disparate backgrounds, languages, tools, and techniques are a major barrier to effective communication among people, organisations, and/or software s ..."
Cited by 582 (3 self)
This paper is intended to serve as a comprehensive introduction to the emerging field concerned with the design and use of ontologies. We observe that disparate backgrounds, languages, tools, and techniques are a major barrier to effective communication among people, organisations, and/or software systems. We show how the development and implementation of an explicit account of a shared understanding (i.e. an 'ontology') in a given subject area can improve such communication, which in turn can give rise to greater reuse and sharing, inter-operability, and more reliable software. After motivating their need, we clarify just what ontologies are and what purposes they serve. We outline a methodology for developing and evaluating ontologies, first discussing informal techniques, concerning such issues as scoping, handling ambiguity, reaching agreement and producing definitions. We then consider the benefits of, and describe, a more formal approach. We re-visit the scoping phase, and discuss the role of formal languages and techniques in the specification, implementation and evaluation of ontologies. Finally, we review the state of the art and practice in this emerging field.
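To make the idea of "an explicit account of a shared understanding" concrete, the following minimal sketch (in Python, with a vocabulary of classes and relations invented for this illustration and not taken from the paper) shows how two systems that commit to the same small ontology can check that an exchanged statement uses only agreed terms:

# A minimal, illustrative sketch of an explicit shared ontology: a small set
# of agreed classes and relations. The vocabulary (Person, Organisation,
# worksFor) is invented for this example.

ontology = {
    "classes": {"Person", "Organisation"},
    "relations": {
        # relation name -> (domain class, range class)
        "worksFor": ("Person", "Organisation"),
    },
}

def conforms(statement, onto):
    """Check that a (subject_class, relation, object_class) triple uses only
    terms the shared ontology defines, with the agreed domain and range."""
    subj, rel, obj = statement
    if rel not in onto["relations"]:
        return False
    domain, rng = onto["relations"][rel]
    return subj == domain and obj == rng

# Two systems that both commit to this ontology can exchange the statement
# below and agree on what it means.
print(conforms(("Person", "worksFor", "Organisation"), ontology))  # True
print(conforms(("Person", "manages", "Organisation"), ontology))   # False

Any statement using terms outside the shared vocabulary, or using them with a different domain or range, is rejected rather than silently misinterpreted, which is the communication benefit the abstract points to.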
Extending SWRL to express fully-quantified constraints
- Rules and Rule Markup Languages for the Semantic Web (RuleML 2004), LNCS 3323, 2004
"... Abstract. Drawing on experience gained over a series of distributed knowledge base and database projects, we argue for the utility of an expressive quantified constraint language for the Semantic Web logic layer. Our Constraint Interchange Format (CIF) is based on classical range-restricted FOL. CIF ..."
Cited by 13 (7 self)
Drawing on experience gained over a series of distributed knowledge base and database projects, we argue for the utility of an expressive quantified constraint language for the Semantic Web logic layer. Our Constraint Interchange Format (CIF) is based on classical range-restricted FOL. CIF allows the expression of invariant conditions in Semantic Web data models, but the choice of how to implement the constraints is left to local reasoners. We develop the quantified constraint representation as an extension of the current proposal for a Semantic Web Rule Language (SWRL). An RDF syntax for our extended CIF/SWRL is given in this paper. While our approach differs from SWRL in that existential quantifiers are handled explicitly rather than using OWL-DL constructs, we believe our proposal is still fully compatible with the use of the various OWL species as well as RDFS. We demonstrate the use of the CIF/SWRL representation in the context of a practical Semantic Web reasoning application, based on the CS AKTive Space demonstrator (the 2003 Semantic Web Challenge winner). We indicate where in our application it makes sense to use the existing SWRL directly, and where our CIF/SWRL allows more complex constraints to be expressed in a natural manner.
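As an illustration of the kind of fully-quantified, range-restricted constraint discussed here (the vocabulary is invented for this example; the paper's own RDF syntax for CIF/SWRL is not reproduced), a constraint requiring every PC configuration to include an operating system can be written in FOL with an explicit existential quantifier:

\forall p \, \bigl( \mathrm{PC}(p) \rightarrow \exists o \, ( \mathrm{OperatingSystem}(o) \land \mathrm{hasComponent}(p, o) ) \bigr)

The universal variable p is range-restricted by the class atom PC(p), and the existential variable o appears explicitly in the consequent rather than being encoded through an OWL-DL construct such as someValuesFrom, which is the distinction the abstract draws.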
KRAFT: Knowledge Fusion from Distributed Databases and Knowledge Bases
1997
"... The KRAFT project aims to investigate how a distributed architecture can support the transformation and reuse of a particular class of knowledge, namely constraints, and to fuse this knowledge so as to gain added value, by using it for constraint solving or data retrieval. ..."
Cited by 12 (7 self)
The KRAFT project aims to investigate how a distributed architecture can support the transformation and reuse of a particular class of knowledge, namely constraints, and to fuse this knowledge so as to gain added value, by using it for constraint solving or data retrieval.
Finding and Moving Constraints in Cyberspace
1999
"... Agent-based architectures are an effective method for constructing open, dynamic, distributed information systems. The KRAFT system exploits such an architecture, focusing on the exchange of information -- in the form of constraints and data -- among participating agents. The KRAFT approach is ..."
Cited by 11 (9 self)
Agent-based architectures are an effective method for constructing open, dynamic, distributed information systems. The KRAFT system exploits such an architecture, focusing on the exchange of information -- in the form of constraints and data -- among participating agents. The KRAFT approach is particularly well suited to solving design and configuration problems, in which constraints and data are retrieved from agents representing customers and vendors on an extranet network, transformed to a common ontology, and processed by mediator agents. This paper describes the KRAFT system, discusses the issues involved in joining a KRAFT network from the point of view of information providers in Cyberspace, and examines the role of autonomous and mobile agents in KRAFT.
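The following Python sketch illustrates the general shape of such mediation under invented names and simplified data (it is not KRAFT's actual interface): constraints and candidate configurations arrive in each agent's local vocabulary, are rewritten into a shared ontology, and are then filtered against the fused constraint set.

# Illustrative sketch (invented names, not KRAFT's actual interfaces): a
# mediator rewrites constraints and data from customer and vendor agents
# into a shared ontology, then filters candidate configurations against
# the fused constraint set.

# Local term -> shared-ontology term mappings, one per agent.
vendor_mapping = {"ram_mb": "memoryMB", "cpu_family": "processor"}
customer_mapping = {"memory": "memoryMB", "chip": "processor"}

def to_shared(record, mapping):
    """Rewrite a record's local attribute names into shared-ontology names."""
    return {mapping.get(k, k): v for k, v in record.items()}

# Customer requirement and vendor offerings, each mapped to the shared terms.
customer_req = to_shared({"memory": 8192, "chip": "x86"}, customer_mapping)
candidates = [
    to_shared({"ram_mb": 16384, "cpu_family": "x86"}, vendor_mapping),
    to_shared({"ram_mb": 4096, "cpu_family": "arm"}, vendor_mapping),
]

# Constraints are expressed over the shared vocabulary only.
constraints = [
    lambda c: c["memoryMB"] >= customer_req["memoryMB"],  # customer constraint
    lambda c: c["processor"] in {"x86", "arm"},           # vendor constraint
]

# The mediator keeps only configurations satisfying every fused constraint.
feasible = [c for c in candidates if all(chk(c) for chk in constraints)]
print(feasible)  # [{'memoryMB': 16384, 'processor': 'x86'}]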
Distributing Semantic Constraints Between Heterogeneous Databases
- Proceedings of the Thirteenth International Conference on Data Engineering, ICDE
"... ..."
Non-Intrusive Assessment of Organisational Data Quality
- The 6th International Conference on Information Quality (IQ-2002), 2001
"... Many organisations are becoming increasingly aware that the usefulness of their data is limited by its poor quality. Surprising (and sometimes alarming) proportions of data in databases are inaccurate, incomplete, inconsistent or out of date. One-off data cleaning methods can help the situation in t ..."
Cited by 1 (1 self)
Many organisations are becoming increasingly aware that the usefulness of their data is limited by its poor quality. Surprising (and sometimes alarming) proportions of data in databases are inaccurate, incomplete, inconsistent or out of date. One-off data cleaning methods can help the situation in the short term, but they are costly and do little to improve data quality in the long term. However, in order to plan and monitor the progress of long term data quality improvement programmes, it is necessary to be able to assess the quality of data across an organisation. Since resources for such programmes are generally limited, and since much of the data in question resides in mission critical systems, it is vital that these assessment activities do not intrude on normal day-to-day business processing. In this paper, we present an approach to assess organisational data quality on a regular basis, which does not delay or disrupt revenue-generating data processing activities. We have adapted techniques from distributed query processing and distributed integrity checking to produce a system that takes account of the workload at each local site when distributing the defect checking work in the distributed information system. The approach assesses the data quality during the periods of low system activity, and ships data defects found to a global site, which time stamps them and records them for later analysis.
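A rough sketch of that workload-sensitive scheme, with the load threshold, defect check, and record format invented for illustration (the paper's own system is more elaborate), might look like this in Python:

# Illustrative sketch of the general idea (not the paper's system): run
# defect-checking queries at a local site only when its load is low, and
# ship any defects found, time-stamped, to a global store for later analysis.
# The load threshold, check function, and record format are invented here.

from datetime import datetime, timezone

LOAD_THRESHOLD = 0.3  # assumed: skip checking when the site is busier than this

def find_null_prices(rows):
    """One example defect check: rows missing a mandatory value."""
    return [r for r in rows if r.get("price") is None]

def assess_site(site_name, rows, current_load, global_defect_log):
    if current_load > LOAD_THRESHOLD:
        return  # stay non-intrusive: defer checking until the site is quiet
    for defect in find_null_prices(rows):
        global_defect_log.append({
            "site": site_name,
            "defect": defect,
            "recorded_at": datetime.now(timezone.utc).isoformat(),
        })

log = []
assess_site("orders_db", [{"id": 1, "price": None}, {"id": 2, "price": 9.5}],
            current_load=0.1, global_defect_log=log)
print(log)  # one time-stamped defect record for the row with the missing price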
Capturing Quantified Constraints in FOL, through Interaction with a Relationship Graph
"... Abstract. As new semantic web standards evolve to allow quantified rules in FOL, we need new ways to capture them from end users. We show how to do this against a graphic view of entities and their relationships (not just their subclasses). Some of these relationships can be derived from data values ..."
Cited by 1 (0 self)
As new semantic web standards evolve to allow quantified rules in FOL, we need new ways to capture them from end users. We show how to do this against a graphic view of entities and their relationships (not just their subclasses). Some of these relationships can be derived from data values by algebraic expressions. For example, scientists may use ad hoc lists of numbers instead of SQL key-matching conventions, as we show in the case of molecular pathway data. The derived relationships can also be included in captured constraints, which express domain semantics better. The constraints are captured as FOL and transmitted in RDFS(XML) format. However, the user unfamiliar with FOL is made to see them as simple nested loops. This device even allows inclusion of existential quantifiers in readable fashion. The captured constraint can be tested by generating queries to search for violations in stored data. The constraint can then be automatically revised to exclude specific cases picked out by the user, who is spared worries about proper syntax and boolean connectives.
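The nested-loop reading of a quantified constraint, and the way its negation becomes a query for violations, can be sketched as follows (entity and relationship names are invented here, loosely in the spirit of the pathway example):

# Illustrative sketch: the FOL constraint
#     forall r in Reaction . exists e in Enzyme . catalyses(e, r)
# read as nested loops, and its negation used as a search for violations.

reactions = ["r1", "r2"]
enzymes = ["e1"]
catalyses = {("e1", "r1")}  # a derived relationship, e.g. from matching data values

def constraint_holds():
    # Outer loop = universal quantifier, inner check = existential quantifier.
    for r in reactions:
        if not any((e, r) in catalyses for e in enzymes):
            return False
    return True

def violations():
    # Searching stored data for counterexamples is just the negated constraint.
    return [r for r in reactions
            if not any((e, r) in catalyses for e in enzymes)]

print(constraint_holds())  # False
print(violations())        # ['r2']  -- the user could now exclude or repair r2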
Database Object Creation Subject to Constraint Rules Using a Constraint Logic Search Engine
1995
"... The P/FDM object database is based on a semantic data model in which stored data is integrated with derived data which is computed by Prolog rules retrieved from the class descriptors stored in the database. These rules may also be inherited. They are generated from declarative definitions expres ..."
The P/FDM object database is based on a semantic data model in which stored data is integrated with derived data computed by Prolog rules retrieved from the class descriptors stored in the database. These rules may also be inherited. They are generated from declarative definitions expressed in the functional data language Daplex. Other Prolog rules are generated from integrity constraints expressed as invariant Daplex expressions; these rules are triggered by relevant updates and used to check semantic integrity. We are interested in using complex updates as part of design work. Thus, we do not execute update actions immediately following failure of a triggered rule, since these may in turn trigger other failures and knock-on actions, giving rise to anomalous rule behaviour. Instead we ask a constraint solver to find a set of values which consistently satisfy all the constraints. This set of values is then committed as a single update action. This paper describes ...
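The deferred-checking strategy described above can be sketched in a few lines of Python (the domains, constraints, and brute-force solver below are invented for illustration; P/FDM itself generates Prolog rules from Daplex):

# Illustrative sketch of the strategy described (not P/FDM's actual code):
# rather than reacting to each violated rule with immediate update actions,
# gather the constraints, ask a (here, brute-force) solver for one mutually
# consistent set of values, and commit them as a single update.

from itertools import product

# Invented example: choose widths for two components subject to constraints.
domains = {"width_a": range(1, 10), "width_b": range(1, 10)}
constraints = [
    lambda v: v["width_a"] + v["width_b"] <= 10,   # fits in the enclosure
    lambda v: v["width_a"] >= 2 * v["width_b"],    # design ratio rule
]

def solve(domains, constraints):
    names = list(domains)
    for values in product(*(domains[n] for n in names)):
        candidate = dict(zip(names, values))
        if all(c(candidate) for c in constraints):
            return candidate
    return None

assignment = solve(domains, constraints)
if assignment is not None:
    print("commit", assignment)  # e.g. commit {'width_a': 2, 'width_b': 1}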
Coping with Constraint Violation: the Practical Face of Database Integrity
"... This report describes an approach to handling violations of integrity constraints in databases which focusses on the needs of the user rather than the implementation. Traditionally, researchers have concentrated on the problems of checking constraint satisfaction, and have given little attention to ..."
This report describes an approach to handling violations of integrity constraints in databases which focusses on the needs of the user rather than the implementation. Traditionally, researchers have concentrated on the problems of checking constraint satisfaction, and have given little attention to how systems should behave when constraints are found to have been violated. We focus on two aspects of constraint violation: how to report violations to the user in the most meaningful way, and how to assist the user in restoring the database to a consistent state. We describe the shortcomings of current approaches to both these aspects, and some solutions which are under investigation in the context of a database of three-dimensional protein structure.
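A minimal sketch of the kind of user-oriented violation report being argued for, with invented field names and repair suggestions loosely in the flavour of the protein-structure example:

# Illustrative sketch only (the check, report fields, and repair options are
# invented): when a check fails, report the violation in domain terms and
# offer the user candidate repairs instead of silently rejecting the update.

def check_bond_length(residue_pair, length, max_length=2.0):
    if length <= max_length:
        return None
    return {
        "constraint": "bond length must not exceed %.1f angstroms" % max_length,
        "violating_data": {"residues": residue_pair, "length": length},
        "suggested_repairs": [
            "correct the coordinates of either residue",
            "mark the bond as absent in this structure",
        ],
    }

report = check_bond_length(("ALA12", "GLY13"), 2.7)
if report:
    for key, value in report.items():
        print(key, ":", value)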