Semantics preserving SPARQL-to-SQL query translation for optional graph patterns (2006)

by A Chebotko, S Lu, H M Jamil, F Fotouhi
Results 1 - 10 of 20

An Experimental Comparison of RDF Data Management Approaches in a SPARQL Benchmark Scenario

by Michael Schmidt, Thomas Hornung, Norbert Küchlin, Georg Lausen, Christoph Pinkel - In Proceedings of the 7th International Semantic Web Conference (ISWC), 2008
Cited by 23 (1 self)
Abstract. Efficient RDF data management is one of the cornerstones in realizing the Semantic Web vision. In the past, different RDF storage strategies have been proposed, ranging from simple triple stores to more advanced techniques like clustering or vertical partitioning on the predicates. We present an experimental comparison of existing storage strategies on top of the SP²Bench SPARQL performance benchmark suite and put the results into context by comparing them to a purely relational model of the benchmark scenario. We observe that (1) in terms of performance and scalability, a simple triple store built on top of a column-store DBMS is competitive to the vertically partitioned approach when choosing a physical (predicate, subject, object) sort order, (2) in our scenario with real-world queries, none of the approaches scales to documents containing tens of millions of RDF triples, and (3) none of the approaches can compete with a purely relational model. We conclude that future research is necessary to further bring forward RDF data management.
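The two storage layouts compared in this abstract can be contrasted with a minimal, self-contained sketch (the schema, predicate names, and data below are invented for illustration and are not the benchmark's): a single (s, p, o) triple table versus vertical partitioning, where each predicate gets its own two-column table.

```python
import sqlite3

# Invented toy data; not the SP²Bench scenario.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

triples = [
    ("article1", "creator", "alice"),
    ("article1", "year", "2008"),
    ("article2", "creator", "bob"),
]

# Layout 1: plain triple store, one (s, p, o) table.
cur.execute("CREATE TABLE triples (s TEXT, p TEXT, o TEXT)")
cur.executemany("INSERT INTO triples VALUES (?, ?, ?)", triples)

# Layout 2: vertically partitioned store, one (s, o) table per predicate.
for pred in {p for _, p, _ in triples}:
    cur.execute(f"CREATE TABLE p_{pred} (s TEXT, o TEXT)")
for s, p, o in triples:
    cur.execute(f"INSERT INTO p_{p} VALUES (?, ?)", (s, o))

# The same single-predicate lookup against both layouts; the
# partitioned table needs no filter on a predicate column.
q_triple = cur.execute(
    "SELECT s, o FROM triples WHERE p = 'creator' ORDER BY s").fetchall()
q_part = cur.execute("SELECT s, o FROM p_creator ORDER BY s").fetchall()
assert q_triple == q_part == [("article1", "alice"), ("article2", "bob")]
```

Both layouts answer the lookup identically; the paper's point is how their performance diverges at scale and against a hand-crafted relational schema.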

Storing and Querying Scientific Workflow Provenance Metadata Using an RDBMS

by Artem Chebotko, Xubo Fei, Cui Lin, Shiyong Lu, Farshad Fotouhi - Third IEEE International Conference on e-Science and Grid Computing, 2007
Cited by 14 (10 self)
Provenance management has become increasingly important to support scientific discovery reproducibility, result interpretation, and problem diagnosis in scientific workflow environments. This paper proposes an approach to provenance management that seamlessly integrates the interoperability, extensibility, and reasoning advantages of Semantic Web technologies with the storage and querying power of an RDBMS. Specifically, we propose: i) two schema mapping algorithms to map an arbitrary OWL provenance ontology to a relational database schema that is optimized for common provenance queries; ii) two efficient data mapping algorithms to map provenance RDF metadata to relational data according to the generated relational database schema, and iii) a schema-independent SPARQL-to-SQL translation algorithm that is optimized on-the-fly by using the type information of an instance available from the input provenance ontology and the statistics of the sizes of the tables in the database. Experimental results are presented to show that our algorithms are efficient and scalable.

Citation Context

....p As p, t1.o As o, t2.o As x From WorkflowSubject t1, inputParameter t2 Where t1.i=t2.s ) t3 Note that our system supports the translation and evaluation of arbitrarily complex optional graph patterns [8], group patterns, and value constraints, which are not presented here due to space limits. 5.2 Experimental study A wide range of scientific queries can be answered based on our provenance ontology model en...
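The SQL fragment quoted in this context is part of a query generated by the translation in [8]. Its key idea, that a SPARQL OPTIONAL graph pattern maps to a SQL left outer join, can be illustrated on a toy triple table (the table, data, and query below are invented for illustration; this is not the paper's actual schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE triples (s TEXT, p TEXT, o TEXT)")
cur.executemany("INSERT INTO triples VALUES (?, ?, ?)", [
    ("task1", "type", "WorkflowTask"),
    ("task2", "type", "WorkflowTask"),
    ("task1", "inputParameter", "paramA"),  # task2 has no input parameter
])

# SPARQL: SELECT ?t ?x WHERE { ?t type WorkflowTask .
#                              OPTIONAL { ?t inputParameter ?x } }
# The OPTIONAL pattern becomes a LEFT OUTER JOIN, so each ?t is kept
# even when no binding for ?x exists; NULL stands for "unbound".
rows = cur.execute("""
    SELECT t1.s AS t, t2.o AS x
    FROM triples t1
    LEFT OUTER JOIN triples t2
      ON t2.p = 'inputParameter' AND t2.s = t1.s
    WHERE t1.p = 'type' AND t1.o = 'WorkflowTask'
    ORDER BY t1.s
""").fetchall()
assert rows == [("task1", "paramA"), ("task2", None)]
```

An inner join would silently drop task2; the left outer join preserves the optional semantics.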

SPARQL Query Rewriting for Implementing Data Integration over Linked Data

by Gianluca Correndo, Manuel Salvadores, Ian Millard, Hugh Glaser, Nigel Shadbolt
Cited by 14 (2 self)
There has lately been increased activity in publishing structured data in RDF due to the efforts of the Linked Data community. The presence on the Web of such a huge information cloud, ranging from academic to geographic to gene-related information, poses a great challenge when it comes to reconciling the heterogeneous schemas adopted by data publishers. For several years, the Semantic Web community has been developing algorithms for aligning data models (ontologies). Nevertheless, exploiting such ontology alignments for achieving data integration is still an under-supported research topic. The semantics of ontology alignments, often defined over logical frameworks, implies a reasoning step over huge amounts of data that is often hard to implement and rarely scales to Web dimensions. This paper presents an algorithm for achieving RDF data mediation based on SPARQL query rewriting. The approach is based on the encoding of rewriting rules for RDF patterns that constitute part of the structure of a SPARQL query.
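Pattern-level rewriting of the kind this abstract describes can be sketched minimally as follows (an assumption-laden sketch, not the paper's algorithm; all prefixes, terms, and alignment rules are invented for illustration):

```python
# Invented alignment rules mapping source-vocabulary terms to a
# target vocabulary, as an ontology alignment might provide.
ALIGNMENTS = {
    "src:author": "tgt:creator",     # predicate alignment
    "src:Paper": "tgt:Publication",  # class alignment
}

def rewrite_term(term):
    # Variables ("?x") and unaligned terms pass through unchanged.
    return ALIGNMENTS.get(term, term)

def rewrite_bgp(bgp):
    """Rewrite each (s, p, o) triple pattern of a basic graph pattern."""
    return [tuple(rewrite_term(t) for t in triple) for triple in bgp]

bgp = [("?p", "rdf:type", "src:Paper"), ("?p", "src:author", "?a")]
rewritten = rewrite_bgp(bgp)
assert rewritten == [("?p", "rdf:type", "tgt:Publication"),
                     ("?p", "tgt:creator", "?a")]
```

The rewritten BGP can then be evaluated directly against the target data set, avoiding a reasoning step over the data itself.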

Citation Context

... the one provided (i.e. we want all the co-authors of id:person-02686 except id:person-02686 itself). A specification of the SPARQL language has been provided in the literature for relational algebra [8], SQL [6], and Datalog [27]. For the sake of the presentation of the algorithm, we will consider just the basic graph pattern (or BGP) section and we will show how it is rewritten in order to fit a target ...

RDFMatView: Indexing RDF data for SPARQL queries

by Roger Castillo, Christian Rothe, Ulf Leser , 2010
Cited by 7 (2 self)
Abstract. The Semantic Web as an evolution of the World Wide Web aims to create a universal medium for the exchange of semantically described data. The idea of representing this information by means of directed labelled graphs, RDF, has been widely accepted by the scientific community. However, querying RDF data sets to find the desired information is often highly time-consuming due to the number of comparisons needed. In this article we propose indexes on RDF to reduce the search space and the SPARQL query processing time. Our approach is based on materialized queries, i.e., precomputed query patterns and their occurrences in the data sets. We provide a formal definition of RDFMatView indexes for SPARQL queries, a cost model to evaluate their potential impact on query performance, and a rewriting algorithm to use indexes in SPARQL queries. We also develop and compare different approaches to integrate such indexes into an existing SPARQL query engine. Our preliminary results show that our approach can drastically decrease the query processing time in comparison to conventional query processing.
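The core rewriting idea, answering the indexed patterns from the materialization and evaluating only the residual patterns against the base data, can be sketched as follows (a simplified sketch under stated assumptions; the paper's cover selection and cost model are not reproduced, and the patterns are invented):

```python
# Simplified sketch, not the RDFMatView algorithm itself: an "index"
# is a materialized query given as a list of triple patterns.
def split_query(query_bgp, index_bgp):
    """Return (covered, residual): patterns answered by the index and
    patterns still to be evaluated on the raw triples."""
    index_set = set(index_bgp)
    if not index_set <= set(query_bgp):
        return [], list(query_bgp)  # index does not apply
    residual = [p for p in query_bgp if p not in index_set]
    return list(index_bgp), residual

query = [("?p", "rdf:type", "Paper"),
         ("?p", "author", "?a"),
         ("?p", "year", "?y")]
index = [("?p", "rdf:type", "Paper"), ("?p", "author", "?a")]

covered, residual = split_query(query, index)
assert covered == index
assert residual == [("?p", "year", "?y")]
```

Only the residual pattern incurs joins on the raw data; the covered part is a scan of the precomputed occurrences, which is where the speedup comes from.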

Citation Context

...QL Engine MatView-to-SQL is a rewriting engine which, unlike our first method, translates the residual part of the query into a SQL query on the Jena tables using an algorithm proposed by Chebotko in [20]. The SQL query is executed by the RDBMS. The result set is processed using our RDF Dictionary and finally combined with the results of the cover. The complete query processing is performed inside the...

OWSCIS: Ontology and Web Service Based Cooperation of Information Sources

by Raji Ghawi, Thibault Poulain, Guillermo Gomez, Nadine Cullot - In Proceedings of the Third International IEEE Conference on Signal-Image Technologies and Internet-Based Systems (SITIS 2007)
Cited by 2 (0 self)
Abstract not found

Citation Context

...ection 2.2). In other cases, SQL statements have to be manually provided. A statement that corresponds to a property has two selected columns representing the domain and the range of the corresponding property. These columns are given two aliases (C0 and C1, respectively). Such a statement typically retrieves the values of the pair <domain, range> of the property from the database. For example, the SQL statements for the terms mentioned in sub-query2 are listed in figure 7. When the translator receives a SPARQL query, it establishes a basic graph pattern (BGP) of the SPARQL query as defined in [7]. For example, the BGP representing sub-query2 is shown in figure 8. Then, each edge in this graph is associated with the suitable SQL statement (from those mentioned above) and a unique alias is generated for this statement. The start node of the edge is associated with the first selected column in the statement (C0) and the end node with the second one (C1). The set of statements representing all the edges in the graph forms the FROM clause of the final SQL query. lo2:firstName SELECT person.personId AS C0, person.firstName AS C1 FROM person lo2:lastName SELECT person.personId A...
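The edge-to-SQL composition described in this context can be sketched roughly as follows (an illustrative sketch, not the OWSCIS implementation; the property-to-SQL statements echo the lo2:firstName example above, and all other names are invented):

```python
# Each BGP edge (start node, property, end node) carries a
# property-level SQL statement whose selected columns are aliased
# C0 (domain) and C1 (range), as described in the cited context.
PROPERTY_SQL = {
    "lo2:firstName": "SELECT person.personId AS C0, person.firstName AS C1 FROM person",
    "lo2:lastName": "SELECT person.personId AS C0, person.lastName AS C1 FROM person",
}

def translate_bgp(edges):
    """Compose a SQL query: each edge's statement becomes a FROM entry
    under a fresh alias; a node seen a second time yields a join
    condition; a variable node (leading '?') is projected."""
    from_parts, where, select, bound = [], [], [], {}
    for i, (start, prop, end) in enumerate(edges):
        alias = f"t{i}"
        from_parts.append(f"({PROPERTY_SQL[prop]}) {alias}")
        for node, col in ((start, "C0"), (end, "C1")):
            ref = f"{alias}.{col}"
            if node in bound:
                where.append(f"{bound[node]} = {ref}")  # shared node: join
            else:
                bound[node] = ref
                if node.startswith("?"):
                    select.append(f"{ref} AS {node[1:]}")
    sql = f"SELECT {', '.join(select)} FROM {', '.join(from_parts)}"
    if where:
        sql += " WHERE " + " AND ".join(where)
    return sql

sql = translate_bgp([("?p", "lo2:firstName", "?fn"), ("?p", "lo2:lastName", "?ln")])
print(sql)
```

The shared node ?p appears in both edges, so the two aliased statements are joined on their C0 columns, mirroring how the translator derives join conditions from the graph structure.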

Scientific workflow provenance metadata management using an RDBMS

by Artem Chebotko, Xubo Fei, Shiyong Lu, Farshad Fotouhi , 2007
Cited by 1 (1 self)
Abstract. Provenance management has become increasingly important to support scientific discovery reproducibility, result interpretation, and problem diagnosis in scientific workflow environments. This paper proposes an approach to provenance management that seamlessly integrates the interoperability, extensibility, and reasoning advantages of Semantic Web technologies with the storage and querying power of an RDBMS. Specifically, we propose: i) two schema mapping algorithms to map an arbitrary OWL provenance ontology to a relational database schema that is optimized for common provenance queries; ii) three efficient data mapping algorithms to map provenance RDF metadata to relational data according to the generated relational database schema, and iii) a schema-independent SPARQL-to-SQL translation algorithm that is optimized on-the-fly by using the type information of an instance available from the input provenance ontology and the statistics of the sizes of the tables in the database. While the schema mapping and query translation and optimization algorithms are applicable to general RDF storage and query systems, the data mapping algorithms are optimized for and applicable only to scientific workflow provenance metadata. Moreover, we extend SPARQL with negation, aggregation, and set operations to support additional important provenance queries. Experimental results are presented to show that our algorithms are efficient and scalable. The comparison with existing RDF stores, Jena and Sesame, showed that our optimizations result in improved performance and scalability for provenance metadata management.

Citation Context

....p As p, t1.o As o, t2.o As x From WorkflowSubject t1, inputParameter t2 Where t1.i=t2.s ) t3 Note that our system supports the translation and evaluation of arbitrarily complex optional graph patterns [19], group patterns, and value constraints, which are beyond the scope of this paper. We illustrate by example some of these features in the next section. 4.2 Provenance queries A wide range of scientific queries c...

SPARQL Query Containment under SHI Axioms

by Melisachew Wudage, Jérôme Euzenat, Pierre Genevès, Nabil Layaïda
Cited by 1 (0 self)
SPARQL query containment under schema axioms is the problem of determining whether, for any RDF graph satisfying a given set of schema axioms, the answers to a query are contained in the answers of another query. This problem has major applications for verification and optimization of queries. In order to solve it, we rely on the µ-calculus. Firstly, we provide a mapping from RDF graphs into transition systems. Secondly, SPARQL queries and RDFS and SHI axioms are encoded into µ-calculus formulas. This allows us to reduce query containment and equivalence to satisfiability in the µ-calculus. Finally, we prove a double exponential upper bound for containment under SHI schema axioms.

Citation Context

...007) that established the optimal complexity for XPath query containment and provided an effective implementation. Studies on the translation of SPARQL into relational algebra and SQL (Cyganiak 2005; Chebotko et al. 2006) indicate a close connection between SPARQL and relational algebra in terms of expressiveness. In (Polleres 2007), a translation of SPARQL queries into a datalog fragment (non-recursive datalog with ...

RDFMatView: Indexing RDF data using Materialized SPARQL Queries

by Roger Castillo, Christian Rothe, Ulf Leser - In International Workshop on Scalable Semantic Web Knowledge Base Systems (SSWS), 2010
Cited by 1 (0 self)
Abstract. The Semantic Web aims to create a universal medium for the exchange of semantically tagged data. The idea of representing and querying this information by means of directed labelled graphs, i.e., RDF and SPARQL, has been widely accepted by the scientific community. However, even when most current implementations of RDF/SPARQL are based on ad-hoc storage systems, processing complex queries on large data sets incurs a high number of joins, which may slow down performance. In this article we propose materialized SPARQL queries as indexes on RDF data sets to reduce the number of necessary joins and thus query processing time. We provide a formal definition of materialized SPARQL queries, a cost model to evaluate their impact on query performance, a storage scheme for the materialization, and an algorithm to find the optimal set of indexes given a query. We also present and evaluate different approaches to integrate materialized queries into an existing SPARQL query engine. An evaluation shows that our approach can drastically decrease the query processing time compared to a direct evaluation.

Citation Context

...rlying database. Method 2: MatView-to-SQL Engine Rewriting engine which, unlike our first method, translates the residual part of the query into a SQL query using an algorithm proposed by Chebotko in [23]. The SQL query is executed by the RDBMS which evaluates the query using the Jena tables. The result set is processed using our dictionary and combined with the results of the cover. The complete quer...

Using Description Logics for the Provision of Context-Driven Content Adaptation Services

by Stephen J. H. Yang, Jia Zhang, Jeff J. S. Huang, Jeffrey J. P. Tsai, Jia Zhang , 2010
This paper presents our design and development of a description logics-based planner for providing context-driven content adaptation services. This approach dynamically transforms requested Web content into a proper format conforming to receiving contexts (e.g., access condition, network connection, and receiving device). Aiming to establish a semantic foundation for content adaptation, we apply description logics to formally define context profiles and requirements. We also propose a formal Object Structure Model as the basis of content adaptation management for higher reusability and adaptability. To automate content adaptation decision, our content adaptation planner is driven by a stepwise procedure equipped with algorithms and techniques to enable rule-based context-driven content adaptation over the mobile Internet. Experimental results prove the effectiveness and efficiency of our content adaptation planner on saving transmission bandwidth, when users are using handheld devices. By reducing the size of adapted content, we moderately decrease the computational overhead caused by content adaptation.

Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University