Results 1–10 of 18
The ADEPT Digital Library Architecture
, 2002
"... The Alexandria Digital Earth ProtoType (ADEPT) architecture is a framework for building distributed digital libraries of georeferenced information. An ADEPT system comprises one or more autonomous libraries, each of which provides a uniform interface to one or more collections, each of which man ..."
Abstract

Cited by 48 (7 self)
The Alexandria Digital Earth ProtoType (ADEPT) architecture is a framework for building distributed digital libraries of georeferenced information. An ADEPT system comprises one or more autonomous libraries, each of which provides a uniform interface to one or more collections, each of which manages metadata for one or more items. The primary standard on which the architecture is based is the ADEPT bucket framework, which defines uniform client-level metadata query services that are compatible with heterogeneous underlying collections. ADEPT functionality strikes a balance between the simplicity of Web document delivery and the richness of Z39.50. The current ADEPT implementation runs as servlet-based middleware and supports collections housed in arbitrary relational databases.
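The layered system/library/collection design described above can be sketched as a minimal interface hierarchy. Everything below, including all class and method names, is a hypothetical illustration of a uniform query interface over heterogeneous collections, not the actual ADEPT bucket API:

```python
from abc import ABC, abstractmethod

class Collection(ABC):
    """Uniform client-level metadata query interface (hypothetical sketch)."""
    @abstractmethod
    def search(self, query: dict) -> list:
        ...

class RelationalCollection(Collection):
    """Backs the uniform interface with a store of metadata records.
    A real deployment would sit on a relational database; here the
    'store' is just a list of dicts for illustration."""
    def __init__(self, records):
        self.records = records

    def search(self, query):
        # Return every item whose metadata matches all query fields.
        return [r for r in self.records
                if all(r.get(k) == v for k, v in query.items())]

class Library:
    """An autonomous library exposing one or more collections uniformly."""
    def __init__(self, collections):
        self.collections = collections

    def search(self, query):
        results = []
        for c in self.collections:
            results.extend(c.search(query))
        return results
```

A client then issues the same `search` call regardless of how each collection stores its metadata.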
Efficient Aggregation over Objects with Extent (Extended Abstract)
 Tech Report UCR-CS-01-01, CS Dept
, 2002
"... We examine the problem of efficiently computing sum/count/avg aggregates over... ..."
Abstract

Cited by 38 (8 self)
We examine the problem of efficiently computing sum/count/avg aggregates over...
Analysis of Predictive Spatio-Temporal Queries
 TODS
, 2003
"... this paper we present probabilistic cost models that estimate the selectivity of spatiotemporal window queries and joins, and the expected distance between a query and its nearest neighbor(s). Our models capture any query/object mobility combination (moving queries, moving objects or both) and any ..."
Abstract

Cited by 29 (5 self)
In this paper we present probabilistic cost models that estimate the selectivity of spatio-temporal window queries and joins, and the expected distance between a query and its nearest neighbor(s). Our models capture any query/object mobility combination (moving queries, moving objects, or both) and any data type (points and rectangles) in arbitrary dimensionality. In addition, we develop specialized spatio-temporal histograms, which take into account both location and velocity information, and can be incrementally maintained. Extensive performance evaluation verifies that the proposed techniques produce highly accurate estimates on both uniform and non-uniform data.
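A coarse sketch of the idea behind such histograms, assuming a simple grid bucketing over both location and velocity (the paper's bucket geometry, cost models, and maintenance are considerably more sophisticated): the selectivity of a future-time window query is estimated by advancing each bucket's location centre along its velocity centre.

```python
import numpy as np

def build_histogram(points, velocities, bins=8, bounds=(0.0, 1.0)):
    """Bucket objects by current location AND velocity: a 4-D grid
    histogram over (x, y, vx, vy). Bucket geometry is an assumption."""
    data = np.hstack([points, velocities])
    hist, edges = np.histogramdd(
        data, bins=bins, range=[bounds, bounds, (-1.0, 1.0), (-1.0, 1.0)])
    return hist, edges

def estimate_window_selectivity(hist, edges, window, t):
    """Estimate the fraction of objects inside `window` at time t by
    moving each bucket's location centre by its velocity centre."""
    (x0, x1), (y0, y1) = window
    centers = [(e[:-1] + e[1:]) / 2 for e in edges]
    total = hist.sum()
    hit = 0.0
    for ix, cx in enumerate(centers[0]):
        for iy, cy in enumerate(centers[1]):
            for iv, cvx in enumerate(centers[2]):
                for jv, cvy in enumerate(centers[3]):
                    px, py = cx + cvx * t, cy + cvy * t
                    if x0 <= px <= x1 and y0 <= py <= y1:
                        hit += hist[ix, iy, iv, jv]
    return hit / total
```

For static uniform data this reduces to the usual area-based estimate; the velocity dimensions only matter once t > 0.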
Querying about the past, the present, and the future in spatio-temporal databases
 In ICDE
, 2004
"... Moving objects (e.g., vehicles in road networks) continuously generate large amounts of spatiotemporal information in the form of data streams. Efficient management of such streams is a challenging goal due to the highly dynamic nature of the data and the need for fast, online computations. In thi ..."
Abstract

Cited by 23 (0 self)
Moving objects (e.g., vehicles in road networks) continuously generate large amounts of spatio-temporal information in the form of data streams. Efficient management of such streams is a challenging goal due to the highly dynamic nature of the data and the need for fast, online computations. In this paper we present a novel approach for approximate query processing about the present, the past, or the future in spatio-temporal databases. In particular, we first propose an incrementally updateable, multidimensional histogram for present-time queries. Second, we develop a general architecture for maintaining and querying historical data. Third, we implement a stochastic approach for predicting the results of queries that refer to the future. Finally, we experimentally prove the effectiveness and efficiency of our techniques using a realistic simulation.
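The first ingredient, an incrementally updateable multidimensional histogram, can be sketched as a sparse grid of cell counts that absorbs inserts and deletes in O(1). This is a minimal illustration of the concept, not the paper's structure:

```python
from collections import defaultdict

class IncrementalGridHistogram:
    """Sparse multi-dimensional grid histogram maintained under a
    stream of inserts/deletes (cell width is an assumed parameter)."""
    def __init__(self, cell=0.1):
        self.cell = cell
        self.counts = defaultdict(int)
        self.total = 0

    def _key(self, p):
        # Map a point to its grid cell in any dimensionality.
        return tuple(int(x // self.cell) for x in p)

    def insert(self, p):
        self.counts[self._key(p)] += 1
        self.total += 1

    def delete(self, p):
        k = self._key(p)
        self.counts[k] -= 1
        if self.counts[k] == 0:
            del self.counts[k]
        self.total -= 1

    def estimate(self, low, high):
        """Estimated count inside the axis-aligned box [low, high]:
        sum the cells whose centre falls inside the box."""
        hit = 0
        for k, c in self.counts.items():
            center = [(i + 0.5) * self.cell for i in k]
            if all(l <= x <= h for x, l, h in zip(center, low, high)):
                hit += c
        return hit
```

Because every update touches a single cell, the structure stays current over a stream without rebuilds.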
Approximation techniques for spatial data
, 2004
"... Spatial Database Management Systems (SDBMS), e.g., Geographical Information Systems, that manage spatial objects such as points, lines, and hyperrectangles, often have very high query processing costs. Accurate selectivity estimation during query optimization therefore is crucially important for fi ..."
Abstract

Cited by 11 (0 self)
Spatial Database Management Systems (SDBMS), e.g., Geographical Information Systems, that manage spatial objects such as points, lines, and hyper-rectangles, often have very high query processing costs. Accurate selectivity estimation during query optimization is therefore crucially important for finding good query plans, especially when spatial joins are involved. Selectivity estimation has been studied for relational database systems, but has received little attention in SDBMS to date. In this paper, we introduce novel methods that permit high-quality selectivity estimation for spatial joins and range queries. Our techniques can be constructed in a single scan over the input and handle inserts and deletes to the database incrementally; hence they can also be used for processing streaming spatial data. In contrast to previous approaches, our techniques return approximate results that come with provable probabilistic quality guarantees. We present a detailed analysis and experimentally demonstrate the efficacy of the proposed techniques.
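A generic stand-in for a single-scan estimator with probabilistic guarantees is reservoir sampling over the rectangle stream; note that plain reservoir sampling handles inserts but, unlike the paper's techniques, not deletes. All names below are illustrative:

```python
import random

def intersects(a, b):
    # Rectangles as (x0, y0, x1, y1); overlap test on both axes.
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

class ReservoirSelectivityEstimator:
    """Maintains a uniform sample of a rectangle stream in one scan
    (Algorithm R) and scales sample hits up to a selectivity estimate."""
    def __init__(self, capacity=500, seed=0):
        self.capacity = capacity
        self.sample = []
        self.n = 0
        self.rng = random.Random(seed)

    def insert(self, rect):
        self.n += 1
        if len(self.sample) < self.capacity:
            self.sample.append(rect)
        else:
            j = self.rng.randrange(self.n)
            if j < self.capacity:
                self.sample[j] = rect

    def estimate_range(self, window):
        """Estimate how many stored rectangles intersect `window`."""
        if not self.sample:
            return 0.0
        hits = sum(1 for r in self.sample if intersects(r, window))
        return hits / len(self.sample) * self.n
```

Standard Chernoff-style bounds on the sample give the "probabilistic quality guarantee" flavor, though the paper's sketches achieve stronger properties.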
The Power-method: A comprehensive estimation technique for multi-dimensional queries
 In CIKM
, 2003
"... Existing estimation approaches for multidimensional databases often rely on the assumption that data distribution in a small region is uniform, which seldom holds in practice. Moreover, their applicability is limited to specific estimation tasks under certain distance metric. This paper develops th ..."
Abstract

Cited by 8 (0 self)
Existing estimation approaches for multi-dimensional databases often rely on the assumption that the data distribution in a small region is uniform, which seldom holds in practice. Moreover, their applicability is limited to specific estimation tasks under certain distance metrics. This paper develops the Power-method, a comprehensive technique applicable to a wide range of query optimization problems under various metrics. The Power-method eliminates the local uniformity assumption and is accurate even in scenarios where existing approaches completely fail. Furthermore, it performs estimation by evaluating only one simple formula with minimal computational overhead. Extensive experiments confirm that the Power-method outperforms previous techniques in terms of accuracy and applicability to various optimization scenarios.
Spatial Query Estimation without the Local Uniformity Assumption
"... Existing estimation approaches for spatial databases often rely on the assumption that data distribution in a small region is uniform, which seldom holds in practice. Moreover, their applicability is limited to specific estimation tasks under certain distance metric. This paper develops the Powerme ..."
Abstract

Cited by 3 (0 self)
Existing estimation approaches for spatial databases often rely on the assumption that the data distribution in a small region is uniform, which seldom holds in practice. Moreover, their applicability is limited to specific estimation tasks under certain distance metrics. This paper develops the Power-method, a comprehensive technique applicable to a wide range of query optimization problems under both the L∞ and L2 metrics. The Power-method eliminates the local uniformity assumption and is, therefore, accurate even for datasets where existing approaches fail. Furthermore, it performs estimation by evaluating only one simple formula with minimal computational overhead. Extensive experiments confirm that the Power-method outperforms previous techniques in terms of accuracy and applicability to various optimization scenarios.
Efficient Temporal Counting with Bounded Error
 VLDB Journal
, 2008
"... This paper studies aggregate search in transaction time databases. Specifically, each object in such a database can be modeled as a horizontal segment, whose yprojection is its search key, and its xprojection represents the period when the key was valid in history. Given a query timestamp qt and a ..."
Abstract

Cited by 3 (0 self)
This paper studies aggregate search in transaction-time databases. Specifically, each object in such a database can be modeled as a horizontal segment, whose y-projection is its search key, and whose x-projection represents the period when the key was valid in history. Given a query timestamp qt and a key range qk, a count query retrieves the number of objects that are alive at qt and whose keys fall in qk. We provide a method that accurately answers such queries, with error less than 1/ε + ε · N_alive(qt), where N_alive(qt) is the number of objects alive at time qt, and ε is any constant in (0, 1]. Denoting the disk page size as B, and n = N/B, our technique requires O(n) space, processes any query in O(log_B n) time, and supports each update in O(log_B n) amortized I/Os. As demonstrated by extensive experiments, the proposed solutions guarantee query results with extremely high precision (median relative error below 5%), while consuming only a fraction of the space occupied by existing approaches that promise precise results.
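The query semantics can be pinned down with an exact brute-force baseline; the paper's contribution is answering the same query approximately, within the stated error bound, in O(log_B n) I/Os rather than a full scan:

```python
def count_alive(segments, qt, qk):
    """Exact baseline for the count query over a transaction-time
    database: number of objects whose lifespan [t0, t1) covers the
    query timestamp qt and whose key lies in the range qk = (k0, k1).
    `segments` is a list of (key, t0, t1) horizontal segments."""
    k0, k1 = qk
    return sum(1 for key, t0, t1 in segments
               if t0 <= qt < t1 and k0 <= key <= k1)
```

Each tuple plays the role of one horizontal segment: the key is its y-projection and [t0, t1) its x-projection.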
Summarizing level-two topological relations in large spatial datasets
 TODS
"... Summarizing topological relations is fundamental to many spatial applications including spatial query optimization. In this paper, we present several novel techniques to effectively construct cell density based spatial histograms for range (window) summarizations restricted to the four most importan ..."
Abstract

Cited by 1 (1 self)
Summarizing topological relations is fundamental to many spatial applications, including spatial query optimization. In this paper, we present several novel techniques to effectively construct cell-density-based spatial histograms for range (window) summarizations restricted to the four most important level-two topological relations: contains, contained, overlap, and disjoint. We first present a novel framework to construct a multi-scale Euler histogram in 2D space with the guarantee of exact summarization results for aligned windows in constant time. To minimize the storage space of such a multi-scale Euler histogram, an approximation algorithm with approximation ratio 19/12 is presented, while the general problem is shown to be NP-hard. To conform to a limited storage space, where a multi-scale histogram may be allowed to have only k Euler histograms, an effective algorithm is presented to construct multi-scale histograms that achieve high accuracy in approximately summarizing aligned windows. Then, we present a new approximation algorithm, running in constant time, to query an Euler histogram that cannot guarantee exact answers. We also investigate the problem of non-aligned windows and the problem of effectively partitioning the data space to support non-aligned window queries. Finally, we extend our techniques to 3D space. Our extensive experiments against both synthetic and real-world datasets demonstrate that the approximate multi-scale histogram techniques may improve the accuracy of existing techniques by several orders of magnitude while retaining cost efficiency, and that the exact multi-scale histogram technique requires storage space only linearly proportional to the number of cells for many popular real datasets.
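For intuition, here is the Euler-histogram principle in one dimension: each object contributes (the number of window cells it overlaps) minus (the number of window-interior cell boundaries it crosses), which always equals 1, so aligned-window distinct counts are exact. This 1-D sketch is illustrative only; the paper's structures are 2-D/3-D, multi-scale, and distinguish the topological relations:

```python
class EulerHistogram1D:
    """1-D analogue of an Euler histogram over grid-aligned intervals.
    F0[i] counts objects overlapping cell i; F1[i] counts objects
    spanning the boundary between cells i and i + 1."""
    def __init__(self, n_cells):
        self.F0 = [0] * n_cells
        self.F1 = [0] * (n_cells - 1)

    def insert(self, lo, hi):
        # Object occupies cells lo..hi (inclusive, aligned to the grid).
        for i in range(lo, hi + 1):
            self.F0[i] += 1
        for i in range(lo, hi):
            self.F1[i] += 1

    def count(self, a, b):
        """Exact number of distinct objects intersecting the aligned
        window of cells a..b: cell counts minus interior-boundary
        counts (each intersecting object nets exactly 1)."""
        return sum(self.F0[a:b + 1]) - sum(self.F1[a:b])
```

The subtraction prevents an object spanning several window cells from being counted more than once, which is precisely what a plain density histogram gets wrong.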
unknown title
"... TPRtree is a practical index structure for moving object databases. Due to the uniform distribution assumption, TPRtree’s bulk loading algorithm (TPR) is relatively inefficient in dealing with nonuniform datasets. In this paper we present a histogrambased bottom up algorithm (HBU) along with a m ..."
Abstract
The TPR-tree is a practical index structure for moving-object databases. Due to its uniform-distribution assumption, the TPR-tree's bulk-loading algorithm (TPR) is relatively inefficient in dealing with non-uniform datasets. In this paper we present a histogram-based bottom-up algorithm (HBU), along with a modified top-down greedy split algorithm (TGS), for the TPR-tree. HBU uses histograms to refine tree structures for different distributions. Empirical studies show that HBU outperforms both TPR and TGS for all kinds of non-uniform datasets, is relatively stable over varying degrees of skewness, and performs better for large datasets and large query windows.