Results 1 - 3 of 3
Models and Techniques for Proving Data Structure Lower Bounds
, 2013
Abstract

Cited by 1 (1 self)
In this dissertation, we present a number of new techniques and tools for proving lower bounds on the operational time of data structures. These techniques provide new lines of attack for proving lower bounds in the cell probe model, the group model, the pointer machine model, and the I/O model. In all cases, we push the frontiers further by proving lower bounds higher than what could possibly be proved using previously known techniques. For the cell probe model, our results have the following consequences:
• The first Ω(lg n) query time lower bound for linear-space static data structures. The highest previous lower bound for any static data structure problem peaked at Ω(lg n / lg lg n).
• An Ω((lg n / lg lg n)²) lower bound on the maximum of the update time and the query time of dynamic data structures. This is almost a quadratic improvement over the highest previous lower bound of Ω(lg n).
In the group model, we establish a number of intimate connections to the fields of combinatorial discrepancy and range reporting in the pointer machine
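For context on what these dynamic lower bounds rule out, here is a minimal sketch (not from the dissertation; a standard textbook structure) of a Fenwick tree, a classic dynamic data structure that achieves O(lg n) update and prefix-sum query time, i.e. the kind of upper bound that the Ω((lg n / lg lg n)²) result shows cannot be matched for harder dynamic problems:

```python
# A Fenwick (binary indexed) tree: a standard dynamic data structure with
# O(lg n) worst-case update and prefix-sum query time. Shown purely to
# illustrate the operations whose cost the cell probe lower bounds constrain.
class FenwickTree:
    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)  # 1-indexed internal array

    def update(self, i, delta):
        """Add delta to element i (1-indexed), touching O(lg n) cells."""
        while i <= self.n:
            self.tree[i] += delta
            i += i & (-i)  # jump to the next node responsible for i

    def prefix_sum(self, i):
        """Return the sum of elements 1..i, touching O(lg n) cells."""
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & (-i)  # strip the lowest set bit
        return s
```

Each operation follows a root-to-leaf path of length O(lg n) in an implicit binary decomposition, so both update and query probe O(lg n) memory cells.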
Succinct indices for path minimum, with applications to path reporting
, 2014
Abstract

Cited by 1 (1 self)
In the path minimum query problem, we preprocess a tree on n weighted nodes such that, given an arbitrary path, we can locate the node with the smallest weight along this path. We design novel succinct indices for this problem; one of our index structures supports queries in O(α(m, n)) time and occupies O(m) bits of space in addition to the space required for the input tree, where m is an integer greater than or equal to n and α(m, n) is the inverse-Ackermann function. These indices give us the first succinct data structures for the path minimum problem, and allow us to obtain new data structures for path reporting queries, which report the nodes along a query path whose weights are within a query range. We achieve three different time/space tradeoffs for path reporting by designing (a) an O(n)-word structure with O(lg^ε n + occ · lg^ε n) query time, where occ is the number of nodes reported; (b) an O(n lg lg n)-word structure with O(lg lg n + occ · lg lg n) query time; and (c) an O(n lg^ε n)-word structure with O(lg lg n + occ) query time. These tradeoffs match the state of the art of two-dimensional orthogonal range reporting queries [8], which can be treated as a special case of path reporting queries. When the number of distinct weights is much smaller than n, we further improve both the query time and the space cost of these three results.
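To make the query concrete, here is a naive baseline (our own illustration, not the paper's succinct encoding): walk both endpoints up to their lowest common ancestor, tracking the smallest weight seen. It runs in time proportional to the path length, versus the O(α(m, n)) query time of the succinct indices above. The parent/depth/weight array representation is an assumption made for the sketch:

```python
# Naive path minimum query: lift both endpoints to their lowest common
# ancestor (LCA), tracking the minimum weight seen along the way.
# parent[r] == r for the root; depth[] and weight[] are per-node arrays.
def path_min(u, v, parent, depth, weight):
    best = min(weight[u], weight[v])
    # Lift the deeper endpoint until both sit at the same depth.
    while depth[u] > depth[v]:
        u = parent[u]
        best = min(best, weight[u])
    while depth[v] > depth[u]:
        v = parent[v]
        best = min(best, weight[v])
    # Lift both in lockstep until they meet at the LCA.
    while u != v:
        u, v = parent[u], parent[v]
        best = min(best, weight[u], weight[v])
    return best
```

The succinct indices in the paper replace this linear walk with constant-size precomputed structures, which is why their query time drops to near-constant while the extra space stays at O(m) bits.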
Sampling in Space Restricted Settings
Abstract
Abstract. Space-efficient algorithms play a central role in dealing with large amounts of data. In such settings, one would like to analyse the large data using a small amount of “working space”. One of the key steps in many algorithms for analysing large data is to maintain a (or a small number of) random sample(s) from the data points. In this paper, we consider two space-restricted settings: (i) the streaming model, where data arrives over time and one can use only a small amount of storage, and (ii) the query model, where we can structure the data in low space and answer sampling queries. We prove the following results in the above two settings:
– In the streaming setting, we would like to maintain a random sample from the elements seen so far. We prove that one can maintain a random sample using O(log n) random bits and O(log n) space, where n is the number of elements seen so far. We can extend this to the case when elements have weights as well.
– In the query model, there are n elements with weights w1, ..., wn (which are w-bit integers) and one would like to sample a random element with probability proportional to its weight. Bringmann and Larsen (STOC 2013) showed how to sample such an element using nw + 1 space (whereas the information-theoretic lower bound is nw). We consider the approximate sampling problem, where we are given an error parameter ε, and the sampling probability of an element can be off by an ε factor. We give matching upper and lower bounds for this problem.
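The streaming problem above is the classic setting of reservoir sampling. The textbook method below (Algorithm R, restricted to a single sample) uses O(log n) words of space but far more than the O(log n) random bits of the scheme claimed in the abstract; it is shown only as a sketch of the problem, not the paper's solution:

```python
# Reservoir sampling, single-element variant: maintain a uniform random
# sample from a stream of unknown length. Uses fresh randomness per element,
# so it needs many more random bits than the O(log n)-bit scheme in the paper.
import random

def reservoir_sample(stream):
    sample = None
    for n, item in enumerate(stream, start=1):
        # Replace the current sample with probability 1/n. A short induction
        # shows each of the first n elements is retained with probability 1/n.
        if random.randrange(n) == 0:
            sample = item
    return sample
```

The invariant is easy to check: after element n arrives, it is the sample with probability 1/n, and any earlier element survives with probability (1/(n-1)) · (1 - 1/n) = 1/n.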