Results 1–10 of 25
Compact representations of ordered sets
In Proc. 15th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), 2004
"... We consider the problem of efficiently representing sets S of size n from an ordered universe U = {0,...,m−1}. Given any ordered dictionary structure (or comparisonbased ordered set structure) D that uses O(n) pointers, we demonstrate a simple blocking technique that produces an ordered set struct ..."
Abstract

Cited by 22 (3 self)
We consider the problem of efficiently representing sets S of size n from an ordered universe U = {0,...,m−1}. Given any ordered dictionary structure (or comparison-based ordered set structure) D that uses O(n) pointers, we demonstrate a simple blocking technique that produces an ordered set structure supporting the same operations in the same time bounds but with O(n log((m+n)/n)) bits. This is within a constant factor of the information-theoretic lower bound. We assume the unit-cost RAM model with word size Ω(log |U|) and a table of size O(m^α log^2 m) bits, for some constant α > 0. The time bound for our operations contains a factor of 1/α. We present experimental results for the STL (C++ Standard Template Library) implementation of Red-Black trees, and for an implementation of Treaps. We compare the implementations with blocking and without blocking. The blocking variants use a factor of between 1.5 and 10 less space, depending on the density of the set.
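The blocking idea above can be sketched briefly. This is a hedged illustration, not the paper's code: the class name, the fixed block size, and the plain sorted list of block minima (standing in for the arbitrary O(n)-pointer dictionary D) are choices of this sketch.

```python
import bisect

class BlockedOrderedSet:
    """Blocks of consecutive elements, difference-encoded; a sorted list
    of block minima plus binary search stands in for the paper's
    arbitrary O(n)-pointer ordered dictionary D."""

    def __init__(self, sorted_elems, block_size=4):
        self.blocks = []
        for i in range(0, len(sorted_elems), block_size):
            chunk = sorted_elems[i:i + block_size]
            gaps = [b - a for a, b in zip(chunk, chunk[1:])]
            self.blocks.append((chunk[0], gaps))    # (minimum, gap list)
        self.mins = [m for m, _ in self.blocks]

    def __contains__(self, x):
        j = bisect.bisect_right(self.mins, x) - 1   # block that could hold x
        if j < 0:
            return False
        v, gaps = self.blocks[j]
        if v == x:
            return True
        for g in gaps:                              # decode the block on the fly
            v += g
            if v == x:
                return True
        return False
```

Lookup touches the structure over the block minima plus one decoded block, so the underlying dictionary's time bounds carry over; the space saving comes from storing small gaps instead of full-word pointers inside blocks.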
An Experimental Analysis of a Compact Graph Representation
In ALENEX, 2004
"... In previous work we described a method for compactly representing graphs with small separators, which makes use of small separators, and presented preliminary experimental results. In this paper we extend the experimental results in several ways, including extensions for dynamic insertion and deleti ..."
Abstract

Cited by 18 (6 self)
In previous work we described a method, based on graph separators, for compactly representing graphs that have small separators, and presented preliminary experimental results. In this paper we extend the experimental results in several ways, including extensions for dynamic insertion and deletion of edges, a comparison of a variety of coding schemes, and an implementation of two applications using the representation.
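As background for the kind of compression involved, here is a hedged sketch of difference-encoded adjacency lists: once vertices are relabeled so that neighbors get nearby numbers, each list can be stored as small signed gaps in a variable-length byte code. The helper names and the zigzag/varint coding are illustrative choices of this sketch, not the paper's exact scheme.

```python
def zigzag(x):
    """Map signed to unsigned so small magnitudes get small codes."""
    return (x << 1) ^ (x >> 63)        # Python ints: x >> 63 is 0 or -1

def encode_signed(x):
    """Varint byte coding of one zigzagged integer (7 payload bits/byte)."""
    u, out = zigzag(x), bytearray()
    while True:
        b, u = u & 0x7F, u >> 7
        out.append(b | 0x80 if u else b)
        if not u:
            return bytes(out)

def encode_adj(v, neighbors):
    """Encode neighbors of v as gaps: first from v, then consecutive."""
    ns = sorted(neighbors)
    gaps = [ns[0] - v] + [b - a for a, b in zip(ns, ns[1:])]
    return b"".join(encode_signed(g) for g in gaps)

def decode_adj(v, data):
    out, u, shift, prev = [], 0, 0, None
    for byte in data:
        u |= (byte & 0x7F) << shift
        shift += 7
        if not byte & 0x80:            # last byte of this varint
            g = (u >> 1) ^ -(u & 1)    # undo zigzag
            prev = (v if prev is None else prev) + g
            out.append(prev)
            u = shift = 0
    return out
```

For graphs with small separators, a separator-based relabeling makes most gaps small, so most neighbors cost only one byte here; a bit-level code such as gamma coding gets closer to the compression such schemes report.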
Incremental Construction of the Delaunay Triangulation and the Delaunay Graph in Medium Dimension
2009
"... We describe a new implementation of the wellknown incremental algorithm for constructing Delaunay triangulations in any dimension. Our implementation follows the exact computing paradigm and is fully robust. Extensive comparisons show that our implementation outperforms the best currently available ..."
Abstract

Cited by 13 (1 self)
We describe a new implementation of the well-known incremental algorithm for constructing Delaunay triangulations in any dimension. Our implementation follows the exact computing paradigm and is fully robust. Extensive comparisons show that our implementation outperforms the best currently available codes for exact convex hulls and Delaunay triangulations, compares very well to the fast non-exact Qhull implementation, and can be used for quite large input sets in spaces of dimension up to 6. To circumvent prohibitive memory usage, we also propose a modification of the algorithm that uses and stores only the Delaunay graph (the edges of the full triangulation). We show that a careful implementation of the modified algorithm performs only 6 to 8 times slower than the original algorithm while drastically reducing memory usage in dimension 4 or above.
Compact Dictionaries for Variable-Length Keys and Data, with Applications
2007
"... We consider the problem of maintaining a dynamic dictionary T of keys and associated data for which both the keys and data are bit strings that can vary in length from zero up to the length w of a machine word. We present a data structure for this variablebitlength dictionary problem that supports ..."
Abstract

Cited by 9 (0 self)
We consider the problem of maintaining a dynamic dictionary T of keys and associated data for which both the keys and data are bit strings that can vary in length from zero up to the length w of a machine word. We present a data structure for this variable-bit-length dictionary problem that supports constant-time lookup and expected amortized constant-time insertion and deletion. It uses O(m + 3n − n log₂ n) bits, where n is the number of elements in T, and m is the total number of bits across all strings in T (keys and data). Our dictionary uses an array A[1...n] in which locations store variable-bit-length strings. We present a data structure for this variable-bit-length array problem that supports worst-case constant-time lookups and updates and uses O(m + n) bits, where m is the total number of bits across all strings stored in A. The motivation for these structures is to support applications for which it is helpful to efficiently store short bit strings of varying length. We present several applications, including representations for semi-dynamic graphs, order queries on integer sets, cardinal trees with varying cardinality, and simplicial meshes of d dimensions. These results either generalize or simplify previous results.
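A naive stand-in illustrates the variable-bit-length array interface (fixed slots, bit-string values packed into one buffer). It spends O(log m) bits per stored offset and would rebuild on length-changing updates, both of which the paper's O(m + n)-bit structure avoids; the class and method names are mine.

```python
class PackedVarBitArray:
    """Packed storage for n variable-length bit strings: all strings
    live in one bit buffer, an offset table gives constant-time lookup.
    Toy version only: built once, no length-changing updates."""

    def __init__(self, strings):
        self.offsets, self.lens, self.buf = [], [], 0
        pos = 0
        for s in strings:                 # each s is a bit string like "1011"
            self.offsets.append(pos)
            self.lens.append(len(s))
            if s:
                self.buf |= int(s, 2) << pos
            pos += len(s)

    def lookup(self, i):
        """Return the i-th bit string by masking it out of the buffer."""
        n = self.lens[i]
        if n == 0:
            return ""
        chunk = (self.buf >> self.offsets[i]) & ((1 << n) - 1)
        return format(chunk, "b").zfill(n)
```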
SQuad: Compact representation for triangle meshes
In Computer Graphics Forum, 2011
"... The SQuad data structure represents the connectivity of a triangle mesh by its “S table ” of about 2 rpt (integer references per triangle). Yet it allows for a simple implementation of expected constanttime, randomaccess operators for traversing the mesh, including inorder traversal of the triang ..."
Abstract

Cited by 9 (4 self)
The SQuad data structure represents the connectivity of a triangle mesh by its “S table” of about 2 rpt (integer references per triangle). Yet it allows for a simple implementation of expected constant-time, random-access operators for traversing the mesh, including in-order traversal of the triangles incident upon a vertex. SQuad is more compact than the Corner Table (CT), which stores 6 rpt, and than the recently proposed SOT, which stores 3 rpt. However, in-core access is generally faster in CT than in SQuad, and SQuad requires rebuilding the S table if the connectivity is altered. The storage reduction and memory-coherence opportunities it offers may help to reduce the frequency of page faults and cache misses when accessing elements of a mesh that does not fit in memory. We provide the details of a simple algorithm that builds the S table and of an optimized implementation of the SQuad operators.
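For concreteness, here is the 6-rpt Corner Table that the abstract uses as its baseline: per corner c, V[c] gives the vertex and O[c] the opposite corner, while next/prev are pure index arithmetic. This sketches the baseline CT, not SQuad's 2-rpt S table, and the tiny two-triangle mesh is my example.

```python
# Corner Table baseline: V and O each store 3 references per triangle,
# hence 6 rpt in total.

def tri(c):         return c // 3                     # triangle of corner c
def next_corner(c): return 3 * (c // 3) + (c + 1) % 3
def prev_corner(c): return 3 * (c // 3) + (c + 2) % 3

# Two triangles (0,1,2) and (2,1,3) sharing edge (1,2).
V = [0, 1, 2,   2, 1, 3]     # vertex of each corner
O = [5, -1, -1, -1, -1, 0]   # corner opposite c across its edge (-1: boundary)
```

Traversal operators like "swing around a vertex" compose next/prev with O; SQuad achieves the same operators from roughly a third of the references at the cost of a more involved decoding step.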
LR: compact connectivity representation for triangle meshes
In ACM SIGGRAPH 2011 Papers (SIGGRAPH ’11), 2011
"... Figure 1: The ring (black loop) delineates two corridors of triangles. Normal T1 triangles (cream/orange) have one ring edge, deadend T2 triangles (blue) have two ring edges, and T0 triangles (green) comprising bifurcations have no ring edges. Adjacent T0 (gray/red) and T2 triangles (left) are repr ..."
Abstract

Cited by 7 (3 self)
Figure 1: The ring (black loop) delineates two corridors of triangles. Normal T1 triangles (cream/orange) have one ring edge, dead-end T2 triangles (blue) have two ring edges, and T0 triangles (green) comprising bifurcations have no ring edges. Adjacent T0 (gray/red) and T2 triangles (left) are represented internally as inexpensive T1 triangles (right), thereby significantly reducing storage. Our LR representation supports random access to connectivity, storing on average only 1.08 references or 26.2 bits per triangle.

We propose LR (Laced Ring), a simple data structure for representing the connectivity of manifold triangle meshes. LR provides the option to store on average either 1.08 references per triangle or 26.2 bits per triangle. Its construction, from an input mesh that supports constant-time adjacency queries, has linear space and time complexity, and involves ordering most vertices along a nearly-Hamiltonian cycle. LR is best suited for applications that process meshes with fixed connectivity, as any changes to the connectivity require the data structure to be rebuilt. We provide an implementation of the set of standard random-access, constant-time operators for traversing a mesh, and show that LR often saves both space and traversal time over competing representations.
Estimation of Euler Characteristic from Point Data
"... Determination of the geometry and topology of a 3dimensional body is an important problem ..."
Abstract

Cited by 6 (0 self)
Determination of the geometry and topology of a 3-dimensional body is an important problem
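The abstract is cut short, but the quantity being estimated has a standard combinatorial definition: for a simplicial complex, the Euler characteristic is the alternating sum of its simplex counts. A minimal sketch of that definition (standard background, not the paper's point-data estimator):

```python
def euler_characteristic(simplex_counts):
    """chi = n0 - n1 + n2 - ..., where simplex_counts[k] is the number
    of k-simplices (vertices, edges, triangles, ...)."""
    return sum((-1) ** k * n for k, n in enumerate(simplex_counts))
```

For the boundary of a tetrahedron (4 vertices, 6 edges, 4 triangles) this gives 2, the Euler characteristic of a sphere.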
Engineering a Compact Parallel Delaunay Algorithm in 3D
In Proceedings of the ACM Symposium on Computational Geometry, 2006
"... We describe an implementation of a compact parallel algorithm for 3D Delaunay tetrahedralization on a 64processor sharedmemory machine. Our algorithm uses a concurrent version of the BowyerWatson incremental insertion, and a threadsafe spaceefficient structure for representing the mesh. Using t ..."
Abstract

Cited by 6 (0 self)
We describe an implementation of a compact parallel algorithm for 3D Delaunay tetrahedralization on a 64-processor shared-memory machine. Our algorithm uses a concurrent version of Bowyer-Watson incremental insertion, and a thread-safe, space-efficient structure for representing the mesh. Using the implementation we are able to generate significantly larger Delaunay meshes than have previously been generated: 10 billion tetrahedra on a 64-processor SMP using 200 GB of RAM. The implementation makes use of a locality-based relabeling of the vertices that serves three purposes: it is used as part of the space-efficient representation, it improves memory locality, and it reduces the overhead necessary for locks. The implementation also makes use of a caching technique to avoid excessive decoding of vertex information, a technique for backing out of insertions that collide, and a shared work queue for maintaining points that have yet to be inserted.
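The locality-based relabeling mentioned above can be illustrated with a space-filling-curve ordering. The concrete choice here (Morton/Z-order on quantized coordinates) is an assumption of this sketch, not necessarily the paper's scheme; it shows why nearby points end up with nearby labels, which keeps gap-encoded vertex references small and improves cache behavior.

```python
def interleave3(x):
    """Spread the low 10 bits of x two zero bits apart (standard masks)."""
    x &= 0x3FF
    x = (x | x << 16) & 0x030000FF
    x = (x | x << 8)  & 0x0300F00F
    x = (x | x << 4)  & 0x030C30C3
    x = (x | x << 2)  & 0x09249249
    return x

def morton3(x, y, z):
    """Interleave the bits of three 10-bit coordinates into one key."""
    return interleave3(x) | interleave3(y) << 1 | interleave3(z) << 2

def relabel(points, grid=1024):
    """Return vertex indices ordered by Morton code of quantized coords
    (points are (x, y, z) with coordinates in [0, 1])."""
    keys = [morton3(int(px * (grid - 1)), int(py * (grid - 1)),
                    int(pz * (grid - 1))) for px, py, pz in points]
    return sorted(range(len(points)), key=lambda i: keys[i])
```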
Compact Data Structures with Fast Queries
2005
"... Many applications dealing with large data structures can benefit from keeping them in compressed form. Compression has many benefits: it can allow a representation to fit in main memory rather than swapping out to disk, and it improves cache performance since it allows more data to fit into the c ..."
Abstract

Cited by 4 (0 self)
Many applications dealing with large data structures can benefit from keeping them in compressed form. Compression has many benefits: it can allow a representation to fit in main memory rather than swapping out to disk, and it improves cache performance since it allows more data to fit into the cache. However, a data structure is only useful if it allows the application to perform fast queries (and updates) to the data.
Validation of Planar Partitions Using Constrained Triangulations.
In Proceedings of the Joint International Conference on Theory, Data Handling and Modelling in GeoSpatial Information Science, 2010
"... Planar partitionsfull tessellations of the plane into nonoverlapping polygonsare frequently used in GIS to model concepts such as land cover, cadastral parcels or administrative boundaries. Since in practice planar partitions are often stored as a set of individual objects (polygons) to which at ..."
Abstract

Cited by 2 (2 self)
Planar partitions, i.e. full tessellations of the plane into non-overlapping polygons, are frequently used in GIS to model concepts such as land cover, cadastral parcels or administrative boundaries. Since planar partitions are often stored in practice as a set of individual objects (polygons) to which attributes are attached (e.g. stored in a shapefile), and since different errors/mistakes can be introduced during their construction, manipulation or exchange, several inconsistencies often arise: for instance overlapping polygons, gaps and unconnected polygons. We present in this paper a novel algorithm to validate such planar partitions. It uses a constrained triangulation as a support for the validation, and permits us to avoid different problems that arise with existing solutions based on the construction of a planar graph. We describe in the paper the details of our algorithm, our implementation, how inconsistencies can be detected, and the experiments we have made with real-world data (the CORINE2000 dataset).
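The validation rule behind the approach can be stated as a tiny check, assuming (as a toy data model of mine, not the paper's implementation) that each triangle of the constrained triangulation already carries the set of input polygons covering it: a valid partition covers every interior triangle exactly once.

```python
def classify(triangle_tags):
    """triangle_tags maps a triangle id to the set of polygon ids that
    cover it; triangles covered more than once signal overlaps, and
    uncovered interior triangles signal gaps."""
    overlaps = [t for t, tags in triangle_tags.items() if len(tags) > 1]
    gaps     = [t for t, tags in triangle_tags.items() if len(tags) == 0]
    return overlaps, gaps
```

Computing the tags is the real work (it requires the constrained triangulation and point-in-polygon labeling); the check itself is then a single pass over the triangles.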