Results 11–20 of 42
Incremental Construction of the Delaunay Triangulation and the Delaunay Graph in Medium Dimension
2009
Cited by 13 (1 self)
Abstract
We describe a new implementation of the well-known incremental algorithm for constructing Delaunay triangulations in any dimension. Our implementation follows the exact computing paradigm and is fully robust. Extensive comparisons show that our implementation outperforms the best currently available codes for exact convex hulls and Delaunay triangulations, compares very well to the fast non-exact Qhull implementation, and can be used for quite large input sets in spaces of dimension up to 6. To circumvent prohibitive memory usage, we also propose a modification of the algorithm that uses and stores only the Delaunay graph (the edges of the full triangulation). We show that a careful implementation of the modified algorithm runs only 6 to 8 times slower than the original algorithm while drastically reducing memory usage in dimension 4 and above.
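The Delaunay graph kept by the modified algorithm is just the 1-skeleton of the triangulation. As a rough illustration of where the memory saving comes from (using a made-up simplex list, not the paper's data structure), the following sketch extracts the edge set from a list of d-simplices:

```python
from itertools import combinations

# Hypothetical simplex list for a tiny triangulation in dimension d = 3:
# each simplex is a (d+1)-tuple of vertex indices.
simplices = [(0, 1, 2, 3), (1, 2, 3, 4), (2, 3, 4, 5)]

# The Delaunay graph stores only the edges (1-skeleton); adjacent
# simplices share most of their edges, so the edge set grows far more
# slowly than the simplex set as dimension increases.
edges = set()
for s in simplices:
    for u, v in combinations(sorted(s), 2):
        edges.add((u, v))

print(len(simplices), len(edges))  # 3 simplices, 12 distinct edges
```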
Efficient Query Processing on Unstructured Tetrahedral Meshes
In SIGMOD ’06: Proceedings of the 2006 ACM SIGMOD International Conference on Management of Data, 2006
Cited by 12 (2 self)
Abstract
Modern scientific applications consume massive volumes of data produced by computer simulations. Such applications require new data management capabilities in order to scale to terabyte-scale data volumes [25, 10]. The most common way to discretize the application domain is to decompose it into pyramids, forming an unstructured tetrahedral mesh. Modern simulations generate meshes of high resolution and precision, to be queried by a visualization or analysis tool. Tetrahedral meshes are extremely flexible and therefore vital for accurately modeling complex geometries, but they are also difficult to index. To reduce query execution time, applications either use only subsets of the data or rely on different (less flexible) structures, thereby trading accuracy for speed.
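Queries on a tetrahedral mesh ultimately bottom out in point location. One standard primitive (generic, not specific to this paper's index) is a point-in-tetrahedron test via signed volumes:

```python
# Point location in a tetrahedral mesh reduces to a point-in-tetrahedron
# test; a common formulation uses signed volumes (orientation tests).
def signed_volume(a, b, c, d):
    # 6x the signed volume of tetrahedron (a, b, c, d).
    ab = [b[i] - a[i] for i in range(3)]
    ac = [c[i] - a[i] for i in range(3)]
    ad = [d[i] - a[i] for i in range(3)]
    return (ab[0] * (ac[1] * ad[2] - ac[2] * ad[1])
            - ab[1] * (ac[0] * ad[2] - ac[2] * ad[0])
            + ab[2] * (ac[0] * ad[1] - ac[1] * ad[0]))

def point_in_tet(p, a, b, c, d):
    # p is inside iff replacing each vertex by p preserves the sign of
    # the volume; boundary points count as inside here.
    vols = [signed_volume(a, b, c, d),
            signed_volume(p, b, c, d),
            signed_volume(a, p, c, d),
            signed_volume(a, b, p, d),
            signed_volume(a, b, c, p)]
    return all(v >= 0 for v in vols) or all(v <= 0 for v in vols)

unit = ([0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1])
print(point_in_tet([0.1, 0.1, 0.1], *unit))  # True
print(point_in_tet([1, 1, 1], *unit))        # False
```

With floating-point coordinates this test needs an epsilon or exact arithmetic in production; the sketch above ignores that.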
Visualization of Large Networks with Min-cut Plots, A-plots and R-MAT
Cited by 9 (1 self)
Abstract
What does a ‘normal’ computer (or social) network look like? How can we spot ‘abnormal’ sub-networks in the Internet, or the web graph? The answer to such questions is vital for outlier detection (terrorist networks, or illegal money-laundering rings), forecasting, and simulations (“how will a computer virus spread?”). The heart of the problem is finding the properties of real graphs that seem to persist across multiple disciplines. We list such patterns and “laws”, including the “min-cut plots” discovered by us. This is the first part of our NetMine package: given any large graph, it provides visual feedback about these patterns; any significant deviations from the expected patterns can thus be immediately flagged by the user as abnormalities in the graph. The second part of NetMine is the A-plots tool for visualizing the adjacency matrix of the graph in innovative new ways, again to find outliers. Third, NetMine contains the R-MAT (Recursive MATrix) graph generator, which can successfully model many of the patterns found in real-world graphs and quickly generate realistic graphs, capturing the essence of each graph in only a few parameters. We present results on multiple, large real graphs, where we show the effectiveness of our approach.
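The R-MAT generator itself is compact enough to sketch. Each edge is produced by recursively choosing one quadrant of the adjacency matrix per bit of the node id; the quadrant probabilities below (a = 0.57, b = c = 0.19) are illustrative defaults, not values from the paper:

```python
import random

def rmat_edge(scale, rng, a=0.57, b=0.19, c=0.19):
    # Generate one edge of an R-MAT graph on 2**scale nodes by picking
    # one quadrant of the adjacency matrix at each recursion level.
    src = dst = 0
    for _ in range(scale):
        r = rng.random()
        src, dst = 2 * src, 2 * dst
        if r < a:
            pass                    # top-left quadrant
        elif r < a + b:
            dst += 1                # top-right
        elif r < a + b + c:
            src += 1                # bottom-left
        else:
            src += 1
            dst += 1                # bottom-right
    return src, dst

rng = random.Random(42)
edges = [rmat_edge(10, rng) for _ in range(1000)]  # 1000 edges on 2**10 nodes
```

Because the same skewed quadrant choice applies at every level, the resulting degree distribution is heavy-tailed, matching the power laws the paper catalogs.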
Compact Dictionaries for Variable-Length Keys and Data, with Applications
2007
Cited by 9 (0 self)
Abstract
We consider the problem of maintaining a dynamic dictionary T of keys and associated data for which both the keys and data are bit strings that can vary in length from zero up to the length w of a machine word. We present a data structure for this variable-bit-length dictionary problem that supports constant-time lookup and expected amortized constant-time insertion and deletion. It uses O(m + 3n − n log₂ n) bits, where n is the number of elements in T and m is the total number of bits across all strings in T (keys and data). Our dictionary uses an array A[1..n] in which locations store variable-bit-length strings. We present a data structure for this variable-bit-length array problem that supports worst-case constant-time lookups and updates and uses O(m + n) bits, where m is the total number of bits across all strings stored in A. The motivation for these structures is to support applications for which it is helpful to efficiently store short bit strings of varying length. We present several applications, including representations for semi-dynamic graphs, order queries on integer sets, cardinal trees with varying cardinality, and simplicial meshes of d dimensions. These results either generalize or simplify previous results.
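The subtlety of variable-bit-length keys is that two distinct bit strings can share the same integer value (e.g. "01" and "001" are both 1). A plain hash-table sketch of the interface, pairing each string with its length, shows the semantics only; it has none of the paper's succinct-space machinery:

```python
# Sketch of a variable-bit-length dictionary interface: every key and
# datum is a (bits, length) pair so that equal-valued strings of
# different lengths remain distinct keys. Space usage here is that of
# an ordinary Python dict, not the paper's O(m + 3n - n log2 n) bits.
class VarBitDict:
    def __init__(self):
        self._t = {}  # (key_len, key_bits) -> (data_len, data_bits)

    def insert(self, key_bits, key_len, data_bits, data_len):
        self._t[(key_len, key_bits)] = (data_len, data_bits)

    def lookup(self, key_bits, key_len):
        return self._t.get((key_len, key_bits))

    def delete(self, key_bits, key_len):
        self._t.pop((key_len, key_bits), None)

d = VarBitDict()
d.insert(0b01, 2, 0b111, 3)
d.insert(0b001, 3, 0b0, 1)   # same value as 0b01 but length 3: distinct key
print(d.lookup(0b01, 2))     # (3, 7)
```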
Permuting Web and Social Graphs
Cited by 8 (0 self)
Abstract
Since the first investigations of web graph compression, it has been clear that the ordering of the nodes of the graph has a fundamental influence on the compression rate (usually expressed as the number of bits per link). The authors of the LINK database [2], for instance, investigated three different approaches: an extrinsic ordering (URL ordering) and two intrinsic orderings based on the rows of the adjacency matrix (lexicographic and Gray code); they concluded that URL ordering has many advantages in spite of a small penalty in compression. In this paper we approach the issue more systematically, testing some known orderings and proposing new ones. Our experiments are carried out in the WebGraph framework [3], and show that the compression technique and the structure of the graph can produce significantly different results. In particular, we show that URL ordering is significantly less effective for the transposed web graph, and that some new mixed orderings combining host information with Gray/lexicographic orderings outperform all previous methods: on some large transposed graphs they yield the quite incredible compression rate of 1 bit per link. We test these simple ideas on some non-web social networks and obtain results that are extremely promising and very close to those recently achieved using shingle orderings and backlink compression schemes [4].
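The effect of a node ordering on compression can be made concrete with a toy cost model: relabel the graph, delta-encode each sorted adjacency list, and total the Elias-gamma code lengths. This is a stand-in for WebGraph's actual codes, useful only for comparing orderings:

```python
from math import floor, log2

def gamma_bits(x):
    # Elias gamma code length for a positive integer x.
    return 2 * floor(log2(x)) + 1

def adjacency_cost(adj, order):
    # Total gap-code bits for the graph relabeled by `order`
    # (order[v] = new id of node v): a rough compressibility proxy.
    total = 0
    for v, nbrs in adj.items():
        ids = sorted(order[u] for u in nbrs)
        prev = 0
        for x in ids:
            total += gamma_bits(x - prev + 1)  # +1 keeps gaps positive
            prev = x
    return total

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
identity = {v: v for v in adj}
print(adjacency_cost(adj, identity))  # 24
```

Comparing `adjacency_cost` under candidate relabelings (lexicographic, Gray-code, host-grouped) mimics, in miniature, the experiments the abstract describes.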
Some typical properties of the Spatial Preferred Attachment model
Proceedings of the 9th Workshop on Algorithms and Models for the Web Graph (WAW 2012), Lecture Notes in Computer Science 7323, 2012
Cited by 6 (0 self)
Abstract
We investigate a stochastic model for complex networks, based on a spatial embedding of the nodes, called the Spatial Preferred Attachment (SPA) model. In the SPA model, nodes have spheres of influence of varying size, and a new node may only link to an existing node if it falls within that node's influence region. The spatial embedding of the nodes models the background knowledge or identity of a node, which influences its link environment. In this paper, we focus on the (directed) diameter, small separators, and the (weak) giant component of the model.
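A minimal 2-D rendering of the SPA generative step can be sketched as follows. The shape of the rule (influence area proportional to (A1·indeg + A2)/t, link probability p, unit-torus metric) follows the model; the parameter values and the dense all-pairs scan are assumptions for illustration:

```python
import random
from math import pi

def spa_graph(n, A1=1.0, A2=1.0, p=0.7, seed=0):
    # Toy SPA process on the unit torus: at step t a new node arrives at
    # a random position and links (directed, new -> old) to each existing
    # node u with probability p if it lands in u's sphere of influence,
    # whose area is (A1 * indeg(u) + A2) / t.
    rng = random.Random(seed)
    pos, indeg, edges = [], [], []
    for t in range(1, n + 1):
        x, y = rng.random(), rng.random()
        for u in range(len(pos)):
            area = (A1 * indeg[u] + A2) / t
            r2 = area / pi  # squared radius of the influence disk
            dx = min(abs(x - pos[u][0]), 1 - abs(x - pos[u][0]))  # torus metric
            dy = min(abs(y - pos[u][1]), 1 - abs(y - pos[u][1]))
            if dx * dx + dy * dy <= r2 and rng.random() < p:
                edges.append((t - 1, u))
                indeg[u] += 1
        pos.append((x, y))
        indeg.append(0)
    return edges

g = spa_graph(500)
```

High-in-degree nodes grow larger influence regions and so attract further links: the rich-get-richer effect that yields the power-law degrees studied in the paper.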
Engineering a Compact Parallel Delaunay Algorithm in 3D
In Proceedings of the ACM Symposium on Computational Geometry, 2006
Cited by 6 (0 self)
Abstract
We describe an implementation of a compact parallel algorithm for 3D Delaunay tetrahedralization on a 64-processor shared-memory machine. Our algorithm uses a concurrent version of Bowyer-Watson incremental insertion and a thread-safe, space-efficient structure for representing the mesh. Using the implementation we are able to generate significantly larger Delaunay meshes than have previously been generated: 10 billion tetrahedra on a 64-processor SMP using 200 GB of RAM. The implementation makes use of a locality-based relabeling of the vertices that serves three purposes: it is used as part of the space-efficient representation, it improves memory locality, and it reduces the overhead necessary for locks. The implementation also makes use of a caching technique to avoid excessive decoding of vertex information, a technique for backing out of insertions that collide, and a shared work queue for maintaining points that have yet to be inserted.
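One plausible way to realize the locality-based vertex relabeling the abstract mentions (this is an assumption, not necessarily the paper's exact scheme) is to sort vertices along a Morton/Z-order curve, so that vertices close in space get close ids:

```python
def morton2(x, y):
    # Interleave the bits of 16-bit coordinates x and y into one
    # 32-bit Z-order key; nearby points tend to get nearby keys.
    key = 0
    for i in range(16):
        key |= ((x >> i) & 1) << (2 * i)
        key |= ((y >> i) & 1) << (2 * i + 1)
    return key

pts = [(3, 5), (60000, 2), (4, 4), (2, 6)]
order = sorted(range(len(pts)), key=lambda i: morton2(*pts[i]))
relabel = {old: new for new, old in enumerate(order)}
print(order)  # [0, 3, 2, 1]: the three clustered points come first
```

Small id differences between spatially adjacent vertices are exactly what makes delta-based compact representations effective and keeps contending threads in disjoint id ranges.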
An algorithmic framework for compression and text indexing
Cited by 5 (0 self)
Abstract
We present a unified algorithmic framework to obtain nearly optimal space bounds for text compression and compressed text indexing, apart from lower-order terms. For a text T of n symbols drawn from an alphabet Σ, our bounds are stated in terms of the hth-order empirical entropy of the text, H_h. In particular, we provide a tight analysis of the Burrows-Wheeler transform (bwt), establishing a bound of nH_h + M(T, Σ, h) bits, where M(T, Σ, h) denotes the asymptotic number of bits required to store the empirical statistical model for contexts of order h appearing in T. Using the same framework, we also obtain an implementation of the compressed suffix array (csa) which achieves nH_h + M(T, Σ, h) + O(n lg lg n / lg_|Σ| n) bits of space while still retaining competitive full-text indexing functionality. The novelty of the proposed framework lies in its use of the finite set model instead of the empirical probability model (as in previous work), giving us new insight into the design and analysis of our algorithms. For example, we show that our analysis gives improved bounds, since M(T, Σ, h) ≤ min{g′_h lg(n/g′_h + 1), H*_h n + lg n + g″_h}, where g′_h = O(|Σ|^(h+1)) and g″_h = O(|Σ|^(h+1) lg |Σ|^(h+1)) do not depend on the text length n, while H*_h ≥ H_h is the modified hth-order empirical entropy of T. Moreover, we show a strong relationship between a compressed full-text index and the succinct dictionary problem. We also examine the importance of lower-order terms, as these can dwarf any savings achieved by high-order entropy. We report further results and tradeoffs on high-order entropy-compressed text indexes in the paper.
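The transform at the center of the analysis is easy to state. A naive Burrows-Wheeler transform sorts all rotations of the text (with a sentinel) and takes the last column; real indexes build it in O(n) via suffix arrays, but the quadratic sketch shows the definition:

```python
def bwt(s):
    # Naive Burrows-Wheeler transform with a '$' end marker
    # ('$' sorts before letters). O(n^2 log n): illustration only.
    s = s + "$"
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rotations)

print(bwt("banana"))  # annb$aa
```

The output clusters equal symbols that share a context ("aa", "nn"), which is precisely why the bwt compresses to about nH_h bits plus the model cost M(T, Σ, h).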
Compact Data Structures with Fast Queries
2005
Cited by 4 (0 self)
Abstract
Many applications dealing with large data structures can benefit from keeping them in compressed form. Compression has many benefits: it can allow a representation to fit in main memory rather than swapping out to disk, and it improves cache performance since it allows more data to fit into the cache. However, a data structure is only useful if it allows the application to perform fast queries (and updates) to the data.
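A small generic illustration of this compression-versus-query tension (a textbook rank-support bitvector, not a structure from this paper): precomputing one counter per block keeps queries fast while the auxiliary space stays small relative to the data.

```python
class RankBitvector:
    # Bitvector with per-block popcount samples: rank1(i) touches at
    # most one sample plus one partial block, instead of scanning
    # the whole prefix.
    BLOCK = 64

    def __init__(self, bits):
        self.bits = bits
        self.blocks = []          # ones strictly before each block
        acc = 0
        for i in range(0, len(bits), self.BLOCK):
            self.blocks.append(acc)
            acc += sum(bits[i:i + self.BLOCK])

    def rank1(self, i):
        # Number of 1s in bits[0:i].
        b = i // self.BLOCK
        return self.blocks[b] + sum(self.bits[b * self.BLOCK:i])

bv = RankBitvector([1, 0, 1, 1, 0] * 40)
print(bv.rank1(100))  # 60
```

Shrinking the block size trades auxiliary space for faster partial scans, the same knob (at a toy scale) that compact data structures tune.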