Results 1–10 of 13
An output-sensitive algorithm for persistent homology
Comput. Geom.
"... Abstract In this paper, we present the first outputsensitive algorithm to compute the persistence diagram of a filtered simplicial complex. For any Γ > 0, it returns only those homology classes with persistence at least Γ. Instead of the classical reduction via column operations, our algorithm ..."
Abstract

Cited by 15 (4 self)
In this paper, we present the first output-sensitive algorithm to compute the persistence diagram of a filtered simplicial complex. For any Γ > 0, it returns only those homology classes with persistence at least Γ. Instead of the classical reduction via column operations, our algorithm performs rank computations on submatrices of the boundary matrix. For an arbitrary constant δ ∈ (0, 1), the running time is O(C_{(1−δ)Γ} R_d(n) log n), where C_{(1−δ)Γ} is the number of homology classes with persistence at least (1 − δ)Γ, n is the total number of simplices in the complex, d its dimension, and R_d(n) is the complexity of computing the rank of an n × n matrix with O(dn) nonzero entries. Depending on the choice of the rank algorithm, this yields a deterministic O(C_{(1−δ)Γ} n^{2.376}) algorithm, an O(C_{(1−δ)Γ} n^{2.28}) Las Vegas algorithm, or an O(C_{(1−δ)Γ} n^{2+ε}) Monte Carlo algorithm for an arbitrary ε > 0. The space complexity of the Monte Carlo version is bounded by O(dn) = O(n log n).
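The rank computations described above rest on a standard counting identity: the number of persistence pairs (i, j) with i ≥ p and j ≤ q equals the GF(2) rank of the lower-left submatrix of the boundary matrix (rows p and below, columns up to q), because left-to-right column reduction preserves these ranks. A minimal sketch on a filtered triangle; the complex, index conventions, and helper names are illustrative choices, not from the paper:

```python
import numpy as np

def gf2_rank(M):
    """Rank of a 0/1 matrix over GF(2) via Gaussian elimination."""
    M = M.copy() % 2
    rank = 0
    rows, cols = M.shape
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]      # swap pivot row into place
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]                  # eliminate over GF(2)
        rank += 1
    return rank

# Filtered triangle: vertices 0,1,2; edges 3,4,5; face 6 (filtration order).
n = 7
D = np.zeros((n, n), dtype=int)
for col, faces in {3: (0, 1), 4: (1, 2), 5: (2, 0), 6: (3, 4, 5)}.items():
    for f in faces:
        D[f, col] = 1

def pairs_with(D, p, q):
    """Number of persistence pairs (i, j) with i >= p and j <= q,
    read off as the GF(2) rank of the lower-left submatrix of D."""
    return gf2_rank(D[p:, :q + 1])
```

Here the pairs of the filtered triangle are (1,3), (2,4), and (5,6), so `pairs_with(D, 5, 6)` returns 1: only the cycle born at simplex 5 and killed at simplex 6 lies in that index range.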
Zigzag Zoology: Rips Zigzags for Homology Inference
, 2012
"... For points sampled near a compact set X, the persistence barcode of the Rips filtration built from the sample contains information about the homology of X as long as X satisfies some geometric assumptions. The Rips filtration is prohibitively large, however zigzag persistence can be used to keep t ..."
Abstract

Cited by 6 (2 self)
For points sampled near a compact set X, the persistence barcode of the Rips filtration built from the sample contains information about the homology of X as long as X satisfies some geometric assumptions. The Rips filtration is prohibitively large, however zigzag persistence can be used to keep the size linear. We present several species of Rips-like zigzags and compare them with respect to the signal-to-noise ratio, a measure of how well the underlying homology is represented in the persistence barcode relative to the noise in the barcode at the relevant scales. Some of these Rips-like zigzags have been available as part of the Dionysus library for several years while others are new. Interestingly, we show that some species of Rips zigzags will exhibit less noise than the (non-zigzag) Rips filtration itself. Thus, the Rips zigzag can offer improvements in both size complexity and signal-to-noise ratio. Along the way, we develop new techniques for manipulating and comparing persistence barcodes from zigzag modules. We give methods for reversing arrows and removing spaces from a zigzag. We also discuss factoring zigzags and a kind of interleaving of two zigzags that allows their barcodes to be compared. These techniques were developed to provide our theoretical analysis of the signal-to-noise ratio of Rips-like zigzags, but they are of independent interest as they apply to zigzag modules generally.
New Bounds on the Size of Optimal Meshes
"... The theory of optimal size meshes gives a method for analyzing the output size (number of simplices) of a Delaunay refinement mesh in terms of the integral of a sizing function over the input domain. The input points define a maximal such sizing function called the feature size. This paper presents ..."
Abstract

Cited by 4 (1 self)
The theory of optimal size meshes gives a method for analyzing the output size (number of simplices) of a Delaunay refinement mesh in terms of the integral of a sizing function over the input domain. The input points define a maximal such sizing function called the feature size. This paper presents a way to bound the feature size integral in terms of an easy-to-compute property of a suitable ordering of the point set. The key idea is to consider the pacing of an ordered point set, a measure of the rate of change in the feature size as points are added one at a time. In previous work, Miller et al. showed that if an ordered point set has pacing φ, then the number of vertices in an optimal mesh will be O(φ^d n), where d is the input dimension. We give a new analysis of this integral showing that the output size is only Θ(n + n log φ). The new analysis tightens bounds from several previous results and provides matching lower bounds. Moreover, it precisely characterizes inputs that yield outputs of size O(n).
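As a rough illustration of what pacing measures, one can compute each point's distance to its nearest predecessor in the ordering and look at consecutive ratios. This is a simplified stand-in for the feature-size-based definition in Miller et al., not the actual definition; all names here are illustrative:

```python
import math

def insertion_radii(points):
    """Distance from each point to its nearest predecessor in the ordering.
    (A crude proxy for how the feature size changes as points are added;
    the formal pacing definition is more refined.)"""
    return [min(math.dist(points[i], points[j]) for j in range(i))
            for i in range(1, len(points))]

def pacing_proxy(points):
    """Largest ratio between consecutive insertion radii: a badly paced
    ordering (radius jumping up or down sharply) makes this large."""
    radii = insertion_radii(points)
    return max(max(a / b, b / a) for a, b in zip(radii, radii[1:]))
```

For a bisection-style ordering such as [(0,0), (1,0), (0.5,0)], the radii halve at each step and the proxy stays at 2, matching the intuition that well-paced orderings keep the Θ(n + n log φ) bound close to linear.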
Donald Sheehy, Geometry, Topology, and Data: Executive Summary
"... When I lived in Pittsburgh, the local classical music station had a slogan reminding listeners that “all music was once new”. The same holds in computer science where the basic concepts that any undergraduate might be expected to learn were once the cutting edge of theory. It is easy to forget this. ..."
Abstract
When I lived in Pittsburgh, the local classical music station had a slogan reminding listeners that “all music was once new”. The same holds in computer science, where the basic concepts that any undergraduate might be expected to learn were once the cutting edge of theory. It is easy to forget this. I approach theoretical computer science with the clear vision that I am searching for the algorithms and data structures that will be standard, ubiquitous, even “obvious” in 10, 15, or 20 years. I do this by focusing on high impact areas of scientific importance where foundational questions remain wide open. My work started mainly in mesh generation, the essential preprocess for numerical solution of partial differential equations by the finite element method. Basic questions in mesh generation have lingered unanswered for decades, despite being the focus of a massive engineering effort distributed across major industries including big money players in automobiles, aviation, and petroleum. I have been working to answer the primary algorithmic questions about the construction of finite element meshes with an eye towards the major pain points for those using meshes in the field. Recently, I have leveraged my expertise in mesh generation, geometric algorithms, and
Persistent Homology and Nested Dissection
"... Abstract Nested dissection exploits underlying topology to do matrix reductions while persistent homology exploits matrix reductions to reveal underlying topology. It seems natural that one should be able to combine these techniques to beat the currently best bound of matrix multiplication time for ..."
Abstract
Nested dissection exploits underlying topology to do matrix reductions, while persistent homology exploits matrix reductions to reveal underlying topology. It seems natural that one should be able to combine these techniques to beat the currently best bound of matrix multiplication time for computing persistent homology. However, nested dissection works by fixing a reduction order, whereas persistent homology generally constrains the ordering. Despite this obstruction, we show that it is possible to combine these two theories. This shows that one can improve the computation of persistent homology of a filtration by exploiting information about the underlying space. It gives reasonable geometric conditions under which one can beat the matrix multiplication bound for persistent homology.
Overview: Given geometric data, persistent homology gives a way to extract multiscale shape information with the goal of understanding the underlying shape of the distribution from which the data was drawn. The data induces a function on the underlying space, usually a distance-like function to the data. The persistent homology measures the changes in topology of the sublevel sets of the function. To do this, one first constructs a filtered simplicial complex, that is, a simplicial complex with an ordering on the simplices. The persistence algorithm is a restricted form of Gaussian elimination on the boundary matrix of the simplicial complex. As with standard Gaussian elimination, persistent homology can be computed in matrix multiplication time. We let ω denote the smallest exponent such that matrix multiplication can be computed in O(n^ω) time. It is likely that this is also the best possible running time for computing persistent homology of general filtered simplicial complexes. However, if the simplicial complex is coming from low-dimensional geometric data, one might hope to exploit that structure to improve the running time. We show that this is indeed possible. Our approach combines several different ideas. We use the output-sensitive algorithm of Chen and Kerber to reduce the persistence computation to rank computations
[Authors: * Max Planck Center for Visual Computing and Communication, Saarbrücken, Germany, mkerber@mpiinf.mpg.de; † UConn, don.r.sheehy@gmail.com; ‡ Jožef Stefan Institute, Ljubljana, Slovenia, primoz.skraba@ijs.si]
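The "restricted form of Gaussian elimination" mentioned in the abstract is the classical column-reduction persistence algorithm: scan columns of the boundary matrix left to right, and add earlier columns until each column's lowest nonzero row is unique. A minimal sketch over GF(2); the sparse set-of-rows representation and the pair output format are choices made here, not prescribed by the paper:

```python
def reduce_boundary(columns):
    """Classical persistence reduction over GF(2).
    `columns[j]` lists the row indices with a 1 in column j of the
    boundary matrix (simplices ordered by filtration). Returns the
    persistence pairs (birth index, death index)."""
    cols = [set(c) for c in columns]
    low_inv = {}   # lowest-one row -> column that owns it
    pairs = []
    for j in range(len(cols)):
        # add earlier columns until the lowest one is unclaimed (or empty)
        while cols[j] and max(cols[j]) in low_inv:
            cols[j] ^= cols[low_inv[max(cols[j])]]   # GF(2) column addition
        if cols[j]:
            i = max(cols[j])
            low_inv[i] = j
            pairs.append((i, j))   # simplex j kills the class born at i
    return pairs
```

On a filtered triangle (vertices 0,1,2, edges 3,4,5, face 6) this yields the pairs (1,3), (2,4), and (5,6): two vertices are merged by edges, and the cycle closed by edge 5 is filled by the face.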
Efficient Computation of Clipped Voronoi Diagram for Mesh Generation
"... The Voronoi diagram is a fundamental geometric structure widely used in various fields, especially in computer graphics and geometry computing. For a set of points in a compact domain (i.e. a bounded and closed 2D region or a 3D volume), some Voronoi cells of their Voronoi diagram are infinite or pa ..."
Abstract
The Voronoi diagram is a fundamental geometric structure widely used in various fields, especially in computer graphics and geometry computing. For a set of points in a compact domain (i.e. a bounded and closed 2D region or a 3D volume), some Voronoi cells of their Voronoi diagram are infinite or partially outside of the domain, but in practice only the parts of the cells inside the domain are needed, as when computing the centroidal Voronoi tessellation. Such a Voronoi diagram confined to a compact domain is called a clipped Voronoi diagram. We present an efficient algorithm to compute the clipped Voronoi diagram for a set of sites with respect to a compact 2D region or a 3D volume. We also apply the proposed method to optimal mesh generation based on the centroidal Voronoi tessellation.
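A brute-force way to realize a 2D clipped Voronoi cell is to intersect the convex domain polygon with one bisector half-plane per other site, using Sutherland-Hodgman clipping. This is only an illustrative sketch (quadratic in the number of sites); the paper's algorithm is far more efficient:

```python
def clip_halfplane(poly, a, b, c):
    """Keep the part of convex polygon `poly` where a*x + b*y <= c
    (Sutherland-Hodgman clipping against a single half-plane)."""
    out = []
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        in1 = a * x1 + b * y1 <= c
        in2 = a * x2 + b * y2 <= c
        if in1:
            out.append((x1, y1))
        if in1 != in2:   # edge crosses the boundary: add intersection point
            t = (c - a * x1 - b * y1) / (a * (x2 - x1) + b * (y2 - y1))
            out.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return out

def clipped_voronoi_cell(site, others, domain):
    """Voronoi cell of `site`, clipped to the convex polygon `domain`:
    intersect the domain with the half-plane closer to `site` than to
    each other site (|p-s|^2 <= |p-o|^2 rearranged to a*x + b*y <= c)."""
    cell = list(domain)
    sx, sy = site
    for ox, oy in others:
        a, b = ox - sx, oy - sy
        c = (ox**2 + oy**2 - sx**2 - sy**2) / 2
        cell = clip_halfplane(cell, a, b, c)
    return cell
```

For two sites (0,0) and (2,0) in the square [0,2]², the cell of the first site is the left half of the square, cut along the bisector x = 1.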
Project-Team Geometrica: Geometric Computing
"... c t i v it y e p o r t 2009 Table of contents ..."
Linear-Size Approximations to the Vietoris-Rips Filtration
"... The VietorisRips filtration is a versatile tool in topological data analysis. It is a sequence of simplicial complexes built on a metric space to add topological structure to an otherwise disconnected set of points. It is widely used because it encodes useful information about the topology of the u ..."
Abstract
The Vietoris-Rips filtration is a versatile tool in topological data analysis. It is a sequence of simplicial complexes built on a metric space to add topological structure to an otherwise disconnected set of points. It is widely used because it encodes useful information about the topology of the underlying metric space. This information is often extracted from its so-called persistence diagram. Unfortunately, this filtration is often too large to construct in full. We show how to construct an O(n)-size filtered simplicial complex on an n-point metric space such that its persistence diagram is a good approximation to that of the Vietoris-Rips filtration. This new filtration can be constructed in O(n log n) time. The constant factors in both the size and the running time depend only on the doubling dimension of the metric space and the desired tightness of the approximation. For the first time, this makes it computationally tractable to approximate the persistence diagram of the Vietoris-Rips filtration across all scales for large data sets. We describe two different sparse filtrations. The first is a zigzag filtration that removes points as the scale increases. The second is a (non-zigzag) filtration that yields the same persistence diagram. Both methods are based on a hierarchical net-tree and yield the same guarantees.
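Hierarchical net-trees of the kind used here are commonly seeded by a greedy permutation (farthest-point ordering), in which each point is inserted as far as possible from all previously chosen points. A minimal sketch; the quadratic-time implementation is purely for illustration, since net-tree constructions achieve this much faster:

```python
import math

def greedy_permutation(points):
    """Farthest-point ordering of `points`, plus each point's insertion
    radius (its distance to the previously chosen points at the moment
    it is selected). Radii are non-increasing, which is what makes the
    ordering useful for building nets at every scale."""
    order = [0]                                   # start from point 0
    dist = [math.dist(points[0], p) for p in points]
    radii = [float('inf')]                        # first point covers everything
    for _ in range(len(points) - 1):
        nxt = max(range(len(points)), key=lambda i: dist[i])
        order.append(nxt)
        radii.append(dist[nxt])
        for i, p in enumerate(points):            # fold new point into distances
            dist[i] = min(dist[i], math.dist(points[nxt], p))
    return order, radii
```

Every prefix of the resulting order is an r-net at scale r equal to the next insertion radius: prefix points are r-separated and cover all remaining points within distance r.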
Beating the Spread: Time-Optimal Point Meshing (Extended Abstract)
, 2011
"... We present NetMesh, a new algorithm that produces ac onforming Delaunay mesh for point sets in any fixed dimension with guaranteed optimal mesh size and quality. Our comparisonbased algorithm runs in O(nlogn + m) time, where n is the input size and m is the output size, and with constants depending ..."
Abstract
We present NetMesh, a new algorithm that produces a conforming Delaunay mesh for point sets in any fixed dimension with guaranteed optimal mesh size and quality. Our comparison-based algorithm runs in O(n log n + m) time, where n is the input size and m is the output size, with constants depending only on the dimension and the desired element quality. It can terminate early in O(n log n) time, returning an O(n)-size Voronoi diagram of a superset of P, which again matches the known lower bounds. The previous best results in the comparison model depended on the log of the spread of the input, the ratio of the largest to smallest pairwise distance. We reduce this dependence to O(log n) by using a sequence of ε-nets to determine the input insertion order into an incremental Voronoi diagram. We generate a hierarchy of well-spaced meshes and use these to show that the complexity of the Voronoi diagram stays linear in the number of points throughout the construction.
(Multi)Filtering Noise in Geometric Persistent Homology
"... The beauty of persistent homology for topological data analysis is that it obviates the need to choose an explicit scale at which to view the data. Not only does one skip the problem of tuning parameters, but also the output shows explicitly which features are robust to perturbations of scale. Unfor ..."
Abstract
The beauty of persistent homology for topological data analysis is that it obviates the need to choose an explicit scale at which to view the data. Not only does one skip the problem of tuning parameters, but also the output shows explicitly which features are robust to perturbations of scale. Unfortunately, denoising the data as a preprocess often leads to new parameters to choose. We show how to replace the Euclidean distance with a family of distance functions to denoise the data as part of the persistence computation. The result is an instance of multidimensional persistence where we can tell not only what topological features are present but also how robust they are to changes in the denoising parameters.
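One concrete family of denoising distance functions, in the spirit of the distance to a measure (the exact family in the paper may differ), is the root-mean-squared distance to the k nearest sample points, where k plays the role of the denoising parameter that multidimensional persistence then varies alongside scale:

```python
import math

def knn_distance(x, sample, k):
    """Root-mean-squared distance from x to its k nearest sample points.
    k = 1 recovers the ordinary distance to the sample; larger k damps
    the influence of outliers, at the cost of blurring small features."""
    dists = sorted(math.dist(x, p) for p in sample)
    return math.sqrt(sum(d * d for d in dists[:k]) / k)
```

Sweeping k alongside the scale parameter yields exactly the two-parameter setting described above: one axis records topological scale, the other records robustness to the denoising strength.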