Results 1–10 of 17
Clear and compress: Computing persistent homology in chunks
In TopoInVis, 2013
Cited by 12 (3 self)
We present a parallelizable algorithm for computing the persistent homology of a filtered chain complex. Our approach differs from the commonly used reduction algorithm by first computing persistence pairs within local chunks, then simplifying the unpaired columns, and finally applying standard reduction on the simplified matrix. The approach generalizes a technique by Günther et al., which uses discrete Morse theory to compute persistence; we derive the same worst-case complexity bound in a more general context. The algorithm employs several practical optimization techniques which are of independent interest. Our sequential implementation of the algorithm is competitive with state-of-the-art methods, and we improve the performance through parallelized computation.
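For context, the "commonly used reduction algorithm" that the chunk approach builds on can be sketched in a few lines. The following is a minimal illustrative version, not the authors' implementation; the function name and the sparse set-of-row-indices representation (Z/2 coefficients) are our own choices for exposition.

```python
def reduce_boundary_matrix(columns):
    """Reduce a Z/2 boundary matrix; return persistence pairs (birth, death).

    `columns[j]` is the set of row indices with a nonzero entry in column j,
    with columns ordered by filtration value. Columns are reduced in place.
    """
    low_to_col = {}  # lowest nonzero row index -> column that owns it
    pairs = []
    for j, col in enumerate(columns):
        col = set(col)
        # add earlier columns (mod 2) until the lowest entry is unique
        while col and max(col) in low_to_col:
            col ^= columns[low_to_col[max(col)]]
        columns[j] = col
        if col:
            low = max(col)
            low_to_col[low] = j
            pairs.append((low, j))  # simplex `low` is born, dies entering j
    return pairs
```

Running it on the boundary matrix of a filtered triangle (three vertices, then three edges, then the 2-cell) pairs each positive simplex with the one that kills it; zero columns correspond to unpaired, essential classes.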
Parallel Computation of 2D Morse-Smale Complexes
Cited by 9 (3 self)
Abstract — The Morse-Smale complex is a useful topological data structure for the analysis and visualization of scalar data. This paper describes an algorithm that processes all mesh elements of the domain in parallel to compute the Morse-Smale complex of large two-dimensional data sets at interactive speeds. We employ a reformulation of the Morse-Smale complex using Forman's discrete Morse theory and achieve scalability by computing the discrete gradient using local accesses only. We also introduce a novel approach to merge gradient paths that ensures accurate geometry of the computed complex. We demonstrate that our algorithm performs well both in multicore environments and on massively parallel architectures such as the GPU. Index Terms — Topology-based methods, discrete Morse theory, large datasets, gradient pairs, multicore, 2D scalar functions.
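The "local accesses only" property of the gradient construction is easiest to see in one dimension. The sketch below is a hypothetical 1D simplification (not the paper's 2D algorithm): each vertex of a path graph inspects only its two neighbors and pairs itself with the steepest edge in its lower star, so all vertices could be processed independently in parallel; the cells left unpaired are exactly the critical minima and maxima.

```python
def discrete_gradient_1d(f):
    """Discrete gradient of a function with distinct values on a path graph.

    Pairs each vertex with one lower-star edge (steepest descent);
    unpaired vertices are minima, unpaired edges are maxima.
    """
    n = len(f)
    pair = {}  # vertex v -> the edge (i, i+1) it is paired with
    critical_vertices, critical_edges = [], []
    for v in range(n):
        # lower star of v: incident edges whose other endpoint is lower
        lower = [u for u in (v - 1, v + 1) if 0 <= u < n and f[u] < f[v]]
        if not lower:
            critical_vertices.append(v)  # local minimum
        else:
            steepest = min(lower, key=lambda u: f[u])
            pair[v] = tuple(sorted((v, steepest)))
    paired = set(pair.values())
    for e in ((i, i + 1) for i in range(n - 1)):
        if e not in paired:
            critical_edges.append(e)  # f restricted to e has a local maximum
    return pair, critical_vertices, critical_edges
```

Each edge can be claimed only by its higher endpoint, so the pairing is a valid discrete vector field, and V-paths strictly descend, guaranteeing acyclicity.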
Efficient computation of a hierarchy of discrete 3D gradient vector fields
In Proc. TopoInVis, 2011
Cited by 5 (3 self)
Abstract — This paper introduces a novel combinatorial algorithm to compute a hierarchy of discrete gradient vector fields for three-dimensional scalar fields. The hierarchy is defined by an importance measure and represents the combinatorial gradient flow at different levels of detail. The presented algorithm is based on Forman's discrete Morse theory, which guarantees topological consistency and algorithmic robustness. In contrast to previous work, our algorithm combines memory and runtime efficiency and thereby lends itself to the analysis of large data sets. A discrete gradient vector field is also a compact representation of the underlying extremal structures – the critical points, separation lines and surfaces. Given a certain level of detail, an explicit geometric representation of these structures can be extracted using simple and fast graph algorithms.
Generalized Topological Simplification of Scalar Fields on Surfaces
Cited by 4 (3 self)
Fig. 1. Given an input scalar field f (left), our combinatorial algorithm generates a simplified function g that provably admits only critical points from a constrained subset of the singularities of f. Our approach is completely oblivious to the employed feature selection strategy, while guaranteeing a small distance ‖f − g‖∞ for data-fitting purposes. Thus it supports application-dependent simplification scenarios such as the removal of singularities based on local geometrical measures, interactive user selection or even random selection. The topology of the resulting field is summarized with the inset Reeb graphs for illustration purposes.

Abstract — We present a combinatorial algorithm for the general topological simplification of scalar fields on surfaces. Given a scalar field f, our algorithm generates a simplified field g that provably admits only critical points from a constrained subset of the singularities of f, while guaranteeing a small distance ‖f − g‖∞ for data-fitting purposes. In contrast to previous algorithms, our approach is oblivious to the strategy used for selecting features of interest and allows critical points to be removed arbitrarily. When topological persistence is used to select the features of interest, our algorithm produces a standard ε-simplification. Our approach is based on a new iterative algorithm for the constrained reconstruction of sub- and sur-level sets. Extensive experiments show that the number of iterations required for our algorithm to converge is rarely greater than 2 and never greater than 5, yielding O(n log(n)) practical time performance. The algorithm handles triangulated surfaces with or without boundary and is robust to the presence of multi-saddles in the input. It is simple to implement, fast in practice and more general than previous techniques. Practically, our approach allows a user to arbitrarily simplify the topology of an input function and robustly generate the corresponding …
Distributed Merge Trees
Cited by 4 (2 self)
Improved simulations and sensors are producing datasets whose increasing complexity exhausts our ability to visualize and comprehend them directly. To cope with this problem, we can detect and extract significant features in the data and use them as the basis for subsequent analysis. Topological methods are valuable in this context because they provide robust and general feature definitions. As the growth of serial computational power has stalled, data analysis is becoming increasingly dependent on massively parallel machines. To satisfy the computational demand created by complex datasets, algorithms need to effectively utilize these computer architectures. The main strength of topological methods, their emphasis on global information, turns into an obstacle during parallelization. We present two approaches to alleviate this problem. We develop a distributed representation of the merge tree that avoids computing the global tree on a single processor and lets us parallelize subsequent queries. To account for the increasing number of cores per processor, we develop a new data structure that lets us take advantage of multiple shared-memory cores to parallelize the work on a single node. Finally, we present experiments that illustrate the strengths of our approach as well as help identify future challenges.
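A merge (join) tree records how sublevel-set components appear at minima and merge at saddles as a threshold sweeps upward. The serial union-find sketch below is illustrative only (the paper's contribution is the distributed representation, which this does not attempt): vertices are processed in increasing order, and each union of two pre-existing components is a merge event at which the higher of the two component minima dies.

```python
def join_tree_merges(values, edges):
    """Sweep vertices from low to high over a graph; return merge events
    as (dying_minimum, merging_vertex) pairs of vertex indices."""
    n = len(values)
    parent = list(range(n))
    comp_min = list(range(n))  # representative minimum of each component

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    seen = [False] * n
    merges = []
    for v in sorted(range(n), key=values.__getitem__):
        seen[v] = True
        for u in adj[v]:
            if seen[u]:
                ru, rv = find(u), find(v)
                if ru == rv:
                    continue
                a, b = comp_min[ru], comp_min[rv]
                survivor, dying = (a, b) if values[a] <= values[b] else (b, a)
                if dying != v:  # skip v joining its first component
                    merges.append((dying, v))
                parent[ru] = rv
                comp_min[rv] = survivor
    return merges
```

The merge events are exactly the finite 0-dimensional persistence pairs of the sublevel filtration; the global minimum never dies.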
Combinatorial Gradient Fields for 2D Images with Empirically Convergent Separatrices
Cited by 2 (0 self)
This paper proposes an efficient probabilistic method that computes combinatorial gradient fields for two-dimensional image data. In contrast to existing algorithms, this approach yields a geometric Morse-Smale complex that converges almost surely to its continuous counterpart when the image resolution is increased. This approach is motivated using basic ideas from probability theory and builds upon an algorithm from discrete Morse theory with a strong mathematical foundation. While a formal proof is only hinted at, we do provide a thorough numerical evaluation of our method and compare it to established algorithms.
Total Variation Meets Topological Persistence: A First Encounter
2010
Cited by 1 (0 self)
We present first insights into the relation between two popular yet apparently dissimilar approaches to denoising of one-dimensional signals, based on (i) total variation (TV) minimization and (ii) ideas from topological persistence. While a close relation between (i) and (ii) might not be phenomenologically unexpected, our work appears to be the first to make this connection precise for one-dimensional signals. We provide a link between (i) and (ii) that builds on the equivalence between TV-L2 regularization and taut strings and leads to a novel and efficient denoising algorithm that is contrast-preserving and operates in O(n log n) time, where n is the size of the input.
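The TV-L2 problem behind this equivalence, min_u ½‖u − f‖² + λ·TV(u), can be solved for small one-dimensional inputs by projected gradient on its dual. This is a generic textbook scheme for illustration, not the taut-string algorithm of the paper (which achieves the stated O(n log n) bound); the function name and step size are our own choices.

```python
def tv_denoise_1d(f, lam, iters=2000, step=0.25):
    """Solve min_u 0.5*||u - f||^2 + lam*TV(u) by projected gradient
    on the dual: u = f - D^T p with the dual variable p clipped to
    [-lam, lam], where (Du)_i = u[i+1] - u[i]."""
    n = len(f)
    p = [0.0] * (n - 1)
    for _ in range(iters):
        # primal iterate u = f - D^T p, with (D^T p)_i = p[i-1] - p[i]
        u = [f[i] - ((p[i - 1] if i > 0 else 0.0)
                     - (p[i] if i < n - 1 else 0.0))
             for i in range(n)]
        # ascent step p += step * Du, then project onto the box [-lam, lam]
        p = [min(lam, max(-lam, p[i] + step * (u[i + 1] - u[i])))
             for i in range(n - 1)]
    return u
```

On a unit spike of height 10 with λ = 1, the spike is lowered by 2λ (to 8) while the background rises only slightly (to 0.5), illustrating the contrast-preserving behavior of TV denoising that the abstract highlights.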
Characterizing Molecular Interactions in Chemical Systems
Cited by 1 (0 self)
Fig. 1. Visual and quantitative exploration of covalent and non-covalent bonds in the β-sheet polypeptide. Our analysis makes it possible to visualize, enumerate, classify, and investigate molecular interactions in complex chemical systems. In this example, the amplitude of the signed electron density (ρ̃, color-coded from blue to red) makes it possible to distinguish covalent bonds (yellow) from hydrogen bonds (cyan) and van der Waals interactions (dark blue). While the numerical integration of ∇ρ̃ (right inset) allows the latter two types of interactions to be distinguished visually, our combinatorial pipeline robustly extracts these features to support further quantitative analysis. In particular, our algorithm reveals the repeating pattern (black frame) of non-covalent interactions responsible for the folding of this molecule, which decomposes it into unitary building blocks corresponding to the elementary amino acids composing the molecule.

Abstract — Interactions between atoms have a major influence on the chemical properties of molecular systems. While covalent interactions impose the structural integrity of molecules, non-covalent interactions govern more subtle phenomena such as protein folding, bonding or self-assembly. The understanding of these types of interactions is necessary for the interpretation of many biological processes and chemical design tasks. While traditionally the electron density is analyzed to interpret the quantum chemistry of a molecular system, non-covalent interactions are characterized by low electron densities and only slight variations of them – challenging their extraction and characterization. Recently, the signed electron density and the reduced gradient, two scalar fields derived from the electron density, have drawn much attention in quantum chemistry since they enable a qualitative visualization of these interactions even in complex molecular systems and experimental measurements. In this work, we present the first combinatorial algorithm for the …
Parallel Computation of 3D Morse-Smale Complexes
The Morse-Smale complex is a topological structure that captures the behavior of the gradient of a scalar function on a manifold. This paper discusses scalable techniques to compute the Morse-Smale complex of scalar functions defined on large three-dimensional structured grids. Computing the Morse-Smale complex of three-dimensional domains is challenging compared to two-dimensional domains because of the nontrivial structure introduced by the two types of saddle criticalities. We present a parallel shared-memory algorithm to compute the Morse-Smale complex based on Forman's discrete Morse theory. The algorithm achieves scalability via synergistic use of the CPU and the GPU. We first prove that the discrete gradient on the domain can be computed independently for each cell and hence can be implemented on the GPU. Second, we describe a two-step graph traversal algorithm to compute the 1-saddle–2-saddle connections efficiently and in parallel on the CPU. Simultaneously, the extremum-saddle connections are computed using a tree traversal algorithm on the GPU. Categories and Subject Descriptors (according to ACM CCS): I.3.5 Computational Geometry and Object Modeling
Persistent Homology meets Statistical Inference – A Case Study: Detecting Modes of One-Dimensional Signals
2014
We investigate the problem of estimating the persistent homology of noisy one-dimensional signals. We relate this to the problem of estimating the number of modes (i.e., local maxima) – a well-known question in statistical inference – and we show how to do so without pre-smoothing the data. To this end, we extend the ideas of persistent homology by working with norms different from the (classical) supremum norm. As a particular case we investigate the so-called Kolmogorov norm. We argue that this extension has certain statistical advantages. We offer confidence bands for the attendant Kolmogorov signatures, thereby allowing for the selection of relevant signatures with a statistically controllable error. As a result of independent interest, we show that so-called taut strings minimize the number of critical points for a very general class of functions. We illustrate our results by several numerical examples. AMS subject classification: Primary 62G05, 62G20; secondary 62H12
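The core idea – keeping only those local maxima whose persistence exceeds a noise level – can be sketched for the classical supremum-norm case; the paper's Kolmogorov-norm variant and its confidence bands are not reproduced here, and the threshold `tau` below is a hypothetical user-supplied noise level.

```python
def count_modes(signal, tau):
    """Count local maxima of a 1D signal whose superlevel-set persistence
    is at least tau; lower-persistence maxima are discarded as noise.
    Vertices are swept from high to low with a union-find over indices."""
    n = len(signal)
    parent = list(range(n))
    comp_max = list(range(n))  # representative maximum of each component

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    alive = [False] * n
    persistences = []  # of the maxima that die at some merge
    for v in sorted(range(n), key=lambda i: -signal[i]):
        alive[v] = True
        for u in (v - 1, v + 1):
            if 0 <= u < n and alive[u]:
                ru, rv = find(u), find(v)
                if ru == rv:
                    continue
                a, b = comp_max[ru], comp_max[rv]
                keep, die = (a, b) if signal[a] >= signal[b] else (b, a)
                if die != v:  # a genuine local maximum dies at saddle v
                    persistences.append(signal[die] - signal[v])
                parent[ru] = rv
                comp_max[rv] = keep
    # the global maximum never dies: one essential mode
    return 1 + sum(p >= tau for p in persistences)
```

Raising `tau` monotonically prunes modes in order of their persistence, which is what makes a statistically calibrated threshold (as the paper proposes) directly usable.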