Results 11-20 of 156
Vector diffusion maps and the connection Laplacian
Comm. Pure Appl. Math.
Cited by 48 (13 self)
Abstract. We introduce vector diffusion maps (VDM), a new mathematical framework for organizing and analyzing massive high-dimensional data sets, images and shapes. VDM is a mathematical and algorithmic generalization of diffusion maps and other nonlinear dimensionality reduction methods, such as LLE, ISOMAP and Laplacian eigenmaps. While existing methods are either directly or indirectly related to the heat kernel for functions over the data, VDM is based on the heat kernel for vector fields. VDM provides tools for organizing complex data sets, embedding them in a low-dimensional space, and interpolating and regressing vector fields over the data. In particular, it equips the data with a metric, which we refer to as the vector diffusion distance. In the manifold learning setup, where the data set is distributed on (or near) a low-dimensional manifold M^d embedded in R^p, we prove the relation between VDM and the connection Laplacian operator for vector fields over the manifold. Key words: dimensionality reduction, vector fields, heat kernel, parallel transport, local principal component analysis, alignment.
Convergence of Laplacian eigenmaps
In NIPS, 2006
Cited by 46 (4 self)
Geometrically based methods for various tasks of machine learning have attracted considerable attention over the last few years. In this paper we show convergence of eigenvectors of the point cloud Laplacian to the eigenfunctions of the Laplace-Beltrami operator on the underlying manifold, thus establishing the first convergence results for a spectral dimensionality reduction algorithm in the manifold setting.
Angular Synchronization by Eigenvectors and Semidefinite Programming: Analysis and Application to Class Averaging in Cryo-Electron Microscopy
2009
Cited by 46 (18 self)
The angular synchronization problem is to obtain an accurate estimation (up to a constant additive phase) for a set of unknown angles θ1,..., θn from m noisy measurements of their offsets θi − θj mod 2π. Of particular interest is angle recovery in the presence of many outlier measurements that are uniformly distributed in [0, 2π) and carry no information on the true offsets. We introduce an efficient recovery algorithm for the unknown angles from the top eigenvector of a specially designed Hermitian matrix. The eigenvector method is extremely stable and succeeds even when the number of outliers is exceedingly large. For example, we successfully estimate n = 400 angles from a full set of m = (400 choose 2) offset measurements of which 90% are outliers in less than a second on a commercial laptop. We use random matrix theory to prove that the eigenvector method gives ...
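The eigenvector method described above can be sketched in a few lines: build the Hermitian matrix H with entries e^{i(θi − θj)} and read the angles off the phases of its top eigenvector. Below is a minimal illustration; the small problem size, the noise-free measurements (no outliers), and the use of plain power iteration instead of a full eigensolver are our simplifications, not the paper's setting.

```python
# Eigenvector method for angular synchronization, noise-free toy case.
import cmath
import math
import random

random.seed(0)
n = 30
truth = [random.uniform(0, 2 * math.pi) for _ in range(n)]

# Hermitian measurement matrix H[i][j] = exp(i * (theta_i - theta_j)).
H = [[cmath.exp(1j * (truth[i] - truth[j])) for j in range(n)] for i in range(n)]

# Power iteration for the top eigenvector of H.
v = [complex(1, 0)] * n
for _ in range(100):
    w = [sum(H[i][j] * v[j] for j in range(n)) for i in range(n)]
    norm = math.sqrt(sum(abs(x) ** 2 for x in w))
    v = [x / norm for x in w]

# Each entry's phase estimates the corresponding angle up to one global shift.
est = [cmath.phase(x) for x in v]
shift = truth[0] - est[0]
err = max(abs(cmath.exp(1j * (est[i] + shift)) - cmath.exp(1j * truth[i]))
          for i in range(n))
```

With outliers, the same recipe applies after replacing the corrupted entries of H; the abstract's point is that the top eigenvector remains informative even then.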
Empirical graph Laplacian approximation of Laplace-Beltrami operators: Large sample results
2006
Cited by 43 (0 self)
Let M be a compact Riemannian submanifold of R^m of dimension d and let X1,..., Xn be a sample of i.i.d. points in M with uniform distribution. We study the random operators ∆_{h_n,n} f(p) := (1/(n h_n^{d+2})) ...
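An estimator of this form can be tried out directly. The sketch below applies a kernel-weighted difference operator with the 1/(n h^{d+2}) scaling to f = cos θ on the unit circle; the Gaussian kernel, the bandwidth, and the test function are our illustrative assumptions, not the paper's exact choices. Since the Laplace-Beltrami operator sends cos θ to −cos θ on the circle, the estimate should be negative at the maximum (1, 0) and positive at the minimum (−1, 0).

```python
# Kernel-based empirical Laplacian on an i.i.d. uniform sample of the circle.
import math
import random

random.seed(1)
n, h, d = 2000, 0.2, 1            # sample size, bandwidth, manifold dimension

# i.i.d. uniform sample on the unit circle, a 1-dimensional manifold in R^2.
thetas = [random.uniform(0, 2 * math.pi) for _ in range(n)]
pts = [(math.cos(t), math.sin(t)) for t in thetas]

def f(x, y):
    return x                      # f = cos(theta) restricted to the circle

def emp_laplacian(p):
    """(1 / (n h^{d+2})) * sum_i K(|p - X_i| / h) * (f(X_i) - f(p))."""
    fp = f(*p)
    s = 0.0
    for q in pts:
        r = math.dist(p, q) / h
        s += math.exp(-r * r) * (f(*q) - fp)
    return s / (n * h ** (d + 2))

at_max = emp_laplacian((1.0, 0.0))   # expected negative
at_min = emp_laplacian((-1.0, 0.0))  # expected positive
```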
Discrete Laplace Operator on Meshed Surfaces
Cited by 42 (11 self)
In recent years a considerable amount of work in graphics and geometric optimization has used tools based on the Laplace-Beltrami operator on a surface. The applications of the Laplacian include mesh editing, surface smoothing, and shape interpolation, among others. However, it has been shown [12, 23, 25] that the popular cotangent approximation schemes do not provide convergent pointwise (or even L^2) estimates, while many applications rely on pointwise estimation. Existence of such schemes has been an open question [12]. In this paper we propose the first algorithm for approximating the Laplace operator of a surface from a mesh with pointwise convergence guarantees applicable to arbitrary meshed surfaces. We show that for a sufficiently fine mesh over an arbitrary surface, our mesh Laplacian is close to the Laplace-Beltrami operator on the surface at every point of the surface. Moreover, the proposed algorithm is simple and easily implementable. Experimental evidence shows that our algorithm exhibits convergence empirically and outperforms cotangent-based methods in providing accurate approximation of the Laplace operator for various meshes.
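For reference, the cotangent scheme criticized above assigns each interior edge (i, j) the weight (cot α + cot β)/2, where α and β are the angles opposite the edge in its two incident triangles. A minimal sketch for a single edge in the plane; the specific vertex coordinates are an illustrative assumption.

```python
# Cotangent weight of one interior mesh edge, computed from 2D coordinates.
def cot_angle(a, b, c):
    """Cotangent of the angle at vertex a in triangle (a, b, c)."""
    u = (b[0] - a[0], b[1] - a[1])
    v = (c[0] - a[0], c[1] - a[1])
    dot = u[0] * v[0] + u[1] * v[1]
    cross = u[0] * v[1] - u[1] * v[0]
    return dot / abs(cross)

# Edge (i, j) shared by triangles (i, j, k) and (j, i, m):
# w_ij = (cot of angle at k + cot of angle at m) / 2.
i, j = (0.0, 0.0), (1.0, 0.0)
k, m = (0.5, 0.8), (0.5, -0.6)
w_ij = 0.5 * (cot_angle(k, i, j) + cot_angle(m, j, i))
```

The scheme is exact in useful averaged senses, which is why it remains popular despite the pointwise non-convergence results the abstract cites.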
Constructing Laplace Operator from Point Clouds in R^d
2009
Cited by 39 (5 self)
We present an algorithm for approximating the Laplace-Beltrami operator from an arbitrary point cloud obtained from a k-dimensional manifold embedded in d-dimensional space. We show that this PCD Laplace (Point Cloud Data Laplace) operator converges to the Laplace-Beltrami operator on the underlying manifold as the point cloud becomes denser. Unlike previous work, we do not assume that the data samples are independent and identically distributed draws from a probability distribution, and we do not require a global mesh. The resulting algorithm is easy to implement. We present experimental results indicating that even for point sets sampled from a uniform distribution, the PCD Laplace operator converges faster than the weighted graph Laplacian. We also show that using our PCD Laplacian we can directly estimate certain geometric invariants, such as manifold area.
Graph Laplacians and their convergence on random neighborhood graphs
Journal of Machine Learning Research, 2006
Cited by 35 (7 self)
Given a sample from a probability measure with support on a submanifold in Euclidean space, one can construct a neighborhood graph which can be seen as an approximation of the submanifold. The graph Laplacian of such a graph is used in several machine learning methods like semi-supervised learning, dimensionality reduction and clustering. In this paper we determine the pointwise limit of three different graph Laplacians used in the literature as the sample size increases and the neighborhood size approaches zero. We show that for a uniform measure on the submanifold all graph Laplacians have the same limit up to constants. However, in the case of a non-uniform measure on the submanifold only the so-called random walk graph Laplacian converges to the weighted Laplace-Beltrami operator.
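The three graph Laplacians being compared are typically the unnormalized L = D − W, the random walk L_rw = I − D⁻¹W, and the symmetric L_sym = I − D^{-1/2} W D^{-1/2}. A small sketch building all three from a weight matrix; the sample points and the Gaussian similarity are illustrative assumptions.

```python
# Unnormalized, random walk, and symmetric graph Laplacians of a small graph.
import math

pts = [0.0, 0.1, 0.2, 1.0, 1.1]
n = len(pts)
W = [[math.exp(-(pts[i] - pts[j]) ** 2 / 0.02) if i != j else 0.0
      for j in range(n)] for i in range(n)]
deg = [sum(row) for row in W]

# L_un = D - W, L_rw = I - D^{-1} W, L_sym = I - D^{-1/2} W D^{-1/2}.
L_un = [[(deg[i] if i == j else 0.0) - W[i][j] for j in range(n)]
        for i in range(n)]
L_rw = [[(1.0 if i == j else 0.0) - W[i][j] / deg[i] for j in range(n)]
        for i in range(n)]
L_sym = [[(1.0 if i == j else 0.0) - W[i][j] / math.sqrt(deg[i] * deg[j])
          for j in range(n)] for i in range(n)]

# L_un and L_rw annihilate constant functions (rows sum to zero); L_sym
# generally does not, one reason the pointwise limits differ.
row_sums_un = [sum(row) for row in L_un]
row_sums_rw = [sum(row) for row in L_rw]
```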
Laplacian Support Vector Machines Trained in the Primal
Cited by 35 (1 self)
In the last few years, due to the growing ubiquity of unlabeled data, much effort has been spent by the machine learning community to develop a better understanding of, and improve the quality of, classifiers exploiting unlabeled data. Following the manifold regularization approach, Laplacian Support Vector Machines (LapSVMs) have shown state-of-the-art performance in semi-supervised classification. In this paper we present two strategies to solve the primal LapSVM problem, in order to overcome some issues of the original dual formulation. In particular, training a LapSVM in the primal can be efficiently performed with preconditioned conjugate gradient. We speed up training by using an early stopping strategy based on the prediction on unlabeled data or, if available, on labeled validation examples. This allows the algorithm to quickly compute approximate solutions with roughly the same classification accuracy as the optimal ones, considerably reducing the training time. The computational complexity of the training algorithm is reduced from O(n^3) to O(kn^2), where n is the combined number of labeled and unlabeled examples and k is empirically evaluated to be significantly smaller than n. Due to its simplicity, training LapSVM in the primal can be the starting point for additional enhancements of the original LapSVM formulation, such as those for dealing with large data sets. We present an extensive experimental evaluation on real-world data showing the benefits of the proposed approach.
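The O(kn^2) cost comes from running k matrix-vector products inside conjugate gradient instead of a cubic-cost factorization. Below is plain (unpreconditioned) conjugate gradient on a tiny symmetric positive definite system; the toy matrix merely stands in for the LapSVM primal Hessian, and the loose tolerance mimics the early-stopping idea, so this is a sketch of the solver, not of LapSVM itself.

```python
# Conjugate gradient on a small SPD system A x = b, pure Python.
def cg(A, b, tol=1e-12, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual b - A x for x = 0
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:          # early stopping on the squared residual
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = cg(A, b)
res = max(abs(sum(A[i][j] * x[j] for j in range(3)) - b[i]) for i in range(3))
```

Each iteration costs one matrix-vector product, which is the O(n^2) factor; stopping after k iterations with an approximate solution gives the O(kn^2) total cited in the abstract.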
A topological view of unsupervised learning from noisy data
SIAM Journal on Computing, 2011
Cited by 35 (1 self)
Abstract. In this paper, we take a topological view of unsupervised learning. From this point of view, clustering may be interpreted as trying to find the number of connected components of any underlying geometrically structured probability distribution, in a certain sense that we will make precise. We construct a geometrically structured probability distribution that seems appropriate for modeling data in very high dimensions. A special case of our construction is the mixture of Gaussians, where there is Gaussian noise concentrated around a finite set of points (the means). More generally, we consider Gaussian noise concentrated around a low-dimensional manifold and discuss how to recover the homology of this underlying geometric core from data that do not lie on it. We show that if the variance of the Gaussian noise is small in a certain sense, then the homology can be learned with high confidence by an algorithm that has a weak (linear) dependence on the ambient dimension. Our algorithm has a natural interpretation as a spectral learning algorithm using a combinatorial Laplacian of a suitable data-derived simplicial complex.
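The "clustering as counting connected components" interpretation can be seen in its simplest form by building a neighborhood graph on noisy samples and counting its components. The two cluster centers, the noise level, and the radius below are illustrative assumptions, and this bare graph is only the 0-dimensional shadow of the simplicial complex the paper actually uses.

```python
# Counting connected components of an epsilon-neighborhood graph.
import math
import random

random.seed(2)
centers = [(0.0, 0.0), (5.0, 5.0)]
pts = [(cx + random.gauss(0, 0.3), cy + random.gauss(0, 0.3))
       for cx, cy in centers for _ in range(50)]

eps = 1.5                                  # neighborhood radius
n = len(pts)
parent = list(range(n))                    # union-find over sample points

def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]      # path halving
        i = parent[i]
    return i

# Union every pair of points closer than eps.
for i in range(n):
    for j in range(i + 1, n):
        if math.dist(pts[i], pts[j]) < eps:
            parent[find(i)] = find(j)

components = len({find(i) for i in range(n)})
```

With well-separated clusters and small noise the component count recovers the number of mixture components, which is the special case of the homology-recovery result described above.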