Results 1 - 10 of 105
A unified model for probabilistic principal surfaces
- IEEE Transactions on Pattern Analysis and Machine Intelligence, 2001
"... AbstractÐPrincipal curves and surfaces are nonlinear generalizations of principal components and subspaces, respectively. They can provide insightful summary of high-dimensional data not typically attainable by classical linear methods. Solutions to several problems, such as proof of existence and c ..."
Cited by 61 (6 self)
Abstract—Principal curves and surfaces are nonlinear generalizations of principal components and subspaces, respectively. They can provide insightful summary of high-dimensional data not typically attainable by classical linear methods. Solutions to several problems, such as proof of existence and convergence, faced by the original principal curve formulation have been proposed in the past few years. Nevertheless, these solutions are not generally extensible to principal surfaces, the mere computation of which presents a formidable obstacle. Consequently, relatively few studies of principal surfaces are available. Recently, we proposed the probabilistic principal surface (PPS) to address a number of issues associated with current principal surface algorithms. PPS uses a manifold oriented covariance noise model, based on the generative topographical mapping (GTM), which can be viewed as a parametric formulation of Kohonen's self-organizing map. Building on the PPS, we introduce a unified covariance model that implements PPS (0 < α < 1), GTM (α = 1), and the manifold-aligned GTM (α > 1) by varying the clamping parameter α. Then, we comprehensively evaluate the empirical performance (reconstruction error) of PPS, GTM, and the manifold-aligned GTM on three popular benchmark data sets. It is shown in two different comparisons that the PPS outperforms the GTM under identical parameter settings. Convergence of the PPS is found to be identical to that of the GTM and the computational overhead incurred by the PPS decreases to 40 percent or less for more complex manifolds. These results show that the generalized PPS provides a flexible and effective way of obtaining principal surfaces. Index Terms—Principal curve, principal surface, probabilistic, dimensionality reduction, nonlinear manifold, generative topographic mapping.
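As a hedged illustration of the clamping idea described above (illustrative code written from the abstract, not taken from the paper, whose exact normalization may differ): an oriented noise covariance with variance alpha/beta along the Q directions tangent to the manifold and (D - alpha*Q) / (beta*(D - Q)) along the remaining normal directions keeps the total variance at D/beta, reduces to the spherical GTM covariance (1/beta)*I at alpha = 1, and concentrates onto the manifold as alpha grows.

```python
import numpy as np

def clamped_covariance(E, Q, alpha, beta):
    """Illustrative oriented noise covariance controlled by a clamping
    parameter alpha: variance alpha/beta along the Q tangent directions and
    (D - alpha*Q) / (beta*(D - Q)) along the normal directions, so the trace
    is always D/beta and alpha = 1 gives the spherical GTM covariance."""
    D = E.shape[0]
    assert Q < D and 0 < alpha < D / Q
    tangent, normal = E[:, :Q], E[:, Q:]
    return (alpha / beta) * tangent @ tangent.T \
        + (D - alpha * Q) / (beta * (D - Q)) * normal @ normal.T

# Quick check in D = 3 with a Q = 2 manifold and beta = 4.
E = np.linalg.qr(np.random.default_rng(0).normal(size=(3, 3)))[0]  # orthonormal basis
print(np.allclose(clamped_covariance(E, 2, 1.0, 4.0), np.eye(3) / 4.0))  # True (GTM case)
print(np.trace(clamped_covariance(E, 2, 0.5, 4.0)))                      # 0.75 = D / beta
```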
Statistics of shape via principal geodesic analysis on Lie groups
- In IEEE Conf. on Computer Vision and Pattern Recognition, 2003
"... Abstract ..."
Piecewise Linear Skeletonization Using Principal Curves, 2002
"... We propose an algorithm to find piecewise linear skeletons of hand-written characters by using principal curves. The development of the method was inspired by the apparent similarity between the definition of principal curves (smooth curves which pass through the "middle" of a cloud of poi ..."
Cited by 52 (0 self)
We propose an algorithm to find piecewise linear skeletons of hand-written characters by using principal curves. The development of the method was inspired by the apparent similarity between the definition of principal curves (smooth curves which pass through the "middle" of a cloud of points) and the medial axis (smooth curves that go equidistantly from the contours of a character image). The central fitting-and-smoothing step of the algorithm is an extension of the polygonal line algorithm [1, 2] which approximates principal curves of data sets by piecewise linear curves. The polygonal line algorithm is extended to find principal graphs and complemented with two steps specific to the task of skeletonization: an initialization method to capture the approximate topology of the character, and a collection of restructuring operations to improve the structural quality of the skeleton produced by the initialization method.
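The fitting-and-smoothing step can be pictured with a much-simplified sketch (the real polygonal line algorithm projects points onto segments, adds vertices adaptively, and handles graph topology; the helper below is only illustrative): alternate assigning points to their nearest vertex with a vertex update that balances the local data mean against a neighbor-smoothing term.

```python
import numpy as np

def fit_polyline(points, n_vertices=10, n_iter=50, lam=0.1):
    """Crude fitting-and-smoothing loop in the spirit of the polygonal line
    algorithm: alternate nearest-vertex assignment with a vertex update that
    trades data fidelity against a smoothing pull toward neighboring vertices."""
    # Initialize vertices along the first principal component of the data.
    mean = points.mean(axis=0)
    centered = points - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    t = centered @ vt[0]
    offsets = np.linspace(t.min(), t.max(), n_vertices)
    vertices = mean + np.outer(offsets, vt[0])

    for _ in range(n_iter):
        # Projection step (simplified): assign each point to its nearest vertex.
        d = ((points[:, None, :] - vertices[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        # Vertex update: local mean pulled toward the average of chain neighbors.
        new_vertices = vertices.copy()
        for j in range(n_vertices):
            assigned = points[labels == j]
            target = assigned.mean(axis=0) if len(assigned) else vertices[j]
            nbrs = [vertices[k] for k in (j - 1, j + 1) if 0 <= k < n_vertices]
            smooth = np.mean(nbrs, axis=0)
            new_vertices[j] = (target + lam * smooth) / (1.0 + lam)
        vertices = new_vertices
    return vertices

# Noisy half-circle: the fitted polyline should trace its middle.
rng = np.random.default_rng(0)
theta = rng.uniform(0, np.pi, 500)
data = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.normal(size=(500, 2))
print(fit_polyline(data).round(2))
```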
Regularized Principal Manifolds
- In Computational Learning Theory: 4th European Conference, 2001
"... Many settings of unsupervised learning can be viewed as quantization problems - the minimization ..."
Cited by 47 (5 self)
Many settings of unsupervised learning can be viewed as quantization problems - the minimization of the expected quantization error subject to some restrictions.
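As a toy, hedged rendering of that viewpoint (the names and the specific penalty below are illustrative, not the paper's): fit a small codebook arranged on a 1D chain by gradient descent on the quantization error plus a smoothness regularizer on neighboring codebook vectors.

```python
import numpy as np

def regularized_quantizer(x, m=8, lam=0.5, n_iter=100, lr=0.05):
    """Toy regularized quantization: codebook vectors w[0..m-1] on a 1D chain,
    fit by gradient steps on
        sum_i min_j ||x_i - w_j||^2  +  lam * sum_j ||w_{j+1} - w_j||^2."""
    rng = np.random.default_rng(1)
    w = x[rng.choice(len(x), m, replace=False)].astype(float)
    for _ in range(n_iter):
        # Quantization (assignment) step: nearest codebook vector per point.
        d = ((x[:, None, :] - w[None, :, :]) ** 2).sum(-1)
        j = d.argmin(axis=1)
        grad = np.zeros_like(w)
        # Gradient of the quantization error term.
        for k in range(m):
            pts = x[j == k]
            if len(pts):
                grad[k] += 2.0 * (len(pts) * w[k] - pts.sum(axis=0))
        # Gradient of the chain smoothness regularizer.
        diff = w[1:] - w[:-1]
        grad[:-1] += -2.0 * lam * diff
        grad[1:] += 2.0 * lam * diff
        w -= lr * grad / len(x)
    return w

x = np.random.default_rng(2).normal(size=(400, 2))
print(regularized_quantizer(x).shape)  # (8, 2)
```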
Markerless kinematic model and motion capture from volume sequences
- In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2003), 2003
"... We present an approach for model-free markerless motion capture of articulated kinematic structures. This approach is centered on our method for generating underlying nonlinear axes (or a skeleton curve) of a volume of genus zero (i.e., without holes). We describe the use of skeleton curves for deri ..."
Cited by 44 (3 self)
We present an approach for model-free markerless motion capture of articulated kinematic structures. This approach is centered on our method for generating underlying nonlinear axes (or a skeleton curve) of a volume of genus zero (i.e., without holes). We describe the use of skeleton curves for deriving a kinematic model and motion (in the form of joint angles over time) from a captured volume sequence. Our motion capture method uses a skeleton curve, found in each frame of a volume sequence, to automatically determine kinematic postures. These postures are aligned to determine a common kinematic model for the volume sequence. The derived kinematic model is then reapplied to each frame in the volume sequence to find the motion sequence suited to this model. We demonstrate our method on several types of motion, from synthetically generated volume sequences with an arbitrary kinematic topology, to human volume sequences captured from a set of multiple calibrated cameras.
Riemannian manifold learning
- IEEE Trans. Pattern Anal. Mach. Intell., 2008
"... Abstract—Recently, manifold learning has beenwidely exploited in pattern recognition, data analysis, andmachine learning. This paper presents a novel framework, called Riemannian manifold learning (RML), based on the assumption that the input high-dimensional data lie on an intrinsically low-dimensi ..."
Cited by 42 (0 self)
Abstract—Recently, manifold learning has been widely exploited in pattern recognition, data analysis, and machine learning. This paper presents a novel framework, called Riemannian manifold learning (RML), based on the assumption that the input high-dimensional data lie on an intrinsically low-dimensional Riemannian manifold. The main idea is to formulate the dimensionality reduction problem as a classical problem in Riemannian geometry, that is, how to construct coordinate charts for a given Riemannian manifold? We implement the Riemannian normal coordinate chart, which has been the most widely used in Riemannian geometry, for a set of unorganized data points. First, two input parameters (the neighborhood size k and the intrinsic dimension d) are estimated based on an efficient simplicial reconstruction of the underlying manifold. Then, the normal coordinates are computed to map the input high-dimensional data into a low-dimensional space. Experiments on synthetic data, as well as real-world images, demonstrate that our algorithm can learn intrinsic geometric structures of the data, preserve radial geodesic distances, and yield regular embeddings.
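A crude, illustrative stand-in for one ingredient of such a scheme (not the RML algorithm itself; the function and its parameters below are assumptions): approximate the radial geodesic distances that normal coordinates preserve by shortest paths on a k-nearest-neighbor graph, and take a rough angular coordinate from projecting each chord onto a local tangent plane estimated at the base point.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path

def crude_normal_coords(X, k=8, base=0):
    """Rough 2D 'normal-coordinate-like' embedding: radial part = graph
    geodesic distance from a base point, angular part = direction of the
    chord projected onto a local tangent plane at the base point."""
    # Radial geodesic distances via shortest paths on a k-NN graph.
    graph = kneighbors_graph(X, k, mode="distance")
    r = shortest_path(graph, method="D", directed=False, indices=base)

    # Local tangent plane at the base point from PCA of its neighbors.
    nbr_idx = np.argsort(((X - X[base]) ** 2).sum(1))[1 : k + 1]
    local = X[nbr_idx] - X[base]
    _, _, vt = np.linalg.svd(local, full_matrices=False)
    basis = vt[:2]                      # top-2 tangent directions

    # Angular coordinate from the chord's projection onto the tangent plane.
    proj = (X - X[base]) @ basis.T
    theta = np.arctan2(proj[:, 1], proj[:, 0])
    return np.c_[r * np.cos(theta), r * np.sin(theta)]

# Example: points on a gently curved 2D sheet embedded in 3D.
rng = np.random.default_rng(0)
uv = rng.uniform(-1, 1, size=(300, 2))
X = np.c_[uv, 0.3 * np.sin(2 * uv[:, 0])]
print(crude_normal_coords(X)[:5].round(3))
```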
Novel skeletal representation for articulated creatures
- In Proc. European Conf. on Computer Vision, 2004
"... Abstract. Volumetric structures are frequently used as shape descriptors for 3D data. The capture of such data is being facilitated by developments in multi-view video and range scanning, extending to subjects that are alive and moving. In this paper, we examine vision-based modeling and the related ..."
Cited by 35 (1 self)
Abstract. Volumetric structures are frequently used as shape descriptors for 3D data. The capture of such data is being facilitated by developments in multi-view video and range scanning, extending to subjects that are alive and moving. In this paper, we examine vision-based modeling and the related representation of moving articulated creatures using spines. We define a spine as a branching axial structure representing the shape and topology of a 3D object’s limbs, and capturing the limbs’ correspondence and motion over time. Our spine concept builds on skeletal representations often used to describe the internal structure of an articulated object and the significant protrusions. The algorithms for determining both 2D and 3D skeletons generally use an objective function tuned to balance stability against the responsiveness to detail. Our representation of a spine provides for enhancements over a 3D skeleton, afforded by temporal robustness and correspondence. We also introduce a probabilistic framework that is needed to compute the spine from a sequence of surface data. We present a practical implementation that approximates the spine’s joint probability function to reconstruct spines for synthetic and real subjects that move.
Continuous latent variable models for dimensionality reduction and sequential data reconstruction, 2001
"... ..."