Results 1 – 8 of 8
Elastic functional coding of human actions: From vector fields to latent variables
In CVPR, 2015
Abstract

Cited by 3 (2 self)
Human activities observed from visual sensors often give rise to a sequence of smoothly varying features. In many cases, the space of features can be formally defined as a manifold, where the action becomes a trajectory on the manifold. Such trajectories are high dimensional in addition to being nonlinear, which can severely limit computations on them. We also argue that by their nature, human actions themselves lie on a much lower dimensional manifold compared to the high dimensional feature space. Learning an accurate low dimensional embedding for actions could have a huge impact in the areas of efficient search and retrieval, visualization, learning, and recognition. Traditional manifold learning addresses this problem for static points in R^n, but its extension to trajectories on Riemannian ...
Sparse Coding on Symmetric Positive Definite Manifolds using Bregman Divergences
Abstract

Cited by 2 (2 self)
This paper introduces sparse coding and dictionary learning for Symmetric Positive Definite (SPD) matrices, which are often used in machine learning, computer vision and related areas. Unlike traditional sparse coding schemes that work in vector spaces, in this paper we discuss how SPD matrices can be described by sparse combinations of dictionary atoms, where the atoms are also SPD matrices. We propose to seek sparse coding by embedding the space of SPD matrices into Hilbert spaces through two types of Bregman matrix divergences. This not only leads to an efficient way of performing sparse coding, but also an online and iterative scheme for dictionary learning. We apply the proposed methods to several computer vision tasks where images are represented by region covariance matrices. Our proposed algorithms outperform state-of-the-art methods on a wide range of classification tasks, including face recognition, action recognition, material classification and texture categorization. Index Terms—Riemannian geometry, Bregman divergences, kernel methods, sparse coding, dictionary learning.
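One of the Bregman matrix divergences this abstract alludes to is the symmetric Stein (Jensen-Bregman LogDet) divergence. The following is a minimal NumPy sketch of that divergence only, not the paper's sparse coding or dictionary learning method; the example matrices are made up for illustration.

```python
import numpy as np

def stein_divergence(X, Y):
    """Symmetric Stein (Jensen-Bregman LogDet) divergence between SPD matrices:
    S(X, Y) = log det((X + Y) / 2) - (1/2) log det(X Y).
    slogdet is used for numerical stability."""
    _, ld_mid = np.linalg.slogdet((X + Y) / 2.0)
    _, ld_x = np.linalg.slogdet(X)
    _, ld_y = np.linalg.slogdet(Y)
    return ld_mid - 0.5 * (ld_x + ld_y)

# Toy stand-ins for region covariance descriptors.
A = np.array([[2.0, 0.3], [0.3, 1.0]])
B = np.array([[1.5, -0.2], [-0.2, 2.5]])
print(stein_divergence(A, A))       # 0.0 for identical matrices
print(stein_divergence(A, B) > 0)   # True: positive for distinct SPD inputs
```

The divergence is symmetric in its arguments and vanishes only when the two matrices coincide, which is what makes it usable as a dissimilarity measure between covariance descriptors.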
Projection metric learning on Grassmann manifold with application to video based face recognition
In CVPR, 2015
Abstract

Cited by 1 (0 self)
In video-based face recognition, great success has been made by representing videos as linear subspaces, which typically lie in a special type of non-Euclidean space known as the Grassmann manifold. To leverage the kernel-based methods developed for Euclidean space, several recent methods have been proposed to embed the Grassmann manifold into a high dimensional Hilbert space by exploiting the well-established Projection Metric, which can approximate the Riemannian geometry of the Grassmann manifold. Nevertheless, they inevitably introduce the drawbacks of traditional kernel-based methods, such as the implicit map and high computational cost, to the Grassmann manifold. To overcome such limitations, we propose a novel method to learn the Projection Metric directly on the Grassmann manifold rather than in Hilbert space. From the perspective of manifold learning, our method can be regarded as performing a geometry-aware dimensionality reduction from the original Grassmann manifold to a lower-dimensional, more discriminative Grassmann manifold where more favorable classification can be achieved. Experiments on several real-world video face datasets demonstrate that the proposed method yields competitive performance compared with the state-of-the-art algorithms.
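The Projection Metric the abstract builds on has a simple closed form: the Frobenius distance between the projection matrices of two subspaces, scaled by 1/sqrt(2). A minimal sketch of just that distance, with random subspaces standing in for video representations:

```python
import numpy as np

def projection_metric(X, Y):
    """Projection metric between subspaces span(X) and span(Y), where
    X and Y are n x q matrices with orthonormal columns:
    d(X, Y) = ||X X^T - Y Y^T||_F / sqrt(2)."""
    return np.linalg.norm(X @ X.T - Y @ Y.T, 'fro') / np.sqrt(2.0)

rng = np.random.default_rng(0)
# Orthonormal bases of two 3-dim subspaces of R^10 (e.g. from frame features).
X, _ = np.linalg.qr(rng.standard_normal((10, 3)))
Y, _ = np.linalg.qr(rng.standard_normal((10, 3)))
print(projection_metric(X, X))   # 0.0: identical subspaces
print(projection_metric(X, Y))   # positive for distinct subspaces
```

Because it depends only on the projection matrices X X^T, the distance is invariant to the choice of orthonormal basis for each subspace, which is exactly the Grassmann quotient structure.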
Geometry-Aware Principal Component Analysis for Symmetric Positive Definite Matrices
Abstract

Cited by 1 (1 self)
Symmetric positive definite (SPD) matrices, e.g. covariance matrices, are ubiquitous in machine learning applications. However, because their size grows as n^2 (where n is the number of variables), their high dimensionality is a crucial point when working with them. Thus, it is often useful to apply dimensionality reduction techniques to them. Principal component analysis (PCA) is a canonical tool for dimensionality reduction, which for vector data reduces the dimension of the input data while maximizing the preserved variance. Yet the commonly used, naive extensions of PCA to matrices result in suboptimal variance retention. Moreover, when applied to SPD matrices, they ignore the geometric structure of the space of SPD matrices, further degrading the performance. In this paper we develop a new Riemannian-geometry-based formulation of PCA for SPD matrices that i) preserves more data variance by appropriately extending PCA to matrix data, and ii) extends the standard definition from the Euclidean to the Riemannian geometries. We experimentally demonstrate the usefulness of our approach as preprocessing for EEG signals.
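A common geometry-aware baseline (not this paper's specific formulation) is to map each SPD matrix to the tangent space via the matrix logarithm and run ordinary PCA on the vectorized logs. A minimal sketch with synthetic covariance matrices:

```python
import numpy as np

def spd_log(M):
    # Matrix logarithm of an SPD matrix via eigendecomposition:
    # log(M) = V diag(log w) V^T.
    w, V = np.linalg.eigh(M)
    return (V * np.log(w)) @ V.T

rng = np.random.default_rng(1)
covs = []
for _ in range(20):
    A = rng.standard_normal((4, 4))
    covs.append(A @ A.T + 4.0 * np.eye(4))   # random well-conditioned SPD

# Vectorize the log-mapped matrices, center, and apply PCA via SVD.
V_logs = np.stack([spd_log(C).ravel() for C in covs])   # shape (20, 16)
Vc = V_logs - V_logs.mean(axis=0)
_, _, Wt = np.linalg.svd(Vc, full_matrices=False)
reduced = Vc @ Wt[:2].T        # 2-D embedding of each 4x4 SPD matrix
print(reduced.shape)           # (20, 2)
```

The log map flattens the curved SPD space before PCA, so distances respect the manifold better than naively vectorizing the raw matrices.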
Geometric Compression of Orientation Signals for Fast Gesture Analysis
Abstract
This paper concerns itself with compression strategies for orientation signals, seen as signals evolving on the space of quaternions. The compression techniques extend classical signal approximation strategies used in data mining by explicitly taking into account the quotient-space properties of the quaternion space. The approximation techniques are applied to the case of human gesture recognition from cellphone-based orientation sensors. Results indicate that the proposed approach achieves high recognition accuracies with low storage requirements, with the geometric computations providing added robustness compared with classical vector-space computations.
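The quotient-space property mentioned here is that q and -q represent the same orientation, so any distance on orientation signals must identify antipodal quaternions. A minimal sketch of one common geodesic distance respecting this (the factor of 2 makes it the rotation angle; conventions vary, and this is not necessarily the paper's exact measure):

```python
import numpy as np

def quat_distance(q1, q2):
    """Geodesic distance between unit quaternions under the quotient
    structure (q and -q identified): d = 2 * arccos(|<q1, q2>|)."""
    d = abs(float(np.dot(q1, q2)))
    return 2.0 * np.arccos(np.clip(d, 0.0, 1.0))  # clip guards rounding

q = np.array([1.0, 0.0, 0.0, 0.0])                 # identity orientation
r = np.array([np.cos(0.1), np.sin(0.1), 0.0, 0.0])  # 0.2 rad rotation about x
print(quat_distance(q, q))    # 0.0
print(quat_distance(q, -q))   # 0.0: antipodal quaternions, same orientation
print(quat_distance(q, r))    # 0.2
```

A plain Euclidean distance in R^4 would report ||q - (-q)|| = 2 for the same orientation, which is exactly the failure mode the quotient-aware computation avoids.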
Bayesian Nonparametric Clustering for Positive Definite Matrices
Abstract
Symmetric Positive Definite (SPD) matrices emerge as data descriptors in several applications of computer vision such as object tracking, texture recognition, and diffusion tensor imaging. Clustering these data matrices forms an integral part of these applications, for which soft-clustering algorithms (K-Means, Expectation Maximization, etc.) are generally used. As is well-known, these algorithms need the number of clusters to be specified, which is difficult when the dataset scales. To address this issue, we resort to the classical nonparametric Bayesian framework by modeling the data as a mixture model using the Dirichlet process (DP) prior. Since these matrices do not conform to Euclidean geometry but rather belong to a curved Riemannian manifold, existing DP models cannot be directly applied. Thus, in this paper, we propose a novel DP mixture model framework for SPD matrices. Using the log-determinant divergence as the underlying dissimilarity measure to compare these matrices, and further using the connection between this measure and the Wishart distribution, we derive a novel DPM model based on the Wishart-Inverse-Wishart conjugate pair. We apply this model to several applications in computer vision. Our experiments demonstrate that our model is scalable to the dataset size and at the same time achieves superior accuracy compared to several state-of-the-art parametric and nonparametric clustering algorithms. Index Terms—Region covariances, Dirichlet process, nonparametric methods, positive definite matrices.
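The log-determinant (Burg) divergence used as the dissimilarity measure here has a short closed form. A minimal sketch of the divergence alone, not the DP mixture model itself; the example matrices are illustrative:

```python
import numpy as np

def logdet_divergence(X, Y):
    """Burg (log-determinant) matrix divergence:
    D(X, Y) = tr(X Y^{-1}) - log det(X Y^{-1}) - n.
    Asymmetric in general; zero iff X == Y."""
    n = X.shape[0]
    XYinv = X @ np.linalg.inv(Y)
    _, ld = np.linalg.slogdet(XYinv)
    return float(np.trace(XYinv)) - ld - n

A = np.array([[3.0, 0.5], [0.5, 2.0]])
print(logdet_divergence(A, A))                 # 0.0
print(logdet_divergence(A, np.eye(2)) > 0)     # True
```

Unlike the Stein divergence, this one is a genuine Bregman divergence generated by the negative log-determinant, which is what links it to the Wishart likelihood the abstract exploits.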
Log-Euclidean Metric Learning on Symmetric Positive Definite Manifold with Application to Image Set Classification
Abstract
The manifold of Symmetric Positive Definite (SPD) matrices has been successfully used for data representation in image set classification. By endowing the SPD manifold with the Log-Euclidean Metric, existing methods typically work on vector forms of SPD matrix logarithms. This, however, not only inevitably distorts the geometrical structure of the space of SPD matrix logarithms but also brings low efficiency, especially when the dimensionality of the SPD matrix is high. To overcome this limitation, we propose a novel metric learning approach to work directly on logarithms of SPD matrices. Specifically, our method aims to learn a tangent map that can di ...
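The Log-Euclidean Metric that this abstract starts from is simply the Frobenius distance between matrix logarithms. A minimal sketch of that baseline distance (not the paper's learned metric):

```python
import numpy as np

def spd_log(M):
    # Matrix logarithm of an SPD matrix via eigendecomposition.
    w, V = np.linalg.eigh(M)
    return (V * np.log(w)) @ V.T

def log_euclidean_distance(X, Y):
    """Log-Euclidean distance between SPD matrices:
    d(X, Y) = || log(X) - log(Y) ||_F."""
    return np.linalg.norm(spd_log(X) - spd_log(Y), 'fro')

A = np.diag([1.0, 4.0])
B = np.eye(2)
print(log_euclidean_distance(A, B))   # ln 4, approximately 1.3863
```

Working in the log domain turns the curved SPD manifold into a flat vector space, which is what makes metric learning on the logarithms tractable in the first place.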
Beyond Covariance: Feature Representation with Nonlinear Kernel Matrices
Abstract
The covariance matrix has recently received increasing attention in computer vision by leveraging the Riemannian geometry of symmetric positive-definite (SPD) matrices. Originally proposed as a region descriptor, it has now been used as a generic representation in various recognition tasks. However, the covariance matrix has shortcomings such as being prone to singularity, limited capability in modeling complicated feature relationships, and having a fixed form of representation. This paper argues that more appropriate SPD-matrix-based representations should be explored to achieve better recognition. It proposes an open framework to use the kernel matrix over feature dimensions as a generic representation and discusses its properties and advantages. The proposed framework significantly elevates the covariance representation to the unlimited opportunities provided by this new representation. Experimental study shows that this representation consistently outperforms its covariance counterpart on various visual recognition tasks. In particular, it achieves significant improvement on skeleton-based human action recognition, demonstrating state-of-the-art performance over both the covariance and the existing non-covariance representations.
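The core idea of replacing the covariance with a kernel matrix over feature dimensions can be sketched briefly: treat each feature dimension as a vector of its observations and evaluate a nonlinear kernel between every pair of dimensions. The RBF kernel and the gamma value below are illustrative choices, not necessarily those used in the paper:

```python
import numpy as np

def rbf_feature_kernel(F, gamma=0.1):
    """Kernel matrix over feature dimensions: each row of F is one feature
    observed at N points (frames, pixels, joints). The result K with
    K[i, j] = exp(-gamma * ||f_i - f_j||^2) is a d x d symmetric positive
    semi-definite descriptor, analogous to a d x d covariance matrix."""
    sq = np.sum(F ** 2, axis=1)
    D2 = sq[:, None] + sq[None, :] - 2.0 * F @ F.T   # pairwise squared dists
    return np.exp(-gamma * np.maximum(D2, 0.0))      # clamp rounding noise

rng = np.random.default_rng(2)
F = rng.standard_normal((5, 100))   # 5 features observed over 100 samples
K = rbf_feature_kernel(F)
print(K.shape)                      # (5, 5), same size as the covariance
```

Unlike the sample covariance, which captures only linear feature relationships, the kernel matrix can encode nonlinear dependence between feature dimensions while remaining an SPD-style descriptor of the same size.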