Results 1–10 of 37
Tensor Decompositions and Applications
 SIAM REVIEW
, 2009
Abstract
Cited by 723 (18 self)
This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or N-way array. Decompositions of higher-order tensors (i.e., N-way arrays with N ≥ 3) have applications in psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, etc. Two particular tensor decompositions can be considered higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal component analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2, as well as nonnegative variants of all of the above. The N-way Toolbox and Tensor Toolbox, both for MATLAB, and the Multilinear Engine are examples of software packages for working with tensors.
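The CP model mentioned in this abstract can be illustrated with a minimal NumPy sketch (names and sizes here are hypothetical, not from the survey): a 3-way tensor assembled as a sum of rank-one outer products of factor-matrix columns.

```python
import numpy as np

# Hypothetical illustration of a rank-R CP model of a 3-way tensor:
# X ≈ sum_r a_r ∘ b_r ∘ c_r, the outer product of the r-th factor columns.
rng = np.random.default_rng(0)
I, J, K, R = 4, 5, 6, 2
A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))

# Assemble the tensor as a sum of R rank-one terms.
X = np.zeros((I, J, K))
for r in range(R):
    X += np.einsum('i,j,k->ijk', A[:, r], B[:, r], C[:, r])

# Equivalent single contraction over the shared rank index.
X2 = np.einsum('ir,jr,kr->ijk', A, B, C)
assert np.allclose(X, X2)
```

The loop and the one-shot `einsum` are the same model; production code would typically use a library such as the Tensor Toolbox named above rather than building the full tensor explicitly.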
Multilinear principal component analysis of tensor objects for recognition
 in Proc. Int. Conf. Pattern Recognit
, 2006
Abstract
Cited by 88 (15 self)
Abstract—This paper introduces a multilinear principal component analysis (MPCA) framework for tensor object feature extraction. Objects of interest in many computer vision and pattern recognition applications, such as 2D/3D images and video sequences, are naturally described as tensors or multilinear arrays. The proposed framework performs feature extraction by determining a multilinear projection that captures most of the original tensorial input variation. The solution is iterative in nature and proceeds by decomposing the original problem into a series of multiple projection subproblems. As part of this work, methods for subspace dimensionality determination are proposed and analyzed. It is shown that the MPCA framework discussed in this work supplants existing heterogeneous solutions such as the classical principal component analysis (PCA) and its 2D variant (2D PCA). Finally, a tensor object recognition system is proposed with the introduction of a discriminative tensor feature selection mechanism and a novel classification strategy, and applied to the problem of gait recognition. Results presented here indicate MPCA's utility as a feature extraction tool. It is shown that even without a fully optimized design, an MPCA-based gait recognition module achieves highly competitive performance and compares favorably to state-of-the-art gait recognizers. Index Terms—Dimensionality reduction, feature extraction, gait recognition, multilinear principal component analysis (MPCA), tensor objects.
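The multilinear projection described here can be sketched roughly as one mode-wise pass over a set of tensor samples; the function names, ranks, and the single-pass simplification below are assumptions for illustration, not the paper's actual algorithm, which iterates the per-mode subproblems to convergence.

```python
import numpy as np

# Hypothetical sketch of one MPCA-style pass: for each mode, estimate a
# projection matrix from the mode-n unfoldings of all samples, then
# project every sample into the smaller tensor subspace.
def mode_unfold(X, n):
    # Move axis n to the front and flatten the remaining axes into columns.
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

def mpca_once(samples, ranks):
    U = []
    for n, r in enumerate(ranks):
        # Scatter matrix of mode-n unfoldings, accumulated over all samples.
        S = sum(mode_unfold(X, n) @ mode_unfold(X, n).T for X in samples)
        eigvals, eigvecs = np.linalg.eigh(S)
        U.append(eigvecs[:, -r:])  # top-r eigenvectors of mode n
    # Project each sample: Y = X ×1 U1ᵀ ×2 U2ᵀ ×3 U3ᵀ
    projected = [np.einsum('ijk,ia,jb,kc->abc', X, U[0], U[1], U[2])
                 for X in samples]
    return projected, U

rng = np.random.default_rng(1)
samples = [rng.standard_normal((8, 8, 5)) for _ in range(10)]
projected, U = mpca_once(samples, ranks=(3, 3, 2))
```

Each sample keeps its tensor structure after projection, which is the point of MPCA compared with vectorizing and running classical PCA.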
Statistical Performance of Convex Tensor Decomposition
Abstract
Cited by 36 (5 self)
We analyze the statistical performance of a recently proposed convex tensor decomposition algorithm. Conventionally, tensor decomposition has been formulated as a nonconvex optimization problem, which has hindered the analysis of its performance. We show under some conditions that the mean squared error of the convex method scales linearly with a quantity we call the normalized rank of the true tensor. The current analysis naturally extends the analysis of convex low-rank matrix estimation to tensors. Furthermore, we show through numerical experiments that our theory can precisely predict the scaling behaviour in practice.
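The convex formulation this abstract refers to is typically built on the overlapped trace norm, the sum of nuclear norms of the mode-n unfoldings; that identification is my assumption about the method, and the helper below only computes the regularizer, not the full estimator.

```python
import numpy as np

# Hypothetical sketch: the overlapped trace norm of a tensor is the sum of
# nuclear (trace) norms of its mode-n unfoldings, which is convex in X.
def overlapped_trace_norm(X):
    total = 0.0
    for n in range(X.ndim):
        # Mode-n unfolding: axis n in front, remaining axes flattened.
        Xn = np.moveaxis(X, n, 0).reshape(X.shape[n], -1)
        total += np.linalg.norm(Xn, 'nuc')
    return total
```

Minimizing a squared loss plus this norm gives a convex surrogate for low multilinear rank, which is what makes a mean-squared-error analysis tractable.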
Bilinear discriminant component analysis
, 2007
Abstract
Cited by 32 (12 self)
Factor analysis and discriminant analysis are often used as complementary approaches to identify linear components in two-dimensional data arrays. For three-dimensional arrays, which may organize data in dimensions such as space, time, and trials, the opportunity arises to combine these two approaches. A new method, Bilinear Discriminant Component Analysis (BDCA), is derived and demonstrated in the context of functional brain imaging data, for which it seems ideally suited. The work suggests identifying a subspace projection which optimally separates classes while ensuring that each dimension in this space captures an independent contribution to the discrimination.
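The bilinear structure underlying such methods can be shown in a few lines; the score function and variable names below are illustrative assumptions, not BDCA itself, which additionally learns the weights discriminatively.

```python
import numpy as np

# Hypothetical sketch of a bilinear discriminant score: each trial is a
# space-by-time matrix X, scored by spatial weights u and temporal weights v.
def bilinear_score(X, u, v):
    return u @ X @ v  # scalar uᵀ X v

rng = np.random.default_rng(2)
X = rng.standard_normal((16, 100))  # e.g. channels x time samples
u = rng.standard_normal(16)
v = rng.standard_normal(100)
s = bilinear_score(X, u, v)
```

Factoring the weights into a spatial and a temporal component is what distinguishes the bilinear model from vectorizing each trial and fitting one long weight vector.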
Quasi-Newton methods on Grassmannians and multilinear approximations of tensors
, 2009
Abstract
Cited by 18 (4 self)
Abstract. In this paper we propose quasi-Newton and limited-memory quasi-Newton methods for objective functions defined on Grassmann manifolds or a product of Grassmann manifolds. Specifically, we define BFGS and L-BFGS updates in local and global coordinates on Grassmann manifolds or a product of these. We prove that, when local coordinates are used, our BFGS updates on Grassmann manifolds share the same optimality property as the usual BFGS updates on Euclidean spaces. When applied to the best multilinear rank approximation problem for general and symmetric tensors, our approach yields fast, robust, and accurate algorithms that exploit the special Grassmannian structure of the respective problems and work on tensors of large dimensions and arbitrarily high order. Extensive numerical experiments are included to substantiate our claims. Key words. Grassmann manifold, Grassmannian, product of Grassmannians, Grassmann quasi-Newton, Grassmann BFGS, Grassmann L-BFGS, multilinear rank, symmetric multilinear rank, tensor, symmetric tensor, approximations
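A minimal ingredient of any such Grassmannian optimizer is a gradient step that stays on the manifold; the sketch below shows a plain Riemannian gradient step with a QR retraction (an assumption for illustration, far simpler than the paper's BFGS and L-BFGS updates).

```python
import numpy as np

# Hypothetical sketch of one Riemannian gradient step on a Grassmannian
# Gr(n, p), with a point represented by an n x p orthonormal matrix Y:
# project the Euclidean gradient G onto the tangent space at Y, take a
# step, and retract back to the manifold via a QR factorization.
def grassmann_step(Y, G, step=0.1):
    H = (np.eye(Y.shape[0]) - Y @ Y.T) @ G  # tangent-space projection
    Q, _ = np.linalg.qr(Y - step * H)       # retraction to orthonormal columns
    return Q
```

Quasi-Newton variants replace the raw tangent gradient with a BFGS-type search direction, but the project-then-retract pattern is the same.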
Minimal subspaces in tensor representations
, 2011
Abstract
Cited by 17 (6 self)
In this paper we introduce and develop the notion of minimal subspaces in the framework of algebraic and topological tensor product spaces. This mathematical structure arises in a natural way in the study of tensor representations. We use minimal subspaces to prove the existence of a best approximation, for any element in a Banach tensor space, by means of a tensor given in a typical representation format (Tucker, hierarchical, or tensor train). We show that this result holds in a tensor Banach space with a norm stronger than the injective norm, and in an intersection of finitely many Banach tensor spaces satisfying some additional conditions. Examples using topological tensor products of standard Sobolev spaces are given.
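One of the representation formats named here, the tensor train (TT), is easy to make concrete in the finite-dimensional case; the core shapes and helper below are illustrative assumptions, unrelated to the paper's Banach-space analysis.

```python
import numpy as np

# Hypothetical sketch: reconstruct a 3-way tensor from tensor-train cores
# G1 (1 x I x r1), G2 (r1 x J x r2), G3 (r2 x K x 1) by contracting the
# shared rank indices, then stripping the trivial boundary dimensions.
def tt_to_full(G1, G2, G3):
    T = np.einsum('aib,bjc,ckd->aijkd', G1, G2, G3)
    return T[0, ..., 0]
```

Entry (i, j, k) of the result is the matrix product G1[:, i, :] G2[:, j, :] G3[:, k, :], which is the defining property of the TT format.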
Dimensionality reduction of modulation frequency features for speech discrimination
 In Proc. Interspeech 2008
, 2008
Abstract
Cited by 11 (7 self)
We describe a dimensionality reduction method for modulation spectral features which keeps the time-varying information of interest to the classification task. Due to the varying degrees of redundancy and discriminative power of the acoustic and modulation frequency subspaces, we first employ a generalization of the SVD to tensors (Higher-Order SVD) to reduce dimensions. Projection of modulation spectral features onto the principal axes with the highest energy in each subspace results in a compact feature set. We further estimate the relevance of these projections to speech discrimination based on mutual information to the target class. Reconstruction of modulation spectrograms from the "best" 22 features back to the initial dimensions shows that modulation spectral features close to syllable and phoneme rates, as well as pitch values of speakers, are preserved. Index Terms: modulation spectrum, multilinear algebra, feature selection, mutual information, speech discrimination
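The truncated Higher-Order SVD used in this reduction step can be sketched directly from its definition (the function name and ranks below are assumptions for illustration): take the leading left singular vectors of each mode-n unfolding, then contract them against the tensor to form a smaller core.

```python
import numpy as np

# Hypothetical sketch of a truncated HOSVD of a 3-way tensor: keep the
# top-r left singular vectors of each mode-n unfolding, then project the
# tensor onto those per-mode bases to obtain the core.
def truncated_hosvd(X, ranks):
    U = []
    for n, r in enumerate(ranks):
        Xn = np.moveaxis(X, n, 0).reshape(X.shape[n], -1)
        u, _, _ = np.linalg.svd(Xn, full_matrices=False)
        U.append(u[:, :r])
    core = np.einsum('ijk,ia,jb,kc->abc', X, U[0], U[1], U[2])
    return core, U
```

Entries of the core then serve as the compact feature set; feature selection by mutual information, as in the abstract, would operate on those entries.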
Extended HALS Algorithm for Nonnegative Tucker Decomposition and its Applications for Multiway Analysis and Classification
Abstract
Cited by 11 (5 self)
Analysis of high-dimensional data in modern applications, such as neuroscience, text mining, spectral analysis, or chemometrics, naturally requires tensor decomposition methods. Tucker decompositions allow us to extract hidden factors (component matrices) with a different dimension in each mode and to investigate interactions among the various modes. Alternating Least Squares (ALS) algorithms have been confirmed effective and efficient for most tensor decompositions, especially Tucker decomposition with orthogonality constraints. However, for nonnegative Tucker decomposition (NTD), standard ALS algorithms suffer from unstable convergence properties, demand high computational cost for large-scale problems due to matrix inversion, and often return suboptimal solutions. Moreover, they are quite sensitive to noise and can be relatively slow in the special case when the data are nearly collinear. In this paper, we propose a new algorithm for nonnegative Tucker decomposition based on constrained minimization of a set of local cost functions and Hierarchical Alternating Least Squares (HALS). The developed HALS NTD algorithm sequentially updates components, hence avoids matrix inversion, and is suitable for large-scale problems. The proposed algorithm is also regularized with additional constraint terms such as sparseness, orthogonality, smoothness, and especially discriminant constraints for classification problems. Extensive experiments confirm the validity and higher performance of the developed algorithm in comparison with other existing algorithms.
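The key HALS idea, updating one component at a time in closed form so that no matrix inversion is needed, is easiest to see in the matrix (NMF) case; the sketch below is that simpler setting, an assumption for illustration rather than the paper's tensor algorithm.

```python
import numpy as np

# Hypothetical sketch of a HALS-style update for nonnegative factorization
# V ≈ W H: each column of W gets a closed-form least-squares update,
# clipped at zero, so no matrix inversion is required.
def hals_update(V, W, H, eps=1e-12):
    VH, HH = V @ H.T, H @ H.T
    for r in range(W.shape[1]):
        # Exact minimizer over column r (others fixed), projected onto >= 0.
        W[:, r] += (VH[:, r] - W @ HH[:, r]) / max(HH[r, r], eps)
        W[:, r] = np.maximum(W[:, r], 0)
    return W
```

Because each column update is an exact projected minimizer, the fit never worsens, which is the stable-convergence property the abstract contrasts with standard ALS.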
Batch and adaptive PARAFAC-based blind separation of convolutive speech mixtures
 IEEE Audio, Speech, Language Process
, 2010