Results 1–10 of 145
Tensor Decompositions and Applications
SIAM Review, 2009
Cited by 723 (18 self)
This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or N-way array. Decompositions of higher-order tensors (i.e., N-way arrays with N ≥ 3) have applications in psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, etc. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal component analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2, as well as nonnegative variants of all of the above. The N-way Toolbox and Tensor Toolbox, both for MATLAB, and the Multilinear Engine are examples of software packages for working with tensors.
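A quick way to see the CP model described above in action is a bare-bones alternating least squares (ALS) loop in NumPy. This is a minimal sketch of the idea, not the N-way Toolbox or Tensor Toolbox implementation; the function name, sizes, and iteration count are illustrative.

```python
import numpy as np

def cp_als(X, rank, n_iter=200, seed=0):
    """Minimal CP (CANDECOMP/PARAFAC) decomposition of a 3-way array X
    via alternating least squares: returns factor matrices A, B, C with
    X approximately sum_r A[:, r] (outer) B[:, r] (outer) C[:, r]."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    for _ in range(n_iter):
        # each step is a linear least-squares solve in one factor,
        # with the other two factors held fixed
        A = np.einsum('ijk,jr,kr->ir', X, B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = np.einsum('ijk,ir,kr->jr', X, A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = np.einsum('ijk,ir,jr->kr', X, A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# sanity check on an exactly rank-2 random tensor
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((n, 2)) for n in (4, 5, 6))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(X, rank=2)
Xhat = np.einsum('ir,jr,kr->ijk', A, B, C)
print(np.linalg.norm(X - Xhat) / np.linalg.norm(X))
```

For real work, the toolboxes named in the abstract provide robust implementations with proper normalization and stopping rules; the point here is only that each ALS step is an ordinary matrix least-squares problem.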
Singular values and eigenvalues of tensors: a variational approach
In Proceedings of the IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing, 2005
Symmetric tensors and symmetric tensor rank
Scientific Computing and Computational Mathematics (SCCM), 2006
Cited by 99 (20 self)
A symmetric tensor is a higher-order generalization of a symmetric matrix. In this paper, we study various properties of symmetric tensors in relation to a decomposition into a symmetric sum of outer products of vectors. A rank-1 order-k tensor is the outer product of k nonzero vectors. Any symmetric tensor can be decomposed into a linear combination of rank-1 tensors, each of them being symmetric or not. The rank of a symmetric tensor is the minimal number of rank-1 tensors that is necessary to reconstruct it. The symmetric rank is obtained when the constituting rank-1 tensors are imposed to be themselves symmetric. It is shown that rank and symmetric rank are equal in a number of cases, and that they always exist in an algebraically closed field. We will discuss the notion of the generic symmetric rank, which, due to the work of Alexander and Hirschowitz, is now known for any values of dimension and order. We will also show that the set of symmetric tensors of symmetric rank at most r is not closed, unless r = 1.
Key words: tensors, multiway arrays, outer product decomposition, symmetric outer product decomposition, CANDECOMP, PARAFAC, tensor rank, symmetric rank, symmetric tensor rank, generic symmetric rank, maximal symmetric rank, quantics
AMS subject classifications: 15A03, 15A21, 15A72, 15A69, 15A18
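The symmetric outer product decomposition in this abstract is easy to illustrate numerically: a tensor built as a weighted sum of k-fold outer products v ⊗ v ⊗ v is invariant under every permutation of its indices. A small sketch, with arbitrary illustrative weights and vectors:

```python
import numpy as np
from itertools import permutations

# Build a symmetric order-3 tensor as a weighted sum of symmetric
# rank-1 terms  T = sum_r lam[r] * v_r (outer) v_r (outer) v_r.
rng = np.random.default_rng(0)
n, r = 4, 3
V = rng.standard_normal((n, r))   # columns are the vectors v_r
lam = np.array([1.0, -0.5, 2.0])  # arbitrary weights
T = np.einsum('r,ir,jr,kr->ijk', lam, V, V, V)

# symmetry: T is unchanged by every permutation of its three indices
for perm in permutations(range(3)):
    assert np.allclose(T, np.transpose(T, perm))
print(T.shape)
```

By construction this T has symmetric rank at most 3; determining the exact (symmetric) rank of a given tensor is the hard problem the paper analyzes.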
Tensor decompositions for learning latent variable models
, 2014
Cited by 83 (7 self)
This work considers a computationally and statistically efficient parameter estimation method for a wide class of latent variable models, including Gaussian mixture models, hidden Markov models, and latent Dirichlet allocation, which exploits a certain tensor structure in their low-order observable moments (typically, of second and third order). Specifically, parameter estimation is reduced to the problem of extracting a certain (orthogonal) decomposition of a symmetric tensor derived from the moments; this decomposition can be viewed as a natural generalization of the singular value decomposition for matrices. Although tensor decompositions are generally intractable to compute, the decomposition of these specially structured tensors can be efficiently obtained by a variety of approaches, including power iterations and maximization approaches (similar to the case of matrices). A detailed analysis of a robust tensor power method is provided, establishing an analogue of Wedin's perturbation theorem for the singular vectors of matrices. This implies a robust and computationally tractable estimation approach for several popular latent variable models.
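The core step of the tensor power method mentioned above can be sketched directly: repeatedly apply the symmetric tensor to a vector twice and normalize. This is a bare version without the deflation and robustness machinery of the paper, and the orthogonally decomposable example tensor is arbitrary.

```python
import numpy as np

def tensor_power_iteration(T, n_iter=100, seed=0):
    """Plain power iteration on a symmetric 3-tensor: map
    u -> T(I, u, u), i.e. u_i <- sum_jk T[i,j,k] u_j u_k, then normalize.
    For an orthogonally decomposable tensor this converges to one of
    its robust eigenvectors."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(T.shape[0])
    u /= np.linalg.norm(u)
    for _ in range(n_iter):
        u = np.einsum('ijk,j,k->i', T, u, u)
        u /= np.linalg.norm(u)
    lam = np.einsum('ijk,i,j,k->', T, u, u, u)  # generalized Rayleigh quotient
    return lam, u

# orthogonal decomposition T = 3*e1⊗e1⊗e1 + 1*e2⊗e2⊗e2
e1, e2 = np.eye(3)[0], np.eye(3)[1]
T = 3 * np.einsum('i,j,k->ijk', e1, e1, e1) + np.einsum('i,j,k->ijk', e2, e2, e2)
lam, u = tensor_power_iteration(T)
print(lam)
```

Which component the iteration recovers depends on the random start; in the latent-variable setting, the recovered eigenvectors correspond to model parameters, and deflation is used to extract the remaining components.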
Eigenvalues and invariants of tensors
, 2007
Cited by 54 (23 self)
A tensor is represented by a supermatrix under a coordinate system. In this paper, we define E-eigenvalues and E-eigenvectors for tensors and supermatrices. By the resultant theory, we define the E-characteristic polynomial of a tensor. An E-eigenvalue of a tensor is a root of the E-characteristic polynomial. In the regular case, a complex number is an E-eigenvalue if and only if it is a root of the E-characteristic polynomial. We convert the E-characteristic polynomial of a tensor to a monic polynomial and show that the coefficients of that monic polynomial are invariants of that tensor, i.e., they are invariant under coordinate system changes. We call them principal invariants of that tensor. The maximum number of principal invariants of mth-order n-dimensional tensors is a function of m and n. We denote it by d(m, n) and show that d(1, n) = 1, d(2, n) = n, d(m, 2) = m for m ≥ 3, and d(m, n) ≤ m^(n−1) + · · · + m for m, n ≥ 3. We also define the rank of a tensor. All real eigenvectors associated with nonzero E-eigenvalues are in a subspace with dimension equal to its rank.
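The easiest place to see "invariants under coordinate changes" concretely is the matrix case m = 2, where the count d(2, n) = n matches the n characteristic-polynomial coefficients of an n × n matrix, which do not change under an orthogonal change of basis. A small numerical check (the matrix and the basis change are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
A = 0.5 * (A + A.T)                               # symmetric matrix (order-2 tensor)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))  # orthogonal coordinate change
B = Q.T @ A @ Q                                   # the same tensor in new coordinates

# np.poly returns characteristic-polynomial coefficients; the leading 1
# makes it monic, and the remaining coefficients are the n invariants
print(np.poly(A))
print(np.poly(B))
```

For m ≥ 3 the analogous invariants come from the E-characteristic polynomial built via resultants, which has no such one-line computation.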
Most tensor problems are NP-hard
CoRR, 2009
Cited by 45 (6 self)
The idea that one might extend numerical linear algebra, the collection of matrix computational methods that form the workhorse of scientific and engineering computing, to numerical multilinear algebra, an analogous collection of tools involving hypermatrices/tensors, appears very promising and has attracted a lot of attention recently. We examine here the computational tractability of some core problems in numerical multilinear algebra. We show that tensor analogues of several standard problems that are readily computable in the matrix (i.e., 2-tensor) case are NP-hard. Our list includes: determining the feasibility of a system of bilinear equations; determining an eigenvalue, a singular value, or the spectral norm of a 3-tensor; determining a best rank-1 approximation to a 3-tensor; and determining the rank of a 3-tensor over R or C. Hence making tensor computations feasible is likely to be a challenge.
Finding the largest eigenvalue of a nonnegative tensor
SIAM J. Matrix Anal. Appl., 2009
Cited by 42 (25 self)
In this paper we propose an iterative method for calculating the largest eigenvalue of an irreducible nonnegative tensor. This method is an extension of a method of Collatz (1942) for calculating the spectral radius of an irreducible nonnegative matrix. Numerical results show that our proposed method is promising. We also apply the method to the study of higher-order Markov chains.
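The Collatz-type bounds behind such a method can be sketched for an order-3 tensor: given a positive vector x, the ratios (Ax^2)_i / x_i^2 bracket the largest eigenvalue, and updating x from Ax^2 tightens the bracket. This is a simplified sketch of the idea, not the paper's exact algorithm or its irreducibility analysis; the test tensor is arbitrary.

```python
import numpy as np

def largest_eig_nonneg(A, n_iter=500, tol=1e-10):
    """Collatz-style iteration for the largest H-eigenvalue of a
    positive order-3 tensor A (eigenvalue equation A x^2 = lambda x^[2],
    i.e. sum_jk A[i,j,k] x_j x_k = lambda * x_i^2)."""
    n = A.shape[0]
    x = np.ones(n) / np.sqrt(n)
    for _ in range(n_iter):
        y = np.einsum('ijk,j,k->i', A, x, x)
        ratios = y / x**2
        lo, hi = ratios.min(), ratios.max()   # lower/upper eigenvalue bounds
        if hi - lo < tol:
            break
        x = np.sqrt(y)                         # y^(1/(m-1)) with m = 3
        x /= np.linalg.norm(x)
    return 0.5 * (lo + hi), x

# strictly positive (hence irreducible) random tensor
rng = np.random.default_rng(0)
A = rng.random((4, 4, 4)) + 0.1
lam, x = largest_eig_nonneg(A)
print(lam)
```

The monotone squeezing of the lower and upper bounds mirrors the classical Collatz inclusion for nonnegative matrices.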
Shifted power method for computing tensor eigenpairs
, 2010
Cited by 41 (4 self)
Recent work on eigenvalues and eigenvectors for tensors of order m ≥ 3 has been motivated by applications in blind source separation, magnetic resonance imaging, molecular conformation, and more. In this paper, we consider methods for computing real symmetric-tensor eigenpairs of the form Ax^(m−1) = λx subject to ‖x‖ = 1, which is closely related to optimal rank-1 approximation of a symmetric tensor. Our contribution is a novel shifted symmetric higher-order power method (SSHOPM), which we show is guaranteed to converge to a tensor eigenpair. SSHOPM can be viewed as a generalization of the power iteration method for matrices or of the symmetric higher-order power method. Additionally, using fixed-point analysis, we can characterize exactly which eigenpairs can and cannot be found by the method. Numerical examples are presented, including examples from an extension of the method to finding complex eigenpairs.
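The shifted iteration itself is one line: add a multiple of the current vector to the higher-order power map before normalizing. Below is a bare sketch for an order-3 symmetric tensor; the shift value and toy tensor are arbitrary, and the paper's adaptive shift selection and convergence analysis are not reproduced.

```python
import numpy as np

def sshopm(T, alpha=4.0, n_iter=300, seed=0):
    """Shifted symmetric higher-order power method (sketch) for a
    symmetric 3-tensor: iterate x <- normalize(T x^2 + alpha * x).
    For a sufficiently large shift alpha the iterates converge to a
    tensor eigenpair (lambda, x) with T x^2 = lambda x, ||x|| = 1."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(T.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(n_iter):
        y = np.einsum('ijk,j,k->i', T, x, x) + alpha * x
        x = y / np.linalg.norm(y)
    lam = np.einsum('ijk,i,j,k->', T, x, x, x)
    return lam, x

# toy symmetric tensor: 2*e1⊗e1⊗e1 + e2⊗e2⊗e2
e = np.eye(2)
T = 2 * np.einsum('i,j,k->ijk', e[0], e[0], e[0]) \
    + np.einsum('i,j,k->ijk', e[1], e[1], e[1])
lam, x = sshopm(T)
print(lam)
```

With alpha = 0 this reduces to the plain symmetric higher-order power method; the shift is what buys the convergence guarantee.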
Higher-order positive semidefinite diffusion tensor imaging
SIAM J. Imaging Sci.
Cited by 37 (25 self)
Due to the well-known limitations of diffusion tensor imaging (DTI), high angular resolution diffusion imaging (HARDI) is used to characterize non-Gaussian diffusion processes. One approach to analyzing HARDI data is to model the apparent diffusion coefficient (ADC) with higher-order diffusion tensors (HODT). The diffusivity function is positive semidefinite. In the literature, some methods have been proposed to preserve positive semidefiniteness of second-order and fourth-order diffusion tensors. None of them works for arbitrary high-order diffusion tensors. In this paper, we propose a comprehensive model to approximate the ADC profile by a positive semidefinite diffusion tensor of either second or higher order. We call this model PSDT (positive semidefinite diffusion tensor). PSDT is a convex optimization problem with a convex quadratic objective function constrained by the nonnegativity requirement on the smallest Z-eigenvalue of the diffusivity function. The smallest Z-eigenvalue is a computable measure of the extent of positive definiteness of the diffusivity function. We also propose some other invariants for the ADC profile analysis. Experiment results show that higher-order tensors could improve the estimation of anisotropic diffusion and that the PSDT model can …
Biquadratic optimization over unit spheres and semidefinite programming relaxations
, 2008
Cited by 32 (15 self)
This paper studies the so-called biquadratic optimization over unit spheres: minimize Σ_{i,j,k,l} b_{ijkl} x_i y_j x_k y_l over x ∈ R^n and y ∈ R^m subject to ‖x‖ = ‖y‖ = 1 …
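One way to see the structure of this problem: with y fixed, the objective is the quadratic form x^T M(y) x with M(y)_{ik} = Σ_{jl} b_{ijkl} y_j y_l, so minimizing over the unit sphere in x is a smallest-eigenvector computation. The sketch below alternates the two eigenvector steps; it is a simple local heuristic for illustration, not the semidefinite programming relaxation the paper develops.

```python
import numpy as np

def biquadratic_alt_min(b, n_iter=50, seed=0):
    """Alternating-eigenvector heuristic for
    min sum_{ijkl} b[i,j,k,l] x_i y_j x_k y_l  s.t. ||x|| = ||y|| = 1.
    Each half-step globally minimizes one block, so the objective
    never increases; the result is a local minimizer in general."""
    n, m = b.shape[0], b.shape[1]
    rng = np.random.default_rng(seed)
    y = rng.standard_normal(m)
    y /= np.linalg.norm(y)
    for _ in range(n_iter):
        My = np.einsum('ijkl,j,l->ik', b, y, y)   # quadratic form in x
        w, V = np.linalg.eigh(0.5 * (My + My.T))
        x = V[:, 0]                                # smallest eigenvector
        Mx = np.einsum('ijkl,i,k->jl', b, x, x)   # quadratic form in y
        w, V = np.linalg.eigh(0.5 * (Mx + Mx.T))
        y = V[:, 0]
    val = np.einsum('ijkl,i,j,k,l->', b, x, y, x, y)
    return val, x, y

# arbitrary dense coefficient array b_{ijkl}
rng = np.random.default_rng(1)
b = rng.standard_normal((3, 4, 3, 4))
val, x, y = biquadratic_alt_min(b)
print(val)
```

Because the coupled problem is non-convex, such block descent can stall at local minima, which is the motivation for the global semidefinite relaxations studied in the paper.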