Tensor Decompositions and Applications
SIAM Review, 2009
Cited by 723 (18 self)
This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or N-way array. Decompositions of higher-order tensors (i.e., N-way arrays with N ≥ 3) have applications in psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, etc. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal components analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2, as well as nonnegative variants of all of the above. The N-way Toolbox and Tensor Toolbox, both for MATLAB, and the Multilinear Engine are examples of software packages for working with tensors.
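The CP model described in this abstract can be sketched in a few lines of numpy (an illustrative sketch, not tied to the Tensor Toolbox or N-way Toolbox APIs): a 3-way tensor is assembled as a sum of R rank-one outer products taken from three factor matrices.

```python
# Minimal sketch of the CP model: a 3-way tensor as a sum of R rank-one
# terms. All names here are illustrative, not from any toolbox.
import numpy as np

def cp_to_tensor(A, B, C):
    """Assemble sum_r a_r ∘ b_r ∘ c_r from factor matrices of shapes (I,R), (J,R), (K,R)."""
    return np.einsum('ir,jr,kr->ijk', A, B, C)

rng = np.random.default_rng(0)
I, J, K, R = 4, 5, 6, 2
A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))
T = cp_to_tensor(A, B, C)

# Cross-check against an explicit sum of rank-one outer products.
T_loop = sum(np.multiply.outer(np.multiply.outer(A[:, r], B[:, r]), C[:, r])
             for r in range(R))
assert np.allclose(T, T_loop)
```

The einsum contraction and the explicit loop agree, which is exactly the "sum of rank-one tensors" structure the survey describes.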
TENSOR RANK AND THE ILL-POSEDNESS OF THE BEST LOW-RANK APPROXIMATION PROBLEM
Cited by 194 (13 self)
There has been continued interest in seeking a theorem describing optimal low-rank approximations to tensors of order 3 or higher that parallels the Eckart–Young theorem for matrices. In this paper, we argue that the naive approach to this problem is doomed to failure because, unlike matrices, tensors of order 3 or higher can fail to have best rank-r approximations. The phenomenon is much more widespread than one might suspect: examples of this failure can be constructed over a wide range of dimensions, orders, and ranks, regardless of the choice of norm (or even Brègman divergence). Moreover, we show that in many instances these counterexamples have positive volume: they cannot be regarded as isolated phenomena. In one extreme case, we exhibit a tensor space in which no rank-3 tensor has an optimal rank-2 approximation. The notable exceptions to this misbehavior are rank-1 tensors and order-2 tensors (i.e., matrices). In a more positive spirit, we propose a natural way of overcoming the ill-posedness of the low-rank approximation problem, by using weak solutions when true solutions do not exist. For this to work, it is necessary to characterize the set of weak solutions, and we do this in the case of rank 2, order 3 (in arbitrary dimensions). In our work we emphasize the importance of closely studying concrete low-dimensional examples as a first step towards more general results. To this end, we present a detailed analysis of equivalence classes of 2 × 2 × 2 tensors, and we develop methods for extending results upwards to higher orders and dimensions. Finally, we link our work to existing studies of tensors from an algebraic-geometric point of view. The rank of a tensor can in theory be given a semialgebraic description; in other words, it can be determined by a system of polynomial inequalities. We study some of these polynomials in cases of interest to us; in particular, we make extensive use of the hyperdeterminant Δ on R^{2×2×2}.
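The non-existence of best low-rank approximations can be seen concretely with the standard 2 × 2 × 2 example: a rank-3 tensor that is approached arbitrarily closely by rank-2 tensors, so the infimum of the approximation error is 0 but is not attained. A numpy sketch (the construction is the classical one; variable names are ours):

```python
import numpy as np

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
outer3 = lambda a, b, c: np.einsum('i,j,k->ijk', a, b, c)

# W = e1⊗e1⊗e2 + e1⊗e2⊗e1 + e2⊗e1⊗e1 has rank 3 but border rank 2.
W = outer3(e1, e1, e2) + outer3(e1, e2, e1) + outer3(e2, e1, e1)

def rank2_approx(n):
    # A rank-2 tensor: n(e1 + e2/n)^{⊗3} - n e1^{⊗3}; it tends to W as n grows.
    u = e1 + e2 / n
    return n * outer3(u, u, u) - n * outer3(e1, e1, e1)

errs = [np.linalg.norm(rank2_approx(n) - W) for n in (1, 10, 100, 1000)]
# errs shrinks toward 0, yet no rank-2 tensor attains error 0: a best
# rank-2 approximation of W does not exist.
```

The shrinking errors illustrate why the paper resorts to weak solutions: the minimizing sequence of rank-2 tensors converges to a tensor of rank 3.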
Eigenvalues of a real supersymmetric tensor
J. Symbolic Comput.
Cited by 145 (62 self)
In this paper, we define the symmetric hyperdeterminant, eigenvalues, and E-eigenvalues of a real supersymmetric tensor. We show that eigenvalues are roots of a one-dimensional polynomial, and when the order of the tensor is even, E-eigenvalues are roots of another one-dimensional polynomial. These two one-dimensional polynomials are associated with the symmetric hyperdeterminant. We call them the characteristic polynomial and the E-characteristic polynomial of that supersymmetric tensor. Real eigenvalues (E-eigenvalues) with real eigenvectors (E-eigenvectors) are called H-eigenvalues (Z-eigenvalues). When the order of the supersymmetric tensor is even, H-eigenvalues (Z-eigenvalues) exist, and the supersymmetric tensor is positive definite if and only if all of its H-eigenvalues (Z-eigenvalues) are positive. An mth-order n-dimensional supersymmetric tensor where m is even has exactly n(m − 1)^{n−1} eigenvalues, and the number of its E-eigenvalues is strictly less than n(m − 1)^{n−1} when m ≥ 4. We show that the product of all the eigenvalues is equal to the value of the symmetric hyperdeterminant, while the sum of all the eigenvalues is equal to the sum of the diagonal elements of that supersymmetric tensor, multiplied by (m − 1)^{n−1}. The n(m − 1)^{n−1} eigenvalues are distributed in n disks in C. The centers and radii of these n disks are the diagonal elements, and the sums of the absolute values of the corresponding off-diagonal elements, of that supersymmetric tensor. On the other hand, E-eigenvalues are invariant under orthogonal transformations.
Symmetric tensors and symmetric tensor rank
Scientific Computing and Computational Mathematics (SCCM), 2006
Cited by 99 (20 self)
Abstract. A symmetric tensor is a higher-order generalization of a symmetric matrix. In this paper, we study various properties of symmetric tensors in relation to a decomposition into a symmetric sum of outer products of vectors. A rank-1 order-k tensor is the outer product of k nonzero vectors. Any symmetric tensor can be decomposed into a linear combination of rank-1 tensors, each of them being symmetric or not. The rank of a symmetric tensor is the minimal number of rank-1 tensors that is necessary to reconstruct it. The symmetric rank is obtained when the constituting rank-1 tensors are required to be themselves symmetric. It is shown that rank and symmetric rank are equal in a number of cases, and that they always exist in an algebraically closed field. We discuss the notion of the generic symmetric rank, which, due to the work of Alexander and Hirschowitz, is now known for any values of dimension and order. We also show that the set of symmetric tensors of symmetric rank at most r is not closed, unless r = 1. Key words. Tensors, multiway arrays, outer product decomposition, symmetric outer product decomposition, CANDECOMP, PARAFAC, tensor rank, symmetric rank, symmetric tensor rank, generic symmetric rank, maximal symmetric rank, quantics. AMS subject classifications. 15A03, 15A21, 15A72, 15A69, 15A18
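A concrete instance of a symmetric outer product decomposition: the symmetric tensor associated with the cubic x²y can be written with three symmetric rank-1 terms via the classical identity x²y = ((x+y)³ − (x−y)³ − 2y³)/6. A numpy check (illustrative; not taken from the paper):

```python
import numpy as np

o3 = lambda a, b, c: np.einsum('i,j,k->ijk', a, b, c)   # general outer product
cube = lambda v: o3(v, v, v)                            # symmetric rank-1 term v⊗v⊗v
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# S: the symmetric tensor of the cubic x^2 y, i.e. the symmetrization of e1⊗e1⊗e2.
S = (o3(e1, e1, e2) + o3(e1, e2, e1) + o3(e2, e1, e1)) / 3

# Three symmetric rank-1 terms, from x^2 y = ((x+y)^3 - (x-y)^3 - 2 y^3) / 6.
D = (cube(e1 + e2) - cube(e1 - e2) - 2 * cube(e2)) / 6
assert np.allclose(S, D)
```

Each term of D is itself symmetric, so this exhibits a symmetric decomposition of S with three terms, matching the constrained notion of symmetric rank discussed in the abstract.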
Computation of the canonical decomposition by means of a simultaneous generalized Schur decomposition
SIAM J. Matrix Anal. Appl., 2004
Cited by 55 (10 self)
Abstract. The canonical decomposition of higher-order tensors is a key tool in multilinear algebra. First we review the state of the art. Then we show that, under certain conditions, the problem can be rephrased as the simultaneous diagonalization, by equivalence or congruence, of a set of matrices. Necessary and sufficient conditions for the uniqueness of these simultaneous matrix decompositions are derived. Next, the problem can be translated into a simultaneous generalized Schur decomposition, with orthogonal unknowns [A. J. van der Veen and A. Paulraj, IEEE Trans. Signal Process., 44 (1996), pp. 1136–1155]. A first-order perturbation analysis of the simultaneous generalized Schur decomposition is carried out. We discuss some computational techniques (including a new Jacobi algorithm) and illustrate their behavior by means of a number of numerical experiments.
Eigenvalues and invariants of tensors
2007
Cited by 54 (23 self)
A tensor is represented by a supermatrix under a coordinate system. In this paper, we define E-eigenvalues and E-eigenvectors for tensors and supermatrices. By the resultant theory, we define the E-characteristic polynomial of a tensor. An E-eigenvalue of a tensor is a root of the E-characteristic polynomial. In the regular case, a complex number is an E-eigenvalue if and only if it is a root of the E-characteristic polynomial. We convert the E-characteristic polynomial of a tensor to a monic polynomial and show that the coefficients of that monic polynomial are invariants of that tensor, i.e., they are invariant under coordinate system changes. We call them principal invariants of that tensor. The maximum number of principal invariants of mth-order n-dimensional tensors is a function of m and n. We denote it by d(m, n) and show that d(1, n) = 1, d(2, n) = n, d(m, 2) = m for m ≥ 3, and d(m, n) ≤ m^{n−1} + · · · + m for m, n ≥ 3. We also define the rank of a tensor. All real eigenvectors associated with nonzero E-eigenvalues are in a subspace with dimension equal to its rank.
Most tensor problems are NP-hard
CoRR, 2009
Cited by 45 (6 self)
The idea that one might extend numerical linear algebra, the collection of matrix computational methods that form the workhorse of scientific and engineering computing, to numerical multilinear algebra, an analogous collection of tools involving hypermatrices/tensors, appears very promising and has attracted a lot of attention recently. We examine here the computational tractability of some core problems in numerical multilinear algebra. We show that tensor analogues of several standard problems that are readily computable in the matrix (i.e., 2-tensor) case are NP-hard. Our list here includes: determining the feasibility of a system of bilinear equations; determining an eigenvalue, a singular value, or the spectral norm of a 3-tensor; determining a best rank-1 approximation to a 3-tensor; and determining the rank of a 3-tensor over R or C. Hence making tensor computations feasible is likely to be a challenge.
SHIFTED POWER METHOD FOR COMPUTING TENSOR EIGENPAIRS
2010
Cited by 41 (4 self)
Recent work on eigenvalues and eigenvectors for tensors of order m ≥ 3 has been motivated by applications in blind source separation, magnetic resonance imaging, molecular conformation, and more. In this paper, we consider methods for computing real symmetric-tensor eigenpairs of the form Ax^{m−1} = λx subject to ‖x‖ = 1, which is closely related to optimal rank-1 approximation of a symmetric tensor. Our contribution is a novel shifted symmetric higher-order power method (SSHOPM), which we show is guaranteed to converge to a tensor eigenpair. SSHOPM can be viewed as a generalization of the power iteration method for matrices or of the symmetric higher-order power method. Additionally, using fixed point analysis, we can characterize exactly which eigenpairs can and cannot be found by the method. Numerical examples are presented, including examples from an extension of the method to finding complex eigenpairs.
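A simplified sketch of a shifted higher-order power iteration for the eigenpair equation Ax^{m−1} = λx in the case m = 3 (the shift choice and fixed iteration count below are crude simplifications of SSHOPM, not the paper's exact prescription):

```python
import numpy as np

def tensor_apply(A, x):
    """Ax^{m-1} for a symmetric order-3 tensor A of shape (n, n, n)."""
    return np.einsum('ijk,j,k->i', A, x, x)

def shifted_power(A, iters=5000, seed=0):
    n = A.shape[0]
    alpha = 2.0 * np.abs(A).sum()            # generous positive shift (m - 1 = 2)
    x = np.random.default_rng(seed).standard_normal(n)
    x /= np.linalg.norm(x)
    for _ in range(iters):
        y = tensor_apply(A, x) + alpha * x   # the shift keeps the ascent monotone
        x = y / np.linalg.norm(y)
    lam = x @ tensor_apply(A, x)             # λ = Ax^m at ‖x‖ = 1
    return lam, x

# Symmetrize a random cube to obtain a test tensor.
T = np.random.default_rng(1).standard_normal((3, 3, 3))
A = sum(T.transpose(p) for p in
        [(0, 1, 2), (0, 2, 1), (1, 0, 2), (1, 2, 0), (2, 0, 1), (2, 1, 0)]) / 6
lam, x = shifted_power(A)
residual = np.linalg.norm(tensor_apply(A, x) - lam * x)  # ≈ 0 at an eigenpair
```

The generous shift makes each step a small, monotone move on the sphere, at the cost of slower convergence; the paper's analysis is what determines how small the shift may safely be.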
Biquadratic Optimization over Unit Spheres and Semidefinite Programming Relaxations
2008
Cited by 32 (15 self)
Abstract. This paper studies the so-called biquadratic optimization over unit spheres: min_{x∈R^n, y∈R^m} Σ_{i,j,k,l} b_{ijkl} x_i y_j x_k y_l subject to ‖x‖ = ‖y‖ = 1.
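The biquadratic objective can be evaluated directly with einsum; fixing one of the two vectors also reduces it to an ordinary quadratic form, which exposes the bilinear structure (an illustrative numpy sketch, not the paper's semidefinite relaxation):

```python
# Evaluate sum_{ijkl} b_{ijkl} x_i y_j x_k y_l, and check that fixing y
# turns it into a quadratic form x^T M(y) x. Names are illustrative.
import numpy as np

def biquadratic(B, x, y):
    return np.einsum('ijkl,i,j,k,l->', B, x, y, x, y)

rng = np.random.default_rng(0)
n, m = 3, 4
B = rng.standard_normal((n, m, n, m))
x = rng.standard_normal(n); x /= np.linalg.norm(x)   # point on the unit sphere in R^n
y = rng.standard_normal(m); y /= np.linalg.norm(y)   # point on the unit sphere in R^m

M = np.einsum('ijkl,j,l->ik', B, y, y)   # M(y)_{ik} = sum_{jl} b_{ijkl} y_j y_l
assert np.isclose(biquadratic(B, x, y), x @ M @ x)
```

For fixed y the problem over x is thus a matrix eigenvalue problem, but jointly over both spheres the problem is hard, which motivates the relaxations the paper studies.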
Robust iterative fitting of multilinear models
IEEE Transactions on Signal Processing, 2005
Cited by 29 (3 self)
Abstract—Parallel factor (PARAFAC) analysis is an extension of low-rank matrix decomposition to higher-way arrays, also referred to as tensors. It decomposes a given array into a sum of multilinear terms, analogous to the familiar bilinear vector outer products that appear in matrix decomposition. PARAFAC analysis generalizes and unifies common array processing models, like joint diagonalization and ESPRIT; it has found numerous applications from blind multiuser detection and multidimensional harmonic retrieval to clustering and nuclear magnetic resonance. The prevailing fitting algorithm in all these applications is based on (alternating) least squares, which is optimal for Gaussian noise. In many cases, however, measurement errors are far from being Gaussian. In this paper, we develop two iterative algorithms for the least absolute error fitting of general multilinear models. The first is based on efficient interior point methods for linear programming, employed in an alternating fashion. The second is based on a weighted median filtering iteration, which is particularly appealing from a simplicity viewpoint. Both are guaranteed to converge in terms of absolute error. Performance is illustrated by means of simulations and compared to the pertinent Cramér–Rao bounds (CRBs). Index Terms—Array signal processing, non-Gaussian noise, parallel factor analysis, robust model fitting.
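The weighted-median idea behind the second algorithm can be illustrated on the simplest case: a rank-1 bilinear model X ≈ ab^T fitted in the least-absolute-error sense (this reduction and all names are ours; the paper treats general multilinear models). For fixed b, each a_i minimizing Σ_j |X_ij − a_i b_j| is a weighted median of the ratios X_ij/b_j with weights |b_j|, and symmetrically for b.

```python
import numpy as np

def weighted_median(values, weights):
    """Smallest value whose cumulative weight reaches half the total weight."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    return v[np.searchsorted(np.cumsum(w), 0.5 * w.sum())]

def l1_rank1_fit(X, iters=20, seed=0):
    """Alternating weighted-median (LAD) fit of X ≈ outer(a, b)."""
    rng = np.random.default_rng(seed)
    mrows, ncols = X.shape
    b = rng.standard_normal(ncols)
    a = np.zeros(mrows)
    for _ in range(iters):
        nz = np.abs(b) > 1e-12               # skip (near-)zero factor entries
        for i in range(mrows):
            a[i] = weighted_median(X[i, nz] / b[nz], np.abs(b[nz]))
        nz = np.abs(a) > 1e-12
        for j in range(ncols):
            b[j] = weighted_median(X[nz, j] / a[nz], np.abs(a[nz]))
    return a, b

rng = np.random.default_rng(2)
a0 = rng.uniform(0.5, 1.5, 6)
b0 = rng.uniform(0.5, 1.5, 5)
X = np.outer(a0, b0)                         # noiseless rank-1 data
a, b = l1_rank1_fit(X)
err = np.abs(np.outer(a, b) - X).sum()       # total absolute error of the fit
```

Each half-step solves its coordinate problem exactly, so the total absolute error is monotonically non-increasing, which is the simplicity argument the abstract makes for the weighted-median iteration.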