Results 1–10 of 27
Tensor Decompositions and Applications
 SIAM REVIEW
, 2009
Abstract

Cited by 705 (17 self)
This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or N-way array. Decompositions of higher-order tensors (i.e., N-way arrays with N ≥ 3) have applications in psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, etc. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal components analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2, as well as nonnegative variants of all of the above. The N-way Toolbox and Tensor Toolbox, both for MATLAB, and the Multilinear Engine are examples of software packages for working with tensors.
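The CP model described in this abstract writes a tensor as a sum of rank-one (outer-product) terms. A minimal NumPy sketch of that construction, using randomly generated factor matrices as purely illustrative data (the sizes and factors are assumptions, not from the survey):

```python
import numpy as np

# Build a rank-2 CP (CANDECOMP/PARAFAC) model of a 3-way tensor:
# X = sum_r a_r ∘ b_r ∘ c_r, where ∘ denotes the outer product.
rng = np.random.default_rng(0)
I, J, K, R = 4, 5, 6, 2          # illustrative dimensions and rank
A = rng.standard_normal((I, R))  # factor matrices: one column per rank-one term
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))

# Each column triple (A[:, r], B[:, r], C[:, r]) contributes one rank-one tensor;
# einsum sums the R outer products in a single contraction.
X = np.einsum('ir,jr,kr->ijk', A, B, C)
print(X.shape)  # (4, 5, 6)
```

The einsum contraction is equivalent to explicitly summing the R outer products, but avoids forming each rank-one term separately.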
Tensor Decompositions, Alternating Least Squares and Other Tales
 JOURNAL OF CHEMOMETRICS
, 2009
Abstract

Cited by 35 (9 self)
This work was originally motivated by a classification of tensors proposed by Richard Harshman. In particular, we focus on simple and multiple “bottlenecks”, and on “swamps”. Existing theoretical results are surveyed, some numerical algorithms are described in detail, and their numerical complexity is calculated. In particular, the interest in using the ELS enhancement in these algorithms is discussed. Computer simulations feed this discussion.
RANKS OF TENSORS AND A GENERALIZATION OF SECANT VARIETIES
Abstract

Cited by 17 (3 self)
We investigate differences between X-rank and X-border rank, focusing on the cases of tensors and partially symmetric tensors. As an aid to our study, and as an object of interest in its own right, we define notions of X-rank and border rank for a linear subspace. Results include determining and bounding the maximum X-rank of points in several cases of interest.
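The gap between rank and border rank that this abstract studies can be seen numerically with a standard textbook example (not taken from the paper itself): the tensor W = e1⊗e1⊗e2 + e1⊗e2⊗e1 + e2⊗e1⊗e1 has rank 3, yet it is a limit of rank-2 tensors, so its border rank is 2. A NumPy sketch:

```python
import numpy as np

e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])
outer3 = lambda x, y, z: np.einsum('i,j,k->ijk', x, y, z)

# W has rank 3, but border rank 2: it is the limit as eps -> 0 of
# ((e1 + eps*e2)^{⊗3} - e1^{⊗3}) / eps, a difference of two rank-one
# tensors (hence rank at most 2) for every eps > 0.
W = outer3(e1, e1, e2) + outer3(e1, e2, e1) + outer3(e2, e1, e1)

errors = []
for eps in (1e-1, 1e-2, 1e-3):
    v = e1 + eps * e2
    approx = (outer3(v, v, v) - outer3(e1, e1, e1)) / eps  # rank ≤ 2
    errors.append(np.linalg.norm(W - approx))
print(errors)  # the approximation error shrinks proportionally to eps
```

The error decays like eps·√3, so rank-2 tensors approach W arbitrarily closely even though no rank-2 tensor equals W, which is exactly the rank/border-rank discrepancy.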
A concise proof of Kruskal’s theorem on tensor decomposition
 LINEAR ALGEBRA AND ITS APPLICATIONS
, 2010
Tensors: a Brief Introduction
, 2014
Abstract

Cited by 10 (3 self)
Tensor decompositions are at the core of many Blind Source Separation (BSS) algorithms, either explicitly or implicitly. In particular, the Canonical Polyadic (CP) tensor …
An algorithm for generic and low-rank specific identifiability of complex tensors
, 2014
On invariant notions of Segre varieties in binary projective spaces
 CODES CRYPTOGR
, 2010
Abstract

Cited by 7 (5 self)
Invariant notions of a class of Segre varieties S(m)(2) of PG(2^m − 1, 2) that are direct products of m copies of PG(1, 2), m being any positive integer, are established and studied. We first demonstrate that there exists a hyperbolic quadric that contains S(m)(2) and is invariant under its projective stabiliser group GS(m)(2). By embedding PG(2^m − 1, 2) into PG(2^m − 1, 4), a basis of the latter space is constructed that is invariant under GS(m)(2) as well. Such a basis can be split into two subsets of odd and even parity whose spans are either real or complex-conjugate subspaces according as m is even or odd. In the latter case, these spans can, in addition, be viewed as indicator sets of a GS(m)(2)-invariant geometric spread of lines of PG(2^m − 1, 2). This spread is also related to a GS(m)(2)-invariant nonsingular Hermitian variety. The case m = 3 is examined in detail to illustrate the theory. Here, the lines of the invariant spread are found to fall into four distinct orbits under GS(3)(2), while the points of PG(7, 2) form five orbits.
On the typical rank of real binary forms
, 2009
Abstract

Cited by 6 (1 self)
We determine the rank of a general real binary form of degree d = 4 and d = 5. In the case d = 5, the possible values of the rank of such general forms are 3, 4, 5. The existence of three typical ranks was unexpected. We prove that a real binary form of degree d with d real roots has rank d.
TENSORS VERSUS MATRICES: USEFULNESS AND UNEXPECTED PROPERTIES
 IEEE WORKSHOP ON STATISTICAL SIGNAL PROCESSING, CARDIFF: UNITED KINGDOM (2009)
, 2009
Abstract

Cited by 5 (0 self)
Since the nineties, tensors have been increasingly used in Signal Processing and Data Analysis. There exist striking differences between tensors and matrices, some being advantages and others raising difficulties. These differences are pointed out in this paper while briefly surveying the state of the art. The conclusion is that tensors are omnipresent in real life, implicitly or explicitly, and must be used even if we still know quite little about their properties.
On the global convergence of the alternating least squares method for rank-one approximation to generic tensors
, 2013
Abstract

Cited by 4 (1 self)
Tensor decomposition has important applications in various disciplines, but it remains an extremely challenging task even to this date. A slightly more manageable endeavor has been to find a low-rank approximation in place of the decomposition. Even for this less stringent undertaking, it is an established fact that tensors beyond matrices can fail to have best low-rank approximations, with the notable exception that the best rank-one approximation always exists for tensors of any order. Toward the latter, the most popular approach is the notion of alternating least squares, whose specific numerical scheme appears as a variant of the power method. Though the limiting behavior of the objective values is well understood, a proof of global convergence for the iterates themselves has been elusive. This paper partially addresses the missing piece by showing that for almost all tensors, the iterates generated by the alternating least squares method for the rank-one approximation converge globally. The underlying technique employed is an eclectic mix of knowledge from algebraic geometry and dynamical systems.
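The ALS iteration for rank-one approximation discussed in this abstract can be sketched as a higher-order power method: each factor vector is updated by contracting the tensor against the other two factors, then renormalized. A minimal NumPy sketch on a random third-order tensor (the tensor, sizes, and iteration count are illustrative assumptions, not the paper's experiments):

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((3, 4, 5))  # illustrative generic 3-way tensor

# Random unit-norm starting factors.
a = rng.standard_normal(3); a /= np.linalg.norm(a)
b = rng.standard_normal(4); b /= np.linalg.norm(b)
c = rng.standard_normal(5); c /= np.linalg.norm(c)

# ALS as a higher-order power iteration: each update solves the least
# squares problem in one factor with the other two held fixed.
for _ in range(100):
    a = np.einsum('ijk,j,k->i', T, b, c); a /= np.linalg.norm(a)
    b = np.einsum('ijk,i,k->j', T, a, c); b /= np.linalg.norm(b)
    c = np.einsum('ijk,i,j->k', T, a, b); c /= np.linalg.norm(c)

# Optimal scale for the fixed unit-norm factors, and the rank-one approximant.
lam = np.einsum('ijk,i,j,k->', T, a, b, c)
approx = lam * np.einsum('i,j,k->ijk', a, b, c)
print(np.linalg.norm(T - approx) <= np.linalg.norm(T))  # True
```

Since the factors are unit vectors, the residual satisfies ||T − λ a∘b∘c||² = ||T||² − λ², so each accepted iterate can only improve the fit, which is the monotone objective behavior the abstract refers to.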