Results 1–10 of 81
Tensor Decompositions and Applications
SIAM Review, 2009
"... This survey provides an overview of higherorder tensor decompositions, their applications, and available software. A tensor is a multidimensional or N way array. Decompositions of higherorder tensors (i.e., N way arrays with N â¥ 3) have applications in psychometrics, chemometrics, signal proce ..."
Cited by 705 (17 self)
This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or N-way array. Decompositions of higher-order tensors (i.e., N-way arrays with N ≥ 3) have applications in psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, etc. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal component analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2, as well as nonnegative variants of all of the above. The N-way Toolbox and Tensor Toolbox, both for MATLAB, and the Multilinear Engine are examples of software packages for working with tensors.
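As a concrete illustration of the CP model this survey describes, here is a minimal numpy sketch (illustrative only, not from the survey, which points to the MATLAB N-way and Tensor Toolboxes instead) that rebuilds a third-order tensor from its CP factor matrices as a sum of rank-one terms:

```python
import numpy as np

def cp_reconstruct(A, B, C):
    """Rebuild a third-order tensor from CP factors A (I, R), B (J, R),
    C (K, R): the sum over r of the rank-one terms a_r o b_r o c_r."""
    return np.einsum('ir,jr,kr->ijk', A, B, C)

# Example: a random rank-3 tensor of shape (4, 5, 6).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((5, 3))
C = rng.standard_normal((6, 3))
X = cp_reconstruct(A, B, C)
print(X.shape)  # (4, 5, 6)
```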
Symmetric tensors and symmetric tensor rank
Scientific Computing and Computational Mathematics (SCCM), 2006
"... Abstract. A symmetric tensor is a higher order generalization of a symmetric matrix. In this paper, we study various properties of symmetric tensors in relation to a decomposition into a symmetric sum of outer product of vectors. A rank1 orderk tensor is the outer product of k nonzero vectors. An ..."
Cited by 101 (22 self)
Abstract. A symmetric tensor is a higher-order generalization of a symmetric matrix. In this paper, we study various properties of symmetric tensors in relation to a decomposition into a symmetric sum of outer products of vectors. A rank-1 order-k tensor is the outer product of k nonzero vectors. Any symmetric tensor can be decomposed into a linear combination of rank-1 tensors, each of them being symmetric or not. The rank of a symmetric tensor is the minimal number of rank-1 tensors necessary to reconstruct it. The symmetric rank is obtained when the constituting rank-1 tensors are imposed to be themselves symmetric. It is shown that rank and symmetric rank are equal in a number of cases, and that they always exist in an algebraically closed field. We will discuss the notion of the generic symmetric rank, which, due to the work of Alexander and Hirschowitz, is now known for any values of dimension and order. We will also show that the set of symmetric tensors of symmetric rank at most r is not closed, unless r = 1.
Key words. Tensors, multiway arrays, outer product decomposition, symmetric outer product decomposition, candecomp, parafac, tensor rank, symmetric rank, symmetric tensor rank, generic symmetric rank, maximal symmetric rank, quantics
AMS subject classifications. 15A03, 15A21, 15A72, 15A69, 15A18
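To make the symmetric outer product decomposition concrete, here is a small numpy sketch (illustrative only, with hypothetical function names; not from the paper) that builds a symmetric tensor as a linear combination of k-fold outer powers v ⊗ ... ⊗ v and checks its symmetry:

```python
import numpy as np

def symmetric_rank1_sum(lams, V, k=3):
    """Build sum_i lams[i] * v_i^{(x) k}, where v_i is column i of V (n, r).
    Each term is the k-fold outer product of a vector with itself, hence
    symmetric; so is any linear combination of such terms."""
    T = np.zeros(V.shape[:1] * k)
    for lam, v in zip(lams, V.T):
        term = v
        for _ in range(k - 1):
            term = np.multiply.outer(term, v)
        T += lam * term
    return T

rng = np.random.default_rng(0)
T = symmetric_rank1_sum([1.0, -2.0], rng.standard_normal((4, 2)), k=3)
# Symmetric: invariant under any permutation of its indices.
print(np.allclose(T, T.transpose(1, 0, 2)), np.allclose(T, T.transpose(2, 1, 0)))
```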
Enhanced line search: A novel method to accelerate Parafac
In Eusipco’05, 2005
"... Abstract. Several modifications have been proposed to speed up the alternating least squares (ALS) method of fitting the PARAFAC model. The most widely used is line search, which extrapolates from linear trends in the parameter changes over prior iterations to estimate the parameter values that woul ..."
Cited by 58 (11 self)
Abstract. Several modifications have been proposed to speed up the alternating least squares (ALS) method of fitting the PARAFAC model. The most widely used is line search, which extrapolates from linear trends in the parameter changes over prior iterations to estimate the parameter values that would be obtained after many additional ALS iterations. We propose some extensions of this approach that incorporate a more sophisticated extrapolation, using information on nonlinear trends in the parameters and changing all the parameter sets simultaneously. The new method, called “enhanced line search” (ELS), can be implemented at different levels of complexity, depending on how many different extrapolation parameters (for different modes) are jointly optimized during each iteration. We report some tests of the simplest, single-parameter version, using simulated data. The performance of this lowest level of ELS depends on the nature of the convergence difficulty. It significantly outperforms standard LS when there is a “convergence bottleneck,” a situation where some modes have almost collinear factors but others do not, but is somewhat less effective in classic “swamp” situations where factors are highly collinear in all modes. This is illustrated by examples. To demonstrate how ELS can be adapted to different N-way decompositions, we also apply it to a four-way array to perform a blind identification of an underdetermined mixture (UDM). Since analysis of this dataset happens to involve a serious convergence “bottleneck” (collinear factors in two of the four modes), it provides another example of a situation in which ELS dramatically outperforms standard line search.
Key words. PARAFAC, alternating least squares (ALS), line search, enhanced line search (ELS), acceleration, swamps, bottlenecks, collinear factors, degeneracy
DOI. 10.1137/06065577
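The standard line-search step that ELS refines can be written in a few lines. The sketch below is illustrative Python, not the authors' code; in standard LS the step length is fixed or set by a simple rule, whereas ELS chooses it (or one step per mode) by exactly minimizing the fit error, which is polynomial in the step lengths:

```python
def ls_extrapolate(factors_prev, factors_curr, step):
    """Standard line search: push each factor matrix (numpy array) further
    along the direction of its most recent ALS change, by a common step."""
    return [Fp + step * (Fc - Fp) for Fp, Fc in zip(factors_prev, factors_curr)]

# Usage inside an ALS loop: accept the extrapolated factors only if they
# improve the fit, otherwise keep the plain ALS update.
```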
Decompositions of a higher-order tensor in block terms - Part II: Definitions and Uniqueness
SIAM J. Matrix Anal. Appl., 2008
"... In this paper we introduce a new class of tensor decompositions. Intuitively, we decompose a given tensor block into blocks of smaller size, where the size is characterized by a set of moden ranks. We study different types of such decompositions. For each type we derive conditions under which ess ..."
Cited by 44 (8 self)
In this paper we introduce a new class of tensor decompositions. Intuitively, we decompose a given tensor into blocks of smaller size, where the size is characterized by a set of mode-n ranks. We study different types of such decompositions. For each type we derive conditions under which essential uniqueness is guaranteed. The parallel factor decomposition and Tucker’s decomposition can be considered as special cases in the new framework. The paper sheds new light on fundamental aspects of tensor algebra.
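A minimal sketch of what a decomposition in block terms means computationally (hypothetical numpy code with illustrative names; the paper itself concerns definitions and uniqueness, not algorithms): each term is a small core tensor multiplied along every mode by a factor matrix, so the mode-n ranks of each block are bounded by the core sizes.

```python
import numpy as np

def btd_reconstruct(cores, As, Bs, Cs):
    """Sum of R block terms: block r is a core G_r of shape (L_r, M_r, N_r)
    multiplied in modes 1, 2, 3 by A_r (I, L_r), B_r (J, M_r), C_r (K, N_r)."""
    X = 0
    for G, A, B, C in zip(cores, As, Bs, Cs):
        X = X + np.einsum('lmn,il,jm,kn->ijk', G, A, B, C)
    return X
```

With 1 × 1 × 1 cores this reduces to the parallel factor decomposition; with a single block it is exactly a Tucker decomposition, matching the special cases noted above.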
Fourth-order cumulant-based blind identification of underdetermined mixtures
IEEE Transactions on Signal Processing, 2007
"... In this paper we study two fourthorder cumulantbased techniques for the estimation of the mixing matrix in underdetermined independent component analysis. The first method is based on a simultaneous matrix diagonalization. The second is based on a simultaneous offdiagonalization. The number of so ..."
Cited by 38 (0 self)
In this paper we study two fourth-order cumulant-based techniques for the estimation of the mixing matrix in underdetermined independent component analysis. The first method is based on a simultaneous matrix diagonalization. The second is based on a simultaneous off-diagonalization. The number of sources that can be allowed is roughly quadratic in the number of observations. For both methods, explicit expressions for the maximum number of sources are given. Simulations illustrate the performance of the techniques.
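For concreteness, the fourth-order cumulants these methods start from can be estimated as below (a numpy sketch under the assumption of zero-mean real-valued data; not the authors' implementation):

```python
import numpy as np

def quadricovariance(X):
    """Empirical fourth-order cumulant tensor of zero-mean data X (n, T):
    cum_{ijkl} = E[x_i x_j x_k x_l] - E[x_i x_j] E[x_k x_l]
                 - E[x_i x_k] E[x_j x_l] - E[x_i x_l] E[x_j x_k]."""
    n, T = X.shape
    m4 = np.einsum('it,jt,kt,lt->ijkl', X, X, X, X) / T
    C = X @ X.T / T
    return (m4
            - np.einsum('ij,kl->ijkl', C, C)
            - np.einsum('ik,jl->ijkl', C, C)
            - np.einsum('il,jk->ijkl', C, C))
```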
Tensor Decompositions, Alternating Least Squares and Other Tales
Journal of Chemometrics, 2009
"... This work was originally motivated by a classification of tensors proposed by Richard Harshman. In particular, we focus on simple and multiple “bottlenecks”, and on “swamps”. Existing theoretical results are surveyed, some numerical algorithms are described in details, and their numerical complexity ..."
Cited by 33 (9 self)
This work was originally motivated by a classification of tensors proposed by Richard Harshman. In particular, we focus on simple and multiple “bottlenecks” and on “swamps”. Existing theoretical results are surveyed, some numerical algorithms are described in detail, and their numerical complexity is calculated. In particular, the interest in using the ELS enhancement in these algorithms is discussed. Computer simulations feed this discussion.
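Since ALS is central to this discussion, here is one closed-form ALS update written out in numpy (a sketch, not the paper's implementation; the unfolding convention is numpy's C-order):

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Kronecker product: B (J, R), C (K, R) -> (J*K, R)."""
    return (B[:, None, :] * C[None, :, :]).reshape(-1, B.shape[1])

def als_update_A(X, B, C):
    """Closed-form ALS step for the first factor of X ~ [[A, B, C]]:
    minimize ||X_(1) - A (B kr C)^T||_F over A."""
    X1 = X.reshape(X.shape[0], -1)       # mode-1 unfolding in C-order
    gram = (B.T @ B) * (C.T @ C)         # (B kr C)^T (B kr C), computed cheaply
    return X1 @ khatri_rao(B, C) @ np.linalg.pinv(gram)
```

Cycling such updates over the three modes gives plain ALS; the swamps and bottlenecks discussed above manifest as this iteration stalling when factors are nearly collinear.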
Generic and typical ranks of multiway arrays
Linear Algebra Appl.
"... HAL is a multidisciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. L’archive ouverte p ..."
Cited by 27 (5 self)
Blind identification of underdetermined mixtures by simultaneous matrix diagonalization
IEEE Transactions on Signal Processing, 2008
"... In this paper, we study simultaneous matrix diagonalizationbased techniques for the estimation of the mixing matrix in underdetermined independent component analysis (ICA). This includes a generalization to underdetermined mixtures of the wellknown SOBI algorithm. The problem is reformulated in t ..."
Cited by 19 (4 self)
In this paper, we study simultaneous matrix diagonalization-based techniques for the estimation of the mixing matrix in underdetermined independent component analysis (ICA). This includes a generalization to underdetermined mixtures of the well-known SOBI algorithm. The problem is reformulated in terms of the parallel factor decomposition (PARAFAC) of a higher-order tensor. We present conditions under which the mixing matrix is unique and discuss several algorithms for its computation.
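The reformulation can be made concrete in a few lines: second-order statistics of the observations at several time lags are stacked into a third-order tensor whose PARAFAC factors contain the mixing matrix. A sketch (illustrative numpy, assuming zero-mean real-valued observations; the paper's actual construction and algorithms differ in the details):

```python
import numpy as np

def lagged_covariances(X, lags):
    """Stack covariance matrices of X (n, T) at the given time lags into
    an (n, n, len(lags)) tensor. Under a SOBI-style model each slice is
    M diag(d_l) M^T, so a PARAFAC fit of the stack recovers M."""
    n, T = X.shape
    slices = []
    for tau in lags:
        C = X[:, tau:] @ X[:, :T - tau].T / (T - tau)
        slices.append((C + C.T) / 2)     # symmetrize the finite-sample estimate
    return np.stack(slices, axis=-1)
```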
SUBTRACTING A BEST RANK-1 APPROXIMATION MAY INCREASE TENSOR RANK
"... Is has been shown that a best rankR approximation of an orderk tensor may not exist when R ≥ 2 and k ≥ 3. This poses a serious problem to data analysts using Candecomp/Parafac and related models. It has been observed numerically that, generally, this issue cannot be solved by consecutively computi ..."
Cited by 17 (0 self)
It has been shown that a best rank-R approximation of an order-k tensor may not exist when R ≥ 2 and k ≥ 3. This poses a serious problem to data analysts using Candecomp/Parafac and related models. It has been observed numerically that, generally, this issue cannot be solved by consecutively computing and subtracting best rank-1 approximations. The reason for this is that subtracting a best rank-1 approximation generally does not decrease tensor rank. In this paper, we provide a mathematical treatment of this property for real-valued 2 × 2 × 2 tensors, with symmetric tensors as a special case. Regardless of the symmetry, we show that for generic 2 × 2 × 2 tensors (which have rank 2 or 3), subtracting a best rank-1 approximation will result in a tensor that has rank 3 and lies on the boundary between the rank-2 and rank-3 sets. Hence, for a typical tensor of rank 2, subtracting a best rank-1 approximation has increased the tensor rank.
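The deflation step whose effect the paper analyzes looks like this in code (a numpy sketch; the higher-order power method used here only converges to a locally best rank-1 approximation, which is enough to illustrate the point):

```python
import numpy as np

def rank1_approx(X, iters=200, seed=1):
    """Rank-1 approximation of a third-order tensor via the higher-order
    power method: alternately update each vector and renormalize."""
    rng = np.random.default_rng(seed)
    a, b, c = (rng.standard_normal(d) for d in X.shape)
    for _ in range(iters):
        a = np.einsum('ijk,j,k->i', X, b, c); a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', X, a, c); b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', X, a, b); c /= np.linalg.norm(c)
    lam = np.einsum('ijk,i,j,k->', X, a, b, c)
    return lam * np.einsum('i,j,k->ijk', a, b, c)

rng = np.random.default_rng(0)
X = rng.standard_normal((2, 2, 2))
residual = X - rank1_approx(X)  # smaller in norm than X, but for a generic
                                # rank-2 tensor this residual has rank 3
```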
An Optimization Approach for Fitting Canonical Tensor Decompositions
2009
"... Tensor decompositions are higherorder analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum ..."
Cited by 15 (6 self)
Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.
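The derivatives in question have a compact matricized form. The sketch below (numpy, illustrative; the paper pairs such gradients with generic first-order solvers rather than this bare-bones code) computes all three gradients of f = 0.5 * ||X - [[A, B, C]]||_F^2 using the same matricized-tensor-times-Khatri-Rao products that dominate one ALS sweep:

```python
import numpy as np

def cp_gradients(X, A, B, C):
    """Gradients of f = 0.5 * ||X - [[A, B, C]]||_F^2 with respect to each
    factor matrix, e.g. GA = A (B^T B * C^T C) - X_(1) (B kr C)."""
    def kr(P, Q):  # column-wise Kronecker (Khatri-Rao) product
        return (P[:, None, :] * Q[None, :, :]).reshape(-1, P.shape[1])
    I, J, K = X.shape
    GA = A @ ((B.T @ B) * (C.T @ C)) - X.reshape(I, -1) @ kr(B, C)
    GB = B @ ((A.T @ A) * (C.T @ C)) - X.transpose(1, 0, 2).reshape(J, -1) @ kr(A, C)
    GC = C @ ((A.T @ A) * (B.T @ B)) - X.transpose(2, 0, 1).reshape(K, -1) @ kr(A, B)
    return GA, GB, GC
```

These can be fed directly to any gradient-based optimizer (nonlinear CG, L-BFGS) operating on the stacked factor matrices.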