Tensor Decompositions and Applications
SIAM Review, 2009
Abstract

Cited by 723 (18 self)
This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or N-way array. Decompositions of higher-order tensors (i.e., N-way arrays with N ≥ 3) have applications in psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, etc. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal components analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2, as well as nonnegative variants of all of the above. The N-way Toolbox and Tensor Toolbox, both for MATLAB, and the Multilinear Engine are examples of software packages for working with tensors.
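The CP model named in this abstract has a very compact numerical form. The following is a minimal illustrative sketch (names and shapes are our own, not from the survey or its toolboxes) of how a 3-way tensor is rebuilt from CP factor matrices as a sum of rank-one tensors:

```python
import numpy as np

# Illustrative sketch: a rank-R CP model expresses a 3-way tensor as a sum of
# R rank-one tensors, X[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r].
def cp_reconstruct(A, B, C):
    """Rebuild an I x J x K tensor from CP factors A (IxR), B (JxR), C (KxR)."""
    R = A.shape[1]
    return sum(np.einsum('i,j,k->ijk', A[:, r], B[:, r], C[:, r]) for r in range(R))

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 2))
B = rng.standard_normal((5, 2))
C = rng.standard_normal((3, 2))
X = cp_reconstruct(A, B, C)   # a 4 x 5 x 3 tensor of exact CP rank at most 2
```

Each term of the sum is one outer product of three vectors, which is what "rank-one tensor" means here.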
Graph Kernels
, 2007
Abstract

Cited by 101 (9 self)
We present a unified framework to study graph kernels, special cases of which include the random walk (Gärtner et al., 2003; Borgwardt et al., 2005) and marginalized (Kashima et al., 2003, 2004; Mahé et al., 2004) graph kernels. Through reduction to a Sylvester equation we improve the time complexity of kernel computation between unlabeled graphs with n vertices from O(n^6) to O(n^3). We find a spectral decomposition approach even more efficient when computing entire kernel matrices. For labeled graphs we develop conjugate gradient and fixed-point methods that take O(dn^3) time per iteration, where d is the size of the label set. By extending the necessary linear algebra to Reproducing Kernel Hilbert Spaces (RKHS) we obtain the same result for d-dimensional edge kernels, and O(n^4) in the infinite-dimensional case; on sparse graphs these algorithms only take O(n^2) time per iteration in all cases. Experiments on graphs from bioinformatics and other application domains show that these techniques can speed up computation of the kernel by an order of magnitude or more. We also show that certain rational kernels (Cortes et al., 2002, 2003, 2004), when specialized to graphs, reduce to our random walk graph kernel. Finally, we relate our framework to R-convolution kernels (Haussler, 1999) and provide a kernel that is close to the optimal assignment kernel of Fröhlich et al. (2006) yet provably positive semidefinite.
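To make the O(n^6) baseline concrete, here is a minimal sketch (our own illustration, not the paper's code; the function name and the choice of λ and of uniform start/stop distributions are assumptions) of the naive random-walk kernel computation that the Sylvester-equation reduction accelerates:

```python
import numpy as np

# Illustrative sketch of the random-walk graph kernel on unlabeled graphs:
# k(G1, G2) = p^T (I - lam * W)^{-1} p, where W is the adjacency matrix of the
# direct (tensor) product graph. Solving this dense n1*n2-dimensional linear
# system naively costs O(n^6) for graphs with n vertices; the paper's
# Sylvester-equation reduction brings the cost down to O(n^3).
def random_walk_kernel(A1, A2, lam=0.1):
    n = A1.shape[0] * A2.shape[0]
    W = np.kron(A2, A1)              # direct product graph adjacency
    p = np.full(n, 1.0 / n)          # uniform starting/stopping distribution
    return p @ np.linalg.solve(np.eye(n) - lam * W, p)

# Example: a triangle versus a 3-vertex path.
tri = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
k = random_walk_kernel(tri, path)
```

The geometric series behind (I − λW)^(−1) converges here because λ is chosen well below the reciprocal of W's spectral radius.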
Enhanced line search: A novel method to accelerate PARAFAC
, 2006
Abstract

Cited by 58 (11 self)
Several modifications have been proposed to speed up the alternating least squares (ALS) method of fitting the PARAFAC model. The most widely used is line search, which extrapolates from linear trends in the parameter changes over prior iterations to estimate the parameter values that would be obtained after many additional ALS iterations. We propose some extensions of this approach that incorporate a more sophisticated extrapolation, using information on nonlinear trends in the parameters and changing all the parameter sets simultaneously. The new method, called “enhanced line search” (ELS), can be implemented at different levels of complexity, depending on how many different extrapolation parameters (for different modes) are jointly optimized during each iteration. We report some tests of the simplest version, using simulated data. The performance of this lowest level of ELS depends on the nature of the convergence difficulty. It significantly outperforms standard line search when there is a “convergence bottleneck,” a situation where some modes have almost collinear factors but others do not, but is somewhat less effective in classic “swamp” situations where factors are highly collinear in all modes. This is illustrated by examples. To demonstrate how ELS can be adapted to different N-way decompositions, we also apply it to a four-way array to perform a blind identification of an underdetermined mixture (UDM). Since analysis of this dataset happens to involve a serious convergence “bottleneck” (collinear factors in two of the four modes), it provides another example of a situation in which ELS dramatically outperforms standard line search.
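The standard line search that ELS extends can be sketched in a few lines. This is our own illustration under stated assumptions: the step rule s = t^(1/3) is one common heuristic from the line-search literature, not necessarily the scheme used in this paper, and the function names are hypothetical.

```python
import numpy as np

# Sketch of basic line-search acceleration for ALS: jump all factor matrices
# along the direction of their last change, and keep the jump only if it
# lowers the model's loss. ELS instead optimizes the step(s) per mode.
def line_search_step(factors_prev, factors_curr, loss, t):
    s = t ** (1.0 / 3.0)                          # heuristic extrapolation step
    trial = [Fp + s * (Fc - Fp) for Fp, Fc in zip(factors_prev, factors_curr)]
    return trial if loss(trial) < loss(factors_curr) else factors_curr

# Toy usage: a single 1x1 "factor" creeping toward the optimum at 1.0.
loss = lambda fs: float((fs[0][0, 0] - 1.0) ** 2)
prev = [np.array([[0.0]])]
curr = [np.array([[0.5]])]
accel = line_search_step(prev, curr, loss, t=8)   # s = 2, so the jump lands near 1.0
```

Extrapolating all parameter sets simultaneously, as above, is exactly the aspect that ELS refines by fitting separate, jointly optimized steps per mode.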
Fourth-order cumulant-based blind identification of underdetermined mixtures
IEEE Transactions on Signal Processing, 2007
Abstract

Cited by 42 (0 self)
In this paper we study two fourth-order cumulant-based techniques for the estimation of the mixing matrix in underdetermined independent component analysis. The first method is based on a simultaneous matrix diagonalization. The second is based on a simultaneous off-diagonalization. The number of sources that can be allowed is roughly quadratic in the number of observations. For both methods, explicit expressions for the maximum number of sources are given. Simulations illustrate the performance of the techniques.
Dimensionality reduction in higher-order signal processing and rank-(R_1, R_2, ..., R_N) reduction in multilinear algebra
, 2004
Blind identification of underdetermined mixtures by simultaneous matrix diagonalization
IEEE Transactions on Signal Processing, 2008
Abstract

Cited by 19 (4 self)
In this paper, we study simultaneous matrix diagonalization-based techniques for the estimation of the mixing matrix in underdetermined independent component analysis (ICA). This includes a generalization to underdetermined mixtures of the well-known SOBI algorithm. The problem is reformulated in terms of the parallel factor decomposition (PARAFAC) of a higher-order tensor. We present conditions under which the mixing matrix is unique and discuss several algorithms for its computation.
An Optimization Approach for Fitting Canonical Tensor Decompositions
, 2009
Abstract

Cited by 15 (6 self)
Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.
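For reference, the ALS baseline this paper compares against alternates exact linear least-squares solves over the factor matrices. The following is a hedged sketch of one such update (our own illustration; the unfolding convention is chosen to match NumPy's C-order reshape, and all names are ours, not the paper's):

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Kronecker product of B (JxR) and C (KxR): a (J*K) x R matrix,
    with row index j*K + k matching a C-order mode-1 unfolding."""
    return np.einsum('jr,kr->jkr', B, C).reshape(-1, B.shape[1])

def als_update_A(X, B, C):
    """One ALS half-step for CP fitting: with B and C fixed, the optimal A solves
    the linear least-squares problem X_(1) ~ A @ khatri_rao(B, C).T, where
    X_(1) is the mode-1 unfolding of X (shape I x (J*K))."""
    X1 = X.reshape(X.shape[0], -1)
    M = khatri_rao(B, C)
    return np.linalg.lstsq(M, X1.T, rcond=None)[0].T

# Sanity check on an exactly rank-2 tensor: the update recovers A from B and C.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 2))
B = rng.standard_normal((5, 2))
C = rng.standard_normal((3, 2))
X = np.einsum('ir,jr,kr->ijk', A, B, C)
A_hat = als_update_A(X, B, C)
```

A full ALS sweep cycles this update over all three modes; the paper's point is that the exact gradient of the CP objective can be computed at the same per-iteration cost.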
A concise proof of Kruskal’s theorem on tensor decomposition
Linear Algebra and its Applications, 2010
Concept of data-sparse tensor-product approximation in many-particle modeling
In: “Matrix Methods: Theory, Algorithms, Applications”, V. Olshevsky and E. Tyrtyshnikov, eds., World Scientific Publishers, Singapore
Abstract

Cited by 12 (7 self)
We present concepts of data-sparse tensor approximations to the functions and operators arising in many-particle models of quantum chemistry. Our approach is based on the systematic use of structured tensor-product representations where the low-dimensional components are represented in hierarchical or wavelet-based matrix formats. The modern methods of tensor-product approximation in higher dimensions are discussed with the focus on analytically based approaches. We give numerical illustrations which confirm the efficiency of tensor decomposition techniques in electronic structure calculations.
Tensors: a Brief Introduction
, 2014
Abstract

Cited by 11 (3 self)
Tensor decompositions are at the core of many Blind Source Separation (BSS) algorithms, either explicitly or implicitly. In particular, the Canonical Polyadic (CP) tensor ...