Results 1-10 of 17
Tensor Decompositions and Applications
SIAM Review, 2009
Cited by 723 (18 self)
This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or N-way array. Decompositions of higher-order tensors (i.e., N-way arrays with N ≥ 3) have applications in psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, etc. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal components analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2, as well as nonnegative variants of all of the above. The N-way Toolbox and Tensor Toolbox, both for MATLAB, and the Multilinear Engine are examples of software packages for working with tensors.
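As a hedged illustration of the CP model described in this abstract (a tensor as a sum of rank-one tensors), the following NumPy sketch reconstructs a third-order tensor from invented factor matrices; all names, sizes, and the rank are made up for illustration, not taken from the survey:

```python
# Minimal sketch: reconstruct a third-order tensor from CP (Kruskal) factors.
# The factor matrices A, B, C and all dimensions are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
I, J, K, R = 4, 5, 3, 2             # tensor dimensions and CP rank
A = rng.standard_normal((I, R))     # mode-1 factors
B = rng.standard_normal((J, R))     # mode-2 factors
C = rng.standard_normal((K, R))     # mode-3 factors

# CP model: T = sum_r a_r (outer) b_r (outer) c_r, over matching columns.
T = np.einsum('ir,jr,kr->ijk', A, B, C)

# Equivalent explicit sum of R rank-one tensors.
T_sum = sum(np.multiply.outer(np.multiply.outer(A[:, r], B[:, r]), C[:, r])
            for r in range(R))
assert np.allclose(T, T_sum)
print(T.shape)  # (4, 5, 3)
```

The `einsum` form and the explicit sum of outer products agree elementwise, which is exactly the rank-one-sum structure the abstract attributes to CP.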
A multilinear singular value decomposition
SIAM J. Matrix Anal. Appl., 2000
Cited by 472 (22 self)
Abstract. We discuss a multilinear generalization of the singular value decomposition. There is a strong analogy between several properties of the matrix and the higher-order tensor decomposition; uniqueness, the link with the matrix eigenvalue decomposition, first-order perturbation effects, etc., are analyzed. We investigate how tensor symmetries affect the decomposition and propose a multilinear generalization of the symmetric eigenvalue decomposition for pairwise symmetric tensors.
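A hedged sketch of the multilinear (higher-order) SVD idea from this abstract: compute the left singular vectors of each mode unfolding, then form the core tensor by multiplying the original tensor by each factor's transpose. The tensor sizes and unfolding convention below are illustrative choices, not taken from the paper:

```python
# Minimal HOSVD sketch for a third-order tensor (sizes are invented).
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move `mode` to the front, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

rng = np.random.default_rng(1)
T = rng.standard_normal((3, 4, 5))

# Left singular vectors of each unfolding give the mode factor matrices.
U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0] for n in range(3)]

# Core tensor: multiply T by U_n^T in every mode.
S = np.einsum('ijk,ia,jb,kc->abc', T, U[0], U[1], U[2])

# Reconstruction from core and orthogonal factors is exact here.
T_hat = np.einsum('abc,ia,jb,kc->ijk', S, U[0], U[1], U[2])
assert np.allclose(T, T_hat)
```

The exact round trip reflects the analogy with the matrix SVD that the abstract emphasizes: the factors are orthogonal and the core plays the role of the singular values.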
Efficient MATLAB computations with sparse and factored tensors
SIAM Journal on Scientific Computing, 2007
Cited by 84 (17 self)
In this paper, the term tensor refers simply to a multidimensional or $N$-way array, and we consider how specially structured tensors allow for efficient storage and computation. First, we study sparse tensors, which have the property that the vast majority of the elements are zero. We propose storing sparse tensors using coordinate format and describe the computational efficiency of this scheme for various mathematical operations, including those typical to tensor decomposition algorithms. Second, we study factored tensors, which have the property that they can be assembled from more basic components. We consider two specific types: a Tucker tensor can be expressed as the product of a core tensor (which itself may be dense, sparse, or factored) and a matrix along each mode, and a Kruskal tensor can be expressed as the sum of rank-1 tensors. We are interested in the case where the storage of the components is less than the storage of the full tensor, and we demonstrate that many elementary operations can be computed using only the components. All of the efficiencies described in this paper are implemented in the Tensor Toolbox for MATLAB.
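The coordinate format described in this abstract can be sketched in a few lines: store only the subscripts and values of the nonzeros, and implement operations directly on those arrays. The example below (all subscripts, values, and the helper name are invented) shows one operation typical of decomposition algorithms, contracting two modes with vectors:

```python
# Sketch of coordinate (COO) storage for a sparse third-order tensor,
# with a tensor-times-vector contraction over modes 2 and 3.
# Subscripts, values, and sizes are invented for illustration.
import numpy as np

shape = (1000, 800, 600)
# Only 5 of the 4.8e8 entries are nonzero; each row of `subs` is (i, j, k).
subs = np.array([[0, 1, 2], [0, 3, 5], [7, 7, 7], [0, 1, 2], [999, 0, 0]])
# Note (0, 1, 2) appears twice; this sketch simply sums duplicate entries.
vals = np.array([1.0, 2.0, -1.0, 0.5, 3.0])

def ttv_modes_2_3(subs, vals, shape, b, c):
    """Contract modes 2 and 3 with vectors b and c; return a dense mode-1 vector."""
    out = np.zeros(shape[0])
    np.add.at(out, subs[:, 0], vals * b[subs[:, 1]] * c[subs[:, 2]])
    return out

b = np.ones(shape[1])
c = np.ones(shape[2])
x = ttv_modes_2_3(subs, vals, shape, b, c)
print(x[0])  # 1.0 + 2.0 + 0.5 = 3.5
```

The cost is proportional to the number of nonzeros rather than the full tensor size, which is the point of the storage scheme the abstract proposes.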
Computation of the canonical decomposition by means of a simultaneous generalized Schur decomposition
SIAM J. Matrix Anal. Appl., 2004
Cited by 55 (10 self)
Abstract. The canonical decomposition of higher-order tensors is a key tool in multilinear algebra. First we review the state of the art. Then we show that, under certain conditions, the problem can be rephrased as the simultaneous diagonalization, by equivalence or congruence, of a set of matrices. Necessary and sufficient conditions for the uniqueness of these simultaneous matrix decompositions are derived. As a next step, the problem can be translated into a simultaneous generalized Schur decomposition, with orthogonal unknowns [A.J. van der Veen and A. Paulraj, IEEE Trans. Signal Process., 44 (1996), pp. 1136–1155]. A first-order perturbation analysis of the simultaneous generalized Schur decomposition is carried out. We discuss some computational techniques (including a new Jacobi algorithm) and illustrate their behavior by means of a number of numerical experiments.
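A small numerical sketch of the rephrasing this abstract mentions: for a third-order tensor with canonical (CP) structure and square invertible factor matrices, every frontal slice has the form T_k = A diag(c_k) B^T, so one equivalence transformation diagonalizes all slices at once. The matrices below are invented, and the square-invertible setting is a simplifying assumption, not the paper's general conditions:

```python
# Frontal slices of a CP-structured tensor are simultaneously diagonalized
# by a single equivalence transformation (A, B assumed square invertible).
# All matrices here are invented for illustration.
import numpy as np

rng = np.random.default_rng(2)
R = 3
A = rng.standard_normal((R, R))
B = rng.standard_normal((R, R))
C = rng.standard_normal((4, R))      # one row of C per frontal slice

slices = [A @ np.diag(C[k]) @ B.T for k in range(4)]

# The single pair (A^-1, B^-T) diagonalizes every slice.
Ainv, Binv = np.linalg.inv(A), np.linalg.inv(B)
for k, Tk in enumerate(slices):
    D = Ainv @ Tk @ Binv.T
    assert np.allclose(D - np.diag(np.diag(D)), 0)   # off-diagonal vanishes
    assert np.allclose(np.diag(D), C[k])             # diagonal recovers C
```

Finding such a transformation from the slices alone is the hard part; the abstract's contribution is to carry this out via a simultaneous generalized Schur decomposition with orthogonal unknowns.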
Multilinear operators for higher-order decompositions
2006
Cited by 52 (9 self)
We propose two new multilinear operators for expressing the matrix compositions that are needed in the Tucker and PARAFAC (CANDECOMP) decompositions. The first operator, which we call the Tucker operator, is shorthand for performing an n-mode matrix multiplication for every mode of a given tensor and can be employed to concisely express the Tucker decomposition. The second operator, which we call the Kruskal operator, is shorthand for the sum of the outer products of the columns of N matrices and allows a divorce from a matricized representation and a very concise expression of the PARAFAC decomposition. We explore the properties of the Tucker and Kruskal operators independently of the related decompositions. Additionally, we provide a review of the matrix and tensor operations that are frequently used in the context of tensor decompositions.
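The two operators described in this abstract can be sketched directly: the Tucker operator applies an n-mode matrix product in every mode of a core tensor, and the Kruskal operator sums outer products of matching columns. The function names and all sizes below are illustrative, not the paper's notation:

```python
# Illustrative sketch of the Tucker and Kruskal operators (third-order case
# generalized via einsum); all names and dimensions are invented.
import numpy as np

def mode_n_product(T, M, n):
    """n-mode product: multiply matrix M along mode n of tensor T."""
    return np.moveaxis(np.tensordot(M, T, axes=(1, n)), 0, n)

def tucker_operator(G, matrices):
    """[[G; U1, ..., UN]]: n-mode multiply core G by each matrix in turn."""
    for n, U in enumerate(matrices):
        G = mode_n_product(G, U, n)
    return G

def kruskal_operator(matrices):
    """[[U1, ..., UN]]: sum of outer products of the r-th columns."""
    lhs = ','.join(chr(ord('a') + n) + 'r' for n in range(len(matrices)))
    rhs = ''.join(chr(ord('a') + n) for n in range(len(matrices)))
    return np.einsum(f'{lhs}->{rhs}', *matrices)

rng = np.random.default_rng(3)
G = rng.standard_normal((2, 3, 2))
Us = [rng.standard_normal((d, r)) for d, r in [(4, 2), (5, 3), (6, 2)]]
print(tucker_operator(G, Us).shape)   # (4, 5, 6)

Vs = [rng.standard_normal((d, 2)) for d in (4, 5, 6)]
K = kruskal_operator(Vs)
print(K.shape)                        # (4, 5, 6)
```

Note how the Kruskal operator never forms a matricized representation: the einsum works on the factor matrices directly, which is the "divorce" the abstract describes.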
Minimal subspaces in tensor representations
2011
Cited by 17 (6 self)
In this paper we introduce and develop the notion of minimal subspaces in the framework of algebraic and topological tensor product spaces. This mathematical structure arises in a natural way in the study of tensor representations. We use minimal subspaces to prove the existence of a best approximation, for any element in a Banach tensor space, by means of a tensor given in a typical representation format (Tucker, hierarchical, or tensor train). We show that this result holds in a tensor Banach space with a norm stronger than the injective norm and in an intersection of finitely many Banach tensor spaces satisfying some additional conditions. Examples using topological tensor products of standard Sobolev spaces are given.
An algorithm for generic and low-rank specific identifiability of complex tensors
2014
On generic nonexistence of the Schmidt-Eckart-Young decomposition for complex tensors
2013
Tight Wavelet Frames on Multislice Graphs
Cited by 1 (0 self)
Abstract—We present a framework for the design of wavelet transforms tailored to data defined on multislice graphs (i.e., multiplex or dynamic graphs). Graphs with multiple types of interactions are ubiquitous in real life, motivating the extension of wavelets to these complex domains. Our framework generalizes the recently proposed spectral graph wavelet transform (SGWT)
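As a hedged sketch of the spectral graph wavelet transform (SGWT) that this abstract builds on, the following example forms a wavelet on a single small graph by applying a band-pass kernel to the graph Laplacian's spectrum. The graph, kernel, scale, and center vertex are all invented; the multislice generalization of the paper is not attempted here:

```python
# Spectral graph wavelet sketch: psi_{s,n} = U g(s Lambda) U^T delta_n.
# Graph (a 6-vertex path), kernel g, scale s, and vertex n are invented.
import numpy as np

N = 6
A = np.diag(np.ones(N - 1), 1)      # path-graph adjacency
A = A + A.T
L = np.diag(A.sum(axis=1)) - A      # combinatorial Laplacian

lam, U = np.linalg.eigh(L)          # eigendecomposition of L

def g(x, scale):
    """Illustrative band-pass kernel, g(0) = 0."""
    sx = scale * x
    return sx * np.exp(1.0 - sx)

s, n = 2.0, 3
# U[n] is the n-th row of U, i.e. U^T applied to the delta at vertex n.
psi = U @ (g(lam, s) * U[n])
print(psi.shape)  # (6,)
```

Because g(0) = 0, the wavelet has no component on the constant eigenvector, so its entries sum to (numerically) zero, the usual zero-mean property of band-pass graph wavelets.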
On tensor Tucker decomposition: the case for an adjustable core size
Abstract. This paper is concerned with the problem of finding a Tucker decomposition for tensors. Traditionally, solution methods for Tucker decomposition presume that the size of the core tensor is specified in advance, which may not be a realistic assumption in some applications. In this paper we propose a new computational model where the configuration and the size of the core become part of the decisions to be optimized. Our approach is based on the so-called maximum block improvement (MBI) method for nonconvex block optimization. We consider a number of real applications in this paper, showing the promising numerical performance of the proposed algorithms.
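To illustrate the premise of this abstract, that the core size need not be fixed in advance, here is a simple stand-in heuristic: pick each mode's rank from the singular values of that mode's unfolding. This is a truncated-HOSVD-style rank selection invented for illustration, not the paper's MBI optimization model, and all names and sizes below are made up:

```python
# Stand-in heuristic for choosing an adjustable Tucker core size:
# count the significant singular values of each mode unfolding.
# This is NOT the MBI method of the paper; it is a simple illustration.
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def adaptive_core_ranks(T, tol=1e-6):
    """One rank per mode: singular values above tol * largest count."""
    ranks = []
    for n in range(T.ndim):
        s = np.linalg.svd(unfold(T, n), compute_uv=False)
        ranks.append(int(np.sum(s > tol * s[0])))
    return tuple(ranks)

rng = np.random.default_rng(4)
# Build a tensor of multilinear rank (2, 3, 2), plus a little noise.
G = rng.standard_normal((2, 3, 2))
Us = [rng.standard_normal((d, r)) for d, r in [(8, 2), (9, 3), (10, 2)]]
T = np.einsum('abc,ia,jb,kc->ijk', G, Us[0], Us[1], Us[2])
T += 1e-8 * rng.standard_normal(T.shape)

print(adaptive_core_ranks(T))  # (2, 3, 2)
```

The heuristic recovers the planted core configuration here; the paper's contribution is to make that configuration itself a decision variable inside a block optimization, which this sketch does not attempt.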