Results 1–10 of 33
A multilinear singular value decomposition
 SIAM J. Matrix Anal. Appl., 2000
Cited by 472 (22 self)
Abstract. We discuss a multilinear generalization of the singular value decomposition. There is a strong analogy between several properties of the matrix and the higher-order tensor decomposition; uniqueness, the link with the matrix eigenvalue decomposition, first-order perturbation effects, etc., are analyzed. We investigate how tensor symmetries affect the decomposition and propose a multilinear generalization of the symmetric eigenvalue decomposition for pairwise symmetric tensors.
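The higher-order SVD described in this abstract can be sketched in a few lines of NumPy: the factor matrices are the left singular vectors of each mode-n unfolding. The function names below are illustrative, and this is a sketch of the standard construction, not the paper's reference implementation:

```python
# Sketch of a higher-order SVD (HOSVD), assuming the standard construction
# from mode-n unfoldings; names here are illustrative only.
import numpy as np

def unfold(t, mode):
    """Mode-n unfolding: matrix whose rows are indexed by the given mode."""
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def mode_multiply(t, M, mode):
    """Mode-n product t x_n M: apply the matrix M along the given mode."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(t, mode, 0), axes=1), 0, mode)

def hosvd(A):
    """Factor matrices U_n from the left singular vectors of each unfolding,
    and the core S = A x_1 U_1^T x_2 U_2^T ..."""
    Us = [np.linalg.svd(unfold(A, mode), full_matrices=False)[0]
          for mode in range(A.ndim)]
    S = A
    for mode, U in enumerate(Us):
        S = mode_multiply(S, U.T, mode)
    return S, Us

# With orthogonal square factors, the decomposition reproduces A exactly:
A = np.random.default_rng(0).normal(size=(3, 4, 5))
S, Us = hosvd(A)
A_hat = S
for mode, U in enumerate(Us):
    A_hat = mode_multiply(A_hat, U, mode)
assert np.allclose(A, A_hat)
```

Truncating the columns of each U_n yields the multilinear analogue of a truncated SVD, which is how such decompositions are typically used for compression.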
Multilinear Analysis of Image Ensembles: TensorFaces
 Proceedings of the European Conference on Computer Vision, 2002
Cited by 188 (7 self)
Natural images are the composite consequence of multiple factors related to scene structure, illumination, and imaging. Multilinear algebra, the algebra of higher-order tensors, offers a potent mathematical framework for analyzing the multifactor structure of image ensembles and for addressing the difficult problem of disentangling the constituent factors or modes. Our multilinear modeling technique employs a tensor extension of the conventional matrix singular value decomposition (SVD), known as the N-mode SVD. As a concrete example, we consider the multilinear analysis of ensembles of facial images that combine several modes, including different facial geometries (people), expressions, head poses, and lighting conditions. Our resulting "TensorFaces" representation has several advantages over conventional eigenfaces. More generally, multilinear analysis shows promise as a unifying framework for a variety of computer vision problems.
Multilinear Subspace Analysis of Image Ensembles
 Proceedings of the 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Cited by 119 (2 self)
Multilinear algebra, the algebra of higher-order tensors, offers a potent mathematical framework for analyzing ensembles of images resulting from the interaction of any number of underlying factors. We present a dimensionality reduction algorithm that enables subspace analysis within the multilinear framework. This N-mode orthogonal iteration algorithm is based on a tensor decomposition known as the N-mode SVD, the natural extension to tensors of the conventional matrix singular value decomposition (SVD). We demonstrate the power of multilinear subspace analysis in the context of facial image ensembles, where the relevant factors include different faces, expressions, viewpoints, and illuminations. In prior work we showed that our multilinear representation, called TensorFaces, yields superior facial recognition rates relative to standard, linear (PCA/eigenfaces) approaches. Here, we demonstrate factor-specific dimensionality reduction of facial image ensembles. For example, we can suppress illumination effects (shadows, highlights) while preserving detailed facial features, yielding a low perceptual error.
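The alternating scheme named in this abstract can be sketched as a generic orthogonal-iteration (HOOI-style) loop: initialize the factors from truncated SVDs of the unfoldings, then cyclically refresh each factor from a projection of the tensor onto all the others. This is an assumed reconstruction of such an iteration, not the paper's exact procedure:

```python
# Hedged sketch of an alternating, rank-(r1, ..., rN) orthogonal iteration in
# the spirit of the "N-mode orthogonal iteration" named above; a generic
# HOOI-type loop, not the paper's exact algorithm.
import numpy as np

def unfold(t, mode):
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def mode_multiply(t, M, mode):
    return np.moveaxis(np.tensordot(M, np.moveaxis(t, mode, 0), axes=1), 0, mode)

def hooi(A, ranks, n_iter=10):
    # Initialize from truncated SVDs of the unfoldings (truncated HOSVD).
    Us = [np.linalg.svd(unfold(A, n), full_matrices=False)[0][:, :r]
          for n, r in enumerate(ranks)]
    for _ in range(n_iter):
        for n in range(A.ndim):
            # Project A onto every factor except mode n, then refresh U_n.
            Y = A
            for m, U in enumerate(Us):
                if m != n:
                    Y = mode_multiply(Y, U.T, m)
            Us[n] = np.linalg.svd(unfold(Y, n),
                                  full_matrices=False)[0][:, :ranks[n]]
    core = A
    for n, U in enumerate(Us):
        core = mode_multiply(core, U.T, n)
    return core, Us

A = np.random.default_rng(3).normal(size=(5, 6, 7))
core, Us = hooi(A, (2, 3, 2))
assert core.shape == (2, 3, 2)
```

Reconstructing from the truncated core and factors gives the low-multilinear-rank approximation used for the factor-specific dimensionality reduction the abstract describes.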
On the Uniqueness of Multilinear Decomposition of N-way Arrays
, 2000
Cited by 101 (10 self)
INTRODUCTION. Consider an $I \times J$ matrix $X$ and suppose that $\operatorname{rank}(X) = 3$. Let $x_{i,j}$ denote the $(i, j)$th entry of $X$. Then $x_{i,j}$ admits a three-component bilinear decomposition
$$x_{i,j} = \sum_{f=1}^{3} a_{i,f}\, b_{j,f} \qquad (1)$$
for all $i = 1, \dots, I$ and $j = 1, \dots, J$. Equivalently, letting $a_f := [a_{1,f}, \dots, a_{I,f}]^T$ and similarly for $b_f$,
$$X = a_1 b_1^T + a_2 b_2^T + a_3 b_3^T. \qquad (2)$$
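Equations (1) and (2) can be checked numerically; the vectors $a_f$, $b_f$ below are randomly generated and purely illustrative:

```python
# Numerical check of the three-component bilinear decomposition (1)-(2);
# the vectors a_f, b_f are random and purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
I, J, F = 5, 6, 3
A = rng.normal(size=(I, F))   # columns a_1, a_2, a_3
B = rng.normal(size=(J, F))   # columns b_1, b_2, b_3

# X = a_1 b_1^T + a_2 b_2^T + a_3 b_3^T, i.e. x_{ij} = sum_f a_{if} b_{jf}.
X = sum(np.outer(A[:, f], B[:, f]) for f in range(F))

assert np.linalg.matrix_rank(X) == 3   # generically, rank(X) = 3
assert np.allclose(X, A @ B.T)         # the same decomposition in matrix form
```

For generic (random) component vectors the three outer products are linearly independent, so the rank of $X$ equals the number of components, matching the assumption in the introduction.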
Tensor-CUR Decompositions for Tensor-Based Data
 SIAM J. Matrix Anal. Appl., 2008
Cited by 36 (10 self)
Motivated by numerous applications in which the data may be modeled by a variable subscripted by three or more indices, we develop a tensor-based extension of the matrix CUR decomposition. The tensor-CUR decomposition is most relevant as a data analysis tool when the data consist of one mode that is qualitatively different from the others. In this case, the tensor-CUR decomposition approximately expresses the original data tensor in terms of a basis consisting of underlying subtensors that are actual data elements and thus have a natural interpretation in terms of the processes generating the data. Assume the data may be modeled as a (2+1)-tensor, i.e., an m × n × p tensor A in which the first two modes are similar and the third is qualitatively different. We refer to each of the p different m × n matrices as "slabs" and each of the mn different p-vectors as "fibers." In this case, the tensor-CUR algorithm computes an approximation to the data tensor A that is of the form CUR, where C is an m × n × c tensor consisting of a small number c of the slabs, R is an r × p matrix consisting of a small number r of the fibers, and U is an appropriately defined and easily computed c × r encoding matrix. Both C and R may be chosen by randomly sampling either slabs or fibers according to a judiciously chosen and data-dependent probability distribution, and both c and r depend on a rank parameter k, an error parameter ε, and a failure probability δ. ...
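The shape bookkeeping above can be illustrated with uniform sampling and a pseudoinverse-based encoding matrix. Both choices are simplifications: the paper samples according to a data-dependent probability distribution and defines U differently, so this sketch shows only how the pieces C, U, and R fit together:

```python
# Illustrative sketch of the tensor-CUR shapes described above; uniform
# sampling and a pseudoinverse encoding are simplifying assumptions, not
# the paper's data-dependent construction.
import numpy as np

rng = np.random.default_rng(2)
m, n, p, c, r = 6, 7, 8, 3, 4
A = rng.normal(size=(m, n, p))            # (2+1)-tensor: p slabs of size m x n

slab_idx = rng.choice(p, size=c, replace=False)
fiber_idx = rng.choice(m * n, size=r, replace=False)

C = A[:, :, slab_idx]                     # m x n x c tensor of sampled slabs
fibers = A.reshape(m * n, p)              # row (i, j) is the fiber A[i, j, :]
R = fibers[fiber_idx, :]                  # r x p matrix of sampled fibers

# Encoding matrix from the fiber/slab intersection (a standard matrix-CUR
# choice; the paper's U is defined and computed differently).
W = fibers[np.ix_(fiber_idx, slab_idx)]   # r x c intersection block
U = np.linalg.pinv(W)                     # c x r encoding matrix

# The CUR product has the same shape as the original tensor.
A_hat = (C.reshape(m * n, c) @ U @ R).reshape(m, n, p)
assert A_hat.shape == A.shape
```

Note that flattening each slab to a column reduces the (2+1)-tensor problem to a matrix CUR on an mn × p matrix, which is exactly the structure the abstract exploits.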
A randomized algorithm for a tensor-based generalization of the singular value decomposition
, 2007
Generic and typical ranks of multiway arrays
 Linear Algebra Appl.
Cited by 28 (5 self)
Structured Rank-(r1, ..., rd) Decomposition of Function-Related Tensors in R^d
, 2006
Cited by 16 (9 self)
The structured tensor-product approximation of multidimensional nonlocal operators by a two-level rank-(r1, ..., rd) decomposition of related higher-order tensors is proposed and analysed. In this approach, the construction of the desired approximant to a target tensor is reminiscent of the Tucker-type model, where the canonical components are represented in a fixed (uniform) basis, while the core tensor is given in the canonical format. As an alternative, the multilevel nested canonical decomposition is presented. The complexity analysis of the corresponding multilinear algebra indicates almost linear cost in the one-dimensional problem size. The existence of a low Kronecker-rank two-level representation is proven for a class of function-related tensors. In particular, we apply the results to dth-order tensors generated by multivariate functions ...
Hierarchical multilinear models for multiway data
, 2009
Cited by 15 (1 self)
Reduced-rank decompositions provide descriptions of the variation among the elements of a matrix or array. In such decompositions, the elements of an array are expressed as products of low-dimensional latent factors. This article presents a model-based version of such a decomposition, extending the scope of reduced-rank methods to accommodate a variety of data types, such as longitudinal social networks and continuous multivariate data that are cross-classified by categorical variables. The proposed model-based approach is hierarchical, in that the latent factors corresponding to a given dimension of the array are not a priori independent, but exchangeable. Such a hierarchical approach allows more flexibility in the types of patterns that can be represented. Matrix-valued data are prevalent in many scientific disciplines. Studies in social and health sciences often gather social network data that can be represented by square, binary matrices with undefined diagonals. Numerical results from gene expression studies are recorded in matrices with rows ...
The Identifiability of Covarion Models in Phylogenetics
, 2008
Cited by 8 (5 self)
Covarion models of character evolution describe inhomogeneities in substitution processes through time. In phylogenetics, such models are used to describe changing functional constraints or selection regimes during the evolution of biological sequences. In this work the identifiability of such models for generic parameters on a known phylogenetic tree is established, provided the number of covarion classes does not exceed the size of the observable state space. Combined with earlier results, this implies both the tree and generic numerical parameters are identifiable if the number of classes is strictly smaller than the number of observable states.