Results 1–10 of 107
Tensor Decompositions and Applications
SIAM REVIEW, 2009
Abstract

Cited by 705 (17 self)
This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or N-way array. Decompositions of higher-order tensors (i.e., N-way arrays with N ≥ 3) have applications in psychometrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, etc. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal components analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2, as well as nonnegative variants of all of the above. The N-way Toolbox and Tensor Toolbox, both for MATLAB, and the Multilinear Engine are examples of software packages for working with tensors.
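The CP decomposition named in this abstract can be sketched concretely. Below is a minimal NumPy fit of a rank-R CP model by alternating least squares (ALS); the function names and layout are illustrative, not the Tensor Toolbox API.

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: the chosen mode becomes the rows; C-order flattens the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    # Column-wise Kronecker product; rows ordered with A's index varying slowest,
    # matching the column ordering of `unfold` above.
    r = A.shape[1]
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, r)

def cp_als(T, rank, n_iter=200, seed=0):
    """Fit T[i,j,k] ~ sum_r A[i,r]*B[j,r]*C[k,r] by alternating least squares."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    for _ in range(n_iter):
        # Each update is a linear least-squares solve with the other two factors fixed.
        A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C)).T
        B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C)).T
        C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C
```

Reassembling the factors with `np.einsum('ir,jr,kr->ijk', A, B, C)` recovers the fitted tensor.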
Multilinear Analysis of Image Ensembles: TensorFaces
IN PROCEEDINGS OF THE EUROPEAN CONFERENCE ON COMPUTER VISION, 2002
Abstract

Cited by 188 (7 self)
Natural images are the composite consequence of multiple factors related to scene structure, illumination, and imaging. Multilinear algebra, the algebra of higher-order tensors, offers a potent mathematical framework for analyzing the multifactor structure of image ensembles and for addressing the difficult problem of disentangling the constituent factors or modes. Our multilinear modeling technique employs a tensor extension of the conventional matrix singular value decomposition (SVD), known as the N-mode SVD. As a concrete example, we consider the multilinear analysis of ensembles of facial images that combine several modes, including different facial geometries (people), expressions, head poses, and lighting conditions. Our resulting "TensorFaces" representation has several advantages over conventional eigenfaces. More generally, multilinear analysis shows promise as a unifying framework for a variety of computer vision problems.
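The N-mode SVD this abstract refers to can be sketched in a few lines of NumPy: take the left singular vectors of each mode-n unfolding, then project to form a core tensor. This is a hedged minimal sketch, not the authors' implementation.

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: the chosen mode becomes the rows.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_multiply(T, M, mode):
    # Mode-n product T x_n M: contract M's columns against axis `mode` of T.
    return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

def n_mode_svd(T):
    """Full N-mode SVD: one orthonormal mode matrix per axis, plus a core tensor."""
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0] for n in range(T.ndim)]
    core = T
    for n, Un in enumerate(U):
        core = mode_multiply(core, Un.T, n)  # project onto each mode's basis
    return core, U
```

Multiplying the core back by each mode matrix reconstructs the tensor exactly; TensorFaces instead truncates the mode matrices to obtain a compact multifactor representation.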
Multilinear Subspace Analysis of Image Ensembles
PROCEEDINGS OF 2003 IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, 2003
Abstract

Cited by 117 (2 self)
Multilinear algebra, the algebra of higher-order tensors, offers a potent mathematical framework for analyzing ensembles of images resulting from the interaction of any number of underlying factors. We present a dimensionality reduction algorithm that enables subspace analysis within the multilinear framework. This N-mode orthogonal iteration algorithm is based on a tensor decomposition known as the N-mode SVD, the natural extension to tensors of the conventional matrix singular value decomposition (SVD). We demonstrate the power of multilinear subspace analysis in the context of facial image ensembles, where the relevant factors include different faces, expressions, viewpoints, and illuminations. In prior work we showed that our multilinear representation, called TensorFaces, yields superior facial recognition rates relative to standard, linear (PCA/eigenfaces) approaches. Here, we demonstrate factor-specific dimensionality reduction of facial image ensembles. For example, we can suppress illumination effects (shadows, highlights) while preserving detailed facial features, yielding a low perceptual error.
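The N-mode orthogonal iteration described above can be sketched as an alternating scheme: fix all subspaces but one, project, and refresh that mode's basis from an SVD. A minimal NumPy version, assuming a dense tensor and user-chosen mode ranks (not the authors' code):

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: the chosen mode becomes the rows.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_multiply(T, M, mode):
    # Mode-n product: contract M's columns against axis `mode` of T.
    return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

def hooi(T, ranks, n_iter=50):
    """N-mode orthogonal iteration toward a rank-(R1,...,RN) Tucker subspace fit."""
    # Initialize with truncated SVDs of the unfoldings (truncated N-mode SVD).
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :r]
         for n, r in enumerate(ranks)]
    for _ in range(n_iter):
        for n in range(T.ndim):
            # Project T onto every subspace except mode n, then refresh U[n].
            Y = T
            for m in range(T.ndim):
                if m != n:
                    Y = mode_multiply(Y, U[m].T, m)
            U[n] = np.linalg.svd(unfold(Y, n), full_matrices=False)[0][:, :ranks[n]]
    core = T
    for n in range(T.ndim):
        core = mode_multiply(core, U[n].T, n)
    return core, U
```

Factor-specific reduction corresponds to choosing a small rank for one mode (e.g., illumination) while keeping the others large.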
Algorithms for numerical analysis in high dimensions
SIAM J. Sci. Comput., 2005
Abstract

Cited by 87 (11 self)
Nearly every numerical analysis algorithm has computational complexity that scales exponentially in the underlying physical dimension. The separated representation, introduced previously, allows many operations to be performed with scaling that is formally linear in the dimension. In this paper we further develop this representation by: (i) discussing the variety of mechanisms that allow it to be surprisingly efficient; (ii) addressing the issue of conditioning; (iii) presenting algorithms for solving linear systems within this framework; and (iv) demonstrating methods for dealing with antisymmetric functions, as arise in the multiparticle Schrödinger equation in quantum mechanics. Numerical examples are given. Key words: curse of dimensionality; multidimensional function; multidimensional operator; algorithms in high dimensions; separation of variables; separated representation; alternating least squares; separation-rank reduction; separated
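To make the "formally linear in the dimension" claim concrete, here is a hedged NumPy sketch of one such operation: the inner product of two tensors kept in separated form costs O(r_F r_G d n) rather than the O(n^d) a dense computation would need. The (terms, dimensions, grid points) array layout is an illustrative choice, not the paper's data structure.

```python
import numpy as np

def sep_inner(F, G):
    """Inner product of two separated representations.

    F has shape (rF, d, n): rF separable terms, each a product of d one-dimensional
    factors sampled on n grid points; likewise G with shape (rG, d, n). The full
    tensors would hold n**d entries each, but the inner product factorizes:
        <F, G> = sum_{l,m} prod_k <F[l,k,:], G[m,k,:]>
    """
    gram = np.einsum('ldn,mdn->lmd', F, G)  # 1-D inner products per term pair, per dim
    return gram.prod(axis=2).sum()
```

The same factorization underlies the linear-scaling matrix-vector and reduction operations the paper develops.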
A tensor-based algorithm for high-order graph matching
In CVPR, 2009
Abstract

Cited by 83 (3 self)
This paper addresses the problem of establishing correspondences between two sets of visual features using higher-order constraints instead of the unary or pairwise ones used in classical methods. Concretely, the corresponding hypergraph matching problem is formulated as the maximization of a multilinear objective function over all permutations of the features. This function is defined by a tensor representing the affinity between feature tuples. It is maximized using a generalization of spectral techniques where a relaxed problem is first solved by a multidimensional power method, and the solution is then projected onto the closest assignment matrix. The proposed approach has been implemented, and it is compared to state-of-the-art algorithms on both synthetic and real data.
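The two stages this abstract describes, a multidimensional power iteration followed by projection onto an assignment, can be sketched as below. The greedy rounding and the rank-one test tensor are illustrative simplifications, not the paper's exact procedure.

```python
import numpy as np

def tensor_power_method(A, n_iter=100, seed=0):
    """Relaxed maximizer of sum_ijk A[i,j,k] v_i v_j v_k over unit vectors:
    higher-order power iteration v <- A(., v, v) / ||A(., v, v)||."""
    rng = np.random.default_rng(seed)
    v = np.abs(rng.standard_normal(A.shape[0]))  # nonnegative start suits affinities
    for _ in range(n_iter):
        v = np.einsum('ijk,j,k->i', A, v, v)
        v /= np.linalg.norm(v)
    return v

def project_to_assignment(score):
    """Greedily round an n1 x n2 relaxed score matrix to a one-to-one matching."""
    S = score.astype(float).copy()
    match = {}
    for _ in range(min(S.shape)):
        i, j = np.unravel_index(np.argmax(S), S.shape)
        match[i] = j
        S[i, :] = -np.inf  # forbid reusing this row and column
        S[:, j] = -np.inf
    return match
```

In the matching setting, each entry of `v` scores one candidate correspondence, so `v` is reshaped into a score matrix over feature pairs before the projection step.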
Efficient MATLAB computations with sparse and factored tensors
SIAM JOURNAL ON SCIENTIFIC COMPUTING, 2007
Abstract

Cited by 80 (15 self)
In this paper, the term tensor refers simply to a multidimensional or $N$-way array, and we consider how specially structured tensors allow for efficient storage and computation. First, we study sparse tensors, which have the property that the vast majority of the elements are zero. We propose storing sparse tensors using coordinate format and describe the computational efficiency of this scheme for various mathematical operations, including those typical to tensor decomposition algorithms. Second, we study factored tensors, which have the property that they can be assembled from more basic components. We consider two specific types: a Tucker tensor can be expressed as the product of a core tensor (which itself may be dense, sparse, or factored) and a matrix along each mode, and a Kruskal tensor can be expressed as the sum of rank-1 tensors. We are interested in the case where the storage of the components is less than the storage of the full tensor, and we demonstrate that many elementary operations can be computed using only the components. All of the efficiencies described in this paper are implemented in the Tensor Toolbox for MATLAB.
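Both structured formats can be sketched in a few lines: coordinate storage for a sparse tensor, and evaluation of selected entries of a Kruskal (sum-of-rank-1) tensor straight from its factors, without ever forming the dense array. This is a hypothetical NumPy mock-up, not the Tensor Toolbox API.

```python
import numpy as np

class SparseCOO:
    """Coordinate-format sparse tensor: one row of `subs` per nonzero element."""
    def __init__(self, subs, vals, shape):
        self.subs = np.asarray(subs)   # (nnz, N) integer indices
        self.vals = np.asarray(vals)   # (nnz,) nonzero values
        self.shape = tuple(shape)

    def to_dense(self):
        T = np.zeros(self.shape)
        T[tuple(self.subs.T)] += self.vals
        return T

def kruskal_entries(factors, subs):
    """Entries of the Kruskal tensor sum_r a_r o b_r o ... at the index list `subs`,
    computed from the factor matrices alone in O(nnz * N * R) time."""
    subs = np.asarray(subs)
    out = np.ones((len(subs), factors[0].shape[1]))
    for mode, U in enumerate(factors):
        out *= U[subs[:, mode], :]   # pick each mode's factor rows
    return out.sum(axis=1)
```

Evaluating a Kruskal tensor only at a sparse tensor's nonzero locations is exactly the kind of component-level operation the paper exploits inside decomposition algorithms.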
On the best rank-1 approximation of higher-order supersymmetric tensors
SIAM J. Matrix Anal. Appl., 2002
Abstract

Cited by 79 (1 self)
Recently the problem of determining the best, in the least-squares sense, rank-1 approximation to a higher-order tensor was studied, and an iterative method that extends the well-known power method for matrices was proposed for its solution. This higher-order power method is also proposed for the special but important class of supersymmetric tensors, with no change. A simplified version, adapted to the special structure of the supersymmetric problem, is deemed unreliable, as its convergence is not guaranteed. The aim of this paper is to show that a symmetric version of the above method converges under assumptions of convexity (or concavity) for the functional induced by the tensor in question, assumptions that are very often satisfied in practical applications. The use of this version entails significant savings in computational complexity as compared to the unconstrained higher-order power method. Furthermore, a novel method for initializing the iterative process is developed which has been observed to yield an estimate that lies closer to the global optimum than the initialization suggested before. Moreover, its proximity to the global optimum is a priori quantifiable. In the course of the analysis, some important properties that the supersymmetry of a tensor implies for its square matrix unfolding are also studied.
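As a hedged sketch, the symmetric higher-order power iteration at the heart of this problem looks as follows for a third-order symmetric tensor. As the abstract notes, convergence is only guaranteed under the convexity conditions; the rank-1 test tensor below is an illustrative assumption, not from the paper.

```python
import numpy as np

def shopm(A, n_iter=200, seed=0):
    """Symmetric higher-order power method for a symmetric 3rd-order tensor:
    iterate x <- A(., x, x) / ||A(., x, x)||, then report lambda = A(x, x, x),
    a candidate best rank-1 approximation lambda * (x o x o x)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(n_iter):
        x = np.einsum('ijk,j,k->i', A, x, x)
        x /= np.linalg.norm(x)
    lam = np.einsum('ijk,i,j,k->', A, x, x, x)
    return lam, x
```

On an exactly rank-1 symmetric tensor the iteration locks onto the generating vector in a single step; on general symmetric tensors it converges only under the conditions the paper analyzes.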
Multilinear Image Analysis for Facial Recognition
INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2002
Abstract

Cited by 71 (1 self)
Natural images are the composite consequence of multiple factors related to scene structure, illumination, and imaging. For facial images, the factors include different facial geometries, expressions, head poses, and lighting conditions. We apply multilinear algebra, the algebra of higher-order tensors, to obtain a parsimonious representation of facial image ensembles which separates these factors. Our representation, called TensorFaces, yields improved facial recognition rates relative to standard eigenfaces.
TensorTextures: Multilinear ImageBased Rendering
ACM TRANSACTIONS ON GRAPHICS, 2004
Abstract

Cited by 62 (0 self)
This paper introduces a tensor framework for image-based rendering. In particular, we develop an algorithm called TensorTextures that learns a parsimonious model of the bidirectional texture function (BTF) from observational data. Given an ensemble of images of a textured surface, our nonlinear, generative model explicitly represents the multifactor interaction implicit in the detailed appearance of the surface under varying photometric angles, including local (per-texel) reflectance, complex mesostructural self-occlusion, interreflection and self-shadowing, and other BTF-relevant phenomena. Mathematically, TensorTextures is based on multilinear algebra, the algebra of higher-order tensors, hence its name. It is computed through a decomposition known as the N-mode SVD, an extension to tensors of the conventional matrix singular value decomposition (SVD). We demonstrate the application of TensorTextures to the image-based rendering of natural and synthetic textured surfaces under continuously varying viewpoint and illumination conditions.
Multilinear Independent Components Analysis
IEEE COMPUTER SOCIETY COMPUTER VISION AND PATTERN RECOGNITION (CVPR'05), 2005
Abstract

Cited by 58 (1 self)
Independent Components Analysis (ICA) maximizes the statistical independence of the representational components of a training image ensemble, but it cannot distinguish between the different factors, or modes, inherent to image formation, including scene structure, illumination, and imaging. We introduce a nonlinear, multifactor model that generalizes ICA. Our Multilinear ICA (MICA) model of image ensembles learns the statistically independent components of multiple factors. Whereas ICA employs linear (matrix) algebra, MICA exploits multilinear (tensor) algebra. We furthermore introduce a multilinear projection algorithm which projects an unlabeled test image into the N constituent mode spaces to simultaneously infer its mode labels. In the context of facial image ensembles, where the mode labels are person, viewpoint, illumination, expression, etc., we demonstrate that the statistical regularities learned by MICA capture information that, in conjunction with our multilinear projection algorithm, improves automatic face recognition.