Results 1–10 of 39
Local Convergence of the Alternating Least Squares Algorithm for Canonical Tensor Approximation
, 2011
Tensor Decompositions, Alternating Least Squares and Other Tales
 Journal of Chemometrics
, 2009
Abstract

Cited by 33 (9 self)
This work was originally motivated by a classification of tensors proposed by Richard Harshman. In particular, we focus on simple and multiple “bottlenecks”, and on “swamps”. Existing theoretical results are surveyed, some numerical algorithms are described in detail, and their numerical complexity is calculated. In particular, the interest in using the ELS (enhanced line search) enhancement in these algorithms is discussed. Computer simulations feed this discussion.
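The plain ALS iteration that this survey analyzes can be sketched in a few lines of NumPy for a third-order tensor and the CP model. This is a generic illustration, not code from the paper; the unfolding convention, the `khatri_rao` helper, and the rank `R` are illustrative:

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product of A (I x R) and B (J x R) -> (I*J x R)."""
    I, R = A.shape
    J = B.shape[0]
    return (A[:, None, :] * B[None, :, :]).reshape(I * J, R)

def als_cp(T, R, n_iter=500, seed=0):
    """Plain ALS for a rank-R CP model of a 3-way tensor T (no line search)."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, R))
    B = rng.standard_normal((J, R))
    C = rng.standard_normal((K, R))
    # mode-n unfoldings matching the Khatri-Rao column ordering below
    T1 = T.reshape(I, J * K)
    T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)
    T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)
    for _ in range(n_iter):
        # each step is an ordinary linear least-squares problem in one factor
        A = T1 @ np.linalg.pinv(khatri_rao(B, C).T)
        B = T2 @ np.linalg.pinv(khatri_rao(A, C).T)
        C = T3 @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C
```

The "swamps" the abstract mentions show up in exactly this loop as long stretches of almost no progress; the ELS enhancement modifies the step taken after each sweep.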
On Tensors, Sparsity, and Nonnegative Factorizations
Abstract

Cited by 16 (1 self)
Tensors have found application in a variety of fields, ranging from chemometrics to signal processing and beyond. In this paper, we consider the problem of multilinear modeling of sparse count data. Our goal is to develop a descriptive tensor factorization model of such data, along with appropriate algorithms and theory. To do so, we propose that the random variation is best described via a Poisson distribution, which better describes the zeros observed in the data as compared to the typical assumption of a Gaussian distribution. Under a Poisson assumption, we fit a model to observed data using the negative log-likelihood score. We present a new algorithm for Poisson tensor factorization called CANDECOMP–PARAFAC alternating Poisson regression (CP-APR) that is based on a majorization-minimization approach. It can be shown that CP-APR is a generalization of the Lee–Seung multiplicative updates. We show how to prevent the algorithm from converging to non-KKT points and prove convergence of CP-APR under mild conditions. We also explain how to implement CP-APR for large-scale sparse tensors and present results on several data sets, both real and simulated.
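In the matrix case, the update that CP-APR generalizes is the classical Lee–Seung multiplicative update for the Poisson (KL) objective. A minimal NumPy sketch of that matrix special case (not the authors' implementation; `eps` guards divisions and is illustrative):

```python
import numpy as np

def poisson_nmf_step(X, W, H, eps=1e-12):
    """One Lee-Seung multiplicative update pass for min KL(X || W H), W, H >= 0.
    CP-APR generalizes this matrix update to tensors."""
    WH = W @ H + eps
    # W update: elementwise multiply by the ratio (X/WH) H^T over row sums of H
    W = W * ((X / WH) @ H.T) / (H.sum(axis=1) + eps)
    WH = W @ H + eps
    # H update: symmetric form with column sums of W
    H = H * (W.T @ (X / WH)) / (W.sum(axis=0)[:, None] + eps)
    return W, H
```

Each pass monotonically decreases the negative log-likelihood, which is the majorization-minimization property the abstract refers to.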
Multiarray Signal Processing: Tensor decomposition meets compressed sensing
, 2009
Abstract

Cited by 16 (4 self)
We discuss how recently discovered techniques and tools from compressed sensing can be used in tensor decompositions, with a view towards modeling signals from multiple arrays of multiple sensors. We show that with appropriate bounds on coherence, one could always guarantee the existence and uniqueness of a best rank-r approximation of a tensor. In particular, we obtain a computationally feasible variant of Kruskal’s uniqueness condition with coherence as a proxy for k-rank. We treat sparsest recovery and lowest-rank recovery problems in a uniform fashion by considering Schatten and nuclear norms of tensors of arbitrary order and dictionaries that comprise a continuum of uncountably many atoms.
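The coherence used here as a computable proxy for k-rank is, for a factor matrix, the largest absolute inner product between distinct normalized columns. A minimal sketch (generic, not from the paper):

```python
import numpy as np

def coherence(A):
    """Mutual coherence of a matrix: max |<a_i, a_j>| over distinct
    unit-normalized columns. Coherence 0 means orthogonal columns;
    coherence 1 means two columns are collinear."""
    An = A / np.linalg.norm(A, axis=0)   # normalize columns
    G = np.abs(An.T @ An)                # Gram matrix of normalized columns
    np.fill_diagonal(G, 0.0)             # ignore self-inner-products
    return G.max()
```

Low coherence of each factor matrix is what the paper's bounds trade against Kruskal's combinatorial k-rank condition.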
Multiway space-time-wave-vector analysis for EEG source separation
 Signal Processing
, 2012
Cited by 12 (7 self)
Tensors: a Brief Introduction
, 2014
Abstract

Cited by 10 (3 self)
Tensor decompositions are at the core of many Blind Source Separation (BSS) algorithms, either explicitly or implicitly. In particular, the Canonical Polyadic (CP) tensor …
Tensor Decompositions for Signal Processing Applications: From Two-way to Multiway Component Analysis
 ESAT-STADIUS Internal Report
, 2014
Abstract

Cited by 10 (1 self)
The widespread use of multisensor technology and the emergence of big datasets have highlighted the limitations of standard flat-view matrix models and the necessity to move towards more versatile data analysis tools. We show that higher-order tensors (i.e., multiway arrays) enable such a fundamental paradigm shift towards models that are essentially polynomial and whose uniqueness, unlike the matrix methods, is guaranteed under very mild and natural conditions. Benefiting from the power of multilinear algebra as their mathematical backbone, data analysis techniques using tensor decompositions are shown to have great flexibility in the choice of constraints that match data properties, and to find more general latent components in the data than matrix-based methods. A comprehensive introduction to tensor decompositions is provided from a signal processing perspective, starting from the algebraic foundations, via basic Canonical Polyadic and Tucker models, through to advanced cause-effect and multi-view data analysis schemes. We show that tensor decompositions enable natural generalizations of some commonly used signal processing paradigms, such as canonical correlation and subspace techniques, signal separation, linear regression, feature extraction and classification. We also cover computational aspects, and point out how ideas from compressed sensing and scientific computing may be used for addressing the otherwise unmanageable storage and manipulation problems associated with big datasets. The concepts are supported by illustrative real-world case studies illuminating the benefits of the tensor framework, as efficient and promising tools for modern signal processing, data analysis and machine learning applications; these benefits also extend to vector/matrix data through tensorization.
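The two basic models mentioned, Canonical Polyadic and Tucker, can both be written as compact multilinear contractions; a generic NumPy sketch for third-order tensors (shapes and names are illustrative, not from the paper):

```python
import numpy as np

def cp_reconstruct(A, B, C):
    """CP model: T[i,j,k] = sum_r A[i,r] B[j,r] C[k,r] (one shared rank index)."""
    return np.einsum('ir,jr,kr->ijk', A, B, C)

def tucker_reconstruct(G, A, B, C):
    """Tucker model: T[i,j,k] = sum_{p,q,s} G[p,q,s] A[i,p] B[j,q] C[k,s]
    (a full core tensor G couples the per-mode factor matrices)."""
    return np.einsum('pqs,ip,jq,ks->ijk', G, A, B, C)
```

CP is the special case of Tucker in which the core `G` is superdiagonal, which is one way to see why CP's uniqueness properties are so much stronger.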
Some convergence results on the regularized alternating least-squares method for tensor decomposition, submitted
Abstract

Cited by 6 (3 self)
We study the convergence of the Regularized Alternating Least-Squares (RALS) algorithm for tensor decompositions. As a main result, we show that, given the existence of critical points of the Alternating Least-Squares method, the limit points of the converging subsequences of RALS are critical points of the least-squares cost functional. Some numerical examples indicate a faster convergence rate for RALS in comparison to the usual alternating least-squares method.
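The regularization in question modifies each ALS subproblem: instead of solving the plain normal equations, RALS adds a ridge term so the Gram matrix stays invertible even when factors become collinear (a common symptom of ALS swamps). A minimal sketch of one such factor update, with `lam` and the shapes illustrative rather than taken from the paper:

```python
import numpy as np

def regularized_ls_step(Y, M, lam=1e-3):
    """Solve min_A ||Y - A M^T||_F^2 + lam * ||A||_F^2 in closed form.
    With lam = 0 this is the plain ALS subproblem; lam > 0 keeps
    M^T M + lam I invertible even when M has collinear columns."""
    R = M.shape[1]
    return Y @ M @ np.linalg.inv(M.T @ M + lam * np.eye(R))
```

In a full RALS sweep, `M` would be the Khatri-Rao product of the other factor matrices and this update would be applied to each factor in turn.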
Fast Nonnegative Tensor Factorization with an Active-Set-Like Method
Abstract

Cited by 6 (0 self)
We introduce an efficient algorithm for computing a low-rank nonnegative CANDECOMP/PARAFAC (NNCP) decomposition. In text mining, signal processing, and computer vision, among other areas, imposing nonnegativity constraints on the low-rank factors of matrices and tensors has been shown to be an effective technique providing physically meaningful interpretations. A principled methodology for computing NNCP is alternating nonnegative least squares, in which nonnegativity-constrained least squares (NNLS) problems are solved in each iteration. In this chapter, we propose to solve the NNLS problems using the block principal pivoting method, which overcomes some difficulties of the classical active-set method for NNLS problems with a large number of variables. We introduce techniques to accelerate the block principal pivoting method for multiple right-hand sides, which is typical in NNCP computation. Computational experiments show the state-of-the-art performance of the proposed method.
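The alternating structure described here can be sketched in the matrix case of NNCP, using SciPy's `scipy.optimize.nnls` (a classical Lawson–Hanson active-set solver) as a stand-in for the inner solver; the chapter's block principal pivoting method replaces exactly this inner solve for speed. A generic illustration, not the chapter's code:

```python
import numpy as np
from scipy.optimize import nnls

def anls_nmf(X, r, n_iter=60, seed=0):
    """Alternating nonnegative least squares for X ~ W H with W, H >= 0
    (the matrix special case of NNCP). Each column/row subproblem is a
    separate NNLS solve with the same left-hand-side matrix, which is the
    multiple right-hand-sides structure the chapter accelerates."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r))
    for _ in range(n_iter):
        # fix W, solve one NNLS problem per column of X
        H = np.column_stack([nnls(W, X[:, j])[0] for j in range(n)])
        # fix H, solve one NNLS problem per row of X
        W = np.column_stack([nnls(H.T, X[i, :])[0] for i in range(m)]).T
    return W, H
```

For tensors, the same loop runs over each factor matrix with the Khatri-Rao product of the others as the NNLS left-hand side.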
Tensors versus Matrices: Usefulness and Unexpected Properties
 IEEE Workshop on Statistical Signal Processing, Cardiff, United Kingdom (2009)
, 2009
Abstract

Cited by 5 (0 self)
Since the nineties, tensors have been increasingly used in Signal Processing and Data Analysis. There exist striking differences between tensors and matrices, some being advantages, and others raising difficulties. These differences are pointed out in this paper while briefly surveying the state of the art. The conclusion is that tensors are omnipresent in real life, implicitly or explicitly, and must be used even if we still know quite little about their properties.