Results 1–10 of 39
A multilinear singular value decomposition
SIAM J. Matrix Anal. Appl., 2000
Abstract

Cited by 472 (22 self)
We discuss a multilinear generalization of the singular value decomposition. There is a strong analogy between several properties of the matrix and the higher-order tensor decomposition; uniqueness, link with the matrix eigenvalue decomposition, first-order perturbation effects, etc., are analyzed. We investigate how tensor symmetries affect the decomposition and propose a multilinear generalization of the symmetric eigenvalue decomposition for pairwise symmetric tensors.
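The multilinear SVD described in this abstract computes, for each mode of the tensor, the left singular vectors of the corresponding matrix unfolding. A minimal NumPy sketch of that idea (the function name `hosvd` and the unfolding convention are illustrative choices, not taken from the paper):

```python
import numpy as np

def hosvd(T):
    """Higher-order SVD sketch: one matrix SVD per mode of the tensor T."""
    factors = []
    for n in range(T.ndim):
        # Mode-n unfolding: move axis n to the front, flatten the rest.
        unfolding = np.moveaxis(T, n, 0).reshape(T.shape[n], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U)
    # Core tensor: contract T with the transposed factors along every mode.
    S = T
    for n, U in enumerate(factors):
        S = np.moveaxis(np.tensordot(U.T, np.moveaxis(S, n, 0), axes=1), 0, n)
    return S, factors
```

When the full (untruncated) factor matrices are kept, multiplying the core tensor back by the factors along each mode reconstructs T exactly, mirroring the matrix SVD.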
Enhanced line search: A novel method to accelerate PARAFAC
2006
Abstract

Cited by 58 (11 self)
Several modifications have been proposed to speed up the alternating least squares (ALS) method of fitting the PARAFAC model. The most widely used is line search, which extrapolates from linear trends in the parameter changes over prior iterations to estimate the parameter values that would be obtained after many additional ALS iterations. We propose some extensions of this approach that incorporate a more sophisticated extrapolation, using information on nonlinear trends in the parameters and changing all the parameter sets simultaneously. The new method, called "enhanced line search" (ELS), can be implemented at different levels of complexity, depending on how many different extrapolation parameters (for different modes) are jointly optimized during each iteration. We report some tests of the simplest version, using simulated data. The performance of this lowest level of ELS depends on the nature of the convergence difficulty. It significantly outperforms standard line search when there is a "convergence bottleneck," a situation where some modes have almost collinear factors but others do not, but is somewhat less effective in classic "swamp" situations where factors are highly collinear in all modes. This is illustrated by examples. To demonstrate how ELS can be adapted to different N-way decompositions, we also apply it to a four-way array to perform a blind identification of an underdetermined mixture (UDM). Since analysis of this dataset happens to involve a serious convergence "bottleneck" (collinear factors in two of the four modes), it provides another example of a situation in which ELS dramatically outperforms standard line search.
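The extrapolation idea can be sketched for a three-way CP/PARAFAC model. This is a deliberately simplified illustration: it uses a single shared step for all modes and a grid search over candidate steps, whereas the paper's ELS optimizes the step(s) exactly by minimizing a polynomial in the step size; all names here are illustrative.

```python
import numpy as np

def cp_reconstruct(A, B, C):
    # Rank-R CP/PARAFAC model: T[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r]
    return np.einsum('ir,jr,kr->ijk', A, B, C)

def els_step(T, old, new, steps=np.linspace(1.0, 5.0, 41)):
    """Single-parameter line search: extrapolate every factor matrix along
    the direction of the last ALS update, F <- F_old + s * (F_new - F_old),
    keeping the step s with the best fit (s = 1 is the plain ALS update)."""
    best_s, best_err = 1.0, np.inf
    for s in steps:
        cand = [Fo + s * (Fn - Fo) for Fo, Fn in zip(old, new)]
        err = np.linalg.norm(T - cp_reconstruct(*cand))
        if err < best_err:
            best_s, best_err = s, err
    return [Fo + best_s * (Fn - Fo) for Fo, Fn in zip(old, new)], best_err
```

Because s = 1 is among the candidates, the extrapolated update can never fit worse than the plain ALS update it starts from.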
Computation of the canonical decomposition by means of a simultaneous generalized schur decomposition
SIAM J. Matrix Anal. Appl., 2004
Abstract

Cited by 55 (10 self)
The canonical decomposition of higher-order tensors is a key tool in multilinear algebra. First we review the state of the art. Then we show that, under certain conditions, the problem can be rephrased as the simultaneous diagonalization, by equivalence or congruence, of a set of matrices. Necessary and sufficient conditions for the uniqueness of these simultaneous matrix decompositions are derived. As a next step, the problem can be translated into a simultaneous generalized Schur decomposition, with orthogonal unknowns [A.J. van der Veen and A. Paulraj, IEEE Trans. Signal Process., 44 (1996), pp. 1136–1155]. A first-order perturbation analysis of the simultaneous generalized Schur decomposition is carried out. We discuss some computational techniques (including a new Jacobi algorithm) and illustrate their behavior by means of a number of numerical experiments.
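The rephrasing as simultaneous diagonalization can be made concrete in the simplest case of just two matrices: if M1 = A D1 Aᵀ and M2 = A D2 Aᵀ share the congruence factor A, the eigenvectors of M1 M2⁻¹ recover the columns of A up to scaling and permutation. A small NumPy illustration on synthetic data (this is a textbook special case, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((4, 4))              # shared congruence factor (generically invertible)
D1 = np.diag(rng.random(4))
D2 = np.diag(rng.random(4) + 1.0)   # shift keeps D2 well away from singular
M1, M2 = A @ D1 @ A.T, A @ D2 @ A.T  # simultaneously diagonalized by congruence

# M1 @ inv(M2) = A @ (D1 @ inv(D2)) @ inv(A), so its eigenvectors are
# the columns of A, up to scale and ordering.
eigvals, eigvecs = np.linalg.eig(M1 @ np.linalg.inv(M2))
```

With more than two matrices, or noisy moment estimates, a joint approximate diagonalization (such as the Jacobi-type algorithm mentioned in the abstract) replaces this single eigendecomposition.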
A Spectral Algorithm for Latent Dirichlet Allocation
Abstract

Cited by 49 (11 self)
Topic modeling is a generalization of clustering that posits that observations (words in a document) are generated by multiple latent factors (topics), as opposed to just one. This increased representational power comes at the cost of a more challenging unsupervised learning problem of estimating the topic-word distributions when only words are observed and the topics are hidden. This work provides a simple and efficient learning procedure that is guaranteed to recover the parameters for a wide class of topic models, including Latent Dirichlet Allocation (LDA). For LDA, the procedure correctly recovers both the topic-word distributions and the parameters of the Dirichlet prior over the topic mixtures, using only trigram statistics (i.e., third-order moments, which may be estimated with documents containing just three words). The method, called Excess Correlation Analysis, is based on a spectral decomposition of low-order moments via two singular value decompositions (SVDs). Moreover, the algorithm is scalable, since the SVDs are carried out only on k × k matrices, where k is the number of latent factors (topics) and is typically much smaller than the dimension of the observation (word) space.
Tensor decompositions, state of the art and applications
Mathematics in Signal Processing V
Blind identification of underdetermined mixtures based on the characteristic function
Signal Process., 2005
Abstract

Cited by 44 (20 self)
Linear mixtures of independent random variables (the so-called sources) are sometimes referred to as underdetermined mixtures (UDM) when the number of sources exceeds the dimension of the observation space. The proposed algorithms are able to identify a UDM algebraically using the second characteristic function (c.f.) of the observations, without any sparsity assumption on the sources. In fact, by taking higher-order derivatives of the multivariate c.f. core equation, the blind identification problem is shown to reduce to a tensor decomposition. With only two sensors, the first algorithm needs only an SVD. With a larger number of sensors, the second algorithm executes an alternating least squares (ALS) algorithm. The joint use of statistics of different orders is possible, and an LS solution can be computed. Identifiability conditions are stated in each of the two cases. Computer simulations demonstrate performance in the absence of sparsity and emphasize the interest in jointly using derivatives of different orders.
Canonical Tensor Decompositions
ARCC Workshop on Tensor Decomposition, 2004
Abstract

Cited by 43 (16 self)
The Singular Value Decomposition (SVD) may be extended to tensors in at least two very different ways. One is the High-Order SVD (HOSVD), and the other is the Canonical Decomposition (CanD). Only the latter is closely related to the tensor rank. Important basic questions are raised in this short paper, such as the maximal achievable rank of a tensor of given dimensions, or the computation of a CanD. Some questions are answered, and it turns out that the answers depend on the choice of the underlying field and on the tensor symmetry structure, which outlines a major difference compared to matrices.
Dimensionality reduction in higher-order signal processing and rank-(R_1, R_2, ..., R_N) reduction in multilinear algebra
2004
Blind channel identification and extraction of more sources than sensors
1998
Abstract

Cited by 20 (8 self)
It is often admitted that a static system with more inputs (sources) than outputs (sensors, or channels) cannot be blindly identified, that is, identified only from the observation of its outputs and without any a priori knowledge of the source statistics beyond their independence. By resorting to high-order statistics, it turns out that static MIMO systems with fewer outputs than inputs can be identified, as demonstrated in the present paper. The principle, already described in a recent, rather theoretical paper, had not yet been applied to a concrete blind identification problem. Here, in order to demonstrate its feasibility, the procedure is detailed in the case of a 2-sensor, 3-source mixture; a numerical algorithm is devised that blindly identifies a 3-input, 2-output mixture. Computer results show its behavior as a function of the data length when the sources are QPSK-modulated signals, widely used in digital communications. Then another algorithm is proposed to extract the 3 sources from the 2 observations once the mixture has been identified. Contrary to the first algorithm, this one assumes that the sources have a known discrete distribution. Computer experiments are run in the case of three BPSK sources in the presence of Gaussian noise.