Results 1–10 of 33
All-at-once Optimization for Coupled Matrix and Tensor Factorizations
, 1105
Abstract

Cited by 28 (3 self)
Joint analysis of data from multiple sources has the potential to improve our understanding of the underlying structures in complex data sets. For instance, in restaurant recommendation systems, recommendations can be based on rating histories of customers. In addition to rating histories, customers' social networks (e.g., Facebook friendships) and restaurant category information (e.g., Thai or Italian) can also be used to make better recommendations. The task of fusing data, however, is challenging since data sets can be incomplete and heterogeneous, i.e., data consist of both matrices, e.g., the person by person social network matrix or the restaurant by category matrix, and higher-order tensors, e.g., the "ratings" tensor of the form restaurant by meal by person. In this paper, we are particularly interested in fusing data sets with the goal of capturing their underlying latent structures. We formulate this problem as a coupled matrix and tensor factorization (CMTF) problem where heterogeneous data sets are modeled by fitting outer-product models to higher-order tensors and matrices in a coupled manner. Unlike traditional approaches solving this problem using alternating algorithms, we propose an all-at-once optimization approach called CMTF-OPT (CMTF-OPTimization), which is a gradient-based optimization approach for joint analysis of matrices and higher-order tensors. We also extend the algorithm to handle coupled incomplete data sets. Using numerical experiments, we demonstrate that the proposed all-at-once approach is more accurate than the alternating least squares approach.
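The coupled formulation in this abstract can be made concrete with a small sketch. The snippet below is an illustration, not the authors' CMTF-OPT implementation: the plain gradient step, the dimensions, and all variable names are assumptions. It fits a CP model to a third-order tensor and a low-rank model to a matrix that shares the first-mode factor, updating all factors at once from analytic gradients of the joint loss:

```python
import numpy as np

def cp3(A, B, C):
    # CP reconstruction of a third-order tensor from its factor matrices.
    return np.einsum('ir,jr,kr->ijk', A, B, C)

def cmtf_loss_and_grads(A, B, C, D, X, Y):
    # Joint loss: CP fit to the tensor X plus a coupled low-rank fit to the
    # matrix Y, sharing the factor A in the first mode; analytic gradients.
    T = X - cp3(A, B, C)          # tensor residual
    M = Y - A @ D.T               # matrix residual
    f = 0.5 * np.sum(T ** 2) + 0.5 * np.sum(M ** 2)
    gA = -np.einsum('ijk,jr,kr->ir', T, B, C) - M @ D
    gB = -np.einsum('ijk,ir,kr->jr', T, A, C)
    gC = -np.einsum('ijk,ir,jr->kr', T, A, B)
    gD = -M.T @ A
    return f, (gA, gB, gC, gD)

rng = np.random.default_rng(0)
R = 2
A0, B0, C0, D0 = [rng.standard_normal((n, R)) for n in (4, 5, 6, 3)]
X, Y = cp3(A0, B0, C0), A0 @ D0.T   # exactly factorizable synthetic data

# All-at-once: every factor is updated simultaneously from the joint
# gradient (the paper uses a more sophisticated gradient-based solver).
facs = [F + 0.01 * rng.standard_normal(F.shape) for F in (A0, B0, C0, D0)]
f0, _ = cmtf_loss_and_grads(*facs, X, Y)
for _ in range(300):
    _, grads = cmtf_loss_and_grads(*facs, X, Y)
    facs = [F - 5e-3 * g for F, g in zip(facs, grads)]
f_final, _ = cmtf_loss_and_grads(*facs, X, Y)
```

The contrast with alternating least squares is that ALS would refit one factor at a time with the others frozen, whereas here a single objective drives all four factors jointly.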
Low-Rank Tensor Krylov Subspace Methods for Parametrized Linear Systems
, 2010
Abstract

Cited by 26 (4 self)
We consider linear systems A(α)x(α) = b(α) depending on possibly many parameters α = (α1,...,αp). Solving these systems simultaneously for a standard discretization of the parameter space would require a computational effort growing exponentially in the number of parameters. We show that this curse of dimensionality can be avoided for sufficiently smooth parameter dependencies. For this purpose, computational methods are developed that benefit from the fact that x(α) can be well approximated by a tensor of low rank. In particular, low-rank tensor variants of short-recurrence Krylov subspace methods are presented. Numerical experiments for deterministic PDEs with parametrized coefficients and stochastic elliptic PDEs demonstrate the effectiveness of our approach.
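The low-rank premise behind these methods can be checked numerically. The sketch below is an illustration with a single parameter and assumed toy matrices, not the short-recurrence Krylov method itself: solutions x(α) of a smoothly parametrized SPD system are stacked into a matrix whose singular values decay rapidly.

```python
import numpy as np

n, p = 40, 60
rng = np.random.default_rng(1)

# A(alpha) = A0 + alpha*A1: a smooth one-parameter family (an assumed toy
# example; the paper treats many parameters alpha_1,...,alpha_p).
Q = rng.standard_normal((n, n))
A0 = Q @ Q.T + n * np.eye(n)            # well-conditioned SPD base matrix
A1 = np.diag(rng.uniform(0.0, 1.0, n))  # smooth parametric perturbation
b = rng.standard_normal(n)

alphas = np.linspace(0.0, 1.0, p)
X = np.column_stack([np.linalg.solve(A0 + a * A1, b) for a in alphas])

# Smooth dependence on alpha makes the solution matrix numerically low
# rank, so a few SVD terms capture all p solutions at once.
s = np.linalg.svd(X, compute_uv=False)
```

Because x(α) is analytic in α and ‖A0⁻¹A1‖ is small here, the singular values of X fall off geometrically; the paper's Krylov variants exploit exactly this compressibility without ever forming the full solution matrix.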
The geometry of algorithms using hierarchical tensors
, 2012
Abstract

Cited by 19 (5 self)
In this paper, the differential geometry of the novel hierarchical Tucker format for tensors is derived. The set HT,k of tensors with fixed tree T and hierarchical rank k is shown to be a smooth quotient manifold, namely the set of orbits of a Lie group action corresponding to the non-unique basis representation of these hierarchical tensors. Explicit characterizations of the quotient manifold, its tangent space and the tangent space of HT,k are derived, suitable for high-dimensional problems. The usefulness of a complete geometric description is demonstrated by two typical applications. First, new convergence results for the nonlinear Gauss–Seidel method on HT,k are given. Notably and in contrast to earlier works on this subject, the task of minimizing the Rayleigh quotient is also addressed. Second, evolution equations for dynamic tensor approximation are formulated in terms of an explicit projection operator onto the tangent space of HT,k. In addition, a numerical comparison is made between this dynamical approach and the standard one based on truncated singular value decompositions.
Low complexity Damped Gauss-Newton algorithms for CANDECOMP/PARAFAC
 SIAM Journal on Matrix Analysis and Applications (SIMAX)
, 2013
htucker – a Matlab toolbox for tensors in hierarchical Tucker format
, 2012
Abstract

Cited by 10 (2 self)
The hierarchical Tucker format is a storage-efficient scheme to approximate and represent tensors of possibly high order. This paper presents a Matlab toolbox, along with the underlying methodology and algorithms, which provides a convenient way to work with this format. The toolbox not only allows for the efficient storage and manipulation of tensors but also offers a set of tools for the development of higher-level algorithms. Several examples for the use of the toolbox are given.
Learning Separable Filters
, 2012
Abstract

Cited by 10 (2 self)
While learned image features can achieve great accuracy on different Computer Vision problems, their use in real-world situations is still very limited as their extraction is typically time-consuming. We therefore propose a method to learn image features that can be extracted very efficiently using separable filters, by looking for low-rank filters. We evaluate our approach on both the image categorization and the pixel classification tasks and show that we obtain similar accuracy as state-of-the-art methods, at a fraction of the computational cost.
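The computational advantage of separable (rank-1) filters is easy to verify: convolving with an h×w rank-1 kernel equals two 1-D convolutions, costing h + w instead of h·w multiplications per output sample. A minimal NumPy check (the filter values and sizes are arbitrary illustrations, not the filters learned in the paper):

```python
import numpy as np

def conv2_full(img, ker):
    # Direct 2-D full convolution, used as the reference implementation.
    H, W = img.shape
    h, w = ker.shape
    out = np.zeros((H + h - 1, W + w - 1))
    for i in range(h):
        for j in range(w):
            out[i:i + H, j:j + W] += ker[i, j] * img
    return out

u = np.array([1.0, 2.0, 1.0])    # vertical 1-D filter
v = np.array([-1.0, 0.0, 1.0])   # horizontal 1-D filter
K = np.outer(u, v)               # rank-1, i.e. separable, 2-D filter

rng = np.random.default_rng(2)
img = rng.standard_normal((16, 16))

# Separable evaluation: one pass of 1-D filtering per axis.
rows = np.apply_along_axis(np.convolve, 1, img, v)   # filter each row
sep = np.apply_along_axis(np.convolve, 0, rows, u)   # then each column
full = conv2_full(img, K)                            # should match exactly
```

A learned filter bank is rarely exactly rank 1, which is why the paper searches for low-rank filters: each learned 2-D filter is then replaced by a short sum of such separable passes.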
Linear Hamilton-Jacobi-Bellman equations in high dimensions
 in Conference on Decision and Control (CDC), 2014, arXiv preprint arXiv:1404.1089
Abstract

Cited by 7 (3 self)
The Hamilton-Jacobi-Bellman (HJB) equation provides the globally optimal solution to large classes of control problems. Unfortunately, this generality comes at a price: the calculation of such solutions is typically intractable for systems with more than moderate state space size due to the curse of dimensionality. This work combines recent results on the structure of the HJB equation, and its reduction to a linear Partial Differential Equation (PDE), with methods based on low-rank tensor representations, known as separated representations, to address the curse of dimensionality. The result is an algorithm to solve optimal control problems which scales linearly with the number of states in a system, and is applicable to systems that are nonlinear with stochastic forcing in finite-horizon, average-cost, and first-exit settings. The method is demonstrated on inverted pendulum, VTOL aircraft, and quadcopter models, with system dimensions two, six, and twelve respectively.
New ALS methods with extrapolating search directions and optimal step size for complex-valued tensor decompositions
 IEEE Transactions on Signal Processing
, 2011
Abstract

Cited by 5 (0 self)
In signal processing, data analysis and scientific computing, one often encounters the problem of decomposing a tensor into a sum of contributions. To solve such problems, both the search direction and the step size are crucial elements in numerical algorithms such as alternating least squares (ALS). Owing to the nonlinearity of the problem, the often-used linear search direction is not always powerful enough. In this paper, we propose two higher-order search directions. The first one, the geometric search direction, is constructed via a combination of two successive linear directions. The second one, the algebraic search direction, is constructed via a quadratic approximation of three successive iterates. Then, in an enhanced line search along these directions, the optimal complex step size contains two arguments: modulus and phase. A current strategy, ELSCS, finds these two arguments alternately, so it may suffer from a local optimum. We present a direct method which determines these two arguments simultaneously, so as to obtain the global optimum. Finally, numerical comparisons of various search direction and step size schemes are reported in the context of blind separation-equalization of convolutive DS-CDMA mixtures. The results show that the new search directions greatly improve the efficiency of ALS and that the new step size strategy is competitive.

Index Terms—Alternating least squares, block component model, CANDECOMP, CDMA, complex step size, PARAFAC, search direction, tensor decompositions.
Fast Alternating LS Algorithms for High Order CANDECOMP/PARAFAC Tensor Factorizations
, 2012
Abstract

Cited by 5 (1 self)
CANDECOMP/PARAFAC (CP) has found numerous applications in a wide variety of areas such as chemometrics, telecommunications, data mining, neuroscience, and separated representations. For an order-N tensor, most CP algorithms can be computationally demanding due to the computation of gradients, which are related to products between tensor unfoldings and Khatri-Rao products of all factor matrices except one. These products have the largest workload in most CP algorithms. In this paper, we propose a fast method to deal with this issue. The method also reduces the extra memory requirements of CP algorithms. As a result, we can accelerate the standard alternating CP algorithms 20–30 times for order-5 and order-6 tensors, and even higher ratios can be obtained for higher-order tensors (e.g., N ≥ 10). The proposed method is more efficient than the state-of-the-art ALS algorithm which operates two modes at a time (ALSo2) in the Eigenvector PLS toolbox, especially for tensors with order N ≥ 5 and high rank.
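The bottleneck product this abstract refers to, a tensor unfolding times a Khatri-Rao product (often called the MTTKRP), can be written down in a few lines. The sketch below is an illustration of the baseline computation the paper accelerates, not the paper's fast method; it uses a row-major unfolding convention (conventions vary) and arbitrary dimensions, and verifies the explicit Khatri-Rao route against a direct contraction that avoids the large intermediate matrix:

```python
import numpy as np

def khatri_rao(B, C):
    # Column-wise Kronecker (Khatri-Rao) product:
    # from J x R and K x R, build the (J*K) x R matrix with
    # column r equal to kron(B[:, r], C[:, r]).
    J, R = B.shape
    K, _ = C.shape
    return (B[:, None, :] * C[None, :, :]).reshape(J * K, R)

rng = np.random.default_rng(3)
I, J, K, R = 4, 5, 6, 3
X = rng.standard_normal((I, J, K))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))

# MTTKRP for mode 1, the dominant cost per ALS sweep, two ways:
X1 = X.reshape(I, J * K)          # mode-1 unfolding (row-major layout)
via_kr = X1 @ khatri_rao(B, C)    # forms the (J*K) x R intermediate
via_einsum = np.einsum('ijk,jr,kr->ir', X, B, C)  # no big intermediate
```

For an order-N tensor, one such product per mode per sweep is needed, and the Khatri-Rao intermediate grows with the product of all-but-one dimensions, which is why reorganizing this computation pays off so much at high order.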
General
Abstract

Cited by 4 (0 self)
The data in many disciplines such as social networks, Web analysis, etc. is link-based, and the link structure can be exploited for many different data mining tasks. In this article, we consider the problem of temporal link prediction: Given link data for times 1 through T, can we predict the links at time T + 1? If our data has underlying periodic structure, can we predict out even further in time, i.e., links at time T + 2, T + 3, etc.? In this article, we consider bipartite graphs that evolve over time and consider matrix- and tensor-based methods for predicting future links. We present a weight-based method for collapsing multi-year data into a single matrix. We show how the well-known Katz method for link prediction can be extended to bipartite graphs and, moreover, approximated in a scalable way using a truncated singular value decomposition. Using a CANDECOMP/PARAFAC tensor decomposition of the data, we illustrate the usefulness of exploiting the natural three-dimensional structure of temporal link data. Through several numerical experiments, we demonstrate that both matrix- and tensor-based techniques are effective for temporal link prediction despite the inherent difficulty of the problem. Additionally, we show that tensor-based …