Results 1–10 of 10
Guaranteed Non-Orthogonal Tensor Decomposition via Alternating Rank-1 Updates. arXiv preprint arXiv:1402.5180, 2014
Cited by 6 (3 self)
Abstract
A simple alternating rank-1 update procedure is considered for CP tensor decomposition. Local convergence guarantees are established for third-order tensors of rank k in d dimensions, when k = o(d^1.5) and the tensor components are incoherent. We strengthen the results to global convergence guarantees when k = O(d) through a simple initialization procedure based on rank-1 singular value decomposition of random tensor slices. Our tight perturbation analysis leads to efficient sample guarantees for unsupervised learning of discrete multi-view mixtures when k = O(d), where k is the number of mixture components and d is the observed dimension. For learning overcomplete decompositions (k = ω(d)), we prove that an extremely small number of labeled samples, scaling as polylog(k) for each label, under the semi-supervised setting (where the label corresponds to the choice variable in the mixture model) leads to global convergence guarantees for learning mixture models.
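The alternating rank-1 update this abstract describes can be sketched in a few lines. The snippet below is a minimal illustration under my own assumptions (function names and the toy tensor are not from the paper): each factor is refit by contracting the tensor against the other two factors and normalizing.

```python
import numpy as np

def rank1_update(T, u, v, w, iters=50):
    """One alternating rank-1 update loop for a third-order tensor T:
    each factor is refit by contracting T against the other two and
    normalizing (a sketch of the idea, not the paper's exact code)."""
    for _ in range(iters):
        u = np.einsum('ijk,j,k->i', T, v, w); u /= np.linalg.norm(u)
        v = np.einsum('ijk,i,k->j', T, u, w); v /= np.linalg.norm(v)
        w = np.einsum('ijk,i,j->k', T, u, v); w /= np.linalg.norm(w)
    lam = np.einsum('ijk,i,j,k->', T, u, v, w)  # recovered component weight
    return lam, u, v, w

# Tiny demo: a noiseless rank-1 tensor is recovered from a random start.
rng = np.random.default_rng(0)
a = rng.standard_normal(5); a /= np.linalg.norm(a)
T = 3.0 * np.einsum('i,j,k->ijk', a, a, a)
lam, u, v, w = rank1_update(T, *rng.standard_normal((3, 5)))
```

On this rank-1 input the iteration converges after a single pass; the paper's contribution is analyzing when such updates converge for rank-k incoherent tensors.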
A Tensor Approach to Learning Mixed Membership Community Models
Cited by 6 (0 self)
Abstract
Community detection is the task of detecting hidden communities from observed interactions. Guaranteed community detection has so far been mostly limited to models with non-overlapping communities, such as the stochastic block model. In this paper, we remove this restriction and provide guaranteed community detection for a family of probabilistic network models with overlapping communities, termed the mixed membership Dirichlet model, first introduced by Airoldi et al. (2008). This model allows nodes to have fractional memberships in multiple communities and assumes that the community memberships are drawn from a Dirichlet distribution. Moreover, it contains the stochastic block model as a special case. We propose a unified approach to learning these models via a tensor spectral decomposition method. Our estimator is based on a low-order moment tensor of the observed network, consisting of 3-star counts. Our learning method is fast and is based on simple linear algebraic operations, e.g., singular value decomposition and tensor power iterations. We provide guaranteed recovery of community memberships and model parameters and present a careful finite sample analysis of our learning method. As an important special case, our results match the best known scaling requirements for the (homogeneous) stochastic block model.
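The 3-star count tensor the estimator is built on can be illustrated directly from an adjacency matrix. The helper below is a simplified sketch (the partition-based setup and names are my assumptions; the paper's exact estimator includes corrections this omits): entry (a, b, c) counts center nodes linking simultaneously to a, b, and c in three disjoint node sets.

```python
import numpy as np

def three_star_tensor(A, X, P1, P2, P3):
    """Empirical 3-star count tensor: entry (a, b, c) is the fraction of
    center nodes in X that link to node a in P1, b in P2, and c in P3.
    A simplified illustration of the moment tensor, not the paper's code."""
    B1, B2, B3 = A[np.ix_(X, P1)], A[np.ix_(X, P2)], A[np.ix_(X, P3)]
    return np.einsum('xa,xb,xc->abc', B1, B2, B3) / len(X)

# Toy graph: node 0 links to all of {2, 3, 4}; node 1 only to 2.
A = np.zeros((6, 6))
A[0, [2, 3, 4]] = 1.0
A[1, 2] = 1.0
T3 = three_star_tensor(A, X=[0, 1], P1=[2], P2=[3], P3=[4])
```

Only node 0 forms a full 3-star here, so the single tensor entry is 1/2; a tensor decomposition of T3 is what recovers the community parameters in the paper's method.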
Fast and guaranteed tensor decomposition via sketching. In NIPS, 2015
Cited by 2 (1 self)
Abstract
Tensor CANDECOMP/PARAFAC (CP) decomposition has wide applications in statistical learning of latent variable models and in data mining. In this paper, we propose fast, randomized tensor CP decomposition algorithms based on sketching. We build on the idea of count sketches but introduce many novel ideas that are unique to tensors. We develop novel methods for randomized computation of tensor contractions via FFTs, without explicitly forming the tensors. Such tensor contractions are encountered in decomposition methods such as tensor power iterations and alternating least squares. We also design novel colliding hashes for symmetric tensors to further save time in computing the sketches. We then combine these sketching ideas with existing whitening and tensor power iterative techniques to obtain the fastest algorithm on both sparse and dense tensors. The quality of approximation under our method does not depend on properties such as sparsity or uniformity of elements. We apply the method to topic modeling and obtain competitive results.
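The FFT trick mentioned here rests on a standard count-sketch identity: the sketch of a rank-1 tensor u ⊗ v ⊗ w equals the circular convolution of the three vector sketches, computable by FFT without forming the tensor. Below is a minimal sketch of that identity (hash sizes and names are my own), checked against the brute-force sketch of the explicit tensor.

```python
import numpy as np

def count_sketch(x, h, s, b):
    """Count sketch of a vector: bucket entries by hash h with sign s."""
    return np.bincount(h, weights=s * x, minlength=b)

def tensor_sketch_rank1(u, v, w, hashes, signs, b):
    """Sketch of the rank-1 tensor u ⊗ v ⊗ w without forming it:
    circular convolution of the three vector sketches, done via FFT.
    A minimal illustration of the count-sketch idea, not the paper's code."""
    cs = [count_sketch(x, h, s, b)
          for x, h, s in zip((u, v, w), hashes, signs)]
    f = np.fft.fft(cs[0]) * np.fft.fft(cs[1]) * np.fft.fft(cs[2])
    return np.real(np.fft.ifft(f))

# Verify against the brute-force sketch of the explicit d x d x d tensor.
rng = np.random.default_rng(1)
d, b = 8, 16
u, v, w = rng.standard_normal((3, d))
hashes = rng.integers(0, b, size=(3, d))
signs = rng.choice([-1.0, 1.0], size=(3, d))
fast = tensor_sketch_rank1(u, v, w, hashes, signs, b)

T = np.einsum('i,j,k->ijk', u, v, w)
slow = np.zeros(b)
for i in range(d):
    for j in range(d):
        for k in range(d):
            bucket = (hashes[0, i] + hashes[1, j] + hashes[2, k]) % b
            slow[bucket] += signs[0, i] * signs[1, j] * signs[2, k] * T[i, j, k]
```

The two sketches agree exactly (up to FFT round-off), which is why the paper can sketch tensor contractions in time that never touches all d^3 entries.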
Tensor Spectral Clustering for Partitioning Higher-order Network Structures
Cited by 2 (1 self)
Abstract
Spectral graph theory-based methods represent an important class of tools for studying the structure of networks. Spectral methods are based on a first-order Markov chain derived from a random walk on the graph, and thus they cannot take advantage of important higher-order network substructures such as triangles, cycles, and feedforward loops. Here we propose a Tensor Spectral Clustering (TSC) algorithm that allows for modeling higher-order network structures in a graph partitioning framework. Our TSC algorithm allows the user to specify which higher-order network structures (cycles, feedforward loops, etc.) should be preserved by the network clustering. Higher-order network structures of interest are represented using a tensor, which we then partition by developing a multilinear spectral method. Our framework can be applied to discovering layered flows in networks as well as graph anomaly detection, which we illustrate on synthetic networks. In directed networks, a higher-order structure of particular interest is the directed 3-cycle, which captures feedback loops in networks. We demonstrate that our TSC algorithm produces large partitions that cut fewer directed 3-cycles than standard spectral clustering algorithms.
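The directed 3-cycle tensor the abstract highlights is easy to construct from an adjacency matrix. The snippet below is only a simplified illustration of that representation step (the function name is mine, and the paper's method goes on to partition this tensor with a multilinear spectral method that is not shown here).

```python
import numpy as np

def directed_3cycle_tensor(A):
    """Indicator tensor of directed 3-cycles: entry (i, j, k) is 1 exactly
    when the edges i->j, j->k, and k->i all exist. A simplified
    illustration of the higher-order structure tensor, not the TSC code."""
    return np.einsum('ij,jk,ki->ijk', A, A, A)

# Toy digraph: one directed 3-cycle 0 -> 1 -> 2 -> 0, plus a stray edge.
A = np.zeros((4, 4))
A[0, 1] = A[1, 2] = A[2, 0] = 1.0
A[0, 3] = 1.0
T = directed_3cycle_tensor(A)
```

The single cycle appears three times in T (once per cyclic rotation of its nodes), and the stray edge contributes nothing; a partition that cuts few directed 3-cycles is one that keeps the mass of this tensor within clusters.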
Sublinear Time Orthogonal Tensor Decomposition
Abstract
A recent work (Wang et al., NIPS 2015) gives the fastest known algorithms for orthogonal tensor decomposition with provable guarantees. Their algorithm is based on computing sketches of the input tensor, which requires reading the entire input. We show that in a number of cases one can achieve the same theoretical guarantees in sublinear time, i.e., even without reading most of the input tensor. Instead of using sketches to estimate inner products in tensor decomposition algorithms, we use importance sampling. To achieve sublinear time, we need to know the norms of tensor slices, and we show how to do this in a number of important cases. For symmetric tensors T = ∑_{i=1}^k λ_i u_i^{⊗p} with λ_i > 0 for all i, we estimate such norms in sublinear time whenever p is even. For the important case of p = 3 and small values of k, we can also estimate such norms. For asymmetric tensors, sublinear time is not possible in general, but we show that if the tensor slice norms are just slightly below ‖T‖_F then sublinear time is again possible. One of the main strengths of our work is empirical: in a number of cases our algorithm is orders of magnitude faster than existing methods with the same accuracy.
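The importance-sampling substitute for sketching can be illustrated for the inner product ⟨T, u ⊗ u ⊗ u⟩ that a tensor power step needs: sample each index with probability proportional to u_i^2 and reweight, so only a handful of tensor entries are read. This is a minimal sketch of the sampling idea under my own simplifications, not the paper's exact estimator.

```python
import numpy as np

def sampled_inner(T, u, n_samples, rng):
    """Unbiased importance-sampling estimate of <T, u x u x u>, reading
    only n_samples entries of T. Each index is drawn with probability
    u_i^2 / ||u||^2 and the contribution is reweighted accordingly.
    An illustration of the sampling idea, not the paper's estimator."""
    p = u**2 / (u @ u)
    idx = rng.choice(len(u), size=(n_samples, 3), p=p)
    i, j, k = idx.T
    weights = (u[i] * u[j] * u[k]) / (p[i] * p[j] * p[k])
    return np.mean(T[i, j, k] * weights)

# Demo: for T = 2 * u x u x u with unit u, the true inner product is 2,
# and here every sample happens to contribute exactly that value.
rng = np.random.default_rng(2)
u = rng.standard_normal(50); u /= np.linalg.norm(u)
T = 2.0 * np.einsum('i,j,k->ijk', u, u, u)
est = sampled_inner(T, u, 200, rng)
```

The demo reads 200 of the 125,000 entries of T; the paper's technical work is bounding the variance of such estimators and obtaining the slice norms needed to sample well for general tensors.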
Overlap Graph Clustering via Successive Removal
Abstract
One of the fundamental questions in the study of complex networks is community detection: given a graph that represents interactions in a real system, can we group vertices with similar interests together? In many applications, we are often in a setting where vertices may potentially belong to multiple communities. In this paper, we propose an efficient algorithm for overlapping community detection which can successively recover all the communities. We provide theoretical guarantees on the performance of the algorithm by leveraging convex relaxation and exploiting the fact that in many networks there are often vertices that only belong to one community.
Analyzing Tensor Power Method Dynamics: Applications to Learning Overcomplete Latent Variable Models, 2014
Scalable Link Prediction in Dynamic Networks via Non-Negative Matrix Factorization
Abstract
We study the temporal link prediction problem: given past interactions, our goal is to predict new interactions. We propose a dynamic link prediction method based on non-negative matrix factorization. This method assumes that interactions are more likely between users that are similar to each other in the latent space representation. We propose a global optimization algorithm to effectively learn the temporal latent space, with a quadratic convergence rate and bounded error. In addition, we propose two alternative algorithms with local and incremental updates, which provide much better scalability without deteriorating prediction accuracy. We evaluate our model on a number of real-world dynamic networks and demonstrate that our model significantly outperforms existing approaches for temporal link prediction in terms of both scalability and predictive power.
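The core idea of scoring links by a non-negative latent factorization can be sketched with plain multiplicative-update NMF. This is only an illustration under my own assumptions: the paper's temporal, quadratically convergent algorithm is considerably more involved than the textbook updates below.

```python
import numpy as np

def nmf(A, k, iters=200, eps=1e-9, seed=0):
    """Textbook multiplicative-update NMF, A ~ U @ V.T with U, V >= 0.
    A simplified stand-in for the paper's temporal latent-space model."""
    rng = np.random.default_rng(seed)
    n, m = A.shape
    U = rng.random((n, k))
    V = rng.random((m, k))
    for _ in range(iters):
        U *= (A @ V) / (U @ (V.T @ V) + eps)
        V *= (A.T @ U) / (V @ (U.T @ U) + eps)
    return U, V

# Score candidate links by the non-negative reconstruction U @ V.T:
# high-scoring zero entries of A are the predicted new links.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
U, V = nmf(A, k=2)
scores = U @ V.T
```

In the temporal setting, one such latent space is learned per time step with smoothness constraints across steps; the scoring principle, however, is the same reconstruction shown here.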
A Tensor Spectral Approach to Learning Mixed Membership Community Models, 2013
Guaranteed Non-Orthogonal Tensor Decomposition via Alternating Rank-1 Updates, 2014
Abstract
A simple alternating rank-1 update procedure is considered for CP tensor decomposition. Local convergence guarantees are established for third-order tensors of rank k in d dimensions, when k = o(d^1.5) and the tensor components are incoherent. We strengthen the results to global convergence guarantees when k ≤ Cd (for an arbitrary constant C > 1) through a simple initialization procedure based on rank-1 singular value decomposition of random tensor slices. The guarantees also provide a tight perturbation analysis given a noisy tensor.