Results 1–3 of 3
An algorithm for generic and low-rank specific identifiability of complex tensors, 2014
Smoothed Analysis of Tensor Decompositions
Abstract

Cited by 3 (1 self)
Low-rank decomposition of tensors is a powerful tool for learning generative models. The uniqueness of decomposition gives tensors a significant advantage over matrices. However, tensors pose significant algorithmic challenges, and tensor analogs of much of the matrix algebra toolkit are unlikely to exist because of hardness results. Efficient decomposition in the overcomplete case (where rank exceeds dimension) is particularly challenging. We introduce a smoothed analysis model for studying these questions and develop an efficient algorithm for tensor decomposition in the highly overcomplete case (rank polynomial in the dimension). In this setting, we show that our algorithm is robust to inverse polynomial error, a crucial property for applications in learning, since we are only allowed a polynomial number of samples. While algorithms are known for exact tensor decomposition in some overcomplete settings, our main contribution is in analyzing their stability in the framework of smoothed analysis. Our main technical contribution is to show that tensor products of perturbed vectors are linearly independent in a robust sense (i.e., the associated matrix has singular values that are at least an inverse polynomial). This key result paves the way for applying tensor methods …
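The key technical claim above, that flattened tensor squares of perturbed vectors form a robustly full-rank matrix even when the number of vectors exceeds the ambient dimension, can be illustrated numerically. The following is a minimal NumPy sketch, not the paper's algorithm: the dimension n, rank r, and perturbation size are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8           # ambient dimension
r = n * n // 2  # overcomplete rank: r > n, but still at most n^2

# Arbitrary base vectors plus a small random (smoothing) perturbation
A = rng.standard_normal((n, r))
A += 0.01 * rng.standard_normal((n, r))

# Columns are the flattened tensor squares a_i (x) a_i in R^(n^2)
M = np.column_stack([np.kron(A[:, i], A[:, i]) for i in range(r)])

# Robust linear independence: the least singular value is bounded
# away from zero, even though r exceeds the dimension n
sigma_min = np.linalg.svd(M, compute_uv=False)[-1]
print(M.shape, sigma_min > 0)
```

For matrices this would already fail: n x n rank-one matrices cannot have more than n robustly independent columns, whereas the Kronecker squares above live in an n^2-dimensional space.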
When are Overcomplete Topic Models Identifiable? Uniqueness of Tensor Tucker Decompositions with Structured Sparsity, 2013
Abstract

Cited by 2 (1 self)
Overcomplete latent representations have been very popular for unsupervised feature learning in recent years. In this paper, we specify which overcomplete models can be identified given observable moments of a certain order. We consider probabilistic admixture or topic models in the overcomplete regime, where the number of latent topics can greatly exceed the size of the observed word vocabulary. While general overcomplete topic models are not identifiable, we establish generic identifiability under a constraint, referred to as topic persistence. Our sufficient conditions for identifiability involve a novel set of “higher-order” expansion conditions on the topic-word matrix or the population structure of the model. These higher-order expansion conditions allow for overcomplete models and require the existence of a perfect matching from latent topics to higher-order observed words. We establish that random structured topic models are identifiable w.h.p. in the overcomplete regime. Our identifiability results allow for general (non-degenerate) distributions for modeling the topic proportions, and thus we can handle arbitrarily correlated topics in our framework. Our identifiability results imply uniqueness of a class of tensor decompositions with structured sparsity which is contained in the class of Tucker decompositions but is more general than the CANDECOMP/PARAFAC (CP) decomposition.
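To see why the overcomplete regime needs higher-order conditions at all, consider the third-order observable moment of a fully persistent topic model. The sketch below is illustrative only, not the paper's construction: the vocabulary size d, topic count k, and Dirichlet priors are hypothetical choices, and the flattening argument merely shows that a d x d^2 unfolding can never exceed rank d, so moments of a fixed order cannot by themselves distinguish k > d topics.

```python
import numpy as np

rng = np.random.default_rng(1)
d, k = 5, 8  # vocabulary size d, number of topics k > d (overcomplete)

# Hypothetical topic-word matrix: each column is a topic's
# distribution over the d words
A = rng.dirichlet(np.ones(d), size=k).T  # shape (d, k)
w = rng.dirichlet(np.ones(k))            # topic proportions

# Third-order moment under full topic persistence:
#   M3 = sum_j w_j * a_j (x) a_j (x) a_j
M3 = np.einsum('j,ij,kj,lj->ikl', w, A, A, A)

# Flattening M3 to a d x d^2 matrix caps its rank at d, so
# order-3 moments alone cannot pin down k > d topics; this is
# the gap the higher-order expansion conditions are meant to close
rank = np.linalg.matrix_rank(M3.reshape(d, d * d))
print(M3.shape, rank <= d)
```

Since every entry of M3 is a probability of a word triple, the tensor sums to 1, which is a quick sanity check on the construction.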