Results 1–10 of 15
Fast multivariate spatiotemporal analysis via low rank tensor learning
In Advances in Neural Information Processing Systems, 2014
Cited by 5 (4 self)
Abstract
Accurate and efficient analysis of multivariate spatiotemporal data is critical in climatology, geology, and sociology applications. Existing models usually assume simple interdependence among variables, space, and time, and are computationally expensive. We propose a unified low rank tensor learning framework for multivariate spatiotemporal analysis, which can conveniently incorporate different properties in spatiotemporal data, such as spatial clustering and shared structure among variables. We demonstrate how the general framework can be applied to cokriging and forecasting tasks, and develop an efficient greedy algorithm to solve the resulting optimization problem with a convergence guarantee. We conduct experiments on both synthetic and real application datasets to demonstrate that our method is not only significantly faster than existing methods but also achieves lower estimation error.
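The greedy idea mentioned in this abstract can be illustrated with a toy rank-one peeling loop (a minimal matrix sketch, not the paper's cokriging/forecasting algorithm; all names below are illustrative):

```python
import numpy as np

def greedy_low_rank(Y, max_rank=5, tol=1e-6):
    """Greedily build a low-rank fit to Y by peeling off the dominant
    singular pair of the residual at each step. A toy sketch of greedy
    rank-one updates, not the paper's full algorithm."""
    X = np.zeros_like(Y, dtype=float)
    for _ in range(max_rank):
        R = Y - X                                  # current residual
        U, s, Vt = np.linalg.svd(R, full_matrices=False)
        if s[0] < tol:                             # nothing left to explain
            break
        X += s[0] * np.outer(U[:, 0], Vt[0])       # add best rank-one term
    return X

# usage: two greedy steps recover an exactly rank-2 signal
rng = np.random.default_rng(0)
Y = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 15))
X_hat = greedy_low_rank(Y, max_rank=2)
err = np.linalg.norm(Y - X_hat) / np.linalg.norm(Y)
```

Each greedy step is a single truncated SVD of the residual, which is what makes this family of methods fast relative to full convex solvers.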
Convergence rate of Bayesian tensor estimator: Optimal rate without restricted strong convexity, 2014
Multitask learning meets tensor factorization: task imputation via convex optimization
Cited by 1 (0 self)
Abstract
We study a multitask learning problem in which each task is parametrized by a weight vector and indexed by a pair of indices, e.g., (consumer, time). The weight vectors can be collected into a tensor, and the (multilinear) rank of the tensor controls the amount of information shared among tasks. Two types of convex relaxations have recently been proposed for the tensor multilinear rank. However, we argue that neither is optimal in the context of multitask learning, in which the dimensions or multilinear rank are typically heterogeneous. We propose a new norm, which we call the scaled latent trace norm, and analyze the excess risk of all three norms. The results apply to various settings including matrix and tensor completion, multitask learning, and multilinear multitask learning. Both the theory and experiments support the advantage of the new norm when the tensor is not equal-sized and we do not know a priori which mode is low rank.
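One of the existing convex relaxations the abstract refers to, the "overlapped" trace norm (sum of nuclear norms of the mode unfoldings), is easy to compute directly; the scaled latent trace norm proposed in the paper is an infimum over decompositions and needs an optimization routine, so only the simpler norm is sketched here:

```python
import numpy as np

def unfold(T, mode):
    """Mode-k unfolding: move axis `mode` to the front, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def overlapped_trace_norm(T):
    """Sum of nuclear norms of all mode unfoldings -- one of the two
    standard convex relaxations of multilinear rank (not the scaled
    latent trace norm proposed in the paper)."""
    return sum(np.linalg.norm(unfold(T, k), 'nuc') for k in range(T.ndim))

# for a rank-one tensor every unfolding is rank one, so each nuclear
# norm equals the Frobenius norm of the tensor
a, b, c = np.ones(2), np.ones(3), np.ones(4)
T = np.einsum('i,j,k->ijk', a, b, c)
val = overlapped_trace_norm(T)          # 3 * sqrt(2*3*4) for this T
```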
Robust Tensor Decomposition with Gross Corruption
Abstract
In this paper, we study the statistical performance of robust tensor decomposition with gross corruption. The observations are noisy realizations of the superposition of a low-rank tensor W∗ and an entry-wise sparse corruption tensor V∗. Unlike conventional noise with bounded variance in previous convex tensor decomposition analyses, the magnitude of the gross corruption can be arbitrarily large. We show that under certain conditions, the true low-rank tensor as well as the sparse corruption tensor can be recovered simultaneously. Our theory yields non-asymptotic Frobenius-norm estimation error bounds for each tensor separately. We show through numerical experiments that our theory can precisely predict the scaling behavior in practice.
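The low-rank-plus-sparse model W∗ + V∗ can be illustrated with a simple alternating scheme on a matrix (a toy analogue under assumed separation between signal and corruption scales; the paper analyzes a convex tensor formulation, not this heuristic):

```python
import numpy as np

def hard_threshold(X, tau):
    """Keep entries with magnitude above tau, zero the rest."""
    return np.where(np.abs(X) > tau, X, 0.0)

def svd_truncate(X, r):
    """Best rank-r approximation via truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def robust_decompose(Y, rank, tau, n_iter=50):
    """Alternate a rank-r signal estimate W and a sparse corruption
    estimate V so that Y ~ W + V. Toy matrix analogue of the
    low-rank-plus-sparse model in the abstract."""
    W = np.zeros_like(Y)
    for _ in range(n_iter):
        V = hard_threshold(Y - W, tau)   # gross corruptions stick out
        W = svd_truncate(Y - V, rank)    # refit the low-rank signal
    return W, V

rng = np.random.default_rng(1)
W_true = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))
V_true = np.zeros((30, 30))
V_true.flat[rng.choice(900, size=20, replace=False)] = 10.0  # large spikes
W_hat, V_hat = robust_decompose(W_true + V_true, rank=2, tau=4.0)
rel_err = np.linalg.norm(W_hat - W_true) / np.linalg.norm(W_true)
```

Note the corruption magnitude (10) is far above the threshold while typical signal entries fall below it; without that separation this heuristic can fail, which is part of why the paper's convex treatment matters.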
A New Convex Relaxation for Tensor Completion, 2013
Abstract
We study the problem of learning a tensor from a set of linear measurements. A prominent methodology for this problem is based on a generalization of trace norm regularization, which has been used extensively for learning low rank matrices, to the tensor setting. In this paper, we highlight some limitations of this approach and propose an alternative convex relaxation on the Euclidean ball. We then describe a technique to solve the associated regularization problem, which builds upon the alternating direction method of multipliers. Experiments on one synthetic dataset and two real datasets indicate that the proposed method improves significantly over tensor trace norm regularization in terms of estimation error, while remaining computationally tractable.
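ADMM solvers for trace-norm regularized problems like the one described here are built around singular-value soft-thresholding, the proximal operator of the nuclear norm. A generic sketch of that building block follows (this is the standard operator, not the paper's specific relaxation on the Euclidean ball):

```python
import numpy as np

def svt(X, tau):
    """Singular-value soft-thresholding: the proximal operator of the
    nuclear (trace) norm, applied to each unfolding inside ADMM-style
    solvers for trace-norm regularized estimation."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

# on a diagonal matrix the singular values are shrunk directly:
# diag(5, 3, 0.5) with tau = 1 becomes diag(4, 2, 0)
Y = svt(np.diag([5.0, 3.0, 0.5]), 1.0)
```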
Adaptive Higher-order Spectral Estimators, 2015
Abstract
Many applications involve estimation of a signal matrix from a noisy data matrix. In such cases, it has been observed that estimators that shrink or truncate the singular values of the data matrix perform well when the signal matrix has approximately low rank. In this article, we generalize this approach to the estimation of a tensor of parameters from noisy tensor data. We develop new classes of estimators that shrink or threshold the mode-specific singular values from the higher-order singular value decomposition. These classes of estimators are indexed by tuning parameters, which we adaptively choose from the data by minimizing Stein's unbiased risk estimate. In particular, this procedure provides a way to estimate the multilinear rank of the underlying signal tensor. Using simulation studies under a variety of conditions, we show that our estimators perform well when the mean tensor has approximately low multilinear rank, and perform competitively when the signal tensor does not. We illustrate the use of these methods in an application to multivariate relational data.
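The simplest member of the mode-specific shrinkage family described here is hard truncation of the higher-order SVD, sketched below (the adaptive SURE-tuned shrinkers from the paper are not reproduced):

```python
import numpy as np

def mode_product(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

def truncated_hosvd(Y, ranks):
    """Project Y onto the top-r_k left singular vectors of each mode
    unfolding: hard thresholding of the mode-specific singular values
    from the higher-order SVD."""
    X = Y
    for k, r in enumerate(ranks):
        Yk = np.moveaxis(Y, k, 0).reshape(Y.shape[k], -1)  # mode-k unfolding
        Uk = np.linalg.svd(Yk, full_matrices=False)[0][:, :r]
        X = mode_product(X, Uk @ Uk.T, k)                  # mode-k projection
    return X

# a multilinear-rank-(1, 1, 1) signal passes through unchanged
T = np.einsum('i,j,k->ijk', np.arange(1.0, 4.0), np.ones(2), np.array([1.0, 2.0]))
X = truncated_hosvd(T, (1, 1, 1))
```

Choosing the truncation ranks (r_1, r_2, r_3) is exactly the tuning problem the abstract's SURE-based procedure addresses.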
Generalized Higher-Order Orthogonal Iteration for Tensor Decomposition and Completion
Abstract
Low-rank tensor estimation has been applied successfully in many real-world problems. Despite these successes, existing Schatten 1-norm minimization (SNM) methods may become very slow or even inapplicable for large-scale problems. To address this difficulty, we propose an efficient and scalable core tensor Schatten 1-norm minimization method for simultaneous tensor decomposition and completion, with a much lower computational complexity. We first establish the equivalence between the Schatten 1-norm of a low-rank tensor and that of its core tensor. The Schatten 1-norm of the core tensor is then used in place of that of the whole tensor, which leads to a much smaller-scale matrix SNM problem. Finally, an efficient algorithm with a rank-increasing scheme is developed to solve the proposed problem with a convergence guarantee. Extensive experimental results show that our method is usually more accurate than state-of-the-art methods, and is orders of magnitude faster.
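The equivalence the abstract relies on can be checked numerically: with orthonormal Tucker factors, each mode unfolding of the full tensor shares its singular values with the corresponding unfolding of the (much smaller) core, so their Schatten 1-norms agree. A small sanity-check sketch:

```python
import numpy as np

def unfold(T, mode):
    """Mode-k unfolding of a tensor."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def schatten1(T):
    """Tensor Schatten 1-norm: sum of nuclear norms of mode unfoldings."""
    return sum(np.linalg.norm(unfold(T, k), 'nuc') for k in range(T.ndim))

rng = np.random.default_rng(2)
G = rng.standard_normal((2, 2, 2))                        # small core tensor
Qs = [np.linalg.qr(rng.standard_normal((n, 2)))[0] for n in (5, 6, 7)]
# T = G x_1 Q1 x_2 Q2 x_3 Q3 with orthonormal-column factors
T = np.einsum('abc,ia,jb,kc->ijk', G, *Qs)
# orthonormal factors preserve singular values of every unfolding,
# so the norm of the big tensor equals the norm of its core
equal = np.isclose(schatten1(T), schatten1(G))
```

This is why minimizing the norm of the core alone, as the paper proposes, yields a much smaller-scale problem without changing the objective.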
Rubik: Knowledge Guided Tensor Factorization and Completion for Health Data Analytics
Abstract
Computational phenotyping is the process of converting heterogeneous electronic health records (EHRs) into meaningful clinical concepts. Unsupervised phenotyping methods have the potential to leverage a vast amount of unlabeled EHR data for phenotype discovery. However, existing unsupervised phenotyping methods do not incorporate current medical knowledge and cannot directly handle missing or noisy data. We propose Rubik, a constrained nonnegative tensor factorization and completion method for phenotyping. Rubik incorporates 1) guidance constraints to align with existing medical knowledge, and 2) pairwise constraints for obtaining distinct, non-overlapping phenotypes. Rubik also has built-in tensor completion that can significantly alleviate the impact of noisy and missing data. We apply the Alternating Direction Method of Multipliers (ADMM) framework to tensor factorization and completion, which can be easily scaled through parallel computing. We evaluate Rubik on two EHR datasets, one containing 647,118 records for 7,744 patients from an outpatient clinic, the other a public dataset containing 1,018,614 CMS claims records for 472,645 patients. Our results show that Rubik can discover more meaningful and distinct phenotypes than the baselines. In particular, by using knowledge guidance constraints, Rubik can also discover subphenotypes for several major diseases. Rubik also runs around seven times faster than current state-of-the-art tensor methods. Finally, Rubik is scalable to large datasets containing millions of EHR records.
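The factorization core of such a method can be sketched as a bare-bones nonnegative CP fit via projected alternating least squares (a stand-in illustration only: Rubik's actual ADMM updates, guidance constraints, and pairwise constraints are not reproduced here):

```python
import numpy as np

def ncp_als(T, rank, n_iter=200):
    """Nonnegative CP factorization of a 3-way tensor by alternating
    least squares with a nonnegativity projection. A toy stand-in for
    constrained tensor factorization, not Rubik's ADMM algorithm."""
    rng = np.random.default_rng(0)
    F = [rng.random((n, rank)) for n in T.shape]
    for _ in range(n_iter):
        for k in range(3):
            others = [F[j] for j in range(3) if j != k]
            # Khatri-Rao product of the other two factor matrices
            KR = np.einsum('ir,jr->ijr', others[0], others[1]).reshape(-1, rank)
            Tk = np.moveaxis(T, k, 0).reshape(T.shape[k], -1)  # unfolding
            # least-squares update, then project onto the nonnegative orthant
            F[k] = np.maximum(Tk @ KR @ np.linalg.pinv(KR.T @ KR), 1e-12)
    return F

# factor a small, exactly rank-2 nonnegative tensor
rng = np.random.default_rng(3)
A0, B0, C0 = (rng.random((n, 2)) for n in (4, 5, 6))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = ncp_als(T, rank=2)
T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
rel_err = np.linalg.norm(T - T_hat) / np.linalg.norm(T)
```

In a phenotyping setting each factor matrix would index patients, diagnoses, and medications respectively, with nonnegativity keeping the discovered components interpretable.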
Yours Truly Sunil Template Contents
"... Low-rank tensor denoising and recovery via convex optimization ..."