Convex tensor decomposition via structured Schatten norm regularization (2013)

by R. Tomioka, T. Suzuki
Venue: In NIPS
Results 1 - 10 of 15

Fast multivariate spatio-temporal analysis via low rank tensor learning

by Mohammad Taha Bahadori, Rose Yu, Yan Liu - In Advances in Neural Information Processing Systems , 2014
Abstract - Cited by 5 (4 self)
Accurate and efficient analysis of multivariate spatio-temporal data is critical in climatology, geology, and sociology applications. Existing models usually assume simple inter-dependence among variables, space, and time, and are computationally expensive. We propose a unified low-rank tensor learning framework for multivariate spatio-temporal analysis, which can conveniently incorporate different properties in spatio-temporal data, such as spatial clustering and shared structure among variables. We demonstrate how the general framework can be applied to cokriging and forecasting tasks, and develop an efficient greedy algorithm to solve the resulting optimization problem with a convergence guarantee. We conduct experiments on both synthetic datasets and real application datasets to demonstrate that our method is not only significantly faster than existing methods but also achieves lower estimation error.

Citation Context

...ective function, we note that tensor rank has different notions such as the CP rank, Tucker rank and mode n-rank [16, 12]. In this paper, we choose the sum n-rank, which is computationally more tractable [12, 24]. The n-rank of a tensor W is the rank of its mode-n unfolding W(n). In particular, for a tensor W with N modes, we have the following definition: sum n-rank(W) = Σ_{n=1}^{N} rank(W(n)). (7) A common practic...
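The sum n-rank in Eq. (7) is straightforward to compute directly from the mode-n unfoldings. A minimal NumPy sketch (function names and sizes are illustrative, not from the cited paper):

```python
import numpy as np

def unfold(W, n):
    """Mode-n unfolding W(n): mode n becomes the rows, all other modes are flattened into columns."""
    return np.moveaxis(W, n, 0).reshape(W.shape[n], -1)

def sum_n_rank(W):
    """sum n-rank(W) = sum over n of rank(W(n)), as in Eq. (7)."""
    return sum(int(np.linalg.matrix_rank(unfold(W, n))) for n in range(W.ndim))

# A rank-1 tensor (outer product of three vectors) has n-rank 1 in every mode.
rng = np.random.default_rng(0)
a, b, c = rng.standard_normal(4), rng.standard_normal(5), rng.standard_normal(6)
W = np.einsum('i,j,k->ijk', a, b, c)
print(sum_n_rank(W))  # 3
```

Each unfolding is a matrix, so its rank is well defined and cheap to evaluate, which is what makes the sum n-rank (and its convex surrogate, the overlapped Schatten 1-norm) tractable compared with the CP rank.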

Convergence rate of Bayesian tensor estimator: Optimal rate without restricted strong convexity

by Taiji Suzuki , 2014
Abstract - Cited by 2 (0 self)
Abstract not found

Multitask learning meets tensor factorization: task imputation via convex optimization

by Kishan Wimalawarne, Masashi Sugiyama, Ryota Tomioka
Abstract - Cited by 1 (0 self)
We study a multitask learning problem in which each task is parametrized by a weight vector and indexed by a pair of indices, which can be, e.g., (consumer, time). The weight vectors can be collected into a tensor, and the (multilinear) rank of the tensor controls the amount of sharing of information among tasks. Two types of convex relaxations have recently been proposed for the tensor multilinear rank. However, we argue that neither of them is optimal in the context of multitask learning, in which the dimensions or multilinear rank are typically heterogeneous. We propose a new norm, which we call the scaled latent trace norm, and analyze the excess risk of all three norms. The results apply to various settings including matrix and tensor completion, multitask learning, and multilinear multitask learning. Both the theory and experiments support the advantage of the new norm when the tensor is not equal-sized and we do not know a priori which mode is low rank.

Robust Tensor Decomposition with Gross Corruption

by Quanquan Gu, Huan Gui, Jiawei Han
Abstract
In this paper, we study the statistical performance of robust tensor decomposition with gross corruption. The observations are noisy realizations of the superposition of a low-rank tensor W∗ and an entrywise-sparse corruption tensor V∗. Unlike the conventional noise with bounded variance in previous convex tensor decomposition analyses, the magnitude of the gross corruption can be arbitrarily large. We show that under certain conditions, the true low-rank tensor as well as the sparse corruption tensor can be recovered simultaneously. Our theory yields nonasymptotic Frobenius-norm estimation error bounds for each tensor separately. We show through numerical experiments that our theory can precisely predict the scaling behavior in practice.

Citation Context

...W∗ + E, where W∗ ∈ R^(n1×...×nK) is a low-rank tensor, E ∈ R^(n1×...×nK) is a noise tensor whose entries are i.i.d. Gaussian with zero mean and bounded variance σ^2, i.e., E_{i1,...,iK} ∼ N(0, σ^2). [22] [21] analyzed the statistical performance of convex tensor decomposition under different extensions of the trace norm. They showed that, under certain conditions, the estimation error scales with the rank of ...
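The observation model described here, a low-rank tensor plus entrywise-sparse gross corruption plus dense Gaussian noise, is easy to simulate. A NumPy sketch; the sizes, rank, and sparsity level are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, n3, r = 10, 10, 10, 2

# Low-rank component W*: a sum of r rank-1 outer products (multilinear rank <= r).
W = sum(np.einsum('i,j,k->ijk',
                  rng.standard_normal(n1),
                  rng.standard_normal(n2),
                  rng.standard_normal(n3)) for _ in range(r))

# Entrywise-sparse gross corruption V*: few entries, arbitrarily large magnitude.
V = np.zeros((n1, n2, n3))
idx = rng.choice(n1 * n2 * n3, size=20, replace=False)
V.flat[idx] = 100.0 * rng.standard_normal(20)

# Dense Gaussian noise E with bounded variance sigma^2.
sigma = 0.1
E = sigma * rng.standard_normal((n1, n2, n3))

Y = W + V + E  # observed tensor: superposition of all three components
```

The point of the analysis in the cited paper is that, despite the corruption entries being orders of magnitude larger than the noise, W∗ and V∗ can still be separated under suitable incoherence and sparsity conditions.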

A New Convex Relaxation for Tensor Completion

by Bernardino Romera-Paredes, Massimiliano Pontil , 2013
Abstract
We study the problem of learning a tensor from a set of linear measurements. A prominent methodology for this problem is based on a generalization of trace norm regularization, which has been used extensively for learning low rank matrices, to the tensor setting. In this paper, we highlight some limitations of this approach and propose an alternative convex relaxation on the Euclidean ball. We then describe a technique to solve the associated regularization problem, which builds upon the alternating direction method of multipliers. Experiments on one synthetic dataset and two real datasets indicate that the proposed method improves significantly over tensor trace norm regularization in terms of estimation error, while remaining computationally tractable.

Adaptive Higher-order Spectral Estimators

by David Gerard, Peter Hoff , 2015
Abstract
Many applications involve estimation of a signal matrix from a noisy data matrix. In such cases, it has been observed that estimators that shrink or truncate the singular values of the data matrix perform well when the signal matrix has approximately low rank. In this article, we generalize this approach to the estimation of a tensor of parameters from noisy tensor data. We develop new classes of estimators that shrink or threshold the mode-specific singular values from the higher-order singular value decomposition. These classes of estimators are indexed by tuning parameters, which we adaptively choose from the data by minimizing Stein’s unbiased risk estimate. In particular, this procedure provides a way to estimate the multilinear rank of the underlying signal tensor. Using simulation studies under a variety of conditions, we show that our estimators perform well when the mean tensor has approximately low multilinear rank, and perform competitively when the signal tensor does not have approximately low multilinear rank. We illustrate the use of these methods in an application to multivariate relational data.
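The mode-specific singular values mentioned here come from the higher-order SVD (HOSVD). A minimal NumPy sketch of the hard-truncation member of this estimator family (fixed multilinear rank, without the adaptive SURE-based tuning the paper develops); function names are illustrative:

```python
import numpy as np

def unfold(X, n):
    """Mode-n unfolding: mode n as rows, remaining modes flattened as columns."""
    return np.moveaxis(X, n, 0).reshape(X.shape[n], -1)

def mode_product(X, M, n):
    """Multiply tensor X by matrix M along mode n."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(X, n, 0), axes=1), 0, n)

def hosvd_truncate(Y, ranks):
    """Truncated HOSVD estimator: keep the top r_n left singular vectors of each
    mode-n unfolding, i.e. hard-threshold the mode-specific singular values."""
    Us = [np.linalg.svd(unfold(Y, n), full_matrices=False)[0][:, :r]
          for n, r in enumerate(ranks)]
    core = Y
    for n, U in enumerate(Us):   # project onto the mode-n subspaces
        core = mode_product(core, U.T, n)
    X = core
    for n, U in enumerate(Us):   # map the core back to the original space
        X = mode_product(X, U, n)
    return X

# A multilinear-rank-(1,1,1) tensor is reproduced exactly by its truncated HOSVD.
rng = np.random.default_rng(0)
W = np.einsum('i,j,k->ijk', rng.standard_normal(4),
              rng.standard_normal(5), rng.standard_normal(6))
print(np.allclose(hosvd_truncate(W, (1, 1, 1)), W))  # True
```

Replacing the hard cut `[:, :r]` with soft shrinkage of the singular values, and choosing the amount of shrinkage by minimizing Stein's unbiased risk estimate, gives the adaptive estimators the abstract describes.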

Generalized Higher-Order Orthogonal Iteration for Tensor Decomposition and Completion

by unknown authors
Abstract
Low-rank tensor estimation has been frequently applied to many real-world problems. Despite successful applications, existing Schatten 1-norm minimization (SNM) methods may become very slow or even inapplicable for large-scale problems. To address this difficulty, we propose an efficient and scalable core tensor Schatten 1-norm minimization method for simultaneous tensor decomposition and completion, with a much lower computational complexity. We first establish the equivalence between the Schatten 1-norm of a low-rank tensor and that of its core tensor. The Schatten 1-norm of the core tensor is then used to replace that of the whole tensor, which leads to a much smaller-scale matrix SNM problem. Finally, an efficient algorithm with a rank-increasing scheme is developed to solve the proposed problem with a convergence guarantee. Extensive experimental results show that our method is usually more accurate than the state-of-the-art methods, and is orders of magnitude faster.

Citation Context

...ture. In addition, there are some theoretical developments that guarantee the reconstruction of a low-rank tensor from partial measurements by solving the SNM problem under some reasonable conditions [24, 25, 11]. Although those SNM algorithms have been successfully applied in many real-world applications, they suffer from the high computational cost of multiple SVDs, O(N·I^(N+1)), where the assumed size of an N-th...

Rubik: Knowledge Guided Tensor Factorization and Completion for Health Data Analytics

by Yichen Wang, Robert Chen, Joydeep Ghosh, Joshua C. Denny, Abel Kho, Chen Bradley A. Malin
Abstract
Computational phenotyping is the process of converting heterogeneous electronic health records (EHRs) into meaningful clinical concepts. Unsupervised phenotyping methods have the potential to leverage the vast amount of unlabeled EHR data for phenotype discovery. However, existing unsupervised phenotyping methods do not incorporate current medical knowledge and cannot directly handle missing or noisy data. We propose Rubik, a constrained non-negative tensor factorization and completion method for phenotyping. Rubik incorporates 1) guidance constraints to align with existing medical knowledge, and 2) pairwise constraints for obtaining distinct, non-overlapping phenotypes. Rubik also has built-in tensor completion that can significantly alleviate the impact of noisy and missing data. We apply the Alternating Direction Method of Multipliers (ADMM) framework to tensor factorization and completion, which can be easily scaled through parallel computing. We evaluate Rubik on two EHR datasets: one contains 647,118 records for 7,744 patients from an outpatient clinic; the other is a public dataset containing 1,018,614 CMS claims records for 472,645 patients. Our results show that Rubik can discover more meaningful and distinct phenotypes than the baselines. In particular, by using knowledge guidance constraints, Rubik can also discover sub-phenotypes for several major diseases. Rubik also runs around seven times faster than current state-of-the-art tensor methods. Finally, Rubik is scalable to large datasets containing millions of EHR records.

Citation Context

...zed matrix completion to the tensor case to recover a low-rank tensor. They defined the nuclear norm of a tensor as a convex combination of the nuclear norms of its unfolding matrices. Tomioka and Suzuki [31] proposed a latent-norm-regularized approach. Liu et al. [23] substituted the nuclear norm of the unfolding matrices with the nuclear norm of each factor matrix of its CP decomposition. A number of other al...

Generalized Higher-Order Tensor Decomposition via Parallel ADMM

by Fanhua Shang, Yuanyuan Liu, James Cheng
Abstract
Abstract not found

Citation Context

...or modes of the tensor. Thus, vectors and matrices are first-order and second-order tensors, respectively. Higher-order tensors arise in a wide variety of application areas, such as machine learning (Tomioka and Suzuki, 2013; Signoretto et al., 2014), computer vision (Liu et al., 2009), data mining (Yilmaz et al., 2011; Morup, 2011; Narita et al., 2012; Liu et al., 2014), numerical linear algebra (Lathauwer et al., 2000a...

Yours Truly Sunil Template Contents

by unknown authors
Abstract
1 Low-rank tensor denoising and recovery via convex optimization

Citation Context

...ent Schatten 1-norm stays almost constant; this is because the minimum multilinear rank 3 is constant; see Theorem 2. Of course, this is just one well-constructed example, and we refer the readers to [53] for more results that quantitatively validate Theorem 2. 1.5.2 Tensor completion: A synthetic problem was generated as follows. The tr...
