
## Cemgil, “Probabilistic latent tensor factorization”

Venue: LVA/ICA, 2010

Citations: 13 (0 self)

### Citations

1243 | Algorithms for non-negative matrix factorization
- Seung, Lee
Citation Context ...any disciplines. Yet, in order to extract useful information, effective and efficient computational tools are needed. In this context, matrix factorisation techniques have emerged as a useful paradigm [10,15]. Clustering, ICA, NMF, LSI, collaborative filtering and many such methods can be expressed and understood as matrix factorisation problems. Thinking of a matrix as the basic data structure maps well ...
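The Seung and Lee paper cited here introduced the multiplicative updates that made NMF practical. A minimal sketch of the KL-divergence variant of those updates is below; the matrix sizes, rank, and iteration count are illustrative assumptions, not anything from the cited papers.

```python
import numpy as np

def nmf_kl(X, rank, n_iter=200, eps=1e-9):
    """Lee-Seung style multiplicative updates minimising the (generalised)
    KL divergence between X and the product W @ H. A sketch, not a tuned
    implementation."""
    rng = np.random.default_rng(0)
    W = rng.random((X.shape[0], rank)) + eps
    H = rng.random((rank, X.shape[1])) + eps
    for _ in range(n_iter):
        WH = W @ H + eps
        # Multiplicative updates preserve non-negativity by construction.
        W *= ((X / WH) @ H.T) / (H.sum(axis=1) + eps)
        WH = W @ H + eps
        H *= (W.T @ (X / WH)) / (W.sum(axis=0)[:, None] + eps)
    return W, H

# Usage: factorise a small non-negative matrix into rank-2 factors.
X = np.abs(np.random.default_rng(1).random((6, 5)))
W, H = nmf_kl(X, rank=2)
```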

803 | editor. Learning in graphical models
- Jordan
- 1999
Citation Context ...associated inference algorithm can be derived automatically using matrix computation primitives. For this, we introduce a notation for TF models that closely resembles probabilistic graphical models [7]. We also propose a probabilistic approach to multiway analysis as this provides a natural framework for model selection and handling missing values. We focus on using the KL divergence and the Euclid...

723 | Tensor decompositions and applications
- Kolda, Bader
- 2009
Citation Context ...distinct names discussed in detail in recent tutorial reviews [9,1]. A recent book [5] outlined various optimization algorithms for non-negative TF for alpha and beta divergences. The idea of sparse nonnegative TUCKER was discussed in [12]. Use of the probabilistic a...

442 | 2005): “Clustering with Bregman divergences
- Banerjee, Merugu, et al.
Citation Context ...with IS divergence also exist in [6]. Due to the duality between the Poisson likelihood and KL divergence, and between the Gaussian likelihood and Euclidean distance [3], solving the TF problem in (1) is equivalent to finding the ML solution of p(X|Z1:N) [6]. [Fig. 1: the DAG on the left is the graphical model of PLT...]
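The Poisson/KL duality invoked in this context can be checked numerically: the Poisson negative log-likelihood of an observation x under rate s equals the generalised KL divergence d_KL(x‖s) = x·log(x/s) − x + s plus a term that depends on x only. The snippet below is a small sanity check of that identity, with arbitrary example values.

```python
import math

def neg_log_poisson(x, s):
    """-log Poisson(x; s) = s - x*log(s) + log(x!)."""
    return s - x * math.log(s) + math.lgamma(x + 1)

def kl_div(x, s):
    """Generalised KL divergence between scalars x and s."""
    return x * math.log(x / s) - x + s

# The difference should be log(x!) - x*log(x) + x, independent of s,
# so minimising either objective over s gives the same solution.
x = 3
consts = [neg_log_poisson(x, s) - kl_div(x, s) for s in (0.5, 2.0, 7.0)]
```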

287 | Probabilistic matrix factorization.
- Salakhutdinov, Mnih
- 2008
Citation Context ...r non-negative TF for alpha and beta divergences. The idea of sparse nonnegative TUCKER was discussed in [12]. Use of the probabilistic approach for the matrix factorization (PMF) was presented by [13], while probabilistic nonnegative TF came out in [14]. However, all these works focus on isolated models. The motivation behind this paper is to pave the way to a unifying framework in which any arbitrary ...

100 | The N-way Toolbox for MATLAB.
- Andersson, Bro
- 2000
Citation Context ...tation used to unfold a multiway array into the matrix form. Following the Einstein convention, duplicate indices are summed over. The Khatri-Rao product and mode-n unfolding are implemented in the N-way Toolbox [2] as krb() and nshape(). [Table: equivalences between index notation and Matlab expressions, e.g. mode-1 unfolding X(1) ≡ nshape(X,1), transposition, vectorisation, ...]
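The two primitives mentioned in this context have straightforward NumPy equivalents. The sketch below is not the N-way Toolbox's krb()/nshape(); it is a minimal reimplementation under the usual row-major conventions, and the exact unfolding ordering is an assumption.

```python
import numpy as np

def unfold(X, mode):
    """Mode-n unfolding: bring axis `mode` to the front, flatten the rest."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Kronecker product of A (I x R) and B (J x R) -> (I*J x R)."""
    I, R = A.shape
    J, _ = B.shape
    # Broadcasting builds all pairwise products per column, then flattens.
    return (A[:, None, :] * B[None, :, :]).reshape(I * J, R)
```

Each column r of khatri_rao(A, B) equals np.kron(A[:, r], B[:, r]), which is the standard definition.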

86 | Towards a standardized notation and terminology in multiway analysis.
- Kiers
- 2000
Citation Context ...on’t contain each other. We define a set of ‘visible’ indices W ⊆ V and ‘invisible’ indices W̄ ⊆ V such that W ∪ W̄ = V and W ∩ W̄ = ∅. Example 1 (TUCKER3 Factorization). The TUCKER3 factorization [8,9] aims to find Zα for α = 1...4 that solve the following optimization problem, where in our notation the TUCKER3 model is given by N = 4, V = {p,q,r,i,j,k}, V1 = {i,p}, V2 = {j,q}, V3 = {k,r}, V4 = {p,...
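With the index sets listed in this context, the TUCKER3 model is X(i,j,k) ≈ Σ_{p,q,r} Z1(i,p) Z2(j,q) Z3(k,r) Z4(p,q,r), with Z4 as the core tensor. A minimal NumPy sketch of that reconstruction (the sizes and random factors are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
I, J, K, P, Q, R = 4, 5, 6, 2, 3, 2
Z1 = rng.random((I, P))      # factor over index pair V1 = {i, p}
Z2 = rng.random((J, Q))      # V2 = {j, q}
Z3 = rng.random((K, R))      # V3 = {k, r}
Z4 = rng.random((P, Q, R))   # core tensor, V4 = {p, q, r}

# Sum over the invisible indices p, q, r; visible indices i, j, k remain.
X_hat = np.einsum('ip,jq,kr,pqr->ijk', Z1, Z2, Z3, Z4)
```

The einsum subscript string is a direct transcription of the index notation: duplicated indices are summed, matching the Einstein convention used in the paper.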

82 | Unsupervised multiway data analysis: A literature survey.
- Acar, Yener
- 2007
Citation Context ...distinct names discussed in detail in recent tutorial reviews [9,1]. A recent book [5] outlined various optimization algorithms for non-negative TF for alpha and beta divergences. The idea of sparse nonnegative TUCKER was discussed in [12]. Use of the probabilistic a...

82 | Nonnegative Matrix and Tensor Factorizations,
- Cichocki, Zdunek, et al.
- 2009
Citation Context ...ssed in detail in recent tutorial reviews [9,1]. A recent book [5] outlined various optimization algorithms for non-negative TF for alpha and beta divergences. The idea of sparse nonnegative TUCKER was discussed in [12]. Use of the probabilistic approach, then, for ...

73 | Introductory Probability and Statistical Applications.
- Meyer
- 1970
Citation Context ... X(w) = ∑_{w̄∈W̄} S(w, w̄) (model estimate after augmentation, Eq. 6); M(w) = 0 if X(w) is missing and 1 otherwise (mask array, Eq. 7). Note that due to the reproductivity property of the Poisson and Gaussian distributions [11], the observation X(w) has the same type of distribution as S(w, w̄). Next, PLTF handles missing data smoothly via the following observation model [13,4]: p(X|S)p(S|Z1:N) = ∏ p(X(w)|S(w, w̄)) p(S...
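The reproductivity property used in this context says that a sum of independent Poisson variables is again Poisson, with rate equal to the sum of the rates; this is what makes X(w) = Σ_{w̄} S(w, w̄) Poisson when the S(w, w̄) are. A quick Monte Carlo sanity check (the rates and sample count are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
rates = np.array([0.5, 1.2, 2.3])           # rates over the invisible index
S = rng.poisson(rates, size=(100_000, 3))   # independent draws of S(w, w_bar)
X = S.sum(axis=1)                           # the observed aggregate X(w)

# For a Poisson(rates.sum()) variable, mean and variance both equal 4.0,
# so the empirical mean and variance of X should both be close to 4.0.
```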

67 | Bayesian inference for nonnegative matrix factorisation models.
- Cemgil
- 2009
Citation Context ...ng the KL divergence and the Euclidean metric to cover both unconstrained and non-negative decompositions. Our probabilistic treatment generalises the statistical treatment of NMF models described in [4,6]. 2 Tensor Factorization (TF) Model. Following the established jargon, we call an N-way array X ∈ X^{I1×I2×···×IN} simply a ‘tensor’. Here, In are finite index sets, where in is the corresponding index. W...

56 | A unified view of matrix factorization models
- Singh, Gordon
- 2008
Citation Context ...any disciplines. Yet, in order to extract useful information, effective and efficient computational tools are needed. In this context, matrix factorisation techniques have emerged as a useful paradigm [10,15]. Clustering, ICA, NMF, LSI, collaborative filtering and many such methods can be expressed and understood as matrix factorisation problems. Thinking of a matrix as the basic data structure maps well ...

13 | Nonnegative matrix factorisations as probabilistic inference in composite models.
- Fevotte, Cemgil
- 2009
Citation Context ...ng the KL divergence and the Euclidean metric to cover both unconstrained and non-negative decompositions. Our probabilistic treatment generalises the statistical treatment of NMF models described in [4,6]. 2 Tensor Factorization (TF) Model. Following the established jargon, we call an N-way array X ∈ X^{I1×I2×···×IN} simply a ‘tensor’. Here, In are finite index sets, where in is the corresponding index. W...

4 | Probabilistic non-negative tensor factorisation using Markov chain Monte Carlo.
- Schmidt, Mohamed
- 2009
Citation Context ...he idea of sparse nonnegative TUCKER was discussed in [12]. Use of the probabilistic approach for the matrix factorization (PMF) was presented by [13], while probabilistic nonnegative TF came out in [14]. However, all these works focus on isolated models. The motivation behind this paper is to pave the way to a unifying framework in which any arbitrary TF structure can be realized and the associated inf...

2 | Algorithms for sparse non-negative TUCKER.
- Mørup, Hansen, et al.
Citation Context ...orisation tutorial reviews [9,1]. A recent book [5] outlined various optimization algorithms for non-negative TF for alpha and beta divergences. The idea of sparse nonnegative TUCKER was discussed in [12]. Use of the probabilistic approach for the matrix factorization (PMF) was presented by [13], while probabilistic nonnegative TF came out in [14]. However, all these works focus on isolated models. T...