Results 1–10 of 1,203
Latent Dirichlet allocation
Journal of Machine Learning Research, 2003
"... We describe latent Dirichlet allocation (LDA), a generative probabilistic model for collections of discrete data such as text corpora. LDA is a three-level hierarchical Bayesian model, in which each item of a collection is modeled as a finite mixture over an underlying set of topics. Each topic is ..."
Cited by 4365 (92 self)
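The three-level generative process the LDA abstract describes (each document a finite mixture over topics, each topic a distribution over words) can be sketched in a few lines; the vocabulary size, topic count, and Dirichlet parameters below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_topics, vocab_size, doc_len = 3, 20, 50

# Dirichlet priors: sparse topic mixtures per document, sparse word
# distributions per topic (hypothetical values, chosen for illustration).
alpha = np.full(n_topics, 0.1)
beta = np.full(vocab_size, 0.01)

# Each topic is a distribution over the whole vocabulary.
topics = rng.dirichlet(beta, size=n_topics)

def generate_document(length):
    """Sample one document from the three-level generative model."""
    theta = rng.dirichlet(alpha)                     # per-document topic mixture
    z = rng.choice(n_topics, size=length, p=theta)   # topic assignment per word
    return [int(rng.choice(vocab_size, p=topics[k])) for k in z]

doc = generate_document(doc_len)
```

Inference in LDA goes the other way (recovering topics from observed documents), but the sampling direction above is the model the abstract names.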
A solution to Plato’s problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge
Psychological Review, 1997
"... How do people know as much as they do with as little information as they get? The problem takes many forms; learning vocabulary from text is an especially dramatic and convenient case for research. A new general theory of acquired similarity and knowledge representation, latent semantic analysis (LS ..."
Cited by 1816 (10 self)
Mixtures of Probabilistic Principal Component Analysers
1998
"... Principal component analysis (PCA) is one of the most popular techniques for processing, compressing and visualising data, although its effectiveness is limited by its global linearity. While nonlinear variants of PCA have been proposed, an alternative paradigm is to capture data complexity by a com ..."
Cited by 532 (6 self)
maximum-likelihood framework, based on a specific form of Gaussian latent variable model. This leads to a well-defined mixture model for probabilistic principal component analysers, whose parameters can be determined using an EM algorithm. We discuss the advantages of this model in the context
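The mixture described builds on the closed-form maximum-likelihood solution for a single probabilistic PCA component, which can be sketched as follows; the toy data and dimensions are illustrative, and the paper itself fits several such components jointly with EM rather than one in closed form.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 200 points in 5 dimensions with most variance in 2 directions.
X = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 5)) + 0.1 * rng.normal(size=(200, 5))
X = X - X.mean(axis=0)

d, q = X.shape[1], 2              # ambient and latent dimensionality
S = X.T @ X / X.shape[0]          # sample covariance

# Eigendecomposition of the covariance, sorted by decreasing eigenvalue.
eigvals, eigvecs = np.linalg.eigh(S)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Maximum-likelihood PPCA: the noise variance is the mean of the
# discarded eigenvalues, and the loading matrix scales the leading
# eigenvectors by the signal variance they carry beyond the noise floor.
sigma2 = eigvals[q:].mean()
W = eigvecs[:, :q] @ np.diag(np.sqrt(eigvals[:q] - sigma2))
```

Replacing one global `W` with a set of local ones, each responsible for a region of the data, is what turns this into the mixture model the snippet describes.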
Joint Information Extraction and Reasoning: A Scalable Statistical Relational Learning Approach
"... A standard pipeline for statistical relational learning involves two steps: one first constructs the knowledge base (KB) from text, and then performs the learning and reasoning tasks using probabilistic first-order logics. However, a key issue is that information extraction (IE) errors from tex ..."
probabilistic logic framework. We then propose a latent context invention (LCI) approach to improve the performance. In experiments, we show that our approach outperforms state-of-the-art baselines over three real-world Wikipedia datasets from multiple domains; that joint learning and inference for IE and SL
Probabilistic nonlinear principal component analysis with Gaussian process latent variable models
Journal of Machine Learning Research, 2005
"... Summarising a high dimensional data set with a low dimensional embedding is a standard approach for exploring its structure. In this paper we provide an overview of some existing techniques for discovering such embeddings. We then introduce a novel probabilistic interpretation of principal component ..."
Cited by 229 (24 self)
component analysis (PCA) that we term dual probabilistic PCA (DPPCA). The DPPCA model has the additional advantage that the linear mappings from the embedded space can easily be nonlinearised through Gaussian processes. We refer to this model as a Gaussian process latent variable model (GPLVM). Through
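The starting point that DPPCA and the GPLVM generalize is the ordinary linear embedding of PCA, sketched here on illustrative noiseless toy data; the sizes are hypothetical and nothing below is from the paper itself.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy high-dimensional data: 100 points in 10-D with exact 2-D latent structure.
latent = rng.normal(size=(100, 2))
Y = latent @ rng.normal(size=(2, 10))
Y = Y - Y.mean(axis=0)

# Linear embedding via SVD: the mapping the GPLVM makes nonlinear by
# placing a Gaussian process prior over the function from latent space.
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
X_embed = U[:, :2] * s[:2]        # 2-D embedding of each data point

# For this noiseless rank-2 data, the rank-2 reconstruction is exact.
Y_hat = X_embed @ Vt[:2]
```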
On Learning, Representing and Generalizing a Task in a Humanoid Robot
IEEE Transactions on Systems, Man and Cybernetics, Part B (Special Issue), 2007
"... We present a programming-by-demonstration framework for generically extracting the relevant features of a given task and for addressing the problem of generalizing the acquired knowledge to different contexts. We validate the architecture through a series of experiments, in which a human demonstra ..."
Cited by 239 (48 self)
Development of reading-related phonological processing abilities: New evidence of bidirectional causality from a latent variable longitudinal study
Developmental Psychology, 1994
"... Results from a longitudinal correlational study of 244 children from kindergarten through 2nd grade indicate that young children's phonological processing abilities are well-described by 5 correlated latent abilities: phonological analysis, phonological synthesis, phonological coding in working ..."
Cited by 168 (6 self)
Random Indexing of Text Samples for Latent Semantic Analysis
In Proceedings of the 22nd Annual Conference of the Cognitive Science Society, 2000
"... SVD, the result is not nearly as good: only 36% correct. The authors conclude that the reorganization of information by SVD somehow corresponds to human psychology. We have studied high-dimensional random distributed representations, as models of brain-like representation of information (Kanerva, 199 ..."
Cited by 123 (4 self)
, 1994; Kanerva & Sjodin, 1999). In this poster we report on the use of such a representation to reduce the dimensionality of the original words-by-contexts matrix. The method can be explained by looking at the 60,000 × 30,000 matrix of frequencies above. Assume that each text sample
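The dimensionality-reduction step the snippet describes (replacing SVD with sparse random index vectors) can be sketched as a random projection; the matrix sizes and sparsity below are illustrative stand-ins, not the 60,000 × 30,000 matrix or the parameters used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

n_words, n_contexts, dim = 1000, 500, 100   # toy sizes for illustration

# Each context gets a sparse ternary "index vector": a handful of
# randomly placed +1/-1 entries in a high-dimensional vector.
R = np.zeros((n_contexts, dim))
for c in range(n_contexts):
    idx = rng.choice(dim, size=8, replace=False)
    R[c, idx] = rng.choice([-1.0, 1.0], size=8)

# Toy words-by-contexts frequency matrix.
F = rng.poisson(0.05, size=(n_words, n_contexts)).astype(float)

# Random indexing: each word's vector is the frequency-weighted sum of
# the index vectors of the contexts it occurs in, i.e. a random
# projection of F down to `dim` dimensions -- no SVD required.
word_vectors = F @ R
```

Because the index vectors are nearly orthogonal, distances between rows of `word_vectors` approximately preserve distances between rows of `F`, which is what makes the projection a usable substitute for SVD here.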
A Neural Network Approach to Topic Spotting
1995
"... This paper presents an application of nonlinear neural networks to topic spotting. Neural networks allow us to model higher-order interaction between document terms and to simultaneously predict multiple topics using shared hidden features. In the context of this model, we compare two approaches to d ..."
Cited by 188 (1 self)
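A minimal sketch of the architecture this abstract describes, one shared hidden layer feeding independent sigmoid topic outputs so that all topics draw on the same hidden features; the layer sizes and random (untrained) weights are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

n_terms, n_hidden, n_topics = 300, 16, 5   # illustrative sizes

# One shared hidden layer; each topic output reads the same hidden
# features, which is how the model shares evidence across topics.
W1 = rng.normal(scale=0.1, size=(n_terms, n_hidden))
W2 = rng.normal(scale=0.1, size=(n_hidden, n_topics))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_topics(doc_terms):
    """Map a term-frequency vector to independent per-topic probabilities."""
    hidden = np.tanh(doc_terms @ W1)   # shared nonlinear hidden features
    return sigmoid(hidden @ W2)        # one sigmoid per topic (multi-label)

probs = predict_topics(rng.poisson(0.1, size=n_terms).astype(float))
```

Using one sigmoid per topic, rather than a single softmax, is what lets the network assign several topics to the same document at once.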
Self-determination and persistence in a real-life setting: Toward a motivational model of high school dropout.
Journal of Personality and Social Psychology, 1997
"... The purpose of this study was to propose and test a motivational model of high school dropout. The model posits that teachers, parents, and the school administration's behaviors toward students influence students' perceptions of competence and autonomy. The less autonomy supportive the so ..."
Cited by 183 (19 self)
have been found to distinguish themselves clearly in factor analyses and to display adequate levels of reliability (Ryan & Connell, 1989, Study 1). Research reveals that the social context in education can have an important influence on motivation