Results 1–10 of 359,481

The Probabilistic Set Covering Problem
Operations Research, 2001. Cited by 12 (0 self).
"In a probabilistic set covering problem the right-hand side is a random binary vector and the covering constraint has to be satisfied with some prescribed probability. We analyse the structure of the set of probabilistically efficient points of binary random vectors, develop methods for their enumeration ..."

Probabilistic Set Covering with Correlations
2011.
"We consider a probabilistic set covering problem where there is uncertainty regarding whether a selected set can cover an item, and the objective is to determine a minimum-cost combination of sets so that each item is covered with a prespecified probability. To date, literature on this problem has ..."

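The chance-constrained model this snippet describes can be illustrated with a minimal sketch under the simplifying assumption that coverage events are independent (the correlations the paper targets are exactly what this baseline ignores). All names and data below are illustrative, not from the paper:

```python
from itertools import combinations


def coverage_prob(selection, q_row):
    """P(item is covered) = 1 - prod_{j in selection} (1 - q_ij),
    assuming independent coverage events."""
    p_miss = 1.0
    for j in selection:
        p_miss *= 1.0 - q_row[j]
    return 1.0 - p_miss


def min_cost_probabilistic_cover(cost, q, p):
    """Brute-force the cheapest selection of sets such that every item i
    is covered with probability >= p.  q[i][j] = P(set j covers item i)."""
    n_sets = len(cost)
    best = (float("inf"), None)
    for r in range(1, n_sets + 1):
        for sel in combinations(range(n_sets), r):
            if all(coverage_prob(sel, row) >= p for row in q):
                c = sum(cost[j] for j in sel)
                if c < best[0]:
                    best = (c, sel)
    return best


# two items, three candidate sets
cost = [3.0, 2.0, 2.0]
q = [[0.9, 0.5, 0.0],   # item 0: coverage probability under each set
     [0.0, 0.5, 0.9]]   # item 1
print(min_cost_probabilistic_cover(cost, q, 0.8))  # → (5.0, (0, 2))
```

Note that under independence, `1 - prod(1 - q_ij) >= p` can be log-transformed into a linear covering constraint, which is why the uncorrelated case reduces to a deterministic set covering problem; the brute force above is only for exposition.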
USets as probabilistic sets
2003. Cited by 1 (0 self).
"Using topos theory I prove that reasoning about probabilities can be formalized with only one simple assumption: given two sets of measures A, B, if B ⊆ A, then B is less imprecise than A."

Similarity Query Processing for Probabilistic Sets
"... Abstract — Evaluating similarity between sets is a fundamental task in computer science. However, there are many applications in which elements in a set may be uncertain due to various reasons. Existing work on modeling such probabilistic sets and computing their similarities suffers from huge model ..."
Abstract
 Add to MetaCart
Abstract — Evaluating similarity between sets is a fundamental task in computer science. However, there are many applications in which elements in a set may be uncertain due to various reasons. Existing work on modeling such probabilistic sets and computing their similarities suffers from huge
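The snippet does not say which similarity measure is meant; a common choice for probabilistic sets is the expected Jaccard similarity under independent element membership. A Monte Carlo sketch under that assumption (function and variable names are mine, not the paper's):

```python
import random


def expected_jaccard(pa, pb, trials=20000, seed=0):
    """Monte Carlo estimate of E[|A ∩ B| / |A ∪ B|] for two probabilistic
    sets over the same universe: pa[i] and pb[i] are the independent
    membership probabilities of element i in A and B.  The Jaccard
    similarity of two empty sets is taken to be 1."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        inter = union = 0
        for p, q in zip(pa, pb):
            in_a = rng.random() < p
            in_b = rng.random() < q
            inter += in_a and in_b
            union += in_a or in_b
        total += inter / union if union else 1.0
    return total / trials


# two probabilistic sets over a 3-element universe
print(expected_jaccard([1.0, 0.8, 0.1], [1.0, 0.7, 0.9]))
```

Exact computation is possible via dynamic programming over the distribution of (|A ∩ B|, |A ∪ B|); sampling keeps the sketch short.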
Probabilistic Principal Component Analysis
Journal of the Royal Statistical Society, Series B, 1999. Cited by 703 (5 self).
"Principal component analysis (PCA) is a ubiquitous technique for data analysis and processing, but one which is not based upon a probability model. In this paper we demonstrate how the principal axes of a set of observed data vectors may be determined through maximum-likelihood estimation of parameters ..."

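The maximum-likelihood estimate referred to here has a closed form in terms of the eigendecomposition of the sample covariance: the noise variance σ² is the average discarded eigenvalue and the weight matrix is W = U_q (Λ_q − σ²I)^{1/2}. A NumPy sketch of that closed form (variable names are mine):

```python
import numpy as np


def ppca_ml(X, q):
    """Closed-form maximum-likelihood PPCA fit: returns (W, sigma2, mu)
    such that the model covariance is W @ W.T + sigma2 * I."""
    n, d = X.shape
    mu = X.mean(axis=0)
    S = np.cov(X, rowvar=False, bias=True)              # sample covariance
    eigvals, eigvecs = np.linalg.eigh(S)                # ascending order
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]  # make descending
    sigma2 = eigvals[q:].mean()                         # avg discarded eigenvalue
    W = eigvecs[:, :q] * np.sqrt(np.maximum(eigvals[:q] - sigma2, 0.0))
    return W, sigma2, mu


rng = np.random.default_rng(0)
# noisy rank-2 data embedded in 5 dimensions
X = rng.normal(size=(500, 2)) @ rng.normal(size=(2, 5)) + 0.1 * rng.normal(size=(500, 5))
W, sigma2, mu = ppca_ml(X, q=2)
C = W @ W.T + sigma2 * np.eye(5)                        # model covariance
# the top-q eigenvalues of the model covariance match the data's
print(np.allclose(np.sort(np.linalg.eigvalsh(C))[-2:],
                  np.sort(np.linalg.eigvalsh(np.cov(X, rowvar=False, bias=True)))[-2:]))
```

The identity checked at the end holds by construction: in the retained directions the model covariance has eigenvalues (λ_i − σ²) + σ² = λ_i.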
Learning probabilistic relational models
In IJCAI, 1999. Cited by 619 (31 self).
"A large portion of real-world data is stored in commercial relational database systems. In contrast, most statistical learning methods work only with "flat" data representations. Thus, to apply these methods, we are forced to convert our data into a flat form, thereby losing much of the relational structure present in our database. This paper builds on the recent work on probabilistic relational models (PRMs), and describes how to learn them from databases. PRMs allow the properties of an object to depend probabilistically both on other properties of that object and on properties of related ..."

Probabilistic Latent Semantic Indexing
1999. Cited by 1207 (11 self).
"Probabilistic Latent Semantic Indexing is a novel approach to automated document indexing which is based on a statistical latent class model for factor analysis of count data. Fitted from a training corpus of text documents by a generalization of the Expectation Maximization algorithm, the utilized ..."

Probabilistic Latent Semantic Analysis
In Proc. of Uncertainty in Artificial Intelligence, UAI’99, 1999. Cited by 760 (9 self).
"Probabilistic Latent Semantic Analysis is a novel statistical technique for the analysis of two-mode and co-occurrence data, which has applications in information retrieval and filtering, natural language processing, machine learning from text, and in related areas. Compared to standard Latent Semantic Analysis ..."

Rational decisions in non-probabilistic settings
CUNY Ph.D. Program in Computer Science, 2009. Cited by 2 (0 self).
"The knowledge-based rational decision model (KBR-model), developed in [1], offers an approach to rational decision making in a non-probabilistic setting, e.g., in perfect information games with deterministic payoffs. The KBR-model is an epistemically explicit form of standard game-theoretical assumptions ..."

Mixtures of Probabilistic Principal Component Analysers
1998. Cited by 537 (6 self).
"Principal component analysis (PCA) is one of the most popular techniques for processing, compressing and visualising data, although its effectiveness is limited by its global linearity. While nonlinear variants of PCA have been proposed, an alternative paradigm is to capture data complexity by a combination ... maximum-likelihood framework, based on a specific form of Gaussian latent variable model. This leads to a well-defined mixture model for probabilistic principal component analysers, whose parameters can be determined using an EM algorithm. We discuss the advantages of this model in the context ..."