Results 1 – 10 of 3,096,070
Relative Probabilities, 1997.
Abstract: In this paper a theory of relative probability measures (RPMs) is developed and related to the standard theory of probability measures. An RPM assigns a non-negative real or infinity to any pair of events. This number should be interpreted as relative probability. An RPM distinguishes ...
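To make the kind of object described concrete, here is one hedged illustration (my own notation, not the paper's definition): a relative probability of event A to event B induced by an ordinary probability measure P could be written as

\[
R(A, B) \;=\;
\begin{cases}
\dfrac{P(A)}{P(B)} & \text{if } P(B) > 0,\\[4pt]
+\infty & \text{if } P(A) > 0 \text{ and } P(B) = 0,
\end{cases}
\]

with an extra convention still needed for the case P(A) = P(B) = 0, which is presumably one place where a standalone theory of RPMs goes beyond plain ratios of probabilities.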
Learning relational probability trees. In Proceedings of the ACM International Conference on Knowledge Discovery and Data Mining (SIGKDD), 2003. Cited by 147 (36 self).
Abstract: Classification trees are widely used in the machine learning and data mining communities for modeling propositional data. Recent work has extended this basic paradigm to probability estimation trees. Traditional tree learning algorithms assume that instances in the training data are homogeneous and independently distributed. Relational probability trees (RPTs) extend standard probability estimation trees to a relational setting in which data instances are heterogeneous and interdependent. Our algorithm for learning the structure and parameters of an RPT searches over a space of relational features ...
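As a hedged sketch of the two ingredients named in this abstract, the toy Python below shows (a) a "relational feature" that aggregates an attribute over an instance's linked neighbors and (b) a probability-estimation leaf that reports class frequencies; the graph layout, attribute names, and aggregation choices are hypothetical and are not taken from the paper's algorithm.

    from collections import Counter

    def relational_feature(instance, graph, attr, agg):
        # Aggregate an attribute over the instance's linked neighbors,
        # e.g. count, mean, or max of a neighbor attribute.
        values = [n[attr] for n in graph.get(instance, []) if attr in n]
        return agg(values) if values else 0.0

    def leaf_probabilities(labels):
        # Probability estimates at a tree leaf: relative class frequencies.
        counts = Counter(labels)
        total = sum(counts.values())
        return {cls: c / total for cls, c in counts.items()}

    # Hypothetical data: pages linked to other pages carrying a 'length' attribute.
    graph = {"page1": [{"length": 200}, {"length": 50}], "page2": [{"length": 10}]}
    mean_len = relational_feature("page1", graph, "length", lambda v: sum(v) / len(v))
    print(mean_len)                                   # 125.0
    print(leaf_probabilities(["spam", "ok", "ok"]))   # {'spam': 0.33..., 'ok': 0.66...}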
Learning probabilistic relational models. In IJCAI, 1999. Cited by 619 (31 self).
Abstract: A large portion of real-world data is stored in commercial relational database systems. In contrast, most statistical learning methods work only with "flat" data representations. Thus, to apply these methods, we are forced to convert our data into a flat form, thereby losing much ...
Gradient flows in metric spaces and in the space of probability measures. Lectures in Mathematics ETH Zürich, Birkhäuser Verlag, 2005.
Self-discrepancy: A theory relating self and affect. Psychological Review, 1987. Cited by 567 (7 self).
Abstract: This article presents a theory of how different types of discrepancies between self-state representations are related to different kinds of emotional vulnerabilities. One domain of the self (actual; ideal; ought) and one standpoint on the self (own; significant other) constitute each type of self ...
Extracting Relations from Large Plain-Text Collections, 2000. Cited by 480 (25 self).
Abstract: Text documents often contain valuable structured data that is hidden in regular English sentences. This data is best exploited if available as a relational table that we could use for answering precise queries or for running data mining tasks. We explore a technique for extracting such tables ...
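The abstract is cut off before the technique itself is described, so the snippet below is only a toy illustration of the general task, turning sentences into rows of a relation, and does not reflect the paper's actual method; the pattern, entity names, and relation (organization, location) are all hypothetical.

    import re

    # A single hand-written pattern for the hypothetical relation (organization, location).
    PATTERN = re.compile(r"(\b[A-Z]\w+),? (?:is )?(?:based|headquartered) in ([A-Z]\w+)")

    text = ("Microsoft is based in Redmond. "
            "Exxon, headquartered in Irving, reported earnings.")

    # Each match becomes one row of the extracted relational table.
    table = PATTERN.findall(text)
    print(table)   # [('Microsoft', 'Redmond'), ('Exxon', 'Irving')]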
Maximum entropy Markov models for information extraction and segmentation, 2000. Cited by 554 (18 self).
Abstract: Hidden Markov models (HMMs) are a powerful probabilistic tool for modeling sequential data, and have been applied with success to many text-related tasks, such as part-of-speech tagging, text segmentation and information extraction. In these cases, the observations are usually modeled as multinomial ...
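For orientation, the contrast implied by the title can be written in standard notation (this is the textbook formulation, not quoted from the paper): a generative HMM factors the joint distribution over observations and states as

\[
P(o_{1:T}, s_{1:T}) \;=\; \prod_{t=1}^{T} P(s_t \mid s_{t-1})\, P(o_t \mid s_t),
\]

whereas a maximum entropy Markov model conditions directly on the observations,

\[
P(s_{1:T} \mid o_{1:T}) \;=\; \prod_{t=1}^{T} P(s_t \mid s_{t-1}, o_t),
\]

with each factor given by a per-state exponential (maximum entropy) model over features of the observation and the next state.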
Singular Combinatorics. In ICM 2002, Vol. III, 2002. Cited by 820 (10 self).
Abstract: Combinatorial enumeration leads to counting generating functions presenting a wide variety of analytic types. Properties of generating functions at singularities encode valuable information regarding asymptotic counting and limit probability distributions present in large random structures ...
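A standard example of the phenomenon this abstract points to (a textbook illustration, not taken from the paper): the Catalan numbers have the generating function

\[
C(z) \;=\; \sum_{n \ge 0} C_n z^n \;=\; \frac{1 - \sqrt{1 - 4z}}{2z},
\]

whose dominant singularity at z = 1/4 is of square-root type; singularity analysis converts that local behaviour directly into the asymptotic count

\[
C_n \;\sim\; \frac{4^{\,n}}{\sqrt{\pi}\, n^{3/2}} .
\]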
Markov Random Field Models in Computer Vision, 1994. Cited by 515 (18 self).
Abstract: A variety of computer vision problems can be optimally posed as Bayesian labeling in which the solution of a problem is defined as the maximum a posteriori (MAP) probability estimate of the true labeling. The posterior probability is usually derived from a prior model and a likelihood model. The latter relates to how data is observed and is problem domain dependent. The former depends on how various prior constraints are expressed. Markov Random Field (MRF) theory is a tool to encode contextual constraints into the prior probability. This paper presents a unified approach for MRF modeling ...
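In the usual notation, the Bayesian labeling setup the abstract describes (written schematically here, under the standard Gibbs-form assumption for the MRF prior) is

\[
f^{*} \;=\; \arg\max_{f} P(f \mid d) \;=\; \arg\max_{f}\; p(d \mid f)\, P(f),
\qquad
P(f) \;=\; \frac{1}{Z} \exp\!\Big(-\sum_{c \in \mathcal{C}} V_c(f)\Big),
\]

where d is the observed data, f a candidate labeling, the sum runs over cliques c of the neighborhood system, and the clique potentials V_c encode the contextual constraints.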
Probabilistic Part-of-Speech Tagging Using Decision Trees, 1994. Cited by 1009 (9 self).
Abstract: In this paper, a new probabilistic tagging method is presented which avoids problems that Markov Model based taggers face when they have to estimate transition probabilities from sparse data. In this tagging method, transition probabilities are estimated using a decision tree. Based on this method ...
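To show the shape of the idea, the toy Python below hard-codes a decision tree over the two previous tags whose leaves hold smoothed transition probabilities, instead of reading them from a sparse count table; the tests, tag set, and numbers are invented for illustration and are not the paper's learned tree.

    def transition_prob(prev1, prev2, tag):
        # Each test splits the tag context; leaves hold probability estimates.
        if prev1 == "DET":
            leaf = {"NOUN": 0.70, "ADJ": 0.25, "VERB": 0.05}
        elif prev1 == "ADJ" and prev2 == "DET":
            leaf = {"NOUN": 0.85, "ADJ": 0.10, "VERB": 0.05}
        else:
            leaf = {"NOUN": 0.30, "VERB": 0.40, "DET": 0.30}
        return leaf.get(tag, 0.01)   # small floor instead of zero for unseen tags

    print(transition_prob("DET", "VERB", "NOUN"))   # 0.70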