Results 1 - 10 of 61,653
Validation of imprecise probability models
"... Abstract: Validation is the assessment of the match between a model’s predictions and any empirical observations relevant to those predictions. This comparison is straightforward when the data and predictions are deterministic, but is complicated when either or both are expressed in terms of uncerta ..."
Abstract
-
Cited by 4 (2 self)
- Add to MetaCart
of uncertain numbers (i.e., intervals, probability distributions, p-boxes, or more general imprecise probability structures). There are two obvious ways such comparisons might be conceptualized. Validation could measure the discrepancy between the shapes of the uncertain numbers representing prediction
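A rough illustration of the first conception (measuring shape discrepancy) follows: a p-box is encoded as pointwise CDF bounds on a grid, and the mismatch is the area by which the data’s empirical CDF escapes that band. The function name, the band construction, and the area metric are illustrative assumptions, not the paper’s definitions.

```python
import numpy as np
from scipy.stats import norm

def pbox_validation_gap(grid, lower_cdf, upper_cdf, observations):
    """Area by which the empirical CDF of the data escapes the p-box band.

    The p-box is encoded as pointwise CDF bounds over `grid`; the result
    is zero exactly when the empirical CDF lies inside the band.
    """
    obs = np.sort(np.asarray(observations, dtype=float))
    ecdf = np.searchsorted(obs, grid, side="right") / obs.size
    # Pointwise excursion below the lower bound or above the upper bound.
    excursion = np.maximum(ecdf - upper_cdf, 0) + np.maximum(lower_cdf - ecdf, 0)
    # Trapezoidal integration of the excursion over the grid.
    return float(np.sum(0.5 * (excursion[:-1] + excursion[1:]) * np.diff(grid)))

# A p-box bracketing a unit normal, checked against a simulated sample.
grid = np.linspace(-4, 4, 401)
lower, upper = norm.cdf(grid, loc=0.5), norm.cdf(grid, loc=-0.5)
sample = np.random.default_rng(0).normal(size=200)
print(pbox_validation_gap(grid, lower, upper, sample))  # ~0 for a good match
```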
The role of generalised p-boxes in imprecise probability models
ISIPTA’09: Proceedings of the Sixth International Symposium on Imprecise Probability: Theories and Applications, 2009
"... Recently, we have introduced an uncertainty representation generalising imprecise cumulative distributions to any (pre-)ordered space as well as possibility distributions: generalised p-boxes. This representation has many attractive features, as it remains quite simple while having ..."
Cited by 3 (3 self)
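As a concrete picture of the representation, the sketch below encodes a generalised p-box on a finite preordered space as a pair of monotone bounds indexed by rank; the names and the consistency check are assumptions for illustration, not the authors’ formalism.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class GeneralisedPBox:
    """Toy generalised p-box on a finite (pre-)ordered space.

    `rank` orders the elements (ties model a genuine preorder), while
    `lower`/`upper` bound the probability of each down set
    {y : rank(y) <= rank(x)}, playing the role of imprecise CDF values.
    """
    rank: Dict[str, int]
    lower: Dict[str, float]
    upper: Dict[str, float]

    def is_consistent(self) -> bool:
        # Bounds must be pointwise ordered and monotone along the ranking.
        xs: List[str] = sorted(self.rank, key=self.rank.get)
        ordered = all(self.lower[x] <= self.upper[x] for x in xs)
        monotone = all(self.lower[a] <= self.lower[b] and
                       self.upper[a] <= self.upper[b]
                       for a, b in zip(xs, xs[1:]))
        return ordered and monotone
```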
A Minimum Distance Estimator in an Imprecise Probability Model – Computational Aspects and Applications
"... The present article considers estimating a parameter θ in an imprecise probability model (P θ)θ∈Θ which consists of coherent upper previsions P θ. After the definition of a minimum distance estimator in this setup and a summarization of its main properties, the focus lies on applications. It is show ..."
Abstract
-
Cited by 1 (0 self)
- Add to MetaCart
The present article considers estimating a parameter θ in an imprecise probability model (P θ)θ∈Θ which consists of coherent upper previsions P θ. After the definition of a minimum distance estimator in this setup and a summarization of its main properties, the focus lies on applications
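In that spirit, here is a grid-search sketch of a minimum distance estimator: candidate parameters are scored by the largest amount an empirical event frequency exceeds the model’s upper probability. The `upper_prob` interface and the one-sided sup distance are assumptions for illustration, not the article’s definitions.

```python
import numpy as np

def minimum_distance_estimate(thetas, upper_prob, events, data):
    """Pick the theta whose upper probabilities best cover the data.

    `events` are indicator functions; `upper_prob(theta, event)` is an
    assumed interface returning the model's upper probability of an event.
    """
    freqs = np.array([np.mean([event(x) for x in data]) for event in events])

    def distance(theta):
        uppers = np.array([upper_prob(theta, event) for event in events])
        # Only frequencies exceeding the upper bound count as discrepancy.
        return float(np.max(np.maximum(freqs - uppers, 0.0)))

    return min(thetas, key=distance)
```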
Imprecise probability models for inference in exponential families
4th International Symposium on Imprecise Probabilities and Their Applications, Pittsburgh, Pennsylvania, 2005
"... When considering sampling models described by a distribution from an exponential family, it is possible to create two types of imprecise probability models. One is based on the corresponding conjugate distribution and the other on the corresponding predictive distribution. In this paper, we show how ..."
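Walley’s imprecise Dirichlet (here, Beta) model is the best-known instance of the conjugate construction and gives a feel for it: a set of conjugate priors of fixed prior strength s yields lower and upper posterior probabilities. The sketch below is that standard textbook example, not necessarily this paper’s construction.

```python
def imprecise_beta_bounds(successes, trials, s=2.0):
    """Lower/upper posterior success probabilities under the set of
    priors Beta(s*t, s*(1-t)) with t ranging over (0, 1).

    With the prior strength s fixed, the posterior means sweep out the
    interval below as t varies - the imprecise Dirichlet model in the
    binary case.  s=2 is a conventional, not canonical, choice.
    """
    lower = successes / (trials + s)          # limit as t -> 0
    upper = (successes + s) / (trials + s)    # limit as t -> 1
    return lower, upper

print(imprecise_beta_bounds(7, 10))   # (0.5833..., 0.75)
```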
Estimation of probabilities from sparse data for the language model component of a speech recognizer
IEEE Transactions on Acoustics, Speech and Signal Processing, 1987
"... The description of a novel type of m-gram language model is given. The model offers, via a nonlinear recursive procedure, a computation- and space-efficient solution to the problem of estimating probabilities from sparse data. This solution compares favorably to other proposed methods. ..."
Cited by 799 (2 self)
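The recursive flavor of such an estimator can be sketched as follows: seen n-grams get discounted relative frequencies, and the freed mass is redistributed through a recursive call on a shortened history. The constant discount and the add-one floor are simplifications for illustration, not the published procedure.

```python
def backoff_prob(word, context, counts, vocab, discount=0.5):
    """Illustrative recursive back-off estimate of P(word | context).

    `counts` maps n-gram tuples of every order to frequencies.  Seen
    n-grams get a discounted relative frequency; unseen ones back off
    to the shortened context, scaled so the distribution sums to one.
    """
    if not context:                                   # unigram base case
        total = sum(counts.get((w,), 0) for w in vocab) + len(vocab)
        return (counts.get((word,), 0) + 1) / total   # add-one at the bottom
    hist = counts.get(tuple(context), 0)
    full = counts.get(tuple(context) + (word,), 0)
    if full > 0:
        return (full - discount) / hist               # discounted ML estimate
    # Mass freed by discounting the seen continuations of this history.
    seen = [w for w in vocab if counts.get(tuple(context) + (w,), 0) > 0]
    freed = 1.0 if hist == 0 else discount * len(seen) / hist
    unseen_mass = sum(backoff_prob(w, context[1:], counts, vocab, discount)
                      for w in vocab if w not in seen)
    return freed * backoff_prob(word, context[1:], counts, vocab, discount) / unseen_mass
```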
Verb Semantics And Lexical Selection
1994
"... structure. As Levin has addressed (Levin 1985), the decomposition of verbs is proposed for the purposes of accounting for systematic semantic-syntactic correspondences. This results in a series of problems for MT systems: inflexible verb sense definitions; difficulty in handling metaphor and new usages; imprecise lexical selection and insufficient system coverage. It seems one approach is to apply probability methods and statistical models for some of these problems. However, the question remains: has PSR exhausted the potential of the knowledge-based approach? If not, are there any ..."
Cited by 551 (4 self)
Exploiting Generative Models in Discriminative Classifiers
In Advances in Neural Information Processing Systems 11, 1998
"... Generative probability models such as hidden Markov models provide a principled way of treating missing information and dealing with variable length sequences. On the other hand, discriminative methods such as support vector machines enable us to construct flexible decision boundaries and often ..."
Cited by 551 (9 self)
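The paper’s central device, the Fisher kernel, combines the two views: each example is mapped to the gradient of the generative model’s log-likelihood, and those vectors feed a discriminative classifier. A minimal sketch, with the `log_lik_grad` interface assumed and the Fisher information matrix replaced by the identity for brevity:

```python
import numpy as np

def fisher_scores(log_lik_grad, data, theta):
    """Map each example x to its Fisher score U_x = grad_theta log p(x|theta).

    `log_lik_grad(x, theta)` is an assumed interface into the fitted
    generative model; the rows of the result are discriminative features.
    """
    return np.stack([log_lik_grad(x, theta) for x in data])

def fisher_kernel(u, v):
    """Simplest variant: inner product of scores, identity metric."""
    return float(u @ v)
```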
Nonparametric model for background subtraction
In ECCV ’00, 2000
"... Background subtraction is a method typically used to segment moving regions in image sequences taken from a static camera by comparing each new frame to a model of the scene background. We present a novel non-parametric background model and a background subtraction approach. The model can handle situations where the background of the scene is cluttered and not completely static but contains small motions such as tree branches and bushes. The model estimates the probability of observing pixel intensity values based on a sample of intensity values for each pixel. The model adapts ..."
Cited by 545 (17 self)
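The per-pixel estimate the abstract describes is a kernel density estimate; a minimal sketch, with Gaussian kernels and an illustrative bandwidth rather than the paper’s adaptive choices:

```python
import numpy as np

def background_prob(history, value, bandwidth=15.0):
    """KDE of P(intensity = value) from one pixel's recent samples.

    Each stored intensity contributes a Gaussian kernel; a pixel in a
    new frame is flagged as foreground when this probability falls
    below a threshold (threshold and bandwidth are illustrative).
    """
    z = (value - np.asarray(history, dtype=float)) / bandwidth
    return float(np.mean(np.exp(-0.5 * z**2) / (bandwidth * np.sqrt(2 * np.pi))))

history = [100, 102, 99, 101, 98, 103]   # recent samples for one pixel
print(background_prob(history, 101))     # high: consistent with background
print(background_prob(history, 180))     # near zero: likely foreground
```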
Markov Random Field Models in Computer Vision
1994
"... A variety of computer vision problems can be optimally posed as Bayesian labeling in which the solution of a problem is defined as the maximum a posteriori (MAP) probability estimate of the true labeling. The posterior probability is usually derived from a prior model and a likelihood model. ..."
Cited by 516 (18 self)
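Written out, the decomposition the snippet mentions is the standard Bayes/Gibbs form, with d the data, f the labeling, V_c the clique potentials, and Z the partition function (standard MRF notation, not anything specific to this survey):

```latex
\hat{f} \;=\; \arg\max_{f} P(f \mid d)
        \;=\; \arg\max_{f}\, P(d \mid f)\, P(f),
\qquad
P(f) \;=\; \frac{1}{Z}\exp\!\Big(-\sum_{c \in \mathcal{C}} V_c(f)\Big).
```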
Maximum entropy Markov models for information extraction and segmentation
2000
"... Hidden Markov models (HMMs) are a powerful probabilistic tool for modeling sequential data, and have been applied with success to many text-related tasks, such as part-of-speech tagging, text segmentation and information extraction. In these cases, the observations are usually modeled as multinomial ... capitalization, formatting, part-of-speech), and defines the conditional probability of state sequences given observation sequences. It does this by using the maximum entropy framework to fit a set of exponential models that represent the probability of a state given an observation and the previous state. ..."
Cited by 561 (18 self)
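The exponential models mentioned at the end take the familiar maximum entropy form: for each previous state s′, the next-state distribution given observation o is (notation reconstructed from the standard MEMM formulation, not quoted from the paper):

```latex
P_{s'}(s \mid o) \;=\; \frac{1}{Z(o, s')}\,
\exp\!\Big(\sum_{a} \lambda_a\, f_a(o, s)\Big),
```

where the f_a are binary features of the observation and the next state, the λ_a are learned weights, and Z(o, s′) normalizes over states.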