Results 1–10 of 101,652
Bayesian hypothesis testing in machine learning
"... Abstract. Most hypothesis testing in machine learning is done using the frequentist nullhypothesis significance test, which has severe drawbacks. We review recent Bayesian tests which overcome the drawbacks of the frequentist ones. ..."
Abstract
 Add to MetaCart
Abstract. Most hypothesis testing in machine learning is done using the frequentist nullhypothesis significance test, which has severe drawbacks. We review recent Bayesian tests which overcome the drawbacks of the frequentist ones.
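As a hedged illustration of the Bayesian alternative this abstract alludes to, the sketch below computes a Bayes factor for a coin-fairness test. The point-null-versus-uniform-prior setup and the data are invented for illustration and are not taken from the paper.

```python
from math import comb

def bayes_factor_fair_coin(heads, flips):
    """Return BF01 = p(data | H0) / p(data | H1) for a coin-fairness test.

    H0: theta = 0.5 exactly; H1: theta ~ Uniform(0, 1), i.e. Beta(1, 1).
    """
    # Marginal likelihood under the point null: plain binomial probability.
    m0 = comb(flips, heads) * 0.5 ** flips
    # Under H1, integrating the binomial likelihood against a Beta(1, 1)
    # prior gives C(n, k) * B(k+1, n-k+1), which simplifies to 1 / (n + 1).
    m1 = 1.0 / (flips + 1)
    return m0 / m1

# 6 heads in 10 flips: BF01 > 1, i.e. the data mildly favour the fair coin,
# whereas a frequentist test could only fail to reject H0.
print(bayes_factor_fair_coin(6, 10))
```

Unlike a p-value, the Bayes factor can quantify evidence *for* the null as well as against it, which is one of the drawbacks of significance testing the review addresses.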
Bayesian Interpolation
Neural Computation, 1991. Cited by 721 (17 self).
Abstract: Although Bayesian analysis has been in use since Laplace, the Bayesian method of model comparison has only recently been developed in depth. In this paper, the Bayesian approach to regularisation and model comparison is demonstrated by studying the inference problem of interpolating noisy data. ...
Divergence-Based Priors for Bayesian Hypothesis Testing
2006.
Abstract: Perhaps the main difficulty for objective Bayesian hypothesis testing (and model selection in general) is that the usual objective improper priors cannot be used for parameters not occurring in all of the models. In this paper we introduce (objective) proper prior distributions for hypothesis testing ...
Bayesian Network Classifiers
, 1997. Cited by 788 (23 self).
Abstract: Recent work in supervised learning has shown that a surprisingly simple Bayesian classifier with strong assumptions of independence among features, called naive Bayes, is competitive with state-of-the-art classifiers such as C4.5. This fact raises the question of whether a classifier with less restrictive assumptions ...
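To make the abstract's "strong assumptions of independence among features" concrete, here is a minimal Gaussian naive Bayes sketch in plain Python; the two-feature toy data and the small variance floor are invented for illustration.

```python
from collections import defaultdict
from math import log, pi

def fit_gnb(X, y):
    """Estimate a per-class prior and per-feature (mean, variance)."""
    by_class = defaultdict(list)
    for row, label in zip(X, y):
        by_class[label].append(row)
    stats = {}
    for label, rows in by_class.items():
        n = len(rows)
        cols = list(zip(*rows))
        means = [sum(c) / n for c in cols]
        # A tiny floor keeps a zero-variance feature from dividing by zero.
        variances = [sum((v - m) ** 2 for v in c) / n + 1e-9
                     for c, m in zip(cols, means)]
        stats[label] = (n / len(X), means, variances)
    return stats

def predict_gnb(stats, x):
    """Pick the class with the largest posterior, computed in log space.

    The independence assumption is the sum over features: each feature
    contributes its own Gaussian log-density, ignoring correlations.
    """
    def log_posterior(label):
        prior, means, variances = stats[label]
        lp = log(prior)
        for v, m, s2 in zip(x, means, variances):
            lp += -0.5 * (log(2 * pi * s2) + (v - m) ** 2 / s2)
        return lp
    return max(stats, key=log_posterior)

X = [(1.0, 1.2), (0.9, 1.1), (1.1, 0.9),
     (5.0, 5.2), (4.9, 5.1), (5.1, 4.8)]
y = ["a", "a", "a", "b", "b", "b"]
model = fit_gnb(X, y)
print(predict_gnb(model, (1.0, 1.0)))  # falls in the class "a" cluster
```

The question the paper raises is what happens when this per-feature factorisation is relaxed toward richer Bayesian network structures.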
Bayesian hypothesis testing: A reference approach
International Statistical Review, 2002. Cited by 28 (6 self).
Abstract: For any probability model M ≡ {p(x | θ, ω), θ ∈ Θ, ω ∈ Ω} assumed to describe the probabilistic behaviour of data x ∈ X, it is argued that testing whether or not the available data are compatible with the hypothesis H0 ≡ {θ = θ0} is best considered as a formal decision problem on whether to use (a0) ...
Distributed Bayesian hypothesis testing in sensor networks
In Proceedings of the American Control Conference, 2004. Cited by 29 (5 self).
Abstract: We consider the scenario of N distributed noisy sensors observing a single event. The sensors are distributed and can only exchange messages through a network. The sensor network is modelled by means of a graph, which captures the connectivity of different sensor nodes in the network. ... belief propagation as a message-passing strategy to solve a distributed hypothesis testing problem for a pre-specified network connectivity. We show that the message evolution can be reformulated as the evolution of a linear dynamical system, which is primarily characterized by network connectivity ...
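The abstract's reformulation of message evolution as a linear dynamical system can be sketched, under simplifying assumptions, as average consensus x_{k+1} = W x_k on a graph. The 4-node ring topology, the step weight 0.3, and the initial values below are invented for illustration and are not the paper's model.

```python
def consensus_step(x, neighbours, weight=0.3):
    """One step of x_{k+1} = W x_k: each node moves toward its neighbours."""
    return [xi + weight * sum(x[j] - xi for j in neighbours[i])
            for i, xi in enumerate(x)]

# A 4-node ring. Each node starts with a local statistic (say, a local
# log-likelihood contribution); repeated linear updates drive every node
# toward the network-wide average, with the convergence rate set by the
# spectrum of W, i.e. by the network connectivity.
neighbours = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
x = [1.0, 0.0, 0.0, 0.0]
for _ in range(60):
    x = consensus_step(x, neighbours)
print(x)  # every entry approaches the average 0.25
```

The point of the linear-system view is exactly this: once the update is written as x_{k+1} = W x_k, convergence of the distributed test reduces to an eigenvalue question about W.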
Estimating Continuous Distributions in Bayesian Classifiers
In Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence, 1995. Cited by 489 (2 self).
Abstract: When modeling a probability distribution with a Bayesian network, we are faced with the problem of how to handle continuous variables. Most previous work has either solved the problem by discretizing, or assumed that the data are generated by a single Gaussian. In this paper we abandon the normality ...
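The abstract's move away from a single-Gaussian assumption can be illustrated with a kernel density estimate of a class-conditional density. The Gaussian kernel and the 1/sqrt(n) bandwidth below are one simple choice for the sketch, not necessarily the paper's exact estimator, and the bimodal sample is invented.

```python
from math import exp, pi, sqrt

def kde(samples, x, bandwidth=None):
    """Gaussian kernel density estimate at point x."""
    n = len(samples)
    h = bandwidth if bandwidth is not None else 1.0 / sqrt(n)
    coeff = 1.0 / (n * h * sqrt(2 * pi))
    return coeff * sum(exp(-0.5 * ((x - s) / h) ** 2) for s in samples)

# A bimodal sample that a single Gaussian would summarise badly: the KDE
# puts high density near each mode and low density in the gap between them.
samples = [0.0, 0.1, -0.1, 5.0, 5.1, 4.9]
print(kde(samples, 0.0), kde(samples, 2.5))
```

A classifier that plugs such an estimate in as p(feature | class) keeps the naive Bayes structure while dropping the normality assumption.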
Sparse Bayesian Learning and the Relevance Vector Machine
, 2001. Cited by 958 (5 self).
Abstract: This paper introduces a general Bayesian framework for obtaining sparse solutions to regression and classification tasks utilising models linear in the parameters. Although this framework is fully general, we illustrate our approach with a particular specialisation that we denote the 'relevance vector machine' ...