Results 11-20 of 7,231
Extended Bayesian Learning
Proceedings of ESANN '97, European Symposium on Artificial Neural Networks, Bruges, 1997
Cited by 3 (2 self)
"... In Bayesian learning one represents the relative degree of belief in different values of the weight vector (including biases) by considering a probability distribution function over weight space. In general, this a priori probability is expected to come from a Gaussian with zero mean and flexible ..."
Learning Bayesian networks: The combination of knowledge and statistical data
Machine Learning, 1995
Cited by 1158 (35 self)
"... We describe scoring metrics for learning Bayesian networks from a combination of user knowledge and statistical data. We identify two important properties of metrics, which we call event equivalence and parameter modularity. These properties have been mostly ignored, but when combined, greatly simplify ..."
Dynamic Bayesian Networks: Representation, Inference and Learning
2002
Cited by 770 (3 self)
"... Modelling sequential data is important in many areas of science and engineering. Hidden Markov models (HMMs) and Kalman filter models (KFMs) are popular for this because they are simple and flexible. For example, HMMs have been used for speech recognition and biosequence analysis, and KFMs have been used for problems ranging from tracking planes and missiles to predicting the economy. However, HMMs and KFMs are limited in their “expressive power”. Dynamic Bayesian Networks (DBNs) generalize HMMs by allowing the state space to be represented in factored form, instead of as a single discrete ..."
Perspectives on sparse Bayesian learning
Advances in Neural Information Processing Systems 16, 2004
Cited by 30 (3 self)
"... Recently, relevance vector machines (RVM) have been fashioned from a sparse Bayesian learning (SBL) framework to perform supervised learning using a weight prior that encourages sparsity of representation. The methodology incorporates an additional set of hyperparameters governing the prior, one for ..."
Bayesian Network Classifiers
1997
Cited by 796 (20 self)
"... Recent work in supervised learning has shown that a surprisingly simple Bayesian classifier with strong assumptions of independence among features, called naive Bayes, is competitive with state-of-the-art classifiers such as C4.5. This fact raises the question of whether a classifier with less restrictive ..."
Bayesian Learning in Social Networks
2010
Cited by 58 (10 self)
"... We study the (perfect Bayesian) equilibrium of a model of learning over a general social network. Each individual receives a signal about the underlying state of the world, observes the past actions of a stochastically generated neighborhood of individuals, and chooses one of two possible actions. ..."
Analysis of Sparse Bayesian Learning
Advances in Neural Information Processing Systems 14, 2001
Cited by 58 (1 self)
"... The recent introduction of the 'relevance vector machine' has effectively demonstrated how sparsity may be obtained in generalised linear models within a Bayesian framework. Using a particular form of Gaussian parameter prior, 'learning' is the maximisation, with respect to hyperparameters ..."
Advances in Bayesian Learning
"... Bayesian learning is a probabilistic approach to building models that combine prior knowledge with new information extracted from data. In the past few years, significant progress has been made in learning graphical models such as Bayesian networks. Bayesian networks provide a compact representation ..."
A Bayesian hierarchical model for learning natural scene categories
In CVPR, 2005
Cited by 948 (15 self)
"... We propose a novel approach to learn and recognize natural scene categories. Unlike previous work [9, 17], it does not require experts to annotate the training set. We represent the image of a scene by a collection of local regions, denoted as codewords obtained by unsupervised learning. Each region ..."