Results 1-10 of 192,748
Bayes rule
2010
Abstract: "... Bayes theorem: We first go from the elementary formula, P(A|B) = P(A)P(B|A) / (P(A)P(B|A) + P(A′)P(B|A′)), to the advanced rule of Bayes, f(y|x) = ∫ ..."
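The elementary two-hypothesis formula quoted in this snippet can be sketched numerically. The function below is a minimal illustration; the prior and likelihood values are made-up example numbers, not taken from the paper.

```python
def bayes_posterior(p_a, p_b_given_a, p_b_given_not_a):
    """Posterior P(A|B) for complementary hypotheses A and A'.

    P(A|B) = P(A)P(B|A) / (P(A)P(B|A) + P(A')P(B|A'))
    """
    numerator = p_a * p_b_given_a
    denominator = numerator + (1.0 - p_a) * p_b_given_not_a
    return numerator / denominator

# Illustrative numbers (not from the paper): a test with 99% sensitivity
# and 95% specificity, applied to a condition with 1% prevalence.
posterior = bayes_posterior(p_a=0.01, p_b_given_a=0.99, p_b_given_not_a=0.05)
print(round(posterior, 4))  # → 0.1667
```

Even with a highly accurate test, the low prior keeps the posterior modest; this is the base-rate effect the denominator encodes.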
Bayes’ Theorem or Bayes’ Rule
Abstract: "... At the heart of Bayesian statistics and decision theory is Bayes’ Theorem, also frequently referred to as Bayes’ Rule. In its simplest form, if H is a hypothesis and E is evidence, then the theorem is Pr(H|E) = Pr(E ∩ H) ..."
Bayes rule for density matrices
In Advances in Neural Information Processing Systems 18 (NIPS 05), 2005
Abstract: "... The classical Bayes rule computes the posterior model probability from the prior probability and the data likelihood. We generalize this rule to the case when the prior is a density matrix (symmetric positive definite and trace one) and the data likelihood a covariance matrix. The classical Bayes ru ..."
Cited by 4 (1 self)
Kernel Bayes’ Rule
2011
Abstract: "... A nonparametric kernel-based method for realizing Bayes’ rule is proposed, based on kernel representations of probabilities in reproducing kernel Hilbert spaces. The prior and conditional probabilities are expressed as empirical kernel mean and covariance operators, respectively, and the kernel mean ..."
Cited by 14 (10 self)
Bayes’ Rule of Information
Abstract: "... This chapter discusses a duality between the addition of random variables and the addition of information via Bayes’ theorem: When adding independent random variables, variances (when they exist) add. With Bayes’ theorem, defining “score” and “observed information” via derivatives of the log den ... Monte Carlo integration and Markov Chain Monte Carlo, for example. One important realm for application of these techniques is with various kinds of (extended) Kalman / Bayesian filtering following a 2-step Bayesian sequential updating ..."
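The "information adds" duality this abstract describes has a standard conjugate-Gaussian instance; the concrete example below is mine, not the chapter's. For a normal prior and normal observations with known variance, the posterior precision (inverse variance) is the sum of the prior precision and the data precision, mirroring how variances add for sums of independent variables.

```python
def gaussian_update(mu0, tau0, xbar, n, sigma2):
    """Conjugate-normal update with known observation variance sigma2.

    Precisions add: tau_post = tau0 + n/sigma2, and the posterior
    mean is the precision-weighted average of prior mean and data mean.
    """
    tau_data = n / sigma2
    tau_post = tau0 + tau_data
    mu_post = (tau0 * mu0 + tau_data * xbar) / tau_post
    return mu_post, tau_post

# Prior N(0, 1) (precision 1); 4 observations with sample mean 2.0
# and known unit variance (illustrative numbers).
mu, tau = gaussian_update(mu0=0.0, tau0=1.0, xbar=2.0, n=4, sigma2=1.0)
print(mu, tau)  # → 1.6 5.0
```

Each observation contributes 1/sigma2 of precision, which is the additive-information behavior the chapter's score/observed-information machinery generalizes.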
Quantum Bayes Rule
Phys. Rev. A, 2001
Abstract: "... We state a quantum version of Bayes’s rule for statistical inference and give a simple general derivation within the framework of generalized measurements. The rule can be applied to measurements on N copies of a system if the initial state of the N copies is exchangeable. As an illustration, we app ..."
Cited by 1 (1 self)
Support vector machines and the Bayes rule in classification
Data Mining and Knowledge Discovery, 2002
Abstract: "... The Bayes rule is the optimal classification rule if the underlying distribution of the data is known. In practice we do not know the underlying distribution, and need to “learn” classification rules from the data. One way to derive classification rules in practice is to implement the Bay ..."
Cited by 96 (14 self)
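The Bayes rule this abstract refers to, the optimal classifier when the class distributions are known, can be written out for a simple case. The sketch below (my illustration, not the paper's SVM analysis) classifies with two univariate Gaussian classes of equal variance, where the rule reduces to a threshold.

```python
import math

def bayes_classify(x, prior1, mu0, mu1, sigma):
    """Bayes rule for two univariate Gaussian classes, equal variance.

    Assign class 1 iff prior1 * f1(x) >= (1 - prior1) * f0(x),
    where fk is the N(mu_k, sigma^2) density; compared in log space.
    """
    def logpdf(x, mu):
        return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))
    score1 = math.log(prior1) + logpdf(x, mu1)
    score0 = math.log(1 - prior1) + logpdf(x, mu0)
    return 1 if score1 >= score0 else 0

# With equal priors and equal variances the Bayes rule thresholds at the
# midpoint (mu0 + mu1) / 2 = 0.5 here (illustrative parameters).
print([bayes_classify(x, 0.5, 0.0, 1.0, 1.0) for x in (-1.0, 0.4, 0.6, 2.0)])
# → [0, 0, 1, 1]
```

A learned rule (an SVM, say) can only be evaluated against this benchmark when the true distributions are known, which is the comparison the paper sets up.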
Expansion Estimation By Bayes Rules
J. Stat. Plann. Infer., 1999
Abstract: "... this paper we build on the work of DasGupta and Rubin by proceeding in two directions: exploring local expander rules and relaxing some distributional assumptions. We start with the definition of strong shrinkage. ..."
Cited by 3 (1 self)
Bayes Rules in Finite Models
2000
Abstract: "... Of the many justifications of Bayesianism, most imply some assumption that is not very compelling, like the differentiability or continuity of some auxiliary function. We show how such assumptions can be replaced by weaker assumptions for finite domains. The new assumptions are a noninformative refinement principle and a concept of information independence. These assumptions are weaker than those used in alternative justifications, which is shown by their inadequacy for infinite domains. They are also more compelling. 1 Introduction: The normative claim of Bayesianism is that every type of uncertainty should be described as probability. Bayesianism has been quite controversial in both the statistics and the uncertainty management communities. It developed as subjective Bayesianism, in [5, 11]. Recently, the information-based family of justifications, initiated in [3] and continued in [1], has been discussed in [12, 6, 13]. We will try to find assumptions that are strong enough to s ..."
Cited by 7 (6 self)