
Results 1 - 10 of 1,290

Bayes rule

by Tommy Norberg, Prior Density, Reference Analysis, 2010
"... Bayes theorem. We first go from the elementary formula, P(A|B) = P(A)P(B|A) / (P(A)P(B|A) + P(A′)P(B|A′)), to the advanced rule of Bayes, f(y|x) = ∫ ..."
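The elementary formula in this snippet can be checked numerically. A minimal sketch, assuming illustrative numbers (the prior and likelihood values below are not from the paper):

```python
def bayes_posterior(p_a, p_b_given_a, p_b_given_not_a):
    """Elementary Bayes rule:
    P(A|B) = P(A)P(B|A) / (P(A)P(B|A) + P(A')P(B|A'))."""
    numerator = p_a * p_b_given_a
    denominator = numerator + (1 - p_a) * p_b_given_not_a
    return numerator / denominator

# Illustrative numbers: a 1% prior, a 90%-sensitive test with a 5% false-positive rate.
posterior = bayes_posterior(0.01, 0.90, 0.05)
print(round(posterior, 4))  # 0.1538
```

The denominator is the law of total probability over A and its complement A′, which is what the advanced (density) form replaces with an integral.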

Bayes rule

by Probabilistic Model, 2012
"... Thanks to Michael Collins for many of today’s slides. ..."

Bayes’ Theorem or Bayes’ Rule

by Simon Jackman
"... At the heart of Bayesian statistics and decision theory is Bayes’ Theorem, also frequently referred to as Bayes’ Rule. In its simplest form, if H is a hypothesis and E is evidence, then the theorem is Pr(H|E) = Pr(E ∩ H) ..."

Bayes rule for density matrices

by Manfred K. Warmuth - in Advances in Neural Information Processing Systems 18 (NIPS 05), 2005
"... The classical Bayes rule computes the posterior model probability from the prior probability and the data likelihood. We generalize this rule to the case when the prior is a density matrix (symmetric positive definite and trace one) and the data likelihood a covariance matrix. The classical Bayes ru ..."
Cited by 4 (1 self)

Kernel Bayes’ Rule

by Kenji Fukumizu, Le Song, Arthur Gretton , 2011
"... A nonparametric kernel-based method for realizing Bayes’ rule is proposed, based on kernel representations of probabilities in reproducing kernel Hilbert spaces. The prior and conditional probabilities are expressed as empirical kernel mean and covariance operators, respectively, and the kernel mean ..."
Cited by 15 (10 self)

Bayes’ Rule of Information

by Spencer Graves
"... This chapter discusses a duality between the addition of random variables and the addition of information via Bayes’ theorem: When adding independent random variables, variances (when they exist) add. With Bayes’ theorem, defining “score” and “observed information” via derivatives of the log den ..."
"... Carlo integration and Markov Chain Monte Carlo, for example. One important realm for application of these techniques is with various kinds of (extended) Kalman / Bayesian filtering following a 2-step Bayesian sequential updating ..."

Quantum Bayes Rule

by Rüdiger Schack, Todd A. Brun, Carlton M. Caves - Phys. Rev. A, 2001
"... We state a quantum version of Bayes’s rule for statistical inference and give a simple general derivation within the framework of generalized measurements. The rule can be applied to measurements on N copies of a system if the initial state of the N copies is exchangeable. As an illustration, we app ..."
Cited by 2 (1 self)

Support Vector Machines and the Bayes Rule

by Yi Lin - in Classification, Data Mining and Knowledge Discovery, 2002
"... The Bayes rule is the optimal classification rule if the underlying distribution of the data is known. In practice we do not know the underlying distribution, and need to “learn” classification rules from the data. One way to derive classification rules in practice is to implement the Bayes rule app ..."
Cited by 5 (0 self)

Support vector machines and the Bayes rule in classification

by Yi Lin - Data Mining Knowledge Disc, 2002
"... The Bayes rule is the optimal classification rule if the underlying distribution of the data is known. In practice we do not know the underlying distribution, and need to “learn” classification rules from the data. One way to derive classification rules in practice is to implement the Bay ..."
Cited by 95 (13 self)
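The sense in which the Bayes rule is the optimal classification rule can be illustrated with a toy example where the class-conditional distributions are known. A minimal sketch, assuming two unit-variance Gaussian classes with means at ±1 (these densities and priors are illustrative assumptions, not from the paper):

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def bayes_classify(x, prior_pos=0.5):
    """Bayes classification rule: pick the class with the larger posterior.

    Class +1 ~ N(1, 1), class -1 ~ N(-1, 1). When the class-conditional
    densities are known, this rule minimizes the misclassification probability.
    """
    post_pos = prior_pos * gaussian_pdf(x, 1.0, 1.0)
    post_neg = (1 - prior_pos) * gaussian_pdf(x, -1.0, 1.0)
    return 1 if post_pos >= post_neg else -1

print(bayes_classify(0.3))   # 1
print(bayes_classify(-2.0))  # -1
```

With equal priors and equal variances the decision boundary is x = 0; learning a rule from data, as in the papers above, amounts to approximating this boundary without knowing the densities.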

Bayes Rules in Finite Models

by Stefan Arnborg, Gunnar Sjödin, 2000
"... Of the many justifications of Bayesianism, most imply some assumption that is not very compelling, like the differentiability or continuity of some auxiliary function. We show how such assumptions can be replaced by weaker assumptions for finite domains. The new assumptions are a non-informative refinement principle and a concept of information independence. These assumptions are weaker than those used in alternative justifications, which is shown by their inadequacy for infinite domains. They are also more compelling.
1 Introduction. The normative claim of Bayesianism is that every type of uncertainty should be described as probability. Bayesianism has been quite controversial in both the statistics and the uncertainty management communities. It developed as subjective Bayesianism, in [5, 11]. Recently, the information based family of justifications, initiated in [3] and continued in [1], have been discussed in [12, 6, 13]. We will try to find assumptions that are strong enough to s ..."
Cited by 7 (6 self)
Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University