Results 1 - 10 of 2,979
Convolution Kernels on Discrete Structures, 1999
"... We introduce a new method of constructing kernels on sets whose elements are discrete structures like strings, trees and graphs. The method can be applied iteratively to build a kernel on an infinite set from kernels involving generators of the set. The family of kernels generated generalizes the fa ..."
Cited by 506 (0 self)
... the family of radial basis kernels. It can also be used to define kernels in the form of joint Gibbs probability distributions. Kernels can be built from hidden Markov random fields, generalized regular expressions, pair-HMMs, or ANOVA decompositions. Uses of the method lead to open problems involving ...
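To make the construction concrete, here is a minimal Python sketch of one simple kernel in this family: a k-spectrum kernel on strings that counts shared length-k substrings. It illustrates the convolution idea, not the paper's general method; the function name and parameters are made up.

```python
# k-spectrum kernel: a simple convolution-style kernel on strings.
from collections import Counter

def spectrum_kernel(s: str, t: str, k: int = 3) -> int:
    """K(s, t) = sum over k-mers u of count_s(u) * count_t(u)."""
    cs = Counter(s[i:i + k] for i in range(len(s) - k + 1))
    ct = Counter(t[i:i + k] for i in range(len(t) - k + 1))
    return sum(cs[u] * ct[u] for u in cs if u in ct)

print(spectrum_kernel("gattaca", "attacca"))  # 3 shared 3-mers: att, tta, tac
```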
Stochastic relaxation, Gibbs distributions and the Bayesian restoration of images
IEEE Trans. Pattern Anal. Mach. Intell., 1984
"... Abstract-We make an analogy between images and statistical mechanics systems. Pixel gray levels and the presence and orientation of edges are viewed as states of atoms or molecules in a lattice-like physical system. The assignment of an energy function in the physical system determines its Gibbs di ..."
Cited by 5126 (1 self)
... system isolates low energy states ("annealing"), or what is the same thing, the most probable states under the Gibbs distribution. The analogous operation under the posterior distribution yields the maximum a posteriori (MAP) estimate of the image given the degraded observations. The result ...
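The annealing operation in the snippet fits in a short sketch: Gibbs updates on a binary image under an Ising smoothness prior and a Gaussian data term, with a decreasing temperature so the sampler settles into low-energy, high-posterior states. The cooling schedule and parameter values below are toy choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_energy(x, y, i, j, s, beta, sigma):
    # Ising smoothness prior plus Gaussian data term for pixel (i, j) set to s.
    H, W = x.shape
    nbrs = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
    prior = -beta * sum(s * x[a, b] for a, b in nbrs if 0 <= a < H and 0 <= b < W)
    return prior + (y[i, j] - s) ** 2 / (2 * sigma ** 2)

def anneal_restore(y, beta=1.0, sigma=0.7, sweeps=25):
    x = np.where(y >= 0, 1.0, -1.0)            # initialize from the data sign
    for k in range(sweeps):
        T = max(0.05, 3.0 * 0.85 ** k)         # illustrative cooling schedule
        for i in range(y.shape[0]):
            for j in range(y.shape[1]):
                d = local_energy(x, y, i, j, 1, beta, sigma) \
                    - local_energy(x, y, i, j, -1, beta, sigma)
                x[i, j] = 1.0 if rng.random() < 1.0 / (1.0 + np.exp(d / T)) else -1.0
    return x

clean = np.ones((16, 16)); clean[4:12, 4:12] = -1
noisy = clean + rng.normal(0, 0.7, clean.shape)
print((anneal_restore(noisy) == clean).mean())  # fraction of pixels recovered
```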
Using Bayesian networks to analyze expression data
Journal of Computational Biology, 2000
"... DNA hybridization arrays simultaneously measure the expression level for thousands of genes. These measurements provide a “snapshot ” of transcription levels within the cell. A major challenge in computational biology is to uncover, from such measurements, gene/protein interactions and key biologica ..."
Cited by 1088 (17 self)
... of joint multivariate probability distributions that captures properties of conditional independence between variables. Such models are attractive for their ability to describe complex stochastic processes and because they provide a clear methodology for learning from (noisy) observations. We start ...
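The core object in the snippet, a joint distribution factored according to conditional independences, can be written down directly; the three-variable toy structure and CPT numbers below are invented for illustration, not taken from the paper.

```python
from itertools import product

# B and C are conditionally independent given A, so the joint factors as
# P(A, B, C) = P(A) * P(B | A) * P(C | A).
parents = {"A": (), "B": ("A",), "C": ("A",)}
cpt = {
    ("A", ()):  {0: 0.6, 1: 0.4},
    ("B", (0,)): {0: 0.9, 1: 0.1}, ("B", (1,)): {0: 0.3, 1: 0.7},
    ("C", (0,)): {0: 0.8, 1: 0.2}, ("C", (1,)): {0: 0.5, 1: 0.5},
}

def joint(assign):
    p = 1.0
    for v in ("A", "B", "C"):
        pa = tuple(assign[u] for u in parents[v])
        p *= cpt[(v, pa)][assign[v]]
    return p

total = sum(joint(dict(zip("ABC", bits))) for bits in product((0, 1), repeat=3))
print(total)  # 1.0: the factored product is a valid joint distribution
```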
Loopy belief propagation for approximate inference: An empirical study
Proceedings of Uncertainty in AI, 1999
"... Abstract Recently, researchers have demonstrated that "loopy belief propagation" -the use of Pearl's polytree algorithm in a Bayesian network with loops -can perform well in the context of error-correcting codes. The most dramatic instance of this is the near Shannon-limit performanc ..."
Cited by 676 (15 self)
... For each experimental run, we first generated random CPTs. We then sampled from the joint distribution defined by the network and clamped the observed nodes (all nodes in the bottom layer) to their sampled value. Given a structure and observations, we then ran three inference algorithms: junction tree ...
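For flavor, here is a minimal version of the message-passing loop itself, run on a pairwise Markov random field with a single loop (a 3-cycle) rather than the paper's layered Bayesian networks; the potentials and fixed iteration count are toy choices.

```python
import numpy as np

nodes = [0, 1, 2]
edges = [(0, 1), (1, 2), (2, 0)]                 # a 3-cycle, hence "loopy"
unary = {i: np.array([1.0, 2.0]) for i in nodes}
pair = np.array([[2.0, 1.0], [1.0, 2.0]])        # shared symmetric potential

msgs = {(i, j): np.ones(2) for i, j in edges}
msgs.update({(j, i): np.ones(2) for i, j in edges})

for _ in range(50):                              # iterate messages to a fixed point
    new = {}
    for (i, j) in msgs:
        inc = np.prod([msgs[(k, l)] for (k, l) in msgs if l == i and k != j], axis=0)
        m = pair.T @ (unary[i] * inc)            # sum out x_i
        new[(i, j)] = m / m.sum()
    msgs = new

for i in nodes:
    b = unary[i] * np.prod([msgs[(k, l)] for (k, l) in msgs if l == i], axis=0)
    print(i, b / b.sum())                        # approximate marginals
```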
A Neural Probabilistic Language Model
Journal of Machine Learning Research, 2003
"... A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen ..."
Cited by 447 (19 self)
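The architecture in the abstract reduces to a few matrix operations. Below is a toy forward pass only (no training), with random weights standing in for learned parameters; all sizes and names are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
V, d, n, h = 50, 8, 3, 16            # vocab size, embed dim, context length, hidden

C = rng.normal(0, 0.1, (V, d))       # word embedding table, shared across positions
H = rng.normal(0, 0.1, (h, n * d))   # concatenated context -> hidden layer
U = rng.normal(0, 0.1, (V, h))       # hidden -> vocabulary scores

def next_word_probs(context_ids):
    x = C[context_ids].reshape(-1)   # concatenate the n context embeddings
    s = U @ np.tanh(H @ x)
    e = np.exp(s - s.max())
    return e / e.sum()               # softmax: P(w_t | w_{t-n}, ..., w_{t-1})

p = next_word_probs([3, 17, 42])
print(p.sum(), p.argmax())           # a proper distribution over the vocabulary
```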
Secret Key Agreement by Public Discussion From Common Information
IEEE Transactions on Information Theory, 1993
"... . The problem of generating a shared secret key S by two parties knowing dependent random variables X and Y , respectively, but not sharing a secret key initially, is considered. An enemy who knows the random variable Z, jointly distributed with X and Y according to some probability distribution PX ..."
Cited by 434 (18 self)
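One concrete quantity from this line of work is the well-known lower bound on the achievable secret-key rate, S(X; Y || Z) >= I(X; Y) - min(I(X; Z), I(Y; Z)). The sketch below evaluates it for a made-up joint distribution in which Alice's fair bit reaches Bob and the eavesdropper through independent binary symmetric channels.

```python
import numpy as np

def mi(pxy):
    # Mutual information (bits) of a 2-D joint distribution given as an array.
    px, py = pxy.sum(1, keepdims=True), pxy.sum(0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

# P(x, y, z): X a fair bit, Y = X through a BSC(0.1), Z = X through a BSC(0.3).
p = np.zeros((2, 2, 2))
for x in (0, 1):
    for y in (0, 1):
        for z in (0, 1):
            p[x, y, z] = 0.5 * (0.9 if y == x else 0.1) * (0.7 if z == x else 0.3)

ixy, ixz, iyz = mi(p.sum(2)), mi(p.sum(1)), mi(p.sum(0))
print(max(0.0, ixy - min(ixz, iyz)))  # positive: secret key agreement is possible
```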
Stochastic Tracking of 3D Human Figures Using 2D Image Motion
European Conference on Computer Vision, 2000
"... . A probabilistic method for tracking 3D articulated human gures in monocular image sequences is presented. Within a Bayesian framework, we de ne a generative model of image appearance, a robust likelihood function based on image graylevel dierences, and a prior probability distribution over pose an ..."
Cited by 383 (33 self)
... and joint angles that models how humans move. The posterior probability distribution over model parameters is represented using a discrete set of samples and is propagated over time using particle filtering. The approach extends previous work on parameterized optical flow estimation to exploit a complex 3D ...
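The propagation step in the snippet is the generic particle-filter loop. A one-dimensional version shows the mechanics (predict with the motion model, weight by the likelihood, resample); the paper's state of 3D pose and joint angles and its image likelihood are far richer, and all settings here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 500, 40
true = np.cumsum(rng.normal(0, 1.0, T))       # hidden random-walk trajectory
obs = true + rng.normal(0, 2.0, T)            # noisy observations

particles = rng.normal(0, 1.0, N)
for t in range(T):
    particles = particles + rng.normal(0, 1.0, N)      # predict: dynamics model
    w = np.exp(-0.5 * ((obs[t] - particles) / 2.0) ** 2)
    w /= w.sum()                                       # weight by likelihood
    particles = particles[rng.choice(N, size=N, p=w)]  # resample
    if t % 10 == 0:
        print(t, round(true[t], 2), round(particles.mean(), 2))
```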
Information-Theoretic Co-Clustering
KDD, 2003
"... Two-dimensional contingency or co-occurrence tables arise frequently in important applications such as text, web-log and market-basket data analysis. A basic problem in contingency table analysis is co-clustering: simultaneous clustering of the rows and columns. A novel theoretical formulation views ..."
Cited by 346 (12 self)
... views the contingency table as an empirical joint probability distribution of two discrete random variables and poses the co-clustering problem as an optimization problem in information theory: the optimal co-clustering maximizes the mutual information between the clustered random variables subject ...
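The objective is easy to check numerically: normalize a contingency table into an empirical joint p(X, Y), then compare I(X; Y) against the mutual information retained after clustering rows and columns. The table and cluster assignments below are toy values.

```python
import numpy as np

def mi(p):
    # Mutual information (bits) of a 2-D joint distribution.
    px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

counts = np.array([[30, 1, 1], [28, 2, 0], [1, 25, 30], [0, 24, 28]])
p = counts / counts.sum()                     # empirical joint distribution

row_cl = np.array([0, 0, 1, 1])               # 4 rows -> 2 row clusters
col_cl = np.array([0, 1, 1])                  # 3 columns -> 2 column clusters
q = np.zeros((2, 2))
for i in range(counts.shape[0]):
    for j in range(counts.shape[1]):
        q[row_cl[i], col_cl[j]] += p[i, j]

print(mi(p), mi(q))  # a good co-clustering keeps mi(q) close to mi(p)
```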
Sequential Monte Carlo Samplers, 2002
"... In this paper, we propose a general algorithm to sample sequentially from a sequence of probability distributions known up to a normalizing constant and defined on a common space. A sequence of increasingly large artificial joint distributions is built; each of these distributions admits a marginal ..."
Cited by 303 (44 self)
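A minimal instance of the scheme: bridge from an easy initial distribution p0 (a wide Gaussian) to a bimodal target p1 through pi_b proportional to p0^(1-b) * p1^b, alternating incremental reweighting, resampling, and a Metropolis move. The target, temperature schedule, and move kernel are toy choices, not the paper's general framework.

```python
import numpy as np

rng = np.random.default_rng(0)
log_p0 = lambda x: -x ** 2 / 18.0               # N(0, 9), up to a constant
log_p1 = lambda x: -(x ** 2 - 4) ** 2 / 2.0     # bimodal target, up to a constant
N = 1000
x = rng.normal(0.0, 3.0, N)                     # exact draws from p0

betas = np.linspace(0.0, 1.0, 21)
for b0, b1 in zip(betas[:-1], betas[1:]):
    logw = (b1 - b0) * (log_p1(x) - log_p0(x))  # incremental weight pi_b1 / pi_b0
    w = np.exp(logw - logw.max()); w /= w.sum()
    x = x[rng.choice(N, size=N, p=w)]           # resample
    lp = lambda z: (1 - b1) * log_p0(z) + b1 * log_p1(z)
    prop = x + rng.normal(0.0, 0.5, N)          # symmetric Metropolis proposal
    acc = np.log(rng.random(N)) < lp(prop) - lp(x)
    x[acc] = prop[acc]

print(round(x.mean(), 3), (x > 0).mean())       # mass split across modes near +/-2
```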
How Many Iterations in the Gibbs Sampler?
Bayesian Statistics 4, 1992
"... When the Gibbs sampler is used to estimate posterior distributions (Gelfand and Smith, 1990), the question of how many iterations are required is central to its implementation. When interest focuses on quantiles of functionals of the posterior distribution, we describe an easily-implemented metho ..."
Cited by 159 (6 self)
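The question is easy to visualize: run a Gibbs sampler (here for a bivariate normal with correlation rho, where both conditionals have closed forms) and watch an estimated quantile stabilize as iterations accumulate. This shows the problem the paper addresses, not its diagnostic; the settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n_iter = 0.9, 20000
x = y = 0.0
draws = np.empty(n_iter)
for t in range(n_iter):
    x = rng.normal(rho * y, np.sqrt(1 - rho ** 2))   # draw x | y
    y = rng.normal(rho * x, np.sqrt(1 - rho ** 2))   # draw y | x
    draws[t] = x

for n in (500, 2000, 20000):
    # Marginal of x is N(0, 1), so the 2.5% quantile approaches about -1.96.
    print(n, round(np.quantile(draws[:n], 0.025), 3))
```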