
CiteSeerX

Results 1 - 10 of 2,428

Stochastic relaxation, Gibbs distributions and the Bayesian restoration of images.

by Stuart Geman, Donald Geman - IEEE Trans. Pattern Anal. Mach. Intell., 1984
"... Abstract-We make an analogy between images and statistical mechanics systems. Pixel gray levels and the presence and orientation of edges are viewed as states of atoms or molecules in a lattice-like physical system. The assignment of an energy function in the physical system determines its Gibbs distribution ..."
Cited by 5126 (1 self)
system isolates low energy states ("annealing"), or what is the same thing, the most probable states under the Gibbs distribution. The analogous operation under the posterior distribution yields the maximum a posteriori (MAP) estimate of the image given the degraded observations. The result
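The annealing idea in the snippet above can be sketched in a few lines. This is a toy illustration, not the paper's method: it anneals a 1D binary signal under an Ising-style smoothness prior instead of the paper's 2D image lattice with line processes, and the energy weights (`beta`, `noise`) and the cooling schedule are arbitrary choices.

```python
import math
import random

def map_by_annealing(obs, beta=1.2, noise=1.0, sweeps=300, seed=0):
    """Toy MAP estimate of a binary (+/-1) signal by simulated annealing.

    Posterior energy = data term + Ising smoothness prior:
      E(x) = noise * sum_i (x_i - obs_i)^2 / 2 + beta * sum_i [x_i != x_{i+1}]
    Lowering the temperature concentrates the Gibbs distribution
    exp(-E(x)/T) on its low-energy, i.e. most probable, states.
    """
    rng = random.Random(seed)
    n = len(obs)
    x = [1 if o > 0 else -1 for o in obs]  # start from a rough guess

    def delta_energy(i, new):
        # Energy change from flipping site i to `new`.
        old = x[i]
        d = noise * ((new - obs[i]) ** 2 - (old - obs[i]) ** 2) / 2.0
        for j in (i - 1, i + 1):
            if 0 <= j < n:
                d += beta * ((new != x[j]) - (old != x[j]))
        return d

    for t in range(sweeps):
        temp = 1.0 / math.log(2.0 + t)  # slow logarithmic cooling
        for i in range(n):
            new = -x[i]
            dE = delta_energy(i, new)
            # Metropolis rule: always accept downhill, sometimes uphill.
            if dE <= 0 or rng.random() < math.exp(-dE / temp):
                x[i] = new
    return x
```

At low final temperature the sampler behaves nearly greedily, so isolated outliers that conflict with both neighbors tend to be smoothed away.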

Markov Random Field Models in Computer Vision

by S. Z. Li, 1994
"... . A variety of computer vision problems can be optimally posed as Bayesian labeling in which the solution of a problem is defined as the maximum a posteriori (MAP) probability estimate of the true labeling. The posterior probability is usually derived from a prior model and a likelihood model. The l ..."
Cited by 516 (18 self)

Statistical shape influence in geodesic active contours

by Michael E. Leventon, W. Eric L. Grimson, Olivier Faugeras - In Proc. 2000 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Hilton Head, SC, 2000
"... A novel method of incorporating shape information into the image segmentation process is presented. We introduce a representation for deformable shapes and define a probability distribution over the variances of a set of training shapes. The segmentation process embeds an initial curve as the zero l ..."
Cited by 396 (4 self)
level set of a higher dimensional surface, and evolves the surface such that the zero level set converges on the boundary of the object to be segmented. At each step of the surface evolution, we estimate the maximum a posteriori (MAP) position and shape of the object in the image, based on the prior

One-shot learning of object categories

by Li Fei-Fei, Rob Fergus, Pietro Perona - IEEE Transactions on Pattern Analysis and Machine Intelligence, 2006
"... Learning visual models of object categories notoriously requires hundreds or thousands of training examples. We show that it is possible to learn much information about a category from just one, or a handful, of images. The key insight is that, rather than learning from scratch, one can take advant ..."
Cited by 364 (20 self)
Bayesian approach to models learned by Maximum Likelihood (ML) and Maximum A Posteriori (MAP) methods. We find that on a database of more than 100 categories, the Bayesian approach produces informative models when the number of training examples is too small for other methods to operate successfully.
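The ML-versus-MAP contrast drawn in this snippet shows up already in the simplest possible setting. A hypothetical sketch, not the paper's model (which places priors over object-category parameters): estimating a Bernoulli rate from a single observation, where ML is degenerate but a Beta prior keeps the MAP estimate informative.

```python
def ml_estimate(successes, trials):
    # Maximum likelihood for a Bernoulli rate: the raw frequency.
    return successes / trials

def map_estimate(successes, trials, alpha=2.0, beta=2.0):
    # Posterior mode under a Beta(alpha, beta) prior:
    #   (s + alpha - 1) / (n + alpha + beta - 2)
    # The prior pseudo-counts regularize the estimate when n is tiny.
    return (successes + alpha - 1) / (trials + alpha + beta - 2)
```

With one trial and one success, ML returns 1.0, an overconfident answer from a single example, while MAP under a Beta(2, 2) prior returns 2/3.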

Optimal and Sub-Optimal Maximum A Posteriori Algorithms Suitable for Turbo Decoding

by Patrick Robertson, Peter Hoeher, Emmanuelle Villebrun - ETT, 1997
"... For estimating the states or outputs of a Markov process, the symbol-by-symbol maximum a posteriori (MAP) algorithm is optimal. However, this algorithm, even in its recursive form, poses technical difficulties because of numerical representation problems, the necessity of non-linear functions and a ..."
Cited by 155 (26 self)
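The symbol-by-symbol MAP rule the abstract refers to picks, at each position, the state with the highest posterior marginal, computed by the forward-backward recursion. A minimal sketch assuming a small discrete HMM, with per-step normalization as one standard response to the numerical-representation problems mentioned; the paper's suboptimal log-domain variants are not shown.

```python
import numpy as np

def posterior_marginals(trans, emit, init, obs):
    """Posterior state marginals of an HMM via forward-backward.

    trans[i, j] = P(next state j | state i)
    emit[i, o]  = P(observation o | state i)
    init[i]     = P(initial state i)
    Each alpha/beta vector is renormalized for numerical stability;
    the per-time posteriors are renormalized at the end, so this does
    not change the result.
    """
    n, k = len(obs), len(init)
    alpha = np.zeros((n, k))
    beta = np.ones((n, k))
    alpha[0] = init * emit[:, obs[0]]
    alpha[0] /= alpha[0].sum()
    for t in range(1, n):                      # forward pass
        alpha[t] = (alpha[t - 1] @ trans) * emit[:, obs[t]]
        alpha[t] /= alpha[t].sum()
    for t in range(n - 2, -1, -1):             # backward pass
        beta[t] = trans @ (emit[:, obs[t + 1]] * beta[t + 1])
        beta[t] /= beta[t].sum()
    post = alpha * beta
    return post / post.sum(axis=1, keepdims=True)
```

The symbol-by-symbol MAP decision is then `post.argmax(axis=1)`, which can differ from the single most probable state *sequence* that a Viterbi decoder would return.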

Stereo matching using belief propagation

by Jian Sun, Nan-ning Zheng, Heung-yeung Shum, 2003
"... In this paper, we formulate the stereo matching problem as a Markov network and solve it using Bayesian belief propagation. The stereo Markov network consists of three coupled Markov random fields that model the following: a smooth field for depth/disparity, a line process for depth discontinuity, ..."
Cited by 350 (4 self)
, and a binary process for occlusion. After eliminating the line process and the binary process by introducing two robust functions, we apply the belief propagation algorithm to obtain the maximum a posteriori (MAP) estimation in the Markov network. Other low-level visual cues (e.g., image segmentation
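On a tree-structured network, max-product belief propagation computes the exact MAP labeling; a chain makes this concrete in a few lines of dynamic programming. A sketch under that simplification, not the paper's loopy propagation on a 2D grid with robust functions; `unary` and `pairwise` are hypothetical cost tables.

```python
import numpy as np

def chain_map(unary, pairwise):
    """MAP labeling of a chain MRF by min-sum dynamic programming.

    unary:    (n, k) cost of assigning each of k labels at each node
    pairwise: (k, k) cost for each pair of adjacent labels (smoothness)
    On a chain (a trivial tree) this is exact; on loopy grids, as in
    stereo, belief propagation is run as an approximation instead.
    """
    n, k = unary.shape
    cost = unary[0].copy()
    back = np.zeros((n, k), dtype=int)
    for i in range(1, n):
        # total[p, c] = best cost ending in label p plus transition p->c
        total = cost[:, None] + pairwise
        back[i] = np.argmin(total, axis=0)
        cost = total[back[i], np.arange(k)] + unary[i]
    labels = np.zeros(n, dtype=int)
    labels[-1] = int(np.argmin(cost))
    for i in range(n - 1, 0, -1):             # backtrack
        labels[i - 1] = back[i][labels[i]]
    return labels
```

With a Potts-style pairwise table, a node whose unary evidence is weak inherits the label of its strongly supported neighbors, which is the smoothing effect the depth/disparity field relies on.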

Maximum a Posteriori Transduction

by Li-wei Wang, Ju-fu Feng - In Proceedings of the Third International Conference on Machine Learning and Cybernetics, 2004
"... Transduction deals with the problem of estimating the values of a function at given points (called working samples) by a set of training samples. This paper proposes a maximum a posteriori (MAP) scheme for the transduction. The probability measure defined for the estimation is induced by the code le ..."
Cited by 1 (1 self)

A Gaussian prior for smoothing maximum entropy models

by Stanley F. Chen, Ronald Rosenfeld, 1999
"... In certain contexts, maximum entropy (ME) modeling can be viewed as maximum likelihood training for exponential models, and like other maximum likelihood methods is prone to overfitting of training data. Several smoothing methods for maximum entropy models have been proposed to address this problem ..."
Cited by 253 (2 self)
find that an ME smoothing method proposed to us by Lafferty [1] performs as well as or better than all other algorithms under consideration. This general and efficient method involves using a Gaussian prior on the parameters of the model and selecting maximum a posteriori instead of maximum likelihood
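The Gaussian-prior smoothing described here amounts to adding an L2 penalty to the log-likelihood and maximizing the resulting MAP objective. A minimal one-parameter logistic-regression sketch, not the paper's maximum entropy language models; `l2`, `lr`, and the toy data are arbitrary. On linearly separable data the ML weight keeps growing, while the Gaussian prior keeps the MAP weight finite.

```python
import math

def fit_logistic(xs, ys, l2=0.0, lr=0.3, steps=2000):
    """One-parameter logistic regression by gradient ascent.

    With l2 > 0 this maximizes the MAP objective under a Gaussian
    prior w ~ N(0, 1/l2):  sum_i log p(y_i | x_i, w) - l2 * w**2 / 2.
    With l2 = 0 it is plain maximum likelihood.
    """
    w = 0.0
    for _ in range(steps):
        grad = -l2 * w                         # gradient of the log-prior
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-w * x))
            grad += (y - p) * x                # gradient of the log-likelihood
        w += lr * grad
    return w

xs = [1.0, 2.0, -1.0, -2.0]
ys = [1, 1, 0, 0]      # linearly separable: ML pushes w toward infinity
w_ml = fit_logistic(xs, ys, l2=0.0)
w_map = fit_logistic(xs, ys, l2=1.0)
```

Under the Gaussian prior the penalty term `-l2 * w` balances the likelihood gradient at a finite weight, which is exactly the shrinkage effect the smoothing method exploits.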

MAP Maximum A Posteriori

by Stijn Meganck, M-a Cado, Monte Carlo, Markov Chain, 2013
"... JT ..."
Abstract not found

interpreted as Maximum A Posteriori

by Senior Member
"... Should penalized least squares regression be ..."

Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University