CiteSeerX
Results 1 - 10 of 2,256
A Gaussian prior for smoothing maximum entropy models

by Stanley F. Chen, Ronald Rosenfeld , 1999
"... In certain contexts, maximum entropy (ME) modeling can be viewed as maximum likelihood training for exponential models, and like other maximum likelihood methods is prone to overfitting of training data. Several smoothing methods for maximum entropy models have been proposed to address this problem ..."
Abstract - Cited by 253 (2 self) - Add to MetaCart
find that an ME smoothing method proposed to us by Lafferty [1] performs as well as or better than all other algorithms under consideration. This general and efficient method involves using a Gaussian prior on the parameters of the model and selecting maximum a posteriori instead of maximum likelihood
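As a hedged illustration of the Gaussian-prior smoothing described above: placing a zero-mean Gaussian prior on the weights and maximizing the posterior is equivalent to penalizing the log-likelihood with a quadratic term. A minimal sketch, using plain logistic regression as a stand-in for a conditional maxent model (the toy data and hyperparameters are illustrative, not from the paper):

```python
import numpy as np

def map_logistic(X, y, sigma2=1.0, lr=0.01, steps=1000):
    """MAP training: gradient ascent on log-likelihood minus w^2/(2*sigma2),
    i.e. a zero-mean Gaussian prior N(0, sigma2) on each weight."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities
        grad = X.T @ (y - p) - w / sigma2  # likelihood grad + prior grad
        w += lr * grad
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(float)  # label depends only on feature 0
w = map_logistic(X, y)
```

The prior keeps the weights finite even on separable data such as this, where unsmoothed maximum likelihood would drive them toward infinity.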

Analysis Of Multiresolution Image Denoising Schemes Using Generalized-Gaussian Priors

by Pierre Moulin, Juan Liu - IEEE TRANS. INFO. THEORY , 1998
"... In this paper, we investigate various connections between wavelet shrinkage methods in image processing and Bayesian estimation using Generalized Gaussian priors. We present fundamental properties of the shrinkage rules implied by Generalized Gaussian and other heavy-tailed priors. This allows us to ..."
Abstract - Cited by 217 (9 self) - Add to MetaCart
In this paper, we investigate various connections between wavelet shrinkage methods in image processing and Bayesian estimation using Generalized Gaussian priors. We present fundamental properties of the shrinkage rules implied by Generalized Gaussian and other heavy-tailed priors. This allows us
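One concrete instance of the shrinkage rules mentioned above: under additive Gaussian noise, a Laplacian prior (a generalized Gaussian with shape parameter p = 1) implies the classical soft-thresholding rule. A minimal sketch, with an illustrative threshold and toy signal:

```python
import numpy as np

def soft_threshold(y, t):
    """Shrinkage rule implied by a Laplacian prior (generalized Gaussian,
    p = 1) under Gaussian noise: pull every coefficient toward 0 by t."""
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

rng = np.random.default_rng(1)
x = np.zeros(100)
x[:5] = 5.0                              # a few large "wavelet" coefficients
y = x + rng.normal(scale=0.5, size=100)  # noisy observation
x_hat = soft_threshold(y, t=1.5)         # threshold at 3x the noise std
```

Coefficients below the threshold are set exactly to zero, which is why heavy-tailed priors yield sparse estimates.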

Extended Linear Models with Gaussian Priors

by Joaquin Quiñonero-Candela
"... In extended linear models the input space is projected onto a feature space by means of an arbitrary non-linear transformation. A linear model is then applied to the feature space to construct the model output. The dimension of the feature space can be very large, or even infinite, giving the model ..."
Abstract - Add to MetaCart
a great deal of flexibility. Support Vector Machines (SVMs) and Gaussian processes are two examples of such models. In this technical report I present a model in which the dimension of the feature space remains finite, and where a Bayesian approach is used to train the model with Gaussian priors
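A hedged sketch of the setting this abstract describes: with a finite feature map and a Gaussian prior on the weights, the Bayesian posterior mean has a closed form. The polynomial feature map and hyperparameters below are illustrative assumptions, not taken from the report:

```python
import numpy as np

def posterior_mean(Phi, y, noise_var, prior_var):
    """Posterior mean of w in y = Phi @ w + noise, noise ~ N(0, noise_var),
    with Gaussian prior w ~ N(0, prior_var * I)."""
    d = Phi.shape[1]
    A = Phi.T @ Phi / noise_var + np.eye(d) / prior_var
    return np.linalg.solve(A, Phi.T @ y / noise_var)

def features(x):
    # An arbitrary (here: polynomial) non-linear projection to feature space.
    return np.column_stack([np.ones_like(x), x, x ** 2])

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, size=50)
y = 1.0 + 2.0 * x - 0.5 * x ** 2 + rng.normal(scale=0.1, size=50)
w = posterior_mean(features(x), y, noise_var=0.01, prior_var=10.0)
```

With a broad prior and enough data, the posterior mean lands close to the generating coefficients (1.0, 2.0, -0.5).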

Bayesian inverse problems with Gaussian priors

by B. T. Knapik, A. W. Van Der Vaart, J. H. Van Zanten - Ann. Statist
Abstract - Cited by 24 (2 self) - Add to MetaCart
Abstract not found

Posterior contraction for conditionally Gaussian priors

by Rene de Jonge , 2012
Abstract - Add to MetaCart
Abstract not found

Privatization and Strategic Monitoring with Gaussian Priors.

by Richard Disney, Christopher J. Ellis, Bulent Nomer (JEL Classification: L)
"... This paper describes the sale and optimal regulation of a sequence of public utilities, where monitoring of regulatory compliance is costly. The government is concerned with the revenue raised by successive privatizations as well as the standard objective of efficiency in production. The costs of mo ..."
Abstract - Add to MetaCart
This paper describes the sale and optimal regulation of a sequence of public utilities, where monitoring of regulatory compliance is costly. The government is concerned with the revenue raised by successive privatizations as well as the standard objective of efficiency in production. The costs of monitoring are private information to the government. At each stage in the sequence of privatizations the public Bayes-updates its distribution of posterior beliefs over the government's regulatory enforcement strategy. The government knows that this learning is taking place and chooses the time path of monitoring levels accordingly.

On L2-norm Regularization and the Gaussian Prior

by Jason Rennie , 2003
"... We show how the regularization used for classification can be seen from the MDL viewpoint as a Gaussian prior on weights. We consider the problem of transmitting classification labels; we select as our model class logistic regression with perfect precision where we specify a weight for each feature. ..."
Abstract - Cited by 2 (0 self) - Add to MetaCart
We show how the regularization used for classification can be seen from the MDL viewpoint as a Gaussian prior on weights. We consider the problem of transmitting classification labels; we select as our model class logistic regression with perfect precision where we specify a weight for each feature
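The correspondence this abstract states can be checked directly: an L2 penalty lam * w**2 has the same gradient as the negative log of a Gaussian prior with variance sigma2 = 1/(2*lam), so the penalized and MAP objectives share the same optimum. A minimal sketch:

```python
import math

def l2_penalty_grad(w, lam):
    """d/dw of the L2 penalty lam * w**2."""
    return 2.0 * lam * w

def neg_log_gaussian_grad(w, sigma2):
    """d/dw of -log N(w; 0, sigma2) = w**2 / (2 * sigma2) + const."""
    return w / sigma2

sigma2 = 0.5
lam = 1.0 / (2.0 * sigma2)  # the matching penalty strength
for w in (-2.0, 0.3, 1.7):
    assert math.isclose(l2_penalty_grad(w, lam), neg_log_gaussian_grad(w, sigma2))
```

The additive constant in the negative log prior does not affect the optimum, which is why regularized training and MAP estimation pick the same weights.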

Partially Improper Gaussian Priors for Nonparametric Logistic Regression

by Nandini Raghavan, Dennis D. Cox , 1995
"... A "partially improper" Gaussian prior is considered for Bayesian inference in logistic regression. This includes generalized smoothing spline priors that are used for nonparametric inference about the logit, and also priors that correspond to generalized random effect models. Necessary and ..."
Abstract - Cited by 2 (2 self) - Add to MetaCart
A "partially improper" Gaussian prior is considered for Bayesian inference in logistic regression. This includes generalized smoothing spline priors that are used for nonparametric inference about the logit, and also priors that correspond to generalized random effect models. Necessary

Comparison of lesion detection and quantification in MAP reconstruction with Gaussian and non-Gaussian priors

by Jinyi Qi
"... Statistical image reconstruction methods based on the maximum a posteriori (MAP) principle have been developed for emission tomography. The prior distribution of the unknown image plays an important role in MAP reconstruction. The most commonly used priors are Gaussian priors, whose logarithm has a quadr ..."
Abstract - Cited by 3 (0 self) - Add to MetaCart
Statistical image reconstruction methods based on the maximum a posteriori (MAP) principle have been developed for emission tomography. The prior distribution of the unknown image plays an important role in MAP reconstruction. The most commonly used priors are Gaussian priors, whose logarithm has a
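Because the logarithm of a Gaussian prior is quadratic, MAP reconstruction with such a prior reduces to penalized least squares with a closed-form solution. A minimal 1-D sketch; the blur model, difference prior, and beta are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def map_reconstruct(A, y, beta):
    """MAP estimate under Gaussian noise and a Gaussian smoothness prior:
    minimize ||A x - y||^2 + beta * ||D x||^2, D = first differences."""
    n = A.shape[1]
    D = np.eye(n) - np.roll(np.eye(n), 1, axis=1)  # circular differences
    return np.linalg.solve(A.T @ A + beta * D.T @ D, A.T @ y)

rng = np.random.default_rng(3)
n = 64
x = np.sin(2 * np.pi * np.arange(n) / n)  # smooth ground-truth "image"
A = (np.eye(n) + np.roll(np.eye(n), 1, 1) + np.roll(np.eye(n), -1, 1)) / 3
y = A @ x + rng.normal(scale=0.05, size=n)  # blurred, noisy measurements
x_hat = map_reconstruct(A, y, beta=0.5)
```

The quadratic log-prior makes the normal equations linear, which is exactly the computational appeal of Gaussian priors that the paper weighs against non-Gaussian alternatives.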

On the stick-breaking representation of normalized inverse Gaussian priors

by S. Favaro, A. Lijoi, I. Prünster , 2012
"... Random probability measures are the main tool for Bayesian nonparametric inference, with their laws acting as prior distributions. Many well-known priors used in practice admit different, though equivalent, representations. In terms of computational convenience, stick-breaking representations stand ..."
Abstract - Cited by 7 (0 self) - Add to MetaCart
Random probability measures are the main tool for Bayesian nonparametric inference, with their laws acting as prior distributions. Many well-known priors used in practice admit different, though equivalent, representations. In terms of computational convenience, stick-breaking representations stand
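For the best-known case, the Dirichlet process, the stick-breaking representation mentioned above is straightforward to sample; the normalized inverse-Gaussian prior studied in the paper replaces the independent Beta stick proportions with a different sequence. A minimal Dirichlet-process sketch:

```python
import numpy as np

def stick_breaking(alpha, n_sticks, rng):
    """Dirichlet-process weights: v_k ~ Beta(1, alpha),
    pi_k = v_k * prod_{j<k} (1 - v_j)."""
    v = rng.beta(1.0, alpha, size=n_sticks)
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - v)[:-1]])
    return v * remaining

rng = np.random.default_rng(4)
pi = stick_breaking(alpha=2.0, n_sticks=1000, rng=rng)
```

The weights are nonnegative and sum to one in the limit; truncating at a finite number of sticks leaves only a vanishing remainder, which is the computational convenience the abstract alludes to.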
Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University