Results 1 – 10 of 6,294
Prior data for non-normal priors
, 2007
"... SUMMARY Data augmentation priors facilitate contextual evaluation of prior distributions and the generation of Bayesian outputs from frequentist software. Previous papers have presented approximate Bayesian methods using 2 × 2 tables of 'prior data' to represent lognormal relative-risk pr ..."
Cited by 3 (1 self)
for the log relative risk away from normality, while retaining the simple 2 × 2 table form of the prior data. When prior normality is preferred, it also provides a more accurate lognormal relative-risk prior in the 2 × 2 table format. For more compact representation in regression analyses, the prior data
Additivity of information value in two-act linear loss decisions with normal priors
 Risk Anal
"... Additivity of information value in two-act linear loss decisions with normal priors ..."
Cited by 3 (0 self)
Logistic Normal Priors for Unsupervised Probabilistic Grammar Induction
"... We explore a new Bayesian model for probabilistic grammars, a family of distributions over discrete structures that includes hidden Markov models and probabilistic context-free grammars. Our model extends the correlated topic model framework to probabilistic grammars, exploiting the logistic normal ..."
Cited by 34 (9 self)
Evaluating the Accuracy of Sampling-Based Approaches to the Calculation of Posterior Moments
 IN BAYESIAN STATISTICS
, 1992
"... Data augmentation and Gibbs sampling are two closely related, sampling-based approaches to the calculation of posterior moments. The fact that each produces a sample whose constituents are neither independent nor identically distributed complicates the assessment of convergence and numerical accurac ..."
Cited by 604 (12 self)
accuracy of the approximations to the expected value of functions of interest under the posterior. In this paper methods from spectral analysis are used to evaluate numerical accuracy formally and construct diagnostics for convergence. These methods are illustrated in the normal linear model
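The numerical-accuracy question this entry raises — how precise a posterior mean estimated from correlated MCMC draws actually is — can be illustrated with a pure-Python sketch. Note the paper itself uses spectral-density estimates at frequency zero; the simpler batch-means estimator substituted below targets the same quantity, and the function name `mc_standard_error` and the AR(1) demo chain are illustrative, not from the paper:

```python
import math
import random

def mc_standard_error(draws, n_batches=20):
    """Batch-means estimate of the Monte Carlo standard error of the
    sample mean of correlated draws: split the chain into batches,
    then treat the batch means as approximately independent."""
    b = len(draws) // n_batches
    means = [sum(draws[i * b:(i + 1) * b]) / b for i in range(n_batches)]
    grand = sum(means) / n_batches
    var_means = sum((m - grand) ** 2 for m in means) / (n_batches - 1)
    return math.sqrt(var_means / n_batches)

# correlated AR(1) chain as a stand-in for MCMC output
rnd = random.Random(1)
x, chain = 0.0, []
for _ in range(20000):
    x = 0.9 * x + rnd.gauss(0.0, 1.0)
    chain.append(x)
print("mean:", round(sum(chain) / len(chain), 3),
      "MC s.e.:", round(mc_standard_error(chain), 3))
```

Because the AR(1) draws are positively correlated, this standard error is larger than the naive i.i.d. formula would suggest — the same effect the spectral diagnostics in the paper are designed to quantify.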
Bayesian density estimation and inference using mixtures.
 J. Amer. Statist. Assoc.
, 1995
"... JSTOR is a not-for-profit service that helps scholars, researchers, and students discover, use, and build upon a wide range of content in a trusted digital archive. We use information technology and tools to increase productivity and facilitate new forms of scholarship. For more information about J ..."
Cited by 653 (18 self)
mixtures of normal distributions. Efficient simulation methods are used to approximate various prior, posterior, and predictive distributions. This allows for direct inference on a variety of practical issues, including problems of local versus global smoothing, uncertainty about density estimates
SMOTE: Synthetic Minority Over-sampling Technique
 Journal of Artificial Intelligence Research
, 2002
"... An approach to the construction of classifiers from imbalanced datasets is described. A dataset is imbalanced if the classification categories are not approximately equally represented. Often real-world data sets are predominately composed of ``normal'' examples with only a small percentag ..."
Cited by 634 (27 self)
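The core SMOTE idea in the snippet above — generating synthetic minority examples by interpolating between a minority point and one of its k nearest minority-class neighbours — fits in a few lines of pure Python. This is a minimal sketch of the idea, not the authors' implementation; the function name `smote`, the seeding, and the brute-force neighbour search are simplifications:

```python
import math
import random

def smote(minority, n_synthetic, k=5, seed=0):
    """Create n_synthetic new samples: pick a random minority point,
    pick one of its k nearest minority-class neighbours, and emit a
    random point on the segment between them."""
    rnd = random.Random(seed)
    n = len(minority)

    # k nearest minority-class neighbours of each minority point
    neighbours = []
    for i, p in enumerate(minority):
        order = sorted((j for j in range(n) if j != i),
                       key=lambda j: math.dist(p, minority[j]))
        neighbours.append(order[:k])

    synthetic = []
    for _ in range(n_synthetic):
        i = rnd.randrange(n)
        j = rnd.choice(neighbours[i])
        gap = rnd.random()  # random position along the segment
        synthetic.append(tuple(a + gap * (b - a)
                               for a, b in zip(minority[i], minority[j])))
    return synthetic
```

Since every synthetic point is a convex combination of two existing minority points, the new samples stay inside the minority class's region of feature space rather than simply duplicating existing examples.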
Lambertian Reflectance and Linear Subspaces
, 2000
"... We prove that the set of all reflectance functions (the mapping from surface normals to intensities) produced by Lambertian objects under distant, isotropic lighting lies close to a 9D linear subspace. This implies that, in general, the set of images of a convex Lambertian object obtained under a wi ..."
Cited by 526 (20 self)
Graphs over Time: Densification Laws, Shrinking Diameters and Possible Explanations
, 2005
"... How do real graphs evolve over time? What are “normal” growth patterns in social, technological, and information networks? Many studies have discovered patterns in static graphs, identifying properties in a single snapshot of a large network, or in a very small number of snapshots; these include hea ..."
Cited by 541 (48 self)
Loopy belief propagation for approximate inference: An empirical study
 In: Proceedings of Uncertainty in AI
, 1999
"... Recently, researchers have demonstrated that "loopy belief propagation", the use of Pearl's polytree algorithm in a Bayesian network with loops, can perform well in the context of error-correcting codes. The most dramatic instance of this is the near Shannon-limit performanc ..."
Cited by 676 (15 self)
modification to the update rules in that we normalized both λ and π messages at each iteration. Nodes were updated in parallel: at each iteration, all nodes calculated their outgoing messages based on the incoming messages of their neighbors from the previous iteration. The messages were said
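The modified scheme this snippet describes — synchronous (parallel) updates with every message normalized each iteration — can be sketched for a small pairwise binary MRF. This is a hedged illustration in sum-product form: the paper works with Pearl's λ/π messages on Bayesian networks, so this pairwise-MRF formulation and the function name `loopy_bp` are simplifications:

```python
def loopy_bp(unary, pairwise, edges, n_iters=50):
    """Sum-product loopy BP over binary variables.  All messages are
    recomputed in parallel from the previous iteration's messages and
    normalized to sum to one, as in the study's modified update rule."""
    msgs, nbrs = {}, {}
    for i, j in edges:
        msgs[(i, j)] = [1.0, 1.0]
        msgs[(j, i)] = [1.0, 1.0]
        nbrs.setdefault(i, []).append(j)
        nbrs.setdefault(j, []).append(i)
    for _ in range(n_iters):
        new = {}
        for (i, j) in msgs:
            out = []
            for xj in (0, 1):
                total = 0.0
                for xi in (0, 1):
                    prod = unary[i][xi] * pairwise(xi, xj)
                    for k in nbrs[i]:
                        if k != j:              # exclude the recipient
                            prod *= msgs[(k, i)][xi]
                    total += prod
                out.append(total)
            z = sum(out)                        # normalize the message
            new[(i, j)] = [v / z for v in out]
        msgs = new                              # synchronous update
    # node beliefs: unary potential times all incoming messages
    beliefs = {}
    for i in nbrs:
        b = []
        for x in (0, 1):
            prod = unary[i][x]
            for k in nbrs[i]:
                prod *= msgs[(k, i)][x]
            b.append(prod)
        z = sum(b)
        beliefs[i] = [v / z for v in b]
    return beliefs
```

On a three-node loop with attractive pairwise potentials, biasing one node's unary potential pulls every node's belief in the same direction, even though the graph has a cycle — the kind of behavior the empirical study examines.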
Linear Versus Nonlinear Rules For Mixture Normal Priors
"... Introduction Partial prior information can be well formalized and leads naturally to the description of a class of priors Γ that forms the basis for the Γ-minimax approach (Skibinsky and Cote (1962), Kudō (1967)). If prior information is scarce, the class Γ of priors under consider ..."