Results 1–10 of 407
Dirichlet Posterior Sampling with Truncated Multinomial Likelihoods
, 2012
"... This document considers the problem of drawing samples from posterior distributions formed under a Dirichlet prior and a truncated multinomial likelihood, by which we mean a Multinomial likelihood function where we condition on one or more counts being zero a priori. An example is the distribution ..."
Abstract
 Add to MetaCart
This document considers the problem of drawing samples from posterior distributions formed under a Dirichlet prior and a truncated multinomial likelihood, by which we mean a Multinomial likelihood function where we condition on one or more counts being zero a priori. An example is the distribution …
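One way to see the structure of this posterior: by the aggregation property of the Dirichlet, conditioning the multinomial on the excluded counts being zero means the likelihood informs only the within-support proportions, so the total excluded mass and the excluded-category proportions keep their priors while the support proportions get the usual conjugate count update. A minimal NumPy sketch of that decomposition (our own illustration, not necessarily the paper's algorithm; names are hypothetical, and at least one excluded category is assumed):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_truncated_posterior(counts, alpha, support, size=1):
    """Draw samples from the posterior of p under a Dirichlet(alpha) prior
    and a multinomial likelihood renormalized to `support` (i.e. we
    condition a priori on the other counts being zero).

    Writing p = (P_S * q, (1 - P_S) * r), with q on the support and r on
    the excluded categories, the truncated likelihood depends on q alone:
    only q receives the conjugate count update, while P_S and r retain
    their priors (Dirichlet aggregation).  Assumes `support` is a proper
    subset of the categories."""
    counts = np.asarray(counts, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    K = len(alpha)
    support = np.asarray(support)
    excluded = np.setdiff1d(np.arange(K), support)

    q = rng.dirichlet(alpha[support] + counts[support], size)          # conjugate update
    P_S = rng.beta(alpha[support].sum(), alpha[excluded].sum(), size)  # prior, unchanged
    r = rng.dirichlet(alpha[excluded], size)                           # prior, unchanged

    p = np.empty((size, K))
    p[:, support] = P_S[:, None] * q
    p[:, excluded] = (1 - P_S)[:, None] * r
    return p
```

With counts (5, 3, 2, 0), a flat Dirichlet(1, 1, 1, 1) prior, and category 3 excluded, the excluded probability reverts to its prior mean of 1/4 rather than shrinking toward the conjugate value 1/14.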
Additive Logistic Regression: a Statistical View of Boosting
 Annals of Statistics
, 1998
"... Boosting (Freund & Schapire 1996, Schapire & Singer 1998) is one of the most important recent developments in classification methodology. The performance of many classification algorithms can often be dramatically improved by sequentially applying them to reweighted versions of the input dat ..."
Abstract

Cited by 1750 (25 self)
 Add to MetaCart
be viewed as an approximation to additive modeling on the logistic scale using maximum Bernoulli likelihood as a criterion. We develop more direct approximations and show that they exhibit nearly identical results to boosting. Direct multiclass generalizations based on multinomial likelihood are derived
Maximum entropy Markov models for information extraction and segmentation
, 2000
"... Hidden Markov models (HMMs) are a powerful probabilistic tool for modeling sequential data, and have been applied with success to many textrelated tasks, such as partofspeech tagging, text segmentation and information extraction. In these cases, the observations are usually modeled as multinomial ..."
Abstract

Cited by 561 (18 self)
 Add to MetaCart
as multinomial distributions over a discrete vocabulary, and the HMM parameters are set to maximize the likelihood of the observations. This paper presents a new Markovian sequence model, closely related to HMMs, that allows observations to be represented as arbitrary overlapping features (such as word
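The contrast drawn here — generative multinomial emissions versus conditional distributions over arbitrary overlapping features — can be made concrete with the MEMM's per-state softmax transition distribution. A small sketch (the weight layout and names are our own illustration, not the paper's code):

```python
import numpy as np

def memm_transition_probs(W, prev_state, feats):
    """P(s' | s, o) for a maximum-entropy Markov model: one softmax
    (multinomial logistic) distribution per previous state s, scored on
    observation features `feats`.  Unlike an HMM's generative multinomial
    over a vocabulary, `feats` may contain arbitrary overlapping features.

    W: shape (n_states, n_states, n_feats); W[s] maps the feature vector
    to one score per next state s'.  (Layout is illustrative.)"""
    scores = W[prev_state] @ feats      # (n_states,) unnormalized scores
    scores -= scores.max()              # subtract max for numerical stability
    e = np.exp(scores)
    return e / e.sum()
```

Decoding then proceeds as in an HMM (e.g. Viterbi), but with these conditional transition distributions in place of separate transition and emission multinomials.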
Mixed MNL Models for Discrete Response
 Journal of Applied Econometrics 15: 447–470 (2000)
, 2000
"... This paper considers mixed, or random coefficients, multinomial logit (MMNL) models for discrete response, and establishes the following results. Under mild regularity conditions, any discrete choice model derived from random utility maximization has choice probabilities that can be approximated as ..."
Abstract

Cited by 487 (15 self)
 Add to MetaCart
This paper considers mixed, or random coefficients, multinomial logit (MMNL) models for discrete response, and establishes the following results. Under mild regularity conditions, any discrete choice model derived from random utility maximization has choice probabilities that can be approximated
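The approximation result can be illustrated with the standard simulation estimator for mixed-logit choice probabilities: average plain-logit probabilities over draws of the random coefficients. A hedged NumPy sketch (an independent normal mixing distribution is assumed purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def mixed_logit_probs(X, mu, sigma, n_draws=5000):
    """Simulated choice probabilities for a mixed (random-coefficients)
    multinomial logit: beta ~ Normal(mu, diag(sigma^2)) and
        P(j) = E_beta[ exp(x_j' beta) / sum_k exp(x_k' beta) ],
    approximated by averaging plain-logit probabilities over draws of beta.

    X: (n_alternatives, n_attributes) attribute matrix for one decision
    maker."""
    beta = mu + sigma * rng.standard_normal((n_draws, len(mu)))  # (R, d) draws
    U = beta @ X.T                        # (R, J) simulated utilities
    U -= U.max(axis=1, keepdims=True)     # stabilize the softmax
    P = np.exp(U)
    P /= P.sum(axis=1, keepdims=True)     # per-draw logit probabilities
    return P.mean(axis=0)                 # simulation average
```

With sigma = 0 every draw collapses to the plain multinomial logit, so the estimator reduces exactly to a softmax of the mean utilities.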
An exact likelihood analysis of the multinomial probit model
, 1994
"... We develop new methods for conducting a finite sample, likelihoodbased analysis of the multinomial probit model. Using a variant of the Gibbs sampler, an algorithm is developed to draw from the exact posterior of the multinomial probit model with correlated errors. This approach avoids direct evalu ..."
Abstract

Cited by 166 (6 self)
 Add to MetaCart
We develop new methods for conducting a finite sample, likelihoodbased analysis of the multinomial probit model. Using a variant of the Gibbs sampler, an algorithm is developed to draw from the exact posterior of the multinomial probit model with correlated errors. This approach avoids direct
Dirichlet–multinomial log-likelihood function
, 2014
An efficient algorithm for accurate computation of the …
Naive Bayesian classifiers for multinomial features: a theoretical analysis
"... We investigate the use of naive Bayesian classifiers for multinomial feature spaces and derive error estimates for these classifiers. The error analysis is done by developing a mathematical model to estimate the probability density functions for all multinomial likelihood functions describing differ ..."
Abstract

Cited by 1 (1 self)
 Add to MetaCart
We investigate the use of naive Bayesian classifiers for multinomial feature spaces and derive error estimates for these classifiers. The error analysis is done by developing a mathematical model to estimate the probability density functions for all multinomial likelihood functions describing
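The multinomial likelihood functions underlying such classifiers are straightforward to write down; a minimal multinomial naive Bayes with Laplace smoothing (our own sketch, not the paper's model) shows the per-class structure:

```python
import numpy as np

def fit_multinomial_nb(X, y, n_classes, smoothing=1.0):
    """Multinomial naive Bayes: each class c models its feature counts with
    a single multinomial theta_c, so log P(x | c) = const + sum_j x_j * log
    theta_{cj}.  Laplace smoothing avoids zero-probability features.
    X: (n_docs, n_features) count matrix; y: class labels in [0, n_classes)."""
    log_prior = np.log(np.bincount(y, minlength=n_classes) / len(y))
    log_theta = np.empty((n_classes, X.shape[1]))
    for c in range(n_classes):
        counts = X[y == c].sum(axis=0) + smoothing   # smoothed class counts
        log_theta[c] = np.log(counts / counts.sum()) # multinomial parameters
    return log_prior, log_theta

def predict(X, log_prior, log_theta):
    """Argmax of the class-conditional log-likelihood plus log prior
    (the multinomial coefficient is constant across classes)."""
    return np.argmax(X @ log_theta.T + log_prior, axis=1)
```

The linearity of the log-likelihood in the counts is what makes the error analysis described above tractable: each class score is a weighted sum of feature counts.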
Alternative Computational Approaches to Inference in the Multinomial Probit Model
 Review of Economics and Statistics
, 1994
"... AbstractThis research compares several approaches to inference in the multinomial probit model, based on two Monte Carlo experiments for a seven choice model. The methods compared are the simulated maximum likelihood estimator using the GHK recursive probability,simulator, the method of simulated ..."
Abstract

Cited by 67 (2 self)
 Add to MetaCart
AbstractThis research compares several approaches to inference in the multinomial probit model, based on two Monte Carlo experiments for a seven choice model. The methods compared are the simulated maximum likelihood estimator using the GHK recursive probability,simulator, the method of simulated
A Fast Normalized Maximum Likelihood Algorithm for Multinomial Data
 In Proceedings of the Nineteenth International Joint Conference on Artificial Intelligence (IJCAI-05)
, 2005
"... Stochastic complexity of a data set is defined as the shortest possible code length for the data obtainable by using some fixed set of models. This measure is of great theoretical and practical importance as a tool for tasks such as model selection or data clustering. In the case of multinomial data ..."
Abstract

Cited by 5 (3 self)
 Add to MetaCart
Stochastic complexity of a data set is defined as the shortest possible code length for the data obtainable by using some fixed set of models. This measure is of great theoretical and practical importance as a tool for tasks such as model selection or data clustering. In the case of multinomial
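For tiny alphabets and sample sizes, the normalized maximum likelihood (NML) normalizer that defines multinomial stochastic complexity can be computed by brute force, which makes the definition concrete (the paper's point is computing this normalizer efficiently; the sketch below enumerates all count vectors and is purely illustrative):

```python
from itertools import product
from math import factorial, log, prod

def multinomial_ml(counts):
    """Maximized multinomial likelihood P(counts | theta_hat), with the
    plug-in estimate theta_hat_k = counts_k / N."""
    N = sum(counts)
    coef = factorial(N)
    for c in counts:
        coef //= factorial(c)
    return coef * prod((c / N) ** c for c in counts if c)

def nml_normalizer(K, N):
    """C(K, N) = sum over all K-category count vectors summing to N of the
    maximized likelihood -- brute force over compositions (exponential in
    K; fine only for toy sizes)."""
    total = 0.0
    for n in product(range(N + 1), repeat=K):
        if sum(n) == N:
            total += multinomial_ml(n)
    return total

def stochastic_complexity(counts, K):
    """NML code length: -log of the maximized likelihood of the observed
    counts divided by the normalizer."""
    return -log(multinomial_ml(counts) / nml_normalizer(K, sum(counts)))
```

For example, with K = 2 and N = 2 the normalizer is 1 + 1 + 0.5 = 2.5 (the two pure count vectors each contribute 1, the split vector contributes 0.5).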
Motivation for the Multinomial Model
"... The multinomial model that we use in our likelihood function does not assume that the observed read depths in different intervals are independent. Even though we assume that reads are distributed uniformly on the cancer genome, large copy number aberrations (e.g. gain and loss of whole chromosomes) ..."
Abstract
 Add to MetaCart
The multinomial model that we use in our likelihood function does not assume that the observed read depths in different intervals are independent. Even though we assume that reads are distributed uniformly on the cancer genome, large copy number aberrations (e.g. gain and loss of whole chromosomes