Results 1–10 of 817,642
Log-Linear Models, 2004
Cited by 5 (0 self)
Abstract: This is yet another introduction to log-linear (“maximum entropy”) models for NLP practitioners, in the spirit of Berger (1996) and Ratnaparkhi (1997b). The derivations here are similar to Berger’s, but more details are filled in and some errors are corrected. I do not address iterative scaling ...
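The conditional log-linear (maximum entropy) model that this kind of introduction covers can be sketched in a few lines. The feature function, weights, and labels below are hypothetical illustrations, not taken from the paper:

```python
import math

def loglinear_prob(weights, features, x, labels):
    """Conditional log-linear (maximum entropy) model:
    p(y|x) = exp(w . f(x, y)) / sum_y' exp(w . f(x, y'))."""
    scores = {y: sum(weights.get(k, 0.0) * v
                     for k, v in features(x, y).items())
              for y in labels}
    m = max(scores.values())  # log-sum-exp stabilization
    z = sum(math.exp(s - m) for s in scores.values())
    return {y: math.exp(scores[y] - m) / z for y in scores}

# Toy indicator features: one feature per (token, label) pair.
def features(x, y):
    return {f"{tok}&{y}": 1.0 for tok in x.split()}

# Hypothetical weights for a two-label toy problem.
weights = {"what&QUESTION": 1.2, "the&STATEMENT": 0.3}
p = loglinear_prob(weights, features, "what is this", ["QUESTION", "STATEMENT"])
```

The log-sum-exp shift by the maximum score keeps the normalization numerically stable when scores are large.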
Question Classification with Log-Linear Models
Cited by 8 (0 self)
Abstract: Question classification has become a crucial step in modern question answering systems. Previous work has demonstrated the effectiveness of statistical machine learning approaches to this problem. This paper presents a new approach to building a question classifier using log-linear models. Evidence ...
Bayesian Selection of Log-Linear Models
Canadian Journal of Statistics, 1995
Cited by 8 (2 self)
Abstract: A general methodology is presented for finding suitable Poisson log-linear models with applications to multi-way contingency tables. Mixtures of multivariate normal distributions are used to model prior opinion when a subset of the regression vector is believed to be non-zero. This prior distribution ...
Log-Linear Models: A Didactic
Journal of Educational Statistics, 1981
Cited by 4 (0 self)
Abstract: The recently developed log-linear model technique for the analysis of contingency tables has many potential applications within educational research. This paper describes the two major models, log-linear and logit-linear, that are employed under this approach. The basic logic of each is developed ...
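The contingency-table analysis this didactic describes can be illustrated with the simplest log-linear model, independence (log m_ij = u + u_i + u_j), fitted by its closed-form expected counts. The table below is made-up data, not from the paper:

```python
import math

# Hypothetical 2x2 contingency table of observed counts.
table = [[30, 10],
         [20, 40]]

n = sum(sum(row) for row in table)
row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]

# Under independence, fitted counts are m_ij = (row_i * col_j) / n.
expected = [[r * c / n for c in col_totals] for r in row_totals]

# Likelihood-ratio statistic G^2 = 2 * sum_ij obs_ij * ln(obs_ij / exp_ij);
# a large value indicates the interaction term is needed.
g2 = 2 * sum(o * math.log(o / e)
             for obs_row, exp_row in zip(table, expected)
             for o, e in zip(obs_row, exp_row))
```

Comparing G^2 against a chi-squared distribution (1 degree of freedom for a 2x2 table) tests whether the independence model fits.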
Log-Linear Models for Wide-Coverage CCG Parsing, 2003
Cited by 46 (7 self)
Abstract: This paper describes log-linear parsing models for Combinatory Categorial Grammar (CCG). Log-linear models can easily encode the long-range dependencies inherent in coordination and extraction phenomena, which CCG was designed to handle. Log-linear models have previously been applied ...
Log-Linear Models for Label Ranking, 2003
Cited by 107 (5 self)
Abstract: Label ranking is the task of inferring a total order over a predefined set of labels for each given instance. We present a general framework for batch learning of label ranking functions from supervised data. We assume that each instance in the training data is associated with a list of preferences over the label set; however, we do not assume that this list is either complete or consistent. This enables us to accommodate a variety of ranking problems. In contrast to the general form of the supervision, our goal is to learn a ranking function that induces a total order over the entire set of labels. Special cases of our setting are multi-label categorization and hierarchical classification. We present a general boosting-based learning algorithm for the label ranking problem and prove a lower bound on the progress of each boosting iteration. The applicability of our approach is demonstrated with a set of experiments on a large-scale text corpus.
Latent log-linear models for handwritten digit classification
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2011
Cited by 2 (0 self)
Abstract: We present latent log-linear models, an extension of log-linear models incorporating latent variables, and we propose two applications thereof: log-linear mixture models and image deformation-aware log-linear models. The resulting models are fully discriminative, can be trained efficiently, ...
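The log-linear mixture model mentioned in this abstract marginalizes a discrete latent component h, scoring each label as a sum over components. A minimal sketch, with hypothetical weights and features (the paper works with image features, not these toy vectors):

```python
import math

def mixture_prob(W, f, labels, num_latent):
    """Log-linear mixture: p(y|x) proportional to
    sum_h exp(w_{y,h} . f(x)), with W[(y, h)] a weight vector
    per (label, latent component) pair."""
    unnorm = {y: sum(math.exp(sum(wi * fi for wi, fi in zip(W[(y, h)], f)))
                     for h in range(num_latent))
              for y in labels}
    z = sum(unnorm.values())
    return {y: v / z for y, v in unnorm.items()}

# Toy example: label "a" has two latent components, each matching
# one feature dimension; label "b" is uninformative.
W = {("a", 0): [1.0, 0.0], ("a", 1): [0.0, 1.0],
     ("b", 0): [0.0, 0.0], ("b", 1): [0.0, 0.0]}
p = mixture_prob(W, [1.0, 1.0], ["a", "b"], 2)
```

Because the latent variable is summed out inside the normalization, the model stays discriminative while gaining the extra capacity of the mixture.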
Log-linear modeling using conditional log-linear structures
Keywords: Hierarchical model · Interaction factor · Log-linear model · Möbius inversion
Deformation-Aware Log-Linear Models
Cited by 2 (2 self)
Abstract: In this paper, we present a novel deformation-aware discriminative model for handwritten digit recognition. Unlike previous approaches, our model directly considers image deformations and allows discriminative training of all parameters, including those accounting for non-linear transformations of the image. This is achieved by extending a log-linear framework to incorporate a latent deformation variable. The resulting model has an order of magnitude fewer parameters than competing approaches to handling image deformations. We tune and evaluate our approach on the USPS task and show its ...