CiteSeerX

Results 1 - 10 of 4,246

A Maximum-Entropy-Inspired Parser

by Eugene Charniak, 1999
"... We present a new parser for parsing down to Penn tree-bank style parse trees that achieves 90.1% average precision/recall for sentences of length 40 and less, and 89.5% for sentences of length 100 and less when trained and tested on the previously established [5,9,10,15,17] "standard" se ..."
Abstract - Cited by 971 (19 self)
... sections of the Wall Street Journal tree-bank. This represents a 13% decrease in error rate over the best single-parser results on this corpus [9]. The major technical innovation is the use of a "maximum-entropy-inspired" model for conditioning and smoothing that let us successfully test ...

A Maximum Entropy approach to Natural Language Processing

by Adam L. Berger, Stephen A. Della Pietra, Vincent J. Della Pietra - COMPUTATIONAL LINGUISTICS, 1996
"... The concept of maximum entropy can be traced back along multiple threads to Biblical times. Only recently, however, have computers become powerful enough to permit the widescale application of this concept to real world problems in statistical estimation and pattern recognition. In this paper we des ..."
Abstract - Cited by 1366 (5 self)
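
Since nearly every result on this page builds on the same construction, it may help to state it once. In the standard formulation (generic notation, not copied from this paper's snippet), maximum entropy picks the most uniform model consistent with the observed feature expectations:

    p^{*} = \arg\max_{p \in C} H(p),
    \qquad C = \{\, p : E_p[f_i] = E_{\tilde p}[f_i],\ i = 1, \dots, n \,\}

and the solution takes the log-linear (exponential) form

    p_{\lambda}(y \mid x) = \frac{1}{Z_{\lambda}(x)} \exp\Big( \sum_i \lambda_i f_i(x, y) \Big),
    \qquad Z_{\lambda}(x) = \sum_{y'} \exp\Big( \sum_i \lambda_i f_i(x, y') \Big).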

Using Maximum Entropy for Text Classification

by Kamal Nigam, John Lafferty, Andrew McCallum, 1999
"... This paper proposes the use of maximum entropy techniques for text classification. Maximum entropy is a probability distribution estimation technique widely used for a variety of natural language tasks, such as language modeling, part-of-speech tagging, and text segmentation. The underlying principl ..."
Abstract - Cited by 326 (6 self)
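
A minimal sketch of the scoring step such a classifier performs at test time, assuming word-count features and already-trained weights (the identifiers and toy weights below are illustrative, not taken from the paper):

    import math
    from collections import Counter

    def maxent_posterior(doc_tokens, classes, weights):
        """Compute p(c | d) for a log-linear (maximum entropy) text classifier.
        `weights` maps (word, class) pairs to learned parameters lambda_i;
        the features here are plain word counts, a common choice for text."""
        counts = Counter(doc_tokens)
        scores = {}
        for c in classes:
            # unnormalized log-score: sum_i lambda_i * f_i(d, c)
            scores[c] = sum(weights.get((word, c), 0.0) * n for word, n in counts.items())
        z = sum(math.exp(s) for s in scores.values())  # partition function Z(d)
        return {c: math.exp(s) / z for c, s in scores.items()}

    # toy usage with made-up weights
    w = {("entropy", "science"): 1.2, ("entropy", "sports"): -0.4}
    print(maxent_posterior(["maximum", "entropy"], ["science", "sports"], w))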

Maximum entropy markov models for information extraction and segmentation

by Andrew McCallum, Dayne Freitag, Fernando Pereira, 2000
"... Hidden Markov models (HMMs) are a powerful probabilistic tool for modeling sequential data, and have been applied with success to many text-related tasks, such as part-of-speech tagging, text segmentation and information extraction. In these cases, the observations are usually modeled as multinomial ..."
Abstract - Cited by 561 (18 self)
... capitalization, formatting, part-of-speech), and defines the conditional probability of state sequences given observation sequences. It does this by using the maximum entropy framework to fit a set of exponential models that represent the probability of a state given an observation and the previous state. We ...
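
In the usual MEMM notation (a sketch; the paper's own symbols may differ slightly), the per-state exponential model described in the snippet is

    P_{s'}(s \mid o) = \frac{1}{Z(o, s')} \exp\Big( \sum_a \lambda_a f_a(o, s) \Big),

where the binary features f_a inspect the current observation o and next state s, the weights \lambda_a are fit by maximum entropy training, and Z(o, s') normalizes over next states; the probability of a whole state sequence is the product of these per-step conditionals.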

Discriminative Training and Maximum Entropy Models for Statistical Machine Translation

by Franz Josef Och, Hermann Ney, 2002
"... We present a framework for statistical machine translation of natural languages based on direct maximum entropy models, which contains the widely used source-channel approach as a special case. All knowledge sources are treated as feature functions, which depend on the source language senten ..."
Abstract - Cited by 508 (30 self)
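
The direct model referred to here is the log-linear decision rule (standard notation for this framework):

    \hat{e} = \arg\max_{e} \Pr(e \mid f) = \arg\max_{e} \sum_{m=1}^{M} \lambda_m h_m(e, f).

Each knowledge source is a feature function h_m of the source sentence f and candidate translation e; the source-channel approach falls out as the special case with two features, h_1 = \log p(e) and h_2 = \log p(f \mid e), each weighted by 1.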

A Maximum Entropy Model for Part-Of-Speech Tagging

by Adwait Ratnaparkhi, 1996
"... This paper presents a statistical model which trains from a corpus annotated with Part-Of-Speech tags and assigns them to previously unseen text with state-of-the-art accuracy (96.6%). The model can be classified as a Maximum Entropy model and simultaneously uses many contextual "features" t ..."
Abstract - Cited by 580 (1 self)
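
A rough illustration of what "contextual features" means for a tagger of this kind; the templates below are simplified stand-ins, not the exact feature set of the paper:

    def contextual_features(words, tags, i):
        """Illustrative contextual features for the word at position i, in the
        spirit of maximum entropy taggers (simplified, hypothetical templates)."""
        w = words[i]
        feats = {
            "word=" + w,
            "prefix3=" + w[:3],
            "suffix3=" + w[-3:],
            "prev_word=" + (words[i - 1] if i > 0 else "<BOS>"),
            "next_word=" + (words[i + 1] if i + 1 < len(words) else "<EOS>"),
            "prev_tag=" + (tags[i - 1] if i > 0 else "<BOS>"),
        }
        if any(ch.isdigit() for ch in w):
            feats.add("has_digit")
        return feats

    # Each feature, paired with a candidate tag, becomes a binary f_i(history, tag)
    # scored by the usual log-linear model.
    print(contextual_features(["maximum", "entropy", "models"], ["JJ", "NN"], 2))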

Maximum Entropy Discrimination

by Tommi Jaakkola, Marina Meila, Tony Jebara, 1999
"... We present a general framework for discriminative estimation based on the maximum entropy principle and its extensions. All calculations involve distributions over structures and/or parameters rather than specific settings and reduce to relative entropy projections. This holds even when the data is ..."
Abstract - Cited by 138 (20 self)
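
Written schematically (notation simplified, not quoted from the paper), the relative entropy projection mentioned above is

    \min_{P}\ \mathrm{KL}(P \,\|\, P_0)
    \quad \text{s.t.} \quad E_{P}\big[\, y_t \, \mathcal{L}(X_t \mid \Theta) - \gamma \,\big] \ge 0 \quad \forall t,

that is, among all distributions P over model parameters \Theta that classify each training example (X_t, y_t) with expected margin at least \gamma, pick the one closest to a prior P_0; with a flat prior this is exactly a maximum entropy problem.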

Filters, Random Fields and Maximum Entropy . . .

by Song Chun Zhu, Yingnian Wu, David Mumford - INTERNATIONAL JOURNAL OF COMPUTER VISION, 1998
"... This article presents a statistical theory for texture modeling. This theory combines filtering theory and Markov random field modeling through the maximum entropy principle, and interprets and clarifies many previous concepts and methods for texture analysis and synthesis from a unified point of vi ..."
Abstract - Cited by 233 (16 self)
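
The resulting texture model (written schematically; the paper derives it as the maximum entropy distribution matching observed filter-response statistics) is a Gibbs / Markov random field of the form

    p(I; \Lambda) = \frac{1}{Z(\Lambda)} \exp\Big( -\sum_{k} \langle \lambda_k, H_k(I) \rangle \Big),

where H_k(I) is the histogram of responses of image I to the k-th filter and the multipliers \lambda_k are chosen so the model reproduces the histograms observed in the example texture.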

A Comparison of Algorithms for Maximum Entropy Parameter Estimation

by Robert Malouf
"... A comparison of algorithms for maximum entropy parameter estimation Conditional maximum entropy (ME) models provide a general purpose machine learning technique which has been successfully applied to fields as diverse as computer vision and econometrics, and which is used for a wide variety of class ..."
Abstract - Cited by 290 (2 self)
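
All of the estimation algorithms such a comparison covers work from the same quantities: the conditional log-likelihood and its gradient, which is observed minus expected feature counts. A small sketch of that computation (names and data layout are illustrative, not taken from the paper):

    import numpy as np

    def loglik_and_gradient(lam, feats, labels):
        """Conditional log-likelihood and gradient for a maxent model.
        `feats[i]` is a (num_classes x num_features) array for example i;
        `labels[i]` is the index of its true class."""
        ll, grad = 0.0, np.zeros_like(lam)
        for x, y in zip(feats, labels):
            scores = x @ lam                     # sum_j lambda_j f_j(x, c), per class
            logz = np.logaddexp.reduce(scores)   # log of the partition function
            p = np.exp(scores - logz)            # model distribution p(c | x)
            ll += scores[y] - logz
            grad += x[y] - p @ x                 # observed minus expected feature counts
        return ll, grad

    # Plain gradient ascent, the simplest strategy in such a comparison:
    #   lam = lam + step_size * grad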

The Latent Maximum Entropy Principle

by Shaojun Wang, Dale Schuurmans, Yunxin Zhao - In Proc. of ISIT, 2002
"... We present an extension to Jaynes' maximum entropy principle that handles latent variables. The principle of latent maximum entropy we propose is different from both Jaynes' maximum entropy principle and maximum likelihood estimation, but often yields better estimates in the presence of h ..."
Abstract - Cited by 19 (5 self)