Shallow Parsing with Conditional Random Fields (2003)

by Fei Sha, Fernando Pereira
Citations: 579 (8 self)

BibTeX

@INPROCEEDINGS{Sha03shallowparsing,
    author = {Fei Sha and Fernando Pereira},
    title = {Shallow Parsing with Conditional Random Fields},
    year = {2003},
    pages = {213--220}
}


Abstract

Conditional random fields for sequence labeling offer advantages over both generative models like HMMs and classifiers applied at each sequence position. Among sequence labeling tasks in language processing, shallow parsing has received much attention, with the development of standard evaluation datasets and extensive comparison among methods. We show here how to train a conditional random field to achieve performance as good as any reported base noun-phrase chunking method on the CoNLL task, and better than any reported single model. Improved training methods based on modern optimization algorithms were critical in achieving these results. We present extensive comparisons between models and training methods that confirm and strengthen previous results on shallow parsing and training methods for maximum-entropy models.
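The abstract describes linear-chain conditional random fields for sequence labeling. As a minimal sketch (not the authors' implementation, and with illustrative function names), the forward recursion below computes the log partition function that CRF training must evaluate, and scores a candidate label sequence; their difference is the conditional log-likelihood that the optimization algorithms maximize.

```python
import numpy as np

def crf_log_partition(emissions, transitions):
    """Log partition function log Z of a linear-chain CRF, via the
    forward recursion in log space.

    emissions:   (T, K) array, per-position label scores (log-space)
    transitions: (K, K) array, label-to-label scores (log-space)
    """
    alpha = emissions[0].astype(float)          # forward scores at position 0
    for t in range(1, len(emissions)):
        # log-sum-exp over the previous label, for each current label
        scores = alpha[:, None] + transitions + emissions[t][None, :]
        m = scores.max(axis=0)
        alpha = m + np.log(np.exp(scores - m).sum(axis=0))
    m = alpha.max()
    return m + np.log(np.exp(alpha - m).sum())

def sequence_score(emissions, transitions, labels):
    """Unnormalized log score of one label sequence."""
    s = emissions[0, labels[0]]
    for t in range(1, len(labels)):
        s += transitions[labels[t - 1], labels[t]] + emissions[t, labels[t]]
    return s

def conditional_log_likelihood(emissions, transitions, labels):
    """log p(labels | inputs) = score(labels) - log Z."""
    return sequence_score(emissions, transitions, labels) - \
        crf_log_partition(emissions, transitions)
```

With all scores zero, every labeling of a length-T sequence over K labels is equally likely, so `crf_log_partition` returns `T * log(K)` and the conditional log-likelihood of any sequence is `-T * log(K)`, which is a convenient sanity check for the recursion.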

Keyphrases

conditional random field, shallow parsing, CoNLL task, sequence labeling, language processing, maximum-entropy model, standard evaluation dataset, modern optimization algorithm, training method, generative model
