Efficient L1 regularized logistic regression (2006)


Download Links

  • [www.cs.washington.edu]
  • [ai.stanford.edu]
  • [www.aaai.org]
  • [www.robotics.stanford.edu]
  • [www.cs.stanford.edu]
  • [www.stanford.edu]
  • [www.stat.columbia.edu]
  • [www.eecs.umich.edu]
  • [web.eecs.umich.edu]

  • Other Repositories/Bibliography: DBLP
by Su-In Lee, Honglak Lee, Pieter Abbeel, Andrew Y. Ng
Venue: AAAI-06
Citations: 68 (4 self)

BibTeX

@INPROCEEDINGS{Lee06efficientl1,
    author = {Su-In Lee and Honglak Lee and Pieter Abbeel and Andrew Y. Ng},
    title = {Efficient {L1} regularized logistic regression},
    booktitle = {AAAI-06},
    year = {2006}
}

Abstract

L1 regularized logistic regression is now a workhorse of machine learning: it is widely used for many classification problems, particularly ones with many features. L1 regularized logistic regression requires solving a convex optimization problem. However, standard algorithms for solving convex optimization problems do not scale well enough to handle the large datasets encountered in many practical settings. In this paper, we propose an efficient algorithm for L1 regularized logistic regression. Our algorithm iteratively approximates the objective function by a quadratic approximation at the current point, while maintaining the L1 constraint. In each iteration, it uses the efficient LARS (Least Angle Regression) algorithm to solve the resulting L1 constrained quadratic optimization problem. Our theoretical results show that our algorithm is guaranteed to converge to the global optimum. Our experiments show that our algorithm significantly outperforms standard algorithms for solving convex optimization problems. Moreover, our algorithm outperforms four previously published algorithms that were specifically designed to solve the L1 regularized logistic regression problem.
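
The abstract outlines an IRLS-style scheme: at the current point, replace the logistic loss with its second-order (weighted least-squares) approximation, then solve that quadratic subproblem under the L1 penalty. The following Python sketch illustrates that outer loop; it is not the authors' implementation. It substitutes scikit-learn's coordinate-descent Lasso for LARS on each subproblem, uses the penalty rather than the constraint formulation, and the function name l1_logreg_irls and the lam/n penalty scaling are illustrative assumptions.

import numpy as np
from sklearn.linear_model import Lasso

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def l1_logreg_irls(X, y, lam, n_iter=30, tol=1e-6):
    # Hypothetical sketch: IRLS outer loop with an L1-penalized
    # weighted least-squares subproblem solved at each iteration.
    # X: (n, d) features; y: (n,) labels in {0, 1}; lam: L1 weight.
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(n_iter):
        eta = X @ theta
        p = sigmoid(eta)
        w = np.clip(p * (1.0 - p), 1e-6, None)  # IRLS weights (Hessian diagonal)
        z = eta + (y - p) / w                   # working response (linearization)
        # Quadratic model of the logistic loss at theta:
        #   0.5 * sum_i w_i * (z_i - x_i . t)^2  +  lam * ||t||_1
        # Rescale rows by sqrt(w) so a standard lasso solver applies.
        sw = np.sqrt(w)
        Xw = X * sw[:, None]
        zw = z * sw
        # sklearn's Lasso minimizes (1/(2n))||zw - Xw t||^2 + alpha ||t||_1,
        # so alpha = lam / n reproduces the penalty above.
        sub = Lasso(alpha=lam / n, fit_intercept=False, max_iter=5000)
        sub.fit(Xw, zw)
        theta_new = sub.coef_.copy()
        if np.max(np.abs(theta_new - theta)) < tol:
            return theta_new
        theta = theta_new
    return theta

# Usage on synthetic data: three informative features out of ten.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
beta = np.zeros(10)
beta[:3] = [2.0, -1.5, 1.0]
y = (rng.random(200) < sigmoid(X @ beta)).astype(float)
print(np.round(l1_logreg_irls(X, y, lam=5.0), 2))

In the paper's constrained formulation, each subproblem would instead be solved with LARS along its regularization path; the global-optimum guarantee cited in the abstract also relies on safeguards between iterates (such as a line search) that this sketch omits.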

Keyphrases

logistic regression, efficient L1, convex optimization problem, standard algorithms, current point, many features, logistic regression problem, global optimum, efficient algorithm, quadratic optimization problem, least angle regression, many practical settings, machine learning, quadratic approximation, efficient LARS, large datasets, theoretical results, algorithm outperforms, L1 constraint, many classification problems, objective function
