Learning minimum volume sets (2006)

by Clayton Scott, Robert Nowak
Venue: Journal of Machine Learning Research
Citations: 39 (7 self)

BibTeX

@ARTICLE{Scott06learningminimum,
    author = {Clayton Scott and Robert Nowak},
    title = {Learning minimum volume sets},
    journal = {Journal of Machine Learning Research},
    year = {2006},
    volume = {7},
    pages = {665--704}
}

Abstract

Given a probability measure P and a reference measure µ, one is often interested in the minimum µ-measure set with P-measure at least α. Minimum volume sets of this type summarize the regions of greatest probability mass of P, and are useful for detecting anomalies and constructing confidence regions. This paper addresses the problem of estimating minimum volume sets based on independent samples distributed according to P. Other than these samples, no further information is available regarding P, but the reference measure µ is assumed to be known. We introduce rules for estimating minimum volume sets that parallel the empirical risk minimization and structural risk minimization principles in classification. As in classification, we show that the performance of our estimators is controlled by the rate of uniform convergence of empirical to true probabilities over the class from which the estimator is drawn. Thus we obtain finite sample size performance bounds in terms of VC dimension and related quantities. We also demonstrate strong universal consistency and an oracle inequality. Estimators based on histograms and dyadic partitions illustrate the proposed rules.
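
For the histogram estimators mentioned in the abstract, the rule takes a simple concrete form. The Python sketch below is illustrative rather than the paper's implementation: it assumes samples lie in [0,1]^d, takes µ to be Lebesgue measure, and substitutes a crude uniform-deviation tolerance for the paper's class-specific penalty term; the function name and parameters are hypothetical. Because every cell of a regular histogram then has equal µ-measure, the minimum-volume union of cells with empirical P-mass at least α − φ is found by keeping the fullest cells first.

    import numpy as np

    def histogram_mv_set(samples, alpha, k, delta=0.05):
        # Histogram-based minimum volume set sketch on [0,1]^d.
        # Assumes mu = Lebesgue measure, so all k**d cells have equal volume.
        n, d = samples.shape
        # Assign each sample to a cell of the regular k-per-axis grid.
        idx = np.minimum((samples * k).astype(int), k - 1)
        flat = np.ravel_multi_index(idx.T, (k,) * d)
        counts = np.bincount(flat, minlength=k ** d)
        # Crude tolerance phi (an assumption; the paper derives sharper,
        # class-specific penalties from uniform convergence bounds).
        phi = np.sqrt((d * np.log(k) + np.log(2.0 / delta)) / (2.0 * n))
        # Equal-volume cells: the smallest union meeting the mass
        # constraint takes cells in decreasing order of sample count.
        order = np.argsort(counts)[::-1]
        cum = np.cumsum(counts[order]) / n
        m = min(int(np.searchsorted(cum, alpha - phi)) + 1, counts.size)
        return order[:m], m / float(k ** d)

    # Example: 90% mass region of a Beta-distributed sample on [0,1]^2.
    rng = np.random.default_rng(0)
    X = rng.beta(2.0, 5.0, size=(5000, 2))
    cells, vol = histogram_mv_set(X, alpha=0.90, k=16)
    print(f"{cells.size} of {16 ** 2} cells, estimated volume {vol:.3f}")

Here the relaxed constraint α − φ mirrors the spirit of the paper's estimation rule, which permits a small empirical-mass deficit so that the true P-mass constraint still holds with high probability.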

Keyphrases

minimum volume set, reference measure, empirical risk minimization, minimum measure, true probability, probability measure, VC dimension, independent sample, oracle inequality, uniform convergence, strong universal consistency, finite sample size performance bound, dyadic partition, probability mass, related quantity, structural risk minimization principle, confidence region
