CiteSeerX
Results 1 - 10 of 30,608

Generalization Bounds and Consistency

by David McAllester
"... This chapter gives generalization bounds for structured output learning. We show ..."
Abstract

Generalization bounds for learning the kernel

by Yiming Ying, Colin Campbell - In Proc. of the 22nd Annual Conference on Learning Theory, 2009
"... In this paper we develop a novel probabilistic generalization bound for learning the kernel problem. First, we show that the generalization analysis of the regularized kernel learning system reduces to investigation of the suprema of the Rademacher chaos process of order two over candidate kernels, ..."
Abstract - Cited by 23 (4 self)

Generalization bounds for learning kernels

by Corinna Cortes, Mehryar Mohri, Afshin Rostamizadeh - In ICML '10, 2010
"... This paper presents several novel generalization bounds for the problem of learning kernels based on a combinatorial analysis of the Rademacher complexity of the corresponding hypothesis sets. Our bound for learning kernels with a convex combination of p base kernels using L1 regularization admits ..."
Abstract - Cited by 30 (3 self)
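Several of the entries above bound generalization via the empirical Rademacher complexity of the hypothesis set. A minimal Monte Carlo sketch for a finite class of classifiers follows; it is generic textbook machinery, not the construction of any paper listed here, and all names are illustrative:

```python
import numpy as np

def empirical_rademacher(predictions, n_draws=1000, seed=0):
    """Monte Carlo estimate of the empirical Rademacher complexity
    R_hat(H) = E_sigma[ sup_{h in H} (1/m) sum_i sigma_i h(x_i) ]
    for a finite class, with sigma_i uniform on {-1, +1}.

    predictions: (n_hypotheses, m) array of h(x_i) values in {-1, +1}.
    """
    rng = np.random.default_rng(seed)
    _, m = predictions.shape
    total = 0.0
    for _ in range(n_draws):
        sigma = rng.choice([-1.0, 1.0], size=m)   # random sign vector
        total += np.max(predictions @ sigma) / m  # sup over H of correlation
    return total / n_draws

# Toy class: two constant hypotheses on m = 4 points.
# The exact value here is E|S_4|/4 = 0.375; the estimate is close to that.
preds = np.array([[1, 1, 1, 1], [-1, -1, -1, -1]], float)
print(round(empirical_rademacher(preds), 3))
```

A richer (larger) class drives the estimate up, which is exactly the capacity penalty these bounds charge.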

Combinatorial generalization bounds

by K. V. Vorontsov, A. A. Ivahnenko, P. V. Botov, I. M. Reshetnyak, I. O. Tolstikhin
"... In this paper we propose a new combinatorial technique for obtaining data dependent generalization bounds. We introduce a splitting and connectivity graph (SC-graph) over the set of classifiers. In some cases the knowledge of this graph leads to an exact generalization bound. Typically, the knowledge ..."
Abstract

Generalization Bounds for Domain Adaptation

by Chao Zhang, Lei Zhang, Jieping Ye
"... In this paper, we provide a new framework to study the generalization bound of the learning process for domain adaptation. We consider two kinds of representative domain adaptation settings: one is domain adaptation with multiple sources and the other is domain adaptation combining source and target ..."
Abstract - Cited by 2 (0 self)

On ranking and generalization bounds

by Wojciech Rejchel, Nicolas Vayatis - The Journal of Machine Learning Research
"... The problem of ranking is to predict or to guess the ordering between objects on the basis of their observed features. In this paper we consider ranking estimators that minimize the empirical convex risk. We prove generalization bounds for the excess risk of such estimators with rates that are faster ..."
Abstract - Cited by 4 (0 self)

Generalization Bounds for Decision Trees

by Yishay Mansour, David McAllester, 2000
"... We derive a new bound on the error rate for decision trees. The bound depends both on the structure of the tree and the specific sample (not just the size of the sample). This bound is tighter than traditional bounds for unbalanced trees and justifies "compositional" algorithms for constructing ..."
Abstract - Cited by 26 (2 self)
… e.g., fitting a linear curve to clearly quadratic data. The fundamental question is how many parameters, or what concept size, one should allow for a given amount of training data. A standard theoretical approach is to prove a bound on generalization error as a function of the training error and the concept …
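The trade-off this fragment describes, more capacity means a looser guarantee for the same sample size, is captured by the classic finite-class (Occam/Hoeffding-style) bound. A hedged sketch of that generic bound, not the paper's tree-specific refinement, which charges each part of the tree separately:

```python
import math

def occam_bound(train_err, class_size, m, delta=0.05):
    """With probability >= 1 - delta, every h in a finite class H satisfies
    err(h) <= train_err(h) + sqrt((ln|H| + ln(2/delta)) / (2m)).
    Returns that upper bound on the true error."""
    slack = math.sqrt((math.log(class_size) + math.log(2 / delta)) / (2 * m))
    return train_err + slack

# Same training error and sample size; a larger class pays a larger penalty.
print(round(occam_bound(0.10, class_size=2**10, m=1000), 3))  # → 0.173
print(round(occam_bound(0.10, class_size=2**20, m=1000), 3))  # → 0.194
```

The paper's point is that bounds depending only on tree size (effectively `class_size`) are loose for unbalanced trees, motivating sample-dependent refinements.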

Generalization bounds for averaged classifiers

by Yoav Freund, Yishay Mansour, Robert E. Schapire - The Annals of Statistics, 2004
"... We study a simple learning algorithm for binary classification. Instead of predicting with the best hypothesis in the hypothesis class, that is, the hypothesis that minimizes the training error, our algorithm predicts with a weighted average of all hypotheses, weighted exponentially with respect to ..."
Abstract - Cited by 23 (1 self)
… reliable. Finally, we show that the probability that the algorithm abstains is comparable to the generalization error of the best hypothesis in the class.
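The averaging scheme the abstract describes, predicting with a vote over all hypotheses weighted exponentially in their training errors rather than with the single empirical-risk minimizer, can be sketched for a finite class as follows. The weighting w(h) ∝ exp(-eta·m·train_err(h)) and all names are illustrative, not the paper's exact construction:

```python
import numpy as np

def averaged_predict(predictions, train_errs, m, eta=2.0):
    """Exponentially weighted majority vote over a finite hypothesis class.

    predictions: (n_hypotheses, n_points) array of votes in {-1, +1}.
    train_errs:  training error of each hypothesis on an m-point sample.
    Returns the sign of the weighted vote per point (a margin near zero is
    where such an algorithm could choose to abstain).
    """
    w = np.exp(-eta * m * np.asarray(train_errs))  # down-weight bad hypotheses
    votes = w @ predictions                        # weighted vote per point
    return np.sign(votes)

# Two hypotheses; the low-error one (err 0.1) dominates the vote.
preds = np.array([[1, 1, -1], [1, -1, -1]], float)
print(averaged_predict(preds, train_errs=[0.1, 0.4], m=20))
```

With eta·m large, the weights concentrate on near-minimizers of the training error, so the averaged predictor tracks the best hypothesis while remaining more stable than picking a single minimizer.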

Sparse multinomial logistic regression: fast algorithms and generalization bounds

by Balaji Krishnapuram, Lawrence Carin, Mário A. T. Figueiredo, Alexander J. Hartemink - IEEE Trans. on Pattern Analysis and Machine Intelligence
"... Recently developed methods for learning sparse classifiers are among the state-of-the-art in supervised learning. These methods learn classifiers that incorporate weighted sums of basis functions with sparsity-promoting priors encouraging the weight estimates to be either significantly large ..."
Abstract - Cited by 190 (1 self)
… and the feature dimensionality, making them applicable even to large data sets in high-dimensional feature spaces. To the best of our knowledge, these are the first algorithms to perform exact multinomial logistic regression with a sparsity-promoting prior. Third, we show how nontrivial generalization bounds can …

Sample Based Generalization Bounds

by Robert C. Williamson, John Shawe-Taylor, Bernhard Schölkopf, Alex J. Smola, 1999
"... It is known that the covering numbers of a function class on a double sample (length 2m, where m is the number of points in the sample) can be used to bound the generalization performance of a classifier by using a margin based analysis. Traditionally this has been done using a "Sauer-like" ..."
Abstract - Cited by 2 (1 self)

Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University