Results 1–10 of 30
The P-Norm Push: A Simple Convex Ranking Algorithm that Concentrates at the Top of the List
, 2009
Abstract

Cited by 37 (14 self)
We are interested in supervised ranking algorithms that perform especially well near the top of the ranked list, and are only required to perform sufficiently well on the rest of the list. In this work, we provide a general form of convex objective that gives high-scoring examples more importance. This “push” near the top of the list can be chosen arbitrarily large or small, based on the preference of the user. We choose ℓp-norms to provide a specific type of push; if the user sets p larger, the objective concentrates harder on the top of the list. We derive a generalization bound based on the p-norm objective, working around the natural asymmetry of the problem. We then derive a boosting-style algorithm for the problem of ranking with a push at the top. The usefulness of the algorithm is illustrated through experiments on repository data. We prove that the minimizer of the algorithm’s objective is unique in a specific sense. Furthermore, we illustrate how our objective is related to quality measurements for information retrieval.
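The objective described above can be sketched in a few lines. This is an illustrative rendering only, assuming the exponential pairwise loss as the convex surrogate and a plain ℓp-style outer power; the function name and score arrays are hypothetical, not the paper's code:

```python
import numpy as np

def pnorm_push_objective(pos_scores, neg_scores, p=4):
    """Convex p-norm-push-style objective: each negative example
    contributes the p-th power of its accumulated pairwise loss,
    so high-scoring negatives (near the top of the list) dominate
    as p grows.  Exponential surrogate assumed for illustration."""
    # pairwise margins f(x_i) - f(x_k) for every (positive, negative) pair
    margins = pos_scores[:, None] - neg_scores[None, :]
    per_negative = np.exp(-margins).sum(axis=0)  # inner sum over positives
    return (per_negative ** p).sum()             # outer l_p-style sum
```

Raising p leaves the objective convex but makes the few worst-placed negatives carry almost all of the weight, which is the "push" the abstract refers to.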
Margin-based Ranking and an Equivalence between AdaBoost and RankBoost
, 2009
Abstract

Cited by 26 (9 self)
We study boosting algorithms for learning to rank. We give a general margin-based bound for ranking based on covering numbers for the hypothesis space. Our bound suggests that algorithms that maximize the ranking margin will generalize well. We then describe a new algorithm, smooth margin ranking, that precisely converges to a maximum ranking-margin solution. The algorithm is a modification of RankBoost, analogous to “approximate coordinate ascent boosting.” Finally, we prove that AdaBoost and RankBoost are equally good for the problems of bipartite ranking and classification in terms of their asymptotic behavior on the training set. Under natural conditions, AdaBoost achieves an area under the ROC curve that is as good as RankBoost’s; furthermore, RankBoost, when given a specific intercept, achieves a misclassification error that is as good as AdaBoost’s. This may help to explain the empirical observations made by Cortes and Mohri, and Caruana and Niculescu-Mizil, about the excellent performance of AdaBoost as a bipartite ranking algorithm, as measured by the area under the ROC curve.
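A small sketch makes the connection between the two boosting objectives concrete. The exponential losses below are the standard ones for AdaBoost and RankBoost; the code is an illustration of the product structure linking them, not the paper's algorithm:

```python
import numpy as np

def adaboost_loss(scores, y):
    """AdaBoost's exponential loss: sum_i exp(-y_i f(x_i)), y in {-1, +1}."""
    return np.exp(-y * scores).sum()

def rankboost_loss(scores, y):
    """RankBoost's pairwise exponential loss over positive-negative pairs.
    It factors as (sum_pos e^{-f}) * (sum_neg e^{+f}); this product
    structure is one way to see why minimizing the classification loss
    and minimizing the ranking loss are closely tied."""
    pos, neg = scores[y == 1], scores[y == -1]
    return np.exp(-pos).sum() * np.exp(neg).sum()
```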
Generalization bounds for learning the kernel
 In Proc. of the 22nd Annual Conference on Learning Theory
, 2009
Abstract

Cited by 23 (4 self)
In this paper we develop a novel probabilistic generalization bound for the kernel learning problem. First, we show that the generalization analysis of the regularized kernel learning system reduces to investigation of the suprema of the Rademacher chaos process of order two over candidate kernels, which we refer to as Rademacher chaos complexity. Next, we show how to estimate the empirical Rademacher chaos complexity by well-established metric entropy integrals and the pseudo-dimension of the set of candidate kernels. Our new methodology mainly depends on the theory of U-processes. Finally, we establish satisfactory excess generalization bounds and misclassification error rates for learning Gaussian kernels and general radial basis kernels.
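The empirical quantity in question can be approximated numerically. The sketch below Monte Carlo-estimates an order-two Rademacher chaos over a family of Gaussian kernels indexed by bandwidth; the normalization and the restriction to a finite kernel family are assumptions made for illustration, and the paper's exact constants may differ:

```python
import numpy as np

def rademacher_chaos_estimate(x, widths, n_draws=200, seed=0):
    """Monte Carlo estimate of an order-two empirical Rademacher chaos:
    average over sign draws eps of  sup_k |(1/n) sum_{i != j} eps_i eps_j
    k(x_i, x_j)|, with the supremum taken over candidate Gaussian kernels."""
    rng = np.random.default_rng(seed)
    n = len(x)
    sq = (x[:, None] - x[None, :]) ** 2
    kernels = [np.exp(-sq / (2.0 * w * w)) for w in widths]
    total = 0.0
    for _ in range(n_draws):
        eps = rng.choice([-1.0, 1.0], size=n)
        # eps @ K @ eps minus the diagonal gives sum_{i != j} eps_i eps_j K_ij
        vals = [abs(eps @ K @ eps - np.trace(K)) / n for K in kernels]
        total += max(vals)  # supremum over the candidate kernels
    return total / n_draws
```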
Generalization Bounds for Ranking Algorithms via Algorithmic Stability
 J. of Machine Learning Research
Abstract

Cited by 21 (2 self)
The problem of ranking, in which the goal is to learn a real-valued ranking function that induces a ranking or ordering over an instance space, has recently gained much attention in machine learning. We study generalization properties of ranking algorithms using the notion of algorithmic stability; in particular, we derive generalization bounds for ranking algorithms that have good stability properties. We show that kernel-based ranking algorithms that perform regularization in a reproducing kernel Hilbert space have such stability properties, and therefore our bounds can be applied to these algorithms; this is in contrast with generalization bounds based on uniform convergence, which in many cases cannot be applied to these algorithms. Our results generalize earlier results that were derived in the special setting of bipartite ranking (Agarwal and Niyogi, 2005) to a more general setting of the ranking problem that arises frequently in applications.
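A minimal example of the algorithm class the stability bounds cover: a ranking function in an RKHS fit by minimizing a pairwise surrogate plus a norm penalty. The logistic surrogate, the plain gradient-descent solver, and the function name are illustrative assumptions; by the representer theorem the learned function is f(x) = Σ_j α_j K(x, x_j):

```python
import numpy as np

def kernel_rank_fit(K, pos_idx, neg_idx, lam=0.1, lr=0.01, steps=500):
    """Regularized kernel ranking sketch: minimize the mean pairwise
    logistic loss  log(1 + exp(-(f_p - f_n)))  plus  lam * ||f||_H^2,
    parameterizing f = K @ alpha via the representer theorem."""
    n = K.shape[0]
    alpha = np.zeros(n)
    for _ in range(steps):
        f = K @ alpha
        margins = f[pos_idx][:, None] - f[neg_idx][None, :]
        sig = 1.0 / (1.0 + np.exp(margins))  # -d/dm log(1 + e^{-m})
        grad_f = np.zeros(n)
        np.add.at(grad_f, pos_idx, -sig.sum(axis=1))  # pushes positives up
        np.add.at(grad_f, neg_idx, sig.sum(axis=0))   # pushes negatives down
        grad_alpha = K @ grad_f / (len(pos_idx) * len(neg_idx)) \
            + 2.0 * lam * (K @ alpha)                 # grad of alpha^T K alpha
        alpha -= lr * grad_alpha
    return alpha
```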
On the Generalization Ability of Online Learning Algorithms for Pairwise Loss Functions
Abstract

Cited by 7 (3 self)
In this paper, we study the generalization properties of online-learning-based stochastic methods for supervised learning problems where the loss function depends on more than one training sample (e.g., metric learning, ranking). We present a generic decoupling technique that enables us to provide Rademacher-complexity-based generalization error bounds. Our bounds are in general tighter than those obtained by Wang et al. (2012) for the same problem. Using our decoupling technique, we are further able to obtain fast convergence rates for strongly convex pairwise loss functions. We are also able to analyze a class of memory-efficient online learning algorithms for pairwise learning problems that use only a bounded subset of past training samples to update the hypothesis at each step. Finally, in order to complement our generalization bounds, we propose a novel memory-efficient online learning algorithm for higher-order learning problems with bounded regret guarantees.
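The bounded-subset idea can be sketched as follows. Each incoming sample is paired only against a fixed-size reservoir of past samples, so memory stays constant in the stream length. The pairwise hinge update and reservoir policy here are common illustrative choices, not the specific algorithms analyzed in the paper:

```python
import numpy as np

class BufferedPairwiseSGD:
    """Memory-efficient online pairwise learner sketch: each new sample
    is compared only with a bounded reservoir of past samples, and the
    linear scorer w is updated with a pairwise hinge step."""
    def __init__(self, dim, buffer_size=10, lr=0.1, seed=0):
        self.w = np.zeros(dim)
        self.buffer = []          # bounded list of (x, y) pairs
        self.buffer_size = buffer_size
        self.lr = lr
        self.rng = np.random.default_rng(seed)
        self.t = 0                # number of samples seen so far

    def update(self, x, y):
        for xb, yb in self.buffer:
            if yb == y:
                continue          # only cross-label pairs matter for ranking
            sign = 1.0 if y > yb else -1.0
            margin = sign * self.w @ (x - xb)
            if margin < 1.0:      # pairwise hinge is active
                self.w += self.lr * sign * (x - xb)
        self.t += 1
        # reservoir sampling keeps a uniform bounded sample of the past
        if len(self.buffer) < self.buffer_size:
            self.buffer.append((x, y))
        else:
            j = self.rng.integers(self.t)
            if j < self.buffer_size:
                self.buffer[j] = (x, y)
```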
A Theoretical Analysis of NDCG Type Ranking Measures
 In JMLR Proceedings
, 2013
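For reference, the family of measures named in the title is computed as below, using the common exponential gain 2^rel − 1 and log2 position discount (one standard variant; others differ in the gain and discount choices):

```python
import numpy as np

def ndcg_at_k(relevance, scores, k=10):
    """NDCG@k: discounted cumulative gain of the score-induced ranking,
    normalized by the DCG of the ideal (relevance-sorted) ranking."""
    rel = np.asarray(relevance, dtype=float)
    top = min(k, len(rel))
    discounts = 1.0 / np.log2(np.arange(2, top + 2))  # positions 1..top
    order = np.argsort(-np.asarray(scores))           # rank by score, descending
    gains = 2.0 ** rel - 1.0
    dcg = (gains[order][:top] * discounts).sum()
    idcg = (np.sort(gains)[::-1][:top] * discounts).sum()
    return dcg / idcg if idcg > 0 else 0.0
```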
On Ranking and Generalization Bounds
 The Journal of Machine Learning Research
Abstract

Cited by 4 (0 self)
The problem of ranking is to predict or to guess the ordering between objects on the basis of their observed features. In this paper we consider ranking estimators that minimize the empirical convex risk. We prove generalization bounds for the excess risk of such estimators with rates that are faster than 1/√n. We apply our results to commonly used ranking algorithms, for instance boosting or support vector machines. Moreover, we study the performance of the considered estimators on real data sets. Keywords: U-process
Surrogate regret bounds for the area under the ROC curve via strongly proper losses
 In COLT
, 2013
Abstract

Cited by 4 (2 self)
The area under the ROC curve (AUC) is a widely used performance measure in machine learning, and has been widely studied in recent years, particularly in the context of bipartite ranking. A dominant theoretical and algorithmic framework for AUC optimization/bipartite ranking has been to reduce the problem to pairwise classification; in particular, it is well known that the AUC regret can be formulated as a pairwise classification regret, which in turn can be upper bounded using usual regret bounds for binary classification. Recently, Kotlowski et al. (2011) showed AUC regret bounds in terms of the regret associated with ‘balanced’ versions of the standard (non-pairwise) logistic and exponential losses. In this paper, we obtain such (non-pairwise) surrogate regret bounds for the AUC in terms of a broad class of proper (composite) losses that we term strongly proper. Our proof technique is considerably simpler than that of Kotlowski et al. (2011), and relies on properties of proper (composite) losses as elucidated recently by Reid and Williamson (2009, 2010, 2011) and others. Our result yields explicit surrogate bounds (with no hidden balancing terms) in terms of a variety of strongly proper losses, including for example the logistic, exponential, squared, and squared hinge losses. An important consequence is that standard algorithms minimizing a (non-pairwise) strongly proper loss, such as logistic regression and boosting algorithms (assuming a universal function class and appropriate regularization), are in fact AUC-consistent; moreover, our results allow us to quantify the AUC regret in terms of the corresponding surrogate regret. We also obtain tighter surrogate regret bounds under certain low-noise conditions via a recent result of Clémençon and Robbiano (2011).
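The consequence described above, that a non-pairwise strongly proper loss yields AUC-consistent scores, can be illustrated on a toy problem: fit plain logistic regression, then rank by its scores and measure AUC as the fraction of correctly ordered positive-negative pairs. The gradient-descent fit and function names are illustrative assumptions:

```python
import numpy as np

def logistic_fit(X, y, lr=0.5, steps=2000):
    """Plain (non-pairwise) logistic regression by gradient descent on the
    logistic loss, a strongly proper loss in the paper's terminology."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def auc(scores, y):
    """Empirical AUC: fraction of (positive, negative) pairs ordered
    correctly by the scores, counting ties as one half."""
    pos, neg = scores[y == 1], scores[y == 0]
    diff = pos[:, None] - neg[None, :]
    return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)
```

The scores are trained with no pairwise comparisons at all, yet ranking by them recovers the correct order on separable data, which is the behavior the consistency result guarantees in the large-sample limit.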
Generalization Guarantees for a Binary Classification Framework for Two-Stage Multiple Kernel Learning
, 2013
Abstract

Cited by 3 (0 self)
We present generalization bounds for the TSMKL framework for two-stage multiple kernel learning. We also present bounds for sparse kernel learning formulations within the TSMKL framework.
Guaranteed Classification via Regularized Similarity Learning
 Neural Computation
, 2014