Results 1–10 of 43
Online passive-aggressive algorithms
JMLR, 2006
"... We present a unified view for online classification, regression, and uniclass problems. This view leads to a single algorithmic framework for the three problems. We prove worst case loss bounds for various algorithms for both the realizable case and the nonrealizable case. The end result is new alg ..."
Cited by 435 (24 self)
algorithms and accompanying loss bounds for hinge-loss regression and uniclass. We also get refined loss bounds for previously studied classification algorithms.
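The passive-aggressive update described in this entry has a simple closed form; below is a minimal sketch of the binary-classification case (function and variable names are illustrative, not taken from the paper):

```python
import numpy as np

def pa_update(w, x, y):
    """One passive-aggressive step for binary classification.

    Suffer the hinge loss max(0, 1 - y * <w, x>); if it is positive,
    move w by the smallest amount that restores a margin of 1 on (x, y).
    """
    loss = max(0.0, 1.0 - y * np.dot(w, x))
    if loss > 0.0:
        tau = loss / np.dot(x, x)   # closed-form step size
        w = w + tau * y * x
    return w

# Starting from w = 0, a single update fits the example exactly:
w = pa_update(np.zeros(2), np.array([1.0, 2.0]), +1)
```

The step size τ is the solution of a small constrained optimization: stay as close as possible to the current weights while achieving unit margin on the new example, which is what motivates the "passive-aggressive" name.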
Statistical Tests using Hinge/ε-Sensitive Loss
"... Abstract. Statistical tests used in the literature to compare algorithms use the misclassification error which is based on the 0/1 loss and square loss for regression. Kernel-based, support vector machine classifiers (regressors) however are trained to minimize the hinge (ε-sensitive) loss and hence ..."
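The two training losses this entry contrasts with the 0/1 and square losses are easy to state directly; a short sketch (parameter names are illustrative):

```python
def hinge(y, score):
    """Hinge loss for classification; y is the true label in {-1, +1}.
    Zero once the example is classified with margin at least 1."""
    return max(0.0, 1.0 - y * score)

def eps_insensitive(y, pred, eps=0.1):
    """ε-insensitive loss for regression: zero inside the ε-tube
    around the target, linear in the residual outside it."""
    return max(0.0, abs(y - pred) - eps)
```

Both are piecewise linear, which is precisely why classifiers trained on them can rank differently under 0/1-loss-based tests than under their own training criterion.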
Domain Adaptation in Regression
"... Abstract. This paper presents a series of new results for domain adaptation in the regression setting. We prove that the discrepancy is a distance for the squared loss when the hypothesis set is the reproducing kernel Hilbert space induced by a universal kernel such as the Gaussian kernel. We give n ..."
Cited by 10 (1 self)
Smooth ε-Insensitive Regression by Loss Symmetrization
"... Abstract. We describe a framework for solving regression problems by reduction to classification. Our reduction is based on symmetrization of margin-based loss functions commonly used in boosting algorithms, namely, the logistic loss and the exponential loss. Our construction yields a smooth version ..."
version of the ε-insensitive hinge loss that is used in support vector regression. A byproduct of this construction is a new simple form of regularization for boosting-based classification and regression algorithms. We present two parametric families of batch learning algorithms for minimizing
Logistic Regression: Tight Bounds for Stochastic and Online Optimization
"... The logistic loss function is often advocated in machine learning and statistics as a smooth and strictly convex surrogate for the 0-1 loss. In this paper we investigate the question of whether these smoothness and convexity properties make the logistic loss preferable to other widely considered opt ..."
Cited by 1 (1 self)
options such as the hinge loss. We show that in contrast to known asymptotic bounds, as long as the number of prediction/optimization iterations is sub-exponential, the logistic loss provides no improvement over a generic non-smooth loss function such as the hinge loss. In particular we show
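The two surrogates this entry compares can be illustrated numerically (a generic sketch, not code from the paper):

```python
import math

def logistic_loss(y, score):
    """Smooth, strictly convex surrogate for the 0-1 loss."""
    return math.log1p(math.exp(-y * score))

def hinge_loss(y, score):
    """Non-smooth surrogate: has a corner at margin y * score = 1."""
    return max(0.0, 1.0 - y * score)

# Both decrease as the margin y * score grows, but only the logistic
# loss is differentiable everywhere; the hinge loss is not at margin 1.
margins = [-1.0, 0.0, 1.0, 2.0]
log_vals = [logistic_loss(+1, m) for m in margins]
hinge_vals = [hinge_loss(+1, m) for m in margins]
```

At zero margin the logistic loss equals ln 2 ≈ 0.693 while the hinge loss equals 1; the entry's point is that this extra smoothness buys no better iteration complexity in the sub-exponential regime.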
Smooth epsilon-Insensitive Regression by Loss Symmetrization
2003
"... We describe a framework for solving regression problems by reduction to classification. Our reduction is based on symmetrization of margin-based loss functions commonly used in boosting algorithms, namely, the logistic loss and the exponential loss. Our construction yields a smooth version of th ..."
Cited by 12 (4 self)
of the ε-insensitive hinge loss that is used in support vector regression. Furthermore, this construction enables a new form of smooth regularization that matches the different losses. We present two parametric families of batch learning algorithms for minimizing these losses. The first family employs a log
Smooth ε-insensitive regression by loss symmetrization
2005
"... We describe new loss functions for regression problems along with an accompanying algorithmic framework which utilizes these functions. These loss functions are derived by symmetrization of margin-based losses commonly used in boosting algorithms, namely, the logistic loss and the exponential loss. ..."
Cited by 3 (0 self)
The resulting symmetric logistic loss can be viewed as a smooth approximation to the ε-insensitive hinge loss used in support vector regression. We describe and analyze two parametric families of batch learning algorithms for minimizing these symmetric losses. The first family employs an iterative log
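The symmetric logistic loss described in this entry can be sketched as a sum of two logistic terms shifted by ±ε around the target. This is one plausible reading of the symmetrization construction; the exact shifts and the parameter name `eps` are assumptions, not the paper's notation:

```python
import math

def symmetric_logistic(y, pred, eps=0.1):
    """Smooth approximation to the ε-insensitive hinge loss:
    two logistic losses, one penalizing over-prediction and one
    penalizing under-prediction, each shifted by eps."""
    d = pred - y
    return math.log1p(math.exp(d - eps)) + math.log1p(math.exp(-d - eps))
```

For large |pred - y| this behaves like the ε-insensitive loss |pred - y| - ε, while near the target it is smooth and strictly positive rather than flat at zero, which is what makes it amenable to boosting-style (log-linear) updates.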
Loss functions for preference levels: Regression with discrete ordered labels
Proceedings of the IJCAI Multidisciplinary Workshop on Advances in Preference Handling, 2005
"... We consider different types of loss functions for discrete ordinal regression, i.e. fitting labels that may take one of several discrete, but ordered, values. These types of labels arise when preferences are specified by selecting, for each item, one of several rating “levels”, e.g. one through five ..."
Cited by 16 (3 self)
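One common member of this family of ordinal-regression losses places the rating levels between learned thresholds and charges a hinge penalty for every threshold the score lands on the wrong side of. The sketch below is a generic "all-threshold" construction under that reading; the name and details are assumptions, not necessarily this paper's exact formulation:

```python
def all_threshold_loss(score, label, thresholds):
    """Ordinal-regression loss with levels 0..len(thresholds):
    level k should satisfy theta_{k-1} < score < theta_k, and each
    violated threshold contributes a hinge penalty with margin 1."""
    loss = 0.0
    for k, theta in enumerate(thresholds):
        if k < label:    # score should exceed theta by a margin of 1
            loss += max(0.0, 1.0 - (score - theta))
        else:            # score should sit below theta by a margin of 1
            loss += max(0.0, 1.0 - (theta - score))
    return loss
```

Because every crossed threshold adds its own penalty, predicting level 5 when the true level is 1 costs more than predicting level 2, which is the key property separating ordinal losses from plain multiclass classification losses.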
Neural-network Based Regression Model with Prior from Ranking Information
"... Abstract — In this work, a new algorithm, which can incorporate the ranking information as prior knowledge into the regression model, is presented. Compared with the method that treats the ranking information as hard constraints, we handle ranking reasonably by maximization of Normalized Discount ..."
Cited by 1 (1 self)
regression model and the pairwise shifted hinge loss and logistic loss are proposed under the suggested approach. One benefit of the proposed approach is that the weighted pairwise loss is more reasonable than the unweighted loss and all the weights are set based on the NDCG. Finally, one synthetic example
LEARNING KERNEL-BASED HALFSPACES WITH THE 0-1 LOSS
2011
"... We describe and analyze a new algorithm for agnostically learning kernel-based halfspaces with respect to the 0-1 loss function. Unlike most of the previous formulations, which rely on surrogate convex loss functions (e.g., hinge-loss in support vector machines (SVMs) and log-loss in logistic regr ..."
Cited by 13 (3 self)