Results 1–10 of 2,407
Minimal Kernel Classifiers
 Journal of Machine Learning Research, 2002
"... A finite concave minimization algorithm is proposed for constructing kernel classifiers that use a minimal number of data points both in generating and characterizing a classifier. The algorithm ..."
Cited by 24 (8 self)
Discriminative direction for kernel classifiers
 Advances in Neural Information Processing Systems 13, 2001
"... In many scientific and engineering applications, detecting and understanding differences between two groups of examples can be reduced to a classical problem of training a classifier for labeling new examples while making as few mistakes as possible. ... We derive the discriminative direction for kernel-based classifiers, demonstrate the technique on several examples, and briefly discuss its use in statistical shape analysis, an application that originally motivated this work. ..."
Cited by 11 (0 self)
A Kernel Classifier for Distributions
 2005
"... This paper presents a new algorithm for classifying distributions. The algorithm combines the principle of margin maximization and the kernel trick, applied to distributions. Thus, it combines the discriminative power of support vector machines and the well-developed framework of generative ..."
Cited by 2 (1 self)
Kernel Classifier with Correntropy Loss
"... Classification can be seen as a mapping problem where some function of x_n predicts the expectation of a class variable y_n. This paper uses kernel methods for the prediction of the class variable, together with a recently proposed cost function for classification called the Correntropy loss (C-loss). ... By combining the kernel-based functional mapping with the non-convex C-loss, a non-overfitting, and hence better, classifier can be obtained. Since gradient descent can still be used with the C-loss and the kernel mapper, the classifier can be trained easily without a performance penalty compared to the SVM ..."
Resilient Approximation of Kernel Classifiers
"... Trained support vector machines (SVMs) have a slow run-time classification speed if the classification problem is noisy and the sample data set is large. Approximating the SVM by a sparser function has been proposed to solve this problem. In this study, different variants of approximation algorithms are compared empirically. It is shown that gradient descent using the improved Rprop algorithm increases the robustness of the method compared to fixed-point iteration. Three different heuristics for selecting the support vectors to be used in constructing the sparse approximation are proposed; it turns out that none is superior to random selection. The effect of a finishing gradient descent on all parameters of the sparse approximation is also studied. ..."
Cited by 1 (0 self)
Fast Kernel Classifiers With Online And Active Learning
 Journal of Machine Learning Research, 2005
"... Very high dimensional learning systems become theoretically possible when training examples are abundant. The computing cost then becomes the limiting factor. Any efficient learning algorithm should at least take a brief look at each example. But should all examples be given equal attention? This contribution proposes an empirical answer. We first present an online SVM algorithm based on this premise. LASVM yields competitive misclassification rates after a single pass over the training examples, outspeeding state-of-the-art SVM solvers. Then we show how active example selection can yield faster training, higher accuracies, and simpler models, using only a fraction of the training example labels. ..."
Cited by 153 (18 self)
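This is not LASVM itself, but the single-pass idea can be illustrated with a much simpler relative, the online kernel perceptron, which also visits each example once and keeps only the examples on which it errs; the RBF kernel and bandwidth are assumptions for the sketch:

```python
import numpy as np

def rbf(x, z, gamma=1.0):
    """Gaussian (RBF) kernel between two feature vectors."""
    return np.exp(-gamma * np.sum((x - z) ** 2, axis=-1))

def kernel_perceptron(X, y, gamma=1.0):
    """Single-pass online kernel perceptron (a simplified stand-in for
    LASVM, shown only to illustrate one-pass online kernel learning).
    Each mistake adds the example to the kernel expansion, so the
    'support set' grows only where the current model is wrong."""
    sv_x, sv_y = [], []
    for x, t in zip(X, y):
        f = sum(a * rbf(x, s, gamma) for s, a in zip(sv_x, sv_y))
        if t * f <= 0:               # mistake (or tie): store the example
            sv_x.append(x)
            sv_y.append(t)
    return sv_x, sv_y
```

LASVM additionally maintains proper SVM dual variables and removes blatantly non-support examples, which is what keeps its models sparse and its accuracy close to a batch SVM.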
Knowledge-based nonlinear kernel classifiers
 In COLT, 2003
Similarity Metric Learning for a Variable-Kernel Classifier
 Neural Computation, 1995
"... Nearest-neighbour interpolation algorithms have many useful properties for applications to learning, but they often exhibit poor generalization. In this paper, it is shown that much better generalization can be obtained by using a variable interpolation kernel in combination with conjugate gradient ..."
Cited by 118 (1 self)
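A minimal sketch of the underlying idea, a kernel-weighted nearest-neighbour vote; the paper itself goes further and learns the distance metric and adapts the kernel width to local sample density, neither of which is modeled here, and the Gaussian weighting and fixed `sigma` are assumptions:

```python
import numpy as np

def kernel_knn_predict(X_train, y_train, x, k=3, sigma=1.0):
    """Nearest-neighbour classification with a smooth interpolation
    kernel: the k closest training points vote with Gaussian weights
    instead of equal weights, which softens decision boundaries
    relative to plain k-NN. Labels are assumed in {-1, +1}."""
    d = np.linalg.norm(X_train - x, axis=1)     # distances to all points
    idx = np.argsort(d)[:k]                     # k nearest neighbours
    w = np.exp(-d[idx] ** 2 / (2.0 * sigma ** 2))
    score = np.dot(w, y_train[idx])             # weighted vote
    return 1 if score >= 0 else -1
```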
Discriminative kernel classifiers in speaker recognition
 In 33rd German Annual Conference on Acoustics (DAGA), 2007
"... The goal of automatic speaker recognition is to identify a speaker or to verify that a speaker is the person he claims to be. We present an overview of state-of-the-art speaker recognition systems, which are usually based on speaker-dependent Gaussian Mixture Models (GMMs). In this paper we also describe different methods of integrating discriminative classifiers such as the Support Vector Machine (SVM) into speaker recognition environments, and show that it is possible to use SVM methods directly at the frame level for datasets with a small amount of speech data. On larger datasets a combination ..."
Cited by 1 (0 self)