Results 11-20 of 35
Active Learning of Label Ranking Functions
 Proceedings of the 21st International Conference on Machine Learning
, 2004
Abstract

Cited by 22 (1 self)
The effort necessary to construct labeled sets of examples in a supervised learning scenario is often disregarded, though in many applications it is a time-consuming and expensive procedure. While this already constitutes a major issue in classification learning, it becomes an even more serious problem when dealing with the more complex target domain of total orders over a set of alternatives. Considering both the pairwise decomposition and the constraint classification technique to represent label ranking functions, we introduce a novel generalization of pool-based active learning to address this problem.
Toward an Optimal Supervised Classifier for the Analysis of Hyperspectral Data
 IEEE Trans. Geosci. Remote Sens
, 2004
Abstract

Cited by 18 (0 self)
In this study we propose a supervised classifier based on an implementation of the Bayes rule with kernels. The technique first performs an implicit nonlinear transformation of the data into a feature space, seeking to fit normal distributions with a common covariance matrix to the mapped data. One requirement of this approach is the evaluation of posterior probabilities. We express the discriminant function in dot-product form and then apply the kernel concept to efficiently evaluate the posterior probabilities. The proposed technique gives the flexibility required to model complex data structures that originate from a wide range of class-conditional distributions. Although we end up with piecewise linear decision boundaries in the feature space, these correspond to powerful nonlinear boundaries in the original input space. For the data we considered, we have obtained some encouraging results.
Margin Distribution and Soft Margin
, 1999
Abstract

Cited by 16 (3 self)
We show in this paper that minimising this new criterion can be performed efficiently.
Boosting Through Optimization of Margin Distributions
 IEEE Transactions on Neural Networks
, 2010
Abstract

Cited by 13 (4 self)
... based complexity measure for learning classifiers and developed margin-distribution-based generalization bounds. Competitive classification results have been shown by optimizing this bound. Another relevant work is [12], which applies a boosting method to optimize the margin-distribution-based generalization bound obtained by [13]. Experiments show that the new boosting method achieves considerable improvements over AdaBoost. The optimization of this new boosting method is based on the AnyBoost framework [5]. Aligned with these attempts, we propose a new boosting algorithm through optimization of the margin distribution (termed MDBoost). Instead of minimizing a margin-distribution-based generalization bound, we directly optimize the margin distribution itself.
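The margin distribution that methods like MDBoost optimize is straightforward to compute for a weighted vote of base hypotheses. A minimal sketch in plain Python (all names are illustrative, not from the paper):

```python
def margin_distribution(weights, hypotheses, data, labels):
    """Margins y * F(x) of the weighted vote F(x) = sum_j a_j * h_j(x).

    Margin-distribution objectives trade the mean of these margins off
    against their variance, rather than maximising the minimum margin.
    """
    margins = []
    for x, y in zip(data, labels):
        vote = sum(a * h(x) for a, h in zip(weights, hypotheses))
        margins.append(y * vote)
    mean = sum(margins) / len(margins)
    var = sum((m - mean) ** 2 for m in margins) / len(margins)
    return margins, mean, var

# Toy example: one decision stump classifying 1-D points by sign
stump = lambda x: 1 if x > 0 else -1
margins, mean, var = margin_distribution(
    [1.0], [stump], [1.0, -2.0, 3.0, -0.5], [1, -1, 1, -1])
print(mean, var)  # every point is correct with margin 1: mean 1.0, variance 0.0
```

A boosting procedure in this family would then adjust `weights` to improve the (mean, variance) trade-off rather than AdaBoost's exponential loss.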
On Generalization Bounds, Projection Profile, and Margin Distribution
, 2002
Abstract

Cited by 8 (1 self)
We study generalization properties of linear learning algorithms and develop a data-dependent approach used to derive generalization bounds that depend on the margin distribution. Our method makes use of random projection techniques to allow the use of existing VC dimension bounds in the effective, lower dimension of the data. Comparisons with existing...
A kernel method for the optimization of the margin distribution
 In International Conference on Artificial Neural Networks (ICANN)
, 2008
Abstract

Cited by 6 (4 self)
Recent results in theoretical machine learning suggest that good properties of the margin distribution over a training set translate into good performance of a classifier. The same principle has already been used in SVMs and other kernel-based methods, whose associated optimization problems try to maximize the minimum of these margins. In this paper, we propose a kernel-based method for the direct optimization of the margin distribution (KMOMD). The method is motivated and analyzed from a game-theoretical perspective, and a quite efficient optimization algorithm is then proposed. Experimental results over a standard benchmark of 13 datasets clearly show state-of-the-art performance.
Approximate Maximum Margin Algorithms with Rules Controlled by the Number of Mistakes
Abstract

Cited by 6 (3 self)
We present a family of incremental Perceptron-like algorithms (PLAs) with margin in which both the “effective” learning rate, defined as the ratio of the learning rate to the length of the weight vector, and the misclassification condition are entirely controlled by rules involving (powers of) the number of mistakes. We examine the convergence of such algorithms in a finite number of steps and show that under some rather mild conditions there exists a limit of the parameters involved in which convergence leads to classification with maximum margin. An experimental comparison of algorithms belonging to this family with other large-margin PLAs and decomposition SVMs is also presented.
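The flavour of a mistake-controlled rule can be sketched in a few lines. This is a simplified stand-in, not the paper's exact update rules: here both the learning rate and the margin threshold shrink with the running mistake count.

```python
def mistake_controlled_perceptron(data, labels, beta=0.1, epochs=100):
    # Perceptron-like algorithm with margin in which both the learning
    # rate and the margin threshold are controlled by the number of
    # mistakes made so far. A simplified illustration of the idea only.
    w = [0.0] * len(data[0])
    mistakes = 0
    for _ in range(epochs):
        clean_pass = True
        for x, y in zip(data, labels):
            margin = y * sum(wi * xi for wi, xi in zip(w, x))
            if margin <= beta / (1 + mistakes):   # mistake-controlled condition
                eta = 1.0 / (1 + mistakes)        # mistake-controlled rate
                mistakes += 1
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
                clean_pass = False
        if clean_pass:
            break                                 # all margins exceed the threshold
    return w, mistakes

# Toy linearly separable data in the plane
data = [(1.0, 1.0), (2.0, 1.0), (-1.0, -1.0), (-2.0, -1.0)]
labels = [1, 1, -1, -1]
w, mistakes = mistake_controlled_perceptron(data, labels)
```

On separable data the update magnitudes decay while the margin requirement relaxes, so the loop terminates with a separating weight vector.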
A Risk Minimization Principle for a Class of Parzen Estimators
Abstract

Cited by 5 (0 self)
This paper explores the use of a Maximal Average Margin (MAM) optimality principle for the design of learning algorithms. It is shown that the application of this risk minimization principle results in a class of (computationally) simple learning machines similar to the classical Parzen window classifier. A direct relation with Rademacher complexities is established, thereby facilitating analysis and providing a notion of certainty of prediction. This analysis is related to Support Vector Machines by means of a margin transformation. The power of the MAM principle is further illustrated by application to ordinal regression tasks, resulting in an O(n) algorithm able to process large datasets in reasonable time.
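The classical Parzen-window-style rule the abstract refers to is simple enough to write out. A minimal sketch assuming a Gaussian kernel (function names and the toy data are illustrative, not from the paper):

```python
import math

def rbf(x, z, gamma=1.0):
    # Gaussian (RBF) kernel between two feature tuples
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def parzen_classify(x, data, labels, gamma=1.0):
    # Parzen-window-style rule: the sign of the label-weighted average
    # kernel similarity to the training points. Maximising the *average*
    # margin (rather than the minimum) leads to rules of this simple,
    # O(n)-per-prediction form.
    score = sum(y * rbf(x, xi, gamma) for xi, y in zip(data, labels)) / len(data)
    return 1 if score >= 0 else -1

# Toy example: two well-separated 1-D clusters
data = [(0.0,), (0.5,), (5.0,), (5.5,)]
labels = [-1, -1, 1, 1]
print(parzen_classify((0.2,), data, labels))  # -1: near the negative cluster
```

Unlike an SVM there is no quadratic program to solve: every training point contributes with weight given directly by its label.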
Sharp Generalization Error Bounds for Randomly-Projected Classifiers
 30th International Conference on Machine Learning (ICML 2013), JMLR W&CP
, 2013
Abstract

Cited by 3 (1 self)
We derive sharp bounds on the generalization error of a generic linear classifier trained by empirical risk minimization on randomly-projected data. We make no restrictive assumptions (such as sparsity or separability) on the data: instead we use the fact that, in a classification setting, the question of interest is really ‘what is the effect of random projection on the predicted class labels?’ We therefore derive the exact probability of ‘label flipping’ under Gaussian random projection in order to quantify this effect precisely in our bounds.
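The ‘label flipping’ event is easy to probe empirically, which helps build intuition for the quantity the paper computes analytically. A hedged sketch (the Monte Carlo estimate below is an illustration, not the paper's closed-form expression):

```python
import random

def project(v, R):
    # Apply a Gaussian random matrix R (k rows of d entries) to vector v
    return [sum(r * vi for r, vi in zip(row, v)) for row in R]

def flip_rate(w, points, k, trials=100, seed=0):
    # Empirical probability that the predicted label sign(<w, x>) changes
    # when both the classifier w and the point x are projected down to
    # k dimensions by a fresh Gaussian random matrix.
    rng = random.Random(seed)
    flips = total = 0
    d = len(w)
    for _ in range(trials):
        R = [[rng.gauss(0.0, 1.0) for _ in range(d)] for _ in range(k)]
        wp = project(w, R)
        for x in points:
            before = sum(a * b for a, b in zip(w, x)) >= 0
            after = sum(a * b for a, b in zip(wp, project(x, R))) >= 0
            flips += before != after
            total += 1
    return flips / total

w = [1.0] * 10
print(flip_rate(w, [w], k=4))  # 0.0: a point aligned with w never flips
```

Points with a large margin relative to `w` flip rarely, while near-orthogonal points flip close to half the time, which is exactly why a flipping probability yields a data-dependent bound.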
SVM Active – Support Vector Machine Active Learning for Image Retrieval
 Proceedings of the 9th ACM International Conference on Multimedia
Abstract

Cited by 3 (0 self)
Relevance feedback is often a critical component when designing image databases. With these databases it is difficult to specify queries directly and explicitly. Relevance feedback interactively determines a user’s desired output or query concept by asking the user whether certain proposed images are relevant or not. For a relevance feedback algorithm to be effective, it must grasp a user’s query concept accurately and quickly, while also asking the user to label only a small number of images. We propose the use of a support vector machine active learning (SVMActive) algorithm for conducting effective relevance feedback for image retrieval. To support efficient query-concept learning and image retrieval, we also present our multiresolution image-characterization and high-dimensional indexing methods. We further show that SVMActive can be effectively seeded by MEGA, another active learning algorithm that we developed, or by keyword searches. Experimental results show that our algorithm achieves significantly higher search accuracy than traditional query refinement schemes after just three to four rounds of relevance feedback.
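The core query-selection step in SVM-based active learning of this kind is to ask the user about the pool items the current model is least certain of. A minimal sketch under that assumption (names and the toy scores are illustrative):

```python
def select_queries(decision_values, k=2):
    # Pool-based selection for SVM active learning: pick the unlabeled
    # items whose SVM decision values f(x) lie closest to the separating
    # hyperplane (smallest |f(x)|) -- the most informative ones to label.
    ranked = sorted(range(len(decision_values)),
                    key=lambda i: abs(decision_values[i]))
    return ranked[:k]

# Decision values f(x) for five unlabeled images in the pool
scores = [2.5, -0.1, 0.7, -3.0, 0.05]
print(select_queries(scores))  # [4, 1]: the two most uncertain images
```

After the user labels the selected images, the SVM is retrained and the loop repeats, which is why only a few rounds of feedback are needed.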