
H. Fröhlich, O. Chapelle, and B. Schölkopf (2003). Feature selection for support vector machines by means of genetic algorithms. Paper presented at the 15th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2003), held 3-5 November 2003.

Results 1-10 of 74 citing documents:

Evolutionary tuning of multiple SVM parameters

by Frauke Friedrichs, Christian Igel - In Proc. of the 12th European Symposium on Artificial Neural Networks (ESANN 2004), 2004
"... The problem of model selection for support vector machines (SVMs) is considered. We propose an evolutionary approach to determine multiple SVM hyperparameters: The covariance matrix adaptation evolution strategy (CMA-ES) is used to determine the kernel from a parameterized kernel space and to contro ..."
Abstract - Cited by 74 (5 self) - Add to MetaCart
The problem of model selection for support vector machines (SVMs) is considered. We propose an evolutionary approach to determine multiple SVM hyperparameters: The covariance matrix adaptation evolution strategy (CMA-ES) is used to determine the kernel from a parameterized kernel space and to control the regularization. Our method is applicable to optimizing non-differentiable kernel functions and arbitrary model selection criteria. We demonstrate on benchmark datasets that the CMA-ES improves the results achieved by grid search even when applied to only a few hyperparameters. Further, we show that the CMA-ES can handle many more kernel parameters than grid search and that tuning the scaling and the rotation of Gaussian kernels can lead to better results in comparison to standard Gaussian kernels with a single bandwidth parameter. In particular, more flexibility of the kernel can reduce the number of support vectors. Key words: support vector machines, model selection, evolutionary algorithms

Citation Context

...ve. Evolutionary algorithms have been successfully applied to model selection for neural networks [11,18,25]. This includes the recent applications of genetic algorithms for feature selection of SVMs [6,7,14,17]. We use the covariance matrix adaptation evolution strategy (CMA-ES, [10]) to search for an appropriate hyperparameter vector. The fitness function that is optimized directly corresponds to some gene...
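
The entry above tunes SVM hyperparameters with CMA-ES. As a rough illustration of the general idea, the sketch below substitutes a much simpler (1+lambda) evolution strategy with log-scale Gaussian mutation for full CMA-ES; the dataset, offspring count, and step-size schedule are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)

def fitness(log_c, log_gamma):
    """Model selection criterion: mean 5-fold cross-validation accuracy."""
    clf = SVC(C=np.exp(log_c), gamma=np.exp(log_gamma))
    return cross_val_score(clf, X, y, cv=5).mean()

parent = np.array([0.0, -3.0])   # search in (log C, log gamma) space
best = fitness(*parent)
sigma = 1.0                      # single global step size; CMA-ES would adapt a full covariance
for generation in range(30):
    offspring = parent + sigma * rng.standard_normal((8, 2))
    scores = [fitness(*child) for child in offspring]
    i = int(np.argmax(scores))
    if scores[i] >= best:        # (1+lambda) selection: replace the parent only if no worse
        parent, best = offspring[i], scores[i]
    sigma *= 0.95                # crude decay in place of covariance matrix adaptation
print("C=%.3g, gamma=%.3g, CV accuracy=%.3f"
      % (np.exp(parent[0]), np.exp(parent[1]), best))
```

A grid search over the same space needs exponentially more evaluations as kernel parameters are added (e.g., one scale per input dimension), which is the regime where the abstract reports CMA-ES paying off.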

A GA-based feature selection and parameters optimization for support vector machines

by Cheng-Lung Huang, Chieh-Jen Wang, 2006
"... ..."
Abstract - Cited by 47 (0 self) - Add to MetaCart
Abstract not found

Credit scoring with a data mining approach based on support vector machines

by Cheng-Lung Huang, Mu-Chen Chen, Chieh-Jen Wang, 2006
"... The credit card industry has been growing rapidly recently, and thus huge numbers of consumers’ credit data are collected by the credit department of the bank. The credit scoring manager often evaluates the consumer’s credit with intuitive experience. However, with the support of the credit classifi ..."
Abstract - Cited by 37 (0 self) - Add to MetaCart
The credit card industry has been growing rapidly in recent years, and thus huge volumes of consumer credit data are collected by the credit departments of banks. Credit scoring managers often evaluate a consumer's credit from intuitive experience; with the support of a credit classification model, however, they can evaluate an applicant's credit score accurately. Support Vector Machine (SVM) classification is currently an active research area and successfully solves classification problems in many domains. This study used three strategies to construct hybrid SVM-based credit scoring models that evaluate an applicant's credit score from the applicant's input features. Two credit datasets from the UCI database were selected as the experimental data to demonstrate the accuracy of the SVM classifier. Compared with neural networks, genetic programming, and decision tree classifiers, the SVM classifier achieved identical classification accuracy with relatively few input features. Additionally, by combining genetic algorithms with the SVM classifier, the proposed hybrid GA-SVM strategy can simultaneously perform feature selection and model parameter optimization. Experimental results show that SVM is a promising addition to the existing data mining methods.
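
As a hedged sketch of the hybrid GA-SVM idea in the abstract above, the chromosome below concatenates a binary feature mask with (log C, log gamma) genes, so feature selection and parameter optimization happen in a single search. The population size, operator rates, and demo dataset are my assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
n_feat = X.shape[1]
rng = np.random.default_rng(1)

def fitness(ch):
    """CV accuracy of an SVM on the features and parameters decoded from one chromosome."""
    mask = ch[:n_feat] > 0.5
    if not mask.any():
        return 0.0                                    # penalize empty feature sets
    clf = SVC(C=np.exp(ch[n_feat]), gamma=np.exp(ch[n_feat + 1]))
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

# Population: random binary masks plus Gaussian-initialized parameter genes.
pop = np.hstack([rng.integers(0, 2, (20, n_feat)).astype(float),
                 rng.normal(0.0, 2.0, (20, 2))])
for generation in range(25):
    fit = np.array([fitness(ch) for ch in pop])
    def tournament():                                 # binary tournament selection
        i, j = rng.integers(0, len(pop), 2)
        return pop[i] if fit[i] >= fit[j] else pop[j]
    children = []
    for _ in range(len(pop)):
        a, b = tournament(), tournament()
        child = np.where(rng.random(n_feat + 2) < 0.5, a, b)  # uniform crossover
        flip = rng.random(n_feat) < 0.05                      # bit-flip mutation on the mask
        child[:n_feat][flip] = 1.0 - child[:n_feat][flip]
        child[n_feat:] += rng.normal(0.0, 0.3, 2)             # Gaussian mutation on (C, gamma)
        children.append(child)
    pop = np.array(children)

best = max(pop, key=fitness)
print("features kept: %d of %d" % (int((best[:n_feat] > 0.5).sum()), n_feat))
```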

The Genetic Kernel Support Vector Machine: Description and Evaluation

by Tom Howley, Michael G. Madden - Artificial Intelligence Review, 2005
"... Abstract. The Support Vector Machine (SVM) has emerged in recent years as a popular approach to the classification of data. One problem that faces the user of an SVM is how to choose a kernel and the specific parameters for that kernel. Applications of an SVM therefore require a search for the optim ..."
Abstract - Cited by 29 (1 self) - Add to MetaCart
Abstract. The Support Vector Machine (SVM) has emerged in recent years as a popular approach to the classification of data. One problem that faces the user of an SVM is how to choose a kernel and the specific parameters for that kernel. Applications of an SVM therefore require a search for the optimum settings for a particular problem. This paper proposes a classification technique, which we call the Genetic Kernel SVM (GK SVM), that uses Genetic Programming to evolve a kernel for an SVM classifier. Results of initial experiments with the proposed technique are presented. These results are compared with those of a standard SVM classifier using the Polynomial, RBF, and Sigmoid kernels with various parameter settings.
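
The mechanics behind evaluating an evolved kernel are easy to sketch: scikit-learn's SVC accepts any callable that returns a Gram matrix, so each kernel expression produced by genetic programming can be plugged in and scored by cross-validation. The composite kernel below is a hand-written stand-in for an evolved one, not a kernel from the paper.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def candidate_kernel(A, B):
    """Example candidate expression: an RBF term plus a scaled linear term."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq_dists) + 0.1 * (A @ B.T)

X, y = load_iris(return_X_y=True)
clf = SVC(kernel=candidate_kernel)   # GP fitness would be the CV score of this model
print("CV accuracy: %.3f" % cross_val_score(clf, X, y, cv=5).mean())
```

This particular candidate is a valid kernel because a sum of positive semidefinite kernels is positive semidefinite; arbitrary evolved expressions carry no such guarantee, so a GP search has to detect or tolerate non-Mercer candidates.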

Multi-objective model selection for support vector machines

by Christian Igel - Proceedings of the Third International Conference on Evolutionary Multi-Criterion Optimization (EMO 2005), volume 3410 of LNAI, 2005
"... Abstract. In this article, model selection for support vector machines is viewed as a multi-objective optimization problem, where model complexity and training accuracy define two conflicting objectives. Different optimization criteria are evaluated: Split modified radius margin bounds, which allow ..."
Abstract - Cited by 18 (7 self) - Add to MetaCart
Abstract. In this article, model selection for support vector machines is viewed as a multi-objective optimization problem, where model complexity and training accuracy define two conflicting objectives. Different optimization criteria are evaluated: split modified radius margin bounds, which allow for comparing existing model selection criteria, and the training error in conjunction with the number of support vectors for designing sparse solutions.

Citation Context

...—the algorithms are prone to getting stuck in local optima. In [21, 22], single-objective evolution strategies were proposed for adapting SVM hyperparameters, which partly overcome these problems; in [23] a single-objective genetic algorithm was used for SVM feature selection (see also [24–26]) and adaptation of the (discretized) regularization parameter. Like gradient-based techniques, these methods ...
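
A minimal way to make the two conflicting objectives concrete is to score each hyperparameter setting by both cross-validated error and the number of support vectors, then keep the non-dominated settings. The grid, dataset, and this particular objective pair are illustrative assumptions; the paper's split modified radius margin bounds are not reproduced here.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

points = []
for log_c in np.linspace(-2, 4, 7):
    for log_g in np.linspace(-6, 0, 7):
        clf = SVC(C=np.exp(log_c), gamma=np.exp(log_g)).fit(X, y)
        err = 1.0 - cross_val_score(clf, X, y, cv=3).mean()   # objective 1: CV error
        nsv = int(clf.n_support_.sum())                       # objective 2: model size
        points.append((err, nsv, log_c, log_g))

# Pareto filter: keep settings that no other setting beats in both objectives.
pareto = [p for p in points
          if not any(q[0] <= p[0] and q[1] <= p[1] and (q[0] < p[0] or q[1] < p[1])
                     for q in points)]
for err, nsv, log_c, log_g in sorted(pareto):
    print("err=%.3f  #SV=%4d  C=%.3g  gamma=%.3g"
          % (err, nsv, np.exp(log_c), np.exp(log_g)))
```

An evolutionary multi-objective optimizer would replace the grid; the resulting Pareto front lets the practitioner trade sparseness against accuracy after the search instead of fixing the trade-off beforehand.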

Large-scale attribute selection using wrappers

by Martin Gütlein, Eibe Frank, Mark Hall, Andreas Karwath
"... Abstract—Scheme-specific attribute selection with the wrapper and variants of forward selection is a popular attribute selection technique for classification that yields good results. However, it can run the risk of overfitting because of the extent of the search and the extensive use of internal cr ..."
Abstract - Cited by 15 (0 self) - Add to MetaCart
Abstract—Scheme-specific attribute selection with the wrapper and variants of forward selection is a popular attribute selection technique for classification that yields good results. However, it can run the risk of overfitting because of the extent of the search and the extensive use of internal cross-validation. Moreover, although wrapper evaluators tend to achieve superior accuracy compared to filters, they face a high computational cost. The problems of overfitting and high runtime occur in particular on high-dimensional datasets, like microarray data. We investigate Linear Forward Selection, a technique to reduce the number of attribute expansions in each forward selection step. Our experiments demonstrate that this approach is faster, finds smaller subsets and can even increase the accuracy compared to standard forward selection. We also investigate a variant that applies explicit subset size determination in forward selection to combat overfitting, where the search is forced to stop at a precomputed “optimal” subset size. We show that this technique reduces subset size while maintaining comparable accuracy.
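
A hedged sketch of the linear forward selection idea follows: rank the attributes once, then run the wrapper's greedy forward search only over the top-k candidates, which caps the number of subset evaluations per step. The value of k, the univariate ranker, and the base classifier are my choices for illustration, not those of the paper.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import f_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
k = 10
ranked = np.argsort(f_classif(X, y)[0])[::-1][:k]   # fixed candidate set: top-k by F-score

selected, best = [], 0.0
improved = True
while improved:                                      # standard wrapper forward selection,
    improved = False                                 # but only over the k ranked candidates
    for f in ranked:
        if f in selected:
            continue
        score = cross_val_score(SVC(), X[:, selected + [f]], y, cv=5).mean()
        if score > best:
            best, best_f, improved = score, f, True
    if improved:
        selected.append(best_f)

print("selected attributes:", selected, "CV accuracy: %.3f" % best)
```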

Prediction of cytochrome P450 3A4, 2D6, and 2C9 inhibitors and substrates by using support vector machines

by C. W. Yap, Y. Z. Chen - J. Chem. Inf. Model. 2005
"... Statistical learning methods have been used in developing filters for predicting inhibitors of two P450 isoenzymes, CYP3A4 and CYP2D6. This work explores the use of different statistical learning methods for predicting inhibitors of these enzymes and an additional P450 enzyme, CYP2C9, and the substr ..."
Abstract - Cited by 12 (1 self) - Add to MetaCart
Statistical learning methods have been used in developing filters for predicting inhibitors of two P450 isoenzymes, CYP3A4 and CYP2D6. This work explores the use of different statistical learning methods for predicting inhibitors of these enzymes and an additional P450 enzyme, CYP2C9, and the substrates of the three P450 isoenzymes. Two consensus support vector machine (CSVM) methods, “positive majority” (PM-CSVM) and “positive probability” (PP-CSVM), were used in this work. These methods were first tested for the prediction of inhibitors of CYP3A4 and CYP2D6 by using a significantly higher number of inhibitors and noninhibitors than that used in earlier studies. They were then applied to the prediction of inhibitors of CYP2C9 and substrates of the three enzymes. Both methods predict inhibitors of CYP3A4 and CYP2D6 at a similar level of accuracy as those of earlier studies. For classification of inhibitors of CYP2C9, the best CSVM method gives an accuracy of 88.9% for inhibitors and 96.3% for noninhibitors. The accuracies for classification of substrates and nonsubstrates of CYP3A4, CYP2D6, and CYP2C9 are 98.2% and 90.9%, 96.6% and 94.4%, and 85.7% and 98.8%, respectively. Both CSVM methods are potentially useful as filters for predicting inhibitors and substrates of P450 isoenzymes. These methods generally give better accuracies than single SVM classification systems, and the performance of the PP-CSVM method is slightly better than that of the PM-CSVM method.
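
As a rough sketch of a positive-majority consensus (in the spirit of PM-CSVM, though not the authors' implementation), several SVMs trained on bootstrap resamples vote, and a sample is labeled positive when most members agree. The synthetic data below stands in for the molecular descriptors used in the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=30, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
rng = np.random.default_rng(0)

members = []
for _ in range(9):                                  # odd ensemble size avoids voting ties
    idx = rng.integers(0, len(Xtr), len(Xtr))       # bootstrap resample of the training set
    members.append(SVC().fit(Xtr[idx], ytr[idx]))

votes = np.mean([m.predict(Xte) for m in members], axis=0)
pred = (votes >= 0.5).astype(int)                   # positive-majority decision rule
print("consensus accuracy: %.3f" % (pred == yte).mean())
```

A "positive probability" variant would average the members' class probabilities (e.g., via SVC(probability=True)) instead of their hard votes.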

Evolutionary learning with kernels: a generic solution for large margin problems

by Ingo Mierswa - In GECCO ’06: Proceedings of the 8th annual conference on Genetic and evolutionary computation
"... In this paper we embed evolutionary computation into statistical learning theory. First, we outline the connection between large margin optimization and statistical learning and see why this paradigm is successful for many pattern recognition problems. We then embed evolutionary computation into the ..."
Abstract - Cited by 10 (2 self) - Add to MetaCart
In this paper we embed evolutionary computation into statistical learning theory. First, we outline the connection between large margin optimization and statistical learning and see why this paradigm is successful for many pattern recognition problems. We then embed evolutionary computation into the most prominent representative of this class of learning methods, namely into Support Vector Machines (SVM). In contrast to former applications of evolutionary algorithms to SVMs, we do not merely optimize the method or kernel parameters: we use both evolution strategies and particle swarm optimization to directly solve the posed constrained optimization problem. Transforming the problem into the Wolfe dual reduces the total runtime and allows the usage of kernel functions. Exploiting knowledge about this optimization problem leads to a hybrid mutation which further decreases convergence time while classification accuracy is preserved. We show that evolutionary SVMs are at least as accurate as their quadratic programming counterparts on six real-world benchmark data sets, and the evolutionary SVM variants frequently outperform their quadratic programming competitors. Additionally, the proposed algorithm is more generic than existing traditional solutions since it also works for non-positive semidefinite kernel functions and for several, possibly competing, performance criteria.

Citation Context

...) evolutionary optimization approach. Former applications of evolutionary algorithms to SVMs include the optimization of method and kernel parameters [6, 19], the selection of optimal feature subsets [7], and the creation of new kernel functions by means of genetic programming [10]. The latter is particularly interesting since it cannot be guaranteed that the resulting kernel functions are again posi...
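
To make "directly solving the posed constrained optimization problem" concrete: the sketch below maximizes the Wolfe dual W(alpha) = sum_i alpha_i - 1/2 sum_ij alpha_i alpha_j y_i y_j k(x_i, x_j) with a (1+1) evolution strategy, using clipping for the box constraints 0 <= alpha_i <= C and a quadratic penalty in place of the equality constraint sum_i alpha_i y_i = 0. The penalty weight, step size, and data are illustrative assumptions; the paper's hybrid mutation and particle swarm variant are not reproduced.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import rbf_kernel

X, y01 = make_classification(n_samples=80, n_features=5, random_state=0)
y = 2 * y01 - 1                            # labels in {-1, +1}
K = rbf_kernel(X)
Q = (y[:, None] * y[None, :]) * K          # Q_ij = y_i * y_j * k(x_i, x_j)
C, n = 1.0, len(y)

def penalized_dual(alpha):
    """Wolfe dual objective with a quadratic penalty for sum(alpha * y) = 0."""
    w = alpha.sum() - 0.5 * alpha @ Q @ alpha
    return w - 10.0 * (alpha @ y) ** 2

rng = np.random.default_rng(0)
parent = np.full(n, 0.1)
best = penalized_dual(parent)
for step in range(2000):
    child = np.clip(parent + rng.normal(0.0, 0.02, n), 0.0, C)  # enforce box constraints
    f = penalized_dual(child)
    if f >= best:                          # (1+1)-ES acceptance rule
        parent, best = child, f

print("dual value: %.3f, support vectors: %d" % (best, int((parent > 1e-3).sum())))
```

Because only kernel evaluations appear in the objective, nothing here requires the Gram matrix to be positive semidefinite, which is the flexibility the abstract highlights.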

Genetic Algorithm-based Feature Set Partitioning for Classification Problems

by Lior Rokach
"... Feature set partitioning generalizes the task of feature selection by partitioning the feature set into subsets of features that are collectively useful, rather than by finding a single useful subset of features. This paper presents a novel feature set partitioning approach that is based on a geneti ..."
Abstract - Cited by 9 (5 self) - Add to MetaCart
Feature set partitioning generalizes the task of feature selection by partitioning the feature set into subsets of features that are collectively useful, rather than by finding a single useful subset of features. This paper presents a novel feature set partitioning approach that is based on a genetic algorithm. As part of this new approach a new encoding schema is also proposed and its properties are discussed. We examine the effectiveness of using a Vapnik-Chervonenkis dimension bound for evaluating the fitness function of multiple, oblivious tree classifiers. The new algorithm was tested on various datasets and the results indicate the superiority of the proposed algorithm to other methods.

Citation Context

...heory for evaluating the generalization error bound. This choice follows from the use of VC theory in previous works to evaluate decision trees [44] and oblivious decision trees [33]. Fröhlich et al. [45] have used a VC dimension bound for guiding a GA while solving the feature selection problem in support vector machines. In the same spirit we opt for using VC dimension theory in this paper.
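
The encoding is the heart of the approach above. One simple variant (an assumption for illustration, not the paper's schema) assigns each attribute an integer group label, trains one classifier per non-empty group, and scores the partition by the accuracy of their majority vote; the paper instead evaluates fitness with a Vapnik-Chervonenkis bound over oblivious trees.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_breast_cancer(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
n_feat, n_groups = X.shape[1], 3
rng = np.random.default_rng(0)

def fitness(assign):
    """Majority vote of one classifier per non-empty feature group."""
    preds = []
    for grp in range(n_groups):
        cols = np.where(assign == grp)[0]
        if len(cols) > 0:
            preds.append(GaussianNB().fit(Xtr[:, cols], ytr).predict(Xte))
    vote = (np.mean(preds, axis=0) >= 0.5).astype(int)
    return (vote == yte).mean()

pop = rng.integers(0, n_groups, (20, n_feat))       # chromosome: group label per attribute
for generation in range(30):                        # simple mutate-and-select loop
    fit = np.array([fitness(ch) for ch in pop])
    elite = pop[np.argsort(fit)[-10:]]              # keep the better half
    mutants = elite.copy()
    flip = rng.random(mutants.shape) < 0.1          # reassign about 10% of attributes
    mutants[flip] = rng.integers(0, n_groups, flip.sum())
    pop = np.vstack([elite, mutants])

print("best partition accuracy: %.3f" % max(fitness(ch) for ch in pop))
```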

Evolutionary optimization of sequence kernels for detection of bacterial gene starts

by Britta Mersch, Tobias Glasmachers, Peter Meinicke, Christian Igel
"... ..."
Abstract - Cited by 7 (3 self) - Add to MetaCart
Abstract not found