Contents lists available at ScienceDirect: Expert Systems with Applications. Journal homepage: www.elsevier.com/locate/eswa
Citations
6594 | C4.5: Programs for Machine Learning
- Quinlan
- 1992
Citation Context ... (9) lp: linear perceptron with softmax outputs trained by gradient descent to minimize cross-entropy. (10) c45: the most widely used C4.5 decision tree algorithm (Quinlan, 1993). (11) ldt: a multivariate tree which, unlike C4.5's univariate, axis-orthogonal splits, uses splits that are arbitrary hyperplanes over all inputs (Loh & Shih, 1997). (12)–(14) s... |
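The univariate, axis-orthogonal splits mentioned above can be illustrated with a short sketch. Note the assumptions: scikit-learn is not used in the paper (which uses C4.5 itself), and its `DecisionTreeClassifier` implements CART rather than C4.5; with `criterion="entropy"` it serves here only as a rough stand-in, and the dataset and seed are illustrative.

```python
# Hedged sketch: scikit-learn's DecisionTreeClassifier (CART) with the
# entropy criterion approximates C4.5's information-gain-based univariate,
# axis-orthogonal splits. It is a stand-in, not the algorithm from the paper.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
# Mirror the paper's split: 1/3 test, 2/3 training.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1/3, random_state=0)

tree = DecisionTreeClassifier(criterion="entropy", random_state=0)
tree.fit(X_tr, y_tr)          # each internal node tests a single feature
acc = tree.score(X_te, y_te)  # fraction of correct test predictions
print(f"test accuracy: {acc:.2f}")
```

Each split in such a tree compares one input feature against a threshold, whereas the multivariate ldt tree splits on arbitrary hyperplanes over all inputs.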
6467 | LIBSVM: a library for support vector machines
- Chang, Lin
- 2011
Citation Context ...svm: support vector machines (SVM) with a linear kernel (sv1), polynomial kernel of degree 2 (sv2), and a radial (Gaussian) kernel (svr). We use the LIBSVM 2.82 library that implements pairwise SVMs (Chang & Lin, 2001). 3.1.3. Division of training, validation, and test sets The methodology is as follows: A dataset is first divided into two parts, with 1/3 as the test set, test, and 2/3 as the training set, train-a... |
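The three SVM variants above can be sketched as follows. Assumptions to note: scikit-learn's `SVC` wraps LIBSVM and trains pairwise (one-vs-one) multiclass SVMs, matching the setup described, but the paper used LIBSVM 2.82 directly; the dataset, scaling, and seed here are illustrative.

```python
# Sketch of the three SVM variants: sv1 (linear), sv2 (polynomial, degree 2),
# svr (radial/Gaussian). SVC wraps LIBSVM and uses pairwise multiclass SVMs.
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_wine(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1/3, random_state=0)

# Standardize features; SVMs are sensitive to feature scale.
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

variants = {
    "sv1": SVC(kernel="linear"),
    "sv2": SVC(kernel="poly", degree=2),
    "svr": SVC(kernel="rbf"),
}
scores = {name: clf.fit(X_tr, y_tr).score(X_te, y_te)
          for name, clf in variants.items()}
print(scores)  # test accuracy per kernel
```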
3453 | UCI repository of machine learning databases. http://www.ics.uci.edu/~mlearn/MLRepository.html
- Blake, Merz
- 1998
Citation Context ...ring information, and which should be fine-tuned to get accurate results. 3. Experiments 3.1. Experimental setup 3.1.1. Base datasets We use a total of 38 base datasets where 35 of them are from UCI (Blake & Merz, 2000) and 3 are from Delve (Hinton, 1996) repositories (see Table 1). 3.1.2. Base classifiers We use fourteen base classifiers which we have chosen to span as much as possible the wide spectrum of possibl... |
3134 | Data Mining: Concepts and Techniques
- Han, Kamber
- 2001
Citation Context ...3nn are very close but svr and sv2 are not. Automatic systems that can recommend good classifiers would be very useful in data mining applications where users need not be experts in machine learning (Han & Kamber, 2000). A similarity calculation strategy as we do in this paper would be useful in such a case. |
2443 | A global geometric framework for nonlinear dimensionality reduction - TENENBAUM, SILVA, et al. - 2000 |
718 | Approximate statistical tests for comparing supervised classification learning algorithms
- Dietterich
- 1998
Citation Context ...ows: A dataset is first divided into two parts, with 1/3 as the test set, test, and 2/3 as the training set, train-all. The training set, train-all, is then resampled using 5×2 cross-validation (cv) (Dietterich, 1998) where two-fold cv is done five times (with stratification) and the roles swapped at each fold to generate ten training and validation folds, tra_i, val_i, i = 1,...,10. tra_i are used to train the bas... |
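The 5×2 cv resampling described above can be sketched as follows. Assumptions: scikit-learn's `StratifiedKFold` with two splits yields complementary folds, so each repetition automatically produces the two role-swapped splits; the dataset and seeds are illustrative, not those of the paper.

```python
# Sketch of 5x2 cross-validation (Dietterich, 1998): stratified two-fold CV
# repeated five times with roles swapped, giving ten (tra_i, val_i) splits.
from sklearn.datasets import load_iris
from sklearn.model_selection import StratifiedKFold

X, y = load_iris(return_X_y=True)

folds = []  # list of (train_indices, validation_indices) pairs
for rep in range(5):
    # A different shuffle per repetition; n_splits=2 yields two complementary
    # splits, i.e. the fold roles are swapped automatically.
    skf = StratifiedKFold(n_splits=2, shuffle=True, random_state=rep)
    for tra_i, val_i in skf.split(X, y):
        folds.append((tra_i, val_i))

print(len(folds))  # 10 training/validation splits
```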
263 | Methods of multivariate analysis
- Rencher
- 2002
Citation Context ...n the paper. We give our experiments and results in Section 3 and conclude in Section 4. 2.1. Principal component analysis Principal component analysis (PCA) (Rencher, 1995) projects data points x_i ∈ R^d onto lower-dimensional coordinates y_j ∈ R^p for best information preservation. The linear projection is given by Y = XW, where W is a d × p projection matrix found to ma... |
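The projection Y = XW described above can be sketched directly in NumPy. Assumptions: the data here is synthetic, and W is built from the top-p eigenvectors of the sample covariance matrix, which is the standard variance-maximizing choice implied by the text.

```python
# Minimal PCA sketch matching the text: project X (n x d) onto Y = X W,
# where W (d x p) holds the top-p eigenvectors of the covariance matrix.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))           # n = 200 points in d = 5 dimensions
Xc = X - X.mean(axis=0)                 # center the data first

cov = np.cov(Xc, rowvar=False)          # d x d sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)  # eigh returns ascending eigenvalues
order = np.argsort(eigvals)[::-1]       # reorder by descending variance

p = 2
W = eigvecs[:, order[:p]]               # d x p projection matrix
Y = Xc @ W                              # n x p projected coordinates
print(Y.shape)  # (200, 2)
```

By construction the first projected coordinate captures at least as much variance as the second.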
128 | Split selection methods for classification trees
- Loh, Shih
- 1997
Citation Context ... tree algorithm (Quinlan, 1993). (11) ldt: a multivariate tree which, unlike C4.5's univariate, axis-orthogonal splits, uses splits that are arbitrary hyperplanes over all inputs (Loh & Shih, 1997). (12)–(14) svm: support vector machines (SVM) with a linear kernel (sv1), polynomial kernel of degree 2 (sv2), and a radial (Gaussian) kernel (svr). We use the LIBSVM 2.82 library that implements pa... |
100 | Complexity measures of supervised classification problems - Ho, Basu - 2002 |
70 | Ranking learning algorithms: Using IBL and meta-learning on accuracy and time results. - Brazdil, Soares, et al. - 2003 |
10 | Methods of comparison, in
- Henery
- 1994
Citation Context ...s supervised classification problems that characterize the difficulty of a classification problem. The metrics they propose focus on the geometrical properties of the class boundary. In another work (Henery, 1994), datasets are characterized using meta-attributes which use general, statistical and information-theoretic measures. Such measures can also be used together with posterior probability estimates of e... |
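The kind of meta-attributes described above can be sketched with a few simple measures. The function name and the specific measures chosen here are illustrative, not taken from Henery (1994); they simply exemplify one general, one statistical, and one information-theoretic quantity per category.

```python
# Illustrative sketch: a handful of meta-attributes of the kind used to
# characterize datasets for meta-learning -- general, statistical, and
# information-theoretic measures. Names and choices are hypothetical.
from collections import Counter

import numpy as np


def meta_attributes(X, y):
    n, d = X.shape
    counts = np.array(list(Counter(y).values()), dtype=float)
    p = counts / counts.sum()  # empirical class prior
    return {
        "n_instances": n,                                   # general
        "n_features": d,                                    # general
        "n_classes": len(counts),                           # general
        "mean_feature_std": float(X.std(axis=0).mean()),    # statistical
        "class_entropy": float(-(p * np.log2(p)).sum()),    # info-theoretic
    }


# Tiny synthetic example: 4 instances, 2 features, 2 balanced classes.
X = np.array([[0.0, 1.0], [1.0, 0.0], [2.0, 1.0], [3.0, 0.0]])
y = np.array([0, 0, 1, 1])
print(meta_attributes(X, y))
```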
3 | Delve project, data for evaluating learning in valid experiments
- Hinton
- 1996
Citation Context ... tuned to get accurate results. 3. Experiments 3.1. Experimental setup 3.1.1. Base datasets We use a total of 38 base datasets where 35 of them are from UCI (Blake & Merz, 2000) and 3 are from Delve (Hinton, 1996) repositories (see Table 1). 3.1.2. Base classifiers We use fourteen base classifiers which we have chosen to span as much as possible the wide spectrum of possible machine learning algorithms: (1)–(... |