CiteSeerX

Results 1 - 10 of 35,998

Table 2 Measures assessing the general efficacy of the machine learning method

in Applying Machine Learning Toward an Automatic Classification of It
by Richard Evans (School of Humanities, Languages and Social Sciences, University of Wolverhampton)
"... In PAGE 25: ... Table Captions. Table 1: Performance of the machine learning and rule-based classification methods. Table 2: Measures assessing the general efficacy of the machine learning method ... ..."

Table 3. Generalization accuracy of IDIBL and several well-known machine learning models.

in An Integrated Instance-Based Learning Algorithm
by D. Randall Wilson, Tony R. Martinez 2000
"... In PAGE 23: ...algorithms, i.e., they make decisions on which instances to prune before examining all of the available training data. The results of these comparisons are presented in Table 3. The highest accuracy achieved for each dataset is shown in bold type.... In PAGE 23: ... In addition, a one-tailed Wilcoxon signed ranks test was used to verify whether the average accuracy on this set of classification tasks was significantly higher than each of the others. The bottom row of Table 3 gives the confidence level at which IDIBL is significantly higher than each of the other classification systems. As can be seen from the table, the average accuracy for IDIBL over this set of tasks was significantly higher than that of all of the other algorithms except Backpropagation, at over a 99% confidence level.... In PAGE 23: ... The accuracy for each of these datasets is shown in Table 2 for kNN, IVDM, DROP4 and LCV, but for the purpose of comparison, the average accuracy on the smaller set of 21 applications in Table 3 is given here as follows: kNN, 76.... ..."
Cited by 20
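The one-tailed Wilcoxon signed-ranks test mentioned in the excerpt compares paired per-dataset accuracies of two classifiers. A minimal pure-Python sketch of the exact version of that test follows; the accuracy figures are made up for illustration, not the values from the paper's Table 3:

```python
from itertools import product

# Hypothetical paired generalization accuracies, one pair per dataset.
# (Illustrative values only; not the figures from the paper's Table 3.)
acc_a = [0.90, 0.88, 0.92, 0.85, 0.91, 0.87, 0.93, 0.89, 0.86, 0.94]
acc_b = [0.89, 0.86, 0.905, 0.845, 0.88, 0.845, 0.89, 0.855, 0.815, 0.89]

def wilcoxon_one_tailed(x, y):
    """Exact one-tailed Wilcoxon signed-ranks p-value for H1: x > y.

    Assumes no zero differences and no ties in |difference| (true for the
    toy data above), and a small n, since the exact null distribution is
    enumerated over all 2**n sign patterns.
    """
    d = [xi - yi for xi, yi in zip(x, y)]
    n = len(d)
    order = sorted(range(n), key=lambda i: abs(d[i]))
    rank = [0] * n
    for r, i in enumerate(order, start=1):
        rank[i] = r
    w_obs = sum(rank[i] for i in range(n) if d[i] > 0)
    # Under H0, each difference is positive or negative with probability 1/2,
    # so every sign pattern over the ranks is equally likely.
    hits = sum(1 for signs in product((0, 1), repeat=n)
               if sum(r for r, s in zip(rank, signs) if s) >= w_obs)
    return hits / 2 ** n

p = wilcoxon_one_tailed(acc_a, acc_b)
confidence = (1 - p) * 100  # the "confidence level" reported in the excerpt
```

With all ten differences positive, the observed rank sum is maximal and p = 1/1024, i.e. a confidence level above 99%, which is the kind of figure the excerpt's bottom row of Table 3 reports.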

Table 3. Generalization accuracy of IDIBL and several well-known machine learning models.

in An Integrated Instance-Based Learning Algorithm
by D. Randall Wilson, Tony R. Martinez 1997
"... In PAGE 23: ...algorithms, i.e., they make decisions on which instances to prune before examining all of the available training data. The results of these comparisons are presented in Table 3. The highest accuracy achieved for each dataset is shown in bold type.... In PAGE 23: ... In addition, a one-tailed Wilcoxon signed ranks test was used to verify whether the average accuracy on this set of classification tasks was significantly higher than each of the others. The bottom row of Table 3 gives the confidence level at which IDIBL is significantly higher than each of the other classification systems. As can be seen from the table, the average accuracy for IDIBL over this set of tasks was significantly higher than that of all of the other algorithms except Backpropagation, at over a 99% confidence level.... In PAGE 23: ... The accuracy for each of these datasets is shown in Table 2 for kNN, IVDM, DROP4 and LCV, but for the purpose of comparison, the average accuracy on the smaller set of 21 applications in Table 3 is given here as follows: kNN, 76.... ..."
Cited by 20

Table 3. Generalization accuracy of IDIBL and several well-known machine learning models.

in Advances in instance-based learning algorithms
by D. Randall Wilson, Tony R. Martinez 1997
"... In PAGE 23: ...algorithms, i.e., they make decisions on which instances to prune before examining all of the available training data. The results of these comparisons are presented in Table 3. The highest accuracy achieved for each dataset is shown in bold type.... In PAGE 23: ... In addition, a one-tailed Wilcoxon signed ranks test was used to verify whether the average accuracy on this set of classification tasks was significantly higher than each of the others. The bottom row of Table 3 gives the confidence level at which IDIBL is significantly higher than each of the other classification systems. As can be seen from the table, the average accuracy for IDIBL over this set of tasks was significantly higher than that of all of the other algorithms except Backpropagation, at over a 99% confidence level.... In PAGE 23: ... The accuracy for each of these datasets is shown in Table 2 for kNN, IVDM, DROP4 and LCV, but for the purpose of comparison, the average accuracy on the smaller set of 21 applications in Table 3 is given here as follows: kNN, 76.... ..."
Cited by 20


Table 1. General constructive induction idea: (i) the machine learning algorithm, (ii) the constructive induction module, and (iii) an evaluation component.

in How does the Hue contribute to construct better colour features?
by Giovani Gomez Estrada, Eduardo Morales
"... In PAGE 4: ... We are interested in inducing simple models as they are relevant to applications which require fast response times, such as semi-automatic calibration of colour targets, gesture recognition, face and human tracking, etc. The general approach followed in this paper for constructive induction is shown in Table 1. The idea is to start with some primitive attributes and a set of constructive operators, create a new representation space, run an inductive learning algorithm, and select the best attributes of this new space.... ..."
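The loop the excerpt describes (primitive attributes plus constructive operators, a new representation space, then evaluation and selection of the best attributes) can be sketched as below. The colour data, the three operators, and the separation-based score are all illustrative assumptions, not the paper's actual components:

```python
# Toy constructive induction: start from primitive colour attributes, apply
# constructive operators to build a new representation space, score every
# attribute, and keep the best ones. (Data, operators, and the evaluation
# function are illustrative stand-ins, not the paper's.)
samples = [  # (R, G, B) primitive attributes
    (0.90, 0.10, 0.50), (0.80, 0.20, 0.40), (0.85, 0.15, 0.45),  # class 0
    (0.10, 0.90, 0.50), (0.20, 0.80, 0.60), (0.15, 0.85, 0.55),  # class 1
]
labels = [0, 0, 0, 1, 1, 1]
names = ("R", "G", "B")

operators = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "div": lambda a, b: a / b if b else 0.0,
}

# New representation space: the primitives plus every ordered operator pair.
features = {n: [s[i] for s in samples] for i, n in enumerate(names)}
for op, fn in operators.items():
    for i, a in enumerate(names):
        for j, b in enumerate(names):
            if i != j:
                features[f"{op}({a},{b})"] = [fn(s[i], s[j]) for s in samples]

def separation(values):
    """Class-mean separation over pooled spread (a stand-in evaluator)."""
    c0 = [v for v, y in zip(values, labels) if y == 0]
    c1 = [v for v, y in zip(values, labels) if y == 1]
    mean = lambda xs: sum(xs) / len(xs)
    std = lambda xs: (sum((x - mean(xs)) ** 2 for x in xs) / len(xs)) ** 0.5
    return abs(mean(c0) - mean(c1)) / (std(c0) + std(c1) + 1e-9)

# Select the best attributes of the new space.
best = sorted(features, key=lambda n: separation(features[n]), reverse=True)[:3]
```

On this toy data the constructed channel-difference features dominate the raw primitives, which is the point of the construction step: the learner gets attributes the primitives only express jointly.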

Table 3: Generalization on the MONK's problems. The generalization accuracy of the individual learning algorithms on the MONK's problems and that of combining prior symbolic knowledge with machine learning is shown in Table 3. As anticipated, the generalization of a combined approach surpasses that of each approach individually.

in Combining Prior Symbolic Knowledge And Constructive Neural Network Learning
by Zoran Obradović, Justin Fletcher 1993
Cited by 25

Table 1 shows the results for the first automaton, which is a Mealy machine. In this table we present the number of epochs needed to learn the task and the generalization performance of the net, for N = 2, ..., 12 neurons. With bigger values of N, we observed that the net either fails to learn the translation task or has worse generalization performance. The best generalization result was obtained for N = 6 hidden neurons, with 156,000 epochs of Alopex.

in Recursive Hetero-Associative Memories for Translation
by Mikel L. Forcada, Ramon P. Neco 1997
"... In PAGE 8: ... Table 1. Results for automaton M1.... ..."
Cited by 1
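For reference, a Mealy machine is a finite-state transducer whose output symbol depends on both the current state and the current input. The paper's automaton M1 is not reproduced in the snippet, so the machine below is a generic toy example (an edge detector over bit strings), not the one the net was trained on:

```python
# Toy Mealy machine: the emitted symbol depends on (state, input), not on
# the state alone. This one emits '1' exactly when the current input bit
# differs from the previous bit; the state remembers the last bit seen.
# (Illustrative only; not the paper's automaton M1.)
transitions = {
    ("last0", "0"): ("last0", "0"),
    ("last0", "1"): ("last1", "1"),
    ("last1", "0"): ("last0", "1"),
    ("last1", "1"): ("last1", "0"),
}

def translate(machine, start, inputs):
    """Run the transducer over the input string, collecting outputs."""
    state, out = start, []
    for sym in inputs:
        state, emitted = machine[(state, sym)]
        out.append(emitted)
    return "".join(out)

result = translate(transitions, "last0", "00110")
```

Learning such a machine from examples means recovering exactly this input-to-output string translation, which is why the excerpt reports both learning epochs and generalization on unseen strings.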

Table 1. The Data Sets

in Learning More Accurate Metrics for Self-Organizing Maps
by Jaakko Peltonen, Arto Klami, Samuel Kaski 2002
"... In PAGE 4: ... In the empirical tests of Section 5 we have used T = 10 evaluation points and W = 10 winner candidates, resulting in a 20-fold speed-up compared to the unwinnowed T-point approximation, but computational time compared to the 1-point approximation was still about 100-fold. 5 Empirical Testing. The methods were compared on five different data sets (Table 1). The class labels were used as the auxiliary data and the data sets were preprocessed by removing the classes with only a few samples.... ..."
Cited by 4
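The preprocessing step mentioned at the end of the excerpt, removing classes with only a few samples before using the class labels as auxiliary data, might look like the sketch below. The minimum-count threshold is an assumption, since the excerpt only says "a few":

```python
from collections import Counter

def drop_rare_classes(X, y, min_count=5):
    """Keep only samples whose class label occurs at least min_count times.

    min_count is an assumed threshold; the excerpt just says classes with
    'only a few samples' were removed.
    """
    counts = Counter(y)
    keep = [i for i, label in enumerate(y) if counts[label] >= min_count]
    return [X[i] for i in keep], [y[i] for i in keep]

# Tiny illustration: class "a" has 5 samples, "b" has 2, "c" has 1.
X = [[float(i)] for i in range(8)]
y = ["a", "a", "a", "a", "a", "b", "b", "c"]
X_kept, y_kept = drop_rare_classes(X, y, min_count=3)
```

With a threshold of 3, only the five "a" samples survive; classes "b" and "c" are dropped along with their feature vectors.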

Table 5. Percentages of Accurate Generalization

in The Potential of Prototype Styles of Generalization
by D. Randall Wilson, Tony R. Martinez 1993
"... In PAGE 6: ... 3. Empirical Results. Each of the generalization styles described above was implemented and tested on several well-known training sets from the repository of Machine Learning Databases at the University of California, Irvine, and the generalization accuracy of each style on each application is given in Table 5. The column labeled Max contains the maximum generalization accuracy of any of the styles for each application.... ..."
Cited by 6
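The Max column described in the excerpt, the best accuracy any style achieved on each application, is simply a row-wise maximum over the styles. A sketch with hypothetical numbers (the style names and values are placeholders, not the paper's Table 5):

```python
# Hypothetical per-application accuracies (%) for three generalization
# styles; the Max column is the row-wise maximum over the styles.
# (Application names, style names, and values are illustrative only.)
accuracy = {
    "iris":  {"styleA": 94.0, "styleB": 95.3, "styleC": 92.7},
    "glass": {"styleA": 68.2, "styleB": 65.9, "styleC": 70.1},
    "vowel": {"styleA": 88.5, "styleB": 90.2, "styleC": 87.4},
}
max_column = {app: max(scores.values()) for app, scores in accuracy.items()}
```

Comparing each style's column against this Max column shows, per application, how far each style falls short of the best style available for that task.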