Results 1 - 10 of 55,258

Table 1: Flaw subset sizes.

in classification with SVM, Boosting and Hyperrectangle-based method
by J. Mitéran, S. Bouillant, M. Paindavoine, F. Mériaudeau, J. Dubois
"... In PAGE 5: ... These five flaws have been selected by the industrial partner as the most critical for the production. To classify these anomalies and evaluate the performances of the final system, training and test sets were built from 1606 sound pieces and 245 defected pieces ( Table1 ). Since the number of available samples per categories of flaw is very small, a ten-fold cross validated error was used to evaluate the classification performances.... ..."

Table 2: Subset size and accuracy: Heart Disease

in Efficient Feature Selection in Conceptual Clustering
by Mark Devaney, Ashwin Ram 1997
Cited by 35

Table 3: Subset size and accuracy: LED

in Efficient Feature Selection in Conceptual Clustering
by Mark Devaney, Ashwin Ram 1997
Cited by 35

Table 8: Breakdown of subset sizes by document type.

in Identifying and tracking changing interests
by Barry Crabtree, Stuart Soltysiak 1998
"... In PAGE 15: ... Figure 3: Interest themes over time for User A. 21 points have been plotted, representing the different themes over the time of the experiment (see Table8 ). For example the PAAM theme has nine points plotted for each of the time periods in which that theme appeared.... In PAGE 17: ... First, notice that the scale on the x axis is 1000 times that of the y axis. The point labels represent the time period that the theme first appeared ( Table8 ). This theme is quite constant from week 2 to week 7, then it moves away during weeks 8, 9 and 10.... ..."
Cited by 15

Table 4 Effect of increasing informant subset size

in of alternatively spliced transcripts
by Paul Flicek, Michael R Brent 2006
"... In PAGE 5: ...Table4 ). This increase corresponds partly to the usage of distantly related informant species in the informant set.... ..."

Table 1. Results summary for feature subset size 60 according to significance.

in Unsupervised Feature Selection for Text Data
by Nirmalie Wiratunga, Rob Lothian, Stewart Massie 2006
"... In PAGE 12: ... 6.2 Evaluation Summary We checked the significance of observed differences between GREEDY and CLUSTER, using a 2-tailed t-test with a 95% confidence level for feature subset size, a28 equal to 60 (see Table1 ). This test indicated that the superiority of CLUSTER over GREEDY was significant in all three datasets (bold font), but that of GREEDY on USREMAIL was not shown to be significant at this level.... ..."
Cited by 1
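As a rough illustration of the significance check described in the snippet above: a two-tailed t-test at the 95% confidence level on accuracy scores from two feature-selection methods. The snippet does not say whether the test was paired, so a paired variant (ttest_rel) is assumed here, and the accuracy values are invented placeholders.

    from scipy import stats

    # Made-up per-trial accuracies for the two methods (placeholders).
    greedy_acc  = [0.81, 0.79, 0.83, 0.80, 0.82, 0.78, 0.81, 0.80, 0.79, 0.82]
    cluster_acc = [0.84, 0.82, 0.85, 0.83, 0.84, 0.81, 0.83, 0.84, 0.82, 0.85]

    t_stat, p_value = stats.ttest_rel(cluster_acc, greedy_acc)  # two-tailed by default
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    if p_value < 0.05:
        print("difference is significant at the 95% confidence level")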

TABLE III COMPARISON OF SUBSET SIZE, DEPENDENCY VALUE, & RUN TIMES

in Distance Measure Assisted Rough Set Feature Selection
by unknown authors

Table 2: MSE comparison of MRX, QR & ORMP for the Boston dataset for varying subset sizes.

in Some Greedy Learning Algorithms for Sparse Regression and Classification with Mercer Kernels
by Prasanth B. Nair, Arindam Choudhury, Andy J. Keane, Carla Brodley, Andrea Danyluk 2002
"... In PAGE 15: ...14 Subset size (n) Normalized Residual Error ORMP MRX QR Figure 5: Convergence of the averaged residual error using greedy algorithms on the Boston dataset. Figure 5 and Table2 show the convergence trends of the residual error and the mean square testing error, respectively, as the subset size is increased, for the Boston dataset. The trends shown are averaged over 100 random splits of the mother data into a training set of 481 and a testing set... ..."
Cited by 15
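The snippet above compares greedy subset-selection algorithms (MRX, QR, ORMP) by how the residual error falls as the subset grows. The sketch below is a generic greedy forward-selection loop in that spirit, not the authors' MRX/QR/ORMP implementations: at each step it adds the column that most reduces the least-squares residual, on synthetic placeholder data.

    import numpy as np

    def greedy_forward_selection(A, y, max_size):
        """Greedily add the column that most reduces the least-squares residual."""
        selected, residual_norms = [], []
        for _ in range(max_size):
            best_j, best_res = None, np.inf
            for j in range(A.shape[1]):
                if j in selected:
                    continue
                cols = selected + [j]
                coef, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
                res = np.linalg.norm(y - A[:, cols] @ coef)
                if res < best_res:
                    best_j, best_res = j, res
            selected.append(best_j)
            residual_norms.append(best_res)
        return selected, residual_norms

    rng = np.random.default_rng(1)
    A = rng.normal(size=(100, 20))     # placeholder design matrix
    y = A[:, [2, 7, 11]] @ np.array([1.5, -2.0, 0.7]) + 0.1 * rng.normal(size=100)
    cols, res = greedy_forward_selection(A, y, max_size=5)
    print("selected columns:", cols)
    print("residual norms  :", [f"{r:.3f}" for r in res])

The residual norms printed at the end decrease monotonically as the subset size grows, which is the convergence trend the table and figure in the paper report.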

Table 2. Classification accuracy for feature subset selection (columns: Feature Subset Size, Classifier Accuracy)

in A HYBRID FEATURE SELECTION STRATEGY FOR IMAGE DEFINING FEATURES: TOWARDS INTERPRETATION OF OPTIC NERVE IMAGES
by Jin Yu, Syed Sibte Raza Abidi, Paul Habib Artes
"... In PAGE 5: ... Furthermore, we presented a novel feature subset selection strategy that minimized the feature space without the loss of information. Table2 indicates that through the first pass of our feature subset selection strategy we managed to reduce the feature set from 254 moments to a much smaller feature subset comprising just 47 salient moments, whilst achieving a slight increase in the classification accuracy. The second pass of our feature subset selection strategy, involves the use of a Markov Blanket as a filter model to the 47 features.... ..."