
CiteSeerX
Results 1 - 10 of 49,265

Table 2.9: Wrapper induction algorithm

in unknown title
by unknown authors

Table 4: Algorithm for induction

in Distributed Interactive Learning in Multi-Agent Systems
by unknown authors
"... In PAGE 4: ... Then it invokes the induction process and either returns back with the induced definition or fails if the induction fails. Table 4 outlines the algorithm for induction. During induction, an agent first induces any undefined predicates before using them as background knowledge. ... ..."

Table 1. The top-down induction algorithm for PCTs.

in Ensembles of Multi-Objective Decision Trees
by Dragi Kocev, Celine Vens, Jan Struyf
"... In PAGE 3: ... PCTs can be constructed with a standard top-down induction of decision trees (TDIDT) algorithm [19]. The algorithm is shown in Table 1. The heuristic that is used for selecting the tests is the reduction in variance caused by partitioning the instances (see line 4 of BestTest). ... In PAGE 4: ...1 Ensembles for Multi-Objective Decision Trees In order to apply bagging to MODTs, the procedure PCT(Ei) (Table 1) is used as a base classifier. For applying random forests, the same approach is followed, changing the procedure BestTest (Table 1, right) to take a random subset of size f(x) of all possible attributes. In order to combine the predictions output by the base classifiers, we take the average for regression, and apply a probability distribution vote instead of a simple majority vote for classification, as suggested by Bauer and Kohavi [23]. ... ..."
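The snippet above describes the generic TDIDT scheme: recursively pick the test that most reduces target variance, partition the instances, and recurse. As a rough illustration only (a hypothetical sketch, not the authors' PCT implementation; all names here are invented), a variance-reduction induction loop might look like:

```python
# Hypothetical sketch of top-down induction of a regression tree, using
# reduction in variance as the split heuristic -- in the spirit of the
# TDIDT procedure described in the snippet, not the paper's actual code.

def variance(ys):
    if not ys:
        return 0.0
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys) / len(ys)

def best_test(rows, ys):
    """Pick the (attribute, threshold) split that most reduces variance."""
    base = variance(ys)
    best = None  # (gain, attribute index, threshold)
    for a in range(len(rows[0])):
        for t in sorted(set(r[a] for r in rows)):
            left = [y for r, y in zip(rows, ys) if r[a] <= t]
            right = [y for r, y in zip(rows, ys) if r[a] > t]
            if not left or not right:
                continue  # degenerate split
            gain = base - (len(left) * variance(left)
                           + len(right) * variance(right)) / len(ys)
            if best is None or gain > best[0]:
                best = (gain, a, t)
    return best

def induce(rows, ys, min_gain=1e-9):
    """Recursively grow the tree top-down; leaves predict the mean target."""
    test = best_test(rows, ys)
    if test is None or test[0] <= min_gain:
        return sum(ys) / len(ys)  # leaf: mean of the targets
    _, a, t = test
    li = [i for i, r in enumerate(rows) if r[a] <= t]
    ri = [i for i, r in enumerate(rows) if r[a] > t]
    return (a, t,
            induce([rows[i] for i in li], [ys[i] for i in li]),
            induce([rows[i] for i in ri], [ys[i] for i in ri]))

def predict(tree, row):
    while isinstance(tree, tuple):
        a, t, left, right = tree
        tree = left if row[a] <= t else right
    return tree
```

The random-forest variant mentioned in the snippet would restrict the attribute loop in `best_test` to a random subset of the attributes at each node.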

Table 7 DL-EM-X decision list induction algorithm

in Understanding the Yarowsky Algorithm
by Steven Abney 2004
Cited by 13

Table 8 The decision list induction algorithm DL-1-R

in Understanding the Yarowsky Algorithm
by Steven Abney 2004
Cited by 13

Table 9 The decision list induction algorithm DL-1-VS

in Understanding the Yarowsky Algorithm
by Steven Abney 2004
Cited by 13

Table 2. Percentage accuracies for three induction algorithms on four classification domains (Shavlik et al., 1991).

in The Experimental Study of Machine Learning
by Patrick W. Langley, Dennis Kibler
"... In PAGE 12: ... to Shavlik et al.'s study, which we discussed in Section 4.1, and consider it in more depth. Table 2 presents additional results for their three algorithms on four separate classification tasks. These include the soybean domain described earlier, a task that involves predicting the winner of chess end games based on 36 high-level features ... ..."

Table 1. Order 3 context for the example in Figure 1, constructed by means of the inductive algorithm.

in Object Oriented Design Pattern Inference
by Paolo Tonella, Giulio Antoniol 1999
"... In PAGE 5: ... Let us consider the example in Figure 1. If classes B, C, Y and Z own method f, the associated context has to be augmented with the three attributes o⟨f⟩(1), o⟨f⟩(2), o⟨f⟩(3), since the classes owning f may occupy all the three available sequence positions in the context given in Table 1. By applying concept analysis to the resulting context, three design patterns are obtained after simplification. ... ..."
Cited by 20

Table 1. Performance of verb argument structure induction algorithm (columns: Verbs, Sentences, Structures, P & R (%))

in Unsupervised Learning of Verb Argument Structures
by Thiago Alexandre Salgueiro Pardo, Daniel Marcu, Maria das Graças Volpe Nunes
"... In PAGE 7: ... that it includes low (i.e., rare), medium and high-frequency (i.e., common) verbs in our corpus. The first row in Table 1 shows the selected verbs: hook, spin and yell are examples of low-frequency verbs; raise and spend are examples of medium-frequency verbs; buy, die and help are examples of high-frequency verbs. We compare our results to the results obtained with a baseline algorithm. ... In PAGE 8: ... The 2nd and 3rd columns in Table 1 show the number of sentences used for training our model for each verb and the number of argument structures considered in the experiments. The 4th column in Table 1 shows Precision (the average for the three judges) and Recall for each verb. In relation to precision, the annotation agreement between judges was high: the kappa statistic (Carletta, 1996) was 0.... ..."
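The snippet reports inter-judge agreement via the kappa statistic (Carletta, 1996), which corrects observed agreement for the agreement expected by chance: kappa = (P_o − P_e) / (1 − P_e). A minimal sketch of that computation (hypothetical illustration, not the authors' evaluation code):

```python
# Hypothetical sketch of Cohen's kappa for agreement between two judges,
# as referenced in the snippet; not the paper's own evaluation code.
from collections import Counter

def cohens_kappa(judge_a, judge_b):
    """kappa = (P_o - P_e) / (1 - P_e): observed agreement corrected for chance."""
    assert len(judge_a) == len(judge_b)
    n = len(judge_a)
    # P_o: fraction of items on which the two judges agree
    p_o = sum(a == b for a, b in zip(judge_a, judge_b)) / n
    # P_e: agreement expected by chance from each judge's label distribution
    ca, cb = Counter(judge_a), Counter(judge_b)
    p_e = sum((ca[label] / n) * (cb[label] / n) for label in set(ca) | set(cb))
    return (p_o - p_e) / (1 - p_e)
```

Identical label sequences give kappa = 1.0; agreement no better than chance gives kappa near 0.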

Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University