Results 1 - 10 of 49,265
Table 4: Algorithm for induction
"... In PAGE 4: ... Then it invokes the induction process and either returns back with the induced definition or fails if the induction fails. Table 4 outlines the algorithm for induction. During induction, an agent first induces any undefined predicates before using them as background knowledge.... ..."
Table 1. The top-down induction algorithm for PCTs.
"... In PAGE 3: ... PCTs can be constructed with a standard top-down induction of decision trees (TDIDT) algorithm [19]. The algorithm is shown in Table 1. The heuristic that is used for selecting the tests is the reduction in variance caused by partitioning the instances (see line 4 of BestTest).... In PAGE 4: ...1 Ensembles for Multi-Objective Decision Trees In order to apply bagging to MODTs, the procedure PCT(Ei) (Table 1) is used as a base classifier. For applying random forests, the same approach is followed, changing the procedure BestTest (Table 1, right) to take a random subset of size f(x) of all possible attributes. In order to combine the predictions output by the base classifiers, we take the average for regression, and apply a probability distribution vote instead of a simple majority vote for classification, as suggested by Bauer and Kohavi [23].... ..."
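The variance-reduction heuristic that this snippet attributes to BestTest can be sketched as below. This is a minimal illustrative version, not the paper's actual PCT implementation: the function names, the `(features_dict, target)` instance layout, and the single-target simplification (the paper's trees are multi-objective) are all assumptions.

```python
def variance(values):
    """Population variance of a list of numeric target values."""
    if not values:
        return 0.0
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

def best_test(instances, attributes):
    """Select the (attribute, threshold) split maximizing the reduction
    in target variance, i.e. Var(all) minus the size-weighted variance
    of the two partitions.  Each instance is a (features_dict, target)
    pair.  A random-forest variant would draw a random subset of
    `attributes` here instead of scanning all of them."""
    targets = [t for _, t in instances]
    base = variance(targets)
    n = len(instances)
    best = (None, None, 0.0)  # (attribute, threshold, reduction)
    for attr in attributes:
        for thr in sorted({x[attr] for x, _ in instances}):
            left = [t for x, t in instances if x[attr] <= thr]
            right = [t for x, t in instances if x[attr] > thr]
            if not left or not right:
                continue  # degenerate split: no partitioning
            weighted = (len(left) / n) * variance(left) \
                     + (len(right) / n) * variance(right)
            reduction = base - weighted
            if reduction > best[2]:
                best = (attr, thr, reduction)
    return best
```

On a toy sample where targets cluster by attribute value, the selected threshold separates the clusters and the reduction equals the full base variance.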
Table 7 DL-EM-X decision list induction algorithm
2004
Cited by 13
Table 8 The decision list induction algorithm DL-1-R
2004
Cited by 13
Table 9 The decision list induction algorithm DL-1-VS
2004
Cited by 13
Table 2. Percentage accuracies for three induction algorithms on four classification domains (Shavlik et al., 1991).
"... In PAGE 12: ...o Shavlik et al.'s study, which we discussed in Section 4.1, and consider it in more depth. Table 2 presents additional results for their three algorithms on four separate classification tasks. These include the soybean domain described earlier, a task that involves predicting the winner of chess end games based on 36 high-level features... ..."
Table 1. Order 3 context for the example in Figure 1, constructed by means of the inductive algorithm.
1999
"... In PAGE 5: ... Let us consider the example in Figure 1. If classes B; C; Y and Z own method f, the associated context has to be augmented with the three attributes o⟨f⟩(1); o⟨f⟩(2); o⟨f⟩(3), since the classes owning f may occupy all the three available sequence positions in the context given in Table 1. By applying concept analysis to the resulting context, three design patterns are obtained after simplification.... ..."
Cited by 20
Table 1. Performance of verb argument structure induction algorithm: Verbs | Sentences | Structures | P & R (%)
"... In PAGE 7: ...ng that it includes low (i.e., rare), medium and high-frequency (i.e., common) verbs in our corpus. The first row in Table 1 shows the selected verbs: hook, spin and yell are examples of low-frequency verbs; raise and spend are examples of medium-frequency verbs; buy, die and help are examples of high-frequency verbs. We compare our results to the results obtained with a baseline algorithm.... In PAGE 8: ...gorithm. The 2nd and 3rd columns in Table 1 show the number of sentences used for training our model for each verb and the number of argument structures considered in the experiments. The 4th column in Table 1 shows Precision (the average for the three judges) and Recall for each verb.... In PAGE 8: ... The 2nd and 3rd columns in Table 1 show the number of sentences used for training our model for each verb and the number of argument structures considered in the experiments. The 4th column in Table 1 shows Precision (the average for the three judges) and Recall for each verb. In relation to precision, the annotation agreement between judges was high: the kappa statistic (Carletta, 1996) was 0.... ..."
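The Precision and Recall measures this snippet reports can be sketched generically as set overlap between induced and reference argument structures. Note this is an assumption about the evaluation setup: in the paper itself, precision was judged by three human annotators rather than computed against a gold set, and the function and argument names below are illustrative.

```python
def precision_recall(predicted, gold):
    """Generic precision/recall over two sets of induced structures.

    precision = |predicted ∩ gold| / |predicted|
    recall    = |predicted ∩ gold| / |gold|
    """
    tp = len(predicted & gold)  # true positives: structures found in both
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    return precision, recall
```

For example, inducing {SV, SVO} against a reference set {SVO, SVOO} gives one true positive out of two predictions and two references, so both measures come out at 50%.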