Results 21 - 30 of 39,706
Table 10. Generalization accuracies (in terms of percentages of correctly classified test instances) on the gs, pos, pp, and np tasks, by ib1-ig with k = 1, 2, 3, and 5.
1999
"... In PAGE 27: ... We performed experiments with ib1-ig on the four tasks with k = 2, k = 3, and k = 5, and mostly found a decrease in generalization accuracy. Table 10 lists the effects of the higher values of k. For all tasks except np, setting k > 1 leads to a harmful abstraction from the best-matching instance(s) to a more smoothed best... ..."
Cited by 99
Table 1: Generalization matrix associated with classical discounting in the case K = 3 (only nonzero terms are represented).
2006
Table 2: The computed clusters and the associated categories
2007
"... In PAGE 11: ... In general, the clustering performance presents interesting results: the computed recall and precision values are fairly accurate for most of the experiments. Table 2 shows the main categories elicited by this experimentation: according to the topics of the terms in the reference set, a pertinent category is associated. Furthermore, for each category, the most relevant sample of documents, associated with the first three page links of the returned list, is shown.... ..."
Table 3. General distribution of source-language terms across their number of possible translations: in the input aligned word pairs (left), and in the output aligned n-gram pairs (right).
Table 1 summarizes the status of the project. In general terms, most of the project goals regarding computer vision algorithms have been achieved, while the end-user part remains to be done in 2006. It is necessarily the last part of the project because it depends on the reliability of the tasks developed in steps 1.1, 1.2, and 1.3.
"... In PAGE 5: ... Table 1: Status of the project goals at August, 2005 2 Tasks and development In this section we detail the progress of the tasks related to each of the project goals, as well... ..."
Table 6.1: Subsampling speed and error. This is the time taken to perform colour detection on a particular 240 × 180 pixel image. The pixels-in-error figure is the number of pixels that differ from the colour-detected image created with no subsampling. In general terms the error is particularly small (331 errors is 0.8% error for a 240 × 180 image).
2001
Table 1. Leading terms in the computational and memory costs. These are associated with the two-term and general versions of the MFLAP and PFLAP algorithms for both computations (A) and (B). For (B) we only include the increment to go from (A) to (B). See text for more details.
"... In PAGE 6: ... Since we can overlap these two functions, only 2ℓ − 1 storage is required. Below in Table 1 we summarize these complexity and storage results. In Table 1 the "general" method assumes optimization appropriate to the while the "two-term" method assumes we save work and space as mentioned above.... ..."
TABLE 1. Cortical interaction functions for continuity terms of differential order p = 1 to 4 in the generalized elastic net in 1D
2004
Cited by 3