Results 1 - 10 of 50,241
Table 2 - Highlight: combination algorithms, Classifiers, Detection, False ...
2005
Cited by 5
Table II: Relative Ranks of HC, AHC and COMIT (Algorithm with Rank=1 highlighted)
1997
Cited by 7
Table 5.1. Naive Approach: Pseudo code highlighting the algorithm for each iteration.
2004
Table 7 Optimality of the constant threshold algorithm. The worst-case results are highlighted in red.
"... In PAGE 11: ...able 6 Global power states for the sensor node ........................................................ 58 Table7 Optimality of the constant threshold algorithm. The worst-case results are highlighted in red.... In PAGE 111: ... Table7 lists the performance of the constant threshold algorithm as compared to the optimal algorithm for Nodes 71, 55, 48, and 9. The results are listed for three different values of Teven.... In PAGE 116: ... As the wakeup overhead increases, it replaces Node 72 as the worst-case node. Let us compare Table 9 to the results of the constant threshold algorithm listed in Table7 .... ..."
TABLE V TESTING ERROR FOR DBN, BOOSTED DBN AND CRF. FOR EACH DATASET, THE ALGORITHM WITH THE BEST CLASSIFICATION ACCURACY IS HIGHLIGHTED.
Table 2: Performance of batch algorithms averaged over 5 runs. Best nMI for every dataset is highlighted.
"... In PAGE 5: ... Experiment 1 compares the perfor- mance of the three batch algorithms|LDA, EDCM, and vMF|on the 7 datasets. Table2 shows the nMI and run time results averaged across 5 runs, where vMF has the highest nMI accuracy and lowest run time for 10 20 30 40 50 60 70 80 90 100 0.1 0.... ..."
Table 4.2: The mean frontal matrix size (f_avg) for different variants of the MSRO algorithm. The smallest values are highlighted. A marker indicates f_avg not improved by reordering.
Table 1: Gini coefficients for different data sets and algorithms. For each data set, the best coefficient is highlighted in bold, the second best in italics.
2004
"... In PAGE 6: ... As this example demonstrates, using gradient boosting stage outputs as additional inputs to subsequent boosting stages can produce a considerable increase in the rate of convergence. 4 Experimental evaluation Table1 shows evaluation results that were obtained on eight data sets used to compare the performance of the transform regression algorithm to the underlying LRT algorithm that it employs. Also shown are results for the first gradient boosting stage of transform regression, and for the stepwise linear regression algorithm that is used both in the leaves of linear regression trees and in the greedy one-pass additive modeling method.... ..."
Cited by 5
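The excerpt above describes the key idea of transform regression: the output of each gradient boosting stage is fed back as an additional input feature to subsequent stages. Below is a minimal sketch of that feedback idea, not the paper's algorithm; the stage learner (a depth-2 regression tree), the stage count, and the synthetic data are all assumptions.

```python
# Hedged sketch of the idea in the excerpt: a boosting loop in which each
# stage's output is appended as an extra input feature for the next stage.
# Learner choice, stage count, and data are illustrative assumptions.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.1, size=200)

features = X.copy()
pred = np.zeros_like(y)
for stage in range(5):
    residual = y - pred                      # fit each stage to the current residual
    model = DecisionTreeRegressor(max_depth=2).fit(features, residual)
    stage_out = model.predict(features)
    pred += stage_out
    # The step the excerpt highlights: feed the stage output back in as a
    # feature, so later stages can build on earlier stage outputs directly.
    features = np.hstack([features, stage_out[:, None]])
    print(f"stage {stage}: MSE = {np.mean((y - pred) ** 2):.4f}")
```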
TABLE I Performance: this table highlights the geometric complexity of different benchmarks we tested, as well as the performance of our algorithm.
2007
Cited by 1