### Table 2. Results of the experiments with the new boosting algorithms. The recognition rate of the classifier trained on the whole training set is 66.23 %.

"... In PAGE 7: ... ga weighted voting (ga-v): Like weighted voting, but the optimal weights are calculated by a genetic algorithm based on the results of the classifiers on the training set. All methods were tested three times and the averaged results of the experiments are shown in Table 2... ..."
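The excerpt only names the idea of GA-optimized voting weights. As a rough illustration, a genetic algorithm can search for per-classifier weights that maximize weighted-vote accuracy on the training set. The sketch below is a minimal, hypothetical implementation (function names, population size, crossover, and the mutation scheme are my own assumptions, not the paper's; it assumes at least two classifiers):

```python
import random

def weighted_vote(weights, predictions):
    """predictions[c][i] is the label classifier c assigns to sample i."""
    n_samples = len(predictions[0])
    out = []
    for i in range(n_samples):
        scores = {}
        for w, preds in zip(weights, predictions):
            scores[preds[i]] = scores.get(preds[i], 0.0) + w
        out.append(max(scores, key=scores.get))
    return out

def fitness(weights, predictions, labels):
    voted = weighted_vote(weights, predictions)
    return sum(v == y for v, y in zip(voted, labels)) / len(labels)

def ga_optimize_weights(predictions, labels, pop_size=20, generations=50, seed=0):
    """Evolve one voting weight per classifier, scored by training accuracy."""
    rng = random.Random(seed)
    n_clf = len(predictions)
    pop = [[rng.random() for _ in range(n_clf)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda w: -fitness(w, predictions, labels))
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)       # one-point crossover
            cut = rng.randrange(1, n_clf)
            child = a[:cut] + b[cut:]
            j = rng.randrange(n_clf)              # gaussian mutation, clipped
            child[j] = min(1.0, max(0.0, child[j] + rng.gauss(0, 0.1)))
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda w: fitness(w, predictions, labels))
```

Because fitness is evaluated on the training set, as in the excerpt, the found weights can overfit; the paper averages over repeated runs for this reason.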

### Table 2: Comparing GentleBoost, AdaBoost.ME and AdaBoost.ML with AdaBoost.MH. Inside () are the standard errors of the test error rates.

"... In PAGE 23: ... (2001), we used 8-node regression trees in GentleBoost. Table 2 summarizes these data sets and the test error rates using a single decision tree. Table 2 shows the test error rates of the four boosting algorithms. Although the comparison results slightly favor our new boosting algorithms, overall the four algorithms have similar performances and are quite comparable.... ..."

### Table 1. The boosting-based algorithm for conditional density es- timation.

2002

"... In PAGE 4: ... The result is a new machine-learning algorithm for solving conditional density estimation problems, described in detail in the remainder of this section. Table 1 shows pseudo-code for the entire algorithm. Abstractly, we are given pairs (x_1, y_1), ..., (x_m, y_m) where each x_i belongs to a space X and each y_i is in R.... ..."

Cited by 24

### Table 1. Comparison of results between grids with and without diagonals. New results

1994

"... In PAGE 2: ... For two-dimensional n × n meshes without diagonals, 1-1 problems have been studied for more than twenty years. The so far fastest solutions for 1-1 problems and for h-h problems with small h ≤ 9 are summarized in Table 1. In that table we also present our new results on grids with diagonals and compare them with those for grids without diagonals.... ..."

Cited by 11

### Table 1 shows the naive boosting algorithm.

2000

"... In PAGE 6: ... Table 1. A naive algorithm for the boosting loss function.... ..."

Cited by 137

### Table 1: The VirtualBoost Algorithm

2007

"... In PAGE 4: ... When a boosted classifier is trained using this virtual evidence [Pearl, 1988], VirtualBoost allows the trainer to consider all possible labels in proportion to their weight. Table 1 specifies VirtualBoost. For simplicity we assume two classes, although the algorithm generalizes to multiple classes.... ..."

Cited by 2
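The virtual-evidence idea in the excerpt can be approximated by "virtually" expanding each example into one weighted copy per label, then running an ordinary two-class AdaBoost over the expansion. This is a sketch under my own assumptions (scalar features, decision-stump weak learners, the standard AdaBoost update), not the paper's exact VirtualBoost pseudo-code:

```python
import math

def stump_predict(t, pol, x):
    """Threshold stump on a scalar feature: +pol if x >= t, else -pol."""
    return pol if x >= t else -pol

def stump_train(xs, ys, ws):
    """Exhaustively pick the (threshold, polarity) with least weighted error."""
    best = None
    for t in sorted(set(xs)):
        for pol in (1, -1):
            err = sum(w for x, y, w in zip(xs, ys, ws)
                      if stump_predict(t, pol, x) != y)
            if best is None or err < best[0]:
                best = (err, t, pol)
    return best

def virtual_boost(xs, label_probs, rounds=5):
    """Each example x with P(y=+1)=p becomes (x,+1) weighted p and
    (x,-1) weighted 1-p; standard AdaBoost then runs on the expansion."""
    expanded = []
    for x, p in zip(xs, label_probs):
        if p > 0: expanded.append((x, 1, p))
        if p < 1: expanded.append((x, -1, 1 - p))
    vx, vy, vw = (list(c) for c in zip(*expanded))
    s = sum(vw); vw = [w / s for w in vw]
    ensemble = []
    for _ in range(rounds):
        err, t, pol = stump_train(vx, vy, vw)
        if err >= 0.5:
            break
        err = max(err, 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, t, pol))
        vw = [w * math.exp(-alpha * y * stump_predict(t, pol, x))
              for x, y, w in zip(vx, vy, vw)]
        s = sum(vw); vw = [w / s for w in vw]
    return ensemble

def predict(ensemble, x):
    score = sum(a * stump_predict(t, pol, x) for a, t, pol in ensemble)
    return 1 if score >= 0 else -1
```

A label probability of 0 or 1 recovers ordinary hard-label boosting, matching the excerpt's claim that the trainer weighs all possible labels in proportion to their weight.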

### Table 1. Boosting-LDA algorithm

"... In PAGE 3: ... 2.1 Boosting-LDA In this section, the AdaBoost algorithm is incorporated into the B-LDA scheme (Table 1), where the component classifier is the standard Fisherface method. A set of trained weak-LDA classifiers can be obtained via the B-LDA algorithm, and the majority voting method is used to combine these weak-LDA classifiers.... ..."
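As a hedged illustration of the scheme in the excerpt, the sketch below combines a two-class Fisher discriminant weak learner with AdaBoost-style reweighting (here via weighted resampling) and majority voting. The helper names and the resampling variant are my own assumptions; the paper's component classifier is the Fisherface method on face images, which is not reproduced here:

```python
import numpy as np

def fit_lda(X, y):
    """Two-class Fisher discriminant: w = Sw^-1 (mu1 - mu0), with the
    decision threshold at the midpoint of the projected class means."""
    X0, X1 = X[y == 0], X[y == 1]
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (X0 - mu0).T @ (X0 - mu0) + (X1 - mu1).T @ (X1 - mu1)
    Sw += 1e-6 * np.eye(X.shape[1])              # regularize for stability
    w = np.linalg.solve(Sw, mu1 - mu0)
    return w, w @ (mu0 + mu1) / 2

def lda_predict(model, X):
    w, t = model
    return (X @ w > t).astype(int)

def boost_lda(X, y, rounds=5, seed=0):
    """AdaBoost-style loop: train each weak LDA on a weighted resample,
    then downweight correctly classified examples."""
    rng = np.random.default_rng(seed)
    n = len(X)
    weights = np.full(n, 1.0 / n)
    models = []
    for _ in range(rounds):
        idx = rng.choice(n, size=n, p=weights)
        if len(np.unique(y[idx])) < 2:           # need both classes present
            continue
        model = fit_lda(X[idx], y[idx])
        pred = lda_predict(model, X)
        err = weights[pred != y].sum()
        if err >= 0.5:
            continue
        models.append(model)
        beta = max(err, 1e-10) / (1 - err)
        weights = np.where(pred == y, weights * beta, weights)
        weights /= weights.sum()
    return models

def majority_vote(models, X):
    votes = np.stack([lda_predict(m, X) for m in models])
    return (votes.mean(axis=0) >= 0.5).astype(int)
```

The unweighted majority vote at the end matches the excerpt; a confidence-weighted combination (weights log(1/beta), as in classic AdaBoost) would be the other natural choice.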

### Table 1. Pseudocode describing the FeatureBoost algorithm.

2000

"... In PAGE 3: ... Similar work with k nearest neighbors has appeared (Bay, 1998), though the goals and algorithms differ considerably. The goal of FeatureBoost (see Table 1) is to search for alternate hypotheses amongst the features. A distribution over features is kept and updated: at each iteration t, this distribution is altered and stored in D_t.... In PAGE 4: ... Pseudocode for the importance calculation is provided in Table 2. We have experimented with several approaches to the DEEMPHASIZE function for biasing LEARN by the distribution over the features D_t in Step 4 of Table 1. Options range from hard (e.... ..."

Cited by 14
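The excerpt describes FeatureBoost only at a high level: keep a distribution over features, bias the learner by it each iteration, and deemphasize used features so later rounds find alternate hypotheses. The following is a loose sketch of that loop under my own assumptions; `stump_error`, the feature-sampling step, and the multiplicative deemphasis are hypothetical stand-ins, not the paper's LEARN/DEEMPHASIZE definitions or its Table 2 importance calculation:

```python
import random

def stump_error(X, y, f):
    """Best threshold/polarity training error for a stump on feature f
    (hypothetical weak learner; labels are +1/-1)."""
    vals = sorted(set(row[f] for row in X))
    best = 1.0
    for t in vals:
        for pol in (1, -1):
            preds = [pol if row[f] >= t else -pol for row in X]
            err = sum(p != label for p, label in zip(preds, y)) / len(y)
            best = min(best, err)
    return best

def feature_boost(X, y, iterations=5, deemphasis=0.5, seed=0):
    """Sketch: keep a distribution D over features; each round, bias the
    learner by D (here: draw a candidate feature subset from D), then
    deemphasize the chosen feature so later rounds explore alternates."""
    rng = random.Random(seed)
    n_feat = len(X[0])
    D = [1.0 / n_feat] * n_feat
    chosen = []
    for _ in range(iterations):
        # bias LEARN by D: sample candidate features (with replacement)
        cand = set(rng.choices(range(n_feat), weights=D,
                               k=max(1, n_feat // 2)))
        f = min(cand, key=lambda j: stump_error(X, y, j))
        chosen.append(f)
        D[f] *= deemphasis                       # soft deemphasis of used feature
        s = sum(D)
        D = [d / s for d in D]
    return chosen
```

The multiplicative update here is a "soft" deemphasis; the excerpt notes the authors also tried hard variants (the sentence is truncated in the source).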