### Table 7.4: Comparison of LNPBoost with 1-norm MI-SVM with feature selection. 10-fold cross-validation accuracies and p-values, using linear and RBF features. Bold values indicate which of the two accuracies is greater. Each p-value gives the probability of error that must be accepted in order for the difference to be deemed statistically significant. Thus, there is: (1) essentially zero probability of error in accepting the hypothesis that LNPBoost performs significantly better than 1-norm MI-SVM with linear features on TREC9 test set #9; (2) essentially 100% probability of error in accepting the hypothesis that LNPBoost performs significantly better than 1-norm MI-SVM with RBF features on the Fox data set; and (3) a 24% probability of error in accepting the hypothesis that LNPBoost performs significantly better than 1-norm MI-SVM with RBF features on the MUSK2 data set. The following observation provides a (rather weak) argument in favor of LNPBoost: if one is willing to accept a 72% chance of error, then nine of the 17 comparisons are statistically significant and favor LNPBoost, while none of the significant differences favor the 1-norm MI-SVM algorithm.
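Per-dataset p-values of this kind typically come from a paired significance test over the per-fold accuracies of the two classifiers. As a minimal sketch (the fold accuracies below are made up for illustration, not the paper's numbers), the paired t-statistic for 10-fold cross-validation can be computed as:

```python
import math

def paired_t_statistic(acc_a, acc_b):
    """Paired t-statistic over per-fold accuracy differences."""
    d = [a - b for a, b in zip(acc_a, acc_b)]
    n = len(d)
    mean = sum(d) / n
    # Unbiased sample variance of the differences.
    var = sum((x - mean) ** 2 for x in d) / (n - 1)
    return mean / math.sqrt(var / n)

# Hypothetical per-fold accuracies for two classifiers (10-fold CV).
lnp = [0.91, 0.88, 0.93, 0.90, 0.89, 0.92, 0.90, 0.91, 0.87, 0.92]
svm = [0.88, 0.86, 0.90, 0.89, 0.87, 0.90, 0.88, 0.90, 0.86, 0.89]

t = paired_t_statistic(lnp, svm)
# Two-sided critical value for df = 9 at alpha = 0.05 is about 2.262;
# |t| above it means the difference is significant at the 5% level.
print(t > 2.262)
```

Comparing |t| against the Student-t distribution with n-1 degrees of freedom yields the quoted p-values; accepting a larger "chance of error" corresponds to a larger significance threshold.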

### Table 2: Performance of the SDP/SVM method using various combinations of kernels. Each row in the table corresponds to one experiment, classifying the 497 known yeast membrane proteins versus the 1876 known non-membrane proteins in yeast. The data is split into train and test sets in a ratio of 80/20, and the classifier is a 1-norm soft margin SVM with C=1. The first seven columns indicate the average weight assigned via SDP to each of the seven kernel matrices. A hyphen indicates that the corresponding kernel is not considered in the combination. The rightmost columns list two performance metrics, test set accuracy (TSA) and ROC score, along with standard deviations computed across 30 randomly generated 80/20 splits.

2003

"... In PAGE 11: ... This approach can deliver superior predictive performance (18), and would seem particularly appropriate in gene or protein classification problems, where the entities to be classified are often known a priori. A Overview of all results Table 2 lists results for various combinations of kernel matrices. This table includes the results depicted in Figure 1.... ..."

Cited by 2
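The SDP step learns the kernel weights; as an illustrative stand-in (fixed weights `mu` and an assumed RBF width, not the learned values from the paper), a convex combination of Gram matrices on the same examples remains positive semidefinite, so it can be handed to any kernel SVM solver:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 5))  # 20 toy examples, 5 features

# Two base Gram matrices on the same points: linear and RBF (gamma = 0.5 assumed).
K_lin = X @ X.T
sq = np.sum(X**2, axis=1)
D2 = sq[:, None] + sq[None, :] - 2 * K_lin  # squared pairwise distances
K_rbf = np.exp(-0.5 * D2)

# Fixed convex weights stand in for the SDP-learned kernel weights.
mu = [0.7, 0.3]
K = mu[0] * K_lin + mu[1] * K_rbf

# A nonnegative combination of PSD Gram matrices is itself PSD,
# i.e. still a valid kernel matrix for the 1-norm soft-margin SVM.
print(bool(np.all(np.linalg.eigvalsh(K) > -1e-8)))
```

The SDP of the paper optimizes `mu` jointly with the SVM objective; the sketch only shows why any weighting it returns is admissible as a kernel.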

### Table 1: Comparison of the L1 norm of the error

"... In PAGE 4: ...Table 1: Comparison of the L1 norm of the error (a) Low error (b) High error Figure 3: Example tetrahedra relative to the u = 0.5 isosurface Table 1 compares the L1 norm of the error for the model hyperbolic problem for an isotropic h-refinement scheme and for an hr-refinement scheme using the node movement algorithm (4) prior to each h-refinement step. In this case two Jacobi sweeps were performed on the mesh with a fixed solution, then the solution was recomputed.... In PAGE 4: ... The results indicate that while the node movement scheme can help to reduce the error in the problem, the improvement is not great in this case, although a lower error was obtained on each grid at the end of the r-refinement stage. Table 1(b) shows that the results are improved by choosing a more aligned initial mesh (in the sense of Iliescu [8]), as expected. There are various parameters in the adaptivity algorithm that can be tuned, such as the number of sweeps and the number of times the solution is recomputed during r-refinement, and improved results can be obtained by repeated experimentation.... In PAGE 7: ... The scheme leads to low solution errors quite rapidly, but the convergence stalls after the final entry in the table. However, the errors are much lower than those produced by the interpolation error scheme in Table 1 for the same resolution, and are also superior to h-refinement based on the functional value. Figure 6 shows two tetrahedra with low functional value near the layer on the first grid after optimisation.... ..."

Cited by 1
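The effect behind the hr-refinement comparison (moving nodes toward a layer lowers the L1 error at fixed resolution) can be sketched in one dimension; the layer location, width, and grading exponent below are assumptions for illustration, not values from the paper:

```python
import numpy as np

def l1_interp_error(nodes, f, n_quad=20001):
    """Trapezoidal estimate of the L1 norm of the piecewise-linear interpolation error."""
    x = np.linspace(nodes[0], nodes[-1], n_quad)
    err = np.abs(f(x) - np.interp(x, nodes, f(nodes)))
    return float(np.sum(0.5 * (err[1:] + err[:-1]) * np.diff(x)))

# Model solution with a sharp interior layer at x = 0.5 (width assumed).
f = lambda x: np.tanh(40.0 * (x - 0.5))

t = np.linspace(0.0, 1.0, 21)
uniform = t
# Crude r-refinement: grade the same 21 nodes toward the layer at x = 0.5.
graded = 0.5 + 0.5 * np.sign(t - 0.5) * np.abs(2.0 * (t - 0.5)) ** 2

# Node movement lowers the L1 error at identical mesh resolution.
print(l1_interp_error(graded, f) < l1_interp_error(uniform, f))
```

The same node count resolves the layer far better once nodes are clustered where the solution varies rapidly, which is the intuition the table in the paper quantifies for tetrahedral meshes.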

### Table 3: Performance of the SVM

2005

### Table 1. Results: percentage of correct classifications.

2006

"... In PAGE 7: ... Otherwise, it would have performed worse on data sets with skewed class distributions. Table 1 gives the predictive accuracy of the induced rule sets as estimated by tenfold cross-validation for the proposed system in comparison to PART (Frank & Witten, 1998) and SLIPPER (Cohen & Singer, 1999). The table indicates that MMV performs better than the 1-norm SVM in 14 of the 18 cases and better than empirical margin optimization in 15 cases.... ..."

Cited by 5

### Table 1: Simulation results of 1-norm and 2-norm SVM Test Error (SE)

2003

"... In PAGE 6: ... We generate 1000 test data to compare the 1-norm SVM and the standard 2-norm SVM. The average test errors over 50 simulations, with different numbers of noise inputs, are shown in Table 1. For both the 1-norm SVM and the 2-norm SVM, we choose the tuning parameters to minimize the test error, to be as fair as possible to each method.... In PAGE 6: ... For comparison, we also include the results for the non-penalized SVM. From Table 1 we can see that the non-penalized SVM performs significantly worse than the penalized ones; the 1-norm SVM and the 2-norm SVM perform similarly when there is no noise input (line 1), but the 2-norm SVM is adversely affected by noise inputs (line 2 - line 5). Since the 1-norm SVM has the ability to select relevant features and ignore redundant features, it does not suffer from the noise inputs as much as the 2-norm SVM.... In PAGE 6: ... Since the 1-norm SVM has the ability to select relevant features and ignore redundant features, it does not suffer from the noise inputs as much as the 2-norm SVM. Table 1 also shows the number of basis functions q and the number of joints on the piece-wise linear solution path. Notice that q < n and there is a striking linear relationship between ... and... ..."

Cited by 28
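The feature-selection effect of the 1-norm penalty can be reproduced in a few lines. The sketch below swaps the hinge loss for squared error (a coordinate-descent lasso rather than the paper's 1-norm SVM) purely to keep the solver short, and the data dimensions and penalty level are made up; the soft-thresholding step is what drives the noise-input coefficients exactly to zero:

```python
import numpy as np

def lasso_cd(X, y, lam, iters=200):
    """Coordinate descent for min_b 0.5*||y - Xb||^2 + lam*||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = np.sum(X**2, axis=0)
    for _ in range(iters):
        for j in range(p):
            # Residual with coordinate j removed.
            r = y - X @ b + X[:, j] * b[j]
            rho = X[:, j] @ r
            # Soft-thresholding: small correlations are zeroed out.
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return b

rng = np.random.default_rng(1)
n, relevant, noise = 100, 2, 8
X = rng.standard_normal((n, relevant + noise))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.standard_normal(n)

b = lasso_cd(X, y, lam=20.0)
# Count of noise inputs that survive the 1-norm penalty.
print(int(np.count_nonzero(np.abs(b[relevant:]) > 1e-6)))
```

A quadratic (2-norm) penalty in the same setup would merely shrink the noise coefficients without zeroing them, which is the mechanism behind the 2-norm SVM's sensitivity to noise inputs reported in the table.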
