### Table 1. Results on test data of numerical experiments on the Vapnik regression dataset. Sparseness is expressed as the rate at which a component is selected only when its input is relevant (100% means the original structure was perfectly recovered).

2005

Cited by 1

### Table 1: Performance of SVMs, LS-SVMs and Sparse LS-SVMs, expressed as Mean Squared Error (MSE) on a test set for regression, or Percentage Correctly Classified (PCC) for classification. Sparseness is expressed as the percentage of support vectors w.r.t. the number of training data.
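The three statistics in this caption (MSE, PCC, and support-vector sparseness) are simple to compute; a minimal sketch, with function names that are mine rather than the papers':

```python
def mse(y_true, y_pred):
    # Mean Squared Error over a regression test set
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def pcc(y_true, y_pred):
    # Percentage Correctly Classified over a classification test set
    return 100.0 * sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def sv_sparseness(n_sv, n_train):
    # Sparseness: percentage of support vectors w.r.t. number of training data
    return 100.0 * n_sv / n_train
```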

### Table 4 Comparison of sparse Bayesian kernel logistic regression, the support vector machine (SVM) and the relevance vector machine (RVM) over seven benchmark datasets, in terms of test set error and the number of representer vectors used. The results for the SVM and RVM are taken from Tipping [14].

"... In PAGE 19: ... It is possible that a greedy algorithm that selects representer vectors so as to maximise the evidence would result in a greater degree of sparsity, however this has not yet been investigated. [ Table4 about here.]... ..."

### Table 1: Evaluation of linear sparse multinomial logistic regression methods over a set of nine benchmark datasets. The best results for each statistic are shown in bold. The final column shows the logarithm of the ratio of the training times for the SMLR and SBMLR, such that a value of 2 would indicate that SBMLR is 100 times faster than SMLR for a given benchmark dataset.

"... In PAGE 5: ... 3 Results The proposed sparse multinomial logistic regression method incorporating Bayesian regularisation using a Laplace prior (SBMLR) was evaluated over a suite of well-known benchmark datasets, against sparse multinomial logistic regression with five-fold cross-validation based optimisation of the regularisation parameter using a simple line search (SMLR). Table1 shows the test error rate and cross-entropy statistics for SMLR and SBMLR methods over these datasets. Clearly, there is little reason to prefer either model over the other in terms of generalisation performance, as neither consistently dominates the other, either in terms of error rate or cross-entropy.... In PAGE 5: ... Clearly, there is little reason to prefer either model over the other in terms of generalisation performance, as neither consistently dominates the other, either in terms of error rate or cross-entropy. Table1 also shows that the Bayesian regularisation scheme results in models with a slightly higher degree of sparsity (i.... ..."

### Table 2. The selected 26 genes in the Colon cancer data. Index denotes the serial number of the selected gene in the original data. Hits is the value of the number-of-hits criterion used in our algorithm. Rank denotes the rank in the p-values of the Wilcoxon rank sum test. SLR (8) denotes the rank among the 8 genes selected by the sparse logistic regression algorithm of [Shevade and Keerthi, 2003]. RFE (7) denotes the rank among the 7 genes selected by recursive feature elimination using the support vector machines of [Guyon et al., 2002].

2005

"... In PAGE 4: ... A lower LOO error can be achieved using the gene rank of our algorithm, although this may involve using more genes than when using the p-value rankings on the colon and leukaemia data. The selected genes are listed in Table2 - 4 with more des- criptions. In Table 2, we found that all the 8 genes selected by Shevade and Keerthi [2003] and 6 of 7 genes selected by Guyon et al.... In PAGE 4: ... The selected genes are listed in Table 2 - 4 with more des- criptions. In Table2 , we found that all the 8 genes selected by Shevade and Keerthi [2003] and 6 of 7 genes selected by Guyon et al. [2002] are also in our list.... ..."

### Table 4: Averages of the MISE ratios for each regression method, broken out by dimension (indicated by p) and model sparseness level, for the case where the function is Linear. Averages were taken over the noise and sample size levels. Asterisks denote cases in which no data were available.

1997

Cited by 1

### Table 5: Averages of the MISE ratios for each regression method, broken out by dimension (indicated by p) and model sparseness level, for the case where the function is Gaussian. Averages were taken over the noise and sample size levels. Asterisks denote cases in which no data were available.

1997

Cited by 1

### Table 7: Averages of the MISE ratios for each regression method, broken out by dimension (indicated by p) and model sparseness level, for the case where the function is Mixture. Averages were taken over the noise and sample size levels. Asterisks denote cases in which no data were available.

1997

Cited by 1

### Table 8: Averages of the MISE ratios for each regression method, broken out by dimension (indicated by p) and model sparseness level, for the case where the function is Product. Averages were taken over the noise and sample size levels. Asterisks denote cases in which no data were available.

1997

Cited by 1
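The MISE-ratio tables above average over the noise and sample-size levels while skipping the missing cells marked with asterisks. A minimal sketch of that averaging convention (the function name and `None`-for-asterisk encoding are my assumptions):

```python
def avg_mise_ratio(cells):
    # Average the MISE ratios over the noise and sample-size levels,
    # skipping missing cells (None here, printed as asterisks in the tables).
    vals = [r for r in cells if r is not None]
    return sum(vals) / len(vals) if vals else None
```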