### Table 14: Backward elimination selection results

2007

"... In PAGE 10: ... Table 13: Bottom Quartile Ranking Participants Based on Pronunciation Score; Table 14: Backward Elimination Selection Results ... ..."

### Table 2: Investigation of the most relevant features using Backward Elimination (BE).

### Table 3: Bias and standard deviations of the ordinary predictor π̂_ORD and two predictors based on backward elimination variable selection, for some patients with specific values of the covariate vector x.

"... In PAGE 12: ... For some selected values of the covariate vector x the bias of π̂_BE,α(x) is shown in Table 3. We observe that for patients with a covariate vector x such that π₀(x) is larger than m₀, the predictor almost always underestimates the true conditional probability of an event. For example, the patient considered already above, with a true conditional probability of 0.864, has to expect a value of the prognostic index of 0.741 if backward elimination with a nominal level of 5% is used, and of 0.809 if the level is 15%. ... In PAGE 13: ... βj is related to the probability of selecting the jth covariate. This is illustrated by the contradiction in the lower part of Table 3. If β = (−2.25, 1.5, 1.2, 0.9, 0.6, 0.3, 0, 0, 0, 0, 0) and x = (1, 1, 0, 0, 0, 0, 0, 0, 0, 0), then in spite of π₀(x) = 0.611 > 0.5 = m₀, the predictions π̂_BE,α(x) overestimate π₀(x). ... ..."

### Table 4. Comparative analysis of overall classification accuracies for 3-class problems on an artificial test data set with 5 clusters (BE stands for Backward Elimination, LM for the Levenberg-Marquardt algorithm)

Cited by 3

### Table 4.5: Best suboptimal feature subset obtained with random subset sequential backwards elimination with linear discriminant analysis (LDA). The performance score of LDA with these 16 features is 77.7%.

2001

### Table 3.3: Backward elimination of features to remove statistically insignificant features. p-values are shown for the features in each iteration. A significance level of 0.02 is used.
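The caption above describes p-value-driven backward elimination: refit, drop the least significant feature, and stop once every remaining p-value falls below the significance level. A minimal sketch of that loop, assuming an ordinary-least-squares model and synthetic data (all variable names here are hypothetical, not taken from the cited thesis):

```python
import numpy as np
from scipy import stats

def ols_pvalues(X, y):
    """Two-sided t-test p-values for the coefficients of an OLS fit."""
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = n - p
    sigma2 = (resid @ resid) / dof            # residual variance estimate
    cov = sigma2 * np.linalg.inv(X.T @ X)     # covariance of the coefficients
    t_stats = beta / np.sqrt(np.diag(cov))
    return 2 * stats.t.sf(np.abs(t_stats), dof)

def backward_eliminate(X, y, names, alpha=0.02):
    """Drop the least significant feature until all p-values are <= alpha."""
    names = list(names)
    while names:
        pvals = ols_pvalues(X, y)
        worst = int(np.argmax(pvals))
        if pvals[worst] <= alpha:
            break                             # every survivor is significant
        X = np.delete(X, worst, axis=1)
        del names[worst]
    return names

# Hypothetical data: y depends only on x0 and x1; x2-x4 are pure noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.5, size=200)
kept = backward_eliminate(X, y, ["x0", "x1", "x2", "x3", "x4"])
```

With a strong signal, the truly relevant features survive every iteration; noise features are dropped unless one happens to clear the 0.02 threshold by chance.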

### Table 5: Minimum SSE across Hidden Nodes and Number of Variables

Overall results indicate that the backward elimination procedure identifies the same "best" models as the all-possible-combinations approach in each of the three validation samples. Neural networks are quite robust with respect to architecture and feature selection. Networks with 2 or 3 hidden nodes seem to be appropriate for this data set.

2003

"... In PAGE 15: ... These correspond to a total of 1,016 networks. Table 5 shows the minimum SSEs across all hidden nodes and sets of input variables for each validation sample. In sample 1, among the seven 1-variable networks, variable 6 (not shown) with 4 hidden nodes is tied with variable 6 with 3 hidden nodes, with SSE equal to 103. ... In PAGE 15: ... The seven input variables were trained in 8 network architectures, with hidden nodes from 0 to 7. With validation sample 1, Table 5 shows the network with 2 hidden nodes has the smallest SSE of 73.73 for 7 variables. ... In PAGE 15: ... So the recommended feature set, based on validation sample 1, is (2,3,4,5,6,7) and the network architecture is the one with 2 hidden nodes. This is the "best" selection indicated by the all-combinations experiment (Table 5). With validation sample 2, the backward elimination method ends with the same "best" selection. ... ..."
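The procedure in this snippet is greedy backward elimination scored by validation-sample SSE. It can be sketched as follows; for brevity a linear least-squares model stands in for the neural networks used in the paper, and the data and variable indices are hypothetical:

```python
import numpy as np

def validation_sse(X_tr, y_tr, X_va, y_va, cols):
    """Fit least squares on the chosen training columns, score SSE on validation."""
    beta, *_ = np.linalg.lstsq(X_tr[:, cols], y_tr, rcond=None)
    resid = y_va - X_va[:, cols] @ beta
    return float(resid @ resid)

def backward_by_sse(X_tr, y_tr, X_va, y_va):
    """Greedily drop the variable whose removal hurts validation SSE least."""
    cols = list(range(X_tr.shape[1]))
    best_sse = validation_sse(X_tr, y_tr, X_va, y_va, cols)
    best_cols = list(cols)
    while len(cols) > 1:
        # Try removing each remaining variable; keep the cheapest removal.
        sse, drop = min(
            (validation_sse(X_tr, y_tr, X_va, y_va,
                            [c for c in cols if c != d]), d)
            for d in cols
        )
        cols.remove(drop)
        if sse < best_sse:
            best_sse, best_cols = sse, list(cols)
    return best_cols, best_sse

# Hypothetical data: y depends only on variables 0 and 1.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 6))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.3, size=300)
X_tr, y_tr, X_va, y_va = X[:200], y[:200], X[200:], y[200:]
cols, sse = backward_by_sse(X_tr, y_tr, X_va, y_va)
```

Because each round refits only the candidate subsets along the elimination path, this checks far fewer models than the all-possible-combinations search the snippet compares against, which is the point the authors make when the two approaches agree.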