### Table 2 Comparison of Results With Ordinary Least Squares (OLS) and Tobit Regression Models

"... In PAGE 9: ...LP producing an R2 of .22 with training data and .04 with test data (see Table 1). OLS and Tobit Regression Models OLS and Tobit regression models had lower levels of predictive accuracy than did the best performing neural networks. As indicated in Table 2, R2 for OLS regression was .... In PAGE 10: ...training data and an R2 of .059 on test data (see Table 2). Although not statistically significant (χ² = 11.24, p > .05), the Tobit regression model generated greater predictive accuracy than did OLS regression but lagged the best performing ANN by a considerable margin (R2 of .... ..."
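The excerpt above compares OLS against a Tobit (censored-outcome) regression. As a rough sketch of what that comparison involves, the snippet below fits both models to synthetic left-censored data; the data-generating process, sample size, and all variable names are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical data: latent outcome y* is left-censored at zero (Tobit setting).
rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([0.5, 1.0])
y_star = X @ beta_true + rng.normal(size=n)
y = np.maximum(y_star, 0.0)          # observe only max(y*, 0)

# OLS: ordinary least squares on the censored outcome (biased toward 0).
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Tobit: maximum likelihood that models the censoring explicitly.
def neg_loglik(params):
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)
    xb = X @ beta
    censored = y <= 0
    ll = np.where(
        censored,
        norm.logcdf(-xb / sigma),                   # P(y* <= 0 | x)
        norm.logpdf((y - xb) / sigma) - log_sigma,  # density of observed y
    )
    return -ll.sum()

res = minimize(neg_loglik, x0=np.r_[beta_ols, 0.0], method="Nelder-Mead")
beta_tobit = res.x[:-1]
```

On data like this, the OLS slope is attenuated by the censoring while the Tobit MLE recovers the latent-model coefficient, which is the usual motivation for the comparison the excerpt reports.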

### Table 3 Proposed BN Perceptron

"... In PAGE 24: ...istribution between -0.5 and 0.5. The same termination criteria as before were used. Generalization ability results are given in Table 3 as average percentages of correctly classified bits in the test sets of the resulting 100 training sessions per benchmark and algorithm. In four of the benchmarks our method exhibits better generalization ability than the other methods, while in the remaining six its generalization ability is second best.... In PAGE 29: ... Table 2: Average classification performance (percentage of correctly classified bits) achieved by the proposed algorithm, the BN method, the perceptron rule and the CG algorithm for various classification problems. Table 3: Average generalization ability (percentage of correctly classified bits in the test set) achieved by the proposed algorithm, the BN method and the perceptron rule.... ..."
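The excerpt mentions the classic perceptron rule as one baseline, with weights initialized uniformly in [-0.5, 0.5] and accuracy reported as the percentage of correctly classified test bits. A minimal sketch of that baseline (the data and parameter choices here are illustrative, not the paper's benchmarks):

```python
import numpy as np

def perceptron_train(X, y, epochs=500, lr=0.1, seed=0):
    """Classic perceptron rule; labels y in {-1, +1}, bias folded into w."""
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append bias input
    # Initialize weights uniformly in [-0.5, 0.5], as in the excerpt.
    w = rng.uniform(-0.5, 0.5, Xb.shape[1])
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(Xb, y):
            if yi * (xi @ w) <= 0:              # misclassified -> update
                w += lr * yi * xi
                errors += 1
        if errors == 0:                          # converged on training set
            break
    return w

def accuracy(w, X, y):
    """Percentage of correctly classified examples, as reported in Table 3."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return float(np.mean(np.sign(Xb @ w) == y)) * 100.0
```

Training accuracy on a separable set reaches 100% once an epoch passes with no updates; generalization is then measured by calling `accuracy` on a held-out test set.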

### Table 2. MLP Architecture and Training Parameters

"... In PAGE 8: ....3.2. Multi-Layer Perceptron (MLP) Layer The MLP layer realizes the automatic classification using features obtained from the discrete wavelet layer. The training parameters and the structure of the MLP used in this study are shown in Table 2. These were selected for best performance after several experiments varying the number of hidden layers, the size of the hidden layers, the momentum constant and learning rate, and the type of activation function.... ..."
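The excerpt lists the tuned quantities (hidden-layer count and size, momentum constant, learning rate, activation function) without giving the values, which live in the paper's Table 2. A minimal one-hidden-layer MLP exposing exactly those knobs might look like the following; every hyperparameter value below is a placeholder, not the paper's:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_mlp(X, y, hidden=8, lr=0.3, momentum=0.9, epochs=2000, seed=0):
    """One-hidden-layer MLP, sigmoid activations, full-batch gradient
    descent with momentum. hidden / lr / momentum / epochs stand in for
    the values the paper reports in its Table 2."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=(hidden, 1));          b2 = np.zeros(1)
    vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)
    vW2 = np.zeros_like(W2); vb2 = np.zeros_like(b2)
    Y = y.reshape(-1, 1)
    n = len(X)
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)            # hidden-layer activations
        O = sigmoid(H @ W2 + b2)            # network output
        dO = (O - Y) * O * (1 - O)          # squared-error delta at output
        dH = (dO @ W2.T) * H * (1 - H)      # backpropagated hidden delta
        # Momentum update: v <- momentum * v - lr * gradient
        vW2 = momentum * vW2 - lr * (H.T @ dO) / n
        vb2 = momentum * vb2 - lr * dO.mean(axis=0)
        vW1 = momentum * vW1 - lr * (X.T @ dH) / n
        vb1 = momentum * vb1 - lr * dH.mean(axis=0)
        W2 += vW2; b2 += vb2; W1 += vW1; b1 += vb1
    return lambda Xn: sigmoid(sigmoid(Xn @ W1 + b1) @ W2 + b2)
```

Tuning then amounts to grid- or hand-searching over `hidden`, `lr`, `momentum`, and the activation, which is the experiment loop the excerpt describes.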

### Table 9: Single Layer Training | Thermal perceptron algorithm

1997

Cited by 42
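The entry above refers to the thermal perceptron, which damps the standard perceptron correction by a factor that shrinks with the size of the net input and anneals a temperature toward zero, so grossly misclassified points stop dominating late training. A rough sketch assuming that standard damped-update form, with illustrative data and schedule:

```python
import numpy as np

def thermal_perceptron(X, y, epochs=200, T0=2.0, seed=0):
    """Thermal-perceptron sketch: the perceptron update is scaled by
    exp(-|net| / T), and T is annealed linearly from T0 toward 0, so
    updates for deeply wrong examples fade out as training proceeds."""
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((len(X), 1))])   # bias folded into the weights
    w = rng.uniform(-0.5, 0.5, Xb.shape[1])
    for epoch in range(epochs):
        T = T0 * (1 - epoch / epochs)           # linear annealing schedule
        for xi, yi in zip(Xb, y):
            net = xi @ w
            if yi * net <= 0:                    # misclassified
                w += np.exp(-abs(net) / max(T, 1e-9)) * yi * xi
    return w
```

Unlike the plain perceptron rule, this variant degrades gracefully on non-separable data, which is the usual reason for the comparison in single-layer training tables like the one cited.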

### Table 2: Training errors (%) of several perceptron learning algorithms initialized with FLD

2005

Cited by 6
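The table title above describes perceptron learning warm-started from the Fisher Linear Discriminant (FLD). A minimal sketch of that initialization, with illustrative function names and parameters (the cited paper's exact algorithms and thresholds are not shown in the snippet):

```python
import numpy as np

def fld_direction(X, y):
    """Fisher linear discriminant direction: w = Sw^{-1} (mu_+ - mu_-),
    where Sw is the pooled within-class covariance."""
    Xp, Xn = X[y == 1], X[y == -1]
    Sw = np.cov(Xp, rowvar=False) + np.cov(Xn, rowvar=False)
    return np.linalg.solve(Sw, Xp.mean(axis=0) - Xn.mean(axis=0))

def perceptron_from_fld(X, y, epochs=50, lr=0.1):
    """Perceptron rule initialized with the FLD solution instead of
    random weights, as in the excerpt's comparison."""
    w = fld_direction(X, y)
    b = -w @ X.mean(axis=0)        # place the threshold at the mean projection
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:   # misclassified -> perceptron update
                w += lr * yi * xi
                b += lr * yi
    return w, b
```

Starting from the FLD solution gives the perceptron a near-optimal linear boundary before any error-driven updates, which is why training errors are tabulated relative to that common initialization.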
