### Table 1 summarizes the results on the test set of different approaches before using AdaBoost. The Diabolo classifier (even without hand-selected sub-classes in the training set) performs quite well with respect to the multi-layer perceptrons. The experiments suggest that fully connected neural networks are not well suited for this task: small nets do poorly on both training and test sets, while large nets overfit.

1997

"... In PAGE 5: ...the classification is invariant), therefore incorporating prior knowledge on the task. Table 1. Error rates for different unboosted classifiers Diabolo classifier fully connected MLP no subclasses hand-selected 22-10-10 22-30-10 22-80-10 train: 2.... ..."
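The boosting step the first entry refers to can be sketched in a few lines. The following is a minimal AdaBoost loop over 1-D decision stumps, assuming a generic two-class (±1) dataset; all names are hypothetical and this is an illustrative sketch, not the paper's Diabolo/MLP setup.

```python
import math

def stump_predict(thr, pol, x):
    """A decision stump: output +pol above the threshold, -pol below."""
    return pol if x >= thr else -pol

def train_stump(xs, ys, w):
    """Pick the threshold/polarity pair with minimum weighted error."""
    best = None
    for thr in set(xs):
        for pol in (1, -1):
            err = sum(wi for xi, yi, wi in zip(xs, ys, w)
                      if stump_predict(thr, pol, xi) != yi)
            if best is None or err < best[0]:
                best = (err, thr, pol)
    return best

def adaboost(xs, ys, rounds=3):
    n = len(xs)
    w = [1.0 / n] * n  # start with uniform sample weights
    ensemble = []
    for _ in range(rounds):
        err, thr, pol = train_stump(xs, ys, w)
        err = max(err, 1e-10)  # avoid log(1/0) on a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, thr, pol))
        # Upweight misclassified samples, downweight correct ones, renormalize.
        w = [wi * math.exp(-alpha * yi * stump_predict(thr, pol, xi))
             for wi, xi, yi in zip(w, xs, ys)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    """Weighted vote of all stumps."""
    score = sum(a * stump_predict(thr, pol, x) for a, thr, pol in ensemble)
    return 1 if score >= 0 else -1
```

The entry's point is that boosting is applied on top of an already-trained base classifier family; here the base learner is a stump purely to keep the sketch self-contained.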

Cited by 4

### Table 7.7: A comparison of the recognition rates for testing, the number of weights and feed-forward CPU time for a single pattern presentation are given for a Single Layer Perceptron (SLP), a fully connected Multi-Layer Perceptron (MLP), a Radial Basis Function (RBF) network, a sparsely connected Multi-Layer Perceptron (sMLP) and a sparsely connected Higher Order Neural Network (HONN). The standard networks are evaluated as a replacement of the entire Hierarchical Neural Network (HNN) architecture. All networks received as input binary contour images of sizes between 32×32 and 128×128 and classified the ventricle patterns into 4 classes. The configurations for the fully connected MLP and RBF network were optimised. The recognition results are averaged over 6 networks, except for the RBF network which is based on a single experiment only. The networks were not regularised, thus, overtraining may be present. The CPU times are measured on a SUN-SPARC 10 workstation.

### Table 1: Prediction accuracies of the prediction techniques evaluated so far: Elman net, MLP, Bayesian network, State predictor, Markov predictor

2005

"... In PAGE 2: ... Moreover we evaluated the well-known Markov predictor. Table 1 compares the prediction accuracies of the neural networks (Elman net and multi-layer perceptron (MLP)), the Bayesian network, the State predictor, and the Markov predictor, showing in each case the best result obtained for each person. The configurations may vary for different persons.... ..."

Cited by 1

### Table 1: Weight discretization in multilayer neural networks: off-chip learning.

"... In PAGE 4: ... neural network paradigms. A compact overview of a large variety of results on the effects of limited precision in neural networks can be found in Tables 1 to 4. These tables list the number of bits that are required for satisfactory (learning) performance and briefly describe the core idea of the algorithms.... In PAGE 4: ... Only the forward propagation pass in the recall phase is performed on-chip, which makes these quantization effects amenable for mathematical analysis using a statistical model. Some of the results have been summarized in Table 1, which indicates that the accuracy needed in the on-chip forward pass is around 8 bits. In [Piché-95] a comparison between Heaviside and sigmoidal multilayer networks is given, showing that the weight precision required in a Heaviside network is much higher and even doubles when a layer is added to the network.... In PAGE 6: ...algorithms with the entropy (number of bits) upper bounds of the data set [Beiu-96.2]. Finally, we would like to point out that a comparative benchmarking study of quantization effects on different neural network models and the improvements that can be obtained by weight discretization algorithms has not yet been done. The accuracies listed in Tables 1 to 4 are therefore highly biased by... ..."
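The off-chip discretization scheme this survey entry describes — train in full precision, then round each weight to a limited number of representable levels before loading it onto the chip — can be illustrated with a uniform quantizer. This is an illustrative sketch only, not any of the specific algorithms listed in the survey's tables.

```python
def quantize(weights, bits):
    """Uniformly quantize weights to 2**bits levels over their range.

    Off-chip style: the network is trained in full precision elsewhere,
    and each weight is then rounded to the nearest representable level.
    """
    lo, hi = min(weights), max(weights)
    levels = 2 ** bits - 1          # number of steps between lo and hi
    step = (hi - lo) / levels
    if step == 0.0:                 # all weights identical: nothing to do
        return list(weights)
    return [lo + round((w - lo) / step) * step for w in weights]
```

With `bits=8` (the forward-pass accuracy the snippet cites as typically sufficient) the rounding error per weight is at most half a step of the weight range.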

### Table 1: Genetic-algorithm encoding of a multilayer neural network spatial interaction model (columns: Bits, Meaning)

1998

"... In PAGE 16: ... On the other hand, using a larger quantity of data to evaluate the CNN may imply that bigger networks could be trained more precisely than smaller ones and, thus, the implicit pruning process would be reluctant to remove links. Encoding Scheme: Table 1 illustrates how a string is built. The string representation has several desirable properties.... ..."
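The "implicit pruning" the snippet mentions — the genetic algorithm removing links simply by encoding them out of the string — can be sketched as a connection mask over the weight vector. This scheme is illustrative only; the paper's actual bit layout is what its Table 1 specifies.

```python
def apply_mask(bitstring, weights):
    """Decode a GA chromosome as a connection mask: bit i gates weight i.

    Hypothetical scheme: a '0' bit prunes the corresponding network link
    (weight forced to zero), a '1' bit keeps it.
    """
    assert len(bitstring) == len(weights), "one bit per link"
    return [w if b == "1" else 0.0 for b, w in zip(bitstring, weights)]
```

Fitness evaluation would then train or evaluate the masked network, so a link whose bit mutates to 0 disappears from the model without any separate pruning pass.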

### Table 2 Summary neural net prediction of vfadig

"... In PAGE 8: ...accuracy for the combined targets was 98.0% and 98.2% respectively. Table 2 and Table 3 report summary data from applying the models to the data values in TESTSET, for the prediction of vfadig and coddig with the filtered experimental values. TESTSET values have not been used for training the neural net.... In PAGE 8: ... 4.4 Estimates of vfadig and coddig From Table 2 it can be seen that the predicted value of vfadig is very well correlated to the experimental values since R2 = 0.... ..."

### TABLE 7 CONVENTIONAL ALGORITHMS ON THE NEURAL NET

1994

Cited by 42

### TABLE 8 SIMULATED ANNEALING ON THE NEURAL NET

1994

Cited by 42