### Table 3. Simulation Results of Integrating Two Simple Heuristics

in Simple and Integrated Heuristic Algorithms for Scheduling Tasks with Time and Resource Constraints

1987

Cited by 21

### Table 1: Different network structures. The type of arrow drawn indicates the sort of function of the weighted sum minus threshold that can alternatively be used while yielding the same result. A simple arrow denotes the sign function; one with a tail means that the argument is twice a sign, so instead of taking the sign we can just divide by two. The double arrow means that the sign function is absolutely redundant.

"... In PAGE 13: ... Finally, the composition of the rst two steps of c gives a three-layer network (N; N; N) called c0. By way of summarizing and completing this picture, all the quantities occurring are listed in Table1 and Table 2. 3 Accessibilities.... ..."

### Table 3: The Choice of Weights

2002

"... In PAGE 29: ... 9. The Choice of Weights Table3 shows the results of applying the iterative procedure described in the last section to the choice of voting weights in the IMF. The iterative procedure (which has also been used in Leech [21]) was applied here using the algorithm for the Banzhaf index described in Leech [22]; full convergence was achieved with a simple sum of squares distance function and a stopping rule which required it to be less than 10-15.... In PAGE 30: ... As before the results for the two bodies are broadly similar. Table3 about here For ordinary decisions, the voting weight of the United States should be reduced to under15 percent, and the voting weight of the other member countries increased slightly .in order to achieve the levels of voting power given in the appendix to the IMF Annual Report for 1999: United States 17.... ..."

Cited by 3
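The excerpt above describes fitting voting weights so that the resulting Banzhaf power indices approach target values, with a sum-of-squares distance and a 10^-15 stopping rule. As a rough illustration only (not Leech's algorithm, which avoids full enumeration), the index can be computed by counting swing coalitions directly, and the weights adjusted by a simple multiplicative fixed-point update; the update rule and the 50% quota fraction below are assumptions, not details from the paper.

```python
from itertools import combinations

def banzhaf(weights, quota):
    """Normalized Banzhaf power index by full coalition enumeration
    (feasible only for small n; Leech [22] uses a faster algorithm)."""
    n = len(weights)
    swings = [0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for coal in combinations(others, r):
                s = sum(weights[j] for j in coal)
                if s < quota <= s + weights[i]:  # i is decisive for this coalition
                    swings[i] += 1
    total = sum(swings)
    return [s / total for s in swings]

def fit_weights(target, quota_frac=0.5, iters=200, tol=1e-15):
    """Iteratively adjust weights so the Banzhaf indices approach `target`,
    stopping when the sum-of-squares distance falls below `tol`.
    A hypothetical fixed-point sketch, not the procedure used in the paper."""
    w = list(target)  # start from the desired power shares as initial weights
    for _ in range(iters):
        quota = quota_frac * sum(w)
        p = banzhaf(w, quota)
        dist = sum((pi - ti) ** 2 for pi, ti in zip(p, target))
        if dist < tol:
            break
        # multiplicative update: raise a weight where power falls short of target
        w = [wi * ti / pi if pi > 0 else wi for wi, ti, pi in zip(w, target, p)]
        s = sum(w)
        w = [wi / s for wi in w]
    return w, p
```

Because the Banzhaf index is piecewise constant in the weights, convergence to an arbitrary target is not guaranteed; the iteration cap keeps the sketch well-behaved regardless.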

### Table 1 OLS selection procedure for the simple scalar function modeling problem

2001

"... In PAGE 9: ... For this simple example, many sets of different noisy training data were generated, and the modeling results were consistent and similar to the results shown below, which were typical. It is informative to examine the selection process of the OLS algorithm, listed in Table1 . Notice that the normalized MSE continuously decreased as more terms were added.... In PAGE 10: ... This produced a 15-term model. The model weights had very large value, as can be seen in Table1 . This was a typical sign of over-fitting.... ..."

Cited by 9
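The excerpt describes an OLS term-selection procedure whose normalized MSE falls as terms are added, with very large weights signalling over-fitting. A generic forward-selection sketch (not necessarily the paper's exact orthogonal-least-squares variant) illustrates the idea: at each step, add the candidate regressor that most reduces the normalized MSE of a least-squares fit.

```python
import numpy as np

def ols_forward_selection(Phi, y, n_terms):
    """Greedy forward selection of regressor columns from the candidate
    matrix Phi, tracking normalized MSE after each added term.
    A generic sketch of OLS subset selection, not the paper's algorithm."""
    n, m = Phi.shape
    selected, nmse_trace = [], []
    for _ in range(n_terms):
        best_j, best_nmse = None, np.inf
        for j in range(m):
            if j in selected:
                continue
            cols = Phi[:, selected + [j]]
            w, *_ = np.linalg.lstsq(cols, y, rcond=None)  # least-squares weights
            resid = y - cols @ w
            nmse = (resid @ resid) / (y @ y)  # normalized MSE of this subset
            if nmse < best_nmse:
                best_nmse, best_j = nmse, j
        selected.append(best_j)
        nmse_trace.append(best_nmse)
    return selected, nmse_trace
```

On a noise-free target built from two candidate columns, the trace drops to (numerically) zero once both are selected; on noisy data, letting the selection run too long reproduces the over-fitting symptom the excerpt mentions.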

### Table 4 Summary of extraction performance on simple logic functions

1999

"... In PAGE 12: ... Networks were trained on the complete pattern space of each function; rules were then extracted and tested against all the patterns. Table4 summarizes the results over 10 replications of each problem with different initial weights. In addition to the perfect classification perfor- mance of the rules, the high values of p* and low values of extraction error provide evidence that the extraction process is accurate.... ..."

### Table 3. Best weighting function, given the interests of the developer (column headings: "Robust", "Good tuning guaranteed")

"... In PAGE 5: ... Conclusions and Future Work Given the results, we can draw a few conclusions: 1) the weighting functions do not affect significantly the quality of the PCP; 2) discrete and gaussian weightings are not robust; 3) there is no absolute winner . Nevertheless, Table3 suggests the use of some functions, depending on the interest of the developer. If he wants a simple solution, no ... ..."

### Table 1 summarizes parameters used in PS and Simple RBF learning along with their corresponding errors. Other parameters that remained constant in our experiment were as follows:

"... In PAGE 5: ... An uneven sample distribution generated by trigonometric functions Figure 4 indicates that the error of a Simple RBF (which is itself a function of its weights) constantly oscillates with very large ripples, while that of PS-RBF converges quickly. Moreover, it is clear from Table1 that the average error of PS-RBF is much less than that of Simple RBF. Reconstruction results can be seen in the bottom plots of Figure 2.... In PAGE 6: ...Table1 . PS-RBFN vs.... ..."

### Table 1: w(s_i)

"... In PAGE 13: ... In general, this choice should reflect the relative frequency of time signatures. We propose the following form for the prior of S = [si] p(S) / e? Pi w(si) (16) where w(si) is a simple weighting function given in Table1 . This form prefers subdivisions by small prime numbers, which reflects the intuition that rhythmic subdivisions by prime numbers... ..."

### Table 2. The table shows that initially the best string in the population corresponds to a rather unsuccessful configuration of functional links. In later generations the fitness value increases (or the error rate decreases), and eventually the optimal pattern transformation is found. Note that the weights of the simple perceptron indicate that the bias term is of no use for the considered pattern classification problem.

"... In PAGE 6: ... Table2 : The result of a typical run on the \sinus data-set quot; if symbolic encoding is used. For each considered generation the best con guration of functional links is shown, together with the corresponding perceptron weights and the classi cation error on test patterns.... ..."