### Table 7: Final Considerations

"... In PAGE 7: ... Additional metrics and comments can be specified in the associated table row. Final considerations Supplementary information specific to the surveyed course and/or project can be inserted in Table7 that includes questions about: problems encountered during the project iteration, an evaluation of the project, comments about the project with respect to previous iterations, and any additional relevant information. 5.... ..."

### Table 4: Storage Consideration: Case Study 1

1998

"... In PAGE 17: ...hen storage limitations are also considered based on the ideas of section 2.1. The STN representation of the production network in this case is shown in Figure 4 where the number in parentheses correspond to the states for the materials, tasks for the mixing, storage and packing processing tasks, and the suitable units. (a) Approximation of storage timings As shown in Table4 , the proposed formulation in this case requires also the consideration of 7 event points and results in a MILP model having 336 integer variables, 1361 contin- uous variables and 3395 constraints. The solution of the resulting MILP problem using... ..."

Cited by 19

### Table 1: Lower bounds on the star network. The algorithms derived here for all of the above problems are optimal in terms of time and number of message transmissions. Some of the methods used in this section to derive lower bounds for the communications problems under consideration are similar to the methods used in [7] to derive lower bounds for similar problems on the hypercube network.

1996

"... In PAGE 14: ... The lower bounds for the algorithms with controlled degree of fault tolerance will be derived in the following sections along with the description of the algorithms. Table1 below summarizes the lower bounds for all of the above problems, with degree of fault tolerance n ? 2, and M messages transmitted to each node. By tn we denote the quantity n!(n + 2... ..."

Cited by 13

### TABLE 1: Selected Macroeconomic Indicators in CEEC-5 (1996)

### Table 5 compares the results obtained by us, using the approach described in this paper, with previous work done on the same problem. Thus, our system represents a considerable improvement over recent research on the recognition of E. coli promoters.

"... In PAGE 8: ...9883 0.9818 Table5 . Comparison of our results with previous work on the same problem ... ..."

### Table 2: cumulative results. Moreover, our new algorithm solves 8 problems on which the algorithm proposed in [14] fails. The failures are caused by an excessive number of iterations or function evaluations. On the basis of these results, the new method yields considerable computational savings, along with an increase in robustness.

1996

"... In PAGE 11: ... In the Appendix we report the complete results of both the algorithms on all the test problems. In order to give a summary of this extensive numerical testing, in Table 1 we report the number of times each method performs the best in terms of number of iterations, function and gradient evaluations: NEW ALGORITHM ALGORITHM [14] tie iterations 69 17 91 function evaluations 155 18 4 gradient evaluations 69 17 91 Table 1: number of times each method performs the best Table2 shows the cumulative results for all the problems solved by both algorithms. In this table iterations stands for the total number of iterations needed to solve all these problems; the same for function and gradient evaluations.... ..."

Cited by 5
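The win/tie tallies in Table 1 above are simple per-problem comparisons. A minimal sketch of how such a tally can be computed (the function name and the per-problem counts below are illustrative, not from the paper):

```python
# Hypothetical sketch: count, per metric, on how many test problems each
# method performs strictly best, and how many are ties (as in Table 1).

def tally_wins(results_a, results_b):
    """Count problems where A is strictly better (fewer), B is better, or a tie."""
    wins_a = wins_b = ties = 0
    for a, b in zip(results_a, results_b):
        if a < b:
            wins_a += 1
        elif b < a:
            wins_b += 1
        else:
            ties += 1
    return wins_a, wins_b, ties

# Illustrative iteration counts on five test problems (not the paper's data).
new_alg_iters = [12, 30, 7, 55, 20]
ref_alg_iters = [15, 30, 9, 50, 20]

print(tally_wins(new_alg_iters, ref_alg_iters))  # -> (2, 1, 2)
```

The same tally would be repeated for function and gradient evaluations to fill the remaining rows of such a table.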

### Table 2: cumulative results Moreover, there are 8 problems solved by our new algorithm while the algorithm proposed in [14] fails on these problems. The failures are caused by excessive number of iterations or functions evaluations.On the basis of these results, the new method generates a considerable computational savings, along with an increase in robustness.

1996

"... In PAGE 11: ... In the Appendix we report the complete results of both the algorithms on all the test problems. In order to give a summary of this extensive numerical testing, in Table 1 we report the number of times each method performs the best in terms of number of iterations, function and gradient evaluations: NEW ALGORITHM ALGORITHM [14] tie iterations 69 17 91 function evaluations 155 18 4 gradient evaluations 69 17 91 Table 1: number of times each method performs the best Table2 shows the cumulative results for all the problems solved by both algorithms. In this table iterations stands for the total number of iterations needed to solve all these problems; the same for function and gradient evaluations.... ..."

Cited by 5

### Table 6 shows that the Bi-CGSTAB convergence with some damping in the Helmholtz problem is considerably faster than for α = 0. This was already expected from the spectra in Figure 3. Furthermore, the number of iterations in the case of damping grows only slowly for increasing wavenumbers, especially for the (β1, β2) = (1, 0.5) preconditioner. The difference between the two preconditioners with β1 = 1 is more pronounced if we compute higher wavenumbers. The Bi-CGSTAB convergence and CPU time for the higher wavenumbers, without and with damping in the Helmholtz problem, are presented in Table 7. Also for the higher wavenumbers damping in the Helmholtz

2006

"... In PAGE 16: ...part of Table 6. In the middle part of Table6 , the Bi-CGSTAB convergence with the ( 1; 2) = (1; 1)-preconditioner is presented. In the lower lines of Ta- ble 6 the ( 1; 2) = (1; 0:5)-preconditioner is employed.... In PAGE 16: ...4 Ghz and 2 Gb RAM. From the results in Table6 we conclude that the preferred methods among the choices are the preconditioners with 1 = 1. This was already expected from the spectra in Figure 1.... ..."

Cited by 5
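The technique behind this entry, Bi-CGSTAB applied to a Helmholtz problem with a complex-shifted preconditioner, can be sketched on a tiny 1D model problem. This is not the paper's setup: the grid size, wavenumber, and the exact-LU application of the preconditioner are all illustrative assumptions; only the (1, 0.5) shift follows the snippet.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Sketch (illustrative, not the paper's experiment): 1D Helmholtz operator
# A = -d2/dx2 - k^2 on the unit interval with Dirichlet boundaries,
# preconditioned by the complex-shifted operator
# M = -d2/dx2 - (b1 - 1j*b2) * k^2, using the (1, 0.5) shift from the snippet.

n = 200                      # interior grid points (assumed size)
h = 1.0 / (n + 1)
k = 20.0                     # wavenumber (assumed)

lap = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc") / h**2
identity = sp.identity(n, format="csc")

A = (lap - k**2 * identity).astype(complex)

b1, b2 = 1.0, 0.5            # shift parameters as in the (1, 0.5) case
M_op = (lap - (b1 - 1j * b2) * k**2 * identity).astype(complex)
M_lu = spla.splu(M_op)       # exact LU of the shifted operator (a simplification;
                             # in practice the shifted system is solved approximately)
M = spla.LinearOperator(A.shape, matvec=M_lu.solve, dtype=complex)

rhs = np.zeros(n, dtype=complex)
rhs[n // 2] = 1.0            # point source in the middle of the domain

x, info = spla.bicgstab(A, rhs, M=M, maxiter=500)
print("converged:", info == 0)
```

The damped shift moves the spectrum of the preconditioned operator away from the origin, which is what makes Bi-CGSTAB converge quickly; without the imaginary shift the preconditioned system is much harder.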

### Table 1: Results comparing MORGAN, Full and Restarted GMRES. These results show a number of important points. In terms of time MORGAN(26,4) is at least twice as fast as GMRES(30) for the same memory requirements. Full GMRES is not much faster than MORGAN. There are good parallel efficiencies between 1 and 4 processors (approximately 75%) on a shared memory parallel computer. This is due to the fact that the problem is dense and reasonably large. For sparse problems the efficiencies would be considerably worse.

1998

Cited by 7
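The trade-off this entry describes, restarted GMRES bounding memory at the cost of extra iterations versus full GMRES, can be sketched with SciPy. Morgan's deflated restarting (MORGAN) is not available in SciPy, so this only illustrates the restart/memory trade-off; the matrix and sizes are assumptions, not the paper's problem.

```python
import numpy as np
import scipy.sparse.linalg as spla

# Sketch contrasting full GMRES with restarted GMRES(30) on a small dense
# nonsymmetric test matrix (illustrative only). Full GMRES stores one basis
# vector per iteration; GMRES(30) caps storage at 30 vectors but may need
# more total iterations to reach the same tolerance.

rng = np.random.default_rng(0)
n = 400
# Diagonally dominant nonsymmetric matrix so both variants converge.
A = np.eye(n) * 4.0 + rng.standard_normal((n, n)) / np.sqrt(n)
b = rng.standard_normal(n)

def solve(restart):
    iters = []
    x, info = spla.gmres(A, b, restart=restart, maxiter=2000,
                         callback=lambda r: iters.append(r),
                         callback_type="pr_norm")   # one callback per inner iteration
    return info, len(iters)

info_full, it_full = solve(restart=n)    # restart = n: effectively full GMRES
info_30, it_30 = solve(restart=30)       # GMRES(30): bounded memory

print("full GMRES:  info =", info_full, " iterations =", it_full)
print("GMRES(30):   info =", info_30, " iterations =", it_30)
```

Since full GMRES minimizes the residual over the whole Krylov space, the restarted variant can never need fewer total iterations; deflated restarting such as MORGAN narrows that gap while keeping the memory bound.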