### Table V summarizes the numerical results produced by the SOR method and by the first hybrid algorithm. The hybrid algorithm was initialized with the same relaxation factors as those used by the SOR method. It is clear from Table V that the hybrid algorithm performed significantly better than the SOR method: after 300 iterations it found a better approximate solution than the SOR method could find after 1000 iterations. Furthermore, the choice of relaxation factor had a crucial impact on the performance of the SOR method; with relaxation factor 1.25 it performed much worse than with relaxation factor 1.75. The hybrid algorithm, however, is much more robust to changes in the initial relaxation factor, which reduces the burden on the user of finding or guessing a near-optimal relaxation factor.
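The relaxation-factor sensitivity described in this excerpt refers to standard successive over-relaxation. The following is a minimal NumPy sketch of plain SOR only (not the paper's hybrid algorithm), showing where the relaxation factor omega enters the update; the matrix and iteration count are illustrative assumptions:

```python
import numpy as np

def sor(A, b, omega, x0=None, iters=300):
    """Successive over-relaxation for A x = b.

    omega is the relaxation factor; values in (1, 2) over-relax
    the Gauss-Seidel update. Assumes nonzero diagonal entries.
    """
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(iters):
        for i in range(n):
            # Sum over all off-diagonal terms, using the freshest x values.
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            # Blend the old value with the Gauss-Seidel update via omega.
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
    return x

# Illustrative symmetric positive-definite system.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = sor(A, b, omega=1.25)
```

Because the convergence rate depends strongly on how close omega is to its problem-dependent optimum, two runs that differ only in omega can need very different iteration counts, which is the sensitivity the excerpt attributes to the plain SOR method.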

### Table 5: A comparison of the restoration algorithms. 8 Conclusion The choice of restoration algorithm is explored and a hybrid model is proposed for network restoration. A new rerouting algorithm is developed, based on a recurrent neural network combined with simulated annealing. The restoration results show the robustness of the proposed model in recovering from network failures.

"... In PAGE 20: ... All the problems in the existing restoration algorithms are alleviated by using the proposed model. Table 5 gives the comparison of the restoration algorithms based on local rerouting, local destination rerouting, source-based rerouting and the proposed hybrid rerouting against various parameters [19].... ..."

Cited by 1

### Table III: Hybridization Methods

### Table 4: Comparison of results considering three different initial control profiles (u0) for case study I.

"... In PAGE 4: ... However, its CPU time, although quite reasonable, was significantly larger than that of HyMDO. In order to evaluate the robustness of the hybrid method, case I was solved considering three different initial constant control profiles, and the results are shown in Table 4, where the solutions obtained with gOPT are also reported. It can be seen that HyMDO is not sensitive to initialization, arriving at essentially the same solution in all the runs, while gOPT, although a bit faster, converged to worse results in two runs.... ..."

### Table 2: Results of Hybrid Methods

"... In PAGE 4: ... Figure 1: Graphs representing the overall results between the two sets of methods, published and hybrid. Inflections in the graph representing hybrid methods may not exactly match the results in Table 2 due to merging of redundant data points. The hybrid graph represents data points in the incremental methods, as well as the combined results of the voting methods.... In PAGE 4: ... Table 1 shows the results, as well as the recall and precision values, for each of the published methods. Table 2 shows the same results for the hybrid methods, both with and without incorporating semantic information. Figure 1 gives a general view of the range of precision and recall for each larger set of methods.... In PAGE 4: ... As seen in Table 2, incorporating semantic information into the technique initially reduced the set of exact matches; however, both sets of incremental methods provided steadily increasing recall, with only a relatively small drop in precision. Performance of the voting schemes was somewhat similar to that of the incremental methods.... ..."


### Table 1. Comparison of Hybrid Misclassification Minimization (HMM) with Parametric Misclassification Minimization (PMM) [8, 1] & Robust Linear Programming (RLP) [2]

1996

"... In PAGE 6: ... The HMM algorithm and the robust linear programming algorithm were implemented in C and called MINOS as a subroutine to solve the linear programs. Table 1 gives a summary of the numerical results. To address the possibility that the reported CPU times might be biased against AMPL, because of the overhead involved when AMPL calls the MINOS solver, we have also included another comparative criterion: the average number of LPs solved by each method.... ..."

Cited by 11

### Table 3: Comparison with hybrid methods and their source methods

### Table 2. Robustness of watermarking

"... In PAGE 24: ... 4.5 Discussion Table 2 sums up the robustness experiments. Robustness results for our method... ..."