### Table 5. Number of Solutions within x Percent of Optimal for 1000 Task Sets.

"... In PAGE 6: ... For each experiment, a workload of 10 tasks has been generated with an overload (120% utilization for each task set). Results shown in Table 5 indicate the number of solutions within a certain percent of optimal. For the two optimality criteria, a near-optimal solution (more than 91%) is obtained using AP(2).... In PAGE 7: ... Figure 4 shows that the complexity of the algorithm is relatively low even with a high degree of quality (= 0.02). From the results shown in Table 5 it can be concluded that for values of k ≥ 2, 92.5% and 100% of the solutions are 95%-close to optimal when the criterion is to maximize criticality and utilization, respectively.... ..."
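The "number of solutions within x percent of optimal" tally reported in Table 5 can be sketched as follows. This is an illustrative reconstruction, not the paper's code; the function and variable names are hypothetical, and it assumes a maximization objective with known optimal values.

```python
# Hypothetical sketch: count the fraction of task sets whose heuristic
# solution value lies within x percent of the optimal objective value.
# All names here are illustrative, not taken from the paper.

def fraction_within_x_percent(heuristic_values, optimal_values, x):
    """Fraction of instances whose heuristic value is within x percent
    of the optimal value (maximization objective assumed)."""
    assert len(heuristic_values) == len(optimal_values)
    near = sum(
        1 for h, opt in zip(heuristic_values, optimal_values)
        if opt > 0 and h >= (1 - x / 100.0) * opt
    )
    return near / len(optimal_values)

# Toy example: 3 of 4 solutions are within 5% of optimal.
print(fraction_within_x_percent([95, 99, 80, 100], [100, 100, 100, 100], 5))
# → 0.75
```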

### Table 3: Throughput comparison of Greedy vs. Near-Optimal

2006

"... In PAGE 10: ... All results are the average over 20 random topologies generated by the setdest tool [21]. Table 3 shows the maximum, minimum and average end-to-end throughput over 20 random topologies for both greedy and near-optimal ad-hoc relay. The greedy ad-hoc relay protocol achieves throughput gains of 572% to 897%, with an average throughput gain of 785%.... ..."

Cited by 3

### Table 13: Improving Near-Optimal Tours with DPC

in Linear Time Dynamic-Programming Algorithms for New Classes of Restricted TSPs: A Computational Study

### Table 1: Near-optimal feature transform functions.

"... In PAGE 6: ... This leaves us with the following scoring function: score(d) = Σ_{i=1}^{n} w_i T_i(F_i(d)). For each feature, we chose a transform function that we empirically determined to be well-suited. Table 1 shows the chosen transform functions. We tuned the scalar weights by selecting 5000 queries at random from the test set, using an iterative refinement process to determine the weight that maximized the given performance measure, fixed the weight, and used the remaining 23,043 queries to assess the performance of the scoring function.... ..."
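The linear scoring function score(d) = Σ w_i T_i(F_i(d)) from the snippet can be sketched as below. The feature extractors, transform functions, and weights are placeholders for illustration only; the paper's actual choices are those in its Table 1.

```python
# Hedged sketch of a weighted sum of transformed feature values.
# Features, transforms, and weights are hypothetical, not the paper's.
import math

def score(doc, features, transforms, weights):
    """score(d) = sum_i w_i * T_i(F_i(d))."""
    return sum(w * t(f(doc)) for f, t, w in zip(features, transforms, weights))

# Toy example with two made-up document features.
features = [lambda d: d["length"], lambda d: d["links"]]
transforms = [math.log1p, math.sqrt]   # illustrative transform functions
weights = [0.7, 0.3]

doc = {"length": 100, "links": 4}
print(score(doc, features, transforms, weights))
```

The per-feature transform lets each raw feature be rescaled independently before the weights combine them, which is why the snippet ties the linearity assumption to feature independence.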

### Table 1: Near-optimal feature transform functions.

"... In PAGE 6: ... We chose transform functions that we empirically determined to be well-suited. Table 1 shows the chosen transform functions. This type of linear combination is appropriate if we assume features to be independent with respect to relevance and an exponential model for link features, as discussed in [8].... ..."

### Table 1 shows the ideal throughput as compared to actual scheduling results for the four workload applications (W = 8 and D = 1). The actual throughput achieved by our mapping algorithm is around 1% less. Since only complete ADAG nodes can be allocated to processing elements, the applications cannot be distributed entirely evenly over all processing elements and some processing stalls are introduced. Nevertheless, a near-optimal schedule is achieved by randomized mapping.

"... In PAGE 5: ...2 Limitations on Instruction Store One main limitation on current commercial network processors is the amount of instruction store that is available to each processing element (typically only a few thousand instructions). While the static instruction store does not directly translate into the dynamic instructions shown in Table 1, there is still a limit on how many ADAGs can be mapped to current NPs. Figure 5 shows the throughput performance of two applications for different numbers of parallel ADAGs (x-axis) on different architectures.... In PAGE 6: ...Table 1: Ideal vs. Actual Performance for Different Applications.... ..."
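The randomized mapping idea from the caption, assigning whole ADAG nodes to processing elements and keeping the most balanced assignment found, can be sketched like this. It is an illustration of the general technique under assumed inputs (per-node costs, a PE count), not the paper's actual algorithm.

```python
# Hedged sketch: random assignment of whole ADAG nodes to processing
# elements (PEs), keeping the trial with the smallest maximum PE load.
# Node costs and PE count are made-up inputs for illustration.
import random

def randomized_mapping(node_costs, num_pes, trials=1000, seed=0):
    """Return (mapping, load): the node->PE assignment with the smallest
    maximum per-PE load over `trials` random assignments."""
    rng = random.Random(seed)
    best_map, best_load = None, float("inf")
    for _ in range(trials):
        mapping = [rng.randrange(num_pes) for _ in node_costs]
        loads = [0.0] * num_pes
        for node, pe in enumerate(mapping):
            loads[pe] += node_costs[node]
        if max(loads) < best_load:
            best_map, best_load = mapping, max(loads)
    return best_map, best_load

mapping, load = randomized_mapping([3, 1, 4, 1, 5, 9, 2, 6], num_pes=4)
print(load)  # best max PE load found; the balanced ideal is sum/num_pes = 7.75
```

Because nodes are assigned whole, the best achievable load here is bounded below by the largest node cost (9), which mirrors the caption's point that indivisible ADAG nodes prevent perfectly even distribution.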

### Table 2: Comparison of CCEA, cCCEA, and pCCEA on MTQ1. We give the number of runs out of 250 which produce a near-optimal pair of variable settings, as well as the value of the highest-valued individual from each run, averaged across all 250 runs. Note cCCEA and pCCEA are roughly comparable, but outperform CCEA significantly.

"... In PAGE 5: ... We repeated this experiment 250 times for each algorithm. Table 2 reports the number of runs which found an individual near the global optimum and gives the mean objective value of the highest-valued individual from each of the 250 runs. Recall that the objective value of the higher peak is 150, while the objective value of the lower peak is 50.... ..."
