### Table 1: Near-optimal feature transform functions.

"... In PAGE 6: ... This leads us to the following scoring function: score(d) = ∑_{i=1}^{n} w_i · T_i(F_i(d)). For each feature, we chose a transform function that we empirically determined to be well-suited. Table 1 shows the chosen transform functions. We tuned the scalar weights by selecting 5000 queries at random from the test set, using an iterative refinement process to determine the weight that maximized the given performance measure, fixed the weight, and used the remaining 23,043 queries to assess the performance of the scoring function. ... ..."
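The scoring function in this excerpt is a weighted sum of per-feature transforms. A minimal sketch, assuming illustrative features, transforms, and weights (all of the concrete values below are made up; the paper's actual features and transforms are listed in its Table 1):

```python
import math

def score(doc, features, transforms, weights):
    """score(d) = sum_i w_i * T_i(F_i(d)): each feature extractor F_i is
    applied to the document, passed through its transform T_i, and the
    results are combined with scalar weights w_i."""
    return sum(w * t(f(doc)) for f, t, w in zip(features, transforms, weights))

# Hypothetical example with two toy features and simple transforms.
features = [lambda d: d["term_freq"], lambda d: d["doc_len"]]
transforms = [lambda x: math.log(1 + x), lambda x: 1.0 / (1 + x)]
weights = [2.0, 0.5]  # in the paper, tuned one at a time on held-out queries

doc = {"term_freq": 7, "doc_len": 120}
s = score(doc, features, transforms, weights)
```

The iterative weight refinement the excerpt describes would wrap this in a loop that sweeps one weight at a time over a grid, keeping the value that maximizes the chosen performance measure on the 5000 sampled queries.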

### Table 2: Near-optimal configurations estimated through our fast methodology compared to optimal configurations found with the exhaustive search. (Simulations have been performed on different workstations working in parallel)

"... In PAGE 3: ... Table 2 compares the final configurations estimated with our methodology with the optimal configurations found with the exhaustive search. Table 2 also shows the percentage errors between the optimal BXBW and the near-optimal BXBW values. For all benchmarks but one, the optimal configuration was found without exploring the whole design space. ... In PAGE 3: ...54% error with respect to the optimal BXBW value (CM CR D3D4D8 and CM CQ D3D4D8 differ from CR D3D4D8 and CQ D3D4D8). Table 2 also reports the exhaustive search time as well as the overall speedup obtained by our methodology. Note how simulation times can be reduced approximately by a factor of six. ... ..."

### Table 1: Optimal and near optimal solutions comparison Movement-Based

"... In PAGE 12: ... Sensor reliability is assumed to be fixed at all times. As shown in Table 1, the two heuristics are generally able to achieve reasonable average coverage performance. The coverage performance of movement-based greedy ... ..."

### Table I and Fig. 2). Next we describe two simple algorithms, collect-en-route and report-en-route (presented in [14]), that improve the above solution to the route exploration problem, and analyze their performance using our model. Following this discussion we turn to more sophisticated solutions that achieve near-optimal performance.

### Table 4: Performance results using the same strategy for all documents, measuring the total turnaround time (TaT), the number of stale documents that were returned, and the total consumed bandwidth. Optimal and near-optimal values are highlighted for each metric.

2002

"... In PAGE 11: ...1 Applying a Global Strategy. In our first experiment each document was assigned the same strategy. The results are shown in Table 4. ... In PAGE 11: ... Combining SU50 with a caching strategy for the remaining intermediate servers improves the total turnaround time, but also leads to returning stale documents. Table 4 also shows that most strategies are relatively good with respect to one or more metrics, but no strategy is optimal in all cases. In the next section, we discuss the effects if the global strategy is replaced by assigning a strategy to each document separately, and show that per-document replication policies lead to better performance with respect to all metrics at the same time. ... In PAGE 13: ... Because we assume that a lower value in a metric always indicates better performance, using result vectors introduces a partial ordering on the complete set A of arrangements, such that total(A1) ≺ total(A2) iff ∀ i ∈ {1, …, N}: total(A1)[i] ≤ total(A2)[i] and ∃ j ∈ {1, …, N}: total(A1)[j] < total(A2)[j]. Obviously, if total(A1) ≺ total(A2) then A1 should be considered better than A2, as it leads to better performance values for each metric. As an example, consider the results from Table 4 for FAU Erlangen. Let ACV be the arrangement in which each document is assigned strategy CV, ACLV be the arrangement with CLV, and ACDV be the one with CDV for each document. ... ..."
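The partial ordering in this excerpt is Pareto dominance on per-metric result vectors: an arrangement is better only if it is no worse on every metric and strictly better on at least one. A minimal sketch (the metric values below are made up for illustration):

```python
def dominates(v1, v2):
    """Return True if result vector v1 dominates v2: v1 is no worse on
    every metric and strictly better on at least one (lower is better).
    This realizes total(A1) < total(A2) from the excerpt's definition."""
    return (all(a <= b for a, b in zip(v1, v2))
            and any(a < b for a, b in zip(v1, v2)))

# Hypothetical result vectors over (turnaround time, stale docs, bandwidth).
a1 = (120.0, 0, 450.0)
a2 = (130.0, 0, 450.0)
```

Here `dominates(a1, a2)` holds (strictly better turnaround time, no worse elsewhere), while `dominates(a2, a1)` and `dominates(a1, a1)` do not, so the relation is only a partial order: incomparable arrangements, each better on a different metric, are common.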

Cited by 45


### Table 1: A near-optimal population of classifiers from a 2 level 6 multiplexer experiment at an advanced point in condensation. Only the first classifier shown is not a member of [O], and it alone accounts for the non-zero system error.

1996

"... In PAGE 27: ... This would seem to indicate that system error and performance are different measures of the system's progress in mapping the environment. Table 1 shows a near-optimal population during an advanced stage of condensation. It has a full [O], but has an additional classifier (the first shown) which would eventually be eliminated by condensation (note its low fitness and numerosity). ... ..."

Cited by 34

### Table 3: Throughput comparison Greedy Near-Optimal

2006

"... In PAGE 10: ... All results are averages over 20 random topologies generated by the setdest tool [21]. Table 3 shows the maximum, minimum and average end-to-end throughput over 20 random topologies for both greedy and near-optimal ad-hoc relay. The greedy ad-hoc relay protocol achieves throughput gains of 572–897%, with an average throughput gain of 785%. ... ..."

Cited by 3

### Table 45 clearly demonstrates that the column generation heuristic is able to solve small problems to near-optimality; the reported average optimality gap is with respect to the known optimal solutions to these problems found using the smart enumeration method. Further, Table 46 demonstrates that the heuristic solution approach is much faster than

in Seaports

2006

"... In PAGE 88: ...8.3 Performance of Heuristics. The performance of the column generation heuristics is now evaluated by comparing the solutions obtained from the heuristic with the optimal solutions obtained from the smart enumeration method. Table 45 summarizes the performance of the LSP pricing heuristic on UDTP problems. Table 46 compares the computational time required by the two methods for the same problem sizes; note that computation times are decomposed into label generation time and branch-and-bound time. ... In PAGE 88: ... All values represent averages over 10 instances. Table 45: Performance of Root Column Generation Heuristic Using LSP for UDTP: Solution Quality. Columns: Problem Size; No. of Instances solved optimally; Avg optimality gap; Avg Requests per vehicle. First row: 11 X 11, 5, 0. ... ..."