### Table 6: Client-side latency for sending one hundred requests per iteration

1996

"... In PAGE 10: ... The performance improvements obtained through this experiment are shown in Table 5. Table 6 shows the time required in seconds for invoking... [Footnote 8: ORBeline uses inline hashing for server-side dispatching and hence we did not optimize ORBeline code.] ..."

Cited by 123

### TABLE II NUMBER OF ITERATIONS (MEAN, STANDARD DEVIATION, AND WORST CASE OVER ONE HUNDRED RUNS) TO ROUTE A RANDOMLY CHOSEN PERMUTATION BY OUR RANDOMIZED ALGORITHM.

2005

### Table 1: Final accuracies for lexicon with purity 0.1. The highest accuracy is obtained by the logarithmic measure. This accuracy is, however, obtained at the expense of about twice as many rules compared to the "original" measure. The "paper" measure behaves the worst, seemingly because it chooses a wrong rule within the first couple hundred iterations. This rule causes a large drop in accuracy, after which the tagger cannot catch up with the other scoring functions. The overall performance of the "paper" rule indicates, however, that variants of it might represent good scoring criteria. Although the "original" measure does not reach the same accuracy as the logarithmic measure, it is more concise (converges in fewer rules). The small difference in accuracy does not justify the use of almost twice as many rules. The graph of the "original" scoring function is ...

### Table 4. Number of branch and bound iterations for triangulation and resectioning on the Dinosaur and Corridor datasets. More parameters are estimated for resectioning, but the main reason for the difference in performance between triangulation and resectioning is that several hundred points are visible to each camera for the latter.

2006

"... In PAGE 12: ... The L1 methods also result in low reprojection errors as measured by the RMS criterion. More interesting is, perhaps, the number of iterations on a standard PC (3 GHz), see Table 4. In the case of triangulation, a point is typically visible in a couple of frames.... ..."

Cited by 10

### Table 1: Percentage of Total Matches Found Within K Iterations: Uniform Workload. To determine how many iterations it would take in practice for parallel iterative matching to complete, we simulated the algorithm on a variety of request patterns. Table 1 shows the results of these tests for a 16 by 16 switch. The first column lists the probability p that there is a cell queued, and thus a request, for a given input-output pair; several hundred thousand patterns were generated for each value of p. The remaining columns show the percentages of matches found within one through four iterations, where 100% represents the number of matches found by running iterative matching to completion. Table 1 shows that additional matches are hardly ever found after four iterations in a 16 by 16 switch; we observed similar ...

1993

Cited by 187
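The matching procedure this excerpt describes — each output grants to a random requesting input, each input accepts one grant at random, and unmatched ports retry in the next iteration — can be sketched as a toy simulation. All function names and parameters below are illustrative, not taken from the paper:

```python
import random

def pim_iteration(requests, matched_in, matched_out):
    """One round of parallel iterative matching over an N x N request matrix:
    each unmatched output grants to a random unmatched requesting input,
    then each input accepts one of its grants at random."""
    N = len(requests)
    grants = {}  # input port -> list of outputs that granted to it
    for out in range(N):
        if out in matched_out:
            continue
        requesters = [i for i in range(N)
                      if requests[i][out] and i not in matched_in]
        if requesters:
            grants.setdefault(random.choice(requesters), []).append(out)
    for inp, outs in grants.items():
        matched_in.add(inp)
        matched_out.add(random.choice(outs))

def simulate(N=16, p=0.5, iters=4):
    """Generate one random request pattern (each input-output pair has a
    queued cell with probability p) and return the cumulative match count
    after each of the first `iters` iterations."""
    requests = [[random.random() < p for _ in range(N)] for _ in range(N)]
    matched_in, matched_out = set(), set()
    sizes = []
    for _ in range(iters):
        pim_iteration(requests, matched_in, matched_out)
        sizes.append(len(matched_in))
    return sizes
```

Running this over many random patterns and comparing the iteration-k match counts against the match size at convergence would reproduce the kind of percentages the table reports.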

### Table 1: Results of applying the MML criterion to samples of various sizes with true values w = 0.1 and C = 10. One hundred replications were used for each sample size.

1998

"... In PAGE 5: ... For a number of sample sizes n, one hundred replications were carried out in which n model points and sample points Yi were generated from the model with these true parameter values, and the estimates obtained by the MML method were found. In Table 1 we present the results of the MML approach, implemented using this choice of initial conditions and then iterating the alternating maximization algorithm to convergence. The consistency of the approach is borne out by the... In PAGE 7: ... Table 2: Results of applying the CML criterion to samples of various sizes with true values w = 0.1 and C = 10, the same samples as in Table 1. One hundred replications were used for each sample size.... ..."

Cited by 14

### Table 2: Coordinated links by partial decomposition. The experiments reported in [21] show that in this way we can solve problems of remarkable sizes, with hundreds of thousands of variables and constraints, by using standard workstations connected in a network. We assign as big subproblems as possible to the workstations; owing to the use of interior point methods they can be quite large. Messages about the approximation points x̃_l and multiplier iterations are passed via the network. The stability of the multiplier method makes coordination of the subproblems possible, even in the presence of numerical errors.

"... In PAGE 25: ...and only for these (l, t) we carry out iteration (4.6). So, similarly to the full decomposition scheme, exchange of information between subproblems occurs only along coordinated links (l, t) ↔ ( (l, t), t) with (l, t) ∈ ∂⁺I_k, k = 1, ..., K. In the example of Figure 2, assuming that subproblem 1 contains scenarios 1, 2, 3 and 4, subproblem 2 contains scenarios 5 and 6, subproblem 3 contains scenario 7 and subproblem 4 contains scenario 8, only links shown in Table 2 need be coordinated. The resulting communication structure of subproblems is shown in Figure 4.... ..."