### Table 1: Data statistics. The number of DA units is averaged over each of the 10 random cuts of data.

2005

"... In PAGE 2: ....19%, 6.42% and 59.00% respectively. We divided the 75 annotated sessions into the following classes, each with a specific number of randomly selected sessions as shown in Table 1. The validation set was not used in this set of experiments, but was set aside nevertheless to serve as a basis for continuing experiments on active learning augmented with the partially supervised iterative training procedure discussed in [4].... ..."

Cited by 3


### Table 2: Number of nodes, time, and number of cuts for IP: random instances

2003

### Table 4: A Comparison of Cut-edge Results for the Random Sea

2000

Cited by 10

### Table 6: Graph partitions of random graphs generated by cutting the hypercube and grid embeddings

"... In PAGE 16: ... We find that the bisection widths for hypercube embeddings are about the same for all hyperplanes whereas for grid embeddings, the two partitions dividing the grid in half vertically and horizontally give the best partitions. Table 6 shows how the Mob hypercube and grid embedding algorithms perform as graph-partitioning algorithms. The data for random graphs on the performance of the Mob graph-partitioning algorithm and the KL graph-partitioning algorithm is taken from our study of local search graph-partitioning heuristics in [19,21].... In PAGE 17: ...Table 6 by the percentage of all edges that cross the cut between A and B. We found that 16-to-1 grid and hypercube embeddings with our Mob-based heuristics produced bisection widths comparable to those for the Mob heuristic for graph-partitioning.... In PAGE 17: ... The performance of the Mob embedding algorithms interpreted as graph-partitioning algorithms is remarkable, considering that Mob is optimizing the "wrong" cost function. While the data in Table 6 cannot show conclusively how good the Mob embedding algorithms are, the existence of a better graph-embedding algorithm would also imply the existence of a better graph-partitioning algorithm. 3.... ..."
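The excerpt above measures partition quality as the percentage of all edges crossing the cut between the two sides A and B. A minimal sketch of that metric (the function and graph below are illustrative, not taken from the cited paper):

```python
import random

def cut_fraction(edges, part_a):
    """Fraction of edges with exactly one endpoint in part_a."""
    crossing = sum(1 for u, v in edges if (u in part_a) != (v in part_a))
    return crossing / len(edges)

# Toy random graph on 16 nodes with edge probability 0.3,
# bisected into the first and last 8 nodes.
random.seed(0)
nodes = range(16)
edges = [(u, v) for u in nodes for v in nodes
         if u < v and random.random() < 0.3]
part_a = set(range(8))
print(f"cut fraction: {cut_fraction(edges, part_a):.3f}")
```

A partitioning heuristic such as KL or Mob would then try to drive this fraction down by swapping vertices between the two sides.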

### Table 6: Comparing the size of cuttings computed by CutRandomInc, with or without using merging. Each entry is the size of the minimal cutting computed, divided by r2.

2000

"... In PAGE 19: ...10, it is an interesting question whether or not using merging results in smaller cuttings generated by CutRandomInc. We tested this empirically, and the results are presented in Table 6. As can be seen in Table 6, using merging does generate smaller cuttings, but the improvement in the cutting size is rather small.... ..."

Cited by 8

### Table 3: Experimental results on random Max-Cut instances. (Numbers within parentheses indicate the number of instances that could not finish within 1 hour)

2005

Cited by 6

### Table 5. Branch-and-Cut results, random networks, Euclidean edge lengths

"... In PAGE 18: ...3.1. Branch-and-Cut for Euclidean edge lengths This subsection is devoted to the results obtained by the Branch-and-Cut algorithm for instances in which the edge lengths are equal to the rounded Euclidean distances. Table 2 reports results obtained for problems coming from real applications, while Table 5 reports average results obtained for randomly generated problems. The gaps are relative to the best upper bound found.... ..."

### Table 9. Branch-and-Cut results, random networks, unit edge lengths

"... In PAGE 19: ... The most difficult cases seem to be for K between 3 and 5, and especially for K = 4. Table 6 reports results obtained for problems coming from real applications, while Table 9 reports the average results obtained for randomly generated problems. These instances were not tested in Fortz et al.... ..."