### Table 6. Comparison of MPGA and MOGA based on two significant factors (ratio of average process times to average setup times, and WIP status) of test problems

2003

"... In PAGE 14: ... For the factor of WIP status, except when it is at the high level (all jobs are ready at the beginning), MPGA outperforms MOGA by similar margins. Table 6 shows the results according to the factors Ratio and WIP. Both algorithms produce few solutions when the ratio is low and WIP is moderate or low.... ..."

Cited by 1

### Table 4: Finding as many MUSes as possible. Columns: Instance, #var, #cla, L.&S., ASMUS

"... In PAGE 15: ... We have compared the ASMUS method with the complete algorithm proposed in [20] from an experimental point of view. Table 4 shows that both approaches appear to deliver the exact sets of MUSes on the simple aim benchmarks, using similar run-times. On more difficult instances like Aleat30_75_*, ASMUS extracts almost all MUSes, and its computation time is generally better than that of the complete method.... ..."
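The excerpt concerns extracting minimal unsatisfiable subformulas (MUSes) from CNF instances. As a generic illustration of the idea (a sketch only, not the ASMUS method or the complete algorithm of [20]), the classic deletion-based approach tries dropping each clause in turn and keeps it only if the remaining formula becomes satisfiable; a brute-force SAT check suffices for tiny formulas:

```python
from itertools import product

def satisfiable(clauses, n_vars):
    """Brute-force SAT check for tiny CNF formulas.
    A clause is a set of ints: v means variable v is true, -v means false."""
    for assign in product([False, True], repeat=n_vars):
        if all(any(assign[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def deletion_mus(clauses, n_vars):
    """Deletion-based extraction of one MUS: drop each clause in turn;
    if the rest is still unsatisfiable, the clause is not needed."""
    mus = list(clauses)
    i = 0
    while i < len(mus):
        trial = mus[:i] + mus[i + 1:]
        if not satisfiable(trial, n_vars):
            mus = trial          # clause i was redundant
        else:
            i += 1               # clause i is necessary; keep it
    return mus

# (x1) AND (NOT x1) is a MUS; the extra clause (x2) is redundant.
print(deletion_mus([{1}, {-1}, {2}], 2))  # → [{1}, {-1}]
```

Real MUS extractors replace the exponential `satisfiable` check with an incremental SAT solver, but the deletion loop itself is the same shape.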

### Table 2: Look-up table used by the MODS to determine how many object instances can be placed in a test image.

"... In PAGE 7: ... (Table 2). For example, if the maximum dimension of an object is 90, then the MODS will place 9 instances of the object in each test image.... ..."

### Table 3: A possible database state. We can have many possible instances for virtual classes; some of them are shown in Table 4.

### Table 2.2: Weight and cardinality statistics of the covers produced by the various algorithms on 30 random permutations of a data set consisting of biological data (56 sequences of 75 nucleotides each). Here GREEDY2 does outperform GREEDY1 on many instances.

### Table 2: Weight and cardinality statistics of the covers produced by the various algorithms on 30 random permutations of a data set consisting of biological data (56 sequences of 75 nucleotides each). Here GREEDY2 does outperform GREEDY1 on many instances.

"... In PAGE 15: ... For the unweighted case, GREEDY2 sometimes did not perform as well as GREEDY1, so the additional complexity of GREEDY2 is perhaps not justified. Table 2 shows the performance of the various algorithms for the (weighted) WOPC problem, where the objective is to minimize the total weight of the cover rather than its cardinality. Both the weight and cardinality of the solutions produced by GREEDY1 and GREEDY2 for the data sets are shown in the table.... ..."

### Table 2: Weight and cardinality statistics of the covers produced by the various algorithms on 30 random permutations of a data set consisting of biological data (56 sequences of 75 nucleotides each). Here GREEDY2 does outperform GREEDY1 on many instances.

"... In PAGE 11: ... We conclude that the two greedy heuristics are quite effective in primer number minimization. Table 2 shows the performance of the various algorithms for the (weighted) WOPC problem, where the objective is to minimize the total weight of the cover rather than its cardinality. Both the weight and cardinality of the solutions produced by GREEDY1 and GREEDY2 for the data sets are shown in the table.... ..."
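The weighted set-cover objective described in these excerpts (minimize total weight rather than cardinality) is commonly attacked with the standard greedy heuristic: repeatedly pick the set with the lowest ratio of weight to newly covered elements. The sketch below illustrates that generic heuristic only; it is not the papers' GREEDY1 or GREEDY2, whose details are not given here.

```python
def greedy_weighted_cover(universe, sets, weights):
    """Standard greedy heuristic for weighted set cover: at each step
    pick the set minimizing weight / (# newly covered elements)."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = min(
            (i for i in range(len(sets)) if sets[i] & uncovered),
            key=lambda i: weights[i] / len(sets[i] & uncovered),
        )
        cover.append(best)
        uncovered -= sets[best]
    return cover

# Toy example: cover {1..5} with three weighted sets.
sets = [{1, 2, 3}, {3, 4}, {4, 5}]
weights = [3.0, 1.0, 2.0]
print(greedy_weighted_cover(range(1, 6), sets, weights))  # → [1, 0, 2]
```

This heuristic is a ln(n)-approximation for weighted set cover, which is why weight and cardinality can trade off differently across algorithm variants, as the table reportedly shows.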

### Table 5 Musk Data Set 1. 10-fold cross-validation. For each fold, the left half of the table indicates the change in the number of relevant dimensions (starting with 166) with each iteration of the Iterated Discrimination algorithm. The right half of the table indicates how many of the instances selected by backfitting changed in each iteration. The values for iteration 1 show the number of positive molecules in the training set (and hence, the total number of selected instances).

"... In PAGE 35: ...5 and backpropagation can represent APRs, they choose other, less appropriate, hypotheses in this domain. Table 5 gives some insight into the behavior of the Iterated Discrimination algorithm. For each fold of the 10-fold cross-validation, this table shows how the number of relevant dimensions and the set of selected positive instances changes.... ..."

### Table 1: Evaluation of learned models on test data. N is the number of instances, split 50:25:25 into training, validation, and test sets. Columns for correlation coefficient and RMSE indicate values using only raw features as basis functions, and then using raw features and their pairwise products. For the last entry, we restricted experiments to a random subset of the data and 1 run since Novelty+ was very slow on many instances of QWH.

"... In PAGE 3: ... Figure 1(c) shows predictions for the Novelty+ algorithm on the SAT04 dataset; the higher error in this figure is partly due to the non-homogeneity of the data, partly to the smaller number of runs over which the median is taken, and partly to the smaller amount of training data. Note that our predictions for Novelty+ and SAPS are qualitatively similar on all experiments we conducted (see Table 1). Finally, Figure 1(d) shows the performance of our... ..."
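The caption for Table 1 describes models fit first on raw features and then on raw features plus their pairwise products as basis functions. A minimal sketch of that basis expansion with ordinary least squares follows; the data here are synthetic and the function names are illustrative, not the paper's actual pipeline:

```python
import numpy as np

def pairwise_basis(X):
    """Expand raw features with a constant and all pairwise products
    x_i * x_j (i <= j), the basis-function scheme the caption describes."""
    n, d = X.shape
    prods = [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack([np.ones(n), X] + prods)

# Synthetic data: a target driven by a purely pairwise interaction, which
# raw features alone cannot represent but the expanded basis can.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
y = 1.0 + 2.0 * X[:, 0] * X[:, 1]
Phi = pairwise_basis(X)                      # columns: 1, x0, x1, x0^2, x0*x1, x1^2
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # least-squares fit
print(np.allclose(Phi @ w, y))               # prints True
```

This shows why adding pairwise products can lower RMSE, as the table's paired columns suggest: interaction effects become linearly representable.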

### Table 2 below presents an enumeration of the total observed concurrent events. In short, it shows that every file system operation may, at some point, be enacted while another file system client is already using the targeted file. In fact, as the table below shows, there are many instances where the targeted file is operated upon several times while it is in use. These data should motivate the file system designer to resolve the semantic mismatches that arise when these situations occur.

2002

"... In PAGE 19: ... Table 2: Multiple Semantic Events. In order to understand the information provided by the file system semantic analysis tool, it is necessary to observe and count all of the instances of possible errors in the results of the tool. The table below enumerates several classes of errors that are a cause of erroneous results.... ..."