### Table 2 Means and Standard Deviations of the Word Frequencies, Context Frequencies, and Orthographic Distinctiveness Scores for Words in Experiment

2003

"... In PAGE 8: ..., 2002) so that the average letter frequency of the words in the four groups was approximately the same. Table2 shows means and standard deviations of the word frequencies, context frequencies, and orthographic distinctiveness scores for words in these conditions. Two study lists and two test lists were constructed randomly and anew for each subject.... ..."

Cited by 3

### Table 1: Performance of FM counters in terms of % mean error for absolute values of distinct IP and distinct Port counters.

"... In PAGE 4: ... The above evaluation is done on the same trace for different values of m = 8, m = 32 and m = 256. Table1 presents the mean percentage error observed between the actual count and the estimated count. The mean error increases with the decrease in the number of hash functions used.... ..."

### Table 1. Setting of the simulation with synthetic data described in Section 6. All data sets have two classes. In 1 and 3 the classes have distinct means. In 2 and 4 the distribution is independent of the class label. The size of each data set is given by n.

2001

"... In PAGE 5: ... 6. Simulation study with synthetic data In order to test the plausibility of the proposed estima- tors, we followed the experimental framework of (Efron amp; Tibshirani, 1997), described in Table1 . In all cases, there are two classes.... ..."

Cited by 1
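The simulation setting described in the caption (two classes, distinct means in settings 1 and 3, label-independent features in 2 and 4) can be sketched as follows; the Gaussian means and unit variance here are illustrative assumptions, not the parameters of Efron & Tibshirani (1997):

```python
import random

def make_dataset(n: int, distinct_means: bool, seed: int = 0):
    """Synthetic two-class data: balanced labels; when distinct_means is
    True the classes have different Gaussian means, otherwise the feature
    distribution is independent of the class label."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        y = rng.randint(0, 1)
        mu = (1.0 if y == 1 else -1.0) if distinct_means else 0.0
        data.append((rng.gauss(mu, 1.0), y))
    return data
```

With `distinct_means=False` any classifier's true error is 50%, which makes such settings useful stress tests for error-rate estimators.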


### Table 2: Mean and stdev scores in correct distinction of the 11 visual components. (BS = blind students, SS = sighted students).

"... In PAGE 7: ... 63,64% 61,36% 63,64% 84,09% 63,64% 54,55% 56,82% 70,45% 0,00% 10,00% 20,00% 30,00% 40,00% 50,00% 60,00% 70,00% 80,00% 90,00% 1st VERSION 2nd VERSION 3rd VERSION 4th VERSION AVERAGE OF THE CORRECT FINDINGS PER VERSION BLIND SIGHTED Figure 8: Comparative table in recognizing differentiated auditory components per version. Table2 contains statistical information (although 8 students are close to being too few to motivate statistical tests). The total number of the prosodic components in the stimulus material was 11 (5 for the bold, 5 for the italic and 1 for the bullet).... ..."

### Table 2 Means and Standard Deviations of the Word Frequencies, Context Frequencies, and Orthographic Distinctiveness Scores

2003

"... In PAGE 3: ...Table2 shows the means and standard deviations of the word frequencies, context frequencies, and orthographic distinctiveness scores for words in these conditions. We constructed two study lists and two test lists randomly for each subject.... ..."

Cited by 3

### Table 6: Summary information on the number of distinct global optima found, given as the percentage of the total number of distinct local optima. (Columns: Min, 1st qu., Median, 3rd qu., Max, Mean.)

"... In PAGE 12: ...anges from 0.47% to 85.12% for LOLIB, while for the other instance classes the cor- responding percentages are much smaller. Summary data on these values are given in Table6 . In fact, these results also suggest that especially the LOLIB instances can effectively be solved by a random restart algorithm that is run long enough.... ..."

Cited by 1
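The snippet's point, that instances with a high share of global optima among their distinct local optima are effectively solvable by random restarts, can be illustrated with a generic greedy descent; the toy landscape and neighborhood used below are invented for illustration:

```python
def local_search(f, neighbors, start):
    """Greedy descent: move to the best-improving neighbor until none improves."""
    x = start
    while True:
        best = min(neighbors(x), key=f)
        if f(best) >= f(x):
            return x
        x = best

def distinct_optima(f, neighbors, starts):
    """Random-restart view: the set of distinct local optima reached
    from a collection of starting points."""
    return {local_search(f, neighbors, s) for s in starts}
```

The statistic reported in Table 6 then corresponds to the share of elements of `distinct_optima(...)` whose objective value equals the global optimum.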

### Table 1 summarizes our upper and lower bounds. The rows correspond to the different restrictions on the set A of algorithms, and the columns to the restrictions on the set D of databases and on the aggregation function t. Note that SM means strictly monotone and SMV means strictly monotone in each variable. Distinctness means that D is the collection of databases that satisfy the distinctness property. Note also that c = max{cR/cS, cS/cR}. For each such combination we provide our upper and lower bounds, along with the theorem where these bounds are proven. The upper bounds are stated above the single separating lines and the lower bounds below them. (The upper bounds are stated explicitly after the proofs of the referenced theorems.) The lower bounds may be deterministic or probabilistic.

2001

"... In PAGE 37: ...5 access Lower bound: m Thm 9.5 (t strict) Table1 : Summary of Upper and Lower Bounds... In PAGE 39: ...2 says that in the case of no wild guesses and a strict aggregation function, TA is tightly instance optimal. In the case of no wild guesses, for which aggregation functions is TA tightly instance optimal?19 What are the possible optimality ratios? For the other cases where we showed instance optimality of one of our algorithms (as shown in Table1 ), is the algorithm in question in fact tightly instance optimal? For cases where our algorithms might turn out not to be tightly instance optimal, what other algorithms are tightly instance optimal? There are several other interesting lines of investigation. One is to find other scenarios where in- stance optimality can yield meaningful results.... ..."

Cited by 231
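This entry summarizes bounds for the Threshold Algorithm (TA) of Fagin, Lotem, and Naor. A compact top-k sketch under a monotone aggregation function t: random access is simulated here with a precomputed grade table (a simplification of the paper's cost model), and the halting rule compares the k-th best score with t applied to the grades last seen under sorted access:

```python
def threshold_algorithm(lists, t, k=1):
    """TA sketch: sorted access in parallel to each list; for every newly
    seen object, fetch its remaining grades by (simulated) random access
    and aggregate with t; halt once the k-th best score reaches the
    threshold t(x1, ..., xm) of the last grades seen under sorted access."""
    attrs = list(lists)
    grades = {}  # simulated random access: object -> {attribute: grade}
    for attr in attrs:
        for obj, g in lists[attr]:
            grades.setdefault(obj, {})[attr] = g
    top, seen = [], set()
    for depth in range(len(lists[attrs[0]])):
        last = []
        for attr in attrs:
            obj, g = lists[attr][depth]
            last.append(g)
            if obj not in seen:
                seen.add(obj)
                score = t(*(grades[obj][a] for a in attrs))
                top.append((score, obj))
                top = sorted(top, reverse=True)[:k]
        if len(top) == k and top[-1][0] >= t(*last):
            break
    return top
```

Because t is monotone, no unseen object can beat the threshold, which is what makes early stopping correct.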

### Table 2: The mean of the ratio between the probability of each message in each cluster and its probability in the general population. For ratios higher than 1 their inverse is taken. A smaller mean ratio implies a more distinctive clustering.

"... In PAGE 6: ... The Spearman- based approach yields the most distinctive clusters. To numerically estimate this difference, Table2 shows the mean ratio between the probability of a message appear- ing in a cluster and its probability within the entire sample set, averaged over all the clusters in each test. 5 Summary We presented a novel approach to ranking log messages based on sampling a population of computer systems and using a new feature construction scheme that proves to be highly appropriate for the ranking objective.... ..."

### Table 1. Distributions of k0 and k1

Conditional on the number of components (k0 and k1) for each Gibbs iteration, the distinct normal means and all other parameters were obtained. This allows for direct inference on the characteristics of each component of the mixture distribution (West and Cao (1993); Escobar and West (1995)). The approximate predictive noise density appears in Figure 2(a), illustrating the match with the observed noise sample.

"... In PAGE 10: ... All deconvolution analyses were conditional on k0 and k1. For these priors and this data set, the induced prior probabilities are summarized in the columns labeled \Prior quot; in Table1 , for later comparison with the posteriors.... In PAGE 11: ...532, and so forth. The height of the third line in Figure 2(c) is very close to zero, corre- sponding to the posterior probabilities for k0 in the second column of Table1 . We note that, though the prior for the noise distribution was heavily in favor of a single normal distribution, the posterior probabilities strongly suggest two components; the map from prior to posterior for k0 dramatically indicates the data support for two components.... In PAGE 11: ... We note that, though the prior for the noise distribution was heavily in favor of a single normal distribution, the posterior probabilities strongly suggest two components; the map from prior to posterior for k0 dramatically indicates the data support for two components. For the signal distribution, a more typical picture emerges in comparison of columns three and four of Table1 . Though the prior for k1 is heavily concentrated at a single signal level, the posterior is dramatically di erent, supporting at least ve components, and most likely 5, 6 or 7.... ..."