### Table 1: Precision/recall values for the default (lead) and tf*idf methods. Columns: number of extracted sentences, (a) lead method, (b) tf*idf method

1996

"... In PAGE 3: ... 4 Results and Discussion 4.1 Automatically created abstracts Table 1 shows the precision/recall values for the tf*idf method described in section 2 and for a default method that selects just the first N sentences from the beginning of each article ("lead" method). Whereas the lead method most likely provides a higher readability (see Brandow et al.... ..."
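The two baselines contrasted in this caption can be sketched in a few lines. The snippet below is an illustrative reconstruction, not the authors' code: `doc_freq` and `num_docs` stand in for whatever document-frequency statistics the paper actually uses, and a sentence's score is simply the summed tf*idf weight of its words.

```python
from collections import Counter
import math

def lead_summary(sentences, n):
    """Lead baseline: simply take the first n sentences of the article."""
    return sentences[:n]

def tfidf_summary(sentences, n, doc_freq, num_docs):
    """tf*idf baseline sketch: score each sentence by the summed tf*idf
    of its words, then keep the n highest-scoring sentences in
    original document order. `doc_freq` maps word -> number of
    documents containing it (a hypothetical stand-in)."""
    def score(sent):
        tf = Counter(sent.lower().split())
        return sum(count * math.log(num_docs / (1 + doc_freq.get(word, 0)))
                   for word, count in tf.items())
    ranked = sorted(range(len(sentences)),
                    key=lambda i: score(sentences[i]),
                    reverse=True)[:n]
    return [sentences[i] for i in sorted(ranked)]
```

Precision/recall is then computed by comparing the extracted sentence set against the sentences a human judged summary-worthy.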

Cited by 29

### Table 3. Accuracies of leading secondary structure prediction methods

2006

Cited by 11

### Tables 1, 2 and 3 summarize the problems solved in this paper, the time and number of message transmissions achieved for each of them, and the method that leads to optimal results in each case (1-message and M-message settings)

1995

Cited by 24

### Table 3: Evaluation of sentence extraction methods (columns: Len, Metric, RAND, LEAD, MEAD, VSM, baseLM, DAM, GAM)

"... In PAGE 16: ...2 Comparison of Sentence Extraction Methods Using ROUGE, we evaluate the result summaries on six aspects and two gold standards separately, then take the average score over the six aspects and two standards as the final evaluation. Table 3 summarizes the averaged Average R score for all evaluated methods on sentence extraction. We vary the length of summaries for each aspect among 1, 5, 10 and 15 sentences.... In PAGE 16: ... The default selection of parameters is presented in Table 2 unless otherwise specified. Table 2: Default parameter values — baseLM: B = 0.1; DAM: B = 0.8; GAM: B = 0.3. From Table 3, we make the following observations: Comparison among baseline methods: Among the three baseline methods, MEAD performs significantly better than RAND and LEAD. This is not surprising, since MEAD is a featured document summarization system which integrates many text features, whereas the other two simply extract sentences randomly or from the front of each document.... ..."


### Table 4. Performance of CONTRAfold relative to leading secondary struc- ture prediction methods

2006

Cited by 11

### Table 5: Comparison between the uniform and biased selection schemes. Values represent percentage of cases in which one method has greater probability of leading to a successful relink than the other.

"... In PAGE 12: ... Therefore, for each instance, 1,000 selections were simulated (100 for each random seed). The results are summarized in Table 5. For each instance, we show the percentage of the cases in which a method has greater probability of success than the other (when the probabilities are equal, we consider the experiment a tie).... ..."
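The simulation protocol in this snippet (repeated random selections per instance, with a tie declared when the empirical success counts coincide) can be sketched as follows. This is a hypothetical illustration: `p_uniform` and `p_biased` are assumed per-instance success probabilities, not values taken from the paper.

```python
import random

def compare_schemes(p_uniform, p_biased, trials=1000, seed=0):
    """For one instance, simulate `trials` selections under each scheme
    and report which scheme succeeded more often: 'uniform', 'biased',
    or 'tie' when the empirical success counts are equal."""
    rng = random.Random(seed)
    wins_uniform = sum(rng.random() < p_uniform for _ in range(trials))
    wins_biased = sum(rng.random() < p_biased for _ in range(trials))
    if wins_uniform > wins_biased:
        return "uniform"
    if wins_biased > wins_uniform:
        return "biased"
    return "tie"
```

Running this over all instances and tallying the three outcomes yields the percentages reported in the table.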


### Table 5: Comparison of kernel widths derived automatically and manually from the distance histogram. Even with noisy histograms and varying data dispersion inside clusters, where several peaks can be observed, the histogram method leads to quite acceptable results. The kernel width and the number of misclassifications are in most cases similar to the best values found by checking a large range of widths using the known class labels.

2004

"... In PAGE 22: ... The pure K-means algorithm is, of course, faster. Table 5 evaluates the quality of the kernel width chosen with the proposed histogram method. One can see that in most cases the width chosen from the histogram is close to the best value, and the same is true even for the number of clustering errors.... ..."
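A minimal sketch of a histogram-based kernel-width choice of the kind the caption describes. The peak-selection rule here (take the midpoint of the first histogram bin whose count drops at the next bin, i.e. the within-cluster distance peak) is an assumption for illustration; the paper's exact rule may differ.

```python
import numpy as np

def kernel_width_from_histogram(points, bins=50):
    """Histogram all pairwise Euclidean distances and return the distance
    at the first peak, taken as the typical within-cluster distance and
    hence a candidate kernel width (illustrative rule, not the paper's)."""
    pts = np.asarray(points, dtype=float)
    # all pairwise distances, upper triangle only (no self-distances)
    diff = pts[:, None, :] - pts[None, :, :]
    d = np.sqrt((diff ** 2).sum(-1))
    dists = d[np.triu_indices(len(pts), k=1)]
    counts, edges = np.histogram(dists, bins=bins)
    # first nonzero bin whose count drops at the next bin = first peak
    for i in range(bins - 1):
        if counts[i] > 0 and counts[i] > counts[i + 1]:
            return 0.5 * (edges[i] + edges[i + 1])
    return float(dists.mean())  # fall back if no clear peak
```

On well-separated clusters the small within-cluster distances dominate the first bins, so the returned width falls near that first peak rather than near the large between-cluster distances.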

Cited by 4

### Table 1. The filter H4 and the entries in the corresponding M2( n). To find orthonormal filters of length L, the above method leads to the wearying task of solving L/2 nonlinear equations in L/2 variables. However, by use of some Maple routines we succeeded in generating Hd6 and an orthogonal filter H8 corresponding to a wavelet with 4 vanishing moments.

1997

"... In PAGE 29: ...14) then by construction S(1 − L/2) n = n, but it has a more efficient implementation as shown in Table 1. Table 1. The operation cost of filtering a set of n points using the pair of filters HdL, GdL by 2 different techniques: mult's (L/2 + 1)n vs. Ln; add's (L/2)n vs. (L − 1)n.... In PAGE 43: ... Then by construction n = S(L/2 − 1) n S(L/2 − 1) n, but n has a more efficient implementation, as shown in Table 1. Table 1. The operation cost of filtering a set of n × n points using the pair of filters HdL, GdL by 3 different techniques: mult's (3/8)Ln², (L + 1)n², 2Ln²; add's (7/8)Ln², Ln², (2L − 2)n².... In PAGE 49: ...2). The two solution sets are mirror images of each other; it is the filter shown in Table 1 that corresponds to our choice of rotation matrix M2( k).... In PAGE 61: ... Table 1. Numerical results from testing the performance of cm,r on D2( ), D3( ).... ..."

Cited by 7