### Table 1: Absolute numbers of lyrics (from a total of 258) in specified recall intervals for values of the threshold t. Results were achieved using the unsmoothed kwit method.

### Table 3 Recall Probability as a Function of Test Instructions, Interval Placement, and Processing Similarity

2003

"... In PAGE 4: ... This change in the false-alarm rate seems reliable as it was also found in a previous experiment using these two conditions with lists of single words (Hall, 1996). Table3 shows the probability of recall as a function of instructions, interval placement, and processing similarity. Recall performance on List 1 items ranged from a low of.... ..."

Cited by 1

### Table 4 The Conditional Probability of a Yes Response to a List 1 Item Given Successful Recall of That Item as a Function of Test Instructions, Interval Placement, and Processing Similarity

2003

"... In PAGE 4: ... However, with an interlist interval, where the retention interval for List 2 was shorter, there may have been a tendency for recall to be better for List 2 than for List 1 items. Table4 shows the proba- bility of recognition conditional on correct recall for List 1 items (List 2 items were presented singly for half the participants). Note that when participants had a zero probability of recalling a List 1 item, they were dropped from this analysis.... In PAGE 5: ... For each of the eight between-participants conditions, 2,000 random samples from the participants were created. Sampling was done with replacement, and the sample size was the number of participants in that condi- tion who had recalled one or more List 1 items on the final recall test (see the n values in Table4 ). For each sample in the inclusion conditions, the probability of an inclusion error, P(T), and the same probability conditional on correct recall were calculated.... ..."

Cited by 1

### Table 1. Overall results. All window-based algorithms are evaluated at their best window size. In addition to general recall and precision rates, more precise estimates of the odds multipliers and their 95% confidence intervals are given.

"... In PAGE 3: ... Fortunately, there are a number of non-parametric al- gorithms that perform much better than the window-based set. Table1 presents a complete list of the performance of all algorithms. It is divided into the two traditional met- rics of recall, or the percentage of symbols on the page that were found during the OMR process, and precision, or the percentage of symbols in the OMR output that were in fact on the page.... In PAGE 4: ...44, the difference is statistically significant. Table1 is ordered by recall performance, which is the most important measure to optimise when trying to reduce human editing costs after the OMR process. The clear winner is Brink and Pendock 1996, which performs sig- nificantly better than all others in both performance and recall; a sample of its output appears in figure 2a.... In PAGE 4: ... None of them has received much attention to date, and indeed, the most commonly used binarisation algorithm is the notably mediocre Otsu 1979 (see figure 2c). A more visual representation of some of the data in Table1 appears in figure 4. This figure is a box-and- whisker plot on recall performance for every image in the test set.... ..."

### Table 4. Co-occurrence agreement probability (COAP), segmentation precision (SegPrec) and segmentation recall (SegRecall) of four learners on the FAQ dataset. All these averages have 95% confidence intervals of 0.01 or less.

2000

"... In PAGE 6: ... Consequently, training does not involve Baum-Welch reestimation. Table4 shows the performance of the four models on FAQ data. It is clear from the table that MEMM is the best of the methods tested.... ..."

Cited by 257

### Table 4. Co-occurrence agreement probability (COAP), segmentation precision (SegPrec) and segmentation recall (SegRecall) of four learners on the FAQ dataset. All these averages have 95% confidence intervals of 0.01 or less.

2000

"... In PAGE 7: ... Table4 presents the results of our experiments. It is clear from the table that the maximum entropy Markov model is the best of the methods tested.... ..."

Cited by 257

### Table 4. Co-occurrence agreement probability (COAP), segmentation precision (SegPrec) and segmentation recall (SegRecall) of four learners on the FAQ dataset. All these averages have 95% confidence intervals of 0.01 or less.

2000

"... In PAGE 6: ... Consequently, training does not involve Baum- Welch.BF Table4 presents the results of our experiments. It is clear from the table that the maximum entropy Markov model is the best of the methods tested.... ..."

Cited by 257
