### Table 3: Precision, Accuracy, and Range Resolution

### Table 4.4: Comparing results with other hardware shot de-

2007

### Table 6: Accuracy, precision and recall

2004

"... In PAGE 6: ... The results were evaluated through three measures: accuracy of the classification (positive or negative), precision of positive paraphrase pairs, and recall of positive paraphrase pairs. Table 6 shows the result. The accuracy, precision and recall of F-set1 were 76 %, 70 % and 73 % respectively.... ..."

Cited by 2

### Table 4. Amount of hardware for three architectures. Precision, Accuracy:

2005

Cited by 1

### Table 3: Computation of Precision, Recall and Accuracy.

"... In PAGE 3: ... Table 3 and equations 2-4 show how the measures precision, recall and accuracy are computed for accents. For phrase boundaries the performance measures are computed in a similar way.... ..."

Cited by 2

### Table 6: Statistics of the bias, precision and accuracy of the proposed technique for different minMSE values. No sampling was used.

"... In PAGE 27: ... Table 6 presents summary statistics (minimal value, lower quartile, median, upper quartile and maximal value) of the bias, precision and accuracy of the FN estimate produced by the proposed method without making use of sampling. The estimates are precise: the largest variability of 312.... ..."

### Table 3: Estimation bias, precision and accuracy of the proposed technique for different minMSE (m) values.

"... In PAGE 21: ... Table 3 quantifies the results depicted in Figure 4. The results show that the proposed scheme can outperform SRS at the sample size of 200,000 in precision as well as in accuracy for 4 of the 6 minMSE values.... ..."

### Table 1. Recall, precision, and accuracy with respect to some document analysis components

1999

"... In PAGE 1: ... In this paper, we propose that the standard measures recall, precision, and accuracy/error rate are adequate measures to compute the effectiveness of document analysis components. Table 1 defines these standard measures in the context of different document analysis components. Due to the simple relation between error rate and accuracy, the error rate is not listed in the table.... In PAGE 2: ... Even if algorithm A1 outperforms algorithm A2 on a test set, this may not be a clear indication that A1 is really better than algorithm A2. A closer look at the ratios given in Table 1 helps to gain statistical access to the problems. It reveals that all ratios computed there have the form #(A ∧ B occurs in test set) / #(B occurs in test set), with A and B being some events.... ..."

Cited by 16
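The standard measures the snippet above refers to can be sketched as ratios over confusion-matrix counts. This is a generic illustration only, not the paper's exact per-component definitions from its Table 1:

```python
# Generic precision/recall/accuracy from confusion-matrix counts
# (tp = true positives, fp = false positives, fn = false negatives,
#  tn = true negatives). Illustrative sketch; the cited paper defines
# these measures per document-analysis component.

def precision(tp: int, fp: int) -> float:
    """Fraction of predicted positives that are correct."""
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    """Fraction of actual positives that were found."""
    return tp / (tp + fn)

def accuracy(tp: int, tn: int, fp: int, fn: int) -> float:
    """Fraction of all decisions that are correct; error rate = 1 - accuracy."""
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical counts, not taken from any of the papers listed here:
print(precision(70, 30))         # 0.7
print(recall(70, 24))            # ~0.745
print(accuracy(70, 76, 30, 24))  # 0.73
```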

### Table 1: Results for detecting and completing OOV words (columns: Speaker, #Detected, #Correct, #Completed, Recall, Precision, Accuracy)

"... In PAGE 6: ... In the 30 queries used for our evaluation, 14 word tokens (13 word types) were OOV words unlisted in the dictionary for speech recognition. Table 1 shows the results on a speaker-by-speaker basis, where #Detected and #Correct denote the total number of OOV words detected by our method and the number of OOV words correctly detected, respectively. In addition, #Completed denotes the number of detected OOV words that corresponded to correct index terms in the 300 top documents.... In PAGE 6: ... We estimated recall and precision for detecting OOV words, and accuracy for query completion, as in Equation (4): recall = #Correct / 14, precision = #Correct / #Detected, accuracy = #Completed / #Detected (4). Looking at Table 1, one can see that recall was generally greater than precision. In other words, our method tended to detect as many OOV words as possible.... ..."
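Equation (4) from the snippet above can be sketched directly; the constant 14 is the number of OOV word tokens in the 30 evaluation queries, and the example counts below are hypothetical, not taken from the paper's Table 1:

```python
# Sketch of Equation (4): recall and precision for OOV-word detection,
# and accuracy for query completion.

N_OOV_TOKENS = 14  # OOV word tokens in the 30 evaluation queries

def oov_measures(n_detected: int, n_correct: int, n_completed: int):
    """Return (recall, precision, accuracy) per Equation (4)."""
    recall = n_correct / N_OOV_TOKENS    # correctly detected / all OOV tokens
    precision = n_correct / n_detected   # correctly detected / all detected
    accuracy = n_completed / n_detected  # completed to correct terms / all detected
    return recall, precision, accuracy

# Hypothetical counts for one speaker:
r, p, a = oov_measures(n_detected=20, n_correct=12, n_completed=9)
```

Recall divides by a fixed token count while precision divides by the number of detections, which is why a method that detects aggressively (as the snippet notes) tends to show recall above precision.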