### Table 2. Sample profiles of nearest-neighbors search algorithm.

1999

"... In PAGE 4: ... Figure 5 illustrates the graceful computation degradation capability of the nearest-neighbors search algorithm for the sequence FOREMAN. The figure shows the rate-distortion performance of the full search algorithm and the nearest-neighbors search algorithm, for the basic, low-computation, and high-quality profiles outlined in Table 2. Clearly, even for difficult sequences, the high-quality profile can achieve a performance level that is close to that of the full search algorithm.... ..."

Cited by 12

### Table 2. Nearest-Neighbor Classifier Mine Recognition Results

"... In PAGE 4: ... The class label corresponding to the best match (stored data element with the smallest similarity score) is returned. Table 2 summarizes the results of this technique on the experiments discussed above. Note that the nearest-neighbor algorithm achieves excellent accuracy on all five experiments; in particular, it discriminates reliably between MLO-1 vs.... ..."
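The matching step quoted above (return the class label of the stored element with the smallest similarity score) can be sketched as a plain 1-NN lookup. This is an illustrative implementation, not the paper's code; the function names are hypothetical and the use of Euclidean distance as the dissimilarity score is an assumption.

```python
import math

def nearest_neighbor_label(query, stored):
    """Return the label of the stored element closest to `query`.

    stored: list of (feature_vector, class_label) pairs.
    """
    def dissimilarity(a, b):
        # Euclidean distance stands in for the paper's similarity score.
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # Pick the stored pair minimizing the dissimilarity to the query.
    _, best_label = min(stored, key=lambda pair: dissimilarity(query, pair[0]))
    return best_label

examples = [((0.0, 0.0), "clutter"), ((5.0, 5.0), "mine")]
print(nearest_neighbor_label((4.0, 4.5), examples))  # -> mine
```

Ties are broken by list order here; a real system would also need a policy for equal scores.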

### Table 1. The Condensed Nearest Neighbor Algorithm (CNN).

"... In PAGE 3: ...minimal consistent subset (i.e., the subset with the minimum cardinality) to minimize the cost of storage and computation. The CNN algorithm is given in Table 1. Starting from an empty stored subset, we pass over the patterns one by one and add a pattern to the subset if it cannot be classified correctly with the already stored subset.... ..."
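The CNN pass described above (add each pattern that the current stored subset misclassifies) can be sketched as follows. This is an illustrative implementation under the assumption of a Euclidean 1-NN rule, not the code from the paper's Table 1; repeating the pass until no pattern is added yields a subset consistent with the training set.

```python
import math

def classify(x, subset):
    # 1-NN rule over the currently stored subset (Euclidean distance assumed).
    _, label = min(subset, key=lambda pair: math.dist(x, pair[0]))
    return label

def condensed_nn(patterns):
    """patterns: list of (feature_vector, label) pairs.

    Returns a stored subset that classifies every training pattern correctly.
    """
    stored = [patterns[0]]        # seed the subset with the first pattern
    changed = True
    while changed:                # repeat passes until a full pass adds nothing
        changed = False
        for vec, label in patterns:
            if classify(vec, stored) != label:
                stored.append((vec, label))   # misclassified: store it
                changed = True
    return stored
```

On well-separated classes the stored subset is typically much smaller than the training set, which is the storage/computation saving the snippet refers to.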

### Table 2: Sample profiles of nearest-neighbors search algorithm.

"... In PAGE 6: ... Figure 6 illustrates the graceful computation degradation capability of the nearest-neighbors search algorithm for the sequence foreman at QCIF and CIF resolutions. The figure shows the rate-distortion performance of the full search algorithm and the nearest-neighbors search algorithm, for the basic, low-computation, and high-quality profiles outlined in Table 2. Clearly, even for difficult sequences, the high-quality profile can achieve a performance level that is close to that of the full search algorithm.... ..."

### Table 2. Confusion matrix obtained with the nearest neighbor algorithm on 125 coordinates.

2004

"... In PAGE 11: ... There are 125 coordinates for each connection in KDD 99 after the transformation for discrete attribute values explained in section 3 (41 attributes, with coordinates added to the representation of each discrete value that has at least two coordinates). Table 2 presents the confusion matrix obtained when applying the nearest neighbor directly on the feature space generated by these 125 coordinates.... ..."

Cited by 1
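The transformation mentioned in the entry above (expanding discrete attribute values into extra coordinates so that nearest-neighbor distances are defined on mixed data) can be illustrated with a simple one-hot expansion. The function and attribute names below are hypothetical, not from the paper.

```python
def one_hot_expand(record, discrete_domains):
    """Flatten a mixed record into a numeric coordinate vector.

    record: dict mapping attribute name -> value.
    discrete_domains: dict mapping discrete attribute name -> list of
    its possible values; each such attribute becomes 0/1 indicator
    coordinates, while numeric attributes pass through unchanged.
    """
    coords = []
    for attr, value in record.items():
        if attr in discrete_domains:
            # One indicator coordinate per possible discrete value.
            coords.extend(1.0 if value == v else 0.0
                          for v in discrete_domains[attr])
        else:
            coords.append(float(value))
    return coords

row = {"duration": 3, "protocol": "tcp"}
domains = {"protocol": ["tcp", "udp", "icmp"]}
print(one_hot_expand(row, domains))  # -> [3.0, 1.0, 0.0, 0.0]
```

After this expansion, a standard Euclidean nearest-neighbor rule can be applied to the resulting vectors, which is the setting the confusion matrix in the snippet evaluates.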

### Table 2 : Results of coding methods with the nearest neighbor algorithm

### TABLE I. Best-case bounds for the nearest-neighbor algorithm.

| Concept boundary | Lower Bound | Upper Bound |
| --- | --- | --- |
| Hyperplane | 2 | 2 |

1995

Cited by 10

### Table 1. Dataset information includes the number of attributes, sizes of training and test sets, the least possible error (Bayesian error) and NN error (the test set error of nearest neighbor classifier given all the points in the training set).

"... In PAGE 20: ... The feature spaces are illustrated in figure 10. The dataset information is summarized in Table 1. It shows the number of attributes for each dataset, the sizes of the (disjoint) training and test sets, the error rate of the best possible (Bayesian) classifier, and the error rate of the nearest neighbor classifier based on all the labeled points of the training set.... In PAGE 21: ... It has a better and more stable average performance than other selective sampling methods (Tables 2 and 3). Interestingly, in the Pima Indians Diabetes dataset, learning only 10% of the training set with the LSS algorithm yields a better classifier than the nearest neighbor classifier trained on all available points (compare with Table 1). Also, in the Ionosphere dataset the LSS algorithm requires only 8% of the training set in order to achieve the same average accuracy as the nearest neighbor classifier based on all the training points (figure 11(a)).... In PAGE 25: ...3.1% (Table 2). The IB3 and DEL filtering algorithms achieve accuracy of 69.8% and 71.6% respectively (Table 1 in Wilson and Martinez (2000)) when retaining datasets of approximately the same size. This better performance of our algorithm was achieved despite the fact that IB3 and DEL used perfect information (i.... ..."

### Table 1: Dataset information includes the number of attributes, sizes of training and test sets, the least possible error (Bayesian error) and NN error (the test set error of nearest neighbor classifier given all the points in the training set).

1999

"... In PAGE 22: ... It has a better and more stable average performance than other selective sampling methods (Tables 2 and 3). Interestingly, in the Pima Indians Diabetes dataset, learning only 10% of the training set with the LSS algorithm yields a better classifier than the nearest neighbor classifier trained on all available points (compare with Table 1... In PAGE 26: ... The LSS algorithm, sampling only 10% of the training set, achieves accuracy of 73.1% (Table 2). The IB3 and DEL filtering algorithms achieve accuracy of 69.8% and 71.6% respectively (Table 1 in (Wilson & Martinez, 2000)) when retaining datasets of approximately the same size. This better performance of our algorithm was achieved despite the fact that IB3 and DEL used perfect information (i.... ..."

Cited by 31